How can I remotely connect to docker swarm?

Is it possible to execute commands on a Docker swarm cluster hosted in the cloud from my local Mac? If yes, how?

I want to execute commands such as the following on the Docker swarm from my local machine:

docker secret create my-secret <path to local file>
docker service create --name x --secret my-secret image
Roux answered 18/5, 2017 at 19:40 Comment(0)
12

The approach is to expose the Docker daemon on the remote host over TCP (the Docker documentation on daemon configuration covers this) and point your local client at it.

What you need to do on an Ubuntu machine is define a daemon.json file at /etc/docker with the following content:

{
  "hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]
}
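
After editing daemon.json, restart the daemon so it picks up the hosts entry, then point your local client at the TCP port. A rough sketch (the server address is a placeholder; note that on some Ubuntu installs the stock systemd unit already passes -H fd:// to dockerd, which conflicts with a hosts entry in daemon.json and needs a small systemd override):

sudo systemctl restart docker            # on the Ubuntu host

docker -H tcp://<server-ip>:2375 info    # from your local machine
docker -H tcp://<server-ip>:2375 node ls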

This plain-TCP configuration is unsecured and shouldn't be used if the server is publicly reachable.

For a secured connection, use the following config instead:

{
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "hosts": ["tcp://x.x.x.y:2376", "unix:///var/run/docker.sock"]
}

Details for generating the certificates can be found in the Docker documentation on protecting the daemon socket (https://docs.docker.com/engine/security/https/), as mentioned by @BMitch.
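
Once the daemon is restarted with that configuration, the client supplies the matching certificates explicitly. A sketch, assuming the client-side ca.pem, cert.pem and key.pem from that guide live under ~/.docker (replace x.x.x.y as above):

docker --tlsverify \
  --tlscacert=$HOME/.docker/ca.pem \
  --tlscert=$HOME/.docker/cert.pem \
  --tlskey=$HOME/.docker/key.pem \
  -H tcp://x.x.x.y:2376 info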

Roux answered 12/6, 2017 at 14:40 Comment(0)
10

This is the easiest way of running commands on a remote Docker engine:

docker context create --docker host=ssh://myuser@myremote myremote
docker --context myremote ps -a
docker --context myremote secret create my-secret <path to local file>
docker --context myremote service create --name x --secret my-secret image

or

docker --host ssh://myuser@myremote ps -a

You can even set the remote context as the default and issue commands as if it were local:

docker context use myremote
docker ps # lists remote running containers
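
To switch back to your local engine later, reselect the default context:

docker context use default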

In this case you don't even need to have the Docker engine installed locally, just the Docker CLI (docker-ce-cli).
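
(On macOS, for example, one way to get just the client is the Homebrew docker formula rather than the Docker Desktop cask; this is an assumption about your setup, not something stated in the original answer:)

brew install docker   # installs only the docker CLI; no engine is included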

You need to use key-based SSH authentication for this to work (you should already be using it). Other options include setting up a TLS certificate-secured socket, or SSH tunnels.

Also, consider setting up an SSH control socket to avoid re-authenticating on each command, so your commands run almost as fast as local ones.
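
Such a control socket can be configured in ~/.ssh/config; a sketch, reusing the myremote host from above (HostName is a placeholder):

Host myremote
    HostName <server-address>
    User myuser
    ControlMaster auto
    ControlPath ~/.ssh/control-%r@%h:%p
    ControlPersist 10m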

Vic answered 8/7, 2021 at 14:6 Comment(1)
Or even better: export DOCKER_HOST=ssh://myuser@myremote and then use docker commands as usual. – Deflagrate
4

One option is to provide direct access to the Docker daemon, as suggested in the other answers, but that requires setting up TLS certificates and keys, which can itself be tricky and time-consuming. Docker Machine can automate that process, provided the nodes were created with docker-machine.

I had the same problem, in that I wanted to create secrets on the swarm without uploading the file containing the secret to the swarm manager. I also wanted to be able to deploy a stackfile (e.g. docker-compose.yml) without the hassle of first uploading it.

I wanted to be able to create the few servers I needed on e.g. DigitalOcean, not necessarily using docker-machine, and to reproducibly create the secrets and run the stackfile. In environments like DigitalOcean and AWS, a separate set of TLS certificates is not used; rather, the SSH key on the local machine is used to access the remote node over SSH.

The solution that worked for me was to run the docker commands individually over ssh, which lets me pipe the secret and/or stackfile in via stdin.

To do this, you first need to create the DigitalOcean droplets and get Docker installed on them, possibly from a custom image or snapshot, or simply by running the commands to install Docker on each droplet. Then join the droplets into a swarm: ssh into the one that will be the manager node, run docker swarm init (possibly with the --advertise-addr option if there is more than one IP on that node, such as when you want to keep intra-swarm traffic on the private network) and note the join command it prints. Then ssh into each of the other nodes and issue that join command, and your swarm is created; the sketch below shows these steps.
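
As a sketch (the addresses and token are placeholders, not values from the original answer):

# On the manager droplet:
docker swarm init --advertise-addr <manager-private-ip>
# It prints a join command of the form:
#   docker swarm join --token SWMTKN-1-<token> <manager-private-ip>:2377
# Run that printed command on each of the other droplets to join them as workers.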

Then, export the ssh command you will need to issue commands on the manager node, for example:

export SSH_CMD='ssh user@<swarm-manager-address>'

Now, you have a couple of options. You can issue individual docker commands like:

$SSH_CMD docker service ls

You can create a secret on your swarm without copying the secret file to the swarm manager:

$SSH_CMD docker secret create my-secret - < /path/to/local/file
$SSH_CMD docker service create --name x --secret my-secret image

(Using - to indicate that docker secret create should read the secret from stdin, and piping the local file to stdin over ssh.)

You can also write a script to reproducibly create your secrets and bring up your stack, with the secret files and stackfiles living only on your local machine. Such a script might be:

$SSH_CMD docker secret create rabbitmq.config.01 - < rabbitmq/rabbitmq.config
$SSH_CMD docker secret create enabled_plugins.01 - < rabbitmq/enabled_plugins
$SSH_CMD docker secret create rmq_cacert.pem.01 - < rabbitmq/cacert.pem
$SSH_CMD docker secret create rmq_cert.pem.01 - < rabbitmq/cert.pem
$SSH_CMD docker secret create rmq_key.pem.01 - < rabbitmq/key.pem
$SSH_CMD docker stack up -c - rabbitmq_stack < rabbitmq.yml

where secrets are used for the certs and keys, and also for the configuration files rabbitmq.config and enabled_plugins, and the stackfile is rabbitmq.yml, which could be:

version: '3.1'
services:
  rabbitmq:
    image: rabbitmq
    secrets:
      - source: rabbitmq.config.01
        target: /etc/rabbitmq/rabbitmq.config
      - source: enabled_plugins.01
        target: /etc/rabbitmq/enabled_plugins
      - source: rmq_cacert.pem.01
        target: /run/secrets/rmq_cacert.pem
      - source: rmq_cert.pem.01
        target: /run/secrets/rmq_cert.pem
      - source: rmq_key.pem.01
        target: /run/secrets/rmq_key.pem
    ports: 
      # stomp, ssl:
      - 61614:61614
      # amqp, ssl:
      - 5671:5671
      # monitoring, ssl:
      - 15671:15671
      # monitoring, non ssl:
      - 15672:15672
  # nginx here is only to show another service in the stackfile
  nginx:
    image: nginx
    ports: 
      - 80:80
secrets:
  rabbitmq.config.01:
    external: true
  rmq_cacert.pem.01:
    external: true
  rmq_cert.pem.01:
    external: true
  rmq_key.pem.01:
    external: true
  enabled_plugins.01:
    external: true

(Here, the rabbitmq.config file sets up the SSL listening ports for STOMP, AMQP, and the monitoring interface, and tells RabbitMQ to look for the certs and key within /run/secrets. Another alternative for this specific image would be to use the environment variables provided by the image to point to the secret files, but I wanted a more generic solution that did not require configuration within the image.)

Now, if you want to bring up another swarm, your script will work against it as soon as you change the SSH_CMD environment variable, and you need neither set up TLS nor copy your secrets or stackfiles to the swarm's filesystem.
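
For instance, if the commands above live in a hypothetical deploy_rabbitmq.sh, pointing it at a second swarm only requires changing the variable:

export SSH_CMD='ssh user@<other-swarm-manager>'
./deploy_rabbitmq.sh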

So this doesn't solve the problem of creating the swarm (whose existence your question presupposed), but once it is created, an exported SSH_CMD environment variable lets you run almost exactly the commands you listed, prefixed with that variable.

Bram answered 18/11, 2018 at 7:1 Comment(0)
2

To connect to a remote Docker node, you should set up TLS on both the Docker host and the client, with certificates signed by the same CA. Take care to limit which keys you sign with this CA, since it is used to control access to the Docker host.

Docker has documented the steps to setup a CA and create/install the keys here: https://docs.docker.com/engine/security/https/

Once configured, you can connect to the newer swarm mode environments using the same docker commands you run locally on the Docker host, just by changing the value of $DOCKER_HOST in your shell.
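
A sketch, assuming the client certificates from that guide sit in a local directory (the hostname and paths are placeholders; the variables are the standard Docker client environment variables):

export DOCKER_HOST=tcp://<swarm-manager>:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$HOME/.docker/<swarm-manager>   # holds ca.pem, cert.pem, key.pem
docker node ls
docker service ls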

Spinozism answered 18/5, 2017 at 19:44 Comment(2)
Is it feasible to connect to the docker daemon without setting up TLS? – Roux
It's not recommended. Anyone with access to an unencrypted service (keep in mind that the client would also have a TLS key) can have root access on the system without the need for a password. – Spinozism
0

If you start from scratch, you can create the manager node using docker-machine's generic driver. Afterwards you will be able to connect to that Docker engine from your local machine with the help of the docker-machine env command; see the example below.
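
A sketch with the generic driver (the IP address, SSH user, key path, and machine name are placeholders):

docker-machine create --driver generic \
  --generic-ip-address=<server-ip> \
  --generic-ssh-user=root \
  --generic-ssh-key ~/.ssh/id_rsa \
  swarm-manager
eval $(docker-machine env swarm-manager)   # point the local docker client at the remote engine
docker swarm init                          # subsequent docker commands run against that engine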

Athanasius answered 23/1, 2019 at 15:28 Comment(0)
