How do I set up linkage between Docker containers so that restarting won't break it?

I have a few Docker containers running like:

  • Nginx
  • Web app 1
  • Web app 2
  • PostgreSQL

Since Nginx needs to connect to the web application servers inside web app 1 and 2, and the web apps need to talk to PostgreSQL, I have linkages like this:

  • Nginx --- link ---> Web app 1
  • Nginx --- link ---> Web app 2
  • Web app 1 --- link ---> PostgreSQL
  • Web app 2 --- link ---> PostgreSQL

This works pretty well at first. However, when I develop a new version of web app 1 and web app 2, I need to replace them. What I do is remove the web app containers, set up new containers and start them.

For the web app containers, their IP addresses at first would be something like:

  • 172.17.0.2
  • 172.17.0.3

And after I replace them, they will have new IP addresses:

  • 172.17.0.5
  • 172.17.0.6

Now the exposed environment variables in the Nginx container still point to the old IP addresses. Here is the problem: how do I replace a container without breaking the linkage between containers? The same issue also happens with PostgreSQL. If I want to upgrade the PostgreSQL image version, I certainly need to remove the container and run a new one, but then I need to rebuild the whole container graph, so this is not ideal for real-life server operation.

Wun answered 16/6, 2014 at 21:33 Comment(0)

The effect of --link is static, so it will not work for your scenario (there is currently no re-linking, although you can remove links).

We have been using two different approaches at dockerize.it to solve this, without links or ambassadors (although you could add ambassadors too).

1) Use dynamic DNS

The general idea is that you specify a single name for your database (or any other service) and update a short-lived DNS server with the actual IP as you start and stop containers.

We started with SkyDock. It works with two docker containers, the DNS server and a monitor that keeps it updated automatically. Later we moved to something more custom using Consul (also using a dockerized version: docker-consul).
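
For reference, a minimal SkyDock setup looked roughly like the two commands below (image names and flags are reproduced from memory of the SkyDock README, so treat them as a sketch and verify against the project before use): SkyDNS listens on the docker0 bridge so every container can use it as a resolver, and SkyDock watches the Docker event stream to register and deregister containers automatically.

# DNS server, bound to the docker0 bridge address
$ docker run -d -p 172.17.42.1:53:53/udp --name skydns \
    crosbymichael/skydns -nameserver 8.8.8.8:53 -domain docker

# monitor that keeps SkyDNS in sync with running containers
$ docker run -d -v /var/run/docker.sock:/docker.sock --name skydock \
    crosbymichael/skydock -ttl 30 -environment dev -s /docker.sock \
    -domain docker -name skydns

# containers started with --dns 172.17.42.1 then resolve each other by name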

An evolution of this (which we haven't tried) would be to set up etcd or similar and use its API to learn the IPs and ports. The software should support dynamic reconfiguration too.
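
As a rough illustration of that idea (the key name, container name, and port below are made up for the example), a small wrapper could push a container's current address into etcd whenever the container is (re)started, and consumers would read it from there instead of from stale link variables:

# record web app 1's current IP after (re)starting it
$ WEBAPP1_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' webapp1)
$ etcdctl set /services/webapp1 "$WEBAPP1_IP:5000"

# nginx (or a templating tool driving it) reads the value instead of a link variable
$ etcdctl get /services/webapp1
172.17.0.5:5000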

2) Use the docker bridge ip

When exposing the container ports you can just bind them to the docker0 bridge, which has (or can have) a well-known address.

When replacing a container with a new version, just make the new container publish the same port on the same IP.
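
A sketch of that flow for the PostgreSQL case (172.17.42.1 is the usual docker0 address; the :new tag is just a stand-in for the upgraded image):

# first version, published only on the docker0 bridge
$ docker run -d --name postgres -p 172.17.42.1:5432:5432 paintedfox/postgresql

# upgrade: remove it and publish the replacement on the same IP and port
$ docker stop postgres && docker rm postgres
$ docker run -d --name postgres -p 172.17.42.1:5432:5432 paintedfox/postgresql:new

# web apps keep connecting to 172.17.42.1:5432 and never see the change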

This is simpler but also more limited. You might have port conflicts if you run similar software (for instance, two containers cannot both listen on port 3306 on the docker0 bridge), and so on, so our current favorite is option 1.

Hang answered 25/6, 2014 at 17:35 Comment(0)

Links are for a specific container, not based on the name of a container. So the moment you remove a container, the link is disconnected and the new container (even with the same name) will not automatically take its place.

The new networking feature allows you to connect to containers by their name, so if you create a new network, any container connected to that network can reach other containers by their name. Example:

1) Create new network

$ docker network create <network-name>       

2) Connect containers to network

$ docker run --net=<network-name> ...

or

$ docker network connect <network-name> <container-name>

3) Ping container by name

docker exec -ti <container-name-A> ping <container-name-B> 

64 bytes from c1 (172.18.0.4): icmp_seq=1 ttl=64 time=0.137 ms
64 bytes from c1 (172.18.0.4): icmp_seq=2 ttl=64 time=0.073 ms
64 bytes from c1 (172.18.0.4): icmp_seq=3 ttl=64 time=0.074 ms
64 bytes from c1 (172.18.0.4): icmp_seq=4 ttl=64 time=0.074 ms

See this section of the documentation.

Note: Unlike legacy links the new networking will not create environment variables, nor share environment variables with other containers.

This feature currently doesn't support aliases
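
Applied to the setup in the question, a sketch could look like this (my/webapp1 and my/webapp2 are placeholder image names); after an upgrade you start the replacement under the same name on the same network, and Nginx keeps resolving it:

$ docker network create backend
$ docker run -d --net=backend --name postgres postgres
$ docker run -d --net=backend --name webapp1 my/webapp1
$ docker run -d --net=backend --name webapp2 my/webapp2
$ docker run -d --net=backend --name nginx -p 80:80 nginx

# upgrade web app 1: remove it, then start the new image with the same name
$ docker rm -f webapp1
$ docker run -d --net=backend --name webapp1 my/webapp1:2.0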

Fare answered 1/2, 2016 at 22:16 Comment(2)
I'd like to point out this only works in version 1.9 or later. Some distributions have yet to release with the latest.Oto
Another option is to use a network-scoped alias instead of the container name (which has to be globally unique, which is not always nice). But the answer is nevertheless absolutely correct.Griego

You can use an ambassador container. But do not link the ambassador container to your client, since this creates the same problem as above. Instead, use the exposed port of the ambassador container on the docker host (typically 172.17.42.1). Example:

postgres volume:

$ docker run --name PGDATA -v /data/pgdata/data:/data -v /data/pgdata/log:/var/log/postgresql phusion/baseimage:0.9.10 true

postgres-container:

$ docker run -d --name postgres --volumes-from PGDATA -e USER=postgres -e PASS='postgres' paintedfox/postgresql

ambassador-container for postgres:

$ docker run -d --name pg_ambassador --link postgres:postgres -p 5432:5432 ctlc/ambassador

Now you can start a postgresql client container without linking the ambassador container and access postgresql on the gateway host (typically 172.17.42.1):

$ docker run --rm -t -i paintedfox/postgresql /bin/bash
root@b94251eac8be:/# PGHOST=$(netstat -nr | grep '^0\.0\.0\.0 ' | awk '{print $2}')
root@b94251eac8be:/# echo $PGHOST
172.17.42.1
root@b94251eac8be:/#
root@b94251eac8be:/# psql -h $PGHOST --user postgres
Password for user postgres: 
psql (9.3.4)
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
Type "help" for help.

postgres=#
postgres=# select 6*7 as answer;
 answer 
--------
     42
(1 row)

postgres=# 

Now you can restart the ambassador container without having to restart the client.

Invisible answered 20/6, 2014 at 17:48 Comment(5)
won't "-p 5432:5432" exposes the PostgreSQL to outside world?Wun
Yes it will. If you do not want this, you can use "-p 172.17.42.1:5432:5432".Trapezohedron
By the way, why do you need to create that "PGDATA" container and link it to the postgresql container? I don't understand; why not just create the postgresql container and map its volume to a host directory directly?Wun
The PGDATA container is not needed, I use it just for separation of concerns. When starting the postgres container I do not need to remember how the volumes in the PGDATA container are mapped. I've added it because that's how I'm currently doing it. It's basically a matter of taste - I myself am not yet sure whether it's a good idea or not...Trapezohedron
It's indeed best practice to use a data volume container like Swen does.Lammond

If anyone is still curious, you have to use the host entries in the /etc/hosts file of each Docker container and should not depend on the ENV variables, as they are not updated automatically.

There will be a hosts file entry for each linked container under its link alias (the LINKEDCONTAINERNAME_PORT_PORTNUMBER_TCP style names are the environment variables, not hosts entries).

The following is from the Docker docs:

Important notes on Docker environment variables

Unlike host entries in the /etc/hosts file, IP addresses stored in the environment variables are not automatically updated if the source container is restarted. We recommend using the host entries in /etc/hosts to resolve the IP address of linked containers.

These environment variables are only set for the first process in the container. Some daemons, such as sshd, will scrub them when spawning shells for connection.
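
For illustration, here is roughly what a linked container receives (output abbreviated; the IP and extra hosts entries will differ): the DB_* environment variables freeze the address from start time, while the db entry in /etc/hosts is the one that can be refreshed.

$ docker run -d --name db training/postgres
$ docker run --rm --link db:db busybox sh -c 'cat /etc/hosts; env | grep ^DB_'
127.0.0.1       localhost
172.17.0.2      db
DB_PORT=tcp://172.17.0.2:5432
DB_PORT_5432_TCP_ADDR=172.17.0.2
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_PROTO=tcp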

Lascivious answered 13/5, 2015 at 2:16 Comment(0)

This was included in the experimental build of Docker three weeks ago, with the introduction of services: https://github.com/docker/docker/blob/master/experimental/networking.md

You should be able to get a dynamic link in place by running a Docker container with the --publish-service <name> argument. The name will be accessible via DNS. This is persistent across container restarts (as long as you restart the container with the same service name, of course).

Remarque answered 29/7, 2015 at 20:30 Comment(3)
How do you install that version? github.com/docker/docker/releases/tag/v1.8.0-rc1Norris
See this page for more info: github.com/docker/docker/tree/master/experimental. Short version: run wget -qO- https://experimental.docker.com/ | sh to install the experimental version.Remarque
This answer was valid but is now outdated as docker removed experimental publish-service option. Now they have network-scoped aliases instead. Essentially the same thing though.Griego

You may use Docker links with names to solve this.

The most basic setup would be to first create a named database container:

$ sudo docker run -d --name db training/postgres

then create a web container connecting to db:

$ sudo docker run -d -P --name web --link db:db training/webapp python app.py

With this, you don't need to manually connect containers with their IP addresses.

Tuberculous answered 17/6, 2014 at 13:8 Comment(5)
Hm... it looks like Docker will generate the linked hostname for you, but it generates the entry in /etc/hosts statically: when I restart the linked container, the IP changes, but /etc/hosts remains the same, so it won't work.Wun
Since version 1.0, Docker is more aggressive in assigning IP addresses. When you restart a container (db in this case) it will receive a new IP address. Your other container (restarted or not) will retain the ENV values from the moment you launched it, so they are useless.Baud
fyi looks like a fix is coming, /etc/hosts will be updated when a linked container is restarted: github.com/docker/docker/issues/6350Tarpon
This issue seems to be fixed and the proposed method is working for me.Natividad
This is the most correct answer here. The only problem is that links are unidirectional and add a dependency between containers: you can't cross-link two containers, and you can't stop the linked container and then start it again (with new options or something). In either of those cases, use networks (and either a net-alias or the container name).Griego

With the OpenSVC approach, you can work around this by:

  • use a service with its own IP address/DNS name (the one your end users will connect to)
  • tell Docker to expose ports on this specific IP address (the "--ip" Docker option)
  • configure your apps to connect to the service IP address

Each time you replace a container, you are sure that it will connect to the correct IP address.

Tutorial here => Docker Multi Containers with OpenSVC

Don't miss the "complex orchestration" part at the end of the tutorial, which can help you start/stop containers in the correct order (1 PostgreSQL subset + 1 web app subset + 1 Nginx subset).

The main drawback is that you expose the web app and PostgreSQL ports on a public address, while actually only the Nginx TCP port needs to be exposed publicly.

Histology answered 19/6, 2014 at 16:29 Comment(0)

You could also try the ambassador method of having an intermediary container just for keeping the link intact (see https://docs.docker.com/articles/ambassador_pattern_linking/ for more info).

Hyponasty answered 20/6, 2014 at 7:28 Comment(2)
Ambassador is a nice pattern, yet it suffers from the same problem: the IP address will not necessarily be updated on restarts. They are great for inter-host connectivity though. Well, maybe with the new Docker release that won't be needed either.Griego
@IvanAnishchuk true, but at the time the comment was made this was the way to go... (+2 years ago ;))Hyponasty

You can bind the connection ports of your images to fixed ports on the host and configure the services to use them instead.
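
For example (the my/webapp image names and port numbers are placeholders; paintedfox/postgresql is just one PostgreSQL image):

$ docker run -d --name postgres -p 5432:5432 paintedfox/postgresql
$ docker run -d --name webapp1 -p 8081:5000 my/webapp1
$ docker run -d --name webapp2 -p 8082:5000 my/webapp2

# Nginx then proxies to the host's address and the fixed ports,
# e.g. proxy_pass http://172.17.42.1:8081; in its site config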

This has its drawbacks as well, but it might work in your case.

Sled answered 18/6, 2014 at 6:3 Comment(1)
Binding localhost ports has its drawbacks indeed. New docker networking makes it outdated.Griego

Another alternative is to use the --net container:$CONTAINER_ID option.

Step 1: Create "network" containers

docker run --name db_net ubuntu:14.04 sleep infinity
docker run --name app1_net --link db_net:db ubuntu:14.04 sleep infinity
docker run --name app2_net --link db_net:db ubuntu:14.04 sleep infinity
docker run -p 80 -p 443 --name nginx_net --link app1_net:app1 --link app2_net:app2 ubuntu:14.04 sleep infinity

Step 2: Inject services into "network" containers

docker run --name db --net container:db_net pgsql
docker run --name app1 --net container:app1_net app1
docker run --name app2 --net container:app2_net app2
docker run --name nginx --net container:nginx_net nginx

As long as you do not touch the "network" containers, the IP addresses of your links should not change.

Gamboa answered 18/11, 2015 at 17:16 Comment(1)
User-created bridge network with a meaningful name is probably a better option. Doesn't require creating containers just to use their networks.Griego

A network-scoped alias is what you need in this case. It's a rather new feature, which can be used to "publish" a container providing a service to the whole network, unlike link aliases, which are accessible only from one container.

It does not add any kind of dependency between containers: they can communicate as long as both are running, regardless of restarts, replacement, and launch order. It uses DNS internally, I believe, instead of /etc/hosts.

Use it like this: docker run --net=some_user_defined_nw --net-alias postgres ..., and you can connect to it using that alias from any container on the same network.

It does not work on the default network, unfortunately; you have to create one with docker network create <network> and then use it with --net=<network> for every container (Compose supports it as well).
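
A short sketch for the database case (appnet and the postgres tags are arbitrary): as long as the replacement container joins the same network with the same alias, clients keep connecting to postgres without caring which container is behind it.

$ docker network create appnet
$ docker run -d --net=appnet --net-alias=postgres --name pg1 postgres:9.4

# clients on the same network resolve the alias instead of a fixed IP
$ docker run --rm --net=appnet busybox ping -c 1 postgres

# replace the database: new container, same network, same alias
$ docker rm -f pg1
$ docker run -d --net=appnet --net-alias=postgres --name pg2 postgres:9.5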

Besides a container being down and hence unreachable by its alias, multiple containers can also share an alias, in which case it's not guaranteed to resolve to the right one. But in some cases that can help with a seamless upgrade, probably.

It's all not very well documented yet; it's hard to figure out just by reading the man page.

Griego answered 6/7, 2016 at 0:16 Comment(0)
