Can (or should) 2 docker containers interact with each other via localhost?
We're dockerizing our microservices app, and I ran into some service discovery issues.

The app is configured as follows:

When a service is started in 'non-local' mode, it uses Consul as its discovery registry. When a service is started in 'local' mode, it binds a hard-coded address per service (for example, tcp://localhost:61001, tcp://localhost:61002, and so on).

After dockerizing the app (for local mode only, for now), each service runs in its own container (Docker images orchestrated with docker-compose, and with docker-machine, if that matters). But one service cannot interact with another service, since they are no longer on the same machine and tcp://localhost:61001 will obviously not work.

Using docker-compose with links and specifying localhost as an alias (service:localhost) didn't work. Is there a way for two containers to "share" the same localhost?

If not, what is the best way to approach this? I thought about using a specific hostname per service and then specifying that hostname in the links section of the docker-compose file (but I doubt this is the elegant solution). Or maybe use a dockerized version of Consul and integrate with it?
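For the dockerized-Consul route, the compose file might look roughly like this (a minimal sketch, assuming the official consul image and its default HTTP API port 8500; myservice is a placeholder for one of your services):

consul:
  image: consul       # assumption: the official Consul image from Docker Hub
  ports:
    - "8500:8500"     # Consul's default HTTP API port
myservice:            # placeholder for one of your services
  links:
    - consul          # myservice can then reach the registry at consul:8500

Each service would then register with and query the registry at consul:8500 instead of relying on hard-coded localhost addresses.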

This post: How to share localhost between two different Docker containers? provided some insights into why localhost shouldn't be messed with, but I'm still quite puzzled about the correct approach here.

Thanks!

Handy answered 7/8, 2017 at 16:50 Comment(1)
Bind to 0.0.0.0 instead of localhost and then share via linking, as mentioned. You don't need to manually specify a hostname; by default it's the container name. – Gettysburg
But one service cannot interact with another service, since they are no longer on the same machine and tcp://localhost:61001 will obviously not work.

Actually, they can. You are right that tcp://localhost:61001 will not work, because localhost within a container refers to the container itself, just as it does on any system by default. This means your services cannot share the same localhost. If you want them to, you could run both services in one container, though that really isn't the best design, since it defeats one of the main purposes of Docker Compose.

The ideal way to do it is with docker-compose links. The guide you referenced shows how to define them; to actually use them, put the linked container's name in your URLs as if it were a hostname. (Depending on your Docker version, the link gives the original container an /etc/hosts entry for that name, or the name is resolved by Docker's embedded DNS.) If you want to use something other than the linked container's name, you can define a link alias, which is explained in the same guide.

For example, with a docker-compose.yml file like this:

a:
  expose:
    - "9999"  # make port 9999 reachable by linked containers (without publishing it on the host)
b:
  links:
    - a       # makes the hostname "a" resolvable from inside b

With a listening on 0.0.0.0:9999, b can interact with a by making requests from within b to tcp://a:9999. It would also be possible to shell into b and run

ping a

which would send ping requests to the a container from the b container.

So in conclusion, try replacing localhost in the request URL with the literal name of the linked container (or the link alias, if the link is defined with an alias). That means that

tcp://<container_name>:61001

should work instead of

tcp://localhost:61001

Just make sure you define the link in docker-compose.yml.
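And if you want the hostname to differ from the service name, a link alias would look something like this (a sketch; myalias is just an illustrative name):

b:
  links:
    - a:myalias   # inside b, "myalias" now resolves to the a container

so tcp://myalias:61001 would work as well.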

Hope this helps

Indiscrete answered 7/8, 2017 at 18:3 Comment(0)
In production, never use docker or docker-compose alone. Use an orchestrator (Rancher, Docker Swarm, k8s, ...) and deploy your stack there. The orchestrator will take care of the networking. Your containers can link to each other, so you can access them directly by name (don't worry too much about the IP).

On your local host, use docker-compose to start up your containers, and use links: don't address a local port, address the name of the link. (If your container A needs to access container B on port 1234, link B to A with the alias BBBB and use tcp://BBBB:1234 to reach B from A, as in the sketch below.)
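In compose terms, that link might look like this (a sketch using the hypothetical names from the paragraph above):

b:
  expose:
    - "1234"    # reachable by linked containers only, not published on the host
a:
  links:
    - b:BBBB    # inside a, "BBBB" resolves to b, so tcp://BBBB:1234 works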

If you really want to bind ports to your localhost and use those, access the ports via your host's IP, not localhost.
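Publishing the port would look something like the sketch below; with docker-machine, the host IP is whatever docker-machine ip reports.

b:
  ports:
    - "1234:1234"   # published on the Docker host; reachable at tcp://<host-ip>:1234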

Boaz answered 7/8, 2017 at 17:12 Comment(2)
Why should one never use docker-compose in production? – Indiscrete
If one container crashes, there is no automatic restart. If one container restarts, you have to restart all the containers to resync the links (the other containers keep the old IP). You are limited to a single host, scaling is hard, ... – Boaz
If changing the hard-coded addresses is not an option for now, perhaps you could modify the startup scripts of your containers to forward ports in each local container to the required services in the other containers.

This would create some complications, though, because you would have to set up ssh in each of your containers and manage the corresponding keys.

Come to think of it, if encryption is not an issue, ssh is not necessary; using socat or redir would probably be enough.

socat TCP4-LISTEN:61001,fork TCP4:othercontainer:61001
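
Wired into a compose service, that could look roughly like this (a sketch: it assumes socat is installed in the image, othercontainer is a linked service, and /app/start.sh stands in for your real entrypoint):

a:
  links:
    - othercontainer
  # forward localhost:61001 inside the container to the linked service, then start the app
  command: sh -c "socat TCP4-LISTEN:61001,fork TCP4:othercontainer:61001 & exec /app/start.sh"

With that in place, the hard-coded tcp://localhost:61001 address keeps working inside the container.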
Deloris answered 7/8, 2017 at 18:14 Comment(0)
