How to share localhost between two different Docker containers?
I have two different Docker containers and each has a different image. Each app in the containers uses non-conflicting ports. See the docker-compose.yml:

version: "2"

services:

  service_a:
    container_name: service_a.dev
    image: service_a.dev
    ports:
      - "6473:6473"
      - "6474:6474"
      - "1812:1812"
    depends_on:
      - postgres
    volumes:
      - ../configs/service_a/var/conf:/opt/services/service_a/var/conf

  postgres:
    container_name: postgres.dev
    hostname: postgres.dev
    image: postgres:9.6
    ports:
      - "5432:5432"
    volumes:
      - ../configs/postgres/scripts:/docker-entrypoint-initdb.d/

I can curl each container successfully from the host machine (macOS), e.g. curl -k https://localhost:6473/service_a/api/version works. What I'd like is to refer to the postgres container from the service_a container via localhost, as if the two containers were one and shared the same localhost. I know it's possible to use the hostname postgres.dev from inside the service_a container, but I'd like to be able to use localhost. Is this possible? Please note that I am not very well versed in networking or Docker.

Mac version: 10.12.4

Docker version: Docker version 17.03.0-ce, build 60ccb22

I have done quite a bit of prior research but couldn't find a solution. Relevant: https://forums.docker.com/t/localhost-and-docker-compose-networking-issue/23100/2

Thee answered 21/4, 2017 at 16:32 Comment(3)
There are several ways of doing this, with varying degrees of hackiness, but why do you want to? – Lewes
I'll reiterate: why do you want to do this? Messing with localhost is only going to cause confusion. Running both processes in the same container could achieve the same without the hackiness. – Instancy
My use case is this: these two services must be deployed on the same host for security purposes, so service_a in the real world would be configured to listen only on localhost/127.0.0.1. Ideally, I'd create a single Docker image containing both postgres and service_a, but that didn't seem feasible, hence my approach here. – Thee
A
15

The right way: don't use localhost. Instead, use Docker's built-in DNS and reference the containers by their service name. You shouldn't even be setting container_name, since that breaks scaling.
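A minimal sketch of what this looks like from inside the service_a container (the psql client and the default postgres superuser here are assumptions for illustration):

# "postgres" resolves via Docker's embedded DNS to the postgres
# service on the shared compose network
psql -h postgres -p 5432 -U postgres -c 'SELECT version();'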


The bad way: if you don't want to use Docker's networking, you can switch to host networking. That turns off a key feature, and other Docker capabilities, like connecting containers together in their own isolated networks, will no longer work. With that disclaimer, the result would look like:

version: "2"

services:

  service_a:
    container_name: service_a.dev
    image: service_a.dev
    network_mode: "host"
    depends_on:
      - postgres
    volumes:
      - ../configs/service_a/var/conf:/opt/services/service_a/var/conf

  postgres:
    container_name: postgres.dev
    image: postgres:9.6
    network_mode: "host"
    volumes:
      - ../configs/postgres/scripts:/docker-entrypoint-initdb.d/

Note that I removed the port publishing, since the containers are no longer on a container network; with host networking their ports open directly on the host. And I removed the hostname setting, since you shouldn't change the hostname of the host itself from a Docker container.

The forum post you linked shows that when Docker runs inside a VM (as it does on Docker for Mac), the host cannot communicate with the containers as localhost. This is an expected limitation, but the containers themselves will be able to talk to each other as localhost. If you use a VirtualBox-based install with docker-toolbox, you should be able to reach the containers via the VirtualBox IP.
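Concretely, once both containers run with host networking they share the host's network stack, so a plain localhost connection works (again assuming a psql client in the service_a image):

# inside service_a.dev with network_mode: host, postgres listens
# on the same network stack, so localhost reaches it directly
psql -h localhost -p 5432 -U postgres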


The really wrong way: abuse the container network mode. This mode exists for debugging container networking issues and for specialized use cases, and really shouldn't be used just to avoid reconfiguring an application to use DNS. And when you stop the database, you'll break the other container, since it will lose its network namespace.

For this, you'll likely need to run two separate docker-compose.yml files because docker-compose will check for the existence of the network before taking any action. Start with the postgres container:

version: "2"
services:
  postgres:
    container_name: postgres.dev
    image: postgres:9.6
    ports:
      - "5432:5432"
    volumes:
      - ../configs/postgres/scripts:/docker-entrypoint-initdb.d/

Then you can make a second service in that same network namespace (its ports were published on the postgres container above, since Docker does not allow port publishing together with the container network mode):

version: "2"
services:
  service_a:
    container_name: service_a.dev
    image: service_a.dev
    network_mode: "container:postgres.dev"
    ports:
      - "6473:6473"
      - "6474:6474"
      - "1812:1812"
    volumes:
      - ../configs/service_a/var/conf:/opt/services/service_a/var/conf
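
Assuming the two files are saved as postgres.yml and service_a.yml (the file names are illustrative), bring them up in order, database first:

# start the namespace owner first, then the service that joins it
docker-compose -f postgres.yml up -d
docker-compose -f service_a.yml up -d

Start order matters: service_a joins the namespace of postgres.dev, so the database container must already exist, and stopping or recreating it forces a restart of service_a as well.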
Amitosis answered 22/4, 2017 at 2:52 Comment(3)
I think this is the right way to go; unfortunately Docker for Mac seems to be problematic with host networking: forums.docker.com/t/should-docker-run-net-host-work/14215 and forums.docker.com/t/… – Thee
The forum posts point out the issue of accessing a container from the host via localhost. The question and this answer cover container-to-container access via localhost. But I'll reiterate: please don't do this; if it breaks, you get to keep all the pieces. – Amitosis
Ironically, "the really wrong way" actually works great for a containerized OpenVPN client. So it may be wrong for the typical case, but for this special case it's really helpful. – Merkle
B
5

Here is the "really wrong way" setup from @BMitch's answer. I find it useful for local development, where a single project configuration lets me develop both with and without Docker. I cannot see other use cases for this configuration.

version: "3.8"
services:
  app:
    image: governmentpaas/curl-ssl
    tty: true
    network_mode: "service:localhost"
    depends_on:
      - localhost
      - web1
      - web2

  web1:
    image: jstastny/envechoserver
    network_mode: "service:localhost"
    environment:
      PORT: 81
    depends_on:
      - localhost

  web2:
    image: jstastny/envechoserver
    network_mode: "service:localhost"
    environment:
      PORT: 82
    depends_on:
      - localhost

  localhost:
    # empty placeholder service that owns the shared network namespace
    image: alpine:latest
    command: sleep infinity

To test it, go into the app container:

docker exec -it localhost_services-app-1 sh

You can get responses from the web containers like this:

curl http://localhost:81
curl http://localhost:82

But if you try to curl services running on the host itself, they will not be reachable.

I have added the empty localhost service to manage dependencies; otherwise we could end up with circular references.

It is possible to make the services visible from the outside; in that case, the port mappings have to be set up on the localhost service:

localhost:
  image: alpine:latest
  command: sleep infinity
  ports:
    - "81:81"
    - "82:82"
Brotherinlaw answered 11/8, 2023 at 19:40 Comment(1)
This is "the really wrong way" from @BMitch's answer and I wouldn't recommend this.Spermatophore
C
0

Specifically for Mac, and during local testing, I managed to get multiple containers talking to each other using the docker.for.mac.localhost approach. I documented it at http://nileshgule.blogspot.sg/2017/12/docker-tip-workaround-for-accessing.html
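A rough sketch of the idea, using the postgres port mapping from the question (a psql client inside the container is an assumption here, and newer Docker Desktop releases use host.docker.internal instead):

# run inside any container on Docker for Mac; docker.for.mac.localhost
# resolves to the Mac host, so this reaches postgres via its published port
psql -h docker.for.mac.localhost -p 5432 -U postgres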

Coulombe answered 3/12, 2017 at 15:24 Comment(1)
your link is no longer on blogspot and the code (shared on now removed images) are no longer there in the redirected page.Suppress
