Forward host port to docker container
Is it possible to have a Docker container access ports opened by the host? Concretely I have MongoDB and RabbitMQ running on the host and I'd like to run a process in a Docker container to listen to the queue and (optionally) write to the database.

I know I can forward a port from the container to the host (via the -p option) and have a connection to the outside world (i.e. internet) from within the Docker container but I'd like to not expose the RabbitMQ and MongoDB ports from the host to the outside world.

EDIT: some clarification:

Starting Nmap 5.21 ( http://nmap.org ) at 2013-07-22 22:39 CEST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00027s latency).
PORT     STATE SERVICE
6311/tcp open  unknown

joelkuiper@vps20528 ~ % docker run -i -t base /bin/bash
root@f043b4b235a7:/# apt-get install nmap
root@f043b4b235a7:/# nmap 172.16.42.1 -p 6311 # IP found via docker inspect -> gateway

Starting Nmap 6.00 ( http://nmap.org ) at 2013-07-22 20:43 UTC
Nmap scan report for 172.16.42.1
Host is up (0.000060s latency).
PORT     STATE    SERVICE
6311/tcp filtered unknown
MAC Address: E2:69:9C:11:42:65 (Unknown)

Nmap done: 1 IP address (1 host up) scanned in 13.31 seconds

I had to use the trick described in "My firewall is blocking network connections from the docker container to outside" to get any internet connection within the container.

EDIT: Eventually I went with creating a custom bridge using pipework and having the services listen on the bridge IPs. I went with this approach instead of having MongoDB and RabbitMQ listen on the Docker bridge because it gives more flexibility.

Infirmary answered 21/7, 2013 at 9:32 Comment(1)
I'd love a solution that supports services listening on 127.0.0.1 (not 0.0.0.0 or the Docker network's host IP) and port forwards (that specific port) into the container. I'm surprised that Docker supports forwarding from the container to the host, but not vice-versa...Thurible

Your Docker host exposes an adapter to all the containers. Assuming you are on a recent Ubuntu, you can run

ip addr

This will give you a list of network adapters, one of which will look something like

3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 22:23:6b:28:6b:e0 brd ff:ff:ff:ff:ff:ff
inet 172.17.42.1/16 scope global docker0
inet6 fe80::a402:65ff:fe86:bba6/64 scope link
   valid_lft forever preferred_lft forever

You will need to tell rabbit/mongo to bind to that IP (172.17.42.1). After that, you should be able to open connections to 172.17.42.1 from within your containers.
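For example (a sketch using the bridge address shown above; your docker0 address may differ), MongoDB can be told to listen on both loopback and the bridge IP in its YAML config:

```yaml
# /etc/mongod.conf -- assumption: 172.17.42.1 is your docker0 address
net:
  port: 27017
  bindIp: 127.0.0.1,172.17.42.1   # loopback plus the Docker bridge
```

RabbitMQ has an analogous setting; in a modern rabbitmq.conf that would be something like listeners.tcp.docker = 172.17.42.1:5672.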

Melt answered 5/9, 2013 at 21:14 Comment(8)
How does the container know what IP to send requests to? I can hardcode the value (172.17.42.1 here and on my test rig, but is that always true?), but that seems to go against the docker principles of working with any host!Traceetracer
another option is to bind your host machine service (e.g. mongodb) to listen on all network interfaces (i.e. 0.0.0.0), configure your firewall so that only machines on your internal network can connect to the relevant ports and then connect to the host machine's network IP address from within the container.Formic
@Seldo: Does that interface need configuration to show up? I am using docker 1.7.1, and I only have lo and eth0.Dreg
Is it possible to do this somehow, if the host is only listening on 127.0.0.1?Slusher
"You will need to tell rabbit/mongo to bind to that IP (172.17.42.1). After that, you should be able to open connections to 172.17.42.1 from within your containers." Would be nice if you explained how to do thatThoroughwort
As @Thoroughwort mentioned, can somebody please explain that process?Silden
@Thoroughwort @keskinsaf MongoDB: mongod --bind_ip=127.0.0.1 or bind_ip = 127.0.0.1 in /etc/mongodb.confMopes
A valuable thing to remember is to run ip -br -c a. It takes a while to get used to it but it's so much nicer for the general day-to-day usage.Abstract
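Following up on the hardcoding concern in the first comment: inside a container, the docker0 address is simply the default gateway, so it can be derived at runtime instead of baked in. A minimal sketch (the echoed line stands in for live `ip route` output):

```shell
# Extract the default-gateway IP (the docker0 bridge) from `ip route` output.
default_gw() { awk '$1 == "default" { print $3; exit }'; }

# Inside a real container you would run:  ip route | default_gw
# Demo with sample output:
echo "default via 172.17.42.1 dev eth0" | default_gw
```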

A simple but relatively insecure way would be to use the --net=host option to docker run.

This option makes it so that the container uses the networking stack of the host. Then you can connect to services running on the host simply by using "localhost" as the hostname.

This is easier to configure because you won't have to configure the service to accept connections from the IP address of your docker container, and you won't have to tell the docker container a specific IP address or host name to connect to, just a port.

For example, you can test it out by running the following command, which assumes your image is called my_image, your image includes the telnet utility, and the service you want to connect to is on port 25:

docker run --rm -i -t --net=host my_image telnet localhost 25

If you consider doing it this way, please see the caution about security on this page:

https://docs.docker.com/articles/networking/

It says:

--net=host -- Tells Docker to skip placing the container inside of a separate network stack. In essence, this choice tells Docker to not containerize the container's networking! While container processes will still be confined to their own filesystem and process list and resource limits, a quick ip addr command will show you that, network-wise, they live “outside” in the main Docker host and have full access to its network interfaces. Note that this does not let the container reconfigure the host network stack — that would require --privileged=true — but it does let container processes open low-numbered ports like any other root process. It also allows the container to access local network services like D-bus. This can lead to processes in the container being able to do unexpected things like restart your computer. You should use this option with caution.

Fleshpots answered 28/9, 2014 at 23:11 Comment(6)
For anyone not using docker on Linux (e.g. using some virtualization) this won't work, since the host will be the containing VM, not the actual host OS.Sferics
In particular, on MacOS, this is not possible (without some workarounds): docs.docker.com/docker-for-mac/networking/…Kirshbaum
On MacOS, --net=host does not work for allowing your container process to connect to your host machine using localhost. Instead, have your container connect to the special MacOS only hostname docker.for.mac.host.internal instead of localhost. No extra parameters are needed to docker run for this to work. You can pass this in as an env var using -e if you want to keep your container platform agnostic. That way you can connect to the host named in the env var and pass docker.for.mac.host.internal on MacOS and localhost on Linux.Teaspoon
latest hostname for mac is host.docker.internal, see docStruggle
Same for Windows docker run --rm -it --net=host postgres bash then psql -h host.docker.internal -U postgresCubiculum
Yea don't use this. It's indeed insecure. Also not available when using namespace remapping with Docker (userns-remap). Instead see my answer below: https://mcmap.net/q/109215/-forward-host-port-to-docker-containerCraftwork

As stated in one of the comments, this works for Mac (probably for Windows/Linux too):

I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST

The host has a changing IP address (or none if you have no network access). We recommend that you connect to the special DNS name host.docker.internal which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Mac.

You can also reach the gateway using gateway.docker.internal.

Quoted from https://docs.docker.com/docker-for-mac/networking/

This worked for me without using --net=host.
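For instance, a hypothetical Compose fragment (the image name and connection string are placeholders) pointing an app at a MongoDB instance running on the host:

```yaml
# Sketch for Docker Desktop: host.docker.internal resolves to the host,
# so no --net=host or port publishing is needed in this direction.
services:
  app:
    image: my_app_image   # hypothetical
    environment:
      # assumption: MongoDB listens on the host's default port 27017
      MONGO_URL: "mongodb://host.docker.internal:27017/mydb"
```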

Pretext answered 29/12, 2020 at 15:45 Comment(3)
This is the easiest solution for Mac.Tagalog
This works on Windows, but not on Linux (which the question was specifically about). On Linux with a sufficiently recent version of Docker, there is a workaround: https://mcmap.net/q/40719/-what-is-the-linux-equivalent-of-quot-host-docker-internal-quot-duplicateSolothurn
Good solution for Windows. Working hereDuvalier

You could also create an ssh tunnel.

docker-compose.yml:

---

version: '2'

services:
  kibana:
    image: "kibana:4.5.1"
    links:
      - elasticsearch
    volumes:
      - ./config/kibana:/opt/kibana/config:ro

  elasticsearch:
    build:
      context: .
      dockerfile: ./docker/Dockerfile.tunnel
    entrypoint: ssh
    command: "-N elasticsearch -L 0.0.0.0:9200:localhost:9200"

docker/Dockerfile.tunnel:

FROM buildpack-deps:jessie

RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive \
    apt-get -y install ssh && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

COPY ./config/ssh/id_rsa /root/.ssh/id_rsa
COPY ./config/ssh/config /root/.ssh/config
COPY ./config/ssh/known_hosts /root/.ssh/known_hosts
RUN chmod 600 /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/config && \
    chown root:root -R /root/.ssh

config/ssh/config:

# Elasticsearch Server
Host elasticsearch
    HostName jump.host.czerasz.com
    User czerasz
    ForwardAgent yes
    IdentityFile ~/.ssh/id_rsa

This way the elasticsearch service is really an SSH tunnel to the server running the actual service (Elasticsearch, MongoDB, PostgreSQL) and exposes port 9200 with that service to the other containers.
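The key and known_hosts files the Dockerfile copies have to exist before the build. A hypothetical way to prepare them (the jump-host name is the author's example; the public half of the key would then need to be authorized on that host):

```shell
# Prepare the files referenced by COPY in docker/Dockerfile.tunnel.
mkdir -p config/ssh
# Dedicated, passphrase-less key for the tunnel (assumption: this key,
# rather than a personal one, is what the answer intends).
ssh-keygen -t rsa -N "" -f config/ssh/id_rsa -q
# Normally populated via: ssh-keyscan jump.host.czerasz.com > config/ssh/known_hosts
: > config/ssh/known_hosts
```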

Bare answered 7/7, 2016 at 7:53 Comment(3)
You're basically putting the private key in the Docker image. Secrets should never get into a Docker image.Imbrication
This is the only usable sane solution so far.Horrid
@TeohHanHui IMO for early integrational test or local development this solution is perfect. We can mock those services, will be available later in production environment. I'd like to do this for Cosmos DB emulator, where we only get localhost CERT, but with port forward we can use it on 'remote' host too.Dalury

TLDR;

For local development only, do the following:

  1. Start the service or SSH tunnel on your laptop/computer/PC/Mac.
  2. Build/run your Docker image/container to connect to hostname host.docker.internal:<hostPort>

Note: There is also gateway.docker.internal, which I have not tried.

END_TLDR;

For example, if you were using this in your container:

PGPASSWORD=password psql -h localhost -p 5432 -d mydb -U myuser

change it to this:

PGPASSWORD=password psql -h host.docker.internal -p 5432 -d mydb -U myuser

This magically connects to the service running on my host machine. You do not need to use --net=host or -p "hostPort:containerPort" or -P.

Background

For details see: https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds

I used this with an SSH tunnel to an AWS RDS Postgres Instance on Windows 10. I only had to change from using localhost:containerPort in the container to host.docker.internal:hostPort.

Steve answered 6/7, 2021 at 21:21 Comment(1)
Did the original question mention they are working on a mac?Ljubljana

I had a similar problem accessing an LDAP server from a Docker container. I set a fixed IP for the container and added a firewall rule.

docker-compose.yml:

version: '2'
services:
  containerName:
    image: dockerImageName:latest
    extra_hosts:
      - "dockerhost:192.168.50.1"
    networks:
      my_net:
        ipv4_address: 192.168.50.2
networks:
  my_net:
    ipam:
      config:
      - subnet: 192.168.50.0/24

iptables rule:

iptables -A INPUT -j ACCEPT -p tcp -s 192.168.50.2 -d 192.168.50.1 --dport portnumberOnHost

Inside the container, access the service at dockerhost:portnumberOnHost.

Rabiah answered 15/6, 2017 at 14:16 Comment(0)

I’m not sure I’m answering the correct question, but if you need to allow access from inside the container to a remote service through ssh tunneling, this works:

Connect with ssh and create a tunnel like this (notice the 0.0.0.0 at the start), assuming the service on the remote host is accessible at port 8081:

ssh ubuntu@remoteIp -L 0.0.0.0:8080:localhost:8081

This will allow anyone with (network) access to your computer to connect to port 8080 and thus access port 8081 on the remote server.

Then, inside the container just use "host.docker.internal" with the local end of the tunnel (port 8080), for example:

curl host.docker.internal:8080
Lowrance answered 24/5, 2022 at 20:42 Comment(2)
Can you be more complete with your answer? What are you passing -L to? Doesn't look like docker -- I think this is elaborating on the various ssh-based answers (eg https://mcmap.net/q/109215/-forward-host-port-to-docker-container)?Thurible
Hope the edit helpsLowrance

The easiest way on all platforms nowadays is to use host.docker.internal. Let's first start with the docker run command:

docker run --add-host=host.docker.internal:host-gateway [....]

Or add the following to your service, when using Docker Compose:

extra_hosts:
  - "host.docker.internal:host-gateway"

A full example of such a Docker Compose file would then look like this:

version: "3"
services:
  your_service:
    image: username/docker_image_name
    restart: always
    networks:
      - your_bridge_network
    volumes:
      - /home/user/test.json:/app/test.json
    ports:
      - "8080:80"
    extra_hosts:
      - "host.docker.internal:host-gateway"

networks:
  your_bridge_network:

Again, it's just an example. But if this Docker image starts a service on port 80, it will be available on the host on port 8080.

And, more importantly for your use-case: if the Docker container wants to use a service from your host system, that is now possible using the special host.docker.internal name. That name is automatically resolved to the internal Docker IP address (of the docker0 interface).

Anyway, let's say you are also running a web service on your host machine on port 80. You should now be able to reach that service from within your Docker container. Try it out: nc -vz host.docker.internal 80.

All WITHOUT using network_mode: "host".

Craftwork answered 14/1, 2023 at 18:45 Comment(3)
Under what circumstances do you think '--add-host=host.docker.internal:host-gateway' is necessary?Dozier
@Unknown if you want to access a local running service that is on your host machine, from a docker container. This might only works with root docker setup (rootless might be different). Anyhow, let's say you have a postfix running on your debian server, and you want your docker container to connect to your postfix server (instead of setting-up a separate container for postfix), this is the way.Craftwork
I know, but in a few images I tried, they resolved host.docker.internal automatically without requiring me to pass that argument.Dozier

host.docker.internal or gateway.docker.internal usually works very well for Docker Desktop for Mac containers, but it fails with Kafka, which is very strict about the listeners it accepts.

I bypassed this issue by forwarding port 9092 inside the container to the host by running socat:

apt-get install -y socat && socat tcp-l:9092,fork tcp:host.docker.internal:9092 &

Basically I have one container running Kafka, and another app container that needs to connect to it. Setting the app container's Kafka brokers to ["host.docker.internal:9092"] doesn't work. So I set the brokers to ["localhost:9092"] and use socat to forward that port to the host, which in turn has a port mapping to the Kafka container.

I arrived at this solution after a lot of trial and error. I hope this helps someone !
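Roughly, the described setup could be sketched in Compose like this (image names and the app start command are placeholders; the point is that the app talks to localhost:9092, which socat relays to the host, where the Kafka container's port is published):

```yaml
# Hypothetical sketch of the socat relay described above
services:
  kafka:
    image: some-kafka-image        # placeholder
    ports:
      - "9092:9092"                # published on the host
  app:
    image: my_app                  # placeholder
    command: >
      sh -c "socat tcp-l:9092,fork tcp:host.docker.internal:9092 &
             exec ./run-app"
```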

Gaynor answered 6/3, 2023 at 10:49 Comment(0)

If MongoDB and RabbitMQ are running on the host, then the ports should already be exposed, as they are not within Docker.

You do not need the -p option in order to expose ports from the container to the host. By default, all ports are exposed. The -p option allows you to expose a port from the container to the outside of the host.

So, my guess is that you do not need -p at all and it should be working fine :)

Haig answered 22/7, 2013 at 17:16 Comment(4)
I knew that, but it seems that I'm missing a bit of information: see the recent edit, as I am unable to reach the ports on the host.Infirmary
You need to set up rabbitmq and mongodb to also listen on the bridge and not only on your main network interface.Haig
@Haig how do you get rabbitmq and mongodb to listen on the bridge?Squier
@RyanWalls Try docker network inspect bridge. Look under IPAM -> Config -> Gateway. In my case, it was 172.17.0.1.Alesha

Why not use a slightly different solution, like this?

services:
  kubefwd:
    image: txn2/kubefwd
    command: ...
  app:
    image: bash
    command:
     - sleep
     - inf
    init: true
    network_mode: service:kubefwd

REF: txn2/kubefwd: Bulk port forwarding Kubernetes services for local development.

Bremen answered 18/6, 2022 at 16:48 Comment(0)
