Docker exposed port stops working when connected to a VPN

I'm trying to create a Docker image which will forward a port through a VPN. I've created a simple image which exposes port 5144, and tested that it works properly:

sudo docker run -t -d -p 5144:5144 \
                --name le-bridge \
                --cap-add=NET_ADMIN \
                --device=/dev/net/tun \
                bridge
sudo docker exec -it le-bridge /bin/bash
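
The published mapping itself can also be double-checked with docker port (le-bridge being the container started above); the expected output is shown as a comment:

sudo docker port le-bridge
# 5144/tcp -> 0.0.0.0:5144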

I check that the port is exposed correctly like this:

[CONTAINER] root@6116787b1c1e:~# nc -lvvp 5144
[HOST] user$ nc -vv 127.0.0.1 5144

Then, whatever I type is correctly echoed in the container's terminal. However, as soon as I start the openvpn daemon, this doesn't work anymore:

[CONTAINER] root@6116787b1c1e:~# openvpn logger.ovpn &
[1] 33
Sun Apr  5 22:52:54 2020 OpenVPN 2.4.4 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on May 14 2019
Sun Apr  5 22:52:54 2020 library versions: OpenSSL 1.1.1  11 Sep 2018, LZO 2.08
Sun Apr  5 22:52:54 2020 TCP/UDP: Preserving recently used remote address: [AF_INET]
Sun Apr  5 22:52:54 2020 UDPv4 link local (bound): [AF_INET][undef]:1194
Sun Apr  5 22:52:54 2020 UDPv4 link remote: 
Sun Apr  5 22:52:54 2020 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
Sun Apr  5 22:52:55 2020 [] Peer Connection Initiated with [AF_INET]
Sun Apr  5 22:53:21 2020 TUN/TAP device tun0 opened
Sun Apr  5 22:53:21 2020 do_ifconfig, tt->did_ifconfig_ipv6_setup=0
Sun Apr  5 22:53:21 2020 /sbin/ip link set dev tun0 up mtu 1500
Sun Apr  5 22:53:21 2020 /sbin/ip addr add dev tun0 10.X.0.2/24 broadcast 10.X.0.255
Sun Apr  5 22:53:21 2020 Initialization Sequence Completed

root@6116787b1c1e:~#
root@6116787b1c1e:~# nc -lvvp 5144
listening on [any] 5144 ...

From here, using the exact same netcat command, I cannot reach the exposed port anymore from the host. What am I missing?

EDIT: It's perhaps worth mentioning that after the VPN is started, the connection still succeeds from the host; it just never reaches the netcat process inside the container.
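
One way to narrow down where the traffic stops, assuming tcpdump and iproute2 are installed in the container, is to watch the container's eth0 while retrying the connection from the host, and to ask the kernel which interface it would use for replies toward the host's network (the address below is only a placeholder for a host-LAN address, see the comments):

# inside the container, while retrying the connection from the host:
tcpdump -ni eth0 tcp port 5144

# which interface would replies toward the host's network take?
ip route get 192.168.0.10   # substitute an address on the host's LAN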

Disafforest answered 5/4, 2020 at 23:4 Comment(2)
I wonder if the VPN is assigning your machine an IP out of reach of the network in which your container is running. 🤔 – Granddaughter
Yes, it does. The host network is 192.168.0.0/24, the Docker bridge's IP is 172.17.0.1/16 and the VPN gives an IP from the range 10.14.0.0/24. – Disafforest

I'm not exactly sure why, but it turns out that the routes inside the container need to be fixed. In my case, running the following command inside the container solves the issue:

ip route add 192.168.0.0/24 via 172.17.42.1 dev eth0

...where 172.17.42.1 is the IP of the docker0 interface on my host and 192.168.0.0/24 is the LAN my host sits on (see the comments above). Hopefully this is helpful to someone one day.
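
If you'd rather not hard-code the gateway, the container's default route already points at the bridge address, so a small sketch like the following (run inside the container, before starting openvpn, since the VPN may rewrite the routes) should achieve the same thing; 192.168.0.0/24 is again the host's LAN:

# the container's default gateway is the Docker bridge on the host
GW=$(ip -4 route | awk '/^default via/ {print $3}')
# keep the host LAN reachable via eth0 even after the VPN adds its routes
ip route add 192.168.0.0/24 via "$GW" dev eth0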

Disafforest answered 6/4, 2020 at 10:12 Comment(0)

I also faced this issue; it is caused by the Docker subnet becoming unreachable. You can read more about it here: What is the "Docker Subnet" used for?

So to fix it, you'd want to run the following before connecting to your VPN:

# default gateway inside the container (the Docker network's gateway on the host)
DEFAULT_GW=$(ip -4 route | awk '/default via/ {print $3}')
# first nameserver handed to the container
NS_IP=$(grep 'nameserver' /etc/resolv.conf | awk 'NR==1 {print $2}')
# derive a /24 from the nameserver's address
SUBNET=$(echo "$NS_IP" | awk -F '.' '{print $1"."$2"."$3".0/24"}')
# route that subnet via the gateway so it stays on eth0 after the VPN comes up
ip route add "$SUBNET" via "$DEFAULT_GW" dev eth0

As mentioned in the StackOverflow post that I referenced:

  • If you docker network create a network or you're using Docker Compose, a new subnet will be allocated.
  • Docker provides an internal DNS system, so you can use container names as host names.

So the script above reads the container's default gateway (which can differ depending on which Docker network the container is attached to), takes the first nameserver the container was given, derives a /24 subnet from that nameserver's address, and adds a route for that subnet via the default gateway, so it remains reachable over eth0 once the VPN is up.
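
As a usage sketch, the snippet can be wrapped into a hypothetical entrypoint script that restores the route and only then brings the VPN up (entrypoint.sh is a made-up name; logger.ovpn is the config file from the question):

#!/bin/bash
# entrypoint.sh -- sketch: fix the route first, then start the VPN
set -e

# same commands as above: derive the subnet and route it via the gateway
DEFAULT_GW=$(ip -4 route | awk '/default via/ {print $3}')
NS_IP=$(grep 'nameserver' /etc/resolv.conf | awk 'NR==1 {print $2}')
SUBNET=$(echo "$NS_IP" | awk -F '.' '{print $1"."$2"."$3".0/24"}')
ip route add "$SUBNET" via "$DEFAULT_GW" dev eth0

# with the route pinned to eth0, the forwarded port keeps working
# after OpenVPN installs its own routes
exec openvpn logger.ovpn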

Unsocial answered 16/3 at 8:6 Comment(0)
