If you set X11UseLocalhost = no, you're allowing even external traffic to reach the X11 socket: traffic directed to an external IP of the machine can reach the sshd X11 forwarding. There are still two security mechanisms which might apply (firewall, X11 auth). Still, I'd prefer leaving a system-global setting alone when you're fiddling with a user- or even application-specific issue like in this case.
Here's an alternative way to get X11 graphics out of a container and, via X11 forwarding, from the server to the client, without changing X11UseLocalhost in the sshd config.
                                           + docker container net ns +
                                           |                         |
           172.17.0.1                      |   172.17.0.2            |
        +- docker0 --------- veth123@if5 --|-- eth0@if6              |
        |  (bridge)          (veth pair)   |   (veth pair)           |
        |                                  |                         |
        |  127.0.0.1                       +-------------------------+
routing +- lo
        |  (loopback)
        |
        |  192.168.1.2
        +- ens33
           (physical host interface)
With the default X11UseLocalhost yes, sshd listens only on 127.0.0.1 in the root network namespace. We need to get the X11 traffic from inside the docker network namespace to the loopback interface in the root net ns. The veth pair is connected to the docker0 bridge, and both ends can therefore talk to 172.17.0.1 without any routing. The three interfaces in the root net ns (docker0, lo and ens33) can communicate with each other via routing.
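If you want to see these pieces on your own host, the following quick check is a sketch; the interface names, addresses and the display/port are examples and will differ on your machine:
ip -brief address    # docker0 172.17.0.1/16, lo 127.0.0.1/8, ens33 192.168.1.2/24 (example values)
ss -tln | grep 6010  # sshd's X11 forwarding listener on 127.0.0.1:6010,
                     # present once an X11-forwarded SSH session is open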
We want to achieve the following:
                                           + docker container net ns +
                                           |                         |
           172.17.0.1                      |   172.17.0.2            |
        +- docker0 --------< veth123@if5 --|-< eth0@if6 -----< xeyes |
        |  (bridge)          (veth pair)   |   (veth pair)           |
        v                                  |                         |
        |  127.0.0.1                       +-------------------------+
routing +- lo >--ssh x11 fwd-+
           (loopback)        |
                             v
           192.168.1.2       |
<-- ssh -- ens33 ------<-----+
           (physical host interface)
We can let the X11 application talk directly to 172.17.0.1 to "escape" the docker net ns. This is achieved by setting DISPLAY appropriately, export DISPLAY=172.17.0.1:10, which gives us:
                                           + docker container net ns +
                                           |                         |
           172.17.0.1                      |   172.17.0.2            |
           docker0 --------- veth123@if5 --|-- eth0@if6 -----< xeyes |
           (bridge)          (veth pair)   |   (veth pair)           |
                                           |                         |
           127.0.0.1                       +-------------------------+
           lo
           (loopback)
           192.168.1.2
           ens33
           (physical host interface)
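Instead of exporting the variable in a shell inside the running container, you can also set it when starting the container; a minimal sketch, where the image name my-image is a placeholder:
# start a container with DISPLAY already pointing at the docker0 bridge IP
sudo docker run --rm -it --env DISPLAY=172.17.0.1:10 my-image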
Now, we add an iptables rule on the host that redirects traffic destined for 172.17.0.1 to 127.0.0.1 in the root net ns:
iptables \
    --table nat \
    --insert PREROUTING \
    --proto tcp \
    --destination 172.17.0.1 \
    --dport 6010 \
    --jump DNAT \
    --to-destination 127.0.0.1:6010
sysctl net.ipv4.conf.docker0.route_localnet=1
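To double-check that both the DNAT rule and the sysctl are active (just a sanity check; the exact listing format varies between iptables builds):
iptables -t nat -L PREROUTING -n --line-numbers
sysctl net.ipv4.conf.docker0.route_localnet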
Note that we're using port 6010; that's the default port on which sshd performs X11 forwarding: it uses display number 10, which is added to the port "base" 6000. You can check which display number to use after you've established the SSH connection by looking at the DISPLAY environment variable in a shell started by SSH.
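For example, in a shell that SSH opened on the server (display number 10 is just the typical first value sshd hands out; yours may differ):
echo $DISPLAY   # e.g. localhost:10.0 -> display number 10
# port used by the forwarding = 6000 + display number = 6010 here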
Maybe you can improve on the forwarding rule by only routing traffic from this container (its veth end). Also, I'm not quite sure why the route_localnet setting is needed, to be honest. It appears that 127/8 is considered an unusual source/destination for packets arriving on other interfaces and is therefore excluded from routing by default. You can probably also reroute traffic from the loopback interface inside the docker net ns to the veth pair, and from there to the loopback interface in the root net ns.
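A sketch of such a tightened rule, to be used in place of the broader PREROUTING rule above; it assumes the container still has the address 172.17.0.2 from the diagrams, which can change between container restarts, so check it with docker inspect first:
iptables \
    --table nat \
    --insert PREROUTING \
    --in-interface docker0 \
    --source 172.17.0.2 \
    --proto tcp \
    --destination 172.17.0.1 \
    --dport 6010 \
    --jump DNAT \
    --to-destination 127.0.0.1:6010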
With the commands given above, we end up with:
                                           + docker container net ns +
                                           |                         |
           172.17.0.1                      |   172.17.0.2            |
        +- docker0 --------< veth123@if5 --|-< eth0@if6 -----< xeyes |
        |  (bridge)          (veth pair)   |   (veth pair)           |
        v                                  |                         |
        |  127.0.0.1                       +-------------------------+
routing +- lo
           (loopback)
           192.168.1.2
           ens33
           (physical host interface)
The remaining connection is set up by sshd when you establish an SSH connection with X11 forwarding. Please note that you have to establish this connection before attempting to start an X11 application inside the container, since the application will immediately try to reach the X11 server.
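So the order of operations is roughly as follows (user, server and the display number 10 are placeholders/assumptions):
# 1. on the client: open the SSH session with X11 forwarding
ssh -X user@server
# 2. only then, inside the container: point the app at the bridge IP and start it
export DISPLAY=172.17.0.1:10
xeyes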
There is one piece missing: authentication. We're now trying to access the X11 server as 172.17.0.1:10 from inside the container. The container, however, doesn't have any X11 authentication cookie for that display, or not a correct one if you're bind-mounting the home directory (outside the container the entry is usually for something like <hostname>:10). Use Ruben's suggestion to add a new entry visible inside the docker container:
# inside container
xauth add 172.17.0.1:10 . <cookie>
where <cookie> is the cookie set up by the SSH X11 forwarding, e.g. as shown by xauth list.
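To find the cookie, you can list the entries in the SSH session on the host; the output line shown is a placeholder:
# on the host, inside the SSH session
xauth list
# -> <hostname>/unix:10  MIT-MAGIC-COOKIE-1  <cookie>
# pick the line for your display number; the last field is the <cookie> to use above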
You might also have to allow ingress traffic to 172.17.0.1:6010 in your firewall.
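With a plain iptables firewall, that could look like the sketch below; note that by the time the filter table sees the packet, the DNAT above has already rewritten the destination to 127.0.0.1:6010, so we match on the interface and port. If you use a frontend such as ufw or firewalld, add the rule there instead:
iptables \
    --insert INPUT \
    --in-interface docker0 \
    --proto tcp \
    --dport 6010 \
    --jump ACCEPT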
You can also start an application from the host inside the docker container network namespace:
sudo nsenter --target=<pid of process in container> --net su - $USER -c '<app>'
Without the su, you'll be running as root. Of course, you can also use another container and share the network namespace:
sudo docker run --network=container:<other container name/id> ...
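Putting the nsenter variant together, a hypothetical end-to-end example (the container name mycontainer and the app xeyes are placeholders):
# find the PID of the container's main process, then run the app as your own
# user inside that container's network namespace
PID=$(sudo docker inspect --format '{{.State.Pid}}' mycontainer)
sudo nsenter --target="$PID" --net \
    su - "$USER" -c 'DISPLAY=172.17.0.1:10 xeyes'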
The X11 forwarding mechanism shown above applies to the entire network namespace (actually, to everything connected to the docker0 bridge). It will therefore work for any application inside the container network namespace.