Run X application in a Docker container reliably on a server connected via SSH without "--net host"
Without a Docker container, it is straightforward to run an X11 program on a remote server using SSH X11 forwarding (ssh -X). I have tried to get the same thing working when the application runs inside a Docker container on the server. When SSH-ing into the server with the -X option, an X11 tunnel is set up and the environment variable $DISPLAY is automatically set, typically to "localhost:10.0" or similar. If I simply try to run an X application inside a Docker container, I get this error:

Error: GDK_BACKEND does not match available displays

My first idea was to actually pass the $DISPLAY into the container with the "-e" option like this:

docker run -ti -e DISPLAY=$DISPLAY name_of_docker_image

This helps, but it does not solve the issue. The error message changes to:

Unable to init server: Broadway display type not supported: localhost:10.0
Error: cannot open display: localhost:10.0

After searching the web, I figured out that I could do some xauth magic to fix the authentication. I added the following:

XSOCK=/tmp/.X11-unix
XAUTH=/tmp/.docker.xauth
xauth nlist $DISPLAY | sed -e 's/^..../ffff/' | xauth -f $XAUTH nmerge -
chmod 777 $XAUTH
docker run -ti -e DISPLAY=$DISPLAY -v $XSOCK:$XSOCK -v $XAUTH:$XAUTH \
  -e XAUTHORITY=$XAUTH name_of_docker_image

However, this only works if I also add "--net host" to the docker command:

docker run -ti -e DISPLAY=$DISPLAY -v $XSOCK:$XSOCK -v $XAUTH:$XAUTH \
  -e XAUTHORITY=$XAUTH --net host name_of_docker_image

This is not desirable since it makes the whole host network visible to the container.

What is still missing in order to get this fully working on a remote server in a Docker container without "--net host"?

Jarv answered 12/1, 2018 at 22:43 Comment(0)

I figured it out. When you connect to a computer with SSH and use X11 forwarding, /tmp/.X11-unix is not used for the X communication, so the part related to $XSOCK is unnecessary.

Instead, an X application uses the hostname in $DISPLAY, typically "localhost", and connects via TCP. This connection is then tunneled back to the SSH client. With "--net host", "localhost" is the same for the Docker container as for the Docker host, and therefore it works fine.

Without "--net host", Docker uses the default bridge network mode. This means that "localhost" refers to something different inside the container than on the host, and X applications inside the container cannot reach the X server by referring to "localhost". To solve this, "localhost" has to be replaced with the host's IP address on the Docker bridge. This is usually "172.17.0.1" or similar. Check "ip addr" for the "docker0" interface.

This can be done with a sed replacement:

DISPLAY=`echo $DISPLAY | sed 's/^[^:]*\(.*\)/172.17.0.1\1/'`
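For example, assuming ssh -X assigned the display "localhost:10.0" (an example value), the substitution produces an address reachable from inside the container:

```shell
DISPLAY=localhost:10.0   # example value as typically set by ssh -X
DISPLAY=$(echo "$DISPLAY" | sed 's/^[^:]*\(.*\)/172.17.0.1\1/')
echo "$DISPLAY"          # -> 172.17.0.1:10.0
```

The same substitution also works when $DISPLAY has no hostname at all (e.g. ":10.0"), since the leading "[^:]*" then matches an empty string.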

Additionally, the SSH server is commonly not configured to accept remote connections to this X11 tunnel. This must be changed by editing /etc/ssh/sshd_config (at least on Debian) and setting:

X11UseLocalhost no

and then restarting the SSH server and logging in to the server again with "ssh -X".

This is almost it, but there is one complication left. If any firewall is running on the Docker host, the TCP port associated with the X11 tunnel must be opened. The port number is 6000 plus the number between the ":" and the "." in $DISPLAY.

To get the TCP port number, you can run:

X11PORT=`echo $DISPLAY | sed 's/^[^:]*:\([^\.]\+\).*/\1/'`
TCPPORT=`expr 6000 + $X11PORT`

Then (if using ufw as firewall), open up this port for the Docker containers in the 172.17.0.0 subnet:

ufw allow from 172.17.0.0/16 to any port $TCPPORT proto tcp

All the commands together can be put into a script:

XSOCK=/tmp/.X11-unix
XAUTH=/tmp/.docker.xauth
xauth nlist $DISPLAY | sed -e 's/^..../ffff/' | sudo xauth -f $XAUTH nmerge -
sudo chmod 777 $XAUTH
X11PORT=`echo $DISPLAY | sed 's/^[^:]*:\([^\.]\+\).*/\1/'`
TCPPORT=`expr 6000 + $X11PORT`
sudo ufw allow from 172.17.0.0/16 to any port $TCPPORT proto tcp 
DISPLAY=`echo $DISPLAY | sed 's/^[^:]*\(.*\)/172.17.0.1\1/'`
sudo docker run -ti --rm -e DISPLAY=$DISPLAY -v $XAUTH:$XAUTH \
   -e XAUTHORITY=$XAUTH name_of_docker_image

This assumes you are not root and therefore need to use sudo.

Instead of sudo chmod 777 $XAUTH, you could run:

sudo chown my_docker_container_user $XAUTH
sudo chmod 600 $XAUTH

to prevent other users on the server who know about the /tmp/.docker.xauth file from also being able to access the X server.

I hope this makes it work properly for most scenarios.

Jarv answered 12/1, 2018 at 23:12 Comment(2)
Instead of the cryptic line with "xauth nlist", one can also use a more understandable command: xauth -f /tmp/.docker.xauth add 172.17.0.1:$X11PORT . $MAGIC_COOKIE where $MAGIC_COOKIE can be found with: xauth list $DISPLAY | awk '{print $3}' – Jarv
> When using "--net host" for the Docker, "localhost" will be the same for the Docker container as for the Docker host, and therefore it will work fine. -- It does not for me, I use --net host and I still get the following: X11: Failed to open display localhost:11.0 – Monobasic

If you set X11UseLocalhost no, you're allowing even external traffic to reach the X11 socket. That is, traffic directed to an external IP of the machine can reach the sshd X11 forwarding. There are still two security mechanisms which might apply (firewall, X11 auth). Still, I'd prefer leaving a system-global setting alone when fiddling with a user- or even application-specific issue like this one.


Here's an alternative way to get X11 graphics out of a container, and via X11 forwarding from the server to the client, without changing X11UseLocalhost in the sshd config.

                                           + docker container net ns +
                                           |                         |
           172.17.0.1                      |   172.17.0.2            |
        +- docker0 --------- veth123@if5 --|-- eth0@if6              |
        |  (bridge)          (veth pair)   |   (veth pair)           |
        |                                  |                         |
        |  127.0.0.1                       +-------------------------+
routing +- lo
        |  (loopback)
        |
        |  192.168.1.2
        +- ens33
           (physical host interface)

With the default X11UseLocalhost yes, sshd listens only on 127.0.0.1 on the root network namespace. We need to get the X11 traffic from inside the docker network namespace to the loopback interface in the root net ns. The veth pair is connected to the docker0 bridge and both ends can therefore talk to 172.17.0.1 without any routing. The three interfaces in the root net ns (docker0, lo and ens33) can communicate via routing.

We want to achieve the following:

                                           + docker container net ns +
                                           |                         |
           172.17.0.1                      |   172.17.0.2            |
        +- docker0 --------< veth123@if5 --|-< eth0@if6 -----< xeyes |
        |  (bridge)          (veth pair)   |   (veth pair)           |
        v                                  |                         |
        |  127.0.0.1                       +-------------------------+
routing +- lo >--ssh x11 fwd-+
           (loopback)        |
                             v
           192.168.1.2       |
<-- ssh -- ens33 ------<-----+
           (physical host interface)

We can let the X11 application talk directly to 172.17.0.1 to "escape" the docker net ns. This is achieved by setting the DISPLAY appropriately: export DISPLAY=172.17.0.1:10:

                                           + docker container net ns +
                                           |                         |
           172.17.0.1                      |   172.17.0.2            |
           docker0 --------- veth123@if5 --|-- eth0@if6 -----< xeyes |
           (bridge)          (veth pair)   |   (veth pair)           |
                                           |                         |
           127.0.0.1                       +-------------------------+
           lo
           (loopback)
         
           192.168.1.2
           ens33
           (physical host interface)

Now, we add an iptables rule on the host to route from 172.17.0.1 to 127.0.0.1 in the root net ns:

iptables \
  --table nat \
  --insert PREROUTING \
  --proto tcp \
  --destination 172.17.0.1 \
  --dport 6010 \
  --jump DNAT \
  --to-destination 127.0.0.1:6010

sysctl net.ipv4.conf.docker0.route_localnet=1

Note that we're using port 6010; that's the port on which sshd performs the X11 forwarding here: display number 10 added to the port "base" 6000. You can check which display number to use after you've established the SSH connection by inspecting the DISPLAY environment variable in a shell started by SSH.
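As a small sketch (the display value below is an example, not necessarily yours), the display-to-port mapping can be computed directly in the shell:

```shell
# Derive the X11 TCP port from the DISPLAY value assigned by `ssh -X`.
# The value below is an example; substitute your actual $DISPLAY.
DISPLAY=localhost:10.0
DISPLAY_NUM=${DISPLAY#*:}          # strip the host part   -> "10.0"
DISPLAY_NUM=${DISPLAY_NUM%%.*}     # strip the screen part -> "10"
X11_PORT=$((6000 + DISPLAY_NUM))
echo "$X11_PORT"                   # -> 6010
```

This uses only POSIX parameter expansion, so it works without sed.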

Maybe you can improve on the forwarding rule by only routing traffic from this container (veth end). Also, I'm not quite sure why the route_localnet is needed, to be honest. It appears that 127/8 is a strange source / destination for packets and therefore disabled for routing by default. You can probably also reroute traffic from the loopback interface inside the docker net ns to the veth pair, and from there to the loopback interface in the root net ns.

With the commands given above, we end up with:

                                           + docker container net ns +
                                           |                         |
           172.17.0.1                      |   172.17.0.2            |
        +- docker0 --------< veth123@if5 --|-< eth0@if6 -----< xeyes |
        |  (bridge)          (veth pair)   |   (veth pair)           |
        v                                  |                         |
        |  127.0.0.1                       +-------------------------+
routing +- lo
           (loopback)

           192.168.1.2
           ens33
           (physical host interface)

The remaining connection is established by SSHD when you establish a connection with X11 forwarding. Please note that you have to establish the connection before attempting to start an X11 application inside the container, since the application will immediately try to reach the X11 server.

There is one piece missing: authentication. We're now trying to access the X11 server as 172.17.0.1:10 from inside the container. The container, however, doesn't have any X11 authentication, or not a correct one if you're bind-mounting the home directory (outside the container the entry is usually something like <hostname>:10). Use Ruben's suggestion to add a new entry visible inside the docker container:

# inside container
xauth add 172.17.0.1:10 . <cookie>

where <cookie> is the cookie set up by the SSH X11 forwarding, e.g. via xauth list.
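A sketch of extracting the cookie value (the xauth list line below is a fabricated sample; on the real host you would pipe the output of xauth list $DISPLAY instead):

```shell
# Sample line in the format produced by `xauth list`; the cookie is made up.
XAUTH_LINE='myhost/unix:10  MIT-MAGIC-COOKIE-1  abcdef0123456789'
# The cookie is the third whitespace-separated field.
COOKIE=$(echo "$XAUTH_LINE" | awk '{print $3}')
echo "$COOKIE"   # -> abcdef0123456789
```

The extracted cookie can then be added inside the container, e.g. via docker exec <container> xauth add 172.17.0.1:10 . $COOKIE (container name is a placeholder).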

You might also have to allow traffic ingress to 172.17.0.1:6010 in your firewall.


You can also start an application from the host inside the docker container network namespace:

sudo nsenter --target=<pid of process in container> --net su - $USER <app>

Without the su, you'll be running as root. Of course, you can also use another container and share the network namespace:

sudo docker run --network=container:<other container name/id> ...

The X11 forwarding mechanism shown above applies to the entire network namespace (actually, to everything connected to the docker0 bridge). Therefore, it will work for any applications inside the container network namespace.

Haaf answered 9/10, 2020 at 17:0 Comment(2)
Did this solution work for anyone? I can't get this to work in Ubuntu 22.04. The DNAT with route_localnet solution does not work. – Michamichael
Hi @rustyx, I just verified that it works on Ubuntu 22.04 (used both inside and outside the container). What exactly are your symptoms? – Haaf

In my case, I sit at "remote" and connect to a "docker_container" on "docker_host":

remote --> docker_host --> docker_container

To make debugging scripts easier with VSCode, I installed SSHD into the "docker_container", listening on port 22, mapped to another port (say 1234) on the "docker_host", e.g. by starting the container with "-p 1234:22".

So I can connect directly with the running container via ssh (from "remote"):

ssh -Y -p 1234 appuser@docker_host.local

(where appuser is the username within the "docker_container". I am working on my local subnet, so I can reach my server via the .local name. For external IPs, make sure your router maps this port to this machine.)

This creates a connection directly from my "remote" to "docker_container" via ssh.

remote --> (ssh) --> docker_container

Inside the "docker_container", I installed sshd with sudo apt-get install openssh-server (you can add this to your Dockerfile to install at build time).

To allow X11 forwarding to work, edit the /etc/ssh/sshd_config file as such:

X11Forwarding yes
X11UseLocalhost no

Then restart sshd within the container. You should do this from a shell exec'd into the container from the "docker_host" (docker exec -ti docker_container bash), not while connected to the "docker_container" via ssh.

Restart sshd: sudo service ssh restart

When you connect via ssh to the "docker_container", check the $DISPLAY environment variable. It should say something like

appuser@3f75a98d67e6:~/data$ echo $DISPLAY
3f75a98d67e6:10.0

Test by executing your favorite X11 graphics program from within "docker_container" via ssh (like cv2.imshow())

Stardom answered 14/7, 2020 at 13:26 Comment(4)
How to implement the same when your GUI app and X11 server are running in the same container? Let's say, from my GUI app, if I type xeyes then I can see xeyes pop up in the X11 server connected via localhost port 6080 with noVNC as client. So my question is, how can I show the same outcome of xeyes while staying in my GUI app? My GUI app is Jupyter lab. – Idaline
I'm a little confused. Isn't Jupyter lab a web-based platform? Meaning that you're actually viewing the result of Jupyter lab in a browser on your local native system. The container may be running the Jupyter app, but your real observation of this (GUI) is on your native display (be it VNC, VM, or physical display). If this is the case, then the connection would have to go from the VNC viewing container/VM/local machine to your container which is forwarding the X11 display. The ssh -Y function will address the display to the platform from which it was called. – Stardom
Thanks for your reply. Yes, you are right, Jupyter lab is web-based, but the application which I want to run is not compatible with Jupyter frameworks. Hence I created an X server and made a link between Jupyter lab and the X server via noVNC. So whenever I launch an X-server app from my Jupyter lab, it automatically runs the app in the X server. But my question is, since all the libs and everything are present, instead of opening the outcome in the X server, how can I pop up the output on the Jupyter lab itself? – Idaline
@RexBarker, I've got the DISPLAY env variable set: sshuser@9a64d08b9764:/Volumes/Workspace/work/csroot/private/sensor$ echo $DISPLAY 9a64d08b9764:10.0 But when I try to execute my UI application, I get the error: QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-sshuser' qt.qpa.gl: QXcbConnection: Failed to initialize GLX The X11 connection broke: No error (code 0) XIO: fatal IO error 2 (No such file or directory) on X server "9a64d08b9764:10.0" after 352 requests (352 known processed) with 0 events remaining. – Repent

I use an automated approach which can be executed entirely from within the docker container.

All that is needed is to pass the DISPLAY variable to the container and to mount .Xauthority. Moreover, it only uses the port from the DISPLAY variable, so it also works in cases where DISPLAY=localhost:XY.Z.

Create a file, source-me.sh, with the following content:

# Find the container's address in /etc/hosts
CONTAINER_IP=$(grep $(hostname) /etc/hosts | awk '{ print $1 }')
# Assume the docker-host IP only differs in the last byte
SUBNET=$(echo $CONTAINER_IP | sed 's/\.[^\.]*$//')
DOCKER_HOST_IP=${SUBNET}.1

# Get the port from the DISPLAY variable
DISPLAY_PORT=$(echo $DISPLAY | sed 's/.*://'  | sed 's/\..*//')
# Create the correct display-name
export DISPLAY=$DOCKER_HOST_IP:$DISPLAY_PORT

# Find an existing xauth entry for the same port (DISPLAY_PORT),
# and copy everything except the display-name,
# filtering out entries containing /unix: which correspond to "same-machine" connections
ENTRY=$(xauth -n list | grep -v '/unix\:' | grep "\:${DISPLAY_PORT}" | head -n 1 | sed 's/^[^ ]* *//')
# Prepend our display-name
ENTRY="$DOCKER_HOST_IP:$DISPLAY_PORT $ENTRY"
# Add the new xauth entry. 
# Because our .Xauthority file is mounted, a new file 
# named ${HOME}/.Xauthority-n will be created, and a warning 
# is printed on std-err 
xauth add $ENTRY 2> /dev/null
# replace the content of ${HOME}/.Xauthority with that of ${HOME}/.Xauthority-n
# without creating a new i-node.
cat ${HOME}/.Xauthority-n > ${HOME}/.Xauthority

Create the following Dockerfile for testing:

FROM ubuntu
RUN apt-get update
RUN apt-get install -y xauth
COPY source-me.sh /root/
RUN cat /root/source-me.sh >> /root/.bashrc
 
# xeyes for testing:
RUN apt-get install -y x11-apps

Build and run:

docker build -t test-x .
docker run -ti \
    -v $HOME/.Xauthority:/root/.Xauthority:rw \
    -e DISPLAY=$DISPLAY \
    test-x \
    bash

Inside the container, run:

xeyes

To run non-interactively, you must ensure source-me.sh is sourced:

docker run \
    -v $HOME/.Xauthority:/root/.Xauthority:rw \
    -e DISPLAY=$DISPLAY \
    test-x \
    bash -c "source source-me.sh ; xeyes"
Pomace answered 14/4, 2022 at 16:2 Comment(3)
For me $HOME/.Xauthority is a directory... docker run -ti -v $HOME/.Xauthority:/root/.Xauthority:rw and the rewrite by cat (# without creating a new i-node. cat ${HOME}/.Xauthority-n > ${HOME}/.Xauthority) has no effect, only an error :-( – Eraeradiate
Check if $HOME/.Xauthority exists on your host machine, and that it is a file and not a directory. If it does not exist, docker will assume you are mounting a directory, and will create it for you. – Pomace
Getting this error: cat: /root/.Xauthority-n: No such file or directory and from bash I get this when trying to run xeyes: Error: Can't open display: 10.4.0.32.1:10 – Boxfish

I have been struggling with this issue as well when trying to set up a VSCode remote development container (built with docker compose, but that should not make a big difference) with X11 support.

My requirements were as follows, since I intended not to expose my development machine too much to the open internet:

  • No network_mode: host or --net=host respectively
  • No X11UseLocalhost no
  • No xhost

Please note that using the host network potentially (depending on your firewall configuration) exposes anything running on some port in your devcontainer to anyone able to reach your machine.

I ended up spinning up a daemonized socat process on the remote host at devcontainer initialization, tunneling the TCP X11 traffic from the port 6XXX used for X11 over SSH to an X11 UNIX socket that I share with my container.

Solution is best described in a commit: https://github.com/flxtrtwn/devious/pull/16/commits/c6a233eb7312ca606f9d53a102bea8c1f8282578
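A minimal sketch of that socat bridge (the socket directory and display number below are assumptions, and socat must be installed on the host):

```shell
# Display number assigned by `ssh -X` and a host directory that will be
# bind-mounted into the container at /tmp/.X11-unix (both are examples).
DISPLAY_NUM=10
XSOCK_DIR=/tmp/.docker-x11
# Build the bridge command: listen on the UNIX socket X0 and forward
# each incoming connection to sshd's X11 TCP listener on localhost.
CMD="socat UNIX-LISTEN:$XSOCK_DIR/X0,fork TCP:localhost:$((6000 + DISPLAY_NUM))"
echo "$CMD"
```

Run the printed command in the background on the docker host, start the container with -v /tmp/.docker-x11:/tmp/.X11-unix and DISPLAY=:0, and X clients inside the container can reach the SSH tunnel via the UNIX socket; a matching xauth cookie is still required inside the container.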

Benzaldehyde answered 31/3 at 12:34 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.