Share a FUSE FS mounted inside a docker container through volumes

I created a Docker container in which I mount a FUSE S3QL filesystem, and this is working.

Now I'd like to share this mount point with the host or with other containers, but it does not work.

In short, I run the container this way:

docker run --rm -d -v /s3ql:/s3ql \
           --cap-add SYS_ADMIN --device /dev/fuse \
           --name myContainer \
           myS3qlIimage mount.s3ql swiftks://url:container /s3ql

docker exec myContainer ls /s3ql shows the actual S3QL content, but /s3ql on the host is empty.

More details on what I have done so far are in my repo: https://gitlab.com/Salokyn/docker-s3ql

Do you think it is possible to make this work?

Walke answered 5/12, 2018 at 11:45 Comment(0)

Normally, when you start a Docker container, it is run in a private mount namespace: this means that (a) filesystems mounted inside the container won't be visible on the host, and (b) filesystems mounted on the host won't be visible inside the container.
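
One way to see this separation directly is to compare the container's mount namespace with your own. This is only a sketch for illustration: it assumes the container from the question is already running and that you have root on the host.

# PID of the container's main process
PID=$(docker inspect -f '{{.State.Pid}}' myContainer)

# The two namespace IDs differ, which is why a mount made in one
# namespace is not automatically visible in the other.
sudo readlink /proc/$PID/ns/mnt
readlink /proc/self/ns/mnt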

You can modify this behavior using the bind-propagation option of the --mount flag. There are six values available for it (a short host-only demonstration of how propagation works follows the list):

  • shared: Sub-mounts of the original mount are exposed to replica mounts, and sub-mounts of replica mounts are also propagated to the original mount.
  • slave: similar to a shared mount, but only in one direction. If the original mount exposes a sub-mount, the replica mount can see it. However, if the replica mount exposes a sub-mount, the original mount cannot see it.
  • private: The mount is private. Sub-mounts within it are not exposed to replica mounts, and sub-mounts of replica mounts are not exposed to the original mount.
  • rshared: The same as shared, but the propagation also extends to and from mount points nested within any of the original or replica mount points.
  • rslave: The same as slave, but the propagation also extends to and from mount points nested within any of the original or replica mount points.
  • rprivate: The default. The same as private, meaning that no mount points anywhere within the original or replica mount points propagate in either direction.
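
To get a feel for what these modes do, here is a small host-only demonstration of shared propagation, independent of Docker and of the S3QL setup in the question. Run it as root and treat the /tmp paths as throwaway placeholders:

mkdir -p /tmp/orig/sub /tmp/replica

# Turn /tmp/orig into a mount point and mark it shared
mount --bind /tmp/orig /tmp/orig
mount --make-shared /tmp/orig

# Create a replica of it elsewhere
mount --bind /tmp/orig /tmp/replica

# A new sub-mount under the original ...
mount -t tmpfs tmpfs /tmp/orig/sub

# ... also shows up under the replica. With --make-private instead of
# --make-shared above, this findmnt would print nothing.
findmnt /tmp/replica/sub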

Based on your question, you probably want the rshared option, which would permit mounts created inside the container to be visible on the host. This means your docker command line would look something like:

docker run --rm \
  --mount type=bind,source=/s3ql,target=/s3ql,bind-propagation=rshared \
  --cap-add SYS_ADMIN --device /dev/fuse --name myContainer \
  myS3qlIimage mount.s3ql swiftks://url:container /s3ql
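
One host-side detail to check before relying on this (distribution defaults vary, so treat the commands below as a sketch): Docker typically rejects shared or rshared propagation if the host mount that /s3ql lives on is not itself shared, which it usually is by default on systemd-based systems. You can verify, and fix if necessary, like this:

# Show which mount /s3ql lives on and its current propagation mode
findmnt -o TARGET,PROPAGATION --target /s3ql

# If it reports "private", mark the parent mount shared
# (here assuming /s3ql simply lives on /)
sudo mount --make-rshared /

# Once the container has mounted the filesystem, it should also be
# visible from the host
ls /s3ql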

But there may be a second problem here: if your fuse mount requires a persistent process in order to function, this won't work, because your container is going to exit as soon as the mount command completes, taking any processes with it. In this case, you would need to arrange for the container to hang around for as long as you need the mount active:

docker run -d \
  --mount type=bind,source=/s3ql,target=/s3ql,bind-propagation=rshared \
  --cap-add SYS_ADMIN --device /dev/fuse --name myContainer \
  myS3qlIimage sh -c 'mount.s3ql swiftks://url:container /s3ql; sleep inf'

(This assumes that you have a version of the sleep command that supports the inf argument to sleep forever).
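
If you would rather have the container unmount cleanly when it is stopped, a trap/wait construct works as well (the asker mentions using this approach in the comments). The following is only a sketch: it assumes umount.s3ql from the S3QL package is available in the image, and the script name and location are up to you.

#!/bin/sh
# Mount, then block until the container receives SIGTERM/SIGINT
# (e.g. from "docker stop"), and unmount before exiting.
mount.s3ql swiftks://url:container /s3ql

trap 'umount.s3ql /s3ql; exit 0' TERM INT

sleep inf &
wait $!

Baked into the image and used as the container command in place of the sh -c one-liner above, this keeps the mount alive until you stop the container and tears it down on the way out.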

Siddon answered 5/12, 2018 at 12:14 Comment(8)
Thank you for your great explanation. – Walke
It works with rshared bind-propagation. I already managed the 2nd point you mentioned with wait and trap commands. – Walke
What about the other way? When I try to mount a FUSE mount (on the host) and use it in a Docker container, I get permission errors. – Stillman
@Stillman you probably want to open a new question with a complete description of what you're trying to do. – Siddon
How do you manage the unmount bit? – Sodomite
@Sodomite I expect that you (a) exit the container and then (b) unmount the filesystem normally (e.g. with fusermount). – Siddon
How do you do this with compose? – Jarlathus
THANK YOU, rshared was exactly what I needed. If the mount disconnects, it is able to reconnect, which doesn't work with any other propagation mode. – Leatherback
