Can't expose a FUSE-based volume to a Docker container

I'm trying to give my Docker container a volume backed by an encrypted filesystem, for internal use. The idea is that the container writes to the volume as usual, but the host encrypts the data before it ever reaches the underlying filesystem.

I'm trying to use EncFS - it works well on the host, e.g.:

encfs /encrypted /visible

I can write files to /visible, and those get encrypted. However, when trying to run a container with /visible as the volume, e.g.:

docker run -i -t --privileged -v /visible:/myvolume imagename bash

I do get a volume in the container, but it is backed by the original /encrypted folder rather than going through EncFS. If I unmount the EncFS from /visible, I can see the files written by the container. Needless to say, /encrypted is empty.

Is there a way to have Docker mount the volume through EncFS, rather than writing directly to the folder? By contrast, Docker works fine when I use an NFS mount as a volume: it writes to the network device, not to the local folder on which I mounted the device.

Thanks

Salaam answered 4/3, 2015 at 21:31

I am unable to duplicate your problem locally. If I try to expose an encfs filesystem as a Docker volume, I get an error trying to start the container:

FATA[0003] Error response from daemon: Cannot start container <cid>:
setup mount namespace stat /visible: permission denied 

So it's possible you have something different going on. In any case, this is what solved my problem:

By default, FUSE only permits the user who mounted a filesystem to have access to that filesystem. When you are running a Docker container, that container is initially running as root.

You can use the allow_root or allow_other mount options when you mount the FUSE filesystem. For example:

$ encfs -o allow_root /encrypted /other

Here, allow_root permits the root user to access the mountpoint, while allow_other permits anyone to access the mountpoint (provided that the Unix permissions on the directory allow them access).

If I mount my encfs filesystem using allow_root, I can then expose that filesystem as a Docker volume, and the contents of that filesystem are correctly visible from inside the container.
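
Putting the pieces together, here is a minimal end-to-end sketch using the paths from the question; imagename is a placeholder, and note that allow_root may require user_allow_other to be enabled in /etc/fuse.conf when mounting as a non-root user:

$ encfs -o allow_root /encrypted /visible   # let root (and thus the container) through the FUSE layer
$ docker run -i -t -v /visible:/myvolume imagename bash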

Dunsinane answered 8/3, 2015 at 20:41
Thank you for your answer, and the time taken to evaluate the issue. After much time spent, I found the underlying issue to be quite a bit more mundane: it seems that every time I add a new mount to the system, I need to restart the Docker service. It doesn't sound very logical, but that solved it. Unless I do so, Docker always uses whatever the mount or folder was when the daemon started; if there was no mount there at that point, it will indeed write to the underlying local folder. (Salaam)
It's possible you are hitting blog.oddbit.com/2015/01/18/docker-vs-privatetmp (Dunsinane)
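
A minimal sketch of the sequence described in the comment above; it assumes a systemd host (on older init systems the restart would be service docker restart instead):

$ encfs /encrypted /visible     # create the FUSE mount first
$ systemctl restart docker      # restart the daemon so it sees the new mount
$ docker run -i -t -v /visible:/myvolume imagename bash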

This is definitely because you started the docker daemon before the host mounted the mountpoint. In that case, the inode for the directory name still points at the host's local disk:

ls -i /mounts/
1048579 s3-data-mnt

Then, if you mount using a FUSE daemon like s3fs:

/usr/local/bin/s3fs -o rw -o allow_other -o iam_role=ecsInstanceRole /mounts/s3-data-mnt
ls -i /mounts/
1 s3-data-mnt

My original guess was that docker does some bootstrap caching of directory names to inodes, but the more likely explanation (see the blog post linked in the comments above) is that on many distributions the Docker daemon runs in its own private mount namespace, so mounts created on the host after the daemon starts never propagate into it.
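
One way to test this hypothesis is to compare the host's mount table with the one the daemon sees. This is a sketch that assumes a single daemon process named dockerd (older releases shipped a single docker binary):

grep s3-data-mnt /proc/mounts                                   # visible on the host
nsenter -t "$(pidof dockerd)" -m grep s3-data-mnt /proc/mounts  # empty if the namespaces have diverged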

Your comment is correct. If you simply restart docker after the mounting has finished, your volume will be correctly shared from the host to your containers. (Or you can simply delay starting docker until after all your mounts have finished, as sketched below.)
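
One way to make that ordering permanent is a systemd drop-in, sketched here under the assumption that the FUSE mount is performed by a hypothetical one-shot unit named s3fs-mount.service:

# /etc/systemd/system/docker.service.d/wait-for-mounts.conf
[Unit]
# Do not start docker until the unit that creates the FUSE mount has run.
After=s3fs-mount.service
Requires=s3fs-mount.service

After adding the drop-in, run systemctl daemon-reload and then systemctl restart docker.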

What is interesting (but makes complete sense to me now) is that upon exiting the container and unmounting the mountpoint on the host, all of my writes from within the container to the shared volume magically appeared; they had been stored at the inode on the host machine's local disk:

[root@host s3-data-mnt]# echo foo > bar
[root@host s3-data-mnt]# ls -als /mounts/s3-data-mnt
total 6
1 drwxrwxrwx  1 root root    0 Jan  1  1970 .
4 dr-xr-xr-x 28 root root 4096 Sep 16 17:06 ..
1 -rw-r--r--  1 root root    4 Sep 16 17:11 bar
[root@host s3-data-mnt]# docker run -ti -v /mounts/s3-data-mnt:/s3-data busybox /bin/sh
root@5592454f9f4d:/s3-data# ls -als
total 8
4 drwxr-xr-x  3 root root 4096 Sep 16 16:05 .
4 drwxr-xr-x 12 root root 4096 Sep 16 16:45 ..
root@5592454f9f4d:/s3-data# echo baz > beef
root@5592454f9f4d:/s3-data# ls -als
total 9
4 drwxr-xr-x  3 root root 4096 Sep 16 16:05 .
4 drwxr-xr-x 12 root root 4096 Sep 16 16:45 ..
1 -rw-r--r--  1 root root    4 Sep 16 17:11 beef
root@5592454f9f4d:/s3-data# exit
exit
[root@host s3-data-mnt]# ls -als /mounts/s3-data-mnt
total 6
1 drwxrwxrwx  1 root root    0 Jan  1  1970 .
4 dr-xr-xr-x 28 root root 4096 Sep 16 17:06 ..
1 -rw-r--r--  1 root root    4 Sep 16 17:11 bar
[root@host /]# umount -l /mounts/s3-data-mnt
[root@host /]# ls -als /mounts/s3-data-mnt/
total 8
4 drwxr-xr-x  2 root root 4096 Sep 16 17:28 .
4 dr-xr-xr-x 28 root root 4096 Sep 16 17:06 ..
1 -rw-r--r--  1 root root    4 Sep 16 17:11 beef
Doralia answered 16/9, 2015 at 17:53

You might be able to work around this by wrapping the mount call in nsenter, so that it runs in the same Linux mount namespace as the docker daemon, e.g.:

nsenter -t "$PID_OF_DOCKER_DAEMON" -m encfs ...
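
A sketch of how the full invocation might look; the process name is an assumption (the daemon binary was docker in 2015-era releases and dockerd later), and the encfs paths are taken from the question:

# Find the daemon's PID (assumes a single daemon process).
PID_OF_DOCKER_DAEMON="$(pidof dockerd || pidof docker)"
# Enter its mount namespace (-m) and create the mount there, so the
# daemon and its containers can see the FUSE filesystem.
nsenter -t "$PID_OF_DOCKER_DAEMON" -m encfs -o allow_root /encrypted /visible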

The question is whether this approach would survive a restart of the daemon itself. ;-)

Zug answered 8/2, 2016 at 6:46
