Is s3fs not able to mount inside docker container?

I want to mount s3fs inside a Docker container.

I built a Docker image with s3fs installed, and ran it like this:

host$ docker run -it --rm docker/s3fs bash
[ root@container:~ ]$ s3fs s3bucket /mnt/s3bucket -o allow_other -o allow_other,default_acl=public-read -ouse_cache=/tmp
fuse: failed to open /dev/fuse: Operation not permitted

It shows an "Operation not permitted" error.

So I searched around and tried again, this time adding --privileged=true:

host$ docker run -it --rm --privileged=true docker/s3fs bash
[ root@container:~ ]$ s3fs s3bucket /mnt/s3bucket -o allow_other -o allow_other,default_acl=public-read -ouse_cache=/tmp
[ root@container:~ ]$ ls /mnt/s3bucket
ls: cannot access /mnt/s3bucket: Transport endpoint is not connected
[ root@container:~ ]$ fusermount -u /mnt/s3bucket
[ root@container:~ ]$ s3fs s3bucket /mnt/s3bucket -o allow_other -o allow_other,default_acl=public-read -ouse_cache=/tmp
[ root@container:~ ]$ ls /mnt/s3bucket
ls: cannot access /mnt/s3bucket: Transport endpoint is not connected

This time mounting shows no error, but running the ls command fails with a "Transport endpoint is not connected" error.

How can I mount s3fs inside a Docker container? Is it impossible?

[UPDATED]

Added my Dockerfile configuration.

Dockerfile:

FROM dockerfile/ubuntu

RUN apt-get update
RUN apt-get install -y build-essential
RUN apt-get install -y libfuse-dev
RUN apt-get install -y fuse
RUN apt-get install -y libcurl4-openssl-dev
RUN apt-get install -y libxml2-dev
RUN apt-get install -y mime-support

RUN \
  cd /usr/src && \
  wget http://s3fs.googlecode.com/files/s3fs-1.74.tar.gz && \
  tar xvzf s3fs-1.74.tar.gz && \
  cd s3fs-1.74/ && \
  ./configure --prefix=/usr && \
  make && make install

ADD passwd/passwd-s3fs /etc/passwd-s3fs
ADD rules.d/99-fuse.rules /etc/udev/rules.d/99-fuse.rules
RUN chmod 640 /etc/passwd-s3fs

RUN mkdir /mnt/s3bucket

rules.d/99-fuse.rules:

KERNEL=="fuse", MODE="0777"
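For reference, the passwd-s3fs file that the Dockerfile ADDs must contain the credentials as ACCESS_KEY_ID:SECRET_ACCESS_KEY on a single line, and s3fs refuses to use the file if others can read it. A minimal sketch of generating it (the key values below are placeholders, not real credentials):

```shell
# Sketch: generate the credential file s3fs expects (placeholder keys, not real ones)
ACCESS_KEY_ID="AKIAEXAMPLEKEY"
SECRET_ACCESS_KEY="exampleSecret"
printf '%s:%s\n' "$ACCESS_KEY_ID" "$SECRET_ACCESS_KEY" > passwd-s3fs
# s3fs rejects credential files readable by others; 600 (or 640, as in the Dockerfile) works
chmod 600 passwd-s3fs
```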
Plerre answered 26/7, 2014 at 0:30 Comment(0)

I'm not sure what you did that did not work, but I was able to get this to work like this:

Dockerfile:

FROM ubuntu:12.04

RUN apt-get update -qq
RUN apt-get install -y build-essential libfuse-dev fuse-utils libcurl4-openssl-dev libxml2-dev mime-support automake libtool wget tar

RUN wget https://github.com/s3fs-fuse/s3fs-fuse/archive/v1.77.tar.gz -O /usr/src/v1.77.tar.gz
RUN tar xvz -C /usr/src -f /usr/src/v1.77.tar.gz
RUN cd /usr/src/s3fs-fuse-1.77 && ./autogen.sh && ./configure --prefix=/usr && make && make install

RUN mkdir /s3bucket

After building with:

docker build --rm -t ubuntu/s3fs:latest .

I ran the container with:

docker run -it -e AWSACCESSKEYID=obscured -e AWSSECRETACCESSKEY=obscured --privileged ubuntu/s3fs:latest bash

and then inside the container:

root@efa2689dca96:/# s3fs s3bucket /s3bucket
root@efa2689dca96:/# ls /s3bucket
testing.this.out  work.please  working
root@efa2689dca96:/#

which successfully listed the files in my s3bucket.

You do need to make sure the kernel on your host machine supports FUSE, but it would seem you have already done so.
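A quick way to check the host side, assuming a Linux host: once the fuse module is available, a "fuse" entry appears in /proc/filesystems. A small sketch:

```shell
# Sketch: check a filesystems table (such as /proc/filesystems) for FUSE support
has_fuse() {
  grep -qw fuse "$1"   # word-match so "fusectl" etc. don't count
}

if has_fuse /proc/filesystems; then
  echo "host kernel has FUSE support"
else
  echo "no FUSE entry listed; try: sudo modprobe fuse"
fi
```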

Note: Your S3 mountpoint will not show/work from inside other containers when using Docker's --volume or --volumes-from directives. For example:

docker run -t --detach --name testmount -v /s3bucket -e AWSACCESSKEYID=obscured -e AWSSECRETACCESSKEY=obscured --privileged --entrypoint /usr/bin/s3fs ubuntu/s3fs:latest -f s3bucket /s3bucket
docker run -it --volumes-from testmount --entrypoint /bin/ls ubuntu:12.04 -ahl /s3bucket
total 8.0K
drwxr-xr-x  2 root root 4.0K Aug 21 21:32 .
drwxr-xr-x 51 root root 4.0K Aug 21 21:33 ..

returns no files even though there are files in the bucket.
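One hedged workaround for sharing the mount (this relies on Docker's bind-mount propagation support, which postdates this answer, so treat it as a sketch): make a host directory a shared mount, let the s3fs container mount over it via a bind volume with rshared propagation, and have consumer containers bind-mount the same host path. /mnt/s3 below is an assumed host path:

```shell
# Make an assumed host path a shared mount so FUSE mounts propagate out of the container
sudo mkdir -p /mnt/s3
sudo mount --bind /mnt/s3 /mnt/s3
sudo mount --make-shared /mnt/s3

# Dedicated mounter container; the s3fs mount propagates back to /mnt/s3 on the host
docker run -d --name s3mounter --privileged \
  -v /mnt/s3:/s3bucket:rshared \
  -e AWSACCESSKEYID=obscured -e AWSSECRETACCESSKEY=obscured \
  ubuntu/s3fs:latest s3fs -f s3bucket /s3bucket

# Consumer containers bind-mount the host path instead of using --volumes-from
docker run --rm -v /mnt/s3:/data ubuntu:12.04 ls -ahl /data
```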

Halophyte answered 21/8, 2014 at 21:35 Comment(3)
Thank you! I tested your procedure, and it worked like a charm. But, as you say, it cannot be mounted from another container... Is there any way to use it from another container?Plerre
--privileged is what does it; unfortunately, this only works during the run phase and not the build phase.Undying
Is there a way to do it without --privileged mode?Livonia

Adding another solution.

Dockerfile:

FROM ubuntu:16.04

# Update and install packages
RUN DEBIAN_FRONTEND=noninteractive apt-get -y update --fix-missing && \
    apt-get install -y automake autotools-dev g++ git libcurl4-gnutls-dev wget libfuse-dev libssl-dev libxml2-dev make pkg-config

# Clone and run s3fs-fuse
RUN git clone https://github.com/s3fs-fuse/s3fs-fuse.git /tmp/s3fs-fuse && \
    cd /tmp/s3fs-fuse && ./autogen.sh && ./configure && make && make install && ldconfig && /usr/local/bin/s3fs --version

# Remove packages
RUN DEBIAN_FRONTEND=noninteractive apt-get purge -y wget automake autotools-dev g++ git make  && \
    apt-get -y autoremove --purge && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# Set user and group
ENV USER='appuser'
ENV GROUP='appuser'
ENV UID='1000'
ENV GID='1000'

RUN groupadd -g $GID $GROUP && \
    useradd -u $UID -g $GROUP -s /bin/sh -m $USER

# Install fuse
RUN apt-get update && \
    apt-get install -y fuse && \
    chown ${USER}:${GROUP} /usr/local/bin/s3fs

# Config fuse
RUN chmod a+r /etc/fuse.conf && \
    perl -i -pe 's/#user_allow_other/user_allow_other/g' /etc/fuse.conf

# Copy credentials
ENV SECRET_FILE_PATH=/home/${USER}/passwd-s3fs
COPY ./passwd-s3fs $SECRET_FILE_PATH
RUN chmod 600 $SECRET_FILE_PATH && \
    chown ${USER}:${GROUP} $SECRET_FILE_PATH

# Switch to user
USER ${UID}:${GID}


# Create mnt point
ENV MNT_POINT_PATH=/home/${USER}/data
RUN mkdir -p $MNT_POINT_PATH && \
    chmod g+w $MNT_POINT_PATH

# Execute
ENV S3_BUCKET=''
WORKDIR /home/${USER}
CMD /usr/local/bin/s3fs $S3_BUCKET $MNT_POINT_PATH -o passwd_file=passwd-s3fs -o allow_other && sleep infinity

docker-compose.yml:

version: '3.8'
services:
  s3fs:
    privileged: true
    image: <image-name:tag>
    ##Debug
    #stdin_open: true # docker run -i
    #tty: true        # docker run -t
    environment:
      - S3_BUCKET=my-bucket-name
    devices:
      - "/dev/fuse"
    cap_add:
      - SYS_ADMIN
      - DAC_READ_SEARCH
    cap_drop:
      - NET_ADMIN

Build the image with: docker build -t <image-name:tag> .
Run with: docker-compose up -d
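Once the service is up, you can verify from the host that the FUSE mount actually exists inside the container (service name and mount path as defined above):

```shell
# Check for the mount; "fuse.s3fs" is the filesystem type s3fs registers
docker-compose exec s3fs findmnt -t fuse.s3fs

# Or simply list the bucket contents at the mount point
docker-compose exec s3fs ls /home/appuser/data
```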

Cotton answered 29/8, 2020 at 20:36 Comment(0)

If you would prefer to use docker-compose for testing on your localhost, use the following. Note that you don't need the --privileged flag, since we grant cap_add: SYS_ADMIN and devices: /dev/fuse in docker-compose.yml.

Create the file .env:

AWS_ACCESS_KEY_ID=xxxxxx
AWS_SECRET_ACCESS_KEY=xxxxxx
AWS_BUCKET_NAME=xxxxxx

Create the file docker-compose.yml:

version: "3"
services:
  s3-fuse:
    image: debian-aws-s3-mount
    restart: always
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - AWSACCESSKEYID=${AWS_ACCESS_KEY_ID}
      - AWSSECRETACCESSKEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_BUCKET_NAME=${AWS_BUCKET_NAME}
    cap_add:
      - SYS_ADMIN
    devices:
      - /dev/fuse

Create the file Dockerfile. You can use any Docker image you prefer, but first check if your distro is supported here:

FROM node:16-bullseye

RUN apt-get update -qq
RUN apt-get install -y s3fs
RUN mkdir /s3_mnt

To run container execute:

$ docker-compose run --rm -t s3-fuse /bin/bash

Once inside the container. You can mount your s3 Bucket by running the command:

# s3fs ${AWS_BUCKET_NAME} /s3_mnt

Note: for this setup to work, .env, Dockerfile and docker-compose.yml must be created in the same directory. Don't forget to update your .env file with the correct credentials for the S3 bucket.
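If the mount fails inside the container, s3fs can be run in the foreground with debug output to see the actual error (these flags are from the s3fs man page; the bucket and mount point are the ones defined above):

```shell
# Run s3fs in the foreground (-f) with debug output (-d) and verbose curl logging
s3fs ${AWS_BUCKET_NAME} /s3_mnt -f -d -o dbglevel=info -o curldbg
```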

Tripersonal answered 28/9, 2022 at 16:38 Comment(0)
