Allowing other users permission to an S3FS-mounted bucket directory
I'm having a problem using S3FS. I'm running:

ubuntu@ip-x-x-x-x:~$ /usr/bin/s3fs --version
Amazon Simple Storage Service File System 1.71

I have the password file installed at /usr/share/myapp/s3fs-password with 600 permissions.

I have successfully mounted the S3 bucket:

sudo /usr/bin/s3fs -o allow_other -o passwd_file=/usr/share/myapp/s3fs-password -o use_cache=/tmp mybucket.example.com /bucket

I also have user_allow_other enabled in /etc/fuse.conf.
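For reference, that means this line must be present and uncommented in the file:

# /etc/fuse.conf
user_allow_other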

When I tried creating a file in the bucket as root, it worked:

ubuntu@ip-x-x-x-x:~$ sudo su
root@ip-x-x-x-x:/home/ubuntu# cd /bucket
root@ip-x-x-x-x:/bucket# echo 'Hello World!' > test-`date +%s`.txt
root@ip-x-x-x-x:/bucket# ls
test-1373359118.txt

I checked the contents of the bucket mybucket.example.com and the file was created successfully.

But I had difficulty writing into the /bucket directory as a different user:

root@ip-x-x-x-x:/bucket# exit
ubuntu@ip-x-x-x-x:~$ cd /bucket
ubuntu@ip-x-x-x-x:/bucket$ echo 'Hello World!' > test-`date +%s`.txt
-bash: test-1373359543.txt: Permission denied

In desperation I tried chmod-ing test-1373359118.txt to 777, and then I could write to the file:

ubuntu@ip-x-x-x-x:/bucket$ sudo chmod 777 test-1373359118.txt
ubuntu@ip-x-x-x-x:/bucket$ echo 'Test' > test-1373359118.txt
ubuntu@ip-x-x-x-x:/bucket$ cat test-1373359118.txt
Test

Oddly, I could create a directory inside the bucket with mode 1777 and write a file there:

ubuntu@ip-x-x-x-x:/bucket$ sudo mkdir -m 1777 test
ubuntu@ip-x-x-x-x:/bucket$ ls
test  test-1373359118.txt
ubuntu@ip-x-x-x-x:/bucket$ cd test
ubuntu@ip-x-x-x-x:/bucket/test$ echo 'Hello World!' > test-`date +%s`.txt
ubuntu@ip-x-x-x-x:/bucket/test$ ls
test-1373360059.txt
ubuntu@ip-x-x-x-x:/bucket/test$ cat test-1373360059.txt
Hello World!

But then I tried

ubuntu@ip-x-x-x-x:~$ sudo chmod 777 /bucket
chmod: changing permissions of '/bucket': Input/output error

It didn't work.

Initially I was planning to use this /bucket directory to store large, rarely accessed files for my LAMP stacks running on several EC2 machines. (I think it's suitable enough for this without writing a special handling library using the AWS PHP SDK, but that's not the point.)

For that reason, I could settle for using a directory inside /bucket to store the files. But I'm curious: is there a way to open up the entire /bucket to other users?

Hodgkinson answered 9/7, 2013 at 9:3 Comment(1)
It seems the answer is umask; see https://mcmap.net/q/587692/-how-to-mount-aws-s3-using-s3fs-to-allow-full-access-to-any-user – Receptor

Permissions were an issue with older versions of S3FS. Upgrade to the latest version to get it working.

As already stated in the question itself and other answers, while mounting you will have to pass the following option: -o allow_other

Example:

s3fs mybucket:/ mymountlocation/ -o allow_other 

Also, before doing this ensure the following is enabled in /etc/fuse.conf:

user_allow_other

It is disabled by default ;)
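If you are not sure whether it is enabled, this one-liner uncomments it (a sketch; it assumes the stock commented-out line):

sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf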

Ornithorhynchus answered 8/5, 2015 at 8:59 Comment(2)
This doesn't seem to work recursively, meaning I cannot access the subdirectories in the S3 bucket even though the allow_other option is set (as well as user_allow_other). – Roselba
@Roselba It seems the answer is umask; see https://mcmap.net/q/587692/-how-to-mount-aws-s3-using-s3fs-to-allow-full-access-to-any-user – Receptor

This works for me:

s3fs ec2downloads:/ /mnt/s3 -o use_rrs -o allow_other -o use_cache=/tmp

It must have been fixed in a recent version; I'm using the latest clone (1.78) from the GitHub project.

Pyne answered 16/11, 2014 at 4:8 Comment(1)
I'm using version 1.90 and it still does not work recursively, as @Roselba commented. – Rundlet

This is the only thing that worked for me:

You can pass the uid and gid options (together with umask) to make sure your user owns the mount:

    -o umask=0007,uid=1001,gid=1001 # replace 1001 with your ids

from: https://github.com/s3fs-fuse/s3fs-fuse/issues/673

To find your uid and gid, look at the first two numbers in the output of:

grep "^$USER:" /etc/passwd
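Alternatively, id prints them directly. A full mount command combining these options might then look like the following sketch (bucket name and mount point are placeholders, not from the original answer):

id -u    # prints your uid
id -g    # prints your gid
s3fs mybucket /bucket -o allow_other -o umask=0007,uid=1001,gid=1001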
Combes answered 31/7, 2021 at 16:56 Comment(0)

I would recommend taking a look at a new project, RioFS (a userspace S3 filesystem): https://github.com/skoobe/riofs.

This project is an alternative to s3fs; its main advantages over s3fs are simplicity, speed, and bug-free code. The project is currently in a "testing" state, but it has been running on several high-load file servers for quite some time.

We are seeking more people to join the project and help with testing. From our side we offer quick bug fixes and will listen to your requests for new features.

Regarding your issue, in order to run RioFS as root and allow other users read/write access to the mounted directory:

  1. make sure /etc/fuse.conf contains the user_allow_other option
  2. launch RioFS with the -o allow_other parameter.

The full command line to launch RioFS will look like:

sudo riofs -c /path/to/riofs.conf.xml http://s3.amazonaws.com mybucket.example.com /bucket

(Make sure you have exported both the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables, or set them in the riofs.conf.xml configuration file.)
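For example (placeholder values; substitute your own credentials):

export AWSACCESSKEYID=YOUR_ACCESS_KEY_ID
export AWSSECRETACCESSKEY=YOUR_SECRET_ACCESS_KEY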

Hope this helps, and we look forward to seeing you join our community!

Operand answered 16/7, 2013 at 16:30 Comment(0)

There could be several reasons; I'll list the one I encountered when I hit the same issue. If you look at the file's permissions, they may show as '---------', i.e. no permissions/ACL at all.

If that's the case, you can add the x-amz-meta-mode header to the file's metadata. Check out my post on how to do it, including how to do it dynamically.
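As a sketch of the idea using the AWS CLI (bucket and key are placeholders; 33188 is the decimal form of octal 100644, i.e. a regular file with 644 permissions, which is the format s3fs stores in x-amz-meta-mode):

aws s3 cp s3://mybucket/myfile s3://mybucket/myfile \
  --metadata mode=33188 --metadata-directive REPLACE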

Deidredeific answered 14/8, 2013 at 3:49 Comment(2)
This was my problem; I just chmod 777'ed it and got unstuck. – Cayuse
But how do you add this header to files which are uploaded to storage by Amazon SES? – Essive

If you are using CentOS, you need to enable the httpd_use_fusefs SELinux boolean; otherwise, no matter what options you give s3fs, httpd will never have permission to access the mount:

setsebool -P httpd_use_fusefs on
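You can check the current state of the boolean first:

getsebool httpd_use_fusefs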
Torritorricelli answered 12/12, 2014 at 6:6 Comment(0)

For all users to be able to access the mounted bucket, use umask=0002 in the /etc/fstab entry and remount the S3 bucket.

Example options (filesystem type and options fields of the fstab entry):

fuse.s3fs _netdev,allow_other,umask=0002,passwd_file=/etc/passwdfile.txt
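Put together, a complete fstab entry built from those options might look like this sketch (bucket name, mount point, and password file path are placeholders):

mybucket /bucket fuse.s3fs _netdev,allow_other,umask=0002,passwd_file=/etc/passwdfile.txt 0 0

and then remount:

sudo umount /bucket && sudo mount /bucket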

Hiero answered 10/11, 2022 at 15:17 Comment(0)

If you want to automount it from fstab, the same allow_other option needs to be added as mentioned above, and user_allow_other must be uncommented in /etc/fuse.conf. If you mount it from an AWS instance, I'd also recommend using an instance profile:

s3fs#mys3bucketname /mnt/mys3folder fuse defaults,uid=1000,gid=1000,umask=022,allow_other,_netdev,iam_role=my-instance-role-name 0 0

Here mys3bucketname is your bucket in S3, uid=1000 and gid=1000 are the user and group IDs that should own the S3 objects locally, and my-instance-role-name is the IAM role attached to the instance profile of the AWS instance you mount from. The trailing 0 0 disables dump and fsck.
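After adding the entry you can test it without rebooting:

sudo mount -a
df -h /mnt/mys3folder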

Lecythus answered 21/11, 2023 at 10:21 Comment(0)
