Mount S3 bucket as filesystem on AWS ECS container
I am trying to mount S3 as a volume on an AWS ECS Docker container using the rexray/s3fs driver.

I am able to do this on my local machine, where I installed the plugin

$docker plugin install rexray/s3fs

and mounted S3 bucket on docker container.

$docker plugin ls

ID                  NAME                 DESCRIPTION                                   ENABLED

3a0e14cadc17        rexray/s3fs:latest   REX-Ray FUSE Driver for Amazon Simple Storage   true 

$docker run -ti --volume-driver=rexray/s3fs -v s3-bucket:/data img

I am trying to replicate this on AWS ECS.

I tried to follow this document: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-volumes.html

If I specify a Driver value, the task fails to run with the error "was unable to place a task because no container instance met all of its requirements."

I am using a t2.medium instance and the task's resource requirements are well within what it provides, so this should not be a hardware requirement issue.

If I remove the Driver config from the task definition, the task runs.

It seems I am misconfiguring something.

Has anyone tried the same thing? If so, please share what you learned.

Thanks!!

Pompidou answered 27/8, 2018 at 14:30 Comment(2)
AWS has declared that AWS ECS supports Volume plugin but I couldn't find much documentation for the configuration. aws.amazon.com/about-aws/whats-new/2018/08/…Pompidou
Isn't this a duplicate of https://mcmap.net/q/664196/-mount-s3-bucket-on-aws-ecs ?Propst

Your approach of using the rexray/s3fs driver is correct.

These are the steps I followed to get things working on Amazon Linux 1.

First you will need to install s3fs.

yum install -y gcc libstdc++-devel gcc-c++ fuse fuse-devel curl-devel libxml2-devel mailcap automake openssl-devel git
git clone https://github.com/s3fs-fuse/s3fs-fuse
cd s3fs-fuse/
./autogen.sh
./configure --prefix=/usr --with-openssl
make
make install

Now install the driver. There are some options here you may want to modify, such as the AWS region and using an IAM role instead of an access key.

docker plugin install rexray/s3fs:latest S3FS_REGION=ap-southeast-2 S3FS_OPTIONS="allow_other,iam_role=auto,umask=000" LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_MOUNT_ROOTPATH=/ --grant-all-permissions

Now comes the very important step of restarting the ECS agent. I also update it for good measure.

yum update -y ecs-init
service docker restart && start ecs

You should now be ready to create your task definition. The important part is your volume configuration which is shown below.

"volumes": [
  {
    "name": "name-of-your-s3-bucket",
    "host": null,
    "dockerVolumeConfiguration": {
      "autoprovision": false,
      "labels": null,
      "scope": "shared",
      "driver": "rexray/s3fs",
      "driverOpts": null
    }
  }
]

Now you just need to specify the mount point in the container definition:

"mountPoints": [
  {
    "readOnly": null,
    "containerPath": "/where/ever/you/want",
    "sourceVolume": "name-of-your-s3-bucket"
  }
]

Now, as long as you have appropriate IAM permissions for accessing the S3 bucket, your container should start and you can get on with using S3 as a volume.
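As a rough sketch, a policy along these lines attached to the appropriate IAM role should cover the calls s3fs makes (the bucket name matches the volume name used above; the exact action set may vary with how you use the mount):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::name-of-your-s3-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::name-of-your-s3-bucket/*"
    }
  ]
}
```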

If you get an error running the task that mentions "ATTRIBUTE", double-check that the plugin was successfully installed on the EC2 instance and that the ECS agent was restarted. Also double-check that your driver name is "rexray/s3fs".

Aneroid answered 29/5, 2019 at 5:6 Comment(8)
s3fs-fuse is not a reliable solution, I have tried it and failed in many ways.Washerwoman
@RobinVarghese - I guess it depends on your definition of reliable and your use case. s3fs-fuse works for my use case of serving up some s3 files to be read once or twice an hour.Aneroid
Is there a way I can do this for ecs fargate instead of using ecs ec2 ?Mercado
No. Fargate does not support the rexray/s3fs driver.Aneroid
@RobinVarghese It might be useful if you were to comment on the ways it failed. "failed in many ways" doesn't really help future readers.Quagmire
Now as long as you have appropriate IAM permissions for accessing the s3 bucket would you mind clarifying whether it's the "Task Role" or "Execution Role" that needs correct permissions?Quagmire
Please note that I used s3fs-fuse more than 2 years back and it had reliability issues. Every 2 days the mount went unavailable and my scheduled job used to fail. The new drivers might be better and this issue might already have been fixed. There were limitations in the way IAM could be applied in this solution.Washerwoman
@PhilipCouling when I wrote the original answer I'm pretty sure execution roles didn't even exist - it was task roles only - so I would try that first, but perhaps things have changed in the intervening years.Aneroid

I have gotten s3fs to work in my ECS containers by running the s3fs command directly to mount the bucket in the container. I'm not familiar with the rexray driver; it may provide some benefits over plain s3fs, but for a lot of use cases this approach works well and does not require any UserData editing.

I made it a little smoother by setting my container's entrypoint to be the following:

#!/bin/bash

bucket=my-bucket

s3fs ${bucket} /data -o ecs

echo "Mounted ${bucket} to /data"

exec "$@"

The -o ecs option is critical for assuming the ECS Task Role; if you use the regular -o iam_role=auto, s3fs will instead assume the IAM role of the EC2 instance running the ECS agent.

Note the ECS Task Role will need the s3:GetObject, s3:PutObject, and s3:ListBucket IAM permissions for the bucket you are trying to mount. If you want the container to have read-only access to the bucket, you can enforce that at the IAM level by leaving off the s3:PutObject permission. You can also use fine-grained IAM resource statements to disallow or allow writes to only certain S3 prefixes. Some ugly errors will be thrown if you try to write a file to the s3fs filesystem without permission to make the underlying S3 API calls, but it all generally works fine.
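For example, a read-only, prefix-scoped policy for the Task Role might look like the following sketch (the bucket name and "public/" prefix are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket",
      "Condition": { "StringLike": { "s3:prefix": ["public/*"] } }
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/public/*"
    }
  ]
}
```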

Note: The version of s3fs installed by apt-get install s3fs is old and does not have this option available as of the time of this writing, which means you may need to install s3fs from source.

Also note: you will need to run your containers in privileged mode for the s3fs mount to work.
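In the ECS container definition that is just the privileged flag; a minimal sketch (the container name, image, and entrypoint path are placeholders):

```json
"containerDefinitions": [
  {
    "name": "my-app",
    "image": "my-image",
    "privileged": true,
    "entryPoint": ["/entrypoint.sh"]
  }
]
```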

Khajeh answered 6/3, 2020 at 0:59 Comment(4)
Is there a way I can do this for ecs fargate instead of using ecs ec2 ?Mercado
I believe this should work on Fargate the same as it works on EC2-based ECS; both use the same ECS execution interface, the difference is just the infrastructure it runs on. That being said, I have not tested it.Khajeh
I think one difference between your solution and a Rexray driver might be where the IAM permission sits. I'm not certain but I suspect using Rexray requires the permission to be on the "Execution Role" where your approach requires it on the "Task Role". If you get time, would you mind adding a note on setting up correct IAM permissions?Quagmire
Depends on your setup, in the way described in the original question using docker volume mounts yes the "execution role", or whatever IAM role the docker host has, will be used. But I'm not sure you could even achieve running in the original way described though because you would need to use a custom AMI that has rexray installed on the host machine. I will add a note about IAM permissions generally.Khajeh

The ECS cluster EC2 instances need to have the rexray driver installed. This AWS blog post discusses it: https://aws.amazon.com/blogs/compute/amazon-ecs-and-docker-volume-drivers-amazon-ebs/

To help you get started, we’ve created an AWS CloudFormation template that builds a two-node ECS cluster. The template bootstraps the rexray/ebs volume driver onto each node and assigns them an IAM role with an inline policy that allows them to call the API actions that REX-Ray needs.

The same would apply to the s3 driver.

Hubbs answered 2/2, 2019 at 15:34 Comment(0)

Thanks to @wimnat for guidance.

With regards to getting the rexray/s3fs plugin installed on EC2 instances in an ECS cluster via LaunchConfiguration UserData, this is what I ended up with (for AMI version amzn-ami-2018.03.o-amazon-ecs-optimized):

#install s3fs required by rexray/s3fs docker plugin
yum install -y gcc libstdc++-devel gcc-c++ fuse fuse-devel curl-devel libxml2-devel mailcap automake openssl-devel git
git clone https://github.com/s3fs-fuse/s3fs-fuse
cd s3fs-fuse/
./autogen.sh
./configure --prefix=/usr --with-openssl
make
make install
#install plugin to enable s3 volumes, using the task execution role to access s3.
docker plugin install rexray/s3fs:0.11.1 S3FS_REGION=us-east-1 S3FS_OPTIONS="allow_other,iam_role=auto,umask=000" LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_MOUNT_ROOTPATH=/ --grant-all-permissions

Points to note:

  1. Using rexray/s3fs:latest, the volumes showed up in 'docker volume ls', but I got an error when mounting them (https://github.com/rexray/rexray/issues/1187).
  2. If using a versioned rexray/s3fs, you need to include that version in the driver name when you define the mount, i.e. Driver: 'rexray/s3fs:0.11.1'.
  3. To check that the container instances have the required attributes you can use the aws-cli: aws ecs list-attributes --cluster my-cluster --target-type container-instance --profile myprofile --attribute-name ecs.capability.docker-plugin.rexray/s3fs.0.11.1
  4. It seems the ECS agent starts up after the UserData script has completed, so there is no need to restart the agent or wait for it to start.
  5. The install works for a bucket with default encryption enabled (AES256). If you use your own KMS key to encrypt the bucket, you need to provide the correct s3fs option to handle the encryption/decryption. I have not tried this.
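To illustrate point 2, the volume block in the task definition then carries the versioned driver name; a sketch (the volume name is a placeholder):

```json
"volumes": [
  {
    "name": "name-of-your-s3-bucket",
    "dockerVolumeConfiguration": {
      "scope": "shared",
      "driver": "rexray/s3fs:0.11.1"
    }
  }
]
```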
Jink answered 29/8, 2019 at 10:34 Comment(0)
