How would you best handle persistent data between instances with a load-balanced service in Amazon ECS? Data-only containers will not work, and neither will the volumes you can specify in tasks; both persist only on the instance itself. I have been trying to read up on attaching an EBS volume at instance creation with User Data in the Launch Configuration, but I had no luck there.
Depending on your data needs, there are two options I can think of:
Mapping an S3 bucket as a local drive
You can share an S3 bucket and limit access to any number of instances. We use a drive-mapping solution on Windows that mounts an S3 bucket as a local drive; similar drivers exist for Linux. Each instance gets the same mapped drive and shares that persistent data. The data is read/write, so if we scale in or out, each instance has access to the S3 data in a consistent format.
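As a rough sketch of the shared-access pattern (independent of any particular drive-mapping driver), the same effect can be had by reading and writing the bucket directly with boto3; the bucket name and key below are placeholders, not anything from the original setup:

    import boto3

    # Hypothetical bucket shared by all instances in the cluster.
    BUCKET = "my-shared-data-bucket"

    s3 = boto3.client("s3")

    # Any instance can write shared state...
    s3.put_object(Bucket=BUCKET, Key="shared/config.json",
                  Body=b'{"setting": 1}')

    # ...and any other instance can read it back, regardless of
    # how the cluster scales in or out.
    obj = s3.get_object(Bucket=BUCKET, Key="shared/config.json")
    data = obj["Body"].read()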
Mounting a volume from a snapshot
As you suggest, if it is read-only data that you need access to, you can use a User Data script to mount a volume from a snapshot at launch time. You just need the script and credentials (or an IAM role) to run the appropriate commands at launch.
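A minimal boto3 sketch of that create-and-attach step, assuming an IAM role with EC2 volume permissions; the snapshot ID, instance ID, availability zone, and device name are placeholders (a real User Data script would read the instance ID and AZ from the EC2 instance metadata service):

    import boto3

    # Placeholders: substitute your snapshot ID and this instance's
    # AZ and ID (normally fetched from instance metadata at boot).
    SNAPSHOT_ID = "snap-0123456789abcdef0"
    AVAILABILITY_ZONE = "us-east-1a"
    INSTANCE_ID = "i-0123456789abcdef0"

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a fresh volume from the snapshot in the instance's AZ.
    volume = ec2.create_volume(SnapshotId=SNAPSHOT_ID,
                               AvailabilityZone=AVAILABILITY_ZONE)
    vol_id = volume["VolumeId"]

    # Wait until the volume is ready, then attach it to this instance.
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol_id])
    ec2.attach_volume(VolumeId=vol_id, InstanceId=INSTANCE_ID,
                      Device="/dev/xvdf")

    # The OS still has to mount the device (e.g. mount /dev/xvdf /data)
    # later in the same User Data script.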
You can use Amazon EFS to share a filesystem across ECS containers and instances. EFS is based on NFS, so it can be mounted on multiple host instances at the same time, which allows cluster scheduling and scaling to work as intended. See this tutorial for persisting MySQL data that way:
https://aws.amazon.com/blogs/compute/using-amazon-efs-to-persist-data-from-amazon-ecs-containers/
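To give a flavor of how the linked tutorial wires this up: the EFS filesystem is NFS-mounted at a host path on every container instance, and the task definition maps that path into the container. Here is a hedged boto3 sketch; the family, volume name, mount path, and credentials are placeholder assumptions, not values from the tutorial:

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # Assumes the EFS filesystem is already NFS-mounted at /mnt/efs on
    # every container instance (e.g. via User Data). Names here are
    # placeholders.
    ecs.register_task_definition(
        family="mysql-efs-demo",
        volumes=[
            {"name": "efs-data", "host": {"sourcePath": "/mnt/efs/mysql"}},
        ],
        containerDefinitions=[
            {
                "name": "mysql",
                "image": "mysql:5.7",
                "memory": 512,
                "environment": [
                    {"name": "MYSQL_ROOT_PASSWORD", "value": "changeme"},
                ],
                # MySQL's data directory lands on EFS, so it survives
                # the container being rescheduled to another instance.
                "mountPoints": [
                    {"sourceVolume": "efs-data",
                     "containerPath": "/var/lib/mysql"},
                ],
            }
        ],
    )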
I also suggest Amazon EFS (https://aws.amazon.com/blogs/compute/using-amazon-efs-to-persist-data-from-amazon-ecs-containers/).
One limitation: at the time of writing, only four regions support EFS:
EU (Ireland)
US East (N. Virginia)
US East (Ohio)
US West (Oregon)
If your region is not supported, you can implement your own NFS share to share a persistent folder between EC2 instances. S3FS looks cool, but it was buggy when I tested it two years ago (things may have changed since then).