Mount an EBS volume (not a snapshot) to Elastic Beanstalk EC2

I'm migrating a legacy app to Elastic Beanstalk. It needs persistent storage (for the time being), so I want to mount an EBS volume.

I was hoping the following would work in .ebextensions/ebs.config:

commands:
  01mkdir:
    command: "mkdir /data"
  02mount:
    command: "mount /dev/sdh /data"

option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: BlockDeviceMappings
    value: /dev/sdh=vol-XXXXX

https://blogs.aws.amazon.com/application-management/post/Tx224DU59IG3OR9/Customize-Ephemeral-and-EBS-Volumes-in-Elastic-Beanstalk-Environments

But unfortunately I get the following error: "(vol-XXXX) for parameter snapshotId is invalid. Expected: 'snap-...'."

Clearly this method only allows snapshots. Can anyone suggest a fix or an alternative method?
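
For reference, the format the option does seem to accept (per the linked blog post and the error message) is an empty volume of a given size or a volume restored from a snapshot, along these lines (sizes and snapshot ID are placeholders), which doesn't help when the data already lives on an existing volume:

option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: BlockDeviceMappings
    value: /dev/sdj=:100,/dev/sdh=snap-XXXXXXXX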

Dunderhead answered 17/8, 2015 at 19:2 Comment(0)

I have found a solution. It could be improved by removing the "sleep 10", but unfortunately that is required because aws ec2 attach-volume is asynchronous and returns straight away, before the attachment has actually taken place.

container_commands:
  01mount:
    command: "aws ec2 attach-volume --volume-id vol-XXXXXX --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/sdh"
    ignoreErrors: true
  02wait:
    command: "sleep 10"
  03mkdir:
    command: "mkdir /data"
    test: "[ ! -d /data ]"
  04mount:
    command: "mount /dev/sdh /data"
    test: "! mountpoint -q /dev/sdh"

Note: ideally this would run in the commands section rather than container_commands, but the environment variables are not set in time.
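
If you would rather not rely on a fixed sleep, here is a rough sketch of the same config using aws ec2 wait instead (untested; the volume ID is a placeholder, --region may still need to be passed, and the mountpoint test checks the directory rather than the device):

container_commands:
  01attach:
    command: "aws ec2 attach-volume --volume-id vol-XXXXXX --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/sdh"
    ignoreErrors: true
  02wait:
    command: "aws ec2 wait volume-in-use --volume-ids vol-XXXXXX --filters Name=attachment.status,Values=attached"
  03mkdir:
    command: "mkdir /data"
    test: "[ ! -d /data ]"
  04mount:
    command: "mount /dev/sdh /data"
    test: "! mountpoint -q /data"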

Dunderhead answered 18/8, 2015 at 14:58 Comment(7)
Instead of ignoreErrors you can test: "[ ! -b /dev/sdh ]".Battista
I don't understand your note: what environment variables? If this were in commands instead of container_commands would you not need to restart the docker container as @hashinclude mentioned?Trifle
Shouldn't test: "! mountpoint -q /dev/sdh" be test: "! mountpoint -q /data"?Radiochemical
Use aws ec2 wait volume-in-use --region ${REGION} --volume-ids ${VOLUME_ID} to wait for the attachment.Glossitis
this is what I was looking for, but it seems to require my API key/secret to be stored inside the EC2 instance... which poses another problem... I am now thinking it might be best to have a fresh EBS volume created from a snapshot.Stoush
Also consider adding --filters "Name=attachment.status,Values=attached" to the aws ec2 wait volume-in-use commandTruncate
The Auto Scaling group's max instances needs to be set to 1, because an EBS volume can only be attached to one instance at a time.Rhynd

To add to @Simon's answer (to avoid traps for the unwary):

  • If the persistent storage being mounted will ultimately be used inside a Docker container (e.g. if you're running Jenkins and want to persist jenkins_home), you need to restart the docker container after running the mount.
  • You need to have the 'ec2:AttachVolume' action permitted against both the EC2 instance (or the instance/* ARN) and the volume(s) you want to attach (or the volume/* ARN) in the EB assumed role policy; without this, the aws ec2 attach-volume command fails (see the example policy after this list).
  • You need to pass --region to the aws ec2 ... commands as well (at least, as of this writing).
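
A sketch of a policy statement along those lines (the region, account ID, and volume ID are placeholders; the exact ARNs depend on your setup):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:AttachVolume",
            "Resource": [
                "arn:aws:ec2:us-east-1:123456789012:instance/*",
                "arn:aws:ec2:us-east-1:123456789012:volume/vol-XXXXXXXX"
            ]
        }
    ]
}
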
Jahvist answered 18/10, 2016 at 8:27 Comment(0)

Alternatively, instead of using an EBS volume, you could consider using Elastic File System (EFS) storage. AWS has published a script for mounting an EFS volume on Elastic Beanstalk EC2 instances, and an EFS file system can be attached to multiple EC2 instances simultaneously (which is not possible with EBS).

http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/services-efs.html
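
For reference, a minimal sketch of mounting an EFS file system from .ebextensions (the file system ID, region, and mount point are placeholders; the linked AWS docs provide the full, officially supported config):

commands:
  01mkdir:
    command: "mkdir -p /mnt/efs"
    test: "[ ! -d /mnt/efs ]"
  02mount:
    command: "mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-XXXXXXXX.efs.us-east-1.amazonaws.com:/ /mnt/efs"
    test: "! mountpoint -q /mnt/efs"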

Insalivate answered 19/10, 2017 at 15:38 Comment(0)

Here's a config file that you can drop into .ebextensions. You will need to provide the VOLUME_ID that you want to attach (below it is looked up via the volume's Name tag). The test commands ensure that attaching and mounting only happen when needed, so that you can eb deploy repeatedly without errors.

container_commands:
  00attach:
    command: |
      # Look up the region, this instance's ID, and the volume ID (found here via the volume's Name tag).
      export REGION=$(/opt/aws/bin/ec2-metadata -z | awk '{print substr($2, 0, length($2)-1)}')
      export INSTANCE_ID=$(/opt/aws/bin/ec2-metadata -i | awk '{print $2}')
      export VOLUME_ID=$(aws ec2 describe-volumes --region ${REGION} --output text --filters Name=tag:Name,Values=tf-trading-prod --query 'Volumes[*].VolumeId')

      aws ec2 attach-volume --region ${REGION} --device /dev/sdh --instance-id ${INSTANCE_ID} --volume-id ${VOLUME_ID}
      aws ec2 wait volume-in-use --region ${REGION} --volume-ids ${VOLUME_ID}
      sleep 1
    test: "! file -E /dev/xvdh"  # only attach if the device node does not exist yet
  01mkfs:
    command: "mkfs -t ext3 /dev/xvdh"
    test: "file -s /dev/xvdh | awk '{print $2}' | grep -q data"  # only format if there is no filesystem yet
  02mkdir:
    command: "mkdir -p /data"
  03mount:
    command: "mount /dev/xvdh /data"
    test: "! mountpoint /data"  # only mount if /data is not already a mountpoint
Glossitis answered 18/5, 2019 at 12:45 Comment(0)

You have to use container_commands because when commands run, the source bundle has not been fully unpacked yet.

.ebextensions/whatever.config

container_commands:
  chmod:
    command: chmod +x .platform/hooks/predeploy/mount-volume.sh

Predeploy hooks run after container commands but before the deployment. There is no need to restart your Docker container even if it mounts a directory on the attached EBS volume, because Beanstalk spins it up after the predeploy hooks complete. You can see this in the logs.
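
For clarity, the layout assumed here (paths relative to the root of the source bundle):

.ebextensions/
  whatever.config
.platform/
  hooks/
    predeploy/
      mount-volume.sh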

.platform/hooks/predeploy/mount-volume.sh

#!/bin/sh

# Make sure LF line endings are used in the file, otherwise there would be an error saying "file not found".

# All platform hooks run as root user, no need for sudo.

if mountpoint /path/to/mount/point; then
  # Don't need to attach and mount if it's not an initial deploy but an app version update
  exit 0
fi

# If it's a new EC2 instance created by ASG because the old one terminated,
# need to wait until the volume becomes available again. Can take a few minutes.
aws ec2 wait volume-available --volume-ids vol-xxx --region us-east-1

# Before attaching the volume find out the root volume's name, so that we can later use it for filtering purposes.
# -d – to filter out partitions.
# -P – to display the result as key-value pairs.
# -o – to output only the matching part.
# lsblk strips the "/dev/" part
ROOT_VOLUME_NAME=$(lsblk -d -P | grep -o 'NAME="[a-z0-9]*"' | grep -o '[a-z0-9]*')

aws ec2 attach-volume --volume-id vol-xxx --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/sdf --region us-east-1
# The above command is async, so we need to wait.
aws ec2 wait volume-in-use --volume-ids vol-xxx --region us-east-1

# Now lsblk should show two devices. We figure out which one is non-root by filtering out the stored root volume name.
NON_ROOT_VOLUME_NAME=$(lsblk -d -P | grep -o 'NAME="[a-z0-9]*"' | grep -o '[a-z0-9]*' | awk -v name="$ROOT_VOLUME_NAME" '$0 !~ name')

FILE_COMMAND_OUTPUT=$(file -s /dev/$NON_ROOT_VOLUME_NAME)

# Create a file system on the non-root device only if there isn't one already, so that we don't accidentally override it.
if test "$FILE_COMMAND_OUTPUT" = "/dev/$NON_ROOT_VOLUME_NAME: data"; then
  mkfs -t xfs /dev/$NON_ROOT_VOLUME_NAME
fi

mkdir -p /path/to/mount/point

mount /dev/$NON_ROOT_VOLUME_NAME /path/to/mount/point

# Need to make sure that the volume gets mounted after every reboot, because by default only root volume is automatically mounted.

cp /etc/fstab /etc/fstab.orig

NON_ROOT_VOLUME_UUID=$(lsblk -d -P -o +UUID | awk -v name="$NON_ROOT_VOLUME_NAME" '$0 ~ name' | grep -o 'UUID="[-0-9a-z]*"' | grep -o '[-0-9a-z]*')

# We specify 0 to prevent the file system from being dumped, and 2 to indicate that it is a non-root device.
# If you ever boot your instance without this volume attached, the nofail mount option enables the instance to boot
# even if there are errors mounting the volume.
# Debian derivatives, including Ubuntu versions earlier than 16.04, must also add the nobootwait mount option.
echo "UUID=$NON_ROOT_VOLUME_UUID /path/to/mount/point xfs defaults,nofail 0 2" | tee -a /etc/fstab

I'm pretty sure the things I do with grep and awk could be done more concisely; I'm not great at Linux.
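
For example, the device names and the UUID could probably be obtained like this (a sketch, not tested on the platform in question):

# -d skips partitions, -n suppresses the header line, -o NAME prints only the device name column
ROOT_VOLUME_NAME=$(lsblk -d -n -o NAME)

# ... attach the volume as above, then:
NON_ROOT_VOLUME_NAME=$(lsblk -d -n -o NAME | grep -v "$ROOT_VOLUME_NAME")
NON_ROOT_VOLUME_UUID=$(blkid -s UUID -o value /dev/$NON_ROOT_VOLUME_NAME)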

Instance profile should include these permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:AttachVolume",
                "ec2:DetachVolume",
                "ec2:DescribeVolumes"
            ],
            "Resource": "*"
        }
    ]
}

You have to ensure that the EBS volume is created in the same AZ as the Beanstalk environment and that you use a SingleInstance deployment. Then if your instance crashes, the ASG will terminate it and create another one, and the hook will attach the volume to the new instance, keeping all the data.
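
For example, a single-instance environment can also be requested from .ebextensions (a sketch; the same setting can be chosen when creating the environment):

option_settings:
  aws:elasticbeanstalk:environment:
    EnvironmentType: SingleInstance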

Griseofulvin answered 28/2, 2022 at 15:24 Comment(0)

Here it is with the missing config (credentials and region) included:

commands:
  01mount:
    command: "export AWS_ACCESS_KEY_ID=<replace by your AWS key> && export AWS_SECRET_ACCESS_KEY=<replace by your AWS secret> && aws ec2 attach-volume --volume-id <replace by you volume id> --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/xvdf --region <replace with your region>"
    ignoreErrors: true
  02wait:
    command: "sleep 10"
  03mkdir:
    command: "mkdir /home/lucene"
    test: "[ ! -d /home/lucene ]"
  04mount:
    command: "mount /dev/xvdf /home/lucene"
    test: "! mountpoint -q /dev/xvdf"
Tuttle answered 4/10, 2017 at 17:49 Comment(0)
