Command 01_migrate failed on Amazon Linux 2 AMI

I have a Django project deployed to Elastic Beanstalk on the Amazon Linux 2 AMI. I installed PyMySQL to connect to the database and added these lines to settings.py:

import pymysql

pymysql.version_info = (1, 4, 6, "final", 0)
pymysql.install_as_MySQLdb()

I also have a .config file for migrating the database:

container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: mysite.settings

Previously I was using mysqlclient on the Amazon Linux AMI with this .config file, but it doesn't work on the Amazon Linux 2 AMI, so I switched to PyMySQL. Now, when I try to deploy the updated version of my project, I get the following error:

Traceback (most recent call last):
  File "/opt/aws/bin/cfn-init", line 171, in <module>
    worklog.build(metadata, configSets)
  File "/usr/lib/python2.7/site-packages/cfnbootstrap/construction.py", line 129, in build
    Contractor(metadata).build(configSets, self)
  File "/usr/lib/python2.7/site-packages/cfnbootstrap/construction.py", line 530, in build
    self.run_config(config, worklog)
  File "/usr/lib/python2.7/site-packages/cfnbootstrap/construction.py", line 542, in run_config
    CloudFormationCarpenter(config, self._auth_config).build(worklog)
  File "/usr/lib/python2.7/site-packages/cfnbootstrap/construction.py", line 260, in build
    changes['commands'] = CommandTool().apply(self._config.commands)
  File "/usr/lib/python2.7/site-packages/cfnbootstrap/command_tool.py", line 117, in apply
    raise ToolError(u"Command %s failed" % name)
ToolError: Command 01_migrate failed

How can I fix this issue?

Cords answered 29/6, 2020 at 6:44 Comment(7)
When you run the django-admin.py migrate manually from the instance, does it work as expected?Ordonez
When I run the command in my PowerShell, it works as expected. I can migrate the changes to my db.Develop
That's good, but does it work on the EB instance itself when you ssh into it and run the command?Ordonez
I tried to run the migrate command, but I got an error: No such file or directory.Develop
Where does django-admin.py migrate come from? Is it part of your application, or some dependency?Ordonez
Yes, it is part of my application. It is a database migration command that belongs to Django.Develop
similar questions: https://mcmap.net/q/828272/-container-command-fails-in-django-on-elastic-beanstalk-python-3-7, https://mcmap.net/q/828273/-django-collectstatic-command-fails-in-aws-elastic-beanstalk-amazon-linux-2-python-3-platformBrooch

Amazon Linux 2 has a fundamentally different setup than Amazon Linux 1, and the documentation as of Jul 24, 2020 is out of date. The django-admin of the environment that Elastic Beanstalk installs does not appear to be on the PATH, so you have to source the environment's activate script to make sure it is.

I left my answer here as well, which goes into much more detail on how I arrived at it, but the solution (which I don't love) is:

container_commands:
    01_migrate:
        command: "source /var/app/venv/*/bin/activate && python3 manage.py migrate"
        leader_only: true

Even though I don't love it, I have verified with AWS Support that this is in fact the recommended way to do it. You must source the Python environment, because on Amazon Linux 2 the platform uses virtual environments in an effort to stay more consistent.
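
For reference, a complete .ebextensions/django.config combining this command with the option_settings from the question might look like the following (a sketch; mysite is the project name from the question, adjust it to yours):

container_commands:
  01_migrate:
    command: "source /var/app/venv/*/bin/activate && python3 manage.py migrate"
    leader_only: true
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: mysite.settings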

Archduke answered 24/7, 2020 at 13:45 Comment(3)
Note that this activates the correct Python (sets the PATH) but not the environment variables that are set via the EB console. If Django uses any of them (database credentials, secret name, etc.), they will not be set for the migration command.Mord
This command relies (at least for me) on environment variables I set in the GUI for some of the secrets, and it works just fine. I'm not sure what you mean, @MatanDrory.Archduke
That's strange. I just tested it on the EC2 machine my EBS created. I ran printenv and did not see my environment variables, then ran source /var/app/venv/*/bin/activate && printenv and again I didn't see the environment variables; the only change is in PATH. I currently have an ugly workaround with a prebuild hook that copies the env file into export format and I source that instead. The problem with that is that the env file at "/opt/elasticbeanstalk/deployment/env" is only created after a successful deployment, forcing me to start with the sample app.Mord
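
A rough sketch of that idea, here using bash's allexport option instead of rewriting the file into export format (untested; it assumes the env file already exists from an earlier successful deployment and that values contain no spaces that would need quoting):

#!/bin/bash
# Sketch: export every KEY=value pair from the deployment env file, then run the migration.
set -o allexport
source /opt/elasticbeanstalk/deployment/env
set +o allexport
source /var/app/venv/*/bin/activate && python3 manage.py migrate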

The answer from @nick-brady is great, and it provides the basic solution.

However, the AWS docs on migrating to Amazon Linux 2 suggest that we should do things like this using .platform hooks (this also applies to Amazon Linux 2023):

We recommend using platform hooks to run custom code on your environment instances. You can still use commands and container commands in .ebextensions configuration files, but they aren't as easy to work with. For example, writing command scripts inside a YAML file can be cumbersome and difficult to test.

and from the AWS Knowledge Center:

... it's a best practice to use platform hooks instead of providing files and commands in .ebextension configuration files.

As a bonus, output from the platform hooks is collected in a separate log file (/var/log/eb-hooks.log), which is included in bundle and tail logs by default. This makes debugging a bit easier.
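
For example, after connecting to an instance with eb ssh you can inspect the hook output directly:

tail -n 50 /var/log/eb-hooks.log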

The basic idea is to create a shell script in your application source bundle, e.g. .platform/hooks/postdeploy/01_django_migrate.sh. This is described in more detail in the platform hooks section in the docs for extending EB linux platforms.
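
The resulting layout in your application source bundle is:

.platform/
└── hooks/
    └── postdeploy/
        └── 01_django_migrate.sh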

The file must be executable, so: chmod +x .platform/hooks/postdeploy/01_django_migrate.sh

Update: On AL2 and AL2023 execute permissions are now automatically granted to all platform hook scripts.

The file content could look like this (based on @nick-brady's answer):

#!/bin/bash

source "$PYTHONPATH/activate" && {
# log which migrations have already been applied
python manage.py showmigrations;
# migrate
python manage.py migrate --noinput;
}

You can do the same with collectstatic etc.

Note that the path to the Python virtual environment is available to platform hooks as the environment variable PYTHONPATH. You can verify this by inspecting the file /opt/elasticbeanstalk/deployment/env on your instance, e.g. via ssh. Also see AWS knowledge center.
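
On the instance, that env file contains a line of roughly this form (the virtual environment directory name varies by platform version and is only a placeholder here):

PYTHONPATH=/var/app/venv/<venv-name>/bin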

For those wondering, the && in the shell script is a kind of conditional execution: only do the following if the preceding succeeded. See e.g. here.

Leader only

During deployment, there should be an EB_IS_COMMAND_LEADER environment variable, which can be tested in order to implement leader_only behavior in .platform hooks (based on this post):

...

if [[ $EB_IS_COMMAND_LEADER == "true" ]];
then 
  python manage.py migrate --noinput;
  python manage.py collectstatic --noinput;
else 
  echo "this instance is NOT the leader";
fi

...
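
Putting the pieces together, a complete .platform/hooks/postdeploy/01_django_migrate.sh based on the snippets above could look like this (a sketch, not a verbatim production script):

#!/bin/bash

# Activate the platform's Python virtual environment (see the PYTHONPATH note above).
source "$PYTHONPATH/activate"

if [[ $EB_IS_COMMAND_LEADER == "true" ]]; then
  # Log which migrations have already been applied, then migrate and collect static files.
  python manage.py showmigrations
  python manage.py migrate --noinput
  python manage.py collectstatic --noinput
else
  echo "this instance is NOT the leader"
fi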

File permission issues

Note that .platform hooks run as the root user, whereas the app runs as webapp. This may lead to file permission errors if a file is created during the manage.py call in a platform hook, e.g. a logfile.

If that happens, a workaround is to run manage.py as the webapp user, for example with the help of su and heredoc:

#!/bin/bash

su webapp << HERE
source "$PYTHONPATH/activate" && {
python manage.py showmigrations;
python manage.py migrate --noinput;
}
HERE
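
Because the heredoc delimiter (HERE) is unquoted, $PYTHONPATH is expanded by the outer root shell before the commands are passed to the webapp user's shell, so the activation still works even if webapp's own environment does not define PYTHONPATH.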
Brooch answered 8/12, 2020 at 13:14 Comment(4)
Is there a way to know if this is the leader, as you can do in container commands? I like this solution more because the environment variables are accessible (I haven't tested it, but I see you are using $PYTHONPATH). The only issue is that I'd rather migrations only run on the leader.Mord
@MatanDrory: Not sure about that. I have not been able to find anything explicit in the documentation, and haven't tried it myself, yet. It looks like you could still use leader_only in .ebextensions, or maybe achieve something similar using tests in your .platform scripts (also note the distinction between hooks and config-hooks).Brooch
@MatanDrory: I found the answer: During deployment, there is an environment property called EB_IS_COMMAND_LEADER which you can check, as described in this post. I'll update the answer.Brooch
NOTE: to activate Python when logged in to the instance (e.g. through eb ssh), you can use the get-config tool to get the value of PYTHONPATH: source "$(/opt/elasticbeanstalk/bin/get-config environment -k PYTHONPATH)/activate"Brooch

In my case, this .config worked:

container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
  02_collectstatic:
    command: "django-admin.py collectstatic --noinput"

I had the command "source /var/app/venv/*/bin/activate && python3 manage.py …" in my config until 4 Jan, and suddenly I got a deployment error.

Arris answered 4/1, 2022 at 13:48 Comment(2)
Didn't work for me.Lippert
Yeah... AWS is changing all the time.Arris

I ran into this issue as well. @nick-brady's answer was the solution until recently, when I started to get the error again.

The issue seemed to be that when AL2 ran python manage.py migrate it didn't have access to my environment variables storing the database connection info.

The solution was to add another file to .ebextensions with the following code:

commands:
  setvars:
    command: /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""' > /etc/profile.d/sh.local
packages:
  yum:
    jq: []

I named this file setvars.config
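
With this in place, /etc/profile.d/sh.local ends up containing one export line per environment property, for example (the values here are placeholders based on the question's settings):

export DJANGO_SETTINGS_MODULE="mysite.settings"
export RDS_HOSTNAME="<your-database-host>"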

Source: https://repost.aws/knowledge-center/elastic-beanstalk-env-variables-shell

Tonality answered 15/9, 2023 at 16:3 Comment(0)
