When I came to this question, my main concern was data loss. Here is how I copied data from a volume to AWS S3:
# Create a dummy container - I like Python
host$ docker run -it -v my_volume:/datavolume1 python:3.7-slim bash
# Prepare the AWS CLI and credentials
# (executing 'cat ~/.aws/credentials' on your development machine
# will likely show them)
python:3.7-slim$ pip install awscli
python:3.7-slim$ export AWS_ACCESS_KEY_ID=yourkeyid
python:3.7-slim$ export AWS_SECRET_ACCESS_KEY=yoursecretaccesskey
# Copy
python:3.7-slim$ aws s3 cp /datavolume1/thefile.zip s3://bucket/key/thefile.zip
Alternatively, you can use aws s3 sync.
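For example, still inside the same container and with the same placeholder bucket and key, a whole-volume mirror might look like this:
# Mirror the entire volume to S3
python:3.7-slim$ aws s3 sync /datavolume1 s3://bucket/key/
# ... and swap the arguments to restore the files from S3 into the volume
python:3.7-slim$ aws s3 sync s3://bucket/key/ /datavolume1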
MySQL / MariaDB
My specific example was about MySQL / MariaDB. If you want to back up a MySQL / MariaDB database, just execute
$ mysqldump -u [username] -p [database_name] \
--single-transaction --quick --lock-tables=false \
> db1-backup-$(date +%F).sql
You might also want to consider
--skip-add-drop-table
: Do not emit a DROP TABLE statement before each CREATE TABLE. Without this flag, restoring the dump drops any existing table before re-creating it.
--complete-insert
: Include the column names in every INSERT statement. Without this flag, restoring into a table whose column order differs can mis-assign values.
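If the database itself runs in a container, you can run mysqldump through docker exec instead of installing a client on the host. A sketch, assuming a container named my_mariadb and the MYSQL_ROOT_PASSWORD environment variable set by the official image (both are assumptions; adjust to your setup):
# Run mysqldump inside the database container and capture the dump on the host;
# single quotes keep $MYSQL_ROOT_PASSWORD from being expanded by the host shell
host$ docker exec my_mariadb sh -c \
    'exec mysqldump --single-transaction --quick --lock-tables=false \
     -u root -p"$MYSQL_ROOT_PASSWORD" [database_name]' \
    > db1-backup-$(date +%F).sql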
To restore the backup:
$ mysql -u [username] -p [database_name] < [filename].sql
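The same pattern works in reverse for a containerized database, again assuming the hypothetical my_mariadb container and MYSQL_ROOT_PASSWORD variable; the -i flag keeps stdin open so the dump can be piped in:
# Stream the dump from the host into mysql running inside the container
host$ docker exec -i my_mariadb sh -c \
    'exec mysql -u root -p"$MYSQL_ROOT_PASSWORD" [database_name]' \
    < [filename].sql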