How can I back up a Docker container with its data volumes?
Asked Answered

20

268

I've been using the Docker image tutum/wordpress to demonstrate a WordPress website. Recently I found out that the image uses volumes for the MySQL data.

So the problem is this: if I want to back up and restore the container, I can try to commit an image, then later delete the container and create a new container from the committed image. But if I do that, the volume gets deleted and all my data is gone.

There must be some simple way to back up my container plus its volume data, but I can't find it anywhere.

Leeds answered 13/10, 2014 at 1:9 Comment(1)
Check out this script I wrote which backs up absolutely everything in a docker project, including named & unnamed volumes, images, config, logs, container root filesystem, databases, and more: docker-compose-backup.sh.Obligation
213

if I want to revert the container I can try to commit an image, and then later delete the container, and create a new container from the committed image. But if I do that the volume gets deleted and all my data is gone

As the docker user guide explains, data volumes are meant to persist data outside of a container filesystem. This also eases the sharing of data between multiple containers.

While Docker will never delete data in volumes (unless you delete the associated container with docker rm -v), volumes that are not referenced by any docker container are called dangling volumes. Those dangling volumes are difficult to get rid of and difficult to access.

This means that as soon as the last container using a volume is deleted, the data volume becomes dangling and its content difficult to access.

In order to prevent those dangling volumes, the trick is to create an additional docker container using the data volume you want to persist, so that there will always be at least that docker container referencing the volume. This way you can delete the docker container running the wordpress app without losing access to that data volume's content.

Such containers are called data volume containers.
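
For example, a minimal sketch of the pattern (container and volume names are illustrative):

docker create -v /var/lib/mysql --name wordpress_data busybox
docker run -d --volumes-from wordpress_data --name my-wordpress tutum/wordpress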

There must be some simple way to back up my container plus volume data but I can't find it anywhere.

back up docker images

To back up docker images, use the docker save command that will produce a tar archive that can be used later on to create a new docker image with the docker load command.
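
For example (archive and image names are illustrative):

docker save -o wordpress-image.tar tutum/wordpress
docker load -i wordpress-image.tar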

back up docker containers

You can back up a docker container by different means:

  • by committing a new docker image based on the docker container current state using the docker commit command
  • by exporting the docker container file system as a tar archive using the docker export command. You can later on create a new docker image from that tar archive with the docker import command.

Be aware that those commands will only back up the docker container layered file system. This excludes the data volumes.
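
For instance, the two options might look like this (container and image names are illustrative):

docker commit my-wordpress my-wordpress-backup
docker export my-wordpress > my-wordpress.tar
docker import my-wordpress.tar my-wordpress-restored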

back up docker data volumes

To back up a data volume, you can run a new container that mounts the volume you want to back up and execute the tar command to produce an archive of the volume's content, as described in the docker user guide.

In your particular case, the data volume is used to store the data for a MySQL server. So if you want to export a tar archive for this volume, you will need to stop the MySQL server first. To do so you will have to stop the wordpress container.
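
Putting these steps together, a minimal sketch (container name and volume path are assumptions, following the tar pattern from the user guide):

docker stop my-wordpress
docker run --rm --volumes-from my-wordpress -v $(pwd):/backup busybox tar cvf /backup/mysql-data.tar /var/lib/mysql
docker start my-wordpress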

back up the MySQL data

Another way is to connect remotely to the MySQL server and produce a database dump with the mysqldump command. However, for this to work, your MySQL server must be configured to accept remote connections and must have a user who is allowed to connect remotely. This might not be the case with the wordpress docker image you are using.
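
If remote access is available, the dump might look like this (host, user and database name are hypothetical):

mysqldump -h <docker-host-ip> -P 3306 -u backup_user -p wordpress > wordpress-dump.sql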


Edit

Docker recently introduced Docker volume plugins, which allow delegating the handling of volumes to plugins implemented by vendors.

The docker run command has a new behavior for the -v option: it is now possible to pass it a volume name. Volumes created that way are named and easy to reference later on, which eases the issues with dangling volumes.
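
For example (volume and image names are illustrative):

docker run -d -e MYSQL_ROOT_PASSWORD=secret -v wordpress_db_data:/var/lib/mysql mysql:5.7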

Edit 2

Docker introduced the docker volume prune command to delete all dangling volumes easily.

Mayne answered 13/10, 2014 at 12:18 Comment(5)
Actually I'm more interested in making a container that I can move easily, I don't understand the point of a container that can't be moved.Leeds
In that case you should look at tools that help managing Docker data volume for you, such as FlockerMayne
Docker is not deleting data volumes automatically. Data volumes are designed to persist data, independent of the container's life cycle. Docker therefore never automatically deletes volumes when you remove a container, nor will it "garbage collect" volumes that are no longer referenced by a container. So data-only containers are legacy.Buhr
you don't need a remote connection for the mysqldump. Just shell into the container, dump it, and then copy it out with docker cp.Djebel
@AndriiZarubin re: data only container obsolete? Not at all. The data-only container gives you a container to docker exec data-container tar -czf snapshot.tgz /data then docker cp data-container:snapshot.tgz ./snapshot.tgz and the like. If you want the container to be long lived, then make its command something like tail -f /dev/null it never exits, using minimal resources.Napoleon
53

UPDATE 2

Raw single volume backup bash script:

#!/bin/bash
# This script backs up a single volume from a container.
# Data in the given volume is saved in the current directory in a tar archive.
CONTAINER_NAME=$1
VOLUME_PATH=$2

usage() {
  echo "Usage: $0 [container name] [volume path]"
  exit 1
}

if [ -z "$CONTAINER_NAME" ]
then
  echo "Error: missing container name parameter."
  usage
fi

if [ -z "$VOLUME_PATH" ]
then
  echo "Error: missing volume path parameter."
  usage
fi

# Quoting the variables protects against spaces in the working directory path
sudo docker run --rm --volumes-from "$CONTAINER_NAME" -v "$(pwd)":/backup busybox tar cvf /backup/backup.tar "$VOLUME_PATH"

Raw single volume restore bash script:

#!/bin/bash
# This script restores a single volume to a container.
# Data is restored into the volume at the same path it was backed up from.
NEW_CONTAINER_NAME=$1

usage() {
  echo "Usage: $0 [container name]"
  exit 1
}

if [ -z "$NEW_CONTAINER_NAME" ]
then
  echo "Error: missing container name parameter."
  usage
fi

sudo docker run --rm --volumes-from "$NEW_CONTAINER_NAME" -v "$(pwd)":/backup busybox tar xvf /backup/backup.tar

Usage can be like this:

$ volume_backup.sh old_container /srv/www
$ sudo docker stop old_container && sudo docker rm old_container
$ sudo docker run -d --name new_container myrepo/new_container
$ volume_restore.sh new_container

Assumptions are: the backup file is named backup.tar, it resides in the same directory as the backup and restore scripts, and the volume path is the same between containers.

UPDATE

It seems to me that backing up volumes from containers is no different from backing up volumes from data containers.

Volumes are nothing more than paths linked to a container, so the process is the same.

I don't know if docker-backup also works for volumes of a single container, but you can use:

sudo docker run --rm --volumes-from yourcontainer -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data

and:

sudo docker run --rm --volumes-from yournewcontainer -v $(pwd):/backup busybox tar xvf /backup/backup.tar

END UPDATE

There is this nice tool available which lets you back up and restore docker volume containers:

https://github.com/discordianfish/docker-backup

If you have a container linked to some container volumes like this:

$ docker run --volumes-from=my-data-container --name my-server ...

you can backup all the volumes like this:

$ docker-backup store my-server-backup.tar my-server

and restore like this:

$ docker-backup restore my-server-backup.tar

Or you can follow the official way:

How to port data-only volumes from one host to another?

Dubose answered 13/10, 2014 at 12:19 Comment(9)
No it's not a "--volumes-from" situation, rather the volumes are defined in the dockerfile which is what causes the data to not persist. If you look at the dockerfile for tutum/lamp you will see what I mean.Leeds
The answer I already gave is good for any kind of volume because volumes are volumes and containers are containers there is no difference if you use a container as a data container from a volumes perspectiveDubose
The volume that's defined in the dockerfile is destroyed when the container is destroyed. So there's no way to get that data back when you move the container.Leeds
you have to get the data out before moving the container then relaunch the container and put the data backDubose
Right, so in other words, no there's no simple way to move a container that declares volumes in the dockerfile.Leeds
Well as containers are ephemeral data persistence is not their main concern. Nonetheless you can achieve what you want with a very simple bash script. So I think it depends on what you expect and what you define as simple.Dubose
Hmm, I guess I would consider a 3 or 4 line bash script as simple, can it be done?Leeds
I get an error: unknown shorthand flag: 'r' in -rm. Should it be --rm? (Docker version 18.09.5, build e8ff056)Allynallys
fyi the path you're in can't have a space or this script won't workFranciscafranciscan
50

If your project uses docker-compose, here is an approach for backing up and restoring your volumes.

docker-compose.yml

Basically you add db-backup and db-restore services to your docker-compose.yml file, and adapt it for the name of your volume. My volume is named dbdata in this example.

version: "3"

services:
  db:
    image: percona:5.7
    volumes:
      - dbdata:/var/lib/mysql

  db-backup:
    image: alpine    
    tty: false
    environment:
      - TARGET=dbdata
    volumes:
      - ./backup:/backup
      - dbdata:/volume
    command: sh -c "tar -cjf /backup/$${TARGET}.tar.bz2 -C /volume ./"

  db-restore:
    image: alpine    
    environment:
      - SOURCE=dbdata
    volumes:
      - ./backup:/backup
      - dbdata:/volume
    command: sh -c "rm -rf /volume/* /volume/..?* /volume/.[!.]* ; tar -C /volume/ -xjf /backup/$${SOURCE}.tar.bz2"

Avoid corruption

For data consistency, stop your db container before backing up or restoring

docker-compose stop db

Backing up

To back up to the default destination (backup/dbdata.tar.bz2):

docker-compose run --rm db-backup

Or, if you want to specify an alternate target name, do:

docker-compose run --rm -e TARGET=mybackup db-backup

Restoring

To restore from backup/dbdata.tar.bz2, do:

docker-compose run --rm db-restore

Or restore from a specific file using:

docker-compose run --rm -e SOURCE=mybackup db-restore

I adapted commands from https://loomchild.net/2017/03/26/backup-restore-docker-named-volumes/ to create this approach.

Sandpit answered 3/6, 2019 at 18:41 Comment(6)
And if I need to make a cron, wouldn't it be better to solve all this mess with Bind mount? I'm talking about doing it in production.Executioner
@jcarlosweb: sure, you can put these commands into a cron job. Alternatively, yes, you could use a bind mount to back up the volume. It depends on your needs and whether you want a point-in-time snapshot of your data, and whether you are pushing the backup offsite (you might need a cron job either way).Sandpit
What is this going to remove? /volume/..?*Ctenophore
@s1n7ax: Nothing is removed. If you are referring to the --rm in the above commands , that means remove the temporary container (used to run the backup or restore command) after it exits.Sandpit
@Sandpit I'm talking about the /volume/..?* in command: sh -c "rm -rf /volume/* /volume/..?*. What is it going to remove?Ctenophore
@Ctenophore Ah, yes, that code removes the contents of the volume before restoring it from a backup. It takes care to remove all files including those that start with one or two periods, but without trying to remove . (the current directory) or .. (the parent directory). ..?* matches a file starting with two periods followed by one more character (?) followed by zero or more characters (*).Sandpit
30

If you only need to back up mounted volumes, you can simply copy the folders from your Docker host.

Note: if you are on Ubuntu, the Docker host is your local machine. If you are on Mac, the Docker host is your virtual machine.

On Ubuntu

You can find all the folders with volumes here: /var/lib/docker/volumes/ so you can copy them and archive them wherever you want.
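
For example, a minimal sketch (the archive path is an assumption):

sudo tar -C /var/lib/docker -czf ~/docker-volumes-backup.tar.gz volumes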

On Mac

It's not as easy as on Ubuntu: you need to copy the files out of the VM.

Here is a script showing how to copy all the folders with volumes from the virtual machine (where the Docker server is running) to your local machine. We assume that your docker-machine VM is named default.

docker-machine ssh default sudo cp -v -R /var/lib/docker/volumes/ /home/docker/volumes

docker-machine ssh default sudo chmod -R 777 /home/docker/volumes

docker-machine scp -r default:/home/docker/volumes ./backup_volumes

docker-machine ssh default sudo rm -r /home/docker/volumes

It is going to create a folder ./backup_volumes in your current directory and copy all volumes to this folder.

Here is a script showing how to copy all saved volumes from your local directory (./backup_volumes) back to the Docker host machine

docker-machine scp -r ./backup_volumes default:/home/docker

docker-machine ssh default sudo mv -f /home/docker/backup_volumes /home/docker/volumes

docker-machine ssh default sudo chmod -R 777 /home/docker/volumes

docker-machine ssh default sudo cp -v -R /home/docker/volumes /var/lib/docker/

docker-machine ssh default sudo rm -r /home/docker/volumes

Now you can check if it works by:

docker volume ls
Parenteau answered 10/3, 2016 at 14:55 Comment(3)
Do we need to shutdown the container to make a backup of that folder /var/lib/docker/volumes under Ubuntu?Lakendra
Not necessary, you can copy that folder anytime you want.Parenteau
Technically yes, you can, but you are exposed to data corruption issues as the copy is non-atomic and there might be concurrent writes to the volume, I'd rather stop the container first.Kippy
15

Let's say your volume name is data_volume. You can use the following commands to back up and restore the volume to and from a docker image named data_image:

To back up:

docker run --rm --mount source=data_volume,destination=/data alpine tar -c -f- data | docker run -i --name data_container alpine tar -x -f-
docker container commit data_container data_image
docker rm data_container

To restore:

docker run --rm data_image tar -c -f- data | docker run -i --rm --mount source=data_volume,destination=/data alpine tar -x -f-
Despain answered 5/1, 2018 at 11:45 Comment(3)
Is this a real-time back-up?Col
As the same volume can be mounted on multiple dockers, yes this is real-time backup. Eg. volume mounted on a Mysql container can be backed up (assuming no data-corruption). But for services which need to be stopped for fear of data corruption, no this isn't real time.Despain
It's a good approach if you have a full backup from data_image image too for example: docker run --rm -v $(pwd)/backup:/backup data_image tar cvf /backup/backup.tar /dataBradbradan
10

I know this is old, but I realize that there isn't a well-documented solution for pushing a data container (as a backup) to Docker Hub. I just published a short example of how to do so at https://dzone.com/articles/docker-backup-your-data-volumes-to-docker-hub

Following is the bottom line:

The docker tutorial suggests you can back up and restore the data volume locally. We are going to use this technique, and add a few more lines to get this backup pushed into Docker Hub for easy future restoration to any location we desire. So, let's get started. These are the steps to follow:

Back up the data volume from the data container named data-container-backup

docker run --rm --volumes-from data-container-backup --name tmp-backup -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /folderToBackup

Expand this tar file into a new container so we can commit it as part of its image

docker run -d -v $(pwd):/backup --name data-backup ubuntu /bin/sh -c "cd / && tar xvf /backup/backup.tar"

Commit and push the image with a desired tag ($VERSION)

docker commit data-backup repo/data-backup:$VERSION
docker push repo/data-backup:$VERSION

Finally, let's clean up

docker rm data-backup
docker rmi $(docker images -f "dangling=true" -q)

Now we have an image named data-backup in our repo that is simply a filesystem with the backup files and folders. In order to use this image (aka restore from backup), we do the following:

Run the data container with the data-backup image

docker run -v /folderToBackup --entrypoint "/bin/sh" --name data-container repo/data-backup:${VERSION}

Run your whatEver image with volumes from the data-container

docker run --volumes-from=data-container repo/whatEver

That's it.

I was surprised there is no documentation for this workaround. I hope someone finds this helpful. I know it took me a while to think of this.

Tupler answered 22/12, 2016 at 9:31 Comment(0)
6

The following command will run tar in a container with all named data volumes mounted, and redirect the output into a file:

docker run --rm `docker volume list -q | egrep -v '^.{64}$' | awk '{print "-v " $1 ":/mnt/" $1}'` alpine tar -C /mnt -cj . > data-volumes.tar.bz2

Make sure to test the resulting archive in case something went wrong:

tar -tjf data-volumes.tar.bz2
Banbury answered 31/10, 2016 at 20:14 Comment(0)
5

If you just need a simple backup to an archive, you can try my little utility: https://github.com/loomchild/volume-backup

Example

Backup:

docker run -v some_volume:/volume -v /tmp:/backup --rm loomchild/volume-backup backup archive1

will archive the volume named some_volume to the archive file /tmp/archive1.tar.bz2

Restore:

docker run -v some_volume:/volume -v /tmp:/backup --rm loomchild/volume-backup restore archive1

will wipe the volume named some_volume and restore it from the archive file /tmp/archive1.tar.bz2.

More info: https://medium.com/@loomchild/backup-restore-docker-named-volumes-350397b8e362

Joiejoin answered 4/9, 2017 at 19:29 Comment(2)
I created a similar tool github.com/01e9/docker-backup It creates backup archives and adds them to a Resilio sync directoryChartreuse
if the volume is defined in docker-compose, its name is normally prefixed with the name of the directory containing the docker-compose.yml file. e.g. if your docker-compose project is in /nginx-proxy/ and your volume is named db_data, the some_volume in the above example would be nginx-proxy_db_data. Check this with docker volume lsAlvira
3

I have created a tool to orchestrate and launch backup of data and mysql containers, simply called docker-backup. There is even a ready-to-use image on the docker hub.

It's written mostly in Bash, as it is mainly orchestration. It uses duplicity as the actual backup engine. You can currently back up to FTP(S) and Amazon S3.

The configuration is quite simple: write a config file in YAML describing what to back up and where, and there you go!

For data containers, it automatically mounts the volumes shared by the container to back up and processes them. For mysql containers, it links to them, executes a mysqldump bundled with your container, and processes the result.

I wrote it because I use Docker-Cloud, which is not up-to-date with recent docker-engine releases, and because I wanted to embrace the Docker way by not including any backup process inside my application containers.

Jackiejackinoffice answered 25/4, 2017 at 7:52 Comment(0)
2

If you want a complete backup, you will need to perform a few steps:

  1. Commit the container to an image
  2. Save the image
  3. Back up the container's volume by creating a tar file of the volume's mount point in the container.
  4. Repeat steps 1-3 for the database container as well.

Note that doing just a Docker commit of the container to an image does NOT include volumes attached to the container (ref: Docker commit documentation).

"The commit operation will not include any data contained in volumes mounted inside the container."

Bullivant answered 2/5, 2019 at 17:53 Comment(0)
2

We can use an image to back up all our volumes. I wrote a script to help with backup and restore; it also compresses the data into a tar archive so everything is saved on a local disk. I use this script to save my Postgres and Cassandra volumes into the same image. For example, if we have a pg_data volume for Postgres and a cassandra_data volume for Cassandra, we can call the following script twice: once with the pg_data argument, and then with the cassandra_data argument for Cassandra.

backup script:

#!/bin/bash
GENERATE_IMAGE="data_image"
TEMPORARY_CONTAINER_NAME="data_container"
VOLUME_TO_BACKUP=${1}
# Random suffix for archive names (avoid overriding bash's special RANDOM variable)
SUFFIX=$(head -200 /dev/urandom | cksum | cut -f1 -d " ")

# Copy the volume contents into a temporary container, reusing the previously
# generated image if it exists so earlier backups are kept in its layers.
if docker images | grep -q ${GENERATE_IMAGE}; then
    docker run --rm --mount source=${VOLUME_TO_BACKUP},destination=/${VOLUME_TO_BACKUP} ${GENERATE_IMAGE} tar -c -f- ${VOLUME_TO_BACKUP} | docker run -i --name ${TEMPORARY_CONTAINER_NAME} ${GENERATE_IMAGE} tar -x -f-
else
    docker run --rm --mount source=${VOLUME_TO_BACKUP},destination=/${VOLUME_TO_BACKUP} alpine tar -c -f- ${VOLUME_TO_BACKUP} | docker run -i --name ${TEMPORARY_CONTAINER_NAME} alpine tar -x -f-
fi

# Commit the temporary container as the backup image, then remove it
docker container commit ${TEMPORARY_CONTAINER_NAME} ${GENERATE_IMAGE}
docker rm ${TEMPORARY_CONTAINER_NAME}

# Write a tar archive to ./backup, adding the random suffix if an archive
# with the plain name already exists
if [ -f "$(pwd)/backup/${VOLUME_TO_BACKUP}.tar" ]; then
    docker run --rm -v $(pwd)/backup:/backup ${GENERATE_IMAGE} tar cvf /backup/${VOLUME_TO_BACKUP}_${SUFFIX}.tar /${VOLUME_TO_BACKUP}
else
    docker run --rm -v $(pwd)/backup:/backup ${GENERATE_IMAGE} tar cvf /backup/${VOLUME_TO_BACKUP}.tar /${VOLUME_TO_BACKUP}
fi

example:

  • ./backup.sh cassandra_data
  • ./backup.sh pg_data

Restore script:

#!/bin/bash
GENERATE_IMAGE="data_image"
VOLUME_TO_RESTORE=${1}

# Stream the saved directory out of the backup image and unpack it into the named volume
docker run --rm ${GENERATE_IMAGE} tar -c -f- ${VOLUME_TO_RESTORE} | docker run -i --rm --mount source=${VOLUME_TO_RESTORE},destination=/${VOLUME_TO_RESTORE} alpine tar -x -f-

example:

  • ./restore.sh cassandra_data
  • ./restore.sh pg_data
Bradbradan answered 27/7, 2021 at 20:40 Comment(0)
2
docker container run --rm --volumes-from your_db_container -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /your_named_volume

run creates the new container

--rm option removes the container just after the execution of the tar cvf /backup/backup.tar /your_named_volume command

--volumes-from mounts the volumes of your_db_container (including your_named_volume) into the new container

-v $(pwd):/backup creates a bind mount between your current host directory ($(pwd)) and a /backup directory in your new container

tar cvf /backup/backup.tar /your_named_volume creates the archive
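
For completeness, a corresponding restore might look like this (a sketch assuming the same names, following the same docs pattern):

docker container run --rm --volumes-from your_db_container -v $(pwd):/backup ubuntu bash -c "cd / && tar xvf /backup/backup.tar"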

source: backup a volume

Stillas answered 23/9, 2022 at 9:11 Comment(0)
1

The problem: you want to back up your container WITH the data volumes in it, but this option is not available out of the box. The straightforward, trivial way would be to copy the volume paths, back up the docker image, then reload it and link the two together. But this solution seems clumsy and is not sustainable or maintainable: you would need to create a cron job that runs this flow each time.

Solution: use dockup, a Docker image to back up your Docker container volumes and upload them to S3 (Docker + Backup = dockup). dockup will use your AWS credentials to create a new bucket named per the environment variable, collect the configured volumes, and tarball, gzip, time-stamp and upload them to the S3 bucket.

Steps:

  1. Configure the docker-compose.yml and attach the env.txt configuration file to it. The data should be uploaded to a dedicated, secured s3 bucket, ready to be reloaded on DRP executions. In order to verify which volume paths to configure, run docker inspect <service-name> and locate the volumes:

"Volumes": { "/etc/service-example": {}, "/service-example": {} },

  2. Edit the content of the configuration file env.txt, and place it on the project path:

    AWS_ACCESS_KEY_ID=<key_here>
    AWS_SECRET_ACCESS_KEY=<secret_here>
    AWS_DEFAULT_REGION=us-east-1
    BACKUP_NAME=service-backup
    PATHS_TO_BACKUP=/etc/service-example /service-example
    S3_BUCKET_NAME=docker-backups.example.com
    RESTORE=false
    
  3. Run the dockup container:

$ docker run --rm \
--env-file env.txt \
--volumes-from <service-name> \
--name dockup tutum/dockup:latest
  4. Afterwards, verify that your s3 bucket contains the relevant data.
Seabee answered 23/4, 2020 at 14:16 Comment(1)
Whether it works with the AWS instance role in place of credentials?Jollenta
1

I have been using this Bash script to back up all my volumes. The script takes the container name as the single argument, and automatically finds all its mounted volumes.

Then it creates one tar archive for each volume.

#! /bin/bash

container=$1
dirname="backup-$container-$(date +"%FT%H%M%z")"

mkdir $dirname
cd $dirname

volume_paths=( $(docker inspect $container | jq '.[] | .Mounts[].Name, .Mounts[].Source') )

volume_count=$(( ${#volume_paths[@]} / 2 ))

for i in $(seq $volume_count); do

    volume_name=${volume_paths[i-1]}
    volume_name=$(echo $volume_name | tr -d '"')

    volume_path=${volume_paths[(i-1)+volume_count]}
    volume_path=$(echo $volume_path | tr -d '"')
    echo "$volume_name : $volume_path"

    # create a gzip-compressed archive named after the volume
    tar -zcvf "$volume_name.tar.gz" "$volume_path"

done

The code is available on GitHub.

Blackfish answered 19/11, 2022 at 11:56 Comment(1)
What about the container data itself? In our case the container's own volume has data files!Chinchin
0

If you have a case as simple as mine was you can do the following:

  1. Create a Dockerfile that extends the base image of your container
  2. I assume that your volumes are mapped to your filesystem, so you can just add those files/folders to your image using ADD folder destination
  3. Done!

For example, assuming you have the data from the volumes in your home directory, for example at /home/mydata, you can run the following:

DOCKERFILE=/home/dockerfile.bk-myimage
docker build --rm --no-cache -t $IMAGENAME:$TAG -f $DOCKERFILE /home/pirate

Where your DOCKERFILE points to a file like this:

FROM user/myimage
MAINTAINER Danielo Rodríguez Rivero <[email protected]>

WORKDIR /opt/data
ADD mydata .

The rest of the stuff is inherited from the base image. You can now push that image to docker cloud and your users will have the data available directly on their containers

Bedrabble answered 4/1, 2017 at 13:34 Comment(3)
what's the point in using a volume if you're just going to bake it into the image eventually.Djebel
@Djebel having a volume allows you to override the data in the containerBedrabble
I can override data without a volume too, using docker cp.Djebel
0

If you like entering arcane operators from the command line, you’ll love these manual container backup techniques. Keep in mind, there’s a faster and more efficient way to backup containers that’s just as effective. I've written instructions here: https://www.morpheusdata.com/blog/2017-03-02-how-to-create-a-docker-backup-with-morpheus

Step 1: Add a Docker Host to Any Cloud As explained in a tutorial on the Morpheus support site, you can add a Docker host to the cloud of your choice in a matter of seconds. Start by choosing Infrastructure on the main Morpheus navigation bar. Select Hosts at the top of the Infrastructure window, and click the “+Container Hosts” button at the top right.

To back up a Docker host to a cloud via Morpheus, navigate to the Infrastructure screen and open the “+Container Hosts” menu.

Choose a container host type on the menu, select a group, and then enter data in the five fields: Name, Description, Visibility, Select a Cloud and Enter Tags (optional). Click Next, and then configure the host options by choosing a service plan. Note that the Volume, Memory, and CPU count fields will be visible only if the plan you select has custom options enabled.

Here is where you add and size volumes, set memory size and CPU count, and choose a network. You can also configure the OS username and password, the domain name, and the hostname, which by default is the container name you entered previously. Click Next, and then add any Automation Workflows (optional). Finally, review your settings and click Complete to save them.

Step 2: Add Docker Registry Integration to Public or Private Clouds Adam Hicks describes in another Morpheus tutorial how simple it is to integrate with a private Docker Registry. (No added configuration is required to use Morpheus to provision images with Docker’s public hub using the public Docker API.)

Select Integrations under the Admin tab of the main navigation bar, and then choose the “+New Integration” button on the right side of the screen. In the Integration window that appears, select Docker Repository in the Type drop-down menu, enter a name and add the private registry API endpoint. Supply a username and password for the registry you’re using, and click the Save Changes button.

Integrate a Docker Registry with a private cloud via the Morpheus “New Integration” dialog box.

To provision the integration you just created, choose Docker under Type in the Create Instance dialog, select the registry in the Docker Registry drop-down menu under the Configure tab, and then continue provisioning as you would any Docker container.

Step 3: Manage Backups Once you’ve added the Docker host and integrated the registry, a backup will be configured and performed automatically for each instance you provision. Morpheus support provides instructions for viewing backups, creating an instance backup, and creating a server backup.

Floriated answered 3/3, 2017 at 15:38 Comment(0)
0

I would suggest using restic. It's an easy-to-use backup application that can back up to various targets, such as local file systems, S3-compatible storage services, or a restic REST target server, to mention some of the options. Using resticker, you will have an already-prepared container that can be scheduled with cron syntax: https://github.com/djmaze/resticker
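
To sketch the basic idea (the repository path and backup source are assumptions):

restic init --repo /srv/restic-repo
restic -r /srv/restic-repo backup /var/lib/docker/volumes
restic -r /srv/restic-repo snapshots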

For those who want to learn more about restic and its usage, I wrote a blog post series on that topic, including examples of its usage: https://remo-hoeppli.medium.com/restic-backup-i-simple-and-beautiful-backups-bdbbc178669d

Geographical answered 8/3, 2021 at 16:44 Comment(0)
0

There are some great answers in here.

This is what I've been using. I know the question is old but maybe it will help someone.

This is the command that does the backup:

docker run --rm -v "$volume:/mnt/$volume" alpine $TAR_CMD > "$BACKUP_DIR/$timestamp-$volume.tar.bz2"

It runs an alpine container with the volume mounted at /mnt/$volume in the container. The container runs the $TAR_CMD, which writes to stdout, and exits. Stdout is piped into the tar.bz2 file.

The bash script below takes a backup directory and, optionally, volume names to be backed up. If no volume names are provided, it will back up all non-system volumes.

#!/bin/bash

# Get the backup directory as first argument
BACKUP_DIR="$1"

# Check if backup directory is provided
if [ -z "$BACKUP_DIR" ]; then
  echo "Error: Please provide backup directory as first argument."
  exit 1
fi

# Create the backup directory if it doesn't exist
if ! mkdir -p "$BACKUP_DIR"; then
  echo "Error: Could not create backup directory."
  exit 2
fi

# Command to create tar.bz2
TAR_CMD="tar -C /mnt -cj ."

# Get list of volumes to back up
if [ -z "$2" ]; then
  # No volumes provided, back up all volumes
  volumes=$(docker volume list -q | egrep -v '^.{64}$')
else
  shift
  volumes="$@"
fi

# Create one timestamp
timestamp=$(date +"%Y-%m-%d-%H-%M")

# Keep track of any errors that occur during backup process
ERRORS=()

# Create a backup for each volume
for volume in $volumes
do
  # Create the backup file
  verify_volume=$(docker volume inspect "$volume" 2>/dev/null | wc -l)
  if [ "$verify_volume" -le 1 ]; then
    echo "Error: volume $volume does not exist."
    ERRORS+=("$volume")
  elif ! docker run --rm -v "$volume:/mnt/$volume" alpine $TAR_CMD > "$BACKUP_DIR/$timestamp-$volume.tar.bz2"; then
    echo "Error: Could not create backup for volume $volume."
    ERRORS+=("$volume")
  else
    # Get the size of the backup file in human-readable format
    size=$(du -h "$BACKUP_DIR/$timestamp-$volume.tar.bz2" | awk '{print $1}')
    echo "Created backup for $volume: $size"
  fi
done

# Check if any errors occurred and exit with non-zero status code if so
if [ ${#ERRORS[@]} -ne 0 ]; then
  echo "Errors occurred during backup process for volumes: ${ERRORS[@]}"
  exit 3
fi
Circumscissile answered 7/3, 2023 at 4:23 Comment(0)
0

Here is a successful, tried & tested volume backup & restore.

BACKUP:

docker run -v [volume-name]:/volume --rm --log-driver none loomchild/volume-backup backup > [archive-path]

Backup Example:

docker run -v your_volume_name:/volume --rm --log-driver none loomchild/volume-backup backup > ./your_volume_name.tar.bz2

To find the volume name, use docker volume ls. To store your backup on the host machine, add ./ before the name of the backup, as done in the example above.

RESTORE:

docker run -i -v [volume-name]:/volume --rm loomchild/volume-backup restore < [archive-path]

Restore Example:

docker run -i -v your_volume_name:/volume --rm loomchild/volume-backup restore < ./your_volume_name.tar.bz2

This will successfully restore your volume.

To verify, use the command docker volume ls.

IMPORTANT: After restoring the backup volume, if you try to use it in the previous container, it won't get mounted; instead, docker will create a new volume for it.

HOW TO SOLVE THIS?

In your docker-compose.yaml, in the volumes section, replace the lines below.

OLD (not working):

volumes:
    your_volume_name:

NEW (working) code:

volumes:
    your_volume_name:
        external: true
        name: your_volume_name

Now rerun docker-compose down & then docker-compose up.

Metchnikoff answered 2/10, 2023 at 15:53 Comment(0)
-1

This is a volume-folder backup method.
If you have docker registry infrastructure, this method is very helpful.
It uses the docker registry to move the backup archive easily.

#!/bin/bash
# volume folder backup script

# common bash variables. Set these variables before running the script
REPO=harbor.otcysk.org:20443/levee
VFOLDER=/data/mariadb
TAG=mariadb1

# archive the local folder with the volume files
tar cvfz volume-backup.tar.gz $VFOLDER

# copy the archive into the volume-backup container.
# the archive must be in the current folder.
docker run -d -v $(pwd):/temp --name volume-backup ubuntu \
       bash -c "cd / && cp /temp/volume-backup.tar.gz ."


#commit for pushing into REPO
docker commit volume-backup $REPO/volume-backup:$TAG

#check gz files in this container
#docker run --rm -it --entrypoint bash --name check-volume-backup \
#        $REPO/volume-backup:$TAG

#push into REPO
docker push $REPO/volume-backup:$TAG

On another server

#pull the image in another server
docker pull $REPO/volume-backup:$TAG

#restore files in another server filesystem
docker run --rm -v $VFOLDER:$VFOLDER --name volume-backup $REPO/volume-backup:$TAG \
       bash -c "cd / && tar xvfz volume-backup.tar.gz"

Run your image which uses this volume folder.
You can easily make an image which contains both a run-image and a volume archive.
But I do not recommend it, for various reasons (image size, entry command, ...).

Overreact answered 12/3, 2020 at 3:23 Comment(0)
