Export docker volume to another machine
I am currently trying to use redmine with a postgres database but I'm running into some issues when setting up the environment.

Let's say I have the following compose file:

  db-redmine:
    image: 'bitnami/postgresql:11'
    user: root
    container_name: 'orchestrator_redmine_pg'
    environment:
      - POSTGRESQL_USERNAME=${POSTGRES_USER}
      - POSTGRESQL_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRESQL_DATABASE=${POSTGRES_DB}
      - BITNAMI_DEBUG=true
    volumes:
      - 'postgresql_data:/bitnami/postgresql'

  redmine:
    image: 'bitnami/redmine:4'
    container_name: 'orchestrator_redmine'
    ports:
      - '3000:3000'
    environment:
      - REDMINE_DB_POSTGRES=orchestrator_redmine_pg
      - REDMINE_DB_USERNAME=${POSTGRES_USER}
      - REDMINE_DB_PASSWORD=${POSTGRES_PASSWORD}
      - REDMINE_DB_NAME=${POSTGRES_DB}
      - REDMINE_USERNAME=${REDMINE_USERNAME}
      - REDMINE_PASSWORD=${REDMINE_PASSWORD}
    volumes:
      - 'redmine_data:/bitnami'
    depends_on:
      - db-redmine

volumes:
  postgresql_data:
    driver: local
  redmine_data:
    driver: local

This generates the postgres database for redmine and creates the redmine instance.

Once the containers are up, I enter the redmine instance and configure the application. This means creating custom fields, adding trackers, issue types, etc. Quite a lot of setup is required, so I don't want to do this every time I deploy these containers.

I figured that, because all of the setup data is going to the volumes, I could export those volumes and then import them on a new machine. This way, when both apps start on the new machine, they will have all the necessary information from the previous setup.

This sounded simple enough, but I'm struggling with the export then import phase.

From what I have seen, I am able to export postgresql_data to a .tar file by doing the following:

docker export postgresql_data --output="postgres_data.tar"

But how can I import the newly generated .tar file on a new machine? If I'm not mistaken, by importing the .tar file to a volume called postgresql_data in the new machine, the data from the template will be used for generating the new container.

Is there a way to do this? Is this the correct way of duplicating a setup between two hosts?

Is doing something like docker volume create postgresql_data and then copying the files to the volume directory the way to go?

Allethrin answered 2/7, 2021 at 18:19

My suggestion is to use the pg_dump and pg_restore tools instead of copying the volume. You can add a mount path to the postgres container, e.g.:

db-redmine:
  image: 'bitnami/postgresql:11'
  user: root
  container_name: 'orchestrator_redmine_pg'
  environment:
    - POSTGRESQL_USERNAME=${POSTGRES_USER}
    - POSTGRESQL_PASSWORD=${POSTGRES_PASSWORD}
    - POSTGRESQL_DATABASE=${POSTGRES_DB}
    - BITNAMI_DEBUG=true
  volumes:
    - 'postgresql_data:/bitnami/postgresql'
    - /dump-files:/dump-files

Now log in to the container and run:

pg_dump -Fc "your-db-name" > /dump-files/dump

(the -Fc custom format is the one pg_restore can read; a plain SQL dump would have to be restored with psql instead). Copy this file to any newly created container, and inside the new container run:

pg_restore -d "your-db-name" /dump-files/dump
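Put together, the whole round trip could look roughly like this. This is a sketch, not something from the answer verbatim: it assumes the container name `orchestrator_redmine_pg` and the `POSTGRES_USER`/`POSTGRES_DB` variables from the question's compose file, and uses `docker exec` instead of logging into the container interactively.

```shell
# On the source machine: dump the database in pg_restore-compatible
# custom format (-Fc), writing the dump to the host via stdout
docker exec orchestrator_redmine_pg \
  pg_dump -U "$POSTGRES_USER" -Fc "$POSTGRES_DB" > redmine.pgdump

# Transfer the dump to the new machine (scp shown as one option)
scp redmine.pgdump remotehost:/tmp/redmine.pgdump

# On the new machine, once the fresh containers are up:
# restore the dump into the new (empty) database
docker exec -i orchestrator_redmine_pg \
  pg_restore -U "$POSTGRES_USER" -d "$POSTGRES_DB" < /tmp/redmine.pgdump
```

Note that this only migrates the postgres side; the redmine_data volume (attachments, etc.) would still need to be copied separately.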

Shophar answered 2/7, 2021 at 19:17
Thanks for the answer! Would something like command: pg_restore -d "dbname" /dump-files/dump work as a way of doing it automatically? I will probably take this approach even if it needs to be done manually. – Allethrin
You're welcome! I don't think there is an uncomplicated, safe way of doing it automatically. In any case, in my opinion it is better to avoid running such commands automatically. – Shophar

First, note that docker export has nothing to do with volumes; that command exports a container filesystem as a tar archive. That won't include anything from mounted volumes.

The easiest way to copy the contents of a volume is probably something like:

docker run --rm -v postgresql_data:/data alpine tar -C /data -cf- . |
  ssh remotehost docker run --rm -i -v postgresql_data:/data alpine tar -C /data -xf-

This tars up everything from your existing volume and pipes it over an ssh connection to a remote host, where a new postgresql_data volume is created and populated by extracting the tar archive.

For postgres in particular you could do a pg_dump and pg_restore instead of using tar.
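That alternative could be sketched in the same piped style as the tar command above. This is an assumption-laden example, not part of the original answer: it reuses the container name `orchestrator_redmine_pg` and the `POSTGRES_USER`/`POSTGRES_DB` variables from the question's compose file, and assumes those variables are also set in the remote shell.

```shell
# Stream a custom-format dump straight into the remote database,
# with no intermediate file on either host
docker exec orchestrator_redmine_pg \
  pg_dump -U "$POSTGRES_USER" -Fc "$POSTGRES_DB" |
  ssh remotehost \
    'docker exec -i orchestrator_redmine_pg pg_restore -U "$POSTGRES_USER" -d "$POSTGRES_DB"'
```

A logical dump like this is generally safer across postgres versions than copying the raw data directory, which only works between identical major versions.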

You will have to adjust the volume names, because docker-compose prefixes them with the project name when it creates them.
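For example, you can check what the compose-created volume is actually called before running the tar pipe (a hypothetical project name is shown in the comment; yours will match your compose project's directory name):

```shell
# List volumes whose name contains "postgresql_data";
# compose typically names it <projectname>_postgresql_data
docker volume ls --filter name=postgresql_data
```

Use the full prefixed name in the `-v` flags, or create the target volume with that exact name on the remote host before restoring into it.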

Worldlywise answered 2/7, 2021 at 19:11 Comment(0)

Update

If you have Docker Desktop installed, you can directly export Volumes from there:

[Docker Desktop screenshot showing the volume export option]

Festivity answered 3/10 at 13:29 Comment(0)
