How to copy docker volume from one machine to another?

I have created a docker volume for postgres on my local machine.

docker volume create postgres-data

Then I used this volume to run a Postgres container:

docker run -it -v postgres-data:/var/lib/postgresql/9.6/main postgres

After that I performed some database operations, which were stored automatically in postgres-data. Now I want to copy that volume from my local machine to a remote machine. How can I do that?

Note: the database is very large.
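
To see how much data you are about to move, one option is a quick check with a throwaway container (a sketch; /data is an arbitrary mount point):

docker run --rm -v postgres-data:/data alpine du -sh /data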

Jaborandi asked 23/3, 2017 at 10:26 Comment(3)
Do you have an overlay network in Docker, or swarm mode configured across the two hosts?Deaden
see #27693415 and the discussion referenced in the commentBanda
@Deaden - both machines are in LAN.Jaborandi

If the second machine has SSH enabled, you can use an Alpine container on the first machine to mount the volume, bundle it up, and send it to the second machine.

That would look like this:

docker run --rm -v <SOURCE_DATA_VOLUME_NAME>:/from alpine ash -c \
    "cd /from ; tar -cf - . " | \
    ssh <TARGET_HOST> \
    'docker run --rm -i -v <TARGET_DATA_VOLUME_NAME>:/to alpine ash -c "cd /to ; tar -xpvf - "'

You will need to change:

  • SOURCE_DATA_VOLUME_NAME
  • TARGET_HOST
  • TARGET_DATA_VOLUME_NAME
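
Since the database is large, it is worth compressing the stream. A sketch of the same pipeline with gzip added at both ends (note that -f - has to be the last tar flag); the explicit volume creation is optional, since Docker creates a named volume on first use:

ssh <TARGET_HOST> 'docker volume create <TARGET_DATA_VOLUME_NAME>'

docker run --rm -v <SOURCE_DATA_VOLUME_NAME>:/from alpine ash -c \
    "cd /from ; tar -czf - ." | \
    ssh <TARGET_HOST> \
    'docker run --rm -i -v <TARGET_DATA_VOLUME_NAME>:/to alpine ash -c "cd /to ; tar -xpvzf -"'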

Or you could try this helper script: https://github.com/gdiepen/docker-convenience-scripts
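
If you would rather create an archive file and physically transfer it yourself (scp, rsync, or even a USB drive), a minimal sketch along the same lines; the archive name and the /backup mount are illustrative:

docker run --rm -v <SOURCE_DATA_VOLUME_NAME>:/from -v "$(pwd)":/backup alpine \
    tar -czf /backup/volume-backup.tar.gz -C /from .

scp volume-backup.tar.gz <TARGET_HOST>:~

Then, on the target machine, restore the archive into a volume:

docker run --rm -v <TARGET_DATA_VOLUME_NAME>:/to -v "$HOME":/backup alpine \
    tar -xpzf /backup/volume-backup.tar.gz -C /to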

Hope this helps.

Worldwide answered 23/3, 2017 at 12:7 Comment(8)
I got an error on the CLI. On the other machine Docker is installed with sudo, so I added sudo to your command: sudo: no tty present and no askpass program specified write /dev/stdout: broken pipeJaborandi
I resolved the sudo issue. Now I get the error tar: short read. write /dev/stdout: broken pipe. I ran Docker on the other machine but it does not get the dataJaborandi
Which method are you using? The single line or the helper script?Worldwide
The single line. The issue is resolved now; I copied the volume to the other machine. Your answer is correct, but the data is very large. Can we tar or gzip it for the transfer?Jaborandi
The single line already produces a tar. If you want to compress it, you can add the -z option at both ends; it would look something like docker run --rm -v <SOURCE_DATA_VOLUME_NAME>:/from alpine ash -c "cd /from ; tar -czf - ." | ssh <TARGET_HOST> 'docker run --rm -i -v <TARGET_DATA_VOLUME_NAME>:/to alpine ash -c "cd /to ; tar -xpvzf -"'Worldwide
Thanks, it works perfectly. Is there any way to make a tar archive and physically transfer the data? Can you please explain the command?Jaborandi
Ensure that your destination containers are stopped before copying the volume.Vicinal
Note on the compression syntax: it has to be tar -czf - and tar -xpvzf -, since -f takes - as its argument and must come last.Mycology

I had the exact same problem, but in my case both volumes were in separate VPCs and I couldn't expose SSH to the outside world. I ended up creating dvsync, which uses ngrok to create a tunnel between the two machines and then copies the data with rsync over SSH. In your case you could start the dvsync-server on your machine:

$ docker run --rm -e NGROK_AUTHTOKEN="$NGROK_AUTHTOKEN" \
  --mount source=postgres-data,target=/data,readonly \
  quay.io/suda/dvsync-server

and then start the dvsync-client on the target machine:

$ docker run -e DVSYNC_TOKEN="$DVSYNC_TOKEN" \
  --mount source=MY_TARGET_VOLUME,target=/data \
  quay.io/suda/dvsync-client

The NGROK_AUTHTOKEN can be found in the ngrok dashboard, and the DVSYNC_TOKEN is printed by dvsync-server to its stdout.

Once the synchronization is done, the dvsync-client container will stop.
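
Once it stops, you can sanity-check that the files landed in the target volume, for example with a throwaway container (a quick check; MY_TARGET_VOLUME as above):

docker run --rm -v MY_TARGET_VOLUME:/data alpine ls -la /data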

Duston answered 10/7, 2018 at 19:32 Comment(0)
