How to copy Docker images from one host to another without using a repository

How do I transfer a Docker image from one machine to another without using a repository, whether private or public?

I create my own image in VirtualBox, and when it is finished I try to deploy it to other machines for real usage.

Since it is built on my own base image (like Red Hat Linux), it cannot simply be recreated from a Dockerfile; my Dockerfile isn't easily portable.

Are there simple commands I can use? Or another solution?

Tailspin answered 29/5, 2014 at 13:57 Comment(3)
Does this answer your question? How to save all Docker images and copy to another machineTswana
There are ways to copy from one machine to another - see the answer https://mcmap.net/q/40547/-how-to-copy-docker-images-from-one-host-to-another-without-using-a-repository (@kolypto) below. However, gzip compression/decompression on either end may cause more delays than transferring over Ethernet or WiFi. You can set up a private Docker repository of your own which may speed up transferring images between machines.Berrie
Related discussions: https://mcmap.net/q/41056/-incremental-docker-image-save-lt-images-gt-xz-zc-gt-images-tar-xz/320399 , https://mcmap.net/q/41057/-docker-image-size-discrepency-between-local-and-remote/320399, and https://mcmap.net/q/41058/-how-to-compress-the-latest-docker-image/320399Otiliaotina

You will need to save the Docker image as a tar file:

docker save -o <path for generated tar file> <image name>

Then copy your image to a new system with regular file transfer tools such as cp, scp, or rsync (preferred for big files). After that you will have to load the image into Docker:

docker load -i <path to image tar file>

Note that you need to provide a filename (not just a directory) with -o, for example:

docker save -o c:/myfile.tar centos:16

Your image reference may need the repository prefix (the :latest tag is assumed by default):

docker save -o C:\path\to\file.tar repository/imagename

PS: You may need to sudo all commands.
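
Putting the steps above together, here is a minimal sketch of the whole round trip. The image name and SSH host are hypothetical placeholders; adjust both to your setup:

```shell
IMAGE="myrepo/myapp:1.0"                        # hypothetical image name
HOST="user@remotehost"                          # hypothetical SSH target
# Derive a safe filename from the image reference (/ and : become _)
TARFILE="$(echo "$IMAGE" | tr '/:' '__').tar"
docker save -o "$TARFILE" "$IMAGE"
scp "$TARFILE" "$HOST:/tmp/"
ssh "$HOST" "docker load -i /tmp/$TARFILE"
```

Saving by repo:tag (rather than image ID) means the loaded image keeps its name and tag on the target machine.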

Prototherian answered 29/5, 2014 at 17:9 Comment(18)
This is the better answer for images.Martinmas
also, it is better to use repo:tag as the image reference rather than image id. If you use image id, the loaded image will not retain the tag (and you will have to do another step to tag the image).Tower
I used the image id instead of the name:tag Now I'm sitting here, loaded the image and have a <none> for REPOSITORY and TAG. What is the right way to bring the name and tag back? @TowerDriscoll
To tag, first identity the IMAGE ID using docker images, then use docker tag DESIREDIMAGEID mycompany/myreponame. If your image id is 591de551d6e4, you can abbreviate the image id: docker tag 59 mycompany/myreponameAssimilable
@AndiJay Take a look at How to copy docker images from one host to another?Devotion
Make sure the filepath is to a new file or it won't workNorseman
It looks like loading the path to the .tar file that it was saved to on the remote server doesn't work after using docker-machine scp. For example, if you use docker-machine to copy over into new_machine:/home/example.tar, running docker load -i new_machine:/home/example.tar or just docker load -i /home/example.tar does not work.Gumma
@Gumma that sounds pretty normal to me: the image should be copied to a host, not inside a machine. Or am I misunderstanding your comment?Electrometer
@jj_ the answer is rather Linux oriented... isn't it?Casals
@Casals not at all, it even has an example where a windows path is used...Carolus
I've tried this way and I can save the image in the tar file but I can't take it to the other machine. The command line says unsupported os linux when I try to execute the load command in the new machine.Remember
This is not a good solution! Please notice that docker load will squash all layers!Irs
The result won't be compressed, so running pigz --keep <path to image tar file> is almost necessary.Verret
usually you are doing something terribly wrong when running sudo with docker ... your docker should be configured for non-root users to runPretzel
Why is rsync preferred for big files?Selfdrive
it gives me open /home/ahmad/.docker_temp_52129870: permission deniedMachination
So @Daiwei, will a docker image built from a Windows machine can directly load into a Linux machine? I mean is there nothing we need to change?Alasdair
@Abhilash, rsync and scp both have a compression option, which is important for larger files that compress effectively. The preference for rsync for large files is the resume functionality in case the transfer is interrupted. Scp restarts, but rsync will resume from where it left off in most casesTarbes

Transferring a Docker image via SSH, bzipping the content on the fly:

docker save <image> | bzip2 | ssh user@host docker load

Note that docker load automatically decompresses images for you. It supports gzip, bzip2 and xz.

It's also a good idea to put pv in the middle of the pipe to see how the transfer is going:

docker save <image> | bzip2 | pv | ssh user@host docker load

(More info about pv: home page, man page).

  • Use gzip/gunzip when your network is fast and you can upload at 10 Mbit/s or more -- because bzip2 won't be able to compress fast enough, gzip will be much faster (Thanks @Thomas Steinbach)

  • Use xz if you're on really slow network (e.g. mobile internet). xz offers a higher compression ratio (Thanks @jgmjgm)
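
To gauge that tradeoff on your own data before picking a compressor, you can benchmark all three locally. A rough sketch; the generated file merely stands in for real `docker save` output:

```shell
# Create a compressible stand-in for `docker save` output
head -c 1000000 /dev/zero > sample.tar
# Compress with each tool and compare wall-clock time and size
time gzip  -c sample.tar > sample.tar.gz
time bzip2 -c sample.tar > sample.tar.bz2
time xz    -c sample.tar > sample.tar.xz
ls -l sample.tar sample.tar.gz sample.tar.bz2 sample.tar.xz
```

Real image tars (already-compressed layers, binaries) compress far less than zeros, so run this on an actual saved image for meaningful numbers.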

Eavesdrop answered 6/10, 2014 at 23:11 Comment(19)
When using docker-machine, you can do docker $(docker-machine config mach1) save <image> | docker $(docker-machine config mach2) load to copy images between machines mach1 and mach2.Geisler
@manojlds eval $(docker-machine env dev) is good for general communication with a single docker host but not to copy between two machines, since this involves two different docker hosts / docker machines.Geisler
to do this in reverse (remote to local): ssh target_server 'docker save image:latest | bzip2' | pv | bunzip2 | docker loadEcumenical
@Geisler Would you post this as a separate answer? It's exactly what I was looking forWaadt
@Waadt see my new answer - thanks for your encouragement.Geisler
Is there any way to do this when docker requires sudo on the target machine? I tried (without compression) docker save my_img:v1 | ssh -t -t my_user@my_machine sudo docker load. Without the "-t" switch, sudo complains sudo: sorry, you must have a tty to run sudo; with one "-t" it's the same message because ssh says Pseudo-terminal will not be allocated because stdin is not a terminal. and finally, with two "-t"s, I get the content of the tar file (i.e. the image) on my terminal. Any ideas?Albur
@JosefStark I needed to add "Defaults:<target username> !requiretty" when editing the sudoers file to stop the "Sorry" message from sudo. I don't know how much of a difference it makes but I also put everything after the user@host in quotes (so "[...] | ssh user@host 'bunzip2 | sudo docker load'").Henebry
Cool combo. Here my hint: gzip is much faster on high bandwith than bzip: docker save myimage | pv | ssh remoteserver 'docker load' 1,44GiB 0:02:31 [9,71MiB/s] docker save myimage | bzip2 | pv | ssh remoteserver 'bunzip2 | docker load' 613MiB 0:02:47 [3,67MiB/s] docker save myimage | gzip | pv | ssh remoteserver 'gunzip | docker load' 667MiB 0:01:34 [7,08MiB/s]Socio
If you do this regularly, I created a docker-send script based on this answer with support for sudo. I'll probably add support for other compression formats in the future. gist.github.com/flungo/6b5f607db87c3c034609c8dbc5b40966Calvo
@Albur a bit late to the party but for anyone out there looking to do this. Edit the sudoers file to allow sudo without password entry. (Be sure to check security policies though). I.e. add: hunger ALL=(ALL) NOPASSWD:ALLBrainsick
@Brainsick You can grant access without password in a more restrictive fassion: ostechnix.com/…Philine
xz is probably a better option than bzip. gzip for portability, xz for best results. xz level 1 performanes roughly the same as gzip level 9 but produces significantly smaller files.Boyette
In fact, you can almost certainly tell ssh to compress things though it'll probably be a compression library optimised for speed.Boyette
I have tried this solution (docker commit save load scp etc), however, all file generated inside previous container are lost and it like recover to "factor setting". is there any wrong with my setup?Burgonet
remote machine requires sudo permission. any body know how to solve? ``` Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/images/load?quiet=1": dial unix /var/run/docker.sock: connect: permission denied ```Luvenialuwana
@DanielInbaraj Create the docker group on the remote machine and add your user to it. There may be some security implications that I am not completely up to date on. I would think it was worse always to run docker with sudo.Inflection
Even after the docker load was successful at the target location. It required me to do 'docker push image' again at the target location before being able to pull the image and use it.Salaried
@Salaried wait ... doesn't that push and pull the image to and from DockerHub?Dinsdale
@Dinsdale I had a local registry, so DockerHub was not in picture.Salaried

To save an image to any file path or shared NFS location, see the following example.

Get the image id by doing:

docker images

Say you have an image named "matrix-data".

Save the image by name:

docker save -o /home/matrix/matrix-data.tar matrix-data

Copy the image from the path to any host. Now import to your local Docker installation using:

docker load -i <path to copied image file>
Nonaggression answered 4/11, 2014 at 5:54 Comment(1)
And then what? The loaded image, which is fine on the source machine I just copied it from, doesn't work on the target.Pianoforte

You can use a one-liner with DOCKER_HOST variable:

docker save app:1.0 | gzip | DOCKER_HOST=ssh://user@remotehost docker load
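
The VAR=value command form sets the variable for that single command only, so docker save runs against the local daemon while docker load talks to the remote one over SSH. A Docker-free illustration of that scoping (assuming DOCKER_HOST isn't already set in your shell):

```shell
# The variable is visible inside the one command it prefixes...
DOCKER_HOST=ssh://user@remotehost sh -c 'echo "docker host is $DOCKER_HOST"'
# ...but the surrounding shell is unaffected afterwards
echo "after: ${DOCKER_HOST:-unset}"
```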
Hynes answered 3/6, 2020 at 15:13 Comment(4)
This is definitely the command I was looking for, I think this answer deserves more loveOosphere
prereqs: ssh credentials setup on the remote (ssh-copy-id) and the local and remote user both need to be in the docker group (sudo usermod -aG docker $USER)Purnell
Are there any (performance) differences to ssh user@remotehost docker load here?Hangman
@Hangman I made a comparison between the two and this one (DOCKER_HOST=ssh://user@remotehost docker load) seems to be a bit faster than ssh user@remotehost docker load. On my setup the difference was about 5-10 seconds on a process that took about 1 minute and 50 seconds. (I timed both of the options multiple times)Someplace

First save the Docker image to a compressed archive:

docker save <docker image name> | gzip > <docker image name>.tar.gz

Then load the exported image to Docker using the below command:

zcat <docker image name>.tar.gz | docker load
Accepted answered 27/9, 2016 at 4:33 Comment(2)
For loading, docker load < my-image.tar.gz is sufficient. The image gets decompressed automatically for gzip, bzip2, and xz.Ezraezri
Tried this twice. The resulting container does not work. It starts up, but near as I can tell the software on it is not actually running. The same container starts up with no issue on the machine I saved it from. So something is missing from these instructions, as it is from, near as I can tell, literally 100% of the instructions online for doing this.Pianoforte

Run

docker images

to see a list of the images on the host. Let's say you have an image called awesomesauce. In your terminal, cd to the directory where you want to export the image. Now run:

docker save awesomesauce:latest > awesomesauce.tar

Copy the tar file to a thumb drive or whatever, and then copy it to the new host computer.

Now from the new host do:

docker load < awesomesauce.tar

Now go have a coffee and read Hacker News...

Chadwick answered 9/11, 2016 at 20:28 Comment(4)
Worth noting here is that this will only work if save and load are executed on the same OS. Use docker save [image] -o file.tar and docker load -i file.tar to avoid this!Nephelometer
docker save [image] -o file.tar also appears to be wildly fasterHaematogenous
@AndreasForslöw Why does using pipes mean that this only works on the same OS?Dyl
Does not end in a working container on the destination machine.Pianoforte

The fastest way to save and load a Docker image is through gzip:

docker save <image_id> | gzip > image_file.tgz

To load your zipped image on another server, use this command directly; it will be recognized as a zipped image:

docker load -i image_file.tgz

To rename or re-tag the image, use:

docker image tag <image_id> <image_path_name>:<version>

for example:

docker image tag 4444444 your_docker_or_harbor_path/ubuntu:14.0
Guillot answered 31/3, 2022 at 7:13 Comment(2)
This is a great answer, and it works a treat, but do you have any idea how to get the container back?Homeward
Why does everybody give the same nonworking instructions? THIS DOES NOTHING.Pianoforte

For a flattened export of a container's filesystem, use:

docker export CONTAINER_ID > my_container.tar

Use cat my_container.tar | docker import - to import said image.

Isadoraisadore answered 29/5, 2014 at 14:19 Comment(4)
it shall be cat my_container.tar | docker import - my_container:new if import locally according to cmd helpTailspin
This is more for backing up a running container than for deploying an image.Bromberg
I tried docker save at ubuntu machines which all docker images up and running good. Then i docker load them at windows machine. There are many errors when i docker run or start them. Any ideas whats wrong?Barbital
this does not work on windows command prompt or powershell directly because there is no *.tgz support to load package, you may need to install packages and change the command on windowsGuillot

docker-push-ssh is a command line utility I created just for this scenario.

It sets up a temporary private Docker registry on the server, establishes an SSH tunnel from your localhost, pushes your image, then cleans up after itself.

The benefit of this approach over docker save (at the time of writing most answers are using this method) is that only the new layers are pushed to the server, resulting in a MUCH quicker upload.

Oftentimes, using an intermediate registry like Docker Hub is undesirable and cumbersome.

https://github.com/brthor/docker-push-ssh

Install:

pip install docker-push-ssh

Example:

docker-push-ssh -i ~/my_ssh_key [email protected] my-docker-image

The biggest caveat is that you have to manually add your localhost to Docker's insecure_registries configuration. Run the tool once and it will give you an informative error:

Error Pushing Image: Ensure localhost:5000 is added to your insecure registries.
More Details (OS X): https://mcmap.net/q/41060/-where-should-i-set-the-39-insecure-registry-39-flag-on-mac-os


Incoming answered 12/9, 2018 at 5:20 Comment(3)
This is a promising utility. Do any other answers offer a solution which copies only the updated layers? But, I had to work through a) no py3 support, b) ssh identify file must be specified though mine is in default location, and c) port 5000 is already in use on my servers and there is no option to change to another port.Purplish
@Purplish Consider making a PR to the utility on GitHub so others can benefit from your changes.Incoming
There is PR for Python3: github.com/brthor/docker-push-ssh/pull/15 @Incoming can you merge it?Guyon

When using docker-machine, you can copy images between machines mach1 and mach2 with:

docker $(docker-machine config <mach1>) save <image> | docker $(docker-machine config <mach2>) load

And of course you can also stick pv in the middle to get a progress indicator:

docker $(docker-machine config <mach1>) save <image> | pv | docker $(docker-machine config <mach2>) load

You may also omit one of the docker-machine config sub-shells, to use your current default docker-host.

docker save <image> | docker $(docker-machine config <mach>) load

to copy image from current docker-host to mach

or

docker $(docker-machine config <mach>) save <image> | docker load

to copy from mach to current docker-host.

Geisler answered 14/10, 2016 at 9:53 Comment(0)

The best way to save all the images is:

docker save $(docker images --format '{{.Repository}}:{{.Tag}}') -o allimages.tar

The above command will save all the images in allimages.tar. To load the images, go to the directory where you saved them and run:

docker load -i allimages.tar

Just make sure to run these commands in PowerShell, not in Command Prompt.
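
One caveat: images that have lost their tag show up as <none>:<none> and will make docker save fail, so it is common to filter them out first. A sketch using simulated docker images output (the image names are made up):

```shell
# Simulated output of: docker images --format '{{.Repository}}:{{.Tag}}'
IMAGES='nginx:latest
<none>:<none>
app:1.0'
# Keep only properly tagged images before passing them to docker save
TAGGED=$(printf '%s\n' "$IMAGES" | grep -v '<none>')
echo "$TAGGED"
# docker save $TAGGED -o allimages.tar
```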

Brion answered 16/4, 2022 at 5:55 Comment(1)
This answer on this page saying to use docker load, third time I am pointing out that this is missing information. It does not work as stated.Pianoforte

I assume you need to save couchdb-cartridge, which has an image ID of 7ebc8510bc2c:

stratos@Dev-PC:~$ docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
couchdb-cartridge                      latest              7ebc8510bc2c        17 hours ago        1.102 GB
192.168.57.30:5042/couchdb-cartridge   latest              7ebc8510bc2c        17 hours ago        1.102 GB
ubuntu                                 14.04               53bf7a53e890        3 days ago          221.3 MB

Save the image to a tar file named archiveName.tar. I will use /media/sf_docker_vm/ to save the image.

stratos@Dev-PC:~$ docker save imageID > /media/sf_docker_vm/archiveName.tar

Copy the archiveName.tar file to your new Docker instance using whatever method works in your environment, for example FTP, SCP, etc.

Run the docker load command on your new Docker instance and specify the location of the image tar file.

stratos@Dev-PC:~$ docker load < /media/sf_docker_vm/archiveName.tar

Finally, run the docker images command to check that the image is now available.

stratos@Dev-PC:~$ docker images
REPOSITORY                             TAG        IMAGE ID         CREATED             VIRTUAL SIZE
couchdb-cartridge                      latest     7ebc8510bc2c     17 hours ago        1.102 GB
192.168.57.30:5042/couchdb-cartridge   latest     bc8510bc2c       17 hours ago        1.102 GB
ubuntu                                 14.04      4d2eab1c0b9a     3 days ago          221.3 MB


Sparklesparkler answered 29/9, 2014 at 5:25 Comment(1)
Incredible. All these answers all repeating the same two nonworking sets of instructions. Tried it. It doesn't work.Pianoforte

REAL WORLD EXAMPLE

#host1
systemctl stop docker
systemctl start docker
docker commit -p 1d09068ef111 ubuntu001_bkp3
#create backup
docker save -o ubuntu001_bkp3.tar ubuntu001_bkp3

#upload ubuntu001_bkp3.tar to my online drive
aws s3 cp ubuntu001_bkp3.tar s3://mybucket001/


#host2
systemctl stop docker
systemctl start docker
cd /dir1

#download ubuntu001_bkp3.tar from my online drive
aws s3 cp s3://mybucket001/ubuntu001_bkp3.tar /dir1

#restore backup
cat ./ubuntu001_bkp3.tar  | docker load
docker run --name ubuntu001 -it ubuntu001_bkp3:latest bash
docker ps -a
docker attach ubuntu001




Tremolo answered 12/7, 2021 at 0:42 Comment(2)
Why are you stopping docker service before you do anything?Ricketts
it's not working for me !! Getting this error --- > 'open /var/lib/docker/tmp/docker-import-315206241/app/json: no such file or directory'Isochronal

To transfer images from your local Docker installation to a minikube VM:

docker save <image> | (eval $(minikube docker-env) && docker load)
Ascetic answered 26/5, 2017 at 17:25 Comment(0)

All other answers are very helpful. I just went through the same problem and figured out an easy way with docker-machine scp.

Since Docker Machine v0.3.0, scp was introduced to copy files from one Docker machine to another. This is very convenient if you want to copy a file from your local computer to a remote Docker machine such as AWS EC2 or DigitalOcean, because Docker Machine takes care of the SSH credentials for you.

  1. Save your images using docker save like:

    docker save -o docker-images.tar app-web
    
  2. Copy images using docker-machine scp

    docker-machine scp ./docker-images.tar remote-machine:/home/ubuntu
    

Assume your remote Docker machine is remote-machine and the directory you want the tar file to be is /home/ubuntu.

  3. Load the Docker image

    docker-machine ssh remote-machine sudo docker load -i docker-images.tar
    
Corruptible answered 9/2, 2016 at 3:21 Comment(1)
why not just 'scp <source> <remote>' ?Infeudation

If you are working on a Windows machine and uploading to a Linux machine, commands such as

docker save <image> | ssh user@host docker load

will not work if you are using PowerShell, as it seems that PowerShell adds an additional character to the output. If you run the command using cmd (Command Prompt), it will work. As a side note, you can also install gzip using Chocolatey, and the following will then also work from cmd:

docker save <image> | gzip | ssh user@host docker load
Lasala answered 2/12, 2021 at 16:58 Comment(0)

Based on @kolypto's answer, this worked great for me, but only with sudo for docker load:

docker save <image> | bzip2 | pv | ssh user@host sudo docker load

or if you don't have / don't want to install the pv:

docker save <image> | bzip2 | ssh user@host sudo docker load

There is no need to manually zip anything.

Sihon answered 17/2, 2022 at 14:11 Comment(2)
This is because your user is not a member of the "docker" group :) Don't do sudo; just do this once: sudo adduser $USER dockerEavesdrop
This command doesn't even connect to the remote machine for me. Exit status 127. Something is missing from every one of these duplicated answers.Pianoforte

I want to move all images with tags.

```
OUT=$(docker images --format '{{.Repository}}:{{.Tag}}')
OUTPUT=($OUT)
docker save $(echo "${OUTPUT[*]}") -o /dir/images.tar
``` 

Explanation:

First OUT gets all tags but separated with new lines. Second OUTPUT gets all tags in an array. Third $(echo "${OUTPUT[*]}") puts all tags for a single docker save command so that all images are in a single tar.

Additionally, this can be zipped using gzip. On target, run:

tar xvf images.tar.gz -O | docker load

The -O option tells tar to extract the contents to stdout, where they can be grabbed by docker load.
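
The unquoted array expansion this answer relies on can be checked without Docker; the image names below are made up:

```shell
# Stand-in for: OUT=$(docker images --format '{{.Repository}}:{{.Tag}}')
OUT='img1:latest
img2:v2'
OUTPUT=($OUT)            # bash word-splits on newlines into array elements
echo "${OUTPUT[*]}"      # joins back with spaces: img1:latest img2:v2
echo "${#OUTPUT[@]}"     # 2
```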

Johan answered 6/8, 2018 at 21:1 Comment(1)
docker load doesn't leave you with a usable container. Something is missing from this answer, as it is from the five other people on this page who all gave the same nonworking suggestion.Pianoforte

You may use sshfs:

$ sshfs user@ip:/<remote-path> <local-mount-path>
$ docker save <image-id> > <local-mount-path>/myImage.tar
Bathysphere answered 4/5, 2018 at 3:41 Comment(0)

1. Pull an image or a repository from a registry.

docker pull [OPTIONS] NAME[:TAG|@DIGEST]

2. Save it as a .tar file.

docker save [OPTIONS] IMAGE [IMAGE...]

For example:

docker pull hello-world
docker save -o hello-world.tar hello-world
Truckload answered 12/10, 2021 at 18:19 Comment(0)

Script to perform Docker save and load function (tried and tested):

Docker Save:

#!/bin/bash

#files will be saved in the dir 'Docker_images'
mkdir Docker_images
cd Docker_images
directory=`pwd`
c=0
#save the image names in 'list.txt'
docker images | awk 'NR>1 {print $1}' > list.txt
printf "START \n"
input="$directory/list.txt"
#Check and create the image tar for the docker images
while IFS= read -r line
do
     one=`echo $line | awk '{print $1}'`
     two=`echo $line | awk '{print $1}' | cut -c 1-3`
     if [ "$one" != "<none>" ]; then
             c=$((c+1))
             printf "\n $one \n $two \n"
             docker save -o $two$c'.tar' $one
             printf "Docker image number $c successfully converted:   $two$c \n \n"
     fi
done < "$input"

Docker Load:

#!/bin/bash

cd Docker_images/
directory=`pwd`
ls | grep tar > files.txt
c=0
printf "START \n"
input="$directory/files.txt"
while IFS= read -r line
do
     c=$((c+1))
     printf "$c) $line \n"
     docker load -i $line
     printf "$c) Successfully created the Docker image $line  \n \n"
done < "$input"
Chinchy answered 6/9, 2019 at 11:35 Comment(0)

For those who use Windows WSL and Docker Desktop, I recommend a very simple solution.

On your host machine:

1- Stop Docker
2- In a command prompt, type:

   wsl --shutdown

   wsl --export docker-desktop-data E:\docker-desktop\docker-desktop-data.tar

Now you can copy docker-desktop-data.tar to your external storage (for example an external HDD), then copy the file to the destination machine. Then:

On your destination machine:

1- Stop Docker
2- In a command prompt, type:

   wsl --shutdown

   wsl --unregister docker-desktop-data

   wsl --import docker-desktop-data E:\docker-desktop\data E:\docker-desktop\docker-desktop-data.tar --version 2

In this step, you may encounter an error about being unable to create a specific network. Just re-run the import command.

Now start Docker.

Hope it helps!

Schleicher answered 25/3 at 10:30 Comment(0)
