My question is related to this question on copying files from containers to hosts. I have a Dockerfile that fetches dependencies, compiles a build artifact from source, and runs an executable. I also want to copy the build artifact out (in my case it's a `.zip` produced by `sbt dist` in `../target/`), but I think this question also applies to jars, binaries, etc.
`docker cp` works on containers, not images; do I need to start a container just to get a file out of it? In a script, I tried running `/bin/bash` in interactive mode in the background, copying the file out, and then killing the container, but this seems kludgy. Is there a better way?
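A cleaner variant of the same idea is to create a container without ever starting it: `docker create` sets up the container's filesystem but runs nothing, and `docker cp` works on stopped containers. A sketch (the image name `myimage:latest` and the artifact path are placeholders, not from your build):

```shell
# Copy one file out of an image by way of a never-started container.
extract_from_image() {
  # $1 = image, $2 = path inside the image, $3 = destination on the host
  id=$(docker create "$1") || return 1  # container exists but never runs
  docker cp "$id:$2" "$3"               # docker cp works on stopped containers
  docker rm -v "$id" >/dev/null         # remove the throwaway container
}

# Only attempt the copy when a docker daemon is actually available.
if command -v docker >/dev/null 2>&1; then
  extract_from_image myimage:latest /app/target/universal/app.zip ./app.zip
fi
```

Unlike the background-`/bin/bash` approach, nothing ever executes inside the container, so there is no process to kill afterwards.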
On the other hand, I would like to avoid unpacking a `.tar` file after running `docker save $IMAGENAME` just to get one file out (but that seems like the simplest, if slowest, option right now).
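If you do go the `docker save` route, you don't have to unpack everything by hand: the saved tarball contains one inner tar per layer, and you can list those until you find the layer holding the file. A sketch assuming the classic save layout (`<layer-id>/layer.tar`; newer Docker versions store layers under different paths, so adjust the glob accordingly):

```shell
# Pull a single file out of a `docker save` tarball without a full unpack.
extract_from_saved_tar() {
  # $1 = tarball produced by `docker save`, $2 = path inside the image
  #      (no leading slash, the way tar lists it)
  tmp=$(mktemp -d)
  tar -xf "$1" -C "$tmp"
  # In the classic save format each layer directory holds a layer.tar;
  # search those inner tars for the wanted path and extract the first hit.
  for layer in "$tmp"/*/layer.tar; do
    if tar -tf "$layer" 2>/dev/null | grep -qx "$2"; then
      tar -xf "$layer" "$2" && break
    fi
  done
  rm -rf "$tmp"
}
```

Note that the file must be searched for in layer order if it was modified by multiple layers; this sketch just takes the first match.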
I would use Docker volumes, e.g.:

`docker run -v hostdir:out $IMAGENAME /bin/cp/../blah.zip /out`
but I'm running boot2docker
in OSX and I don't know how to directly write to my mac host filesystem (read-write volumes are mounting inside my boot2docker VM, which means I can't easily share a script to extract blah.zip
from an image with others. Thoughts?
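For completeness, the volume approach can be written so it works under boot2docker too, as long as the host directory lives under a path the VM shares with the Mac (`/Users` by default on OS X). A sketch with hypothetical image name and artifact path:

```shell
# Bind-mount a host directory and copy the artifact into it at run time.
# Hypothetical image name and artifact path; adjust both to your build.
# Under boot2docker only host paths shared into the VM (/Users by default
# on OS X) are mountable, so keep OUT_DIR somewhere under such a path.
IMAGE="myimage:latest"
OUT_DIR="$PWD/out"

mkdir -p "$OUT_DIR"
if command -v docker >/dev/null 2>&1; then
  # --rm deletes the throwaway container once the copy has finished.
  docker run --rm -v "$OUT_DIR:/out" "$IMAGE" \
    cp /app/target/universal/app.zip /out/
fi
```

This still requires the image to contain a `cp` binary, which the comment below points out is not always the case.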
`save` is the only option if you do not have a runnable image, e.g. an image `FROM scratch` with `COPY --from ...` lines that does not contain e.g. `bash` and has no `ENTRYPOINT`. The reason is that `docker container create` fails on those images. – Lotetgaronne