Using SSH keys inside docker container
B

34

499

I have an app that executes various fun stuff with Git (like running git clone & git push) and I'm trying to docker-ize it.

I'm running into an issue though where I need to be able to add an SSH key to the container for the container 'user' to use.

I tried copying it into /root/.ssh/, changing $HOME, creating a git ssh wrapper, and still no luck.

Here is the Dockerfile for reference:

#DOCKER-VERSION 0.3.4                                                           

from  ubuntu:12.04                                                              

RUN  apt-get update                                                             
RUN  apt-get install python-software-properties python g++ make git-core openssh-server -y
RUN  add-apt-repository ppa:chris-lea/node.js                                   
RUN  echo "deb http://archive.ubuntu.com/ubuntu precise universe" >> /etc/apt/sources.list
RUN  apt-get update                                                             
RUN  apt-get install nodejs -y                                                  

ADD . /src                                                                       
ADD ../../home/ubuntu/.ssh/id_rsa /root/.ssh/id_rsa                             
RUN   cd /src; npm install                                                      

EXPOSE  808:808                                                                 

CMD   [ "node", "/src/app.js"]

app.js runs the git commands like git pull

Bloody answered 8/8, 2013 at 21:23 Comment(3)
Anyone approaching this question ought to think through the end game, as it's easy to create a security hole here and forget about it if you're not careful. Read all answers and choose wisely.Mosa
It is available now, see https://mcmap.net/q/12882/-using-ssh-keys-inside-docker-containerAmmonic
I have an answer here, using ssh-add, which is considered safe (as Josh Habdas says above, choose wisely). I had real difficulties to make it work on Ubuntu 20.04, mainly because of the fact that debugging docker is difficult (see Debugging Docker build) but also because of AppArmor and the name of the key which by default has to be id_rsa.Briefing
B
105

It turns out that when using Ubuntu, the ssh_config isn't set up correctly. You need to add

RUN  echo "    IdentityFile ~/.ssh/id_rsa" >> /etc/ssh/ssh_config

to your Dockerfile in order to get it to recognize your ssh key.
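
For context, a minimal sketch (not part of the original answer) of how this line can fit into a Dockerfile like the one in the question; the key filename, paths and github.com host are assumptions, and note the security caveats in the comments and later answers about baking a private key into an image:

# id_rsa must sit next to the Dockerfile; this stores the key in the image layers
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa && \
    echo "    IdentityFile ~/.ssh/id_rsa" >> /etc/ssh/ssh_config && \
    ssh-keyscan github.com >> /root/.ssh/known_hosts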

Bloody answered 9/8, 2013 at 0:19 Comment(6)
You probably also need to set the correct username like this RUN echo " Host example.com" >> /root/.ssh/config RUN echo " User <someusername>" >> /root/.ssh/configSammy
Why would someone copy private key from a host machine to a container. Command is OK, but I don't see sense in doing of above-mentioned...Bandeen
This isn't secure! See my solution below for the latest 1.13 version of Docker. @BloodyFarewell
@VladimirDjuricic There are things like deployment keys though.Marquetry
actually you need to run ssh-keygen -A to setup ssh properly on ubuntu minimal container. Then you can add pub/priv keys and start sshd. I have this entry in my dockerfile: 'RUN ssh-keygen -A' as one of the steps.Fonz
@VladimirDjuricic This appears to be the suggested way to install private packages during AWS sam build commands, since the command can only take a docker image (not a container with a volume mount or dockerfile with an ssh socket) github.com/aws/aws-sam-cli/pull/3084#issuecomment-1430195521Arouse
F
252

It's a harder problem if you need to use SSH at build time. For example if you're using git clone, or in my case pip and npm to download from a private repository.

The solution I found is to add your keys using the --build-arg flag. Then you can use the new experimental --squash command (added 1.13) to merge the layers so that the keys are no longer available after removal. Here's my solution:

Build command

$ docker build -t example --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" --build-arg ssh_pub_key="$(cat ~/.ssh/id_rsa.pub)" --squash .

Dockerfile

FROM python:3.6-slim

ARG ssh_prv_key
ARG ssh_pub_key

RUN apt-get update && \
    apt-get install -y \
        git \
        openssh-server \
        libmysqlclient-dev

# Authorize SSH Host
RUN mkdir -p /root/.ssh && \
    chmod 0700 /root/.ssh
# See: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/githubs-ssh-key-fingerprints
COPY known_hosts /root/.ssh/known_hosts

# Add the keys and set permissions
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
    echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
    chmod 600 /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa.pub

# Avoid cache purge by adding requirements first
ADD ./requirements.txt /app/requirements.txt

WORKDIR /app/

RUN pip install -r requirements.txt

# Remove SSH keys
RUN rm -rf /root/.ssh/

# Add the rest of the files
ADD . .

CMD python manage.py runserver

Update: If you're using Docker 1.13 and have experimental features on you can append --squash to the build command which will merge the layers, removing the SSH keys and hiding them from docker history.

Farewell answered 8/2, 2017 at 22:57 Comment(17)
This GitHub issue thread would indicate that this approach is still not secure. See this comment for another similar solution.Craggie
Another solution instead of squashing is to add and remove the key in the same RUN command, and in between adding and removing you use it for what you need it for.Prat
Maybe you can remove the lines for creating the id_rsa.pub file as it is not required.Piperonal
Instead of squashing, make use of multi stage image builds.Candiot
If your key is password protected, use $(openssl rsa -in ~/.ssh/id_rsa) insteadRetiform
cannot do $(cat ~/.ssh/id_rsa) in docker-composeOverzealous
I get Error loading key "/root/.ssh/id_rsa": invalid format. Echo will remove newlines/tack on double quotes for me. Is this only for ubuntu or is there something different for alpine:3.10.3?Treatise
Do not echo the private key into a file (echo "$ssh_prv_key" > /root/.ssh/id_rsa). That will destroy the line format, at least in my case, see https://mcmap.net/q/13201/-docker-load-key-quot-root-ssh-id_rsa-quot-invalid-format.Hausfrau
Using ssh-keyscan in a docker file is really not secure! You are completely bypassing a security check to see if github.com is being spoofed by an attacker. Instead you should once manually execute ssh-keyscan, save the result and then COPY that in your docker file. That way if an attacker succeeds in spoofing github.com your build will fail and not copy their malicious code into your build.Nickelic
@PhilipCouling In a CI environment, you can't simply manually run things. Unless if you store known_hosts that you've previously manually generated somewhere. But in that case, is it safe to store that in the repository so that the CI can COPY it?Complicity
@Complicity please read up on ssh and public private key authentication. The content of known_hosts is public keys. Yes it is absolutely secure to save these somewhere they can be copied. The whole point of a public key is that it is public knowledge. ssh-keyscan is just copying GitHub's pubic key. Everyone who runs it gets the same result.Nickelic
@PhilipCouling It was a question rather than a comment but ok. No need to be condescending.Complicity
@Complicity Apologies, Not intentionally condesending. Genuinely it's a security risk. People follow these answers "in good faith".Nickelic
See https://mcmap.net/q/12882/-using-ssh-keys-inside-docker-container, this is supported nowAmmonic
i liked the link shared by @funnydman, seems better also this medium link for more detailsBlanks
@JosiahL. how did you solve this? any other way that can replace echo and keep it simple?Albion
@Albion Opted not to use echo/env vars. I had a pre-docker compose script that grabs the id_rsa file itself however I like, copy it into the docker image, do what I need to do, then delete the id_rsa. These docker images are private though, not sure you want to do this for public images. If the image is public, you probably want to use build targets/stages so you don't accidentally publish an image with a private rsa key. This was a long time ago though, today I would have a pre script that clones/pulls my private stuff, THEN build the docker image using thoseTreatise
C
102

If you are using Docker Compose, an easy choice is to forward the SSH agent like this:

something:
    container_name: something
    volumes:
        - $SSH_AUTH_SOCK:/ssh-agent # Forward local machine SSH key to docker
    environment:
        SSH_AUTH_SOCK: /ssh-agent

or equivalently, if using docker run:

$ docker run --mount type=bind,source=$SSH_AUTH_SOCK,target=/ssh-agent \
             --env SSH_AUTH_SOCK=/ssh-agent \
             some-image
Commend answered 15/4, 2016 at 13:24 Comment(11)
Just a note that this doesn't work for Mac hosts, whether using docker-machine (via VirtualBox) or Docker for Mac (which uses xhyve) because the unix domain sockets aren't proxied.Apology
SSH_AUTH_SOCK is a variable, which contains a path to a ssh-agentCommend
more details about SSH_AUTH_SOCK blog.joncairns.com/2013/12/understanding-ssh-agent-and-ssh-addLysozyme
ssh-forwarding is now also supported on macOS hosts - instead of mounting the path of $SSH_AUTH_SOCK, you have to mount this path - /run/host-services/ssh-auth.sock.Nonperishable
This works great! I did have to define the environment variable like SSH_AUTH_SOCK=/ssh-agent versus as-written in the answer. With the colon I got an error that my variable wasn't a string.Maloney
I think it's worth pointing out that with this solution you'll get an error in the container if you try using SSH before the key you need is added to the agent on the host. It makes sense, you decided to allow SSH access without putting any keys in the container, but it might not be entirely intuitive to someone who's not familiar with the problem you wanted to solve, so it might be a good idea to document it somewhere.Sandasandakan
Actually, the next day after applying this solution I wasn't able to start my container. As it turned out, the agent socket changed and it was could no longer be mounted in the container. I downed and upped the service and was able to start the container again, but the agent no longer worked inside it. So I'm back with mounting ~/.ssh in the container instead.Sandasandakan
@RafałG. can you expand on why it won't work in the container? I don't understand - if the ssh agent is forwarded to the host, and the key is available to the host, why doesn't the container get to use the key? I have to ssh-add -k to make the container be able to sshOuttalk
@AndyRay We mount $SSH_AUTH_SOCK in the container. This path has some random elements and will change after reboot, so the path will no longer be valid. Maybe we could set $SSH_AUTH_SOCK on the host to a fixed path (just like we do in the container), but I guess it's semi-random for a reason, so I'd be cautious.Sandasandakan
This doesn't seem to work for WSL for some reason :(Woo
under what permissions should I mount the ssh auth socket?Ingridingrim
P
96

Note: only use this approach for images that are private and will always be!

The ssh key remains stored within the image, even if you remove the key in a layer command after adding it (see comments in this post).

In my case this is ok, so this is what I am using:

# Setup for ssh onto github
RUN mkdir -p /root/.ssh
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 700 /root/.ssh/id_rsa
RUN echo "Host github.com\n\tStrictHostKeyChecking no\n" >> /root/.ssh/config
Preponderant answered 24/7, 2014 at 15:4 Comment(11)
This will keep your key in the image, don't do it.Isabea
@Isabea you are right, this does store the key in the image, and that might be a security issue in some cases. Thanks for highlighting that. However, there are many situations where this is perfectly save. For example for images that are stored in a private repository, or images that are built directly on a production server copying the local keys to the image.Preponderant
Also, if you install your vendors within the Dockerfile, there is nothing stopping you from removing the ssh key once the vendors are installed.Canto
@SebScoFr, apparently the keys will be stored in one of the layers, even if you remove them in a later command (see link in updated answer). So the image will always expose the ssh key, and the solution should only be used for private images!Preponderant
@Preponderant not if you --squash the buildTuranian
@Turanian are you sure that squashing is save in this case and the content of the image can not be reversed engineered if you know docker well enough? From the docs there is not much information about what squashing does (except reducing everything to one layer of course).Preponderant
# FROM node:argon FROM node:latest RUN mkdir -p /root/.ssh ADD id_rsa /root/.ssh/id_rsa RUN chmod 700 /root/.ssh/id_rsa RUN echo "Host github.com\n\tStrictHostKeyChecking no\n" >> /root/.ssh/config WORKDIR /app RUN git clone [email protected]:ImperialCardioGenetics/gnomadjs.git RUN cd gnomadjs RUN yarn RUN cd projects/variantfx RUN npm install EXPOSE 8013 CMD ["npm", "start"] ~ I Tried this but didnt work. Any reason? ThanksArchival
Why not RUN chmod 400 /root/.ssh/id_rsa?Psychodiagnostics
Would this work with Bitbucket as well? What would be the syntax for the last command? Just replacing "github.com" with "bitbucket.org"?Gowon
this one really helpful RUN echo "Host github.com\n\tStrictHostKeyChecking no\n" >> /root/.ssh/configUnpack
StrictHostKeyChecking no really isn't a good idea here since it allows someone to MITM your app by pretending to be Github.com. It'd be better to just hardcode the host key Github publishes docs.github.com/en/authentication/… in your known_hostsRodomontade
R
76

Expanding on Peter Grainger's answer, I was able to use multi-stage builds, available since Docker 17.05. The official page states:

With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image.

Keeping this in mind, here is my example of a Dockerfile with three build stages. It's meant to create a production image of a client web application.

# Stage 1: get sources from npm and git over ssh
FROM node:carbon AS sources
ARG SSH_KEY
ARG SSH_KEY_PASSPHRASE
RUN mkdir -p /root/.ssh && \
    chmod 0700 /root/.ssh && \
    ssh-keyscan bitbucket.org > /root/.ssh/known_hosts && \
    echo "${SSH_KEY}" > /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa
WORKDIR /app/
COPY package*.json yarn.lock /app/
RUN eval `ssh-agent -s` && \
    printf "${SSH_KEY_PASSPHRASE}\n" | ssh-add $HOME/.ssh/id_rsa && \
    yarn --pure-lockfile --mutex file --network-concurrency 1 && \
    rm -rf /root/.ssh/

# Stage 2: build minified production code
FROM node:carbon AS production
WORKDIR /app/
COPY --from=sources /app/ /app/
COPY . /app/
RUN yarn build:prod

# Stage 3: include only built production files and host them with Node Express server
FROM node:carbon
WORKDIR /app/
RUN yarn add express
COPY --from=production /app/dist/ /app/dist/
COPY server.js /app/
EXPOSE 33330
CMD ["node", "server.js"]

.dockerignore repeats contents of .gitignore file (it prevents node_modules and resulting dist directories of the project from being copied):

.idea
dist
node_modules
*.log

Command example to build an image:

$ docker build -t ezze/geoport:0.6.0 \
  --build-arg SSH_KEY="$(cat ~/.ssh/id_rsa)" \
  --build-arg SSH_KEY_PASSPHRASE="my_super_secret" \
  ./

If your private SSH key doesn't have a passphrase just specify empty SSH_KEY_PASSPHRASE argument.

This is how it works:

1). On the first stage only package.json, yarn.lock files and private SSH key are copied to the first intermediate image named sources. In order to avoid further SSH key passphrase prompts it is automatically added to ssh-agent. Finally yarn command installs all required dependencies from NPM and clones private git repositories from Bitbucket over SSH.

2). The second stage builds and minifies source code of web application and places it in dist directory of the next intermediate image named production. Note that source code of installed node_modules is copied from the image named sources produced on the first stage by this line:

COPY --from=sources /app/ /app/

Probably it also could be the following line:

COPY --from=sources /app/node_modules/ /app/node_modules/

We have only node_modules directory from the first intermediate image here, no SSH_KEY and SSH_KEY_PASSPHRASE arguments anymore. All the rest required for build is copied from our project directory.

3). On the third stage we reduce the size of the final image that will be tagged as ezze/geoport:0.6.0 by including only the dist directory from the second intermediate image named production and installing Node Express for starting a web server.

Listing images gives an output like this:

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ezze/geoport        0.6.0               8e8809c4e996        3 hours ago         717MB
<none>              <none>              1f6518644324        3 hours ago         1.1GB
<none>              <none>              fa00f1182917        4 hours ago         1.63GB
node                carbon              b87c2ad8344d        4 weeks ago         676MB

where the non-tagged images correspond to the first and second intermediate build stages.

If you run

$ docker history ezze/geoport:0.6.0 --no-trunc

you will not see any mentions of SSH_KEY and SSH_KEY_PASSPHRASE in the final image.

Reverential answered 1/2, 2018 at 14:25 Comment(4)
Old post, but I want to stress this is by far the best way of doing it pre 18.09. Squash is unnecessary, and risk prone. With multi-stage, you know you are only bringing in the artifacts you want. Think of squash as opt-out of the files you don't want, and multistage as opt-in. This answer needs to be higher. Baking your ssh keys in the image is terrible practice.Counterstroke
@Reverential Thank you very much for this very useful post :) SSH-agent is driving me crazy, I did something similar as what u did : I correctly see in docker build logs Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa) but when I check in another RUN or even in the same RUN command by doing a ssh-add -l it tells me that "The agent has no identities". Starting to pull my hairs off, any thoughts ?Olshausen
Do not echo the private key into a file (echo "$ssh_prv_key" > /root/.ssh/id_rsa). That will destroy the line format, at least in my case, see https://mcmap.net/q/13201/-docker-load-key-quot-root-ssh-id_rsa-quot-invalid-format.Hausfrau
Best answer. BTW, you can remove the passphrase completely if it is blank. Also, if you're using it for a module, make sure that in package.json you add the git user, otherwise bitbucket won't authenticate. It should be "my-pkg": "ssh://git@bitbucket.org:[user]/repo.git. If you don't add the git@ then you can see that it tries to authenticate as root instead of git and it doesn't work.Jowers
A
47

This is available as of the 18.09 release!

According to the documentation:

The docker build has a --ssh option to allow the Docker Engine to forward SSH agent connections.

Here is an example of Dockerfile using SSH in the container:

# syntax=docker/dockerfile:experimental
FROM alpine

# Install ssh client and git
RUN apk add --no-cache openssh-client git

# Download public key for github.com
RUN mkdir -p -m 0600 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts

# Clone private repository
RUN --mount=type=ssh git clone git@github.com:myorg/myproject.git myproject

Once the Dockerfile is created, use the --ssh option for connectivity with the SSH agent:

$ docker build --ssh default .

Also, take a look at https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066

Ammonic answered 21/2, 2021 at 11:1 Comment(2)
Linux users will need to enable BuildKit to be able to make use of this as it does not appear to be enabled by default. This can be done either by running export DOCKER_BUILDKIT=1 before running your build or by configuring your Docker Daemon to have it enabled by default by putting { "features": { "buildkit": true } } into the file at: /etc/docker/daemon.json (at least that's how it's done on Ubuntu 20.04, other distros may vary.) Docs: docs.docker.com/develop/develop-images/build_enhancements/…Similitude
Another important thing on Linux, you need to edit the AppArmor if enabled on your system. In my case, it would prevent access to the ssh-agent keyring socket. See Go Build in Docker.Briefing
M
41

In order to inject your ssh key into a container, you have multiple solutions:

  1. Using a Dockerfile with the ADD instruction, you can inject it during your build process

  2. Simply doing something like cat id_rsa | docker run -i <image> sh -c 'cat > /root/.ssh/id_rsa'

  3. Using the docker cp command, which allows you to inject files while a container is running (a sketch follows this list).
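
As a hedged illustration of option 3 on a recent Docker version (the container name mycontainer is a placeholder, and docker exec is used here only to fix the key's permissions):

$ docker cp ~/.ssh/id_rsa mycontainer:/root/.ssh/id_rsa
$ docker exec mycontainer chmod 600 /root/.ssh/id_rsa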

Mcclendon answered 8/8, 2013 at 21:44 Comment(11)
So, as of now, I've tried copying it into /root/.ssh/id_rsa but still receive "Host key verification failed. fatal: The remote end hung up unexpectedly" errors from Git, which I'm pretty sure means it isn't using the key for whatever reason. So I'm thinking there is something else I need to do to actually tell the system to use it as it's ssh key? Not sure exactly how to debug this one. (and I know this key works because it runs without issue from the host)Bloody
can you make sure the /etc/ssh/ssh_config target the correct key file?Mcclendon
Is there a good way to inspect the docker container's files? Or should I just try and copy in a valid configuration?Bloody
I just tried with 'base' image, doing apt-get install openssh-server and putting my key in /root/.ssh/id_rsa and it worked fine. What image are you using?Mcclendon
if you need to inspect a container's file, the best way would be to commit and run the resulting image with 'cat'.Mcclendon
I'm using ubuntu:12.04, I'll get the Dockerfile up in a second.Bloody
Got it fixed! Thanks for the help and the hint on using cat that was a big help @McclendonBloody
I thought docker cp was only for copying from containers?Indent
@Mcclendon You can't use it from the build step because the ADD or COPY command requires that your keys reside in the context of the build!Farewell
While this answer is enlightening and practical what it lacks is to adequately describe when someone should use these techniques and eschews the should they part.Mosa
This is the security hole the guy commented about on the parent post. Is anyone certain they want to inject their PRIVATE id_rsa keystore into a Docker image to be passed around?Mycorrhiza
M
32

One cross-platform solution is to use a bind mount to share the host's .ssh folder to the container:

docker run -v /home/<host user>/.ssh:/home/<docker user>/.ssh <image>

Similar to agent forwarding, this approach will make your keys accessible to the container. An additional upside is that it works with a non-root user too and will get you connected to GitHub. One caveat to consider, however, is that all contents (including private keys) of the .ssh folder will be shared, so this approach is only desirable for development and only for trusted container images.

Masturbation answered 25/9, 2017 at 13:34 Comment(9)
this might work, but not during docker build only during docker runFiberglass
That's exactly the point. You don't want to put your ssh keys inside a docker file.Masturbation
Given SSH agent forwarding doesn't work outside Linux this makes a fine solution for getting up-and-running in a development environment without a lot of fuss.Mosa
I am running docker using docker-compose up in my local Windows 10. How should I use your solution in that scenario?Placatory
Essentially you are asking how to map volume in docker compose. Above there is an answer answering this. Specifically for Windows this might help stackoverflow.com/questions/41334021/…Masturbation
Thanks for point out this solution. This ended up being the simplest solution for using with my "builder" image which is only ever used locally to build my linux executable.Wilke
Only I run docker build ... as myself and that tool changes the user to docker so it won't work. Using the ssh-add and the --ssh default command line option will work (although under Linux you need to fix the AppArmor if enabled on your computer, see Issue 1: AppArmor)Briefing
Duplicate of this earlier answer.Hausfrau
Love this solution. Good for development.Condescend
E
28

Starting from Docker API 1.39+ (check your API version with docker version), docker build allows the --ssh option with either an agent socket or keys, to let the Docker Engine forward SSH agent connections.

Build Command

export DOCKER_BUILDKIT=1
docker build --ssh default=~/.ssh/id_rsa .

Dockerfile

# syntax=docker/dockerfile:experimental
FROM python:3.7

# Install ssh client (if required)
RUN apt-get update -qq
RUN apt-get install openssh-client -y

# Download public key for github.com
RUN --mount=type=ssh mkdir -p -m 0600 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts

# Clone private repository
RUN --mount=type=ssh git clone git@github.com:myorg/myproject.git myproject

More Info:

Elissaelita answered 15/11, 2019 at 19:47 Comment(11)
The tilde expansion did not work for me; I got: could not parse ssh: [default=~/.ssh/id_rsa]: stat ~/.ssh/id_rsa: no such file or directory. Use the full path if it does not work.Buttocks
But this will put the SSH key in the image, itself. Useful for development, but not safe for production.Bagley
@CameronHudson That's not true, the SSH connection is forwarded to the host and at build time only, the SSH keys are never added to the image.Elissaelita
After reading more closely, you're right @EdisonArango, it doesn't put the SSH key in the final image. However, it looks like the SSH key is only available at build time, not at runtime. This could work for some use cases, but OP and I are trying to use the SSH key at runtime.Bagley
@CameronHudson I believe in that case, you can just create a bind volume from the host to the container, and adding the SSH key inside that volume.Elissaelita
This doesn't work for me in docker 19.03.13 which claims to support API version 1.40. I get Error response from daemon: Dockerfile parse error line 86: Unknown flag: mount during docker build.Encephalitis
Hi @Tom, just to confirm: do you have this line at the beginning of your Dockerfile: # syntax=docker/dockerfile:experimental?Elissaelita
@EdisonArango: No, I didn't, and eventually I figured this out.Encephalitis
As a "normal" user, I get a permission denied when I use this technique. My .ssh directory is protected (as it should be) so the docker process is not able to make use of it.Briefing
where will be cloned repository saved? if I have to run commands commands npm install after cloning how should I run?Acutance
At least for me, the 0600 needs to be 0700 so that there are sufficient permissions for the ssh-keyscan command output to be written to known_hosts.Generosity
D
16

This line is a problem:

ADD ../../home/ubuntu/.ssh/id_rsa /root/.ssh/id_rsa

When specifying the files you want to copy into the image you can only use relative paths - relative to the directory where your Dockerfile is. So you should instead use:

ADD id_rsa /root/.ssh/id_rsa

And put the id_rsa file into the same directory where your Dockerfile is.

Check this out for more details: http://docs.docker.io/reference/builder/#add

Dogs answered 17/4, 2014 at 12:22 Comment(2)
This is also security problem because it puts a private key into an image that can be easily forgotten.Greece
docker cp just puts it in the container and not the image, right?Fiberglass
C
15

Docker containers should be seen as 'services' of their own. To separate concerns you should separate functionalities:

1) Data should be in a data container: use a linked volume to clone the repo into. That data container can then be linked to the service needing it.

2) Use a container to run the git cloning task (i.e. its only job is cloning), linking the data container to it when you run it.

3) Same for the ssh key: put it in a volume (as suggested above) and link it to the git clone service when you need it (a sketch follows below).

That way, both the cloning task and the key are ephemeral and only active when needed.

Now if your app itself is a git interface, you might want to consider github or bitbucket REST APIs directly to do your work: that's what they were designed for.
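
For illustration, a rough sketch of this separation using a named volume rather than the older data-container pattern; the alpine/git image, the repo URL and the my-app image are placeholders, and it assumes github.com is already in your host's known_hosts:

$ docker volume create repo-data
$ docker run --rm \
      -v repo-data:/repo \
      -v ~/.ssh:/root/.ssh:ro \
      alpine/git clone git@github.com:myorg/myproject.git /repo/myproject
$ docker run -d -v repo-data:/repo my-app   # the app container sees only the cloned data, not the key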

Covalence answered 23/6, 2015 at 0:37 Comment(0)
H
15

We had a similar problem when doing npm install at docker build time.

Inspired by the solution from Daniel van Flymen and combining it with git URL rewriting, we found a slightly simpler method for authenticating npm install against private GitHub repos: we used OAuth2 tokens instead of keys.

In our case, the npm dependencies were specified as "git+https://github.com/..."

For authentication in the container, the URLs need to be rewritten to be suitable for either ssh authentication (ssh://git@github.com/) or token authentication (https://${GITHUB_TOKEN}@github.com/)

Build command:

docker build -t sometag --build-arg GITHUB_TOKEN=$GITHUB_TOKEN . 

Unfortunately, I'm on Docker 1.9, so the --squash option is not there yet; eventually it needs to be added.

Dockerfile:

FROM node:5.10.0

ARG GITHUB_TOKEN

#Install dependencies
COPY package.json ./

# add rewrite rule to authenticate github user
RUN git config --global url."https://${GITHUB_TOKEN}@github.com/".insteadOf "https://github.com/"

RUN npm install

# remove the secret token from the git config file, remember to use --squash option for docker build, when it becomes available in docker 1.13
RUN git config --global --unset url."https://${GITHUB_TOKEN}@github.com/".insteadOf

# Expose the ports that the app uses
EXPOSE 8000

#Copy server and client code
COPY server /server 
COPY clients /clients
Heartthrob answered 27/4, 2017 at 13:2 Comment(0)
N
15

Forward the ssh authentication socket to the container:

docker run --rm -ti \
        -v $SSH_AUTH_SOCK:/tmp/ssh_auth.sock \
        -e SSH_AUTH_SOCK=/tmp/ssh_auth.sock \
        -w /src \
        my_image

Your script will be able to perform a git clone.

Extra: If you want the cloned files to belong to a specific user, you need to use chown, since using a user other than root inside the container will make git fail.

You can do this by publishing some additional variables to the container's environment:

docker run ...
        -e OWNER_USER=$(id -u) \
        -e OWNER_GROUP=$(id -g) \
        ...

After you clone you must execute chown $OWNER_USER:$OWNER_GROUP -R <source_folder> to set the proper ownership before you leave the container so the files are accessible by a non-root user outside the container.

Nolpros answered 2/10, 2017 at 14:18 Comment(5)
In newer Docker versions you can pass -u root:$(id -u $USER) to at least have the files owned by the same primary group as your user, which should make all of them at least readable without sudo unless something is creating them with 0600 permissions.Bookrest
@Bookrest I think you have a typo: -u root:$(id -u $USER) should be -g.Nolpros
Good call! I don't seem to be able to fix it from mobile, will try on desktop soon.Bookrest
I have /tmp/ssh_auth.sock: No such file or directory now it's /tmp/ssh-vid8Zzi8UILE/agent.46016 on my host machineAbate
@Abate the error is pretty generic. Could be caused due to permissions on /tmp inside your container. Or a typo on the docker run command. Make sure that the bind statement is correct -v $SSH_AUTH_SOCK:/tmp/ssh_auth.sock: Order is important and semicolon is also important. Please check docker documentation for further help.Nolpros
H
15

At first, some meta noise

There is dangerously wrong advice in two highly upvoted answers here.

I commented, but since I have lost many days with this, please MIND:

Do not echo the private key into a file (meaning: echo "$ssh_prv_key" > /root/.ssh/id_ed25519). This will destroy the needed line format, at least in my case.

Use COPY or ADD instead. See Docker Load key “/root/.ssh/id_rsa”: invalid format for details.

This was also confirmed by another user:

I get Error loading key "/root/.ssh/id_ed25519": invalid format. Echo will remove newlines/tack on double quotes for me. Is this only for ubuntu or is there something different for alpine:3.10.3?


1. A working way that keeps the private key in the image (not so good!)

If the private key is stored in the image, you need to pay attention that you delete the public key from the git website, or that you do not publish the image. If you take care of this, this is secure. See below (2.) for a better way where you could also "forget to pay attention".

The Dockerfile looks as follows:

FROM ubuntu:latest
RUN apt-get update && apt-get install -y git
RUN mkdir -p /root/.ssh && chmod 700 /root/.ssh
COPY /.ssh/id_ed25519 /root/.ssh/id_ed25519
RUN chmod 600 /root/.ssh/id_ed25519 && \
    apt-get -yqq install openssh-client && \
    ssh-keyscan -t ed25519 -H gitlab.com >> /root/.ssh/known_hosts
RUN git clone git@gitlab.com:GITLAB_USERNAME/test.git
RUN rm -r /root/.ssh

2. A working way that does not keep the private key in the image (good!)

The following is the more secure way of the same thing, using "multi stage build" instead. If you need an image that has the git repo directory without the private key stored in one of its layers, you need two images, and you only use the second in the end. That means, you need FROM two times, and you can then copy only the git repo directory from the first to the second image, see the official guide "Use multi-stage builds".

We use "alpine" as the smallest possible base image which uses apk instead of apt-get; you can also use apt-get with the above code instead using FROM ubuntu:latest.

The Dockerfile looks as follows:

# first image only to download the git repo
FROM alpine as MY_TMP_GIT_IMAGE

RUN apk add --no-cache git
RUN mkdir -p /root/.ssh &&  chmod 700 /root/.ssh
COPY /.ssh/id_ed25519 /root/.ssh/id_ed25519
RUN chmod 600 /root/.ssh/id_ed25519

RUN apk -yqq add --no-cache openssh-client && ssh-keyscan -t ed25519 -H gitlab.com >> /root/.ssh/known_hosts
RUN git clone git@gitlab.com:GITLAB_USERNAME/test.git
RUN rm -r /root/.ssh


# Start of the second image
FROM MY_BASE_IMAGE
COPY --from=MY_TMP_GIT_IMAGE /MY_GIT_REPO ./MY_GIT_REPO

We see here that FROM is just a namespace, it is like a header for the lines below it and can be addressed with an alias. Without an alias, --from=0 would be the first image (=FROM namespace).

You could now publish or share the second image, as the private key is not in its layers, and you would not necessarily need to remove the public key from the git website after one usage! Thus, you do not need to create a new key pair at every cloning of the repo. Of course, be aware that a passwordless private key is still insecure if someone might get a hand on your data in another way. If you are not sure about this, better remove the public key from the server after usage, and have a new key pair at every run.


A guide how to build the image from the Dockerfile

  • Install Docker Desktop; or use docker inside WSL2 or Linux in a VirtualBox; or use docker in a standalone Linux partition / hard drive.

  • Open a command prompt (PowerShell, terminal, ...).

  • Go to the directory of the Dockerfile.

  • Create a subfolder ".ssh/".

  • For security reasons, create a new public and private SSH key pair - even if you already have another one lying around - for each Dockerfile run. In the command prompt, in your Dockerfile's folder, enter (mind, this overwrites without asking):

      Write-Output "y" | ssh-keygen -q -t ed25519 -f ./.ssh/id_ed25519 -N '""'
    

    (if you use PowerShell) or

      echo "y" | ssh-keygen -q -t ed25519 -f ./.ssh/id_ed25519 -N ''
    

    (if you do not use PowerShell).

    Your key pair will now be in the subfolder .ssh/. It is up to you whether you use that subfolder at all, you can also change the code to COPY id_ed25519 /root/.ssh/id_ed25519; then your private key needs to be in the Dockerfile's directory that you are in.

  • Open the public key in an editor, copy the content and publish it to your server (e.g. GitHub / GitLab --> profile --> SSH keys). You can choose whatever name and end date. The final readable comment of the public key string (normally your computer name if you did not add a -C comment in the parameters of ssh-keygen) is not important, just leave it there.

  • Start (Do not forget the "." at the end which is the build context):

    docker build -t test .

Only for 1.):

  • After the run, remove the public key from the server (most important, and at best at once). The script removes the private key from the image, and you may also remove the private key from your local computer, since you should never use the key pair again. The reason: someone could get the private key from the image even if it was removed from the image. Quoting a user's comment:

    If anyone gets a hold of your image, they can retrieve the key... even if you delete that file in a later layer, b/c they can go back to Step 7 when you added it

    The attacker could wait with this private key until you use the key pair again.

Only for 2.):

  • After the run, since the second image is the only image remaining after a build, we do not necessarily need to remove the key pair from client and host. We still have a small risk that the passwordless private key is taken from a local computer somewhere. That is why you may still remove the public key from the git server. You may also remove any stored private keys. But it is probably not needed in many projects where the main aim is rather to automate building the image, and less the security.

At last, some more meta noise

As to the dangerously wrong advice in the two highly upvoted answers here that use the problematic echo-of-the-private-key approach, here are the votes at the time of writing:

We see here that something must be wrong in the answers, as the top 1 answer votes are not at least on the level of the question votes.

There was just one small and unvoted comment at the end of the comment list of the top 1 answer naming the same echo-of-the-private-key problem (which is also quoted in this answer). And: that critical comment was made three years after the answer.

I have upvoted the top 1 answer myself. I only realised later that it would not work for me. Thus, swarm intelligence is working, but on a low flame? If anyone can explain to me why echoing the private key might work for others, but not for me, please comment. Else, 326k views (minus 2 comments ;) ) would have overlooked or left aside the error of the top 1 answer. I would not write such a long text here if that echo-of-the-private-key code line had not cost me many working days, with absolutely frustrating code picking from everything on the net.

Hausfrau answered 16/3, 2021 at 3:2 Comment(0)
A
14

You can use a multi-stage build to build containers. This is the approach you can take:

Stage 1: build an image with ssh

FROM ubuntu as sshImage
LABEL stage=sshImage
ARG SSH_PRIVATE_KEY
WORKDIR /root/temp

RUN apt-get update && \
    apt-get install -y git npm 

RUN mkdir /root/.ssh/ &&\
    echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa &&\
    chmod 600 /root/.ssh/id_rsa &&\
    touch /root/.ssh/known_hosts &&\
    ssh-keyscan github.com >> /root/.ssh/known_hosts

COPY package*.json ./

RUN npm install

RUN cp -R node_modules prod_node_modules

Stage 2: build your container

FROM node:10-alpine

RUN mkdir -p /usr/app

WORKDIR /usr/app

COPY ./ ./

COPY --from=sshImage /root/temp/prod_node_modules ./node_modules

EXPOSE 3006

CMD ["npm", "run", "dev"] 

add env attribute in your compose file:

   environment:
      - SSH_PRIVATE_KEY=${SSH_PRIVATE_KEY}

then pass args from build script like this:

docker-compose build --build-arg SSH_PRIVATE_KEY="$(cat ~/.ssh/id_rsa)"

And remove the intermediate image for security. This will help you. Cheers.

Aegean answered 26/3, 2020 at 9:32 Comment(1)
@CameronHudson You are wrong with your comment, this answer uses the COPY --from=sshImage command to copy only the chosen folder from the temporary image to the new image. Anything else, and that means the ssh key as well, is left behind, and the temporary image gets automatically deleted in "multi stage build". Therefore, this example is secure. I found out about it too late and now have a kind of duplicated answer, perhaps it is at least good as another example.Hausfrau
B
13

I ran into the same problem today, and with a slightly modified version of the previous posts I found this approach more useful to me:

docker run -it -v ~/.ssh/id_rsa:/root/.my-key:ro image /bin/bash

(Note the read-only flag, so the container cannot mess with my ssh key in any case.)

Inside container I can now run:

ssh-agent bash -c "ssh-add ~/.my-key; git clone <gitrepourl> <target>"

So I don't get that Bad owner or permissions on /root/.ssh/.. error which was noted by @kross

Beetle answered 28/12, 2016 at 16:21 Comment(2)
Thank you! This was the key to get it working for me: having the ssh-agent and ssh-add in a single command like: ssh-agent bash -c "ssh-add...". I can then pass that right into docker run. All previous examples I found used eval ssh-agent, followed by ssh-add and I could not figure out a way to pass that eval through the docker run command.Oval
You just mount a volume that gives you the ssh key, and a volume does not get saved in the image. The disadvantage is that you have a more complex run command (ok, that is not important), and you need two steps when cloning a git repo, while the idea of automating the installation is about doing all in one go at best. Still +1 for the plain idea.Hausfrau
F
11

This issue is a really annoying one. You can't add/copy any file from outside the Dockerfile context, which means it's impossible to just link ~/.ssh/id_rsa into the image's /root/.ssh/id_rsa, even though you definitely need a key to do ssh-based things like git clone from a private repo during the build of your docker image.

Anyway, I found a workaround; it's not very elegant, but it did work for me.

  1. in your dockerfile:

    • add this file as /root/.ssh/id_rsa
    • do what you want, such as git clone, composer...
    • rm /root/.ssh/id_rsa at the end
  2. a script to do it in one shot (see the sketch after this list):

    • cp your key to the folder holding dockerfile
    • docker build
    • rm the copied key
  3. anytime you have to run a container from this image with some ssh requirements, just add -v for the run command, like:

    docker run -v ~/.ssh/id_rsa:/root/.ssh/id_rsa --name container image command

This solution leaves no private key in either your project source or the built docker image, so there is no security issue to worry about anymore.
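
A hedged sketch of the one-shot script from step 2 (the paths and image name are placeholders):

#!/bin/bash
set -e
cp ~/.ssh/id_rsa ./id_rsa      # 1. copy the key next to the Dockerfile
docker build -t my-image .     # 2. build; the Dockerfile ADDs, uses and rm's the key
rm -f ./id_rsa                 # 3. remove the copied key again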

Fillip answered 1/5, 2015 at 5:21 Comment(2)
"Since you can't add/copy any file outside the dockerfile context, " Have you seen docker cp? It's used to "Copy files/folders between a container and your host."Emiliaemiliaromagna
@JonathonReinhart, thanks for pointing that out. Yes, docker cp could do the trick. However in this very situation, I needed the ssh_key during the image being built, and there's no container at that time...will update my unclear expression, thanks anyways.Fillip
F
11

As eczajk already commented on Daniel van Flymen's answer, it does not seem to be safe to remove the keys and use --squash, as they will still be visible in the history (docker history --no-trunc).

Instead with Docker 18.09, you can now use the "build secrets" feature. In my case I cloned a private git repo using my hosts SSH key with the following in my Dockerfile:

# syntax=docker/dockerfile:experimental

[...]

RUN --mount=type=ssh git clone [...]

[...]

To be able to use this, you need to enable the new BuildKit backend prior to running docker build:

export DOCKER_BUILDKIT=1

And you need to add the --ssh default parameter to docker build.

More info about this here: https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066

Fregoso answered 15/4, 2019 at 13:0 Comment(6)
Best solution IMHO. I had to do two more things to get it to work: 1) add my private key to ssh-agent with ssh-add ~/.ssh/id_rsa and 2) add the git host to known_hosts, i.e. for bitbucket: RUN ssh-keyscan -H bitbucket.org >> ~/.ssh/known_hostsMilan
I have not been able to get this to work at all. I'm still getting permissions errors: Permission denied (publickey). fatal: Could not read from remote repository. Please make sure you have the correct access and the repository exists. This despite passing the --ssh default flag in my docker build, and using --mount=type=ssh in the run command where I git clone. I am able to clone the same repo no problem on the build machine. It simply fails in the docker build container. I suspect that the mac version of Docker is not actually passing the ssh client along.Basilbasilar
@Basilbasilar were you able to figure out this issue you mentioned because I am also facing the same.Erv
@SadanArshad It turns out this functionality is currently only supported if you are running Docker from a Linux machine. It does not work if you're running your Docker commands from a Mac (and probably Windows, as well, though I can't confirm).Basilbasilar
Too bad that doesn't work with docker-compose... github.com/docker/compose/issues/6440Briefing
@AlexisWilke The link had a solution on Mar 12 '20, already there when you wrote this. You need to set both COMPOSE_DOCKER_CLI_BUILD=1 and DOCKER_BUILDKIT=1, see github.com/docker/compose/issues/6440#issuecomment-592939294.Hausfrau
R
10

You can also share your .ssh directory between the host and the container. I don't know if this method has any security implications, but it may be the easiest method. Something like this should work:

$ sudo docker run -it -v /root/.ssh:/root/.ssh someimage bash

Remember that docker runs with sudo (unless you don't); if this is the case, you'll be using root's ssh keys.

Rajkot answered 13/5, 2014 at 22:2 Comment(5)
Using this method works with docker 0.11 but if you use fig, it will throw a panic error. I don't know whyRajkot
This would be a preferred method, the trick would be to use my unprivileged host user's keys as the container's root. As you mention, trying to do it not as the host root user yields Bad owner or permissions on /root/.ssh/config.Clamorous
this can only be used during docker run, but not during docker build.Inconsequential
@Inconsequential , I view that as an advantage. Many of these answers leave private keys stored in an image; the key remains stored even after you remove the key in a subsequent layer command. By introducing the private keys only during run (not build), they can only exist in the container (not the image).Steradian
Pretty sure this would allow the container to modify the contents of /root/.ssh which is not what you want from an isolated containerShiff
D
9

'you can selectively let remote servers access your local ssh-agent as if it was running on the server'

https://developer.github.com/guides/using-ssh-agent-forwarding/

Drugge answered 15/1, 2014 at 11:16 Comment(2)
docker run -i -t -v $(readlink -f $SSH_AUTH_SOCK):/ssh-agent -e SSH_AUTH_SOCK=/ssh-agent ubuntu /bin/bashBearskin
fruitl00p has created a docker-tunnel container in this fashion: github.com/kingsquare/docker-tunnelYan
M
9

A concise overview of the challenges of SSH inside Docker containers is detailed here. For connecting to trusted remotes from within a container without leaking secrets there are a few ways:

Beyond these there's also the possibility of using a key-store running in a separate docker container accessible at runtime when using Compose. The drawback here is additional complexity due to the machinery required to create and manage a keystore such as Vault by HashiCorp.

For SSH key use in a stand-alone Docker container see the methods linked above and consider the drawbacks of each depending on your specific needs. If, however, you're running inside Compose and want to share a key to an app at runtime (reflecting practicalities of the OP) try this:

  • Create a docker-compose.env file and add it to your .gitignore file.
  • Update your docker-compose.yml and add env_file for service requiring the key.
  • Access the public key from the environment at application runtime, e.g. process.env.DEPLOYER_RSA_PUBKEY in the case of a Node.js application.

The above approach is ideal for development and testing and, while it could satisfy production requirements, in production you're better off using one of the other methods identified above.
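
For illustration, a hedged sketch of that Compose setup; the service name is a placeholder and the variable name mirrors the example above:

docker-compose.env (listed in .gitignore):

DEPLOYER_RSA_PUBKEY=ssh-rsa AAAA...xyz deployer

docker-compose.yml:

version: '3'
services:
  app:
    build: .
    env_file:
      - docker-compose.env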

Additional resources:

Mosa answered 27/9, 2019 at 5:23 Comment(1)
Thanks for summarizing!Wilke
S
7

If you don't care about the security of your SSH keys, there are many good answers here. If you do, the best answer I found was from a link in a comment above to this GitHub comment by diegocsandrim. So that others are more likely to see it, and just in case that repo ever goes away, here is an edited version of that answer:

Most solutions here end up leaving the private key in the image. This is bad, as anyone with access to the image has access to your private key. Since we don't know enough about the behavior of squash, this may still be the case even if you delete the key and squash that layer.

We generate a pre-signed URL to access the key with the aws s3 cli, and limit the access to about 5 minutes; we save this pre-signed URL into a file in the repo directory, then in the Dockerfile we add it to the image.

In the Dockerfile we have a RUN command that does all these steps: use the pre-signed URL to get the ssh key, run npm install, and remove the ssh key.

By doing this in one single command, the ssh key is not stored in any layer. The pre-signed URL will be stored, but this is not a problem because the URL will no longer be valid after 5 minutes.

The build script looks like:

# build.sh
aws s3 presign s3://my_bucket/my_key --expires-in 300 > ./pre_sign_url
docker build -t my-service .

Dockerfile looks like this:

FROM node

COPY . .

RUN eval "$(ssh-agent -s)" && \
    wget -i ./pre_sign_url -q -O - > ./my_key && \
    chmod 700 ./my_key && \
    ssh-add ./my_key && \
    ssh -o StrictHostKeyChecking=no [email protected] || true && \
    npm install --production && \
    rm ./my_key && \
    rm -rf ~/.ssh/*

ENTRYPOINT ["npm", "run"]

CMD ["start"]
Symposiac answered 3/10, 2017 at 6:29 Comment(1)
The problem with this solution is that because pre_sign_url will change every time, the npm install can't be cached even there is no change to the packages.json file. It's better to get the key in the build.sh and set it as a build argument so that it won't change every timeNeedham
K
6

A simple and secure way to achieve this without saving your key in a Docker image layer, or going through ssh_agent gymnastics is:

  1. As one of the steps in your Dockerfile, create a .ssh directory by adding:

    RUN mkdir -p /root/.ssh

  2. Below that indicate that you would like to mount the ssh directory as a volume:

    VOLUME [ "/root/.ssh" ]

  3. Ensure that your container's ssh_config knows where to find the public keys by adding this line:

    RUN echo " IdentityFile /root/.ssh/id_rsa" >> /etc/ssh/ssh_config

  4. Expose you local user's .ssh directory to the container at runtime:

    docker run -v ~/.ssh:/root/.ssh -it image_name

    Or in your dockerCompose.yml add this under the service's volume key:

    - "~/.ssh:/root/.ssh"

Your final Dockerfile should contain something like:

FROM node:6.9.1

RUN mkdir -p /root/.ssh
RUN  echo "    IdentityFile /root/.ssh/id_rsa" >> /etc/ssh/ssh_config

VOLUME [ "/root/.ssh" ]

EXPOSE 3000

CMD [ "launch" ]
Kip answered 15/3, 2019 at 4:32 Comment(0)
W
6

I put together a very simple solution that works for my use case where I use a "builder" docker image to build an executable that gets deployed separately. In other words my "builder" image never leaves my local machine and only needs access to private repos/dependencies during the build phase.

You do not need to change your Dockerfile for this solution.

When you run your container, mount your ~/.ssh directory (this avoids having to bake the keys directly into the image, but rather ensures they're only available to a single container instance for a short period of time during the build phase). In my case I have several build scripts that automate my deployment.

Inside my build-and-package.sh script I run the container like this:

# do some script stuff before    

...

docker run --rm \
   -v ~/.ssh:/root/.ssh \
   -v "$workspace":/workspace \
   -w /workspace builder \
   bash -cl "./scripts/build-init.sh $executable"

...

# do some script stuff after (i.e. pull the built executable out of the workspace, etc.)

The build-init.sh script looks like this:

#!/bin/bash

set -eu

executable=$1

# start the ssh agent
eval $(ssh-agent) > /dev/null

# add the ssh key (ssh key should not have a passphrase)
ssh-add /root/.ssh/id_rsa

# execute the build command
swift build --product $executable -c release

So instead of executing the swift build command (or whatever build command is relevant to your environment) directly in the docker run command, we execute the build-init.sh script, which starts the ssh-agent, adds our ssh key to the agent, and finally executes our swift build command.

Note 1: For this to work you'll need to make sure your ssh key does not have a passphrase, otherwise the ssh-add /root/.ssh/id_rsa line will ask for a passphrase and interrupt the build script.

Note 2: Make sure you have the proper file permissions set on your script files so that they can be run.

Hopefully this provides a simple solution for others with a similar use case.

Wilke answered 11/11, 2020 at 18:30 Comment(1)
docker run .... -v ~/.ssh:/root/.ssh part did the trick for meLissa
B
4

In later versions of Docker (17.05+) you can use multi-stage builds, which is the safest option, as the previous build stages can only ever be used by the subsequent build and are then discarded.

See the answer to my stackoverflow question for more info
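
A minimal sketch of the idea (the image names, key path and repo URL are placeholders; fuller multi-stage examples appear in other answers above):

FROM alpine AS builder
RUN apk add --no-cache git openssh-client
COPY id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa && \
    ssh-keyscan github.com >> /root/.ssh/known_hosts && \
    git clone git@github.com:myorg/myproject.git /myproject

FROM alpine
# only the cloned sources reach the final image; the builder stage (and the key) is left behind
COPY --from=builder /myproject /myproject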

Blowup answered 15/1, 2018 at 12:22 Comment(6)
This seems the best answer after all, because it is the most secure. I have not tested it, but it sounds obvious. If you do not want to have the key stored in a layer of your image, the link says: just build a new image from the old image and take over just the layers that you need (without the key layers) - and delete the old image. That link seems very promising.Hausfrau
There is actually a better answer to this now @Hausfrau if you use docs.docker.com/develop/develop-images/build_enhancements/…Blowup
Perhaps you might take the time to add a second answer with an example here? This should be mentioned as an answer in this thread, not just as a very good side-note :). In this long thread, you do not see the comments without a click. And many people will not read the comments. Anyway, thank you for sharing.Hausfrau
@Hausfrau too many answers for this question. No chance it will get noticed even if I change itBlowup
Would not say so, I have seen two upvotes in five days for a new answer, that shows that low voted answers are read because the top voted are not good enough (top 1 is just half of the question votes). I would rather say that even if you show the best approach regarding security here, it does not answer the question of how to finally ssh into the server. The most secure setting is not the core of the question, it is just good to know.Hausfrau
@Hausfrau you convinced me :) Expect an I told you so in 6 monthsBlowup
T
3

Here's how I used an ssh key during the image build using Docker Compose:

.env

SSH_PRIVATE_KEY=[base64 encoded sshkey]

docker-compose.yml

version: '3'
services:
  incatech_crawler:
    build:
      context: ./
      dockerfile: Dockerfile
      args:
        SSH_PRIVATE_KEY: ${SSH_PRIVATE_KEY} 

dockerfile: ...

# Set the working directory to /app
WORKDIR /usr/src/app/
ARG SSH_PRIVATE_KEY 
 
RUN mkdir /root/.ssh/  
RUN echo -n ${SSH_PRIVATE_KEY} | base64 --decode > /root/.ssh/id_rsa_wakay_user
Terrell answered 13/4, 2021 at 8:41 Comment(0)
U
2

I'm trying to work the problem the other way: adding a public ssh key to an image. But in my trials, I discovered that "docker cp" is for copying FROM a container to a host. Item 3 in the answer by creak seems to be saying you can use docker cp to inject files into a container. See https://docs.docker.com/engine/reference/commandline/cp/

excerpt

Copy files/folders from a container's filesystem to the host path. Paths are relative to the root of the filesystem.

  Usage: docker cp CONTAINER:PATH HOSTPATH

  Copy files/folders from the PATH to the HOSTPATH
Utterance answered 2/1, 2014 at 21:18 Comment(2)
This URL appears to be broken now.Ligneous
This is obsolete or incorrect. It can copy either direction, as of at latest 1.8.2.Emiliaemiliaromagna
B
1

You can pass the authorized keys into your container using a shared folder and set permissions using a Dockerfile like this:

FROM ubuntu:16.04
RUN apt-get install -y openssh-server
RUN mkdir /var/run/sshd
EXPOSE 22
RUN cp /root/auth/id_rsa.pub /root/.ssh/authorized_keys
RUN rm -f /root/auth
RUN chmod 700 /root/.ssh
RUN chmod 400 /root/.ssh/authorized_keys
RUN chown root. /root/.ssh/authorized_keys
CMD /usr/sbin/sshd -D

And your docker run contains something like the following to share an auth directory on the host (holding the authorized_keys) with the container, then open up the ssh port, which will be accessible through port 7001 on the host.

-d -v /home/thatsme/dockerfiles/auth:/root/auth --publish=127.0.0.1:7001:22

You may want to look at https://github.com/jpetazzo/nsenter which appears to be another way to open a shell on a container and execute commands within a container.

Bunting answered 10/4, 2017 at 8:33 Comment(0)
L
1

Admittedly late to the party, but how about this, which will make your host operating system's keys available to root inside the container on the fly:

docker run -v ~/.ssh:/mnt -it my_image /bin/bash -c "ln -s /mnt /root/.ssh; ssh [email protected]"

I'm not in favour of using Dockerfile to install keys since iterations of your container may leave private keys behind.

Laforge answered 5/1, 2018 at 11:27 Comment(0)
K
1

You can use secrets to manage any sensitive data which a container needs at runtime but you don’t want to store in the image or in source control, such as:

  • Usernames and passwords
  • TLS certificates and keys
  • SSH keys
  • Other important data such as the name of a database or internal server
  • Generic strings or binary content (up to 500 kb in size)

https://docs.docker.com/engine/swarm/secrets/

I was trying to figure out how to add signing keys to a container to use during runtime (not build) and came across this question. Docker secrets seem to be the solution for my use case, and since nobody has mentioned it yet I'll add it.
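
For illustration, a hedged sketch using swarm-mode secrets (the secret, service and image names are placeholders); each secret is mounted read-only at /run/secrets/<name> inside the service's containers:

$ docker swarm init                                    # secrets require swarm mode
$ docker secret create deploy_ssh_key ~/.ssh/id_rsa    # store the private key as a secret
$ docker service create --name myapp --secret deploy_ssh_key myimage
# inside the container the key is then readable at /run/secrets/deploy_ssh_key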

Kettering answered 31/3, 2019 at 23:12 Comment(0)
O
1

In my case I had a problem with nodejs and 'npm i' from a remote repository. I fixed it by adding the 'node' user to the nodejs container and setting 700 on ~/.ssh in the container.

Dockerfile:

USER node #added the part
COPY run.sh /usr/local/bin/
CMD ["run.sh"]

run.sh:

#!/bin/bash
chmod 700 -R ~/.ssh/; #added the part

docker-compose.yml:

nodejs:
      build: ./nodejs/10/
      container_name: nodejs
      restart: always
      ports:
        - "3000:3000"
      volumes:
        - ../www/:/var/www/html/:delegated
        - ./ssh:/home/node/.ssh #added the part
      links:
        - mailhog
      networks:
        - work-network

After that, it started working.

Oeildeboeuf answered 12/3, 2020 at 15:28 Comment(0)
F
-1

For debian / root / authorized_keys:

RUN set -x && apt-get install -y openssh-server

RUN mkdir /var/run/sshd
RUN mkdir -p /root/.ssh
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN  echo "ssh-rsa AAAA....yP3w== rsa-key-project01" >> /root/.ssh/authorized_keys
RUN chmod -R go= /root/.ssh
Feckless answered 13/7, 2019 at 11:13 Comment(0)
B
-2

In a running docker container, you can run ssh-keygen via the docker -i (interactive) option. This will forward the container's prompts so you can create the key inside the docker container.

Benevolent answered 14/4, 2015 at 6:46 Comment(1)
And then what? You can't do anything after this, because you don't have permission to do so.Emiliaemiliaromagna
P
-3

Simplest way: get a Launchpad account and use ssh-import-id.

Paraclete answered 24/11, 2013 at 2:50 Comment(1)
The question was about private keys. ssh-import-id looks like it only imports public keys.Orlandoorlanta
