How do I access a private Docker registry with a self signed certificate using Kubernetes?

I'm currently running a private Docker registry (Artifactory) on an internal network that uses a self-signed certificate.

When Kubernetes starts up a new node, the node is unable to authenticate with the private Docker registry because it does not have the self-signed certificate.

Any help would be much appreciated. Thanks!

Eggert answered 29/11, 2018 at 18:51 Comment(0)

The simplest solution I found after an extensive search is suggested in this guide by CoreOS: https://github.com/coreos/tectonic-docs/blob/master/Documentation/admin/add-registry-cert.md

It consists of creating a Secret that contains your certificate, and a DaemonSet that copies it to /etc/docker/certs.d/my-private-insecure-registry.com/ca.crt on every node of your cluster.

I think this answers your question because, when a new node is added, the DaemonSet is automatically scheduled on it.

I give the detailed solution below, but all the credit goes to Kyle Brown (kbrwn) for his very cool guide (cf. link above).

Detailed solution for Kubernetes 1.16 to 1.23

Let's suppose that your certificate is a file named ca.crt in your working directory. Create a secret from this file's content:

kubectl create secret generic registry-ca --namespace kube-system --from-file=registry-ca=./ca.crt
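
If you want to try the steps without a real registry certificate, a throwaway self-signed certificate can stand in for ca.crt. This is purely illustrative; the CN below is the placeholder hostname used throughout this answer:

```shell
# Illustrative only: generate a throwaway self-signed certificate like the
# ca.crt assumed above (CN is the placeholder registry hostname)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/registry-ca.key -out /tmp/registry-ca.crt \
  -subj "/CN=my-private-insecure-registry.com"
# inspect the subject to confirm the certificate was generated as expected
openssl x509 -in /tmp/registry-ca.crt -noout -subject
```

The secret would then be created from /tmp/registry-ca.crt with the same kubectl command as above.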

Then, use the following DaemonSet, which mounts the certificate as the file /home/core/registry-ca and copies it to the desired location: /etc/docker/certs.d/my-private-insecure-registry.com/ca.crt.

Simply replace my-private-insecure-registry.com with the hostname of your container registry.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: registry-ca
  namespace: kube-system
  labels:
    k8s-app: registry-ca
spec:
  selector:
    matchLabels:
      name: registry-ca
  template:
    metadata:
      labels:
        name: registry-ca
    spec:
      containers:
      - name: registry-ca
        image: busybox
        command: [ 'sh' ]
        args: [ '-c', 'cp /home/core/registry-ca /etc/docker/certs.d/my-private-insecure-registry.com/ca.crt && exec tail -f /dev/null' ]
        volumeMounts:
        - name: etc-docker
          mountPath: /etc/docker/certs.d/my-private-insecure-registry.com
        - name: ca-cert
          mountPath: /home/core
      terminationGracePeriodSeconds: 30
      volumes:
      - name: etc-docker
        hostPath:
          path: /etc/docker/certs.d/my-private-insecure-registry.com
      - name: ca-cert
        secret:
          secretName: registry-ca

Save the file as registry-ca-ds.yaml and then create the DaemonSet:

kubectl create -f registry-ca-ds.yaml

You can now check that your application correctly pulls from your private self-signed registry.
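
For example, a minimal throwaway pod can serve as a smoke test. The image path below is a placeholder; substitute an image that actually exists in your registry:

```yaml
# Hypothetical smoke-test pod; replace the image with one from your registry
apiVersion: v1
kind: Pod
metadata:
  name: registry-pull-test
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: my-private-insecure-registry.com/some-team/some-image:latest
    command: [ 'true' ]
```

If a node still lacks the certificate, kubectl describe pod registry-pull-test will typically show an "x509: certificate signed by unknown authority" error in the pod events.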

As mentioned, the certificate will be added automatically to new nodes' Docker configuration by the registry-ca DaemonSet. If you want to stop this behavior, simply delete the DaemonSet:

kubectl delete ds registry-ca --namespace kube-system

I think this is more secure than setting the insecure-registries flag on the Docker daemon. Also, it is resilient to new nodes joining the cluster.

Known limitations

Solution for Kubernetes 1.24+ (edit 2023)

The Kubernetes project removed built-in support for the Docker runtime in Kubernetes version 1.24 (source). Therefore, this solution doesn't work for Kubernetes 1.24+ clusters, which typically use containerd instead of Docker. As explained in Moly's answer, containerd doesn't yet support adding certificates without a restart, so a different approach using a privileged container is recommended for those clusters.

Please see Moly's answer for a detailed explanation and code.

Registries with specific ports (edit 2021)

According to these answers (here and here) to related GitHub issues, a Kubernetes volume path cannot contain a colon. Therefore, this solution is not valid for registries that serve a self-signed certificate on a specific port (for example 5000).

In that case, please see Gary Plattenburg's answer, which creates the directory in the shell command instead of having Kubernetes handle it during the mount.

Jillayne answered 19/7, 2020 at 14:34 Comment(8)
While this is a better option than insecure-registries, the preferred approach is to install the CA correctly on the host. This is a good option though if you just want EKS Managed Node Groups to work with AWS-provided AMIs with a minimum amount of modifications. Please note that for Artifactory you need to provide the full long path to the registry, my-local.artifactory.mydomain.com, not just artifactory.mydomain.com - Sello
@MarcusMaxwell, thank you for the suggestion to upgrade the API for Kubernetes 1.16+. I took the liberty of putting both versions since there are still some Kubernetes ≤1.15 clusters out there. - Jillayne
Concerning the domain name of the registry, as explained in @Rico's answer below, you should create the directory with the full domain name as you use it in your deployments. This implies that if you specify the registry's port in your deployments, you should specify it too in the name of the directory created by the DaemonSet. - Jillayne
As you mentioned, this solution is an alternative to installing the certificate via your provider during node installation, which may be preferred. However, such an option is not always available, depending on whether you're running Kubernetes on one of the Clouds or on-premise. I think it often gets worse when you're running on-premise because then your K8s install scripts also need to handle some external certificates. At least, let's say that managing the certificates during node creation depends highly on your install processes, so a generic solution is not likely to be possible in that case. - Jillayne
The DaemonSet has the advantage of being an applicative solution running on top of Kubernetes and working for both on-premise and Cloud-managed clusters. Also, it can be deployed by the team using the registry (modulo rights management) without involving a DevOps team or re-running the cluster installation scripts. A Helm chart may even simplify this work. That's why I have found this solution to be a good compromise over the years. But, once again, it depends on your use case. - Jillayne
Sometimes it leads to another problem: when my registry is running on a server without a hostname, the registry looks like 192.168.1.112:5000, and Kubernetes doesn't support mounting paths containing ":". - Extinct
Thanks for pointing that out, @EdenLi. I didn't know about this limitation on mount paths, which makes sense in some ways after reading about it, but which is also clearly a killer for your use case. Sorry I can't help you further, and good luck with your private registry. - Jillayne
Isn't this a chicken-and-egg problem? I get the same error when trying to run the DaemonSet pods. - Applicator

Recent versions of Kubernetes use containerd instead of Docker to pull images, so the other answers will no longer work. You can check which runtime your nodes are using by running kubectl get nodes -o wide and looking under "CONTAINER-RUNTIME".

Currently, the only way I can find to get this working with containerd is to add the certificate to the root store of the host and then restart the containerd service. Doing this via a DaemonSet requires using a privileged container and nsenter so that we can run a shell on the host. This config worked for me on an Ubuntu host (create the secret with your certificate first):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: registry-ca
  namespace: kube-system
  labels:
    k8s-app: registry-ca
spec:
  selector:
    matchLabels:
      name: registry-ca
  template:
    metadata:
      labels:
        name: registry-ca
    spec:
      hostPID: true
      hostNetwork: true
      initContainers:
      - name: registry-ca
        image: busybox
        securityContext:
          privileged: true
        command: [ 'sh' ]
        args:
          - -c
          - |
            cp /home/core/registry-ca /usr/local/share/ca-certificates/registry-ca.crt
            nsenter --mount=/proc/1/ns/mnt -- sh -c "update-ca-certificates && systemctl restart containerd"
        volumeMounts:
        - name: usr-local-share-certs
          mountPath: /usr/local/share/ca-certificates
        - name: ca-cert
          mountPath: /home/core
      terminationGracePeriodSeconds: 30
      volumes:
      - name: usr-local-share-certs
        hostPath:
          path: /usr/local/share/ca-certificates
      - name: ca-cert
        secret:
          secretName: registry-ca
      containers:
        - name: wait
          image: k8s.gcr.io/pause:3.1

There is currently an open pull request for containerd that should allow adding certificates without restarting, so hopefully we will soon be able to use something similar to Bichon's simpler answer, which doesn't need the host-access workarounds.

Cathrinecathryn answered 12/5, 2022 at 13:18 Comment(2)
This should be marked as the correct answer now because K8s 1.24+ has moved from Docker to containerd and thus requires this solution for any current work. - Nethermost
Note you should also check your node OS and change the path/command to update the CA certificates on the host accordingly (CentOS is different from Debian/Ubuntu): gist.github.com/kekru/deabd57f0605ed95d5c8246d18483687 - Oneill

You basically have to tell the Docker daemon to trust your self-signed certificate by telling it to trust the Certificate Authority (CA) that you used to sign the certificate. You can find more information here, in the section titled "Use self-signed certificates".

In particular for example for Linux:

Linux: Copy the domain.crt file to /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt on every Docker host. You do not need to restart Docker.
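
As a sketch, the layout Docker expects on each host looks like this. A temporary directory stands in for /etc/docker/certs.d here; on a real node the base is /etc/docker/certs.d, the copy needs root, and the registry name is a placeholder:

```shell
# Sketch of the per-host directory layout Docker expects.
# A temp dir stands in for /etc/docker/certs.d; adjust names for your registry.
REGISTRY="myregistrydomain.com:5000"
BASE=$(mktemp -d)                      # on a real host: /etc/docker/certs.d
mkdir -p "$BASE/$REGISTRY"             # a colon is fine in a filesystem path
printf 'contents of domain.crt here\n' > "$BASE/$REGISTRY/ca.crt"
ls "$BASE/$REGISTRY"
```

Repeating this on every host (via SSH or your provisioning tooling) is exactly what the DaemonSet answers above automate.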

This is all different from authenticating by specifying imagePullSecrets on your pods or docker login credentials in your Docker config files.

Estes answered 29/11, 2018 at 20:33 Comment(3)
This is a good solution, but only for a few nodes. What if I have 100 nodes?! Is there a way to tell kubelet to trust the server? - Vaillancourt
Probably; you can have a script to automate this during the node bootstrap process. - Estes
Probably, but how? I am starting to lose hope, to be honest. It is neither practical nor efficient to copy the CA to each node!! - Vaillancourt

The accepted answer from Bichon for 1.16+ notes in the comments that it does not work for registry URLs with a port, because a path containing a colon (:) cannot be mounted. This can be fixed by mounting the parent directory and changing the command arguments to create the directory.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: registry-ca
  namespace: kube-system
  labels:
    k8s-app: registry-ca
spec:
  selector:
    matchLabels:
      name: registry-ca
  template:
    metadata:
      labels:
        name: registry-ca
    spec:
      containers:
      - name: registry-ca
        image: busybox
        command: [ 'sh' ]
        args: [ '-c', 'mkdir /etc/docker/certs.d/my-private-insecure-registry.com:5000 && cp /home/core/registry-ca /etc/docker/certs.d/my-private-insecure-registry.com:5000/ca.crt && exec tail -f /dev/null' ]
        volumeMounts:
        - name: etc-docker
          mountPath: /etc/docker/certs.d
        - name: ca-cert
          mountPath: /home/core
      terminationGracePeriodSeconds: 30
      volumes:
      - name: etc-docker
        hostPath:
          path: /etc/docker/certs.d
      - name: ca-cert
        secret:
          secretName: registry-ca
Blasphemous answered 27/7, 2021 at 21:59 Comment(1)
Excellent idea, this is indeed the best solution for the use case with a port number. - Jillayne

Kubelet looks for private Docker registry keys in $HOME/.dockercfg or $HOME/.docker/config.json. If you add your credentials to one of these search paths, kubelet should use them when pulling images:

  • {--root-dir:-/var/lib/kubelet}/config.json
  • {cwd of kubelet}/config.json
  • ${HOME}/.docker/config.json
  • /.docker/config.json
  • {--root-dir:-/var/lib/kubelet}/.dockercfg
  • {cwd of kubelet}/.dockercfg
  • ${HOME}/.dockercfg
  • /.dockercfg

https://kubernetes.io/docs/concepts/containers/images/#using-a-private-registry

The "Configuring Nodes to Authenticate to a Private Registry" section gives you a step by step on how to do it.

Testosterone answered 29/11, 2018 at 19:37 Comment(0)

Kubernetes is likely using the Docker daemon on the Kubernetes cluster nodes. For them to trust your local registry, you can add the trusted registry hostname to the file /etc/docker/daemon.json as follows:

{ "insecure-registries":["some.local.registry"] }

where some.local.registry is the hostname of the registry.

You need to restart the Docker process(es) for this to take effect. I did this for a domain that is not public and has no valid TLD, so I could not use cert-manager with Let's Encrypt.

You need to do the same on every machine that uses docker to connect to that registry.
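
As a sketch, the edit can be validated before restarting the daemon. The file is written to a temporary location here; on a real host the target is /etc/docker/daemon.json, and both the write and the restart need root:

```shell
# Sketch: write the insecure-registries entry and check it parses as JSON
# before touching the real daemon config. some.local.registry is a placeholder.
DAEMON_JSON=$(mktemp)                  # on a real host: /etc/docker/daemon.json
cat > "$DAEMON_JSON" <<'EOF'
{ "insecure-registries": ["some.local.registry"] }
EOF
python3 -m json.tool "$DAEMON_JSON"    # fails loudly on malformed JSON
# on the real host, then: sudo systemctl restart docker
```

A malformed daemon.json prevents Docker from starting, so validating it first is cheap insurance.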

Camala answered 29/1, 2020 at 16:33 Comment(3)
That worked for me; I also logged into the private registry with the docker login command on all the nodes. - Creigh
Please upvote when useful. As far as I know, docker login is only needed when the registry requires credentials for access; good addition though. - Camala
I did upvote; it was at -1 for some reason, now 0. - Creigh

For anyone who runs into this issue, here is the workaround I found. It relates to the answer above and lets you use registries with ports in the name.

    volumes:
    - name: certs
      secret:
        items:
        - key: ca.crt
          path: my-registry:443/ca.crt
        secretName: registry-ca
    volumeMounts:
    - mountPath: /etc/docker/certs.d
      name: certs
      readOnly: true
Fullblown answered 29/9, 2021 at 18:49 Comment(1)
This does not provide an answer to the question. Once you have sufficient reputation you will be able to comment on any post; instead, provide answers that don't require clarification from the asker. - From ReviewAbagail

© 2022 - 2024 — McMap. All rights reserved.