x509: certificate signed by unknown authority - Kubernetes

I am configuring a Kubernetes cluster with two nodes on CoreOS, as described in https://coreos.com/kubernetes/docs/latest/getting-started.html, without flannel. Both servers are on the same network.

But while running the kubelet on the worker I am getting: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-ca").

I configured the TLS certificates properly on both servers as described in the doc.

The master node is working fine, and kubectl is able to start containers and pods on the master.

Question 1: How do I fix this problem?

Question 2: Is there any way to configure a cluster without TLS certificates?

CoreOS version:
VERSION=899.15.0
VERSION_ID=899.15.0
BUILD_ID=2016-04-05-1035
PRETTY_NAME="CoreOS 899.15.0"

Etcd conf:

$ etcdctl member list
ce2a822cea30bfca: name=78c2c701d4364a8197d3f6ecd04a1d8f peerURLs=http://localhost:2380,http://localhost:7001 clientURLs=http://172.24.0.67:2379

Master: kubelet.service:

[Service]
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
Environment=KUBELET_VERSION=v1.2.2_coreos.0
ExecStart=/opt/bin/kubelet-wrapper \
  --api-servers=http://127.0.0.1:8080 \
  --register-schedulable=false \
  --allow-privileged=true \
  --config=/etc/kubernetes/manifests \
  --hostname-override=172.24.0.67 \
  --cluster-dns=10.3.0.10 \
  --cluster-domain=cluster.local
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target

Master: kube-controller.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-controller-manager
    image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
    command:
    - /hyperkube
    - controller-manager
    - --master=http://127.0.0.1:8080
    - --leader-elect=true 
    - --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --root-ca-file=/etc/kubernetes/ssl/ca.pem
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
      initialDelaySeconds: 15
      timeoutSeconds: 1
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host

Master: kube-proxy.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
    command:
    - /hyperkube
    - proxy
    - --master=http://127.0.0.1:8080
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host

Master: kube-apiserver.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
    command:
    - /hyperkube
    - apiserver
    - --bind-address=0.0.0.0
    - --etcd-servers=http://172.24.0.67:2379
    - --allow-privileged=true
    - --service-cluster-ip-range=10.3.0.0/24
    - --secure-port=443
    - --advertise-address=172.24.0.67
    - --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
    - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --client-ca-file=/etc/kubernetes/ssl/ca.pem
    - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    - containerPort: 8080
      hostPort: 8080
      name: local
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host

Master: kube-scheduler.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
    command:
    - /hyperkube
    - scheduler
    - --master=http://127.0.0.1:8080
    - --leader-elect=true
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10251
      initialDelaySeconds: 15
      timeoutSeconds: 1

Slave: kubelet.service

[Service]
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
Environment=KUBELET_VERSION=v1.2.2_coreos.0
ExecStart=/opt/bin/kubelet-wrapper \
  --api-servers=https://172.24.0.67:443 \
  --register-node=true \
  --allow-privileged=true \
  --config=/etc/kubernetes/manifests \
  --hostname-override=172.24.0.63 \
  --cluster-dns=10.3.0.10 \
  --cluster-domain=cluster.local \
  --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
  --tls-cert-file=/etc/kubernetes/ssl/worker.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target

Slave: kube-proxy.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
    command:
    - /hyperkube
    - proxy
    - --master=https://172.24.0.67:443
    - --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml
    - --proxy-mode=iptables
    securityContext:
      privileged: true
    volumeMounts:
      - mountPath: /etc/ssl/certs
        name: "ssl-certs"
      - mountPath: /etc/kubernetes/worker-kubeconfig.yaml
        name: "kubeconfig"
        readOnly: true
      - mountPath: /etc/kubernetes/ssl
        name: "etc-kube-ssl"
        readOnly: true
  volumes:
    - name: "ssl-certs"
      hostPath:
        path: "/usr/share/ca-certificates"
    - name: "kubeconfig"
      hostPath:
        path: "/etc/kubernetes/worker-kubeconfig.yaml"
    - name: "etc-kube-ssl"
      hostPath:
        path: "/etc/kubernetes/ssl"
Spiffy answered 29/4, 2016 at 13:16

please see kubernetes.io/docs/getting-started-guides/scratch/… and report if that fails – Emulate
Will try this, and then get back to you. Thanks – Spiffy
How did you generate your certs? Typically you need to edit the SANs (Subject Alternative Names) of your certs and add the IP or hostname of the master, which in your case is 172.24.0.67 – Profession
Any news on this? – Kashgar
I hit a similar error, but during a Kubernetes install with kubeadm. I had to delete the previous /etc/cni/net.d and unset my proxy. – Algae
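
As the comments above suggest, the two most common causes are a worker certificate that does not chain to the kube-ca the API server uses, and an apiserver certificate whose SANs do not include the master address. A quick way to check both with openssl, assuming the certificate paths from the unit files above:

# on the worker: the output should be "worker.pem: OK"
openssl verify -CAfile /etc/kubernetes/ssl/ca.pem /etc/kubernetes/ssl/worker.pem

# on the master: the SAN list should include IP Address:172.24.0.67
openssl x509 -in /etc/kubernetes/ssl/apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"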

This error often occurs when you are using an old or wrong config from a previous Kubernetes installation or setup.

The commands below remove the old or wrong config, copy the new config to the .kube directory, and set the correct permissions. If you think you might still need the old config, first make a backup of it with mv $HOME/.kube $HOME/.kube.bak:

rm -rf $HOME/.kube || true    
mkdir -p $HOME/.kube   
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config   
sudo chown $(id -u):$(id -g) $HOME/.kube/config
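
Afterwards, a quick way to confirm that kubectl is reading the fresh config and reaching the cluster:

# the server URL should point at the cluster you just (re)initialized
kubectl config view --minify | grep server

# should now list the nodes instead of returning the x509 error
kubectl get nodes
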
Mansfield answered 17/2, 2019 at 8:24

Welcome to SO! Please add some details explaining what your answer does; it will be more helpful for the OP and future readers of the post. – Upward
I confirm that this solution works. I created the cluster via kubeadm init, deleted it via kubeadm, and then recreated it via kubeadm init, but I didn't delete the old config from $HOME, and I got the error described above. So I was trying to use the new cluster with the old config file, i.e. with the old k8s cert; that's why it didn't work. After I replaced the config from /etc to $HOME, all is fine now. So in my opinion, if you get the x509 error, it means you are trying to use an old config in your $HOME from some old cluster. – Vashtivashtia
I confirm this solution worked for me too, even if some explanations would be very welcome for beginners like me. – Latifundium
This should be accepted as the answer; it worked for me too. – Radial
This is what kubeadm says after kubeadm init: "Your Kubernetes control-plane has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube....cp -i...." – Nadanadab

From the official Kubernetes site:

  1. Verify that the $HOME/.kube/config file contains a valid certificate, and regenerate it if necessary (one way to check is shown below the list)

  2. Unset the KUBECONFIG environment variable using:

    unset KUBECONFIG

    Or set it to the default KUBECONFIG location:

    export KUBECONFIG=/etc/kubernetes/admin.conf

  3. Another workaround is to overwrite the existing kubeconfig for the “admin” user:

    mv  $HOME/.kube $HOME/.kube.bak
    mkdir $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

Reference: official site link reference
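
One way to do the check in step 1, assuming the client certificate is embedded in the kubeconfig as client-certificate-data (the usual kubeadm layout) rather than referenced by file path:

# decode the embedded client certificate and print its issuer and validity dates
grep 'client-certificate-data' $HOME/.kube/config | awk '{print $2}' | base64 -d | openssl x509 -noout -issuer -dates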

Pase answered 2/1, 2020 at 13:49

For microk8s, see this answer on stackoverflow.com – Countenance

Please see this as a reference; it may help you resolve your issue by re-exporting your kubeconfig with kops. Note that KOPS_STATE_STORE must be set before running the export:

export KOPS_STATE_STORE=s3://"paste your S3 store"
kops export kubecfg "your cluster-name"
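
For example, with a hypothetical cluster name and state bucket (substitute your own values):

# both values below are placeholders
export KOPS_STATE_STORE=s3://my-kops-state-bucket
kops export kubecfg mycluster.example.com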

Hope that will help.

Mahogany answered 14/3, 2018 at 15:18

Well, to answer your first question, I think you have to do a few things to resolve your problem.

First, run the commands given in this link: kubernetes.io/docs/setup/independent/create-cluster-kubeadm/…

Then complete with these commands:

  • mkdir -p $HOME/.kube
  • sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  • sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl needs to know about this admin.conf in order to work properly.
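
A quick sanity check that kubectl is actually picking up that config:

# prints the control plane endpoint kubectl is talking to
kubectl cluster-info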

Bittern answered 24/11, 2017 at 13:16

The regular method mentioned above did not work for me. The following complete sequence of commands did result in a working certificate setup:

$ sudo kubeadm reset
$ sudo swapoff -a

$ sudo kubeadm init --pod-network-cidr=10.244.10.0/16 --kubernetes-version "1.18.3"
$ sudo rm -rf $HOME/.kube

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

$ sudo systemctl enable docker.service
$ sudo service kubelet restart

$ kubectl get nodes

Notes:

If the connection to the API server port is refused, add the following command.

$ export KUBECONFIG=$HOME/admin.conf
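
If the node still does not become Ready, the kubelet logs usually show the exact certificate error:

# follow the kubelet logs and look for x509 / certificate messages
sudo journalctl -u kubelet -f
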
Tayyebeb answered 7/6, 2020 at 15:43

This worked for me; I guess I needed to restart the kubelet service as well – Blinkers

I had the problem persist even after:

mkdir -p $HOME/.kube   
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config   
sudo chown $(id -u):$(id -g) $HOME/.kube/config

In that case, restarting kubelet solved the problem:

systemctl restart kubelet
Rigney answered 6/4, 2022 at 23:1

tldr: sudo rm -rd /root/.kube

In my case, I faced this error after running the commands below:

sudo kubeadm init # success
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes # success
sudo kubeadm token list # ERROR, x509: certificate signed by unknown authority
kubeadm token list # success

The Problem

In a previous run, I had accidentally created /root/.kube/config with the commands below:

sudo -s
kubeadm init # success
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# this worked until I ran `kubeadm init` again and configs diverged

The Solution

Delete incorrect config

sudo kubeadm token list # ERROR, x509: certificate signed by unknown authority
sudo rm -rd /root/.kube
sudo kubeadm token list # SUCCESS
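
You can confirm that the root and user configs diverged (assuming both files exist) with:

# the embedded certificates will differ if the configs came from different kubeadm runs
sudo diff /root/.kube/config $HOME/.kube/config
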
Champlin answered 4/3 at 1:25

I found this error in the coredns pods; pod creation failed with x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-ca"). The issue for me was that I had already installed a k8s cluster on the same node before and had used the kubeadm reset command to remove it. This command left behind some files in /etc/cni/ that probably caused the issue. I deleted this folder and reinstalled the cluster with kubeadm init.
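
Spelled out as commands (assuming the usual CNI config location, /etc/cni/net.d):

sudo kubeadm reset
# remove leftover CNI configuration from the previous cluster
sudo rm -rf /etc/cni/net.d
sudo kubeadm init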

Supererogatory answered 25/8, 2022 at 22:9

For anyone like me who is facing the same error only in the VS Code Kubernetes extension:

I reinstalled Docker/Kubernetes and didn't update the VS Code Kubernetes extension.

You need to make sure you are using the correct kubeconfig, since reinstalling Kubernetes creates a new certificate.

Either use $HOME/.kube/config in the setKubeconfig option, or copy it to the path where you have set the VS Code extension to read the config from, using the following command:

cp $HOME/.kube/config /{{path-for-kubeconfig}}
Octopus answered 28/11, 2022 at 15:55

I followed the steps below and the problem was resolved.

  1. Take a backup of the original file: cp /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf_bkp

  2. Create a symlink to it inside .kube/ in the user's home directory: ln -s /etc/kubernetes/admin.conf $HOME/.kube/config

Now the user's configuration is linked to the main admin.conf file, which resolved the problem (see the full sequence below).
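
Spelled out as a full sequence (adding a mkdir in case .kube does not exist yet; note that admin.conf is typically readable only by root, so a non-root user may still need to adjust permissions):

cp /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf_bkp
mkdir -p $HOME/.kube
# symlink instead of copy, so later changes to admin.conf are picked up automatically
ln -s /etc/kubernetes/admin.conf $HOME/.kube/config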

Baisden answered 29/7, 2022 at 14:9 Comment(0)
