Kubernetes: expired certificate
Asked Answered
V

14

50

Our Kubernetes 1.6 cluster had certificates generated when the cluster was built on April 13th, 2017.

On December 13th, 2017, our cluster was upgraded to version 1.8, and new certificates were generated [apparently, an incomplete set of certificates].

On April 13th, 2018, we started seeing this message within our Kubernetes dashboard for api-server:

[authentication.go:64] Unable to authenticate the request due to an error: [x509: certificate has expired or is not yet valid, x509: certificate has expired or is not yet valid]

Tried pointing client-certificate & client-key within /etc/kubernetes/kubelet.conf at the certificates generated on Dec 13th [apiserver-kubelet-client.crt and apiserver-kubelet-client.key], but continue to see the above error.

Tried pointing client-certificate & client-key within /etc/kubernetes/kubelet.conf at different certificates generated on Dec 13th [apiserver.crt and apiserver.key] (I honestly don't understand the difference between these 2 sets of certs/keys), but continue to see the above error.

Tried pointing client-certificate & client-key within /etc/kubernetes/kubelet.conf at non-existent files, and none of the kube* services would start, with /var/log/syslog complaining about this:

Apr 17 17:50:08 kuber01 kubelet[2422]: W0417 17:50:08.181326 2422 server.go:381] invalid kubeconfig: invalid configuration: [unable to read client-cert /tmp/this/cert/does/not/exist.crt for system:node:node01 due to open /tmp/this/cert/does/not/exist.crt: no such file or directory, unable to read client-key /tmp/this/key/does/not/exist.key for system:node:node01 due to open /tmp/this/key/does/not/exist.key: no such file or directory]

Any advice on how to overcome this error, or even troubleshoot it at a more granular level? Was considering regenerating certificates for api-server (kubeadm alpha phase certs apiserver), based on instructions within https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-alpha/#cmd-phase-certs ... but not sure if I'd be doing more damage.

Relatively new to Kubernetes, and the gentleman who set this up is not available for consult ... any help is appreciated. Thanks.

Vide answered 17/4, 2018 at 19:1 Comment(1)
If trying to use Tilt to deploy images, it has its own version of this error, with description+fix here.Seam
V
10

Each node within the Kubernetes cluster contains a config file for running kubelet ... /etc/kubernetes/kubelet.conf ... and this file is auto-generated by kubeadm. During this auto-generation, kubeadm uses /etc/kubernetes/ca.key to create a node-specific file, /etc/kubernetes/kubelet.conf, within which are two very important pieces ... client-certificate-data and client-key-data. My original thought process led me to believe that I needed to find the corresponding certificate file & key file, renew those files, convert both to base64, and use those values within kubelet.conf files across the cluster ... this thinking was not correct.
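Those client-certificate-data and client-key-data fields are just base64-encoded PEM blobs, so you can decode them and ask openssl when the embedded certificate expires. A minimal sketch, shown against a throwaway self-signed cert and a fabricated kubeconfig-shaped file so it is self-contained; on a real node you would point the grep at /etc/kubernetes/kubelet.conf instead:

```shell
# Decode the client-certificate-data field of a kubeconfig and print the
# embedded certificate's subject and expiry. The cert and config below are
# throwaway stand-ins generated on the fly; substitute your real kubelet.conf.
set -eu
workdir=$(mktemp -d)

# Throwaway self-signed client cert (stand-in for the kubeadm-issued one).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=system:node:demo" \
  -keyout "$workdir/client.key" -out "$workdir/client.crt" 2>/dev/null

# Minimal kubeconfig-shaped file embedding the cert, like kubeadm writes.
cat > "$workdir/kubelet.conf" <<EOF
users:
- name: demo
  user:
    client-certificate-data: $(base64 < "$workdir/client.crt" | tr -d '\n')
EOF

# The actual inspection pipeline: pull the field, decode it, show expiry.
grep 'client-certificate-data' "$workdir/kubelet.conf" \
  | awk '{print $2}' | base64 -d \
  | openssl x509 -noout -subject -enddate
```

Run against /etc/kubernetes/kubelet.conf or admin.conf on a master, the same pipeline tells you immediately whether the embedded client cert is the one that expired.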

Instead, the fix was to use kubeadm to regenerate kubelet.conf on all nodes, as well as admin.conf, controller-manager.conf, and scheduler.conf on the cluster's master node. You'll need /etc/kubernetes/pki/ca.key on each node in order for your config files to include valid data for client-certificate-data and client-key-data.

Pro tip: make use of the --apiserver-advertise-address parameter to ensure your new config files contain the correct IP address of the node hosting the kube-apiserver service.

Vide answered 19/4, 2018 at 20:10 Comment(4)
Could you please share steps for this part: "the fix was to use kubeadm to regenerate kubelet.conf on all nodes, as well as admin.conf, controller-manager.conf, and scheduler.conf on the cluster's master node."? Many thanks.Hirokohiroshi
On each node in my cluster, I ran : kubeadm alpha phase kubeconfig all --apiserver-advertise-address <APIServerIP> ... described more in depth here. I needed the 4 conf files that command generates ( admin.conf, kubelet.conf, controller-manager.conf, and scheduler.conf ) on the master node ... each of the other cluster nodes only needed kubelet.conf ...Vide
might be joining late to the party but I am currently stuck at how to generate these certificates on the worker nodes without disrupting whats been currently running on these nodes any suggestions please in this regardUgly
Please add steps. Also, for setups using older versions of kubeadm and Kubernetes this set of commands is not available; kubeadm alpha phase needs to be used insteadSahib
U
38

I think you need to regenerate the apiserver certificate, /etc/kubernetes/pki/apiserver.crt. You can view its current expiry date like this:

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep ' Not '
            Not Before: Dec 20 14:32:00 2017 GMT
            Not After : Dec 20 14:32:00 2018 GMT
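If you want a scriptable pass/fail instead of reading the dates above, openssl's -checkend flag exits nonzero when the certificate will have expired N seconds from now. A hedged, self-contained sketch using a throwaway cert valid for one day; on a master node you would point it at /etc/kubernetes/pki/apiserver.crt:

```shell
# Demonstrate `openssl x509 -checkend N`: exit status 0 if the cert is
# still valid N seconds from now, nonzero otherwise. The cert here is a
# throwaway stand-in so the snippet runs anywhere.
set -eu
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-apiserver" \
  -keyout "$dir/apiserver.key" -out "$dir/apiserver.crt" 2>/dev/null

# Still valid right now (exit status 0):
openssl x509 -checkend 0 -noout -in "$dir/apiserver.crt" \
  && echo "cert OK now"

# But it will have expired two days (172800 s) from now (exit status 1):
openssl x509 -checkend 172800 -noout -in "$dir/apiserver.crt" \
  || echo "cert expires within 2 days"
```

Wrapped in a cron job, this makes a cheap early-warning check so the expiry does not surprise you a year later.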

Here are the steps I used to regenerate the certificates on a v1.11.5 cluster, compiled from https://github.com/kubernetes/kubeadm/issues/581


To check all certificate expiry dates:

find /etc/kubernetes/pki/ -type f -name "*.crt" -print|egrep -v 'ca.crt$'|xargs -L 1 -t  -i bash -c 'openssl x509  -noout -text -in {}|grep After'
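The one-liner above can be tried safely against a throwaway directory before running it on the real /etc/kubernetes/pki/. A self-contained demo (the cert names are stand-ins, generated on the fly; GNU find/xargs assumed):

```shell
# Demo of the find/xargs expiry sweep, pointed at a temp pki directory
# populated with two throwaway self-signed certs instead of the real ones.
set -eu
pki=$(mktemp -d)
for name in apiserver front-proxy-client; do
  openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
    -subj "/CN=$name" \
    -keyout "$pki/$name.key" -out "$pki/$name.crt" 2>/dev/null
done

# Same pipeline as above, with the demo directory substituted:
find "$pki" -type f -name "*.crt" -print \
  | egrep -v 'ca.crt$' \
  | xargs -L 1 -t -i bash -c 'openssl x509 -noout -text -in {}|grep After'
```

Each cert produces one "Not After :" line; on a real master the same sweep shows at a glance which of the pki files is the expired one.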

Renew certificates on the master node.

*) Renew certificates

mv /etc/kubernetes/pki/apiserver.key /etc/kubernetes/pki/apiserver.key.old
mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.crt.old
mv /etc/kubernetes/pki/apiserver-kubelet-client.crt /etc/kubernetes/pki/apiserver-kubelet-client.crt.old
mv /etc/kubernetes/pki/apiserver-kubelet-client.key /etc/kubernetes/pki/apiserver-kubelet-client.key.old
mv /etc/kubernetes/pki/front-proxy-client.crt /etc/kubernetes/pki/front-proxy-client.crt.old
mv /etc/kubernetes/pki/front-proxy-client.key /etc/kubernetes/pki/front-proxy-client.key.old


kubeadm alpha phase certs apiserver  --config /root/kubeadm-kubetest.yaml
kubeadm alpha phase certs apiserver-kubelet-client
kubeadm alpha phase certs front-proxy-client
 
mv /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/apiserver-etcd-client.crt.old
mv /etc/kubernetes/pki/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.key.old
kubeadm alpha phase certs  apiserver-etcd-client


mv /etc/kubernetes/pki/etcd/server.crt /etc/kubernetes/pki/etcd/server.crt.old
mv /etc/kubernetes/pki/etcd/server.key /etc/kubernetes/pki/etcd/server.key.old
kubeadm alpha phase certs  etcd-server --config /root/kubeadm-kubetest.yaml

mv /etc/kubernetes/pki/etcd/healthcheck-client.crt /etc/kubernetes/pki/etcd/healthcheck-client.crt.old
mv /etc/kubernetes/pki/etcd/healthcheck-client.key /etc/kubernetes/pki/etcd/healthcheck-client.key.old
kubeadm alpha phase certs  etcd-healthcheck-client --config /root/kubeadm-kubetest.yaml


mv /etc/kubernetes/pki/etcd/peer.crt /etc/kubernetes/pki/etcd/peer.crt.old
mv /etc/kubernetes/pki/etcd/peer.key /etc/kubernetes/pki/etcd/peer.key.old
kubeadm alpha phase certs  etcd-peer --config /root/kubeadm-kubetest.yaml

*) Back up old configuration files
mv /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf.old
mv /etc/kubernetes/kubelet.conf /etc/kubernetes/kubelet.conf.old
mv /etc/kubernetes/controller-manager.conf /etc/kubernetes/controller-manager.conf.old
mv /etc/kubernetes/scheduler.conf /etc/kubernetes/scheduler.conf.old

kubeadm alpha phase kubeconfig all  --config /root/kubeadm-kubetest.yaml

mv $HOME/.kube/config $HOME/.kube/config.old
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
chmod 777 $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config

Reboot the node and check the logs for etcd, kubeapi and kubelet.

Note: Remember to update your CI/CD job kubeconfig file. If you're using the helm command, test that as well.

Uttermost answered 17/4, 2018 at 19:51 Comment(10)
Many thanks for the reply @Uttermost ... looks like my current /etc/kubernetes/pki/apiserver.crt has not yet expired: /etc/kubernetes/pki# openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep ' Not ' Not Before: Apr 13 14:03:16 2017 GMT Not After : Dec 13 12:13:33 2018 GMTVide
Ok looks like dashboard certificates may be expired. I am not sure about the location of this certUttermost
Thank you much @Uttermost ,I have the same issue, API Server key was expired, Could you explain little more on the steps 2 for sign the apiserver.csr and create apiserver.crt. Do I need to do anything on the nodes once I have apiserver.crt.Luteal
openssl x509 -req -sha256 -days 365 -in apiserver.csr -signkey ca.key -out apiserver.crt Could you confirm the second stepLuteal
There is new way to handle this in the github issue: just run kubeadm alpha certs renew allFreda
I had to use kubeadm init phase kubeconfig all --apiserver-advertise-address IP_ADDRESS instead of kubeadm alpha phase kubeconfig all taking into consideration I had 2 addresses for the master and I had to generate the configurations using the correct oneHesitate
there is a typo peert.crt.oldTannic
@Tannic May I know what is the typo your referring too.Uttermost
peert -> peer !?Tannic
I also have to run this export KUBECONFIG=/etc/kubernetes/admin.conf then kubectl workedVaivode
W
31

For anyone who stumbles upon this in the future and is running a newer version of Kubernetes (>1.17), this is probably the simplest way to renew your certs.

The following renews all certs, restarts kubelet, takes a backup of the old admin config and applies the new admin config:

kubeadm certs renew all
systemctl restart kubelet
cp /root/.kube/config /root/.kube/.old-$(date --iso)-config
cp /etc/kubernetes/admin.conf /root/.kube/config
Woodall answered 4/5, 2022 at 9:59 Comment(3)
thanks this resolved the issue!!! afterwards i used the new /etc/kubernetes/admin.conf in my case i was using the kind cluster which is docker based. so i had to go into the container first then use the kubeadm cli to generate. then service restart. cat the admin.conf and we have the updated admin access config !!Zillah
most elegant and precise answer!!!Empanel
this one works for me on version 1.24.0Dialectician
D
25

This topic is also discussed in:


Kubernetes v1.15 provides docs for "Certificate Management with kubeadm":

kubeadm alpha certs check-expiration
  • Automatic certificate renewal:
    • kubeadm renews all the certificates during control plane upgrade.
  • Manual certificate renewal:
    • You can renew your certificates manually at any time with the kubeadm alpha certs renew command.
    • This command performs the renewal using CA (or front-proxy-CA) certificate and key stored in /etc/kubernetes/pki.

For Kubernetes v1.14 I find this procedure the most helpful:

$ cd /etc/kubernetes/pki/
$ mv {apiserver.crt,apiserver-etcd-client.key,apiserver-kubelet-client.crt,front-proxy-ca.crt,front-proxy-client.crt,front-proxy-client.key,front-proxy-ca.key,apiserver-kubelet-client.key,apiserver.key,apiserver-etcd-client.crt} ~/
$ kubeadm init phase certs all --apiserver-advertise-address <IP>
  • back up and regenerate all kubeconfig files:
$ cd /etc/kubernetes/
$ mv {admin.conf,controller-manager.conf,kubelet.conf,scheduler.conf} ~/
$ kubeadm init phase kubeconfig all
$ reboot
  • copy new admin.conf:
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
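The mv {…} ~/ backup step above relies on bash brace expansion: the brace list expands into multiple source arguments with the final argument as the destination directory, so one command moves every file. A tiny illustration with throwaway file names (stand-ins, not the real pki contents; requires bash, not plain sh):

```shell
# Brace expansion turns `mv {a,b,c} dest/` into `mv a b c dest/` --
# one command backs up several files at once. Demo with empty stand-ins.
set -eu
src=$(mktemp -d); dst=$(mktemp -d)
cd "$src"
touch apiserver.crt apiserver.key front-proxy-client.crt   # stand-in files
mv {apiserver.crt,apiserver.key,front-proxy-client.crt} "$dst"/
ls "$dst"
```

The same expansion is why the admin.conf/controller-manager.conf/kubelet.conf/scheduler.conf line further up moves all four kubeconfig files in one go.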
Donate answered 1/8, 2019 at 12:14 Comment(2)
Added the essential quotes to the links. This also adds information relevant for the currently supported Kubernetes versions: v1.14 and v1.15.Donate
very helpful. but there's a typo in the mv {admin.conf... lineToomer
H
9

On k8s 1.7 I faced a similar problem (an x509 expired error inside /var/log/kube-apiserver.log) but could not find any expired certificate. We decided to restart only the apiserver Docker container on the master node, which resolved the problem.

$ sudo docker ps -a | grep apiserver
af99f816c7ec        gcr.io/google_containers/kube-apiserver@sha256:53b987e5a2932bdaff88497081b488e3b56af5b6a14891895b08703129477d85               "/bin/sh -c '/usr/loc"   15 months ago       Up 19 hours                                     k8s_kube-apiserver_kube-apiserver-ip-xxxxxc_0
40f3a18050c3        gcr.io/google_containers/pause-amd64:3.0                                                                                      "/pause"                 15 months ago       Up 15 months                                    k8s_POD_kube-apiserver-ip-xxxc_0
$ sudo docker restart af99f816c7ec
af99f816c7ec
$ 
Heron answered 17/11, 2018 at 0:1 Comment(0)
O
9

For version 1.21.5, this is my solution:

step 1:

SSH to the master node, then check the certificates in step 2.

step 2:

run this command: kubeadm certs check-expiration

root@kube-master-1:~# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration

CERTIFICATE                         EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                          Oct 21, 2022 16:05 UTC   <invalid>                               no      
apiserver                           Oct 21, 2022 16:05 UTC   <invalid>       ca                      no      
!MISSING! apiserver-etcd-client                                                                      
apiserver-kubelet-client            Oct 21, 2022 16:05 UTC   <invalid>       ca                      no      
controller-manager.conf             Oct 21, 2022 16:05 UTC   <invalid>                               no      
!MISSING! etcd-healthcheck-client                                                                    
!MISSING! etcd-peer                                                                                  
!MISSING! etcd-server                                                                                
front-proxy-client                  Oct 21, 2022 16:05 UTC   <invalid>       front-proxy-ca          no      
scheduler.conf                      Oct 21, 2022 16:05 UTC   <invalid>                               no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Oct 19, 2031 16:05 UTC   8y              no      
!MISSING! etcd-ca                                                
front-proxy-ca          Oct 19, 2031 16:05 UTC   8y              no      

and see that all of them expired yesterday.

step 3:

back up all existing certificates:

root@kube-master-1:~# cp -R /etc/kubernetes/ssl /etc/kubernetes/ssl.backup
root@kube-master-1:~# cp /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf.backup
root@kube-master-1:~# cp /etc/kubernetes/controller-manager.conf /etc/kubernetes/controller-manager.conf.backup
root@kube-master-1:~# cp /etc/kubernetes/kubelet.conf /etc/kubernetes/kubelet.conf.backup
root@kube-master-1:~# cp /etc/kubernetes/scheduler.conf /etc/kubernetes/scheduler.conf.backup

step 4:

To renew them all, run this command: kubeadm certs renew all

root@kube-master-1:~# kubeadm certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1023 15:15:16.234334 2175921 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.

step 5: the last line of step 4 gives us an important note:

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates

To complete this, run:

kubectl -n kube-system delete pod -l 'component=kube-apiserver'
kubectl -n kube-system delete pod -l 'component=kube-controller-manager'
kubectl -n kube-system delete pod -l 'component=kube-scheduler'
kubectl -n kube-system delete pod -l 'component=etcd'

step 6: then reboot the master node.

step 7: see the result:

root@kube-master-1:~# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1023 15:15:23.141925 2177263 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Oct 23, 2023 07:15 UTC   364d                                    no      
apiserver                  Oct 23, 2023 07:15 UTC   364d            ca                      no      
apiserver-kubelet-client   Oct 23, 2023 07:15 UTC   364d            ca                      no      
controller-manager.conf    Oct 23, 2023 07:15 UTC   364d                                    no      
front-proxy-client         Oct 23, 2023 07:15 UTC   364d            front-proxy-ca          no      
scheduler.conf             Oct 23, 2023 07:15 UTC   364d                                    no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Oct 19, 2031 16:05 UTC   8y              no      
front-proxy-ca          Oct 19, 2031 16:05 UTC   8y              no     

All of them renewed to 2023.

Orelle answered 23/10, 2022 at 8:7 Comment(1)
Once I restarted kubelet and docker service in master node after following the above steps, the issue got resolvedFrame
H
3

If you have already updated the certs, or they have been updated automatically, you have to restart the kube-apiserver on all master nodes.

Go to the masters and look for the container with docker ps | grep -i kube-apiserver

Kill the containers with docker kill, wait 10-15 seconds, and it should start working.

For me it solved it.

Horseweed answered 19/4, 2021 at 13:8 Comment(0)
I
1

This error can also occur in a microk8s environment, in which case your whole Kubernetes setup stops working. It happened for me after an upgrade & reboot of my Ubuntu dedicated server.

Unable to connect to the server: x509: certificate has expired or is not yet valid: current time 2022-04-02T16:38:24Z is after 2022-03-16T14:24:02Z

The solution is to ask microk8s to refresh its internal certificates, including the Kubernetes ones.

To list the expired certificates: sudo microk8s.refresh-certs -c

To renew a certificate: sudo microk8s.refresh-certs -e name-of-cert

Inanna answered 2/4, 2022 at 17:1 Comment(0)
O
1

Check cert expiry: kubeadm alpha certs check-expiration


Version 1.15 and below

Use this link: https://github.com/kubernetes/kubeadm/issues/581

Version 1.15 and till version 1.17

kubeadm alpha certs renew all

Version 1.17 and above

kubeadm certs renew all

Note:

If after certificate renewal you get the error "You must be logged in to the server (Unauthorized)" [don't forget to take a backup of the old certs and configs first]:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If any issue refer below link: https://www.ibm.com/docs/en/fci/1.1.0?topic=kubernetes-renewing-cluster-certificates

Ops answered 23/8, 2022 at 2:37 Comment(1)
I got that "you must be logged in" error. Thanks for the fix.Valence
R
0

I had this issue (microk8s - ubuntu 20.04.3) and updating the time fixed it:

sudo timedatectl set-ntp off
sudo timedatectl set-ntp on
Requirement answered 21/10, 2021 at 23:56 Comment(0)
G
0

You can use this command to check the expiry date:

kubectl get secret remote-certs -o json | jq -r '.data | ."remote.ca.crt"' | base64 -d | openssl x509 -noout -text | grep -A 2 -i validity

Validity
    Not Before: Dec  2 17:19:35 2021 GMT
    Not After : Dec  2 17:29:35 2022 GMT

Grab answered 25/2, 2022 at 9:59 Comment(0)
L
0

If you have an HA environment, you can just run the following command:

kubeadm certs renew all

But make sure you run it on each of the master nodes of your cluster. This will work for most cases.

For additional checks:

kubeadm certs check-expiration -v6 | grep "Config loaded"

The above should result in "Config loaded" from file kubelet.conf, admin.conf, controller-manager.conf and scheduler.conf.

Ex:

Config loaded from file: /etc/kubernetes/kubelet.conf

Hope that helps.

Lagomorph answered 23/11, 2023 at 11:48 Comment(0)
J
0
In newer versions of Kubernetes, I found this instead:

kubeadm certs renew all

Jacobsen answered 10/2 at 4:9 Comment(0)
P
0

This worked for me:

Go to C:/Users/YourUser/AppData/Local/Docker/pki and double-click on apiserver-etcd-client.crt; this will open a new window where you can click on "Install certificate". Then reboot your PC and it will work.

Papyrology answered 3/4 at 13:6 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.