How can kube-apiserver be restarted? [closed]

I restarted my system today. Since then, my web browser can no longer connect to the Kubernetes GUI (dashboard).

When I ran the command systemctl status kube-apiserver.service, it gives output as shown below:

kube-apiserver.service
  Loaded: not-found (Reason: No such file or directory)
  Active: inactive (dead)

How can the api-server be restarted?

Divulge answered 3/8, 2018 at 6:28 Comment(3)
Can you find your file kube-apiserver.service?Gargan
How did you provision the cluster?Gallivant
Yea, I could find the file kube-apiserver.service, and it was not active. By the way, I just restarted the container of the server and everything is working fine now.Divulge

Did you download and install the Kubernetes controller binaries directly?

1 ) If so, check if the kube-apiserver.service systemd unit file exists:

cat /etc/systemd/system/kube-apiserver.service
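
If that unit file exists, a plain systemd restart should be enough. A minimal sketch, assuming the service really is registered as kube-apiserver.service on your node:

sudo systemctl daemon-reload
sudo systemctl restart kube-apiserver.service
sudo systemctl status kube-apiserver.service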

2 ) If not, you probably installed K8S with kubeadm (*).
With this setup the kube-apiserver is running as a static pod on the master node:

kubectl get pods -n kube-system
NAME                                       READY   STATUS    
coredns-f9fd979d6-jsn6w                    1/1     Running  ..
coredns-f9fd979d6-tv5j6                    1/1     Running  ..
etcd-master-k8s                            1/1     Running  ..
kube-apiserver-master-k8s                  1/1     Running  .. #<--- Here
kube-controller-manager-master-k8s         1/1     Running  ..
kube-proxy-5kzbc                           1/1     Running  ..
kube-scheduler-master-k8s                  1/1     Running  ..

And not as a systemd service.

So, because you can't restart pods in K8S, you'll have to delete the pod:

kubectl delete pod/kube-apiserver-master-k8s -n kube-system

And a new pod will be created immediately.
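
If deleting the mirror pod does not actually kill the running container (see the comments below), you can restart the container directly through the container runtime instead. A sketch, assuming a CRI runtime with crictl available on the master node:

# find the kube-apiserver container ID
sudo crictl ps --name kube-apiserver
# stop it; the kubelet will recreate it from the static pod manifest
sudo crictl stop <container-id>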


(*) When you run kubeadm init you should see the creation of the manifests for the control plane static Pods:

.
. 
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
.
.

The corresponding yamls:

ubuntu@master-k8s:/etc/kubernetes/manifests$ ls -la
total 24
drwxr-xr-x 2 root root 4096 Oct 14 00:13 .
drwxr-xr-x 4 root root 4096 Sep 29 02:30 ..
-rw------- 1 root root 2099 Sep 29 02:30 etcd.yaml
-rw------- 1 root root 3863 Oct 14 00:13 kube-apiserver.yaml <----- Here
-rw------- 1 root root 3496 Sep 29 02:30 kube-controller-manager.yaml
-rw------- 1 root root 1384 Sep 29 02:30 kube-scheduler.yaml

And the kube-apiserver spec:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.100.102.5:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=10.100.102.5
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    .
    .
    .
Eufemiaeugen answered 13/10, 2020 at 22:26 Comment(5)
That is not correct. You cannot run kubectl delete pod/kube-apiserver-master-k8s -n kube-system to restart the kube-apiserver container. This will delete the pod, but the container will remain running. The pod will be recreated immediately, that's correct, but the running container is reassigned to the new pod without its process being killed! Edit: If you want to restart the kube-apiserver, you have to kill the container itself, via docker or crictl for example.Ossie
I'm not sure about what you wrote: "This will delete the pod. The container will remain running". If you delete a pod, the container inside it will be deleted. In K8S you can't control and manage containers without pods; this is how K8s works.Eufemiaeugen
Maybe for Deployments, StatefulSets, etc. Today I tested this against CRI-O. The delete command recreated the apiserver pod, but the container was still running, the same process as before. This gave me a big headache, because I thought exactly what you wrote. But my apiserver container mounts a file that is not controlled by the manifests folder, and a change to that configuration was not applied. Only restarting the container via CRI-O helped, because the process was killed. The apiserver container process came up again immediately, because of the kubelet, I guess.Ossie
I admit I do not understand how the following line makes sense: "The delete command recreated the apiserver pod but the container was still running. The same process, as before." (:Eufemiaeugen
Can confirm what @Ossie said. It's very weird, but deleting the pod keeps the container running and a new pod is created around it. Not sure if this is something specific to static pods or just a bug, but it also baffled me until I saw @Nortol's comment. I used crictl stop to kill the container directly and it worked.Excrete

Move the kube-apiserver manifest file from the /etc/kubernetes/manifests folder to a temporary folder. The advantage of this method is that the kube-apiserver stays stopped for as long as the file is out of the manifests folder; moving it back brings it up again.

vagrant@master01:~$ ll /etc/kubernetes/manifests/
total 16
-rw------- 1 root root 3315 May 12 23:24 kube-controller-manager.yaml
-rw------- 1 root root 1384 May 12 23:24 kube-scheduler.yaml
-rw------- 1 root root 2157 May 12 23:24 etcd.yaml
-rw------- 1 root root 3792 May 20 00:08 kube-apiserver.yaml
vagrant@master01:~$ sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
vagrant@master01:~$ 
vagrant@master01:~$ ll /etc/kubernetes/manifests/
total 12
-rw------- 1 root root 3315 May 12 23:24 kube-controller-manager.yaml
-rw------- 1 root root 1384 May 12 23:24 kube-scheduler.yaml
-rw------- 1 root root 2157 May 12 23:24 etcd.yaml

API Server is down now:

vagrant@master01:~$ k get pods -n kube-system
The connection to the server 10.0.0.2:6443 was refused - did you specify the right host or port?
vagrant@master01:~$ 

vagrant@master01:~$ sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/
vagrant@master01:~$ 
vagrant@master01:~$ ll /etc/kubernetes/manifests/
total 16
-rw------- 1 root root 3315 May 12 23:24 kube-controller-manager.yaml
-rw------- 1 root root 1384 May 12 23:24 kube-scheduler.yaml
-rw------- 1 root root 2157 May 12 23:24 etcd.yaml
-rw------- 1 root root 3792 May 20 00:08 kube-apiserver.yaml

API Server is up now:

vagrant@master01:~$ k get pods -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-558bd4d5db-269lt           1/1     Running   5          8d
coredns-558bd4d5db-967d8           1/1     Running   5          8d
etcd-master01                      1/1     Running   6          8d
kube-apiserver-master01            0/1     Running   2          24h
kube-controller-manager-master01   1/1     Running   8          8d
kube-proxy-q8mkb                   1/1     Running   5          8d
kube-proxy-x6trg                   1/1     Running   6          8d
kube-proxy-xxph9                   1/1     Running   8          8d
kube-scheduler-master01            1/1     Running   8          8d
weave-net-rh2gb                    2/2     Running   18         8d
weave-net-s2cg9                    2/2     Running   14         8d
weave-net-wksk2                    2/2     Running   11         8d
vagrant@master01:~$ 
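
This works because the kubelet watches that folder as its static pod path. To double-check which path your kubelet is using (assuming the default kubeadm layout, with the kubelet config at /var/lib/kubelet/config.yaml):

vagrant@master01:~$ grep staticPodPath /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifests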
Stadia answered 21/5, 2021 at 0:5 Comment(0)

I had a similar issue but did something simple to work around it. I think the command is just systemctl status kube-apiserver.

If the above works, please try these steps:

On Master:

Restart all of these services: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, flanneld

On Worker/Node:

Restart all of these services: kube-proxy, kubelet, flanneld, docker

E.g:

systemctl restart kube-controller-manager
systemctl enable kube-controller-manager
systemctl status kube-controller-manager
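
To restart the whole set in one go, a small loop can help. This is only a sketch for a binary install where each component runs as its own systemd unit; the unit names may differ on your setup:

for svc in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
  sudo systemctl restart "$svc"
  sudo systemctl status "$svc" --no-pager --lines=0
done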

Note: if a node is both master and worker, restart both sets of services on that node.

The above steps worked for me (but we are running 1.7). Hope that helps.

Jibe answered 9/8, 2018 at 9:15 Comment(2)
That just gives the status of my clusters.Divulge
Earlier I was just curious to see why the command was not working. Now it seems the command is working; if so, follow the steps above in the modified post.Jibe

You can restart the api-server using:

systemctl restart kube-apiserver.service

However, if you don't want to SSH into a controller node, run the following command:

kubectl -n kube-system delete pod -l 'component=kube-apiserver'
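
This works because kubeadm labels the static pod with component=kube-apiserver (see the manifest shown in the answer above). You can check what the selector matches before deleting:

kubectl -n kube-system get pods -l 'component=kube-apiserver' -o wide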
Studio answered 9/8, 2020 at 14:53 Comment(0)
