kubelet saying node "master01" not found [closed]

I am trying to set up a stacked kubeadm cluster with three masters, and I get this error from my init command:

[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

But I am not using cgroupfs; the cgroup driver is systemd (how I verified that is shown after the log below). And my kubelet complains that it does not know its node name:

Jan 23 14:54:12 master01 kubelet[5620]: E0123 14:54:12.251885    5620 kubelet.go:2266] node "master01" not found
Jan 23 14:54:12 master01 kubelet[5620]: E0123 14:54:12.352932    5620 kubelet.go:2266] node "master01" not found
Jan 23 14:54:12 master01 kubelet[5620]: E0123 14:54:12.453895    5620 kubelet.go:2266] node "master01" not found
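
For completeness, this is how I checked the cgroup driver on both sides; the paths are the default kubeadm and Docker locations on my nodes, so they may differ on yours:

    # cgroup driver that the Docker daemon is actually using
    docker info 2>/dev/null | grep -i 'cgroup driver'

    # cgroup driver handed to the kubelet by kubeadm
    # (depending on the version it lands in one of these two files)
    grep -i cgroup /var/lib/kubelet/config.yaml /var/lib/kubelet/kubeadm-flags.env

    # Docker daemon config forcing the systemd driver
    # expected to contain: "exec-opts": ["native.cgroupdriver=systemd"]
    cat /etc/docker/daemon.json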

Please let me know where the issue is.

Sinclare answered 23/1, 2019 at 15:0 Comment(1)
What version of Docker and Kubernetes are you running? – Individualism

The issue can be caused by the Docker version: the latest Kubernetes release, v1.13.x, only supports Docker versions up to 18.06.

I actually hit the same issue, and it was resolved after downgrading Docker from 18.09 to 18.06.
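
For reference, on an apt-based node the downgrade looks roughly like this; the 18.06 package string below is only an example, take the exact one for your distribution from the madison output:

    # list the Docker versions available from your repository
    apt-cache madison docker-ce

    # downgrade to an 18.06 build (example version string, adjust to your repo)
    apt-get install -y --allow-downgrades docker-ce=18.06.1~ce~3-0~ubuntu

    # keep apt from upgrading Docker again afterwards
    apt-mark hold docker-ce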

Individualism answered 23/1, 2019 at 17:33 Comment(0)

If the problem is not related to Docker, it might be because the kubelet service failed to establish a connection to the API server.

I would first of all check the status of the kubelet with systemctl status kubelet and consider restarting it with systemctl restart kubelet.

If this doesn't help, try re-installing kubeadm or running kubeadm init with another version (use the --kubernetes-version=X.Y.Z flag), as sketched below.
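
A rough sketch of that check/restart/re-init cycle; X.Y.Z is the same placeholder as above, pick a version your kubelet and kubectl packages match:

    # is the kubelet running, and what is it logging?
    systemctl status kubelet
    journalctl -u kubelet --no-pager | tail -n 50

    # restart it and watch whether the "node not found" messages stop
    systemctl restart kubelet

    # if the control plane still never comes up, wipe the failed attempt
    # and re-initialize with an explicit version (replace X.Y.Z, e.g. 1.19.4)
    kubeadm reset -f
    kubeadm init --kubernetes-version=X.Y.Z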

Seethrough answered 22/10, 2020 at 0:17 Comment(0)

In my case, my Kubernetes version was 1.21.1 and my Docker version was 19.03. I solved this bug by upgrading Docker to version 20.10.
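
Once the API server responds, a quick generic check of what runtime and version each node actually reports is:

    # the CONTAINER-RUNTIME column shows e.g. docker://20.10.x per node
    kubectl get nodes -o wide

    # and on the node itself, the installed Docker engine version
    docker version --format '{{.Server.Version}}'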

Urbannal answered 5/7, 2021 at 9:45 Comment(0)

If you're using an HAProxy setup, configure port 6443 in the config file at /etc/haproxy/haproxy.cfg; the relevant settings are below.

frontend kubernetes
        bind 10.182.0.2:6443
        option tcplog
        mode tcp
        default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
        mode tcp
        balance roundrobin
        option tcp-check
        server kubernetes-master-1 10.182.0.7:6443 check fall 3 rise 2
        server kubernetes-master-2 10.182.0.8:6443 check fall 3 rise 2
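
After reloading HAProxy you would then point kubeadm at the frontend address instead of at a single master; a minimal sketch, reusing the 10.182.0.2:6443 bind address from above:

    # apply the new HAProxy configuration
    systemctl restart haproxy

    # quick connectivity check against the frontend
    # (only meaningful once at least one API server is up)
    curl -k https://10.182.0.2:6443/healthz

    # initialize the first control-plane node against the load balancer
    kubeadm init --control-plane-endpoint "10.182.0.2:6443" --upload-certs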
Chamorro answered 4/2 at 16:40 Comment(0)
