kubectl : Unable to connect to the server : dial tcp 192.168.214.136:6443: connect: no route to host

I recently installed Kubernetes on VMware and configured a few pods. During that configuration the pods automatically picked up the IP of the VM. I was able to access the application at the time, but I recently rebooted both the VM and the machine hosting it. I suspect the VM's IP changed during this, and now I get the error below when running kubectl get pod -n <namespaceName>:

userX@ubuntu:~$ kubectl get pod -n NameSpaceX
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host

userX@ubuntu:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host

kubectl cluster-info and other related commands give the same output. In the VMware Workstation settings we are using the network adapter that shares the host's IP address; we are not sure whether that has any impact.

We also tried adding the entries below to /etc/hosts, but it did not help:

127.0.0.1       localhost
192.168.214.136 localhost
127.0.1.1       ubuntu

I expect to get the pods running again so the application is accessible. Instead of reinstalling all pods, which is time-consuming, we are looking for a quick workaround to bring the pods back to the Running state.

Giusto answered 20/5, 2019 at 11:46 Comment(0)

If you use minikube, sometimes all you need is to restart it.

Run: minikube start

Superfetation answered 13/8, 2020 at 14:1 Comment(3)
Why does this solve the issue? – Cheerful
Such a silly mistake that I didn't check whether minikube had started or not – Viv
Thanks for your existence. Live longer. – Radburn

I encountered the same issue - the problem was that the master node didn't expose port 6443 externally.

Below are the steps I took to fix it.

1) Check the IP of the api-server.
This can be verified via the .kube/config file (under the server field) or with:
kubectl describe pod/kube-apiserver-<master-node-name> -n kube-system

2) Run curl https://<kube-apiserver-IP>:6443 and see if port 6443 is open.

3) If port 6443 is open, you should get a certificate-related error like:

curl: (60) SSL certificate problem: unable to get local issuer certificate

4) If port 6443 is not open:
4.A) SSH into the master node.
4.B) Run sudo firewall-cmd --add-port=6443/tcp --permanent (I'm assuming firewalld is installed).

4.C) Run sudo firewall-cmd --reload.

4.D) Run sudo firewall-cmd --list-all and you should see that port 6443 is now listed:

public
  target: default
  icmp-block-inversion: no
  interfaces: 
  sources: 
  services: dhcpv6-client ssh
  ports: 6443/tcp <---- Here
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:
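Step 1 above reads the api-server address out of the kubeconfig; a minimal sketch of extracting it with awk (the kubeconfig fragment below is a hypothetical sample written to /tmp for illustration - on a real machine you would read ~/.kube/config):

```shell
# Hypothetical kubeconfig fragment, for illustration only.
cat > /tmp/demo-kubeconfig <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://192.168.214.136:6443
  name: kubernetes
EOF

# Pull the api-server URL from the "server:" field (step 1).
api_server=$(awk '/server:/ {print $2}' /tmp/demo-kubeconfig)
echo "$api_server"   # prints https://192.168.214.136:6443
```

The host and port printed here are what you would then probe with curl in step 2.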
Davy answered 30/9, 2020 at 1:11 Comment(1)
I did open port 6443 on the master node but it was still not working; sudo firewall-cmd --reload saved the day in the end. Thanks! – Carlenecarleton

The common practice is to copy the kubeconfig file to your home directory:

mkdir -p $HOME/.kube && sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config

Also, make sure that the api-server address in it is valid:

server: https://<master-node-ip>:6443

If not, you can edit it manually using any text editor.
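The edit can also be scripted; a sketch using sed on a throwaway copy (both IPs below are illustrative placeholders - and note that kubeadm-generated certificates embed the original address, so changing the IP in the config alone may not be enough):

```shell
# Work on a demo copy of the config; IPs are illustrative placeholders.
demo_cfg=/tmp/demo-config
printf 'server: https://192.168.214.136:6443\n' > "$demo_cfg"

old_ip=192.168.214.136   # stale address baked into the config
new_ip=192.168.214.140   # the VM's current address
sed -i "s/$old_ip/$new_ip/" "$demo_cfg"
cat "$demo_cfg"   # prints server: https://192.168.214.140:6443
```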

Hylton answered 24/6, 2019 at 9:5 Comment(0)

You need to point kubectl at the admin.conf file via the KUBECONFIG environment variable (note the uppercase name - environment variables are case-sensitive) before running kubectl commands:

export KUBECONFIG=<path>/admin.conf

After this you should be able to run kubectl commands. This assumes your K8s cluster setup is otherwise correct.

Coolant answered 20/5, 2019 at 13:24 Comment(3)
Tried, no luck; I see even admin.conf has the same old IP present. Does updating it directly using a vi editor work (replace the IP with localhost)? – Giusto
I am not sure if that will work. You can give it a try! – Coolant
@Giusto you cannot use localhost as the api-server address, because all your generated certificates will become invalid. – Hylton

To all those who are trying to learn and experiment with Kubernetes using Ubuntu on Oracle VM:

The IP address assigned to the guest OS/VM depends on the network adapter selection. Based on your adapter selection, you need to configure the settings either in the Oracle VM network section or in your router settings.

See this link for the most common Oracle VM network adapter modes:

https://www.nakivo.com/blog/virtualbox-network-setting-guide/

I was using the bridged adapter, which puts the VM and host OS on the same network. So my router was randomly assigning an IP to my VM after every restart, my cluster stopped working, and I was getting the exact error message posted in the question.

> k get pods -A
> Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
> systemctl status kubelet
> ........
> ........     "Error getting node" err="node \"node\" not found"

The cluster started working again after reserving a static IP address for my VM in the router settings. (If you are using the NAT adapter, configure this in the VM network settings.)

When you reserve an IP address for your VM, make sure to assign the same old IP address that was used when configuring the kubelet.
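That mismatch can be spotted quickly; a minimal sketch comparing the api-server IP the node was joined with against the VM's current IP (the config line and both IPs below are hypothetical samples - on a real kubeadm node the file is /etc/kubernetes/kubelet.conf, and the current IP would come from hostname -I):

```shell
# Hypothetical kubelet kubeconfig line, for illustration only.
cat > /tmp/demo-kubelet.conf <<'EOF'
    server: https://192.168.214.136:6443
EOF

# Split on "/" and ":" so the 5th field is the bare IP.
joined_ip=$(awk -F'[/:]' '/server:/ {print $5}' /tmp/demo-kubelet.conf)
current_ip=192.168.214.150   # e.g. from: hostname -I | awk '{print $1}'

if [ "$joined_ip" != "$current_ip" ]; then
  echo "IP changed: kubelet expects $joined_ip but VM now has $current_ip"
fi
```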

Mulvaney answered 19/11, 2022 at 21:56 Comment(0)

If you are getting the error below, also check the token validity:

Unable to connect to the server: dial tcp 192.168.93.10:6443: connect: no route to host

Check the token validity with the command kubeadm token list. If your token has expired, you have to reset the cluster using kubeadm reset and then initialize it again with kubeadm init --token-ttl 0 (a TTL of 0 creates a token that never expires).

Then check the token status again with kubeadm token list. Note that the TTL value will be <forever> and the EXPIRES value <never>.

Example:

[root@master1 ~]# kubeadm token list
TOKEN                     TTL         EXPIRES   USAGES                   DESCRIPTION                                                EXTRA GROUPS
nh48tb.d79ysdsaj8bchms9   <forever>   <never>   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
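The TTL column of that output can also be checked from a script; a sketch parsing a saved copy of the sample output above (the file and token value are illustrative):

```shell
# Sample `kubeadm token list` output (abridged from above), saved for parsing.
cat > /tmp/demo-tokens.txt <<'EOF'
TOKEN                     TTL         EXPIRES
nh48tb.d79ysdsaj8bchms9   <forever>   <never>
EOF

# TTL is the 2nd column of the first data row; "<forever>" means it never expires.
ttl=$(awk 'NR==2 {print $2}' /tmp/demo-tokens.txt)
echo "$ttl"   # prints <forever>
```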
Diplomate answered 9/4, 2022 at 19:25 Comment(0)

Last night I had the exact same error installing Kubernetes using this puppet module: https://forge.puppet.com/puppetlabs/kubernetes

It turned out to be an incorrect iptables setting on the master that blocked all non-local requests towards the api-server.

The way I solved it (a brute-force solution):

  1. Completely remove all installed k8s-related software (also all config files, etcd data, docker images, mounted tmpfs filesystems, ...)
  2. Wipe iptables completely: https://serverfault.com/questions/200635/best-way-to-clear-all-iptables-rules
  3. Reinstall

This is what solved the problem in my case.

There is probably a much nicer and cleaner way to do this (i.e. simply change the iptables rules to allow access).

Advertisement answered 18/12, 2019 at 8:50 Comment(0)

[Screenshot: Ubuntu 22.04 LTS]

Select the docker-desktop context (kubectl config use-context docker-desktop) and run your command again, e.g. kubectl apply -f <myimage.yaml>

Fruiter answered 24/8, 2022 at 11:44 Comment(0)

Run the minikube start command. The reason is that your minikube cluster (with the docker driver) stopped when you shut down the system.

Treadmill answered 21/9, 2022 at 12:11 Comment(1)
Your answer could be improved with additional supporting information. Please edit to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers in the help center. – Cirrate

© 2022 - 2024 — McMap. All rights reserved.