TL;DR: check that your ~/.kube/config file is correct with the kubectl config get-contexts $(kubectl config current-context) command.
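With a healthy configuration, that command prints the active context. The names below are placeholders, not values from your file:

$ kubectl config get-contexts $(kubectl config current-context)
CURRENT   NAME         CLUSTER      AUTHINFO   NAMESPACE
*         my-context   my-cluster   my-user    default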
Explanation:
The following error messages (saying that some core resource type is not known) are usually caused by a missing current-context entry in the kubectl configuration file (~/.kube/config):
the server doesn't have a resource type "nodes"
the server doesn't have a resource type "pods"
the server doesn't have a resource type "services"
...
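For reference, current-context is a top-level entry of the file. Here is a minimal sketch of a valid ~/.kube/config; every name and URL below is a placeholder, not something from your cluster:

$ cat ~/.kube/config
apiVersion: v1
kind: Config
current-context: my-context   # <- the entry that must be present
clusters:
- name: my-cluster
  cluster:
    server: https://my-cluster.example.com:6443
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
users:
- name: my-user
  user: {}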
When current-context is missing, all contexts in the file are ignored and kubectl falls back to connecting to localhost:8080. If anything else happens to be listening on port 8080 on your machine, kubectl gets an answer that is not a Kubernetes API response and reports this misleading error message.
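A quick way to confirm which server kubectl actually targets is the client verbosity flag: at -v=6 and above, kubectl logs each HTTP request it sends (the exact log format varies with the kubectl version):

$ kubectl get nodes -v=6
(... request logs showing kubectl calling http://localhost:8080 ...)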
Steps to reproduce (from a working configuration):
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
(.... working access to your kubernetes cluster ...)
$ # EDIT ~/.kube/config file, and remove or comment the current-context entry
$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
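$ # So far nothing is listening on localhost:8080, hence "connection refused".
$ # A plain curl shows the same thing (exact message varies by curl version):
$ curl http://localhost:8080
curl: (7) Failed to connect to localhost port 8080: Connection refused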
$ docker run -d --name dummy_8080 -p 8080:80 nginx
(...)
$ kubectl get nodes
error: the server doesn't have a resource type "nodes"
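$ # kubectl performs API discovery against http://localhost:8080 and receives
$ # nginx's stock 404 page instead of Kubernetes discovery documents, hence
$ # the misleading "no resource type" error. The non-API answer is visible
$ # directly (HTML body elided):
$ curl -s http://localhost:8080/api
<html>
<head><title>404 Not Found</title></head>
(...)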
$ # EDIT ~/.kube/config file, and restore the current-context entry
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
(.... working access to your kubernetes cluster again ...)
$ docker rm -f dummy_8080
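Instead of editing the file by hand, you can also let kubectl restore the entry: kubectl config use-context writes current-context for you. my-context below is a placeholder; list the real names first:

$ kubectl config get-contexts -o name
my-context
$ kubectl config use-context my-context
Switched to context "my-context".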