Kubectl command throwing error: Unable to connect to the server: getting credentials: exec: exit status 2

I am doing a lab setup of EKS/kubectl, and after completing the cluster build, I run the following:

> kubectl get node

And I get the following error:
Unable to connect to the server: getting credentials: exec: exit status 2

Moreover, I am sure it is a configuration issue:

kubectl version
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help
aws: error: argument operation: Invalid choice, valid choices are:

create-cluster                           | delete-cluster                          
describe-cluster                         | describe-update                         
list-clusters                            | list-updates                            
update-cluster-config                    | update-cluster-version                  
update-kubeconfig                        | wait                                    
help                                    
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.1", GitCommit:"d224476cd0730baca2b6e357d144171ed74192d6", GitTreeState:"clean", BuildDate:"2020-01-14T21:04:32Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: getting credentials: exec: exit status 2

Please advise next steps for troubleshooting.

Drisko answered 21/1, 2020 at 0:19 Comment(0)
10

Please delete the cache folder present in

~/.aws/cli/cache
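
On macOS or Linux, for example, the cached credentials can be cleared with (assuming the default AWS CLI configuration location):

    rm -rf ~/.aws/cli/cache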

Lagena answered 20/9, 2020 at 20:54 Comment(0)
3

For me, running kubectl get nodes or kubectl cluster-info gave the following error:

Unable to connect to the server: getting credentials: exec: executable kubelogin not found

It looks like you are trying to use a client-go credential plugin that is not installed.

To learn more about this feature, consult the documentation available at:
      https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins

I did the following to resolve this.

  1. Deleted all of the contents inside ~/.kube/. In my case it's a Windows machine, so the path is C:\Users\nis\.kube. Here nis is the user name that I am logged in as.

  2. Ran the get credentials command as follows.

    az aks get-credentials --resource-group terraform-aks-dev --name terraform-aks-dev-aks-cluster --admin

Note the --admin flag at the end. Without it, I get the same error.

Now the above two commands work.

Reference: https://blog.baeke.info/2021/06/03/a-quick-look-at-azure-kubelogin/
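
As an alternative to --admin (which the comment below notes is discouraged), installing kubelogin and converting the kubeconfig should also clear the "executable kubelogin not found" error. This is only a sketch, assuming the Azure CLI is installed:

    az aks install-cli                         # installs kubectl and kubelogin
    az aks get-credentials --resource-group terraform-aks-dev --name terraform-aks-dev-aks-cluster
    kubelogin convert-kubeconfig -l azurecli   # reuse the az login token instead of the device-code flow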

Drinker answered 30/8, 2022 at 5:3 Comment(1)
It's bad practice to use admin credentials, and on a properly configured AKS cluster with local accounts disabled, the --admin parameter will fail. - Neurogenic
2

Do you have the kubectl configuration file ready?

Normally we put it under ~/.kube/config, and the file includes the cluster endpoint, certificate, contexts, admin users, and so on.

For further details, read this document: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
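
For EKS specifically, that document boils down to generating the file with the AWS CLI; roughly (region and cluster name below are placeholders):

    aws eks update-kubeconfig --region <region> --name <cluster-name>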

Lecherous answered 21/1, 2020 at 0:22 Comment(1)
On the money! Thank you for the assist. - Drisko
2

Make sure you have installed the AWS CLI.

https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
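
To confirm it is installed and on your PATH, you can run, for instance:

    aws --version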

Dissert answered 13/9, 2022 at 11:13 Comment(0)
2

In my case, as I am using Azure (not AWS), I had to install kubelogin, which resolved the issue.

"kubelogin" is a client-go credential (exec) plugin implementing azure authentication. This plugin provides features that are not available in kubectl. It is supported on kubectl v1.11+

Beethoven answered 16/1, 2023 at 13:24 Comment(0)
1

Can you check your ~/.kube/config file?

Assuming you have started a local cluster using minikube, if your config is available you should not be getting the server error.

Sample config file


    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority: /Users/singhvi/.minikube/ca.crt
        server: https://127.0.0.1:32772
      name: minikube
    contexts:
    - context:
        cluster: minikube
        user: minikube
      name: minikube
    current-context: minikube
    kind: Config
    preferences: {}
    users:
    - name: minikube
      user:
        client-certificate: /Users/singhvi/.minikube/profiles/minikube/client.crt
        client-key: /Users/singhvi/.minikube/profiles/minikube/client.key
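
To see what kubectl is actually reading from that file, you can check, for example:

    kubectl config current-context   # the context kubectl will use
    kubectl config view --minify     # only the active context's cluster and user settings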

Optimism answered 24/9, 2020 at 5:10 Comment(0)
1

In EKS you can retrieve your kubectl credentials using the following command:

% aws eks update-kubeconfig --name cluster_name
Updated context arn:aws:eks:eu-west-1:xxx:cluster/cluster_name in /Users/theofpa/.kube/config

You can retrieve your cluster name using:

% aws eks list-clusters
{
    "clusters": [
        "cluster_name"
    ]
}
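
Since the original error comes from the exec credential plugin, you can also test that step on its own; if this command fails, kubectl will most likely keep reporting the same credentials error (cluster_name as above):

    aws eks get-token --cluster-name cluster_name
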
Bottommost answered 8/11, 2022 at 8:16 Comment(0)
0

You need to update/recreate your local kubeconfig. In my case I deleted the whole ~/.kube/config and followed this tutorial:

https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html

Lesalesak answered 15/9, 2021 at 9:42 Comment(0)
0

I had the same problem. The issue was that my .aws/credentials file contained multiple profiles, and the user that had permissions on the EKS cluster (admin_test) wasn't the default one. So in my case, I made the admin_test user my default user in the CLI using an environment variable:

export AWS_PROFILE='admin_test'

After that, I checked the active identity with the command:

aws sts get-caller-identity

Finally, I was able to get the nodes with the kubectl get nodes command.

Reference: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
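
Alternatively, instead of exporting AWS_PROFILE in every shell, you can tie the kubeconfig to that profile when generating it; as a sketch (the cluster name is a placeholder), update-kubeconfig should then record the profile in the generated kubeconfig so kubectl uses it for authentication:

    aws eks update-kubeconfig --name <cluster-name> --profile admin_test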

Camion answered 6/11, 2022 at 23:22 Comment(0)
0

I had the same error and solved it by upgrading my AWS CLI to the latest version.
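
How to upgrade depends on how the CLI was installed; for a pip-based v1 install, for example:

    pip3 install --upgrade awscli
    aws --version   # confirm the new version

For AWS CLI v2, re-run the installer from the AWS documentation instead.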

Phile answered 31/1, 2023 at 0:22 Comment(0)
0

In Azure:

  1. Delete all contents of the ~/.kube/ folder

  2. Execute:

    sudo az aks install-cli

(https://learn.microsoft.com/en-us/answers/questions/1106601/aks-access-issue)

  3. Reconnect:

    az login

    az aks get-credentials -n {clustername} -g {resourcegroup}

This worked for me on Azure.

Product answered 6/3, 2023 at 17:10 Comment(0)
0

Simply updating the kubeconfig with the aws eks update-kubeconfig command worked for me:

aws eks --region ap-south-1 update-kubeconfig --name <cluster name> --profile <profile name>
Catalysis answered 6/6 at 4:46 Comment(0)
-9

Removing and recreating the ~/.aws/credentials file resolved this issue for me.

rm ~/.aws/credentials
touch ~/.aws/credentials
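
Note that an empty credentials file will not authenticate by itself; after recreating it you would typically repopulate it, for example with:

    aws configure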
Jameljamerson answered 20/11, 2020 at 22:51 Comment(0)
