couldn't get current server API group list: the server has asked for the client to provide credentials error: You must be logged in to the server

I created an EKS cluster and am trying to connect to it from my local CLI. For that, I installed the AWS CLI and provided the right credentials via 'aws configure'. The user I am using to connect to AWS has the EKS-related policy attached. Still, I am getting the following error ...

E0209 21:09:44.893284 2465691 memcache.go:238] couldn't get current server API group list: the server has asked for the client to provide credentials
E0209 21:09:45.571635 2465691 memcache.go:238] couldn't get current server API group list: the server has asked for the client to provide credentials
E0209 21:09:46.380542 2465691 memcache.go:238] couldn't get current server API group list: the server has asked for the client to provide credentials
E0209 21:09:47.105407 2465691 memcache.go:238] couldn't get current server API group list: the server has asked for the client to provide credentials
E0209 21:09:47.869614 2465691 memcache.go:238] couldn't get current server API group list: the server has asked for the client to provide credentials
error: You must be logged in to the server (the server has asked for the client to provide credentials)
Experiential answered 10/2, 2023 at 2:13 Comment(0)

Well, in my case the AWS keys with which I created the cluster and the ones with which I configured kubectl were different, i.e. two different AWS identities.

To give another user permission to access the control plane, follow this:

How do I resolve the error "You must be logged in to the server (Unauthorized)" when I connect to the Amazon EKS API server?

This solved my problem.
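
For reference, a quick way to check which identity each side is using (a sketch; the cluster name and region are placeholders):

# identity the AWS CLI (and therefore "aws eks get-token") is using
aws sts get-caller-identity

# regenerate the kubeconfig entry for the cluster with that identity
aws eks update-kubeconfig --name <EKS_CLUSTER_NAME> --region <REGION>

If the ARN shown differs from the identity that created the cluster, grant it access as described in the linked article.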

Burnish answered 18/3, 2023 at 4:51 Comment(0)

I was working with a different AWS account than usual, so I set the environment variables:

AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=

I do not need them to access other AWS accounts from my computer (credentials are managed differently), but I had to remove those variables to make the usual connection work again.

In similar scenarios you may need to either (see the sketch below):

  1. remove them as well, or
  2. change them to the correct values.
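
For example, something along these lines (a sketch; the profile name is a placeholder):

# drop the overriding environment variables so the usual credential chain is used again
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY

# ...or point AWS_PROFILE at the account that owns the cluster instead
export AWS_PROFILE=<profile-that-owns-the-cluster>

# confirm which identity is now in effect
aws sts get-caller-identity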
Gelt answered 30/10, 2023 at 12:0 Comment(0)

My problem was solved by commenting out cli_auto_prompt in the AWS profiles:

vi ~/.aws/config

[default]
region = us-west-2
# cli_auto_prompt = on

[profile <X>]
region = us-west-2
# cli_auto_prompt = on

Also, make sure to update the kubeconfig one more time after the above change. Be sure to use the correct cluster name and region, and also make sure the user logged in to your CLI has admin permissions in EKS RBAC.

aws eks update-kubeconfig --name <EKS_CLUSTER_NAME> --region us-west-2
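
If the cluster was created under a different profile, a hedged variant of the same command (the profile name is a placeholder) would be:

# regenerate the kubeconfig using the profile that matches the cluster creator
aws eks update-kubeconfig --name <EKS_CLUSTER_NAME> --region us-west-2 --profile <PROFILE_NAME>

# quick sanity check that authentication now works
kubectl get nodes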
Colossian answered 22/6, 2023 at 16:49 Comment(2)
aws eks update-kubeconfig --name <EKS_CLUSTER_NAME> --region us-west-2 fixes it. Thanks!Ruy
When you run the aws eks update-kubeconfig command specify the aws profile that will assume the same role used to create the eks cluster.Andaman

You are probably not set to the correct AWS account, i.e. the one in which the relevant EKS cluster was created.

Use "aws configure list" to verify which profile you are connected to (it is probably not the right one).

Use "aws configure" to set the correct account, or use the relevant AWS environment variables instead.

Expectorate answered 9/8, 2023 at 8:47 Comment(0)

Check this link: https://repost.aws/knowledge-center/eks-api-server-unauthorized-error

This worked for me. The issue was that the AWS CLI user was not the user that created the cluster; it was solved once the same user was used for both.
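
In short, the fix boils down to making the CLI identity match the cluster creator, or having the creator map the CLI identity into the cluster. A minimal sketch of the checks involved (commands only; the details are in the article):

# identity the AWS CLI is currently using
aws sts get-caller-identity

# as the cluster creator, inspect (or edit) the aws-auth ConfigMap to map additional IAM users/roles
kubectl describe configmap aws-auth -n kube-system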

Heartsick answered 8/9, 2023 at 19:11 Comment(1)
Your answer could be improved with additional supporting information. Please edit to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers in the help center.Classic

In my case, I got a similar error because of the cgroup driver. The systemd cgroup driver is Kubernetes' default nowadays, but if you don't use Docker and run containerd + runc instead, containerd defaults to the cgroupfs cgroup driver. The mismatch does not show up as an actual error, BUT it leads to the error above.

So basically I did not do THIS: https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd
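
For reference, the linked page boils down to telling containerd's runc runtime to use the systemd cgroup driver. A sketch of the relevant part of /etc/containerd/config.toml (assuming the default CRI plugin layout, config version 2):

# /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

followed by a restart of containerd:

sudo systemctl restart containerd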

Luminous answered 7/10, 2023 at 13:10 Comment(0)

I managed to resolve the same problem by granting public access to the API server endpoint (note: be careful about doing this in a production environment).

If you are using the AWS console: go to the cluster's Networking tab and select Manage endpoint access.

If you are using Terraform: set the Terraform module input cluster_endpoint_public_access to true.

As explained in the official AWS documentation, kubectl must reach the EKS cluster from an allowed network: either you connect from inside the VPC the cluster is located in, or you allow public access and set an allowed CIDR block.
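
If you prefer the CLI over the console or Terraform, a hedged equivalent (cluster name, region and CIDR block are placeholders) would be:

# enable public access to the API server endpoint, restricted to a CIDR block
aws eks update-cluster-config \
  --name <EKS_CLUSTER_NAME> \
  --region <REGION> \
  --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="<YOUR_IP>/32"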

Fokine answered 9/4 at 9:30 Comment(0)

In my case, I created the cluster using the root account, but my AWS CLI was configured with another, non-root account. You need to have the same account configured in the AWS CLI as the one that was used to create the cluster.

I didn't want to add the root user to the AWS CLI, so in my case I solved the error by adding the IAM user from my AWS CLI to the EKS > Clusters > YOUR_CLUSTER > Access > IAM access entries section.
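
A sketch of doing the same thing from the CLI (the account ID, user name and cluster name are placeholders; access entries also require the cluster's authentication mode to allow them):

# register the IAM user as an access entry on the cluster
aws eks create-access-entry \
  --cluster-name <EKS_CLUSTER_NAME> \
  --principal-arn arn:aws:iam::<ACCOUNT_ID>:user/<IAM_USER>

# attach an admin access policy so kubectl requests are authorized
aws eks associate-access-policy \
  --cluster-name <EKS_CLUSTER_NAME> \
  --principal-arn arn:aws:iam::<ACCOUNT_ID>:user/<IAM_USER> \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster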

Hyphenated answered 26/5 at 21:12 Comment(0)

The same error happened to me on k3d. It seems the certificates had expired. I tried this and it worked:

k3d kubeconfig get <name_of_cluster>
k3d kubeconfig merge <name_of_cluster> -d -u
k3d cluster stop <name_of_cluster> 
k3d cluster start <name_of_cluster> 
Bawd answered 24/2, 2023 at 20:59 Comment(0)
