I've already seen the post "kubectl error You must be logged in to the server (Unauthorized) when accessing EKS cluster" and followed several AWS guides, but still no success.
I'm creating a CI/CD pipeline, but CodeBuild is apparently not authorized to access the EKS cluster. I went to the CodeBuild service role and attached the following managed policies:
- AWSCodeCommitFullAccess
- AmazonEC2ContainerRegistryFullAccess
- AmazonS3FullAccess
- CloudWatchLogsFullAccess
- AWSCodeBuildAdminAccess
I also created and attached the following inline policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:*",
      "Resource": "*"
    }
  ]
}
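For reference, here is a sketch of the same setup done via the AWS CLI instead of the console. The role name is a placeholder, not my actual CodeBuild service role:

```shell
#!/bin/sh
# Sketch: attach the managed policies and the inline EKS policy to the
# CodeBuild service role. ROLE_NAME is an assumption -- substitute yours.
ROLE_NAME="codebuild-my-project-service-role"

# Write the inline policy document shown above to a file.
cat > eks-admin-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "eks:*", "Resource": "*" }
  ]
}
EOF

# The IAM calls need real credentials, so only run them if the CLI is present.
if command -v aws >/dev/null 2>&1; then
  for POLICY in AWSCodeCommitFullAccess AmazonEC2ContainerRegistryFullAccess \
      AmazonS3FullAccess CloudWatchLogsFullAccess AWSCodeBuildAdminAccess; do
    aws iam attach-role-policy --role-name "$ROLE_NAME" \
      --policy-arn "arn:aws:iam::aws:policy/$POLICY"
  done
  aws iam put-role-policy --role-name "$ROLE_NAME" \
    --policy-name eks-admin --policy-document file://eks-admin-policy.json
fi
echo "wrote eks-admin-policy.json"
```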
Afterwards I executed the following command in the terminal where I created the EKS cluster:
eksctl create iamidentitymapping --cluster <my_cluster_name> --arn <arn_from_the_codebuild_role> --group system:masters --username admin
Then I checked that the mapping was successfully added to aws-auth by running kubectl get configmap aws-auth -n kube-system -o yaml. It returned:
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::********:role/*********
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
      - system:masters
      rolearn: arn:aws:iam::*****:role/service-role/*******
      username: ******
  mapUsers: |
    []
kind: ConfigMap
metadata:
  creationTimestamp: "2021-11-10T07:37:06Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: *******
  uid: *********
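As a sanity check, here is a small sketch that pulls the rolearn values out of a mapRoles block, since as far as I understand the authenticator matches the incoming role against this exact string. The ARNs below are placeholders, not my real ones; against the live cluster I would feed in the output of kubectl get configmap aws-auth -n kube-system -o yaml:

```shell
#!/bin/sh
# Placeholder mapRoles content mirroring the redacted ConfigMap above.
cat > map-roles-sample.yml <<'EOF'
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::111122223333:role/eks-node-role
  username: system:node:{{EC2PrivateDNSName}}
- groups:
  - system:masters
  rolearn: arn:aws:iam::111122223333:role/service-role/codebuild-role
  username: admin
EOF

# List every mapped role ARN so I can eyeball the CodeBuild role's entry.
grep -o 'rolearn: .*' map-roles-sample.yml | sed 's/rolearn: //'
```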
I still get the Unauthorized error. Below is the buildspec.yml file:
version: 0.2
run-as: root
phases:
  install:
    commands:
      - echo Installing app dependencies...
      - chmod +x prereqs.sh
      - sh prereqs.sh
      - source ~/.bashrc
      - echo Check kubectl version
      - kubectl version --short --client
  pre_build:
    commands:
      - echo Logging in to Amazon EKS...
      - aws eks --region eu-west-2 update-kubeconfig --name <my-cluster-name>
      - echo Check config
      - kubectl config view
      - echo Check kubectl access
      - kubectl get svc
  post_build:
    commands:
      - echo Push the latest image to cluster
      - kubectl apply -n mattermost-operator -f mattermost-operator.yml
      - kubectl rollout restart -n mattermost-operator -f mattermost-operator.yml
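A debug step I could add to pre_build, before any kubectl call, to confirm which IAM identity the build actually authenticates as (assuming the aws CLI is on the build image's PATH):

```shell
#!/bin/sh
# Record the caller identity; in CodeBuild this should show the assumed
# CodeBuild service role rather than some other credential.
if command -v aws >/dev/null 2>&1; then
  aws sts get-caller-identity > identity-check.txt 2>&1 \
    || echo "sts call failed" > identity-check.txt
else
  echo "aws CLI unavailable" > identity-check.txt
fi
cat identity-check.txt
```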
EDIT:
Running kubectl config view in CodeBuild returns:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://**********eu-west-2.eks.amazonaws.com
  name: arn:aws:eks:eu-west-2:**********:cluster/<cluster_name>
contexts:
- context:
    cluster: arn:aws:eks:eu-west-2:**********:cluster/<cluster_name>
    user: arn:aws:eks:eu-west-2:**********:cluster/<cluster_name>
  name: arn:aws:eks:eu-west-2:**********:cluster/<cluster_name>
current-context: arn:aws:eks:eu-west-2:**********:cluster/<cluster_name>
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-west-2:**********:cluster/<cluster_name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - eu-west-2
      - eks
      - get-token
      - --cluster-name
      - <cluster_name>
      - --role
      - arn:aws:iam::*********:role/service-role/<codebuild_role>
      command: aws
      env: null
Running kubectl config view in the terminal where I created the EKS cluster returns:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: ***********eu-west-2.eks.amazonaws.com
  name: arn:aws:eks:eu-west-2:*******:cluster/<cluster_name>
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: *********eu-west-2.eks.amazonaws.com
  name: <cluster_name>.eu-west-2.eksctl.io
contexts:
- context:
    cluster: arn:aws:eks:eu-west-2:*******:cluster/<cluster_name>
    user: arn:aws:eks:eu-west-2:*******:cluster/<cluster_name>
  name: arn:aws:eks:eu-west-2:*******:cluster/<cluster_name>
- context:
    cluster: <cluster_name>.eu-west-2.eksctl.io
    user: ******@<cluster_name>.eu-west-2.eksctl.io
  name: ******@<cluster_name>.eu-west-2.eksctl.io
current-context: arn:aws:eks:eu-west-2:********:cluster/<cluster_name>
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-west-2:*******:cluster/<cluster_name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - eu-west-2
      - eks
      - get-token
      - --cluster-name
      - <cluster_name>
      command: aws
      env: null
      interactiveMode: IfAvailable
      provideClusterInfo: false
- name: ******@******.eu-west-2.eksctl.io
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - <cluster_name>
      command: aws-iam-authenticator
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      - name: AWS_DEFAULT_REGION
        value: eu-west-2
      interactiveMode: IfAvailable
      provideClusterInfo: false
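One difference I can see between the two kubeconfigs is the exec section: the CodeBuild one passes an extra --role argument to aws eks get-token, while my local one does not. A sketch of that diff, with placeholder values instead of the redacted ARNs:

```shell
#!/bin/sh
# exec args copied (with placeholders) from the CodeBuild kubeconfig above.
cat > codebuild-args.txt <<'EOF'
--region
eu-west-2
eks
get-token
--cluster-name
<cluster_name>
--role
arn:aws:iam::123456789012:role/service-role/<codebuild_role>
EOF

# exec args from the kubeconfig on the machine where I created the cluster.
cat > local-args.txt <<'EOF'
--region
eu-west-2
eks
get-token
--cluster-name
<cluster_name>
EOF

# Lines present only in the CodeBuild kubeconfig's exec args.
grep -v -x -f local-args.txt codebuild-args.txt > extra-args.txt
cat extra-args.txt
```

If I read that correctly, get-token in CodeBuild is being asked to assume that role on top of the credentials the build already has; I don't know whether that is related to the Unauthorized error.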
Does anybody have any ideas? :D