Connecting to existing EKS cluster using kubectl or eksctl
Asked Answered
D

3

10

I have created a Kubernetes cluster on EKS using eksctl create cluster. I am able to access everything, which is great.

However, my colleague has created another cluster, and I am wondering how I can generate or get a kubeconfig so that I can point kubectl at the cluster my colleague created.

Dialectic answered 3/5, 2020 at 14:47 Comment(0)
H
1

Accessing a private only API server

If you have disabled public access for your cluster's Kubernetes API server endpoint, you can only access the API server from within your VPC or a connected network. Here are a few possible ways to access the Kubernetes API server endpoint:

  • Connected network – Connect your network to the VPC with an AWS transit gateway or other connectivity option and then use a computer in the connected network. You must ensure that your Amazon EKS control plane security group contains rules to allow ingress traffic on port 443 from your connected network.

  • Amazon EC2 bastion host – You can launch an Amazon EC2 instance into a public subnet in your cluster's VPC and then log in to that instance via SSH to run kubectl commands. For more information, see Linux bastion hosts on AWS. You must ensure that your Amazon EKS control plane security group contains rules to allow ingress traffic on port 443 from your bastion host. For more information, see Amazon EKS security group considerations. A minimal sketch of this flow appears after this list.

    When you configure kubectl for your bastion host, be sure to use AWS credentials that are already mapped to your cluster's RBAC configuration, or add the IAM user or role that your bastion will use to the RBAC configuration before you remove endpoint public access. For more information, see Managing users or IAM roles for your cluster and Unauthorized or access denied (kubectl).

  • AWS Cloud9 IDE – AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser. You can create an AWS Cloud9 IDE in your cluster's VPC and use the IDE to communicate with your cluster. For more information, see Creating an environment in AWS Cloud9. You must ensure that your Amazon EKS control plane security group contains rules to allow ingress traffic on port 443 from your IDE security group. For more information, see Amazon EKS security group considerations.

    When you configure kubectl for your AWS Cloud9 IDE, be sure to use AWS credentials that are already mapped to your cluster's RBAC configuration, or add the IAM user or role that your IDE will use to the RBAC configuration before you remove endpoint public access. For more information, see Managing users or IAM roles for your cluster and Unauthorized or access denied (kubectl). Take a look here: eks-endpoints-access.
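
To make the bastion host option concrete, here is a minimal sketch; the key file, bastion address, and cluster name are hypothetical placeholders, and it assumes the bastion's IAM principal is already mapped in the cluster's RBAC configuration:

# On your workstation: SSH to the bastion inside the cluster's VPC
ssh -i my-key.pem ec2-user@<bastion-public-ip>

# On the bastion: write a kubeconfig entry for the cluster, then test
# connectivity to the private API endpoint from inside the VPC
aws eks update-kubeconfig --name <cluster-name> --region us-east-1
kubectl get nodes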

Isolation through multiple clusters

A possible alternative is to use multiple single-tenant Amazon EKS clusters. With this strategy, each tenant can use its own Kubernetes cluster, within a shared AWS account or in dedicated accounts within an Organization for large enterprises. Once the clusters are deployed, you might want an overview of all of them so you can monitor each tenant, make sure you are running the latest version of the EKS control plane, and operate at scale. Rancher is a popular open-source tool for managing multiple Kubernetes clusters; check out this article on the Open Source blog for details on how to deploy and use it.

Clusters in the same VPC

If your colleague's cluster is in the same VPC, I advise you to use AWS App Mesh. App Mesh is a service mesh that lets you control and monitor services spanning two clusters deployed in the same VPC.

Prerequisites

In order to successfully carry out the base deployment:

  • Make sure you have a recent AWS CLI installed, that is, version 1.16.268 or above.
  • Make sure you have kubectl installed, version 1.11 or above.
  • Make sure you have jq installed.
  • Make sure you have aws-iam-authenticator installed; it is required by eksctl.
  • Install eksctl, for example, on macOS with brew tap weaveworks/tap and brew install weaveworks/tap/eksctl, and make sure it is at least version 0.1.26.

Note that this walkthrough assumes you are operating in the us-east-1 Region throughout. A quick sanity check of these prerequisites is sketched below.
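
You can verify the prerequisite versions with each tool's standard version command:

aws --version                # expect aws-cli/1.16.268 or above
kubectl version --client     # expect v1.11 or above
jq --version
aws-iam-authenticator version
eksctl version               # expect 0.1.26 or above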

Assuming that both clusters are up and running, update the KUBECONFIG environment variable for each cluster according to the eksctl output. Run the following in separate terminal tabs, one per cluster:

export KUBECONFIG=~/.kube/eksctl/clusters/first-cluster 

export KUBECONFIG=~/.kube/eksctl/clusters/second-cluster

You have now set up access to the two clusters, with kubectl in each tab pointing at its respective cluster.
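
To confirm which cluster a given tab is pointing at, you can inspect the active context and list that cluster's nodes:

kubectl config current-context   # should name first-cluster or second-cluster
kubectl get nodes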

Now it is time to deploy the App Mesh custom components.

To automatically inject App Mesh components and proxies on pod creation, you need to create some custom resources on the clusters. Use Helm for that: install Tiller on both clusters, then run the Helm commands below on both clusters.

Download App Mesh repo

>> git clone https://github.com/aws/aws-app-mesh-examples.git
>> cd aws-app-mesh-examples/walkthroughs/howto-k8s-cross-cluster

Install Helm

>> brew install kubernetes-helm

Install tiller

Using Helm requires a server-side component called Tiller installed on the cluster. Follow the instructions in the Helm documentation to install Tiller on both clusters; a common sequence is sketched below.
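
One common sequence from the Helm 2 documentation gives Tiller its own service account with a cluster-admin binding; run it on both clusters:

kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
helm init --service-account tiller   # installs Tiller into kube-system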

Verify tiller installation

>> kubectl get po -n kube-system | grep -i tiller
tiller-deploy-6d65d78679-whwzn 1/1 Running 0 5h35m

Install App Mesh Components

Run the following set of commands to install the App Mesh controller and Injector components.

helm repo add eks https://aws.github.io/eks-charts
kubectl create ns appmesh-system
kubectl apply -f https://raw.githubusercontent.com/aws/eks-charts/master/stable/appmesh-controller/crds/crds.yaml
helm upgrade -i appmesh-controller eks/appmesh-controller --namespace appmesh-system
helm upgrade -i appmesh-inject eks/appmesh-inject --namespace appmesh-system --set mesh.create=true --set mesh.name=global
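
Before moving on, you can check that both components came up; you should see the appmesh-controller and appmesh-inject pods in the Running state:

kubectl get pods -n appmesh-system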

You are now ready to deploy the example front and colorapp applications to their respective clusters, along with the App Mesh, which will span both clusters.

Deploy services and mesh constructs

  1. You should be in the walkthroughs/howto-k8s-cross-cluster folder; all commands will be run from this location.

  2. Export your account id:

export AWS_ACCOUNT_ID=<your_account_id>

  3. Export the Region, e.g., us-east-1:

export AWS_DEFAULT_REGION=us-east-1

  4. Set the ENVOY_IMAGE environment variable to the App Mesh Envoy image; see Envoy.

export ENVOY_IMAGE=...

  5. Set the VPC_ID environment variable to the VPC where the Kubernetes pods are launched. The VPC will be used to set up a private DNS namespace in AWS using the create-private-dns-namespace API. To find the VPC of an EKS cluster, you can use aws eks describe-cluster (a sketch follows this list). See below for why an AWS Cloud Map PrivateDnsNamespace is required.

export VPC_ID=...

  6. Set the CLUSTER environment variables to the kubeconfig names:

export CLUSTER1=first-cluster
export CLUSTER2=second-cluster
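
If you do not have these values at hand, the account id and the VPC id can be looked up with the AWS CLI; the cluster name below is the one used earlier in this walkthrough:

# Account id of the current credentials
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

# VPC of an EKS cluster, queried from its description
export VPC_ID=$(aws eks describe-cluster --name first-cluster \
  --query "cluster.resourcesVpcConfig.vpcId" --output text)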

Deploy

./deploy.sh

Finally, remember to verify the deployment. You can find more information here: app-mesh-eks.
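
As a first check once deploy.sh finishes, you can list the pods it created in each cluster's tab. The namespace name here is an assumption based on the walkthrough's folder name, so adjust it if the script reports a different one:

# Hypothetical namespace, taken from the walkthrough folder name
kubectl get pods -n howto-k8s-cross-cluster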

Hobson answered 5/5, 2020 at 10:25 Comment(1)
If it helped you and solved your problem, please accept and upvote this answer so it is more visible to the future community. – Hobson
I
28

There are two ways you can get the kubeconfig:

  1. aws eks update-kubeconfig --name <clustername> --region <region>
  2. eksctl utils write-kubeconfig --cluster=<clustername>

This works provided the EKS cluster is in the same account and visible to you.

Once you get the kubeconfig, if you have access, you can start using kubectl.

If you don't have access, you need to ask the owner to grant your IAM user or role permissions on the cluster (see the sketch below).
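
For example, the cluster owner could map your IAM identity into the cluster's aws-auth configuration with eksctl. This is a sketch: the account id, user name, and group are placeholders, and system:masters grants full admin, so a narrower group may be more appropriate:

eksctl create iamidentitymapping \
  --cluster <clustername> \
  --arn arn:aws:iam::111122223333:user/your-user \
  --username your-user \
  --group system:masters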

Complete details are listed here.

Infra answered 12/8, 2020 at 1:20 Comment(0)
T
0

You can get an AWS IAM access key ID and secret access key that have permission to access each cluster, set up two AWS profiles, and use the following command to access each cluster:

aws eks update-kubeconfig --name cluster-name --profile aws-profilename

The above command will add the access details to the kubeconfig file and also set the current-context. After that, you can switch between clusters with kubectl:

kubectl config use-context <eks-cluster-arn>
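
Putting it together for two clusters, with hypothetical profile and cluster names:

# One named profile per set of credentials (prompts for key id / secret)
aws configure --profile colleague

# Write a kubeconfig entry for the colleague's cluster using that profile
aws eks update-kubeconfig --name second-cluster --profile colleague --region us-east-1

# List the stored contexts and switch between them as needed
kubectl config get-contexts
kubectl config use-context <eks-cluster-arn>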

Prerequisites:

  • aws-cli installed
  • kubectl installed
Tankage answered 7/5, 2020 at 6:45 Comment(1)
How does one get the profile name? – Karnak
