Using Cloud Shell to Access a Private Kubernetes Cluster in GCP

The page https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters describes how to set up a private GKE cluster in a separate custom VPC. The Terraform code that creates the cluster and the VPCs is available at https://github.com/rajtmana/gcp-terraform/blob/master/k8s-cluster/main.tf. Cluster creation completed, and I wanted to run some kubectl commands from the Google Cloud Shell. I used the following commands:

$ gcloud container clusters get-credentials mservice-dev-cluster --region europe-west2
$ gcloud container clusters update mservice-dev-cluster \
>     --region europe-west2 \
>     --enable-master-authorized-networks \
>     --master-authorized-networks "35.241.216.229/32"
Updating mservice-dev-cluster...done.
ERROR: (gcloud.container.clusters.update) Operation [<Operation
clusterConditions: []
detail: u'Patch failed'

$ gcloud container clusters update mservice-dev-cluster \
>     --region europe-west2 \
>     --enable-master-authorized-networks \
>     --master-authorized-networks "172.17.0.2/32"
Updating mservice-dev-cluster...done.
Updated [https://container.googleapis.com/v1/projects/protean-XXXX/zones/europe-west2/clusters/mservice-dev-cluster].
To inspect the contents of your cluster, go to:
https://console.cloud.google.com/kubernetes/workload_/gcloud/europe-west2/mservice-dev-cluster?project=protean-XXXX

$ kubectl config current-context
gke_protean-XXXX_europe-west2_mservice-dev-cluster

$ kubectl get services
Unable to connect to the server: dial tcp 172.16.0.2:443: i/o timeout

When I add the public IP of the Cloud Shell as an authorized network, the update fails with the "Patch failed" error shown above. When I add the internal IP of the Cloud Shell (starting with 172) instead, the update succeeds but the connection to the API server still times out. Any thoughts? Appreciate the help.
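
One useful first check is what the update actually wrote to the control plane. A minimal diagnostic sketch, reusing the cluster name and region from the commands above (the --format projection is standard gcloud output formatting):

# List the CIDR blocks currently authorized to reach the master
$ gcloud container clusters describe mservice-dev-cluster \
    --region europe-west2 \
    --format "yaml(masterAuthorizedNetworksConfig)"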

Inclination answered 15/3, 2019 at 12:30 Comment(0)

Google suggests creating a VM within the same network as the cluster, SSHing into it from Cloud Shell, and running kubectl commands from there: https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies
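
A minimal sketch of that setup; the instance, network, and subnet names here (k8s-bastion, mservice-vpc, mservice-subnet) are placeholders that need to match the VPC defined in the Terraform config:

# Create a small VM in the same VPC and subnet as the cluster nodes
$ gcloud compute instances create k8s-bastion \
    --zone europe-west2-a \
    --network mservice-vpc \
    --subnet mservice-subnet \
    --scopes cloud-platform

# SSH into it from Cloud Shell
$ gcloud compute ssh k8s-bastion --zone europe-west2-a

# On the bastion (install kubectl there if it isn't present), fetch
# credentials against the cluster's private endpoint and run kubectl
$ gcloud container clusters get-credentials mservice-dev-cluster \
    --region europe-west2 --internal-ip
$ kubectl get services

Because the VM lives inside the cluster's VPC, it can reach the private endpoint, which is exactly what the Cloud Shell cannot do.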

Vesicant answered 14/11, 2019 at 19:15 Comment(0)

Try performing the following:

gcloud container clusters get-credentials [CLUSTER_NAME]

Then confirm that kubectl is using the right credentials:

gcloud auth application-default login
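
To double-check which identity and cluster context are actually in play, both of these are standard:

# Show the active gcloud account
$ gcloud auth list

# Show which cluster context kubectl currently points at
$ kubectl config current-context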
Cote answered 19/3, 2019 at 12:5 Comment(4)
I have the exact same issue happening, and it's nothing to do with being authenticated, expired tokens, or credentials. (Cupola)
I've noticed that you have enable_private_endpoint = true configured. I suggest changing it to false; then you should be able to access the cluster with the public IP of the Cloud Shell. (Illene)
I need it private, hence the private endpoint has been enabled. (Inclination)
Thanks for the clarification. Since you need the private endpoint enabled, you will only be able to run kubectl commands from machines that are in the same VPC as the private GKE cluster. You cannot reach your cluster because the Cloud Shell is not part of your project's VPC. (Illene)
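
To confirm which case applies on a live cluster, the flag can be read back directly:

# Prints True when only the private endpoint is exposed
$ gcloud container clusters describe mservice-dev-cluster \
    --region europe-west2 \
    --format "value(privateClusterConfig.enablePrivateEndpoint)"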
