How to pass GKE credential to kubernetes provider with Terraform?

I've created a GKE cluster with Terraform, and I want to manage the Kubernetes resources with Terraform as well. However, I don't know how to pass the GKE cluster's credentials to the kubernetes provider.

I followed the example in the google_client_config data source documentation, but I got:

data.google_container_cluster.cluster.endpoint is null

Here is my failed attempt https://github.com/varshard/gke-cluster-terraform/tree/title-terraform

cluster.tf is responsible for creating a GKE cluster, which works fine.

kubernetes.tf is responsible for managing Kubernetes, which fails to get the GKE credentials.

Pediment asked 8/9, 2019 at 15:36

You don't need the google_container_cluster data source here at all because the relevant information is also in the google_container_cluster resource that you are creating in the same context.

Data sources are for accessing data about a resource that is created either entirely outside of Terraform or in a different Terraform context (e.g. a different state file in a different directory that is terraform apply'd).
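
For example, if the cluster really were created outside this configuration, you would look it up roughly like this (a sketch; the name and location values are placeholders):

# Only appropriate when the cluster is managed in a different Terraform context.
data "google_container_cluster" "existing" {
  name     = "some-existing-cluster"
  location = "us-central1"
}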

I'm not sure how you ended up in your current state, where the data source selects an existing container cluster and then a resource creates that same cluster from the data source's outputs. This is overcomplicated and slightly broken: if you destroyed everything and reapplied, it wouldn't work as is.

Instead, remove the google_container_cluster data source and amend your google_container_cluster resource to be:

resource "google_container_cluster" "cluster" {
  name     = "${var.project}-cluster"
  location = var.region

  # ...
}

And then refer to this resource in your kubernetes provider:

provider "kubernetes" {
  load_config_file = false
  host                   = "https://${google_container_cluster.cluster.endpoint}"
  cluster_ca_certificate = base64decode(google_container_cluster.cluster.master_auth.0.cluster_ca_certificate)
  token                  = data.google_client_config.current.access_token
}
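
Note that the token above comes from a google_client_config data source that isn't shown in this snippet; declare it alongside the provider block (the second answer below uses the same declaration):

# Provides an OAuth2 access token for the identity running Terraform.
data "google_client_config" "current" {}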
Droit answered 9/9, 2019 at 8:24

The answer to the above question is below:

  1. While creating the cluster, you need to use the kubernetes provider and the google_client_config data source.

Check my code below; it's working fine for me.

resource "google_container_cluster" "primary" {
  project  =   var.project_id
  name     = var.cluster-name
  location = var.region

  # Create with a throwaway default node pool, then remove it so node
  # pools can be managed as separate resources.
  remove_default_node_pool = true
  initial_node_count       = 1
}


data "google_client_config" "current" {}

provider "kubernetes" {
  host                   = "https://${google_container_cluster.primary.endpoint}"
  cluster_ca_certificate = base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)
  token                  = data.google_client_config.current.access_token
}
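
Once the provider is wired up like this, Kubernetes objects can be managed in the same configuration; for example, a minimal sketch (the namespace name is arbitrary):

resource "kubernetes_namespace" "example" {
  metadata {
    name = "example"
  }

  # Ensure the cluster exists before the provider tries to reach its endpoint.
  depends_on = [google_container_cluster.primary]
}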
Dextro answered 22/1, 2023 at 8:5
