How to import a generated Kubernetes cluster's namespace in Terraform

In my Terraform configuration files I create a Kubernetes cluster on GKE. Once the cluster is created, I set up a Kubernetes provider to access it and perform various actions, such as setting up namespaces.

The problem is that some new namespaces were created in the cluster outside of Terraform, and my attempts to import these namespaces into my state seem to fail due to an inability to connect to the cluster. I believe this is due to the following limitation (taken from Terraform's official documentation of the import command):

The only limitation Terraform has when reading the configuration files is that the import provider configurations must not depend on non-variable inputs. For example, a provider configuration cannot depend on a data source.

The command I used to import the namespaces is pretty straightforward:

terraform import kubernetes_namespace.my_new_namespace my_new_namespace

I also tried using the -provider="" and -config="" flags, but to no avail.

My Kubernetes provider configuration is this:

provider "kubernetes" {
  version = "~> 1.8"

  host  = module.gke.endpoint
  token = data.google_client_config.current.access_token

  cluster_ca_certificate = base64decode(module.gke.cluster_ca_certificate)
}

An example for a namespace resource I am trying to import is this:

resource "kubernetes_namespace" "my_new_namespace" {
  metadata {
    name = "my_new_namespace"
  }
}

The import command results in the following:

Error: Get http://localhost/api/v1/namespaces/my_new_namespace: dial tcp [::1]:80: connect: connection refused

It's obviously doomed to fail, since it is trying to reach localhost instead of the actual cluster endpoint and credentials.

Is there any workaround for this use case?

Thanks in advance.

Stull answered 10/9, 2019 at 7:06. Comments (7):
You could temporarily hardcode the provider config from the known outputs while you import the resources, and then revert your change when you're done. – Roadhouse
Can you reach your cluster via the API? Does kubectl get <something> work? – Offense
Still having this issue in 2021, if anyone has the answer that would be awesome... :D – Mechellemechlin
Yes, it seems it is a complete dead end to look for a sound solution. – Blepharitis
Is it grabbing localhost from your local kubectl kubeconfig? If you can do a gcloud container clusters get-credentials to generate a local kubeconfig, I believe the terraform import command will use your local kubeconfig/context. I'm guessing module.gke.endpoint isn't coming back as localhost, so it's getting it from somewhere... – Haldis
We used a glitch to do this: we applied, it failed with conflicts, and then the current token was in the state, active, and somehow used by the import. We haven't tried it for a while, so I'm not sure it still works that way ;) – Latria
If you were to replace the token with a hard-coded value instead of using a data source, what would happen? Of course, when the import is done you could revert it back to the data source. – Airwaves

The issue lies with the dynamic data source in the provider configuration: the import command doesn't have access to it, so the provider cannot be resolved at import time.

For the process of importing, you have to hardcode the provider values.

Change this:

provider "kubernetes" {
  version                = "~> 1.8"
  host                   = module.gke.endpoint
  token                  = data.google_client_config.current.access_token
  cluster_ca_certificate = base64decode(module.gke.cluster_ca_certificate)
}

to:

provider "kubernetes" {
  version                = "~> 1.8"
  host                   = "https://<ip-of-cluster>"
  token                  = "<token>"
  cluster_ca_certificate = base64decode(<cert>)
  load_config_file       = false
}
  • The token can be retrieved with gcloud auth print-access-token.
  • The IP and certificate can be retrieved by inspecting the created container resource with terraform state show module.gke.google_container_cluster.your_cluster_resource_name_here (see the sketch after this list).
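
For example, a minimal shell sketch of pulling those values out (the resource address and the grep filter are assumptions based on the question's module layout; adjust them to yours):

# Short-lived bearer token for the provider's token argument
gcloud auth print-access-token

# The endpoint and CA certificate appear among the cluster's state attributes
terraform state show module.gke.google_container_cluster.your_cluster_resource_name_here | grep -E 'endpoint|cluster_ca_certificate'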

For provider version 2+ you have to drop load_config_file.
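
As a hedged sketch, the equivalent hardcoded configuration for provider 2.x (which removed load_config_file and only reads a kubeconfig when config_path is set) would look like:

provider "kubernetes" {
  host                   = "https://<ip-of-cluster>"
  token                  = "<token>"
  cluster_ca_certificate = base64decode(<cert>)
}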

Once in place, import and revert the changes on the provider.
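
A sketch of the full round trip, using the resource and namespace names from the question:

# 1. With the provider temporarily hardcoded, run the import:
terraform import kubernetes_namespace.my_new_namespace my_new_namespace

# 2. Optionally confirm the namespace is now tracked in state:
terraform state show kubernetes_namespace.my_new_namespace

# 3. Restore the dynamic provider configuration, then check for a clean plan:
terraform plan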

Spanker answered 25/5, 2022 at 15:06. Comments (0)

(1) Create an entry in your kubeconfig file for your GKE cluster.

gcloud container clusters get-credentials cluster-name

see: https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#generate_kubeconfig_entry

(2) Point the Terraform Kubernetes provider at your kubeconfig:

provider "kubernetes" {
  config_path = "~/.kube/config"
}
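
A hedged sketch of the whole flow (cluster-name and the zone are placeholders; use --region instead for a regional cluster):

# Write an entry for the cluster into ~/.kube/config
gcloud container clusters get-credentials cluster-name --zone us-central1-a

# With config_path set, the provider reads that kubeconfig during the import
terraform import kubernetes_namespace.my_new_namespace my_new_namespace
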
Keciakeck answered 5/2, 2021 at 17:34. Comments (1):
This won't work, because import runs outside the context of plan or apply. – Blepharitis
