How do I change a k8s Deployment's matchLabels without downtime?

The Kubernetes Deployment kind doesn't allow changes to spec.selector.matchLabels after creation, so any new rollout (managed by Helm or otherwise) that wants to change those labels can't use the Deployment's built-in RollingUpdate strategy. What's the best way to roll out the new deployment without causing downtime?

Minimum example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
        - name: foo
          image: ubuntu:latest
          command: ["/bin/bash", "-ec", "sleep infinity"]

Apply this, then change the label (both spec.selector.matchLabels and spec.template.metadata.labels) from foo to foo2. If you try to apply the modified Deployment, k8s will complain (by design): The Deployment "foo" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"foo2"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable.

The only way I can think of right now is to use a new Deployment name so the new deployment does not try to patch the old one, and then delete the old one, with the ingress/load balancer resources handling the transition. Then we can redeploy with the old name, and delete the new name, completing the migration.
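
For illustration, the sequence I'm imagining is roughly the following (manifest file names are made up, and I haven't verified this end to end):

# 1. Create a copy of the Deployment under a temporary name, with the new labels/selector.
kubectl apply -f foo2-deployment.yaml
kubectl rollout status deployment/foo2

# 2. Once the new Pods are ready, remove the old Deployment. The Service/Ingress
#    has to select Pods from both generations (or be switched over) at this point.
kubectl delete deployment foo

# 3. Re-create the Deployment under the original name with the new labels,
#    then remove the temporary copy.
kubectl apply -f foo-deployment.yaml
kubectl rollout status deployment/foo
kubectl delete deployment foo2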

Is there a way to do it with fewer k8s CLI steps? Perhaps I can edit/delete something that keeps the old pods alive while the new pods roll out under the same name?

Doyle answered 18/3, 2021 at 18:43 Comment(2)
If it is managed by Helm, then better to deploy with Helm again and set the new values you need. How many instances are running? – Undesirable
The number of pods varies, but there's only one Helm release. Regardless of whether it's deployed with Helm (3), kinds with matchLabels (Jobs, Deployments) cannot have their selectors updated and need to be recreated (delete + create). I've added a minimal example to illustrate the issue. – Doyle

I just did this, and I followed the four-step process you describe. I think the answer is no, there is no better way.

My service was managed by Helm. For that I literally created four merge requests that needed to be rolled out sequentially:

  1. Add identical deployment "foo-temp", only name is different.
  2. Delete deployment foo.
  3. Recreate deployment foo with desired label selector.
  4. Delete deployment foo-temp.

I tested shortcutting the process (combining steps 1 and 2), but it doesn't work: Helm deletes one Deployment before it creates the other, and then you have downtime.
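
Roughly, each merge request was a chart change followed by a plain helm upgrade; the release name and chart path below are illustrative:

# MR 1: add templates/deployment-foo-temp.yaml, identical to foo except for the name
helm upgrade my-release ./chart

# MR 2: remove templates/deployment-foo.yaml; foo-temp keeps serving traffic
helm upgrade my-release ./chart

# MR 3: re-add templates/deployment-foo.yaml with the new selector/labels
helm upgrade my-release ./chart

# MR 4: remove templates/deployment-foo-temp.yaml again
helm upgrade my-release ./chart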

The good news is that in my case I didn't need to change any other descriptors (charts), so it was not so bad. All the relationships (traffic routing, etc.) were made via label matching. Since foo-temp had the same labels, the relationships worked automatically. The only issue was that my HPA referenced the name, not the labels. Instead of modifying it, I left foo-temp without an HPA and just specified a high replica count for it. The HPA didn't complain when its target didn't exist between steps 2 and 3.
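
For context, an HPA targets its Deployment by name through scaleTargetRef, not through labels, which is why foo-temp fell outside it; a minimal sketch with illustrative numbers:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: foo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: foo            # references the Deployment by name, not by labels
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80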

Seismograph answered 14/2, 2023 at 13:2 Comment(0)

An easier way is to temporarily configure both the old label and the new label. Once your Pods carry both labels, delete your Deployment and orphan the Pods:

kubectl delete deployment YOUR-DEPLOYMENT --cascade=orphan

Now the Deployment is gone, but your Pods are still running, and you can deploy the Deployment again, this time with the new label selector. It will adopt the running Pods, since they already carry the new label.

And once this is done, you can finish by removing the old label from your pods.
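
Put together, the whole flow looks roughly like this (a sketch, assuming the new selector uses a different label key so the Pods can carry the old and new labels at the same time; file names are made up):

# 1. Roll out the Deployment with the new label added to the Pod template,
#    selector unchanged; the Pods now carry both labels.
kubectl apply -f deployment-with-both-labels.yaml

# 2. Delete the Deployment but leave its Pods running.
kubectl delete deployment foo --cascade=orphan

# 3. Re-apply the Deployment with the new selector; it adopts the running Pods
#    because they already match it.
kubectl apply -f deployment-new-selector.yaml

# 4. Optionally roll out once more to drop the old label from the Pod template.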

Acnode answered 22/7, 2024 at 11:47 Comment(0)

From my experience, while using Helm, when I run

helm upgrade release -f values.yaml .

I do not get downtime. Also, when using Helm, I noticed that the old deployment is not terminated until the new one reports ready (X/X). I can suggest using it; this way it's about as painless as it gets.

Also, the Updating a Deployment section of the Kubernetes docs says: "A Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, .spec.template) is changed."

Therefore, you can roll out label changes with Helm.
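
For example, adding a new label key to the Pod template only (leaving spec.selector untouched) does trigger a rolling update; a hypothetical one-liner:

kubectl patch deployment foo --type merge \
  -p '{"spec":{"template":{"metadata":{"labels":{"tier":"web"}}}}}'

Changing the value of a key that also appears in spec.selector.matchLabels would still be rejected, because the template must keep matching the immutable selector.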

Hopefully I was a little help.

Beware, untried method: kubectl has an edit subcommand which enabled me to update ConfigMaps, PersistentVolumeClaims, etc. Maybe you can use it to update your Deployment. Syntax:

kubectl edit [resource] [resource-name]

But before doing that, please choose a proper text editor, since you will be dealing with YAML-formatted files. Do so by using:

export KUBE_EDITOR=/bin/{nano,vim,yourFavEditor}
Benzyl answered 18/3, 2021 at 20:29 Comment(2)
My question is more about the limitation further down in the same doc: kubernetes.io/docs/concepts/workloads/controllers/deployment/… I get that this is a rare upgrade, but it would still be nice to have a standard process for this and not feel like I'm hacking through the k8s undergrowth to get things done. – Doyle
I think if it works and gives you what you need, you don't need to feel that way. Also, these kinds of changes happen; there's just no easy one-line solution for this that I know of. That is all. This method is almost the same as applying the changed YAML file without deleting, which forces Kubernetes to reconfigure the Deployment. – Benzyl
