A Kubernetes Deployment doesn't allow patches to spec.selector.matchLabels, so any new deployment (managed by Helm or otherwise) that wants to change those labels can't use the RollingUpdate strategy within a Deployment. What's the best way to achieve a rollout of a new deployment without causing downtime?
Minimal example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
        - name: foo
          image: ubuntu:latest
          command: ["/bin/bash", "-ec", "sleep infinity"]
Apply this, then edit the labels (both matchLabels and metadata.labels) to foo2. If you try to apply this new deployment, k8s will complain (by design):

The Deployment "foo" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"foo2"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
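For completeness, the failing sequence looks like this (foo.yaml is just an illustrative filename for the manifest above):

kubectl apply -f foo.yaml    # initial apply succeeds
# edit foo.yaml: change both occurrences of app: foo to app: foo2
kubectl apply -f foo.yaml    # rejected with the "field is immutable" error above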
The only way I can think of right now is to deploy under a new Deployment name, so the new deployment doesn't try to patch the old one, and then delete the old one, with the ingress/load balancer resources handling the transition. Then we can redeploy under the old name and delete the temporary one, completing the migration (see the sketch below).
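A rough sketch of that sequence, assuming the renamed copy of the manifest lives in foo-new.yaml and a Service named foo fronts the pods (both names are illustrative, not part of the original setup):

kubectl apply -f foo-new.yaml                # same manifest, but metadata.name: foo-new and app: foo2 labels
kubectl rollout status deployment/foo-new    # wait for the new pods to be ready
kubectl patch service foo -p '{"spec":{"selector":{"app":"foo2"}}}'   # re-point traffic at the new pods
kubectl delete deployment foo                # old pods go away only after traffic has moved
# then repeat in reverse (apply under the name foo, delete foo-new) to restore the original name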
Is there a way to do it with fewer k8s CLI steps? Perhaps I can edit/delete something that keeps the old pods alive while the new pods roll out under the same name?
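One concrete shape that idea could take, sketched with kubectl's orphan deletion (I haven't verified this is downtime-free; --cascade=orphan requires kubectl 1.20+, older clients use --cascade=false):

kubectl delete deployment foo --cascade=orphan   # remove the Deployment object but leave its pods running
kubectl apply -f foo.yaml                        # recreate it under the same name with the app: foo2 labels
kubectl rollout status deployment/foo            # the new ReplicaSet brings up pods with the new labels
kubectl delete pod -l app=foo                    # clean up the orphaned old pods once the new ones are ready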