In Kubernetes there is a rolling update (automatic, without downtime), but there is no rolling restart, at least I could not find one; we have to change the deployment YAML. Is there a way to do a rolling "restart", preferably without changing the deployment YAML?
How to do a rolling restart of pods without changing the deployment YAML in Kubernetes?
Before Kubernetes 1.15 the answer was no, but there is a workaround: patch the deployment's pod template with a dummy annotation:
kubectl patch deployment web -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
As of Kubernetes 1.15 you can use:
kubectl rollout restart deployment your_deployment_name
- Created a new kubectl rollout restart command that does a rolling restart of a deployment.
- kubectl rollout restart now works for DaemonSets and StatefulSets.
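To watch the restart finish, and to use the same subcommand against the other workload types mentioned in those notes, something like the following works (all names are placeholders):
kubectl rollout status deployment your_deployment_name   # waits until the rolling restart completes
kubectl rollout restart daemonset your_daemonset_name
kubectl rollout restart statefulset your_statefulset_name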
So with kubectl 1.15 installed locally, you can use this on a 1.14 cluster? –
Olimpia
@NielsBasjes Yes, you can use kubectl 1.15 with apiserver 1.14. Here is more detail on the Kubernetes version-skew policy: kubernetes.io/docs/setup/release/version-skew-policy –
Sulfatize
If I do a rolling update, the running pods are terminated once the new pods are running. But my pods need to load configs, and this can take a few seconds; during that time my server is not reachable. Can I set a timeout before the running pods are terminated? –
Punkie
@B.Stucke you can use "terminationGracePeriodSeconds" for draining purposes before termination. But I think what you actually need first is a "readinessProbe" to check whether the configs are loaded. –
Sulfatize
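As a rough sketch of that suggestion (the deployment/container name web, the /healthz path, and port 8080 are illustrative assumptions, not from this thread), both settings can be added with a strategic merge patch:
kubectl patch deployment web --patch '
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 30   # time an old pod gets to drain before it is killed
      containers:
      - name: web
        readinessProbe:                   # a new pod only receives traffic once this probe succeeds
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
'
With a readiness probe in place, the rolling update holds traffic off a new pod until its configs are loaded, which addresses the unreachable window described above.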
I don't have deployments, I have svc/pods. Is there a way to restart the pods? –
Wilhelm
@Wilhelm a service is usually backed by a deployment;
kubectl get deployment -l app=<service-name>
will find the deployment name, and then you can rollout-restart that –
Alius
Here is the PR that implements rollout restart: github.com/kubernetes/kubernetes/pull/76062 –
Annulus
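A rough one-liner along the lines of that label-lookup suggestion (the app=<service-name> label is an assumption; use whatever selector your service actually has):
kubectl get deployment -l app=<service-name> -o name | xargs kubectl rollout restart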
If you use k9s, the restart command is available when you select deployments, statefulsets, or daemonsets.
Yes; however, I tried this twice this week and found that it immediately kills the pods and then starts new ones, potentially causing an outage. That doesn't seem like a "rolling restart" as I've seen them. That said, I'm not sure why k9s does that, given this code in the PR that implements it: github.com/derailed/k9s/pull/345/… –
Annulus
I think that is because your deployment is configured to allow that. There are settings that control how many pods may be killed at once during a rolling upgrade: kubernetes.io/docs/concepts/workloads/controllers/deployment. –
Gabble
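For reference, a sketch of tightening those settings so that no existing pod is taken down before its replacement is ready (web is a placeholder deployment name):
kubectl patch deployment web -p '{"spec":{"strategy":{"rollingUpdate":{"maxUnavailable":0,"maxSurge":1}}}}'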
kubectl rollout restart
works by changing an annotation on the deployment's pod spec, so it doesn't have any cluster-side dependencies; you can use it against older Kubernetes clusters just fine. – Licht
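The annotation that comment describes lands on the pod template after a restart; one quick way to see it (the deployment name is a placeholder, and in current kubectl versions the key is kubectl.kubernetes.io/restartedAt):
kubectl get deployment your_deployment_name -o yaml | grep restartedAt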