Move Kubernetes statefulset pod to another node
My k8s cluster initially had 2 nodes and 1 master, and I deployed a StatefulSet with 3 pods, so 3 pods with PVCs were running on 2 nodes. I have now increased the node count from 2 to 3, so the cluster has 3 nodes and 1 master. I would like to move one of the StatefulSet pods to the newly added node without deleting its PVC, so that the 3 pods are spread across the 3 nodes. I tried deleting a pod, but it is recreated on the same node, not on the new node (which is expected). Can anyone tell me whether it is possible to move one pod to another node without deleting the PVC? Is this achievable, or is there an alternative solution? I do not want to delete the PVC.

Rameriz answered 17/6, 2020 at 3:33 Comment(6)
Share details about the PVC. Is it the host's filesystem?Illuminating
Thanks Arghya Sadhu, the PVC is an AWS EBS volume. K8s is deployed in AWS.Rameriz
Do you have any taints on the newly created node, or (anti)affinity set in the pod spec?Himmler
No, they do not have any taints, but podAntiAffinity is set: "affinity": { "podAntiAffinity": { "preferredDuringSchedulingIgnoredDuringExecution": [ { "weight": 100, "podAffinityTerm": { "labelSelector": { "matchExpressions": [Rameriz
Could you add the topologyKey used in podAntiAffinity? I tried to reproduce your issue, and every time I scaled the StatefulSet (using either kubectl scale or patch) the pod ended up on the 3rd node.Himmler
Thanks KFC, I was diverted from this task by other priority issues; I will try this.Rameriz

You can force a pod to be scheduled on a different node by cordoning the node the pod is currently running on and then deleting the pod so the StatefulSet controller recreates it. Because the original node is now unschedulable, Kubernetes has to place the pod on a different node. You can uncordon the node afterwards.
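As a sketch, assuming the StatefulSet's pods are named `web-0`..`web-2` and the pod to move is `web-1` on node `node-2` (all names are placeholders), the sequence would be:

```shell
# Mark the current node unschedulable so the pod cannot land back on it
kubectl cordon node-2

# Delete the pod; the StatefulSet controller recreates it on a schedulable node
kubectl delete pod web-1

# Watch until the replacement pod is Running and note which node it landed on
kubectl get pod web-1 -o wide --watch

# Make the old node schedulable again
kubectl uncordon node-2
```

One caveat for EBS-backed PVCs: an EBS volume can only attach to nodes in the same availability zone as the volume, so if the only schedulable nodes are in a different AZ the replacement pod will fail to mount its volume.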

Hereunder answered 16/7, 2021 at 20:49 Comment(1)
I got an Event: Warning FailedMount Unable to attach or mount volumes: unmounted volumes=[my-pvc], unattached volumes=[my-pvc kube-api-access-skwzd]: timed out waiting for the conditionAbbie

It's not recommended to delete the pods of a StatefulSet directly. Instead, scale the StatefulSet down to 2 replicas and then scale it back up to 3.

kubectl get statefulsets <stateful-set-name>

kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>
Illuminating answered 17/6, 2020 at 3:49 Comment(1)
I tried this, but every time the StatefulSet creates the pod on the same node it was previously running on.Rameriz

You will need a pod anti-affinity rule (with a per-node topologyKey) so that the scheduler spreads the pods across nodes.

Then restart the StatefulSet:

kubectl rollout restart statefulset <stateful-set-name>
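As a sketch, the anti-affinity rule in the StatefulSet's pod template could look like the following (the label `app: my-app` is a placeholder and must match the StatefulSet's own pod labels):

```yaml
# Excerpt of spec.template.spec in the StatefulSet manifest
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: my-app               # pod label of this StatefulSet
          topologyKey: kubernetes.io/hostname   # spread one pod per node
```

Using `requiredDuringSchedulingIgnoredDuringExecution` instead would make the spreading a hard constraint, at the cost of pods staying Pending when no eligible node exists.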
Analyse answered 17/6, 2020 at 4:13 Comment(0)

Neither a rollout restart nor updating the replica count will move a StatefulSet pod on its own, because scheduling is constrained by the PVC configuration.

So to move the StatefulSet pod to a different node, you will have to reconfigure or recreate the PVC first and then redeploy the StatefulSet. By doing so, the old PVC is removed and a new one is created on the new node.

Withdrew answered 21/4 at 22:29 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.