I have a set of Pods running commands that can take up to a couple of seconds. There is a process that keeps track of open requests and which Pod each request is running on. I'd like to use that information when scaling down pods - either by specifying which pods to try to leave up, or specifying which pods to shut down. Is it possible to specify this type of information when changing the number of replicas, e.g. "I want X replicas, try not to kill my long-running tasks on pods A, B, C"?
I've been looking for a solution to this myself, and I also can't find one out of the box. However, there might be a workaround (would love it if you could test and confirm).
steps:
1. delete replication controller
2. delete X desired pods
3. recreate replication controller of size X
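The steps above can be sketched with kubectl as follows. The controller name, pod names, and file name are hypothetical; you would also edit the replica count in the saved manifest before recreating it:

```shell
# 1. Save the replication controller's manifest, then delete the controller
#    without cascading to its pods (--cascade=orphan leaves the pods running)
kubectl get rc my-rc -o yaml > my-rc.yaml
kubectl delete rc my-rc --cascade=orphan

# 2. Delete the specific pods you want gone
kubectl delete pod pod-a pod-b

# (edit spec.replicas in my-rc.yaml down to the desired count X)

# 3. Recreate the replication controller at size X
kubectl apply -f my-rc.yaml
```

On kubectl versions before 1.20, the orphaning flag is `--cascade=false` instead of `--cascade=orphan`.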
You can annotate a specific pod with `controller.kubernetes.io/pod-deletion-cost: -999` and enable the `PodDeletionCost` feature gate. This feature was implemented as alpha in 1.21 and graduated to beta in 1.22.
The `controller.kubernetes.io/pod-deletion-cost` annotation can be set to offer a hint on the cost of deleting a pod compared to other pods belonging to the same ReplicaSet. Pods with a lower deletion cost are deleted first.
https://github.com/kubernetes/kubernetes/pull/99163 https://github.com/kubernetes/kubernetes/pull/101080
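A minimal sketch of using the annotation for the question's use case, assuming a Deployment-managed ReplicaSet on 1.22+ (pod and deployment names are hypothetical): mark the pods running long tasks as expensive to delete, then scale down.

```shell
# Pods without the annotation default to deletion cost 0, so annotating the
# busy pods with a higher value makes the controller prefer deleting others.
kubectl annotate pod pod-a controller.kubernetes.io/pod-deletion-cost=1000 --overwrite
kubectl annotate pod pod-b controller.kubernetes.io/pod-deletion-cost=1000 --overwrite

# Scale down; the ReplicaSet removes the lowest-cost pods first.
kubectl scale deployment my-deployment --replicas=3
```

Note that the annotation is a best-effort hint, not a guarantee, and updating it on every request adds apiserver traffic, so it is meant for occasional use rather than per-request tracking.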
This isn't currently possible. When you scale down the number of replicas, the system will choose one to remove; there isn't a way to "hint" at which one you'd like it to remove.
One thing you can do is change the labels on running pods, which can affect their membership in the replication controller. This can be used to quarantine pods that you want to debug (so that they won't be part of a service or removed by a scaling event), but it might also work for your use case.
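A sketch of that label-quarantine approach, assuming the controller's selector is `app=myapp` and the pod name is hypothetical:

```shell
# Change the label so the pod no longer matches the controller's selector.
# The controller will start a replacement pod, but this one keeps running
# untouched (and out of the Service) until its long task finishes.
kubectl label pod pod-a app=quarantine --overwrite

# Once the task is done, delete the quarantined pod yourself.
kubectl delete pod pod-a
```

The trade-off is that a replacement pod is created immediately, so for a short window you run one pod above the replica count.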
As mentioned above, the workaround for this should look something like this:
alias k=kubectl
k delete pod <pods_name> && k scale --replicas=<current_replicas - 1> deploy/<name_of_deployment>
Make sure you don't have an active HPA (HorizontalPodAutoscaler) resource that targets the deployment, since it would restore the replica count.
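A quick way to check for a conflicting autoscaler (namespace is hypothetical):

```shell
# List HPAs; any entry targeting the deployment would fight the manual
# `kubectl scale` and re-set the replica count.
kubectl get hpa -n my-namespace
```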