Cannot delete pods in Kubernetes
I tried installing dgraph (single server) using Kubernetes.
I created the pod using:

kubectl create -f https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml

Now I need to delete the created pods.
I tried deleting the pod using:

kubectl delete pod pod-name

The command reports that the pod was deleted, but the pod keeps being recreated.
I need to remove those pods from my Kubernetes. What should I do now?

Bogbean answered 21/11, 2018 at 5:23 Comment(5)
Is there any deployment or statefulset or replicaset or replicationcontroller or job or cronjob or daemonset running in your cluster for dgraph? – Arlettearley
How did you deploy dgraph? – Immixture
Do a kubectl get all. I'm pretty certain you will see a deployment there that owns the pods; that's the one you need to delete. – Accidie
Did you deploy your dgraph using a command like $ kubectl create -f https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml? – Arlettearley
Yes, I created it using kubectl create -f raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/… @shudipta – Bogbean
The link provided by the OP may become unavailable; see the Update section below.

As you specified, you created your dgraph server using https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml, so use the same manifest to delete the resources it created:

$ kubectl delete -f https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml

Update

In short, here is the reason.

Kubernetes has several workload resources whose manifests contain a PodTemplate. Here is who controls whom:

  • ReplicationController -> Pod(s)
  • ReplicaSet -> Pod(s)
  • Deployment -> ReplicaSet(s) -> Pod(s)
  • StatefulSet -> Pod(s)
  • DaemonSet -> Pod(s)
  • Job -> Pod
  • CronJob -> Job(s) -> Pod
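To find a pod's owner programmatically, you can read `.metadata.ownerReferences`. The sketch below parses a saved pod manifest with python3; the file path, pod name, and owner are stand-ins for illustration, and in a real cluster you would feed it `kubectl get pod <name> -o json` instead:

```shell
# A sample manifest stands in for `kubectl get pod <name> -o json`
# (names here are hypothetical, matching the dgraph example).
cat > /tmp/pod.json <<'EOF'
{"metadata": {"name": "dgraph-0",
              "ownerReferences": [{"apiVersion": "apps/v1",
                                   "controller": true,
                                   "kind": "StatefulSet",
                                   "name": "dgraph"}]}}
EOF

# Print the owner as kind/name: this is the object to delete.
python3 - <<'EOF'
import json
ref = json.load(open('/tmp/pod.json'))['metadata']['ownerReferences'][0]
print(f"{ref['kind']}/{ref['name']}")
EOF
```

This prints `StatefulSet/dgraph`, identifying the object whose deletion will also remove the pod.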

a -> b means that a creates and controls b, and the field .metadata.ownerReferences in b's manifest holds a reference to a. For example:

apiVersion: v1
kind: Pod
metadata:
  ...
  ownerReferences:
  - apiVersion: apps/v1
    controller: true
    blockOwnerDeletion: true
    kind: ReplicaSet
    name: my-repset
    uid: d9607e19-f88f-11e6-a518-42010a800195
  ...

This way, deleting the parent object also deletes its child objects via garbage collection.

So, a's controller ensures that a's current status matches a's spec. Say one deletes b: b is gone, but a is still alive, and a's controller sees a difference between a's current status and a's spec. So a's controller creates a new b to match a's spec.
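A toy illustration of that reconcile loop in plain shell (no cluster involved; the pod names are made up): the "controller" keeps creating pods until the current count matches the desired replica count, which is exactly why a deleted pod reappears under a new name:

```shell
# Desired state: 1 replica must always exist.
desired=1
pods="pod-abc123"

# Someone deletes the pod...
pods=""

# ...the controller compares current state to the spec and recreates it.
count=$(echo "$pods" | wc -w)
while [ "$count" -lt "$desired" ]; do
  pods="$pods pod-$(date +%s)"   # the replacement pod gets a new name
  count=$(echo "$pods" | wc -w)
done
echo "running pods:$pods"
```

After the loop, exactly one pod is "running" again, with a different name than the deleted one.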

The OP created a Deployment, which created a ReplicaSet, which in turn created the Pod(s). So the solution here was to delete the root object, which was the Deployment.

$ kubectl get deploy -n {namespace}

$ kubectl delete deploy {deployment name} -n {namespace}

Note

Another problem that may arise during deletion is this: if there are any finalizers in the .metadata.finalizers[] list, the deletion completes only after the associated controller finishes the finalizers' tasks. To delete the object without running the finalizers' actions, remove those finalizers first. For example:

$ kubectl patch -n {namespace} deploy {deployment name} --patch '{"metadata":{"finalizers":[]}}'
$ kubectl delete -n {namespace} deploy {deployment name}
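To see what that patch does to the object, here is a local sketch using a stand-in manifest (no cluster, hypothetical finalizer name): it empties .metadata.finalizers, which is what lets the API server proceed with deletion:

```shell
# Stand-in for the stored object; the finalizer name is hypothetical.
cat > /tmp/deploy.json <<'EOF'
{"metadata": {"name": "dgraph", "finalizers": ["example.com/cleanup"]}}
EOF

# Same effect as the `kubectl patch ... '{"metadata":{"finalizers":[]}}'` above.
python3 - <<'EOF'
import json
obj = json.load(open('/tmp/deploy.json'))
obj['metadata']['finalizers'] = []   # clear the finalizer list
json.dump(obj, open('/tmp/deploy.json', 'w'))
print(obj['metadata']['finalizers'])
EOF
```

The printed list is empty, mirroring the patched object that Kubernetes is then free to delete.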
Arlettearley answered 21/11, 2018 at 6:4 Comment(1)
Correct explanation – Starch
I faced the same issue. Run:

kubectl get deployment

You will see the Deployment that owns your pod. Copy its name, then run:

kubectl delete deployment xyz

Then check: no new pods will be created.

Ineffaceable answered 27/11, 2018 at 13:22 Comment(0)
You can perform a graceful pod deletion with the following command:

kubectl delete pods <pod>

If you want to delete a Pod forcibly using kubectl version >= 1.5, do the following:

kubectl delete pods <pod> --grace-period=0 --force

If you’re using any version of kubectl <= 1.4, you should omit the --force option and use:

kubectl delete pods <pod> --grace-period=0

If even after these commands the pod is stuck in the Unknown state, use the following command to remove the pod from the cluster:

kubectl patch pod <pod> -p '{"metadata":{"finalizers":null}}'
Day answered 21/11, 2018 at 5:27 Comment(2)
I tried all these, and they do delete the pods, but my problem is that the pods keep getting created again after deletion (they are replicating). – Bogbean
@AATHITHRAJENDRAN There is probably a deployment doing this. Check kubectl get all. – Sedberry
Pods in Kubernetes depend on the kind of workload that created them, such as:

  • Replication Controllers
  • Replica Sets
  • Statefulsets
  • Deployments
  • Daemon Sets
  • Pod

Run kubectl describe pod <podname> and check what controls it. For example, the owner may be a StatefulSet:

apiVersion: apps/v1
kind: StatefulSet
metadata:

Now run kubectl get <pod-kind>, find the owning object, and delete it; the pod will be deleted along with it.
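A hedged sketch of automating that lookup: `kubectl describe pod` prints a `Controlled By:` line naming the owner. Below, a sample of that output stands in for the real command (the pod and owner names are hypothetical; in a live cluster you would pipe the actual `kubectl describe pod` output):

```shell
# Sample `kubectl describe pod` output, saved to a file for parsing.
cat > /tmp/describe.txt <<'EOF'
Name:           dgraph-0
Namespace:      default
Controlled By:  StatefulSet/dgraph
EOF

# Extract the owner as kind/name from the "Controlled By" line.
owner=$(awk '/^Controlled By:/ {print $3}' /tmp/describe.txt)
echo "$owner"
```

This prints `StatefulSet/dgraph`; in a live cluster you would then delete that owner, e.g. kubectl delete statefulset dgraph for this example.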

Seducer answered 11/12, 2018 at 9:23 Comment(0)
Delete the Deployment, not the pods. It is the Deployment that creates replacement pods; you can see that the pod name changes after you delete a pod.

kubectl get all

kubectl delete deployment DEPLOYMENTNAME
Violist answered 26/2, 2020 at 8:33 Comment(0)
@Shudipta Sharma's answer shows the correct way to delete the pods; I would just like to make sure the author understands why this is happening. The reason is the "mindset" of Kubernetes, in which Pods are considered ephemeral, throwaway entities. As Pods come and go, StatefulSets are one way of ensuring that a given number of Pods with unique identities is running at any given time. Looking at the yaml file you used to deploy:

# This StatefulSet runs 1 pod with one Zero, one Alpha & one Ratel containers.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dgraph
spec:
  serviceName: "dgraph"
  replicas: 1

By deploying this, you are telling Kubernetes to always run 1 replica of that Pod, at any time. When you delete the Pod, that condition is no longer true, so another Pod is spawned to make the condition hold again. The approach @Shudipta Sharma provided simply deletes the StatefulSet, so there is no longer a desired state keeping an eye on the number of running Pods.

You can find more about that in Kubernetes documentation on:

StatefulSets

Cluster's desired state

More about Kubernetes objects and difference between each of them

Betony answered 21/11, 2018 at 13:30 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.