Kubernetes pod gets recreated when deleted

I started pods with this command:

$ kubectl run busybox \
--image=busybox \
--restart=Never \
--tty \
-i \
--generator=run-pod/v1

Something went wrong, and now I can't delete this Pod.

I tried the methods shown below, but the Pod keeps being recreated.

$ kubectl delete pods  busybox-na3tm
pod "busybox-na3tm" deleted

$ kubectl get pods
NAME                                     READY     STATUS              RESTARTS   AGE
busybox-vlzh3                            0/1       ContainerCreating   0          14s

$ kubectl delete pod busybox-vlzh3 --grace-period=0

$ kubectl delete pods --all
pod "busybox-131cq" deleted
pod "busybox-136x9" deleted
pod "busybox-13f8a" deleted
pod "busybox-13svg" deleted
pod "busybox-1465m" deleted
pod "busybox-14uz1" deleted
pod "busybox-15raj" deleted
pod "busybox-160to" deleted
pod "busybox-16191" deleted

$ kubectl get pods --all-namespaces
NAMESPACE   NAME            READY     STATUS              RESTARTS   AGE
default     busybox-c9rnx   0/1       RunContainerError   0          23s
Wrist answered 18/11, 2016 at 21:24 Comment(8)
Did you somehow manage to create a replication controller by passing the wrong arguments? What do you get for kubectl get all -o name? Noami
No, I had not created a replication controller until today, and I was able to delete that one without any issue. This is just a pod by itself that I created two days ago. Oh wow, I got 2599 of them: # kubectl get all -o name shows pod/busybox-zzt7p ... and # kubectl get all -o name | wc -l returns 2599. Wrist
Can you check kubectl get events to see what is creating these objects?Varuna
Try kubectl get rc to see if a ReplicationController was created. If so, delete that, then delete the pods. Sapheaded
What version of Kubernetes are you running? Depending on your Kubernetes version it could behave differently; for example, before 1.2 kubectl run always created a deployment. Check kubectl get deployment. Beige
# kubectl version -> v1.2.0. # kubectl get events -> "Error syncing pod, skipping: failed to "StartContainer" for "busybox" with RunContainerError: "runContainer: API error (500): Container command not found or does not exist.\n"". # kubectl get rc -> shows nothing. # kubectl get deployment -> shows nothing. It looks like it got stuck downloading the image; I wonder how I can delete or stop this container. # kubectl get pods -> busybox-zehyn 0/1 ContainerCreating 0 8s. Wrist
If someone ends up here: deleting the deployment solved the issue for me. kubectl delete deployment <deployment_name>. To get the deployment name, run kubectl get deployments. Holozoic
Even if you have a StatefulSet, pods keep getting recreated after deletion. Ensure that the deployment, StatefulSet, and service are all removed before removing the pod. Muns

You need to delete the deployment, which should in turn delete the pods and the replica sets; see https://github.com/kubernetes/kubernetes/issues/24137

To list all deployments:

kubectl get deployments --all-namespaces

Then to delete the deployment:

kubectl delete -n NAMESPACE deployment DEPLOYMENT

Where NAMESPACE is the namespace it's in, and DEPLOYMENT is the name of the deployment. If NAMESPACE is default, leave off the -n option altogether.
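For example, if the stray pods turn out to be owned by a deployment named busybox in the default namespace (a hypothetical name used only for illustration), the cleanup would look like:

kubectl get deployments                 # confirm the owning deployment and its exact name
kubectl delete deployment busybox       # hypothetical name; this also removes its replica sets and pods
kubectl get pods                        # verify that no new busybox-* pods appear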

In some cases the pod could also be kept running by a job or daemonset. Check the following, then run the matching delete command (a sketch follows these commands).

kubectl get jobs

kubectl get daemonsets.apps --all-namespaces

kubectl get daemonsets.extensions --all-namespaces
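
If one of those turns up a matching object, deleting it by name should stop the recreation. A minimal sketch with placeholder names:

kubectl delete job JOB_NAME
kubectl delete daemonset DAEMONSET_NAME -n NAMESPACE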
Cand answered 30/5, 2017 at 23:10 Comment(5)
How do you bring the deployment back afterwards?Ameba
@Ameba you create it again with the kubectl create command.Lamasery
It does not need to be a deployment; it could be a job, so make sure to also check kubectl get jobs. Giaour
To delete multiple object types, not just deployments, try: kubectl delete replicasets,subscriptions,deployments,jobs,services,pods --all -n <namespace>Nephelinite
kubectl delete deployments --namespace=default nginx-deployment, this command worked for me. Olga

Instead of trying to figure out whether it is a deployment, daemonset, statefulset... or something else (in my case it was a replication controller that kept spawning new pods :), I determined what kept spinning up the image by getting all the resources with this command:

kubectl get all

Of course you could also get all resources from all namespaces:

kubectl get all --all-namespaces

or define the namespace you would like to inspect:

kubectl get all -n NAMESPACE_NAME

Once I saw that the replication controller was responsible for my trouble, I deleted it:

kubectl delete replicationcontroller/CONTROLLER_NAME
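If the full get all listing is noisy, one option (a sketch, not part of the original answer) is to narrow it to the controller kinds that usually own pods:

kubectl get all -n NAMESPACE_NAME -o name | grep -E 'replicationcontroller|replicaset|deployment|daemonset|statefulset|job'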
Chaunceychaunt answered 26/2, 2019 at 16:5 Comment(0)

If your pod has a name like name-xxx-yyy, it could be controlled by a ReplicaSet (replicasets.apps) named name-xxx; you should delete that ReplicaSet before deleting the pod:

kubectl delete replicasets.apps name-xxx
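
A small sketch of that naming convention, assuming the last dash-separated segment of the pod name is the suffix added by the ReplicaSet:

pod=name-xxx-yyy                        # placeholder pod name
rs=${pod%-*}                            # strip the last segment -> name-xxx
kubectl delete replicasets.apps "$rs"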
Fully answered 12/7, 2018 at 9:26 Comment(2)
Thanks! For my case, it was a specific job that was recreating it. So: kubectl delete --all jobs -n <namespace>Ruminant
Find the replica-set with kubectl get replicasets.apps -n <namespace> (or --all-namespaces)Nephelinite

Obviously something is respawning the pod. While a lot of the other answers have you looking at everything (replica sets, jobs, deployments, stateful sets, ...) to find what may be respawning the pod, you can instead just look at the pod to see what spawned it. For example do:

$ kubectl describe pod $mypod | grep 'Controlled By:'
Controlled By:  ReplicaSet/foobar

This tells you exactly what created the pod. You can then go and delete that.
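
The same information is available in machine-readable form through the pod's ownerReferences, which is handy for scripting. A sketch with a placeholder pod name in $mypod:

# Read the kind and name of whatever owns the pod, then delete that owner.
# If the owner is a ReplicaSet that is itself owned by a Deployment, delete
# the Deployment instead, or the ReplicaSet will simply be recreated.
kind=$(kubectl get pod "$mypod" -o jsonpath='{.metadata.ownerReferences[0].kind}' | tr '[:upper:]' '[:lower:]')
name=$(kubectl get pod "$mypod" -o jsonpath='{.metadata.ownerReferences[0].name}')
kubectl delete "$kind" "$name"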

Schick answered 9/3, 2021 at 3:8 Comment(1)
This is an incredible answer and saved my "life" as we devs tend to say. CheersCanvasback

Look out for stateful sets as well

kubectl get sts --all-namespaces

to delete all the stateful sets in a namespace

kubectl --namespace <yournamespace> delete sts --all

to delete them one by one

kubectl --namespace ag1 delete sts mssql1 
kubectl --namespace ag1 delete sts mssql2
kubectl --namespace ag1 delete sts mssql3
Stupefaction answered 31/1, 2019 at 13:8 Comment(0)

This will provide information about all the pods, deployments, services, and jobs in the namespace.

kubectl get pods,services,deployments,jobs

Pods can be created by either deployments or jobs:

kubectl delete job [job_name]
kubectl delete deployment [deployment_name]

If you delete the deployment or the job, the pods will stop being restarted.
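
To confirm that nothing is still respawning the pods after the delete, you can watch the pod list for a few seconds (a simple extra check, not part of the original answer):

kubectl get pods -w                     # press Ctrl+C to stop watching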

Lycaonia answered 15/3, 2019 at 23:3 Comment(0)

Many answers here tell you to delete a specific k8s object, but you can delete multiple objects at once instead of one by one:

kubectl delete deployments,jobs,services,pods --all -n <namespace>

In my case, I'm running an OpenShift cluster with OLM (Operator Lifecycle Manager). OLM is what controls the deployment, so when I deleted the deployment, that was not sufficient to stop the pods from restarting.

Only when I deleted the OLM objects and their subscription were the deployment, services, and pods gone.

First list all k8s objects in your namespace:

$ kubectl get all -n openshift-submariner

NAME                                       READY   STATUS    RESTARTS   AGE
pod/submariner-operator-847f545595-jwv27   1/1     Running   0          8d  
NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/submariner-operator-metrics   ClusterIP   101.34.190.249   <none>        8383/TCP   8d
NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/submariner-operator   1/1     1            1           8d
NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/submariner-operator-847f545595   1         1         1       8d

OLM is not listed by get all, so I searched for it specifically:

$ kubectl get olm -n openshift-submariner

NAME                                                      AGE
operatorgroup.operators.coreos.com/openshift-submariner   8d
NAME                                                             DISPLAY      VERSION
clusterserviceversion.operators.coreos.com/submariner-operator   Submariner   0.0.1 

Now delete all objects, including OLMs, subscriptions, deployments, replica-sets, etc:

$ kubectl delete olm,svc,rs,rc,subs,deploy,jobs,pods --all -n openshift-submariner

operatorgroup.operators.coreos.com "openshift-submariner" deleted
clusterserviceversion.operators.coreos.com "submariner-operator" deleted
deployment.extensions "submariner-operator" deleted
subscription.operators.coreos.com "submariner" deleted
service "submariner-operator-metrics" deleted
replicaset.extensions "submariner-operator-847f545595" deleted
pod "submariner-operator-847f545595-jwv27" deleted

List objects again - all gone:

$ kubectl get all -n openshift-submariner
No resources found.

$ kubectl get olm -n openshift-submariner
No resources found.
Nephelinite answered 20/11, 2019 at 11:37 Comment(0)

First, list the deployments:

kubectl get deployments

After that, delete the deployment:

kubectl delete deployment <deployment_name>

Bloodstain answered 10/9, 2022 at 12:29 Comment(1)
Without setting a context or specifying a namespace in the commands, they will return resource information from the default namespace. Also, the original poster specified that the pods were getting created by a K8s job, not deployments. Tarpaulin

After taking an interactive tutorial I ended up with a bunch of pods, services, deployments:

me@pooh ~ > kubectl get pods,services
NAME                                       READY   STATUS    RESTARTS   AGE
pod/kubernetes-bootcamp-5c69669756-lzft5   1/1     Running   0          43s
pod/kubernetes-bootcamp-5c69669756-n947m   1/1     Running   0          43s
pod/kubernetes-bootcamp-5c69669756-s2jhl   1/1     Running   0          43s
pod/kubernetes-bootcamp-5c69669756-v8vd4   1/1     Running   0          43s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   37s
me@pooh ~ > kubectl get deployments --all-namespaces
NAMESPACE     NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
default       kubernetes-bootcamp   4         4         4            4           1h
docker        compose               1         1         1            1           1d
docker        compose-api           1         1         1            1           1d
kube-system   kube-dns              1         1         1            1           1d

To clean up everything, delete --all worked fine:

me@pooh ~ > kubectl delete pods,services,deployments --all
pod "kubernetes-bootcamp-5c69669756-lzft5" deleted
pod "kubernetes-bootcamp-5c69669756-n947m" deleted
pod "kubernetes-bootcamp-5c69669756-s2jhl" deleted
pod "kubernetes-bootcamp-5c69669756-v8vd4" deleted
service "kubernetes" deleted
deployment.extensions "kubernetes-bootcamp" deleted

That left me with (what I think is) an empty Kubernetes cluster:

me@pooh ~ > kubectl get pods,services,deployments
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   8m
Liddy answered 7/2, 2019 at 0:48 Comment(0)

In some cases the pods will still not go away even after deleting the deployment. In that case, you can force-delete them by running the command below.

kubectl delete pods podname --grace-period=0 --force

Dashtikavir answered 14/3, 2018 at 0:27 Comment(2)
This won't resolve the problem when the pod is created by a deployment, job, or any other kind of controller if the strategy type is set to Recreate. Scroop
The scenario being described is due to the continued existence of the deployment, so the solution is to delete the deployment. The edge case you are referring to is not the answer and, as such, needs more description as to when it is necessary and what causes it to be necessary. Eliath

When a pod is recreated automatically even after you delete it manually, that pod has been created by a Deployment. When you create a deployment, it automatically creates a ReplicaSet and Pods. Depending on how many replicas you specified in the deployment spec, it will create that number of pods initially. When you try to delete any of those pods manually, the controller will automatically create them again.

Yes, sometimes you need to delete pods with force, but in this case the force command doesn't help.
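
A short sketch of that behavior with hypothetical names (a deployment called myapp): the ReplicaSet replaces any pod you delete, so removing the Deployment itself is what stops the cycle.

kubectl delete pod myapp-5c69669756-lzft5   # the ReplicaSet immediately creates a replacement
kubectl delete deployment myapp             # removes the Deployment, its ReplicaSet, and the pods
kubectl get pods                            # no new myapp-* pods should appear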

Fatherland answered 14/3, 2018 at 1:56 Comment(1)
I get a warning when I try this that the pod may live on as a zombie process, so it wasn't what I wanted. Lawn

Instead of removing the namespace, you can try removing the ReplicaSet:

kubectl get rs --all-namespaces

Then delete the ReplicaSet:

kubectl delete rs your_app_name
Fleecy answered 7/1, 2019 at 17:10 Comment(0)

The root cause for the question asked was the deployment/job/replicaset spec attribute strategy -> type, which defines what should happen when the pod is destroyed (either implicitly or explicitly). In my case, it was Recreate.

As per @nomad's answer, deleting the deployment/job/replicaset is the simple fix, and it avoids experimenting with risky combinations that could mess up the cluster for a novice user.

Try the following commands to understand the behind-the-scenes actions before jumping into debugging:

kubectl get all -A -o name
kubectl get events -A | grep <pod-name>
Scroop answered 27/8, 2019 at 5:26 Comment(0)

In my case I deployed via a YAML file, e.g. kubectl apply -f deployment.yaml, and the solution appears to be to delete it via kubectl delete -f deployment.yaml
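
Spelled out as plain commands (the file name is the one from the answer), that round trip is:

kubectl apply -f deployment.yaml        # the original apply that created the objects
kubectl delete -f deployment.yaml       # deletes every object defined in that same file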

Clisthenes answered 15/11, 2019 at 12:57 Comment(0)

Kubernetes always works with a hierarchy like:

deployments >>> replicasets >>> pods

First scale the deployment down to 0 replicas, then scale it back up to the desired count (run the command below). You will see that a new ReplicaSet has been created and the pods are running with the desired count.

IN-Linux:~ anuragmanikkame$ kubectl scale deploy tomcat -n dev-namespace --replicas=2
deployment.extensions/tomcat scaled
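
The first half of that procedure, scaling down to zero, is not shown above; with the same deployment and namespace names it would be:

kubectl scale deploy tomcat -n dev-namespace --replicas=0   # while replicas is 0, no pods are recreated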


Belligerency answered 16/6, 2022 at 6:42 Comment(0)

If you have a job that keeps running, you need to find the job and delete it:

kubectl get job --all-namespaces | grep <name>

and

kubectl delete job <job-name>

Coeternity answered 7/2, 2019 at 14:14 Comment(0)

You can run kubectl get replicasets and check for old ReplicaSets based on their age or creation time.

Delete the old ReplicaSet if you want to remove the currently running pods of the application:

kubectl delete replicasets <Name of replicaset>
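
To make the age check easier, kubectl can sort the listing by creation time (a small convenience, not part of the original answer):

kubectl get replicasets --sort-by=.metadata.creationTimestamp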
Monody answered 18/6, 2019 at 11:32 Comment(0)

I also faced this issue. I used the command below to delete the deployment:

kubectl delete deployments DEPLOYMENT_NAME

but the pods were still being recreated, so I cross-checked the ReplicaSets using the command below:

kubectl get rs

Then I edited the ReplicaSet's replica count from 1 to 0:

kubectl edit rs REPLICASET_NAME
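
If you prefer not to open an editor, a scale command achieves the same change (the ReplicaSet name is a placeholder):

kubectl scale rs REPLICASET_NAME --replicas=0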
Asexual answered 20/8, 2019 at 9:46 Comment(0)

With deployments that have stateful sets (or services, jobs, etc.), you can use the command below.

It terminates everything running in the specified <NAMESPACE>:

kubectl -n <NAMESPACE> delete replicasets,deployments,jobs,service,pods,statefulsets --all

And the forceful variant:

kubectl -n <NAMESPACE> delete replicasets,deployments,jobs,service,pods,statefulsets --all --cascade=true --grace-period=0 --force
Innate answered 20/6, 2020 at 15:31 Comment(0)

There are basically two ways to remove pods:

  1. kubectl scale --replicas=0 deploy name_of_deployment. This sets the number of replicas to 0, so the pods will not be restarted again.
  2. Use helm to uninstall the chart you deployed in your pipeline. Do not delete the deployment directly; instead, use helm to uninstall the chart, which will remove all the objects it created (see the sketch below).
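
A minimal helm sketch for option 2, with the release name and namespace as placeholders:

helm list -n NAMESPACE                     # find the release that owns the workload
helm uninstall RELEASE_NAME -n NAMESPACE   # removes the deployment, replica sets, and pods the chart created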
Claret answered 4/10, 2021 at 10:58 Comment(0)

The fastest solution for me was installing the Lens IDE and removing the deployment under the Deployments tab. Just delete it from this tab and the ReplicaSet will be deleted too.

Best regards

Norvin answered 17/2, 2022 at 12:30 Comment(0)

I experienced a similar problem: after deleting the deployment (kubectl delete deploy <name>), the pods kept "Running" and were automatically re-created after deletion (kubectl delete po <name>).

It turned out that the associated replica set was not deleted automatically for some reason, and after deleting that (kubectl delete rs <name>), it was possible to delete the pods.

Anabantid answered 23/3, 2020 at 11:54 Comment(0)

This has happened to me with some broken 'helm' installs. You might have a bit of a messed up deployment. If none of the previous suggestions work, look for a daemonset and delete that.

e.g. kubectl get daemonset --namespace <NAMESPACE>

then delete daemonset

kubectl delete daemonset --namespace <NAMESPACE> --all --force

then try to delete the pods.

kubectl delete pod --namespace  <NAMESPACE> --all --force

Check if pods are gone.

kubectl get pods --all-namespaces
Gladygladys answered 17/5, 2021 at 6:48 Comment(0)

In my case I used the commands below:

kubectl get all --all-namespaces 
kubectl delete deployment <deployment-name>   # choose your deployment name
kubectl delete sts -n <namespace> --all       # choose your namespace
kubectl get pods --all-namespaces

The problem was resolved.

Acetone answered 8/12, 2022 at 12:11 Comment(0)

I had the same problem on my local Docker Desktop Kubernetes. The following solved it:

kubectl drain docker-desktop --ignore-daemonsets --delete-emptydir-data --force

Source: https://phoenixnap.com/kb/kubectl-delete-pod
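
Note that drain also cordons the node, so once the cleanup is done you will probably want to make it schedulable again (a follow-up step not mentioned in the original answer):

kubectl uncordon docker-desktop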

Tenorite answered 11/3 at 7:58 Comment(0)
