Kubernetes: Can't delete PersistentVolumeClaim (pvc)
I created the following persistent volume by calling

kubectl create -f nameOfTheFileContainingTheFollowingContent.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-monitoring-static-content
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/some/path"

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-monitoring-static-content-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 100Mi

After this I tried to delete the PVC, but the command hung. When calling kubectl describe pvc pv-monitoring-static-content-claim I get the following result:

Name:          pv-monitoring-static-content-claim
Namespace:     default
StorageClass:
Status:        Terminating (lasts 5m)
Volume:        pv-monitoring-static-content
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
Finalizers:    [foregroundDeletion]
Capacity:      100Mi
Access Modes:  RWO
Events:        <none>

And for kubectl describe pv pv-monitoring-static-content

Name:            pv-monitoring-static-content
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller=yes
Finalizers:      [kubernetes.io/pv-protection foregroundDeletion]
StorageClass:
Status:          Terminating (lasts 16m)
Claim:           default/pv-monitoring-static-content-claim
Reclaim Policy:  Retain
Access Modes:    RWO
Capacity:        100Mi
Node Affinity:   <none>
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /some/path
    HostPathType:
Events:            <none>

There is no pod running that uses the persistent volume. Could anybody give me a hint why the PVC and the PV are not deleted?
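One way to double-check that no pod mounts the claim (a sketch; the jsonpath filter below is just one possible approach, and newer kubectl versions also print a "Mounted By" field in kubectl describe pvc):

# list every pod together with the PVCs it mounts; no match means nothing mounts the claim
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{": "}{.spec.volumes[*].persistentVolumeClaim.claimName}{"\n"}{end}' | grep pv-monitoring-static-content-claim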

Onward answered 16/7, 2018 at 9:42 Comment(0)
236

This happens when the persistent volume claim is protected. You should be able to verify this:

Command:

kubectl describe pvc PVC_NAME | grep Finalizers

Output:

Finalizers: [kubernetes.io/pvc-protection]

You can fix this by setting finalizers to null using kubectl patch:

kubectl patch pvc PVC_NAME -p '{"metadata":{"finalizers": []}}' --type=merge
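If the PV itself is also stuck in Terminating, the same patch works on the PV object (a sketch, using the PV name from the question):

# clear the PV's finalizers too, if the PV is also stuck
kubectl patch pv pv-monitoring-static-content -p '{"metadata":{"finalizers":null}}' --type=merge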

Ref: Storage Object in Use Protection

Sapowith answered 17/5, 2019 at 8:55 Comment(6)
That solution works better than the edit solution across corporate firewalls. – Reprehend
@codersofthedark It does not explain the cause. Of course it is protected; that's what I already mentioned in my question. But the volume wasn't used by any Pod, so protection shouldn't have any effect. – Dav
I am having this issue, and when I tried the command above to patch the PVC, I kept getting the error unable to parse "'{metadata:{finalizers:": yaml: found unexpected end of stream – Takahashi
In my case, the PVCs were protected because I had only deleted the StatefulSet, not the underlying pods, so the PVCs were still being used by Pods; that's why they were stuck in the Terminating phase. – Williamson
I'm on GKE and something seems to be setting the finalizer back immediately. :/ – Lamination
@Lamination I have the same issue, have you found any solution? – Thailand
24

I'm not sure why this happened, but after deleting the finalizers of the PV and the PVC via the Kubernetes dashboard, both were deleted. This happened again after repeating the steps I described in my question. It seems like a bug.

Onward answered 17/7, 2018 at 18:19 Comment(5)
I had a similar problem: a PVC didn't want to die, and because of that the project was stuck in the "Terminating" state forever. I did oc edit pvc/protected-pvc -n myproject and deleted the two lines about finalizers. Both the PVC and the project were gone immediately. I agree it's probably a bug, because it should not behave that way. I didn't have any pods running in that project, just that PVC. – Aeolus
I just came across this same problem, with this being the solution for me too: delete the constraints. This isn't a "thank you" comment; rather, I'm adding it because it's 7+ months later and this problem still seems to exist in the wild, and I thought new readers might benefit from knowing that. I'm running the latest minikube (installed and built just a few days ago) behind an up-to-date Docker for Mac. – Sunbathe
...I'm following an online tutorial. I don't know if this is related to this bug, but the behavior I get is different from the instructor's. He creates a new PVC and its state is initially "Pending"; only when he manually creates a PV does the state of the PVC become "Bound". In my case, creating the PVC with the same command he uses seems to immediately create a PV for the PVC's allocated storage. Does anyone know why this is? – Sunbathe
As the answer is not complete in my opinion (it does not explain the steps of the solution for laypeople): you can remove the finalizers in the dashboard, in the YAML of the particular PV. Alternatively you can do it in the terminal with kubectl patch pvc NAME -p '{"metadata":{"finalizers":null}}' and kubectl patch pod NAME -p '{"metadata":{"finalizers":null}}'. Source: github.com/kubernetes/kubernetes/issues/… – Gilgamesh
Already mentioned in another answer: https://mcmap.net/q/187985/-kubernetes-can-39-t-delete-persistentvolumeclaim-pvc – Dav
24

You can fix this by editing your PVC and removing the PVC protection finalizer:

  1. kubectl edit pvc YOUR_PVC -n NAME_SPACE
  2. Manually put a # in front of the kubernetes.io/pvc-protection finalizer line (or delete it) and save.
  3. The PV and PVC will then be deleted.
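For reference, the line the original answer's screenshot pointed at sits in the PVC's metadata; in the editor it looks roughly like this (a sketch):

metadata:
  finalizers:
    # - kubernetes.io/pvc-protection    <- comment out or delete this entry, then save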
Rorke answered 5/6, 2019 at 8:17 Comment(1)
This answer helped me delete a volume that was stuck in the Terminating state, thanks. – Butyraceous
15

The PV is protected. Delete the PV before deleting the PVC. Also delete any pods/deployments that are claiming any of the referenced PVCs. For further information, check out Storage Object in Use Protection.
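A minimal sketch of that order, using the names from the question (POD_USING_THE_CLAIM is a placeholder for whatever pod still mounts the claim):

kubectl delete pod POD_USING_THE_CLAIM
kubectl delete pv pv-monitoring-static-content
kubectl delete pvc pv-monitoring-static-content-claim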

Museology answered 16/7, 2018 at 11:38 Comment(2)
I tried to delete both the PV and the PVC. As you can see in the describe output, both are in the Terminating state. – Dav
What platform are you using? Have you tried deleting with kubectl delete -f nameOfTheFileContainingTheFollowingContent.yaml? – Derail
12

For me the PV's reclaim policy was set to Retain, so the steps above did not work.

First, change the reclaim policy:

kubectl patch pv PV_NAME -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'

Then delete the PVC:

kubectl get pvc

kubectl delete pvc PVC_NAME

Finally, delete the PV:

kubectl delete pv PV_NAME
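To confirm the policy change took effect before deleting, you can read the field back (a sketch; plain kubectl get pv also shows a RECLAIM POLICY column):

kubectl get pv PV_NAME -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'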
Einhorn answered 19/5, 2020 at 16:11 Comment(0)
7

I just ran into this issue a few hours ago.

I deleted the deployments that used these references, and the PVs/PVCs were terminated automatically.

Consanguineous answered 18/10, 2018 at 4:7 Comment(1)
Thank you! This works for me. – Lance
6

If the PV still exists, it may be because its reclaim policy is set to Retain, in which case it won't be deleted even after the PVC is gone. From the docs:

PersistentVolumes can have various reclaim policies, including “Retain”, “Recycle”, and “Delete”. For dynamically provisioned PersistentVolumes, the default reclaim policy is “Delete”. This means that a dynamically provisioned volume is automatically deleted when a user deletes the corresponding PersistentVolumeClaim. This automatic behavior might be inappropriate if the volume contains precious data. In that case, it is more appropriate to use the “Retain” policy. With the “Retain” policy, if a user deletes a PersistentVolumeClaim, the corresponding PersistentVolume is not deleted. Instead, it is moved to the Released phase, where all of its data can be manually recovered.
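The policy lives under the PV's spec; a manifest like the one in the question could set it explicitly (a sketch, with Retain as just an example value):

spec:
  persistentVolumeReclaimPolicy: Retain   # or Delete; Recycle is deprecated
  capacity:
    storage: 100Mi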

Amphibology answered 14/11, 2018 at 17:14 Comment(1)
Recycle is deprecated now. – Kahle
6

In my case, once I deleted the pod associated with both the PV and the PVC, the PV and PVC that were stuck in Terminating status were gone.

Fess answered 9/3, 2019 at 6:39 Comment(2)
"There is no pod running that uses the persistent volume"Dav
I met this issue again today. 2 PVs, without Pod and PVC associated, turned into terminating state forever, when being deleted. To fix it, I ran kubectl patch pv local-pv-324352d9 -n ops -p '{"metadata":{"finalizers": []}}' --type=merge Then the PV is gone. Thanks @Sapowith hintFess
1
kubectl get pvc pvc_name -o yaml > pvcfile.yaml

Then open pvcfile.yaml, delete the finalizers lines, save, and apply:

kubectl apply -f pvcfile.yaml 
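If you prefer not to round-trip through a file, an equivalent one-liner is a JSON patch that removes the whole finalizers field (a sketch):

kubectl patch pvc pvc_name --type json -p '[{"op":"remove","path":"/metadata/finalizers"}]'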
Swedish answered 10/6, 2021 at 16:2 Comment(1)
Instead of two commands, it can be done with kubectl edit pvc pvc_name; then remove the finalizers section and save. By the way, as per the original question, the author did remove the PVC, but that operation failed for the reasons described in the answers above. – Rochet
0

In my case a PVC was not deleted because its namespace was missing (I had deleted the namespace before deleting all the resources/PVCs in it). Solution: create a namespace with the same name as before; then I was able to remove the finalizers and finally the PVC.
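A sketch of that sequence (THE_DELETED_NAMESPACE and PVC_NAME are placeholders):

kubectl create namespace THE_DELETED_NAMESPACE
kubectl patch pvc PVC_NAME -n THE_DELETED_NAMESPACE -p '{"metadata":{"finalizers":null}}'
# once the PVC is gone, the namespace can be deleted again
kubectl delete namespace THE_DELETED_NAMESPACE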

Homeopathist answered 14/11, 2019 at 11:14 Comment(0)
0

In case you have already deleted the PV and are trying to delete the PVC:

Check whether the volume is still attached with this command:

kubectl get volumeattachment

Deleting the PVC:

First, delete the PVCs one by one using this command:

kubectl delete pvc <pvc_name> --grace-period=0 --force

Or you can delete all PVCs using:

kubectl delete pvc --all

Now you can see the status of the PVC as Terminating by using:

kubectl get pvc

Then complete the deletion by clearing the finalizers:

kubectl patch pvc {PVC_NAME} -p '{"metadata":{"finalizers":null}}'
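If kubectl get volumeattachment shows a stale attachment still pinning the volume, deleting it can also unblock things (a sketch; VA_NAME is a placeholder taken from that listing):

kubectl delete volumeattachment VA_NAME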

Wadlinger answered 17/12, 2020 at 11:16 Comment(0)
0

In addition to the other answers about the finalizer...

I could free up the resources only after deleting the Deployment; after that, the Terminating resources were released.


Delete all the resources listed by:

kubectl get all -n YOURNAMESPACE

Use kubectl delete -n YOURNAMESPACE <resource> <id> or (if you copy paste from the above output) kubectl delete -n YOURNAMESPACE <resource>/<id>, for each resource that you see listed there.

You can also do it all at once:

kubectl delete -n YOURNAMESPACE <resource>/<id1>  <resource>/<id2>  <resource2>/<id3>  <resource2>/<id4>  <resource3>/<id5>

You probably tried to remove the resources, but they kept getting recreated by the Deployment or ReplicaSet resource, preventing the namespace from freeing up its dependent resources and being cleaned up.

Adina answered 15/2, 2022 at 17:47 Comment(1)
This is the expected behaviour for protected volumes. When a deployment still uses the volume, it can't be deleted. – Dav
0

For me, the problem was that I had pods referencing the PVC. The pods were job executions that had completed. I deleted the completed pods and the delete command finished on its own.
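A sketch for cleaning up such completed job pods in bulk (the field selector matches pods whose phase is Succeeded):

kubectl get pods --field-selector=status.phase=Succeeded
kubectl delete pods --field-selector=status.phase=Succeeded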

Korte answered 19/1 at 11:42 Comment(0)
