Problem: Delete PVC (Persistent Volume Claim) Kubernetes Status Terminating
Basically, I have a problem deleting my spoc-volume-spoc-ihm-kube-test PVC. I tried:

kubectl delete -f file.yml
kubectl delete pvc spoc-volume-spoc-ihm-kube-test

but every time I get the same Terminating status. Also, when I delete the PVC, the console hangs in the deleting process.

Capacity: 10Gi
Storage Class: rook-cephfs
Access Modes: RWX

Here is the status in my terminal:

kubectl get pvc
NAME                             STATUS        VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
spoc-volume-spoc-ihm-kube-test   Terminating   pvc--    10Gi       RWX            rook-cephfs    3d19h

Thank You for your answers, Stack Community :)

Libratory answered 19/4, 2022 at 9:2 Comment(4)
Did you have a look at #51359356? – Silici
Can you describe the PVC? kubectl describe pvc spoc-volume-spoc-ihm-kube-test – Wager
Hello, thank you for the correction. Yes, I saw that response too. I fixed the problem with my colleague, and it was simple: as long as you keep other pods that depend on that PVC in a running state, the PVC will never be deleted. I think that is also one of the principles of Kubernetes: as long as a pod communicates with the PVC, the PVC cannot be deleted. – Libratory
Thanks a lot @KamolHasan. kubectl describe pvc spoc-volume-spoc-ihm-kube-test is a good way to inspect the PVC; I will use it next time. My app works perfectly in the cluster! – Libratory

You first need to check whether the volume is attached to a resource, using kubectl get volumeattachments. If your volume is in the list, it means a resource (i.e. a pod, deployment, or statefulset) is attached to that volume. The reason it is not terminating is that the PVC and PV metadata finalizers are set to kubernetes.io/pvc-protection and kubernetes.io/pv-protection respectively.
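For example (using the PVC name from the question; adjust names and namespace for your cluster), you can check the attachment and inspect the finalizers like this:

```shell
# List VolumeAttachment objects; if the PV backing your PVC appears here,
# the volume is still attached to a node and the PVC will not terminate.
kubectl get volumeattachments

# Show the finalizers that are blocking deletion of the PVC
kubectl get pvc spoc-volume-spoc-ihm-kube-test \
  -o jsonpath='{.metadata.finalizers}'
```

A PVC that is still protected will typically print something like ["kubernetes.io/pvc-protection"].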

Solution 1:

Delete the resources that are attached to/using the volume, i.e. pods, deployments, statefulsets, etc. After you delete them, the stuck PV and PVC will terminate.
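A quick way to find which pods still mount the claim is to list each pod together with the claim names it references (a sketch, assuming the PVC name from the question and the current namespace):

```shell
# Print "<pod-name> <claim-names...>" per line, then filter for the stuck claim
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.volumes[*].persistentVolumeClaim.claimName}{"\n"}{end}' \
  | grep spoc-volume-spoc-ihm-kube-test

# Then delete the workload that owns those pods, e.g. (hypothetical name):
kubectl delete deployment <deployment-using-the-pvc>
```

Deleting the owning workload rather than the bare pod prevents the controller from immediately recreating a pod that re-attaches the volume.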

Solution 2:

If you are not sure where the volume is attached, you can delete/patch the PV and PVC metadata finalizers to null as follows:

a) Edit the PV and PVC and delete the finalizers (or set them to null) in the metadata:

kubectl edit pv {PV_NAME}
kubectl edit pvc {PVC_NAME}

b) Simply patch the PV and PVC as shown below:

kubectl patch pv {PV_NAME} -p '{"metadata":{"finalizers":null}}'
kubectl patch pvc {PVC_NAME} -p '{"metadata":{"finalizers":null}}'

Hope it helps.

Luciennelucier answered 21/10, 2022 at 17:31 Comment(0)

I fixed the problem by deleting the pods that depended on that PVC.

The Terminating status then disappeared.

Libratory answered 19/4, 2022 at 10:52 Comment(0)

This issue occurs when a pod still references the deleted claim.

If you are using OCP (OpenShift Container Platform), go to Workloads -> DeploymentConfigs and edit the deployment configuration.

Remove the old deleted claim from the YAML configuration under the volumeMounts and volumes sections.
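The relevant sections look roughly like the sketch below (container and volume names are hypothetical; only the claim name comes from the question). Delete the entries that reference the removed claim:

```yaml
spec:
  template:
    spec:
      containers:
        - name: app
          volumeMounts:
            - name: spoc-volume          # remove this mount entry
              mountPath: /data
      volumes:
        - name: spoc-volume              # and this volume entry
          persistentVolumeClaim:
            claimName: spoc-volume-spoc-ihm-kube-test
```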

This fix resolved my issue. I hope it helps.

Peppercorn answered 9/11, 2023 at 15:23 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.