What to do with Released persistent volume?

TL;DR: I'm lost as to how to access the data after deleting a PVC, and as to why the PV doesn't go away after the PVC is deleted.

Steps I'm taking:

  1. created a disk in GCE manually:

    gcloud compute disks create --size 5Gi disk-for-rabbitmq --zone europe-west1-b
    
  2. ran:

    kubectl apply -f /tmp/pv-and-pvc.yaml
    

    with the following config:

    # /tmp/pv-and-pvc.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-for-rabbitmq
    spec:
      accessModes:
      - ReadWriteOnce
      capacity:
        storage: 5Gi
      gcePersistentDisk:
        fsType: ext4
        pdName: disk-for-rabbitmq
      persistentVolumeReclaimPolicy: Delete
      storageClassName: standard
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-for-rabbitmq
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      storageClassName: standard
      volumeName: pv-for-rabbitmq
    
  3. deleted a PVC manually (on a high level: I'm simulating a disastrous scenario here, like accidental deletion or misconfiguration of a helm release):

    kubectl delete pvc pvc-for-rabbitmq
    

At this point I see the following:

$ kubectl get pv
NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                      STORAGECLASS   REASON   AGE
pv-for-rabbitmq   5Gi        RWO            Delete           Released   staging/pvc-for-rabbitmq   standard                8m
$

A side question, just to improve my understanding: why is the PV still there, even though its reclaim policy is set to Delete? Isn't immediate deletion what the docs say the Delete reclaim policy should do?

Now if I try to re-create the PVC to regain access to the data in PV:

$ kubectl apply -f /tmp/pv-and-pvc.yaml
persistentvolume "pv-for-rabbitmq" configured
persistentvolumeclaim "pvc-for-rabbitmq" created
$

I still get this for PVs, i.e. the PV is stuck in the Released state:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                             STORAGECLASS   REASON    AGE
pv-for-rabbitmq                            5Gi        RWO            Delete           Released   staging/pvc-for-rabbitmq          standard                 15m
$

...and I get this for PVCs:

$ kubectl get pvc
NAME               STATUS    VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-for-rabbitmq   Pending   pv-for-rabbitmq   0                         standard       1m
$

It looks like my PV is stuck in the Released status, and the PVC cannot bind to a PV that is not in the Available status.

So, why can't the same PV and PVC be friends again? How do I make the PVC regain access to the data in the existing PV?

Myotonia answered 3/6, 2018 at 14:28 Comment(1)
it is really annoying that Kubernetes doesn't show you how to change your PV from Released to Available. It's even worse with dynamic provisioningLeibniz

As documented here, "Delete the claimRef entry from PV specs, so as new PVC can bind to it. This should make the PV Available."

So do the following:

kubectl patch pv pv-for-rabbitmq -p '{"spec":{"claimRef": null}}'

This worked for me.
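
If you want to double-check the effect, a quick sanity check (reusing the PV name from the question) is to inspect the claimRef and the phase before and after the patch; after patching, the claimRef output should be empty and the phase should read Available:

kubectl get pv pv-for-rabbitmq -o jsonpath='{.spec.claimRef}{"\n"}'
kubectl get pv pv-for-rabbitmq -o jsonpath='{.status.phase}{"\n"}'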

Capitoline answered 19/12, 2019 at 9:24 Comment(9)
Yes, this worked. I followed with a kubectl delete -f storage.yml as the console delete would not workNeilla
More explanation on this, from the Kubernetes docs: "Delete the claimRef entry from PV specs, so as new PVC can bind to it. This should make the PV Available."Gerrald
So simple, yet so effective.Azarria
when a PVC/PV binding happens, it updates the .spec.claimRef section of the PV. You can check this using k get pv pv-name -o jsonpath="{.spec.claimRef}". Patching this to null erases the binding and makes the PV available againAnimatism
In my case, the PVs get re-bound shortly after I patch them. All the PVCs are in Terminating status. I still can't manage to reset the PVC & PV and redeploy from scratch.Nodababus
Is there any way to automate this and include it in the Helm chart? Is there any configuration option in PV, PVC, SC or in Helm that might do the same routine for me? I would like to be able to just run "helm uninstall ...", "helm install ..." to get my apps working with the same PV, without any manual steps to change the PV status from "Released" to "Available"Inebriate
You can make available but reserve the PV for the same PVC ns/name by only deleting the resourceVersion and uid of the claimRef. Other PVCs will not be able to claim it, only the one matching the name and ns. This could be useful for wipe/redeploy of the app.Panslavism
yeah, delete claimRef: xxx worked for meClank
Excellent, it's patched. After executing, persistentvolume/pv-name patched is shown and the pod's status becomes healthy. Thank you very muchFaceplate

The official documentation on PVs has this answer:

The Retain reclaim policy allows for manual reclamation of the resource. When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered “released”. But it is not yet available for another claim because the previous claimant’s data remains on the volume. An administrator can manually reclaim the volume with the following steps.

  1. Delete the PersistentVolume. The associated storage asset in external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists after the PV is deleted.
  2. Manually [copy / backup and/or] clean up the data on the associated storage asset.
  3. Manually delete the associated storage asset, or if you want to reuse the same storage asset, create a new PersistentVolume with the storage asset definition.
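
If you want to keep the underlying disk in the first place, you can also flip the reclaim policy on the existing PV before deleting anything. The patch syntax below follows the Kubernetes docs on changing a PV's reclaim policy, reusing the PV name from the question:

kubectl patch pv pv-for-rabbitmq -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
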
Snath answered 14/8, 2019 at 21:9 Comment(0)

Like @Bharat Chhabra's answer, but this will change the status of all Released PersistentVolumes to Available:

kubectl get pv | tail -n+2 | awk '$5 == "Released" {print $1}' | xargs -I{} kubectl patch pv {} --type='merge' -p '{"spec":{"claimRef": null}}'
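
To preview which PVs would be patched before actually running it, you can drop the xargs part (same assumption about the default kubectl get pv column layout, where STATUS is the fifth field):

kubectl get pv | tail -n+2 | awk '$5 == "Released" {print $1}'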
Immateriality answered 26/3, 2022 at 11:34 Comment(1)
Another version: for resource in $(kubectl get pv | grep Released | cut -d' ' -f1); do kubectl patch pv "${resource}" -p '{"spec":{"claimRef": null}}'; done Hibernaculum

The phrase "Pods consume node resources and PVCs consume PV resources" may be useful for fully understanding the relationship (the "friendship") between PV and PVC.

I attempted a full reproduction of the behavior using the provided YAML file and could not reproduce it; everything returned the expected result. So, before providing any further details, here is a walk-through of my reproduction.

Step 1: Created PD in Europe-west1 zone

sunny@dev-lab:~$ gcloud compute disks create --size 5Gi disk-for-rabbitmq --zone europe-west1-b

WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O 
performance. For more information, see: 

NAME               ZONE            SIZE_GB  TYPE         STATUS
disk-for-rabbitmq  europe-west1-b  5        pd-standard  READY

Step 2: Create a PV and PVC using the project YAML file

sunny@dev-lab:~$  kubectl apply -f pv-and-pvc.yaml

persistentvolume "pv-for-rabbitmq" created
persistentvolumeclaim "pvc-for-rabbitmq" created

Step 3: List all the available PVC

sunny@dev-lab:~$ kubectl get pvc
NAME               STATUS    VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-for-rabbitmq   Bound     pv-for-rabbitmq   5Gi        RWO            standard       16s

Step 4: List all the available PVs

sunny@dev-lab:~$ kubectl get pv
NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                      STORAGECLASS   REASON    AGE
pv-for-rabbitmq   5Gi        RWO            Delete           Bound     default/pvc-for-rabbitmq   standard                 28s

Step 5: Delete the PVC and verify the result

sunny@dev-lab:~$  kubectl delete pvc pvc-for-rabbitmq
persistentvolumeclaim "pvc-for-rabbitmq" deleted

sunny@dev-lab:~$  kubectl get pv

No resources found.

sunny@dev-lab:~$  kubectl get pvc

No resources found.

sunny@dev-lab:~$  kubectl describe pvc-for-rabbitmq

the server doesn't have a resource type "pvc-for-rabbitmq"

As per your question

A side question, just improve my understanding: why PV is still there, even though it has a reclaim policy set to Delete? Isn't this what the docs say for the Delete reclaim policy?

You are absolutely correct: as per the documentation, when a user is done with their volume, they can delete the PVC object from the API, which allows reclamation of the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with the volume after it has been released of its claim. In your YAML it was set to:

Reclaim Policy:  Delete

which means that it should have been deleted immediately. Currently, volumes can either be Retained, Recycled or Deleted.
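
For reference, you can confirm what the cluster has actually recorded as the policy (reusing the PV name from the question):

kubectl get pv pv-for-rabbitmq -o jsonpath='{.spec.persistentVolumeReclaimPolicy}{"\n"}'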

Why wasn't it deleted? The only thing I can think of is that the PV was somehow still claimed, likely because the PVC was not successfully deleted (its capacity is showing "0"); to fix this you will need to delete the Pod. Alternatively, you can use the kubectl describe pvc command to see why the PVC is still in a Pending state.
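
For example, with the names from the question (the grep is just a rough way to spot which Pods still mount the claim):

kubectl describe pvc pvc-for-rabbitmq
# list Pods that still reference the claim by name
kubectl get pods -o yaml | grep -B 3 'claimName: pvc-for-rabbitmq'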

And for the question, How do I make the PVC regain access to data in the existing PV?

This is not possible because of the reclaim policy, i.e. Reclaim Policy: Delete. To make this possible you would need to use the Retain option instead, as per the documentation.

To validate the theory that you can delete the PVC and keep the disk, do the following:

  • Change the reclaim policy to Retain
  • Delete the PVC
  • Delete the PV

And then verify if the disk was retained.
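
To check the disk itself on the GCE side, something like this should work (reusing the disk name and zone from the question):

gcloud compute disks describe disk-for-rabbitmq --zone europe-west1-b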

Substantialism answered 15/6, 2018 at 22:34 Comment(0)

I wrote a simple automatic PV releaser controller that finds Released PVs and makes them Available again for new PVCs; check it out here: https://github.com/plumber-cd/kubernetes-dynamic-reclaimable-pvc-controllers.

But please make sure you read the disclaimers and that this is exactly what you want. Kubernetes doesn't do it automatically for a reason: workloads aren't supposed to have access to data from other workloads. When they should, the idiomatic Kubernetes way is StatefulSets, so Kubernetes guarantees that only replicas of the same workload may claim the old data. My releaser can certainly be useful in some cases, like a CI/CD build cache (which it was created for), but normally a PVC means "give me clean, ready-to-use storage I can save some data on", so at the very least make it a separate StorageClass.
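
For the "separate StorageClass" suggestion, a minimal sketch could look like this; the provisioner name is an assumption (GKE's GCE PD CSI driver), so adjust it for your cluster:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-retain
provisioner: pd.csi.storage.gke.io   # assumption: GKE's PD CSI driver
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer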

Herbarium answered 21/6, 2021 at 20:57 Comment(0)

The patches from the other answers worked for me only after deleting the Deployment.
After that, the Terminating resources got Released.


Delete all the resources listed by:

kubectl -n YOURNAMESPACE get all

Use kubectl -n YOURNAMESPACE delete <resource> <id> or (if you copy-paste from the above output) kubectl -n YOURNAMESPACE delete <resource>/<id>, for each resource that you see listed there.

You can also do it all at once: kubectl -n YOURNAMESPACE delete <resource>/<id1> <resource>/<id2> <resource2>/<id3> <resource2>/<id4> <resource3>/<id5> and so on.

You probably tried to remove resources, but they kept getting recreated by the Deployment or ReplicaSet resource, preventing the namespace from freeing up dependent resources and from being cleaned up.
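
In practice that means something like the following (a sketch; replace the placeholder names with whatever kubectl -n YOURNAMESPACE get all showed for your release):

kubectl -n YOURNAMESPACE delete deployment/YOURDEPLOYMENT
# once the Pods are gone, the stuck PVCs can finish terminating
kubectl -n YOURNAMESPACE get pvc,pv --watch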

Nodababus answered 15/2, 2022 at 17:48 Comment(0)

The answer from Bharat worked for me as well.

If your PV shows up as "Released" and you have already deleted the PVC via helm uninstall or another method, then you cannot reuse this PV unless you remove the claim ref:

kubectl patch pv PV_NAME -p '{"spec":{"claimRef": null}}'

Keep in mind, you cannot do this while the PV is still bound: you must first delete the PVC so the PV shows "Released", and only then run this command. The PV's status should then appear as "Available" and it can be reused.

Ballade answered 15/5, 2021 at 1:57 Comment(0)

Removing claimRef.resourceVersion and claimRef.uid made the PV Available for me.
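
A sketch of how that can be done with a JSON patch (reusing the PV name from the question); as noted in the comments above, keeping the rest of claimRef in place means only a PVC with the matching name and namespace should be able to re-bind:

kubectl patch pv pv-for-rabbitmq --type json -p '[{"op": "remove", "path": "/spec/claimRef/resourceVersion"}, {"op": "remove", "path": "/spec/claimRef/uid"}]'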

Leanneleanor answered 15/4 at 22:52 Comment(0)
