Move or change a volume namespace

We are re-organising our namespaces in Kubernetes. We want to move our PersistentVolumeClaims (created by a StorageClass) from one namespace to another.

(Our backup tool doesn't help.)

Gules answered 6/2, 2021 at 15:42 Comment(2)
What backup tools do you use? I wonder if it is possible in Velero. – Sirotek
Velero seems to support Restoring Into a Different Namespace. I don't use it myself. @Sirotek – Gules

Option 1: use a backup tool

The easiest and safest way to migrate a PVC/PV to a new namespace is to use a backup tool (like Velero).
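
For example, recent versions of Velero support mapping namespaces on restore; a sketch (the backup name my-backup is a placeholder, and the exact flags may vary with your Velero version):

velero restore create --from-backup my-backup --namespace-mappings old-namespace:new-namespace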

Option 2: no backup tool (by hand)

This is undocumented.

In this example, we use the VMware storage provider, but it should work with any StorageClass.

Prepare

Make a *backup*, *backup*, *backup*!!!

Well, if you do have a backup tool for Kubernetes (like Velero), you can restore directly into the target namespace; otherwise, use kubectl cp as explained in How to copy files from kubernetes Pods to local system.
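
As a minimal sketch, assuming a pod named mypod in namespace mynamespace mounts the volume at /data (kubectl cp requires tar in the container image):

kubectl cp mynamespace/mypod:/data /tmp/pvc-data-backup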

Let's set some environment variables and back up the existing PV and PVC resources:

NAMESPACE1=XXX
NAMESPACE2=XXX
PVC=mypvc

kubectl get pvc -n $NAMESPACE1 $PVC -o yaml | tee /tmp/pvc.yaml

PV=pvc-XXXXXXXXXXXXX-XXXXXXXXXXXX

kubectl get pv  $PV -o yaml | tee /tmp/pv.yaml
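
If you don't know the PV name, you can also read it from the bound PVC instead of copying it by hand:

PV=$(kubectl get pvc -n "$NAMESPACE1" "$PVC" -o jsonpath='{.spec.volumeName}')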

Change ReclaimPolicy for PV

If your persistent volume (or storage provider) has persistentVolumeReclaimPolicy=Delete, make sure to change it to "Retain" to avoid data loss when removing the PVC below.

Run this:

kubectl patch pv "$PV" -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

Then check:

kubectl describe pv "$PV" | grep -e Reclaim

Remove the PVC

Manually delete the PersistentVolumeClaim (you have a copy, right?).

kubectl delete pvc -n "$NAMESPACE1" "$PVC"

Modify the Persistent Volume (PV)

A PV is attached to a namespace when it is first used by a PVC. Furthermore, the PV becomes "attached" to the PVC by its uid:, not by its name.
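
For illustration, the claimRef: section of a bound PV looks something like this (values are placeholders):

spec:
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: mypvc
    namespace: old-namespace
    uid: 11111111-2222-3333-4444-555555555555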

Change the namespace of the PV. Temporarily use the PVC name to "lock" the PV for that PVC (rather than the PVC uid).

"kubectl patch pv "$PV" -p '{"spec":{"claimRef":{"namespace":"'$NAMESPACE2'","name":"'$PVC'","uid":null}}}'

Check what we have now:

kubectl get pv "$PV" -o yaml | grep -e Reclaim -e namespace -e uid: -e name: -e claimRef | grep -v " f:"

Create the new PVC

Create a PVC in the new namespace. Make sure to explicitly choose the PV to use (don't let the StorageClass provision a new volume). Typically, you can copy the original PVC YAML, but drop namespace:, selfLink:, and uid: from the metadata: section.
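
As a sketch, a stripped-down PVC manifest pinned to our PV could look like this (access mode, size and storageClassName must match the original PVC; the values here are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: mystorageclass   # must match the PV's storageClassName
  volumeName: pvc-XXXXXXXXXXXXX-XXXXXXXXXXXX   # pins the PVC to our existing PV
  resources:
    requests:
      storage: 10Gi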

This command should work (it re-uses the previous PVC manifest), but you can use your own kubectl apply command.

grep -v -e "uid:" -e "resourceVersion:" -e "namespace:" -e "selfLink:"  /tmp/pvc.yaml | kubectl -n "$NAMESPACE2" apply -f -

Assign PVC to PV

At this point, the PV is bound to the former PVC's name (but it may not work, and it is not the standard configuration). Running kubectl describe -n "$NAMESPACE2" pvc "$PVC" will complain with Status: Lost and/or Warning ClaimMisbound. So let's fix the problem:

Retrieve the new PVC's uid:

PVCUID=$( kubectl get -n "$NAMESPACE2" pvc "$PVC" -o custom-columns=UID:.metadata.uid --no-headers )

Then update the PV accordingly:

kubectl patch pv "$PV" -p '{"spec":{"claimRef":{"uid":"'$PVCUID'","name":null}}}'

After a few seconds the PV should be Status: Bound.
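
A quick check on both sides:

kubectl get pv "$PV"
kubectl get pvc -n "$NAMESPACE2" "$PVC"

Both should report a Bound status.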

Restore PV ReclaimPolicy=Delete

(This step is optional. It is only needed to ensure the PV is deleted when a user deletes the PVC.)

Once the PV is in the Bound state again, you can restore the reclaim policy if you want to preserve the original behaviour (i.e. removing the PV when the PVC is removed):

kubectl patch pv "$PV" -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'

# Check:
kubectl get pv $PV -o yaml | grep -e Reclaim -e namespace

Voilà

Gules answered 6/2, 2021 at 15:42 Comment(2)
Thanks, this is a great answer! Although I don't get why you'd propose restoring the reclaimPolicy back to Delete. If we mentioned backups in the beginning, then the reclaimPolicy should definitely stay Retain to prevent unexpected data loss. – Pub
I performed this method, but it did not work with storageClassName: nfs-client. What I did was back up the NFS directory and, once the new PVC was created, copy the content of the old one into it. – Scoop

I migrated a PV that had storageClassName: nfs-client to another namespace in a different way.

The steps performed:

  • Change the PV reclaim policy to Retain (the default is Delete); this means that once the PVC is removed, the PV resource will not be removed:
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
  • Get the NFS directory path of the PV and copy its contents to another directory, preserving owner, permissions and everything (the -avR flags of the cp command are really important to achieve this):
dirpath=$(kubectl get pv <pv-name> -o jsonpath="{.spec.nfs.path}")
\cp -avR ${dirpath} /tmp/pvc_backup
  • Once it has been copied, you can proceed to remove the previous pvc and pv:
kubectl delete pvc <pvc-name>
kubectl delete pv <pv-name>
  • Create the new PVC resource in the target namespace with your own YAML file path/pvc.yaml (a minimal example manifest is sketched after this list):
kubectl -n <target-namespace> create -f path/pvc.yaml
  • Once it is created in the right namespace, copy the content of the backup directory into the new PVC (remember that all you need to do is create the NFS PVC; the PV is created automatically):
nfs_pvc_dir=$(kubectl -n <target-namespace> get pv <pv-name> -o jsonpath="{.spec.nfs.path}")
\cp -avR /tmp/pvc_backup/* ${nfs_pvc_dir}/
  • Finally, bind your new PVC to a Deployment or Pod resource.
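
Since the original manifest is not shown here, a minimal PVC for the nfs-client class might look like this (the name and size are illustrative placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 5Gi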

This was performed with MicroK8s on a public VPS with NFS storage.

Greetings :)

Scoop answered 8/7, 2022 at 15:3 Comment(0)
