We are reorganising our namespaces in Kubernetes. We want to move our PersistentVolumeClaims, created by a StorageClass, from one namespace to another.
(Our backup tool doesn't help.)
The easiest and safest option to migrate a PVC/PV to a new namespace is to use a backup tool (like Velero).
The manual procedure below is undocumented.
In this example we use the VMware storage provider, but it should work with any StorageClass.
Make a backup, backup, backup, backup, backup!
If you do have a backup tool for Kubernetes (like Velero) you can restore directly into the target namespace; otherwise use kubectl cp,
as explained in How to copy files from Kubernetes Pods to local system.
Let's set some environment variables and back up the existing PV and PVC resources:
NAMESPACE1=XXX
NAMESPACE2=XXX
PVC=mypvc
kubectl get pvc -n $NAMESPACE1 $PVC -o yaml | tee /tmp/pvc.yaml
PV=pvc-XXXXXXXXXXXXX-XXXXXXXXXXXX
kubectl get pv $PV -o yaml | tee /tmp/pv.yaml
If your persistent volume (or storage provider) has persistentVolumeReclaimPolicy=Delete, make sure to change it to "Retain" to avoid data loss when removing the PVC below.
Run this:
kubectl patch pv "$PV" -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
Then check:
kubectl describe pv "$PV" | grep -e Reclaim
Manually delete the PersistentVolumeClaim (you have a copy, right?).
kubectl delete pvc -n "$NAMESPACE1" "$PVC"
A PV is attached to a namespace when it is first used by a PVC. Furthermore, the PV becomes "attached" to the PVC (by its uid, not by its name).
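To make that binding concrete, here is a minimal sketch (no cluster needed) of what a bound PV's claimRef section typically looks like; every field value below is invented for illustration:

```shell
# Write a hypothetical claimRef section, as found in a bound PV's spec.
# All values here are made up for illustration.
cat > /tmp/pv-claimref-sample.yaml <<'EOF'
spec:
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: mypvc
    namespace: old-namespace
    uid: 0f4d2a9c-1111-2222-3333-444455556666
EOF

# The PV is pinned to one specific PVC object by namespace + uid,
# which is why simply recreating a same-named PVC elsewhere is not enough:
grep -e 'namespace:' -e 'uid:' /tmp/pv-claimref-sample.yaml
```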
Change the namespace of the PV. Temporarily use the PVC name to "lock" the PV for that PVC (rather than the PVC uid).
kubectl patch pv "$PV" -p '{"spec":{"claimRef":{"namespace":"'$NAMESPACE2'","name":"'$PVC'","uid":null}}}'
Check what we have now:
kubectl get pv "$PV" -o yaml | grep -e Reclaim -e namespace -e uid: -e name: -e claimRef | grep -v " f:"
Create a PVC in the new namespace. Make sure to explicitly choose the PV to use (don't use the StorageClass to provision the volume). Typically, you can copy the original PVC YAML, but drop namespace:, selfLink: and uid: from the metadata: section.
This command should work (it re-uses the previous PVC), but you can use your own kubectl apply command.
grep -v -e "uid:" -e "resourceVersion:" -e "namespace:" -e "selfLink:" /tmp/pvc.yaml | kubectl -n "$NAMESPACE2" apply -f -
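The grep filter above can be rehearsed offline against a sample manifest to confirm which fields survive; the manifest below is invented for illustration:

```shell
# Hypothetical PVC manifest containing the fields the filter should strip.
cat > /tmp/pvc-sample.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: old-namespace
  uid: 0f4d2a9c-1111-2222-3333-444455556666
  resourceVersion: "12345"
  selfLink: /api/v1/namespaces/old-namespace/persistentvolumeclaims/mypvc
spec:
  volumeName: pvc-aaaabbbb-cccc-dddd-eeee-ffff00001111
EOF

# Same filter as above, minus the kubectl apply: uid, resourceVersion,
# namespace and selfLink are dropped; name and volumeName survive.
grep -v -e "uid:" -e "resourceVersion:" -e "namespace:" -e "selfLink:" \
  /tmp/pvc-sample.yaml | tee /tmp/pvc-clean.yaml
```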
At this point, the PV is bound to the former PVC's name (but it may not work, and it is not the standard configuration). Running kubectl describe -n "$NAMESPACE2" pvc "$PVC" will complain with Status: Lost and/or Warning ClaimMisbound. So let's fix the problem:
Retrieve the new PVC's uid:
PVCUID=$( kubectl get -n "$NAMESPACE2" pvc "$PVC" -o custom-columns=UID:.metadata.uid --no-headers )
Then update the PV accordingly:
kubectl patch pv "$PV" -p '{"spec":{"claimRef":{"uid":"'$PVCUID'","name":null}}}'
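Splicing shell variables into a JSON string, as above, is fragile if a value ever contains quotes. If jq is available, a safer sketch is to let it build the patch body; the uid below is invented, in the real flow you would use $PVCUID from the previous step:

```shell
# Hypothetical uid for illustration; use the real $PVCUID in practice.
PVCUID="0f4d2a9c-1111-2222-3333-444455556666"

# jq emits properly quoted JSON, with name explicitly nulled out.
PATCH=$(jq -cn --arg uid "$PVCUID" '{spec:{claimRef:{uid:$uid, name:null}}}')
echo "$PATCH"

# Then apply it (needs a live cluster, so shown commented out):
# kubectl patch pv "$PV" -p "$PATCH"
```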
After a few seconds the PV should be Status: Bound.
(This step is optional. It is only needed to ensure the PV is deleted when the user deletes the PVC.)
Once the PV is in the Bound state again, you can restore the reclaim policy if you want to preserve the original behaviour (i.e. removing the PV when the PVC is removed):
kubectl patch pv "$PV" -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
## Check:
kubectl get pv $PV -o yaml | grep -e Reclaim -e namespace
Voilà
If we mentioned backups in the beginning, then definitely reclaimPolicy should stay Retain to prevent unexpected data loss. – Pub
With storageClassName: nfs-client, what I did is to back up the nfs directory and, once the new pvc was created, I copied the content of the older one to it. – Scoop

I have migrated a pv with storageClassName: nfs-client to another namespace in a different way.
The steps performed:
1. Patch the pv with a reclaim policy of Retain (the default is Delete); this means that once the pvc is removed, the pv resource will not be removed:
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
2. Back up the content of the nfs directory (the -avR flags of the cp command are really important to achieve this task):
dirpath=$(kubectl get pv <pv-name> -o jsonpath="{.spec.nfs.path}")
\cp -avR ${dirpath} /tmp/pvc_backup
3. Delete the old pvc and pv:
kubectl delete pvc <pvc-name>
kubectl delete pv <pv-name>
4. Create the pvc in the target namespace from the saved manifest at path/pvc.yaml (for this example this was my pvc yaml file):
kubectl -n <target-namespace> create -f path/pvc.yaml
5. Once the pvc exists in the new namespace, proceed to copy the content of the backup directory to the new pvc (remember that all you need to do is create the nfs pvc, and the pv is created automatically):
nfs_pvc_dir=$(kubectl -n <target-namespace> get pv <pv-name> -o jsonpath="{.spec.nfs.path}")
\cp -avR /tmp/pvc_backup/* ${nfs_pvc_dir}/
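The backup-and-restore portion of these steps can be rehearsed locally without a cluster; the directories below stand in for the real NFS export paths and are purely hypothetical:

```shell
# Clean start, so repeated runs behave the same.
rm -rf /tmp/fake-nfs-export /tmp/pvc_backup /tmp/fake-new-nfs-export

# Stand-in for the old PV's NFS export (normally found via the jsonpath lookup).
dirpath=/tmp/fake-nfs-export
mkdir -p "$dirpath/data"
echo "hello" > "$dirpath/data/file.txt"

# Step 2: archive the export before deleting the pvc/pv.
\cp -avR "$dirpath" /tmp/pvc_backup

# Step 5: stand-in for the new PV's export; restore the archived content.
nfs_pvc_dir=/tmp/fake-new-nfs-export
mkdir -p "$nfs_pvc_dir"
\cp -avR /tmp/pvc_backup/* "$nfs_pvc_dir"/
```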
This was performed with microk8s on a public VPS with NFS storage.
Greetings :)
Velero. – Sirotek