Kubernetes Unable to mount volumes for pod

I'm trying to set up a volume to use with Mongo on k8s.

I use kubectl create -f pv.yaml to create the volume.

pv.yaml:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pvvolume
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/nfs"
  claimRef:
    kind: PersistentVolumeClaim
    namespace: default
    name: pvvolume

I then deploy this StatefulSet that has pods making PVCs to this volume.

My volume seems to have been created without a problem; I'm expecting it to just use the storage of the host node.

When I try to deploy I get the following error:

Unable to mount volumes for pod "mongo-0_default(2735bc71-5201-11e8-804f-02dffec55fd2)": timeout expired waiting for volumes to attach/mount for pod "default"/"mongo-0". list of unattached/unmounted volumes=[mongo-persistent-storage]

Have I missed a step in setting up my persistent volume?

Captive answered 7/5, 2018 at 15:49 Comment(0)

A PersistentVolume is just a declaration that some storage is available inside your Kubernetes cluster. There is no binding with your pod at this stage.

Since your pod is deployed through a StatefulSet, your cluster should contain one or more PersistentVolumeClaims, which are the objects that connect a pod with a PersistentVolume.
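A StatefulSet normally creates those claims from its volumeClaimTemplates section, naming each generated PVC <template name>-<pod name>. As a rough sketch (illustrative values only; your actual template comes from your own yaml), it might look like this:

volumeClaimTemplates:
- metadata:
    name: mongo-persistent-storage   # matches the volume name in your error message
  spec:
    storageClassName: fast           # illustrative; use whatever class your template declares
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 2Gi

With a pod named mongo-0, that template produces a PVC called mongo-persistent-storage-mongo-0.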

In order to manually bind a PV to a PVC, you need to edit your PVC and add the following to its spec section:

volumeName: "<your persistent volume name>"
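For example, a PVC pre-bound to the PV from the question would look roughly like this (a sketch; with a StatefulSet the claim is generated for you, e.g. mongo-persistent-storage-mongo-0, so you would edit that generated PVC rather than create a new one):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-persistent-storage-mongo-0
  namespace: default
spec:
  storageClassName: standard   # must match the PV's storageClassName for binding to succeed
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi             # must be less than or equal to the PV's capacity
  volumeName: "pvvolume"       # the PersistentVolume to bind to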

Here is an explanation of how this process works: https://docs.openshift.org/latest/dev_guide/persistent_volumes.html#persistent-volumes-volumes-and-claim-prebinding

Kappel answered 7/5, 2018 at 19:44 Comment(13)
The volumeName is set, but the PVC is still pending. – Captive
Is the storage size on your PVC less than or equal to 10Gi? – Kappel
According to the yaml of the StatefulSet it should be 2Gi. When I look in the k8s UI it shows - for capacity gateway.ipfs.io/ipfs/… – Captive
Just to be sure, try and get the PVC's yaml and check. – Kappel
Seems to be 2 in the yaml gist.github.com/kirkins/47b33b4f86e44f1b8f533bef40e3ddd1 – Captive
Another thing I see is the claimRef. It references a PVC called pvvolume. Nothing wrong per se, but it doesn't look like a PVC name as created by a StatefulSet. Is your mongo PVC really named pvvolume? – Kappel
I just deleted everything and re-generated based on the yaml in my question, and the StatefulSet yaml I linked. The PVC yaml generated seems to be the exact same as the gist. Maybe this is a clue to what went wrong? – Captive
Sorry, I must have missed something. Could you post the output of kubectl get pvc on your namespace please? – Kappel
mongo-persistent-storage-mongo-0 Pending pvvolume 0 fast 1h – Captive
Wait ... that isn't right. If I just deleted everything and regenerated the setup, the PVC shouldn't be 1 hour old. – Captive
The PVCs have to be deleted manually. They don't get deleted if you delete the StatefulSet that generated them. – Kappel
Anyway, the error is in the PV IMHO: the claimRef section should have name: mongo-persistent-storage-mongo-0 rather than name: pvvolume (see the corrected claimRef below). – Kappel
Hmm, ok, I changed that and now the PVC is successful. There we go, it is working! – Captive
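For reference, the corrected claimRef section of pv.yaml from the comment thread, pointing at the PVC name the StatefulSet actually generates (<template name>-<pod name>), ends up like this:

claimRef:
  kind: PersistentVolumeClaim
  namespace: default
  name: mongo-persistent-storage-mongo-0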

My case is an edge case, and I doubt that you will reach it. However, I will describe it because it cost me a lot of grey hair, and maybe it will save yours.

The same error occurred for me even though the PV and PVC were bound. The pod was constantly stuck in the ContainerCreating state, yet kubectl get events threw the error asked about in this question.

$ kubectl get pv
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
sewage-db   5Ti        RWO            Retain           Bound    global-sewage/sewage-db   nfs                     3h40m

$ kubectl get pvc -n global-sewage
NAME        STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
sewage-db   Bound    sewage-db   5Ti        RWO            nfs            3h39m

After rebooting the server it turned out that one of the 32GiB physical RAM modules was corrupted. Removing that memory fixed the issue.

Terrorism answered 4/5, 2020 at 12:32 Comment(0)
