pod has unbound PersistentVolumeClaims

When I push my deployments, for some reason, I'm getting the error on my pods:

pod has unbound PersistentVolumeClaims

This is running locally, not on any cloud solution. My YAML is below:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: ckan
  name: ckan
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: ckan
    spec:
      containers:
      - image: slckan/docker_ckan
        name: ckan
        ports:
        - containerPort: 5000
        resources: {}
        volumeMounts:
        - name: ckan-home
          mountPath: /usr/lib/ckan/
          subPath: ckan
      volumes:
      - name: ckan-home
        persistentVolumeClaim:
          claimName: ckan-pv-home-claim
      restartPolicy: Always
status: {}

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ckan-pv-home-claim
  labels:
    io.kompose.service: ckan
spec:
  storageClassName: ckan-home-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  volumeMode: Filesystem
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ckan-home-sc
provisioner: kubernetes.io/no-provisioner
mountOptions:
  - dir_mode=0755
  - file_mode=0755
  - uid=1000
  - gid=1000
Exarchate answered 5/10, 2018 at 15:28 Comment(0)

You have to define a PersistentVolume providing the disk space to be consumed by the PersistentVolumeClaim.

When a storageClassName is set, Kubernetes enables "Dynamic Volume Provisioning", which does not work with the local file system.


To solve your issue:

  • Provide a PersistentVolume fulfilling the constraints of the claim (a size >= 100Mi)
  • Remove the storageClassName from the PersistentVolumeClaim, or set it to an empty string ("") – see the sketch after this list
  • Remove the StorageClass from your cluster
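
As a minimal sketch of the second option, here is the claim from the question with dynamic provisioning disabled (all names taken from the question):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ckan-pv-home-claim
  labels:
    io.kompose.service: ckan
spec:
  storageClassName: ""   # empty string disables dynamic provisioning; bind to a static PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi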

How do these pieces play together?

When you write the deployment manifest, it is usually already known what kind of storage (amount, speed, ...) the application will need.
To keep a deployment versatile, you want to avoid a hard dependency on specific storage. Kubernetes' volume abstraction allows you to provide and consume storage in a standardized way.

The PersistentVolumeClaim is used to state a storage constraint alongside the deployment of an application.

The PersistentVolume offers cluster-wide volume instances ready to be consumed ("bound"). One PersistentVolume is bound to exactly one claim. But since pods using that claim may run on multiple nodes, the volume may be accessed from multiple nodes.

A PersistentVolume without a StorageClass is considered to be statically provisioned.

"Dynamic Volume Provisioning" alongside with a StorageClass allows the cluster to provision PersistentVolumes on demand. In order to make that work, the given storage provider must support provisioning - this allows the cluster to request the provisioning of a "new" PersistentVolume when an unsatisfied PersistentVolumeClaim pops up.


Example PersistentVolume

To find out how to specify things, you are best advised to take a look at the API for your Kubernetes version, so the following example is built from the Kubernetes 1.17 API reference:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ckan-pv-home
  labels:
    type: local
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce   # required on a PV; must cover the claim's ReadWriteOnce
  hostPath:
    path: "/mnt/data/ckan"

The PersistentVolumeSpec allows us to define multiple attributes. I chose a hostPath volume, which maps a local directory into the volume. The capacity allows the resource scheduler to recognize this volume as applicable to the claim's resource needs.
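
To verify the result, you can apply the manifests and check that both objects reach the Bound status (the file name here is assumed):

kubectl apply -f ckan-pv.yaml          # file name assumed
kubectl get pv ckan-pv-home            # STATUS should become "Bound"
kubectl get pvc ckan-pv-home-claim     # STATUS should become "Bound"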



Genipap answered 5/10, 2018 at 15:38 Comment(8)
You don't necessarily have to remove the StorageClass; it is enough to replace the storage class name with an empty string, like storageClassName: "" – Gasolier
How should the PersistentVolume be defined? – Carnet
@VictorZuanazzi good question – it seems the docs changed slightly, so I added an example. Digging into the API is pretty hard at the beginning. Fortunately, there are often cross-references between the API docs and the guides. I hope this helps you to go on. – Genipap
Thanks for the hint @Gasolier – I added that option to the description. – Genipap
Do you need to attach a PersistentVolume to a StatefulSet? – Kordula
I guess it depends – I haven't used StatefulSets yet. If you're going to use a StatefulSet and your application meets the requirements described here: kubernetes.io/docs/concepts/workloads/controllers/statefulset/… then it's likely you require a PersistentVolume. See the API here: kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/… and consider filing a dedicated SO question if you need further help. – Genipap
@FlorianNeumann The documentation mentions it's for a single-node cluster. What if my cluster has more than one node? – Hernandez
@Hernandez – I am not sure which docs you're referring to, but one claim can be used by multiple nodes depending on your deployment; Kubernetes should take care of this unless you configure the PV to be consumed in a special way (see e.g. ReadWriteOncePod: kubernetes.io/docs/concepts/storage/persistent-volumes/…) – Genipap

If you ever see pod events like "0/n nodes are available ... pod has unbound immediate PersistentVolumeClaims ... Preemption is not helpful for scheduling.", you probably want to look next at which PersistentVolumeClaim is actually unbound.

Try kubectl get pvc and you may see something like:

% kubectl get pvc
NAME               STATUS     VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
unbound-volume     Pending                                            nfs            8m25s
good-volume        Bound      pvc-exists   1Gi        RWO            gp2            20h

You'll likely see the PVC you're looking for in the Pending status (in our example, unbound-volume), but it could be stuck in some other status. Right now the PersistentVolumeClaim is just that – a claim on a volume it is still waiting for. Until that volume is provided and bound, your pod won't start.

We can take a closer look with kubectl describe pvc, where we should see a more helpful message describing the volume provisioning issue in detail:

% kubectl describe pvc unbound-volume 
...
Events:
  Type    Reason                Age                   From                         Message
  ----    ------                ----                  ----                         -------
  Normal  ExternalProvisioning  3m30s (x42 over 13m)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "cluster.local/bad-nfs-provisioner" or manually created by system administrator

In my case I was running an NFS provisioner that was failing to create new NFS volumes. In your case, whatever is supposed to provision PersistentVolumes for that storage class may be failing, or the claim may reference the wrong storage class, as other answers mention.
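
If the provisioner is the suspect, it is also worth checking that the storage class the claim references exists and names a provisioner that is actually running. A sketch, using the class name from the example output above:

kubectl get storageclass                                  # list classes and their provisioners
kubectl describe storageclass nfs                         # inspect the class the pending claim uses
kubectl get pods --all-namespaces | grep -i provisioner   # if the provisioner runs in-cluster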

Smart answered 17/5, 2023 at 15:40 Comment(0)

If you're using the Rancher k3s Kubernetes distribution, set storageClassName to local-path, as described in the documentation:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 2Gi

To use it on other distributions, install the provisioner from https://github.com/rancher/local-path-provisioner
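
At the time of writing, the project README installs it with a single manifest (check the repository for the current URL):

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml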

Diseased answered 9/4, 2022 at 18:19 Comment(0)

I ran into this issue but realized that I was creating my PVs with the "manual" StorageClass type, while the pod expected a different one.

Which kind of storage class does your pod expect? The storageClassName must match on both sides:

Your PVC definition (volumeClaimTemplates in a StatefulSet) --> storageClassName: "standard"

Your PV spec --> storageClassName: "standard"
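
As a minimal sketch of such a matching pair (names, sizes, and the host path are illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv                # illustrative
spec:
  storageClassName: standard   # must match the claim below
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data/demo       # illustrative local path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc               # illustrative
spec:
  storageClassName: standard   # same class name as the PV above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi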

Screeching answered 25/2, 2022 at 16:30 Comment(0)

We faced a very similar issue today. For us the problem was that there was no CSI driver installed on the nodes. To check the drivers installed, you can use this command:

kubectl get csidriver 

Our managed Kubernetes clusters (v1.25) run in Google Cloud, so for us the solution was simply to enable the "Compute Engine persistent disk CSI Driver" feature in the console.
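
The same can be done from the command line; a sketch, where the cluster name and zone are placeholders:

gcloud container clusters update CLUSTER_NAME \
  --update-addons=GcePersistentDiskCsiDriver=ENABLED \
  --zone=ZONE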

Haslett answered 1/2, 2023 at 13:50 Comment(0)

In my case the problem was a wrong PersistentVolume name specified in the PersistentVolumeClaim declaration.

But there might be more reasons for it. Make sure that:

  1. The volumeName specified in the PVC matches the PV name (see the sketch after this list)
  2. The storageClassName specified in the PVC matches the PV's storageClassName
  3. Sufficient capacity is allocated to the PV to cover the PVC's request
  4. The access modes of your PV and PVC are consistent
  5. There are enough PVs for your PVCs – each PV binds to exactly one PVC
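
For point 1, a sketch of pinning a claim to a specific volume via volumeName (names are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-claim        # illustrative
spec:
  volumeName: app-data-pv     # must exactly match the metadata.name of the PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi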


Monostich answered 26/8, 2022 at 11:56 Comment(0)
