0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims

As the documentation states:

For each VolumeClaimTemplate entry defined in a StatefulSet, each Pod receives one PersistentVolumeClaim. In the nginx example above, each Pod receives a single PersistentVolume with a StorageClass of my-storage-class and 1 GiB of provisioned storage. If no StorageClass is specified, then the default StorageClass will be used. When a Pod is (re)scheduled onto a node, its volumeMounts mount the PersistentVolumes associated with its PersistentVolumeClaims. Note that the PersistentVolumes associated with the Pods' PersistentVolumeClaims are not deleted when the Pods or StatefulSet are deleted. This must be done manually.

The part I'm interested in is this: If no StorageClass is specified, then the default StorageClass will be used

I create a StatefulSet like this:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: ches
  name: ches
spec:
  serviceName: ches
  replicas: 1
  selector:
    matchLabels:
      app: ches
  template:
    metadata:
      labels:
        app: ches
    spec:
      serviceAccountName: ches-serviceaccount
      nodeSelector:
        ches-worker: "true"
      volumes:
      - name: data
        hostPath:
          path: /data/test
      containers:
      - name: ches
        image: [here I have the repo]
        imagePullPolicy: Always
        securityContext:
          privileged: true
        args:
        - server
        - --console-address
        - :9011
        - /data
        env:
        - name: MINIO_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: ches-keys
              key: access-key
        - name: MINIO_SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: ches-keys
              key: secret-key
        ports:
        - containerPort: 9000
          hostPort: 9011
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: data
          mountPath: /data
      imagePullSecrets:
        - name: edge-storage-token
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

Of course I have already created the secrets, the imagePullSecrets, etc., and I have labeled the node as ches-worker.

When I apply the yaml file, the pod is in Pending status and kubectl describe pod ches-0 -n ches gives the following error:

Warning FailedScheduling 6s default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling

Am I missing something here?

Shillong answered 9/12, 2022 at 10:43 Comment(0)

K3s, when installed, also deploys a storage class and marks it as the default.

Check with kubectl get storageclass:

NAME         PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  8s

A plain K8s cluster, on the other hand, does not come with a default storage class.

In order to solve the problem, install a provisioner on the cluster and mark its StorageClass as the default (see the sketch below).
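A minimal sketch of that setup, assuming the Rancher local-path provisioner (the manifest URL and the local-path class name come from its upstream repository and may change):

# Install the local-path provisioner (assumed upstream manifest)
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

# Mark it as the cluster default so PVCs without an explicit storageClassName
# (like the one generated by the volumeClaimTemplates above) can bind to it
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# Verify that the class is now flagged as (default)
kubectl get storageclass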

Shillong answered 19/12, 2022 at 6:18 Comment(2)
Local-path is tied to a single node: a Pod that starts on node2 cannot bind to a PV that was created on node1. In a single-node installation this is acceptable, but not in a cluster. – Outshout
Warning: HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPaths when possible. When a HostPath volume must be used, it should be scoped to only the required file or directory, and mounted as ReadOnly. If restricting HostPath access to specific directories through AdmissionPolicy, volumeMounts MUST be required to use readOnly mounts for the policy to be effective. – Sudatorium

You need to create a PV in order to get a PVC bound. If you want PVs created automatically from PVCs, you need a provisioner installed in your cluster.

First create a PV with at least the amount of space needed by your PVC, as sketched below. Then you can apply your StatefulSet YAML, which contains the PVC template.
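For example, a minimal hostPath PV sized to match the 1Gi claim from the volumeClaimTemplates (the name ches-pv and the path /data/ches are assumptions, and hostPath only makes sense on a single-node setup):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ches-pv              # assumed name
spec:
  capacity:
    storage: 1Gi             # must be at least the PVC request
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/ches         # assumed directory on the node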

Outshout answered 9/12, 2022 at 12:16 Comment(2)
No, it is not mandatory. See also this: bluexp.netapp.com/blog/… – Shillong
That won't work without a PV or a provisioner. Take a look also here: #56450772 – Outshout

I fixed this issue by doing these steps.

  1. Check what you have

kubectl get pvc

kubectl get pv

  2. Delete everything

kubectl delete pv your-name-pv

kubectl delete pvc your-name-pvc

  3. Create everything from scratch
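
For the last step, a hedged example of recreating the resources (your-statefulset.yaml is a hypothetical file name standing in for whatever manifests you deleted):

# Re-apply your manifests; the StatefulSet controller recreates the PVCs
# from volumeClaimTemplates, and the provisioner (or your manual PV) binds them
kubectl apply -f your-statefulset.yaml

kubectl get pvc,pv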
Ada answered 29/11, 2023 at 20:50 Comment(0)

This can also mean that the underlying storage driver does not exist; for example, the EBS CSI driver is not installed or not ready yet. A quick check is sketched below.
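
A quick way to check, assuming an AWS cluster with the EBS CSI driver (the grep pattern and the kube-system namespace are assumptions; adjust them to your installation):

# List the CSI drivers registered with the cluster
kubectl get csidrivers

# Check that the EBS CSI controller and node pods are actually running
kubectl get pods -n kube-system | grep ebs-csi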

Figured answered 28/3 at 11:41 Comment(0)

In my case, I was using minikube and the volume I requested looked like this:

volume:
  storage: 20Gi
  className: managed-csi
  hostPath: false

Changing the storage class to minikube's default (standard) and enabling hostPath fixed it:

volume:
  storage: 20Gi
  className: standard
  hostPath: true
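
If in doubt about which class name to use, listing the classes shows what the cluster actually provides; on minikube the built-in default is typically standard, backed by k8s.io/minikube-hostpath:

# The default class is marked with "(default)" next to its name
kubectl get storageclass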
Complete answered 27/8 at 8:53 Comment(0)
