Why is ReadWriteOnce working on different nodes?

Our platform, which runs on K8s, has several components. We need to share storage between two of these components (comp-A and comp-B), but by mistake we defined the PV and PVC for it as ReadWriteOnce. Even when those two components were running on different nodes, everything worked and we were able to read from and write to the storage from both components.

According to the K8s docs, a ReadWriteOnce volume can only be mounted by a single node, so we should have used ReadWriteMany:

  • ReadWriteOnce -- the volume can be mounted as read-write by a single node
  • ReadOnlyMany -- the volume can be mounted read-only by many nodes
  • ReadWriteMany -- the volume can be mounted as read-write by many nodes

So I am wondering why everything was working fine when it shouldn't have.

More info: we use NFS for storage, we are not using dynamic provisioning, and below is how we define our PV and PVC (we use Helm):

- apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: gstreamer-{{ .Release.Namespace }}
  spec:
    capacity:
      storage: 10Gi
    accessModes:
      - ReadWriteOnce
    persistentVolumeReclaimPolicy: Recycle
    mountOptions:
      - hard
      - nfsvers=4.1
    nfs:
      server: {{ .Values.global.nfsserver }}
      path: /var/nfs/general/gstreamer-{{ .Release.Namespace }}

- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: gstreamer-claim
    namespace: {{ .Release.Namespace }}
  spec:
    volumeName: gstreamer-{{ .Release.Namespace }}
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi

Update

The output of some kubectl commands:

$ kubectl get -n 149 pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
gstreamer-claim   Bound    gstreamer-149                              10Gi       RWO                           177d


$ kubectl get -n 149 pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS   REASON   AGE
gstreamer-149                              10Gi       RWO            Recycle          Bound    149/gstreamer-claim                                                 177d

I think somehow it takes care of it, because the only thing the pods need to do is connect to that IP.
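
For completeness, one way to confirm that the two components really are scheduled on different nodes while sharing the claim (same namespace as in the outputs above):

$ kubectl get pods -n 149 -o wide   # the NODE column shows where each pod is scheduled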

Foetation answered 14/8, 2020 at 12:13 Comment(6)
Which CSI do you use for it? – Carisa
We are not using any CSI. I copied what we do (the YAML). – Foetation
What do kubectl get pvc and kubectl get pv show? – Carisa
While the Kubernetes docs suggest otherwise, it is unclear whether accessModes is actually honored for NFS volumes; see #40524603. – Welltodo
Is it your local env (kubeadm, minikube) or are you using a cloud environment? – Araroba
We ran it on different clusters: microk8s, minikube, kubeadm, and we also once ran it on AWS using EKS for a short time as a test; that one also worked. – Foetation

The accessMode concept is quite misleading, especially with NFS.

The Kubernetes Persistent Volumes documentation mentions that NFS supports all access modes: RWO, ROX and RWX.
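
For illustration, a single NFS PV can advertise all three modes at once; a minimal sketch (the server address and path are made up):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-nfs-pv              # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:                      # everything the NFS export is capable of
    - ReadWriteOnce
    - ReadOnlyMany
    - ReadWriteMany
  nfs:
    server: 10.0.0.10               # made-up NFS server address
    path: /var/nfs/general/example  # made-up export path

A claim requesting only ReadWriteOnce could still bind to such a PV, as the matching rules below explain.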

However, accessMode is more of a matching criterion, just like the storage size. It is described better in the OpenShift Access Mode documentation:

A PersistentVolume can be mounted on a host in any way supported by the resource provider. Providers have different capabilities and each PV’s access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read-write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV’s capabilities.

Claims are matched to volumes with similar access modes. The only two matching criteria are access modes and size. A claim’s access modes represent a request. Therefore, you might be granted more, but never less. For example, if a claim requests RWO, but the only volume available is an NFS PV (RWO+ROX+RWX), the claim would then match NFS because it supports RWO.

Direct matches are always attempted first. The volume’s modes must match or contain more modes than you requested. The size must be greater than or equal to what is expected. If two types of volumes, such as NFS and iSCSI, have the same set of access modes, either of them can match a claim with those modes. There is no ordering between types of volumes and no way to choose one type over another.

All volumes with the same modes are grouped, and then sorted by size, smallest to largest. The binder gets the group with matching modes and iterates over each, in size order, until one size matches.

In the next paragraph:

A volume’s AccessModes are descriptors of the volume’s capabilities. They are not enforced constraints. The storage provider is responsible for runtime errors resulting from invalid use of the resource.

For example, NFS offers ReadWriteOnce access mode. You must mark the claims as read-only if you want to use the volume’s ROX capability. Errors in the provider show up at runtime as mount errors.
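
As a concrete sketch of that last point, consuming a volume read-only is expressed on the pod side (the pod name and image below are placeholders; the claim name is taken from the question):

apiVersion: v1
kind: Pod
metadata:
  name: reader-pod                  # placeholder name
spec:
  containers:
    - name: reader
      image: busybox                # placeholder image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
          readOnly: true            # mount the volume read-only inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: gstreamer-claim  # claim name from the question
        readOnly: true              # request the claim in read-only mode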

Another example: you can list several AccessModes in a claim, since they are not a constraint but matching criteria.

$ cat <<EOF | kubectl create -f -
> apiVersion: v1
> kind: PersistentVolumeClaim
> metadata:
>   name: exmaple-pvc
> spec:
>   accessModes:
>     - ReadOnlyMany
>     - ReadWriteMany
>     - ReadWriteOnce
>   resources:
>     requests:
>       storage: 1Gi
> EOF

or, as per this GKE example:

$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: exmaple-pvc-rwo-rom
spec:
  accessModes:
    - ReadOnlyMany
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF               
persistentvolumeclaim/exmaple-pvc-rwo-rom created

PVC Output

$ kubectl get pvc
NAME                  STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
exmaple-pvc           Pending                                                                        standard       2m18s
exmaple-pvc-rwo-rom   Bound     pvc-d704d346-42b3-4090-af96-aebeee3053f5   1Gi        RWO,ROX        standard       6s
persistentvolumeclaim/exmaple-pvc created

exmaple-pvc is in Pending state because the default GKE GCEPersistentDisk does not support ReadWriteMany:

Warning  ProvisioningFailed  10s (x5 over 69s)  persistentvolume-controller  Failed to provision volume with StorageClass "standard": invalid AccessModes [ReadOnlyMany ReadWriteMany ReadWriteOnce]: only AccessModes [ReadWriteOnce ReadOnlyMany] are supported

However, the second PVC, exmaple-pvc-rwo-rom, was created, and you can see it has two access modes: RWO and ROX.

In short, accessMode is more of a requirement for the PVC and PV to bind. If an NFS volume, which provides all access modes, binds with RWO, it fulfills the requirement; however, it will still work as RWX because NFS provides that capability.
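
If you want the manifests in the question to state what is actually happening, the only change needed would be the accessModes list in both the PV and the PVC, roughly:

accessModes:
  - ReadWriteMany   # declare shared read-write use in both the PersistentVolume and the PersistentVolumeClaim

The claim would bind just as before, but the declared mode would now match how the volume is really used by the two components.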

Hope this clears it up a bit.

In addition, you can check other Stack Overflow threads regarding accessMode.

Araroba answered 21/8, 2020 at 14:23 Comment(1)
Hi, thanks for the advice! From this I take it that setting a PVC to ReadWriteOnce will not enforce that the pods using the given PV are restricted to the node on which the PV is provisioned, right? – Proud
