Kubernetes PVC with ReadWriteMany on AWS

I want to set up a PVC on AWS, where I need ReadWriteMany as the access mode. Unfortunately, EBS only supports ReadWriteOnce.

How could I solve this?

  • I have seen that there is a beta provisioner for AWS EFS which supports ReadWriteMany, but, as said, it is still in beta and its installation looks somewhat flaky.
  • I could use node affinity to force all pods that rely on the EBS volume to a single node, and stay with ReadWriteOnce, but this limits scalability.
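For illustration, the node-affinity workaround from the second bullet would mean pinning every pod that uses the EBS-backed claim to one node, e.g. via a nodeSelector (the node hostname and claim name below are hypothetical):

```yaml
# Sketch of the ReadWriteOnce workaround: pin all pods using the
# EBS-backed claim to a single node so the volume only attaches once.
apiVersion: v1
kind: Pod
metadata:
  name: app-using-ebs
spec:
  nodeSelector:
    kubernetes.io/hostname: ip-10-0-0-42.eu-central-1.compute.internal  # hypothetical node
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: ebs-claim  # existing ReadWriteOnce claim
```

This works, but as noted it caps scalability at a single node.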

Are there any other ways of how to solve this? Basically, what I need is a way to store data in a persistent way to share it across pods that are independent of each other.

Culbertson answered 6/7, 2018 at 14:46

Using EFS without automatic provisioning

The EFS provisioner may be beta, but EFS itself is not. Since EFS volumes can be mounted via NFS, you can simply create a PersistentVolume with an NFS volume source manually -- assuming that automatic provisioning is not a hard requirement on your side:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-efs-volume
spec:
  capacity:
    storage: 100Gi # Doesn't really matter, as EFS does not enforce it anyway
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  mountOptions:
    - hard
    - nfsvers=4.1
    - rsize=1048576
    - wsize=1048576
    - timeo=600
    - retrans=2
  nfs:
    path: /
    server: fs-XXXXXXXX.efs.eu-central-1.amazonaws.com

You can then claim this volume using a PersistentVolumeClaim and use it in a Pod (or multiple Pods) as usual.
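A matching claim could look like the following sketch (the claim name is illustrative; note that storageClassName is set to the empty string so the cluster's default gp2 StorageClass does not try to provision an EBS volume instead):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""    # prevent fallback to the default gp2 StorageClass
  volumeName: my-efs-volume  # bind explicitly to the PV defined above
  resources:
    requests:
      storage: 100Gi      # must not exceed the PV's declared capacity
```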

Alternative solutions

If automatic provisioning is a hard requirement for you, there are alternative solutions you might look at: there are several distributed filesystems that you can roll out on your cluster that offer ReadWriteMany storage on top of Kubernetes and/or AWS. For example, you might take a look at Rook (which is basically a Kubernetes operator for Ceph). It is also officially still in a pre-release phase, but I've already worked with it a bit and it runs reasonably well. There's also the GlusterFS operator, which already seems to have a few stable releases.

Groping answered 6/7, 2018 at 19:20 Comment(1)
This example worked well for me when I explicitly set the storageClassName in the PersistentVolumeClaim to "". If not set explicitly, gp2 is assumed, which will result in an error message like "Failed to provision volume with StorageClass "gp2": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce] are supported" - Onward

You can use Amazon EFS to create PersistentVolume with ReadWriteMany access mode.

Amazon EKS announced support for the Amazon EFS CSI Driver on Sep 19, 2019, which makes it simple to configure elastic file storage for both EKS and self-managed Kubernetes clusters running on AWS using standard Kubernetes interfaces.

Applications running in Kubernetes can use EFS file systems to share data between pods in a scale-out group, or with other applications running within or outside of Kubernetes.

EFS can also help Kubernetes applications be highly available because all data written to EFS is written to multiple AWS Availability zones. If a Kubernetes pod is terminated and relaunched, the CSI driver will reconnect the EFS file system, even if the pod is relaunched in a different AWS Availability Zone.

You can deploy the Amazon EFS CSI Driver to an Amazon EKS cluster following the EKS-EFS-CSI user guide, basically like this:

Step 1: Deploy the Amazon EFS CSI Driver

kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"

Note: This command requires kubectl version 1.14 or later.

Step 2: Create an Amazon EFS file system for your Amazon EKS cluster

Step 2.1: Create a security group that allows inbound NFS traffic for your Amazon EFS mount points.

Step 2.2: Add a rule to your security group to allow inbound NFS traffic from your VPC CIDR range.

Step 2.3: Create the Amazon EFS file system configured with the security group you just created.
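The three sub-steps above can be sketched with the AWS CLI; all IDs and the CIDR range below are placeholders you would replace with values from your own VPC:

```shell
# Step 2.1: security group for the EFS mount targets (IDs are placeholders)
aws ec2 create-security-group \
  --group-name efs-sg \
  --description "NFS access for EFS mount targets" \
  --vpc-id vpc-0123456789abcdef0

# Step 2.2: allow inbound NFS (TCP 2049) from the VPC CIDR range
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 2049 \
  --cidr 192.168.0.0/16

# Step 2.3: create the file system and one mount target per subnet,
# attaching the security group created above
aws efs create-file-system --creation-token eks-efs
aws efs create-mount-target \
  --file-system-id fs-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0 \
  --security-groups sg-0123456789abcdef0
```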

Now you are good to use EFS with ReadWriteMany access mode in your EKS Kubernetes project with the following sample manifest files:

1. efs-storage-class.yaml: Create the storage class

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com

kubectl apply -f efs-storage-class.yaml

2. efs-pv.yaml: Create PersistentVolume

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ftp-efs-pv
spec:
  storageClassName: efs-sc
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 10Gi # Doesn't really matter, as EFS does not enforce it anyway
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-642da695

Note: you need to replace the volumeHandle value with your Amazon EFS file system ID.

3. efs-pvc.yaml: Create PersistentVolumeClaim

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ftp-pv-claim
  labels:
    app: ftp-storage-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: efs-sc
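To verify the setup, a pod can mount the claim like this (the pod name, image, and command are illustrative; the claim name matches the manifest above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: efs-storage
          mountPath: /data   # backed by EFS, shared across pods
  volumes:
    - name: efs-storage
      persistentVolumeClaim:
        claimName: ftp-pv-claim
```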

That should be it. Refer to the aforementioned official user guide for a detailed explanation, where you can also find an example app to verify your setup.

Cumulous answered 9/1, 2020 at 20:7

As you mention, EBS with node affinity and a node selector will limit scalability, and with EBS only ReadWriteOnce will work.

Sharing my experience: if you perform many operations on the file system and frequently push and fetch files, EFS can be slow and degrade application performance, because its operation rate is limited.

Alternatively, you can use GlusterFS with EBS volumes provisioned behind it. GlusterFS also supports ReadWriteMany, and it will be faster than EFS because it is backed by block storage (SSD).
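For reference, a GlusterFS volume can be consumed through the in-tree glusterfs volume source, roughly as sketched below; the endpoints object and volume name are illustrative and depend on how your GlusterFS cluster is deployed:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster  # Endpoints object listing Gluster peer IPs
    path: myvolume                # Gluster volume name
    readOnly: false
```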

Hudson answered 10/1, 2020 at 4:1

I know this is an old question, but there are now several alternatives beyond EFS. First, EBS io2 volumes (with Multi-Attach) now support ReadWriteMany. Another alternative you may want to look at is Amazon FSx for NetApp ONTAP. It is a managed service, like EFS, but supports cross-Availability-Zone access at no additional network cost, built-in HA across AZs, and other features such as instant snapshots, clones, deduplication, and cross-region replication.

Interrelation answered 16/7 at 21:41

© 2022 - 2024 — McMap. All rights reserved.