Configuring Persistent Storage Using NFS

Alauda Container Platform clusters support persistent storage using NFS. Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) provide an abstraction layer for provisioning and consuming storage volumes within a project. While NFS configuration details can be embedded directly in a Pod definition, this approach does not create the volume as a distinct, isolated cluster resource, increasing the risk of conflicts.

Prerequisites

  • Storage must exist in the underlying infrastructure before it can be mounted as a volume in Alauda Container Platform.
  • To provision NFS volumes, a list of NFS servers and export paths is all that is required.

Procedure

Create an object definition for the PV

cat << EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-example
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /tmp
    server: 10.0.0.3
  persistentVolumeReclaimPolicy: Retain
EOF
  • metadata.name: The name of the volume.
  • spec.capacity.storage: The amount of storage allocated to this volume.
  • spec.accessModes: Although this appears to control access to the volume, it is actually used much like a label to match a PVC to a PV. Currently, no access rules are enforced based on the accessModes.
  • spec.nfs: The volume type being used, in this case the nfs plugin.
  • spec.nfs.path: The NFS export path.
  • spec.nfs.server: The NFS server address.
  • spec.persistentVolumeReclaimPolicy: What happens to the volume after the PVC is deleted (Retain, Delete, or Recycle).

Verify that the PV was created

kubectl get pv
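
The output will look similar to the following (exact columns vary by kubectl version; the values shown here are illustrative):

NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-nfs-example   1Gi        RWO            Retain           Available                                   10s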

Create a PVC that references the PV

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: pv-nfs-example
  storageClassName: ""
  • spec.accessModes: As with the PV, the access modes do not enforce security; they act as labels to match a PVC to a PV.
  • spec.resources.requests.storage: This claim looks for PVs offering 1Gi or greater capacity.
  • spec.volumeName: The name of the PV to bind to.
  • spec.storageClassName: Left as an empty string so that no StorageClass (including the cluster default) is applied and the claim binds only to the pre-created PV.

Verify that the persistent volume claim was created

kubectl get pvc
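
The output will look similar to the following once the claim binds to the PV (values are illustrative):

NAME         STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-claim1   Bound    pv-nfs-example   1Gi        RWO                           8s

Once bound, the claim can be consumed from a Pod by name, which keeps the NFS details out of the Pod definition. A minimal sketch, with a placeholder container image:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    volumeMounts:
    - name: nfs-data
      mountPath: /data
  volumes:
  - name: nfs-data
    persistentVolumeClaim:
      claimName: nfs-claim1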

Enforcing Disk Quotas via Partitioned Exports

To enforce disk quotas and size constraints, you can utilize disk partitions. Assign each partition as a dedicated export point, with each export corresponding to a distinct PersistentVolume (PV).

While Alauda Container Platform mandates unique PV names, it remains the administrator's responsibility to ensure the uniqueness of the NFS volume's server and path for each export.

This partitioned approach enables precise capacity management. Developers request persistent storage specifying a required amount (e.g., 10Gi), and ACP matches the request to a PV backed by a partition/export offering at least that specified capacity. Please note: The quota enforcement applies to the usable storage space within the assigned partition/export.
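
For illustration, a server could dedicate one partition per size class and export each one separately; the mount points and CIDR below are hypothetical:

# /etc/exports on the NFS server, one export per partition
/exports/pv-10g 10.0.0.0/24(rw,sync,root_squash,no_subtree_check)
/exports/pv-50g 10.0.0.0/24(rw,sync,root_squash,no_subtree_check)

Each export is then referenced by its own PV, whose spec.capacity.storage reflects the size of the underlying partition (for example, 10Gi and 50Gi in this sketch).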

NFS volume security

This section details NFS volume security mechanisms with a focus on permission matching. Readers are assumed to possess fundamental knowledge of POSIX permissions, process UIDs, and supplemental groups.

Developers request NFS storage through either:

  • A PersistentVolumeClaim (PVC) reference by name, or
  • Direct configuration of the NFS volume plugin in the volumes section of their Pod specification (illustrated below).
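
As a minimal sketch of the second approach, a Pod can reference the NFS server and export from the earlier PV example directly; the image name is a placeholder:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-direct-example
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    volumeMounts:
    - name: nfs-vol
      mountPath: /data
  volumes:
  - name: nfs-vol
    nfs:
      server: 10.0.0.3
      path: /tmp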

On the NFS server, the /etc/exports file defines export rules for accessible directories. Each exported directory retains its native POSIX owner/group IDs.

Key behavior of Alauda Container Platform's NFS plugin:

  1. Mounts volumes to containers while preserving exact POSIX ownership and permissions from the source directory
  2. Executes containers without forcing process UIDs to match the mount ownership - an intentional security measure

For example, consider an NFS directory with these server-side attributes:

ls -l /share/nfs -d

id nfsnobody
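
On a server set up as in this example, the output of these two commands would look something like the following (size and timestamp are illustrative):

drwxrws---. 3 nfsnobody 5555 4096 Jan 10 12:00 /share/nfs

uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)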

The container must then either run with a UID of 65534 (the nfsnobody owner) or with 5555 in its supplemental groups in order to access the directory.

NOTE

The owner ID of 65534 is used only as an example. Even though NFS's root_squash maps root (UID 0) to nfsnobody (UID 65534), NFS exports can have arbitrary owner IDs; owner 65534 is not required for NFS exports.

Group IDs

When modifying permissions on the NFS export is not feasible, the recommended approach for managing access is through supplemental groups.

Supplemental groups in Alauda Container Platform are a common mechanism for controlling access to shared file storage, such as NFS.

Contrast with Block Storage: Access to block storage volumes (e.g., iSCSI) is typically managed by setting the fsGroup value within the pod's securityContext. This approach leverages filesystem group ownership change upon mount.
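
For comparison, a minimal sketch of the block-storage pattern, reusing the example group ID 5555:

spec:
  securityContext:
    fsGroup: 5555
  containers:
  - name: app
    ...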

NOTE

To gain access to persistent storage, it is generally preferable to use supplemental group IDs versus user IDs.

Because the group ID on the example target NFS directory is 5555, the pod can define that group ID using supplementalGroups under the securityContext definition of the pod. For example:

spec:
  containers:
    - name:
    ...
  securityContext:
    supplementalGroups: [5555] 
  • securityContext: Must be defined at the pod level, not under a specific container.
  • supplementalGroups: An array of GIDs applied to the pod. Here the array has a single element; additional GIDs would be comma-separated inside the brackets.

User IDs

User IDs can be defined in the container image or in the Pod definition.

NOTE

It is generally preferable to use supplemental group IDs to gain access to persistent storage versus using user IDs.

In the example target NFS directory shown above, the container needs its UID set to 65534, ignoring group IDs for the moment, so the following can be added to the Pod definition:

spec:
  containers:
  - name:
    ...
    securityContext:
      runAsUser: 65534
  • securityContext: Pods can define a securityContext for each container as well as a pod-level securityContext that applies to all containers in the pod; here it is set on the container.
  • runAsUser: 65534 is the nfsnobody user.

Export settings

To enable arbitrary container users to read and write the volume, each exported volume on the NFS server should conform to the following conditions:

  • Every export must be exported using the following format:

    # replace 10.0.0.0/24 with trusted CIDRs/hosts
    /<example_fs> 10.0.0.0/24(rw,sync,root_squash,no_subtree_check)
  • The firewall must be configured to allow traffic to the mount point.

    • For NFSv4, configure the default port 2049 (nfs).
      iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT
    • For NFSv3, there are three ports to configure: 2049 (nfs), 20048 (mountd), and 111 (portmapper).
      iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT
      iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT
      iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT
  • The NFS export and directory must be set up so that they are accessible by the target pods. Either set the export to be owned by the container's primary UID, or grant the pod group access using supplementalGroups, as shown in the Group IDs section above (a short sketch follows this list).
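
If group-based access is chosen, a minimal sketch of preparing the export directory, using the group ID 5555 from the earlier example (the path is a placeholder):

# give the shared group ownership of the export and set the setgid bit so new files inherit the group
chgrp 5555 /<example_fs>
chmod 2770 /<example_fs>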

Reclaiming resources

NFS implements the Alauda Container Platform Recyclable plugin interface. Automatic processes handle reclamation tasks based on policies set on each persistent volume.

By default, PVs are set to Retain.
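
The policy on an existing PV can be inspected or changed with kubectl, for example for the PV created earlier:

kubectl get pv pv-nfs-example -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
kubectl patch pv pv-nfs-example -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'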

Once the claim to a PV is released (that is, the PVC is deleted), the PV object should not be reused. Instead, a new PV should be created with the same basic volume details as the original.

For example, the administrator creates a PV named nfs1:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs1
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.1
    path: "/"

The user creates PVC1, which binds to nfs1. The user then deletes PVC1, releasing claim to nfs1. This results in nfs1 being Released. If the administrator wants to make the same NFS share available, they should create a new PV with the same NFS server details, but a different PV name:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs2
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.1
    path: "/"

Deleting the original PV and re-creating it with the same name is discouraged. Attempting to manually change the status of a PV from Released to Available causes errors and potential data loss.
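
A minimal command sequence for the example above, assuming the nfs2 manifest has been saved as nfs2.yaml:

kubectl delete pv nfs1
kubectl create -f nfs2.yaml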