Alauda Container Platform clusters support persistent storage using NFS. Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) provide an abstraction layer for provisioning and consuming storage volumes within a project. While NFS configuration details can be embedded directly in a Pod definition, this approach does not create the volume as a distinct, isolated cluster resource, increasing the risk of conflicts.
To enforce disk quotas and size constraints, you can utilize disk partitions. Assign each partition as a dedicated export point, with each export corresponding to a distinct PersistentVolume (PV).
While Alauda Container Platform mandates unique PV names, it remains the administrator's responsibility to ensure the uniqueness of the NFS volume's server and path for each export.
This partitioned approach enables precise capacity management. Developers request persistent storage specifying a required amount (e.g., 10Gi), and ACP matches the request to a PV backed by a partition/export offering at least that specified capacity. Please note: The quota enforcement applies to the usable storage space within the assigned partition/export.
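For illustration, a claim for 10Gi might look like the following sketch (the claim name and access mode are placeholders); it binds to a PV whose declared capacity, taken from the backing partition/export, is at least 10Gi:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim                # placeholder claim name
spec:
  accessModes:
    - ReadWriteMany              # placeholder; choose the mode the workload needs
  resources:
    requests:
      storage: 10Gi              # satisfied by a PV backed by a partition/export of at least 10Gi
```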
This section details NFS volume security mechanisms with a focus on permission matching. Readers are assumed to possess fundamental knowledge of POSIX permissions, process UIDs, and supplemental groups.
Developers request NFS storage by referencing either a PVC by name or the NFS volume plugin directly in the volumes section of their Pod definition.
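As a sketch of those two options, the Pod below mounts the volume through a PVC, with the direct NFS form shown as a commented-out alternative (the Pod name, image, claim name, server address, and path are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-example-pod                         # placeholder name
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest    # placeholder image
      volumeMounts:
        - name: nfs-data
          mountPath: /data
  volumes:
    # Option 1: reference a PVC by name (keeps the volume a distinct cluster resource)
    - name: nfs-data
      persistentVolumeClaim:
        claimName: nfs-claim                    # placeholder claim name
    # Option 2 (alternative): embed the NFS details directly in the Pod definition
    # - name: nfs-data
    #   nfs:
    #     server: 192.168.1.100                 # placeholder NFS server address
    #     path: /exports/disk1                  # placeholder export path
```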
On the NFS server, the /etc/exports file defines export rules for accessible directories. Each exported directory retains its native POSIX owner/group IDs.
Key behavior of Alauda Container Platform's NFS plugin: it mounts the volume into the container with the same POSIX ownership and permissions found on the exported NFS directory, and it does not change the container's effective UID to match the owner of the NFS mount. Access therefore depends on the container's own UID and supplemental groups matching the export's permissions.
For example, consider an NFS directory with these server-side attributes:
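(A sketch of what this might look like on the NFS server; the export path is illustrative, while the owner UID 65534 and group ID 5555 are the values used throughout this example.)

```
$ ls -ld /exports/disk1
drwxrws---. 3 nfsnobody 5555 4096 Jan  1 12:00 /exports/disk1

$ id nfsnobody
uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)
```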
Then the container must either run with a UID of 65534, the nfsnobody owner, or with 5555 in its supplemental groups to access the directory.
Note: The owner ID of 65534 is used only as an example. Even though NFS's root_squash option maps root (UID 0) to nfsnobody (UID 65534), NFS exports can have arbitrary owner IDs; an owner ID of 65534 is not required for NFS exports.
Recommended NFS Access Management (When Export Permissions Are Fixed)

When modifying permissions on the NFS export is not feasible, the recommended approach for managing access is through supplemental groups.
Supplemental groups in Alauda Container Platform are a common mechanism for controlling access to shared file storage, such as NFS.
Contrast with block storage: access to block storage volumes (e.g., iSCSI) is typically managed by setting the fsGroup value within the Pod's securityContext; this approach relies on the volume's group ownership being changed to the fsGroup value at mount time.
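For contrast only, a minimal sketch of the block-storage pattern (the GID is an arbitrary example):

```yaml
spec:
  securityContext:
    fsGroup: 5555    # the volume's group ownership is changed to this GID when it is mounted
```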
To gain access to persistent storage, it is generally preferable to use supplemental group IDs rather than user IDs.
Because the group ID on the example target NFS directory is 5555, the pod can define that group ID using supplementalGroups under the securityContext definition of the pod. For example:
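(A minimal sketch; the Pod name, image, and claim name are placeholders, and only the supplementalGroups line is essential to the point being made.)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-supplemental-groups-pod             # placeholder name
spec:
  securityContext:
    supplementalGroups: [5555]                  # matches the group ID on the exported NFS directory
  containers:
    - name: app
      image: registry.example.com/app:latest    # placeholder image
      volumeMounts:
        - name: nfs-data
          mountPath: /data
  volumes:
    - name: nfs-data
      persistentVolumeClaim:
        claimName: nfs-claim                    # placeholder claim name
```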
User IDs can be defined in the container image or in the Pod definition.
As noted above, it is generally preferable to use supplemental group IDs, rather than user IDs, to gain access to persistent storage.
In the example target NFS directory shown above, the container needs its UID set to 65534, ignoring group IDs for the moment, so the following can be added to the Pod definition:
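(A minimal sketch that mirrors the previous example, setting the UID instead of a supplemental group; runAsUser can also be set per container.)

```yaml
spec:
  securityContext:
    runAsUser: 65534    # matches the owner UID (nfsnobody) of the exported directory
```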
To enable arbitrary container users to read and write the volume, each exported volume on the NFS server should conform to the following conditions:
Every export must be exported using a consistent format; an example /etc/exports entry is shown after this list.
The firewall must be configured to allow NFS traffic from the cluster nodes to the NFS server (typically TCP port 2049 for NFSv4; NFSv3 additionally uses the portmapper and mountd ports).
The NFS export and directory must be set up so that they are accessible by the target pods. Either set the export to be owned by the container's primary UID, or grant the pod group access using supplementalGroups, as shown in the supplemental groups example above.
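A sketch of the /etc/exports entry referenced in the first condition; the path is illustrative, and read/write access with root squashing is assumed to be the intended option set:

```
/exports/disk1 *(rw,root_squash)
```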
NFS implements the Alauda Container Platform Recyclable plugin interface. Automatic processes handle reclamation tasks based on policies set on each persistent volume.
By default, PVs are set to Retain.
Once the claim on a PV is released (that is, the PVC is deleted), the PV object should not be reused. Instead, a new PV should be created with the same basic volume details as the original.
For example, the administrator creates a PV named nfs1:
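(A sketch of such a PV; the capacity, server address, and export path are placeholders.)

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs1
spec:
  capacity:
    storage: 10Gi                # placeholder capacity
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.100        # placeholder NFS server address
    path: /exports/disk1         # placeholder export path
```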
The user creates PVC1, which binds to nfs1. The user then deletes PVC1, releasing the claim on nfs1, which results in nfs1 being Released. If the administrator wants to make the same NFS share available, they should create a new PV with the same NFS server details but a different PV name:
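(A sketch of the replacement PV; everything mirrors nfs1 except the name.)

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs2                     # new name; the volume details match nfs1
spec:
  capacity:
    storage: 10Gi                # placeholder capacity, same as nfs1
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.100        # same placeholder server as nfs1
    path: /exports/disk1         # same placeholder export path as nfs1
```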
Deleting the original PV and re-creating it with the same name is discouraged. Attempting to manually change the status of a PV from Released to Available causes errors and potential data loss.