Accessing Storage Services

Storage services support two methods of integration: first, integrating distributed storage resources deployed in other business clusters within the platform, which keeps storage and business workloads isolated for easier management and maintenance; second, connecting external Ceph storage resources for use as distributed storage.

Prerequisites

Prepare Storage

Choose one of the following:

  • Distributed storage has been deployed in another business cluster, and a storage pool has been created. Record the name of the storage pool for later use during integration.

  • External Ceph storage outside the platform (version ≥ 14.2.3) has been set up with a storage pool. Record the name of the storage pool for later use during integration (a pool-creation sketch follows this list).
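
If the storage pool does not exist yet, it can be created on the Ceph side. A minimal sketch (the pool names kube-rbd, cephfs_data, and cephfs_metadata, the file system name myfs, and the PG counts are illustrative assumptions, not platform requirements):

    # Block storage: create a pool and tag it for RBD use
    ceph osd pool create kube-rbd 128
    ceph osd pool application enable kube-rbd rbd

    # File storage: create data and metadata pools, then the file system
    ceph osd pool create cephfs_data 128
    ceph osd pool create cephfs_metadata 32
    ceph fs new myfs cephfs_metadata cephfs_data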

Open Ports

Destination IP      Destination Port               Source IP                                   Source Port
IP of Ceph nodes    3300, 6789, 6800-7300, 7480    IPs of all nodes in the business cluster    Any
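
If a host firewall is running on the Ceph nodes, these ports can be opened there. A sketch using firewalld (assumes firewalld is the firewall in use; adapt to your tooling):

    # Run on each Ceph node: open MON (3300, 6789), OSD (6800-7300), and RGW (7480) ports
    firewall-cmd --permanent --add-port=3300/tcp --add-port=6789/tcp \
      --add-port=6800-7300/tcp --add-port=7480/tcp
    firewall-cmd --reload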

Obtain Authentication Information (External Ceph)

If the prepared storage is external Ceph storage, authentication information must be obtained using the following commands.

  • FSID: ceph fsid
  • MON Component Information: ceph mon dump. The value must be in {name}={IP} format, e.g. a=192.168.100.100:6789.
  • Admin Key: ceph auth get-key client.admin
  • Storage Pool:
      • File storage: use the ceph fs ls command to get the name value.
      • Block storage: ceph osd dump | grep "application rbd" | awk '{print $3}'
  • Data Storage Pool (only needed for file storage): use the ceph fs ls command to get the data pools value.
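
For convenience, the commands above can be run in one pass on a Ceph node with admin access:

    ceph fsid                                    # FSID
    ceph mon dump                                # MON component information
    ceph auth get-key client.admin               # Admin key
    ceph fs ls                                   # File storage: name and data pools values
    ceph osd dump | grep "application rbd" | awk '{print $3}'    # Block storage pool name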

Procedure

Note: The following steps use external Ceph storage as an example; the procedure for integrating distributed storage from another business cluster is similar.

  1. In the left navigation bar, click Storage Management > Distributed Storage.

  2. Click Access Storage.

  3. On the Access Configuration wizard page, select External Ceph.

    • Snapshot: When enabled, you can create PVC snapshots and use them to provision new PVCs, allowing quick backup and restoration of business data.
      If snapshots were not enabled during storage access, you can still enable them later as needed in the Operations section of the storage cluster details page.
      Note: Make sure the volume snapshot plugin has been deployed in the current cluster before use (see the verification sketch after this procedure).
    • Network Configuration:
      • Host Network: Computing components in this cluster will access the storage cluster over the host network.
      • Container Network: Computing components in this cluster will access the storage cluster over the container network. You can create a subnet in network management and assign it to the rook-ceph namespace; if left empty, the default subnet is used.
    • Other Parameters: Fill in the authentication parameters for the external Ceph storage obtained in the prerequisites.
  4. On the Create Storage Class wizard page, complete the configuration and click Access.

    • Type: Based on the type of storage pool created above, the default storage class is:
      • File storage: CephFS File Storage
      • Block storage: CephRBD Block Storage
    • Reclaim Policy: Reclaim policy for persistent volumes.
      • Delete: When the persistent volume claim is deleted, the bound persistent volume is deleted as well.
      • Retain: The bound persistent volume is retained even after the persistent volume claim is deleted.
    • Project Allocation: Projects that can use this type of storage. If no project currently requires this type of storage, you can skip allocation for now and update it later.
  5. Wait approximately 1 to 5 minutes for the integration to complete; the sketch below shows how to verify the result.
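
After the procedure, the integration can be checked from the command line. A sketch, assuming kubectl access to the business cluster and that Ceph components run in the rook-ceph namespace mentioned in step 3 (resource names may differ by platform version):

    # The storage classes from the Create Storage Class step should exist
    kubectl get storageclass

    # The volume snapshot plugin's CRD should be present if Snapshot was enabled
    kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io

    # Storage components should be running
    kubectl -n rook-ceph get pods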

Follow-up Actions

  • The following storage classes are created: CephFS File Storage, CephRBD Block Storage

  • Developers can use the above storage classes to create persistent volume claims, and can additionally use volume snapshot and volume expansion features (a usage sketch follows this list).
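
For illustration, a minimal sketch of creating a persistent volume claim against one of these storage classes (the storage class name cephrbd-block, the PVC name demo-pvc, and the requested size are assumptions; substitute the actual names from your cluster):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: demo-pvc                      # hypothetical PVC name
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: cephrbd-block     # hypothetical; use the class created in step 4
      resources:
        requests:
          storage: 10Gi
    EOF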

Note: To maintain storage pools, storage device configuration, and other settings for external storage, perform those operations in the management platform of the external storage cluster.