Quick Start
This document guides you through creating a new Kubernetes cluster using the Hosted Control Plane architecture.
Prerequisites
Before creating a hosted control plane cluster, ensure the following requirements are met:
Management Cluster Ready
Choose an Alauda Container Platform workload cluster as the management cluster. The management cluster must be ready before creating a new HCP cluster, as the hosted control plane components will run within it.
If you choose a workload cluster as the management cluster, you can verify its readiness from the global cluster's CLI console before proceeding.
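One generic way to confirm readiness, using plain kubectl rather than any platform-specific tooling (an assumption about what to check), is to make sure the chosen cluster's nodes and system pods are healthy:

```bash
# Generic readiness check; assumes your kubeconfig currently points at the
# workload cluster you intend to use as the management cluster.
kubectl get nodes
kubectl get pods -n kube-system
```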
Environment Requirements
The management cluster must have the following cluster plugins installed:
- Alauda Container Platform Kubeadm Provider
- Alauda Container Platform Hosted Control Plane
- Alauda Container Platform SSH Infrastructure Provider
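One way to confirm these plugins are installed is to check for the custom resource definitions they register; the pattern below is an assumption and may need adjusting to the CRD names your plugin versions actually use:

```bash
# Rough check for provider CRDs on the management cluster.
kubectl get crds | grep -Ei 'kamaji|ssh|kubeadm'
```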
LoadBalancer Requirements
The management cluster must support the LoadBalancer Service type, for example through MetalLB (provided by Alauda Container Platform) or a cloud-based load balancer.
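If MetalLB provides this capability, a quick sanity check (assuming MetalLB runs in its default metallb-system namespace) is:

```bash
# Confirm MetalLB components are running on the management cluster.
kubectl get pods -n metallb-system
```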
Worker Host Ready
Worker hosts must be prepared before creating a new cluster. Worker hosts can be physical or virtual machines.
For detailed worker host requirements, refer to the worker host requirements documentation.
Etcd Cluster Ready
Each HCP cluster requires an etcd cluster as the storage backend for kube-apiserver. You need to deploy the etcd cluster before creating a new HCP cluster. If you haven't prepared an etcd cluster yet, refer to the document Deploy Etcd Cluster for etcd cluster deployment instructions.
Process Overview
Creating a hosted control plane cluster involves the following main steps:
- Declare Backend Storage - Create a DataStore resource to configure the etcd backend
- Create Hosted Control Plane - Deploy the control plane resources
- Add Worker Nodes - Deploy worker node resources
- Access from Alauda Container Platform - Integrate with ACP management console
The following sections provide detailed instructions for each step.
Step 1: Declare Backend Storage
Create a DataStore resource that declares the storage backend for kube-apiserver. A single DataStore resource can be shared across multiple HCP clusters.
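The sketch below follows the upstream Kamaji DataStore schema and assumes a TLS-enabled etcd cluster whose CA and client certificates are stored in a Secret (here named etcd-client-certs, created as shown later in this step); field names and required keys may differ slightly in your platform version, so verify them against the installed CRD.

```yaml
# Minimal Kamaji-style DataStore sketch. Values in {BRACES}, the Secret name,
# and the key paths are placeholders; adjust them to your etcd deployment.
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
  name: {DATASTORE_NAME}
spec:
  driver: etcd
  endpoints:
    - {ETCD_ENDPOINT_1}:2379
    - {ETCD_ENDPOINT_2}:2379
    - {ETCD_ENDPOINT_3}:2379
  tlsConfig:
    certificateAuthority:
      certificate:
        secretReference:
          name: etcd-client-certs      # Secret holding the etcd CA and client material
          namespace: cpaas-system
          keyPath: ca.crt
      privateKey:                      # include if Kamaji manages per-tenant etcd users
        secretReference:
          name: etcd-client-certs
          namespace: cpaas-system
          keyPath: ca.key
    clientCertificate:
      certificate:
        secretReference:
          name: etcd-client-certs
          namespace: cpaas-system
          keyPath: tls.crt
      privateKey:
        secretReference:
          name: etcd-client-certs
          namespace: cpaas-system
          keyPath: tls.key
```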
DataStore Parameters
Important Notes:
- All certificate and key files must be stored in Kubernetes Secrets before creating the DataStore resource (see the example after this list).
- The etcd cluster must be accessible from the management cluster where the hosted control planes run.
- Using TLS authentication is strongly recommended for production environments.
- A single DataStore can serve multiple HCP clusters, enabling efficient resource sharing.
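For example, the Secret referenced by the DataStore sketch above can be created from existing certificate files; the Secret name, key names, and file paths are illustrative:

```bash
# Store the etcd CA and client certificate material in a Secret on the
# management cluster (names and paths are examples; the CA key is only needed
# if Kamaji manages per-tenant etcd users).
kubectl create secret generic etcd-client-certs \
  --namespace cpaas-system \
  --from-file=ca.crt=/path/to/etcd/pki/ca.crt \
  --from-file=ca.key=/path/to/etcd/pki/ca.key \
  --from-file=tls.crt=/path/to/etcd/pki/client.crt \
  --from-file=tls.key=/path/to/etcd/pki/client.key
```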
Step 2: Create Hosted Control Plane
The control plane deployment requires creating the following Kubernetes resources in sequence.
Best Practice: For centralized management and operational consistency, it is recommended to deploy all hosted control plane resources within the cpaas-system namespace.
2.1 Create Cluster Resource
Create a Cluster API Cluster resource to declare the cluster and configure its network CIDRs.
Important: The podCIDR and serviceCIDR must not overlap with the management cluster's network ranges.
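A minimal sketch using the standard Cluster API schema is shown below; the controlPlaneRef and infrastructureRef assume that the KamajiControlPlane and SSHCluster created later in this guide reuse the cluster name, and their API versions are assumptions that should be checked against the CRDs installed on your management cluster.

```yaml
# Cluster API Cluster sketch. The referenced API versions are assumptions;
# verify them against the installed provider CRDs.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: {CLUSTER_NAME}
  namespace: {NAMESPACE}
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - {POD_CIDR}
    services:
      cidrBlocks:
        - {SERVICE_CIDR}
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
    kind: KamajiControlPlane
    name: {CLUSTER_NAME}
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1   # assumption
    kind: SSHCluster
    name: {CLUSTER_NAME}
```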
Parameter Description:
- {CLUSTER_NAME}: Name of your cluster (e.g., my-hcp-cluster)
- {NAMESPACE}: Namespace for cluster resources (typically cpaas-system)
- {POD_CIDR}: Pod network CIDR (e.g., 10.7.0.0/16)
- {SERVICE_CIDR}: Service network CIDR (e.g., 10.8.0.0/16)
2.2 Create Registry Credentials Secret (Optional)
Create a Secret for container registry authentication. Skip this step if your registry doesn't require authentication.
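A sketch of such a Secret is shown below; the data key names (username and password) are assumptions, so match them to whatever keys the SSH infrastructure provider expects:

```yaml
# Registry credential Secret sketch; the data key names are assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: {REGISTRY_CREDENTIAL_NAME}
  namespace: {NAMESPACE}
type: Opaque
data:
  username: {BASE64_ENCODED_USERNAME}
  password: {BASE64_ENCODED_PASSWORD}
```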
Parameter Description:
- {REGISTRY_CREDENTIAL_NAME}: Name of the registry credential Secret (e.g., registry-credentials)
- {BASE64_ENCODED_USERNAME}: Base64-encoded registry username
- {BASE64_ENCODED_PASSWORD}: Base64-encoded registry password
2.3 Create SSHCluster Resource
Configure the SSH-based infrastructure cluster.
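The SSHCluster schema is defined by the Alauda Container Platform SSH Infrastructure Provider and is not reproduced here; the sketch below is purely illustrative, every field under spec is hypothetical, and the provider reference should be consulted for the authoritative schema.

```yaml
# Purely illustrative SSHCluster sketch: the spec fields below are hypothetical
# placeholders, not the provider's actual schema.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1   # assumption
kind: SSHCluster
metadata:
  name: {CLUSTER_NAME}
  namespace: {NAMESPACE}
spec:
  registry:                       # hypothetical field
    address: {REGISTRY_ADDRESS}
    auth:                         # omit if the registry needs no authentication
      secretRef:
        name: {REGISTRY_CREDENTIAL_NAME}
```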
Parameter Description:
- {REGISTRY_ADDRESS}: Container registry address (e.g., harbor.example.com)
- {REGISTRY_CREDENTIAL_NAME}: Name of the registry credential Secret (omit the auth section if no authentication is needed)
2.4 Create KamajiControlPlane Resource
Before creating the KamajiControlPlane resource, retrieve the CoreDNS and kube-proxy image tags from the management cluster:
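For example, assuming the management cluster runs CoreDNS as a Deployment and kube-proxy as a DaemonSet in the kube-system namespace (the kubeadm-style layout), the image references can be read with kubectl; the tag is the part after the final colon:

```bash
# Read the CoreDNS and kube-proxy image references from the management cluster.
kubectl -n kube-system get deployment coredns \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
kubectl -n kube-system get daemonset kube-proxy \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
```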
Create the KamajiControlPlane resource:
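A minimal sketch based on the upstream Kamaji control plane provider schema is shown below; the placement of replicas and the addon image fields, as well as the image repository paths, are assumptions that should be verified against the CRD installed on your management cluster.

```yaml
# KamajiControlPlane sketch (upstream Kamaji provider schema; verify field
# names and the addon image repositories against your installed CRD version).
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KamajiControlPlane
metadata:
  name: {CLUSTER_NAME}
  namespace: {NAMESPACE}
spec:
  dataStoreName: {DATASTORE_NAME}
  addons:
    coreDNS:
      imageRepository: {REGISTRY_ADDRESS}/coredns   # repository path is an assumption
      imageTag: {COREDNS_IMAGE_TAG}
    kubeProxy:
      imageRepository: {REGISTRY_ADDRESS}           # repository path is an assumption
      imageTag: {KUBE_PROXY_IMAGE_TAG}
    konnectivity: {}
  network:
    serviceType: LoadBalancer
  replicas: {CONTROL_PLANE_REPLICAS}
  version: {KUBERNETES_VERSION}
```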
Parameter Description:
- {DATASTORE_NAME}: Name of the DataStore resource created in Step 1
- {REGISTRY_ADDRESS}: Container registry address
- {COREDNS_IMAGE_TAG}: CoreDNS image tag (obtained from the command above)
- {KUBE_PROXY_IMAGE_TAG}: kube-proxy image tag (typically matches the Kubernetes version)
- {CONTROL_PLANE_REPLICAS}: Number of control plane replicas (recommended: 3 for high availability)
- {KUBERNETES_VERSION}: Kubernetes version (e.g., v1.32.7; check the global cluster for supported versions)
2.5 Verify Control Plane Status
After deploying the control plane resources, verify that all components are running correctly:
Check KamajiControlPlane Status
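Assuming standard kubectl access to the management cluster, list the resource directly:

```bash
kubectl get kamajicontrolplane -n {NAMESPACE}
```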
The KamajiControlPlane should report Ready and Initialized as true once the control plane is up.
Check TenantControlPlane Status
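Kamaji backs each hosted control plane with a TenantControlPlane resource; list it with:

```bash
kubectl get tenantcontrolplane -n {NAMESPACE}
```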
The TenantControlPlane should show a Ready status and the expected Kubernetes version.
Check Control Plane Pods
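The control plane pods are typically named after the cluster; adjust the filter if your naming differs:

```bash
kubectl get pods -n {NAMESPACE} | grep {CLUSTER_NAME}
```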
All control plane pods should be in the Running state.
If all three checks show healthy status, the control plane deployment was successful.
Step 3: Add Worker Nodes
After the control plane is running, add worker nodes to the cluster.
3.1 Create SSHHost and Credentials
Create SSH credentials and host definitions for each worker node:
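The credential Secret below uses plain username/password data keys (the key names are assumptions), and the SSHHost sketch is purely illustrative because its schema is defined by the SSH Infrastructure Provider; treat every spec field name as hypothetical and check the provider reference.

```yaml
# SSH credential Secret sketch; the data key names are assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: {HOST_CREDENTIAL_NAME}
  namespace: {NAMESPACE}
type: Opaque
data:
  username: {BASE64_ENCODED_SSH_USERNAME}
  password: {BASE64_ENCODED_SSH_PASSWORD}
---
# Purely illustrative SSHHost sketch: the spec fields below are hypothetical
# placeholders, not the provider's actual schema.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1   # assumption
kind: SSHHost
metadata:
  name: {HOST_NAME}
  namespace: {NAMESPACE}
spec:
  address: {HOST_IP_ADDRESS}        # hypothetical field
  port: {SSH_PORT}                  # hypothetical field
  credentialsRef:                   # hypothetical field
    name: {HOST_CREDENTIAL_NAME}
  reuse: {REUSE_HOST}               # hypothetical field
```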
Parameter Description:
- {HOST_CREDENTIAL_NAME}: Name of the SSH credential Secret (e.g., worker-node-credentials)
- {BASE64_ENCODED_SSH_USERNAME}: Base64-encoded SSH username
- {BASE64_ENCODED_SSH_PASSWORD}: Base64-encoded SSH password
- {HOST_NAME}: Name for this worker host (e.g., worker-node-1)
- {HOST_IP_ADDRESS}: IP address of the worker host (e.g., 192.168.143.64)
- {SSH_PORT}: SSH port (default: 22)
- {REUSE_HOST}: Whether to reuse the host after Machine deletion (true or false). If true, the host will be cleaned and reused when the corresponding Machine resource is deleted.
3.2 Create SSHMachineTemplate
Define the machine template specifying the container runtime configuration.
Note: Currently only containerd is supported. The plugin includes containerd version 1.7.27-4 by default. If using a different version, ensure the corresponding image exists in your registry.
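The sketch below keeps only the spec.template.spec nesting that Cluster API machine templates share; the container runtime fields are hypothetical and must be checked against the provider's actual SSHMachineTemplate schema.

```yaml
# Illustrative SSHMachineTemplate sketch: only the template nesting follows the
# Cluster API convention; the containerRuntime fields are hypothetical.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1   # assumption
kind: SSHMachineTemplate
metadata:
  name: {MACHINE_TEMPLATE_NAME}
  namespace: {NAMESPACE}
spec:
  template:
    spec:
      containerRuntime:             # hypothetical field
        type: containerd
        version: {CONTAINERD_VERSION}
```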
Parameter Description:
- {MACHINE_TEMPLATE_NAME}: Name of the machine template (e.g., worker-template)
- {CONTAINERD_VERSION}: Container runtime version (e.g., 1.7.27-4)
3.3 Create KubeadmConfigTemplate
Configure kubelet settings. If you don't have special requirements, use the default configuration.
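A minimal sketch using the standard Cluster API bootstrap schema is shown below; the kubelet argument is only an example of where such settings go and can be dropped if the defaults are sufficient.

```yaml
# Minimal KubeadmConfigTemplate: joinConfiguration is where worker kubelet
# settings live; the extra arg shown is an example only.
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: {CONFIG_TEMPLATE_NAME}
  namespace: {NAMESPACE}
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            cgroup-driver: systemd   # example kubelet setting
```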
Parameter Description:
- {CONFIG_TEMPLATE_NAME}: Name of the config template (e.g., worker-config-template)
3.4 Create MachineDeployment
Deploy worker nodes using a MachineDeployment:
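A sketch following the standard Cluster API MachineDeployment schema is shown below; the infrastructureRef API version is an assumption and should match the SSHMachineTemplate CRD installed on your management cluster.

```yaml
# Cluster API MachineDeployment sketch; verify the infrastructureRef apiVersion
# against the installed SSHMachineTemplate CRD.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: {MACHINE_DEPLOYMENT_NAME}
  namespace: {NAMESPACE}
spec:
  clusterName: {CLUSTER_NAME}
  replicas: {WORKER_NODE_REPLICAS}
  selector:
    matchLabels: null
  template:
    spec:
      clusterName: {CLUSTER_NAME}
      version: {KUBERNETES_VERSION}
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: {CONFIG_TEMPLATE_NAME}
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1   # assumption
        kind: SSHMachineTemplate
        name: {MACHINE_TEMPLATE_NAME}
```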
Parameter Description:
- {MACHINE_DEPLOYMENT_NAME}: Name of the machine deployment (e.g., worker-deployment)
- {CLUSTER_NAME}: Name of your cluster (must match the Cluster resource name)
- {WORKER_NODE_REPLICAS}: Number of worker node replicas (must not exceed the number of available SSHHost resources)
- {CONFIG_TEMPLATE_NAME}: Name of the KubeadmConfigTemplate created above
- {MACHINE_TEMPLATE_NAME}: Name of the SSHMachineTemplate created above
- {KUBERNETES_VERSION}: Kubernetes version (e.g., v1.32.7; must match the control plane version)
3.5 Verify Worker Node Status
After deployment, check the MachineDeployment status:
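Assuming standard kubectl access to the management cluster:

```bash
kubectl get machinedeployment -n {NAMESPACE}
kubectl get machines -n {NAMESPACE}
```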
The MachineDeployment should report all requested replicas as ready and its phase as Running.
If the status shows as healthy, worker nodes have been successfully added to the cluster.
Step 4: Access from Alauda Container Platform
After the cluster is fully deployed, integrate it with Alauda Container Platform.
Retrieve Cluster Kubeconfig
On the management cluster, retrieve the new cluster's kubeconfig.
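Assuming the Cluster API convention of storing the kubeconfig in a Secret named {CLUSTER_NAME}-kubeconfig with the content under the value key, it can be extracted as follows; adjust the Secret name if your platform stores it differently (Kamaji, for example, also creates an admin kubeconfig Secret).

```bash
# Extract the kubeconfig from the Cluster API convention Secret.
kubectl get secret -n {NAMESPACE} {CLUSTER_NAME}-kubeconfig \
  -o jsonpath='{.data.value}' | base64 -d > {CLUSTER_NAME}.kubeconfig
```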
Import to ACP
Use the retrieved kubeconfig to import the cluster into Alauda Container Platform through the ACP console interface.
Troubleshooting
If you encounter issues during deployment, check the following:
- Control Plane Issues:
  - Verify DataStore connectivity to the etcd cluster
  - Check control plane pod logs: kubectl logs -n {NAMESPACE} {POD_NAME}
  - Ensure the LoadBalancer service has received an external IP
- Worker Node Issues:
  - Verify SSH connectivity to worker hosts
  - Check SSHHost status: kubectl get sshhost -n {NAMESPACE}
  - Review Machine provisioning logs: kubectl describe machine -n {NAMESPACE}
- Network Issues:
  - Ensure the Pod CIDR and Service CIDR don't conflict with the management cluster's ranges
  - Verify the network plugin (Calico) is running correctly
  - Check konnectivity agent connectivity
For additional support, consult the Alauda Container Platform documentation or contact support.