The global cluster will access nodes through this proxy service. For production environments, a load balancer is required for cluster control plane nodes to ensure high availability.
You can provide your own hardware load balancer or enable Self-built VIP, which provides software load balancing using haproxy + keepalived. We recommend using a hardware load balancer where possible.
When using your own hardware load balancer, you can use the load balancer's VIP as the IP Address / Domain parameter. If you have a domain name that resolves to the load balancer's VIP, you can use that domain as the IP Address / Domain parameter instead.
Note:

- The load balancer must be able to reach ports 6443, 11780, and 11781 on all control plane nodes in the cluster.
- If you enter a single node's address as the IP Address / Domain parameter, the cluster cannot be scaled from a single node to a highly available multi-node setup later. Therefore, we recommend providing a load balancer even for single-node clusters.

When enabling Self-built VIP, you need to prepare an unused IP address on the control plane nodes' network to serve as the VIP.
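Either way, you can sanity-check the port requirements before creating the cluster. A minimal sketch, assuming placeholder control plane IPs; run it from the host that will carry the load balancer or VIP:

```shell
# Check that the required control plane ports are reachable.
# Node IPs below are placeholders - replace with your control plane nodes.
for node in 192.168.1.11 192.168.1.12 192.168.1.13; do
  for port in 6443 11780 11781; do
    if nc -z -w 3 "$node" "$port"; then
      echo "OK   $node:$port"
    else
      echo "FAIL $node:$port"
    fi
  done
done
```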
The platform requires mutual access between the global cluster and workload clusters. If they're not on the same network, you need to:

- Configure External Access for the workload cluster so that the global cluster can access it. The network must allow global to reach ports 6443, 11780, and 11781 on all of the workload cluster's control plane nodes.
- Provide a public access address for global that the workload cluster can access. When creating a workload cluster, add this address to the cluster's annotations with the key cpaas.io/platform-url and the value set to the public access address of global.
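As an illustration, the annotation is a plain key-value pair; the address below is a placeholder, and the reachability check is a sketch assuming global is exposed over HTTPS at that address:

```shell
# Annotation to enter in the Cluster Annotations field when creating the
# workload cluster (the address is a placeholder):
#   key:   cpaas.io/platform-url
#   value: https://global.example.com
#
# From a workload cluster node, verify the address is reachable first:
curl -ks -o /dev/null --connect-timeout 5 https://global.example.com \
  && echo "global is reachable" \
  || echo "global is NOT reachable"
```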
Cluster images support Platform Built-in, Private Repository, and Public Repository options. With Platform Built-in, images are pulled from the built-in registry of the global cluster; if the cluster cannot access global, see Add External Address for Built-in Registry.

If you plan to use Kube-OVN's Underlay for your cluster, refer to Preparing Kube-OVN Underlay Physical Network.
Enter the Platform Management view, and click Clusters/Clusters in the left navigation bar.
Click Create Cluster.
Configure the following sections according to the instructions below: Basic Info, Container Network, Node Settings, and Extended Parameters.
Parameter | Description |
---|---|
Kubernetes Version | All optional versions are rigorously tested for stability and compatibility. Recommendation: Choose the latest version for optimal features and support. |
Container Runtime | Containerd is provided as the default container runtime. If you prefer using Docker as the container runtime, please refer to Choosing a Container Runtime. |
Cluster Network Protocol | Supports three modes: IPv4 single stack, IPv6 single stack, IPv4/IPv6 dual stack. Note: If you select dual stack mode, ensure all nodes have correctly configured IPv6 addresses; the network protocol cannot be changed after setting. |
Cluster Endpoint | Enter the IP Address / Domain used to access the cluster, typically the load balancer's VIP or a domain that resolves to it (see the load balancer requirements above), or enable Self-built VIP. |
Kube-OVN is an enterprise-grade Cloud Native Kubernetes container network orchestration system developed by Alauda. It brings mature networking capabilities from the OpenStack domain to Kubernetes, supporting cross-cloud network management, traditional network architecture and infrastructure interconnection, and edge cluster deployment scenarios, while greatly enhancing Kubernetes container network security, management efficiency, and performance.
Parameter | Description |
---|---|
Subnet | Also known as Cluster CIDR, represents the default subnet segment. After cluster creation, additional subnets can be added. |
Transmit Mode | Overlay: A virtual network abstracted over the infrastructure that doesn't consume physical network resources. When creating an Overlay default subnet, all Overlay subnets in the cluster use the same cluster NIC and node NIC configuration. Underlay: Pods attach directly to the physical network; this requires the physical network to be prepared in advance (see Preparing Kube-OVN Underlay Physical Network). |
Service CIDR | IP address range used by Kubernetes Services of type ClusterIP. Cannot overlap with the default subnet range. |
Join CIDR | In Overlay transmission mode, this is the IP address range used for communication between nodes and pods. Cannot overlap with the default subnet or Service CIDR. |
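For example, the following address plan keeps the three ranges disjoint (illustrative values only; adapt them to your own network plan):

```shell
# Illustrative, non-overlapping address plan for an Overlay cluster:
CLUSTER_CIDR="10.3.0.0/16"    # Subnet (default pod subnet)
SERVICE_CIDR="10.4.0.0/16"    # Service CIDR (ClusterIP range)
JOIN_CIDR="100.64.0.0/16"     # Join CIDR (node-to-pod traffic, Overlay mode)
```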
Parameter | Description |
---|---|
Network Interface Card | The name of the host network interface device used by the cluster network plugin. |
Node Name | You can choose to use either the node IP or hostname as the node name on the platform. Note: When choosing to use hostname as the node name, ensure that the hostnames of nodes added to the cluster are unique. |
Nodes | Add nodes to the cluster, or use Restore from draft to recover temporarily saved node information. See the detailed parameter descriptions for adding nodes below. |
Monitoring Type | Supports Prometheus and VictoriaMetrics. - Deploy VictoriaMetrics Agent: Only deploys the metrics collection component, VMAgent. When using this deployment method, you need to associate it with a VictoriaMetrics instance already deployed on another cluster in the platform to provide monitoring services for the cluster. |
Monitoring Nodes | Select nodes for deploying cluster monitoring components. Supports selecting compute nodes and control plane nodes that allow application deployment. To avoid affecting cluster performance, it's recommended to prioritize compute nodes. After the cluster is successfully created, monitoring components with storage type Local Volume will be deployed on the selected nodes. |
Node Addition Parameters
Parameter | Description |
---|---|
Type | Control Plane Node: Responsible for running components such as kube-apiserver, kube-scheduler, kube-controller-manager, etcd, container network, and some platform management components in the cluster. When Application Deployable is enabled, control plane nodes can also be used as compute nodes. Worker Node: Responsible for hosting business pods running on the cluster. |
IPv4 Address | The IPv4 address of the node. For clusters created in internal network mode, enter the node's private IP. |
IPv6 Address | Valid when the cluster has IPv4/IPv6 dual stack enabled. The IPv6 address of the node. |
Application Deployable | Valid when Node Type is Control Plane Node. Whether to allow business applications to be deployed on this control plane node, scheduling business-related pods to this node. |
Display Name | The display name of the node. |
SSH Connection IP | The IP address used to connect to the node via the SSH service. If you can log in to the node directly using its IPv4 address, this parameter can be left empty. |
Network Interface Card | Enter the name of the network interface used by the node. The priority of network interface configuration effectiveness is as follows (from left to right, in descending order): Kube-OVN Underlay: Node NIC > Cluster NIC Kube-OVN Overlay: Node NIC > Cluster NIC > NIC corresponding to the node's default route Calico: Cluster NIC > NIC corresponding to the node's default route Flannel: Cluster NIC > NIC corresponding to the node's default route |
Associated Bridge Network | Note: When creating a cluster, bridge network configuration is not supported; this option is only available when adding nodes to a cluster that already has Underlay subnets created. Select an existing bridge network (see Add Bridge Network). If you don't want to use the bridge network's default NIC, you can configure the node NIC separately. |
SSH Port | SSH service port number, e.g., 22. |
SSH Username | SSH username, which needs to be a user with root privileges, e.g., root. |
Proxy | Whether to access the node's SSH port through a proxy. When the global cluster cannot directly reach the node's network, configure a proxy to establish the SSH connection. Note: Currently, only SOCKS5 proxy is supported. Access URL: the proxy server address, e.g., 192.168.1.1:1080. Username: the username for accessing the proxy server. Password: the password for accessing the proxy server. |
SSH Authentication | Authentication method and corresponding authentication information for logging into the added node. Options include: Password: Requires a username with root privileges and the corresponding SSH password. Key: Requires the private key of a user with root privileges and the key's passphrase, if one is set. |
Save Draft | Saves the currently configured data in the dialog as a draft and closes the Add Node dialog. Without leaving the Create Cluster page, you can select Restore from draft to open the Add Node dialog and restore the configuration data saved as a draft. Note: The data restored from the draft is the most recently saved draft data. |
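Before adding a node, you can verify that the SSH parameters you plan to enter actually work. A minimal sketch; the IP, port, user, and proxy address are placeholders, and the proxied variant assumes OpenBSD netcat (`nc -X`) is available:

```shell
# Direct SSH check with the same parameters as the Add Node dialog:
ssh -p 22 root@192.168.1.21 'echo connected'

# If the node is only reachable through a SOCKS5 proxy:
ssh -p 22 -o ProxyCommand='nc -X 5 -x proxy.example.com:1080 %h %p' \
  root@192.168.1.21 'echo connected via SOCKS5 proxy'
```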
Note:
Apart from required configurations, it's not recommended to set extended parameters, as incorrect settings may make the cluster unavailable and cannot be modified after cluster creation.
If an entered Key duplicates a default parameter Key, it will override the default configuration.
Procedure
Parameter | Description |
---|---|
Docker Parameters | Custom startup parameters for the Docker daemon, entered as key-value pairs. |
Kubelet Parameters | Custom startup parameters for kubelet, entered as key-value pairs. Note: When the Container Network's Node IP Count parameter is entered, a default Kubelet Parameter limiting the maximum number of pods per node is generated automatically; adding a parameter with the same key will override this default. |
Controller Manager Parameters | Custom startup parameters for kube-controller-manager, entered as key-value pairs. |
Scheduler Parameters | Custom startup parameters for kube-scheduler, entered as key-value pairs. |
APIServer Parameters | Custom startup parameters for kube-apiserver, entered as key-value pairs. |
APIServer URL | The address used to access the cluster's kube-apiserver. |
Cluster Annotations | Cluster annotation information, marking cluster characteristics in metadata in the form of key-value pairs for platform components or business components to obtain relevant information. |
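For illustration only (per the note above, avoid overrides you don't need): each extended parameter is a key-value pair that is passed to the corresponding component as a startup flag. A hypothetical Kubelet Parameters entry:

```shell
# Entering the key/value pair
#   key:   eviction-hard
#   value: memory.available<500Mi
# in Kubelet Parameters corresponds to the kubelet startup flag:
#   --eviction-hard=memory.available<500Mi
```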
On the cluster list page, you can view the list of created clusters. For clusters in the Creating state, you can check the execution progress.
Procedure
Click the View Execution Progress icon to the right of the cluster status.
In the execution progress dialog that appears, you can view the cluster's execution progress (status.conditions).
Tip: When a certain type is in progress or in a failed state with a reason, hover your cursor over the corresponding reason (shown in blue text) to view detailed information about the reason (status.conditions.reason).
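Equivalently, conditions can be inspected from the CLI; a sketch, assuming access to the global cluster and a Cluster custom resource (the exact resource name and group vary by platform version):

```shell
# List condition type / status / reason for a cluster (resource name is an
# assumption - adjust to your platform's Cluster resource):
kubectl get cluster <cluster-name> \
  -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'
```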
After the cluster is created, you can add it to projects in the project management view.