#Creating an On-Premise Cluster

#Prerequisites

#Node Requirements

  1. If you downloaded a single-architecture installation package from Download Installation Package, ensure your node machines use the same CPU architecture as the package. Otherwise, nodes will fail to start because architecture-specific images are missing.
  2. Verify that your node operating system and kernel are supported. See Supported OS and Kernels for details; a quick local check sketch follows this list.
  3. Perform availability checks on the node machines. For the specific check items, refer to Node Preprocessing > Node Checks.
  4. If node machine IPs cannot be accessed directly via SSH, provide a SOCKS5 proxy for the nodes. The global cluster will access the nodes through this proxy service.
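
To confirm the architecture and OS/kernel requirements before installation, you can run a small script on each candidate node and compare its output against the installation package architecture and the Supported OS and Kernels list. This is only a minimal sketch using the Python standard library:

```python
import platform

def node_report() -> dict:
    """Collect the node facts needed for the pre-install checks."""
    info = {
        "architecture": platform.machine(),   # e.g. x86_64 or aarch64
        "kernel": platform.release(),         # e.g. 5.14.0-503.el9.x86_64
    }
    # /etc/os-release is present on all mainstream Linux distributions.
    try:
        with open("/etc/os-release") as f:
            os_release = dict(
                line.rstrip("\n").split("=", 1)
                for line in f
                if "=" in line
            )
        info["os"] = os_release.get("PRETTY_NAME", "unknown").strip('"')
    except FileNotFoundError:
        info["os"] = "unknown"
    return info

if __name__ == "__main__":
    for key, value in node_report().items():
        print(f"{key}: {value}")
```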

#Load Balancing

For production environments, a load balancer is required for cluster control plane nodes to ensure high availability. You can provide your own hardware load balancer or enable Self-built VIP, which provides software load balancing using haproxy + keepalived. We recommend using a hardware load balancer because:

  • Better Performance: Hardware load balancing performs better than software load balancing.
  • Lower Complexity: If you're unfamiliar with keepalived, misconfigurations could make the cluster unavailable, leading to lengthy troubleshooting and seriously affecting cluster reliability.

When using your own hardware load balancer, you can use the load balancer's VIP as the IP Address / Domain parameter. If you have a domain name that resolves to the load balancer's VIP, you can use that domain as the IP Address / Domain parameter. Note:

  • The load balancer must correctly forward traffic to ports 6443, 11780, and 11781 on all control plane nodes in the cluster (a reachability check sketch appears at the end of this section).
  • If your cluster has only one control plane node and you use that node's IP as the IP Address / Domain parameter, the cluster cannot later be scaled from a single node to a highly available multi-node setup. We therefore recommend providing a load balancer even for single-node clusters.

When enabling Self-built VIP, you need to prepare:

  1. An available VRID
  2. A host network that supports the VRRP protocol
  3. All control plane nodes and the VIP must be on the same subnet, and the VIP must be different from any node IP.
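
Whether you use a hardware load balancer or Self-built VIP, the address entered as IP Address / Domain must ultimately reach ports 6443, 11780, and 11781 on the control plane nodes. The following is a minimal sketch that checks TCP reachability of those ports; the endpoint address and node IPs are placeholders to replace with your own:

```python
import socket

# Placeholder values: replace with your LB VIP / domain and control plane node IPs.
ENDPOINT = "192.168.0.100"
CONTROL_PLANE_NODES = ["192.168.0.11", "192.168.0.12", "192.168.0.13"]
REQUIRED_PORTS = [6443, 11780, 11781]

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in [ENDPOINT, *CONTROL_PLANE_NODES]:
    for port in REQUIRED_PORTS:
        status = "ok" if port_open(host, port) else "UNREACHABLE"
        print(f"{host}:{port} {status}")
```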

#Connecting global Cluster and Workload Cluster

The platform requires mutual access between the global cluster and workload clusters. If they're not on the same network, you need to:

  1. Provide External Access for the workload cluster so that the global cluster can reach it. The network must allow the global cluster to access ports 6443, 11780, and 11781 on all of the workload cluster's control plane nodes.
  2. Add an additional address to the global cluster that the workload cluster can access. When creating the workload cluster, add this address to the cluster's annotations with the key cpaas.io/platform-url and the value set to the public access address of the global cluster (see the annotation sketch below).
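
For API-driven workflows, the same key/value pair can in principle be applied to the cluster resource as a merge patch. The following is only an illustrative sketch using the Python kubernetes client: the group, version, plural, cluster name, and URL are hypothetical placeholders, not the platform's actual resource identifiers, and must be replaced with the values used by your installation.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config()

# Hypothetical identifiers: replace with the actual group/version/plural
# of the platform's cluster resource and your workload cluster's name.
GROUP = "example.cpaas.io"
VERSION = "v1"
PLURAL = "clusters"
CLUSTER_NAME = "my-workload-cluster"

patch = {
    "metadata": {
        "annotations": {
            # Public access address of the global cluster (example value).
            "cpaas.io/platform-url": "https://global.example.com"
        }
    }
}

api = client.CustomObjectsApi()
api.patch_cluster_custom_object(GROUP, VERSION, PLURAL, CLUSTER_NAME, patch)
```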

#Image Registry

The cluster's image registry can use one of three options: Platform Built-in, Private Repository, or Public Repository.

  • Platform Built-in: Uses the image registry provided by the global cluster. If the cluster cannot access the global cluster, see Add External Address for Built-in Registry.
  • Private Repository: Uses your own image registry. For details on pushing the required images to your registry, contact technical support.
  • Public Repository: Uses the platform's public image registry. Before using it, complete Updating Public Repository Credentials.

#Container Networking

If you plan to use Kube-OVN's Underlay for your cluster, refer to Preparing Kube-OVN Underlay Physical Network.

#Creation Procedure

  1. Enter the Administrator view, and click Clusters/Clusters in the left navigation bar.

  2. Click Create Cluster.

  3. Configure the following sections according to the instructions below: Basic Info, Container Network, Node Settings, and Extended Parameters.

#Basic Info

  • Kubernetes Version: All optional versions are rigorously tested for stability and compatibility. Recommendation: choose the latest version for optimal features and support.

  • Container Runtime: Containerd is provided as the default container runtime. If you prefer using Docker as the container runtime, refer to Choosing a Container Runtime.

  • Cluster Network Protocol: Supports three modes: IPv4 single stack, IPv6 single stack, and IPv4/IPv6 dual stack. Note: If you select dual stack mode, ensure all nodes have correctly configured IPv6 addresses; the network protocol cannot be changed after it is set.

  • Cluster Endpoint:
    - IP Address / Domain: Enter the pre-prepared domain name, or the VIP if no domain name is available.
    - Self-Built VIP: Disabled by default. Only enable it if you have not provided a load balancer. When enabled, the installer automatically deploys keepalived for software load balancing support.
    - External Access: Enter the externally accessible address prepared for the cluster when it is not in the same network environment as the global cluster.

#Container Network

The available network plugin options are Kube-OVN, Calico, Flannel, and Custom. The parameters below describe the Kube-OVN option.

Kube-OVN is an enterprise-grade, cloud-native Kubernetes container network orchestration system developed by Alauda. It brings mature networking capabilities from the OpenStack domain to Kubernetes, supporting cross-cloud network management, interconnection with traditional network architectures and infrastructure, and edge cluster deployment scenarios, while greatly enhancing Kubernetes container network security, management efficiency, and performance.

  • Subnet: Also known as Cluster CIDR, this is the default subnet segment. Additional subnets can be added after cluster creation.

  • Transmit Mode:
    - Overlay: A virtual network abstracted over the infrastructure that doesn't consume physical network resources. When creating an Overlay default subnet, all Overlay subnets in the cluster use the same cluster NIC and node NIC configuration.
    - Underlay: This transmission mode relies on physical network devices. It can directly allocate physical network addresses to Pods, ensuring better performance and connectivity with the physical network. Nodes in an Underlay subnet must have multiple NICs, and the NIC used for bridge networking must be used exclusively by Underlay and must not carry other traffic such as SSH. When creating an Underlay default subnet, the cluster NIC is the default NIC for bridge networking, and the node NIC is the per-node NIC configuration in the bridge network. Underlay additionally requires:
      - Default Gateway: The physical network gateway address, i.e., the gateway address for the Cluster CIDR segment (must be within the Cluster CIDR address range).
      - VLAN ID: Virtual LAN identifier (VLAN number), e.g., 0.
      - Reserved IPs: Reserved IPs that won't be automatically allocated, such as IPs in the subnet that are already used by other devices.

  • Service CIDR: IP address range used by Kubernetes Services of type ClusterIP. Cannot overlap with the default subnet range.

  • Join CIDR: In Overlay transmission mode, the IP address range used for communication between nodes and Pods. Cannot overlap with the default subnet or the Service CIDR. (A quick overlap check sketch follows this list.)
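
Because the Service CIDR and Join CIDR must not overlap with the default subnet (or each other), it can help to validate your planned ranges before creating the cluster. A minimal sketch using the Python standard library, with example CIDR values you would replace with your own:

```python
import ipaddress
from itertools import combinations

# Example values only; replace with your planned ranges.
cidrs = {
    "default subnet (Cluster CIDR)": ipaddress.ip_network("10.3.0.0/16"),
    "Service CIDR": ipaddress.ip_network("10.4.0.0/16"),
    "Join CIDR": ipaddress.ip_network("100.64.0.0/16"),
}

ok = True
for (name_a, net_a), (name_b, net_b) in combinations(cidrs.items(), 2):
    if net_a.overlaps(net_b):
        print(f"CONFLICT: {name_a} {net_a} overlaps {name_b} {net_b}")
        ok = False

print("All CIDR ranges are disjoint." if ok
      else "Fix the overlapping ranges before creating the cluster.")
```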

#Node Settings

  • Network Interface Card: The name of the host network interface device used by the cluster network plugin. Note:
    - When selecting Underlay transmit mode for the Kube-OVN default subnet, you must specify the network interface name; it becomes the default NIC for bridge networking.
    - By default, the platform's network interface traffic monitoring recognizes traffic on interfaces whose names match eth.|en.|wl.|ww. (see the matching sketch below). If your interfaces use a different naming convention, refer to Collect Network Data from Custom-Named Network Interfaces after cluster onboarding to modify the relevant resources so the platform can properly monitor interface traffic.

  • Node Name: Choose to use either the node IP or the hostname as the node name on the platform. Note: when using hostnames, ensure that the hostnames of all nodes added to the cluster are unique.

  • Nodes: Add nodes to the cluster, or use Restore from draft to recover temporarily saved node information. See the node addition parameters below.

  • Monitoring Type: Supports Prometheus and VictoriaMetrics. When selecting VictoriaMetrics as the monitoring component, you must also configure the Deploy Type:
    - Deploy VictoriaMetrics: Deploys all related components, including VMStorage, VMAlert, VMAgent, etc.
    - Deploy VictoriaMetrics Agent: Deploys only the metrics collection component, VMAgent. With this deployment method, you must associate it with a VictoriaMetrics instance already deployed on another cluster in the platform to provide monitoring services for this cluster.

  • Monitoring Nodes: Select the nodes on which cluster monitoring components are deployed. You can select compute nodes and control plane nodes that allow application deployment. To avoid affecting cluster performance, prioritize compute nodes. After the cluster is successfully created, monitoring components with storage type Local Volume are deployed on the selected nodes.
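
The default interface-name pattern above looks like a regular expression matched against the start of the interface name; that interpretation is an assumption. Under that assumption, a minimal sketch showing which common interface names would be monitored by default:

```python
import re

# Default pattern quoted in the docs, treated here as a start-anchored
# regular expression (an assumption about how it is evaluated).
DEFAULT_PATTERN = re.compile(r"eth.|en.|wl.|ww.")

candidates = ["eth0", "ens192", "eno1", "wlan0", "wwan0", "bond0", "team0", "lo"]

for name in candidates:
    matched = DEFAULT_PATTERN.match(name) is not None
    print(f"{name:8s} {'monitored by default' if matched else 'needs custom configuration'}")
```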

Node Addition Parameters

  • Type:
    - Control Plane Node: Runs components such as kube-apiserver, kube-scheduler, kube-controller-manager, etcd, the container network, and some platform management components. When Application Deployable is enabled, control plane nodes can also be used as compute nodes.
    - Worker Node: Hosts the business Pods running in the cluster.

  • IPv4 Address: The IPv4 address of the node. For clusters created in internal network mode, enter the node's private IP.

  • IPv6 Address: Valid when the cluster has IPv4/IPv6 dual stack enabled. The IPv6 address of the node.

  • Application Deployable: Valid when the node type is Control Plane Node. Whether business applications may be deployed on this control plane node, i.e., whether business-related Pods may be scheduled to it.

  • Display Name: The display name of the node.

  • SSH Connection IP: The IP address used to reach the node over SSH. If you can log in to the node using ssh <username>@<node's IPv4 address>, this parameter is not required; otherwise, enter the node's public IP or NAT external IP so that the global cluster and proxy can connect to the node via this IP.

  • Network Interface Card: The name of the network interface used by the node. The priority of the network interface configuration (from highest to lowest) is:
    - Kube-OVN Underlay: node NIC > cluster NIC
    - Kube-OVN Overlay: node NIC > cluster NIC > NIC of the node's default route
    - Calico: cluster NIC > NIC of the node's default route
    - Flannel: cluster NIC > NIC of the node's default route

  • Associated Bridge Network: Note: bridge network configuration is not supported when creating a cluster; this option is only available when adding nodes to a cluster that already has Underlay subnets. Select an existing bridge network (Add Bridge Network). If you do not want to use the bridge network's default NIC, you can configure the node NIC separately.

  • SSH Port: SSH service port number, e.g., 22.

  • SSH Username: SSH username; must be a user with root privileges, e.g., root.

  • Proxy: Whether to access the node's SSH port through a proxy. When the global cluster cannot directly reach the node to be added via SSH (e.g., the global cluster and the workload cluster are not in the same subnet, or the node IP is an internal IP the global cluster cannot reach), turn this switch on and configure the proxy parameters. After the proxy is configured, node access and deployment are performed through it. Note: currently only SOCKS5 proxies are supported (a connectivity check sketch follows this list).
    - Access URL: Proxy server address, e.g., 192.168.1.1:1080.
    - Username: Username for accessing the proxy server.
    - Password: Password for accessing the proxy server.

  • SSH Authentication: Authentication method and corresponding credentials for logging in to the added node. Options:
    - Password: Requires a username with root privileges and the corresponding SSH password.
    - Key: Requires a private key with root privileges and the private key password.

  • Save Draft: Saves the data currently configured in the dialog as a draft and closes the Add Node dialog. Without leaving the Create Cluster page, you can select Restore from draft to reopen the Add Node dialog with the saved configuration. Note: the data restored is from the most recently saved draft.
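
Before adding a node behind a SOCKS5 proxy, you may want to confirm that the proxy can actually reach the node's SSH port. A minimal sketch, assuming the third-party PySocks and paramiko packages are installed; the addresses and credentials are placeholders:

```python
import socks      # PySocks
import paramiko

# Placeholder values; replace with your proxy and node details.
PROXY_HOST, PROXY_PORT = "192.168.1.1", 1080
PROXY_USER, PROXY_PASS = "proxyuser", "proxypass"
NODE_IP, SSH_PORT = "10.0.0.5", 22
SSH_USER, SSH_PASS = "root", "node-password"

# Open a TCP connection to the node's SSH port through the SOCKS5 proxy.
sock = socks.socksocket()
sock.set_proxy(socks.SOCKS5, PROXY_HOST, PROXY_PORT,
               username=PROXY_USER, password=PROXY_PASS)
sock.connect((NODE_IP, SSH_PORT))

# Authenticate over that socket and run a trivial command.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(NODE_IP, port=SSH_PORT, username=SSH_USER,
               password=SSH_PASS, sock=sock)
_, stdout, _ = client.exec_command("uname -m")
print("node architecture:", stdout.read().decode().strip())
client.close()
```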

#Extended Parameters

Note:

  • Apart from required configurations, setting extended parameters is not recommended: incorrect settings may make the cluster unavailable, and extended parameters cannot be modified after the cluster is created.

  • If an entered Key duplicates a default parameter Key, it will override the default configuration.

Procedure

  1. Click Extended Parameters to expand the extended parameter configuration area. You can optionally set the following extended parameters for the cluster:
  • Docker Parameters: dockerExtraArgs, additional configuration parameters for Docker, written to /etc/sysconfig/docker. Modification is not recommended. To configure Docker through the daemon.json file, the parameters must be configured as key-value pairs.

  • Kubelet Parameters: kubeletExtraArgs, additional configuration parameters for the kubelet. Note: when the Container Network's Node IP Count parameter is entered, a default kubelet parameter with the key max-pods and a value equal to the Node IP Count is generated automatically; it sets the maximum number of Pods that can run on any node in the cluster and is not displayed in the interface. Adding your own max-pods key-value pair in the Kubelet Parameters area overrides this default. Any positive integer is allowed, but it's recommended to keep the default value (Node IP Count) or enter a value not exceeding 256.

  • Controller Manager Parameters: controllerManagerExtraArgs, additional configuration parameters for the Controller Manager.

  • Scheduler Parameters: schedulerExtraArgs, additional configuration parameters for the Scheduler.

  • APIServer Parameters: apiServerExtraArgs, additional configuration parameters for the APIServer.

  • APIServer URL: publicAlternativeNames, APIServer access addresses to be issued in the certificate. Only IPs or domain names may be entered, with a maximum of 253 characters.

  • Cluster Annotations: Cluster annotation information, marking cluster characteristics in metadata as key-value pairs so that platform components or business components can obtain the relevant information.

  2. Click Create. You'll return to the cluster list page, where the cluster will be in the Creating state.
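
The override behavior for duplicate extended-parameter keys works like an ordinary map merge: the platform's defaults are applied first, and any key you enter with the same name replaces the default value. The following is only an illustration of that semantics (not the installer's actual code), using max-pods as the example:

```python
# Defaults generated by the platform (illustrative; max-pods is derived
# from the Node IP Count entered in the Container Network section).
default_kubelet_args = {"max-pods": "256"}

# Keys entered by the user in the Kubelet Parameters area.
user_kubelet_args = {"max-pods": "200", "serialize-image-pulls": "false"}

# Later values win, so a user-supplied key overrides the default one.
effective_args = {**default_kubelet_args, **user_kubelet_args}

print(effective_args)   # {'max-pods': '200', 'serialize-image-pulls': 'false'}
```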

#Post-Creation Steps

#Viewing Creation Progress

On the cluster list page, you can view the list of created clusters. For clusters in the Creating state, you can check the execution progress.

Procedure

  1. Click the small icon View Execution Progress to the right of the cluster status.

  2. In the execution progress dialog that appears, you can view the cluster's execution progress (status.conditions).

    Tip: When a condition type is in progress or has failed with a reason, hover your cursor over the corresponding reason (shown in blue text) to view its details (status.conditions.reason). A small inspection sketch follows this procedure.
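
The progress shown in the dialog corresponds to the conditions stored on the cluster resource. If you export the cluster object as JSON (for example, through the platform API), a sketch like the following can list those conditions; the file name is a placeholder:

```python
import json

# Placeholder: a JSON dump of the cluster resource obtained from the platform API.
with open("cluster.json") as f:
    cluster = json.load(f)

# Each condition carries a type, a status, and (on failure) a reason/message.
for cond in cluster.get("status", {}).get("conditions", []):
    line = f"{cond.get('type')}: {cond.get('status')}"
    if cond.get("reason"):
        line += f" (reason: {cond['reason']})"
    print(line)
```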

#Associating with Projects

After the cluster is created, you can add it to projects in the project management view.