Before installing the global cluster, you need to prepare hardware, network, and OS environments that meet the requirements.
The platform currently does not support direct installation of the global cluster in an existing Kubernetes environment. If your environment already has a Kubernetes cluster, back up your data and clean the environment before installation.
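As a minimal pre-flight sketch, the helper below reports leftover Kubernetes state on a node before installation. The function name and the directory list are assumptions; extend the list for your distribution and container runtime.

```shell
# check_clean_node ROOT: hypothetical pre-flight helper that reports leftover
# Kubernetes state under ROOT (defaults to /). The directory list is an
# assumption; extend it for your environment.
check_clean_node() {
  root="${1:-/}"
  for d in etc/kubernetes var/lib/kubelet var/lib/etcd; do
    if [ -e "$root/$d" ]; then
      echo "leftover state: /$d"
      return 1
    fi
  done
  echo "node is clean"
}
```

Run it against `/` on each node; any "leftover state" line means the environment must be cleaned first.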
Before installation, you must select an appropriate installation scenario based on your goals and actual needs. Different scenarios have significant differences in infrastructure resource configuration and architecture design requirements. The following are planning recommendations for three typical scenarios:
Scope of Application

Suitable for platform function verification, demos, or technical feasibility testing. This scenario only verifies the core functions of the platform and does not carry production-level application traffic; the resource configuration is at the minimum level.
Resource Configuration Requirements
| Dimension | Specification Requirements |
| --- | --- |
| Number of Nodes | 1 (physical machine or virtual machine) |
| CPU | ≥16 cores |
| Memory | ≥32GB |
Architecture Description

The global cluster should be deployed in an independent VPC or VLAN, with bandwidth ≥1Gbps.

This section describes the minimum hardware requirements for building a highly available global cluster. If you have completed capacity planning, prepare the corresponding resources according to Capacity Planning, or scale up as needed after installation.
At least 3 physical machines or virtual machines must be provided as control plane nodes for the cluster. The minimum configuration for each node is as follows:
| Category | Minimum Requirements |
| --- | --- |
| CPU | ≥ 8 cores, clock speed ≥ 2.5GHz; no over-provisioning; disable power-saving mode |
| Memory | ≥ 16GB; no over-provisioning; at least six-channel DDR4 recommended |
| Hard Drive | Single-device IOPS ≥ 2000, throughput ≥ 200MB/s; SSD required |
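A rough sequential-write check can be done with `dd`; this is only a sanity check, not a substitute for a proper benchmark, and the file path is an example.

```shell
# Rough sequential-write throughput check. conv=fdatasync forces a flush before
# dd reports, so the figure approximates sustained disk speed; an SSD meeting
# the table above should report >=200 MB/s. For real IOPS numbers use a
# dedicated tool such as fio instead.
dd if=/dev/zero of=./ddtest.bin bs=1M count=1024 conv=fdatasync
rm -f ./ddtest.bin
```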
For ARM architectures (such as Kunpeng 920), it is recommended to double the x86 minimum configuration, and in no case go below 1.5 times it.

For example: if x86 requires 8 cores / 16GB, then ARM should have at least 12 cores / 24GB, with 16 cores / 32GB recommended.
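The 1.5×/2× rule above can be expressed as a small calculator; the function name is an assumption, and the 1.5× floor is rounded up.

```shell
# arm_sizing X86_CORES X86_MEM_GB: derive the ARM floor (1.5x, rounded up) and
# recommendation (2x) from an x86 minimum. Hypothetical helper illustrating the
# sizing rule above.
arm_sizing() {
  echo "minimum: $(( ($1 * 3 + 1) / 2 )) cores / $(( ($2 * 3 + 1) / 2 ))GB, recommended: $(( $1 * 2 )) cores / $(( $2 * 2 ))GB"
}

arm_sizing 8 16  # → minimum: 12 cores / 24GB, recommended: 16 cores / 32GB
```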
Supported kernel versions:

- 3.10.0-1127.el7.x86_64
- 4.18.0-80.el8.x86_64 to 4.18.0-372.9.1.el8.x86_64
RHEL 7.8 does not support Calico VXLAN IPv6.
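To check the running kernel against the supported ranges, a version comparison with GNU `sort -V` works on release strings like those above; the helper name is an assumption.

```shell
# ver_ge ACTUAL MINIMUM: succeed if kernel release ACTUAL is at least MINIMUM,
# using GNU sort -V for version ordering. Sketch for checking "uname -r"
# against the supported ranges listed above.
ver_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

if ver_ge "$(uname -r)" "3.10.0-1127.el7.x86_64"; then
  echo "kernel at least 3.10.0-1127"
fi
```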
Before installation, the following network resources must be pre-configured. If a hardware LoadBalancer cannot be provided, the installer supports configuring haproxy + keepalived as a software load balancer, but be aware: if the software load balancer fails, the global cluster becomes unavailable, troubleshooting takes a long time, and platform reliability is seriously affected.

| Resource | Mandatory | Quantity | Description |
| --- | --- | --- | --- |
| global VIP | Mandatory | 1 | Used for nodes in the cluster to access kube-apiserver; configured in the load balancing device to ensure high availability. This IP can also be used as the access address for the platform Web UI. Workload clusters in the same network as the global cluster can also access the global cluster through this IP. |
| External IP | Optional | On demand | Required when there are workload clusters that are not in the same network as the global cluster, such as a hybrid cloud scenario. Workload clusters in other networks access the global cluster through this IP. It must be configured in the load balancing device to ensure high availability, and can also be used as the access address for the platform Web UI. |
| Domain Name | Optional | On demand | If you need to access the global cluster or platform Web UI through a domain name, provide it in advance and ensure that it resolves correctly. |
| Certificate | Optional | On demand | A trusted certificate is recommended to avoid browser security warnings; if not provided, the installer generates a self-signed certificate, which may cause security risks when using HTTPS. |
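For the haproxy + keepalived option mentioned above, a TCP-passthrough frontend for the kube-apiserver port might look like the sketch below. The 192.0.2.x addresses and backend names are placeholders, not values the installer generates; keepalived would float the global VIP between the haproxy instances.

```haproxy
# Minimal haproxy TCP passthrough for kube-apiserver; replace the placeholder
# 192.0.2.x addresses with your three control plane node IPs.
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend control-plane-nodes

backend control-plane-nodes
    mode tcp
    balance roundrobin
    server cp1 192.0.2.11:6443 check
    server cp2 192.0.2.12:6443 check
    server cp3 192.0.2.13:6443 check
```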
A domain name must be provided in the following cases:

- The global cluster needs to support IPv6 access.
- A disaster recovery global cluster is planned.

If the platform needs to configure multiple access addresses (for example, addresses for internal and external networks), prepare the corresponding IP addresses or domain names in advance according to the table above. You can configure them in the installation parameters later, or add them according to the product documentation after installation.
| Type | Requirement Description |
| --- | --- |
| Network Speed | Speed between the global cluster and workload clusters in the same network ≥1Gbps (10Gbps recommended); cross-network speed ≥100Mbps (1Gbps recommended). Insufficient speed will significantly reduce data query performance. |
| Network Latency | Latency ≤2ms within the same network; ≤100ms (≤30ms recommended) across networks. |
| Network Policy | Refer to LoadBalancer Forwarding Rules to ensure that the necessary ports are open; when using Calico CNI, ensure that the IP-in-IP protocol is enabled. |
| IP Address Range | The global cluster nodes should avoid using the 172.16-32 network segments. If already in use, adjust the Docker configuration (add the bip parameter) to avoid conflicts. |
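The `bip` adjustment mentioned in the table is set in Docker's `/etc/docker/daemon.json`; it moves the default `docker0` bridge off the conflicting range. The subnet below is only an example — pick any range unused in your network, then restart the Docker daemon.

```json
{
  "bip": "10.200.0.1/24"
}
```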
This rule is designed to ensure that the global cluster can receive traffic from the LoadBalancer normally. Check the network policy against the following table to ensure that the relevant ports are open.
| Source IP | Protocol | Destination IP | Destination Port | Description |
| --- | --- | --- | --- | --- |
| global VIP, External IP | TCP | All control plane node IPs | 443 | Provides access to the platform Web UI, image repository, and Kubernetes API Server over HTTPS. The default port is 443. |
| global VIP, External IP | TCP | All control plane node IPs | 6443 | Provides access to the Kubernetes API Server for nodes within the cluster. |
| global VIP, External IP | TCP | All control plane node IPs | 11443 | Provides access to the image repository for nodes within the cluster. |
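A quick way to verify the ports in the table are reachable is a TCP probe from a machine on the LoadBalancer's network; the helper below uses bash's `/dev/tcp` (an assumption that bash is available), and the node IP is a placeholder.

```shell
# probe HOST PORT: print "open" or "closed" for a TCP port, using bash's
# /dev/tcp with a 3-second timeout.
probe() {
  if timeout 3 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# Placeholder control plane node IP; repeat for each node and required port.
for port in 443 6443 11443; do
  echo "192.0.2.11:$port $(probe 192.0.2.11 $port)"
done
```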
If a disaster recovery global cluster is planned, you also need to open port 2379 on all control plane nodes for ETCD data synchronization between the primary and disaster recovery clusters.