Prerequisites

Before installing the global cluster, prepare hardware, network, and operating system resources that meet the requirements described in this section.

INFO

The platform does not currently support installing the global cluster directly into an existing Kubernetes environment. If your environment already contains a Kubernetes cluster, back up your data and clean up the environment before installation.

Capacity Planning

Before installation, select an installation scenario based on your goals and actual needs. The scenarios differ significantly in infrastructure resource configuration and architecture design requirements. Planning recommendations for three typical scenarios follow:

Function Verification
ISV Integration Delivery
Multi-Cluster Management (Data Center Level)

Scope of Application: Suitable for platform function verification, demos, or technical feasibility testing. This scenario only verifies the core functions of the platform and does not carry production-level application traffic; the resource configuration is at the minimum level.

Resource Configuration Requirements

| Dimension | Specification Requirements |
| --- | --- |
| Number of Nodes | 1 (physical machine or virtual machine) |
| CPU | ≥16 cores |
| Memory | ≥32GB |

Architecture Description

  • All-in-one: The cluster has only one node, and all control plane components and applications run on that node.
  • Lightweight Load: Supports only demo applications with no more than 10 Pods.
  • Non-Production Use: Does not support horizontal scaling and does not meet application continuity and high availability requirements.
TIP
  1. Resource Redundancy: For production environments, reserve at least a 30% resource margin to handle sudden load spikes.
  2. Network Planning: Deploy the global cluster in an independent VPC or VLAN and ensure bandwidth ≥1Gbps.
  3. Storage Isolation: Use NVMe SSDs for ETCD storage and physically isolate it from application storage.
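
To quickly confirm that a candidate node meets the function-verification minimums above (or the figures for whichever scenario you chose), a check along these lines can help. This is a minimal sketch assuming a Linux host with standard coreutils:

```bash
# Minimal sketch: check that a candidate node meets the function-verification
# minimums (>=16 CPU cores, >=32 GB memory). Assumes a Linux host.
CORES=$(nproc)
MEM_GB=$(free -g | awk '/^Mem:/ {print $2}')

echo "CPU cores: ${CORES} (required: >=16)"
echo "Memory:    ${MEM_GB} GB (required: >=32)"

# Note: free -g rounds down, so a 32 GB machine may report 31.
[ "${CORES}" -ge 16 ] || echo "WARNING: fewer than 16 CPU cores"
[ "${MEM_GB}" -ge 32 ] || echo "WARNING: less than 32 GB memory"
```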

Machines

INFO

This section describes the minimum hardware requirements for building a highly available global cluster. If you have completed capacity planning, please prepare the corresponding resources according to Capacity Planning, or scale up as needed after installation.

Basic Requirements

At least 3 physical machines or virtual machines must be provided as control plane nodes for the cluster. The minimum configuration for each node is as follows:

| Category | Minimum Requirements |
| --- | --- |
| CPU | ≥ 8 cores, clock speed ≥ 2.5GHz; no over-provisioning; disable power saving mode |
| Memory | ≥ 16GB; no over-provisioning; recommended to use at least six-channel DDR4 |
| Hard Drive | Single device IOPS ≥ 2000; throughput ≥ 200MB/s; must use SSD |
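
To confirm that a disk meets the IOPS and throughput figures above before installation, a rough benchmark along the following lines can help. This is a minimal sketch assuming fio is installed; the target directory and test sizes are placeholders, and the test should point at the disk you actually intend to use (not one that already holds data you care about).

```bash
# Sketch: rough disk benchmark with fio (assumes fio is installed).
# TEST_DIR is a placeholder; point it at the disk you plan to use.
TEST_DIR=/var/lib/disk-bench
mkdir -p "${TEST_DIR}"

# Random-write IOPS check (target: >= 2000 IOPS on a single device).
fio --name=iops-check --filename="${TEST_DIR}/fio-iops" --size=1G \
    --rw=randwrite --bs=4k --ioengine=libaio --direct=1 \
    --iodepth=32 --runtime=60 --time_based --group_reporting

# Sequential-write throughput check (target: >= 200 MB/s).
fio --name=bw-check --filename="${TEST_DIR}/fio-bw" --size=1G \
    --rw=write --bs=1M --ioengine=libaio --direct=1 \
    --iodepth=8 --runtime=60 --time_based --group_reporting

# Clean up the benchmark files afterwards.
rm -rf "${TEST_DIR}"
```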

ARM Architecture Requirements

For ARM architectures (such as Kunpeng 920), it is recommended to increase the configuration to 2 times that of the x86 minimum configuration, but not less than 1.5 times.

For example: if x86 requires 8 cores and 16GB, then ARM should have at least 12 cores and 24GB, and 16 cores and 32GB is recommended.

Supported OS and Kernels

INFO
  1. Kernel Version Requirements: The kernel versions listed below have been officially released and validated by platform testing. In your actual deployment, the A.B.C major version numbers must match; subsequent patch versions may vary.
  2. Unsupported Environments: If the OS, kernel version, or CPU architecture does not meet the requirements, please contact technical support.

The following operating systems are supported:

  • Red Hat Enterprise Linux (RHEL)
  • CentOS
  • Ubuntu
  • Kylin Linux Advanced Server

Validated RHEL kernel versions:

  • RHEL 7.8: 3.10.0-1127.el7.x86_64
  • RHEL 8.0 to 8.6: 4.18.0-80.el8.x86_64 to 4.18.0-372.9.1.el8.x86_64
WARNING

RHEL 7.8 does not support Calico Vxlan IPv6.
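
To compare each node against the lists above, the OS, kernel, and CPU architecture can be read with standard commands (a minimal sketch):

```bash
# Sketch: record OS, kernel, and CPU architecture on each node so they can be
# checked against the supported list above.
grep -E '^(NAME|VERSION)=' /etc/os-release   # OS distribution and version
uname -r                                     # kernel version (compare the A.B.C part)
uname -m                                     # CPU architecture (x86_64 / aarch64)
```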

Network

Before installation, the following network resources must be pre-configured. If a hardware LoadBalancer cannot be provided, the installer supports configuring haproxy + keepalived as a software load balancer (a minimal sketch of such a setup follows the list below), but be aware of the following:

  • Lower Performance: Software load balancing performs worse than a hardware LoadBalancer.
  • Higher Complexity: If you are not familiar with keepalived, misconfiguration can make the global cluster unavailable; troubleshooting takes a long time and platform reliability is seriously affected.
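
If you do fall back to the software option, the installer sets up haproxy and keepalived for you; the sketch below only illustrates the general shape of such a setup so you can review what will run on the control plane nodes. Every IP address, the interface name, and the password are placeholders, and the configuration the installer actually generates may differ.

```bash
# Illustrative only: the general shape of an haproxy + keepalived setup for the
# global VIP. All IPs, the interface name, and the password are placeholders.

# Allow haproxy to bind the VIP even on nodes that do not currently hold it.
sysctl -w net.ipv4.ip_nonlocal_bind=1

# haproxy: forward VIP:6443 to the kube-apiserver on each control plane node.
cat <<'EOF' > /etc/haproxy/haproxy.cfg
defaults
    mode tcp
    timeout connect 5s
    timeout client 30s
    timeout server 30s

# The frontend binds only the floating VIP (placeholder 192.168.0.100), so it
# does not collide with the local kube-apiserver listening on the node IP.
frontend kube-apiserver
    bind 192.168.0.100:6443
    default_backend control-plane

backend control-plane
    balance roundrobin
    server cp1 192.168.0.11:6443 check
    server cp2 192.168.0.12:6443 check
    server cp3 192.168.0.13:6443 check
EOF

# keepalived: float the global VIP across the control plane nodes via VRRP.
cat <<'EOF' > /etc/keepalived/keepalived.conf
vrrp_instance global_vip {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass change-me
    }
    virtual_ipaddress {
        192.168.0.100
    }
}
EOF
```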

Network Resources

| Resource | Mandatory | Quantity | Description |
| --- | --- | --- | --- |
| global VIP | Mandatory | 1 | Used by nodes in the cluster to access kube-apiserver; configured on the load balancing device to ensure high availability. This IP can also be used as the access address for the platform Web UI. Workload clusters in the same network as the global cluster can also access the global cluster through this IP. |
| External IP | Optional | On demand | Required when workload clusters are not in the same network as the global cluster, such as in a hybrid cloud scenario; workload clusters in other networks access the global cluster through this IP. It needs to be configured on the load balancing device to ensure high availability and can also be used as the access address for the platform Web UI. |
| Domain Name | Optional | On demand | If you need to access the global cluster or platform Web UI through a domain name, provide it in advance and ensure that it resolves correctly. |
| Certificate | Optional | On demand | A trusted certificate is recommended to avoid browser security warnings; if not provided, the installer generates a self-signed certificate, but there may be security risks when using HTTPS. |
INFO

A domain name must be provided in the following cases:

  1. The global cluster needs to support IPv6 access.
  2. A disaster recovery plan is planned for the global cluster.
NOTE

If the platform needs to configure multiple access addresses (for example, addresses for internal and external networks), please prepare the corresponding IP addresses or domain names in advance according to the table above. You can configure them in the installation parameters later, or add them according to the product documentation after installation.
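
If you plan to provide your own certificate, a quick sanity check before installation can save HTTPS troubleshooting later. A minimal sketch, assuming PEM-encoded files at placeholder paths:

```bash
# Sketch: basic sanity checks on a provided certificate (paths are placeholders).
CERT=/path/to/tls.crt
KEY=/path/to/tls.key

# Expiry dates and the subject/issuer of the certificate.
openssl x509 -in "${CERT}" -noout -dates -subject -issuer

# Confirm the certificate and private key belong together (the two hashes must match).
openssl x509 -in "${CERT}" -noout -pubkey | openssl sha256
openssl pkey -in "${KEY}" -pubout | openssl sha256
```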

Network Configuration

| Type | Requirement Description |
| --- | --- |
| Network Speed | ≥1Gbps (10Gbps recommended) between the global cluster and workload clusters in the same network; ≥100Mbps (1Gbps recommended) across networks. Insufficient speed significantly reduces data query performance. |
| Network Latency | ≤2ms in the same network; ≤100ms (≤30ms recommended) across networks. |
| Network Policy | Refer to LoadBalancer Forwarding Rules to ensure that the necessary ports are open; when using Calico CNI, ensure that the IP-in-IP protocol is enabled. |
| IP Address Range | The global cluster nodes should avoid using the 172.16-32 network segment. If it is already in use, adjust the Docker configuration (add the bip parameter) to avoid conflicts. |
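
If your nodes do fall into that segment and Docker is the container runtime on them, the bridge address is changed through the bip parameter in /etc/docker/daemon.json. A minimal sketch; the CIDR shown is a placeholder that must not overlap any network you actually use, and an existing daemon.json should be merged rather than overwritten:

```bash
# Sketch: move Docker's default bridge to a non-conflicting CIDR (placeholder value).
# Merge this key into any existing /etc/docker/daemon.json instead of overwriting it.
cat <<'EOF' > /etc/docker/daemon.json
{
  "bip": "192.168.200.1/24"
}
EOF

# Restart Docker for the new bridge address to take effect.
systemctl restart docker
```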

LoadBalancer Forwarding Rules

These rules ensure that the global cluster can receive traffic from the LoadBalancer. Check your network policy against the following table and make sure the relevant ports are open.

| Source IP | Protocol | Destination IP | Destination Port | Description |
| --- | --- | --- | --- | --- |
| global VIP, External IP | TCP | All control plane node IPs | 443 | Provides access to the platform Web UI, image repository, and Kubernetes API Server over HTTPS. The default port is 443; see the note below this table if you need a custom HTTPS port. |
| global VIP, External IP | TCP | All control plane node IPs | 6443 | Provides access to the Kubernetes API Server for nodes within the cluster. |
| global VIP, External IP | TCP | All control plane node IPs | 11443 | Provides access to the image repository for nodes within the cluster. Note: if you plan to use an external image repository instead of the default image repository provided by the global cluster, you do not need to configure this port. |

If you need to use a custom HTTPS port instead of 443:

  • Replace the destination port in the port forwarding rule with your custom port number.
  • Later, in the Web UI installation parameters, fill in your custom port number.

TIP
  • It is recommended to configure health checks on the LoadBalancer to monitor the port status.
  • If you plan to implement a disaster recovery plan for the global cluster, you need to open port 2379 for all control plane nodes for ETCD data synchronization between the primary and disaster recovery clusters.
  • The platform only supports HTTPS by default. If HTTP support is required, you need to open the HTTP port for all control plane nodes.
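
After the forwarding rules are in place, reachability of the VIP ports can be spot-checked from a machine in the same network. A minimal sketch; the VIP is a placeholder, and your custom HTTPS port or port 2379 can be added to the list if they apply to your setup:

```bash
# Sketch: verify that the LoadBalancer forwards the required ports (VIP is a placeholder).
VIP=192.168.0.100

for PORT in 443 6443 11443; do
    if timeout 3 bash -c "echo > /dev/tcp/${VIP}/${PORT}" 2>/dev/null; then
        echo "port ${PORT}: reachable"
    else
        echo "port ${PORT}: NOT reachable"
    fi
done
```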