Prerequisites

Before installing the global cluster, you need to prepare hardware, network, and operating system resources that meet the requirements below.

INFO
  1. The platform currently does not support installing the global cluster directly into an existing Kubernetes environment. If your environment already has a Kubernetes cluster, back up your data and clean the environment before installation.
  2. If you plan to use global cluster disaster recovery, read Global Cluster Disaster Recovery first.
  3. Make sure all of the new nodes meet the Node Requirements.
  4. Disk performance and capacity must also meet the Disk Configuration Requirements.

Resource Planning

This section provides guidelines for resource planning before installing the platform. Choose the appropriate deployment scenario based on your environment and usage needs, and prepare resources accordingly.

INFO

The following recommendations only cover the minimum resources required for a successful installation of the Global cluster.

They do not include the resources required for any additional extensions or components deployed on the Global cluster.

For detailed requirements of each extension, refer to the corresponding component documentation.

Deployment Architectures

WARNING

For ARM architectures (such as Kunpeng 920), it is recommended to provision 2 times the x86 minimum configuration, and never less than 1.5 times.

For example: if x86 requires 8 cores and 16GB, then ARM should have at least 12 cores and 24GB, with 16 cores and 32GB recommended.

Before installation, you must determine which deployment architecture best fits your use case. The platform supports the following three common deployment architectures:

  • Multi-Cluster

    Select this architecture if you need to centrally manage multiple Kubernetes clusters. In this mode, the platform consists of one Global cluster and multiple workload clusters. Running non-platform workloads on the Global cluster may degrade platform stability and performance and should be avoided.

  • Single Cluster

    Select this architecture if you plan to install only one cluster and run workloads directly on it. In this mode, the Global cluster also acts as a workload cluster and therefore requires more resources than a Global-only setup in Multi-Cluster mode.

  • Single Node

    WARNING

    This architecture is intended solely for testing or proof-of-concept purposes and should not be used in production.

Single Node

The following table lists the minimum hardware requirements for installing the platform in Single Node mode.

| Resource | Minimum Requirement |
| -------- | ------------------- |
| CPU      | 12 cores            |
| Memory   | 24GB                |
| Storage  | Storage Capacity    |
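
To quickly confirm that a node meets these minimums before installation, you can run a check such as the following (a minimal sketch for Linux nodes; the thresholds mirror the table above):

```bash
#!/usr/bin/env bash
# Pre-flight check against the Single Node minimums listed above.
min_cpu=12
min_mem_gb=24

cpu=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)

echo "CPU cores: ${cpu} (minimum ${min_cpu})"
echo "Memory:    ${mem_gb}GB (minimum ${min_mem_gb})"

if [ "${cpu}" -ge "${min_cpu}" ] && [ "${mem_gb}" -ge "${min_mem_gb}" ]; then
  echo "OK: node meets the Single Node minimums"
else
  echo "WARNING: node is below the Single Node minimums"
fi
```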

Single Cluster

In this mode, the Global cluster serves both as the control plane and as the workload cluster.
The Global cluster MUST have exactly 3 control plane nodes.

The total resource requirement consists of two parts:

  • Base resources for the Global cluster itself
  • Additional resources for running workloads on the same cluster

The resources required for a highly available Global cluster are as follows:

| Resource | Minimum Requirement |
| -------- | ------------------- |
| CPU      | 8 cores             |
| Memory   | 16GB                |
| Storage  | Storage Capacity    |

To estimate the additional resources required for your workloads, refer to Evaluating Resources for Workload Cluster.

Multi-Cluster

When managing multiple workload clusters, the resource usage of the Global cluster increases proportionally with the number of managed clusters. The additional overhead mainly comes from cluster registration, monitoring, and control-plane synchronization.

To estimate the resources required for your Global cluster based on the number of managed clusters, refer to Evaluating Resources for Global Cluster.

Network

Before installation, the following network resources must be pre-configured. If a hardware LoadBalancer cannot be provided, the installer supports configuring haproxy + keepalived as a software load balancer (see the configuration sketch after this list), but you need to understand:

  • Lower performance: software load balancing performs worse than a hardware LoadBalancer.
  • Higher complexity: if you are not familiar with keepalived, a misconfiguration can make the global cluster unavailable; troubleshooting such problems takes a long time and seriously affects platform reliability.
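
For orientation, the following is a minimal keepalived configuration sketch that floats a global VIP across the control plane nodes. All values here (interface name, VIP 192.0.2.10, router ID, shared secret) are illustrative assumptions, not installer defaults; the installer generates its own configuration when you choose the software load balancer option:

```conf
# /etc/keepalived/keepalived.conf (illustrative sketch)
vrrp_instance global_vip {
    state MASTER            # use BACKUP on the other control plane nodes
    interface eth0          # assumption: adjust to your NIC name
    virtual_router_id 51    # must match on all nodes in this VRRP group
    priority 100            # lower on BACKUP nodes (e.g. 90, 80)
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass Secr3t    # assumption: choose your own shared secret
    }
    virtual_ipaddress {
        192.0.2.10/24       # assumption: the global VIP from the table below
    }
}
```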

Network Resources

| Resource    | Mandatory | Quantity  | Description |
| ----------- | --------- | --------- | ----------- |
| global VIP  | Mandatory | 1         | Used by nodes in the cluster to access kube-apiserver; configured on the load balancing device to ensure high availability. This IP can also be used as the access address for the platform Web UI. Workload clusters in the same network as the global cluster can also access the global cluster through this IP. |
| External IP | Optional  | On demand | Required when some workload clusters are not in the same network as the global cluster, such as in a hybrid cloud scenario; workload clusters in other networks access the global cluster through this IP. This IP must be configured on the load balancing device to ensure high availability, and can also be used as the access address for the platform Web UI. |
| Domain Name | Optional  | On demand | If you need to access the global cluster or the platform Web UI through a domain name, provide it in advance and ensure that it resolves correctly. |
| Certificate | Optional  | On demand | A trusted certificate is recommended to avoid browser security warnings; if none is provided, the installer generates a self-signed certificate, which may cause security warnings over HTTPS. |

INFO

A domain name must be provided in the following cases:

  1. The global cluster needs to support IPv6 access.
  2. You plan to implement disaster recovery for the global cluster.

NOTE

If the platform needs to configure multiple access addresses (for example, addresses for internal and external networks), please prepare the corresponding IP addresses or domain names in advance according to the table above. You can configure them in the installation parameters later, or add them according to the product documentation after installation.
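
As a reference for the Certificate row above, a self-signed certificate similar to what the installer would generate can be created with openssl (a minimal sketch requiring OpenSSL 1.1.1 or later; the domain platform.example.com, the VIP 192.0.2.10, and the 365-day validity are illustrative assumptions):

```bash
# Generate a self-signed certificate for the platform access address.
# Replace platform.example.com and 192.0.2.10 with your domain name and VIP.
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout platform.key -out platform.crt \
  -subj "/CN=platform.example.com" \
  -addext "subjectAltName=DNS:platform.example.com,IP:192.0.2.10"
```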

Network Configuration

| Type             | Requirement Description |
| ---------------- | ----------------------- |
| Network Speed    | ≥1Gbps (10Gbps recommended) between the global cluster and workload clusters in the same network; ≥100Mbps (1Gbps recommended) across networks. Insufficient speed will significantly reduce data query performance. |
| Network Latency  | ≤10 ms within the same cluster; ≤100 ms (≤30 ms recommended) across clusters. |
| Network Policy   | Refer to LoadBalancer Forwarding Rules to ensure that the necessary ports are open; when using Calico CNI, ensure that the IP-in-IP protocol is enabled. |
| IP Address Range | The global cluster nodes should avoid the 172.16.x.x–172.32.x.x network segments. If these segments are already in use, adjust the Docker configuration (add the bip parameter) to avoid conflicts. |
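
To move Docker's default bridge off a conflicting segment, set the bip option in /etc/docker/daemon.json (a minimal sketch; the 192.168.100.1/24 value is an illustrative assumption, choose any range that does not conflict with your networks):

```json
{
  "bip": "192.168.100.1/24"
}
```

After editing the file, restart Docker (for example, with systemctl restart docker) for the change to take effect.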

LoadBalancer Forwarding Rules

These rules ensure that the global cluster can receive traffic from the LoadBalancer normally. Check your network policy against the following table to ensure that the relevant ports are open.

| Source IP               | Protocol | Destination IP             | Destination Port | Description |
| ----------------------- | -------- | -------------------------- | ---------------- | ----------- |
| global VIP, External IP | TCP      | All control plane node IPs | 443              | Provides HTTPS access to the platform Web UI, the image repository, and the Kubernetes API Server. The default port is 443. If you need a custom HTTPS port, replace the destination port in the forwarding rule with your custom port number, and later enter the same port number in the Web UI installation parameters. |
| global VIP, External IP | TCP      | All control plane node IPs | 6443             | Provides access to the Kubernetes API Server for nodes within the cluster. |
| global VIP, External IP | TCP      | All control plane node IPs | 11443            | Provides access to the image repository for nodes within the cluster. Note: if you plan to use an external image repository instead of the default one provided by the global cluster, you do not need to configure this port. |
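
If you use the haproxy + keepalived option described above, the forwarding rules in this table translate into TCP frontends such as the following (a minimal haproxy.cfg sketch; the control plane node IPs 10.0.0.11–10.0.0.13 are illustrative assumptions, and the installer generates its own configuration):

```conf
# haproxy.cfg (illustrative sketch): TCP passthrough for the ports above.
defaults
    mode    tcp
    timeout connect 5s
    timeout client  1h
    timeout server  1h

frontend https_in
    bind *:443
    default_backend control_plane_443

backend control_plane_443
    option tcp-check                  # health check, as recommended in the TIP below
    server cp1 10.0.0.11:443 check    # assumption: control plane node IPs
    server cp2 10.0.0.12:443 check
    server cp3 10.0.0.13:443 check

# Repeat the same frontend/backend pattern for ports 6443 and 11443.
```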

TIP
  • It is recommended to configure health checks on the LoadBalancer to monitor the port status.
  • If you plan to implement disaster recovery for the global cluster, you also need to open port 2379 on all control plane nodes for etcd data synchronization between the primary and disaster recovery clusters.
  • The platform supports only HTTPS by default. If HTTP support is required, you need to open the HTTP port on all control plane nodes.
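
On nodes that use firewalld, the additional etcd port from the TIP above can be opened as follows (a minimal sketch; adjust the tool and zone to your environment):

```bash
# Open the etcd synchronization port on each control plane node (firewalld example).
firewall-cmd --permanent --add-port=2379/tcp
firewall-cmd --reload
```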