Evaluating Resources for Workload Clusters

This topic provides recommended performance and scalability practices for workload cluster control planes.

INFO

All of the following requirements are based on a minimal setup, that is, an installation of only the core packages.
In practice, actual requirements may be higher; refer to each extension's documentation for its additional resource requirements.

Control Plane Node Sizing

Control plane sizing needs change with the cluster's node count, node types, and the number and kind of objects running. The recommendations below are derived from Cluster-density tests that evaluate control plane behavior under load. Those tests deploy the following objects across a specified set of namespaces:

  • 6 deployments whose pods only run the sleep process (see the sketch after this list)
  • 6 services
  • 6 ingresses pointing to the preceding services
  • 12 secrets, each containing 2,048 random string characters
  • 12 config maps, each containing 2,048 random string characters
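
For illustration, a minimal sketch of one such test deployment is shown below; the names, namespace, and image are hypothetical placeholders rather than the exact objects used in the tests.

```yaml
# Illustrative sketch only: one of the six per-namespace test deployments,
# whose pod does nothing but run the sleep process.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-density-sleep-1     # hypothetical name
  namespace: cluster-density-ns-0   # hypothetical test namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-density-sleep-1
  template:
    metadata:
      labels:
        app: cluster-density-sleep-1
    spec:
      containers:
        - name: sleep
          image: registry.example.com/library/busybox:latest  # placeholder image
          command: ["sleep", "86400"]  # the container simply sleeps
```
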
| Number of worker nodes | Cluster-density (namespaces) | CPU cores | Memory (GB) |
| --- | --- | --- | --- |
| 24 | 500 | 4 | 16 |
| 120 | 1,000 | 8 | 32 |
| 254 | 4,000 | 24 | 128 |

The data in the table above comes from an environment installed on AWS, using r5.4xlarge instances (16 vCPUs, 128 GB RAM) as control-plane nodes and m5.2xlarge instances (8 vCPUs, 32 GB RAM) as worker nodes.

On large, dense clusters with three control-plane nodes, taking one node offline—whether due to unexpected issues like power or network failures, infrastructure problems, or an intentional shutdown to save costs—forces the remaining two nodes to handle the extra work. When that occurs, CPU and memory usage on the surviving control-plane machines can rise significantly.
You will also see this pattern during upgrades, because control-plane nodes are typically cordoned, drained, and rebooted one after another while control-plane Operators are updated. That sequential maintenance concentrates demand on the nodes that remain active.
To reduce the risk of cascading failures, plan for sufficient headroom on control-plane servers: target an overall CPU and memory utilization of about 60% or less so the cluster can absorb transient load increases. If necessary, increase the control-plane CPU and RAM to prevent potential outages caused by resource exhaustion.

IMPORTANT NOTES

The node sizing varies depending on the number of nodes and object counts in the cluster. It also depends on whether the objects are actively being created on the cluster. During object creation, the control plane is more active in terms of resource usage compared to when the objects are in the Running phase.

Tested Cluster Maximums for Major Releases

WARNING

Overcommitting a node's physical resources can weaken the scheduling guarantees that Kubernetes relies on. Apply controls such as resource requests and limits, QoS classes, and node tuning to minimize memory swapping and other resource contention.
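
As one concrete illustration of those controls (not a prescribed configuration), the sketch below sets requests equal to limits so the pod receives the Guaranteed QoS class and cannot consume more than the resources it reserved; all names and values are hypothetical.

```yaml
# Illustrative sketch only: equal requests and limits for every container
# place the pod in the Guaranteed QoS class and avoid overcommitting the node.
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-qos-example          # hypothetical name
spec:
  containers:
    - name: app
      image: registry.example.com/library/app:latest  # placeholder image
      resources:
        requests:
          cpu: "500m"
          memory: "512Mi"
        limits:
          cpu: "500m"                    # limits == requests -> Guaranteed QoS
          memory: "512Mi"
```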

The figures in this document come from Alauda's specific configuration, methodology, and tuning. Rather than presenting fixed caps, the following entries list the maximums observed under the tested conditions.
Because the possible combinations of version, control-plane workload, and network plugin are effectively unlimited, these values are not guaranteed limits for all deployments and may not be simultaneously achievable across all dimensions.
Use them as guidance when planning deployments with similar characteristics.

INFO

When increasing or decreasing the number of nodes in your cluster, it is recommended to:

  • Spread nodes across all available zones; this contributes to higher availability.
  • Scale up or down by no more than 25 to 50 nodes at a time.

When scaling down large, densely populated clusters, the operation can take considerable time because workloads on the nodes slated for removal must be relocated or terminated before those nodes are shut down. This can be particularly lengthy when many resources must be processed at once.
If a large number of objects must be evicted, the API client may begin to rate-limit requests. The default client queries-per-second (QPS) and burst settings are 50 and 100, respectively, and these values cannot be changed on this platform.
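
One standard Kubernetes mechanism for keeping evictions orderly while nodes are drained during a scale-down is a PodDisruptionBudget, which limits how many pods of an application the eviction API may remove at the same time; the sketch below uses hypothetical names.

```yaml
# Illustrative sketch only: node drains will not evict pods of this
# application below the configured availability floor.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb            # hypothetical name
spec:
  minAvailable: "80%"      # keep at least 80% of matching pods running
  selector:
    matchLabels:
      app: web             # hypothetical label
```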

| Maximum type | Tested maximum |
| --- | --- |
| Number of nodes | 500 [1] |
| Number of pods [2] | 100,000 |
| Number of pods per node | 250 |
| Number of namespaces [3] | 10,000 |
| Number of pods per namespace [4] | 25,000 |
| Number of secrets | 40,000 |
| Number of config maps | 40,000 |
| Number of services | 10,000 |
| Number of services per namespace | 5,000 |
| Number of back-ends per service | 5,000 |
| Number of deployments per namespace [4] | 2,000 |
| Number of custom resource definitions (CRDs) | 1,024 [5] |
  1. The platform supports clusters with more than 500 nodes; the value listed here is the recommended maximum. If your use case requires a larger cluster, please contact our technical support.
  2. The pod numbers shown are counts from the test environment. Actual pod capacity will vary according to each application's memory, CPU, and storage requirements.
  3. With many active projects, etcd performance can degrade if the keyspace grows too large and surpasses its space quota. Regular etcd maintenance—such as defragmentation—is recommended to reclaim storage and avoid performance issues.
  4. Several control loops iterate over all objects in a namespace in response to state changes. A very large number of objects of a single type within one namespace makes those loops expensive and can slow processing. The stated limit assumes the system has sufficient CPU, memory, and disk to meet application needs.
  5. The test was run on a 29-server cluster (3 control-plane nodes, 2 infrastructure nodes, and 24 worker nodes) with 500 namespaces. The platform enforces a cap of 1,024 total custom resource definitions (CRDs), including those it provides, CRDs added by integrated products, and user-created CRDs. Creating more than 1,024 CRDs may cause kubectl requests to become throttled.

Examples

As an example, 500 worker nodes (m5.2xlarge instances: 8 vCPUs and 32 GB of memory) were tested and are supported on version 4.1, with the Kube-OVN network plugin and the following workload objects:

  • 200 namespaces, in addition to the defaults
  • 60 pods per node; 30 server and 30 client pods (30k total)
  • 15 services/ns backed by the server pods (3k total)
  • 20 secrets/ns (4k total)
  • 10 config maps/ns (2k total)
  • 6 network policies/ns, including deny-all, allow-from-ingress, and intra-namespace rules (a deny-all sketch follows this list)
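
For reference, a minimal default deny-all ingress NetworkPolicy of the kind listed above might look like the following sketch; the name and namespace are hypothetical.

```yaml
# Illustrative sketch only: selects every pod in the namespace and allows
# no ingress traffic, i.e. a default deny-all ingress policy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress        # hypothetical name
  namespace: cluster-density-ns-0   # hypothetical test namespace
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```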

The following factors are known to affect cluster workload scaling, positively or negatively, and should be factored into the scale numbers when planning a deployment. For additional information and guidance, please contact our technical support team.

  • Number of pods per node
  • Number of containers per pod
  • Type of probes used (for example, liveness/readiness, exec/http)
  • Number of network policies
  • Number of projects, or namespaces
  • Number of services/endpoints and type
  • Number of shards
  • Number of secrets
  • Number of config maps
  • Rate of API calls, an estimate of how quickly the cluster configuration changes.
    • Prometheus query for pod creation requests per second over 5-minute windows: sum(irate(apiserver_request_count{resource="pods",verb="POST"}[5m]))
    • Prometheus query for all API requests per second over 5-minute windows: sum(irate(apiserver_request_count{}[5m]))
  • Cluster node resource consumption of CPU
  • Cluster node resource consumption of memory