Cluster Node Planning

A cluster uses the Kubernetes node role labels node-role.kubernetes.io/<role> to assign different roles to nodes. For brevity, this document refers to labels of this form as role labels.

By default, a cluster contains two types of nodes: control plane nodes and worker nodes, used to host control plane workloads and application workloads, respectively.

In a cluster:

  • The control plane nodes are labeled with the role label node-role.kubernetes.io/control-plane.

    Note:

    Prior to Kubernetes v1.24, the community also used the label node-role.kubernetes.io/master to mark control plane nodes. For backward compatibility, both labels are considered valid for identifying control plane nodes.

  • The worker nodes, by default, have no role labels. However, you can explicitly assign the role label node-role.kubernetes.io/worker to a worker node if desired.
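
You can see the effect of these labels with kubectl get nodes: the ROLES column is derived from the role labels, and workers without one show <none>. The node names, ages, and versions below are purely illustrative:

# kubectl get nodes
NAME              STATUS   ROLES           AGE   VERSION
192.168.143.130   Ready    control-plane   30d   v1.28.0
192.168.143.133   Ready    <none>          30d   v1.28.0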

In addition to these default role labels, you can also define custom role labels on worker nodes to further classify them into different functional types. For example:

  • You can add the role label node-role.kubernetes.io/infra to designate a node as an infra node, intended for hosting infrastructure components.

  • You can add the role label node-role.kubernetes.io/log to designate a node as a log node, specialized for hosting logging components.

This document will guide you through creating infra nodes and custom role nodes, and migrating workloads to those nodes.

Creating Infra Nodes on a Non-Immutable Cluster

By default, a cluster only includes control plane nodes and worker nodes. If you want to designate certain worker nodes as infra nodes dedicated to hosting infrastructure components, you need to manually add the appropriate role label and taint to those nodes.

Note:

The operations in this section are only applicable to non-immutable clusters. That is, the following operations are not supported on cloud clusters (such as EKS managed clusters deployed via the Alauda Container Platform EKS Provider Cluster Plugin), third-party clusters, or clusters where the nodes use an immutable OS.

Adding Infra Nodes

Step 1: Add the Infra Role Label to the Node

kubectl label nodes 192.168.143.133 node-role.kubernetes.io/infra="" --overwrite

This command adds the infra role label to the Node 192.168.143.133: node-role.kubernetes.io/infra: "", indicating that the node is an infra node.
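
To quickly confirm which nodes carry the infra role label, you can list nodes by label selector:

kubectl get nodes -l node-role.kubernetes.io/infra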

Step 2: Add a Taint to the Node

Add a taint to prevent other workloads from being scheduled onto the infra node.

kubectl taint nodes 192.168.143.133 node-role.kubernetes.io/infra=reserved:NoSchedule

This command adds the taint node-role.kubernetes.io/infra=reserved:NoSchedule to Node 192.168.143.133, indicating that only applications that tolerate this taint can be scheduled onto this node.
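
If you need to roll back this change, the same command with a trailing hyphen removes the taint:

kubectl taint nodes 192.168.143.133 node-role.kubernetes.io/infra=reserved:NoSchedule-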

Step 3: Verify the Label and Taint

Check whether the node has been assigned the infra role label and taint:

# kubectl describe node 192.168.143.133
Name:               192.168.143.133
Roles:              infra
Labels:             node-role.kubernetes.io/infra=
                    ...
Taints:             node-role.kubernetes.io/infra=reserved:NoSchedule

The output shows that Node 192.168.143.133 has been configured as an infra node and carries the taint node-role.kubernetes.io/infra=reserved:NoSchedule.
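
Alternatively, you can read the taints directly from the Node spec with a JSONPath query; the expected output is shown as a comment (exact formatting may vary slightly between kubectl versions):

kubectl get node 192.168.143.133 -o jsonpath='{.spec.taints}'
# [{"effect":"NoSchedule","key":"node-role.kubernetes.io/infra","value":"reserved"}]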

Migrating Pods to Infra Nodes

If you want to schedule specific Pods onto infra nodes, you need to add the following configuration to them:

  • A nodeSelector targeting the infra role label.
  • Corresponding tolerations for the infra node's taint.

Below is an example Pod manifest configured to run on the infra node.

apiVersion: v1
kind: Pod
metadata:
  name: infra-pod-demo
  namespace: default
spec:
  ...
  nodeSelector:
    node-role.kubernetes.io/infra: ""
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/infra
    value: reserved
    operator: Equal
  ...

The nodeSelector ensures that the Pod is scheduled only onto nodes with the label node-role.kubernetes.io/infra: "", and the toleration allows the Pod to tolerate the taint node-role.kubernetes.io/infra=reserved:NoSchedule.

With these configurations, the Pod will be scheduled onto the infra node.
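
For an existing workload, you can apply the same settings without editing the manifest by patching its Pod template. The following is a minimal sketch that assumes a hypothetical Deployment named my-app in the default namespace; note that a JSON merge patch replaces the entire tolerations list:

kubectl -n default patch deployment my-app --type merge -p '{
  "spec": {
    "template": {
      "spec": {
        "nodeSelector": {"node-role.kubernetes.io/infra": ""},
        "tolerations": [
          {"key": "node-role.kubernetes.io/infra", "operator": "Equal", "value": "reserved", "effect": "NoSchedule"}
        ]
      }
    }
  }
}'

The patch triggers a rolling update, after which the Deployment's Pods are rescheduled onto infra nodes.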

Note:

Moving pods installed via OLM Operators or Cluster Plugins to an infra node is not always possible. Whether these pods can be moved depends on the configuration options exposed by each Operator or Cluster Plugin.

Custom Node Planning

Beyond infra nodes, you may want to designate worker nodes for other specialized purposes — such as hosting logging components, storage services, or monitoring agents.

You can achieve this by assigning more custom role labels and corresponding taints to worker nodes, effectively turning them into custom role nodes.

General Steps for Defining Custom Role Nodes

The process is similar to creating infra nodes.

Step 1: Add a Custom Role Label

kubectl label nodes <node> node-role.kubernetes.io/<role>="" --overwrite

Replace <role> with your desired role name, such as monitoring, storage, or log.

Step 2: Add a Corresponding Taint

kubectl taint nodes <node> node-role.kubernetes.io/<role>=<value>:NoSchedule

Replace <role> with your custom role name and replace <value> with a meaningful descriptor, such as reserved or dedicated. This value is optional but useful for documentation and clarity.
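
If you omit the value, the taint consists of only a key and an effect, and Pods must then tolerate it with the Exists operator instead of Equal. A minimal sketch:

kubectl taint nodes <node> node-role.kubernetes.io/<role>:NoSchedule

# Matching toleration in the Pod spec:
tolerations:
- key: node-role.kubernetes.io/<role>
  operator: Exists
  effect: NoSchedule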

Step 3: Verify the Configuration

kubectl describe node <node>

Ensure the Labels and Taints fields reflect your custom role configuration.

Example: Create A Node Dedicated To Logging Components

If you want to dedicate a node to hosting logging components, you can add the log role. In this case, create the log node as follows.

Step 1: Add the Log Role Label

kubectl label nodes 192.168.143.133 node-role.kubernetes.io/log="" --overwrite

This label indicates that the node is designated for log-related workloads.

Step 2: Add a Taint to the Node

kubectl taint nodes 192.168.143.133 node-role.kubernetes.io/log=reserved:NoSchedule

This taint prevents workloads without a matching toleration from being scheduled onto the node.

Step 3: Verify the Label and Taint

# kubectl describe node 192.168.143.133
Name:               192.168.143.133
Roles:              log
Labels:             node-role.kubernetes.io/log=
                    ...
Taints:             node-role.kubernetes.io/log=reserved:NoSchedule

This confirms that the node has been successfully configured as a log node with the appropriate label and taint.
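
To make use of the log node, point your logging workloads at it with the same nodeSelector and toleration pattern used for infra nodes. The Deployment below is a minimal, self-contained sketch; the name and the busybox image are placeholders rather than an actual logging component:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: log-demo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: log-demo
  template:
    metadata:
      labels:
        app: log-demo
    spec:
      # Schedule only onto nodes carrying the log role label.
      nodeSelector:
        node-role.kubernetes.io/log: ""
      # Tolerate the taint added in Step 2 above.
      tolerations:
      - key: node-role.kubernetes.io/log
        operator: Equal
        value: reserved
        effect: NoSchedule
      containers:
      - name: demo
        image: busybox:1.36
        command: ["sh", "-c", "while true; do sleep 3600; done"]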

By following the above practices, you can effectively partition your Kubernetes nodes based on their intended purpose, improve workload isolation, and ensure that specific components are deployed onto appropriately configured nodes.