HPA (Horizontal Pod Autoscaler) automatically scales the number of pods up or down based on preset policies and metrics, enabling applications to handle sudden spikes in business load while optimizing resource utilization during low-traffic periods.
You can create a horizontal pod autoscaler to specify the minimum and maximum number of pods you want to run, as well as the CPU utilization or memory utilization your pods should target.
After you create a horizontal pod autoscaler, the platform begins to query the CPU and/or memory resource metrics on the pods. When these metrics are available, the horizontal pod autoscaler computes the ratio of the current metric utilization to the desired metric utilization, and scales up or down accordingly. The query and scaling occur at a regular interval, but it can take one to two minutes before metrics become available.
For replication controllers, this scaling corresponds directly to the replicas of the replication controller. For deployment configurations, scaling corresponds directly to the replica count of the deployment configuration. Note that autoscaling applies only to the latest deployment in the Complete phase.
The platform automatically accounts for resources and prevents unnecessary autoscaling during resource spikes, such as during startup. Pods in the unready state have 0 CPU usage when scaling up, and the autoscaler ignores them when scaling down. Pods without known metrics have 0% CPU usage when scaling up and 100% CPU usage when scaling down. This allows for more stability during HPA decisions. To use this feature, you must configure readiness checks to determine whether a new pod is ready for use.
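For example, a minimal readiness check sketch on the pod template (the endpoint path and port are illustrative assumptions):

```yaml
# Part of a container spec in the pod template; the probe endpoint is an assumption.
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5    # allow the container time to start before probing
  periodSeconds: 10         # re-check readiness every 10 seconds
```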
The horizontal pod autoscaler (HPA) extends the concept of pod auto-scaling. The HPA lets you create and manage a group of load-balanced pods. The HPA automatically increases or decreases the number of pods when a given CPU or memory threshold is crossed.
The HPA works as a control loop with a default of 15 seconds for the sync period. During this period, the controller manager queries the CPU, memory utilization, or both, against what is defined in the configuration for the HPA. The controller manager obtains the utilization metrics from the resource metrics API for per-pod resource metrics like CPU or memory, for each pod that is targeted by the HPA.
If a utilization value target is set, the controller calculates the utilization value as a percentage of the equivalent resource request on the containers in each pod. The controller then takes the average of utilization across all targeted pods and produces a ratio that is used to scale the number of desired replicas.
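In upstream Kubernetes, this ratio yields the desired replica count as follows; for example, 3 replicas at an average of 90% CPU against a 60% target gives ceil(3 × 90/60) = 5 replicas:

```
desiredReplicas = ceil[currentReplicas * (currentMetricValue / desiredMetricValue)]
```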
The following metrics are supported by horizontal pod autoscalers:
Metric | Description |
---|---|
CPU Utilization | Number of CPU cores used. Can be used to calculate a percentage of the pod's requested CPU. |
Memory Utilization | Amount of memory used. Can be used to calculate a percentage of the pod's requested memory. |
Network Inbound Traffic | Amount of network traffic coming into the pod, measured in KiB/s. |
Network Outbound Traffic | Amount of network traffic going out from the pod, measured in KiB/s. |
Storage Read Traffic | Amount of data read from storage, measured in KiB/s. |
Storage Write Traffic | Amount of data written to storage, measured in KiB/s. |
Important: For memory-based autoscaling, memory usage must increase and decrease proportionally to the replica count. On average:
- An increase in replica count must lead to an overall decrease in memory (working set) usage per-pod.
- A decrease in replica count must lead to an overall increase in per-pod memory usage.
- Use the platform to check the memory behavior of your application and ensure that your application meets these requirements before using memory-based autoscaling.
Ensure that the monitoring components are deployed in the current cluster and are functioning properly. You can check the deployment and health status of the monitoring components by clicking the top right corner of the platform > Platform Health Status.
You can create a horizontal pod autoscaler using the command line interface by defining a YAML file and using the kubectl create command. The following example shows autoscaling for a Deployment object. The initial deployment requires 3 pods. The HPA object increases the minimum to 5. If CPU usage on the pods reaches 75%, the number of pods increases to 7:
Create a file named hpa.yaml with the following content:
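A minimal sketch matching the description above (the HPA and Deployment names are illustrative, and the autoscaling/v2beta2 API version is assumed):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: cpu-autoscale           # illustrative HPA name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment   # the Deployment to scale; illustrative name
  minReplicas: 5               # raises the minimum above the initial 3 pods
  maxReplicas: 7
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75 # scale up when average CPU usage reaches 75%
```

Create the HPA from the file:

```bash
kubectl create -f hpa.yaml
```

Example output (for the illustrative name above):

```
horizontalpodautoscaler.autoscaling/cpu-autoscale created
```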
Enter Container Platform.
In the left navigation bar, click Workloads > Deployments.
Click on Deployment Name.
Scroll down to the Elastic Scaling area and click on Update on the right.
Select Horizontal Scaling and complete the policy configuration.
Parameter | Description |
---|---|
Pod Count | After a deployment is created, evaluate the Minimum Pod Count needed for known, regular business volume, as well as the Maximum Pod Count that the namespace quota can support under high business pressure. Both values can be changed after setting; it is recommended to first derive accurate values through performance testing and to adjust them continuously during use to meet business needs. |
Trigger Policy | List the Metrics that are sensitive to business changes and their Target Thresholds to trigger scale-up or scale-down actions. For example, if you set CPU Utilization = 60%, once the CPU utilization deviates from 60%, the platform starts to automatically adjust the number of pods based on the current deployment's insufficient or excessive resource allocation. Note: Metric types include built-in metrics and custom metrics. Custom metrics only apply to deployments in native applications, and you must first add custom metrics. |
Scale Up/Down Step (Alpha) | For businesses with specific scaling rate requirements, you can gradually adapt to changes in business volume by specifying a Scale-Up Step or Scale-Down Step. For the scale-down step, you can customize the Stability Window, which defaults to 300 seconds, meaning the platform waits 300 seconds before executing scale-down actions (see the API sketch after these steps). |
Click Update.
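For reference, in the upstream Kubernetes API (autoscaling/v2beta2 and later), a scale-down stability window and step are expressed through the HPA `behavior` field. A minimal sketch, not necessarily what the platform generates:

```yaml
spec:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300   # wait 300 s of stable metrics before scaling down
      policies:
      - type: Pods
        value: 1                        # remove at most 1 pod per period (the scale-down step)
        periodSeconds: 60
```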
Custom metrics HPA extends the original HorizontalPodAutoscaler by supporting additional metrics beyond CPU and memory utilization.
Traditional HPA supports CPU utilization and memory metrics to dynamically adjust the number of Pod instances, as shown in the example below:
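A minimal sketch of such a traditional HPA (the names are illustrative):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa               # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                 # the workload to scale; illustrative name
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
```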
In this YAML, `scaleTargetRef` specifies the workload object to be scaled, and `targetCPUUtilizationPercentage` specifies the CPU utilization trigger metric.
To use custom metrics, you need to install prometheus-operator and custom-metrics-api. After installation, custom-metrics-api provides a large number of custom metric resources:
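For example, you can enumerate them with a raw API call (assuming the custom metrics API is registered at the standard group; piping to `jq` is optional pretty-printing):

```bash
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .
```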
These resources are all sub-resources under MetricValueList; you can create and maintain them through Prometheus rules. The HPA YAML format for custom metrics differs from traditional HPA:
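A sketch in the autoscaling/v2beta1 format, assuming a per-Pod custom metric named http_requests is exposed through the custom metrics API:

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo                  # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo                # illustrative workload to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: http_requests  # assumed custom metric
      targetAverageValue: 10     # desired average value per Pod
```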
In this example, `scaleTargetRef` specifies the workload. `metrics` is an array type, supporting multiple metrics. The metric `type` can be:

- Object: describes a k8s resource
- Pods: describes metrics for each Pod
- Resource: built-in k8s metrics (CPU, memory)
- External: typically metrics external to the cluster

The main structure of a metric is as follows:
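A sketch of two common variants, using autoscaling/v2beta1 field names (the metric names and target object are assumptions):

```yaml
- type: Pods
  pods:
    metricName: http_requests          # per-Pod metric from the custom metrics API
    targetAverageValue: 10
- type: Object
  object:
    metricName: requests_per_second    # metric attached to a single k8s object
    target:
      apiVersion: extensions/v1beta1
      kind: Ingress
      name: main-route
    targetValue: 2k
```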
This metric data is collected and updated by Prometheus.
Custom metrics HPA YAML is actually compatible with the original core metrics (CPU). Here's how to write it:
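A sketch of the metrics entry for core CPU metrics in the custom metrics YAML format (autoscaling/v2beta1 field names):

```yaml
metrics:
- type: Resource
  resource:
    name: cpu
    targetAverageUtilization: 50   # percent of the Pods' requested CPU
```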
- `targetAverageValue` is the average value obtained for each Pod.
- `targetAverageUtilization` is the utilization calculated from the raw value as a percentage of the Pod's resource request.

The algorithm reference is:
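The upstream rule, as documented for the Kubernetes HPA:

```
desiredReplicas = ceil[currentReplicas * (currentMetricValue / desiredMetricValue)]
```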
autoscaling/v2beta2 supports memory utilization:
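A sketch (names are illustrative):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: memory-hpa             # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                  # illustrative workload
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 60   # percent of the Pods' requested memory
```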
Changes: `targetAverageUtilization` and `targetAverageValue` have been replaced by `target`, a combination of a value field and a `type`:

- Value field: `averageValue` (average value), `averageUtilization` (average utilization), or `value` (direct value)
- `type`: Utilization (utilization), Value (direct value), or AverageValue (average value)

Notes:
For CPU Utilization and Memory Utilization metrics, auto-scaling will only be triggered when the actual value fluctuates outside the range of ±10% of the target threshold.
Scale-down may impact ongoing business operations; please proceed with caution.
When business metrics change, the platform automatically calculates the target pod count that matches the business volume according to the following rules and adjusts accordingly. If the business metrics continue to rise or fall, the pod count will eventually be held at the set Minimum Pod Count or Maximum Pod Count.
Single Policy Target Pod Count: ceil[sum(actual metric values) / metric threshold]. That is, the sum of the actual metric values of all pods is divided by the metric threshold, then rounded up to the smallest integer greater than or equal to the result. For example: there are currently 3 pods with CPU utilizations of 80%, 80%, and 90%, and the set CPU utilization threshold is 60%. According to the formula, the number of pods is automatically adjusted to ceil[(80% + 80% + 90%) / 60%] = ceil(4.17) = 5 pods.
Note:
If the calculated target pod count exceeds the set Maximum Pod Count (for example, 4), the platform scales up to only 4 pods. If the metrics remain persistently high even after raising the maximum pod count, you may need alternate scaling methods, such as increasing the namespace pod quota or adding hardware resources.
If the calculated target pod count (in the previous example 5) is less than the pod count adjusted according to the Scale-Up Step (for example 10), the platform will only scale up to 5 pods.
Multiple Policy Target Pod Count: Take the maximum value among the results of each policy calculation. For example, if a CPU policy calculates 5 pods and a memory policy calculates 3 pods, the target pod count is 5.