Configure a Load Balancer
A Load Balancer is a service that distributes traffic across container instances. Using load balancing, access traffic for a computing component is automatically allocated and forwarded to that component's container instances. Load balancing improves the fault tolerance of computing components, scales their external service capability, and enhances application availability.
Platform administrators can create single-point or high-availability load balancers for any cluster on the platform and manage and allocate load balancer resources in a unified way. For example, a load balancer can be assigned to specific projects so that only users with the appropriate project permissions can use it.
Please refer to the table below for explanations of related concepts in this section.
Prerequisites
The high availability of the Load Balancer requires a VIP. Please refer to Configure VIP.
Example ALB2 custom resource (CR)
- When `enableLbSvc` is true, an internal LoadBalancer type Service is created for the load balancer's access address. For `lbSvcAnnotations`, refer to LoadBalancer Type Service Annotations.
- Check the Network Mode configuration below.
- Check the Resource Allocation Method below.
- Check the Assigned Project below.
- Check the Specification below.
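A minimal sketch of an ALB2 resource reflecting the callouts above. The API version, namespace, and field placement under `spec.config` are assumptions based on the open-source ALB operator; names are placeholders, and the exact schema may differ by platform version, so verify it against the ALB2 CRD on your cluster.

```yaml
apiVersion: crd.alauda.io/v2beta1   # assumed API group/version of the ALB operator
kind: ALB2
metadata:
  name: test-alb                    # placeholder load balancer name
  namespace: cpaas-system           # assumed namespace for ALB instances
spec:
  type: nginx
  config:
    replicas: 3                     # Specification: number of load balancer instances
    enableLbSvc: false              # when true, an internal LoadBalancer type Service is created
    lbSvcAnnotations: {}            # see LoadBalancer Type Service Annotations when enableLbSvc is true
    networkMode: host               # Network Mode (see below)
    projects:                       # Assigned Project(s) permitted to use this load balancer
      - project-1
```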
Creating a Load Balancer by using the web console
1. Navigate to Platform Management.
2. In the left sidebar, click Network Management > Load Balancer.
3. Click Create Load Balancer.
4. Follow the instructions below to complete the network configuration.
5. Follow the instructions below to complete the resource configuration.
6. Click Create. The creation process will take some time; please be patient.
Creating a Load Balancer by using the CLI
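As a hedged sketch, a load balancer can also be created by applying an ALB2 manifest with kubectl. The file name, resource name, and namespace below are placeholders, and the manifest follows the same assumptions as the example above; the `alb2` resource name assumes the open-source ALB CRDs.

```yaml
# Save as alb2.yaml, then create the load balancer:
#   kubectl apply -f alb2.yaml
# Check the instance until it is ready:
#   kubectl get alb2 test-alb -n cpaas-system
apiVersion: crd.alauda.io/v2beta1
kind: ALB2
metadata:
  name: test-alb
  namespace: cpaas-system
spec:
  type: nginx
  config:
    replicas: 1
    networkMode: host
```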
Update Load Balancer by using the web console
Updating the load balancer will cause a service interruption for 3 to 5 minutes. Please choose an appropriate time for this operation!
1. Enter Platform Management.
2. In the left navigation bar, click Network Management > Load Balancer.
3. Click ⋮ > Update.
4. Update the network and resource configuration as needed.
   - Set specifications according to business needs. You can also refer to How to properly allocate CPU and memory resources for guidance.
   - Internal routing only supports updating from the Disabled state to the Enabled state.
5. Click Update.
Delete Load Balancer by using the web console
After deleting the load balancer, the associated ports and rules will also be deleted and cannot be restored.
1. Enter Platform Management.
2. In the left navigation bar, click Network Management > Load Balancer.
3. Click ⋮ > Delete, and confirm.
Delete Load Balancer by using the CLI
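As a hedged sketch, assuming the load balancer exists as an ALB2 resource named test-alb in the cpaas-system namespace (both placeholders) and that the CRD exposes the `alb2` resource name, it can be deleted with kubectl. As noted above, the associated listener ports and rules are deleted with it and cannot be restored.

```yaml
# Delete the ALB2 resource:
#   kubectl delete alb2 test-alb -n cpaas-system
```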
Configure Listener Ports (Frontend)
The load balancer supports receiving client connection requests through listener ports and corresponding protocols, including HTTPS, HTTP, gRPC, TCP, and UDP.
Prerequisites
If you need to add an HTTPS listener port, you should also contact the administrator to assign a TLS certificate to the current project for encryption.
Example Frontend custom resource (CR)
- Required; indicates the ALB instance to which this `Frontend` belongs.
- Format: `$alb_name-$port`.
- Format: `$secret_ns/$secret_name`.
- Protocol of this `Frontend` itself: `http|https|grpc|grpcs` for L7 proxy; `tcp|udp` for L4 proxy.
- For L4 proxy, `serviceGroup` is required. For L7 proxy, `serviceGroup` is optional. When a request arrives, the ALB first tries to match it against the rules associated with this `Frontend`; only if the request matches no rule does the ALB forward it to the default `serviceGroup` specified in the `Frontend` configuration.
- The `weight` configuration applies to the Round Robin and Weighted Round Robin scheduling algorithms.
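A minimal sketch of a Frontend resource reflecting the callouts above. The API version, label key, namespace, and the `certificate_name` and `serviceGroup` structures are assumptions based on the open-source ALB operator and may differ on your platform version.

```yaml
apiVersion: crd.alauda.io/v1          # assumed API group/version
kind: Frontend
metadata:
  name: test-alb-80                   # format: $alb_name-$port (placeholder)
  namespace: cpaas-system             # assumed namespace of the ALB instance
  labels:
    alb2.cpaas.io/name: test-alb      # required: the ALB instance this Frontend belongs to (assumed label key)
spec:
  port: 80
  protocol: http                      # http|https|grpc|grpcs for L7; tcp|udp for L4
  certificate_name: ""                # for https/grpcs: format $secret_ns/$secret_name (assumed field name)
  serviceGroup:                       # default backend; required for L4, optional for L7
    services:
      - name: my-service              # placeholder Service
        namespace: my-namespace
        port: 80
        weight: 100                   # used by Round Robin / Weighted Round Robin scheduling
```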
The ALB also watches Ingress resources and automatically creates the corresponding Frontend or Rule. The `source` field is defined as follows:
- `spec.source.type` currently only supports `ingress`.
- `spec.source.name` is the Ingress name.
- `spec.source.namespace` is the Ingress namespace.
Creating Listener Ports (Frontend) by using the web console
1. Go to Container Platform.
2. In the left navigation bar, click Network > Load Balancing.
3. Click the name of the load balancer to enter the details page.
4. Click Add Listener Port.
5. Refer to the following instructions to configure the relevant parameters.
6. Click OK.
Creating Listener Ports (Frontend) by using the CLI
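As a sketch, a listener port can also be created by applying a Frontend manifest like the example above with kubectl. The file name is a placeholder, and the `frontends.crd.alauda.io` resource name assumes the open-source ALB CRDs.

```yaml
# Save the Frontend example above as frontend.yaml, then run:
#   kubectl apply -f frontend.yaml
# Verify that the listener port was created:
#   kubectl get frontends.crd.alauda.io -n cpaas-system
```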
Subsequent Actions
For traffic on HTTP, gRPC, and HTTPS listener ports, you can configure additional backend service matching rules beyond the default internal routing group. The load balancer first matches a backend service according to these rules; if no rule matches, it falls back to the backend services of the default internal routing group mentioned above.
Related Operations
You can click the ⋮ icon on the right side of the list page or click Actions in the upper right corner of the details page to update the default route or delete the listener port as needed.
If the resource allocation method of the load balancer is Port, only administrators can delete the related listener ports in the Platform Management view.
Configure Rules
Add forwarding rules for the listener ports of HTTPS, HTTP, and gRPC protocols. The load balancer will match the backend services based on these rules.
Forwarding rules cannot be added for TCP and UDP protocols.
Example Rule custom resource (CR)
- Required; indicates the `Frontend` to which this rule belongs.
- Required; indicates the ALB to which this rule belongs.
- Same as in `Frontend`.
- Same as in `Frontend`.
- The lower the number, the higher the priority.
- Same as in `Frontend`.
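A minimal sketch of a Rule resource reflecting the callouts above. The API version, label keys, and overall schema follow the open-source ALB operator and are assumptions to verify against the Rule CRD on your cluster.

```yaml
apiVersion: crd.alauda.io/v1               # assumed API group/version
kind: Rule
metadata:
  name: test-alb-80-my-rule                # placeholder rule name
  namespace: cpaas-system                  # assumed namespace of the ALB instance
  labels:
    alb2.cpaas.io/name: test-alb           # required: the ALB this rule belongs to (assumed label key)
    alb2.cpaas.io/frontend: test-alb-80    # required: the Frontend this rule belongs to (assumed label key)
spec:
  priority: 4                              # lower number = higher priority
  dslx:                                    # matching criteria; see the dslx section below
    - type: URL
      values:
        - [ STARTS_WITH, /app-a ]
  serviceGroup:                            # same structure as in Frontend
    services:
      - name: my-service
        namespace: my-namespace
        port: 80
        weight: 100
```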
dslx
dslx is a domain-specific language used to describe matching criteria.
For example, a rule can match a request that satisfies all of the following criteria (a sketch of such a dslx expression follows the list):
- the URL starts with /app-a or /app-b
- the method is POST
- the URL parameter group is vip
- the host is *.app.com
- the header location is east-1 or east-2
- a cookie named uid exists
- the source IP is in the range 1.1.1.1-1.1.1.100
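As an illustration of the criteria above, a dslx expression might look like the sketch below. The matcher types (`URL`, `METHOD`, `PARAM`, `HOST`, `HEADER`, `COOKIE`, `SRC_IP`) and operators (`STARTS_WITH`, `EQ`, `IN`, `EXIST`, `RANGE`) are assumptions based on common ALB usage and should be checked against the Rule CRD documentation for your version.

```yaml
dslx:
  - type: URL              # URL starts with /app-a or /app-b
    values:
      - [ STARTS_WITH, /app-a ]
      - [ STARTS_WITH, /app-b ]
  - type: METHOD           # method is POST
    values:
      - [ EQ, POST ]
  - type: PARAM            # URL parameter group is vip
    key: group
    values:
      - [ EQ, vip ]
  - type: HOST             # host is *.app.com
    values:
      - [ EQ, "*.app.com" ]
  - type: HEADER           # header location is east-1 or east-2
    key: location
    values:
      - [ IN, east-1, east-2 ]
  - type: COOKIE           # a cookie named uid exists
    key: uid
    values:
      - [ EXIST ]
  - type: SRC_IP           # source IP is in the range 1.1.1.1-1.1.1.100
    values:
      - [ RANGE, 1.1.1.1, 1.1.1.100 ]
```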
Creating a Rule by using the web console
1. Go to Container Platform.
2. In the left navigation bar, click Network > Load Balancing.
3. Click the name of the load balancer.
4. Click the name of the listener port.
5. Click Add Rule.
6. Refer to the following descriptions to configure the relevant parameters.
7. Click Add.
Creating a Rule by using the CLI
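As a sketch, a forwarding rule can also be created by applying a Rule manifest like the example above with kubectl. The file name is a placeholder, and the `rules.crd.alauda.io` resource name and label key assume the open-source ALB CRDs.

```yaml
# Save the Rule example above as rule.yaml, then run:
#   kubectl apply -f rule.yaml
# List the rules attached to the ALB instance:
#   kubectl get rules.crd.alauda.io -n cpaas-system -l alb2.cpaas.io/name=test-alb
```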
Logs and Monitoring
By combining visualized logs and monitoring data, issues or failures with the load balancer can be quickly identified and resolved.
Viewing Logs
1. Go to Platform Management.
2. In the left navigation bar, click Network Management > Load Balancer.
3. Click the name of the load balancer.
4. In the Logs tab, view the load balancer's runtime logs from the container's perspective.
Monitoring Metrics
Monitoring services must be deployed in the cluster where the load balancer is located.
1. Go to Platform Management.
2. In the left navigation bar, click Network Management > Load Balancer.
3. Click the name of the load balancer.
4. In the Monitoring tab, view the load balancer's metric trends from the node's perspective.
   - Usage Rate: the real-time CPU and memory usage of the load balancer on the current node.
   - Throughput: the overall incoming and outgoing traffic of the load balancer instance.