A load balancer is a service that distributes traffic to container instances. It receives the access traffic destined for a computing component and forwards it to that component's container instances. Load balancing improves the fault tolerance of computing components, scales their external service capacity, and enhances application availability.
Platform administrators can create single-point or high-availability load balancers for any cluster on the platform, and manage and allocate load balancer resources centrally. For example, a load balancer can be assigned to a project so that only users with the appropriate project permissions can use it.
Please refer to the table below for explanations of related concepts in this section.
Parameter | Description |
---|---|
Load Balancer | A software or hardware device that distributes network requests to available nodes in a cluster. The load balancer used in the platform is a Layer 7 software load balancer. |
VIP | A virtual IP address (VIP) is an IP address that is not bound to a specific machine or network interface card. When the load balancer is of the high-availability type, the access address should be the VIP. |
The high availability of the load balancer requires a VIP. Please refer to Configure VIP.

- When `enableLbSvc` is true, an internal LoadBalancer type Service is created to provide the load balancer's access address.
- `lbSvcAnnotations` sets annotations on that Service; for configuration, refer to LoadBalancer Type Service Annotations.

Navigate to Platform Management.
In the left sidebar, click on Network Management > Load Balancer.
Click on Create Load Balancer.
Follow the instructions below to complete the network configuration.
Parameter | Description |
---|---|
Network Mode | |
Service and Annotations (Alpha) | |
Access Address | The access address of the load balancer, i.e., the service address of the load balancer instance. After the load balancer is created successfully, it can be accessed via this address. |
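If you manage the load balancer declaratively, the two service-related options mentioned above map to fields on the load balancer resource. The sketch below is illustrative only: the `ALB2` kind, apiVersion, and surrounding field layout are assumptions, and the annotation key is a placeholder; only `enableLbSvc` and `lbSvcAnnotations` come from this document.

```yaml
# Hypothetical load balancer resource; kind/apiVersion are assumed.
apiVersion: crd.alauda.io/v2beta1
kind: ALB2
metadata:
  name: my-alb            # placeholder name
  namespace: cpaas-system # placeholder namespace
spec:
  config:
    # When true, an internal LoadBalancer type Service is created
    # to provide the load balancer's access address.
    enableLbSvc: true
    # Annotations applied to that Service; see
    # "LoadBalancer Type Service Annotations" for supported keys.
    lbSvcAnnotations:
      example.io/annotation-key: "value"  # placeholder annotation
```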
Follow the instructions below to complete the resource configuration.
Parameter | Description |
---|---|
Specification | Set the specification according to business needs. You can also refer to How to properly allocate CPU and memory resources. |
Deployment Type | |
Replicas | The number of replicas is the number of container groups running the load balancer. Tip: To ensure high availability of the load balancer, a replica count of no less than 3 is recommended. |
Node Labels | Filter nodes by label to select where the load balancer is deployed. |
Resource Allocation Method | |
Assigned Project | |
Click Create. The creation process will take some time; please be patient.
Updating the load balancer will cause a service interruption for 3 to 5 minutes. Please choose an appropriate time for this operation!
Enter Platform Management.
In the left navigation bar, click Network Management > Load Balancer.
Click ⋮ > Update.
Update the network and resource configuration as needed.
Set the specification according to business needs. You can also refer to How to properly allocate CPU and memory resources for guidance.
Internal routing only supports updating from Disabled state to Enabled state.
Click Update.
After deleting the load balancer, the associated ports and rules will also be deleted and cannot be restored.
Enter Platform Management.
In the left navigation bar, click Network Management > Load Balancer.
Click ⋮ > Delete, and confirm.
The load balancer supports receiving client connection requests through listener ports and corresponding protocols, including HTTPS, HTTP, gRPC, TCP, and UDP.
If you need to add an HTTPS listener port, you should also contact the administrator to assign a TLS certificate to the current project for encryption.
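Internally, each listener port corresponds to a Frontend resource. The sketch below is illustrative only, assembled from the field fragments described in this section (the `$alb_name-$port` naming convention, the protocol values, and the default `serviceGroup`); the apiVersion and exact field layout are assumptions, and all names are placeholders.

```yaml
# Hypothetical Frontend for a load balancer named "my-alb" on port 80.
apiVersion: crd.alauda.io/v1   # assumed API group/version
kind: Frontend
metadata:
  name: my-alb-80              # naming convention: $alb_name-$port
  namespace: cpaas-system      # placeholder namespace
spec:
  port: 80
  protocol: http               # http|https|grpc|grpcs (L7) or tcp|udp (L4)
  # Default backend group; for an L7 proxy it is optional and is used
  # only when a request matches none of the Frontend's rules.
  serviceGroup:
    services:
      - name: my-service       # placeholder backend service
        namespace: default
        port: 8080
        weight: 100            # used by RR / WRR scheduling
```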
- A Frontend is named `$alb_name-$port`, where `$alb_name` is the load balancer the Frontend belongs to.
- A certificate is referenced as `$secret_ns/$secret_name`.
- The protocol is `http|https|grpc|grpcs` for an L7 proxy, or `tcp|udp` for an L4 proxy.
- For an L4 proxy, `serviceGroup` is required. For an L7 proxy, `serviceGroup` is optional: when a request arrives, ALB first tries to match it against the rules associated with the Frontend; only if the request matches no rule does ALB forward it to the default `serviceGroup` specified in the Frontend configuration.
- The `weight` configuration applies to the Round Robin and Weighted Round Robin scheduling algorithms.
- ALB listens to ingresses and automatically creates Frontends or Rules. The `source` field is defined as follows: `spec.source.type` currently only supports `ingress`; `spec.source.name` is the ingress name; `spec.source.namespace` is the ingress namespace.

Go to Container Platform.
In the left navigation bar, click Network > Load Balancing.
Click the name of the load balancer to enter the details page.
Click Add Listener Port.
Refer to the following instructions to configure the relevant parameters.
Parameter | Description |
---|---|
Protocol | Supported protocols include HTTPS, HTTP, gRPC, TCP, and UDP. When HTTPS is selected, a certificate must be added; for the gRPC protocol, adding a certificate is optional. |
Internal Routing Group | - When the load balancing algorithm is set to Round Robin (RR), traffic will be distributed to the internal routing ports in the order of the internal routing group. - When the load balancing algorithm is set to Weighted Round Robin (WRR), internal routes with higher weight values have a higher probability of being selected; traffic will be distributed to the internal routing ports based on the configured weight. Tip: The probability calculation is the ratio of the current weight value to the sum of all weight values. |
Session Persistence | Always forwards specific requests to the backend service corresponding to the aforementioned internal routing group. |
Backend Protocol | The protocol used for forwarding traffic to the backend services. For example, if forwarding to backend Kubernetes or dex services, the HTTPS protocol must be selected. |
Click OK.
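The weighted round robin behavior described above, where the probability of selecting a backend equals its weight divided by the sum of all weights, can be sketched as follows. This is a minimal illustration of the scheduling idea, not the load balancer's actual implementation; the backend names are placeholders.

```python
import random

def pick_backend(backends, rng=random.random):
    """Select a backend with probability weight / sum(weights)."""
    total = sum(weight for _, weight in backends)
    r = rng() * total
    cumulative = 0.0
    for name, weight in backends:
        cumulative += weight
        if r < cumulative:
            return name
    return backends[-1][0]  # guard against floating-point edge cases

# Example: "b" has weight 3 out of a total of 4, so it should be
# chosen roughly 75% of the time over many requests.
backends = [("a", 1), ("b", 3)]
counts = {"a": 0, "b": 0}
for _ in range(10000):
    counts[pick_backend(backends)] += 1
```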
For traffic on HTTP, gRPC, and HTTPS ports, you can configure additional backend service matching rules beyond the default internal routing group. The load balancer first matches a backend service according to the configured rules; if no rule matches, it falls back to the backend services of the aforementioned internal routing group.
You can click the ⋮ icon on the right side of the list page or click Actions in the upper right corner of the details page to update the default route or delete the listener port as needed.
If the resource allocation method of the load balancer is Port, only administrators can delete the related listener ports in the Platform Management view.
Add forwarding rules for the listener ports of HTTPS, HTTP, and gRPC protocols. The load balancer will match the backend services based on these rules.
Forwarding rules cannot be added for TCP and UDP protocols.
Each forwarding rule belongs to a Frontend. Rule matching criteria are described in dslx, a domain-specific language; a rule matches a request only when the request satisfies all of its criteria.
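As an illustrative sketch of what dslx criteria can look like, the fragment below combines a host match and a URL prefix match, both of which must hold for the rule to apply. The matcher type names, operator tokens, and value layout are assumptions for illustration and are not taken from this document.

```yaml
# Hypothetical dslx criteria: match requests whose Host equals
# "demo.example.com" AND whose URL starts with "/api".
dslx:
  - type: HOST
    values:
      - - EQ             # assumed equality operator token
        - demo.example.com
  - type: URL
    values:
      - - STARTS_WITH    # assumed prefix operator token
        - /api
```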
Go to Container Platform.
Click on Network > Load Balancing in the left navigation bar.
Click on the name of the load balancer.
Click on the name of the listener port.
Click Add Rule.
Refer to the following descriptions to configure the relevant parameters.
Parameter | Description |
---|---|
Internal Route Group | - When the load balancing algorithm selects Round Robin (RR), the access traffic will be distributed to the ports of the internal routes in the order of the internal route group. - When the load balancing algorithm selects Weighted Round Robin (WRR), the higher the weight value of the internal route, the higher the probability it will be polled, and the access traffic will be distributed to the ports of the internal routes according to the probability calculated based on the configured weight. Tip: The calculation method for probability is the ratio of the current weight value to the sum of all weight values. |
Rule | The criteria by which the load balancer matches backend services, including rule indicators and their values. The relationship between different rule indicators is 'and'. |
Session Persistence | Always forwards specific access requests to the backend services corresponding to the aforementioned internal route group. |
URL Rewrite | Rewrites the accessed address to the address of the platform's backend service. This feature requires a URL StartsWith rule indicator to be configured, and the rewrite address (rewrite-target) must start with /. For example: with the domain name set to bar.example.com and the URL StartsWith path set to /, enabling URL Rewrite with the rewrite address /test rewrites an access to bar.example.com to bar.example.com/test. |
Backend Protocol | The protocol used to forward access traffic to the backend service. For example: If forwarding to the backend's Kubernetes or dex service, choose HTTPS protocol. |
Redirection | Forwards access traffic to a new redirect address instead of the backend services corresponding to the internal route group. For example: when a page at the original access address is upgraded or updated, traffic can be redirected to the new address by configuration, so that users do not receive a 404 or 503 error page. |
Rule Priority | The priority of rule matching: there are 10 levels from 1 to 10, with 1 being the highest priority, and the default priority is 5. When two or more rules are satisfied at the same time, the higher priority rule is selected and applied; if the priority is the same, the system uses the default matching rule. |
Cross-Origin Resource Sharing (CORS) | CORS is a mechanism that uses additional HTTP headers to tell the browser that a web application running at one origin is permitted to access specified resources on a server at a different origin. A cross-origin HTTP request is initiated when a resource requests another resource from a server with a different domain, protocol, or port than its own. |
Allowed Origins | Specifies the origins that are allowed access. |
Allowed Headers | Specifies the HTTP request headers allowed in CORS requests, avoiding unnecessary preflight requests and improving request efficiency. Note: Commonly used or custom request headers are not listed one by one here; fill them in according to actual needs. |
Click Add.
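The URL Rewrite behavior described in the table above, replacing the matched StartsWith prefix with the rewrite-target, can be sketched as a pure function. This is an illustration of the documented example, not the load balancer's actual code; how the remainder of the path is appended after the rewrite target is an assumption.

```python
def rewrite_url(path, prefix, rewrite_target):
    """Rewrite `path` when it starts with `prefix`, per the rewrite-target rule.

    The rewrite target must start with "/", mirroring the constraint
    described in the table above.
    """
    if not rewrite_target.startswith("/"):
        raise ValueError("rewrite-target must start with /")
    if not path.startswith(prefix):
        return path  # rule did not match; leave the path unchanged
    remainder = path[len(prefix):].lstrip("/")
    return rewrite_target.rstrip("/") + ("/" + remainder if remainder else "")

# The documented example: prefix "/" with rewrite-target "/test"
# turns an access to bar.example.com/ into bar.example.com/test.
```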
By combining visualized logs and monitoring data, issues or failures with the load balancer can be quickly identified and resolved.
Go to Platform Management.
In the left navigation bar, click on Network Management > Load Balancer.
Click the name of the load balancer.
In the Logs tab, view the logs of the load balancer's runtime from the container's perspective.
The cluster where the load balancer is located must deploy monitoring services.
Go to Platform Management.
In the left navigation bar, click on Network Management > Load Balancer.
Click the name of the load balancer.
In the Monitoring tab, view the metric trend information of the load balancer from the node's perspective.
Usage Rate: The real-time usage of CPU and memory by the load balancer on the current node.
Throughput: The overall incoming and outgoing traffic of the load balancer instance.