ALB (Another Load Balancer) is a Kubernetes Gateway implementation powered by OpenResty, backed by years of production experience at Alauda.
You can now access the app via `curl http://${ip}`.
The following defines the common concepts in ALB.
Auth is a mechanism that performs authentication before a request reaches the actual service. It allows you to handle authentication at the ALB level uniformly, without implementing authentication logic in each backend service.
Learn more about ALB Auth.
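As a sketch of what Auth enables, the example below attaches forward authentication to an Ingress. It assumes ALB honors the ingress-nginx style `auth-url` annotation; the annotation key, auth service URL, and Service names are assumptions, not confirmed by this document:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app
  annotations:
    # Assumption: ALB supports the ingress-nginx compatible forward-auth annotation.
    # Every request is first sent to this URL; only a 2xx response lets it through.
    nginx.ingress.kubernetes.io/auth-url: "http://auth.default.svc.cluster.local:8080/verify"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-app   # hypothetical backend Service
            port:
              number: 80
```

With this in place, the backend `demo-app` never sees unauthenticated traffic and needs no auth logic of its own.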
An ALB instance can be deployed in two modes: host network mode and container network mode.
Host network mode uses the node's network stack directly, sharing IP addresses and ports with the node.
In this mode, the load balancer instance binds directly to the node's ports, without port mapping or any other container-network encapsulation.
To avoid port conflicts, only one ALB instance may be deployed on a single node.
In host network mode, an ALB instance listens on all of the node's NICs by default.
Unlike host network mode, container network mode deploys ALB using container networking.
We define a resource called Frontend (abbreviated as FT), which declares the ports that an ALB should listen on.
Each frontend corresponds to a listening port on the load balancer (LB). A Frontend is associated with the ALB via labels.
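A minimal sketch of a Frontend follows. The API group and the label key that associates the FT with its ALB are assumptions based on ALB's CRDs, and the ALB and Service names are hypothetical:

```yaml
apiVersion: crd.alauda.io/v1
kind: Frontend
metadata:
  # Naming convention: $alb_name-$port
  name: alb-demo-8080
  labels:
    # Assumption: this label binds the FT to the ALB named alb-demo.
    alb2.cpaas.io/name: alb-demo
spec:
  port: 8080
  protocol: http
  serviceGroup:
    services:
    - name: demo-app      # hypothetical backend Service
      namespace: default
      port: 80
      weight: 100
```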
A Frontend's name follows the pattern `$alb_name-$port`. Key fields:

- Certificate: referenced as `$secret_ns/$secret_name`.
- Protocol: `http|https|grpc|grpcs` for L7 proxy; `tcp|udp` for L4 proxy.
- `serviceGroup`: required for L4 proxy, optional for L7 proxy. When a request arrives, ALB first tries to match it against the Rules associated with the Frontend. Only if the request matches no Rule does ALB forward it to the default `serviceGroup` specified in the Frontend configuration.
- `weight`: weight configuration, applicable to the Round Robin and Weighted Round Robin scheduling algorithms.

ALB watches Ingresses and automatically creates a Frontend or Rule. The `source` field is defined as follows:

- `spec.source.type`: currently only supports `ingress`.
- `spec.source.name`: the Ingress name.
- `spec.source.namespace`: the Ingress namespace.

We define a resource called Rule, which describes how an ALB instance should handle a layer-7 request.
Complex traffic matching and distribution patterns can be configured via Rules. When traffic arrives, ALB matches it against its Rules and forwards it accordingly, and Rules also provide additional features such as CORS and URL rewriting.
dslx is a domain-specific language used to describe matching criteria.
For example, a rule can require that a request satisfy several criteria at once, such as a particular method, URL prefix, and header value.
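A sketch of such a dslx expression, matching POST requests whose path starts with `/app-a` and whose `LOCATION` header is one of two values. The matcher types and operators below are assumptions about ALB's dslx syntax, and the concrete paths and values are hypothetical:

```yaml
# Hypothetical Rule snippet: all entries must match for the rule to hit.
dslx:
- type: METHOD
  values:
  - - EQ
    - POST
- type: URL
  values:
  - - STARTS_WITH
    - /app-a
- type: HEADER
  key: LOCATION
  values:
  - - IN
    - east-1
    - east-2
```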
Rules are project-isolated by default: each user can only see the Rules belonging to their own projects.
An ALB can be shared by multiple projects, and these projects jointly control it; all of the ALB's ports are visible to them.
The ports of one ALB can also belong to different projects. This deployment mode is called Port Project Mode. The administrator specifies the port range each project may use; users of a project can only create, and only see, ports within that range.
A load balancer is a key component in modern cloud-native architectures, serving as an intelligent traffic router.
To understand how ALB works in a Kubernetes cluster, we need to understand several core concepts and their relationships:
These components work together to enable flexible and powerful traffic management capabilities.
The following describes how these concepts work together and what roles they play in the request chain. Detailed introductions to each concept are covered in other articles.
In a request-calling chain:
Ingress is a Kubernetes resource that describes which requests should be sent to which Service.
An Ingress controller is a program that understands the Ingress resource and proxies requests to Services.
ALB is an Ingress controller.
In a Kubernetes cluster, we use the `alb2` resource to operate an ALB. You can run `kubectl get alb2 -A` to view all the ALBs in the cluster.
ALBs are created manually by users. Each ALB has its own IngressClass. When you create an Ingress, you can use the `.spec.ingressClassName` field to indicate which Ingress controller should handle it.
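For example, the Ingress below is claimed by whichever controller owns the named class. The class name `alb-demo`, host, and backend Service are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  ingressClassName: alb-demo   # the IngressClass owned by the target ALB (hypothetical name)
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-app
            port:
              number: 80
```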
An ALB is also a Deployment (a set of Pods) running in the cluster. Each Pod is called an ALB instance.
Each ALB instance handles requests independently, but all instances share Frontend (FT), Rule, and other configurations belonging to the same ALB.
ALB-Operator, a default component deployed in the cluster, is an operator for ALB. It will create/update/delete Deployment and other related resources for each ALB according to the ALB resource.
FT is a resource defined by ALB itself. It represents the ports an ALB instance listens on.
An FT can be created by the ALB leader or manually by a user.
Cases in which an FT is created by the ALB leader:
RULE is a resource defined by ALB itself. It plays the same role as an Ingress but is more specific. A RULE is uniquely associated with an FT.
A RULE can be created by the ALB leader or manually by a user.
Cases in which a RULE is created by the ALB leader:
Among multiple ALB instances, one is elected as the leader. The leader is responsible for:
From the perspective of ALB, a Project is a set of namespaces.
You can configure one or more Projects on an ALB. When the ALB leader translates Ingresses into Rules, it ignores Ingresses in namespaces that do not belong to the ALB's Projects.
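A sketch of attaching Projects to an ALB. The `apiVersion` and the exact placement of the `projects` field are assumptions about the `alb2` CRD, and the project names are hypothetical:

```yaml
apiVersion: crd.alauda.io/v2beta1
kind: ALB2
metadata:
  name: alb-demo
  namespace: cpaas-system
spec:
  config:
    # Assumption: Projects are listed here; Ingresses in namespaces
    # outside these Projects are ignored when generating Rules.
    projects:
    - project-a
    - project-b
```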