Deploy ALB

ALB

ALB is a custom resource that represents a load balancer. The alb-operator, which is embedded by default in all clusters, watches for create/update/delete operations on ALB resources and creates corresponding deployments and services in response.

For each ALB, a corresponding Deployment watches all Frontends and Rules attached to that ALB and routes requests to backends based on those configurations.

Prerequisites

High availability for the load balancer requires a VIP. Please refer to Configure VIP.

Configure ALB

An ALB configuration has three parts: resources, networking, and projects.

# test-alb.yaml
apiVersion: crd.alauda.io/v2beta1
kind: ALB2
metadata:
  name: alb-demo
  namespace: cpaas-system
spec:
  address: 192.168.66.215
  config:
    vip:
      enableLbSvc: false
      lbSvcAnnotations: {}
    networkMode: host
    nodeSelector:
      cpu-model.node.kubevirt.io/Nehalem: "true"
    replicas: 1
    resources:
      alb:
        limits:
          cpu: 200m
          memory: 256Mi
        requests:
          cpu: 200m
          memory: 256Mi
      limits:
        cpu: 200m
        memory: 256Mi
      requests:
        cpu: 200m
        memory: 256Mi
    projects:
      - ALL_ALL
  type: nginx

Resource Configuration

The resource-related fields describe the deployment configuration of the ALB workload.

Field | Type | Description
.spec.config.nodeSelector | map[string]string | The node selector for the ALB.
.spec.config.replicas | int, optional, default 3 | The number of replicas for the ALB.
.spec.config.resources.limits | k8s container resources, optional | Limits of the nginx container of the ALB.
.spec.config.resources.requests | k8s container resources, optional | Requests of the nginx container of the ALB.
.spec.config.resources.alb.limits | k8s container resources, optional | Limits of the alb container of the ALB.
.spec.config.resources.alb.requests | k8s container resources, optional | Requests of the alb container of the ALB.
.spec.config.antiAffinityKey | string, optional, default local | k8s antiAffinityKey.
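
As a minimal sketch of these fields (values are illustrative only; antiAffinityKey is shown with its documented default), a three-replica ALB could be configured like this:

# resource sketch: three replicas, illustrative values
spec:
  config:
    nodeSelector:
      kubernetes.io/os: linux        # illustrative node selector
    replicas: 3
    antiAffinityKey: local           # documented default
    resources:
      alb:
        limits:
          cpu: 200m
          memory: 256Mi
        requests:
          cpu: 200m
          memory: 256Mi
      limits:
        cpu: 200m
        memory: 256Mi
      requests:
        cpu: 200m
        memory: 256Mi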

Networking Configuration

Networking fields describe how the ALB is reached. For example, in host mode the ALB uses the host network, and you can access it via the node IP.

Field | Type | Description
.spec.config.networkMode | string: host or container, optional, default host | In container mode, the operator creates a LoadBalancer Service and uses its address as the ALB address.
.spec.address | string, required | The address of the ALB; you can specify it manually.
.spec.config.vip.enableLbSvc | bool, optional | Automatically true in container mode.
.spec.config.vip.lbSvcAnnotations | map[string]string, optional | Extra annotations for the LoadBalancer Service.
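
For example, a minimal sketch of a container-mode ALB that lets the operator create the LoadBalancer Service; the annotation key below is a hypothetical cloud-provider example, not a real one:

# networking sketch: container mode with an operator-managed LoadBalancer Service
spec:
  config:
    networkMode: container
    vip:
      enableLbSvc: true
      lbSvcAnnotations:
        example.com/internal-lb: "true"   # hypothetical annotation; replace with your cloud provider's key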

Project Configuration

Field | Type
.spec.config.projects | []string, required
.spec.config.portProjects | string, optional
.spec.config.enablePortProject | bool, optional

Adding an ALB to a project means:

  1. In the web UI, only users in the given project can find and configure this ALB.
  2. This ALB will handle ingress resources belonging to this project. Please refer to ingress-sync.
  3. In the web UI, rules created in project X cannot be found or configured under project Y.

If you enable port-project mode and assign a port range to a project, this means (see the sketch after this list):

  1. Listener ports can only be created within the port range assigned to that project.
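
A minimal sketch of a port-project configuration; the portProjects value is an assumption about the string encoding (verify against your platform), and the project name is a placeholder:

# project sketch: port-project mode
spec:
  config:
    projects:
      - ALL_ALL
    enablePortProject: true
    portProjects: '[{"port":"80-90","project":"project-a"}]'   # assumed JSON-in-string encoding, placeholder project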

Tweak Configuration

Some global settings can be tweaked in the ALB CR.

Operations on ALB

Creating

Using the web console

Some common configuration options are exposed in the web UI. Follow these steps to create a load balancer:

  1. Navigate to Administrator.
  2. In the left sidebar, click on Network Management > Load Balancer.
  3. Click on Create Load Balancer.

Each input item in the web UI corresponds to a field of the CR:

Parameter | Description
Assigned Address | .spec.address
Allocated By | Instance means project mode (you can select projects below); Port means port-project mode (you can assign port ranges after the ALB is created).
Using the CLI
kubectl apply -f test-alb.yaml -n cpaas-system
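
To confirm the operator has reconciled the ALB, you can check the CR and look for the workload it creates (a quick check; it assumes the generated Deployment name contains the ALB name):

kubectl get alb2 alb-demo -n cpaas-system
kubectl get deployments -n cpaas-system | grep alb-demo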

Update

Using the web console
NOTE

Updating the load balancer will cause a service interruption for 3 to 5 minutes. Please choose an appropriate time for this operation!

  1. Enter Administrator.

  2. In the left navigation bar, click Network Management > Load Balancer.

  3. Click ⋮ > Update.

  4. Update the network and resource configuration as needed.

  5. Click Update.
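
If you prefer the CLI, a sketch of the equivalent: edit the CR directly (or re-apply an updated manifest) and the operator rolls out the change.

kubectl edit alb2 alb-demo -n cpaas-system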

Delete

Using the web console
NOTE

After deleting the load balancer, the associated ports and rules will also be deleted and cannot be restored.

  1. Enter Administrator.

  2. In the left navigation bar, click Network Management > Load Balancer.

  3. Click ⋮ > Delete, and confirm.

Using the CLI
kubectl delete alb2 alb-demo -n cpaas-system

Listener Ports (Frontend)

Frontend is a custom resource that defines the listener port and protocol for an ALB. Supported protocols are L7 (http|https|grpc|grpcs) and L4 (tcp|udp). For an L4 proxy, the Frontend configures the backend services directly. For an L7 proxy, the Frontend configures the listener port, and Rules configure the backend services. If you need to add an HTTPS listener port, you should also contact the administrator to assign a TLS certificate to the current project for encryption.

Prerequisites

Create an ALB first.

Configure Frontend

# alb-frontend-demo.yaml
apiVersion: crd.alauda.io/v1
kind: Frontend
metadata:
  labels:
    alb2.cpaas.io/name: alb-demo
  name: alb-demo-00080
  namespace: cpaas-system
spec:
  port: 80
  protocol: http
  certificate_name: ""
  backendProtocol: "http"
  serviceGroup:
    session_affinity_policy: ""
    services:
      - name: hello-world
        namespace: default
        port: 80
        weight: 100

  1. alb label: Required; indicates the ALB instance to which this Frontend belongs.

  2. frontend name: Format as $alb_name-$port.

  3. port: the port to listen on.

  4. protocol: the protocol this port uses.

    • L7 protocols https|http|grpcs|grpc and L4 protocols tcp|udp.
    • When selecting HTTPS, a certificate must be added; adding a certificate is optional for the gRPC protocol.
    • When selecting the gRPC protocol, the backend protocol defaults to gRPC, which does not support session persistence. If a certificate is set for the gRPC protocol, the load balancer terminates TLS with that certificate and forwards unencrypted gRPC traffic to the backend service.
    • If using a Google GKE cluster, a load balancer of the same container network type cannot have both TCP and UDP listener protocols simultaneously.
  5. certificate_name: the default certificate used for the https and grpcs protocols, in the format $secret_ns/$secret_name (an HTTPS example is sketched after this list).

  6. backendProtocol: the protocol the backend service uses.

  7. Default serviceGroup:

    • L4 proxy: required. ALB forwards traffic to the default service group directly.
    • L7 proxy: optional. ALB first matches Rules on this Frontend; if none match, it falls back to the default serviceGroup.
  8. session_affinity_policy: the session affinity policy applied to the default serviceGroup.
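
As a sketch of an HTTPS Frontend following the conventions above; the certificate secret name is a placeholder you would replace with a real secret in your cluster:

# alb-frontend-https-demo.yaml (sketch; certificate secret is a placeholder)
apiVersion: crd.alauda.io/v1
kind: Frontend
metadata:
  labels:
    alb2.cpaas.io/name: alb-demo
  name: alb-demo-00443
  namespace: cpaas-system
spec:
  port: 443
  protocol: https
  certificate_name: "cpaas-system/my-tls-secret"   # placeholder, format $secret_ns/$secret_name
  backendProtocol: "http"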

Operations on Frontend

Creating

Using the web console

  1. Go to Container Platform.

  2. In the left navigation bar, click Network > Load Balancing.

  3. Click the name of the load balancer to enter the details page.

  4. Click Add Port.

Each input item in the web UI corresponds to a field of the CR:

Parameter | Description
Session Affinity | .spec.serviceGroup.session_affinity_policy
Using the CLI
kubectl apply -f alb-frontend-demo.yaml -n cpaas-system
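
To verify the Frontend was created, you can list the Frontends attached to the ALB by label (this assumes the CRD's plural name is frontends):

kubectl get frontends -n cpaas-system -l alb2.cpaas.io/name=alb-demo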

Subsequent Actions

For traffic on HTTP, gRPC, and HTTPS ports, in addition to the default serviceGroup (the internal routing group), you can define more fine-grained backend matching rules. The load balancer first matches requests against those rules; if no rule matches, traffic falls back to the backend services of the default serviceGroup.

You can click the ⋮ icon on the right side of the list page or click Actions in the upper right corner of the details page to update the default route or delete the listener port as needed.

NOTE

If the resource allocation method of the load balancer is Port, only administrators can delete the related listener ports in the Administrator view.

Logs and Monitoring

By combining logs and monitoring data, you can quickly identify and resolve load balancer issues.

Viewing Logs

  1. Go to Administrator.

  2. In the left navigation bar, click on Network Management > Load Balancer.

  3. Click on Load Balancer Name.

  4. In the Logs tab, view the load balancer's runtime logs from the container's perspective.

Monitoring Metrics

NOTE

Monitoring services must be deployed in the cluster where the load balancer is located.

  1. Go to Administrator.

  2. In the left navigation bar, click on Network Management > Load Balancer.

  3. Click on Load Balancer Name.

  4. In the Monitoring tab, view the metric trend information of the load balancer from the node's perspective.

    • Usage Rate: The real-time usage of CPU and memory by the load balancer on the current node.

    • Throughput: The overall incoming and outgoing traffic of the load balancer instance.

For more detailed information about monitoring metrics, please refer to ALB Monitoring.