Alauda Container Platform

#Configure a Load Balancer

A load balancer is a service that distributes traffic across container instances. It automatically allocates incoming traffic for computing components and forwards it to their container instances. Load balancing improves the fault tolerance of computing components, scales their external service capacity, and enhances application availability.

Platform administrators can create single-point or high-availability load balancers for any cluster on the platform, and centrally manage and allocate load balancer resources. For example, a load balancer can be assigned to specific projects, ensuring that only users with the appropriate project permissions can use it.

Please refer to the table below for explanations of related concepts in this section.

• Load Balancer: A software or hardware device that distributes network requests to available nodes in a cluster. The load balancer used in the platform is a Layer 7 software load balancer.
• VIP: A Virtual IP Address (VIP) is an IP address that is not bound to a specific machine or network interface card. When the load balancer is of the high-availability type, the access address should be the VIP.

#Prerequisites

The high availability of the Load Balancer requires a VIP. Please refer to Configure VIP.

#Example ALB2 custom resource (CR)

```yaml
# test-alb.yaml
apiVersion: crd.alauda.io/v2beta1
kind: ALB2
metadata:
  name: alb-demo
  namespace: cpaas-system
  annotations:
    cpaas.io/display-name: ""
spec:
  address: 192.168.66.215
  config:
    vip:
      enableLbSvc: false
      lbSvcAnnotations: {}
    networkMode: host
    enablePortProject: false
    nodeSelector:
      cpu-model.node.kubevirt.io/Nehalem: "true"
    projects:
      - ALL_ALL
    replicas: 1
    resources:
      limits:
        cpu: 200m
        memory: 256Mi
      requests:
        cpu: 200m
        memory: 256Mi
  type: nginx
```

  1. When enableLbSvc is true, an internal LoadBalancer-type service is created for the load balancer's access address. For lbSvcAnnotations, refer to LoadBalancer Type Service Annotations.
  2. See the Network Mode description below.
  3. See the Resource Allocation Method description below.
  4. See the Assigned Project description below.
  5. See the Specification description below.

#Creating a Load Balancer by using the web console

  1. Navigate to Administrator.

  2. In the left sidebar, click on Network Management > Load Balancer.

  3. Click on Create Load Balancer.

  4. Follow the instructions below to complete the network configuration.

    Network Mode
    • Host Network Mode: Only one load balancer replica can be deployed per node; multiple services share one ALB, giving superior network performance.
    • Container Network Mode: Multiple load balancer replicas can be deployed on a single node, allowing a separate ALB for each service, with slightly lower network performance.
    Service and Annotations (Alpha)
    • Service: When enabled, an internal LoadBalancer-type service is created for the load balancer's access address. Before use, ensure that the current cluster supports LoadBalancer-type services; you can use the platform's built-in LoadBalancer-type service implementation. When disabled, you need to configure an External Address Pool for the load balancer.
    • Annotations: Used to declare the configuration or capabilities of the internal LoadBalancer-type service; for details, refer to Annotations for Internal LoadBalancer Type Routing.
    Access Address: The access address of the load balancer, i.e., the service address of the load balancer instance. After the load balancer is successfully created, it can be accessed via this address.
    • In host network mode, fill this in according to actual conditions; it can be a domain name or an IP address (internal IP, external IP, or VIP).
    • In container network mode, the address is acquired automatically.
  5. Follow the instructions below to complete the resource configuration.

    Specification: Set the specification according to business needs. You can also refer to How to properly allocate CPU and memory resources.
    Deployment Type
    • Single Point: The load balancer's Pods are deployed on a single node, so a machine failure may make the load balancer unavailable.
    • High Availability: Multiple load balancer Pods are deployed across the corresponding number of nodes, usually 3. This satisfies the load balancing needs of large business volumes while providing disaster recovery capabilities.
    Replicas: The number of replicas, i.e., the number of Pods for the load balancer.
    Tip: To ensure high availability of the load balancer, it is recommended that the number of replicas be no less than 3.
    Node Labels: Filter the nodes on which the load balancer is deployed by label.
    Tip:
    • It is recommended that the number of nodes matching the labels be greater than the number of load balancer replicas.
    • Only one label per key can be selected (if multiple values of the same key are selected, no matching hosts will be available).
    Resource Allocation Method
    • Instance: Any port in the range 1-65535 that the load balancer instance can listen on can be provided for project use.
    • Port (Alpha): Only ports within a specified range can be allocated for project use. This method allows finer-grained resource control when port resources are limited.
    Assigned Project
    • When Resource Allocation Method is set to Instance, the load balancer can be allocated to all projects associated with the current cluster or to specified projects. In allocated projects, all Pods in all namespaces can receive requests distributed by the load balancer.
      • All Projects: Allocates the load balancer for use by all projects associated with the current cluster.
      • Specified Projects (Alpha): Click the dropdown under Specified Projects and select one or more projects via the checkboxes to the left of the project names.
        Tip: You can filter projects by entering project names in the dropdown.
      • No Allocation (Alpha): Does not allocate any project for now. After the load balancer is created, you can use the Update Project operation to update its project allocation.
    • When Resource Allocation Method is set to Port, this item does not need to be configured. Allocate port information manually after creating the load balancer.
  6. Click Create. The creation process will take some time; please be patient.

#Creating a Load Balancer by using the CLI

```shell
kubectl apply -f test-alb.yaml -n cpaas-system
```
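You can then confirm the instance was created. This is a sketch: the resource name `alb-demo` comes from the example above, and the pod label selector is an assumption — adjust it to the labels your deployment actually uses:

```shell
# Check the ALB2 resource and the load balancer pods it manages.
kubectl get alb2 alb-demo -n cpaas-system
kubectl get pods -n cpaas-system -l service_name=alb2-alb-demo
```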

#Update Load Balancer by using the web console

NOTE

Updating the load balancer will cause a service interruption for 3 to 5 minutes. Please choose an appropriate time for this operation!

  1. Enter Administrator.

  2. In the left navigation bar, click Network Management > Load Balancer.

  3. Click ⋮ > Update.

  4. Update the network and resource configuration as needed.

    • Set specifications reasonably according to business needs. You can also refer to How to properly allocate CPU and memory resources for guidance.

    • Internal routing only supports updating from Disabled state to Enabled state.

  5. Click Update.
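The same update can also be applied from the CLI by patching the ALB2 resource. This is a sketch: it assumes the `alb-demo` instance from the example above, and the patch body (scaling to 3 replicas) is illustrative:

```shell
# Scale the load balancer to 3 replicas; other spec.config fields can be patched the same way.
kubectl patch alb2 alb-demo -n cpaas-system --type merge \
  -p '{"spec":{"config":{"replicas":3}}}'
```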

#Delete Load Balancer by using the web console

NOTE

After deleting the load balancer, the associated ports and rules will also be deleted and cannot be restored.

  1. Enter Administrator.

  2. In the left navigation bar, click Network Management > Load Balancer.

  3. Click ⋮ > Delete, and confirm.

#Delete Load Balancer by using the CLI

```shell
kubectl delete alb2 alb-demo -n cpaas-system
```

#Configure Listener Ports (Frontend)

The load balancer supports receiving client connection requests through listener ports and corresponding protocols, including HTTPS, HTTP, gRPC, TCP, and UDP.

#Prerequisites

If you need to add an HTTPS listener port, you should also contact the administrator to assign a TLS certificate to the current project for encryption.

#Example Frontend custom resource (CR)

```yaml
# alb-frontend-demo.yaml
apiVersion: crd.alauda.io/v1
kind: Frontend
metadata:
  labels:
    alb2.cpaas.io/name: alb-demo
  name: alb-demo-00080
  namespace: cpaas-system
spec:
  backendProtocol: "http"
  certificate_name: ""
  port: 80
  protocol: http
  serviceGroup:
    services:
      - name: hello-world
        namespace: default
        port: 80
        weight: 100
```

  1. Required; indicates the ALB instance to which this Frontend belongs.
  2. Format: $alb_name-$port.
  3. Format: $secret_ns/$secret_name.
  4. Protocol of this Frontend itself:
    • http|https|grpc|grpcs for L7 proxy.
    • tcp|udp for L4 proxy.
  5. For an L4 proxy, serviceGroup is required. For an L7 proxy, serviceGroup is optional. When a request arrives, ALB first tries to match it against the rules associated with this Frontend; only if the request matches no rule does ALB forward it to the default serviceGroup specified in the Frontend configuration.
  6. The weight configuration applies to the Round Robin and Weighted Round Robin scheduling algorithms.
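For an HTTPS listener, the same CR changes mainly in port, protocol, and certificate_name. This is a sketch: the secret reference `cpaas-system/my-tls-secret` is a hypothetical placeholder following the $secret_ns/$secret_name format above:

```yaml
# alb-frontend-https-demo.yaml (illustrative)
apiVersion: crd.alauda.io/v1
kind: Frontend
metadata:
  labels:
    alb2.cpaas.io/name: alb-demo
  name: alb-demo-00443
  namespace: cpaas-system
spec:
  backendProtocol: "http"
  certificate_name: "cpaas-system/my-tls-secret"  # hypothetical TLS secret
  port: 443
  protocol: https
  serviceGroup:
    services:
      - name: hello-world
        namespace: default
        port: 80
        weight: 100
```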
NOTE

ALB watches Ingress resources and automatically creates a Frontend or Rule for each. The source field is defined as follows:

  1. spec.source.type currently only supports ingress.
  2. spec.source.name is the Ingress name.
  3. spec.source.namespace is the Ingress namespace.
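A Frontend generated from an Ingress would therefore carry a source block along these lines. This is a sketch: the Ingress name `my-ingress` and namespace `default` are hypothetical:

```yaml
# Fragment of an ALB-generated Frontend (illustrative)
spec:
  source:
    type: ingress
    name: my-ingress
    namespace: default
```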

#Creating Listener Ports (Frontend) by using the web console

  1. Go to Container Platform.

  2. In the left navigation bar, click Network > Load Balancing.

  3. Click the name of the load balancer to enter the details page.

  4. Click Add Listener Port.

  5. Refer to the following instructions to configure the relevant parameters.

    Protocol: Supported protocols include HTTPS, HTTP, gRPC, TCP, and UDP. When HTTPS is selected, a certificate must be added; for gRPC, adding a certificate is optional.

    Note:
    • When the gRPC protocol is selected, the backend protocol defaults to gRPC, which does not support session persistence.
    • If a certificate is set for the gRPC protocol, the load balancer terminates TLS and forwards the unencrypted gRPC traffic to the backend service.
    • In a Google GKE cluster, a load balancer of the same container network type cannot have both TCP and UDP listener protocols at the same time.
    Internal Routing Group
    • When the load balancing algorithm is Round Robin (RR), traffic is distributed to the internal routing ports in the order of the internal routing group.
    • When the load balancing algorithm is Weighted Round Robin (WRR), internal routes with higher weight values have a higher probability of being selected; traffic is distributed to the internal routing ports based on the configured weights.
    Tip: The probability is the ratio of the current weight value to the sum of all weight values.
    Session Persistence: Always forwards specific requests to the backend service of the aforementioned internal routing group.

    Specific requests include (choose one):
    • Source Address Hash: All requests from the same IP address.
      Note: In public cloud environments, the source address often changes, so requests from the same client may carry different source IP addresses at different times, and source address hashing may not achieve the expected effect.
    • Cookie Key: Requests that carry a specified cookie.
    • Header Name: Requests that carry a specified header.
    Backend Protocol: The protocol used to forward traffic to the backend services. For example, if forwarding to the backend Kubernetes or dex services, HTTPS must be selected.
  6. Click OK.

#Creating Listener Ports (Frontend) by using the CLI

```shell
kubectl apply -f alb-frontend-demo.yaml -n cpaas-system
```
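To verify the listener, you can list the Frontend resources. This is a sketch: it assumes the Frontend CRD's plural name is frontends and uses the label from the example CR above:

```shell
# List Frontends that belong to the alb-demo instance.
kubectl get frontends -n cpaas-system -l alb2.cpaas.io/name=alb-demo
```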

#Subsequent Actions

For traffic on HTTP, gRPC, and HTTPS ports, in addition to the default internal routing group, you can configure more varied backend service matching rules. The load balancer first matches a backend service according to the configured rules; if no rule matches, it falls back to the backend services of the aforementioned internal routing group.

#Related Operations

You can click the ⋮ icon on the right side of the list page or click Actions in the upper right corner of the details page to update the default route or delete the listener port as needed.

NOTE

If the resource allocation method of the load balancer is Port, only administrators can delete the related listener ports in the Administrator view.

#Configure Rules

Add forwarding rules for the listener ports of HTTPS, HTTP, and gRPC protocols. The load balancer will match the backend services based on these rules.

NOTE

Forwarding rules cannot be added for TCP and UDP protocols.

#Example Rule custom resource (CR)

```yaml
# alb-rule-demo.yaml
apiVersion: crd.alauda.io/v1
kind: Rule
metadata:
  labels:
    alb2.cpaas.io/frontend: alb-demo-00080
    alb2.cpaas.io/name: alb-demo
  name: alb-demo-00080-test
  namespace: cpaas-system
spec:
  backendProtocol: ""
  certificate_name: ""
  dslx:
    - type: METHOD
      values:
        - - EQ
          - POST
    - type: URL
      values:
        - - STARTS_WITH
          - /app-a
        - - STARTS_WITH
          - /app-b
    - type: PARAM
      key: group
      values:
        - - EQ
          - vip
    - type: HOST
      values:
        - - ENDS_WITH
          - .app.com
    - type: HEADER
      key: LOCATION
      values:
        - - IN
          - east-1
          - east-2
    - type: COOKIE
      key: uid
      values:
        - - EXIST
    - type: SRC_IP
      values:
        - - RANGE
          - "1.1.1.1"
          - "1.1.1.100"
  enableCORS: false
  priority: 4
  serviceGroup:
    services:
      - name: hello-world
        namespace: default
        port: 80
        weight: 100
```

  1. Required; indicates the Frontend to which this Rule belongs.
  2. Required; indicates the ALB to which this Rule belongs.
  3. Same as in Frontend.
  4. Same as in Frontend.
  5. The lower the number, the higher the priority.
  6. Same as in Frontend.

#dslx

dslx is a domain-specific language used to describe matching criteria.

For example, the rule below matches a request that satisfies all of the following criteria:

  • the URL starts with /app-a or /app-b
  • the method is POST
  • the URL parameter group is vip
  • the host matches *.app.com
  • the LOCATION header is east-1 or east-2
  • a cookie named uid exists
  • the source IP is in the range 1.1.1.1-1.1.1.100
```yaml
dslx:
  - type: METHOD
    values:
      - - EQ
        - POST
  - type: URL
    values:
      - - STARTS_WITH
        - /app-a
      - - STARTS_WITH
        - /app-b
  - type: PARAM
    key: group
    values:
      - - EQ
        - vip
  - type: HOST
    values:
      - - ENDS_WITH
        - .app.com
  - type: HEADER
    key: LOCATION
    values:
      - - IN
        - east-1
        - east-2
  - type: COOKIE
    key: uid
    values:
      - - EXIST
  - type: SRC_IP
    values:
      - - RANGE
        - "1.1.1.1"
        - "1.1.1.100"
```

#Creating Rule by using web console

  1. Go to Container Platform.

  2. Click on Network > Load Balancing in the left navigation bar.

  3. Click on the name of the load balancer.

  4. Click on the name of the listener port.

  5. Click Add Rule.

  6. Refer to the following descriptions to configure the relevant parameters.

    Internal Route Group
    • When the load balancing algorithm is Round Robin (RR), traffic is distributed to the ports of the internal routes in the order of the internal route group.
    • When the load balancing algorithm is Weighted Round Robin (WRR), internal routes with higher weight values have a higher probability of being selected; traffic is distributed to the ports of the internal routes according to the probability calculated from the configured weights.
    Tip: The probability is the ratio of the current weight value to the sum of all weight values.
    Rule: The criteria by which the load balancer matches backend services, including rule indicators and their values. Different rule indicators are combined with a logical AND.
    • Domain Name: Supports wildcard domains and exact domain names. At equal priority within the same rule, if both wildcard and exact domain name configurations exist, the exact domain name forwarding rule takes effect first.
    • URL: RegEx corresponds to URL regular expressions starting with /; StartsWith corresponds to URL prefixes starting with /.
    • IP: Equal corresponds to a specific IP address; Range corresponds to an IP address range.
    • Header: In addition to entering the key of the header, a matching rule must be set. Equal corresponds to a specific header value; Range corresponds to a range of header values; RegEx corresponds to a regular expression over the header value.
    • Cookie: In addition to entering the key of the cookie, a matching rule must be set. Equal corresponds to a specific cookie value.
    • URL Param: In the matching rule, Equal corresponds to a specific URL parameter value; Range corresponds to a URL parameter range.
    • Service Name: The name of a service that uses the gRPC protocol. When using gRPC, this item can be configured so that traffic is forwarded to the corresponding service based on the provided service name, for example: /helloworld.Greeter.
    Session Persistence: Always forwards specific access requests to the backend services of the aforementioned internal route group.
    Specific access requests refer to (choose one):
    • Source Address Hash: All access requests originating from the same IP address.
    • Cookie Key: Access requests carrying the specified cookie.
    • Header Name: Access requests carrying the specified header.
    URL Rewrite: Rewrites the accessed address to the address of the platform's backend service. This feature requires the URL StartsWith rule indicator to be configured, and the rewrite address (rewrite-target) must start with /.

    For example: with the domain name set to bar.example.com and the starting URL path set to /, enabling URL Rewrite with the rewrite address /test rewrites an access to bar.example.com to bar.example.com/test.
    Backend Protocol: The protocol used to forward access traffic to the backend service. For example, if forwarding to the backend Kubernetes or dex service, choose HTTPS.
    Redirection: Forwards access traffic to a new redirect address instead of the backend services of the internal route group.
    For example: when a page at the original address is upgraded or replaced, redirecting traffic to a new address avoids users receiving a 404 or 503 error page.
    • HTTP Status Code: The status code returned to the browser before it redirects to the new address.
    • Redirect Address: With a relative address (for example, /index.html), the destination of the forwarded traffic is load balancer address/index.html; with an absolute address (for example, https://www.example.com), the destination is the entered address.
    Rule Priority: Rule matching has 10 levels from 1 to 10, with 1 being the highest priority; the default priority is 5.
    When two or more rules match at the same time, the rule with the higher priority is applied; if priorities are equal, the system uses the default matching rule.
    Cross-Origin Resource Sharing (CORS): CORS is a mechanism that uses additional HTTP headers to tell the browser that a web application running at one origin (domain) is permitted to access specified resources on a server at a different origin. A cross-origin HTTP request is initiated when a resource requests another resource from a server with a different domain, protocol, or port than its own.
    Allowed Origins: Specifies the origins that are allowed access.
    • *: Allows requests from any origin.
    • Domain Name: Allows requests from the given domain.
    Allowed Headers: Specifies the HTTP request headers allowed in CORS to avoid unnecessary preflight requests and improve request efficiency. Common examples:
    • Origin: Indicates the origin of the request, i.e., the domain that sends the request.
    • Authorization: Specifies the authorization information for the request, usually for identification, such as Basic Authentication or a token.
    • Content-Type: Specifies the content type of the request/response, such as application/json or application/x-www-form-urlencoded.
    • Accept: Specifies the content types the client can accept, typically used when the client expects a specific type of response.
    Note: Other commonly used or custom request headers are not listed here; fill them in according to actual conditions.

  7. Click Add.
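The URL Rewrite behavior described above can be sketched as a Rule CR mirroring the bar.example.com example. This is illustrative only: the `rewrite_target` field name is an assumption (verify it against your ALB Rule CRD), and the match values come from the table's example:

```yaml
# alb-rule-rewrite-demo.yaml (illustrative)
apiVersion: crd.alauda.io/v1
kind: Rule
metadata:
  labels:
    alb2.cpaas.io/frontend: alb-demo-00080
    alb2.cpaas.io/name: alb-demo
  name: alb-demo-00080-rewrite
  namespace: cpaas-system
spec:
  dslx:
    - type: HOST
      values:
        - - EQ
          - bar.example.com
    - type: URL
      values:
        - - STARTS_WITH
          - /
  priority: 5
  rewrite_target: /test  # assumed field name for the rewrite-target address
  serviceGroup:
    services:
      - name: hello-world
        namespace: default
        port: 80
        weight: 100
```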

#Creating Rule by using the CLI

```shell
kubectl apply -f alb-rule-demo.yaml -n cpaas-system
```

#Logs and Monitoring

By combining visualized logs and monitoring data, issues or failures with the load balancer can be quickly identified and resolved.

#Viewing Logs

  1. Go to Administrator.

  2. In the left navigation bar, click on Network Management > Load Balancer.

  3. Click the name of the load balancer.

  4. In the Logs tab, view the logs of the load balancer's runtime from the container's perspective.

#Monitoring Metrics

NOTE

Monitoring services must be deployed in the cluster where the load balancer is located.

  1. Go to Administrator.

  2. In the left navigation bar, click on Network Management > Load Balancer.

  3. Click the name of the load balancer.

  4. In the Monitoring tab, view the metric trend information of the load balancer from the node's perspective.

    • Usage Rate: The real-time usage of CPU and memory by the load balancer on the current node.

    • Throughput: The overall incoming and outgoing traffic of the load balancer instance.

#Additional resources

  • ALB Monitoring