#Understanding ALB

ALB (Another Load Balancer) is a Kubernetes Gateway powered by OpenResty with years of production experience from Alauda.


#Core components

  • ALB Operator: An operator that manages the lifecycle of ALB instances. It watches ALB CRs and creates and updates ALB instances for different tenants.
  • ALB Instance: An ALB instance consists of an OpenResty process that acts as the data plane and a Go controller that acts as the control plane. The Go controller watches various CRs (Ingress, Gateway, Rule, etc.) and converts them into ALB-specific DSL rules. OpenResty then uses these DSL rules to match and process incoming requests.

#Quick Start

#Deploy the ALB Operator

  1. Create a cluster.
  2. Add the Helm repository and locate the chart:
     helm repo add alb https://alauda.github.io/alb/
     helm repo update
     helm search repo | grep alb
  3. Install the ALB operator:
     helm install alb-operator alb/alauda-alb2
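
Optionally, verify that the operator is running; a minimal check, assuming the operator pods carry "alb" in their names:

kubectl get pods -A | grep alb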

#Deploy an ALB Instance

cat <<EOF | kubectl apply -f -
apiVersion: crd.alauda.io/v2beta1
kind: ALB2
metadata:
    name: alb-demo
    namespace: kube-system
spec:
    address: "172.20.0.5"  # the ip address of node where alb been deployed
    type: "nginx"
    config:
        networkMode: host
        loadbalancerName: alb-demo
        projects:
        - ALL_ALL
        replicas: 1
EOF

#Run a demo application

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  labels:
    k8s-app: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: hello-world
  template:
    metadata:
      labels:
        k8s-app: hello-world
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: hello-world
        image: docker.io/crccheck/hello-world:latest
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  labels:
    k8s-app: hello-world
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8000
  selector:
    k8s-app: hello-world
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world
            port:
              number: 80
EOF

Now you can access the app via curl http://${ip}, where ${ip} is the address configured in the ALB2 resource (172.20.0.5 in the example above).

#ALB Common Concepts

The following sections define common concepts used by ALB.

#Auth

Auth is a mechanism that performs authentication before a request reaches the actual service. It allows you to handle authentication at the ALB level uniformly, without implementing authentication logic in each backend service.

Learn more about ALB Auth.

#Network Mode

An ALB instance can be deployed in one of two modes: host network mode or container network mode.

#Host Network Mode

In this mode the ALB instance directly uses the node's network stack, sharing IP addresses and ports with the node.

The load balancer instance binds directly to the node's ports, with no port mapping or other container-network encapsulation.

NOTE

To avoid port conflicts, only one ALB instance is allowed to be deployed on a single node.

In host network mode, the ALB instance listens on all of the node's NICs by default.

#Advantages:
  1. Best network performance.
  2. Can be accessed via the node's IP address.
#Disadvantages:
  1. Only one ALB instance can be deployed on a single node.
  2. Ports might conflict with other processes.

#Container Network Mode

Unlike host network mode, container network mode deploys ALB using container networking.

#Advantages:
  1. Multiple ALB instances can be deployed on a single node.
  2. ALB integrates with MetalLB, which can provide a VIP for the ALB.
  3. Ports will not conflict with other processes.
#Disadvantages:
  1. Slightly lower network performance.
  2. The ALB must be accessed through a LoadBalancer Service.
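
For illustration, a minimal ALB2 sketch for container network mode, assuming the same CRD fields as the host network example in the Quick Start and that networkMode accepts container (the name alb-container-demo and the replica count are hypothetical):

apiVersion: crd.alauda.io/v2beta1
kind: ALB2
metadata:
    name: alb-container-demo
    namespace: kube-system
spec:
    type: "nginx"
    config:
        networkMode: container      # container networking instead of the node's network stack
        loadbalancerName: alb-container-demo
        projects:
        - ALL_ALL
        replicas: 2                 # multiple instances can share a node in this mode

In this mode, the instances are reached through a LoadBalancer Service (for example, one provided by MetalLB) rather than through a node IP.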

#Frontend

We define a resource called Frontend (abbreviated as FT), which declares the ports that an ALB should listen on.

Each frontend corresponds to a listening port on the load balancer (LB). A Frontend is associated with the ALB via labels.

apiVersion: crd.alauda.io/v1
kind: Frontend
metadata:
  labels:
    alb2.cpaas.io/name: alb-demo   # 1
  name: alb-demo-00080             # 2
  namespace: cpaas-system
spec:
  backendProtocol: "http"
  certificate_name: ""             # 3
  port: 80
  protocol: http                   # 4
  serviceGroup:                    # 5
    services:
      - name: hello-world
        namespace: default
        port: 80
        weight: 100                # 6
  1. Required; indicates the ALB instance to which this Frontend belongs.
  2. Format: $alb_name-$port.
  3. Format: $secret_ns/$secret_name.
  4. Protocol of this Frontend itself.
    • http|https|grpc|grpcs for L7 proxy.
    • tcp|udp for L4 proxy.
  5. For L4 proxy, serviceGroup is required. For L7 proxy, serviceGroup is optional. When a request arrives, ALB first tries to match it against the rules associated with this Frontend. Only if the request matches no rule does ALB forward it to the default serviceGroup specified in the Frontend configuration.
  6. The weight configuration applies to the Round Robin and Weighted Round Robin scheduling algorithms.
NOTE

ALB watches Ingress resources and automatically creates Frontends or Rules. Their spec.source field is defined as follows:

  1. spec.source.type currently only supports ingress.
  2. spec.source.name is the Ingress name.
  3. spec.source.namespace is the Ingress namespace.
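
For illustration, a Frontend generated from an Ingress carries a source block like the following sketch (field names taken from the list above; the Ingress name and namespace are hypothetical):

spec:
  source:
    type: ingress
    name: hello-world
    namespace: default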

#Additional resources

  • L4/L7 timeout
  • Keepalive

#Rules

We define a resource called Rule, which describes how an ALB instance should handle a layer 7 request.

Complex traffic matching and distribution patterns can be configured with Rules. When traffic arrives, ALB matches it against these rules, forwards it to the corresponding backend, and can apply additional functions such as CORS and URL rewriting.

apiVersion: crd.alauda.io/v1
kind: Rule
metadata:
  labels:
    alb2.cpaas.io/frontend: alb-demo-00080   # 1
    alb2.cpaas.io/name: alb-demo             # 2
  name: alb-demo-00080-test
  namespace: kube-system
spec:
  backendProtocol: ""
  certificate_name: ""
  dslx:
    - type: METHOD
      values:
        - - EQ
          - POST
    - type: URL
      values:
        - - STARTS_WITH
          - /app-a
        - - STARTS_WITH
          - /app-b
    - type: PARAM
      key: group
      values:
        - - EQ
          - vip
    - type: HOST
      values:
        - - ENDS_WITH
          - .app.com
    - type: HEADER
      key: LOCATION
      values:
        - - IN
          - east-1
          - east-2
    - type: COOKIE
      key: uid
      values:
        - - EXIST
    - type: SRC_IP
      values:
        - - RANGE
          - "1.1.1.1"
          - "1.1.1.100"
  enableCORS: false
  priority: 4             # 5
  serviceGroup:           # 6
    services:
      - name: hello-world
        namespace: default
        port: 80
        weight: 100
  1. Required; indicates the Frontend to which this Rule belongs.
  2. Required; indicates the ALB to which this Rule belongs.
  3. Same as in Frontend.
  4. Same as in Frontend.
  5. The lower the number, the higher the priority.
  6. Same as in Frontend.

#dslx

dslx is a domain-specific language used to describe matching criteria.

For example, the rule below matches a request that satisfies all of the following criteria:

  • the URL starts with /app-a or /app-b
  • the method is POST
  • the URL parameter group is vip
  • the host matches *.app.com
  • the LOCATION header is east-1 or east-2
  • a cookie named uid exists
  • the source IP is in the range 1.1.1.1-1.1.1.100
dslx:
  - type: METHOD
    values:
      - - EQ
        - POST
  - type: URL
    values:
      - - STARTS_WITH
        - /app-a
      - - STARTS_WITH
        - /app-b
  - type: PARAM
    key: group
    values:
      - - EQ
        - vip
  - type: HOST
    values:
      - - ENDS_WITH
        - .app.com
  - type: HEADER
    key: LOCATION
    values:
      - - IN
        - east-1
        - east-2
  - type: COOKIE
    key: uid
    values:
      - - EXIST
  - type: SRC_IP
    values:
      - - RANGE
        - "1.1.1.1"
        - "1.1.1.100"

#Project Isolation

Rules are project-isolated by default: each user can only see the rules belonging to their own projects.

#Project Mode

An ALB can be shared by multiple projects, and these projects can control this ALB. All ports of the ALB are visible to these projects.

#Port Project Mode

Individual ports of an ALB can belong to different projects; this deployment mode is called Port Project Mode. The administrator specifies the port segment that each project can use. Users of a project can only create and view ports within that segment.

#Relationship between ALB, ALB Instance, Frontend/FT, Rule, Ingress, and Project

LoadBalancer is a key component in modern cloud-native architectures, serving as an intelligent traffic router and load balancer.

To understand how ALB works in a Kubernetes cluster, we need to understand several core concepts and their relationships:

  • ALB itself
  • Frontend (FT)
  • Rules
  • Ingress resources
  • Projects

These components work together to enable flexible and powerful traffic management capabilities.

The following describes how these concepts work together and what roles they play in the request chain. Detailed introductions to each concept are covered in other articles.

In a typical request chain:

  1. A client sends an HTTP/HTTPS (or other protocol) request, which eventually arrives at an ALB pod; that pod (an ALB instance) starts handling the request.
  2. The ALB instance finds a rule that matches the request.
  3. If needed, it modifies, redirects, or rewrites the request according to the rule.
  4. It selects one pod IP from the services configured in the rule and forwards the request to that pod.

#Ingress

Ingress is a Kubernetes resource that describes which requests should be routed to which Service.

#Ingress Controller

A program that understands Ingress resources and proxies requests to Services.

#ALB

ALB is an Ingress controller.

In a Kubernetes cluster, we use the alb2 resource to operate an ALB. You can run kubectl get alb2 -A to view all the ALBs in the cluster.

ALBs are created manually by users. Each ALB has its own IngressClass. When you create an Ingress, you can use the .spec.ingressClassName field to indicate which Ingress controller should handle it.
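
For illustration, an Ingress that targets the alb-demo ALB from the Quick Start might look like the following sketch (it assumes the ALB's IngressClass is also named alb-demo; check the actual name with kubectl get ingressclass):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
spec:
  ingressClassName: alb-demo   # handled by the alb-demo ALB instead of the default controller
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world
            port:
              number: 80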

#ALB Instance

An ALB is also a Deployment (a group of pods) running in the cluster. Each pod is called an ALB instance.

Each ALB instance handles requests independently, but all instances share Frontend (FT), Rule, and other configurations belonging to the same ALB.

#ALB-Operator

ALB-Operator, a component deployed in the cluster by default, is the operator for ALB. It creates, updates, and deletes the Deployment and other related resources for each ALB according to the ALB resource.

#Frontend (abbreviation: FT)

FT is a resource defined by ALB itself. It represents the ports that an ALB instance listens on.

An FT can be created by the ALB leader or manually by a user.

Cases where the ALB leader creates an FT:

  1. If an Ingress has a certificate, FT 443 (HTTPS) is created (see the sketch below).
  2. If an Ingress has no certificate, FT 80 (HTTP) is created.
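
For illustration, a minimal HTTPS Frontend sketch based on the example above, with certificate_name in the $secret_ns/$secret_name format described earlier (the TLS secret name my-tls is hypothetical):

apiVersion: crd.alauda.io/v1
kind: Frontend
metadata:
  labels:
    alb2.cpaas.io/name: alb-demo
  name: alb-demo-00443                       # $alb_name-$port
  namespace: cpaas-system
spec:
  backendProtocol: "http"
  certificate_name: "cpaas-system/my-tls"    # $secret_ns/$secret_name
  port: 443
  protocol: https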

#RULE

RULE is a resource defined by ALB itself. It plays the same role as an Ingress, but is more specific. A RULE is uniquely associated with an FT.

A RULE can be created by the ALB leader or manually by a user.

Cases where the ALB leader creates a RULE:

  1. Syncing an Ingress to RULEs.

#ALB Leader

Among multiple ALB instances, one is elected as the leader. The leader is responsible for:

  1. Translating Ingresses into Rules. A Rule is created for each path in an Ingress.
  2. Creating the FTs needed by Ingresses. For example, if an Ingress has a certificate, FT 443 (HTTPS) is created; if it has no certificate, FT 80 (HTTP) is created.

#Project

From the perspective of ALB, Project is a set of namespaces.

You can configure one or more Projects on an ALB. When the ALB leader translates Ingresses into Rules, it ignores Ingresses in namespaces that do not belong to those Projects.
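
For illustration, the Quick Start example used projects: [ALL_ALL], which exposes the ALB to all projects. A sketch that limits an ALB to specific projects, assuming projects named project-a and project-b exist and that project names can be listed directly in this field, might look like:

spec:
  config:
    projects:
    - project-a
    - project-b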

#Additional resources

  • Configure a Load Balancer