Alauda Container Platform

#Release Notes


#4.1.0

#Features and Enhancements

#Immutable Infrastructure

Released:

  • Alauda Container Platform DCS Infrastructure Provider
  • Alauda Container Platform Kubeadm Provider

Both plugins have an Agnostic lifecycle and release asynchronously with Alauda Container Platform (ACP).

  • DCS Infrastructure Provider implements the Cluster API Infrastructure Provider interface, integrating with Huawei Datacenter Virtualization Solution (DCS).
  • Kubeadm Provider installs and configures the Kubernetes control plane and nodes on VMs provisioned by the infrastructure provider.

Together, these plugins enable fully automated cluster management on DCS.

Documentation is under preparation and will be published in the online documentation upon release.

#Machine Configuration

Released: Alauda Container Platform Machine Configuration. Lifecycle: Agnostic, releases asynchronously with ACP.

Machine Configuration manages file updates, systemd units, and SSH public keys across cluster nodes, providing:

  • MachineConfig CRD for writing configurations to hosts.
  • MachineConfigPool CRD for grouping and managing node configurations based on role labels.
  • Automatic creation of master and worker pools during cluster installation, with support for custom worker pools that inherit from the default worker pool.

The system continuously monitors configuration drift, marking affected nodes as Degraded until resolved.

For detailed feature information, see Machine Configuration.
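As an illustration, a MachineConfig that writes a sysctl file to every node in the worker pool might look like the sketch below. The apiVersion, role label, and field names here are assumptions made for illustration only; consult the Machine Configuration documentation for the actual schema.

```yaml
# Illustrative sketch only: apiVersion and field names are assumptions,
# not the verified Machine Configuration schema.
apiVersion: machineconfiguration.alauda.io/v1alpha1
kind: MachineConfig
metadata:
  name: 99-worker-custom-sysctl
  labels:
    machineconfiguration.alauda.io/role: worker   # targets the worker pool
spec:
  config:
    files:
      - path: /etc/sysctl.d/99-custom.conf
        contents: |
          net.ipv4.ip_forward = 1
```

The matching MachineConfigPool selects configurations by role label, so a custom worker pool inherits the default worker configuration and layers its own on top.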

#etcd Encryption

Released: Alauda Container Platform etcd Encryption Manager. Lifecycle: Agnostic, releases asynchronously with ACP.

Provides periodic rotation of etcd data encryption keys on workload clusters using AES-GCM for secrets and configmaps. Supports seamless re-encryption and key reload without workload disruption, maintaining backward compatibility with the last 8 keys.

See etcd Encryption for details.
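Under the hood this corresponds to the upstream Kubernetes EncryptionConfiguration consumed by kube-apiserver. The manager rotates the AES-GCM keys automatically, but a minimal sketch of the mechanism looks like this (key names and secrets are placeholders):

```yaml
# Sketch of the kube-apiserver encryption configuration the manager maintains.
# The newest key (listed first) encrypts new writes, while older keys remain
# available for decryption, preserving backward compatibility.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
      - configmaps
    providers:
      - aesgcm:
          keys:
            - name: key-2   # current key, used for new writes
              secret: <base64-encoded 32-byte key>
            - name: key-1   # previous key, kept for decryption
              secret: <base64-encoded 32-byte key>
      - identity: {}
```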

#Kubernetes Certificates Rotator

Released: Alauda Container Platform Kubernetes Certificates Rotator. Lifecycle: Agnostic, releases asynchronously with ACP.

Enables automated rotation of certificates used by Kubernetes components.

See Automated Rotate Kubernetes Certificates for details.

#Cluster Enhancement

Released: Alauda Container Platform Cluster Enhancer. Lifecycle: Aligned.

New features and changes:

  • etcd Backup: Migrated etcd backup functionality from Backup & Recovery to Cluster Enhancer due to differences in usage and implementation. Optimized the deployment method to avoid conflicts during configuration changes and upgrades.
  • Event Cleanup: Implements active cleanup of expired Kubernetes events externally to prevent accumulation in etcd, reducing etcd load and instability risks during restarts.
  • Certificate Monitoring: Replaces the previous certificate management functionality with certificate monitoring, providing alert rules and dashboards. The new approach is more efficient and also covers the loopback certificates used by kube-apiserver.
  • Cluster Monitoring Dashboard Migration: Migrates cluster monitoring resources from chart-cpaas-monitor to Cluster Enhancer.
  • Cluster Details Chart Migration: Switches monitoring charts in cluster details to custom monitoring dashboards.

#Chinese Language Pack

Chinese language support has been decoupled from the platform and released as the Chinese Language Pack plugin. The platform defaults to English upon installation; users can install this plugin if Chinese language support is needed.

#Create On-Premise Cluster

From ACP 4.1 onward, creating on-premise clusters supports only the latest Kubernetes version provided by the platform, replacing the previous option to choose among four Kubernetes versions.

#Logging

  • Upgraded ClickHouse to version v25.3.
  • Added Pod IP tags to application logs, allowing filtering by Pod IP.
  • Improved standard output log collection: The timestamp field now reflects the actual print time of the log, instead of the collection component's time, ensuring logs are displayed in the correct order.

#Monitoring

  • Upgraded Prometheus to version v3.4.2.
  • Custom variables now support three types: Constant, Custom, and Textbox.
    • Constant: A fixed value that does not change.
    • Custom: A value selected from a predefined list.
    • Textbox: A value entered manually by the user.
  • Stat Chart now supports Graph mode, which displays a trend curve for the selected period below the statistic.
  • Value Mapping now supports regular expressions and special values.
  • Panels can now be copied, allowing you to duplicate a panel within the current dashboard.

#Tenant Management

  • Project quotas now support custom resource quotas and storage class quotas.
  • The plugin provides new metrics: cpaas_project_resourcequota and cpaas_project_resourcequota_aggregated, which can be used to display project quotas in dashboards.
    • cpaas_project_resourcequota: Available in every cluster.
    • cpaas_project_resourcequota_aggregated: Available in the global cluster and aggregates data from all clusters.
  • Custom Roles now have additional restrictions, allowing assignment of permissions only within the corresponding role type:
    • Platform Role: Can assign all permissions.
    • Project Role: Can assign only permissions within the scope of the platform's preset project-admin-system role.
    • Namespace Role: Can assign only permissions within the scope of the platform's preset namespace-admin-system role.
    • Permissions that the current user does not possess cannot be assigned.

#Automated UID/GID Allocation Solution for Secure Pod Execution

In Kubernetes, you can configure a dedicated User ID (UID) and Group ID (GID) range for each namespace. When Pods are deployed in such a namespace, the platform automatically sets runAsUser and fsGroup for all containers in the Pod, based on the namespace's predefined security policies; the values are dynamically allocated from the UID/GID range authorized for that namespace.

Key Capabilities and Value:

  • Enhanced Security: By ensuring containers run as non-privileged users with restricted UID/GID ranges, this solution effectively mitigates security risks such as container escape and privilege escalation, adhering to the principle of least privilege.

  • Simplified Management: Developers no longer need to manually specify UID/GID in each container or Pod configuration. Once a namespace is configured, all Pods deployed within it automatically inherit and apply the correct security settings.

  • Ensured Compliance: This helps customers better meet internal security policies and external compliance requirements, ensuring containerized applications run in a regulated environment.

Usage:

  • Add the label security.cpaas.io/enabled to your Namespace.
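Opting in might look like the following Namespace manifest. Note that only the label key is documented above; the value "true" is an assumption for illustration:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    # Opt this namespace into automated UID/GID allocation.
    # The value "true" is assumed here; check the product docs for the expected value.
    security.cpaas.io/enabled: "true"
```

Pods subsequently created in team-a receive runAsUser and fsGroup values drawn from the namespace's authorized range without any per-Pod configuration.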

#Productized Solution Based on Argo Rollouts

Our productized solution, built on open-source Argo Rollouts, empowers users with fine-grained control over their release processes. By implementing progressive and controlled deployment strategies, it minimizes business interruptions or failures that can arise from launching new features or versions, significantly reducing release risks.

Key Capabilities and Value:

  • Blue-Green Deployment: Achieve zero-downtime updates by deploying new versions alongside your existing production environment. After thorough testing, traffic can be instantly or rapidly switched from the old version to the new one.

  • Canary Deployment: Gradually introduce new versions by directing a small percentage (e.g., 5%) of production traffic to them, allowing you to observe their performance and stability. Based on predefined metrics (such as error rates or latency), the system can automatically increase traffic or roll back if issues are detected, limiting the impact of potential problems.

  • Platform-Certified Argo Rollout Chart: You can download the community's open-source Argo Rollouts directly, or opt for the platform-certified version available through Alauda Cloud.
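As a sketch, a canary release that shifts 5% of traffic and then pauses for verification can be expressed with the standard Argo Rollouts Rollout resource (the names and image below are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo-app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: registry.example.com/demo-app:v2   # placeholder image
  strategy:
    canary:
      steps:
        - setWeight: 5            # send 5% of traffic to the new version
        - pause: {}               # wait for manual judgment or automated analysis
        - setWeight: 50
        - pause: {duration: 10m}  # observe before full promotion
```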

#Alauda Container Platform Registry: Deep Integration with Platform User Permissions

To provide a more secure and convenient image management experience, we've deepened the integration of our lightweight image registry with the platform's existing user permission system.

Key Capabilities and Value:

  • Deep Integration with Platform User System: The image registry is seamlessly integrated with the platform's user authentication and Role-Based Access Control (RBAC) mechanisms. Developers, testers, and administrators can directly use their existing platform credentials, eliminating the need for additional configuration or separate account management. The platform automatically maps user permissions within a Namespace to corresponding access rights for images in the registry. For example, users can only push and pull images in "specific Namespaces" they have access to.

  • Smoother Command-Line Operations: Supports image pull and push operations via CLI tools, significantly improving operational efficiency and convenience.

Warning:

  • Alauda Container Platform Registry can only be installed via this solution.

#KEDA-Based Auto-Scaling Solution

To enable applications to intelligently respond to actual load, our platform offers an auto-scaling solution built on KEDA (Kubernetes Event-driven Autoscaling).

Key Capabilities and Value:

  • Event-Driven Elastic Scaling: KEDA supports over 70 types of scalers to automatically scale applications (such as Deployments, Jobs, etc.). Beyond traditional CPU and memory utilization, it can monitor metrics like message queue length (e.g., Kafka, RabbitMQ), database connection counts, HTTP request rates, and custom metrics.

  • Platform-Certified KEDA Operator: Download and install the platform-certified version via Alauda Cloud.

Solutions:

  • The product provides two solutions: auto-scaling based on Prometheus metrics and scaling down to zero.
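For instance, scaling on a Prometheus metric with scale-to-zero can be sketched with a standard KEDA ScaledObject; the server address, query, and threshold below are placeholders:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: demo-app-scaler
spec:
  scaleTargetRef:
    name: demo-app            # Deployment to scale
  minReplicaCount: 0          # allow scaling down to zero when idle
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090   # placeholder
        query: sum(rate(http_requests_total{app="demo-app"}[2m]))
        threshold: "100"      # target metric value per replica
```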

#Cross-Cluster Application Disaster Recovery Solution (Alpha)

Our platform now offers a new GitOps-based Cross-Cluster Application Disaster Recovery (DR) solution, designed to significantly enhance application resilience and availability.

Key Capabilities and Value:

  • Diverse DR Models: Flexibly supports Active-Active (AA-DR) for global, high-concurrency demands; Active-Standby Dual-Active (AS-DR) to optimize resource utilization; and Active-Passive (AP-DR) to strictly ensure data consistency.

  • Automated GitOps Sync: Leverages the power of GitOps, combined with ApplicationSet and Kustomize, to automate cross-cluster configuration synchronization, ensuring the DR environment is always in a ready state.

  • Flexible Traffic Management: Utilizes third-party provided DNS and GSLB functionalities to achieve intelligent, health-check-driven traffic redirection and rapid failover, minimizing service disruption.

  • Multi-Dimensional Data Synchronization: The solution provides guidance on various methods, including database-level, storage-level, and application-level synchronization, to ensure eventual data consistency between clusters, laying the foundation for business continuity.

  • Streamlined Failover Process: Clearly defines detailed steps for failure detection, traffic redirection, state promotion, and service recovery, ensuring efficient and orderly failover during a disaster.
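The GitOps synchronization piece can be sketched with a standard Argo CD ApplicationSet using the cluster generator and per-cluster Kustomize overlays; the repository URL, labels, and paths below are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: demo-dr
spec:
  generators:
    - clusters:
        selector:
          matchLabels:
            dr-role: enabled      # placeholder label marking DR-participating clusters
  template:
    metadata:
      name: 'demo-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://git.example.com/demo.git   # placeholder repository
        targetRevision: main
        path: overlays/{{name}}   # per-cluster Kustomize overlay
      destination:
        server: '{{server}}'
        namespace: demo
      syncPolicy:
        automated:
          prune: true             # keep DR clusters continuously in sync
```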

Note:

  • The data synchronization aspect of the disaster recovery solution is closely tied to the customer's business characteristics and data volume, and thus can vary significantly. Therefore, actual implementation requires specific handling tailored to the customer's particular scenario.

#Comprehensive Upgrade of Dependency Components for Enhanced Stability and Security

This release includes upgrades to the following core components:

  • KubeVirt upgraded to v1.5.2

  • Ceph upgraded to 18.2.7

  • MinIO upgraded to RELEASE.2025-06-13T11-33-47Z

Other open-source dependencies have also been synchronized to their latest community versions, addressing numerous known issues and security vulnerabilities to ensure improved system stability and reliability.

#Enhanced Virtualization Features for Improved Business Continuity and Security

Based on practical application requirements in virtualization environments, this update introduces several key enhancements:

  • High Availability Migration: Automatically migrates virtual machines to healthy nodes during node failures, ensuring uninterrupted business continuity.

  • Virtual Machine Cloning: Quickly create new virtual machines from existing ones, significantly improving deployment efficiency.

  • Virtual Machine Templates: Supports converting existing virtual machines into templates for rapid, batch deployment of similarly configured environments.

  • Trusted Computing (vTPM): Virtual machines now support trusted computing features, enhancing overall security.

Detailed instructions and guidelines for these new features have been updated in the user manual.
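For example, enabling a persistent vTPM on a KubeVirt virtual machine is a one-field addition to the VM spec; the rest of this manifest is a minimal placeholder:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          tpm:
            persistent: true    # vTPM whose state survives restarts
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # placeholder image
```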

#Object Storage Service Based on COSI v2 Provides More Flexible and Efficient Storage Management

The Container Object Storage Interface (COSI) has been upgraded to version v2 (alpha), bringing enhancements such as:

  • Multi-Cluster Access: Supports simultaneous access to multiple different Ceph or MinIO storage clusters, enabling more efficient centralized management.

  • Fine-Grained Quota Management: Allows flexible quota settings for different storage categories, optimizing resource utilization.

  • Enhanced Permission Management: Supports the creation of various user access permissions, including read-write, read-only, and write-only modes.

  • Anonymous Access Support: The Ceph COSI Driver now supports anonymous access, enabling quick external HTTP program access through Ingress configuration.

#ALB Enters Maintenance Mode

WARNING

ALB will stop receiving new feature development and will only receive maintenance and security fixes. Version 4.1 adds support for ingress-nginx, and version 4.2 will add support for Envoy Gateway.

Future Plan:

  • For ingress users, directly use ingress-nginx
  • Future new features will only be supported on GatewayAPI
  • Avoid adopting ALB unless there are strong requirements for ALB-exclusive capabilities (e.g., project port allocation)

Currently unsupported ALB-exclusive features in GatewayAPI:

  • Port-based gateway instance allocation

  • Traffic forwarding based on IP and IP ranges

  • EWMA algorithm for load balancing

  • WAF usage

  • Rule-level monitoring views

#Using ingress-nginx to provide Ingress capabilities

Introduces the community's most widely adopted Ingress controller implementation to replace existing ALB-based Ingress scenarios.

Key Capabilities and Value:

  • Compatibility with mainstream community practices to avoid communication ambiguities

  • Ingress UI supports custom annotations for leveraging ingress-nginx's rich extension capabilities

  • Security issue fixes

#Kube-OVN Supports New High-Availability Multi-Active Egress Gateway

A new Egress mechanism addresses limitations of previous centralized gateways. The new Egress Gateway features:

  • Active-Active high availability via ECMP for horizontal throughput scaling

  • Sub-1s failover via BFD

  • Reuse of underlay mode, with Egress Gateway IPs decoupled from Nodes

  • Fine-grained routing control through Namespace selectors and Pod selectors

  • Flexible Egress Gateway scheduling via Node selectors

#Support for AdminNetworkPolicy-type cluster network policies

Kube-OVN now supports the community's new cluster network policy API, AdminNetworkPolicy. This API allows cluster administrators to enforce network policies without configuring them in each Namespace.

Advantages over previous cluster network policies:

  • Community-standard API (replacing internal APIs)

  • No conflicts with NetworkPolicy (higher priority than NetworkPolicy)

  • Supports priority settings

For more information: Red Hat Blog on AdminNetworkPolicy
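A minimal AdminNetworkPolicy, using the community API shape, could look like the following sketch (the namespace labels are placeholders):

```yaml
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: protect-production
spec:
  priority: 10                  # lower number = higher precedence among ANPs
  subject:
    namespaces:
      matchLabels:
        env: production         # placeholder label selecting target namespaces
  ingress:
    - name: deny-from-sandbox
      action: Deny              # cannot be overridden by per-namespace NetworkPolicy
      from:
        - namespaces:
            matchLabels:
              env: sandbox      # placeholder label
```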

#Deprecated and Removed Features

#Docker Runtime Removal

  • Previously, the platform provided Docker runtime images even though it was not the default runtime for new clusters. Starting with ACP 4.1, Docker runtime images will no longer be provided by default.

#Template Application Removal

  • The Application → Template Application entry point has been officially removed. Please ensure all Template Applications are converted to Helm Chart Applications before upgrading the platform.

#Fixed Issues

No fixed issues in this release.

#Known Issues

  • When upgrading from 3.18.0 to 4.0.1, running the upgrade script may fail with a timeout if the global cluster uses the built-in image registry with the protect-secret-files feature enabled. There is currently no available workaround.
  • Occasionally, a pod may become stuck in the Terminating state and cannot be deleted by containerd. Although containerd attempts the deletion operation, the container remains in a pseudo-running state: the containerd logs show "OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" while the container status appears as Running. This issue occurs very rarely in containerd 1.7.23 (observed only once) and affects only individual pods when triggered. If encountered, restart containerd as a temporary workaround. This is a known issue in the containerd community, tracked at https://github.com/containerd/containerd/issues/6080.
  • When upgrading clusters to Kubernetes 1.31, all pods in the cluster will restart. This behavior is caused by changes to the Pod spec fields in Kubernetes 1.31 and cannot be avoided. For more details, please refer to the Kubernetes issue: https://github.com/kubernetes/kubernetes/issues/129385