#Global Cluster Disaster Recovery

#Overview

This solution is designed for disaster recovery scenarios involving the global cluster. The global cluster serves as the control plane of the platform and is responsible for managing other clusters. To ensure continuous platform service availability when the global cluster fails, this solution deploys two global clusters: a Primary Cluster and a Standby Cluster.

The disaster recovery mechanism is based on real-time synchronization of etcd data from the Primary Cluster to the Standby Cluster. If the Primary Cluster becomes unavailable due to a failure, services can quickly switch to the Standby Cluster.

#Supported Disaster Scenarios

  • Irrecoverable system-level failure of the Primary Cluster rendering it inoperable;
  • Failure of physical or virtual machines hosting the Primary Cluster, making it inaccessible;
  • Network failure at the Primary Cluster location resulting in service interruption;

#Unsupported Disaster Scenarios

  • Failures of applications deployed within the global cluster;
  • Data loss caused by storage system failures (outside the scope of etcd synchronization);

The roles of Primary Cluster and Standby Cluster are relative: the cluster currently serving the platform (the one DNS points to) is the Primary Cluster, and the other is the Standby Cluster. After a failover, these roles are swapped.

#Notes

  • This solution only synchronizes etcd data of the global cluster; it does not include data from registry, chartmuseum, or other components;

  • To facilitate troubleshooting and management, it is recommended to name nodes in a style like standby-global-m1, to indicate which cluster the node belongs to (Primary or Standby).

  • Disaster recovery of application data within the cluster is not supported;

  • Stable network connectivity is required between the two clusters to ensure reliable etcd synchronization;

  • If the clusters are based on heterogeneous architectures (e.g., x86 and ARM), use a dual-architecture installation package;

  • The following namespaces are excluded from etcd synchronization. If resources are created in these namespaces, users must back them up manually (a backup sketch follows this list):

    cpaas-system
    cert-manager
    default
    global-credentials
    cpaas-system-global-credentials
    kube-ovn
    kube-public
    kube-system
    nsx-system
    cpaas-solution
    kube-node-lease
    kubevirt
    nativestor-system
    operators
  • If both clusters are set to use built-in image registries, container images must be uploaded separately to each;

  • If the Primary Cluster deploys DevOps Eventing v3 (knative-operator) and instances thereof, the same components must be pre-deployed in the standby cluster.
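
For the excluded namespaces above, a minimal manual backup sketch could look like the following (assumptions: a cluster-admin kubeconfig, and that Secrets and ConfigMaps are the resource kinds to preserve; extend both lists to whatever you actually create there):

# A sketch only: dump selected resource kinds from the namespaces that etcd
# synchronization skips; adjust both the namespace and resource lists
for ns in cpaas-system cert-manager global-credentials; do
  kubectl get secrets,configmaps -n "$ns" -o yaml > "backup-${ns}.yaml"
done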

#Process Overview

  1. Prepare a unified domain name for platform access;
  2. Point the domain to the Primary Cluster's VIP and install the Primary Cluster;
  3. Temporarily switch DNS resolution to the standby VIP to install the Standby Cluster;
  4. Copy the etcd encryption config of the Primary Cluster to the nodes that will later become the control plane nodes of the Standby Cluster;
  5. Install and enable the etcd synchronization plugin;
  6. Verify sync status and perform regular checks;
  7. In case of failure, switch DNS to the standby cluster to complete disaster recovery.

#Required Resources

  • A unified domain name to serve as the Platform Access Address, plus the TLS certificate and private key for serving HTTPS on that domain;

  • A dedicated virtual IP address for each cluster — one for the Primary Cluster and another for the Standby Cluster;

    • Preconfigure the load balancer to route TCP traffic on ports 80, 443, 6443, 2379, and 11443 to the control-plane nodes behind the corresponding VIP.
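
Once the load balancer is in place, a quick way to confirm that the required ports are reachable through each VIP is a connectivity check like the one below (a sketch; assumes the nc utility is available, and <VIP> is a placeholder for each cluster's virtual IP):

# Verify the load balancer forwards all required TCP ports
for port in 80 443 6443 2379 11443; do
  nc -z -w 3 <VIP> "$port" && echo "port $port OK" || echo "port $port UNREACHABLE"
done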

#Procedure

#Step 1: Install the Primary Cluster

NOTES ON INSTALLING THE DR (DISASTER RECOVERY) ENVIRONMENT

While installing the primary cluster of the DR environment:

  • Document all of the parameters set while following the installation web UI guide; some options must be kept identical when installing the standby cluster.
  • A User-provisioned Load Balancer MUST be preconfigured to route traffic sent to the virtual IP. The Self-built VIP option is NOT available.
  • The Platform Access Address field MUST be a domain, while the Cluster Endpoint MUST be the virtual IP address.
  • Both clusters MUST be configured to use An Existing Certificate (the same one for both); request a valid certificate if necessary. The Self-signed Certificate option is NOT available.
  • When Image Repository is set to Platform Deployment, the Username and Password fields MUST NOT be empty, and the IP/Domain field MUST be set to the domain used as the Platform Access Address.
  • The HTTP Port and HTTPS Port fields of the Platform Access Address MUST be 80 and 443, respectively.
  • On the second page of the installation guide (Step: Advanced), the Other Platform Access Addresses field MUST include the virtual IP of the current cluster.

Refer to the following documentation to complete installation:

  • Prepare for Installation
  • Installing

#Step 2: Install the Standby Cluster

  1. Temporarily point the domain name to the standby cluster's VIP;

  2. Log into the first control plane node of the Primary Cluster and copy the etcd encryption config to all standby cluster control plane nodes:

    # Assume the primary cluster control plane nodes are 1.1.1.1, 2.2.2.2 & 3.3.3.3
    # and the standby cluster control plane nodes are 4.4.4.4, 5.5.5.5 & 6.6.6.6
    for i in 4.4.4.4 5.5.5.5 6.6.6.6  # Replace with standby cluster control plane node IPs
    do
      ssh "<user>@$i" "sudo mkdir -p /etc/kubernetes/"
      scp /etc/kubernetes/encryption-provider.conf "<user>@$i:/tmp/encryption-provider.conf"
      ssh "<user>@$i" "sudo install -o root -g root -m 600 /tmp/encryption-provider.conf /etc/kubernetes/encryption-provider.conf && rm -f /tmp/encryption-provider.conf"
    done
  3. Install the standby cluster in the same way as the primary cluster.
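
Before installing the standby cluster, it can be worth verifying that the encryption config copied in step 2 arrived intact on every standby control plane node; a checksum comparison sketch (assuming the same node IPs and user as in the copy loop above):

# All checksums printed below should be identical
sha256sum /etc/kubernetes/encryption-provider.conf
for i in 4.4.4.4 5.5.5.5 6.6.6.6; do
  ssh "<user>@$i" "sudo sha256sum /etc/kubernetes/encryption-provider.conf"
done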

NOTES FOR INSTALLING THE STANDBY CLUSTER

While installing the standby cluster of the DR environment, the following options MUST be set to the same values as in the primary cluster:

  • The Platform Access Address field.
  • All fields of Certificate.
  • All fields of Image Repository.
  • Important: ensure the image repository credentials and the admin user credentials match those set on the Primary Cluster.

Also MAKE SURE you have followed the NOTES ON INSTALLING THE DR (DISASTER RECOVERY) ENVIRONMENT in Step 1.

Refer to the following documentation to complete installation:

  • Prepare for Installation
  • Installing

#Step 3: Enable etcd Synchronization

  1. When applicable, configure the load balancer to forward port 2379 to control plane nodes of the corresponding cluster. ONLY TCP mode is supported; forwarding on L7 is not supported.

    INFO

    Port forwarding through a load balancer is not required. If direct access from the standby cluster to the active global cluster is available, specify the etcd addresses via Active Global Cluster ETCD Endpoints.

  2. Access the standby global cluster Web Console using its VIP, and switch to Administrator view;

  3. Navigate to Marketplace > Cluster Plugins, select the global cluster;

  4. Find etcd Synchronizer, click Install, configure parameters:

    • When not forwarding port 2379 through the load balancer, it is required to configure Active Global Cluster ETCD Endpoints correctly;
    • Use the default value of Data Check Interval;
    • Leave Print detail logs switch disabled unless troubleshooting.

Verify the sync Pod is running on the standby cluster:

kubectl get po -n cpaas-system -l app=etcd-sync
kubectl logs -n cpaas-system $(kubectl get po -n cpaas-system -l app=etcd-sync --no-headers | head -1) | grep -i "Start Sync update"

Once “Start Sync update” appears, recreate one of the pods to re-trigger sync of resources with ownerReference dependencies:

kubectl delete po -n cpaas-system $(kubectl get po -n cpaas-system -l app=etcd-sync --no-headers | head -1)

Check sync status:

mirror_svc=$(kubectl get svc -n cpaas-system etcd-sync-monitor -o jsonpath='{.spec.clusterIP}')
ipv6_regex="^[0-9a-fA-F:]+$"
if [[ $mirror_svc =~ $ipv6_regex ]]; then
  export mirror_new_svc="[$mirror_svc]"
else
  export mirror_new_svc=$mirror_svc
fi
curl $mirror_new_svc/check

Output explanation:

  • LOCAL ETCD missed keys: Keys that exist in the Primary Cluster but are missing from the standby. This is often caused by garbage collection due to resource ordering during sync; restart one etcd-sync Pod to fix it.
  • LOCAL ETCD surplus keys: Keys that exist only in the standby cluster. Confirm with your ops team before deleting these keys from the standby.

If the following components are installed, restart their services:

  • Log Storage for Elasticsearch:

    kubectl delete po -n cpaas-system -l service_name=cpaas-elasticsearch
  • Monitoring for VictoriaMetrics:

    kubectl delete po -n cpaas-system -l 'service_name in (alertmanager,vmselect,vminsert)'

#Disaster Recovery Process

  1. Restart Elasticsearch on the standby cluster, if necessary:

    # Copy installer/res/packaged-scripts/for-upgrade/ensure-asm-template.sh to /root first.
    # DO NOT skip this step
    
    # Switch to the root user if necessary
    sudo -i
    
    # Check whether Log Storage for Elasticsearch is installed on the global cluster
    _es_pods=$(kubectl get po -n cpaas-system | grep cpaas-elasticsearch | awk '{print $1}')
    if [[ -n "${_es_pods}" ]]; then
        # Ensure the index template; if the script returns a 401 error,
        # restart Elasticsearch, then execute the script again
        bash /root/ensure-asm-template.sh
    
        # Restart Elasticsearch
        xargs -r -t -- kubectl delete po -n cpaas-system <<< "${_es_pods}"
    fi
  2. Verify data consistency in the standby cluster (same check as in Step 3);

  3. Uninstall the etcd synchronization plugin;

  4. Remove port forwarding for 2379 from both VIPs;

  5. Switch the platform domain DNS to the standby VIP, which now becomes the Primary Cluster;

  6. Verify DNS resolution:

    kubectl exec -it -n cpaas-system deployments/sentry -- nslookup <platform access domain>
    # If not resolved correctly, restart coredns Pods and retry until success
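    # A sketch for restarting coredns (assumption: the Pods carry the
    # k8s-app=kube-dns label in the kube-system namespace; verify the label first)
    kubectl delete po -n kube-system -l k8s-app=kube-dns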
  7. Clear browser cache and access the platform page to confirm it reflects the former standby cluster;

  8. Restart the following services (if installed):

    • Log Storage for Elasticsearch:

      kubectl delete po -n cpaas-system -l service_name=cpaas-elasticsearch
    • Monitoring for VictoriaMetrics:

      kubectl delete po -n cpaas-system -l 'service_name in (alertmanager,vmselect,vminsert)'
    • cluster-transformer:

      kubectl delete po -n cpaas-system -l service_name=cluster-transformer
  9. If workload clusters send monitoring data to the Primary Cluster, restart warlock in each such workload cluster:

    kubectl delete po -n cpaas-system -l service_name=warlock
  10. On the original Primary Cluster, repeat the Enable etcd Synchronization steps to convert it into the new standby cluster.

#Routine Checks

Regularly check sync status on the standby cluster:

curl $(kubectl get svc -n cpaas-system etcd-sync-monitor -o jsonpath='{.spec.clusterIP}')/check

If any keys are missing or surplus, follow the instructions in the output to resolve them.
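
To run this check unattended, one option is a small wrapper script invoked from cron on a machine with kubectl access to the standby cluster. The script name, path, and log location below are assumptions to adapt:

#!/usr/bin/env bash
# /usr/local/bin/etcd-sync-check.sh: a hypothetical wrapper around the
# etcd-sync-monitor /check endpoint
ip=$(kubectl get svc -n cpaas-system etcd-sync-monitor -o jsonpath='{.spec.clusterIP}')
[[ $ip == *:* ]] && ip="[$ip]"   # bracket IPv6 addresses for curl
curl -s "$ip/check"

# Example crontab entry: run hourly and keep a log
# 0 * * * * /usr/local/bin/etcd-sync-check.sh >> /var/log/etcd-sync-check.log 2>&1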

#Uploading Packages

WARNING

When using violet to upload packages to a standby cluster, the parameter --dest-repo <VIP addr of standby cluster> must be specified.
Otherwise, the packages will be uploaded to the image repository of the primary cluster, preventing the standby cluster from installing or upgrading extensions.

Also be aware that either the authentication info of the standby cluster's image registry or the --no-auth parameter MUST be provided.

For details of the violet push subcommand, please refer to Upload Packages.
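
As an illustration only (the package filename is a placeholder; supply registry credentials instead of --no-auth if the standby registry requires authentication):

# Hypothetical example of uploading a package to the standby cluster
violet push <package-file> --dest-repo <VIP addr of standby cluster> --no-auth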