consists of a global cluster and one or more workload clusters. The global cluster must be upgraded before any workload clusters.
This document walks you through the upgrade procedure for the global cluster.
If the global cluster is configured with the global DR (Disaster Recovery) solution, strictly follow the global DR procedure. Otherwise, follow the Standard procedure.
Copy the core package to any control plane node of the global cluster. Extract the package and cd into the extracted directory.
If the global cluster uses the built-in registry, run:
If the global cluster uses an external registry, you also need to provide the registry address:
If you plan to upgrade the Operator and Cluster Plugin together during the global cluster upgrade, you can pre-push their images to the global cluster's registry in advance. For bulk upload instructions, see Push only images from all packages in a directory.
Uploading images typically takes about 2 hours, depending on your network and disk performance.
If your platform is configured for global disaster recovery (DR), remember that the standby global cluster also requires image upload. Be sure to plan your maintenance window accordingly.
When using violet to upload packages to a standby cluster, you must specify the parameter --dest-repo <VIP addr of standby cluster>.
Otherwise, the packages will be uploaded to the image repository of the primary cluster, and the standby cluster will be unable to install or upgrade extensions.
Also be aware that you MUST provide either the authentication information for the standby cluster's image registry or the --no-auth parameter.
For details of the violet push subcommand, please refer to Upload Packages.
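Putting the parameters above together, a push to the standby cluster's registry might look like the following sketch. The package filename and VIP address are placeholders; verify the exact flags against Upload Packages.

```shell
# Hypothetical example: push a package to the standby cluster's registry.
# ./core-package.tgz and 192.0.2.100 are placeholders for your package
# and the standby cluster's VIP.
violet push ./core-package.tgz \
  --dest-repo 192.0.2.100 \
  --no-auth   # or supply the standby registry's credentials instead
```

Without --dest-repo, the push would land in the primary cluster's registry, which is exactly the failure mode described above.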
After the image upload is complete, run the following command to start the upgrade process:
Wait for the script to finish before proceeding.
If you have already pre-pushed the Operator and Cluster Plugin images to the global cluster's registry, you can then follow Create only CRs from all packages in a directory. After running this command, wait about 10–15 minutes until upgrade notifications appear for functional components. You will then be able to upgrade the Operator and Cluster Plugin together as part of the subsequent upgrade steps.
When upgrading the global cluster, do not use the --clusters parameter to create CRs on workload clusters in the Create only CRs from all packages in a directory step.
Doing so may cause upgrade failures during subsequent workload cluster upgrades.
If you are upgrading from 3.18 or 4.0 and the directory contains the Build of TopoLVM package, you must remove it before running the Create only CRs from all packages in a directory step.
After completing that step, create the CRs for TopoLVM separately, and make sure to include the --target-catalog-source "platform" parameter.
If you are upgrading from 3.18 and the Build of TopoLVM is installed, you must back up and delete the related TopoLVM resources before proceeding with the upgrade.
Otherwise, the cluster upgrade will fail.
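The backup portion of that step might be sketched as follows. The resource type logicalvolumes.topolvm.io comes from upstream TopoLVM and is an assumption here; confirm the exact resources to back up and delete against the Remove TopoLVM documentation.

```shell
# Hypothetical backup of TopoLVM-related resources before deletion.
# logicalvolumes.topolvm.io is the upstream TopoLVM CRD; the resources
# shipped in your Build of TopoLVM may differ.
kubectl get logicalvolumes.topolvm.io -A -o yaml > topolvm-logicalvolumes-backup.yaml
kubectl get storageclass -o yaml > storageclasses-backup.yaml
```

Keep the backup files outside the cluster until the upgrade has been verified.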
Run the following commands on any control plane node of the cluster to be upgraded:
Then, run the following command on any control plane node of the global cluster:
If you are upgrading from 3.16 or 3.18 and the platform has Data Services installed, you must also upgrade the related extensions when upgrading the clusters.
For more information, see Upgrade Data Services.
global cluster to open its detail view. Review the available component updates in the dialog, and confirm to proceed.
Upgrading the Kubernetes version is optional. However, since service disruptions may occur regardless, we recommend including the Kubernetes upgrade to avoid multiple maintenance windows.
If Alauda Container Platform GitOps is installed in the global cluster and its pods are running abnormally after the upgrade, refer to Upgrading Alauda Container Platform GitOps.
If you are upgrading from 3.18, the Build of TopoLVM is installed, and you have already completed the Remove TopoLVM step, continue by running the following command on a control plane node of the cluster to be upgraded to upgrade TopoLVM:
After running the command, wait approximately 5–10 minutes. The TopoLVM component will be automatically upgraded and reflected in the web console.
The Alauda Container Platform Product Docs plugin provides access to product documentation within the platform. All help links throughout the platform will direct users to this documentation. If this plugin is not installed, clicking help links in the platform will result in 404 access errors.
Starting from ACP 4.0, the built-in product documentation has been separated into the Alauda Container Platform Product Docs plugin. If you are upgrading from version 3.18, you need to install this plugin by following these steps:
Navigate to Administrator.
In the left sidebar, click Marketplace > Cluster Plugins and select the global cluster.
Locate the Alauda Container Platform Product Docs plugin and click Install.
This step only ensures that the cluster enhancer plugin is installed. If the plugin is already installed, no further action is needed.
Navigate to Administrator.
In the left sidebar, click Marketplace > Cluster Plugins and select the global cluster.
Locate the Alauda Container Platform Cluster Enhancer plugin and click Install.
If Service Mesh v1 is installed, refer to the documentation before upgrading the workload clusters.
Follow your regular global DR inspection procedures to ensure that data in the standby global cluster is consistent with the primary global cluster.
If inconsistencies are detected, contact technical support before proceeding.
On both clusters, run the following command to ensure no Machine nodes are in a non-running state:
If any such nodes exist, contact technical support to resolve them before continuing.
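The Machine state check above could be sketched as follows, assuming kubectl access on a control plane node; the exact resource name and column layout may differ on your platform, so treat this as a starting point rather than the official command.

```shell
# List any Machine resources that are not in the Running state
# (the "machines" resource name is an assumption).
kubectl get machines -A --no-headers | grep -v -w Running \
  || echo "all machines Running"
```

An empty result (or the "all machines Running" line) means it is safe to continue; any listed machines should go to technical support first.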
global from the cluster dropdown. Perform the Upload images step on both the standby cluster and the primary cluster.
See Upload images in Standard procedure for details.
Accessing the standby cluster Web Console is required to perform the upgrade.
Before proceeding, verify that the ProductBase resource of the standby cluster is correctly configured with the cluster VIP under spec.alternativeURLs.
If not, update the configuration as follows:
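A hedged sketch of the check and update, using the ProductBase resource and spec.alternativeURLs field named above; the resource instance name and the VIP URL are placeholders for your environment:

```shell
# Inspect the current alternativeURLs on the standby cluster's ProductBase.
kubectl get productbase -o yaml | grep -A 5 alternativeURLs

# If the cluster VIP is missing, append it (hypothetical patch; replace
# <name> with the ProductBase resource name and <standby-vip> with the VIP).
kubectl patch productbase <name> --type=json \
  -p '[{"op":"add","path":"/spec/alternativeURLs/-","value":"https://<standby-vip>"}]'
```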
On the standby cluster, follow the steps in the Standard procedure to complete the upgrade.
After the standby cluster has been upgraded, proceed with the Standard procedure on the primary cluster.
Before reinstalling, verify that port 2379 is properly forwarded from both global cluster VIPs to their control plane nodes.
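The port-forwarding check can be sketched with a small shell function; the VIP addresses below are placeholders to replace with your global cluster VIPs.

```shell
# check_port prints "open" or "closed" for a TCP endpoint, using bash's
# built-in /dev/tcp redirection so no extra tools are required.
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "cat < /dev/null > /dev/tcp/${host}/${port}" 2>/dev/null; then
    echo open
  else
    echo closed
  fi
}

# Verify etcd port 2379 behind both global cluster VIPs
# (192.0.2.10 / 192.0.2.20 are placeholder addresses):
check_port 192.0.2.10 2379
check_port 192.0.2.20 2379
```

Both checks must report open before reinstalling; a closed result means the VIP is not forwarding 2379 to the control plane nodes.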
To reinstall:
global cluster. To verify installation:
Run the following to verify the synchronization status:
Explanation of output:
"LOCAL ETCD missed keys:" – Keys exist in the primary cluster but are missing in the standby. This often resolves after a pod restart.
"LOCAL ETCD surplus keys:" – Keys exist in the standby cluster but not in the primary. Review these with your operations team before deletion.