The platform consists of a global cluster and one or more workload clusters. The global cluster must be upgraded before any workload clusters.
This document walks you through the upgrade procedure for the global cluster.
If the global cluster is configured with the global DR (Disaster Recovery) solution, strictly follow the global DR procedure. Otherwise, follow the Standard procedure.
Copy the core package to any control plane node of the global cluster. Extract the package and `cd` into the extracted directory.
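A minimal sketch of the extract step, assuming the package is a gzipped tarball; the file and directory names below are placeholders for your actual core package:

```bash
# Placeholder file name; substitute the core package you copied to the node.
tar -xzf acp-core-package.tgz
cd acp-core-package/
```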
If the global cluster uses the built-in registry, run:
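A hypothetical invocation, assuming `violet push` accepts the platform address and credentials as flags; confirm the exact flags against the CLI help for your release:

```bash
# Flags below are assumptions; only `violet push` itself is confirmed by this guide.
violet push <package-file> \
  --platform-address <global-cluster-VIP> \
  --platform-username <admin-user> \
  --platform-password <admin-password>
```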
If the global cluster uses an external registry, you also need to provide the registry address:
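The same sketch for an external registry, supplying the registry address via `--dest-repo` (the flag this guide names for overriding the push destination); whether it applies here is an assumption:

```bash
# --dest-repo appears in this guide; the remaining flags are still assumptions.
violet push <package-file> \
  --platform-address <global-cluster-VIP> \
  --platform-username <admin-user> \
  --platform-password <admin-password> \
  --dest-repo <external-registry-address>
```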
If you plan to upgrade the Operator and Cluster Plugin together during the global cluster upgrade, you can pre-push their images to the global cluster's registry. For bulk upload instructions, see Push only images from all packages in a directory.
When using `violet push` on a standby global cluster, you must specify the `--dest-repo` parameter with the standby cluster VIP. For details, see Upload Packages in a Global DR Environment.
Uploading images typically takes about 2 hours, depending on your network and disk performance.
If your platform is configured for global disaster recovery (DR), remember that the standby global cluster also requires image upload. Be sure to plan your maintenance window accordingly.
After the image upload is complete, run the following command to start the upgrade process:
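The exact upgrade command ships with the core package; a hypothetical sketch, assuming the entry point is a script named `upgrade.sh` in the extracted directory:

```bash
# Script name is an assumption; run the upgrade entry point shipped in your package.
bash upgrade.sh
```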
Wait for the script to finish before proceeding.
If you have already pre-pushed the Operator and Cluster Plugin images to the global cluster's registry, you can then follow Create only CRs from all packages in a directory. After running this command, wait about 10–15 minutes until upgrade notifications appear for functional components. You will then be able to upgrade the Operator and Cluster Plugin together as part of the subsequent upgrade steps.
If the platform has Data Services installed, you must also upgrade the related extensions when upgrading clusters.
For details, see Upgrade Data Services.
Click the `global` cluster to open its detail view. Review the available component updates in the dialog, and confirm to proceed.
If the platform has Data Services installed, you must also upgrade the related extensions when upgrading clusters. For details, see Upgrade Data Services.
Click the `global` cluster to open its detail view. Review the available component updates shown in the dialog, and confirm to continue.
Upgrading the Kubernetes version is optional. However, since service disruptions may occur regardless, we recommend including the Kubernetes upgrade to avoid multiple maintenance windows.
If Alauda Container Platform GitOps is installed in the global cluster and its pods are running abnormally after the upgrade, refer to Upgrading Alauda Container Platform GitOps.
The Alauda Container Platform Product Docs plugin provides access to product documentation within the platform. All help links throughout the platform direct users to this documentation. If this plugin is not installed, clicking help links in the platform will result in 404 errors.
Starting from ACP 4.0, the built-in product documentation has been separated into the Alauda Container Platform Product Docs plugin. If you are upgrading from version 3.18, you need to install this plugin by following these steps:
Navigate to Administrator.
In the left sidebar, click Marketplace > Cluster Plugins and select the `global` cluster.
Locate the Alauda Container Platform Product Docs plugin and click Install.
Follow your regular global DR inspection procedures to ensure that data in the standby global cluster is consistent with the primary global cluster.
If inconsistencies are detected, contact technical support before proceeding.
On both clusters, run the following command to ensure no `Machine` nodes are in a non-running state:
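A sketch of the check; the `Machine` resource's exact name and scope may vary by release, so adjust as needed:

```bash
# Empty output means every Machine is Running; any line printed needs attention.
kubectl get machines --no-headers | grep -vw Running
```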
If any such nodes exist, contact technical support to resolve them before continuing.
Select `global` from the cluster dropdown. Perform the Upload images step on both the standby cluster and the primary cluster. See Upload images in the Standard procedure for details.
Accessing the standby cluster Web Console is required to perform the upgrade.
Before proceeding, verify that the ProductBase resource of the standby cluster is correctly configured with the cluster VIP under `spec.alternativeURLs`.
If not, update the configuration as follows:
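A hypothetical sketch of that update, assuming the ProductBase resource is named `base` and the VIP is expressed as an HTTPS URL (list the actual resources first with `kubectl get productbase`):

```bash
# Resource name "base" and the URL scheme are assumptions; add -n <namespace>
# if ProductBase is namespaced in your installation.
kubectl patch productbase base --type=merge \
  -p '{"spec":{"alternativeURLs":["https://<standby-cluster-VIP>"]}}'
```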
On the standby cluster, follow the steps in the Standard procedure to complete the upgrade.
After the standby cluster has been upgraded, proceed with the Standard procedure on the primary cluster.
Before reinstalling, verify that port `2379` is properly forwarded from both global cluster VIPs to their control plane nodes.
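A quick reachability check from a machine that can reach both VIPs; the etcd client port `2379` comes from this guide, and the VIP placeholders are yours to fill in:

```bash
# Both commands should report the port as open; a timeout indicates a
# forwarding problem that must be fixed before reinstalling.
nc -vz <primary-global-VIP> 2379
nc -vz <standby-global-VIP> 2379
```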
To reinstall:
Select the `global` cluster. To verify installation:
Run the following to verify the synchronization status:
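A sketch, assuming the sync component runs as a deployment in the `cpaas-system` namespace; the component name is an assumption, so adjust it to your installation:

```bash
# Surface the sync report lines explained below.
kubectl -n cpaas-system logs deploy/etcd-sync-server | grep -E "missed keys|surplus keys"
```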
Explanation of output:
- `"LOCAL ETCD missed keys:"` – Keys exist in the primary cluster but are missing in the standby. This often resolves after a pod restart.
- `"LOCAL ETCD surplus keys:"` – Keys exist in the standby cluster but not in the primary. Review these with your operations team before deletion.