The platform consists of a global cluster and one or more workload clusters. The global cluster must be upgraded before any workload clusters.
This document walks you through the upgrade procedure for the global cluster.
If the global cluster is configured with the global DR (Disaster Recovery) solution, strictly follow the global DR procedure. Otherwise, follow the Standard procedure.
Copy the upgrade package to any control plane node of the global cluster. Extract the package and cd into the extracted directory.
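For example, a minimal sketch, assuming the package is a tarball named upgrade-package.tgz (a placeholder; substitute the actual file name from your release):

```bash
# Extract the upgrade package and enter the resulting directory.
# "upgrade-package" is a hypothetical name; use your actual package name.
tar -xzf upgrade-package.tgz
cd upgrade-package/
```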
If the global cluster uses the built-in registry, run:
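The exact command ships with the package; as an illustrative sketch, assuming the package bundles an image upload script named upload.sh (a hypothetical name):

```bash
# Hypothetical script name; run the image upload script from the package root.
bash upload.sh
```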
If the global cluster uses an external registry, you also need to provide the registry address:
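Continuing the same sketch, with a hypothetical --registry flag standing in for the external registry address parameter:

```bash
# Both the script name and the --registry flag are assumptions;
# consult the package documentation for the real parameter name.
bash upload.sh --registry registry.example.com
```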
Uploading images typically takes about 2 hours, depending on your network and disk performance. If your platform uses global DR, remember that the standby global cluster also requires the image upload, and plan your maintenance window accordingly.
After the image upload is complete, run the following command to start the upgrade process:
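As a sketch, assuming the package provides an upgrade script named upgrade.sh (a hypothetical name; use the script documented for your release):

```bash
# Hypothetical script name; starts the global cluster upgrade.
bash upgrade.sh
```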
Wait for the script to finish before proceeding.
Click the global cluster to open its detail view. Review the available component updates shown in the dialog, and confirm to continue.
Upgrading the Kubernetes version is optional. However, since service disruptions may occur during the upgrade regardless, we recommend including the Kubernetes upgrade to avoid scheduling a second maintenance window.
Check whether any Machine nodes are in a non-running state. If any such nodes exist, contact technical support to resolve them before continuing.
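A minimal sketch for spotting unhealthy nodes, assuming they are visible through the standard Kubernetes node API (the platform may also report Machine status in its UI):

```bash
# Print any node whose status column is not exactly "Ready".
kubectl get nodes --no-headers | awk '$2 != "Ready" {print}'
```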
Select global from the cluster dropdown. Follow the same procedure as described in the Standard procedure section to upgrade the standby global cluster first.
After the standby is upgraded, follow the same Standard procedure to upgrade the primary global cluster.
Before reinstalling, verify that port 2379 is properly forwarded from both global cluster VIPs to their control plane nodes.
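A quick reachability sketch using nc (the VIP addresses are placeholders):

```bash
# Replace the placeholders with the primary and standby global cluster VIPs.
for vip in 192.0.2.10 192.0.2.20; do
  # -z: probe without sending data; -w 3: three-second timeout.
  if nc -z -w 3 "$vip" 2379; then
    echo "$vip:2379 reachable"
  else
    echo "$vip:2379 NOT reachable"
  fi
done
```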
To reinstall, run the synchronization component's installation steps again on the global cluster. To verify the installation:
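A hypothetical check, assuming the synchronization component runs as pods labeled app=etcd-sync in the cpaas-system namespace (both names are assumptions; adjust to your deployment):

```bash
# All listed pods should be Running; the label and namespace are assumed names.
kubectl -n cpaas-system get pods -l app=etcd-sync
```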
Run the following to verify the synchronization status:
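For illustration, assuming the checker writes its report to the synchronization pod's logs (the actual command depends on how the component is deployed):

```bash
# Pull the missed/surplus key report from recent checker output.
# The label and namespace are the same assumptions as above.
kubectl -n cpaas-system logs -l app=etcd-sync --tail=200 | grep -E 'LOCAL ETCD (missed|surplus) keys'
```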
Explanation of output:
"LOCAL ETCD missed keys:"
– Keys exist in the primary cluster but are missing in the standby. This often resolves after a pod restart."LOCAL ETCD surplus keys:"
– Keys exist in the standby cluster but not in the primary. Review these with your operations team before deletion.