Upgrade the global cluster
An ACP platform consists of a global cluster and one or more workload clusters. To move the platform to a new ACP Distribution Version, upgrade the global tier to the target Distribution Version first, and then upgrade the workload clusters to that same Distribution Version.
ACP 4.3 uses a CVO-based workflow for cluster upgrades. A typical global cluster upgrade includes artifact preparation, preflight checks, upgrade request, and status observation.
Before upgrading the global cluster to ACP 4.3, verify that every workload cluster is on a compatible Kubernetes version. For ACP 4.3, the compatible versions are 1.34, 1.33, 1.32, and 1.31. This prerequisite is separate from the broader third-party cluster management range.
This compatible-versions prerequisite applies whether or not the environment uses global DR. Global DR changes the procedure used to upgrade the global tier, but it does not change the requirement that workload clusters remain within the compatible Kubernetes version range before the global tier is upgraded to the target Distribution Version.
Global cluster upgrades follow the validated upgrade.sh-based procedure documented on this page. You can request the global-cluster upgrade from the Web Console, by updating `ClusterVersionShadow.spec.desiredUpdate`, or by using ACP CLI with `--cluster=global`. For the complete ACP CLI workflow and output interpretation, see Upgrading Clusters. For full command and flag syntax, see the ACP CLI Administrator Command Reference.
If the environment uses global DR, follow the Global DR Procedure. Otherwise, follow the standard workflow below.
Standard Workflow
Prepare upgrade artifacts
Run bash upgrade.sh from the extracted core package directory.
upgrade.sh prepares the resources required by the CVO-based workflow. Registry behavior depends on how the environment is configured.
Common parameters:
- Do not continue to the next step until image and plugin synchronization is complete.
- Use `--only-sync-image` only when you want artifact synchronization without further preparation.
- Use `--skip-sync-image` only when the required images and plugin artifacts have already been uploaded.
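As an illustrative sketch of the invocations described above (the package directory path is hypothetical; the flags are the ones documented on this page):

```shell
# Run from the extracted core package directory (path is hypothetical).
cd /path/to/extracted-core-package

# Full preparation: synchronize images/plugins and prepare the CVO resources.
bash upgrade.sh

# Synchronize artifacts only, without further preparation.
bash upgrade.sh --only-sync-image

# Skip artifact synchronization when images and plugins are already uploaded.
bash upgrade.sh --skip-sync-image
```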
Run preflight checks
Run preflight before requesting the upgrade:
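The preflight invocation itself is not preserved here. As a hedged sketch, assuming preflight results are recorded as conditions on the `ClusterVersionShadow` resource and the instance is named `version` (both assumptions), the results could be inspected with:

```shell
# List condition type/status/reason triples; resource instance name is assumed.
kubectl get clusterversionshadow version \
  -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'
```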
Preflight returns two parts:
The default check set includes:
- `ResourcePatchUpgradeable`
- `ClusterVersionUpgradeable`
- `VersionUpgradePath`
- `KubernetesVersionSupported`
- `DockerRuntimeUnsupported`
- `ClusterRunning`
- `ClusterModuleStable`
- `ControlPlaneStaticPodsPresent`
- `CustomEtcdBackupCronJobsAbsent`
- `CRIUpgradePodsAbsent`
- `ModuleInfoStable`
- `PlatformLicense`
Handle preflight blocks when needed
If `ResourcePatchUpgradeable` fails with `reason=UnexemptResourcePatches`, inspect the blocking ResourcePatch and add the required exemption annotation:
The default annotation key is `config.cpaas.io/exempt-for-ver`.
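For example, assuming the blocking ResourcePatch is named `example-patch` and the target version is `v4.3.0` (both hypothetical), the exemption could be added with:

```shell
# Annotate the blocking ResourcePatch using the documented exemption key.
kubectl annotate resourcepatch example-patch \
  config.cpaas.io/exempt-for-ver=v4.3.0
```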
If temporary troubleshooting requires specific checks to be disabled, configure the `cpaas-system/cvo-config` ConfigMap:
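The exact schema of this ConfigMap is not reproduced here. As a hypothetical sketch, assuming a key that lists checks to skip (the key name and value format below are assumptions, not the documented schema):

```yaml
# Hypothetical sketch: the data key below is an assumption, not the documented schema.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cvo-config
  namespace: cpaas-system
data:
  skipPreflightChecks: "DockerRuntimeUnsupported,CustomEtcdBackupCronJobsAbsent"
```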
Request the upgrade
After the preparation phase completes, choose one of the following entry points:
- Use the Web Console after the target version becomes available for the cluster.
- Patch `ClusterVersionShadow.spec.desiredUpdate` directly when you need to operate on the underlying CVO resource.
- Use ACP CLI to request the upgrade for `global` explicitly.
If you use the Web Console, the request follows a two-step flow:
- In Step 1, review the RPCH list.
- Click Acknowledge to continue to Step 2.
- In Step 2, review Current Version and Target Version. The page does not display a plugin list or a warning panel at this stage.
- The target version is determined by the prepared upgrade artifacts and cannot be selected manually in the Web Console.
- Click Start Upgrade.
- Confirm the action in the dialog.
- After confirmation, the page shows that the upgrade request has been submitted and the action enters an in-progress state.
kubectl example:
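As a sketch, assuming the resource instance is named `version` and the target is `v4.3.0` (both assumptions), the desired update can be set with a merge patch:

```shell
# Instance name and version value are assumptions.
kubectl patch clusterversionshadow version --type merge \
  -p '{"spec":{"desiredUpdate":{"version":"v4.3.0"}}}'
```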
You can also edit the resource directly:
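Again assuming the instance name `version` (an assumption):

```shell
kubectl edit clusterversionshadow version
```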
Minimum configuration:
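A minimal `spec.desiredUpdate` fragment might look like the following; the field layout and the version value are assumptions:

```yaml
spec:
  desiredUpdate:
    version: v4.3.0   # hypothetical target version
```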
ACP CLI example:
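The ACP CLI binary name and subcommand are not preserved on this page; the sketch below is hypothetical except for the documented `--cluster=global` flag:

```shell
# Hypothetical: binary name, subcommand, and version flag are assumptions.
acp upgrade --cluster=global --version v4.3.0
```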
Observe execution
Use the following command to inspect the overall status:
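Assuming the instance name `version` (an assumption), the overall status can be dumped with:

```shell
kubectl get clusterversionshadow version -o yaml
```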
Important status fields:
Focus on these conditions first:
Useful diagnostics:
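Generic kubectl diagnostics that are often useful during the rollout (the namespace and the instance name are assumptions):

```shell
# Recent events in the platform namespace.
kubectl get events -n cpaas-system --sort-by=.lastTimestamp | tail -n 20

# Condition details on the shadow resource; instance name is assumed.
kubectl describe clusterversionshadow version
```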
(Conditional) Upgrade Service Mesh Essentials
If Service Mesh v1 is installed, refer to the Alauda Service Mesh Essentials Cluster Plugin documentation before upgrading the workload clusters.
Post-upgrade
Global DR Procedure
Use this procedure when the environment includes both a primary global cluster and a standby global cluster. The DR-specific steps below are in addition to the standard CVO workflow.
Verify the DR environment before upgrading
Follow your regular global DR inspection procedures to ensure that data in the standby global cluster is consistent with the primary global cluster. For background on the DR topology and synchronization workflow, see Global Cluster Disaster Recovery.
If inconsistencies are detected, contact technical support before proceeding.
On both global clusters, run the following command to ensure no Machine nodes are in a non-running state:
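A sketch of such a check, assuming the Machine resources report a phase column (the exact column layout varies by version):

```shell
# List Machines across namespaces; any row whose phase is not Running needs attention.
kubectl get machines -A -o wide | grep -v -i running
```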
If any such nodes exist, resolve them before continuing.
Uninstall the etcd synchronization plugin from the standby global cluster
- Access the Web Console of the standby global cluster through its IP or VIP.
- Switch to Administrator view.
- Navigate to Marketplace > Cluster Plugins and select the `global` cluster.
- Find etcd Synchronizer and uninstall it.
- Wait for the uninstallation to complete before proceeding.
Prepare upgrade artifacts on both global clusters
Complete Prepare upgrade artifacts in the standard workflow on both the standby global cluster and the primary global cluster.
Use the same preparation mode on both clusters.
Upgrade the standby global cluster
If you will use the Web Console on the standby global cluster, verify that the standby cluster ProductBase includes the standby VIP in spec.alternativeURLs:
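To check, assuming the ProductBase instance is named `base` (an assumption):

```shell
# Confirm the standby VIP appears in the list of alternative URLs.
kubectl get productbase base -o jsonpath='{.spec.alternativeURLs}'
```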
After preparation completes, run the remaining steps from the standard workflow on the standby global cluster:
- Run preflight checks
- Request the upgrade
- Observe execution until the standby global cluster reaches the desired version
Upgrade the primary global cluster
After the standby global cluster has reached the desired version, run the remaining steps from the standard workflow on the primary global cluster:
- Run preflight checks
- Request the upgrade
- Observe execution until the primary global cluster reaches the desired version
Reinstall the etcd synchronization plugin and verify sync status
Before reinstalling the plugin, verify that port 2379 is forwarded correctly from both global-cluster VIPs to their control plane nodes when that forwarding mode is used. Port forwarding through a load balancer is not required if the standby global cluster can access the active global cluster directly.
To reinstall the plugin:
- Access the standby global cluster Web Console through its VIP and switch to Administrator view.
- Navigate to Marketplace > Cluster Plugins and select the `global` cluster.
- Find etcd Synchronizer, click Install, and configure the required parameters.
When you configure the plugin:
- When port `2379` is not forwarded through a load balancer, set Active Global Cluster ETCD Endpoints correctly.
- Use the default value of Data Check Interval.
- Leave Print detail logs disabled unless you are troubleshooting.
Verify the sync Pod is running on the standby global cluster:
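A sketch of the check; the `cpaas-system` namespace and the `etcd-sync` name prefix are assumptions based on names used elsewhere on this page:

```shell
kubectl get pods -n cpaas-system | grep etcd-sync
```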
Once `Start Sync update` appears in the sync Pod logs, recreate one of the Pods to trigger synchronization of resources with `ownerReference` dependencies:
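Assuming the Pods carry the `etcd-sync` name prefix in the `cpaas-system` namespace (both assumptions), one Pod can be recreated with:

```shell
# Delete one sync Pod; its controller recreates it and re-triggers synchronization.
kubectl delete pod -n cpaas-system \
  $(kubectl get pods -n cpaas-system -o name | grep etcd-sync | head -n 1 | cut -d/ -f2)
```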
Check sync status:
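The status command itself is not preserved here. Assuming the summary is printed in the sync Pod logs and the workload is named `etcd-sync` (both assumptions), it could be viewed with:

```shell
# Workload name is an assumption; filter for the status lines described below.
kubectl logs -n cpaas-system deploy/etcd-sync | grep -E 'missed keys|surplus keys'
```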
Output interpretation:
- `LOCAL ETCD missed keys`: Keys exist in the primary global cluster but are missing from the standby. This often resolves after restarting one `etcd-sync` Pod.
- `LOCAL ETCD surplus keys`: Keys exist in the standby global cluster but not in the primary. Review these with your operations team before deleting them.