High Availability Deployment
For production environments, deploy the Connectors system in a high-availability (HA) configuration to ensure service continuity and fault tolerance.
TOC

- Key Steps Overview
- Configuring Replicas
- ConnectorsCore
- ConnectorsGit
- ConnectorsOCI
- ConnectorsMaven
- ConnectorsHarbor
- Components Without Workloads
- Built-in Pod Anti-Affinity
- Customizing Affinity Rules

Key Steps Overview
Configuring a high-availability Connectors deployment involves three steps:

- Set replicas ≥ 2: specify `spec.workloads[].replicas` on each Connectors component. At minimum, configure ConnectorsCore (api, controller-manager, proxy) and any plugin components you use.
- Rely on built-in anti-affinity: the system automatically adds `preferredDuringSchedulingIgnoredDuringExecution` pod anti-affinity rules, so replicas are spread across nodes with no extra configuration required.
- Customize affinity for multi-zone clusters (optional): override `spec.workloads[].template.spec.affinity` to enforce zone-level distribution with `requiredDuringSchedulingIgnoredDuringExecution` if needed.
More details on each step are provided below.
Configuring Replicas
You can increase the number of replicas for each workload to achieve high availability. This is done through the `workloads` field in the component `spec`. For production environments, we recommend at least 2 replicas per workload to ensure service continuity during node failures and rolling updates.
Below are specific examples for each major connector component:
ConnectorsCore
ConnectorsCore includes three main workloads: API server, controller manager, and proxy. For high availability, configure all three with multiple replicas:
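The exact resource schema depends on your installation, but as a sketch (the `apiVersion` and `kind` below are placeholders; the workload names api, controller-manager, and proxy come from the component itself):

```yaml
# Placeholder apiVersion/kind — look up the actual Connectors CRDs
# installed in your cluster before applying.
apiVersion: operator.example.com/v1alpha1
kind: ConnectorsCore
metadata:
  name: connectors-core
spec:
  workloads:
    - name: api
      replicas: 2
    - name: controller-manager
      replicas: 2
    - name: proxy
      replicas: 2
```

Apply the change with `kubectl apply -f` (or patch the existing resource) and wait for the rollout to complete.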
Once the change is rolled out, every workload of the connectors-core component runs with 2 replicas, except connectors-csi.
ConnectorsGit
ConnectorsGit runs a single plugin deployment for Git Server integration:
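A hedged sketch of the replica configuration (the `apiVersion`, `kind`, and workload name `git-plugin` are assumptions; check the CRD in your cluster for the actual schema):

```yaml
apiVersion: operator.example.com/v1alpha1   # placeholder
kind: ConnectorsGit                         # assumed kind
metadata:
  name: connectors-git
spec:
  workloads:
    - name: git-plugin   # assumed workload name
      replicas: 2
```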
Once the change is rolled out, the connectors-git plugin deployment runs with 2 replicas.
ConnectorsOCI
ConnectorsOCI runs a single plugin deployment that handles OCI registry integration:
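Following the same pattern, a sketch for the OCI plugin (the `apiVersion`, `kind`, and workload name `oci-plugin` are assumptions):

```yaml
apiVersion: operator.example.com/v1alpha1   # placeholder
kind: ConnectorsOCI                         # assumed kind
metadata:
  name: connectors-oci
spec:
  workloads:
    - name: oci-plugin   # assumed workload name
      replicas: 2
```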
Once the change is rolled out, the connectors-oci plugin deployment runs with 2 replicas.
ConnectorsMaven
ConnectorsMaven runs a single plugin deployment for Maven registry integration:
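A sketch for the Maven plugin, under the same caveats (placeholder `apiVersion`, assumed `kind` and workload name):

```yaml
apiVersion: operator.example.com/v1alpha1   # placeholder
kind: ConnectorsMaven                       # assumed kind
metadata:
  name: connectors-maven
spec:
  workloads:
    - name: maven-plugin   # assumed workload name
      replicas: 2
```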
Once the change is rolled out, the connectors-maven plugin deployment runs with 2 replicas.
ConnectorsHarbor
ConnectorsHarbor runs a single plugin deployment for Harbor-specific features:
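And a sketch for the Harbor plugin (again, placeholder `apiVersion`, assumed `kind` and workload name):

```yaml
apiVersion: operator.example.com/v1alpha1   # placeholder
kind: ConnectorsHarbor                      # assumed kind
metadata:
  name: connectors-harbor
spec:
  workloads:
    - name: harbor-plugin   # assumed workload name
      replicas: 2
```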
Once the change is rolled out, the connectors-harbor plugin deployment runs with 2 replicas.
Components Without Workloads
The other connector components do not have Deployment workloads and therefore do not require replica configuration.
Built-in Pod Anti-Affinity
The system includes built-in pod anti-affinity rules to ensure that replicas are distributed across different nodes. By default, the system uses preferredDuringSchedulingIgnoredDuringExecution with a weight of 100, which means the scheduler will try to place pods on different nodes when possible, but will still schedule them on the same node if no other options are available.
This default configuration ensures:
- Pods are spread across different nodes when possible
- Deployment remains schedulable even if the cluster has limited nodes
- Automatic failover capability when a node becomes unavailable
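In standard Kubernetes terms, the injected rule has roughly the following shape (the label selector here is an illustrative assumption; the system generates the real selector per workload):

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname   # prefer spreading across nodes
          labelSelector:
            matchLabels:
              app: connectors-api   # assumed; matches replicas of the same workload
```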
Customizing Affinity Rules
If the default affinity rules do not meet your requirements, you can override them through the workloads configuration. The template.spec.affinity field allows you to specify custom affinity rules.
For multi-zone clusters, you can configure zone-aware scheduling to spread pods across availability zones. The following example uses requiredDuringSchedulingIgnoredDuringExecution to enforce zone-level distribution, combined with preferredDuringSchedulingIgnoredDuringExecution to prefer node-level distribution within each zone:
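A sketch of such an override on one workload (the workload name and pod labels are assumptions; the affinity fields themselves are the standard Kubernetes scheduling API):

```yaml
spec:
  workloads:
    - name: api   # assumed workload name
      replicas: 2
      template:
        spec:
          affinity:
            podAntiAffinity:
              # Hard requirement: no two matching replicas in the same zone
              requiredDuringSchedulingIgnoredDuringExecution:
                - topologyKey: topology.kubernetes.io/zone
                  labelSelector:
                    matchLabels:
                      app: connectors-api   # assumed pod label
              # Soft preference: spread across nodes within each zone
              preferredDuringSchedulingIgnoredDuringExecution:
                - weight: 100
                  podAffinityTerm:
                    topologyKey: kubernetes.io/hostname
                    labelSelector:
                      matchLabels:
                        app: connectors-api
```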
This configuration ensures:
- Pods are strictly distributed across different availability zones (hard requirement)
- Within the same zone, pods are preferably scheduled on different nodes (soft requirement)
- The deployment is resilient to both zone-level and node-level failures