The Cluster Interconnect Controller is an extension component provided by Kube-OVN. It collects network information from different clusters and connects their networks by issuing routes. It supports interconnecting clusters whose network mode is Kube-OVN, so that Pods in those clusters can access each other.
The subnet CIDRs of different clusters must not overlap with each other.
There needs to be a set of machines, reachable over IP by the kube-ovn-controller of every cluster, on which the controllers that interconnect the clusters will be deployed.
Each cluster needs a set of machines that can reach the other clusters via IP; these will later serve as the gateway nodes.
This feature is only available for the default VPC; user-defined VPCs cannot use the interconnect feature.
Three deployment methods are available: Deployment (supported in platform v3.16.0 and later versions), Docker, and Containerd.
Note: This deployment method is supported in platform v3.16.0 and later versions.
Operation Steps
Execute the following command on the cluster Master node to obtain the install-ic-server.sh installation script.
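The download location of the script depends on your installation; the URL below is a hypothetical placeholder, not an actual address:

```bash
# Placeholder URL - substitute the address of your installation package server.
wget http://<package-server>/install-ic-server.sh
```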
Open the script file in the current directory and modify its parameters as needed. An example of a modified parameter configuration follows:
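The exact parameter names depend on the script version; the names below are illustrative assumptions, not the script's confirmed variables:

```bash
# Hypothetical parameter names for illustration only - use the ones
# actually present in install-ic-server.sh.
IMAGE="<kube-ovn-image>"                            # Kube-OVN image address
LOCAL_IP="192.168.65.1"                             # IP of the current node
NODE_IPS="192.168.65.1,192.168.65.2,192.168.65.3"   # IPs of all controller nodes
```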
Save the script file and execute it using the following command.
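For example, from the directory containing the script:

```bash
bash install-ic-server.sh
```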
Select three or more nodes in any cluster on which to deploy the Interconnect Controller. In this example, three nodes are prepared.
Choose any one of them as the Leader and execute the commands below according to your deployment method.
Note: Before configuration, check whether an `ovn` directory exists under `/etc`. If not, create it with `mkdir /etc/ovn`.
Commands for Docker deployment
Note: Execute `docker images | grep ovn` to obtain the Kube-OVN image address.
Command for the Leader node:
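A sketch modeled on the upstream Kube-OVN HA interconnect database deployment; the IP addresses are examples and `<kube-ovn-image>` is the image address obtained above:

```bash
# Start the interconnect DB on the Leader node.
# NODE_IPS lists all three controller nodes; LOCAL_IP is this node's IP.
docker run -d --name=ovn-ic-db --network=host --privileged \
  -v /etc/ovn/:/etc/ovn \
  -v /var/run/ovn:/var/run/ovn \
  -v /var/log/ovn:/var/log/ovn \
  -e LOCAL_IP="192.168.65.1" \
  -e NODE_IPS="192.168.65.1,192.168.65.2,192.168.65.3" \
  <kube-ovn-image> bash start-ic-db.sh
```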
Commands for the other two nodes:
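On each of the other two nodes, the same sketch applies with that node's own `LOCAL_IP` plus a `LEADER_IP` pointing at the Leader:

```bash
docker run -d --name=ovn-ic-db --network=host --privileged \
  -v /etc/ovn/:/etc/ovn \
  -v /var/run/ovn:/var/run/ovn \
  -v /var/log/ovn:/var/log/ovn \
  -e LOCAL_IP="192.168.65.2" \
  -e NODE_IPS="192.168.65.1,192.168.65.2,192.168.65.3" \
  -e LEADER_IP="192.168.65.1" \
  <kube-ovn-image> bash start-ic-db.sh
```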
Commands for Containerd deployment
Note: Execute `crictl images | grep ovn` to obtain the Kube-OVN image address.
Command for the Leader node:
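The Containerd equivalent of the Docker sketch above, again with example IPs and the image address obtained earlier:

```bash
# Start the interconnect DB on the Leader node via ctr.
ctr -n k8s.io run -d --net-host --privileged \
  --mount="type=bind,src=/etc/ovn/,dst=/etc/ovn,options=rbind:rw" \
  --mount="type=bind,src=/var/run/ovn,dst=/var/run/ovn,options=rbind:rw" \
  --mount="type=bind,src=/var/log/ovn,dst=/var/log/ovn,options=rbind:rw" \
  --env="LOCAL_IP=192.168.65.1" \
  --env="NODE_IPS=192.168.65.1,192.168.65.2,192.168.65.3" \
  <kube-ovn-image> ovn-ic-db bash start-ic-db.sh
```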
Commands for the other two nodes:
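As with Docker, the other two nodes add a `LEADER_IP` and use their own `LOCAL_IP`:

```bash
ctr -n k8s.io run -d --net-host --privileged \
  --mount="type=bind,src=/etc/ovn/,dst=/etc/ovn,options=rbind:rw" \
  --mount="type=bind,src=/var/run/ovn,dst=/var/run/ovn,options=rbind:rw" \
  --mount="type=bind,src=/var/log/ovn,dst=/var/log/ovn,options=rbind:rw" \
  --env="LOCAL_IP=192.168.65.2" \
  --env="NODE_IPS=192.168.65.1,192.168.65.2,192.168.65.3" \
  --env="LEADER_IP=192.168.65.1" \
  <kube-ovn-image> ovn-ic-db bash start-ic-db.sh
```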
On any control plane node of the global cluster, replace the parameters as described in the comments and execute the following command to create the ConfigMap resource.
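A sketch of the ConfigMap; the key names (`ic-db-host`, `ic-nb-port`, `ic-sb-port`) follow the Kube-OVN interconnect convention and are assumptions here, so adjust them to the parameters your platform documents:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ovn-ic
  namespace: kube-system
data:
  ic-db-host: "192.168.65.1,192.168.65.2,192.168.65.3"  # IPs of the interconnect controller nodes
  ic-nb-port: "6645"                                    # interconnect northbound DB port
  ic-sb-port: "6646"                                    # interconnect southbound DB port
```

Apply it with `kubectl apply -f ovn-ic.yaml`.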
Note: To ensure correct operation, the ConfigMap named `ovn-ic` on the global cluster must not be modified. If any parameter needs to be changed, delete the ConfigMap, reconfigure it correctly, and apply it again.
Add a cluster whose network mode is Kube-OVN to the cluster interconnect.
Prerequisites
The subnets created in the cluster, as well as its ovn-default and join subnets, must not conflict with any subnet CIDR of the clusters already in the interconnection group.
Operation Steps
In the left navigation bar, click Clusters > Cluster of clusters.
Click the name of the cluster to be added to the cluster interconnect.
In the upper right corner, click Options > Cluster Interconnect.
Click Join the cluster interconnect.
Select a gateway node for the cluster.
Click Join.
Update the gateway node information of a cluster that has joined a cluster interconnect group.
Operation Steps
In the left navigation bar, click Clusters > Cluster of clusters.
Click the name of the cluster whose gateway node information you want to update.
In the upper-right corner, click Options > Cluster Interconnect.
Click Update Gateway Node for the cluster whose gateway node information you want to update.
Reselect the gateway node for the cluster.
Click Update.
When a cluster that has joined a cluster interconnection group exits the interconnection, Pods in that cluster are disconnected from Pods in the other clusters.
Operation Steps
In the left navigation bar, click Clusters > Cluster of clusters.
Click the name of the cluster that you want to exit from the interconnect.
In the upper-right corner, click Options > Cluster Interconnect.
Click Exit cluster interconnection for the cluster you want to exit.
Enter the cluster name correctly.
Click Exit.
When a cluster is deleted without first exiting the interconnected cluster, residual data may remain on the controller. If you later attempt to use these nodes to create a cluster again and join the interconnected cluster, the join may fail. You can check the detailed error information in the controller (kube-ovn-controller) log at `/var/log/ovn/ovn-ic.log`. Some error messages may include:
Operation Steps
Have the cluster that is to be joined exit the interconnected cluster first.
Execute the cleanup script in the container or pod.
You can execute the cleanup script directly in either the ovn-ic-db container or the ovn-ic-controller pod. Choose one of the following methods:
Method 1: Execute in ovn-ic-db container
Enter the ovn-ic-db container and perform the cleanup operation with the following commands.
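For example, if the container is named `ovn-ic-db` as deployed above:

```bash
docker exec -it ovn-ic-db bash
```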
Then execute one of the following cleanup commands:
Execute the cleanup operation with the name of the original cluster. Replace <cluster-name> with the name of the original cluster:
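The platform's cleanup script is not reproduced here; as a hedged sketch, the operation amounts to removing the stale availability-zone record from the interconnect southbound database:

```bash
# Sketch: remove the stale availability zone named after the original cluster.
ovn-ic-sbctl destroy availability_zone <cluster-name>
```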
Execute the cleanup operation with the name of any node in the original cluster. Replace <node-name> with the name of any node in the original cluster:
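Again a hedged sketch of what the by-node cleanup amounts to in the interconnect southbound database:

```bash
# Sketch: locate the gateway record registered by a node of the original
# cluster, then delete it by the UUID the query returns.
ovn-ic-sbctl find gateway hostname=<node-name>
ovn-ic-sbctl destroy gateway <uuid-from-previous-output>
```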
Method 2: Execute in ovn-ic-controller pod
Enter the ovn-ic-controller pod and perform the cleanup operation with the following commands.
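A sketch assuming the controller runs as a Deployment named `ovn-ic-controller` in the `kube-system` namespace:

```bash
kubectl -n kube-system exec -it deployment/ovn-ic-controller -- bash
```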
Then execute one of the following cleanup commands:
Execute the cleanup operation with the name of the original cluster. Replace <cluster-name> with the name of the original cluster:
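The same hedged sketch as in Method 1:

```bash
# Sketch: remove the stale availability zone named after the original cluster.
ovn-ic-sbctl destroy availability_zone <cluster-name>
```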
Execute the cleanup operation with the name of any node in the original cluster. Replace <node-name> with the name of any node in the original cluster:
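Likewise, the by-node variant from Method 1:

```bash
# Sketch: find and delete the stale gateway record for the node.
ovn-ic-sbctl find gateway hostname=<node-name>
ovn-ic-sbctl destroy gateway <uuid-from-previous-output>
```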
Note: Steps 1 to 3 need to be performed on all business clusters that have joined the interconnected cluster.
Operation Steps
Delete the ConfigMap named ovn-ic-config in the business cluster with the following command.
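Assuming the ConfigMap lives in `kube-system`, as is conventional for Kube-OVN:

```bash
kubectl -n kube-system delete cm ovn-ic-config
```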
Exit the interconnected cluster through platform operations.
Enter the Leader Pod of ovn-central with the following command.
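The original command is not reproduced here; a sketch that relies on the `ovn-nb-leader=true` label Kube-OVN applies to the ovn-central leader Pod (verify the label on your version):

```bash
kubectl -n kube-system exec -it \
  $(kubectl -n kube-system get pod -l app=ovn-central,ovn-nb-leader=true \
      -o jsonpath='{.items[0].metadata.name}') -- bash
```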
Log in to the node where the controller is deployed and delete the controller.
Docker command:
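For example, for the `ovn-ic-db` container deployed earlier:

```bash
docker stop ovn-ic-db
docker rm ovn-ic-db
```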
Containerd command:
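The Containerd equivalent, a sketch for the container started via `ctr` earlier:

```bash
ctr -n k8s.io task kill ovn-ic-db
ctr -n k8s.io task delete ovn-ic-db
ctr -n k8s.io containers delete ovn-ic-db
```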
Delete the ConfigMap named ovn-ic in the global cluster with the following command.
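Assuming the ConfigMap was created in `kube-system` as shown earlier:

```bash
kubectl -n kube-system delete cm ovn-ic
```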
To configure a highly available cluster gateway after joining the cluster interconnection, perform the following steps:
Log in to the cluster that needs a high-availability gateway and execute the following command to change the `enable-ic` field to `false`.
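The original command is not shown here; assuming the setting lives in the `ovn-ic-config` ConfigMap referenced earlier in this document, a patch like the following could work:

```bash
kubectl -n kube-system patch cm ovn-ic-config --type merge \
  -p '{"data":{"enable-ic":"false"}}'
```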
Note: Changing the `enable-ic` field to `false` will disrupt the cluster interconnect until it is set back to `true`.
Modify the gateway node configuration by updating the `gw-nodes` field, separating the gateway nodes with commas, and change the `enable-ic` field back to `true`.
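Again assuming the `ovn-ic-config` ConfigMap holds these fields, a sketch in which `node-1,node-2` are example gateway node names:

```bash
kubectl -n kube-system patch cm ovn-ic-config --type merge \
  -p '{"data":{"gw-nodes":"node-1,node-2","enable-ic":"true"}}'
```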
Go to the Pod of `ovn-central` in the cluster and execute the `ovn-nbctl lrp-get-gateway-chassis {current cluster name}-ts` command to verify that the configuration has taken effect.
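For example, assuming the cluster is named `az1` (so the logical router port is `az1-ts`), run inside the ovn-central Pod:

```bash
# Lists the gateway chassis bound to the transit-switch port; all configured
# gateway nodes should appear, each with a priority.
ovn-nbctl lrp-get-gateway-chassis az1-ts
```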