Deploy a pure-software data center load balancer (LB) by creating a highly available load balancer outside the cluster, which provides load balancing for multiple ALBs and ensures stable business operation. The LB can be configured for IPv4 only, IPv6 only, or IPv4/IPv6 dual stack.
Prepare two or more host nodes to serve as LB nodes. Installing Ubuntu 22.04 on the LB nodes is recommended, as it shortens the time during which the LB continues forwarding traffic to failed backend nodes.
Pre-install the following software on all host nodes of the external LB (this chapter takes two external LB host nodes as an example):
ipvsadm
Docker (20.10.7)
Ensure that the Docker service starts on boot on each host using the following command: sudo systemctl enable docker.service.
Ensure that the clock of each host node is synchronized.
Prepare the Keepalived image used to start the external LB service; the platform already contains this image. The image address has the following format: <image repository address>/tkestack/keepalived:<version suffix>. The version suffix may vary slightly between versions. You can obtain the image repository address and version suffix as described below. This document uses build-harbor.alauda.cn/tkestack/keepalived:v3.16.0-beta.3.g598ce923 as an example.
In the global cluster, execute kubectl get prdb base -o json | jq .spec.registry.address to obtain the image repository address.
In the directory where the installation package was extracted, execute cat ./installer/res/artifacts.json | grep keepalived -C 2 | grep tag | awk '{print $2}' | awk -F '"' '{print $2}' to obtain the version suffix.
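The extraction pipeline above can be illustrated on a small sample file. The JSON snippet below only mimics the structure of artifacts.json for demonstration purposes; the real file ships with the installation package and may contain additional fields.

```shell
# Build a sample snippet that mimics the relevant part of artifacts.json.
cat > /tmp/artifacts-sample.json <<'EOF'
    {
      "name": "keepalived",
      "repo": "tkestack/keepalived",
      "tag": "v3.16.0-beta.3.g598ce923"
    }
EOF
# Same pipeline as in the document: find the keepalived entry, keep the
# "tag" line, then cut out the value between the double quotes.
cat /tmp/artifacts-sample.json | grep keepalived -C 2 | grep tag \
  | awk '{print $2}' | awk -F '"' '{print $2}'
# prints: v3.16.0-beta.3.g598ce923
```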
Note: The following operations must be executed once on each external LB host node, and the hostnames of the host nodes must not be duplicated.
Add the following configuration information to the file /etc/modules-load.d/alive.kmod.conf.
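The exact module list is provided with the installation package; the fragment below is an assumed minimal sketch of the kernel modules Keepalived's IPVS mode typically requires, shown only to illustrate the file format. Use the values from your release if they differ.

```
# Illustrative /etc/modules-load.d/alive.kmod.conf (assumed module list)
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
```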
Add the following configuration information to the file /etc/sysctl.d/alive.sysctl.conf.
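The concrete sysctl values also come from the installation package; the fragment below is an assumed minimal set, based on the fact that VIP failover generally requires non-local bind and forwarding to be enabled for both IP stacks.

```
# Illustrative /etc/sysctl.d/alive.sysctl.conf (assumed values)
net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.ip_nonlocal_bind = 1
```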
Restart the node using the reboot command.
Create a folder for the Keepalived configuration file.
Modify the configuration items according to the comments in the following file and save it in the /etc/keepalived/ folder, naming the file alive.yaml.
Execute the following command in the business cluster to check the expiration date of the certificate in the configuration file and ensure that it is still valid. LB functionality becomes unavailable once the certificate expires; in that case, contact the platform administrator for a certificate update.
Copy the /etc/kubernetes/admin.conf file from the three Master nodes of the Kubernetes cluster to the /etc/keepalived/kubecfg folder on the external LB nodes, naming each copy with an index (e.g., kubecfg01.conf), and modify the apiserver node addresses in these three files to the actual node addresses of the Kubernetes cluster.
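The address edit in the copied kubeconfig files can be done with sed. The sketch below demonstrates it on a local sample file; the path /tmp/kubecfg and the address 192.168.1.11 are placeholders, so substitute /etc/keepalived/kubecfg and your actual Master node address.

```shell
# Create a sample kubeconfig fragment standing in for a copied admin.conf.
mkdir -p /tmp/kubecfg
cat > /tmp/kubecfg/kubecfg01.conf <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
EOF
# Point the kubeconfig at the actual Master node address (placeholder 192.168.1.11).
sed -i 's#server: https://.*#server: https://192.168.1.11:6443#' /tmp/kubecfg/kubecfg01.conf
grep 'server:' /tmp/kubecfg/kubecfg01.conf
# prints:     server: https://192.168.1.11:6443
```

Repeat the edit for kubecfg02.conf and kubecfg03.conf, pointing each at a different Master node so the LB can fail over between apiservers.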
Note: After the platform certificate is updated, this step needs to be executed again, overwriting the original files.
Check the validity of the certificates.
Copy /usr/bin/kubectl from a Master node of the business cluster to the LB node.
Execute chmod +x /usr/bin/kubectl to grant execution permission.
Execute the following commands to confirm certificate validity.
If the following results are returned, the certificate is valid.
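One way to inspect the expiry date of the client certificate embedded in a kubeconfig is shown below. This is an assumed approach using openssl (the platform's own check command may differ), and the example builds a throwaway self-signed certificate so that it is self-contained; against a real file you would point the grep at /etc/keepalived/kubecfg/kubecfg01.conf instead.

```shell
# Generate a throwaway certificate standing in for the kubeconfig's client cert.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
# Embed it in a minimal kubeconfig-style file.
printf 'users:\n- user:\n    client-certificate-data: %s\n' \
  "$(base64 -w0 /tmp/demo.crt)" > /tmp/demo-kubeconfig
# Extract the certificate from the kubeconfig and print its expiry date.
grep client-certificate-data /tmp/demo-kubeconfig | awk '{print $2}' \
  | base64 -d | openssl x509 -noout -enddate
# prints a line like: notAfter=...
```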
Upload the Keepalived image to the external LB node and run Keepalived using Docker.
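A hedged sketch of loading and starting the image is shown below. The exact run options for the tkestack keepalived image (volume mounts, entrypoint, archive file name) depend on the platform release, so treat the flags here as assumptions; Keepalived generally needs host networking and the NET_ADMIN/NET_BROADCAST/NET_RAW capabilities to manage the VIP.

```shell
# Load the image archive uploaded to the LB node (file name is a placeholder).
docker load -i keepalived.tar
# Start Keepalived with host networking and the capabilities it needs,
# mounting the configuration folder created earlier.
docker run -d --name keepalived --restart=always \
  --net=host --cap-add=NET_ADMIN --cap-add=NET_BROADCAST --cap-add=NET_RAW \
  -v /etc/keepalived:/etc/keepalived \
  build-harbor.alauda.cn/tkestack/keepalived:v3.16.0-beta.3.g598ce923
```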
Run the following command on the node accessing keepalived: sysctl -w net.ipv4.conf.all.arp_accept=1.
Run the command ipvsadm -ln to view the IPVS rules; you should see IPv4 and IPv6 rules for the ALBs of the business cluster.
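The output should resemble the following; the addresses, ports, and scheduler shown here are placeholders, and the real entries reflect your ALB VIPs and backend nodes.

```
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.100:80 rr
  -> 192.168.2.10:80              Masq    1      0          0
  -> 192.168.2.11:80              Masq    1      0          0
TCP  [fd00::100]:80 rr
  -> [fd00::10]:80                Masq    1      0          0
```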
Shut down the LB node where the VIP is located and verify that both the IPv4 and IPv6 VIPs migrate successfully to another node; migration typically completes within 20 seconds.
Use the curl command on a non-LB node to test whether communication with the VIP is normal.