For project personnel to fully use the virtualization features of the container platform, the platform administrator must perform the following operations to prepare the virtualization environment.
Download the ACP Virtualization with KubeVirt installation package corresponding to your platform architecture.
Upload the ACP Virtualization with KubeVirt installation package using the Upload Packages mechanism.
When using virtualization features, it is necessary to plan and prepare the network and storage environments in advance.
Note:
If you need to connect to the virtual machine directly via IP, the cluster must use the Kube-OVN Underlay network mode. You can refer to the best practices Preparing Kube-OVN Underlay Physical Network.
It is recommended to use TopoLVM with KubeVirt, as it provides near-hardware-level performance. If performance requirements are not high, Ceph distributed storage can also be used.
Storage Product | Description |
---|---|
TopoLVM | Advantages: Relatively lightweight and good performance. Disadvantages: Cannot be used across nodes, has low reliability, and cannot provide redundancy. |
Ceph Distributed Storage | Advantages: Can be used across nodes, highly available, and has redundancy. Disadvantages: Redundant disk copies lead to lower utilization; performance is poorer. |
If TopoLVM is used and multiple disks are configured, ensure that the remaining storage capacity on the virtualization-enabled nodes is sufficient for the total capacity of those disks; otherwise, virtual machine creation will fail.
If Ceph distributed storage is used, ensure that the network where the storage resides and the network where the virtual machines reside can communicate with each other.
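For the TopoLVM option, virtual machine disks are bound through a node-local StorageClass. The following is a minimal sketch, assuming the TopoLVM CSI driver (provisioner topolvm.io) is already installed; the class name is a placeholder and the platform may already ship an equivalent class:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topolvm-vm-disks                   # placeholder name
provisioner: topolvm.io                    # TopoLVM CSI driver
volumeBindingMode: WaitForFirstConsumer    # create the LVM volume on the node where the VM Pod is scheduled
allowVolumeExpansion: true
reclaimPolicy: Delete
```

WaitForFirstConsumer matters here because TopoLVM volumes are node-local: the volume must be created on the same node where the virtual machine is scheduled.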
When the nodes of a self-built cluster are physical machines, you can control whether Kubernetes is allowed to schedule Virtual Machine Instances (VMIs) on a node by enabling or disabling that node's virtualization switch.
When the switch is enabled, new virtual machines can be scheduled onto the physical machine node. Virtualization cannot be enabled on Windows physical nodes.
When the switch is disabled, new virtual machines can no longer be scheduled onto the physical machine node; virtual machines already running on that node are not affected.
Enter Administrator.
In the left navigation bar, click Cluster Management > Clusters.
Click Self-Built Cluster Name.
On the Nodes tab, click ⋮ to the right of the node whose virtualization switch you want to set > Enable Virtualization.
Click Confirm.
Log in, then enter Administrator.
Click Marketplace > OperatorHub to enter the OperatorHub page.
Find ACP Virtualization with KubeVirt, click Install, and navigate to the Install ACP Virtualization with KubeVirt page.
Configuration Parameters:
Parameter | Recommended Configuration |
---|---|
Channel | The default channel is alpha. |
Installation Mode | Cluster: all namespaces in the cluster share a single Operator instance for creation and management, resulting in lower resource usage. |
Installation Place | Select Recommended; only the kubevirt namespace is supported. |
Upgrade Strategy | Manual: when a new version is available in the OperatorHub, the Operator must be manually approved before it is upgraded to the latest version. |
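Installing through the console is equivalent to creating an OLM Subscription with the parameters above. The following is only a sketch under the assumption that the operator is delivered via OLM; the package name and catalog source below are placeholders and must be replaced with the actual values in your environment:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: acp-virtualization-kubevirt        # placeholder name
  namespace: kubevirt                      # the only supported namespace
spec:
  channel: alpha                           # Channel: the default channel
  name: acp-virtualization-kubevirt        # placeholder package name
  source: platform-catalog                 # placeholder CatalogSource name
  sourceNamespace: catalog-namespace       # placeholder: namespace of the CatalogSource
  installPlanApproval: Manual              # Upgrade Strategy: Manual
```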
Enter Administrator.
Click Marketplace > OperatorHub.
Find ACP Virtualization with KubeVirt and click it to enter the ACP Virtualization with KubeVirt details page.
Click All Instances.
Click Create Instance on the HyperConverged instance card.
Note: Only one HyperConverged instance needs to be created in each cluster.
Switch to the YAML view and replace only the placeholder in the spec.storageImport.insecureRegistries field of the example with the correct virtual machine image repository address (for example, 192.168.16.214:60080), keeping all other parameters at their default values.
Replacement result:
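The following is a minimal sketch of the relevant fragment after the replacement; the metadata values follow the upstream HyperConverged defaults and may differ in your environment, and all omitted fields keep their default values:

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged            # default instance name
  namespace: kubevirt
spec:
  storageImport:
    insecureRegistries:
      - 192.168.16.214:60080               # replace with your virtual machine image repository address
```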
Click Create and wait for the CDI and KubeVirt instances to be created automatically in the resource list. When the status.phase shown in their YAML is Deployed, the HyperConverged instance has been created successfully.
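For reference, a successfully deployed KubeVirt or CDI instance reports a status along these lines (abridged; the field casing follows the upstream CRDs):

```yaml
status:
  phase: Deployed                          # reported by the automatically created KubeVirt and CDI resources
```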
Virtual machines support resource overcommit only for CPU; the recommended overcommit ratio is between 2 and 4.
Once the CPU overcommit ratio is enabled for virtual machines, the container's CPU request (requests) is fixed at the specified limit (limits) divided by the VM overcommit ratio when the virtual machine is created, so any requests value the user sets in YAML has no effect.
For example, if the CPU overcommit ratio for virtual machines is set to 4 and the user specifies a CPU limit of 4c when creating a virtual machine, the resulting CPU request is 4c / 4 = 1c.
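As an illustration of that arithmetic, the resource section of the resulting virt-launcher container would look roughly as follows; this is a sketch based on the example above, not an exact dump:

```yaml
# CPU overcommit ratio = 4, user-specified limit = 4c
resources:
  limits:
    cpu: "4"
  requests:
    cpu: "1"                               # fixed at limits / overcommit ratio = 4 / 4; user-set requests are ignored
```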
The memory resource quota for virtual machines is limited by the memory quota of the namespace they reside in. Because the memory of the Pod hosting a virtual machine is usually larger than the memory actually available to the virtual machine, it is recommended to reserve 20% of the namespace's memory resources. When the remaining available resources in the namespace fall below 20%, scale up the quota promptly.
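As a worked example of the 20% head-room, if the virtual machines in a namespace are planned to use about 80Gi of memory in total, the namespace quota could be sized along these lines; the quota name, namespace, and numbers are placeholders:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: vm-memory-quota                    # placeholder name
  namespace: vm-project                    # placeholder namespace
spec:
  hard:
    limits.memory: 100Gi                   # ~80Gi planned for VMs plus ~20% reserved for Pod overhead
    requests.memory: 100Gi
```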