⚠️ This feature is still experimental. Please use it with caution.
# Enable dynamic MIG feature
HAMi now supports dynamic MIG, using mig-parted to adjust MIG devices dynamically. This includes:

- Dynamic MIG Instance Management: Users no longer need to operate directly on GPU nodes or run commands such as `nvidia-smi -i 0 -mig 1` to manage MIG instances; hami-device-plugin handles this automatically.
- Dynamic MIG Adjustment: Each MIG device managed by HAMi dynamically adjusts its MIG template according to the jobs submitted, as needed.
- Device MIG Observation: Each MIG instance generated by HAMi is displayed in the scheduler monitor, along with job information, providing a clear overview of MIG nodes.
- Compatibility with HAMi-Core Nodes: HAMi can manage a unified GPU pool across both HAMi-core nodes and MIG nodes. A job can be scheduled to either kind of node unless one is manually specified with the `nvidia.com/vgpu-mode` annotation.
- Unified API with HAMi-Core: No additional work is required to make jobs compatible with the dynamic MIG feature.
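For example, a job can be pinned to a MIG node with the `nvidia.com/vgpu-mode` annotation. The sketch below follows the upstream HAMi convention (the pod name is illustrative; the value `hami-core` selects a HAMi-core node instead):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mig-only-demo              # illustrative name
  annotations:
    nvidia.com/vgpu-mode: "mig"   # force scheduling onto a MIG node
spec:
  # ... container spec with HAMi resource requests ...
```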
## Prerequisites
- NVIDIA Blackwell, Hopper™, and Ampere GPUs
- Alauda Build of HAMi installed
## Enable dynamic MIG support
- Set `operatingmode` to `mig` in the `hami-device-plugin` ConfigMap for each MIG node. Replace the node name in the `nodeconfig` array with the target node's name; to cover multiple nodes, add more entries to the array.
- Restart the following pods for the change to take effect:
  - hami-scheduler
  - hami-device-plugin on node 'MIG-NODE-A'
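As a sketch, the relevant part of the `hami-device-plugin` ConfigMap might look like the following. The field names follow the upstream HAMi dynamic-MIG guide; verify them against the ConfigMap shipped with your chart version:

```json
{
  "nodeconfig": [
    {
      "name": "MIG-NODE-A",
      "operatingmode": "mig",
      "filterdevices": {
        "uuid": [],
        "index": []
      }
    }
  ]
}
```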
Note: The configuration above is lost on chart upgrade; future versions of HAMi will improve this.
## Custom MIG configuration (optional)
HAMi ships with a default MIG configuration.
You can customize the MIG configuration by following the steps below:
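For illustration, a custom template list might look like the following. The `knownMigGeometries` structure and the A30 geometries are modeled on the upstream HAMi example; adjust the models, names, and counts for your GPUs:

```yaml
knownMigGeometries:
  - models: [ "A30" ]
    allowedGeometries:
      -
        - name: 1g.6gb     # MIG profile name
          memory: 6144     # memory per instance, in MB
          count: 4         # instances of this profile per GPU
      -
        - name: 2g.12gb
          memory: 12288
          count: 2
```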
Then restart the hami-scheduler component. HAMi identifies and uses the first MIG template that matches the job, in the order defined in this ConfigMap.
Note: The configuration above is lost on chart upgrade; future versions of HAMi will improve this.
## Running MIG jobs
A container can now request a MIG instance in the same way as on hami-core nodes, simply by specifying the `nvidia.com/gpualloc` and `nvidia.com/gpumem` resource types.
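A minimal sketch of such a request follows. The pod name and image are illustrative, and `nvidia.com/gpumem` is given in MB per the upstream HAMi convention:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mig-job-demo               # illustrative name
spec:
  containers:
    - name: cuda
      image: nvidia/cuda:12.4.0-base-ubuntu22.04
      command: ["sleep", "infinity"]
      resources:
        limits:
          nvidia.com/gpualloc: "1"   # number of physical GPUs requested
          nvidia.com/gpumem: "8000"  # device memory in MB
```

HAMi then picks (and, if needed, repartitions to) the first MIG template in the ConfigMap that satisfies this request.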
Note:

- The `nvidia.com/gpualloc` request cannot exceed the actual number of physical GPUs. For example, a single GPU in MIG mode can only request `1`. This is a current HAMi limitation and will be improved in future versions.
- No action is required on MIG nodes; everything is managed by mig-parted in hami-device-plugin.
- NVIDIA devices older than the Ampere architecture do not support MIG mode.
- MIG resources (for example, `nvidia.com/mig-1g.10gb`) are not visible on the node. HAMi uses a unified resource name for both MIG and hami-core nodes.
- The `DCGM-exporter` component deployed on MIG nodes must be stopped while MIG partitioning is performed, because partitioning requires resetting the GPU. Automatic MIG partitioning is performed when the first MIG-enabled workload is created; subsequent workloads do not trigger further partitioning. Once all workloads stop, starting a workload again triggers MIG partitioning once more.