### What this PR does / why we need it?

This PR adds support for hierarchical communication for the `dispatch_v2` and `combine_v2` MoE operations. This is achieved by introducing a new configuration option, `enable_mc2_hierarchy_comm`. When enabled, the communication algorithm is set to "hierarchy", which supports MC2 op communication between two super pods.

The changes include:
- Adding `enable_mc2_hierarchy_comm` to `AscendConfig`.
- Modifying `TokenDispatcherWithMC2` to pass `comm_alg: "hierarchy"` to the underlying `torch_npu` ops when the new config is enabled (see the sketch after this description).
- Adding validation to ensure that this feature is only used with compatible PTA/CANN versions and is not used with the conflicting `fused_mc2` op.
- Updating `is_hierarchical_communication_enabled` to respect the new configuration flag.

### Does this PR introduce _any_ user-facing change?

Yes, this PR introduces a new user-facing configuration option `enable_mc2_hierarchy_comm` in `additional_config` to enable hierarchical communication for MoE.

### How was this patch tested?

- vLLM version: v0.18.0

Signed-off-by: zzzzwwjj <1183291235@qq.com>
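For illustration only, here is a minimal sketch of the config gating described above. Apart from the `enable_mc2_hierarchy_comm` flag and the `comm_alg: "hierarchy"` value taken from this PR, the helper name and its wiring are assumptions, not the actual vllm-ascend implementation:

```python
# Hypothetical sketch: how enable_mc2_hierarchy_comm could select the MC2
# communication algorithm for the dispatch/combine ops. Only the flag name
# and the "hierarchy" value come from the PR; everything else is illustrative.
def build_mc2_comm_kwargs(ascend_config) -> dict:
    kwargs = {}
    if getattr(ascend_config, "enable_mc2_hierarchy_comm", False):
        # Forwarded to the torch_npu dispatch/combine ops so that
        # inter-super-pod traffic uses the hierarchical algorithm.
        kwargs["comm_alg"] = "hierarchy"
    return kwargs
```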
# Additional Configuration
Additional configuration is a mechanism provided by vLLM that allows plugins to control their own internal behavior. vLLM Ascend uses this mechanism to make the project more flexible.
## How to use
Additional configuration can be used in both online and offline mode. Take Qwen3 as an example:
Online mode:
```bash
vllm serve Qwen/Qwen3-8B --additional-config='{"config_key":"config_value"}'
```
Offline mode:
```python
from vllm import LLM

LLM(model="Qwen/Qwen3-8B", additional_config={"config_key": "config_value"})
```
## Configuration options
The following table lists additional configuration options available in vLLM Ascend:
| Name | Type | Default | Description |
|---|---|---|---|
| `xlite_graph_config` | dict | `{}` | Configuration options for Xlite graph mode. |
| `weight_prefetch_config` | dict | `{}` | Configuration options for weight prefetch. |
| `finegrained_tp_config` | dict | `{}` | Configuration options for module tensor parallelism. |
| `ascend_compilation_config` | dict | `{}` | Configuration options for Ascend compilation. |
| `eplb_config` | dict | `{}` | Configuration options for EPLB. |
| `refresh` | bool | `False` | Whether to refresh the global Ascend configuration content. This is usually used by RLHF or ut/e2e test cases. |
| `dump_config_path` | str | `None` | Configuration file path for msprobe dump (eager mode). |
| `enable_async_exponential` | bool | `False` | Whether to enable asynchronous exponential overlap. |
| `enable_shared_expert_dp` | bool | `False` | Whether to run the shared expert in DP. This delivers better performance but consumes more memory. Currently only DeepSeek series models are supported. |
| `multistream_overlap_shared_expert` | bool | `False` | Whether to enable multi-stream overlap of the shared expert. This option only takes effect on MoE models with shared experts. |
| `multistream_overlap_gate` | bool | `False` | Whether to enable multi-stream overlap of the gate. This option only takes effect on MoE models with shared experts. |
| `recompute_scheduler_enable` | bool | `False` | Whether to enable the recompute scheduler. |
| `enable_cpu_binding` | bool | `True` | Whether to enable CPU binding. Only takes effect on ARM CPUs; A3 uses the global-slicing CPU allocation strategy, and other device types use the topo-affinity CPU allocation strategy. |
| `SLO_limits_for_dynamic_batch` | int | `-1` | SLO limit for dynamic batching. Setting this enables a new scheduler that supports the dynamic batch feature. |
| `enable_npugraph_ex` | bool | `False` | Whether to enable the npugraph_ex graph mode. |
| `pa_shape_list` | list | `[]` | The custom shape list of the page attention ops. |
| `enable_kv_nz` | bool | `False` | Whether to enable the KV cache NZ layout. This option only takes effect on models using MLA (e.g., DeepSeek). |
| `layer_sharding` | dict | `{}` | Configuration options for Layer Sharding Linear. |
| `enable_sparse_c8` | bool | `False` | Whether to enable KV cache C8 in DSA models (e.g., DeepSeek V3.2 and GLM5). Currently not supported on A5 devices. |
| `enable_mc2_hierarchy_comm` | bool | `False` | Whether to enable hierarchical communication for the MoE dispatch/combine ops, using RoCE for inter-node communication. |
The details of each configuration option are as follows:
### xlite_graph_config
| Name | Type | Default | Description |
|---|---|---|---|
| `enabled` | bool | `False` | Whether to enable Xlite graph mode. Currently only Llama, Qwen dense series models, and Qwen3-VL are supported. |
| `full_mode` | bool | `False` | Whether to enable Xlite for both the prefill and decode stages. By default, Xlite is only enabled for the decode stage. |
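For example, Xlite graph mode could be enabled in offline mode as follows (a minimal sketch; the model name is just a placeholder for a supported model):

```python
from vllm import LLM

# Hypothetical example: enable Xlite graph mode for the decode stage only
# (the default when full_mode is not set). Model name is a placeholder.
llm = LLM(
    model="Qwen/Qwen3-8B",
    additional_config={
        "xlite_graph_config": {
            "enabled": True,
        },
    },
)
```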
### weight_prefetch_config
| Name | Type | Default | Description |
|---|---|---|---|
| `enabled` | bool | `False` | Whether to enable weight prefetch. |
| `prefetch_ratio` | dict | `{"attn": {"qkv": 1.0, "o": 1.0}, "moe": {"gate_up": 0.8}, "mlp": {"gate_up": 1.0, "down": 1.0}}` | Prefetch ratio of each weight. |
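A short illustrative sketch of enabling weight prefetch with a custom ratio; whether a partial `prefetch_ratio` merges with the defaults is not specified here, so this sketch assumes unspecified entries keep their default values, and the model name is a placeholder:

```python
from vllm import LLM

# Hypothetical example: enable weight prefetch and lower the MoE gate_up
# prefetch ratio. Assumes unspecified ratios fall back to their defaults.
llm = LLM(
    model="Qwen/Qwen3-8B",
    additional_config={
        "weight_prefetch_config": {
            "enabled": True,
            "prefetch_ratio": {"moe": {"gate_up": 0.5}},
        },
    },
)
```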
### finegrained_tp_config
| Name | Type | Default | Description |
|---|---|---|---|
| `lmhead_tensor_parallel_size` | int | `0` | The custom tensor parallel size of lm_head. |
| `oproj_tensor_parallel_size` | int | `0` | The custom tensor parallel size of o_proj. |
| `embedding_tensor_parallel_size` | int | `0` | The custom tensor parallel size of embedding. |
| `mlp_tensor_parallel_size` | int | `0` | The custom tensor parallel size of MLP. |
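A minimal sketch of setting a custom tensor parallel size for a single module; the model name and sizes are placeholders, not recommended values:

```python
from vllm import LLM

# Hypothetical example: set a custom tensor parallel size for o_proj via
# finegrained_tp_config. Model name and sizes are placeholders.
llm = LLM(
    model="Qwen/Qwen3-8B",
    tensor_parallel_size=8,
    additional_config={
        "finegrained_tp_config": {
            "oproj_tensor_parallel_size": 8,
        },
    },
)
```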
### ascend_compilation_config
| Name | Type | Default | Description |
|---|---|---|---|
| `enable_npugraph_ex` | bool | `True` | Whether to enable the npugraph_ex backend. |
| `enable_static_kernel` | bool | `False` | Whether to enable static kernels. Suitable for scenarios where shape changes are minimal and some time is available for static kernel compilation. |
| `fuse_norm_quant` | bool | `True` | Whether to enable the fuse_norm_quant pass. |
| `fuse_qknorm_rope` | bool | `True` | Whether to enable the fuse_qknorm_rope pass. If Triton is not available in the environment, set this to False. |
| `fuse_allreduce_rms` | bool | `False` | Whether to enable the fuse_allreduce_rms pass. Defaults to False due to a conflict with SP. |
| `fuse_muls_add` | bool | `True` | Whether to enable the fuse_muls_add pass. |
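For example, the fuse_qknorm_rope pass could be turned off when Triton is not available (a minimal sketch; the model name is a placeholder):

```python
from vllm import LLM

# Hypothetical example: disable the fuse_qknorm_rope pass for environments
# without Triton. Model name is a placeholder.
llm = LLM(
    model="Qwen/Qwen3-8B",
    additional_config={
        "ascend_compilation_config": {
            "fuse_qknorm_rope": False,
        },
    },
)
```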
### eplb_config
| Name | Type | Default | Description |
|---|---|---|---|
| `dynamic_eplb` | bool | `False` | Whether to enable dynamic EPLB. |
| `expert_map_path` | str | `None` | When using expert load balancing for an MoE model, an expert map path needs to be passed in. |
| `expert_heat_collection_interval` | int | `400` | The number of forward iterations after which EPLB begins. |
| `algorithm_execution_interval` | int | `30` | The number of forward iterations within which the EPLB worker finishes its CPU tasks. |
| `expert_map_record_path` | str | `None` | Save the expert load calculation results to a new expert table in the specified directory. |
| `num_redundant_experts` | int | `0` | The number of redundant experts specified during initialization. |
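A minimal sketch of enabling dynamic EPLB with an expert map file; the model name and file path are placeholders:

```python
from vllm import LLM

# Hypothetical example: enable dynamic EPLB for an MoE model and point it
# at a prepared expert map. Model name and path are placeholders.
llm = LLM(
    model="deepseek-ai/DeepSeek-V3",
    additional_config={
        "eplb_config": {
            "dynamic_eplb": True,
            "expert_map_path": "/path/to/expert_map.json",
        },
    },
)
```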
## Example
An example of additional configuration is as follows:
```python
{
    "weight_prefetch_config": {
        "enabled": True,
        "prefetch_ratio": {
            "attn": {
                "qkv": 1.0,
                "o": 1.0,
            },
            "moe": {
                "gate_up": 0.8
            },
            "mlp": {
                "gate_up": 1.0,
                "down": 1.0
            }
        },
    },
    "finegrained_tp_config": {
        "lmhead_tensor_parallel_size": 8,
        "oproj_tensor_parallel_size": 8,
        "embedding_tensor_parallel_size": 8,
        "mlp_tensor_parallel_size": 8,
    },
    "enable_kv_nz": False,
    "multistream_overlap_shared_expert": True,
    "refresh": False
}
```
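In offline mode, a configuration like the one above can be passed directly as a Python dict; a condensed sketch using a placeholder model name:

```python
from vllm import LLM

# Hypothetical offline-mode usage of a condensed version of the example
# configuration above. Model name is a placeholder.
additional_config = {
    "weight_prefetch_config": {"enabled": True},
    "finegrained_tp_config": {"lmhead_tensor_parallel_size": 8},
    "enable_kv_nz": False,
    "multistream_overlap_shared_expert": True,
    "refresh": False,
}
llm = LLM(model="Qwen/Qwen3-8B", additional_config=additional_config)
```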