### What this PR does / why we need it?
Add a control to enable overlapping the exponential distribution
operator with model execution (default is OFF because this feature
may not perform well on MoE models, e.g. Qwen3-30B).
Enabling async exponential overlapping provides a performance
improvement.
Also, overlapping the exponential operator with model execution can
mask the performance drop introduced by the AI-CPU version of the
exponential operator.
**UPDATE** (12/12):
The overlap now uses the same stream introduced in this PR:
#4908 .
`do_async_exponential` has been moved from `model_runner_v1.py` to
`sampler.py`.
Async exponential is now enabled via `additional_config`:
add `"enable_async_exponential": 1` to `additional_config`.
We now **ONLY** support the default exponential and the AI-CPU
exponential; the old `"enable_async_exponential": 2` option has been
removed for consistency.
### Does this PR introduce _any_ user-facing change?
**YES**, this adds a new `additional_config` option:
`"enable_async_exponential": 1`.
When `enable_async_exponential` is set to 1, async exponential is
enabled and overlapped with the model runner.
When `enable_async_exponential` is set to 0 (the default), async
exponential is disabled, but the exponential operator still runs on a
separate stream, using the stream introduced in #4908.
- vLLM version: v0.12.0
- vLLM main: ad32e3e19c
---------
Signed-off-by: YuhanBai <yuhan.bai0830@gmail.com>
## Additional Configuration
Additional configuration is a mechanism provided by vLLM that allows plugins to control their internal behavior. vLLM Ascend uses this mechanism to make the project more flexible.
### How to use
Additional configuration can be used in both online and offline mode. Take Qwen3 as an example:
Online mode:

```bash
vllm serve Qwen/Qwen3-8B --additional-config='{"config_key":"config_value"}'
```
Offline mode:

```python
from vllm import LLM

LLM(model="Qwen/Qwen3-8B", additional_config={"config_key": "config_value"})
```
### Configuration options
The following table lists additional configuration options available in vLLM Ascend:
| Name | Type | Default | Description |
|---|---|---|---|
| xlite_graph_config | dict | {} | Configuration options for xlite graph mode |
| finegrained_tp_config | dict | {} | Configuration options for module tensor parallelism |
| weight_prefetch_config | dict | {} | Configuration options for weight prefetch |
| refresh | bool | false | Whether to refresh the global Ascend configuration content. This is usually used by RLHF or UT/e2e test cases. |
| expert_map_path | str | None | When using expert load balancing for an MoE model, an expert map path needs to be passed in. |
| enable_shared_expert_dp | bool | False | When the expert is shared in DP, it delivers better performance but consumes more memory. Currently only DeepSeek series models are supported. |
| lmhead_tensor_parallel_size | int | None | The custom tensor parallel size of lmhead. Restriction: can only be used when tensor_parallel=1. |
| oproj_tensor_parallel_size | int | None | The custom tensor parallel size of oproj. |
| multistream_overlap_shared_expert | bool | False | Whether to enable the multistream shared expert. This option only takes effect on MoE models with shared experts. |
| dynamic_eplb | bool | False | Whether to enable dynamic EPLB. |
| num_iterations_eplb_update | int | 400 | The number of forward iterations after which EPLB begins. |
| gate_eplb | bool | False | Whether to run EPLB only once. |
| num_wait_worker_iterations | int | 30 | The number of forward iterations within which the EPLB worker will finish its CPU tasks. In our tests, the default value of 30 covers most cases. |
| expert_map_record_path | str | None | Save the expert load calculation results to a new expert table in the specified directory. |
| init_redundancy_expert | int | 0 | Specify redundant experts during initialization. |
| dump_config | str | None | Configuration file path for msprobe dump (eager mode). |
| enable_async_exponential | int | 0 | Whether to enable async exponential overlap. To enable async exponential, set this option to 1. |
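For instance, some of the EPLB-related options above could be combined like this (a minimal sketch; the model name and the expert map path are placeholders):

```python
from vllm import LLM

llm = LLM(
    model="Qwen/Qwen3-30B-A3B",  # placeholder MoE model
    additional_config={
        "dynamic_eplb": True,
        "num_iterations_eplb_update": 400,
        # Placeholder path: must point to a real expert map file
        # when expert load balancing is used.
        "expert_map_path": "/path/to/expert_map.json",
    },
)
```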
The details of each configuration option are as follows:
#### xlite_graph_config
| Name | Type | Default | Description |
|---|---|---|---|
| enabled | bool | False | Whether to enable xlite graph mode. Currently only Llama or Qwen dense series models are supported. |
| full_mode | bool | False | Whether to enable xlite for both the prefill and decode stages. By default, xlite is only enabled for the decode stage. |
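For example, turning on xlite graph mode for both stages could look like this (a minimal sketch; the model name is a placeholder for a supported dense model):

```python
from vllm import LLM

llm = LLM(
    model="Qwen/Qwen3-8B",  # placeholder dense model
    additional_config={
        "xlite_graph_config": {
            "enabled": True,
            "full_mode": True,  # also apply xlite to the prefill stage
        },
    },
)
```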
#### weight_prefetch_config
| Name | Type | Default | Description |
|---|---|---|---|
| enabled | bool | False | Whether to enable weight prefetch. |
| prefetch_ratio | dict | {"attn": {"qkv": 1.0, "o": 1.0}, "moe": {"gate_up": 0.8}} | Prefetch ratio of each weight. |
#### finegrained_tp_config
| Name | Type | Default | Description |
|---|---|---|---|
| lmhead_tensor_parallel_size | int | 0 | The custom tensor parallel size of lmhead. |
| oproj_tensor_parallel_size | int | 0 | The custom tensor parallel size of oproj. |
| embedding_tensor_parallel_size | int | 0 | The custom tensor parallel size of embedding. |
| mlp_tensor_parallel_size | int | 0 | The custom tensor parallel size of mlp. |
### Example
An example of additional configuration is as follows:
```python
{
    "weight_prefetch_config": {
        "enabled": True,
        "prefetch_ratio": {
            "attn": {
                "qkv": 1.0,
                "o": 1.0,
            },
            "moe": {
                "gate_up": 0.8
            }
        },
    },
    "finegrained_tp_config": {
        "lmhead_tensor_parallel_size": 8,
        "oproj_tensor_parallel_size": 8,
        "embedding_tensor_parallel_size": 8,
        "mlp_tensor_parallel_size": 8,
    },
    "multistream_overlap_shared_expert": True,
    "refresh": False,
}
```
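Note that this example uses Python boolean literals (`True`/`False`), so it can be passed directly as `additional_config` in offline mode. To pass it on the command line via `--additional-config`, it must be serialized as JSON (lowercase `true`/`false`); a small sketch using the standard library:

```python
import json

additional_config = {
    "multistream_overlap_shared_expert": True,
    "refresh": False,
}

# json.dumps renders Python booleans as lowercase true/false,
# producing a string suitable for --additional-config.
print(json.dumps(additional_config))
# {"multistream_overlap_shared_expert": true, "refresh": false}
```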