# Additional Configuration

Additional configuration is a mechanism provided by vLLM that lets plugins control their own internal behavior. vLLM Ascend uses this mechanism to make the project more flexible.
## How to use

Additional configuration can be used in both online and offline mode. Take Qwen3 as an example:

Online mode:

```bash
vllm serve Qwen/Qwen3-8B --additional-config='{"config_key":"config_value"}'
```

Offline mode:

```python
from vllm import LLM

llm = LLM(model="Qwen/Qwen3-8B", additional_config={"config_key": "config_value"})
```
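As a slightly fuller sketch, here is what a runnable offline script looks like with a real option from the tables below (enabling weight prefetch; the prompt string is just an illustration):

```python
from vllm import LLM

# Enable weight prefetch via additional_config; all other options keep their defaults.
llm = LLM(
    model="Qwen/Qwen3-8B",
    additional_config={
        "weight_prefetch_config": {"enabled": True},
    },
)

# Generate as usual; additional_config only changes engine-internal behavior.
outputs = llm.generate("Hello, my name is")
print(outputs[0].outputs[0].text)
```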
## Configuration options

The following table lists the additional configuration options available in vLLM Ascend:
| Name | Type | Default | Description |
|---|---|---|---|
| xlite_graph_config | dict | {} | Configuration options for xlite graph mode |
| weight_prefetch_config | dict | {} | Configuration options for weight prefetch |
| refresh | bool | False | Whether to refresh the global Ascend configuration. This is usually used by RLHF or unit/e2e test cases. |
| expert_map_path | str | None | When using expert load balancing for an MoE model, the path to the expert map must be passed in. |
| kv_cache_dtype | str | None | When using KV cache quantization, the KV cache dtype must be set; currently only int8 is supported. |
| enable_shared_expert_dp | bool | False | Whether to run shared experts in DP, which delivers better performance but consumes more memory. Currently only DeepSeek series models are supported. |
| lmhead_tensor_parallel_size | int | None | The custom tensor parallel size of lmhead. |
| oproj_tensor_parallel_size | int | None | The custom tensor parallel size of oproj. |
| multistream_overlap_shared_expert | bool | False | Whether to enable the multistream shared expert. This option only takes effect on MoE models with shared experts. |
| dynamic_eplb | bool | False | Whether to enable dynamic EPLB (see the sketch after this table). |
| num_iterations_eplb_update | int | 400 | The number of forward iterations after which EPLB begins. |
| gate_eplb | bool | False | Whether to run EPLB only once. |
| num_wait_worker_iterations | int | 30 | The number of forward iterations within which the EPLB worker is expected to finish its CPU tasks. In our tests, the default value of 30 covers most cases. |
| expert_map_record_path | str | None | When dynamic EPLB completes, save the current expert load heatmap to the specified path. |
| init_redundancy_expert | int | 0 | The number of redundant experts to specify during initialization. |
| dump_config | str | None | Configuration file path for msprobe dump (eager mode). |
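For instance, several of the EPLB options above are typically used together. A minimal offline sketch (the model name and record path here are illustrative; dynamic EPLB only applies to MoE models):

```python
from vllm import LLM

llm = LLM(
    model="deepseek-ai/DeepSeek-V2-Lite",  # assumption: any MoE checkpoint with experts
    additional_config={
        "dynamic_eplb": True,               # enable dynamic expert load balancing
        "num_iterations_eplb_update": 400,  # forward iterations after which EPLB begins
        "num_wait_worker_iterations": 30,   # iterations for the EPLB worker's CPU tasks
        # illustrative path: where to save the expert load heatmap once EPLB completes
        "expert_map_record_path": "/tmp/expert_heatmap.json",
    },
)
```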
The details of each configuration option are as follows:
### xlite_graph_config
| Name | Type | Default | Description |
|---|---|---|---|
| enabled | bool | False | Whether to enable xlite graph mode (see the sketch after this table). Currently only Llama and Qwen dense series models are supported. |
| full_mode | bool | False | Whether to enable xlite for both the prefill and decode stages. By default, xlite is only enabled for the decode stage. |
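For example, a minimal sketch that turns on xlite graph mode for a supported dense model (Qwen3-8B, as used earlier on this page):

```python
from vllm import LLM

llm = LLM(
    model="Qwen/Qwen3-8B",
    additional_config={
        "xlite_graph_config": {
            "enabled": True,      # xlite graph mode for the decode stage (the default scope)
            # "full_mode": True,  # uncomment to also cover the prefill stage
        },
    },
)
```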
### weight_prefetch_config
| Name | Type | Default | Description |
|---|---|---|---|
| enabled | bool | False | Whether to enable weight prefetch. |
| prefetch_ratio | dict | {"attn": {"qkv": 1.0, "o": 1.0}, "moe": {"gate_up": 0.8}} | The prefetch ratio of each weight. |
## Example

An example of additional configuration is as follows:
```python
{
    "weight_prefetch_config": {
        "enabled": True,
        "prefetch_ratio": {
            "attn": {
                "qkv": 1.0,
                "o": 1.0,
            },
            "moe": {
                "gate_up": 0.8,
            },
        },
    },
    "multistream_overlap_shared_expert": True,
    "refresh": False,
}
```
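Note that this example is written as a Python dict, as passed to `additional_config` in offline mode. When supplying the same configuration online via `--additional-config`, it must be a valid JSON string, so the booleans become lowercase `true`/`false`.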