### What this PR does / why we need it?
This is part of
https://github.com/vllm-project/vllm-ascend/issues/4715#issue-3694310762
1. Refactor the npugraph_ex config: modify the default configuration of the static kernel; the new default value of `enable_static_kernel` is `False`.
2. Support online inference with the static kernel.
3. Fix the issue where manually modifying FX graphs caused an abnormal model return type, and remove the related redundant code.
### Does this PR introduce _any_ user-facing change?
Yes, the new `npugraph_ex_config` is as follows:
```
additional_config={
    "npugraph_ex_config": {
        "enable": True,
        "enable_static_kernel": False
    }
}
```
### How was this patch tested?
```
vllm serve /data/DeepSeek-V3.1-Terminus-w4a8 \
--host 0.0.0.0 \
--port 8004 \
--data-parallel-size 4 \
--tensor-parallel-size 4 \
--quantization ascend \
--seed 1024 \
--served-model-name deepseek_v3 \
--enable-expert-parallel \
--max-num-seqs 48 \
--max-model-len 40000 \
--async-scheduling \
--max-num-batched-tokens 9000 \
--trust-remote-code \
--no-enable-prefix-caching \
--speculative-config '{"num_speculative_tokens": 3, "method":"deepseek_mtp","disable_padded_drafter_batch": false}' \
--gpu-memory-utilization 0.9 \
--compilation-config '{"cudagraph_capture_sizes":[4,32,64,112,160,176,192], "cudagraph_mode": "FULL_DECODE_ONLY"}' \
--additional-config \
'{"enable_shared_expert_dp": true,"multistream_overlap_shared_expert": true,"npugraph_ex_config":{"enable":true}}'
```
- vLLM version: v0.13.0
- vLLM main: 2f4e6548ef
---------
Signed-off-by: chencangtao <chencangtao@huawei.com>
Signed-off-by: ChenCangtao <50493711+ChenCangtao@users.noreply.github.com>
Co-authored-by: chencangtao <chencangtao@huawei.com>
# Additional Configuration

Additional configuration is a mechanism provided by vLLM to allow plugins to control inner behavior by themselves. vLLM Ascend uses this mechanism to make the project more flexible.

## How to use

Users can use additional configuration in either online or offline mode. Take Qwen3 as an example:

**Online mode**:

```bash
vllm serve Qwen/Qwen3-8B --additional-config='{"config_key":"config_value"}'
```

**Offline mode**:

```python
from vllm import LLM

LLM(model="Qwen/Qwen3-8B", additional_config={"config_key":"config_value"})
```

### Configuration options

The following table lists the additional configuration options available in vLLM Ascend:

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `xlite_graph_config` | dict | `{}` | Configuration options for xlite graph mode. |
| `weight_prefetch_config` | dict | `{}` | Configuration options for weight prefetch. |
| `finegrained_tp_config` | dict | `{}` | Configuration options for module tensor parallelism. |
| `ascend_compilation_config` | dict | `{}` | Configuration options for Ascend compilation. |
| `eplb_config` | dict | `{}` | Configuration options for EPLB. |
| `npugraph_ex_config` | dict | `{}` | Configuration options for the npugraph_ex backend. |
| `refresh` | bool | `False` | Whether to refresh global Ascend configuration content. This is usually used by RLHF or UT/e2e test cases. |
| `dump_config_path` | str | `None` | Configuration file path for msprobe dump (eager mode). |
| `enable_async_exponential` | bool | `False` | Whether to enable async exponential overlap. |
| `enable_shared_expert_dp` | bool | `False` | Whether to enable shared expert DP. When the expert is shared in DP, it delivers better performance but consumes more memory. Currently only DeepSeek series models are supported. |
| `multistream_overlap_shared_expert` | bool | `False` | Whether to enable multistream shared expert. This option only takes effect on MoE models with shared experts. |
| `multistream_overlap_gate` | bool | `False` | Whether to enable multistream overlap gate. This option only takes effect on MoE models with shared experts. |
| `recompute_scheduler_enable` | bool | `False` | Whether to enable the recompute scheduler. |
| `enable_cpu_binding` | bool | `False` | Whether to enable CPU binding. |
| `SLO_limits_for_dynamic_batch` | int | `-1` | SLO limit for dynamic batch. This enables a new scheduler that supports the dynamic batch feature. |
| `enable_npugraph_ex` | bool | `False` | Whether to enable the npugraph_ex graph mode. |
| `pa_shape_list` | list | `[]` | The custom shape list of paged attention ops. |
| `enable_kv_nz` | bool | `False` | Whether to enable the KV cache NZ layout. This option only takes effect on models using MLA (e.g., DeepSeek). |
| `layer_sharding` | dict | `{}` | Configuration options for layer sharding of linear layers. |

The details of each configuration option are as follows:

**xlite_graph_config**

| Name | Type | Default | Description |
| ---- | ---- | ------- | ----------- |
| `enabled` | bool | `False` | Whether to enable xlite graph mode. Currently only Llama, Qwen dense series models, and Qwen3-VL are supported. |
| `full_mode` | bool | `False` | Whether to enable xlite for both the prefill and decode stages. By default, xlite is only enabled for the decode stage. |

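For example, xlite graph mode could be enabled offline like this (a minimal sketch; the model name is only a placeholder and `full_mode` keeps its default):

```python
from vllm import LLM

# Sketch: turn on xlite graph mode for the decode stage only (full_mode stays False).
llm = LLM(
    model="Qwen/Qwen3-8B",  # placeholder; xlite currently supports Llama, Qwen dense, and Qwen3-VL
    additional_config={"xlite_graph_config": {"enabled": True}},
)
```
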
**weight_prefetch_config**

| Name | Type | Default | Description |
| ---- | ---- | ------- | ----------- |
| `enabled` | bool | `False` | Whether to enable weight prefetch. |
| `prefetch_ratio` | dict | `{"attn": {"qkv": 1.0, "o": 1.0}, "moe": {"gate_up": 0.8}}` | Prefetch ratio of each weight. |

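As a sketch, weight prefetch could be enabled with a custom ratio; the values below are illustrative only, not tuned recommendations:

```python
# Sketch: pass this dict as additional_config to enable weight prefetch.
additional_config = {
    "weight_prefetch_config": {
        "enabled": True,
        # Illustrative ratios; the default is {"attn": {"qkv": 1.0, "o": 1.0}, "moe": {"gate_up": 0.8}}.
        "prefetch_ratio": {
            "attn": {"qkv": 1.0, "o": 0.5},
            "moe": {"gate_up": 0.8},
        },
    }
}
```
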
**finegrained_tp_config**

| Name | Type | Default | Description |
| ---- | ---- | ------- | ----------- |
| `lmhead_tensor_parallel_size` | int | `0` | The custom tensor parallel size of lmhead. |
| `oproj_tensor_parallel_size` | int | `0` | The custom tensor parallel size of oproj. |
| `embedding_tensor_parallel_size` | int | `0` | The custom tensor parallel size of embedding. |
| `mlp_tensor_parallel_size` | int | `0` | The custom tensor parallel size of mlp. |

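For instance, a custom tensor parallel size could be set for lmhead and oproj only (a sketch; the value 8 is illustrative and must match your deployment):

```python
# Sketch: override tensor parallelism for selected modules; unset options keep the default 0.
additional_config = {
    "finegrained_tp_config": {
        "lmhead_tensor_parallel_size": 8,  # illustrative value
        "oproj_tensor_parallel_size": 8,   # illustrative value
    }
}
```
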
**ascend_compilation_config**

| Name | Type | Default | Description |
| ---- | ---- | ------- | ----------- |
| `fuse_norm_quant` | bool | `True` | Whether to enable the fuse_norm_quant pass. |
| `fuse_qknorm_rope` | bool | `False` | Whether to enable the fuse_qknorm_rope pass. It's set to True by default when Triton is installed. |

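As a sketch, the fusion passes could be toggled explicitly (option names taken from the table above):

```python
# Sketch: toggle Ascend compilation fusion passes explicitly.
additional_config = {
    "ascend_compilation_config": {
        "fuse_norm_quant": True,   # enabled by default
        "fuse_qknorm_rope": True,  # defaults to True only when Triton is installed
    }
}
```
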
**eplb_config**

| Name | Type | Default | Description |
| ---- | ---- | ------- | ----------- |
| `dynamic_eplb` | bool | `False` | Whether to enable dynamic EPLB. |
| `expert_map_path` | str | `None` | When using expert load balancing for an MoE model, an expert map path needs to be passed in. |
| `expert_heat_collection_interval` | int | `400` | The number of forward iterations after which EPLB begins. |
| `algorithm_execution_interval` | int | `30` | The number of forward iterations in which the EPLB worker will finish its CPU tasks. |
| `expert_map_record_path` | str | `None` | Save the expert load calculation results to a new expert table in the specified directory. |
| `num_redundant_experts` | int | `0` | Specify redundant experts during initialization. |

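For example, dynamic EPLB could be enabled with a custom expert map (a sketch; the path and the redundant expert count are placeholders):

```python
# Sketch: enable dynamic EPLB; expert_map_path and num_redundant_experts are placeholders.
additional_config = {
    "eplb_config": {
        "dynamic_eplb": True,
        "expert_map_path": "/path/to/expert_map.json",  # placeholder path
        "num_redundant_experts": 16,                    # placeholder count
    }
}
```
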
**npugraph_ex_config**

| Name | Type | Default | Description |
| ---- | ---- | ------- | ----------- |
| `enable` | bool | `False` | Whether to enable the npugraph_ex backend. |
| `enable_static_kernel` | bool | `False` | Whether to enable the static kernel. Suitable for scenarios where shape changes are minimal and some time is available for static kernel compilation. |

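For example, the npugraph_ex backend could be enabled together with the static kernel (a sketch; only enable the static kernel when shapes are largely stable, as noted above):

```python
# Sketch: enable the npugraph_ex backend and opt in to the static kernel (default False).
additional_config = {
    "npugraph_ex_config": {
        "enable": True,
        "enable_static_kernel": True,  # static kernel compilation takes extra time
    }
}
```
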
### Example

An example of additional configuration is as follows:

```python
{
    "weight_prefetch_config": {
        "enabled": True,
        "prefetch_ratio": {
            "attn": {
                "qkv": 1.0,
                "o": 1.0,
            },
            "moe": {
                "gate_up": 0.8
            }
        },
    },
    "finegrained_tp_config": {
        "lmhead_tensor_parallel_size": 8,
        "oproj_tensor_parallel_size": 8,
        "embedding_tensor_parallel_size": 8,
        "mlp_tensor_parallel_size": 8,
    },
    "enable_kv_nz": False,
    "multistream_overlap_shared_expert": True,
    "refresh": False
}
```