### What this PR does / why we need it?
1. Rename `num_iterations_eplb_update` to `expert_heat_collection_interval`.
2. Rename `num_wait_worker_iterations` to `algorithm_execution_interval`.
3. Rename `init_redundancy_expert` to `num_redundant_experts`, matching the name of the variable with the same meaning in vLLM.
4. Delete `gate_eplb`, since this feature is no longer needed.
5. Move the EPLB options into an `eplb_config` dict inside `additional_config`.
6. Depends on pr5817.
### Does this PR introduce _any_ user-facing change?
Before this PR:
`--additional-config '{"dynamic_eplb": true,
"num_iterations_eplb_update": 4000, "num_wait_worker_iterations": 150,
"init_redundancy_expert": 16, "expert_map_path": "xxx.json"}'`
After this PR:
`--additional-config
'{"eplb_config": {"dynamic_eplb": true, "expert_heat_collection_interval": 4000,
"algorithm_execution_interval": 150, "num_redundant_experts": 16,
"expert_map_path": "xxx.json"}}'`
### How was this patch tested?
#### Test: Qwen3-235B EPLB with `num_redundant_experts=16`
Without pr5817:
| dataset | version | metric | mode | vllm-api-general-chat |
|----- | ----- | ----- | ----- | -----|
| aime2024 | 604a78 | accuracy | gen | 83.33 |
With pr5817:
| dataset | version | metric | mode | vllm-api-general-chat |
|----- | ----- | ----- | ----- | -----|
| aime2024 | 604a78 | accuracy | gen | 86.67 |
- vLLM version: v0.13.0
- vLLM main: 45c1ca1ca1
Signed-off-by: shenchuxiaofugui <1311027364@qq.com>
# Additional Configuration

Additional configuration is a mechanism provided by vLLM that allows plugins to control internal behavior on their own. vLLM Ascend uses this mechanism to make the project more flexible.

## How to use

Additional configuration can be used in both online and offline modes. Take Qwen3 as an example:

**Online mode**:

```bash
vllm serve Qwen/Qwen3-8B --additional-config='{"config_key":"config_value"}'
```

**Offline mode**:

```python
from vllm import LLM

LLM(model="Qwen/Qwen3-8B", additional_config={"config_key": "config_value"})
```

### Configuration options

The following table lists additional configuration options available in vLLM Ascend:

| Name | Type | Default | Description |
|-------------------------------------|------|---------|-----------------------------------------------------------------------------------------------------------|
| `xlite_graph_config` | dict | `{}` | Configuration options for xlite graph mode. |
| `weight_prefetch_config` | dict | `{}` | Configuration options for weight prefetch. |
| `finegrained_tp_config` | dict | `{}` | Configuration options for module tensor parallelism. |
| `ascend_compilation_config` | dict | `{}` | Configuration options for Ascend compilation. |
| `eplb_config` | dict | `{}` | Configuration options for EPLB (expert parallel load balancing). |
| `refresh` | bool | `false` | Whether to refresh the global Ascend configuration content. This is usually used by RLHF or unit/e2e test cases. |
| `dump_config_path` | str | `None` | Configuration file path for msprobe dump (eager mode). |
| `enable_async_exponential` | bool | `False` | Whether to enable async exponential overlap. |
| `enable_shared_expert_dp` | bool | `False` | When the expert is shared in DP, it delivers better performance but consumes more memory. Currently only DeepSeek series models are supported. |
| `multistream_overlap_shared_expert` | bool | `False` | Whether to enable the multistream shared expert. This option only takes effect on MoE models with shared experts. |
| `multistream_overlap_gate` | bool | `False` | Whether to enable the multistream overlap gate. This option only takes effect on MoE models with shared experts. |
| `recompute_scheduler_enable` | bool | `False` | Whether to enable the recompute scheduler. |
| `enable_cpu_binding` | bool | `False` | Whether to enable CPU binding. |
| `SLO_limits_for_dynamic_batch` | int | `-1` | SLO limits for dynamic batch. This enables a new scheduler that supports the dynamic batch feature. |
| `enable_npugraph_ex` | bool | `False` | Whether to enable the npugraph ex graph mode. |
| `pa_shape_list` | list | `[]` | The custom shape list of paged attention ops. |
| `enable_kv_nz` | bool | `False` | Whether to enable the KV cache NZ layout. This option only takes effect on models using MLA (e.g., DeepSeek). |
| `layer_sharding` | dict | `{}` | Configuration options for layer-sharded linear layers. |

The details of each configuration option are as follows:

**xlite_graph_config**

| Name | Type | Default | Description |
| ---- | ---- | ------- | ----------- |
| `enabled` | bool | `False` | Whether to enable xlite graph mode. Currently only Llama, Qwen dense series models, and Qwen3-vl are supported. |
| `full_mode` | bool | `False` | Whether to enable xlite for both the prefill and decode stages. By default, xlite is only enabled for the decode stage. |

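For instance, xlite graph mode could be enabled for both stages like this (a minimal sketch; the model name is illustrative):

```python
from vllm import LLM

# Sketch: enable xlite graph mode for both prefill and decode.
# By default (full_mode=False), xlite applies only to the decode stage.
llm = LLM(
    model="Qwen/Qwen3-8B",
    additional_config={
        "xlite_graph_config": {
            "enabled": True,
            "full_mode": True,
        }
    },
)
```
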
**weight_prefetch_config**

| Name | Type | Default | Description |
|------------------|------|-------------------------------------------------------------|------------------------------------|
| `enabled` | bool | `False` | Whether to enable weight prefetch. |
| `prefetch_ratio` | dict | `{"attn": {"qkv": 1.0, "o": 1.0}, "moe": {"gate_up": 0.8}}` | Prefetch ratio of each weight. |

**finegrained_tp_config**

| Name | Type | Default | Description |
| ---- | ---- | ------- | ----------- |
| `lmhead_tensor_parallel_size` | int | `0` | The custom tensor parallel size of lmhead. |
| `oproj_tensor_parallel_size` | int | `0` | The custom tensor parallel size of oproj. |
| `embedding_tensor_parallel_size` | int | `0` | The custom tensor parallel size of embedding. |
| `mlp_tensor_parallel_size` | int | `0` | The custom tensor parallel size of mlp. |

**ascend_compilation_config**

| Name | Type | Default | Description |
| ---- | ---- | ------- | ----------- |
| `fuse_norm_quant` | bool | `True` | Whether to enable the fuse_norm_quant pass. |
| `fuse_qknorm_rope` | bool | `False` | Whether to enable the fuse_qknorm_rope pass. It is set to True by default when Triton is installed. |

**eplb_config**

| Name | Type | Default | Description |
| ---- | ---- | ------- | ----------- |
| `dynamic_eplb` | bool | `False` | Whether to enable dynamic EPLB. |
| `expert_map_path` | str | `None` | When using expert load balancing for an MoE model, an expert map path needs to be passed in. |
| `expert_heat_collection_interval` | int | `400` | The number of forward iterations between expert heat collections. |
| `algorithm_execution_interval` | int | `30` | The number of forward iterations allotted for the EPLB worker to finish its CPU tasks. |
| `expert_map_record_path` | str | `None` | Save the expert load calculation results as a new expert table in the specified directory. |
| `num_redundant_experts` | int | `0` | The number of redundant experts to allocate during initialization. |

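For example, dynamic EPLB with redundant experts could be configured like this (a minimal sketch; the model name and the record path are illustrative):

```python
from vllm import LLM

# Sketch: enable dynamic EPLB with 16 redundant experts and record the
# computed expert table; the model name and directory are illustrative.
llm = LLM(
    model="Qwen/Qwen3-235B-A22B",
    additional_config={
        "eplb_config": {
            "dynamic_eplb": True,
            "num_redundant_experts": 16,
            "expert_map_record_path": "/path/to/record_dir",
        }
    },
)
```
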
### Example

An example of additional configuration is as follows:

```python
{
    "weight_prefetch_config": {
        "enabled": True,
        "prefetch_ratio": {
            "attn": {
                "qkv": 1.0,
                "o": 1.0,
            },
            "moe": {
                "gate_up": 0.8
            }
        },
    },
    "finegrained_tp_config": {
        "lmhead_tensor_parallel_size": 8,
        "oproj_tensor_parallel_size": 8,
        "embedding_tensor_parallel_size": 8,
        "mlp_tensor_parallel_size": 8,
    },
    "enable_kv_nz": False,
    "multistream_overlap_shared_expert": True,
    "refresh": False
}
```