# Additional Configuration

Additional configuration is a mechanism provided by vLLM that allows plugins to control their internal behavior. vLLM Ascend uses this mechanism to make the project more flexible.

## How to use

Additional configuration can be used in both online and offline mode. Take Qwen3 as an example:

Online mode:

```bash
vllm serve Qwen/Qwen3-8B --additional-config='{"config_key":"config_value"}'
```

Offline mode:

```python
from vllm import LLM

llm = LLM(model="Qwen/Qwen3-8B", additional_config={"config_key": "config_value"})
```

## Configuration options

The following table lists additional configuration options available in vLLM Ascend:

| Name | Type | Default | Description |
|------|------|---------|-------------|
| xlite_graph_config | dict | {} | Configuration options for xlite graph mode |
| finegrained_tp_config | dict | {} | Configuration options for module tensor parallelism |
| weight_prefetch_config | dict | {} | Configuration options for weight prefetch |
| refresh | bool | False | Whether to refresh the global Ascend configuration content. This is usually used by RLHF or unit/e2e test cases. |
| expert_map_path | str | None | When using expert load balancing for an MoE model, an expert map path needs to be passed in. |
| kv_cache_dtype | str | None | When using the KV cache quantization method, the KV cache dtype needs to be set. Currently only int8 is supported. |
| enable_shared_expert_dp | bool | False | Whether to run the shared expert in DP. This delivers better performance but consumes more memory. Currently only DeepSeek series models are supported. |
| multistream_overlap_shared_expert | bool | False | Whether to enable multistream shared expert. This option only takes effect on MoE models with shared experts. |
| dynamic_eplb | bool | False | Whether to enable dynamic EPLB. |
| num_iterations_eplb_update | int | 400 | The number of forward iterations after which EPLB begins. |
| gate_eplb | bool | False | Whether to enable EPLB only once. |
| num_wait_worker_iterations | int | 30 | The number of forward iterations after which the EPLB worker finishes its CPU tasks. In our tests the default value of 30 covers most cases. |
| expert_map_record_path | str | None | When dynamic EPLB is completed, save the current expert load heatmap to the specified path. |
| init_redundancy_expert | int | 0 | Specify the number of redundant experts during initialization. |
| dump_config | str | None | Configuration file path for msprobe dump (eager mode). |
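For instance, a single top-level option from this table can be passed on its own through the same `--additional-config` mechanism shown above. A minimal sketch, assuming a model and quantization setup that actually provides an int8-quantized KV cache:

```bash
vllm serve Qwen/Qwen3-8B --additional-config='{"kv_cache_dtype": "int8"}'
```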

The details of each configuration option are as follows:

### xlite_graph_config

| Name | Type | Default | Description |
|------|------|---------|-------------|
| enabled | bool | False | Whether to enable xlite graph mode. Currently only Llama or Qwen dense series models are supported. |
| full_mode | bool | False | Whether to enable xlite for both the prefill and decode stages. By default, xlite is only enabled for the decode stage. |
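As a minimal sketch, xlite graph mode could be switched on through the same online-serving flag; the model below is only an example of a supported dense model, and full_mode is left at its default:

```bash
vllm serve Qwen/Qwen3-8B --additional-config='{"xlite_graph_config": {"enabled": true}}'
```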

### weight_prefetch_config

| Name | Type | Default | Description |
|------|------|---------|-------------|
| enabled | bool | False | Whether to enable weight prefetch. |
| prefetch_ratio | dict | {"attn": {"qkv": 1.0, "o": 1.0}, "moe": {"gate_up": 0.8}} | Prefetch ratio of each weight. |
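A minimal offline-mode sketch that enables weight prefetch and leaves prefetch_ratio at the defaults listed above (the model name is illustrative only):

```python
from vllm import LLM

# Enable weight prefetch; prefetch_ratio is not set, so the documented defaults apply.
llm = LLM(
    model="Qwen/Qwen3-8B",
    additional_config={"weight_prefetch_config": {"enabled": True}},
)
```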

### finegrained_tp_config

| Name | Type | Default | Description |
|------|------|---------|-------------|
| lmhead_tensor_parallel_size | int | 0 | The custom tensor parallel size of lmhead. |
| oproj_tensor_parallel_size | int | 0 | The custom tensor parallel size of oproj. |
| embedding_tensor_parallel_size | int | 0 | The custom tensor parallel size of embedding. |
| mlp_tensor_parallel_size | int | 0 | The custom tensor parallel size of mlp. |
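As a sketch, any of these sizes can also be set individually in online serving; the value 8 below is taken from the example in the next section and is illustrative only, since valid sizes depend on the model and the overall parallel setup:

```bash
vllm serve Qwen/Qwen3-8B --additional-config='{"finegrained_tp_config": {"lmhead_tensor_parallel_size": 8}}'
```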

## Example

An example of additional configuration is as follows:

```python
{
    "weight_prefetch_config": {
        "enabled": True,
        "prefetch_ratio": {
            "attn": {
                "qkv": 1.0,
                "o": 1.0,
            },
            "moe": {
                "gate_up": 0.8
            }
        },
    },
    "finegrained_tp_config": {
        "lmhead_tensor_parallel_size": 8,
        "oproj_tensor_parallel_size": 8,
        "embedding_tensor_parallel_size": 8,
        "mlp_tensor_parallel_size": 8,
    },
    "multistream_overlap_shared_expert": True,
    "refresh": False,
}
```
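In offline mode this dict can be passed directly as the `additional_config` argument of `LLM`; for online serving the same content has to be provided as a JSON string to `--additional-config`, with `True`/`False` written as lowercase `true`/`false`.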