xc-llm-ascend/docs/source/user_guide/configuration/additional_config.md
Csrayz 80524f5711 [CORE] concurrent partial prefills (#2372)
# What this PR does / why we need it?

When processing a mix of large and small requests, this change significantly reduces the TTFT of responses. Please refer to
https://github.com/vllm-project/vllm/pull/10235, which achieves the same
effect by simply limiting the number of long requests that may be
prefilled concurrently. This solution can be applied to both the
AscendScheduler (V0) and the vLLM Scheduler (V1). Tests show that TTFT
improves significantly when handling such mixed requests. However, this
capability is currently missing when the Ascend Scheduler is enabled.

The benchmark used the Qwen3-8B model with a 128K context length, running on a single card.

For the dataset, sharegpt_clean was used, with its content concatenated
and cropped. Small requests of 50 tokens and medium requests of 10240
tokens were constructed (large requests of 102400 tokens were also
built, but these were ignored because, under the Prefill First
scheduling strategy, max_num_batched_tokens would never be set to such a
large value). When loading vLLM, max_num_batched_tokens was set to
22000. This length can accommodate two medium requests plus a few short
requests, reflecting an extreme scenario in which the budget is almost
entirely occupied by longer requests.

Next, 990 small requests and 100 medium requests were mixed into one
load scenario (hereinafter referred to as 10%); load scenarios with 5%
and 1% medium requests were generated in the same way.

Performance tests were conducted separately with the vLLM Scheduler, the
AscendScheduler, and the AscendScheduler with long-prompt concurrency set to 1.
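
The following is a minimal sketch of the kind of configuration used for the third setup. Note that `long_prefill_token_threshold=8192` is an assumed value chosen below the 10240-token medium requests; the benchmark description only fixes `max_num_batched_tokens=22000` and a long-prompt concurrency of 1:

```python
from vllm import LLM

# Sketch of the "long prompt concurrency = 1" setup described above.
# long_prefill_token_threshold is an assumed value (below the
# 10240-token medium requests); the benchmark text does not state it.
llm = LLM(
    model="Qwen/Qwen3-8B",
    max_num_batched_tokens=22000,
    additional_config={
        "ascend_scheduler_config": {
            "enabled": True,
            "max_long_partial_prefills": 1,
            "long_prefill_token_threshold": 8192,
        },
    },
)
```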

- vLLM version: v0.10.2
- vLLM main: 1dfea5f4a9

---------

Signed-off-by: Csrayz <jover@cmbchina.com>
2025-09-24 17:12:55 +08:00


# Additional Configuration

Additional configuration is a mechanism provided by vLLM that allows plugins to control vLLM's inner behavior on their own. vLLM Ascend uses this mechanism to make the project more flexible.

## How to use

Additional configuration can be used in both online and offline mode. Take Qwen3 as an example:

Online mode:

```bash
vllm serve Qwen/Qwen3-8B --additional-config='{"config_key":"config_value"}'
```

Offline mode:

```python
from vllm import LLM

LLM(model="Qwen/Qwen3-8B", additional_config={"config_key": "config_value"})
```
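
The placeholder key/value above stands in for any of the options documented below. As a concrete minimal sketch, enabling the ascend scheduler looks like this:

```python
from vllm import LLM

# Minimal sketch: substitute a real option for the placeholder key,
# e.g. enabling the ascend scheduler (see the tables below).
llm = LLM(
    model="Qwen/Qwen3-8B",
    additional_config={"ascend_scheduler_config": {"enabled": True}},
)
```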

## Configuration options

The following table lists the additional configuration options available in vLLM Ascend:

| Name | Type | Default | Description |
| ---- | ---- | ------- | ----------- |
| `torchair_graph_config` | dict | `{}` | The config options for torchair graph mode. |
| `ascend_scheduler_config` | dict | `{}` | The config options for the ascend scheduler. |
| `refresh` | bool | `False` | Whether to refresh the global ascend config content. This value is usually used by RLHF or ut/e2e test cases. |
| `expert_map_path` | str | `None` | When using expert load balancing for a MoE model, an expert map path needs to be passed in. |
| `enable_prefetch` | bool | `False` | Whether to enable weight prefetch. |
| `kv_cache_dtype` | str | `None` | When using kv cache quantization, the kv cache dtype needs to be set; currently only `int8` is supported. |
| `enable_shared_expert_dp` | bool | `False` | Whether to run the shared expert in DP. This gives better performance but consumes more memory. Currently only DeepSeek series models are supported. |
| `lmhead_tensor_parallel_size` | int | `None` | The custom tensor parallel size of lmhead. |
| `oproj_tensor_parallel_size` | int | `None` | The custom tensor parallel size of oproj. |
| `multistream_overlap_shared_expert` | bool | `False` | Whether to enable the multistream shared expert. This option only takes effect on MoE models with shared experts. |
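
As an illustration, several top-level options can be combined in one dict. A hedged sketch, assuming a setup where kv cache quantization is in use:

```python
from vllm import LLM

# Hedged sketch combining top-level options from the table above.
# kv_cache_dtype="int8" assumes a kv cache quantization setup;
# int8 is currently the only supported value.
llm = LLM(
    model="Qwen/Qwen3-8B",
    additional_config={
        "enable_prefetch": True,   # enable weight prefetch
        "kv_cache_dtype": "int8",  # kv cache quantization dtype
    },
)
```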

The details of each config option are as follows:

### torchair_graph_config

| Name | Type | Default | Description |
| ---- | ---- | ------- | ----------- |
| `enabled` | bool | `False` | Whether to enable torchair graph mode. Currently only DeepSeek series models and PanguProMoE support torchair graph mode. |
| `mode` | str | `None` | When using reduce-overhead mode for torchair, this needs to be set. |
| `enable_multistream_mla` | bool | `False` | Whether to put the vector ops of MLA on another stream. This option only takes effect on models using MLA (e.g., DeepSeek). |
| `enable_view_optimize` | bool | `True` | Whether to enable torchair view optimization. |
| `enable_frozen_parameter` | bool | `True` | Whether to fix the memory addresses of weights during inference, reducing input address refresh time during graph execution. |
| `use_cached_graph` | bool | `False` | Whether to use a cached graph. |
| `graph_batch_sizes` | list[int] | `[]` | The batch sizes for the torchair graph cache. |
| `graph_batch_sizes_init` | bool | `False` | Whether to initialize the graph batch sizes dynamically if `graph_batch_sizes` is empty. |
| `enable_kv_nz` | bool | `False` | Whether to enable the kvcache NZ layout. This option only takes effect on models using MLA (e.g., DeepSeek). |
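
A hedged sketch of enabling torchair graph mode; the model name is illustrative (per the table above, only DeepSeek series models and PanguProMoE support it):

```python
from vllm import LLM

# Illustrative sketch: torchair graph mode is documented only for
# DeepSeek series models and PanguProMoE; the model name is an example.
llm = LLM(
    model="deepseek-ai/DeepSeek-V2-Lite",
    additional_config={
        "torchair_graph_config": {
            "enabled": True,
            "use_cached_graph": True,
            "graph_batch_sizes": [1, 2, 4, 8],
        },
    },
)
```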

### ascend_scheduler_config

| Name | Type | Default | Description |
| ---- | ---- | ------- | ----------- |
| `enabled` | bool | `False` | Whether to enable the ascend scheduler for the V1 engine. |
| `enable_pd_transfer` | bool | `False` | Whether to enable pd transfer. When enabled, decode starts only after prefill of all requests is done. This option only takes effect on offline inference. |
| `decode_max_num_seqs` | int | `0` | Changes `max_num_seqs` of the decode phase when pd transfer is enabled. This option only takes effect when `enable_pd_transfer` is `True`. |
| `max_long_partial_prefills` | Union[int, float] | `float('inf')` | The maximum number of prompts longer than `long_prefill_token_threshold` that will be prefilled concurrently. |
| `long_prefill_token_threshold` | Union[int, float] | `float('inf')` | A request is considered long if its prompt is longer than this number of tokens. |

`ascend_scheduler_config` also supports options from the vLLM scheduler config. For example, you can add `enable_chunked_prefill: True` to `ascend_scheduler_config` as well. A usage sketch of the pd transfer options is shown below.
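
For instance, pd transfer is an offline-only option; a minimal sketch with an illustrative `decode_max_num_seqs` value:

```python
from vllm import LLM

# Minimal sketch: decode starts only after all prefills finish
# (offline inference only, per the table above).
llm = LLM(
    model="Qwen/Qwen3-8B",
    additional_config={
        "ascend_scheduler_config": {
            "enabled": True,
            "enable_pd_transfer": True,
            "decode_max_num_seqs": 8,  # illustrative value
        },
    },
)
```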

## Example

An example of additional configuration is as follows:

```python
{
    "torchair_graph_config": {
        "enabled": True,
        "use_cached_graph": True,
        "graph_batch_sizes": [1, 2, 4, 8],
        "graph_batch_sizes_init": False,
        "enable_kv_nz": False
    },
    "ascend_scheduler_config": {
        "enabled": True,
        "enable_chunked_prefill": True,
        "max_long_partial_prefills": 1,
        "long_prefill_token_threshold": 4096,
    },
    "multistream_overlap_shared_expert": True,
    "refresh": False,
}
```
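
Note that when passing this example on the command line via `--additional-config`, it must be written as valid JSON: lowercase `true`/`false` and no trailing commas. The Python-style booleans above apply when passing the dict to `additional_config` in offline mode.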