Commit Graph

2582 Commits

Author SHA1 Message Date
tfhddd
21fea86b08 feat: [CI] Introduce uv to accelerate pip install (#7127)
### What this PR does / why we need it?
Integrates uv, which significantly accelerates `pip install` execution
and resolves the concurrency issues caused by the traditional pip
caching mechanism.

Why `pip install uc-manager` is explicitly added:
This project depends on uc-manager. However, installing it via `uv pip
install uc-manager` currently fails due to a known issue, which has
already been filed with the upstream uv repository. Consequently, we
explicitly invoke `pip install uc-manager` as a temporary workaround to
ensure the build succeeds:
https://github.com/ModelEngine-Group/unified-cache-management/issues/736

Why `UV_SYSTEM_PYTHON: 1` is used:
No virtual environment has been created yet, so this configuration has
the same effect as invoking `pip install` directly.

- vLLM version: v0.16.0
- vLLM main:
15d76f74e2

Signed-off-by: tfhddd <2272751277@qq.com>
2026-03-12 16:47:23 +08:00
shaopeng-666
592661e787 [Doc] EPD doc and load-balance proxy example (#6221)
Add EPD doc and load-balance proxy example

- vLLM version: v0.14.0
- vLLM main:
d68209402d

---------

Signed-off-by: 李少鹏 <lishaopeng21@huawei.com>
2026-03-12 16:17:17 +08:00
无脸男
09d26754cd [Bugfix] Fix the issue where no exception is thrown when graph capture fails. (#5644)
### What this PR does / why we need it?

Fix the issue where no exception is thrown when graph capture fails.


- vLLM version: v0.13.0
- vLLM main:
2f4e6548ef

Signed-off-by: WithHades <244036962@qq.com>
2026-03-12 16:14:45 +08:00
xleoken
77b43492ae improve the TTFT when using mooncake (#6125)
### What this PR does / why we need it?
Improve the performance of mooncake by changing the log level from info
to debug.
### ENV
2P + 4D, EP

1. benchmark script
```
evalscope perf \
  --parallel 512 \
  --number 1024 \
  --model deepseek \
  --url http://localhost:9000/v1/chat/completions \
  --api openai \
  --dataset random \
  --max-tokens 2 \
  --min-tokens 2 \
  --prefix-length 0 \
  --min-prompt-length 512 \
  --max-prompt-length 512 \
  --tokenizer-path /tmp/DeepSeek-v3-0324-w8a8-0814  \
  --extra-args '{"ignore_eos": true}' \
  --rate 2
```

2. before patch
```
+-----------------------------------+-----------+
| Key                               |     Value |
+===================================+===========+
| Time taken for tests (s)          |  209.484  |
+-----------------------------------+-----------+
| Number of concurrency             |  512      |
+-----------------------------------+-----------+
| Request rate (req/s)              |    6      |
+-----------------------------------+-----------+
| Total requests                    | 1024      |
+-----------------------------------+-----------+
| Succeed requests                  | 1022      |
+-----------------------------------+-----------+
| Failed requests                   |    2      |
+-----------------------------------+-----------+
| Output token throughput (tok/s)   |    9.7573 |
+-----------------------------------+-----------+
| Total token throughput (tok/s)    | 2507.62   |
+-----------------------------------+-----------+
| Request throughput (req/s)        |    4.8786 |
+-----------------------------------+-----------+
| Average latency (s)               |    7.0561 |
+-----------------------------------+-----------+
| Average time to first token (s)   |    5.7444 |
+-----------------------------------+-----------+
| Average time per output token (s) |    1.3117 |
+-----------------------------------+-----------+
| Average inter-token latency (s)   |    1.3117 |
+-----------------------------------+-----------+
| Average input tokens per request  |  512      |
+-----------------------------------+-----------+
| Average output tokens per request |    2      |
+-----------------------------------+-----------+
2026-01-22 14:56:32 - evalscope - INFO: 
Percentile results:
+-------------+----------+---------+----------+-------------+--------------+---------------+----------------+---------------+
| Percentiles | TTFT (s) | ITL (s) | TPOT (s) | Latency (s) | Input tokens | Output tokens | Output (tok/s) | Total (tok/s) |
+-------------+----------+---------+----------+-------------+--------------+---------------+----------------+---------------+
|     10%     |  0.6062  | 0.5113  |  0.5113  |    1.234    |     512      |       2       |     0.0888     |    22.8338    |
|     25%     |  0.7248  | 0.5639  |  0.5639  |   1.4114    |     512      |       2       |      0.2       |    51.3919    |
|     50%     |  0.9092  | 0.7748  |  0.7748  |   1.6767    |     512      |       2       |     1.1935     |   306.7171    |
|     66%     |  1.0745  | 1.0345  |  1.0345  |   3.1308    |     512      |       2       |     1.3395     |   344.2495    |
|     75%     |  7.0812  | 1.5389  |  1.5389  |   10.0016   |     512      |       2       |     1.417      |   364.1808    |
|     80%     | 10.6944  | 1.8552  |  1.8552  |   13.3717   |     512      |       2       |     1.4778     |   379.7911    |
|     90%     | 19.2342  | 2.4325  |  2.4326  |   22.5105   |     512      |       2       |     1.6208     |   416.5381    |
|     95%     | 24.4399  | 2.8289  |  2.8289  |   26.0329   |     512      |       2       |     1.7548     |   450.9942    |
|     98%     | 45.0941  | 3.4098  |  3.4098  |   45.6287   |     512      |       2       |     1.8193     |   467.5476    |
|     99%     | 46.2786  | 3.8492  |  3.8492  |   46.9282   |     512      |       2       |     1.8576     |   477.4157    |
+-------------+----------+---------+----------+-------------+--------------+---------------+----------------+---------------+
```

3. after patch
```
Benchmarking summary:
+-----------------------------------+-----------+
| Key                               |     Value |
+===================================+===========+
| Time taken for tests (s)          |  191.613  |
+-----------------------------------+-----------+
| Number of concurrency             |  512      |
+-----------------------------------+-----------+
| Request rate (req/s)              |    6      |
+-----------------------------------+-----------+
| Total requests                    | 1024      |
+-----------------------------------+-----------+
| Succeed requests                  | 1024      |
+-----------------------------------+-----------+
| Failed requests                   |    0      |
+-----------------------------------+-----------+
| Output token throughput (tok/s)   |   10.6882 |
+-----------------------------------+-----------+
| Total token throughput (tok/s)    | 2746.87   |
+-----------------------------------+-----------+
| Request throughput (req/s)        |    5.3441 |
+-----------------------------------+-----------+
| Average latency (s)               |    2.0407 |
+-----------------------------------+-----------+
| Average time to first token (s)   |    0.7989 |
+-----------------------------------+-----------+
| Average time per output token (s) |    1.2419 |
+-----------------------------------+-----------+
| Average inter-token latency (s)   |    1.2419 |
+-----------------------------------+-----------+
| Average input tokens per request  |  512      |
+-----------------------------------+-----------+
| Average output tokens per request |    2      |
+-----------------------------------+-----------+
2026-01-22 15:10:31 - evalscope - INFO: 
Percentile results:
+-------------+----------+---------+----------+-------------+--------------+---------------+----------------+---------------+
| Percentiles | TTFT (s) | ITL (s) | TPOT (s) | Latency (s) | Input tokens | Output tokens | Output (tok/s) | Total (tok/s) |
+-------------+----------+---------+----------+-------------+--------------+---------------+----------------+---------------+
|     10%     |  0.5727  | 0.5051  |  0.5051  |   1.1761    |     512      |       2       |     1.0368     |   266.4696    |
|     25%     |  0.6497  | 0.5324  |  0.5324  |   1.3159    |     512      |       2       |     1.1763     |   302.3184    |
|     50%     |  0.7767  | 0.6908  |  0.6908  |   1.4793    |     512      |       2       |     1.3521     |   347.4944    |
|     66%     |  0.8711  | 0.7912  |  0.7912  |   1.5916    |     512      |       2       |     1.4518     |   373.1092    |
|     75%     |  0.9125  | 0.8797  |  0.8797  |   1.7008    |     512      |       2       |     1.521      |   390.9018    |
|     80%     |  0.9381  | 0.9442  |  0.9442  |   1.7657    |     512      |       2       |     1.5749     |   404.7606    |
|     90%     |  0.994   | 1.0818  |  1.0818  |   1.9289    |     512      |       2       |     1.7006     |   437.0518    |
|     95%     |  1.0369  | 1.2454  |  1.2454  |   2.2154    |     512      |       2       |     1.7937     |   460.9731    |
|     98%     |  1.1237  | 18.8814 | 18.8814  |   19.4607   |     512      |       2       |     1.8755     |   482.0097    |
|     99%     |  1.6752  | 24.4406 | 24.4406  |   25.4734   |     512      |       2       |     1.907      |   490.0993    |
+-------------+----------+---------+----------+-------------+--------------+---------------+----------------+---------------+
```

---------

Signed-off-by: xleoken <xleoken@163.com>
2026-03-12 16:13:48 +08:00
Hexiang Wang
f244f3c4a9 [BugFix] Fix problem of extra processes on rank0 device (#7107)
### What this PR does / why we need it?
Currently, when tp>1, we have extra processes on the tp rank0 device
which consume extra HBM memory. This is caused by `import
torch_npu._inductor` running before `set_device`, which triggers an
extra device initialization.
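
A minimal sketch of the ordering fix (assumptions: `torch.npu.set_device` as exposed by torch_npu and `LOCAL_RANK` as the per-worker rank variable; the actual call sites live in the worker initialization):

```python
import os

import torch
import torch_npu  # noqa: F401  (registers the torch.npu backend)

# Bind this worker to its own NPU *before* importing torch_npu._inductor.
# Importing the inductor module first initializes a device context in
# every worker, leaving extra processes and HBM usage on the rank-0 card.
torch.npu.set_device(int(os.environ.get("LOCAL_RANK", "0")))

import torch_npu._inductor  # noqa: E402,F401
```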

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
All ci passed.

- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

---------

Signed-off-by: whx-sjtu <2952154980@qq.com>
2026-03-12 15:59:03 +08:00
herizhen
e5024d0264 [doc] Add Ascend PyTorch Profiler section (#7117)
### What this PR does / why we need it?
add Ascend PyTorch Profiler section

### Does this PR introduce _any_ user-facing change?
no

### How was this patch tested?
Documentation Format Checks
Technical Content Validation
Build Verification
Version Compatibility
- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

---------

Signed-off-by: herizhen <1270637059@qq.com>
2026-03-12 15:51:00 +08:00
Mercykid-bash
132f3c5d0a Support per-step heat collection and enhance FlashLB for multi-stage load balancing (#6477)
# Feature: FlashLB algorithm

## Purpose

This Pull Request enhances the EPLB (Expert Parallelism Load Balancing)
system by introducing a novel load balancing algorithm: FlashLB.
1. The default algorithm adopts two separate sub-procedures to optimize
expert replication and placement independently:

a. **Expert Replica Allotment Sub-procedure** : Determines the number of
replicas for all experts. At each step, it greedily adds one more
replica to the expert with the highest per-replica load, aiming to
minimize load skew at the expert replica granularity (Min Max Replica,
MMR).

b. **Expert Replica Placement Sub-procedure** : Distributes all replicas
across devices. First, it sorts the generated replicas in descending
order of hotness, then iteratively places the currently hottest replica
onto the device with the lowest cumulative load and available slots.

However, this simplistic combination of two separate procedures lacks
synergy and often leads to sub-optimal load balancing. For example, in
the simple scenario illustrated below: given 8 logical experts with
hotness values [600, 560, 120, 120, 20, 10, 10, 10], and 2 replicas
allocated per device across 8 devices, the default EPLB algorithm
results in a maximum per-device hotness of 232 (peak-average load ratio
1.28), while our proposed FlashLB algorithm reduces this value to 205
(peak-average load ratio 1.13).

![Default EPLB vs. FlashLB placement on the example workload](https://github.com/user-attachments/assets/b9b10fab-651e-4524-9942-adbca8d044a4)

2. The default algorithm simply aggregates hotness measurements across
the entire profiling window. While this provides a coarse approximation
of the hotness distribution, it fails to capture the time-phased
variations and temporal correlations in expert hotness (both within and
between experts) across iterations—phenomena that have been observed in
real-world scenarios. Such single-point hotness estimation degrades the
solution quality of the load balancing algorithm.

3. The default algorithm regularly recalculates updated expert placement
results for all layers without discrimination. Considering that
excessive expert updates can impact Service Level Objectives (SLOs),
such full-scale redeployment leads to excessively high adjustment
overhead, which negatively affects end-to-end performance.

## FlashLB Algorithm Principle

### 1. Joint Optimization of Replica Allotment and Placement

FlashLB achieves joint optimization of replica allotment and placement
through a novel tree search approach, combined with carefully designed
efficient pruning and lightweight look-ahead estimation. We partition
all experts into several subsets, and for each subset, hierarchically
determine the optimal replica count and placement. Leveraging efficient
pruning and lightweight look-ahead estimation, the process consistently
aims to optimize the globally expected inter-device load balance degree
(considering both deployed and unexplored experts) while ensuring
sufficient computational efficiency. Additionally, precompilation
techniques are employed for acceleration, delivering load balancing that
is both high-quality and practically efficient.
### 2. Multi-Episode Enhancement

Instead of performing full-duration averaging like the default
algorithm, FlashLB partitions each profiling interval (e.g., 1024
iterations) into multiple consecutive smaller episodes (e.g., 16
iterations). This preserves hotness fluctuation and correlation
information. It then constructs a multi-objective optimization problem
to co-optimize these episodes simultaneously, enabling adaptability to
interleaved hotness patterns and improving statistical robustness.
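
Illustratively, the episode split amounts to the reshape below (the 1024-iteration window and 16-iteration episodes are the figures from the text; summation as the per-episode aggregate is an assumption):

```python
import numpy as np

# Per-iteration expert hotness over one profiling interval:
# (1024 iterations, 8 experts).
per_iter_hotness = np.random.rand(1024, 8)

# Default algorithm: one aggregate over the whole window -> one objective.
window_hotness = per_iter_hotness.sum(axis=0)

# FlashLB: 64 consecutive 16-iteration episodes -> one objective per
# episode, preserving fluctuation and correlation across the window.
episode_hotness = per_iter_hotness.reshape(-1, 16, 8).sum(axis=1)  # (64, 8)
```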

### 3. Layer-wise Cherry-Picking Redeployment

To reduce the overhead of frequent expert redeployment, FlashLB
introduces a cherry-picking redeployment scheme. During each algorithmic
decision cycle, it tracks the load-balance degree of all layers in real
time and triggers expert placement updates only for those layers whose
peak-average ratio exceeds a predefined threshold. This avoids
unnecessary redeployment for stable layers, significantly reducing
adjustment overhead and thereby improving end-to-end performance.
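
A minimal sketch of the cherry-picking rule (function name and the 1.15 threshold are illustrative, not the shipped defaults):

```python
def layers_to_redeploy(per_layer_device_loads, threshold=1.15):
    """Pick only the layers whose peak-to-average device load ratio
    exceeds `threshold`; stable layers keep their current placement."""
    picked = []
    for layer, loads in enumerate(per_layer_device_loads):
        avg = sum(loads) / len(loads)
        if avg > 0 and max(loads) / avg > threshold:
            picked.append(layer)
    return picked

# Example: only layer 1 exceeds a 1.15 peak-average ratio.
print(layers_to_redeploy([[100, 101, 99], [180, 90, 90]]))  # [1]
```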

## Co-author:

Co-authored-by: Skywalker-EP 173723846@qq.com

This PR mainly introduces two key optimizations for load balancing
scheduling:
1. **Add per-step heat collection function**:
Support real-time collection of per-step heat information during model
inference. This enables more fine-grained load balancing decisions by
taking per-step heat as the optimization target, improving scheduling
accuracy for dynamic and fluctuating workloads.

2. **Update FlashLB algorithm**:
Upgrade the FlashLB scheduling logic to better adapt to multi-stage heat
distribution scenarios. The improved algorithm can comprehensively
perceive and utilize multi-stage heat characteristics, achieving more
stable and efficient load balancing under complex expert deployment and
dynamic traffic patterns.

---------

Signed-off-by: Mercykid-bash <ruanche0218@gmail.com>
Signed-off-by: xuzewei28 <xuzewei2@h-partners.com>
Co-authored-by: xuzewei28 <xuzewei2@h-partners.com>
2026-03-12 15:49:09 +08:00
Feng-xiaosuo
abe72d7cb9 Refactor quantization layer name mapping to leverage vLLM built-in mappers (#7050)

### What this PR does / why we need it?
This PR modifies the loading logic for layer name prefixes in quantized
models. The goal is to reduce or eliminate the need for point-to-point
(hardcoded) modifications by leveraging the built-in mapper mechanism
already provided in vLLM's model code. For models that do not yet have a
corresponding mapper, the original point-to-point modification approach
has been retained to ensure backward compatibility.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
The changes were validated using an offline deployment script to launch
and verify multiple multimodal models. Testing confirmed that the
updated loading logic correctly handles layer name prefixes across
different model architectures, with no regression in model
initialization or inference behavior.
- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

---------

Signed-off-by: Matrix_K <zhangke144@huawei.com>
Signed-off-by: Feng-xiaosuo <tengchang1@huawei.com>
Co-authored-by: Matrix_K <zhangke144@huawei.com>
2026-03-12 15:48:14 +08:00
drslark
fb0d6dd175 [main][bugfix] Fixed the problem of speculative decoding in FULL mode (#7148)
### What this PR does / why we need it?

Fixed the error in speculative decoding in FULL mode when `num_spec + 1`
is not in `cudagraph_capture_sizes`.

Now we can run speculative decoding in FULL mode, with the drafter
running in eager mode.

It depends on https://github.com/vllm-project/vllm-ascend/pull/7144 .

### Does this PR introduce _any_ user-facing change?

N/A

### How was this patch tested?

Test code is shown as below:

```python
prompts = [
    "1.Who are you?",
    "2. Who are you?",
]

sampling_params = SamplingParams(temperature=0.0, top_p=0.95, top_k=40, max_tokens=200)
llm = LLM(
    model="/home/some-model/Meta-Llama-3.1-8B-Instruct",
    tensor_parallel_size=1,
    max_num_seqs=32,
    # enforce_eager=True,
    disable_log_stats=False,
    distributed_executor_backend="mp",
    gpu_memory_utilization=0.7,
    async_scheduling=True,

    speculative_config={
        "enforce_eager": True,
        "model": "/home/some-model/EAGLE3-LLaMA3.1-Instruct-8B",
        "disable_padded_drafter_batch": False,
        "method": "eagle3",
        "num_speculative_tokens": 2,
    },
    
    compilation_config={
        "cudagraph_mode": "FULL",
        "cudagraph_num_of_warmups": 1,
    },

    max_model_len=4096, 
    enable_prefix_caching=False,
)

outputs = llm.generate(prompts, sampling_params)
```

The result before:

```text
   File "/vllm-workspace/vllm/vllm/v1/cudagraph_dispatcher.py", line 140, in _create_padded_batch_descriptor
     assert num_tokens_padded % uniform_decode_query_len == 0
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 AssertionError
```

The result after:

```text
--------------------------------------------------
total_num_output_tokens: 400
num_drafts: 249
num_draft_tokens: 498
num_accepted_tokens: 149
mean acceptance length: 1.60
--------------------------------------------------
acceptance at token 0: 0.43
acceptance at token 1: 0.17
```

- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

Signed-off-by: drslark <slarksblood@qq.com>
2026-03-12 14:51:12 +08:00
XiaoxinWang
37d1bd8c50 Fixed FIA pad logic in graph mode (#7144)
### What this PR does / why we need it?
Related to vLLM PR #34043, which deleted the function
`relax_for_mixed_batch_cudagraphs`. As a result, `num_reqs` no longer
equals the actual number of requests. Because the FIA operator requires
that `query_start_loc[-1]` equal the total number of computed tokens,
deleting this function causes the FIA error.
In full graph mode, set `num_reqs_paded = num_reqs` to fix the error.
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

---------

Signed-off-by: wangxiaoxin-sherie <wangxiaoxin7@huawei.com>
Co-authored-by: wangxiaoxin-sherie <wangxiaoxin7@huawei.com>
2026-03-12 14:50:54 +08:00
MengLong Chen
bbffe58b63 [Doc] fix DSV3.1 PD configs (#7187)
### What this PR does / why we need it?
Modify the `kv_port` and `engine_id` config of DeepSeek-V3.1/R1 in the
2P1D scenario

- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

Signed-off-by: chenmenglong <chenmenglong1@huawei.com>
2026-03-12 14:24:49 +08:00
Qiu
aa0143e55d refactor: add a check before layer_sharding logging (#7186)
### What this PR does / why we need it?
We should only display this log message when layer_sharding is enabled.
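
A sketch of the guard this PR describes (all names here are illustrative):

```python
import logging

logger = logging.getLogger(__name__)

def maybe_log_layer_sharding(layer_sharding_enabled: bool, plan: str) -> None:
    # Only emit the message when layer_sharding is actually enabled.
    if layer_sharding_enabled:
        logger.info("layer_sharding plan: %s", plan)
```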
- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

Signed-off-by: QiuChunshuo <qiuchunshuo@huawei.com>
2026-03-12 11:56:04 +08:00
linfeng-yuan
5f3826b093 [Build] Add support for Ascend950 chip (#7151)
### What this PR does / why we need it?
This PR adds support for the Ascend950 chip. This includes:
- Updating build scripts (`CMakeLists.txt` and `setup.py`) to recognize
the Ascend950 chip and set appropriate compilation flags.
- Disabling a set of custom operators that are not yet supported on the
Ascend950 hardware target.
- Performing a codebase-wide refactoring of `pipe_barrier()` calls to
the namespaced `AscendC::PipeBarrier<>()` for improved code consistency
and adherence to the latest API standards.

Ascend950DT e2e passed (Qwen3-32B-MXFP8) and CI passed
- vLLM version: v0.16.0
- vLLM main:
4034c3d32e
---------
Signed-off-by: linfeng-yuan <1102311262@qq.com>
2026-03-12 10:25:51 +08:00
meihanc
da01a74009 Revert "[CI] fix skipped e2e test when upgrading vllm version (#6654)" (#7166)
This reverts commit f6db47f103.

- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

Signed-off-by: Meihan-chen <jcccx.cmh@gmail.com>
2026-03-11 23:03:15 +08:00
shiyuan680
3b6b3c4214 [MODELRUNNERV2] Fix penalty ops (#7013)
### What this PR does / why we need it?
Fix the penalty ops for the new version, achieving a 10% performance
improvement.

### How was this patch tested?
pytest
tests/e2e/nightly/single_node/ops/singlecard_ops/triton/test_penality.py
- vLLM version: v0.16.0
- vLLM main:
15d76f74e2

Signed-off-by: shiyuan680 <917935075@qq.com>
2026-03-11 17:13:34 +08:00
yupeng
830f39dd70 [Bugfix][LoRA] Fix the issue when enable LoRA + tp + fully_sharded_loras (#6650)
### What this PR does / why we need it?
Fix the issue #6143 .

### Does this PR introduce _any_ user-facing change?
Allows starting the server with `--enable-lora && --fully-sharded-loras
&& --tensor_parallel_size 2`.

### How was this patch tested?
pytest -sv tests/e2e/multicard/2-cards/test_llama32_lora_tp2.py
- vLLM version: v0.15.0
- vLLM main:
d7e17aaacd

---------

Signed-off-by: paulyu12 <507435917@qq.com>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
2026-03-11 15:43:15 +08:00
pz1116
a7f91fce71 [KV Pool]get_num_new_matched_tokens return 0 if token length < block_size (#7146)
### What this PR does / why we need it?
Currently, we call lookup_client to look up token hits in the KV Pool.
However, when the token length < block size, the key will be empty and
there is no point in looking up the KV Pool backend, since there will
never be a hit.
Hence, add an early return in `get_num_new_matched_tokens` when
`token_len` < `block_size`.
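
A sketch of the early return, assuming a connector-style method (the class shape and attribute names are illustrative; the real signature lives in the KV Pool connector):

```python
class KVPoolConnectorSketch:
    def __init__(self, lookup_client, block_size: int):
        self.lookup_client = lookup_client
        self.block_size = block_size

    def get_num_new_matched_tokens(self, request, token_len: int) -> int:
        # Fewer tokens than one block can never form a lookup key, so a
        # backend lookup can never hit; skip it entirely.
        if token_len < self.block_size:
            return 0
        return self.lookup_client.lookup(request)
```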

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

---------

Signed-off-by: Pz1116 <zpbzpb123123@gmail.com>
Co-authored-by: fems14 <1804143737@qq.com>
2026-03-11 15:05:34 +08:00
Mengqing Cao
1a83c8e2f5 [CI] Build Image for v0.16.0rc1 (#7155)
### What this PR does / why we need it?
Build Image for v0.16.0rc1
- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

Signed-off-by: MengqingCao <cmq0113@163.com>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
2026-03-11 14:48:50 +08:00
SILONG ZENG
90aa048e60 [CI] Skip test_mooncake_layerwise_connector.py in ut (#7147)
### What this PR does / why we need it?
The `test_mooncake_layerwise_connector.py` file in the `ut` test will be
skipped for now and fixed later.

- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

Signed-off-by: MrZ20 <2609716663@qq.com>
2026-03-11 11:46:29 +08:00
zxr2333
e16009b2cc [BugFix] Fix recomputed scheduler bug (#7137)
### What this PR does / why we need it?
Fix the wrong usage of `model_type`.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
By CI.

- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

Signed-off-by: nwpu-zxr <zhouxuerong2@huawei.com>
2026-03-11 00:32:19 +08:00
SparrowMu
54668e73c5 [Model] Support Minimax-m2.5 on NPU (#7105)
### What this PR does / why we need it?

Initial version to support Minimax-M2.5 on vllm-ascend.
This commit converts the original FP8 weights to quantized BF16 to
support Minimax-M2.5 on NPU.

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

### Test Report
Self-tested precision summary; the official precision score on AIME2025
is 86.3.
<img width="426" height="84" alt="image"
src="https://github.com/user-attachments/assets/a3ce2452-92fa-4713-962e-862248e0b61a"
/>

---------

Signed-off-by: limuyuan <limuyuan3@huawei.com>
Signed-off-by: SparrowMu <52023119+SparrowMu@users.noreply.github.com>
Co-authored-by: limuyuan <limuyuan3@huawei.com>
2026-03-11 00:12:02 +08:00
zxr2333
239683c7a6 [P/D]Mooncake Layerwise Connector supports hybrid attention manager with multiple kvcache groups (#7022)
### What this PR does / why we need it?
Mooncake Layerwise Connector supports hybrid attention manager with
multiple kvcache groups.

### Does this PR introduce _any_ user-facing change?
Yes.

### How was this patch tested?
By CI.

- vLLM version: v0.16.0
- vLLM main:
15d76f74e2

---------

Signed-off-by: nwpu-zxr <zhouxuerong2@huawei.com>
2026-03-10 23:59:20 +08:00
pppeng
0f289fa2a8 Add patch_qwen3_5 for triton ops fused_recurrent_gated_delta_rule (#7109)
### What this PR does / why we need it?

The op `torch_npu.npu_recurrent_gated_delta_rule` currently does not
support `ssm_state` inputs in float32 format, so we temporarily retain
the Triton-based `_forward_core` implementation for Qwen3_5.
---------

Signed-off-by: pppeng <zepengliu912@qq.com>
Signed-off-by: pppeng <60355449+ppppeng@users.noreply.github.com>
2026-03-10 23:28:58 +08:00
Canlin Guo
a78a00e0b1 [Doc][ReleaseNote] Add release notes for v0.16.0rc1 (#7067)
Add release notes for v0.16.0rc1

- vLLM version: v0.16.0
- vLLM main:
4034c3d32e
---------
Signed-off-by: gcanlin <canlinguosdu@gmail.com>
Signed-off-by: Canlin Guo <961750412@qq.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>
2026-03-10 22:45:05 +08:00
Li Wang
881c38d210 [Misc] Download in both hk and guiyang regions (#7129)
### What this PR does / why we need it?
Since the PVC files for Guiyang and Hong Kong are not shared, we need to
trigger the download in both regions simultaneously when downloading the
model, to ensure that the models in all regions stay synchronized.
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

Signed-off-by: wangli <wangli858794774@gmail.com>
2026-03-10 19:22:32 +08:00
shaopeng-666
6e8d3681ae [bugfix] Fix w4a8 weight loading failure when EP is not enabled (#7090)
### What this PR does / why we need it?
This is a bug fix for the issue where the MoE model fails to load
quantized weights in w4a8 format when EP is not enabled. The parameters
["weight_scale_second", "weight_offset_second", "scale_bias"] shall be
parsed in per-group mode, regardless of other conditions.
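
The rule reduces to a name check along these lines (a hypothetical helper, not the actual loader code):

```python
PER_GROUP_PARAMS = ("weight_scale_second", "weight_offset_second", "scale_bias")

def needs_per_group_parsing(param_name: str) -> bool:
    # These w4a8 parameters are always parsed in per-group mode,
    # regardless of whether EP is enabled.
    return param_name.endswith(PER_GROUP_PARAMS)

print(needs_per_group_parsing("layers.0.mlp.weight_scale_second"))  # True
```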
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?

- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

Signed-off-by: 李少鹏 <lishaopeng21@huawei.com>
2026-03-10 16:57:05 +08:00
lilinsiman
a5ea699e29 [eagle][cp] fix eagle_cp enable bug2 (#7079)
### What this PR does / why we need it?
Fix acceptance and high-concurrency bugs when eagle3 and cp are enabled.

### Does this PR introduce _any_ user-facing change?
no

### How was this patch tested?
tests and ut

- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

---------

Signed-off-by: lilinsiman <lilinsiman@gmail.com>
2026-03-10 16:32:49 +08:00
zhangxinyuehfad
67d40f23fd [CI] Upgrade nightly multi-node-tests max-parallel to 2 (#7035)
### What this PR does / why we need it?

1. Increase nightly multi-node test max-parallel from 1 to 2, and fix
resource conflicts that arise when tests run concurrently.
2. Fix parse-trigger job: Add an if condition so it only runs on
schedule, workflow_dispatch, or PRs labeled nightly-test
3. Adjust nightly schedule: Shift trigger time from 24:00 to 23:45
(UTC+8)

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

---------

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
2026-03-10 16:25:51 +08:00
pu-zhe
5df450bca4 [Feat] [310p] Support w8a8sc quantization method (#7075)
### What this PR does / why we need it?
New Quantization Method: Introduced support for the W8A8SC static linear
quantization scheme specifically for 310P hardware, enabling more
efficient model compression.
Refactored `save_sharded_state_310.py` to avoid a multi-process issue.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
W8A8SC quant E2E test.

- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

---------

Signed-off-by: pu-zhe <zpuaa@outlook.com>
2026-03-10 16:13:20 +08:00
Frank Chen
14c71b19e1 [Doc][CPU binding] Add user/developer guide for CPU binding (#7045)
### What this PR does / why we need it?
This PR adds comprehensive documentation for the CPU binding feature on
Ascend NPUs. It includes:

- A detailed developer guide
(`docs/source/developer_guide/feature_guide/cpu_binding.md`) covering
the design, internal logic, allocation examples, and troubleshooting for
the CPU binding mechanism.
- A concise user guide
(`docs/source/user_guide/feature_guide/cpu_binding.md`) explaining the
core concepts, usage, and common issues for end-users.
- An update to `additional_config.md` to use consistent terminology for
binding strategies (`global-slicing` and `topo-affinity`).

This documentation is needed to help both developers and users
understand, use, and debug the CPU binding feature, which is critical
for performance on ARM+Ascend platforms.

### Does this PR introduce _any_ user-facing change?
No. This is a documentation-only update.

### How was this patch tested?
The documentation has been reviewed for clarity and technical accuracy.
The examples and descriptions align with the implementation in
`vllm_ascend/cpu_binding.py`.

- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

---------

Signed-off-by: chenchuw886 <chenchuw@huawei.com>
Signed-off-by: c00818886 <chenchuwei@huawei.com>
Co-authored-by: chenchuw886 <chenchuw@huawei.com>
2026-03-10 15:59:31 +08:00
Li Wang
33234aa0c5 Revert "[Feature][Quant] Auto-detect quantization format from model f… (#6873)
This reverts commit 3953dcf784 to keep the basic functions available.

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
2026-03-10 11:27:32 +08:00
yupeng
40f7d93f1a [bugfix][LoRA] Fix the lora accuracy issue introduced by the upstream vLLM changed. (#6958)
### What this PR does / why we need it?
Fix the LoRA e2e test accuracy issue that was introduced by the upstream PR
https://github.com/vllm-project/vllm/pull/32005

### How was this patch tested?
pytest -sv tests/e2e/singlecard/test_llama32_lora.py

- vLLM version: v0.16.0
- vLLM main:
15d76f74e2
---------
Signed-off-by: paulyu12 <507435917@qq.com>
Signed-off-by: yupeng <507435917@qq.com>
2026-03-10 10:43:18 +08:00
ZRJ026
a398fa6a0b [Bugfix]: correct streaming content-type in load balance proxy server (#6985)
Set the proper 'text/event-stream; charset=utf-8' media type for
streaming requests instead of the hardcoded 'application/json'.

### What this PR does / why we need it?

This PR fixes an issue in the disaggregated prefill proxy server where
streaming requests (`"stream": true`) were always returned with a
hardcoded `Content-Type: application/json`, even when the backend vLLM
servers correctly returned Server-Sent Events (SSE) with `Content-Type:
text/event-stream; charset=utf-8`.

Specifically, the proxy used `StreamingResponse` with a fixed
`media_type` of `application/json`, which caused FastAPI to override the
response headers and break proper SSE semantics. As a result, clients
(e.g. `curl -i`, EventSource, or OpenAI-compatible SDKs) could not
reliably receive token-by-token streaming output.

In addition, this incorrect response type causes compatibility issues
with benchmarking and load-testing tools such as **EvalScope**. When
streaming is enabled, these tools expect SSE-formatted responses to
correctly parse token usage information. With the incorrect
`application/json` content type, EvalScope fails to parse the response
and reports errors similar to:`2025-12-15 09:27:56 - evalscope - ERROR:
Failed to parse usage from response: list index out of range. Response:
[]`

This PR updates the proxy to:
- Detect whether the incoming request is a streaming request
(`stream=true`)
- Use `text/event-stream; charset=utf-8` for streaming responses
- Preserve `application/json` for non-streaming responses

This aligns the proxy behavior with native vLLM prefill/decoder servers
and the OpenAI-compatible streaming API contract.

Fixes incorrect streaming response headers that prevented proper
real-time token delivery.
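
The fix boils down to choosing the media type from the request body's `stream` flag; a minimal FastAPI-style sketch (handler name and chunk iterator are illustrative):

```python
from fastapi.responses import StreamingResponse

def proxied_response(request_body: dict, upstream_chunks):
    # Mirror the backend's SSE semantics for streaming requests instead
    # of hardcoding application/json for every response.
    media_type = ("text/event-stream; charset=utf-8"
                  if request_body.get("stream") else "application/json")
    return StreamingResponse(upstream_chunks, media_type=media_type)
```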

### Does this PR introduce _any_ user-facing change?

None

### How was this patch tested?
This change was tested manually using a disaggregated prefill + decode
setup
with the proxy server.

### Test Steps

1. Start prefiller and decoder vLLM servers:
```bash
   vllm serve --host 0.0.0.0 --port 8001 ...
   vllm serve --host 0.0.0.0 --port 8002 ...
```

2. Start the proxy server:
```bash
python load_balance_proxy_server_example.py \
  --host 127.0.0.1 --port 8000 \
  --prefiller-hosts 127.0.0.1 --prefiller-ports 8001 \
  --decoder-hosts 127.0.0.1 --decoder-ports 8002
```
3. Send a streaming completion request through the proxy:
```bash
curl -i -X POST http://127.0.0.1:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "test",
        "prompt": "hello",
        "max_tokens": 3,
        "stream": true
      }'
```
4. Verify the following:

- The response header is Content-Type: text/event-stream; charset=utf-8
- Tokens are streamed incrementally as SSE data: events
- Non-streaming requests still return application/json
No automated tests were added because this change affects an example
proxy
server and is limited to HTTP response headers. The behavior is directly
verifiable using standard SSE-compatible clients.

- vLLM version: v0.16.0
- vLLM main:
15d76f74e2

Signed-off-by: zrj026 <zhangrunjiang026@gmail.com>
Co-authored-by: zrj026 <zhangrunjiang026@gmail.com>
2026-03-10 10:11:35 +08:00
NJX
bb7ed759d4 [Doc] Fix broken chunked-prefill URL in supported features (#6963)
## What this PR does / why we need it?

Fixes the broken URL for chunked-prefill in the supported features
documentation page.

The chunked prefill documentation URL was moved from
`performance/optimization.html` to `configuration/optimization.html` in
upstream vLLM docs. This PR updates the link to point to the correct
location.

**Before**:
https://docs.vllm.ai/en/stable/performance/optimization.html#chunked-prefill
(404)
**After**:
https://docs.vllm.ai/en/stable/configuration/optimization.html#chunked-prefill
(working)

## Does this PR introduce _any_ user-facing change?

Yes - fixes a broken documentation link that users encounter when
clicking 'Chunked Prefill' in the supported features page.

## How was this patch tested?

- Verified the new URL resolves correctly
- Documentation change only

Closes #4217
- vLLM version: v0.16.0
- vLLM main:
15d76f74e2

Signed-off-by: NJX-njx <3771829673@qq.com>
2026-03-10 10:10:07 +08:00
NJX
9b30d4e774 [Doc][Misc] Add metrics usage documentation and example (#6962)
## What this PR does / why we need it?

This PR addresses issue #5027 where users find that `output.metrics`
returns `None` when using the vLLM offline inference API.

**Root Cause**: vLLM disables log stats by default
(`disable_log_stats=True`), which causes `output.metrics` to be `None`.

**Changes**:
1. Added a NOTE comment in `examples/offline_inference_npu.py`
explaining how to enable metrics
2. Created a new example `examples/offline_inference_metrics.py`
demonstrating how to access request-level metrics (`first_token_time`,
`finished_time`, etc.) by setting `disable_log_stats=False`; a minimal
sketch follows below
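
A minimal offline sketch of the pattern that example demonstrates (the model path is a placeholder):

```python
from vllm import LLM, SamplingParams

# disable_log_stats=False is the key: with the default (True),
# output.metrics stays None.
llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct", disable_log_stats=False)

for output in llm.generate(["Hello"], SamplingParams(max_tokens=8)):
    metrics = output.metrics
    if metrics is not None:
        print(metrics.first_token_time, metrics.finished_time)
```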

## Does this PR introduce _any_ user-facing change?

Yes - adds documentation and example code to help users understand how
to access output metrics.

## How was this patch tested?

- Documentation/example change only
- Verified example code follows the same patterns as existing examples

Closes #5027
- vLLM version: v0.16.0
- vLLM main:
15d76f74e2

Signed-off-by: NJX-njx <3771829673@qq.com>
2026-03-10 10:09:50 +08:00
Yikun Jiang
326fd359aa [Docs] add and publish llms.txt for LLM discovery (#6886)
### What this PR does / why we need it?
- move llms.txt under docs/source and publish it at /llms.txt via
html_extra_path
- rewrite llms.txt to an LLM-friendly link index
- use _sources markdown links and include missing entry points such as
FAQs

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?

- vLLM version: v0.16.0
- vLLM main:
15d76f74e2

Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
2026-03-10 10:06:27 +08:00
ZKSU
bdad11e9a8 [doc] Update GLM4.x.md, add GLM4.x multi-node deploy tutorial (#6872)
### What this PR does / why we need it?

This PR updates the GLM4.x documentation by adding a multi-node
deployment tutorial, e.g., for 2 × Atlas 800 A2 (64G × 8).

- **What changed**: Added instructions for deploying GLM-4.X models
across multiple nodes, including environment variables and example
commands.
- **Why needed**: Although the previous tutorial stated that multi-node
deployment on Atlas 800 A2 (64GB × 8) is **not recommended**, we still
face situations that require deploying GLM-4.7 on 2 × Atlas 800 A2
(64G × 8). We successfully ran GLM-4.7 on 2 nodes and it works fine, so
we think it is time to update this part.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

- Verified that the new documentation renders correctly in Markdown
format.
- Tested the multi-node deployment steps on 2 × Atlas 800 A2 (64G × 8)
to ensure the commands work as described.
- Confirmed that existing GLM4.x documentation links and structure
remain intact.
- vLLM version: v0.16.0
- vLLM main:
15d76f74e2

---------

Signed-off-by: ZKSU <zksu@outlook.com>
2026-03-10 10:01:53 +08:00
xleoken
146b9d2a83 [BugFix] Fix metadata execution error: integer modulo by zero (#6521)
### What this PR does / why we need it?
Fix the metadata execution error: integer modulo by zero.

- vLLM version: v0.15.0
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.15.0

Signed-off-by: xleoken <xleoken@163.com>
2026-03-10 09:58:06 +08:00
meihanc
f6db47f103 [CI] fix skipped e2e test when upgrading vllm version (#6654)
### What this PR does / why we need it?
Fix skipped test_aclgraph_capture_replay.py when upgrading the vLLM version.

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.15.0
- vLLM main:
13397841ab

Signed-off-by: Meihan-chen <jcccx.cmh@gmail.com>
2026-03-10 09:55:35 +08:00
SILONG ZENG
43df2cb2fc [Lint] Style: Convert tests/ to ruff format (Batch #1) (#6738)
### What this PR does / why we need it?
**Scope of Changes**:
| File Path |
| :--- |
| `tests/e2e/310p/multicard/test_vl_model_multicard.py` |
| `tests/e2e/310p/singlecard/test_vl_model_singlecard.py` |
| `tests/e2e/310p/test_utils.py` |
| `tests/e2e/conftest.py` |
| `tests/e2e/model_utils.py` |
| `tests/e2e/models/conftest.py` |
| `tests/e2e/models/test_lm_eval_correctness.py` |
| `tests/e2e/multicard/2-cards/spec_decode/test_spec_decode.py` |
| `tests/e2e/multicard/2-cards/test_aclgraph_capture_replay.py` |
| `tests/e2e/multicard/2-cards/test_data_parallel.py` |
| `tests/e2e/multicard/2-cards/test_disaggregated_encoder.py` |
| `tests/e2e/multicard/2-cards/test_expert_parallel.py` |
| `tests/e2e/multicard/2-cards/test_external_launcher.py` |
| `tests/e2e/multicard/2-cards/test_full_graph_mode.py` |
| `tests/e2e/multicard/2-cards/test_ilama_lora_tp2.py` |
| `tests/e2e/multicard/2-cards/test_offline_inference_distributed.py` |
| `tests/e2e/multicard/2-cards/test_offline_weight_load.py` |
| `tests/e2e/multicard/2-cards/test_pipeline_parallel.py` |
| `tests/e2e/multicard/2-cards/test_prefix_caching.py` |
| `tests/e2e/multicard/2-cards/test_quantization.py` |
| `tests/e2e/multicard/2-cards/test_qwen3_moe.py` |
| `tests/e2e/multicard/2-cards/test_qwen3_moe_routing_replay.py` |
| `tests/e2e/multicard/2-cards/test_qwen3_performance.py` |
| `tests/e2e/multicard/2-cards/test_shared_expert_dp.py` |
| `tests/e2e/multicard/2-cards/test_single_request_aclgraph.py` |
| `tests/e2e/multicard/2-cards/test_sp_pass.py` |

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.15.0
- vLLM main:
9562912cea

Signed-off-by: MrZ20 <2609716663@qq.com>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
2026-03-10 09:52:50 +08:00
xmpp777
9216e1b050 [fix] Add support for Qwen3.5 Dense and MoE on Ascend (#6933)
### What this PR does / why we need it?

This pull request introduces support for the Qwen3.5 MoE model on Ascend
devices. The key changes are:

* **Quantization Configuration for Qwen3.5 MoE**: Adds necessary prefix
mappings and packed module definitions for `qwen3_5_moe` in
`vllm_ascend/quantization/modelslim_config.py` to enable ModelSlim
quantization.
* **Triton Kernel Fix**: Corrects a bug in the `fused_gdn_gating` Triton
kernel. The calculation for `BLK_BATCHES` had an operator precedence
issue, which is now resolved. The calculation has also been made more
robust with added clamping to prevent potential out-of-bounds memory
access in the unified buffer (see the illustrative sketch below).

These changes enable the correct and efficient execution of Qwen3.5 MoE
models on Ascend hardware.
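
For illustration only, the class of precedence bug looks like this (names and values here are not the kernel's actual ones):

```python
num_batches, blk = 1000, 64

# Buggy: // binds tighter than + and -, so this evaluates to
# num_batches + blk - (1 // blk) == num_batches + blk.
blk_batches_buggy = num_batches + blk - 1 // blk

# Fixed: parenthesize to get the intended ceiling division, then clamp
# so downstream indexing cannot run past the buffer.
blk_batches = min((num_batches + blk - 1) // blk, num_batches)

print(blk_batches_buggy, blk_batches)  # 1064 vs 16
```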

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

CI should be used to verify the correctness of these changes. It is
recommended to run tests with the Qwen3.5 MoE model to ensure the new
configurations and the kernel fix work as expected.

Signed-off-by: xmpp777 <yangming2@huawei.com>
2026-03-10 09:09:31 +08:00
dependabot[bot]
3b25ded8b7 [CI] Bump docker/metadata-action from 5 to 6 (#7069)
Bumps [docker/metadata-action](https://github.com/docker/metadata-action) from 5 to 6.


- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-10 09:06:04 +08:00
dependabot[bot]
2325bbe79b [CI] Bump actions/checkout from 4 to 6 (#7070)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 6.

- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-10 09:05:22 +08:00
ZT-AIA
ee5347e824 [qwen3 next] Add Ascend C casual_conv1d_fn (#6661)
### What this PR does / why we need it?
Add the Ascend C casual_conv1d_fn.

- vLLM version: v0.15.0
- vLLM main:
13397841ab
---------
Signed-off-by: ZT-AIA <1028681969@qq.com>
Signed-off-by: ZT-AIA <63220130+ZT-AIA@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-03-09 23:29:49 +08:00
Hexiang Wang
48b624e4cc [BugFix] Fix implementation bug of triton rope_siso (#7082)
### What this PR does / why we need it?
The previous implementation of the Triton rope_siso kernel was missing
the store of the second half of the RoPE results, which resulted in:

1. an accuracy problem in the neox-style scenario
2. UB overflow in the non-neox-style scenario

This PR fixes it and supplements a nightly test case for it.

- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

Signed-off-by: whx-sjtu <2952154980@qq.com>
2026-03-09 23:08:43 +08:00
liuchen2026fly
542258ac9d [feat] parameterize hardcoded MLA dimensions to support GLM5-W8A8 (#6902)
Derive MLA dimension constants (q_lora_rank, qk_nope_head_dim, etc.)
from tensor shapes at runtime instead of hardcoding DeepSeek V3 values.
This enables the mla_preprocess fused op to work with both DeepSeek V3
and GLM5 models without Python API changes.

- Add 9 dimension fields to MlaTilingData with DeepSeek V3 defaults
- Add OpParam fields and dynamize all host-side tiling functions
- Derive dimensions from wuk, gamma1, kv_cache_rope tensor shapes
(sketched below)
- Replace 310+ hardcoded constants across 4 kernel .hpp files
- Remove unused MMSIZE1/MMSIZE2 constants
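
A rough Python rendering of the idea (the real logic runs host-side in the C++ tiling code; which dimension is read from which tensor shape is an assumption here):

```python
def derive_mla_dims(wuk, gamma1, kv_cache_rope):
    """Read MLA dimensions from weight/cache shapes instead of baking in
    DeepSeek V3 constants, so GLM5-shaped tensors work unchanged."""
    return {
        "q_lora_rank": gamma1.shape[-1],      # assumed: q-LoRA norm weight
        "qk_rope_head_dim": kv_cache_rope.shape[-1],
        "num_heads": wuk.shape[0],            # assumed: wuk laid out per head
    }
```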

### What this PR does / why we need it?

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.16.0
- vLLM main:
15d76f74e2

---------

Signed-off-by: liuchenbing <chenliumail@163.com>
Co-authored-by: liuchenbing <chenliumail@163.com>
2026-03-09 20:17:21 +08:00
Qiu
13adcbe44b feat(attention_cp): support chunked prefill for Qwen3Next with PCP&DCP (#6900)
### What this PR does / why we need it?
Support chunked prefill for Qwen3Next with PCP&DCP

- vLLM version: v0.16.0
- vLLM main:
15d76f74e2

---------

Signed-off-by: QiuChunshuo <qiuchunshuo@huawei.com>
2026-03-09 17:55:09 +08:00
LI SHENGYONG
a76a509fae [MOE][Bugfix] Cancel H2D for expert_map (#7000)
### What this PR does / why we need it?
If expert_map is on the device, there may be occasional repeated answers
in long output scenarios.

Verified on dsv3.2-exp-w8a8; no garbled characters are displayed in the
output:
| dataset | version | metric | mode | vllm-api-stream-chat |
|----- | ----- | ----- | ----- | -----|
| aime2025 | ef2f4f | accuracy | gen | 60.00 |

- vLLM version: v0.16.0
- vLLM main:
15d76f74e2

Signed-off-by: shenchuxiaofugui <1311027364@qq.com>
2026-03-09 17:53:54 +08:00
王远
82fdd40d49 [Feat]Xlite Qwen3 MoE Support Data Parallel (#6715)
### What this PR does / why we need it?
This patch adds support for the Qwen3-MoE data parallel in Xlite. For
more details about Xlite, please refer to the following
link:[https://atomgit.com/openeuler/GVirt/blob/master/xlite/README.md](https://atomgit.com/openeuler/GVirt/blob/master/xlite/README.md).

online server config:
```shell
port=$1
log=$2
export VLLM_USE_V1=1
export TASK_QUEUE_ENABLE=1
export HCCL_BUFFSIZE=512
export HCCL_OP_EXPANSION_MODE="AIV"
export OMP_PROC_BIND=false
export VLLM_ASCEND_ENABLE_NZ=0
sysctl -w vm.swappiness=0
sysctl -w kernel.numa_balancing=0
sysctl kernel.sched_migration_cost_ns=50000
ip=127.0.0.1
python -m vllm.entrypoints.openai.api_server \
        --model /mnt/nvme1n1/wy/models/Qwen3-30B-A3B  \
        --tensor-parallel-size 2 \
        --enable-expert-parallel \
        --data-parallel-size 4 \
        --gpu-memory-utilization 0.9 \
        --max-num-batched-tokens 32768 \
        --data-parallel-size-local 4 \
        --max-num-seqs=200 \
        --block-size 128 \
        --max-model-len 6656 \
        --trust-remote-code \
        --disable-log-requests \
        --served-model-name qwen \
        --no-enable-prefix-caching \
	--additional-config '{"xlite_graph_config": {"enabled": true, "full_mode": true}, "enable_cpu_binding": true}' \
	--compilation-config '{"cudagraph_capture_sizes":[1, 16, 32, 48, 64, 100, 150, 200], "cudagraph_mode": "FULL_DECODE_ONLY"}' \
	--async-scheduling \
	--host ${ip} \
	--port ${port} > ${log} 2>&1 &
``` 
test_config:
```shell
vllm bench serve \
    --max-concurrency ${maxconcurrency} \
    --num-prompts ${num_prompts} \
    --host ${HOST} \
    --port ${PORT} \
    --model ${MODEL_NAME} \
    --dataset-name random \
    --backend openai-chat \
    --random-input-len 512 \
    --random-output-len 512  \
    --random-range-ratio 0.2 \
    --temperature 0.6 \
    --metric-percentiles "50,90,99" \
    --tokenizer ${TOKENIZER_PATH} \
    --endpoint /v1/chat/completions \
    --ignore-eos
``` 

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?


- vLLM version: v0.16.0
- vLLM main:
c86cdcbcd2

Signed-off-by: uuzWY <Ethan.wangyuan@huawei.com>
Co-authored-by: uuzWY <Ethan.wangyuan@huawei.com>
2026-03-09 17:53:35 +08:00
Shaoxu Cheng
ba1c82e758 [DOC] Add explanation of 310p special param: max-model-len (#7065)
### What this PR does / why we need it?

This PR updates the documentation for running vLLM on Atlas 300I series
(310p) hardware. It adds a warning to explicitly set `--max-model-len`
to prevent potential Out-of-Memory (OOM) errors that can occur with the
default configuration.

The example commands and Python scripts for online and offline inference
have been updated to:
- Include `--max-model-len 4096` (or `max_model_len=4096`).
- Remove the `compilation-config` parameter, which is no longer
necessary for 310p devices.

These changes ensure users have a clearer and more stable experience
when using vLLM on Atlas 300I hardware.
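
A minimal offline sketch matching the updated docs (the model path is a placeholder):

```python
from vllm import LLM

# Explicitly capping max_model_len avoids the default-config OOM on
# Atlas 300I (310p) described above.
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct", max_model_len=4096)
print(llm.generate("Hello")[0].outputs[0].text)
```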

### Does this PR introduce _any_ user-facing change?
No, this is a documentation-only update.

### How was this patch tested?
The changes are to documentation and do not require testing.


- vLLM version: v0.16.0
- vLLM main:
4034c3d32e

---------

Signed-off-by: Tflowers-0129 <2906339855@qq.com>
2026-03-09 16:54:43 +08:00