Commit Graph

2809 Commits

Author SHA1 Message Date
Nengjun Ma
99cea6c1b5 [CI] Fix the nightly pip binary install doc test failure. (#8129)
### What this PR does / why we need it?
Fix the failing nightly pip binary install doc test.

### Does this PR introduce _any_ user-facing change?
NA

### How was this patch tested?
Nightly doc test

Signed-off-by: leo-pony <nengjunma@outlook.com>
2026-04-10 17:34:18 +08:00
linfeng-yuan
bd9927d5a9 [releases/v0.18.0][Build][BugFix] support ascend950 npu-smi info interface changes and make SOC_VERSION actually take effect (#8061)
### What this PR does / why we need it?
Cherry-picked from #8062 

This PR adds support for the Ascend950 NPU by updating the `npu-smi
info` parsing logic to handle interface changes. It also improves
robustness by ensuring that `SOC_VERSION` actually takes effect:
`get_chip_type` is disabled when this environment variable is set.


### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
CI passed.

Signed-off-by: linfeng-yuan <1102311262@qq.com>
2026-04-10 16:44:38 +08:00
ZYang6263
34386c8896 [v0.18.0][CI] Fix and simplify the CI for Qwen3 32B (#8093)
### What this PR does / why we need it?
This PR fixes and simplifies the CI configuration for Qwen3 32B.

The main changes are:
- Remove the redundant `Qwen3-32B-Int8-A3-Feature-Stack3.yaml` config
and consolidate the CI setup into `Qwen3-32B-Int8.yaml`.
- Improve runtime stability by adding
`PYTORCH_NPU_ALLOC_CONF=expandable_segments:True` and setting
`--max-num-seqs 80`.
- Update the accuracy benchmark from `aime2024` to `gsm8k-lite`, and
adjust the related dataset config, output length, baseline, and
threshold accordingly.

These changes make the Qwen3 32B CI easier to maintain and more stable
in nightly validation.

---------

Signed-off-by: ZYang6263 <zy626375@gmail.com>
2026-04-10 14:22:24 +08:00
DreamerLeader
531d0e6fff [v0.18.0][BugFix][KV Pool] Fix the conflict between pooling scenarios and PCP across machines (#8101)


Signed-off-by: DreamLeader <2270923832@qq.com>
2026-04-09 21:55:56 +08:00
Zetong Li
054fde7b72 [0.18.0][BugFix] Fix attention state of short prompt for correct forwarding (#8088)
### What this PR does / why we need it?
This PR is cherry-picked from #8029.

This PR fixes the attention state of short prompts to ensure correct
forwarding. A batch of short prompts (prefill tokens less than or
equal to num_spec_tokens + 1) is treated as decode requests by
split_decodes_and_prefills, which contradicts its original
PrefillNoCache attention state. These short prompts are therefore
routed into a mismatched branch and incur errors.
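
A minimal sketch of the boundary involved, assuming `split_decodes_and_prefills` partitions requests by token count as described (the helper name comes from the PR text; the real function takes more arguments):

```python
# Hedged sketch: prompts whose prefill length is <= num_spec_tokens + 1 land
# in the "decode" partition, so keeping a PrefillNoCache attention state for
# them routes execution into a mismatched branch.
def is_treated_as_decode(num_prefill_tokens: int, num_spec_tokens: int) -> bool:
    return num_prefill_tokens <= num_spec_tokens + 1
```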

### Does this PR introduce _any_ user-facing change?
N/A

### How was this patch tested?
By CI.

Signed-off-by: Zetong Li <slippersss@126.com>
2026-04-09 21:21:24 +08:00
weijinqian0
f668ff9ef0 [v0.18.0][BugFix] Revert the code that replaced npu_ring_mla with FIA for MLA prefill. (#7961)
This pull request reverts the previous change that switched to FIA and
instead uses npu_ring_mla for MLA prefill operations (#5704). The change
streamlines the attention mechanism by removing unnecessary metadata
tracking and updating the underlying NPU operations to use the
ring-based MLA kernel. This adjustment ensures better compatibility and
performance for MLA prefill tasks within the vLLM Ascend backend.

Highlights

- Migration to npu_ring_mla: Replaced the usage of
npu_fused_infer_attention_score (FIA) with npu_ring_mla for MLA prefill
operations across the codebase to improve performance and alignment with
the intended architecture.
- Cleanup of redundant metadata: Removed
chunk_actual_seq_lengths_kv_list and actual_seq_lengths_q from various
metadata structures as they are no longer required for the updated
attention implementation.
- Test suite updates: Updated unit tests in test_mla_cp.py and
test_mla_v1.py to mock npu_ring_mla instead of the deprecated FIA
functions and adjusted test assertions to reflect the new implementation
details.

Signed-off-by: weijinqian_v1 <weijinqian@huawei.com>
Co-authored-by: weijinqian_v1 <weijinqian@huawei.com>
2026-04-09 17:00:25 +08:00
linfeng-yuan
7c9aa498d6 [releases/v0.18.0][BugFix] Restore global_bs=0 and mc2_mask for uniform-token dispatching and support inter-node roce hierarchical MC2 communication (#8040)
### What this PR does / why we need it?
Cherry-picked from #8039
Restore the MC2 `global_bs` setting and `mc2_mask` handling when the
`all_reduce` across the DP group cannot be skipped. Ascend MC2 ops require
`global_bs=0` plus `mc2_mask` when inter-node RoCE hierarchical
communication is enabled. PR #4983 always passed a non-zero `global_bs`
without `mc2_mask`, which is incompatible with the hierarchical
communication introduced in PR #7583.
**Changes:**
- Add `should_skip_allreduce_across_dp_group()` to `utils.py` with
hierarchy constraint
- Set `global_bs=0` when allreduce is not skipped; pass `mc2_mask`
accordingly
- Add `mc2_mask` field to `MoEMC2CombineMetadata` for dispatch→combine
propagation
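
A minimal sketch of the argument selection this implies, under the constraint above (the helper and parameter names here are illustrative, not the PR's exact code):

```python
# Hedged sketch: when the all-reduce across the DP group cannot be skipped,
# Ascend MC2 ops need global_bs=0 together with an explicit mc2_mask; the mask
# is then carried from dispatch to combine via MoEMC2CombineMetadata.
def build_mc2_dispatch_args(skip_allreduce: bool, global_bs: int, mc2_mask):
    if skip_allreduce:
        return {"global_bs": global_bs, "mc2_mask": None}
    return {"global_bs": 0, "mc2_mask": mc2_mask}
```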
### Does this PR introduce _any_ user-facing change?
No, but this PR fixes cross-super-node communication on A3 with
`enable_mc2_hierarchy_comm=True` in `additional_config` and `export
HCCL_INTRA_ROCE_ENABLE=1`.

### How was this patch tested?
E2E serving succeeded and CI passed.

- vLLM version: v0.18.0
- vLLM main:
14acf429ac

---------

Signed-off-by: linfeng-yuan <1102311262@qq.com>
2026-04-09 16:51:17 +08:00
Shaoxu Cheng
82e17f693a [BugFix][0.18.0][310p] fix post-sampling not working in graph mode on 310p (#8077)
### What this PR does / why we need it?

Enabling temperature in post-processing on 310P devices can cause the
service to stall and eventually hang. We first traced the issue to a
timeout where the temperature-related `div` operator was waiting for
results from a sub-stream. After investigating the preceding operators,
we finally identified the root cause as the `q.exponential_()` operator,
which is not well supported on 310P and triggers an internal issue in
the `add` kernel.
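
As an illustration, one way to avoid the operator is inverse-transform sampling, a sketch under the assumption that an Exponential(1) sample is what the sampler needs (this is not necessarily the exact fix in this patch):

```python
import torch

# Hedged sketch: derive Exponential(1) samples from uniform noise instead of
# calling q.exponential_(), using only operators that behave well on 310P.
def exponential_like(q: torch.Tensor) -> torch.Tensor:
    u = torch.rand_like(q).clamp_min(1e-10)  # avoid log(0)
    return -torch.log(u)                     # Exponential(1) samples
```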

### Does this PR introduce _any_ user-facing change?
NA

### How was this patch tested?
This patch was thoroughly tested locally (accuracy-dataset test and
stress test). It is not easy to design a proper unit test for this case,
and I appreciate your understanding.

Signed-off-by: Tflowers-0129 <2906339855@qq.com>
2026-04-09 16:31:38 +08:00
herizhen
0d1424d81a [Doc][Misc] Comprehensive documentation cleanup and grammatical fixes (#8073)
### What this PR does / why we need it?
This pull request performs a comprehensive cleanup of the vLLM Ascend
documentation. It fixes numerous typos, grammatical errors, and phrasing
issues across community guidelines, developer documents, hardware
tutorials, and feature guides. Key improvements include correcting
hardware names (e.g., Atlas 300I), fixing broken links, cleaning up code
examples (removing duplicate flags and trailing commas), and improving
the clarity of technical explanations. These changes are necessary to
ensure the documentation is professional, accurate, and easy for users
to follow.

### Does this PR introduce any user-facing change?
No, this PR contains documentation-only updates.

### How was this patch tested?
The changes were manually reviewed for accuracy and grammatical
correctness. No functional code changes were introduced.

---------

Signed-off-by: herizhen <1270637059@qq.com>
Signed-off-by: herizhen <59841270+herizhen@users.noreply.github.com>
2026-04-09 15:37:57 +08:00
zouyida2052
c40a387f63 [bugfix]fix extra npu context in device 0 (#8041)
### What this PR does / why we need it?
When we launch a PD-disaggregated process and send requests, an
additional process appears on NPU 0, because when a thread holds a
primary device context, the child threads it creates do not
automatically inherit that context. See
https://forums.developer.nvidia.com/t/when-a-thread-has-a-primary-cuda-context-does-the-child-thread-it-creates-automatically-inherit-the-cuda-context/362810.
vLLM fixed this issue in [pr-37449
](https://github.com/vllm-project/vllm/pull/37449), but version 0.18.0
does not include the fix, so we need to patch it.
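
A minimal sketch of the workaround this implies, assuming `torch_npu` is installed and the child thread must bind its device before any NPU call (the device id and worker body are illustrative):

```python
import threading

import torch
import torch_npu  # noqa: F401  # registers the torch.npu device backend

# Hedged sketch: bind the intended NPU inside the child thread so it does not
# implicitly initialize a context on device 0.
def worker(device_id: int) -> None:
    torch.npu.set_device(device_id)
    # ... NPU work happens here ...

t = threading.Thread(target=worker, args=(1,))
t.start()
t.join()
```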

### Does this PR introduce _any_ user-facing change?
no

### How was this patch tested?

---------

Signed-off-by: zouyida <zouyida@huawei.com>
Co-authored-by: zouyida <zouyida@huawei.com>
2026-04-08 23:35:52 +08:00
hucong
4a628f1042 [UT][v0.18.0] Fix APC nightly UT and TTFT ratio (cherry-pick #7468) (#8053)
### What this PR does / why we need it?
Cherry-pick from https://github.com/vllm-project/vllm-ascend/pull/7468

- Fix TTFT ratio threshold from 0.8 to 0.4 for prefix cache benchmarks
- Fix max_out_len values for warm_up and benchmark configs
- Applied to both DeepSeek-R1-0528-W8A8 and Qwen3-32B-Int8 configs

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

Signed-off-by: underfituu <hzhucong@163.com>
2026-04-08 21:08:26 +08:00
Mengqing Cao
044d4c3974 [v0.18.0]feat(quant): add C8 INT8 KV cache support for GQA attention models (#7474) (#8007)
Backport of #7474.

This PR adds C8 (INT8) KV cache quantization support for standard GQA
attention models (e.g., Qwen3-32B W8A8C8). C8 uses static per-channel
quantization scales to store KV cache in INT8, reducing KV cache memory
by ~50% compared to BF16, enabling higher batch concurrency and longer
context lengths on the same hardware.

**Key changes:**

1. **`attention_v1.py`** — New `AscendC8AttentionBackendImpl` subclass
of `AscendAttentionBackendImpl`:
- `_prepare_c8_scales`: Shards per-channel scales/offsets to the current
TP rank and pre-computes BF16 BNSD-shaped antiquant tensors (one-time
per layer).
- `_quantize_kv_to_int8`: Quantizes BF16 K/V to INT8 before
`reshape_and_cache`, using pre-cached inverse scales.
- `_forward_c8_decode`: FIA V1 BNSD paged attention with native INT8 KV
and `perchannel` antiquant mode.
- `_forward_c8_chunked_prefill`: Splits decode (FIA V1 BNSD paged INT8)
and prefill (FIA V1 TND float) into two kernel calls.
- `_forward_c8_fused_infer_attention`: Handles `PrefillNoCache` and
`PrefillCacheHit` states.

2. **`quantization/methods/kv_c8.py`** — New
`AscendC8KVCacheAttentionMethod` scheme:
- Creates `k/v_cache_scale/offset` parameters via
`_c8_kv_scale_weight_loader`, which handles per-channel scale shapes and
lazy resizing.
- Sets `layer.kv_cache_torch_dtype = torch.int8` so
`get_kv_cache_spec()` returns INT8 dtype automatically.
- Upgrades `layer.impl` to `AscendC8AttentionBackendImpl` via class
surgery.

3. **`quantization/modelslim_config.py`** — C8 branch in
`get_quant_method()` activates when `kv_cache_type == "C8"` in
`quant_model_description.json`.

4. **`patch/worker/patch_qwen3_c8.py`** — Intercepts per-channel C8
scale/offset weights before `AutoWeightsLoader` discards them, routing
them to the parameters created by `AscendC8KVCacheAttentionMethod`.

5. **`tests/ut/quantization/test_kv_c8.py`** — Unit tests covering
`_c8_kv_scale_weight_loader`, `AscendC8KVCacheAttentionMethod`, and
`AscendC8AttentionBackendImpl` scale helpers.
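
A minimal sketch of the `_quantize_kv_to_int8` step from change 1, assuming static per-channel scales as described (the shapes, offset handling, and signature are illustrative, not the PR's exact code):

```python
import torch

# Hedged sketch: quantize BF16 K/V to INT8 with static per-channel scales
# before reshape_and_cache; the inverse scales are pre-computed once per layer
# and sharded to the current TP rank.
def quantize_kv_to_int8(kv: torch.Tensor, inv_scale: torch.Tensor,
                        offset: torch.Tensor) -> torch.Tensor:
    # kv: [num_tokens, num_kv_heads, head_dim] (BF16)
    # inv_scale, offset: [num_kv_heads, head_dim]
    q = torch.round(kv.float() * inv_scale + offset)
    return q.clamp_(-128, 127).to(torch.int8)
```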

### Does this PR introduce _any_ user-facing change?

Yes. Users can now serve Qwen3-32B W8A8C8 quantized models with INT8 KV
cache on Ascend NPU. The model checkpoint must contain a
`quant_model_description.json` with `"kv_cache_type": "C8"` and
per-channel scale/offset tensors in safetensors.

No changes to the serving CLI — the feature activates automatically when
the quantization config is detected.

### How was this patch tested?

Benchmarked with `vllm serve` (TP=8, `max_num_seqs=256`,
`max_model_len=131072`, `enable_chunked_prefill=true`) + `random_bench`
(input_len=10240, output_len=2048, 960 prompts, max_concurrency=192):

```
============ Serving Benchmark Result ============
Successful requests:                     960
Failed requests:                         0
Maximum request concurrency:             192
Benchmark duration (s):                  1359.81
Total input tokens:                      9830400
Total generated tokens:                  1966080
Request throughput (req/s):              0.71
Output token throughput (tok/s):         1445.85
Peak output token throughput (tok/s):    2304.00
Total token throughput (tok/s):          8675.12
---------------Time to First Token----------------
Mean TTFT (ms):                          24598.51
Median TTFT (ms):                        23167.02
P50 TTFT (ms):                           23167.02
P90 TTFT (ms):                           47717.08
P99 TTFT (ms):                           84402.61
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          120.76
Median TPOT (ms):                        121.50
P50 TPOT (ms):                           121.50
P90 TPOT (ms):                           127.05
P99 TPOT (ms):                           130.13
---------------Inter-token Latency----------------
Mean ITL (ms):                           120.70
Median ITL (ms):                         90.34
P50 ITL (ms):                            90.34
P90 ITL (ms):                            93.79
P99 ITL (ms):                            101.80
==================================================
```

All attention states verified: `PrefillNoCache`, `PrefillCacheHit`,
`ChunkedPrefill`, `DecodeOnly`.

- vLLM version: v0.17.0
- vLLM main:
8b6325758c

Signed-off-by: lico67373 <918688502@qq.com>
Co-authored-by: LICO67373 <110013619+LICO1314@users.noreply.github.com>
2026-04-08 10:51:58 +08:00
Nagisa125
fbd5d0fd55 [Doc][Misc][v0.18.0] Updated the document configuration for DeepSeek-V3.2 (#7970)
### What this PR does / why we need it?

To avoid misleading users, the unmaintained DSV32 models, such as the
floating-point model, are removed from the document. Concretely, this PR
removes the BF16 version entries for DeepSeek-V3.2 from the
documentation.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Documentation update only.

Signed-off-by: wyh145 <1987244901@qq.com>
2026-04-07 16:17:28 +08:00
cvSoldier
6c19270498 [BugFix] fix qwen3-next compilation error (#7977)
### What this PR does / why we need it?
fix qwen3-next compilation error

- vLLM version: v0.18.0
- vLLM release 0.18.0:
445dc7196f
---------
Signed-off-by: cvSoldier <610496306@qq.com>
2026-04-03 20:03:39 +08:00
guxin108
81c6f51a45 [CI] Add nightly cases: MiniMax-M2.5-W8A8, Qwen3.5-27B-w8a8, Qwen3.5-397B-A1… (#7968)
### What this PR does / why we need it?
This PR adds three acc/perf nightly cases on A3 (Qwen3.5-27B,
MiniMax-M2.5-w8a8, and Qwen3.5-397B-w8a8-mtp); we need to test them daily.

- vLLM version: v0.18.0
- vLLM main:
35141a7eed

Signed-off-by: guxin108 <1252896542@qq.com>
2026-04-03 17:50:59 +08:00
jiangmengyu18
3f462d251e [v0.18.0][CI] fix acc baseline of qwen3vl 235b (#7981)
### What this PR does / why we need it?
Fix the accuracy baseline of Qwen3-VL 235B.

---------
Signed-off-by: jiangmengyu18 <56633611+jiangmengyu18@users.noreply.github.com>
2026-04-03 17:38:17 +08:00
LeeWenquan
0d773efd70 [CI]Fix qwen3Next Nightly CI config (#7903)
### What this PR does / why we need it?
Fix the Qwen3-Next nightly CI config in 0.18.0.
Backport of #7679.

Signed-off-by: Your Name <you@example.com>
Co-authored-by: Your Name <you@example.com>
2026-04-03 16:46:25 +08:00
jiangmengyu18
445dc7196f [v0.18.0][CI] add qwen3vl weights download (#7915)
### What this PR does / why we need it?
Add qwen3vl weights download
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?

Signed-off-by: betta18 <jiangmengyu1@huawei.com>
Co-authored-by: betta18 <jiangmengyu1@huawei.com>
2026-04-03 12:15:01 +08:00
jiangmengyu18
902d1312d9 [v0.18.0][CI] add nightly ci test for qwen3vl (#7913)
### What this PR does / why we need it?
Add nightly ci test for qwen3vl
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?

Signed-off-by: betta18 <jiangmengyu1@huawei.com>
Co-authored-by: betta18 <jiangmengyu1@huawei.com>
2026-04-03 11:39:28 +08:00
jiangmengyu18
3cbd6acc89 [v0.18.0][Feature] Support Flash Comm V1 for Qwen3-VL models (#7893)
### What this PR does / why we need it?
Enable Flash Comm V1 (sequence parallelism) for Qwen3-VL models (both
dense and MoE variants).

Root cause: Qwen3-VL's deepstack embeddings remain full-size [N, H]
while hidden states become [N/tp_size, H] after reduce-scatter, causing
a shape mismatch on the add.
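
A minimal sketch of the shape constraint, assuming each TP rank keeps the contiguous token slice produced by reduce-scatter (function and variable names are illustrative):

```python
import torch

# Hedged sketch: slice the full-size deepstack embedding down to the local
# sequence-parallel shard so it matches the reduce-scattered hidden states.
def add_deepstack_shard(hidden: torch.Tensor, deepstack: torch.Tensor,
                        tp_rank: int, tp_size: int) -> torch.Tensor:
    shard = deepstack.shape[0] // tp_size
    local = deepstack[tp_rank * shard:(tp_rank + 1) * shard]  # [N/tp_size, H]
    return hidden + local
```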
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
- [x] Run Qwen3-VL dense model with FC1 enabled (TP > 1), verify correct
output
- [x] Run Qwen3-VL MoE model with FC1 enabled (TP > 1), verify correct
output

---------

Signed-off-by: betta18 <jiangmengyu1@huawei.com>
Signed-off-by: jiangmengyu18 <56633611+jiangmengyu18@users.noreply.github.com>
Co-authored-by: betta18 <jiangmengyu1@huawei.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-04-03 11:38:41 +08:00
yydyzr
8ce4cfdae7 [Doc][Misc][v0.18.0] Add GLM5 to supported model list and update deployment document for GLM5 (#7963)
### What this PR does / why we need it?
1. Add version notes for GLM5.
2. Add parameter modifications for GLM5.
3. Add GLM5 to the supported model list.
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.18.0
- vLLM main:
35141a7eed

---------

Signed-off-by: yydyzr <liuyuncong1@huawei.com>
Signed-off-by: Zhu Jiyang <zhujiyang2@huawei.com>
Co-authored-by: Zhu Jiyang <zhujiyang2@huawei.com>
2026-04-03 10:15:39 +08:00
shaopeng-666
3218eb9fe1 [DOC]update Qwen3.5 user guide (#7934)
This PR is cherry-picked from #7866. It updates the model user guide.

---------
Signed-off-by: 李少鹏 <lishaopeng21@huawei.com>
2026-04-02 22:09:00 +08:00
jiangmengyu18
85234d096d [v0.18.0][Feature] support qkv_rmsnorm_mrope for qwen3vl (#7852)
### What this PR does / why we need it?
Enable the split_qkv_rmsnorm_mrope fusion operator for Qwen3-VL full
attention.
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?
- [x] Run Qwen3-VL dense model with the fusion operator, verify correct
output
- [x] Run Qwen3-VL MoE model with the fusion operator, verify correct
output

---------

Signed-off-by: jiangmengyu18 <451528648@qq.com>
Signed-off-by: jiangmengyu18 <56633611+jiangmengyu18@users.noreply.github.com>
Signed-off-by: betta18 <jiangmengyu1@huawei.com>
Co-authored-by: betta18 <jiangmengyu1@huawei.com>
2026-04-02 17:46:50 +08:00
Zhujiyang2
4969a0d783 [Doc][Misc][v0.18.0] Add Parameter Description, best practices and FAQs in GLM5.md (#7909)
### What this PR does / why we need it?

This PR updates the GLM-5 documentation to include:
- Information about the first supported version
(`vllm-ascend:v0.17.0rc1`).
- Updated `--additional-config` parameters to use the new nested
`ascend_compilation_config` structure.
- Added `VLLM_ASCEND_BALANCE_SCHEDULING` environment variable to
deployment scripts.
- Improved formatting of deployment steps.
- A new "Notice" section explaining optimization environment variables
(`VLLM_ASCEND_ENABLE_FLASHCOMM1`, `VLLM_ASCEND_ENABLE_FUSED_MC2`,
`VLLM_ASCEND_ENABLE_MLAPO`).
- A "Best Practices" section for prefill-decode disaggregation.
- An "FAQ" section addressing common tokenizer issues and function
calling configuration.

### Does this PR introduce _any_ user-facing change?

No, this is a documentation-only update.

### How was this patch tested?

Documentation changes were verified for correctness and formatting.

---------

Signed-off-by: Zhu Jiyang <zhujiyang2@huawei.com>
2026-04-02 16:28:32 +08:00
LoganJane
829957b53f [Doc] Update docs of Kimi-K2.5 for 0.18.0rc1 (#7931)
### What this PR does / why we need it?
Update docs of Kimi-K2.5 for 0.18.0rc1
backport of #7901
---------
Signed-off-by: LoganJane <loganJane73@hotmail.com>
2026-04-02 14:15:12 +08:00
jiangmengyu18
74699877c9 [v0.18.0][BugFix] fix the weightsmapper bug of qwen3-vl (#7868)
### What this PR does / why we need it?
This PR fixes a weight loading error in the Qwen3-VL model.
The bug was introduced by vLLM: in vLLM's `qwen3-vl.py`, the prefix of
the `lm_head` layer is hardcoded as `"lm_head"`, but
`hf_to_vllm_mapper` remaps the weight name of `lm_head` from `lm_head`
to `language_model.lm_head`.
This causes a mismatch between the keys in the weight file and the
prefix of the `lm_head` layer, resulting in an error.
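
A minimal illustration of the mismatch, assuming a plain prefix-remapping dict (the real `hf_to_vllm_mapper` is a vLLM `WeightsMapper` object; this is only a sketch):

```python
# Hedged sketch: the mapper renames checkpoint keys, but the module was
# registered under the hardcoded prefix "lm_head", so the loader cannot match
# the remapped key back to the module.
mapper = {"lm_head.": "language_model.lm_head."}

ckpt_key = "lm_head.weight"
mapped_key = mapper["lm_head."] + "weight"   # "language_model.lm_head.weight"
module_prefix = "lm_head."                   # hardcoded in qwen3-vl.py

print(mapped_key.startswith(module_prefix))  # False -> weight loading error
```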
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
- [x] Run Qwen3-VL dense model with the fusion operator, verify correct
output

Signed-off-by: betta18 <jiangmengyu1@huawei.com>
Co-authored-by: betta18 <jiangmengyu1@huawei.com>
2026-04-02 12:56:08 +08:00
pz1116
1225c613fb [BugFix][0.18.0][KV Pool] Fix KV Pool not putting kv cache for vllm v0.18.0 (#7874)
### What this PR does / why we need it?
vLLM v0.18 defers KV connector finalization during the target-model
forward pass when speculative decoding is enabled, which leaves the KV
Pool without its Put operation. This change was missed when we bumped
the vLLM version for vllm-ascend. Fix it by adding finalize_kv_connector
for spec decode.
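
A minimal sketch of where the call lands, assuming a model-runner step shaped roughly like vLLM's (all names are illustrative except `finalize_kv_connector`, which the PR names):

```python
# Hedged sketch: with speculative decoding, connector finalization is deferred
# past the target-model forward, so it must be triggered explicitly afterwards
# to flush the KV Pool's Put operation.
def spec_decode_step(runner):
    output = runner.run_target_model()
    runner.propose_draft_tokens()
    runner.finalize_kv_connector()  # flush deferred KV-cache saves to the pool
    return output
```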

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

Signed-off-by: Pz1116 <zpbzpb123123@gmail.com>
Co-authored-by: DreamerLeader <2270923832@qq.com>
Co-authored-by: fems14 <1804143737@qq.com>
2026-04-02 10:57:09 +08:00
LI SHENGYONG
4b2f0130bc [V0.18.0][EPLB][BugFix] Fix moe_load precision in allgather (#7890)
### What this PR does / why we need it?
Fixed a bug caused by incorrect reshape usage. For example, with
ori_tensor = [[1, 2, 3], [4, 5, 6]], reshape yields
[[1, 2], [3, 4], [5, 6]], whereas permute yields the intended
[[1, 4], [2, 5], [3, 6]] (reproduced in the sketch below). Now we
directly use squeeze for a more intuitive result.
PR for main: #7887
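
A short reproduction of the difference (a sketch; the tensor values follow the example above):

```python
import torch

ori = torch.tensor([[1, 2, 3], [4, 5, 6]])  # shape [2, 3]

wrong = ori.reshape(3, 2)   # [[1, 2], [3, 4], [5, 6]] -- interleaves values
right = ori.permute(1, 0)   # [[1, 4], [2, 5], [3, 6]] -- the true transpose
```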

### Does this PR introduce _any_ user-facing change?
The measured peak-to-average expert-load ratio has decreased as expected.

Signed-off-by: shenchuxiaofugui <1311027364@qq.com>
2026-04-02 09:20:31 +08:00
Li Wang
99e1ea0fe6 [v0.18.0][Misc] Upgrade torch_npu to pre-release built version (#7918)
### What this PR does / why we need it?
This PR upgrades the `torch_npu` (PTA) version in multiple Dockerfiles
to a pre-release build. It introduces logic to dynamically select the
correct wheel based on the Python version and system architecture.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
CI passed with existing tests. The author should verify that the Docker
images build successfully for all supported architectures and Python
versions.

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
2026-04-01 22:41:09 +08:00
hucong
d3de7333dc [BugFix][v0.18.0][cherry-pick] Fix embedding prefix caching for APC (#7894)
## What this PR does / why we need it?
Picked from: https://github.com/vllm-project/vllm-ascend/pull/7452
### Problem
Embedding models produce inconsistent outputs when prefix caching is
enabled vs disabled.

### Root Cause
The attention router condition was too broad:
- All `model_runner_type == "pooling"` → `_forward_encoder_attention()`
→ uses `npu_fusion_attention`
- **But `npu_fusion_attention` does NOT support prefix caching**
- Result: Numerical mismatch when KV cache is managed by prefix caching

### Solution
Refine the router condition to check causality:

**Before**: 
```
if attn_metadata.model_runner_type == "pooling":
    # -> npu_fusion_attention (no prefix caching support)
```

**After**: 
```
if attn_metadata.model_runner_type == "pooling" and not attn_metadata.causal:
    # -> npu_fusion_attention (for true encoders)
else:
    # -> npu_fused_infer_attention_score (prefix caching support)
```
### Changes Made

1. **Fixed router condition** (`vllm_ascend/attention/attention_v1.py`
L968)
   - Added `and not attn_metadata.causal` check
   - Effect: Non-causal embeddings now use correct operator

2. **Simplified encoder attention**
(`vllm_ascend/attention/attention_v1.py` L864-877)
   - Removed redundant causal branch (encoders never use causal mask)
   - Reduced from 34 lines to 14 lines

3. **Added test** (`tests/e2e/singlecard/pooling/test_embedding.py`)
- Validates embedding outputs with/without prefix caching are consistent
  
## Does this PR introduce _any_ user-facing change?

### Functional Changes
**Yes** - Bug fix: embedding models now produce consistent outputs
with prefix caching

### API Changes
**No** - All public APIs unchanged

### Configuration Changes
**No** - No new configuration required

### Backward Compatibility
**Fully compatible** - Only fixes incorrect behavior

## How was this patch tested?
### New Test
Added `test_embed_models_using_prefix_caching_correctness()`:
- Tests: `Qwen3-Embedding-0.6B`
- Validates numerical consistency between runs with/without prefix
caching
- Uses long sequences to activate prefix caching
- Tolerance: 1e-2
- vLLM version: v0.18.0

Signed-off-by: underfituu <hzhucong@163.com>
2026-04-01 16:57:33 +08:00
Frank Chen
762850fb4e [v0.18.0][Misc] Install numactl in Docker images (#7898)
### What this PR does / why we need it?
This PR backports the `numactl` Docker image update from #7870 to
`releases/v0.18.0`. It installs the `numactl` runtime package in both
Ubuntu-based and openEuler-based Dockerfiles while keeping the existing
development packages (`libnuma-dev` and `numactl-devel`) unchanged.

Backport of #7870.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
CI passed in #7870 on `main`. This backport reuses the same
Dockerfile-only change, and no additional local test was run in this
environment.

Signed-off-by: chenchuw886 <chenchuw@huawei.com>
Co-authored-by: chenchuw886 <chenchuw@huawei.com>
2026-04-01 16:22:37 +08:00
Nagisa125
2cb9195ff0 [Releases/v0.18.0][CI] Updated the parameters for the single-node test to fix the OOM issue for DeepSeek-V3.2 (#7862)
### What this PR does / why we need it?
Fix the OOM (out-of-memory) error in the single-node-deepseek-v3-2-w8a8
nightly test of vllm-ascend:

- Reduced the value of HCCL_BUFFSIZE
- Lowered the gpu-memory-utilization

Optimize service-side performance: updated the service-oriented
configuration parameters (e.g., max-num-seqs, cudagraph_capture_sizes,
batch_size) so that inference performance is closer to the optimum of
the current mainline.

Align the performance baseline with the main branch: updated the
performance baseline according to the latest performance data.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
The test has passed.

https://github.com/vllm-project/vllm-ascend/actions/runs/23734079080/job/69134387320?pr=7793

---------

Signed-off-by: wyh145 <1987244901@qq.com>
2026-04-01 10:28:46 +08:00
weiguihua2
59a7526339 [CI][Misc] modify ds3.2+dcp ci (#7841)
### What this PR does / why we need it?

Due to the current DCP solution of all-gathering the KV cache,
performance deteriorates significantly and the CI may get stuck. This
PR temporarily removes the performance and accuracy benchmarks for
DeepSeek-V3.2-W8A8-cp to prevent CI hangs until optimization is
complete.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Verified that the configuration file remains valid and that the CI no
longer attempts to run the problematic benchmarks.

Picked from: https://github.com/vllm-project/vllm-ascend/pull/7842

---------

Signed-off-by: weiguihua2 <weiguihua2@huawei.com>
2026-04-01 08:58:21 +08:00
zxr2333
ef9964389f [v0.18.0][BugFix][P/D]Fix layerwise connector out of memory during large buffer transfer (#7752)
### What this PR does / why we need it?
Fix layerwise connector out of memory during large buffer transfer.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
By nightly.

---------

Signed-off-by: nwpu-zxr <zhouxuerong2@huawei.com>
Signed-off-by: Mengqing Cao <cmq0113@163.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>
2026-03-31 22:16:53 +08:00
yydyzr
b1cc6ef6ae [v0.18.0][BugFix] Fix bug of precision when DSA-CP is enabled on GLM5 (#7843)
### What this PR does / why we need it?
This PR fixes an accuracy bug in some cases with additional
communication methods. It is a specific fix for version 0.18.0.

### Does this PR introduce _any_ user-facing change?
N/A

### How was this patch tested?
CI passed with new added/existing test.
- vLLM version: v0.18.0
- vLLM main:
35141a7eed

---------

Signed-off-by: rjg-lyh <1318825571@qq.com>
Signed-off-by: yydyzr <liuyuncong1@huawei.com>
Co-authored-by: rjg-lyh <1318825571@qq.com>
2026-03-31 21:51:10 +08:00
pz1116
0b48ddbc8b [Bugfix][0.18.0][KV Pool]Fix KV transfer put logic (#7718)
### What this PR does / why we need it?
Previously, when doing a put for the KV Pool, we found the first
non-existing key and put all blocks starting from that index. However,
if the prefix-cache blocks come from another request and some of them
have been evicted due to LRU, we end up putting blocks that still exist
in the pool, causing MooncakeStore to print unnecessary logs in the
master service.

What this PR does:

- Look up all the keys and put only the ones that are missing (see the
sketch below).
- Fix lookup_scheduler in pool_worker so it handles GQA correctly.
- Fix a few existing typos.
- Add UTs, written by codex.
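
A minimal sketch of the new put logic, assuming a batched existence check on the store client (method names are illustrative, not MooncakeStore's exact API):

```python
# Hedged sketch: look up every key first and put only the missing blocks,
# instead of putting everything after the first non-existing key.
def put_missing_blocks(store, keys, blocks):
    present = store.batch_exists(keys)  # one existence lookup for all keys
    for key, block, exists in zip(keys, blocks, present):
        if not exists:
            store.put(key, block)       # skip blocks already in the pool
```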

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

---------

Signed-off-by: Pz1116 <zpbzpb123123@gmail.com>
Co-authored-by: DreamerLeader <2270923832@qq.com>
Co-authored-by: fems14 <1804143737@qq.com>
2026-03-31 20:21:23 +08:00
pz1116
14411e911e [Doc][0.18.0][KV Pool]add mooncake rdma timeout (#7784)
### What this PR does / why we need it?
- Add `default_kv_lease_ttl` to the `mooncake.json` example in the KV
Pool guide.
- Document `default_kv_lease_ttl` semantics and clarify that it should
be larger than `ASCEND_CONNECT_TIMEOUT` and `ASCEND_TRANSFER_TIMEOUT`.
- Add `HCCL_RDMA_TIMEOUT` explanation for Mooncake RDMA retransmission
timeout, including the recommended constraint note.
- Add `HCCL_RDMA_TIMEOUT=16` to relevant KV Pool environment setup
examples for consistency.
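
A minimal sketch of how these settings relate (only `HCCL_RDMA_TIMEOUT=16` comes from the guide; the other values and units here are placeholders, not recommendations):

```python
import json
import os

# Hedged sketch: default_kv_lease_ttl should exceed both Ascend timeouts so a
# lease cannot expire while a connect/transfer is still allowed to retry.
mooncake_json = {
    "default_kv_lease_ttl": 5000,  # placeholder; keep larger than both timeouts
}
os.environ["ASCEND_CONNECT_TIMEOUT"] = "3000"   # placeholder
os.environ["ASCEND_TRANSFER_TIMEOUT"] = "3000"  # placeholder
os.environ["HCCL_RDMA_TIMEOUT"] = "16"          # Mooncake RDMA retransmission timeout

print(json.dumps(mooncake_json, indent=2))
```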


### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

---------

Signed-off-by: Pz1116 <zpbzpb123123@gmail.com>
2026-03-31 20:17:03 +08:00
wangyibo1005
a63dd5868d [0.18.0][cherry-pick][BugFix]Fix compilation errors for operators dispatch_gmm_combine_decode/moe_combine_normal/moe_dispatch_normal (#7844)
### What this PR does / why we need it?
Picked from https://github.com/vllm-project/vllm-ascend/pull/7114

Fix compilation errors encountered when building versions later than
b020 for the following operators:
dispatch_gmm_combine_decode, moe_combine_normal, moe_dispatch_normal
### Root Cause
After the b020 version update, the original moe_distribute_base.h file
was updated and its definitions changed, which caused compilation
failures for the above three operators that depend on this file.
### Solution
We have added a dedicated copy of moe_distribute_base.h into the
implementation of these three operators, ensuring stable compilation
independent of framework version updates.

### Does this PR introduce any user-facing change?
No. There are no user-facing changes; this fix only resolves compilation
issues without affecting functionality or user behavior.

### How was this patch tested?
vLLM version: releases/v0.18.0

Signed-off-by: Wangyibo1005 <2633333316@qq.com>
2026-03-31 19:58:46 +08:00
linfeng-yuan
ed4ef1f4e7 [releases/v0.18.0][Triton][Sampler] Add penalty-related Triton kernel for better performance of penalties (#7794)
### What this PR does / why we need it?
Implement get_token_bin_counts_and_mask and apply_penalties with
Triton-Ascend kernels. This significantly reduces latency of the
sampling process when repetition/frequency/presence penalties are
enabled.
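
For reference, a torch-level sketch of what the kernels accelerate, following the standard penalty formulation (simplified: it counts only generated tokens and ignores padding handling):

```python
import torch

# Hedged sketch of get_token_bin_counts_and_mask + apply_penalties semantics.
def apply_penalties(logits, output_tokens, presence_p, frequency_p, repetition_p):
    # logits: [B, V]; output_tokens: [B, T] previously generated token ids
    bin_counts = torch.zeros_like(logits, dtype=torch.long)
    bin_counts.scatter_add_(1, output_tokens, torch.ones_like(output_tokens))
    mask = bin_counts > 0                # tokens already seen in the output

    rep = repetition_p.unsqueeze(1)      # [B, 1]
    logits = torch.where(mask & (logits > 0), logits / rep, logits)
    logits = torch.where(mask & (logits <= 0), logits * rep, logits)

    logits = logits - frequency_p.unsqueeze(1) * bin_counts
    logits = logits - presence_p.unsqueeze(1) * mask.to(logits.dtype)
    return logits
```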

Cherry-pick from main PR #7569 
### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
CI passed.

Signed-off-by: linfeng-yuan <1102311262@qq.com>
Co-authored-by: realliujiaxu <realliujiaxu@163.com>
2026-03-31 19:01:51 +08:00
wangxiaoteng888
82e26b5a6e [BugFix][v0.18.0]Adjust request map pop time (#7857)
### What this PR does / why we need it?
Adjust the request-map pop time. This pull request optimizes the KV
cache transfer mechanism by streamlining how requests are tracked and
cleaned up. By removing unnecessary mapping structures and adjusting the
timing of request removal, the system achieves more efficient state
management during the transfer process.
Picked from: https://github.com/vllm-project/vllm-ascend/pull/7855


### How was this patch tested?
By CI.

Signed-off-by: wangxiaoteng <wangxiaoteng@huawei.com>
2026-03-31 18:55:36 +08:00
ZT-AIA
66db070423 [cherry-pick][Test] Repair test_compute_slot_mapping (#7836)
### What this PR does / why we need it?
Repair test_compute_slot_mapping.

Signed-off-by: ZT-AIA <1028681969@qq.com>
2026-03-31 16:52:58 +08:00
zhangxinyuehfad
af4278be35 [v0.18.0][CI] Close build image by pr (#7776)
### What this PR does / why we need it?
Disable the image build triggered by PRs.

This PR is related to
https://github.com/vllm-project/vllm-ascend/pull/7775; please merge them
together.

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
2026-03-31 16:38:43 +08:00
jack
7314bbe2df fix(platform): reimplement MiniMax usage accounting patch (#7835)
## Summary
- replace the MiniMax usage accounting monkey patch with a runtime
wrapper implementation instead of source-text rewriting
- preserve MiniMax reasoning-token semantics when `</think>` is missing
by counting the emitted output as reasoning tokens
- add unit coverage for usage tracking helpers and MiniMax
reasoning-token counting

## Why
The previous implementation rewrote `OpenAIServingChat` by matching
exact source blocks. That was brittle against `vllm` source drift and
could crash during early plugin initialization with:
`RuntimeError: Failed to locate expected block while patching
OpenAIServingChat usage accounting.`

This change keeps the usage-accounting backport, but applies it by
wrapping the original stream/full generators and tracking output token
ids at runtime.

For MiniMax reasoning counting, a missing `</think>` should not be
treated as zero reasoning tokens. It can mean the whole output is still
in thinking mode, or that generation stopped before the closing token
was produced. In that case, the emitted output should still be counted
as reasoning.
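
A minimal sketch of the runtime-wrapper idea, assuming the wrapped stream generator yields chunks that expose their new token ids (attribute and helper names are illustrative):

```python
import functools

# Hedged sketch: wrap the original stream generator and count output token ids
# as they are emitted, instead of rewriting OpenAIServingChat's source text.
def wrap_with_usage_accounting(stream_fn, usage_tracker):
    @functools.wraps(stream_fn)
    async def wrapped(*args, **kwargs):
        async for chunk in stream_fn(*args, **kwargs):
            usage_tracker.record(getattr(chunk, "token_ids", ()))
            yield chunk
    return wrapped
```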

## Validation
- `pytest -q
tests/ut/patch/platform/test_patch_minimax_usage_accounting.py`
- `vllm serve --help`

Signed-off-by: QwertyJack <7554089+QwertyJack@users.noreply.github.com>
Co-authored-by: QwertyJack <7554089+QwertyJack@users.noreply.github.com>
2026-03-31 16:27:00 +08:00
Wangbei25
4f259d4fd8 [Performance]Optimize DeepSeekOCR2 RelPosAttention and CustomQwen2Decoder (#7737)
### What this PR does / why we need it?
Optimize DeepSeekOCR2 RelPosAttention and CustomQwen2Decoder, and add
documentation in DeepSeekOCR2.md.

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?
- vLLM 0.18.0
- vllm-ascend main

1. `_create_custom_4d_mask`: 141 ms 49 us 620 ns --> `_create_npu_optimized_mask`: 1 ms 227 us 780 ns
2. conv2d: 27 ms --> matmul: <1 ms
3. RelPosAttention: sdpa --> prompt_flash_attention

---------

Signed-off-by: Wangbei25 <wangbei41@huawie.com>
Signed-off-by: Wangbei25 <wangbei41@huawei.com>
Co-authored-by: Wangbei25 <wangbei41@huawie.com>
2026-03-31 14:49:29 +08:00
liuchenbing2026
2a0a588311 [0.18.0][BugFix] Disable block verify to avoid incorrect verification on NPU (#7603) (#7839)

### What this PR does / why we need it?
Block verify uses cumprod(target_probs / draft_probs) for joint
acceptance. Suffix/ngram methods have draft_probs=None, and the fallback
draft_token_probs=1.0 combined with cumprod is not equivalent to
per-token verification, causing incorrect accept/reject results. Fix:
using_block_verify = max_spec_len >= 3 and draft_probs is not None.
MTP/Eagle3 are unaffected.
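
A minimal sketch of the guard, taken directly from the condition above:

```python
# Hedged sketch: only use block verification when real draft probabilities
# exist; suffix/ngram proposers fall back to per-token verification.
def should_use_block_verify(max_spec_len: int, draft_probs) -> bool:
    return max_spec_len >= 3 and draft_probs is not None
```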

- vLLM version: v0.18.0
- vLLM main:
ed359c497a


Signed-off-by: liuchenbing <chenliumail@163.com>
Co-authored-by: liuchenbing <chenliumail@163.com>
2026-03-31 09:36:48 +08:00
zxr2333
ab928ed586 [v0.18.0][P/D][Feature]Layerwise connector supports Mamba prefill prefix caching (#7796)
### What this PR does / why we need it?
Mooncake layerwise connector supports Mamba prefix caching on prefiller
nodes.

### Does this PR introduce _any_ user-facing change?
Yes. Use `--enable-prefix-caching` and `--mamba-cache-mode align` to
enable Mamba align-mode prefix caching on P/D prefill nodes. This
function is not supported on decode nodes yet.

### How was this patch tested?
By P/D E2E test.

---------

Signed-off-by: nwpu-zxr <zhouxuerong2@huawei.com>
2026-03-31 09:25:22 +08:00
linfeng-yuan
cab5d73633 [releases/v0.18.0][BugFix] Fix server init error when set max_num_seqs not a multiple of tp while FLASHCOMM is on (#7832)
### What this PR does / why we need it?
The current version runs into an init error when the user sets
max_num_seqs to a number that is not a multiple of the tp size. The
reason is that we first find the valid sizes for sequence parallelism
and then remove the numbers that are not multiples of the tp size. This
causes an error when max_num_seqs sits above a multiple of 8 but below
the next multiple of the tp size, say tp size 16 and max_num_seqs 90:
the system drops the calculated max graph capture size 88 from the
valid size list but does not reset max_cudagraph_capture_size to the
next valid number. Thus, we need to add a line to match them up.
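
A minimal sketch of the re-alignment described above (the filtering around it is simplified; names follow the PR text):

```python
# Hedged sketch: after dropping capture sizes that are not multiples of
# tp_size, reset max_cudagraph_capture_size to the largest surviving size
# instead of leaving it at a dropped value (e.g. 88 with tp_size=16).
def align_max_capture_size(capture_sizes, tp_size, max_capture_size):
    valid = [s for s in capture_sizes
             if s % tp_size == 0 and s <= max_capture_size]
    return valid, (max(valid) if valid else 0)
```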

Cherry-pick from main PR #7801

### Does this PR introduce _any_ user-facing change?
No. 

### How was this patch tested?
Full CI passed with this PR.

Signed-off-by: linfeng-yuan <1102311262@qq.com>
Co-authored-by: limuyuan <limuyuan3@huawei.com>
2026-03-30 20:24:52 +08:00
linfeng-yuan
deceefd305 [releases/v0.18.0][bugfix][eplb] remove unnecessary weight_scale wrap behaviour (#7732)
### What this PR does / why we need it?
This PR simplifies the apply method in w8a8_dynamic.py by removing the
conditional logic that used fused_w1_scale and fused_w2_scale based on
the fused_scale_flag. This redundant wrapping behavior breaks EPLB in
int8 quantization scenarios.

Cherry-picked from #7188. Note that only bugfix lines in that PR are
picked.

Signed-off-by: linfeng-yuan <1102311262@qq.com>
2026-03-30 16:16:03 +08:00
Mengqing Cao
fdd0726ae4 [v0.18.0][Triton] Fix triton-ascend version in Dockerfile (#7766)
### What this PR does / why we need it?
Triton-ascend occasionally encounters compilation errors, which is a
known issue in triton-ascend 3.2.0. However, we want to use the official
version rather than the development version, so we only changed the
triton-ascend version in the Dockerfile and added a FAQ to explain this
issue.

---------

Signed-off-by: MengqingCao <cmq0113@163.com>
2026-03-30 14:43:16 +08:00
Yang Yuxi
e776d5c0f1 [Bugfix]v0.18.0 support FlashComm1 & DCP for Qwen (#7726)
### What this PR does / why we need it?
This PR backports the changes from #7673 ([Bugfix] support FlashComm1 &
DCP for Qwen) to the releases/v0.18.0 branch.

--------
Signed-off-by: Yang Yuxi <907276627@qq.com>
2026-03-29 15:59:19 +08:00