Commit Graph

1359 Commits

Author SHA1 Message Date
wangxiaochao
0d04ad8c8f [feature] Mooncake_connector support pcp/dcp (#4183)
Add support for PCP/DCP in the Mooncake connector.

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

---------

Signed-off-by: wangxiaochao <w00642655@china.huawei.com>
Co-authored-by: wangxiaochao <w00642655@china.huawei.com>
2025-11-18 10:17:48 +08:00
Angazenn
10a046ddce [main][misc]change default capture size for Qwen3-MoE when using full dp (#4199)
### What this PR does / why we need it?
Currently, the default `cudagraph_capture_size` in vLLM is `[1, 2, 4, 8,
16, 24, ..., max_capture_size]`. However, this is not always the best
choice in every situation. This PR changes the default when running
Qwen3-MoE with full DP (`dp_size > 1` && `tp_size == 1`), a setting
typically used in large-scale EP.
old:
`[1, 2, 4, 8, 16, 24, ..., max_capture_size]`
new:
`[1, 2, 5, 10, 15, 16, 24, ..., max_capture_size]`
This is mainly because the performance of the `_npu_paged_attention` op
degrades dramatically with the old settings. We hope to provide better
performance when users do not set a specific `cudagraph_capture_size`.
### Does this PR introduce _any_ user-facing change?
The default `cudagraph_capture_size` is modified in the above cases.
However, if `cudagraph_capture_size` has already been set by the user,
this PR has no effect.
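The two defaults described above can be sketched as follows. This is a hypothetical reconstruction for illustration only: the function name, the `full_dp` flag, and the step-8 tail starting at 16 are assumptions, not the actual vLLM code.

```python
def default_capture_sizes(max_capture_size: int, full_dp: bool = False) -> list[int]:
    # Small leading sizes, then a tail of multiples of 8 up to the cap.
    # full_dp=True models the new Qwen3-MoE full-DP default from this PR.
    head = [1, 2, 5, 10, 15] if full_dp else [1, 2, 4, 8]
    tail = list(range(16, max_capture_size + 1, 8))
    return [s for s in head if s < 16] + tail
```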

### How was this patch tested?

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

---------

Signed-off-by: Angazenn <supperccell@163.com>
2025-11-18 08:41:45 +08:00
weiguihua2
da1cd9c7ca [Bugfix]Fix moe error when sp chunked the hidden_states (#4212)
### What this PR does / why we need it?
Fix a MoE error when SP chunks the hidden_states, by disabling SP as a hacky workaround.

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

---------

Signed-off-by: weiguihua2 <weiguihua2@huawei.com>
2025-11-17 22:55:17 +08:00
Ronald
3677202594 make vllm-ascend work well in developer mode (#4179)
### What this PR does / why we need it?
We often install vllm-ascend in developer mode, which has no `_build_info`
module. This raises an error in `utils.is_310p` and
`utils.sleep_model_enabled`, so we need to modify these two functions.

### Does this PR introduce _any_ user-facing change?
not involved

### How was this patch tested?
not involved

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

Signed-off-by: Ronald1995 <ronaldautomobile@163.com>
2025-11-17 19:13:04 +08:00
jiangyunfan1
9a1cfb48d4 [TEST]Update prefixcache perf threshold for qwen3-32b-int8 (#4220)
### What this PR does / why we need it?
This PR updates the prefixcache threshold for qwen3-32b-int8 from 0.4 to
0.8, as the baseline has been improved.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
By running the test
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

Signed-off-by: jiangyunfan1 <jiangyunfan1@h-partners.com>
2025-11-17 19:06:54 +08:00
XiaoxinWang
e38ef2c434 support FULL graph mode for GQA (#3970)
### What this PR does / why we need it?
The current library only supports the FullDecodeOnly graph mode, which
enables full graph execution during the decode phase. This PR extends
support to allow full graph execution in both the prefill and decode
phases, referred to as FULL graph mode.

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

Signed-off-by: wangxiaoxin-sherie <wangxiaoxin7@huawei.com>
Co-authored-by: wangxiaoxin-sherie <wangxiaoxin7@huawei.com>
2025-11-17 10:50:35 +08:00
zhangyiming
c334114f69 [CI] Fix no space left in build wheel CI. (#4215)
### What this PR does / why we need it?
[CI] Fix no space left in build wheel CI.

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

Signed-off-by: menogrey <1299267905@qq.com>
2025-11-17 10:45:58 +08:00
zhangxinyuehfad
67f2b3a031 [Test] Add deepseek v3.2 exp nightly test (#4191)
### What this PR does / why we need it?

- skip the nightly image build when the GitHub event is pull_request
- set `imagePullPolicy` to `Always` for the multi_node test
- move the multi_node tests ahead so some resources are cleaned up first
- decouple the nightly image build from the nightly tests for fault tolerance

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

---------

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
Signed-off-by: wangli <wangli858794774@gmail.com>
Co-authored-by: wangli <wangli858794774@gmail.com>
2025-11-14 15:46:10 +08:00
Shanshan Shen
1d0f13c1a3 [Misc] Add benchmark results into .gitignore (#4200)
### What this PR does / why we need it?
Add benchmark results into `.gitignore`

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

Signed-off-by: shen-shanshan <467638484@qq.com>
2025-11-14 15:44:28 +08:00
Canlin Guo
f10251ede0 [Platform] Add import_kernels interface (#3694)
### What this PR does / why we need it?
Add an import_kernels interface to avoid importing the unused vLLM C library.

Closes #3488. Reopen #3498 for CI.

### How was this patch tested?

CI tested.

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

---------

Signed-off-by: gcanlin <canlinguosdu@gmail.com>
2025-11-14 11:32:51 +08:00
Yizhou
094f32c8c9 [Feat] Adds a utility for printing from within ACL graphs (#4162)
### What this PR does / why we need it?
Introduces the `acl_graph_print` function to enable printing debug
information from code running inside an ACL graph, such as custom
operators.

This works by launching a host function on a dedicated stream, bypassing
the limitations of standard `print` within compiled graph execution. The
implementation handles the necessary stream subscriptions and ensures
they are properly unregistered upon exit.

### Does this PR introduce _any_ user-facing change?
None.

### How was this patch tested?
None.

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

Signed-off-by: Yizhou Liu <liu_yizhou@outlook.com>
2025-11-14 09:41:14 +08:00
weiguihua2
01195e860c [Bugfix] fix cannot import name get_mp_context (#4174)
### What this PR does / why we need it?
fix bug: cannot import vllm package

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

Signed-off-by: weiguihua2 <weiguihua2@huawei.com>
2025-11-14 09:09:14 +08:00
欧派果奶我还要
f90ed95578 [CI] Add multi-nodes EPLB configs of DeepSeek-R1-W8A8 & Qwen3-235B-W8A8 (#4144)
### What this PR does / why we need it?
add DeepSeek-R1-W8A8 and Qwen3-235B-W8A8 configs in multi-nodes and EPLB
scenario

### Does this PR introduce _any_ user-facing change?
no

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: 白永斌 <baiyongbin3@h-partners.com>
Co-authored-by: 白永斌 <baiyongbin3@h-partners.com>
2025-11-14 08:50:29 +08:00
LookAround0301
5ec96fd46c [long_seq_Feat] support chunk prefill (#4158)
### What this PR does / why we need it?
1. Qwen GQA attention_v1 optimization
2. DeepSeek MLA refactor: all-gather q -> all-gather kv
3. Model-runner refactor for chunked prefill; removed some unused code

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

---------

Signed-off-by: LookAround <lixushi@huawei.com>
Signed-off-by: Delphine-Nic <tanwenqin@huawei.com>
Co-authored-by: Delphine-Nic <tanwenqin@huawei.com>
2025-11-14 08:43:37 +08:00
Li Wang
7294f89e43 [CI] Add daily images build for nightly ci (#3989)
### What this PR does / why we need it?
Given the current excessively long build time of our nightly CI, I
recommend installing the necessary, confirmed versions of packages in the
Docker image to reduce the time required for integration testing,
including Mooncake and vLLM at fixed tags. This is expected to reduce
nightly-CI duration by 2 hours.

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
2025-11-13 20:10:12 +08:00
Nengjun Ma
f7d1f73b98 [CI] Remove unsupported python 3.9 format check (#4172)
### What this PR does / why we need it?
- Fixes the lint test failure for Python 3.9, as Python 3.9 is no longer
supported.

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

Signed-off-by: leo-pony <nengjunma@outlook.com>
2025-11-13 16:47:24 +08:00
CodeCat
49818dbbed [Test]Add ut test qwen3_moe and sfa (#4121)
### What this PR does / why we need it?
Currently, the UT tests lack coverage for the Qwen3_moe network and
torchair_sfa. Therefore, supplementary tests are being added.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
by CI

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: CodeNine-CJ <chenjian343@huawei.com>
2025-11-13 16:09:22 +08:00
lilinsiman
adee9dd3b1 [Info][main] Correct the mistake in information documents (#4157)
### What this PR does / why we need it?
Correct the mistake in information documents

### Does this PR introduce _any_ user-facing change?
no

### How was this patch tested?
ut

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

---------

Signed-off-by: lilinsiman <lilinsiman@gmail.com>
2025-11-13 15:53:58 +08:00
zhaozx-cn
fdd2db097a [BugFix] Fix kv_no_split not contiguous (#3594)
AllGather needs contiguous data, but the split operation returns
non-contiguous data.
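The underlying issue can be sketched with NumPy (the same property holds for torch views): splitting a tensor along a non-leading dimension yields strided views, which collectives that require contiguous buffers cannot consume directly.

```python
import numpy as np

x = np.arange(12).reshape(3, 4)

# Splitting along the last axis returns strided views into x,
# not contiguous copies.
left, right = np.split(x, 2, axis=1)
assert not left.flags["C_CONTIGUOUS"]

# Materializing a contiguous copy fixes it
# (the torch equivalent is .contiguous()).
fixed = np.ascontiguousarray(left)
assert fixed.flags["C_CONTIGUOUS"]
```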

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: zhaozx-cn <zhaozx2116@163.com>
2025-11-13 11:28:09 +08:00
drslark
9d84172359 [BugFix] adapted e2e tests for Qwen3-next-mtp (#4160)
### What this PR does / why we need it?

Since https://github.com/vllm-project/vllm-ascend/pull/3967, chunked
prefill and splitfuse are enabled by default.

The e2e test for MTP breaks now.

After locating the bug, we found that a triton operator does not support
chunked prefill.

Simply skipping the e2e test would be bad, so we changed it to only
cover the case in which chunked prefill is off.

### Does this PR introduce _any_ user-facing change?

N/A

### How was this patch tested?

Because we only modified
`test_models_distributed_Qwen3_NEXT_MTP_TP4_SIMILARITY`, we only ran
`pytest -s
tests/e2e/multicard/test_qwen3_next.py::test_models_distributed_Qwen3_NEXT_MTP_TP4_SIMILARITY`
locally to test it.

Below is the result:

```text
==================================================================================================================== warnings summary ====================================================================================================================
usr/local/python3.11.10/lib/python3.11/site-packages/torch_npu/dynamo/torchair/__init__.py:8
  /usr/local/python3.11.10/lib/python3.11/site-packages/torch_npu/dynamo/torchair/__init__.py:8: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
    import pkg_resources

<frozen importlib._bootstrap>:241
  <frozen importlib._bootstrap>:241: DeprecationWarning: builtin type SwigPyPacked has no __module__ attribute

<frozen importlib._bootstrap>:241
  <frozen importlib._bootstrap>:241: DeprecationWarning: builtin type SwigPyObject has no __module__ attribute

tests/e2e/multicard/test_qwen3_next.py::test_models_distributed_Qwen3_NEXT_MTP_TP4_SIMILARITY
tests/e2e/multicard/test_qwen3_next.py::test_models_distributed_Qwen3_NEXT_MTP_TP4_SIMILARITY
  /usr/local/python3.11.10/lib/python3.11/site-packages/pydantic/_internal/_dataclasses.py:121: DeprecationWarning: The 'task' option has been deprecated and will be removed in v0.13.0 or v1.0, whichever comes first. Please remove this option.
    s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
======================================================================================================= 1 passed, 5 warnings in 314.52s (0:05:14) ========================================================================================================
sys:1: DeprecationWarning: builtin type swigvarlink has no __module__ attribute
```

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

Signed-off-by: drslark <slarksblood@qq.com>
2025-11-13 11:08:35 +08:00
realliujiaxu
5093192769 [Bugfix] fix mtp profile run error where main model and mtp model use different quantization (#4102)
### What this PR does / why we need it?
In PR https://github.com/vllm-project/vllm-ascend/pull/3420, we
initially placed the quantization type (quant_type) in the MoECommMethod
class. However, since MoECommMethod follows a singleton pattern, it
couldn't accommodate scenarios where different layers in the model use
different quantization approaches (e.g., MTP modules using
floating-point computation while the main model employs quantized
computation).
In this PR, we move the quantization type to the AscendFusedMoe
class and pass it as a parameter to MoECommMethod.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
```bash
export HCCL_BUFFSIZE=1024
export VLLM_VERSION=0.11.0

vllm serve /home/data/DeepSeek-R1_w8a8/ \
 --data-parallel-size 2 \
 --tensor-parallel-size 8 \
 --enable-expert-parallel \
 --served-model-name dsv3 \
 --max-model-len 32768 \
 --max-num-batched-tokens 4096 \
 --max-num-seqs 16 \
 --quantization ascend \
 --trust-remote-code \
 --gpu-memory-utilization 0.9 \
 --speculative-config '{"num_speculative_tokens": 2, "method":"deepseek_mtp"}'
```


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: realliujiaxu <realliujiaxu@163.com>
2025-11-13 11:02:31 +08:00
weichen
17259cb265 [Perf] [MoE] optimize all2allv (#3738)
### What this PR does / why we need it?
1. Replace init_routing_v2 with token_permute to optimize performance.

Note: this PR will be merged after switching CI to CANN 8.3.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
vllm bench serve bs = 48 / rr = 10000 / 2k input -> 20k output:
before:
<img width="489" height="488" alt="image"
src="https://github.com/user-attachments/assets/268a19e6-9ab2-47f0-84a1-4f6d3bc342e2"
/>
 after:
<img width="480" height="500" alt="image"
src="https://github.com/user-attachments/assets/d9b1e628-0520-42d5-8a21-b42f7cd7abc7"
/>
- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: Pr0Wh1teGivee <calvin_zhu0210@outlook.com>
2025-11-13 09:38:11 +08:00
realliujiaxu
6bc770cd78 [Perf] fix async copy for async scheduling (#4113)
### What this PR does / why we need it?
Only CPU tensors with `pin_memory=True` can be asynchronously copied to
the device. Currently, there are two places where non-pinned CPU
tensors are copied to the device; these trigger synchronous copies,
reducing the expected benefit of asynchronous scheduling.
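The fix can be sketched as a small helper (a minimal illustration, not the actual vllm-ascend code; the helper name is made up):

```python
import torch

def async_h2d(cpu_t: torch.Tensor, device: str) -> torch.Tensor:
    # A non_blocking copy only truly overlaps with compute when the
    # source lives in pinned (page-locked) host memory; with a pageable
    # source the copy silently degrades to a synchronous one.
    if torch.cuda.is_available() and not cpu_t.is_pinned():
        cpu_t = cpu_t.pin_memory()
    return cpu_t.to(device, non_blocking=True)
```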

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: realliujiaxu <realliujiaxu@163.com>
2025-11-13 09:11:26 +08:00
22dimensions
c272747d13 Upgrade to 0.11.1 newest vllm commit (#3982)
### What this PR does / why we need it?
adapt vllm-ascend main branch with vllm releases/v0.11.1

fix `forward context not set` in test_vlm.py caused by:
https://github.com/vllm-project/vllm/pull/23207

fix failed import of `cdiv` and `round` caused by:
https://github.com/vllm-project/vllm/pull/27188

fix failed import of `init_cached_hf_modules` caused by:
https://github.com/vllm-project/vllm/pull/27567

adapt triton kernel `fused_recurrent_gated_delta_rule_fwd_kernel` caused
by: https://github.com/vllm-project/vllm/pull/27654
- remove unused code in sigmoid_gating.py
- `class FusedRecurrentFunction` , `fused_recurrent_gated_delta_rule`,
`fused_recurrent_gated_delta_rule_fwd`

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
CI 


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: 22dimensions <waitingwind@foxmail.com>
2025-11-12 23:01:19 +08:00
Li Wang
3ca11d5a7c [CI] Fix nightly-ci (#4159)
### What this PR does / why we need it?
Explicit specification `NUMEXPR_MAX_THREADS` to avoid `Error. nthreads
cannot be larger than environment variable "NUMEXPR_MAX_THREADS" (64)`

Signed-off-by: wangli <wangli858794774@gmail.com>
2025-11-12 22:06:49 +08:00
Angazenn
fc7e5cd9dc [main][bugfix] Change seq_lens in dummy attn_metadata to max_query_len (#4097)
### What this PR does / why we need it?
Currently, we set `seq_lens` in the dummy attn_metadata to
`max_model_len` to get the maximum workspace for attention during
capture. However, setting it consistently to `max_model_len` causes
dummy_run to execute a long attention during actual inference. For
example, if there is a single req with `seq_lens` of [8] but
`max_model_len` is 131072, the whole process is slowed down by dummy_run
as it executes a fake long-seq attention. Therefore, we instead set it
to max_query_len, which is also consistent with the vLLM GPU
implementation.
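A toy sketch of the change (hypothetical helper and flag names, not the actual model-runner code):

```python
def dummy_seq_lens(num_reqs: int, max_query_len: int, max_model_len: int,
                   use_max_query_len: bool = True) -> list[int]:
    # Old behavior filled the dummy seq_lens with max_model_len, making
    # dummy_run execute a fake full-length attention; the new behavior
    # fills with max_query_len instead.
    fill = max_query_len if use_max_query_len else max_model_len
    return [fill] * num_reqs
```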

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: Angazenn <supperccell@163.com>
2025-11-12 17:31:39 +08:00
zhangsicheng5
a123f355e9 [feature] support pcp + mtp (in pd co-locate scenario) (#4098)
1. support PCP + MTP in the PD co-locate scenario
2. LLMDataDist connector PCP-related bugfixes and code cleanup

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: zhangsicheng5 <zhangsicheng5@huawei.com>
2025-11-12 17:22:21 +08:00
XiaoxinWang
1b4ce63ec9 fix fullgraph in ds. (#4016)
### What this PR does / why we need it?
DS doesn't have an 'AscendAttentionMetadataBuilder' class, so it fails
in fullgraph mode.
We resolved the issue by modifying the code to only check for
'GDNAttentionMetadataBuilder', while all other attention cases follow
the default branch.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: wangxiaoxin-sherie <wangxiaoxin7@huawei.com>
Co-authored-by: wangxiaoxin-sherie <wangxiaoxin7@huawei.com>
2025-11-12 10:11:43 +08:00
zhangyiming
c9e5b90f53 [Doc] Fix DeepSeek-3.2-Exp doc, remove v0.11.0rc0 outdated infos. (#4095)
### What this PR does / why we need it?
Fix the DeepSeek-3.2-Exp doc; remove outdated v0.11.0rc0 info.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: menogrey <1299267905@qq.com>
2025-11-12 09:11:31 +08:00
Yizhou
638dbcdb32 [Perf] Remove D2H operations to improve performance (#4063)
### What this PR does / why we need it?
Replace a masked in-place assignment with a device-side torch.where so
the selection stays on-device, allowing subsequent device ops to be
enqueued earlier and removing an implicit D2H sync, reducing latency by
several hundred μs on Ascend.
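The pattern can be sketched in NumPy (an illustrative analogy; on-device the torch equivalents are boolean-mask assignment vs `torch.where`, and the sync behavior is a property of the accelerator runtime, not of NumPy):

```python
import numpy as np

logits = np.array([0.5, -1.0, 2.0, 0.1])
invalid = np.array([True, False, True, False])

# Masked in-place assignment: on an accelerator, indexing by a boolean
# mask may need to materialize the selected indices, which can force an
# implicit device-to-host synchronization.
a = logits.copy()
a[invalid] = -np.inf

# Element-wise selection: no index materialization, so the op can be
# enqueued on the device stream without syncing.
b = np.where(invalid, -np.inf, logits)

assert np.array_equal(a, b)
```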

### Does this PR introduce _any_ user-facing change?
None.

### How was this patch tested?
None.
- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: Yizhou Liu <liu_yizhou@outlook.com>
2025-11-12 09:08:55 +08:00
thonean
e38fe92f40 [Misc][Doc] Add service profiling feature with user guide (#3756)
### What this PR does / why we need it?
To support the data collection capabilities of msServiceProfiler in the
vLLM-Ascend framework and enable customization of data collection points
via a configuration file, a default profiling configuration has been
added to vllm-ascend, facilitating debugging and optimization for
developers and users.

### Does this PR introduce _any_ user-facing change?
None

### How was this patch tested?

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: minghangc <29514143@qq.com>
2025-11-12 09:07:14 +08:00
Canlin Guo
1c677c3b87 [Test][Accuracy] Add accuracy evaluation config for InternVL3_5-8B (#3964)
### What this PR does / why we need it?

To continuously monitor the accuracy of the InternVL3_5-8B model, this
PR adds the corresponding configuration file to the CI. We need to add
the `-hf` suffix to avoid incompatibility with the `lm-eval`
preprocessor.

### How was this patch tested?

`pytest -sv ./tests/e2e/models/test_lm_eval_correctness.py --config
./tests/e2e/models/configs/InternVL3_5-8B.yaml`


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: gcanlin <canlinguosdu@gmail.com>
2025-11-12 09:05:55 +08:00
zzhxxx
46a41b26d3 oproj TP support acl graph (#4073)
### What this PR does / why we need it?
Following #2167, oproj TP now supports ACL graph.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: zzhx1 <zzh_201018@outlook.com>
2025-11-11 19:39:06 +08:00
jiangyunfan1
0e6e08e939 [TEST]Update nightly cases and add mtpx (#4111)
### What this PR does / why we need it?
This PR updates some nightly test cases and adds mtpx cases, which we
need to test daily.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
By running the test

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: jiangyunfan1 <jiangyunfan1@h-partners.com>
2025-11-11 17:39:58 +08:00
Li Wang
9cc42226d5 [CI] Integrate mooncake to vllm-ascend base image (#4062)
### What this PR does / why we need it?
This patch integrates mooncake
[v0.3.7.2.post2](https://github.com/kvcache-ai/Mooncake/releases/tag/v0.3.7.post2)
into the vllm-ascend images.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
2025-11-11 15:51:16 +08:00
wangxiyuan
f811a24bf0 Remove VLLM_USE_V1 (#4086)
Drop VLLM_USE_V1 usage. This env var has already been removed from vLLM.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-11-11 15:43:39 +08:00
zhangxinyuehfad
d5567680a2 [Fixbug] Fix ut test (#4116)
### What this PR does / why we need it?
Fix UT test: pytest<9.0.0.
test_models_distributed_Qwen3_NEXT_MTP_TP4_SIMILARITY failed due to
https://github.com/vllm-project/vllm-ascend/pull/3967; skip it now and
fix it later.

Test passes:
https://github.com/vllm-project/vllm-ascend/actions/runs/19255274573/job/55048851066?pr=4116


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
2025-11-11 15:31:00 +08:00
zhangxinyuehfad
fae1c59a79 [Fix] Refactor and fix dist test to e2e full test (#3808)
### What this PR does / why we need it?
Fix ci test on A3
1. delete labels
2. fix the filter YAML file name
3. refactor the dist test to an e2e full test
4. skip test_models_distributed_Qwen3_MOE_TP2_WITH_EP &
test_models_distributed_Qwen3_MOE_W8A8_WITH_EP because of
https://github.com/vllm-project/vllm-ascend/issues/3895

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
2025-11-11 10:36:05 +08:00
zhangxinyuehfad
b77b4f1abf [Test] Add nightly test for DeepSeek-V3.2-Exp (#3908)
### What this PR does / why we need it?
Add nightly test for DeepSeek-V3.2-Exp


### How was this patch tested?
test action:

https://github.com/vllm-project/vllm-ascend/actions/runs/19156153634/job/54757008557?pr=3908


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
2025-11-11 10:29:57 +08:00
Yikun Jiang
e384755ce1 [Doc] Recover installation doc to use pip install (#4109)
### What this PR does / why we need it?
Use pip installation in installation doc and change related doctest to
validate.

### Does this PR introduce _any_ user-facing change?
No, doc only

### How was this patch tested?
Doctest related CI passed
- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
2025-11-11 09:25:44 +08:00
Apocalypse
71866d5311 [feature] chunkprefill support pcp&dcp (#3801)
### What this PR does / why we need it?
Chunked prefill can now support the long-sequence PCP & DCP features.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
CI tests passed with self-test


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: Apocalypse990923-qshi <qiushixu@usc.edu>
Signed-off-by: Delphine-Nic <tanwenqin@huawei.com>
Co-authored-by: Delphine-Nic <tanwenqin@huawei.com>
Co-authored-by: Delphine-Nic <3834144971@qq.com>
2025-11-11 09:18:02 +08:00
zhaomingyu13
7ffbe73d54 [main][Bugfix] Fix ngram precision issue and open e2e ngram test (#4090)
### What this PR does / why we need it?
Fix ngram precision issue and open e2e ngram test

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: Icey <1790571317@qq.com>
Signed-off-by: zhaomingyu <zhaomingyu13@h-partners.com>
Co-authored-by: Icey <1790571317@qq.com>
2025-11-11 09:06:24 +08:00
wangxiyuan
64220c68c5 [Doc] Add release note for v0.11.0rc1 (#3931)
Add release note for v0.11.0rc1.


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-11-10 21:01:50 +08:00
Icey
e04a87f4be [BugFix] Fixes Qwen3-Next enable nz accuracy problem (#4058)
### What this PR does / why we need it?
- Fixes Qwen3-Next enable nz accuracy problem

### Does this PR introduce _any_ user-facing change?
N/A


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: Icey <1790571317@qq.com>
Signed-off-by: wxsIcey <1790571317@qq.com>
2025-11-10 20:54:57 +08:00
22dimensions
e6625bb582 [Doc] add qwen3 w4a4 tutorial (#4076)
### What this PR does / why we need it?
v0.11.0rc1 will introduce w4a4 quantization feature, so add this
tutorial.

### Does this PR introduce _any_ user-facing change?

No


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: 22dimensions <waitingwind@foxmail.com>
2025-11-10 20:30:07 +08:00
rjg-lyh
a1558b99c2 [Core] Restore scheduling logic under default configuration (#3967)
### What this PR does / why we need it?
This PR reverts the changes introduced in PR #2894. Initially, due to
performance issues with the older version of the chunked prefill ops,
the default behavior was to use the Ascend scheduler to disable the
chunked prefill feature. However, with the improved performance of the
new chunked prefill ops, this interception strategy has been removed.
This change also aligns with the community's default configuration
behavior.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
CI passed with new added/existing test.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: rjg-lyh <1318825571@qq.com>
2025-11-10 17:48:56 +08:00
herizhen
75c3f9a780 [Typo] LLama has been changed to Llama (#4089)
### What this PR does / why we need it?
The first-generation model uses "LLama", while subsequent models use
"Llama". The second "L" here should be lowercase. Other instances of
"LLama" on this page should be corrected accordingly.

### Does this PR introduce _any_ user-facing change?
no
### How was this patch tested?
ut

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: herizhen <you@example.com>
Co-authored-by: herizhen <you@example.com>
2025-11-10 16:22:52 +08:00
zhangxinyuehfad
d40ba52454 [Fix] fix Qwen2-Audio-7B-Instruct accuracy test (#4017)
### What this PR does / why we need it?

fix Qwen2-Audio-7B-Instruct accuracy test

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
2025-11-10 11:54:18 +08:00
Canlin Guo
de49fb3deb [Feature][Build] Upgrade the minimum version to 3.10 (#3926)
### What this PR does / why we need it?

Closes #3728, #3657. 

The main branch is now aligned with the vllm `releases/v0.11.1` branch,
which no longer supports `Python 3.9`. Check it
[here](https://github.com/vllm-project/vllm/blob/releases/v0.11.1/pyproject.toml).

### Does this PR introduce _any_ user-facing change?

The newest version of vllm-ascend doesn't support Python 3.9.

### How was this patch tested?

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: gcanlin <canlinguosdu@gmail.com>
2025-11-10 11:50:12 +08:00
Levi
0a62e671fb [Feat] flashcomm_v2 optim solution (#3232)
### What this PR does / why we need it?
Supports the generalized FlashComm2 optimization, which reduces
communication overhead, decreases RmsNorm computation, and saves one
AllGather step by replacing AllReduce operations in the Attention module
with a pre-AlltoAll and post-AllGather (used in combination with
FlashComm1). This feature is enabled during the prefill phase and is
recommended to be used together with FlashComm1, delivering broad
performance improvements, especially in long-sequence scenarios with
large tensor-parallel (TP) configurations. Benchmark tests show that
under the TP16DP1 configuration, it can improve the prefill performance
of the DeepSeek model by 8% on top of FlashComm1.
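The communication identity this kind of optimization relies on can be checked in a toy single-process simulation: an AllReduce is equivalent to a reduce-scatter (here modeled as an AlltoAll of shards followed by a local reduction on each rank) plus an AllGather of the reduced shards. This is only an illustration of the identity, not the FlashComm2 implementation.

```python
import numpy as np

tp = 4        # simulated tensor-parallel world size
hidden = 8    # hidden size, divisible by tp
rng = np.random.default_rng(0)
rank_partials = [rng.random(hidden) for _ in range(tp)]  # per-rank partial sums

# Baseline AllReduce: every rank ends up with the full elementwise sum.
allreduce = sum(rank_partials)

# AlltoAll of shards: chunks[src][dst] is the shard rank `src` sends to
# rank `dst`; each destination rank reduces the shards it receives.
chunks = [np.split(p, tp) for p in rank_partials]
reduced_shards = [sum(chunks[src][dst] for src in range(tp)) for dst in range(tp)]

# AllGather of the reduced shards reproduces the AllReduce result.
regathered = np.concatenate(reduced_shards)
assert np.allclose(allreduce, regathered)
```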
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: zzhxx <2783294813@qq.com>
Signed-off-by: Levi-JQ <yujinqi2@huawei.com>
Co-authored-by: Levi-JQ <yujinqi2@huawei.com>
Co-authored-by: zzhxx <2783294813@qq.com>
2025-11-10 11:01:45 +08:00