Commit Graph

639 Commits

Author SHA1 Message Date
XiaoxinWang
5508a602ed [Fix] fix aclgraph e2e test. (#4131)
### What this PR does / why we need it?
Due to the inconsistency between the attention operators used in eager
mode and graph mode, the floating-point accumulation order cannot be
guaranteed to be deterministic. Therefore, we modify the test to compare
against given reference outputs.
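
A minimal sketch of the comparison style this implies (prompt keys and reference strings below are made up for illustration, not the test's actual data):

```python
# Hypothetical golden-output check: because eager mode and graph mode use
# different attention kernels, floating-point accumulation order differs,
# so the test compares generated text against stored references instead
# of against a live eager-mode run.
EXPECTED_OUTPUTS = {
    "prompt-0": "The capital of France is Paris.",
}

def check_against_references(generated: dict[str, str]) -> None:
    for key, expected in EXPECTED_OUTPUTS.items():
        assert generated[key] == expected, (
            f"{key}: {generated[key]!r} != {expected!r}")
```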
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

Signed-off-by: wangxiaoxin-sherie <wangxiaoxin7@huawei.com>
Co-authored-by: wangxiaoxin-sherie <wangxiaoxin7@huawei.com>
2025-11-24 17:22:03 +08:00
wangxiyuan
a1f142b7ad Drop 0.11.0 support (#4377)
There is a lot of hack code for v0.11.0, which makes the code hard to
upgrade to newer vLLM versions. Since v0.11.2 will be released soon, let's
drop v0.11.0 support first. Then we'll upgrade to v0.11.2 soon.


- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-11-24 17:08:20 +08:00
jiangyunfan1
41ddb06554 [TEST]Update deepseek mtpx acc cases standard (#4321)
### What this PR does / why we need it?
This PR updates the accuracy standard for the deepseek mtpx cases,
according to the internal standard.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
By running the test

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

Signed-off-by: jiangyunfan1 <jiangyunfan1@h-partners.com>
2025-11-24 16:43:29 +08:00
Angazenn
9b3a484b46 [BugFix] Fix some issues caused by the ascending order of cudagraph_capture_sizes (#4338)
### What this PR does / why we need it?
In [#26016](https://github.com/vllm-project/vllm/pull/26016), vLLM
changed `cudagraph_capture_sizes` to be in ascending order. This PR
fixes the related issues caused by that change.
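
One way such issues arise, as a hedged sketch (the helper name and sizes are illustrative, not vllm-ascend's code): a lookup that assumed a descending list must switch to an ascending search, e.g. `bisect_left`.

```python
import bisect

capture_sizes = [1, 2, 4, 8, 16, 32]  # ascending since vllm#26016

def padded_graph_size(batch_size: int) -> int:
    """Pick the smallest captured size that can hold `batch_size`."""
    idx = bisect.bisect_left(capture_sizes, batch_size)
    if idx == len(capture_sizes):
        raise ValueError("batch larger than the largest captured graph")
    return capture_sizes[idx]
```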
### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?


- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

---------

Signed-off-by: Angazenn <supperccell@163.com>
2025-11-22 17:33:12 +08:00
InSec
5a4e8cdeba [Feat][BugFix]Support the Qwen3-Next-80B-A3B-Instruct quantization model&Fix the NZ issue (#4245)
### What this PR does / why we need it?
Support the Qwen3-Next-80B-A3B-Instruct quantization model and fix the
NZ issue. The Triton kernel doesn't support the NZ data format, so we skip
converting the weight to NZ on the `conv1d` layer.
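
A minimal sketch of that skip, assuming `torch_npu`'s format-cast API and the NZ format id (the real weight-processing hook in vllm-ascend differs):

```python
import torch
import torch_npu

ACL_FORMAT_FRACTAL_NZ = 29  # assumed NZ format id in torch_npu

def maybe_cast_to_nz(layer_name: str, weight: torch.Tensor) -> torch.Tensor:
    # Skip conv1d: the Triton kernel that consumes it only understands
    # the default (ND) layout, so casting its weight to NZ breaks it.
    if "conv1d" in layer_name:
        return weight
    return torch_npu.npu_format_cast(weight, ACL_FORMAT_FRACTAL_NZ)
```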

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

---------

Signed-off-by: IncSec <1790766300@qq.com>
2025-11-21 10:42:56 +08:00
Yizhou
cbb27feaf2 [Test] Add ACL graph capture/replay DP test (#4259)
### What this PR does / why we need it?
Add an ACL graph capture/replay DP test; this is an improved version of #3886.

Restructures the multi-card ACL graph test for improved clarity,
robustness, and accuracy.

Key improvements include:
- Replaces fragile `sys.settrace` and manual patching with a clean,
reusable spy installer using `unittest.mock.patch` (see the sketch after
this list).
- Introduces more precise metrics by tracking
`NPUModelRunner.execute_model` and `_dummy_run` calls directly.
- Rewrites assertions to be more accurate and provides clear
explanations for the expected counts of graph captures, replays, model
executions, and dummy runs.
- Simplifies the overall test structure by separating the worker logic
into a dedicated function.
- Removes a long, unnecessary sleep at the end of the test.
- Expands test coverage by adding a larger `max_tokens` parameter.
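
A minimal sketch of such a spy installer, using the standard-library `unittest.mock` spy idiom (the test's actual helper may differ in detail):

```python
from unittest import mock

def install_spy(owner, name):
    """Wrap `owner.name` so every call is counted while still delegating
    to the original implementation; returns (patcher, spy)."""
    original = getattr(owner, name)
    patcher = mock.patch.object(owner, name, side_effect=original,
                                autospec=True)
    spy = patcher.start()
    return patcher, spy

# Usage in the spirit of this test (method names from the PR description):
#   patcher, spy = install_spy(NPUModelRunner, "_dummy_run")
#   ... run the workload ...
#   assert spy.call_count == expected_dummy_runs
#   patcher.stop()
```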

### Does this PR introduce _any_ user-facing change?
None.

### How was this patch tested?
None.

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

---------

Signed-off-by: lilinsiman <lilinsiman@gmail.com>
Signed-off-by: Yizhou Liu <liu_yizhou@outlook.com>
Co-authored-by: lilinsiman <lilinsiman@gmail.com>
2025-11-21 08:50:46 +08:00
CodeCat
470fe05df6 [Test] Add tests for the multi-node DeepSeek-V2-Lite network in GE Graph (#4039)
### What this PR does / why we need it?
Add tests for the multi-node DeepSeek-V2-Lite network in GE Graph mode,
and supplement the end-to-end (e2e) tests for the MLA and NZ features of
this network.
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

---------

Signed-off-by: CodeNine-CJ <chenjian343@huawei.com>
2025-11-20 17:28:32 +08:00
realliujiaxu
1cdf9ffa73 [Bugfix] fix hang in async scheduling (#4233)
### What this PR does / why we need it?

After https://github.com/vllm-project/vllm-ascend/pull/4113, there is no
synchronization between steps. However, in async scheduling with
aclgraph, it is possible that the CPU's record event for the current
iteration completes before the previous iteration's graph execution has
finished.

If the CPU is fast enough, the device will hang on event_wait in iteration
i+1 (assuming that event_record is executed immediately on the device's
update stream):
<img width="1812" height="489" alt="image"
src="https://github.com/user-attachments/assets/373fe655-afe5-4d7d-807e-b0aacf24a543"
/>

After adding the synchronization, the record is launched after the graph replay:
<img width="1803" height="466" alt="image"
src="https://github.com/user-attachments/assets/a8a68053-bd7d-49f5-a79c-9a26ef1285cc"
/>

The bubble time caused by the synchronization is about 85 us on G8600:
<img width="1491" height="804" alt="image"
src="https://github.com/user-attachments/assets/968611ee-f39a-4329-8150-1c4adba25dd1"
/>
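
A conceptual sketch of the fix, using a CUDA-style stream/event API as a stand-in for the NPU one (names are illustrative, not vllm-ascend's actual code):

```python
def run_step(graph, step_event, stream):
    graph.replay()  # launch iteration i's captured graph on `stream`
    # The fix: block the CPU until the replay has been issued before
    # recording the event that iteration i+1 will wait on. Without this,
    # a fast CPU records too early and the device hangs in event_wait;
    # with it, the cost is only the ~85 us bubble measured above.
    stream.synchronize()
    step_event.record(stream)
```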

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

---------

Signed-off-by: realliujiaxu <realliujiaxu@163.com>
Co-authored-by: hwhaokun <haokun0405@163.com>
2025-11-19 14:47:19 +08:00
Li Wang
91b6ba8ffe [CI] Fix kubernetes failed to resolve ip by dns name (#4240)
### What this PR does / why we need it?
In the scenario where the pod has started but the corresponding DNS
service is not yet ready, immediately resolving the DNS domain name will
fail. See
https://github.com/vllm-project/vllm-ascend/actions/runs/19436639688/job/55609108796
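
A hedged sketch of the obvious remedy, retrying resolution until Kubernetes DNS is ready (the helper name and timeouts are illustrative, not the exact CI change):

```python
import socket
import time

def wait_for_dns(hostname: str, timeout: float = 120.0,
                 interval: float = 5.0) -> str:
    """Poll until `hostname` resolves, instead of failing on first try."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            return socket.gethostbyname(hostname)
        except socket.gaierror:
            if time.monotonic() >= deadline:
                raise
            time.sleep(interval)
```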

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
2025-11-19 14:38:13 +08:00
jiangyunfan1
9a1cfb48d4 [TEST]Update prefixcache perf threshold for qwen3-32b-int8 (#4220)
### What this PR does / why we need it?
This PR updates the prefixcache threshold for qwen3-32b-int8 from 0.4 to
0.8, as the baseline has been improved.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
By running the test
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

Signed-off-by: jiangyunfan1 <jiangyunfan1@h-partners.com>
2025-11-17 19:06:54 +08:00
XiaoxinWang
e38ef2c434 support FULL graph mode for GQA (#3970)
### What this PR does / why we need it?
The current library only supports the FullDecodeOnly graph mode, which
enables full graph execution during the decode phase. This PR extends
support to allow full graph execution in both the prefill and decode
phases, referred to as FULL graph mode.
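
For illustration, a launch sketch following vLLM's `CUDAGraphMode` naming (`FULL` vs. the previous `FULL_DECODE_ONLY`); the exact plumbing on Ascend may differ:

```python
from vllm import LLM

# Request full-graph capture for both prefill and decode. The model name
# is just an example of a GQA model.
llm = LLM(
    model="Qwen/Qwen2.5-7B-Instruct",
    compilation_config={"cudagraph_mode": "FULL"},
)
```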

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

Signed-off-by: wangxiaoxin-sherie <wangxiaoxin7@huawei.com>
Co-authored-by: wangxiaoxin-sherie <wangxiaoxin7@huawei.com>
2025-11-17 10:50:35 +08:00
zhangxinyuehfad
67f2b3a031 [Test] Add deepseek v3.2 exp nightly test (#4191)
### What this PR does / why we need it?

- skip the nightly image build when the GitHub event is pull_request
- set imagePullPolicy to Always for the multi_node test
- move the multi_node tests ahead so some resources are cleaned up first
- decouple the nightly image build from the nightly tests, for fault tolerance

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

---------

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
Signed-off-by: wangli <wangli858794774@gmail.com>
Co-authored-by: wangli <wangli858794774@gmail.com>
2025-11-14 15:46:10 +08:00
欧派果奶我还要
f90ed95578 [CI] Add multi-nodes EPLB configs of DeepSeek-R1-W8A8 & Qwen3-235B-W8A8 (#4144)
### What this PR does / why we need it?
Add DeepSeek-R1-W8A8 and Qwen3-235B-W8A8 configs for the multi-node and
EPLB scenario.

### Does this PR introduce _any_ user-facing change?
no

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: 白永斌 <baiyongbin3@h-partners.com>
Co-authored-by: 白永斌 <baiyongbin3@h-partners.com>
2025-11-14 08:50:29 +08:00
Li Wang
7294f89e43 [CI] Add daily images build for nightly ci (#3989)
### What this PR does / why we need it?
Given the current excessively long build time of our nightly CI, I
recommend installing the necessary, confirmed versions of packages in the
Docker image to reduce the time required for integration testing,
including Mooncake and vLLM with fixed tags. This is expected to reduce
the nightly CI duration by 2 hours.

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
2025-11-13 20:10:12 +08:00
drslark
9d84172359 [BugFix] adapted e2e tests for Qwen3-next-mtp (#4160)
### What this PR does / why we need it?

Since https://github.com/vllm-project/vllm-ascend/pull/3967, chunked
prefill and splitfuse are enabled by default.

The e2e test for mtp is broken now.

After locating the bug, we found that a Triton operator does not support
chunked prefill.

But simply skipping the e2e test would be bad.

So we changed the e2e test to only cover the case in which chunked
prefill is off.

### Does this PR introduce _any_ user-facing change?

N/A

### How was this patch tested?

Because we only modified
`test_models_distributed_Qwen3_NEXT_MTP_TP4_SIMILARITY`, we only ran
`pytest -s
tests/e2e/multicard/test_qwen3_next.py::test_models_distributed_Qwen3_NEXT_MTP_TP4_SIMILARITY`
locally to test it.

Below is the result:

```text
==================================================================================================================== warnings summary ====================================================================================================================
usr/local/python3.11.10/lib/python3.11/site-packages/torch_npu/dynamo/torchair/__init__.py:8
  /usr/local/python3.11.10/lib/python3.11/site-packages/torch_npu/dynamo/torchair/__init__.py:8: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
    import pkg_resources

<frozen importlib._bootstrap>:241
  <frozen importlib._bootstrap>:241: DeprecationWarning: builtin type SwigPyPacked has no __module__ attribute

<frozen importlib._bootstrap>:241
  <frozen importlib._bootstrap>:241: DeprecationWarning: builtin type SwigPyObject has no __module__ attribute

tests/e2e/multicard/test_qwen3_next.py::test_models_distributed_Qwen3_NEXT_MTP_TP4_SIMILARITY
tests/e2e/multicard/test_qwen3_next.py::test_models_distributed_Qwen3_NEXT_MTP_TP4_SIMILARITY
  /usr/local/python3.11.10/lib/python3.11/site-packages/pydantic/_internal/_dataclasses.py:121: DeprecationWarning: The 'task' option has been deprecated and will be removed in v0.13.0 or v1.0, whichever comes first. Please remove this option.
    s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
======================================================================================================= 1 passed, 5 warnings in 314.52s (0:05:14) ========================================================================================================
sys:1: DeprecationWarning: builtin type swigvarlink has no __module__ attribute
```

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

Signed-off-by: drslark <slarksblood@qq.com>
2025-11-13 11:08:35 +08:00
Li Wang
3ca11d5a7c [CI] Fix nightly-ci (#4159)
### What this PR does / why we need it?
Explicitly specify `NUMEXPR_MAX_THREADS` to avoid `Error. nthreads
cannot be larger than environment variable "NUMEXPR_MAX_THREADS" (64)`

Signed-off-by: wangli <wangli858794774@gmail.com>
2025-11-12 22:06:49 +08:00
XiaoxinWang
1b4ce63ec9 fix fullgraph in ds. (#4016)
### What this PR does / why we need it?
DeepSeek doesn't have the 'AscendAttentionMetadataBuilder' class, so it
fails in fullgraph mode. We resolved the issue by modifying the code to
only check for 'GDNAttentionMetadataBuilder', while all other attention
cases follow the default branch.
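
A minimal sketch of the resulting dispatch (the builder names come from the PR; the surrounding function is an assumption):

```python
def select_attention_branch(builder) -> str:
    # Only Qwen3-Next's gated-delta-net builder gets the special path;
    # DeepSeek and every other backend fall through to the default branch.
    if type(builder).__name__ == "GDNAttentionMetadataBuilder":
        return "gdn"
    return "default"
```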

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: wangxiaoxin-sherie <wangxiaoxin7@huawei.com>
Co-authored-by: wangxiaoxin-sherie <wangxiaoxin7@huawei.com>
2025-11-12 10:11:43 +08:00
Canlin Guo
1c677c3b87 [Test][Accuracy] Add accuracy evaluation config for InternVL3_5-8B (#3964)
### What this PR does / why we need it?

To continuously monitor the accuracy of the InternVL3_5-8B model, this
PR adds the corresponding configuration file to the CI. We need to add
the `-hf` suffix to avoid incompatibility with the `lm-eval`
preprocessor.

### How was this patch tested?

`pytest -sv ./tests/e2e/models/test_lm_eval_correctness.py --config
./tests/e2e/models/configs/InternVL3_5-8B.yaml`


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: gcanlin <canlinguosdu@gmail.com>
2025-11-12 09:05:55 +08:00
jiangyunfan1
0e6e08e939 [TEST]Update nightly cases and add mtpx (#4111)
### What this PR does / why we need it?
This PR updates some nightly test cases and adds mtpx cases; we need to
test them daily.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
By running the test

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: jiangyunfan1 <jiangyunfan1@h-partners.com>
2025-11-11 17:39:58 +08:00
wangxiyuan
f811a24bf0 Remove VLLM_USE_V1 (#4086)
Drop VLLM_USE_V1 usage. This environment variable has already been removed from vLLM.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-11-11 15:43:39 +08:00
zhangxinyuehfad
d5567680a2 [Fixbug] Fix ut test (#4116)
### What this PR does / why we need it?
Fix UT test: pin pytest<9.0.0.
`test_models_distributed_Qwen3_NEXT_MTP_TP4_SIMILARITY` failed because of
https://github.com/vllm-project/vllm-ascend/pull/3967; skip it now and
fix it later.

Test ok:
https://github.com/vllm-project/vllm-ascend/actions/runs/19255274573/job/55048851066?pr=4116


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
2025-11-11 15:31:00 +08:00
zhangxinyuehfad
b77b4f1abf [Test] Add nightly test for DeepSeek-V3.2-Exp (#3908)
### What this PR does / why we need it?
Add nightly test for DeepSeek-V3.2-Exp


### How was this patch tested?
test action:

https://github.com/vllm-project/vllm-ascend/actions/runs/19156153634/job/54757008557?pr=3908


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
2025-11-11 10:29:57 +08:00
Yikun Jiang
e384755ce1 [Doc] Recover installation doc to use pip install (#4109)
### What this PR does / why we need it?
Use pip installation in the installation doc and change the related
doctest to validate it.

### Does this PR introduce _any_ user-facing change?
No, doc only

### How was this patch tested?
Doctest related CI passed
- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
2025-11-11 09:25:44 +08:00
zhaomingyu13
7ffbe73d54 [main][Bugfix] Fix ngram precision issue and open e2e ngram test (#4090)
### What this PR does / why we need it?
Fix ngram precision issue and open e2e ngram test

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: Icey <1790571317@qq.com>
Signed-off-by: zhaomingyu <zhaomingyu13@h-partners.com>
Co-authored-by: Icey <1790571317@qq.com>
2025-11-11 09:06:24 +08:00
Icey
e04a87f4be [BugFix] Fixes Qwen3-Next enable nz accuracy problem (#4058)
### What this PR does / why we need it?
- Fixes the Qwen3-Next accuracy problem when NZ is enabled

### Does this PR introduce _any_ user-facing change?
N/A


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: Icey <1790571317@qq.com>
Signed-off-by: wxsIcey <1790571317@qq.com>
2025-11-10 20:54:57 +08:00
zhangxinyuehfad
d40ba52454 [Fix] fix Qwen2-Audio-7B-Instruct accuracy test (#4017)
### What this PR does / why we need it?

fix Qwen2-Audio-7B-Instruct accuracy test

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
2025-11-10 11:54:18 +08:00
Levi
0a62e671fb [Feat] flashcomm_v2 optim solution (#3232)
### What this PR does / why we need it?
Supports the generalized FlashComm2 optimization, which reduces
communication overhead, decreases RmsNorm computation, and saves one
AllGather step by replacing the AllReduce operations in the Attention
module with a pre-AlltoAll and a post-AllGather (used in combination
with FlashComm1). This feature is enabled during the Prefill phase and
is recommended to be used together with FlashComm1, delivering broad
performance improvements, especially in long-sequence scenarios with
large tensor parallelism (TP) configurations. Benchmark tests show that
under the TP16DP1 configuration, it can improve the prefill performance
of the DeepSeek model by 8% on top of FlashComm1.
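
The underlying identity, as a hedged sketch: an AllReduce equals an AlltoAll plus a local reduction (i.e. a ReduceScatter) followed by an AllGather, which exposes a per-rank shard on which work like RmsNorm can run cheaply before gathering. This toy uses `torch.distributed` and assumes the first dimension divides evenly by the world size; it is not vllm-ascend's kernel.

```python
import torch
import torch.distributed as dist

def allreduce_as_alltoall_allgather(x: torch.Tensor) -> torch.Tensor:
    world = dist.get_world_size()
    chunks = [c.contiguous() for c in x.chunk(world, dim=0)]
    recv = [torch.empty_like(chunks[0]) for _ in range(world)]
    dist.all_to_all(recv, chunks)            # pre-AlltoAll
    shard = torch.stack(recv).sum(dim=0)     # local sum -> ReduceScatter
    # ... per-shard work (e.g. RmsNorm on 1/world of the tokens) here ...
    out = torch.empty_like(x)
    dist.all_gather_into_tensor(out, shard)  # post-AllGather
    return out
```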
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: zzhxx <2783294813@qq.com>
Signed-off-by: Levi-JQ <yujinqi2@huawei.com>
Co-authored-by: Levi-JQ <yujinqi2@huawei.com>
Co-authored-by: zzhxx <2783294813@qq.com>
2025-11-10 11:01:45 +08:00
jiangyunfan1
c116524379 [TEST]Add qwen3-235b-w8a8 and qwen3-30b-w8a8 nightly test (#3973)
### What this PR does / why we need it?
This PR adds some qwen3-235b-w8a8 and qwen3-30b-w8a8 cases; we need to
test them daily.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
by running the test

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: jiangyunfan1 <jiangyunfan1@h-partners.com>
2025-11-08 18:49:28 +08:00
wangx700
24d6314718 [Bugfix] fix sleepmode level2 e2e test (#4019)
### What this PR does / why we need it?

Enable the sleep mode level 2 e2e test and add check logic to ensure
that NZ is not enabled.

### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?

use e2e tests


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: wangx700 <wangxin700@huawei.com>
2025-11-08 14:11:55 +08:00
offline893
f7ca3bc0fa [CI]Fix eplb ci. (#4052)
### What this PR does / why we need it?
This PR fixes the EPLB CI.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: offline0806 <3337230449@qq.com>
Co-authored-by: offline0806 <3337230449@qq.com>
2025-11-07 23:53:35 +08:00
drslark
23b785fdfb [Feat] Adapted mtp function to Qwen3-next (#3918)
### What this PR does / why we need it?

Adapts mtp function to Qwen3-next.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: drslark <slarksblood@qq.com>
2025-11-07 16:39:03 +08:00
Li Wang
259eb25f88 [CI] Quick fix mooncake for nightly-ci (#4028)
### What this PR does / why we need it?
Since we have upgraded to CANN 8.3rc1, we will no longer use the
privately maintained Mooncake repository, but instead use the official
release published by Mooncake:
https://github.com/kvcache-ai/Mooncake/releases/tag/v0.3.7.post2 .

Next step: this is only a temporary solution. We will integrate Mooncake
into the vllm-ascend base image later for easier use. See
https://github.com/vllm-project/vllm-ascend/pull/3989
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
2025-11-06 18:46:00 +08:00
jiangyunfan1
34b278a339 [TEST]Update nightly acc test standard (#4032)
### What this PR does / why we need it?
This PR updates the accuracy test standard for some cases; we need it to
better maintain accuracy.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
by running the test

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: jiangyunfan1 <jiangyunfan1@h-partners.com>
2025-11-06 16:58:38 +08:00
XiaoxinWang
738bf2b720 support qwen3-next full_decode_only mode. (#3949)
### What this PR does / why we need it?
Support qwen3-next full_decode_only mode.
bs=1, max_token=1024

| branch | tps | e2e time |
| --- | --- | --- |
| piecewise | 3.06 | 8.15 |
| fulldecodeonly | 7.2 | 3.47 |

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: wangxiaoxin-sherie <wangxiaoxin7@huawei.com>
Co-authored-by: wangxiaoxin-sherie <wangxiaoxin7@huawei.com>
2025-11-05 08:46:05 +08:00
zhangxinyuehfad
49e6983b3b [Test] Add accuracy test for qwen3-30b-a3b-w8a8 (#3807)
### What this PR does / why we need it?
Add accuracy test for qwen3-30b-a3b-w8a8
This PR depends on https://github.com/vllm-project/vllm-ascend/pull/3799

### How was this patch tested?
qwen3-30b-a3b-w8a8 accuracy test ok:

https://github.com/vllm-project/vllm-ascend/actions/runs/19062045267/job/54443732877?pr=3807
- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
2025-11-04 18:56:31 +08:00
realliujiaxu
bedf223771 [Perf] move quant before allgather in Allgather EP (#3420)
### What this PR does / why we need it?
Move quantization before AllGather in AllGather EP; relies on
https://github.com/vllm-project/vllm-ascend/pull/3334.

Deepseek R1 W8A8 performance on A2 with
`HCCL_ALGO="level0:NA;level1:pipeline"`:
| Seq length | Mean TTFT (ms) main | Mean TTFT (ms)  this PR |
|----------|----------|----------|
| 4k   |  375.21  | 364.99   |
| 16k  | 1465.23   | 1421.75  |
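
Why moving the quant earlier helps, as a toy sketch: gathering int8 activations moves half the bytes of bf16. This is a per-tensor quant for illustration only (vllm-ascend's W8A8 path differs), and the per-rank scales must travel alongside the int8 payload.

```python
import torch
import torch.distributed as dist

def quant_then_allgather(x: torch.Tensor):
    scale = x.abs().amax().clamp(min=1e-8) / 127.0
    x_q = (x / scale).round().clamp(-128, 127).to(torch.int8)
    world = dist.get_world_size()
    out = x_q.new_empty((world * x_q.shape[0], *x_q.shape[1:]))
    dist.all_gather_into_tensor(out, x_q)  # 1 byte/elem instead of 2
    scales = [torch.empty_like(scale) for _ in range(world)]
    dist.all_gather(scales, scale)         # gather per-rank scales too
    return out, scales
```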
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: realliujiaxu <realliujiaxu@163.com>
2025-11-04 16:49:58 +08:00
jiangyunfan1
44b58b8665 [TEST]Add full graph for multimodal nightly tests (#3968)
### What this PR does / why we need it?
This PR adds a full graph case to the multimodal nightly tests; we need
to maintain this scenario.

### How was this patch tested?
by running the test
- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: jiangyunfan1 <jiangyunfan1@h-partners.com>
2025-11-04 16:47:48 +08:00
ZengSilong
dc1a6cb503 [Test]Add accuracy test for multiple models (#3823)
### What this PR does / why we need it?
Add accuracy test for multiple models:
- Meta_Llama_3.1_8B_Instruct
- Qwen2.5-Omni-7B
- Qwen3-VL-8B-Instruct

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: MrZ20 <2609716663@qq.com>
2025-11-04 14:46:39 +08:00
zhangxinyuehfad
646fbac7a9 [Test] Add accuracy test for qwen3-8b-w8a8 (#3799)
### What this PR does / why we need it?
Add accuracy test for qwen3-8b-w8a8

- vLLM version: v0.11.0rc3
- vLLM main:
c9461e05a4

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
2025-11-04 09:23:11 +08:00
wangxiyuan
cc2cd42ad3 Upgrade CANN to 8.3.rc1 (#3945)
### What this PR does / why we need it?
This PR upgrades CANN from 8.2rc1 to 8.3rc1 and removes the CANN version
check logic.

TODO: we noticed that UT runs fail with the CANN 8.3 image, so the base
image for UT is still 8.2. We'll fix it later.


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-11-03 20:21:07 +08:00
CodeCat
49d74785c4 [Test] Add new e2e test use deepseek-v2-lite in ge graph mode (#3937)
### What this PR does / why we need it?
The current test cases lack end-to-end (e2e) testing for the
deepseek-v2-lite network in GE graph mode.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: CodeNine-CJ <chenjian343@huawei.com>
2025-11-03 20:10:01 +08:00
Li Wang
8f222f21f1 [CI][Nightly] Fix mooncake build (#3958)
### What this PR does / why we need it?
Fix https://github.com/vllm-project/vllm-ascend/pull/3943

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
2025-11-03 20:07:47 +08:00
Li Wang
d0cc9c1203 [CI][Nightly] Correct the commit hash available for mooncake (#3943)
### What this PR does / why we need it?
Because the previous commit hash was accidentally deleted or overwritten,
this patch corrects the commit hash for
https://github.com/AscendTransport/Mooncake to make the nightly CI happy.
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: wangli <wangli858794774@gmail.com>
2025-11-01 21:52:16 +08:00
Canlin Guo
f99762eb25 [E2E][MM] Add e2e tests for InternVL model (#3796)
### What this PR does / why we need it?

As validation for #3664, add end-to-end tests to monitor the InternVL
model and ensure its continued proper operation. This PR is only for
single-card runs, so models with more parameters than 8B, such as 78B,
need to be tested using multiple cards.

### Does this PR introduce _any_ user-facing change?

None.

### How was this patch tested?

`pytest -sv tests/e2e/singlecard/multi-modal/test_internvl.py`


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: gcanlin <canlinguosdu@gmail.com>
2025-10-31 15:42:47 +08:00
lilinsiman
1f486b2dd1 [Test] Add new test model for aclgraph single_request (#3888)
### What this PR does / why we need it?
add new test model for aclgraph single_request

### Does this PR introduce _any_ user-facing change?
no

### How was this patch tested?
ut

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: lilinsiman <lilinsiman@gmail.com>
2025-10-31 11:23:13 +08:00
lilinsiman
35a913cf1e add new e2e tests case for aclgraph memory (#3879)
### What this PR does / why we need it?
Add a new e2e test case for aclgraph memory.

### Does this PR introduce _any_ user-facing change?
no

### How was this patch tested?
ut

- vLLM version: v0.11.0rc3
- vLLM main:
83f478bb19

Signed-off-by: lilinsiman <lilinsiman@gmail.com>
2025-10-31 09:16:52 +08:00
Li Wang
eb0a2ee2d0 [CI] Optimize nightly CI (#3898)
### What this PR does / why we need it?
This patch mainly fixes the problem of not being able to determine the
exit status of the pod's entrypoint script, plus some other tiny
optimizations:
1. Shorten the wait-for-server timeout
2. Fix a typo
3. Fix the issue of ais_bench failing to correctly access the proxy URL
in a PD-separation scenario.
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
2025-10-30 23:42:20 +08:00
jiangyunfan1
655a229455 [TEST]Add MALPO for aclgraph in nightly test (#3894)
### What this PR does / why we need it?
This PR adds MALPO for deepseek aclgraph; we need to test it nightly.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
By running the test

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: jiangyunfan1 <jiangyunfan1@h-partners.com>
2025-10-30 18:25:54 +08:00
Song Zhixin
216fc0e8e4 [feature] Prompt Embeddings Support for v1 Engine (#3026)
### What this PR does / why we need it?
This PR, based on
[19746](https://github.com/vllm-project/vllm/issues/19746), supports
Prompt Embeddings for the v1 engine on NPU.
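
A minimal sketch in the spirit of `examples/prompt_embed_inference.py` (flag and key names follow vLLM's prompt-embeddings API and may vary by version; the random tensor is a stand-in for real embedding-layer output):

```python
import torch
from vllm import LLM

llm = LLM(model="Qwen/Qwen2.5-1.5B-Instruct", enable_prompt_embeds=True)
# 16 "token" embeddings with this model's hidden size (1536).
prompt_embeds = torch.randn(16, 1536, dtype=torch.bfloat16)
outputs = llm.generate({"prompt_embeds": prompt_embeds})
print(outputs[0].outputs[0].text)
```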

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

```python
python examples/prompt_embed_inference.py
```


- vLLM version: v0.11.0
- vLLM main:
https://github.com/vllm-project/vllm/commit/releases/v0.11.1

---------

Signed-off-by: jesse <szxfml@gmail.com>
2025-10-30 17:15:57 +08:00
xuyexiong
eff3e5fc6f [FEAT] Refactor spec decode to support efficient padded speculation (#3528)
### What this PR does / why we need it?
1. Refactor the file `mtp_proposer.py`, splitting torchair-related code
into `mtp_torchair_proposer.py`
2. According to https://github.com/vllm-project/vllm/pull/24539,
implement padded speculative decoding as described in
https://github.com/vllm-project/vllm/issues/21984.
### Does this PR introduce _any_ user-facing change?
Users can use `disable_padded_drafter_batch` to disable/enable padded
speculation; the default is `False`.
offline example:
```
speculative_config={"method": "deepseek_mtp", "num_speculative_tokens": 1, "disable_padded_drafter_batch": False}
```

### How was this patch tested?

- [x] eager with pad/unpad
- [x] aclgraph with pad/unpad
- [x] torchair with pad/unpad

Performance test of deepseek-r1 with tp16, dp1:
- aclgraph with pad ITL: 168 ms
- aclgraph with unpad ITL: 169 ms
- original ITL: 178 ms


- vLLM version: v0.11.0rc3
- vLLM main:
83f478bb19

---------

Signed-off-by: xuyexiong <xuyexiong@huawei.com>
2025-10-30 16:53:05 +08:00