### What this PR does / why we need it?
Add tests for the multi-node DeepSeek-V2-Lite model in GE Graph mode,
and supplement the end-to-end (e2e) tests for this model's MLA and NZ
features.
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
---------
Signed-off-by: CodeNine-CJ <chenjian343@huawei.com>
### What this PR does / why we need it?
Avoid the mrope fusion op when running Qwen2.5-VL on an A+X machine.
### Does this PR introduce _any_ user-facing change?
no
### How was this patch tested?
Tested text VQA accuracy on G8600 with AISBench.
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
---------
Signed-off-by: 李少鹏 <lishaopeng21@huawei.com>
### What this PR does / why we need it?
Document some of the pitfalls I ran into when using AISBench to test
multi-modal models.
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
---------
Signed-off-by: gcanlin <canlinguosdu@gmail.com>
### What this PR does / why we need it?
Add the `register_buffer` parameter for the PD aggregated scenario in the
given example.
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
Signed-off-by: Pz1116 <zpbzpb123123@gmail.com>
Remove `get_metadata_cls`. It is only used by the V0 engine and has already been removed from vLLM.
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Fix a pcp + mtp bug when using ACL graph.
When using pcp + mtp, we need to flatten block_table to avoid an
irregular attention-mask shape. This was previously done in the MLA
attn_metadata builder, but we found that it changes the block_table
address and leads to incorrect results when ACL graph is enabled.
To fix this, we enlarge the block_table buffer and flatten block_table
in the model runner's prepare_inputs, so the block_table address is not
affected.
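For context, a minimal sketch of the buffer-reuse idea (all names, shapes and the copy logic below are hypothetical, not the actual vllm-ascend code): the model runner keeps one over-sized block_table buffer and writes into it in-place, so the address captured by the ACL graph never changes between steps.
```python
import torch

max_num_reqs, max_blocks_per_req, flatten_factor = 64, 128, 2

# Allocated once by the model runner; the captured ACL graph keeps pointing here.
block_table_buffer = torch.zeros(
    max_num_reqs, max_blocks_per_req * flatten_factor, dtype=torch.int32)

def write_block_table(block_table: torch.Tensor) -> torch.Tensor:
    """Copy the (already flattened) per-step block_table into the persistent buffer."""
    num_reqs, num_blocks = block_table.shape
    block_table_buffer[:num_reqs, :num_blocks].copy_(block_table)
    # Return a view of the same storage: its address never changes across steps.
    return block_table_buffer[:num_reqs]

print(write_block_table(torch.ones(4, 16, dtype=torch.int32)).shape)
```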
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
Signed-off-by: zhangsicheng5 <zhangsicheng5@huawei.com>
### What this PR does / why we need it?
Past: `npu_moe_gating_top_k` only supported the `group_count=256` pattern.
Now:
1. `npu_moe_gating_top_k` supports all sizes of `group_count`.
2. The functionality of `torch_npu.npu_moe_gating_top_k_softmax` is included in `torch_npu.npu_moe_gating_top_k`.
CANN: depends on 8.3.RC1
Performance:
1. GLM4.5-w8a8: TPS improves by 6%
2. Qwen3: the same as before
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
Signed-off-by: 1092626063 <1092626063@qq.com>
### What this PR does / why we need it?
Sorts aclgraph batch sizes in ascending order, corresponding to vLLM
[#26016](https://github.com/vllm-project/vllm/pull/26016).
This ensures that aclgraph batch sizes are sorted in ascending order when
aclgraph mode is enabled, improving consistency and compatibility with
later logic that may depend on the ordering.
### Does this PR introduce _any_ user-facing change?
None.
### How was this patch tested?
Waiting for #3886
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
Signed-off-by: Yizhou Liu <liu_yizhou@outlook.com>
### What this PR does / why we need it?
Fix the proxy when the host IP is given as a domain name.
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
---------
Signed-off-by: liziyu <liziyu16@huawei.com>
### What this PR does / why we need it?
The PD proxy now supports IPv6. The Mooncake connector checks whether an
IPv6 address is used and notifies the user.
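As a rough illustration of the check described above (the helper name and notice wording are made up, not the actual connector code), the standard-library `ipaddress` module is enough to detect an IPv6 literal and bracket it for URLs:
```python
import ipaddress

def describe_host(host: str) -> str:
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        return host  # hostname / domain name, leave as-is
    if addr.version == 6:
        print(f"Note: IPv6 address detected for {host}")
        return f"[{host}]"  # IPv6 literals must be bracketed in URLs
    return host

print(describe_host("::1"))        # -> [::1]
print(describe_host("10.0.0.1"))   # -> 10.0.0.1
```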
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
---------
Signed-off-by: liziyu <liziyu16@huawei.com>
### What this PR does / why we need it?
Currently, the default `cudagraph_capture_size` in vLLM is `[1, 2, 4, 8,
16, 24, ..., max_capture_size]`. However, this is not always the best
choice in every situation. This PR changes the default when running
Qwen3-MoE in a full-DP setting (`dp_size > 1` && `tp_size == 1`), which
is commonly used in large-scale EP.
Old:
`[1, 2, 4, 8, 16, 24, ..., max_capture_size]`
New:
`[1, 2, 5, 10, 15, 16, 24, ..., max_capture_size]`
This is mainly because the performance of the `_npu_paged_attention` op
degrades dramatically with the old settings. We hope to provide better
performance when users do not set a specific `cudagraph_capture_size`.
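Purely as an illustration of the shape of the change (this is not the actual vllm-ascend code, and the assumption that the tail of the list steps by 8 up to `max_capture_size` is inferred from the examples above):
```python
# Illustrative only: how the default capture-size list differs for Qwen3-MoE
# running full DP (dp_size > 1, tp_size == 1) versus the previous default.
def default_capture_sizes(max_capture_size: int, full_dp_qwen3_moe: bool) -> list:
    if full_dp_qwen3_moe:
        small = [1, 2, 5, 10, 15]   # new small-batch points
    else:
        small = [1, 2, 4, 8]        # previous default
    large = list(range(16, max_capture_size + 1, 8))  # 16, 24, ..., max (assumed step)
    return sorted(set(small + large))

print(default_capture_sizes(64, True))
# [1, 2, 5, 10, 15, 16, 24, 32, 40, 48, 56, 64]
```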
### Does this PR introduce _any_ user-facing change?
The default `cudagraph_capture_size` is modified in the above case.
However, if users have already set `cudagraph_capture_size`, this PR has
no effect.
### How was this patch tested?
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
---------
Signed-off-by: Angazenn <supperccell@163.com>
### What this PR does / why we need it?
Fix a MoE error when SP chunks the hidden_states, by disabling SP in a hacky way.
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
---------
Signed-off-by: weiguihua2 <weiguihua2@huawei.com>
### What this PR does / why we need it?
We often install vllm-ascend in developer mode, which has no `_build_info`
module. This raises an error in `utils.is_310p` and
`utils.sleep_model_enabled`, so we need to modify these two functions.
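A minimal sketch of the kind of guarded lookup this implies (the attribute name and fallback behaviour below are illustrative, not the actual `utils` code):
```python
from typing import Optional

def _soc_version() -> Optional[str]:
    """Read the SoC version from the generated _build_info module, if present."""
    try:
        from vllm_ascend import _build_info  # generated only by a full build
    except ImportError:
        return None  # developer/editable install: module does not exist
    return getattr(_build_info, "soc_version", None)  # attribute name is illustrative

def is_310p() -> bool:
    soc = _soc_version()
    return soc is not None and "310p" in soc.lower()
```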
### Does this PR introduce _any_ user-facing change?
not involved
### How was this patch tested?
not involved
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
Signed-off-by: Ronald1995 <ronaldautomobile@163.com>
### What this PR does / why we need it?
This PR updates the prefix-cache threshold for qwen3-32b-int from 0.4 to
0.8, as the baseline has been improved.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
By running the test
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
Signed-off-by: jiangyunfan1 <jiangyunfan1@h-partners.com>
### What this PR does / why we need it?
The current library only supports the FullDecodeOnly graph mode, which
enables full-graph execution during decode. This PR extends support
to allow full-graph execution in both prefill and decode, referred
to as FULL graph mode.
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
Signed-off-by: wangxiaoxin-sherie <wangxiaoxin7@huawei.com>
Co-authored-by: wangxiaoxin-sherie <wangxiaoxin7@huawei.com>
### What this PR does / why we need it?
[CI] Fix the no-space-left failure in the build-wheel CI.
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
Signed-off-by: menogrey <1299267905@qq.com>
### What this PR does / why we need it?
- Skip the nightly image build when the GitHub event is pull_request
- Set imagePullPolicy to Always for the multi_node test
- Move multi_node tests ahead so some resources are cleaned up first
- Decouple the nightly image build from the nightly tests, for tolerance
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
---------
Signed-off-by: hfadzxy <starmoon_zhang@163.com>
Signed-off-by: wangli <wangli858794774@gmail.com>
Co-authored-by: wangli <wangli858794774@gmail.com>
### What this PR does / why we need it?
Add benchmark results into `.gitignore`
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
Signed-off-by: shen-shanshan <467638484@qq.com>
### What this PR does / why we need it?
Add an `import_kernels` interface to avoid importing the unneeded vLLM C library.
Closes #3488. Reopens #3498 for CI.
### How was this patch tested?
CI tested.
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
---------
Signed-off-by: gcanlin <canlinguosdu@gmail.com>
### What this PR does / why we need it?
Introduces the `acl_graph_print` function to enable printing debug
information from code running inside an ACL graph, such as custom
operators.
This works by launching a host function on a dedicated stream, bypassing
the limitations of standard `print` within compiled graph execution. The
implementation handles the necessary stream subscriptions and ensures
they are properly unregistered upon exit.
### Does this PR introduce _any_ user-facing change?
None.
### How was this patch tested?
None.
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
Signed-off-by: Yizhou Liu <liu_yizhou@outlook.com>
### What this PR does / why we need it?
Add DeepSeek-R1-W8A8 and Qwen3-235B-W8A8 configs for the multi-node and
EPLB scenarios.
### Does this PR introduce _any_ user-facing change?
no
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
---------
Signed-off-by: 白永斌 <baiyongbin3@h-partners.com>
Co-authored-by: 白永斌 <baiyongbin3@h-partners.com>
### What this PR does / why we need it?
1. Qwen GQA attention_v1 optimization
2. DeepSeek MLA refactor: all-gather q -> all-gather kv
3. Model runner refactor for chunked prefill; some unused code is removed
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
---------
Signed-off-by: LookAround <lixushi@huawei.com>
Signed-off-by: Delphine-Nic <tanwenqin@huawei.com>
Co-authored-by: Delphine-Nic <tanwenqin@huawei.com>
### What this PR does / why we need it?
Given the excessively long build time of our nightly CI, I recommend
installing the necessary, confirmed package versions in the Docker image
to reduce the time required for integration testing, including Mooncake
and vLLM pinned to fixed tags. This is expected to reduce the nightly CI
duration by 2 hours.
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
---------
Signed-off-by: wangli <wangli858794774@gmail.com>
### What this PR does / why we need it?
- Fixes the lint test failure for Python 3.9, as Python 3.9 is no longer
supported.
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
Signed-off-by: leo-pony <nengjunma@outlook.com>
### What this PR does / why we need it?
Currently, the unit tests lack coverage for the Qwen3_moe model and
torchair_sfa, so supplementary tests are added.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
by CI
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
---------
Signed-off-by: CodeNine-CJ <chenjian343@huawei.com>
### What this PR does / why we need it?
Correct the mistake in the documentation.
### Does this PR introduce _any_ user-facing change?
no
### How was this patch tested?
ut
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
---------
Signed-off-by: lilinsiman <lilinsiman@gmail.com>
### What this PR does / why we need it?
Since https://github.com/vllm-project/vllm-ascend/pull/3967, chunked
prefill and splitfuse are enabled by default, and the e2e test for MTP
now breaks.
After locating the bug, we found that a Triton operator does not support
chunked prefill.
Skipping the e2e test entirely would be bad, so we changed it to only
test the case in which chunked prefill is off.
### Does this PR introduce _any_ user-facing change?
N/A
### How was this patch tested?
Because we only modified
`test_models_distributed_Qwen3_NEXT_MTP_TP4_SIMILARITY`, we only ran `pytest -s
tests/e2e/multicard/test_qwen3_next.py::test_models_distributed_Qwen3_NEXT_MTP_TP4_SIMILARITY`
locally to test it.
Below is the result:
```text
==================================================================================================================== warnings summary ====================================================================================================================
usr/local/python3.11.10/lib/python3.11/site-packages/torch_npu/dynamo/torchair/__init__.py:8
/usr/local/python3.11.10/lib/python3.11/site-packages/torch_npu/dynamo/torchair/__init__.py:8: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
import pkg_resources
<frozen importlib._bootstrap>:241
<frozen importlib._bootstrap>:241: DeprecationWarning: builtin type SwigPyPacked has no __module__ attribute
<frozen importlib._bootstrap>:241
<frozen importlib._bootstrap>:241: DeprecationWarning: builtin type SwigPyObject has no __module__ attribute
tests/e2e/multicard/test_qwen3_next.py::test_models_distributed_Qwen3_NEXT_MTP_TP4_SIMILARITY
tests/e2e/multicard/test_qwen3_next.py::test_models_distributed_Qwen3_NEXT_MTP_TP4_SIMILARITY
/usr/local/python3.11.10/lib/python3.11/site-packages/pydantic/_internal/_dataclasses.py:121: DeprecationWarning: The 'task' option has been deprecated and will be removed in v0.13.0 or v1.0, whichever comes first. Please remove this option.
s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
======================================================================================================= 1 passed, 5 warnings in 314.52s (0:05:14) ========================================================================================================
sys:1: DeprecationWarning: builtin type swigvarlink has no __module__ attribute
```
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
Signed-off-by: drslark <slarksblood@qq.com>
### What this PR does / why we need it?
In PR https://github.com/vllm-project/vllm-ascend/pull/3420, we
initially placed the quantization type (quant_type) in the MoECommMethod
class. However, since MoECommMethod follows a singleton pattern, it
couldn't accommodate scenarios where different layers in the model might
use different quantization approaches (e.g., MTP modules using
floating-point computation while the main model employs quantized
computation).
In this PR, we've moved the quantization type to the AscendFusedMoe
class and now pass it as a parameter to MoECommMethod.
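A conceptual sketch of the new ownership (class and method signatures are simplified and not the exact vllm-ascend ones): the singleton stays stateless and receives `quant_type` per call, while each layer keeps its own value.
```python
class MoECommMethod:            # shared singleton, must stay stateless
    def prepare(self, hidden_states, quant_type: str):
        ...  # select the quantized or floating-point path per call

class AscendFusedMoe:           # one instance per layer
    def __init__(self, quant_type: str, comm_method: MoECommMethod):
        self.quant_type = quant_type    # e.g. quantized for the main model,
        self.comm_method = comm_method  # floating-point for MTP modules

    def forward(self, hidden_states):
        return self.comm_method.prepare(hidden_states, self.quant_type)
```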
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
```bash
export HCCL_BUFFSIZE=1024
export VLLM_VERSION=0.11.0
vllm serve /home/data/DeepSeek-R1_w8a8/ \
--data-parallel-size 2 \
--tensor-parallel-size 8 \
--enable-expert-parallel \
--served-model-name dsv3 \
--max-model-len 32768 \
--max-num-batched-tokens 4096 \
--max-num-seqs 16 \
--quantization ascend \
--trust-remote-code \
--gpu-memory-utilization 0.9 \
--speculative-config '{"num_speculative_tokens": 2, "method":"deepseek_mtp"}'
```
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
---------
Signed-off-by: realliujiaxu <realliujiaxu@163.com>
### What this PR does / why we need it?
Only CPU tensors with `pin_memory=True` can be asynchronously copied to
the device. Currently, there are two places where non-pinned CPU tensors
are copied to the device, which triggers synchronous operations and
reduces the expected benefit of asynchronous scheduling.
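For reference, a standalone PyTorch illustration of the behaviour being relied on (the device string is just an example; it assumes an accelerator is available):
```python
import torch

device = "npu"  # assumed Ascend device; "cuda" behaves the same way

pageable = torch.empty(1 << 20)                 # regular CPU tensor
pinned = torch.empty(1 << 20, pin_memory=True)  # page-locked CPU tensor

# non_blocking=True only overlaps the copy with compute when the source is
# pinned; with a pageable source the transfer degrades to a synchronous copy.
x = pinned.to(device, non_blocking=True)
y = pageable.to(device, non_blocking=True)  # implicit sync, hurts async scheduling
```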
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
Signed-off-by: realliujiaxu <realliujiaxu@163.com>
### What this PR does / why we need it?
Explicitly set `NUMEXPR_MAX_THREADS` to avoid `Error. nthreads
cannot be larger than environment variable "NUMEXPR_MAX_THREADS" (64)`.
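A minimal sketch of the workaround (the thread count is only an example, and it assumes the variable is set before anything imports numexpr):
```python
import os

# Must be set before numexpr is first imported (directly or via pandas etc.).
os.environ.setdefault("NUMEXPR_MAX_THREADS", "64")  # 64 is just an example value

import numexpr  # noqa: E402  initializes within the allowed thread limit
```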
Signed-off-by: wangli <wangli858794774@gmail.com>
### What this PR does / why we need it?
Currently, we set `seq_lens` in the dummy attn_metadata to
`max_model_len` to obtain the maximum attention workspace during capturing.
However, always setting it to `max_model_len` causes dummy_run
to execute a long attention during actual inference. For example,
if there is a single request with `seq_lens` of [8] but `max_model_len` is
131072, the whole process is slowed down by dummy_run as it executes a
fake long-sequence attention. Therefore, we instead set it to `max_query_len`,
which is also consistent with the vLLM GPU implementation.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
---------
Signed-off-by: Angazenn <supperccell@163.com>
### What this PR does / why we need it?
DeepSeek does not have an `AscendAttentionMetadataBuilder` class, so it
fails in full-graph mode.
We resolved the issue by modifying the code to check only for
`GDNAttentionMetadataBuilder`, while all other attention cases follow
the default branch.
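A simplified sketch of the resulting dispatch (the surrounding function and method names are hypothetical; only the class-name check reflects this PR):
```python
def build_attn_metadata(builder, common_metadata):
    # Only the GDN builder gets the special handling; DeepSeek and every
    # other attention backend fall through to the default branch.
    if type(builder).__name__ == "GDNAttentionMetadataBuilder":
        return builder.build_for_graph_capture(common_metadata)  # hypothetical method
    return builder.build(common_metadata)
```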
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
Signed-off-by: wangxiaoxin-sherie <wangxiaoxin7@huawei.com>
Co-authored-by: wangxiaoxin-sherie <wangxiaoxin7@huawei.com>
### What this PR does / why we need it?
Replace masked in-place assignment with a device-side torch.where so the
selection stays on-device, allowing subsequent device ops to be enqueued
earlier and removing an implicit D2H sync, reducing latency by several
hundred μs on Ascend.
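A generic PyTorch illustration of the pattern (tensors and the device string are made up):
```python
import torch

x = torch.randn(8, device="npu")  # assumed Ascend device; "cuda" also works
y = torch.zeros_like(x)
mask = x > 0

# Before: boolean-mask assignment, which can trigger a device-to-host sync
x[mask] = y[mask]

# After: torch.where keeps the selection entirely on device, so subsequent
# ops can be enqueued without waiting for the mask to be read back on host
x = torch.where(mask, y, x)
```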
### Does this PR introduce _any_ user-facing change?
None.
### How was this patch tested?
None.
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
Signed-off-by: Yizhou Liu <liu_yizhou@outlook.com>
### What this PR does / why we need it?
To support the data collection capabilities of the msServiceProfiler on
the vllm-ascend framework and enable customization of data collection points
via a configuration file, a default profiling configuration has been added
to vllm-ascend, facilitating debugging and optimization for developers
and users.
### Does this PR introduce _any_ user-facing change?
None
### How was this patch tested?
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
Signed-off-by: minghangc <29514143@qq.com>
### What this PR does / why we need it?
To continuously monitor the accuracy of the InternVL3_5-8B model, this
PR adds the corresponding configuration file to the CI. We need to add
the `-hf` suffix to avoid incompatibility with the `lm-eval`
preprocessor.
### How was this patch tested?
`pytest -sv ./tests/e2e/models/test_lm_eval_correctness.py --config
./tests/e2e/models/configs/InternVL3_5-8B.yaml`
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
Signed-off-by: gcanlin <canlinguosdu@gmail.com>
### What this PR does / why we need it?
Following #2167, oproj TP now supports ACL graph.
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
---------
Signed-off-by: zzhx1 <zzh_201018@outlook.com>
### What this PR does / why we need it?
This PR updates some nightly test cases and adds mtpx cases, which need
to be tested daily.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
By running the test
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
---------
Signed-off-by: jiangyunfan1 <jiangyunfan1@h-partners.com>