Commit Graph

632 Commits

Author SHA1 Message Date
pz1116
b1488ecdb1 [main][doc][kv_pool]Add adxl timeout parameter in kv pool user guide (#4012)
### What this PR does / why we need it?
Add the adxl timeout parameter to the kv pool user guide, to avoid timeout errors
when initializing connections between devices.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: Pz1116 <zpbzpb123123@gmail.com>
2025-11-05 18:39:35 +08:00
offline893
5cff3069f4 [Doc]Add developer guide of eplb. (#3759)
### What this PR does / why we need it?
Add a developer guide for EPLB.



- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: offline0806 <3337230449@qq.com>
Co-authored-by: offline0806 <3337230449@qq.com>
2025-11-05 18:35:41 +08:00
pz1116
e0c23cb011 [docs] Add kv pool developer guide (#3752)
### What this PR does / why we need it?
Add kv pool developer guide

### Does this PR introduce _any_ user-facing change?
N/A

### How was this patch tested?
vLLM version: v0.11.0rc3
vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: Pz1116 <zpbzpb123123@gmail.com>
Signed-off-by: pz1116 <zpbzpb123123@gmail.com>
2025-11-05 18:03:36 +08:00
zouyida2052
1ba158567c [Doc] add mtp doc (#3770)
### What this PR does / why we need it?
Add MTP developer doc.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: zouyida2052 <zouyida2002@gmail.com>
2025-11-05 16:38:35 +08:00
wangxiyuan
3ac76fdccc [Doc] Update version policy (#3999)
Add a version policy for the main branch to clarify how vllm-ascend works with
vLLM.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-11-05 14:55:54 +08:00
zzzzwwjj
46d5a77688 [docs] add aclgraph developer guide (#3683)
### What this PR does / why we need it?
Add aclgraph developer guide.


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: zzzzwwjj <1183291235@qq.com>
2025-11-05 10:34:28 +08:00
zhangyiming
5f08e07208 [Doc] Refactor the DeepSeek-V3.2-Exp tutorial. (#3871)
### What this PR does / why we need it?
Refactor the DeepSeek-V3.2-Exp tutorial.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: menogrey <1299267905@qq.com>
2025-11-04 18:58:33 +08:00
zxr2333
15bb5098ad [PD Disaggregation]Set adxl engine as default backend and update README (#3761)
### What this PR does / why we need it?
Set the adxl engine as the default Mooncake backend, because Ascend Transport
is no longer maintained.
Update the README to include instructions for installing the adxl backend for
Mooncake.
### Does this PR introduce _any_ user-facing change?
Users need to compile and install the mooncake backend for adxl
according to the revised README instructions.
### How was this patch tested?
By CI.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: nwpu-zxr <zhouxuerong2@huawei.com>
2025-11-04 16:06:39 +08:00
wangxiyuan
cc2cd42ad3 Upgrade CANN to 8.3.rc1 (#3945)
### What this PR does / why we need it?
This PR upgrades CANN from 8.2.rc1 to 8.3.rc1 and removes the CANN version
check logic.

TODO: we noticed that UT runs fail with the CANN 8.3 image, so the base
image for UT is still 8.2. We'll fix it later.


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-11-03 20:21:07 +08:00
wangxiyuan
fcc9a0eaeb Update torch-npu version to 2.7.1 (#3896)
### What this PR does / why we need it?
Upgrade torch-npu to the official release version 2.7.1


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-10-31 17:16:31 +08:00
zhangxinyuehfad
5f6d1b3323 [Doc] Update doc for release notes (#3853)
### What this PR does / why we need it?
Update doc for release notes.

- vLLM version: v0.11.0
- vLLM main:
https://github.com/vllm-project/vllm/commit/releases/v0.11.1

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
2025-10-31 16:46:17 +08:00
wangxiyuan
10772d94e3 [Build] Force torch version (#3791)
We noticed that users sometimes build vllm-ascend with an incorrect torch
version. In this case, the build passes, but running the code raises
`AttributeError: '_OpNamespace' '_C_ascend' object has no attribute
'weak_ref_tensor'`. Let's force the torch version to 2.7.1 and check it when
building from source to fix the issue.
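
A minimal sketch of the pre-build check this implies, assuming a local vllm-ascend source checkout and the torch/torch-npu 2.7.1 versions named in these commits:

```bash
# Hedged example: pin and verify torch before building vllm-ascend from source.
pip install "torch==2.7.1" "torch-npu==2.7.1"
python -c "import torch; assert torch.__version__.startswith('2.7.1'), torch.__version__"
pip install -e .  # the source build itself now also checks the torch version
```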

closes: #3342

- vLLM version: v0.11.0rc3
- vLLM main:
c9461e05a4

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-10-30 15:53:15 +08:00
wangxiyuan
ff47524b88 [Doc] Remove modeling doc (#3789)
Remove the `modeling` doc; it is no longer useful.

- vLLM version: v0.11.0rc3
- vLLM main:
https://github.com/vllm-project/vllm/commit/releases/v0.11.1

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-10-30 15:53:02 +08:00
Liwx
eed1957f03 Add FAQ for docker pull error on Kylin OS (#3870)
Added instructions for resolving the 'invalid tar header' error during `docker
pull` on Kylin OS with an ARM64 architecture on Atlas 300I hardware, including
steps for offline loading of docker images.

---

### What this PR does / why we need it?

The primary motivation for this PR is to address a critical `docker
pull` failure that occurs on specific, yet important, enterprise
environments. Specifically, when operating on **Kylin OS (麒麟操作系统) with
an ARM64 architecture on Atlas300I hardware**, users frequently
encounter an `archive/tar: invalid tar header` error, which completely
blocks the setup process. This issue has been consistently reproduced,
with multiple retries failing with the same error, confirming that it is
a persistent environmental problem rather than a transient network
issue.

![Screenshot of the archive/tar: invalid tar header error during docker pull](https://github.com/user-attachments/assets/6c1c5728-de27-476f-8df4-723564fc290b)

This guide provides a robust, step-by-step workaround using an
offline-loading method (`docker save` on a host machine and `docker
load` on the target machine). This solution is crucial for enabling
users on this platform to use vLLM.
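
A condensed sketch of that offline-loading workaround; the registry, image tag, and hostname below are illustrative:

```bash
# On a host that can reach the registry (e.g. Ubuntu x86_64), pull the ARM64 image explicitly.
docker pull --platform linux/arm64 quay.io/ascend/vllm-ascend:v0.10.0rc1-310p
docker save -o vllm-ascend-310p.tar quay.io/ascend/vllm-ascend:v0.10.0rc1-310p

# Transfer the archive to the Kylin OS / Atlas 300I machine, then load it there.
scp vllm-ascend-310p.tar user@kylin-host:/tmp/
docker load -i /tmp/vllm-ascend-310p.tar
```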

This contribution does not directly fix an existing issue number, but it
proactively solves a significant environmental and usability problem for
a growing user base.

### Does this PR introduce _any_ user-facing change?

No. It does not alter any code, APIs, interfaces, or existing behavior of
the vLLM project.

### How was this patch tested?

The instructions and troubleshooting steps in this guide were validated
through a real-world, end-to-end test case on my hardware and OS.

The testing process was as follows:

1. **Problem Reproduction**: An attempt was made to directly `docker
pull` the `vllm-ascend:v0.10.0rc1-310p` image on a target machine
running Kylin OS (ARM64). The `invalid tar header` failure was
successfully and consistently reproduced, confirming the existence of
the problem.
2. **Solution Implementation**: The workaround detailed in the guide was
executed:
* On a separate host machine (Ubuntu x86_64), the image was successfully
pulled using the `--platform linux/arm64` flag.
* The image was then saved to a `.tar` archive using `docker save`.
* The `.tar` archive was transferred to the target Kylin OS machine.
* The image was successfully loaded from the archive using `docker load
-i ...`.
3. **End-to-End Validation**: After loading the image, the vLLM
container was launched on the target machine following the instructions
in the guide. Both online inference (via `curl` to the API server) and
offline inference (via the Python script) were executed successfully,
confirming that the entire workflow described in the document is
accurate and effective.

Since this is a documentation-only change based on a validated workflow,
no new unit or integration tests were added to the codebase.


- vLLM version: v0.11.0rc3
- vLLM main:
83f478bb19

---------

Signed-off-by: Liwx <liweixuan1014@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-10-30 14:10:52 +08:00
zhangxinyuehfad
789ba4c5c2 [Doc] Update doc (#3836)
### What this PR does / why we need it?

Update doc

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0rc3
- vLLM main:
https://github.com/vllm-project/vllm/commit/releases/v0.11.1

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
2025-10-29 11:03:39 +08:00
wangxiyuan
da5f2cc1e3 [Doc] Update FAQ (#3792)
Much of the FAQ content is out of date; this PR refreshes it.

- vLLM version: v0.11.0rc3
- vLLM main:
c9461e05a4

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-10-27 20:32:17 +08:00
Shanshan Shen
3e5ae49160 [MM][Doc] Update online serving tutorials for Qwen2-Audio (#3606)
### What this PR does / why we need it?
Update online serving tutorials for `Qwen2-Audio`.

Part of https://github.com/vllm-project/vllm-ascend/issues/3508.

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: shen-shanshan <467638484@qq.com>
2025-10-27 16:58:03 +08:00
zhangxinyuehfad
0637e8f021 [Doc] Update supported models (#3481)
### What this PR does / why we need it?
Update supported models

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
2025-10-25 11:13:46 +08:00
wangxiyuan
1a9feb3ba5 Update version doc (#3599)
1. Add v0.11.0-dev branch info
2. Mark rfc/long_seq_optimization branch as completed
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-10-25 09:37:56 +08:00
Li Wang
7f73c28a24 [CI][Doc] Optimize multi-node CI (#3565)
### What this PR does / why we need it?
This pull request mainly does the following things:
1. Add a doc for multi-node CI; the main content is the mechanism principle
and how to contribute
2. Simplify the config yaml to be more developer-friendly
3. Optimize the mooncake installation script to prevent accidental failures
during installation
4. Fix the workflow to ensure the Kubernetes resources can be applied correctly
5. Add Qwen3-235B-W8A8 disaggregated_prefill test
6. Add GLM-4.5 multi-DP test
7. Add 2P1D 4-node disaggregated_prefill test
8. Refactor nightly tests
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?


- vLLM version: v0.11.0rc3
- vLLM main:
17c540a993

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
2025-10-25 09:23:47 +08:00
zhangyiming
ebfd09a075 [Doc] Update the Pangu Pro MoE tutorials. (#3651)
### What this PR does / why we need it?
Update the Pangu Pro MoE tutorials.
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: menogrey <1299267905@qq.com>
2025-10-23 20:41:47 +08:00
Rui Kang
427b17e2da [Misc] Add a model loader that utilizes HCCL for weight loading (#2888)
### What this PR does / why we need it?

This PR introduces a new model loader called Netloader, which leverages
high-bandwidth P2P direct transfer between NPU cards to achieve weight
loading. Netloader is implemented as a plugin through the newly added
'register_model_loader' function in vLLM 0.10. It facilitates the
process of weight loading by sending weights from a pre-loaded model
(server) to an empty model of a newly started instance (client). The
server operates concurrently with normal inference tasks through
sub-threads and the 'stateless_init_torch_distributed_process_group' in
vLLM. The client initiates a transfer request after verifying that the
model and partitioning method are the same as the server's, and uses
HCCL's collective communication (send/recv) to load the weights in the
order they are stored in the model.

Application Scenarios:
1. Significantly reduces inference instance startup time: by reusing the
weights of already loaded instances and performing high-speed transfers
directly between computing cards, this method reduces model loading latency
compared to traditional remote/local pull methods.
2. Reduces network and storage pressure: avoids the need to repeatedly
download weight files from remote repositories, reducing the impact on
centralized storage and network traffic, thereby enhancing overall system
stability and service quality.
3. Improves resource utilization and reduces costs: accelerating the loading
process reduces reliance on redundant computing pools, allowing computing
resources to be elastically scaled and reclaimed as needed.
4. Enhances business continuity and high availability: in fault recovery
scenarios, new instances can quickly take over existing services, avoiding
prolonged business interruptions and improving the system's high availability
and user experience.

### Does this PR introduce _any_ user-facing change?

Netloader is activated through the existing `--load-format=netloader` and
`--model-loader-extra-config` options. The model-loader-extra-config must be
provided as a JSON string (as it is now).
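
A hedged sketch of the invocation shape; the model is a placeholder and the extra-config keys accepted by the Netloader plugin are not spelled out here:

```bash
# Illustrative only: replace <model> and fill the JSON with the Netloader plugin's actual options.
vllm serve <model> \
  --load-format netloader \
  --model-loader-extra-config '{"...": "..."}'
```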

Afterwards, you can check whether the outputs for the same sentence are
consistent when the temperature is set to 0.

Signed-off-by: destinysky <kangrui10@126.com>

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: destinysky <kangrui10@126.com>
2025-10-23 15:56:07 +08:00
Crazyang
f06a6cad1b [Doc] Update the modelslim website from gitee to gitcode. (#3615)
### What this PR does / why we need it?

Because the ModelSlim code repository has migrated from gitee to
gitcode, all relevant links in the repository have been updated.

[migration
notice](https://gitee.com/ascend/msit/tree/master/.%E6%9C%AC%E9%A1%B9%E7%9B%AE%E5%B7%B2%E7%BB%8F%E6%AD%A3%E5%BC%8F%E8%BF%81%E7%A7%BB%E8%87%B3%20Gitcode%20%E5%B9%B3%E5%8F%B0)

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

vLLM version: v0.11.0rc3
vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: Crazyang <im.crazyang@gmail.com>
Signed-off-by: Pr0Wh1teGivee <calvin_zhu0210@outlook.com>
Co-authored-by: weichen <calvin_zhu0210@outlook.com>
2025-10-23 15:38:16 +08:00
Li Wang
ca104ce6f0 [Doc] Upgrade docker run command (#3645)
### What this PR does / why we need it?
Update the docker run command, specifically: add --shm-size=1g
### Does this PR introduce _any_ user-facing change?
For users/developers running vllm-ascend via docker, the shared memory of the
container is increased from the default 64MB to 1GB.
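
For reference, a trimmed-down sketch of the resulting command; the device list, mounts, and image tag follow the usual vllm-ascend docker instructions and are illustrative:

```bash
docker run --rm -it \
  --name vllm-ascend \
  --shm-size=1g \
  --device /dev/davinci0 \
  --device /dev/davinci_manager \
  --device /dev/devmm_svm \
  --device /dev/hisi_hdc \
  -v /usr/local/dcmi:/usr/local/dcmi \
  -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
  -v /usr/local/Ascend/driver:/usr/local/Ascend/driver:ro \
  quay.io/ascend/vllm-ascend:v0.11.0rc3 bash
```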

### How was this patch tested?

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: wangli <wangli858794774@gmail.com>
2025-10-23 11:17:26 +08:00
KyrieWang
60e2be1b36 [Feat] Dynamic Batch Feature (#3490)
See the [RFC](https://github.com/vllm-project/vllm-ascend/issues/3328) for more
details.
Add a dynamic batch feature to the chunked prefill strategy; the token budget
can be refined to achieve better effective throughput and TPOT.

!!! NOTE: only 910B3 is supported so far; we are working on further
improvements.
An additional lookup table file is required.

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: Cheng Wang <wangchengkyrie@outlook.com>
2025-10-22 14:13:32 +08:00
offline893
e916265b2b [CI]Add EPLB CI. (#3568)
### What this PR does / why we need it?
1. Add EPLB CI to check changes to the EPLB feature.
2. Add parameter checking for EPLB params.
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?
Qwen in A3.


- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: offline0806 <3337230449@qq.com>
Co-authored-by: offline0806 <3337230449@qq.com>
2025-10-21 22:58:02 +08:00
wangxiyuan
13e8e75143 [Refactor] refactor patch module (#3555)
### What this PR does / why we need it?
We noticed that `patch_main` is never used. Usually a patch applies to all
versions, and if it targets a specific version, we can use `vllm_version_is`
instead. So let's remove the useless sub-folder in the patch module to make
things clearer.


- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-10-21 20:19:46 +08:00
liziyu
3164cb663c [Bugfix] mooncake connector support external dp & update readme (#3579)
### What this PR does / why we need it?

The Mooncake connector supports external DP; also update the readme.

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: liziyu <liziyu16@huawei.com>
2025-10-21 20:15:24 +08:00
likeful
6b6857929d [Doc] Add --shm-size option to Docker command for qwen3 vl 235B (#3519)
### What this PR does / why we need it?
Added a shared memory size option to the Docker run command. If shm-size is not
specified, docker uses 64MB by default; in this case, the vLLM EngineCore
process may coredump when the workload is high.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Done

Closes: https://github.com/vllm-project/vllm-ascend/issues/3513

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: likeful <irayki@gmail.com>
Signed-off-by: leijie2015 <irayki@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-10-20 23:37:35 +08:00
linfeng-yuan
068ed706c8 [feat][torchair] support super kernel feat for quantized dsr1 (#3485)
### What this PR does / why we need it?
Port #1916 and #2157 to master branch to fuse operators in deepseek moe
layers, which can reduce scheduling overhead on devices. Note that this
feature is valid only when `tp_size = 1` and
`multistream_overlap_shared_expert` is enabled with torchair graph mode.

### Does this PR introduce _any_ user-facing change?
Users can enable this feature with `--additional-config
'{"torchair_graph_config":{"enabled":true, "enable_super_kernel":true},
"multistream_overlap_shared_expert":true}'`.

### How was this patch tested?
E2E deepseek serving with 2P1D disaggregated prefill scenarios.


- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: linfeng-yuan <1102311262@qq.com>
2025-10-20 20:04:37 +08:00
offline893
6c9909c861 [Patch]patch of v1 executor when enable eplb. (#3511)
### What this PR does / why we need it?
When using dynamic EPLB, patch the v1 executor to avoid child-process creation
failures.

### How was this patch tested?
deepseek in v3.

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: offline0806 <3337230449@qq.com>
Co-authored-by: offline0806 <3337230449@qq.com>
2025-10-19 10:54:26 +08:00
Li Wang
4c4a8458a5 [CI] Refactor multi-node CI (#3487)
### What this PR does / why we need it?
Refactor the multi-machine CI use cases. The purpose of this PR is to make it
easier to add multi-machine CI use cases, allowing developers to add
multi-machine cluster model testing cases (including PD separation) by simply
adding a new YAML configuration file.
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
2025-10-17 09:04:31 +08:00
menogrey
9ff6b0b862 [CI]: Fix doctest ci for main release (#3451)
### What this PR does / why we need it?
Fix doctest CI for the main release.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: menogrey <1299267905@qq.com>
2025-10-16 14:38:11 +08:00
leo-pony
291c00a224 [Doc] pin version that can stable running 310I Duo to vllm-ascend v0.10.0rc1 (#3455)
Pin the version that runs stably on the 310I Duo to vllm-ascend v0.10.0rc1.

### What this PR does / why we need it?
Since PR #2614, the 310I Duo has been broken. Although we are currently working
on fixing this, there is no confirmed timeline for a fix in the short term. To
allow users to quickly find a working version instead of going back and forth
with trial and error, this PR pins the version in the 310I Duo guide.

### Does this PR introduce _any_ user-facing change?
NA

### How was this patch tested?
NA

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: leo-pony <nengjunma@outlook.com>
2025-10-16 08:54:09 +08:00
leo-pony
ff91904ee2 [Doc] Clearer corresponding relationship between configurations for multi-node guides (#3441)
Optimize the multi-node guide: clearer correspondence between configuration
items and nodes.

### What this PR does / why we need it?
Some issues were caused by misunderstandings due to unclear guidance content,
for example: #3367

### Does this PR introduce _any_ user-facing change?
NA
### How was this patch tested?
NA

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: leo-pony <nengjunma@outlook.com>
2025-10-16 08:54:03 +08:00
offline893
5a3082cd15 [EPLB]Record expert map without dynamic eplb. (#3409)
What this PR does / why we need it?
1. Record the expert map without dynamic EPLB.
2. Add `export PYTHONOPTIMIZE=1` when using dynamic EPLB (see the sketch below).
3. Change the EPLB doc.
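
A minimal sketch for item 2, assuming dynamic EPLB is switched on via `--additional-config` (the exact key name is an assumption; see the EPLB doc):

```bash
# Set PYTHONOPTIMIZE before launching the server when dynamic EPLB is enabled.
export PYTHONOPTIMIZE=1
vllm serve <moe-model> \
  --additional-config '{"dynamic_eplb": true}'  # "dynamic_eplb" key name assumed
```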

Does this PR introduce any user-facing change?
How was this patch tested?
Qwen3_moe in A3.

- vLLM version: v0.11.0

---------

Signed-off-by: offline0806 <3337230449@qq.com>
Co-authored-by: offline0806 <3337230449@qq.com>
2025-10-15 14:21:15 +08:00
zxr2333
c2c1db78a7 [Bugfix] fix ZeroDivisionError when prefill_tp_size > num_kv_head and fix tp_resharding README (#3437)
### What this PR does / why we need it?
Fix ZeroDivisionError when prefill_tp_size > num_kv_head: in this situation,
num_head_replica can be 0 while being used as a divisor, so this PR restricts
its minimum value to 1. This PR also fixes the tp_resharding README.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
By CI.

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: liziyu <liziyu16@huawei.com>
Signed-off-by: nwpu-zxr <zhouxuerong2@huawei.com>
Co-authored-by: liziyu <liziyu16@huawei.com>
2025-10-15 08:45:44 +08:00
TaoYu Chen
5fe883fa43 fix the title of modelrunner's prepare inputs docs (#3457)
### What this PR does / why we need it?
Fix the incorrect title of the modelrunner_prepare_inputs docs.

### Does this PR introduce _any_ user-facing change?
no

### How was this patch tested?
pass CI

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: ChenTaoyu-SJTU <ctynb@qq.com>
2025-10-14 20:35:58 +08:00
yuzhup
78777237a9 [2/N][Feat] Attention and MoE weight prefetch in Qwen3MoE models (#3203)
### What this PR does / why we need it?

- Refactor and integrate a unified `WeightPrefetchMethod`
- Integrate `gate_up_proj.weight` in quantized MoE modules
- Prefetching these weights ahead of matmul-like operators improves
performance by reducing L2 cache transfer latency

### Does this PR introduce _any_ user-facing change?

Add a new config in `--additional-config` for configuration:
```json
{
    "weight_prefetch_config": {
        "enabled": True,
        "prefetch_ratio": {
            "moe": {
                "gate_up": 0.8
            },
        },
    },
}
```
This feature is enabled by default and can be disabled through this
configuration.
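
As a usage sketch, the same JSON can be passed on the command line; the model name is a placeholder:

```bash
vllm serve <qwen3-moe-model> \
  --additional-config '{"weight_prefetch_config": {"enabled": true, "prefetch_ratio": {"moe": {"gate_up": 0.8}}}}'
```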

### How was this patch tested?


- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: yuzhup <15705211260@163.com>
2025-10-14 20:16:33 +08:00
wangxiaoteng888
19b85ef1bc [Bugfix] multi_node_pd_disaggregation_mooncake.md update (#3400)
### What this PR does / why we need it?
multi_node_pd_disaggregation_mooncake.md update. Fix issues encountered
during service startup.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
By ci


- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: wangxiaoteng@huawei.com <wangxiaoteng@huawei.com>
2025-10-14 09:29:35 +08:00
wangxiyuan
49b850270f [Community] Nominate new maintainers: @yiz-liu @paulyu12 @weijinqian0 @nalinaly (#3406)
I'd like to nominate 4 new maintainers for vllm-ascend: 

----

Yizhou Liu [@yiz-liu](https://github.com/yiz-liu)
----

**Review Quality**: He has completed [40+
reviews](https://github.com/vllm-project/vllm-ascend/pulls?q=is%3Apr+commenter%3Ayiz-liu)
and provided solutions or guidance for [10+
issues](https://github.com/vllm-project/vllm-ascend/issues?q=is%3Aissue%20commenter%3Ayiz-liu),
including many high-quality reviews such as
[#issue-3428408401](https://github.com/vllm-project/vllm-ascend/issues/3002#issue-3428408401),
[#discussion_r2224572309](https://github.com/vllm-project/vllm-ascend/pull/1803#discussion_r2224572309),
[#issuecomment-2982470226](https://github.com/vllm-project/vllm-ascend/pull/1261#issuecomment-2982470226),
[#issuecomment-2903621197](https://github.com/vllm-project/vllm-ascend/pull/836#issuecomment-2903621197),
[#issuecomment-2857678691](https://github.com/vllm-project/vllm-ascend/issues/778#issuecomment-2857678691).

**Sustained and High-Quality Contributions:** He has contributed more than
[30+ commits](https://github.com/vllm-project/vllm-ascend/commits?author=yiz-liu)
since Mar. 2025; in particular, his aclgraph, DP, and EP related contributions
are the main reason I nominated him. As the owner of aclgraph support, he
continuously improves aclgraph stability and performance and fixes key bugs.
He laid the groundwork for EP-related functionality and delivered multiple
foundational improvements.

**Community Involvement:** He has a very good habit of logging issues
(https://github.com/vllm-project/vllm-ascend/issues/1649) and is also very
active in [many
issues](https://github.com/vllm-project/vllm-ascend/issues?q=is%3Aissue%20state%3Aopen%20commenter%3Ayiz-liu%20-author%3Ayiz-liu),
helping users resolve them.

----

Peng Yu  [@paulyu12](https://github.com/paulyu12)
---
The main reasons for his nomination are his expertise in and key contributions
to LoRA, and his sustained, major contributions (initial support/doc/bugfix)
around LoRA.

**Sustained and Major Contributions:** @paulyu12 started his contributions
with [LoRA and Multi-LoRA
support](697908f5cd)
in Apr 2025 and has contributed about [10+ commits and
bugfixes](697908f5cd)
on vllm-ascend.
**Review Quality and Community Involvement:** He has also helped more than
10+ users address [LoRA related
issues](https://github.com/vllm-project/vllm-ascend/pulls?q=is%3Apr+commenter%3Apaulyu12+-author%3Apaulyu12+is%3Aclosed).

I believe his addition will further improve vLLM Ascend LoRA support.

----

Jinqian Wei [@weijinqian0](https://github.com/weijinqian0)
---
The main reasons for his nomination are his key contributions to the RL
scene and the high quality of his code reviews.

**Review Quality:** He has completed [60+
reviews](https://github.com/vllm-project/vllm-ascend/pulls?q=is%3Apr+commenter%3Aweijinqian0+is%3Aopen+-author%3Aweijinqian0)
since June 2025, including high-quality reviews such as
[#comment-3284055430](https://github.com/vllm-project/vllm-ascend/pull/2791#issuecomment-3284055430),
[discussion_r2332166704](https://github.com/vllm-project/vllm-ascend/pull/2817#discussion_r2332166704), and
[discussion_r2343289692](https://github.com/vllm-project/vllm-ascend/pull/2846#discussion_r2343289692).

**Sustained and Quality Contributions:** He has a deep understanding of the
vLLM and vLLM Ascend codebases and solid contributions to the RL scene
(about [10+ PRs
merged](https://github.com/vllm-project/vllm-ascend/pulls?q=is%3Apr+author%3Aweijinqian0+is%3Amerged+)
and 10+ PRs merged as co-author).

- Code Refactor: As a co-author, he participated in the refactoring of
the MOE module https://github.com/vllm-project/vllm-ascend/pull/2150
https://github.com/vllm-project/vllm-ascend/pull/2706
https://github.com/vllm-project/vllm-ascend/pull/2867
- Performance Enhancement for RL: Participated as a co-author in the
design and development of the solution, contributing to the planning of
core capabilities. https://github.com/vllm-project/vllm-ascend/pull/1547
https://github.com/vllm-project/vllm-ascend/pull/2120 and so on.

So I think he's a great addition to the vLLM Ascend Maintainer team.

----

Chuanyu Qin  [@nalinaly](https://github.com/nalinaly)
---
The main reason I nominated Chuanyu Qin is that he is the initial designer of
aclgraph and torch-npu, two key components of vllm-ascend. Considering that
aclgraph will eventually become the main path for vllm-ascend's graph mode, I
propose to nominate him.

**Sustained and Major Contributions:** In fact, Chuanyu has actively helped
the users/developers of vllm-ascend since Mar 2025
([vllm-discuss#162](https://discuss.vllm.ai/t/can-ascend-officially-draft-a-documentation-on-the-vllm-ascend-adaptation-for-graph-mode/162/5)),
and has also helped early users of vllm-ascend understand aclgraph. He
provided a lot of help in the process of integrating aclgraph with
vllm-ascend.

**Community Involvement:** As a speaker, he has also presented to help users
understand aclgraph and torch_npu: [《The design philosophy of torch_npu
and the high performance principle of
aclGraph》](https://github.com/PyTorch-China/pytorch-meetup/blob/main/beijing-2025/%E3%80%905%E3%80%91torch_npu%20%E7%9A%84%E8%AE%BE%E8%AE%A1%E5%93%B2%E5%AD%A6%E4%B8%8E%20aclGraph%20%E9%AB%98%E6%80%A7%E8%83%BD%E5%8E%9F%E7%90%86-%E7%A7%A6%E4%BC%A0%E7%91%9C-0920.pdf)

----

They have made active contributions to vllm-ascend or have rich experience
with Ascend AI.

Welcome!
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-10-14 08:51:58 +08:00
wangxiaoteng888
ca05f7d632 [Bugfix] TP size larger than KV cache head causes accuracy issues (#3366)
### What this PR does / why we need it?
Resolve the issue where, in the case of unequal TP (Tensor Parallelism), the
TP size is larger than the number of model attention KV cache heads, causing
the KV cache to be duplicated, which leads to transmission errors in the
original code.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
By ci
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: nwpu-zxr <zhouxuerong2@huawei.com>
Signed-off-by: wangxiaoteng <wangxiaoteng@huawei.com>
Co-authored-by: nwpu-zxr <zhouxuerong2@huawei.com>
2025-10-11 11:22:23 +08:00
wangxiyuan
ba19dd3183 Revert PTA upgrade PR (#3352)
We noticed that torch-npu 0919 doesn't work. This PR reverts the related
changes which rely on the 0919 version.
Reverted PRs: #3295  #3205  #3102

Related: #3353

- vLLM version: v0.11.0
2025-10-10 14:09:53 +08:00
Li Wang
60b7c936c5 [Doc] Update deepseek-v3.2 doc (#3319)
### What this PR does / why we need it?
Upgrade deepseek-v3.2 doc for A2
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
2025-10-10 08:55:39 +08:00
Ruri
ff37575936 [1/N][Feat] Add weight prefetch feature for Attention layers (#3146)
### What this PR does / why we need it?

- Refactor and integrate a unified `WeightPrefetchMethod`
- Integrate `qkv_proj.weight` and `o_proj.weight` in quantized Attention
modules
- Prefetching these weights ahead of matmul-like operators improves
performance by reducing L2 cache transfer latency

### Does this PR introduce _any_ user-facing change?

Add a new config in `--additional-config` for configuration:
```json
{
    "weight_prefetch_config": {
        "enabled": false,
        "prefetch_ratio": {
            "attn": {
                "qkv": 1.0,
                "o": 1.0,
            },
        },
    },
}
```
This feature is enabled by default and can be disabled through this
configuration.
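
Conversely, a sketch of disabling the prefetch entirely from the command line (the model name is a placeholder):

```bash
vllm serve <model> \
  --additional-config '{"weight_prefetch_config": {"enabled": false}}'
```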

### How was this patch tested?


- vLLM version: v0.11.0

---------

Signed-off-by: yuzhup <15705211260@163.com>
Signed-off-by: zhoux77899 <zhouxiang100@huawei.com>
Co-authored-by: yuzhup <15705211260@163.com>
2025-10-09 20:38:39 +08:00
Yikun Jiang
2dde1268c7 Fix doc for A2 series and cleanup note (#3307)
### What this PR does / why we need it?
Fix doc for A2 series and cleanup note

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
CI passed

- vLLM version: v0.11.0rc3
- vLLM main:
https://github.com/vllm-project/vllm/commit/releases/v0.11.0

Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
2025-10-01 14:39:48 +08:00
wangxiyuan
b8c58d68e1 [Doc] Add deepseek v3.2 tutorial (#3275)
Add deepseek v3.2 tutorial

- vLLM version: v0.11.0rc3
- vLLM main:
https://github.com/vllm-project/vllm/commit/releases/v0.11.0

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: MengqingCao <cmq0113@163.com>
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
Co-authored-by: wangli <wangli858794774@gmail.com>
Co-authored-by: MengqingCao <cmq0113@163.com>
Co-authored-by: Yikun Jiang <yikunkero@gmail.com>
2025-09-30 17:54:31 +08:00
wangxiyuan
4abdcdba4e upgrade pta to 0919 (#3295)
### What this PR does / why we need it?
Upgrade torch-npu to the newest POC version
### Does this PR introduce _any_ user-facing change?
Yes, users need to upgrade the PTA version as well.
### How was this patch tested?


- vLLM version: v0.11.0rc3
- vLLM main:
https://github.com/vllm-project/vllm/commit/releases/v0.11.0

---------

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-09-30 17:14:23 +08:00
wangxiyuan
00ba071022 [Doc] Release note for v0.11.0rc0 (#3224)
### What this PR does / why we need it?
Add release note for v0.11.0rc0

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?


- vLLM version: v0.11.0rc3
- vLLM main:
https://github.com/vllm-project/vllm/commit/releases/v0.11.0

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-09-30 03:26:18 +08:00
Yikun Jiang
5503a3142f Bump version to v0.11.0rc3 (#3213)
### What this PR does / why we need it?
Bump version to v0.11.0rc2 and prepare vLLM Ascend v0.11.0rc0

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
CI passed


- vLLM version: v0.10.2
- vLLM main:
https://github.com/vllm-project/vllm/commit/releases/v0.11.0

---------

Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
2025-09-29 21:48:06 +08:00