587 Commits

Author SHA1 Message Date
wangxiyuan
f811a24bf0 Remove VLLM_USE_V1 (#4086)
Drop VLLM_USE_V1 usage. This environment variable has already been removed from vLLM.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-11-11 15:43:39 +08:00
Yikun Jiang
e384755ce1 [Doc] Recover installation doc to use pip install (#4109)
### What this PR does / why we need it?
Use pip installation in installation doc and change related doctest to
validate.

### Does this PR introduce _any_ user-facing change?
No, doc only

### How was this patch tested?
Doctest related CI passed
- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
2025-11-11 09:25:44 +08:00
wangxiyuan
64220c68c5 [Doc] Add release note for v0.11.0rc1 (#3931)
Add release note for v0.11.0rc1.


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-11-10 21:01:50 +08:00
22dimensions
e6625bb582 [Doc] add qwen3 w4a4 tutorial (#4076)
### What this PR does / why we need it?
v0.11.0rc1 will introduce the w4a4 quantization feature, so this tutorial is
added.

### Does this PR introduce _any_ user-facing change?

No


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: 22dimensions <waitingwind@foxmail.com>
2025-11-10 20:30:07 +08:00
herizhen
75c3f9a780 [Typo] LLama has been changed to Llama (#4089)
### What this PR does / why we need it?
The first-generation model uses "LLama", while subsequent models use "Llama";
the second "L" should be lowercase. Other instances of "LLama" on this page
should be corrected accordingly.

### Does this PR introduce _any_ user-facing change?
no
### How was this patch tested?
ut

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: herizhen <you@example.com>
Co-authored-by: herizhen <you@example.com>
2025-11-10 16:22:52 +08:00
Canlin Guo
de49fb3deb [Feature][Build] Upgrade the minimum version to 3.10 (#3926)
### What this PR does / why we need it?

Closes #3728, #3657. 

The main branch is now aligned with the vllm `releases/v0.11.1` branch,
which no longer supports `Python 3.9`. Check it
[here](https://github.com/vllm-project/vllm/blob/releases/v0.11.1/pyproject.toml).

### Does this PR introduce _any_ user-facing change?

The newest version of vllm-ascend doesn't support Python 3.9.

### How was this patch tested?

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: gcanlin <canlinguosdu@gmail.com>
2025-11-10 11:50:12 +08:00
wangxiaoteng888
b1a00e0512 [docs] [P/D] add feature guide for disaggregated-prefill (#3950)
### What this PR does / why we need it?
add feature guide for disaggregated-prefill
### Does this PR introduce _any_ user-facing change?
no
### How was this patch tested?
by ci

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: wangxiaoteng <wangxiaoteng@huawei.com>
Signed-off-by: liziyu <liziyu16@huawei.com>
Signed-off-by: wangxiaoteng888 <56506195+wangxiaoteng888@users.noreply.github.com>
Co-authored-by: liziyu <liziyu16@huawei.com>
2025-11-10 09:31:30 +08:00
zhangyiming
a74e76b02d [Doc] Remove extra MLAPO installation step for DeepSeek-V3.2. (#4024)
### What this PR does / why we need it?
Remove extra MLAPO installation step for DeepSeek-V3.2.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: menogrey <1299267905@qq.com>
2025-11-10 09:09:59 +08:00
lilinsiman
a3ff765c65 [Info][main] Corrected the errors in the information (#4055)
### What this PR does / why we need it?
Corrected the errors in the information

### Does this PR introduce _any_ user-facing change?
no

### How was this patch tested?
ut

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: lilinsiman <lilinsiman@gmail.com>
2025-11-08 18:48:59 +08:00
zhangyiming
46ef280105 [Doc] Add model feature matrix table. (#4040)
### What this PR does / why we need it?
Add model feature matrix table.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: menogrey <1299267905@qq.com>
2025-11-07 11:28:05 +08:00
Li Wang
259eb25f88 [CI] Quick fix mooncake for nightly-ci (#4028)
### What this PR does / why we need it?
Since we have upgraded to CANN 8.3rc1, we will no longer use the
privately maintained Mooncake repository, but instead the official
release published by Mooncake:
https://github.com/kvcache-ai/Mooncake/releases/tag/v0.3.7.post2 .

Next step: this is only a temporary solution. We will integrate mooncake
into the vllm-ascend base image later for easier use. see
https://github.com/vllm-project/vllm-ascend/pull/3989
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
2025-11-06 18:46:00 +08:00
Liziqi-77
25b24c02ea [Feat](Mooncake) Supports multiple input suffixes for global_segment_size (#3690)
### What this PR does / why we need it?
- global_segment_size and local_buffer_size use constants for unified
management.
- Add support for input values ending with GB, MB, KB, or B, while
remaining compatible with the existing input format.
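The suffix handling described above can be sketched as follows (a minimal illustration; `parse_size` and the unit table are hypothetical names, not the actual Mooncake/vllm-ascend identifiers):

```python
# Hypothetical sketch of suffix-aware size parsing; the real helper in
# vllm-ascend may differ in name and behavior.
UNITS = {"B": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3}

def parse_size(value: str) -> int:
    """Parse '2GB', '512MB', '100B', or a bare '1024' into a byte count."""
    value = value.strip().upper()
    for suffix in ("GB", "MB", "KB", "B"):  # check longer suffixes first
        if value.endswith(suffix):
            number = value[: -len(suffix)]
            return int(float(number) * UNITS[suffix])
    return int(value)  # backward compatible: a bare number means bytes

print(parse_size("2GB"))    # 2147483648
print(parse_size("512MB"))  # 536870912
print(parse_size("1024"))   # 1024
```

A bare integer still parses as bytes, which is what keeps the existing input method working.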

### Does this PR introduce _any_ user-facing change?
- Users can use new input methods
- The documentation has also been modified

### How was this patch tested?


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: 李子琦 <liziqi_ing@163.com>
2025-11-06 14:48:15 +08:00
pz1116
b1488ecdb1 [main][doc][kv_pool]Add adxl timeout parameter in kv pool user guide (#4012)
### What this PR does / why we need it?
Add the adxl timeout parameter to the kv pool user guide, to avoid timeout
errors when initializing connections between devices.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: Pz1116 <zpbzpb123123@gmail.com>
2025-11-05 18:39:35 +08:00
offline893
5cff3069f4 [Doc]Add developer guide of eplb. (#3759)
### What this PR does / why we need it?
Add developer guide of eplb



- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: offline0806 <3337230449@qq.com>
Co-authored-by: offline0806 <3337230449@qq.com>
2025-11-05 18:35:41 +08:00
pz1116
e0c23cb011 [docs] Add kv pool developer guide (#3752)
### What this PR does / why we need it?
Add kv pool developer guide

### Does this PR introduce _any_ user-facing change?
N/A

### How was this patch tested?

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: Pz1116 <zpbzpb123123@gmail.com>
Signed-off-by: pz1116 <zpbzpb123123@gmail.com>
2025-11-05 18:03:36 +08:00
zouyida2052
1ba158567c [Doc] add mtp doc (#3770)
### What this PR does / why we need it?
add mtp develop doc

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: zouyida2052 <zouyida2002@gmail.com>
2025-11-05 16:38:35 +08:00
wangxiyuan
3ac76fdccc [Doc] Update version policy (#3999)
Add a version policy for the main branch to clarify how vllm-ascend works
with vLLM.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-11-05 14:55:54 +08:00
zzzzwwjj
46d5a77688 [docs] add aclgraph developer guide (#3683)
### What this PR does / why we need it?
Add aclgraph developer guide.


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: zzzzwwjj <1183291235@qq.com>
2025-11-05 10:34:28 +08:00
zhangyiming
5f08e07208 [Doc] Refactor the DeepSeek-V3.2-Exp tutorial. (#3871)
### What this PR does / why we need it?
Refactor the DeepSeek-V3.2-Exp tutorial.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: menogrey <1299267905@qq.com>
2025-11-04 18:58:33 +08:00
zxr2333
15bb5098ad [PD Disaggregation]Set adxl engine as default backend and update README (#3761)
### What this PR does / why we need it?
Set the adxl engine as the default Mooncake backend, because Ascend
Transport is no longer maintained.
Update the README to include instructions for installing Mooncake with the
adxl backend.
### Does this PR introduce _any_ user-facing change?
Users need to compile and install the mooncake backend for adxl
according to the revised README instructions.
### How was this patch tested?
By CI.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: nwpu-zxr <zhouxuerong2@huawei.com>
2025-11-04 16:06:39 +08:00
wangxiyuan
cc2cd42ad3 Upgrade CANN to 8.3.rc1 (#3945)
### What this PR does / why we need it?
This PR upgrades CANN from 8.2rc1 to 8.3rc1 and removes the CANN version
check logic.

TODO: we noticed that UT runs fail with the CANN 8.3 image, so the base
image for UT is still 8.2. We'll fix it later.


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-11-03 20:21:07 +08:00
wangxiyuan
fcc9a0eaeb Update torch-npu version to 2.7.1 (#3896)
### What this PR does / why we need it?
Upgrade torch-npu to the official release version 2.7.1


- vLLM version: v0.11.0
- vLLM main:
83f478bb19

---------

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-10-31 17:16:31 +08:00
zhangxinyuehfad
5f6d1b3323 [Doc] Update doc for release notes (#3853)
### What this PR does / why we need it?
Update doc for release notes

- vLLM version: v0.11.0
- vLLM main:
https://github.com/vllm-project/vllm/commit/releases/v0.11.1

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
2025-10-31 16:46:17 +08:00
wangxiyuan
10772d94e3 [Build] Force torch version (#3791)
We noticed that users sometimes build vllm-ascend with an incorrect torch
version. In this case the build passes, but running the code raises
`AttributeError: '_OpNamespace' '_C_ascend' object has no attribute
'weak_ref_tensor'`. Let's force the torch version to 2.7.1 and check it when
building from source to fix the issue.

closes: #3342
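A minimal sketch of such a build-time pin, assuming a plain string comparison after stripping any local build tag (`normalize` and `check_torch_version` are illustrative names, not the actual build-script code):

```python
# Illustrative sketch of a build-time torch pin; the actual check in the
# vllm-ascend build scripts may differ in name and wording.
REQUIRED_TORCH = "2.7.1"

def normalize(version: str) -> str:
    """Strip any local build tag, e.g. '2.7.1+cpu' -> '2.7.1'."""
    return version.split("+")[0]

def check_torch_version(installed: str, required: str = REQUIRED_TORCH) -> None:
    """Fail the build early instead of failing later at runtime with the
    opaque '_C_ascend' ... 'weak_ref_tensor' AttributeError."""
    if normalize(installed) != required:
        raise RuntimeError(
            f"vllm-ascend requires torch=={required}, found {installed}"
        )

check_torch_version("2.7.1+cpu")  # ok: the local tag is ignored
```

Failing at build time turns a confusing runtime AttributeError into an actionable error message.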

- vLLM version: v0.11.0rc3
- vLLM main:
c9461e05a4

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-10-30 15:53:15 +08:00
wangxiyuan
ff47524b88 [Doc] Remove modeling doc (#3789)
Remove the `modeling` doc; it is no longer useful.

- vLLM version: v0.11.0rc3
- vLLM main:
https://github.com/vllm-project/vllm/commit/releases/v0.11.1

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-10-30 15:53:02 +08:00
Liwx
eed1957f03 Add FAQ for docker pull error on Kylin OS (#3870)
Added instructions for resolving the 'invalid tar header' error during
`docker pull` on Kylin OS with ARM64 architecture on Atlas300I hardware,
including steps for offline loading of Docker images.

---

### What this PR does / why we need it?

The primary motivation for this PR is to address a critical `docker
pull` failure that occurs on specific, yet important, enterprise
environments. Specifically, when operating on **Kylin OS (麒麟操作系统) with
an ARM64 architecture on Atlas300I hardware**, users frequently
encounter an `archive/tar: invalid tar header` error, which completely
blocks the setup process. This issue has been consistently reproduced,
with multiple retries failing with the same error, confirming that it is
a persistent environmental problem rather than a transient network
issue.

<img width="2060" height="525" alt="image"
src="https://github.com/user-attachments/assets/6c1c5728-de27-476f-8df4-723564fc290b"
/>

This guide provides a robust, step-by-step workaround using an
offline-loading method (`docker save` on a host machine and `docker
load` on the target machine). This solution is crucial for enabling
users on this platform to use vLLM.

This contribution does not directly fix an existing issue number, but it
proactively solves a significant environmental and usability problem for
a growing user base.

### Does this PR introduce _any_ user-facing change?

No. It does not alter any code, APIs, interfaces, or existing behavior of
the vLLM project.

### How was this patch tested?

The instructions and troubleshooting steps in this guide were validated
through a real-world, end-to-end test on my hardware and OS.

The testing process was as follows:

1. **Problem Reproduction**: An attempt was made to directly `docker
pull` the `vllm-ascend:v0.10.0rc1-310p` image on a target machine
running Kylin OS (ARM64). The `invalid tar header` failure was
successfully and consistently reproduced, confirming the existence of
the problem.
2. **Solution Implementation**: The workaround detailed in the guide was
executed:
* On a separate host machine (Ubuntu x86_64), the image was successfully
pulled using the `--platform linux/arm64` flag.
* The image was then saved to a `.tar` archive using `docker save`.
* The `.tar` archive was transferred to the target Kylin OS machine.
* The image was successfully loaded from the archive using `docker load
-i ...`.
3. **End-to-End Validation**: After loading the image, the vLLM
container was launched on the target machine following the instructions
in the guide. Both online inference (via `curl` to the API server) and
offline inference (via the Python script) were executed successfully,
confirming that the entire workflow described in the document is
accurate and effective.

Since this is a documentation-only change based on a validated workflow,
no new unit or integration tests were added to the codebase.


- vLLM version: v0.11.0rc3
- vLLM main:
83f478bb19

---------

Signed-off-by: Liwx <liweixuan1014@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-10-30 14:10:52 +08:00
zhangxinyuehfad
789ba4c5c2 [Doc] Update doc (#3836)
### What this PR does / why we need it?

Update doc

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0rc3
- vLLM main:
https://github.com/vllm-project/vllm/commit/releases/v0.11.1

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
2025-10-29 11:03:39 +08:00
wangxiyuan
da5f2cc1e3 [Doc] Update FAQ (#3792)
Much of the FAQ content is out of date; this PR refreshes it.

- vLLM version: v0.11.0rc3
- vLLM main:
c9461e05a4

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-10-27 20:32:17 +08:00
Shanshan Shen
3e5ae49160 [MM][Doc] Update online serving tutorials for Qwen2-Audio (#3606)
### What this PR does / why we need it?
Update online serving tutorials for `Qwen2-Audio`.

Part of https://github.com/vllm-project/vllm-ascend/issues/3508.

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: shen-shanshan <467638484@qq.com>
2025-10-27 16:58:03 +08:00
zhangxinyuehfad
0637e8f021 [Doc] Update supported models (#3481)
### What this PR does / why we need it?
Update supported models

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
2025-10-25 11:13:46 +08:00
wangxiyuan
1a9feb3ba5 Update version doc (#3599)
1. Add v0.11.0-dev branch info
2. Mark the rfc/long_seq_optimization branch as completed
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-10-25 09:37:56 +08:00
Li Wang
7f73c28a24 [CI][Doc] Optimize multi-node CI (#3565)
### What this PR does / why we need it?
This pull request mainly does the following things:
1. Add a doc for multi-node CI covering the mechanism and how to contribute
2. Simplify the config YAML to be more developer-friendly
3. Optimize the mooncake installation script to prevent accidental failures
during installation
4. Fix the workflow to ensure the Kubernetes config can be applied correctly
5. Add a Qwen3-235B-W8A8 disaggregated_prefill test
6. Add a GLM-4.5 multi-DP test
7. Add a 2p1d 4-node disaggregated_prefill test
8. Refactor nightly tests
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?


- vLLM version: v0.11.0rc3
- vLLM main:
17c540a993

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
2025-10-25 09:23:47 +08:00
zhangyiming
ebfd09a075 [Doc] Update the Pangu Pro MoE tutorials. (#3651)
### What this PR does / why we need it?
Update the Pangu Pro MoE tutorials.
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: menogrey <1299267905@qq.com>
2025-10-23 20:41:47 +08:00
Rui Kang
427b17e2da [Misc] Add a model loader that utilizes HCCL for weight loading (#2888)
### What this PR does / why we need it?

This PR introduces a new model loader called Netloader, which leverages
high-bandwidth P2P direct transfer between NPU cards to achieve weight
loading. Netloader is implemented as a plugin through the newly added
'register_model_loader' function in vLLM 0.10. It facilitates the
process of weight loading by sending weights from a pre-loaded model
(server) to an empty model of a newly started instance (client). The
server operates concurrently with normal inference tasks through
sub-threads and the 'stateless_init_torch_distributed_process_group' in
vLLM. The client initiates a transfer request after verifying that the
model and partitioning method are the same as the server's, and uses
HCCL's collective communication (send/recv) to load the weights in the
order they are stored in the model.

Application Scenarios:
1. Significantly Reduces Inference Instance Startup Time By reusing the
weights of already loaded instances and performing high-speed transfers
directly between computing cards, this method reduces model loading
latency compared to traditional remote/local pull methods.
2. Reduces Network and Storage Pressure Avoids the need to repeatedly
download weight files from remote repositories, reducing the impact on
centralized storage and network traffic, thereby enhancing overall
system stability and service quality.
3. Improves Resource Utilization and Reduces Costs Accelerating the
loading process reduces reliance on redundant computing pools, allowing
computing resources to be elastically scaled and reclaimed as needed.
4. Enhances Business Continuity and High Availability In fault recovery
scenarios, new instances can quickly take over existing services,
avoiding prolonged business interruptions and improving the system's
high availability and user experience.

### Does this PR introduce _any_ user-facing change?

Netloader utilizes the existing --load-format=netloader and
--model-loader-extra-config to be activated. The
model-loader-extra-config needs to be input as a JSON string (as it is
now)

Afterwards, you can check whether the outputs for the same sentence are
consistent when the temperature is set to 0.
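The pre-transfer verification described above (the client checks that the model and partitioning match the server's before any HCCL send/recv) might look roughly like this; all names here are illustrative, not the actual Netloader API:

```python
# Hypothetical, simplified sketch of the Netloader handshake: weights are
# streamed in storage order, so both sides must first agree on the model
# and on how it is partitioned. Names are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelMeta:
    model_name: str
    tp_size: int               # tensor-parallel partitioning
    param_order: tuple         # weight names in storage order

def can_transfer(server: ModelMeta, client: ModelMeta) -> bool:
    """Transfer is allowed only when metadata matches exactly, so that
    send/recv calls on both sides line up weight-for-weight."""
    return server == client

server = ModelMeta("qwen3-32b", tp_size=4, param_order=("embed", "layer0.qkv"))
client_ok = ModelMeta("qwen3-32b", tp_size=4, param_order=("embed", "layer0.qkv"))
client_bad = ModelMeta("qwen3-32b", tp_size=8, param_order=("embed", "layer0.qkv"))

print(can_transfer(server, client_ok))   # True
print(can_transfer(server, client_bad))  # False
```

A mismatch in either the model or its partitioning would scramble the send/recv ordering, which is why the check must precede the transfer.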

Signed-off-by: destinysky <kangrui10@126.com>

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: destinysky <kangrui10@126.com>
2025-10-23 15:56:07 +08:00
Crazyang
f06a6cad1b [Doc] Update the modelslim website from gitee to gitcode. (#3615)
### What this PR does / why we need it?

Because the ModelSlim code repository has migrated from gitee to
gitcode, all relevant links in the repository have been updated.

[migration
notice](https://gitee.com/ascend/msit/tree/master/.%E6%9C%AC%E9%A1%B9%E7%9B%AE%E5%B7%B2%E7%BB%8F%E6%AD%A3%E5%BC%8F%E8%BF%81%E7%A7%BB%E8%87%B3%20Gitcode%20%E5%B9%B3%E5%8F%B0)

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?


- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: Crazyang <im.crazyang@gmail.com>
Signed-off-by: Pr0Wh1teGivee <calvin_zhu0210@outlook.com>
Co-authored-by: weichen <calvin_zhu0210@outlook.com>
2025-10-23 15:38:16 +08:00
Li Wang
ca104ce6f0 [Doc] Upgrade docker run command (#3645)
### What this PR does / why we need it?
Update the docker run command, specifically: add --shm-size=1g
### Does this PR introduce _any_ user-facing change?
For users/developers pulling vllm-ascend via Docker, the container's shared
memory is increased from the default 64 MB to 1 GB.

### How was this patch tested?

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: wangli <wangli858794774@gmail.com>
2025-10-23 11:17:26 +08:00
KyrieWang
60e2be1b36 [Feat] Dynamic Batch Feature (#3490)
See the [RFC](https://github.com/vllm-project/vllm-ascend/issues/3328) for more
details.
Add a dynamic batch feature to the chunked prefill strategy; the token
budget can be refined to achieve better effective throughput and TPOT.

!!! NOTE: only 910B3 is supported so far; we are working on further
improvements. An additional lookup-table file is required.

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: Cheng Wang <wangchengkyrie@outlook.com>
2025-10-22 14:13:32 +08:00
offline893
e916265b2b [CI]Add EPLB CI. (#3568)
### What this PR does / why we need it?
1. Add an EPLB CI job to check changes to the EPLB feature.
2. Add validation of EPLB parameters.
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?
Qwen in A3.


- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: offline0806 <3337230449@qq.com>
Co-authored-by: offline0806 <3337230449@qq.com>
2025-10-21 22:58:02 +08:00
wangxiyuan
13e8e75143 [Refactor] refactor patch module (#3555)
### What this PR does / why we need it?
We noticed that `patch_main` is never used. Usually a patch applies to all
versions, and when one is for a specific version we can use `vllm_version_is`
instead. So let's remove the unused sub-folder in the patch module to make
it clearer.
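The version-gating idea can be illustrated with a toy stand-in (the real `vllm_version_is` helper lives in vllm-ascend and checks the installed vLLM version itself; the two-argument form here is purely for illustration):

```python
# Toy stand-in for version-gated patching; NOT the real vllm-ascend helper,
# which takes only the target version and inspects the installed vLLM.
def vllm_version_is(target: str, current: str) -> bool:
    """Return True when the running vLLM version matches the target,
    ignoring any local build tag like '+cpu'."""
    return current.split("+")[0] == target

def apply_patches(current_vllm_version: str) -> list:
    applied = ["patch_common"]           # patches valid for every version
    if vllm_version_is("0.11.0", current_vllm_version):
        applied.append("patch_0_11_0")   # version-specific workaround
    return applied

print(apply_patches("0.11.0"))  # ['patch_common', 'patch_0_11_0']
print(apply_patches("0.10.2"))  # ['patch_common']
```

Gating inside the code this way removes the need for per-version sub-folders in the patch module.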


- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-10-21 20:19:46 +08:00
liziyu
3164cb663c [Bugfix] mooncake connector support external dp & update readme (#3579)
### What this PR does / why we need it?

mooncake connector support external dp & update readme

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: liziyu <liziyu16@huawei.com>
2025-10-21 20:15:24 +08:00
likeful
6b6857929d [Doc] Add --shm-size option to Docker command for qwen3 vl 235B (#3519)
### What this PR does / why we need it?
Added a shared-memory size option to the Docker run command. If shm-size is
not specified, Docker uses 64 MB by default; in that case the vLLM EngineCore
process may core-dump under high workload.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Done

Closes: https://github.com/vllm-project/vllm-ascend/issues/3513

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: likeful <irayki@gmail.com>
Signed-off-by: leijie2015 <irayki@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-10-20 23:37:35 +08:00
linfeng-yuan
068ed706c8 [feat][torchair] support super kernel feat for quantized dsr1 (#3485)
### What this PR does / why we need it?
Port #1916 and #2157 to master branch to fuse operators in deepseek moe
layers, which can reduce scheduling overhead on devices. Note that this
feature is valid only when `tp_size = 1` and
`multistream_overlap_shared_expert` is enabled with torchair graph mode.

### Does this PR introduce _any_ user-facing change?
Users can enable this feature with `--additional-config
'{"torchair_graph_config":{"enabled":true, "enable_super_kernel":true},
"multistream_overlap_shared_expert":true}'`.

### How was this patch tested?
E2E deepseek serving with 2P1D disaggregated prefill scenarios.


- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: linfeng-yuan <1102311262@qq.com>
2025-10-20 20:04:37 +08:00
offline893
6c9909c861 [Patch]patch of v1 executor when enable eplb. (#3511)
### What this PR does / why we need it?
When using dynamic EPLB, patch the v1 executor to avoid failures when
creating child processes.

### How was this patch tested?
DeepSeek-V3.

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: offline0806 <3337230449@qq.com>
Co-authored-by: offline0806 <3337230449@qq.com>
2025-10-19 10:54:26 +08:00
Li Wang
4c4a8458a5 [CI] Refactor multi-node CI (#3487)
### What this PR does / why we need it?
Refactor the multi-node CI cases. The purpose of this PR is to make adding
multi-node CI cases easier, allowing developers to add multi-node cluster
model tests (including PD disaggregation) by simply adding a new YAML
configuration file.
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
2025-10-17 09:04:31 +08:00
menogrey
9ff6b0b862 [CI]: Fix doctest ci for main release (#3451)
### What this PR does / why we need it?
Fix doctest CI for the main release.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: menogrey <1299267905@qq.com>
2025-10-16 14:38:11 +08:00
leo-pony
291c00a224 [Doc] pin version that can stable running 310I Duo to vllm-ascend v0.10.0rc1 (#3455)
Pin the version that runs stably on the 310I Duo to vllm-ascend v0.10.0rc1.

### What this PR does / why we need it?
Since PR #2614, the 310I Duo has been broken. Although we are currently
working on fixing it, there is no confirmed timeline for a fix in the short
term. To let users quickly find a working version instead of going back and
forth on trial and error, this PR pins the version in the 310I Duo guide.

### Does this PR introduce _any_ user-facing change?
NA

### How was this patch tested?
NA

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: leo-pony <nengjunma@outlook.com>
2025-10-16 08:54:09 +08:00
leo-pony
ff91904ee2 [Doc] Clearer corresponding relationship between configurations for multi-node guides (#3441)
Optimize the multi-node guide: clarify the correspondence between
configuration items and nodes.

### What this PR does / why we need it?
Some issues caused by misunderstandings due to unclear guidance content,
for example: #3367

### Does this PR introduce _any_ user-facing change?
NA
### How was this patch tested?
NA

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: leo-pony <nengjunma@outlook.com>
2025-10-16 08:54:03 +08:00
offline893
5a3082cd15 [EPLB]Record expert map without dynamic eplb. (#3409)
### What this PR does / why we need it?
1. Record the expert map without dynamic EPLB.
2. Add `export PYTHONOPTIMIZE=1` when using dynamic EPLB.
3. Change the EPLB doc.

### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
Qwen3_moe on A3.

- vLLM version: v0.11.0

---------

Signed-off-by: offline0806 <3337230449@qq.com>
Co-authored-by: offline0806 <3337230449@qq.com>
2025-10-15 14:21:15 +08:00
zxr2333
c2c1db78a7 [Bugfix] fix ZeroDivisionError when prefill_tp_size > num_kv_head and fix tp_resharding README (#3437)
### What this PR does / why we need it?
Fix a ZeroDivisionError when prefill_tp_size > num_kv_head: in this
situation num_head_replica can be 0 while being used as a divisor, so this
PR clamps its minimum value to 1. This PR also fixes the tp_resharding
README.
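The clamp described above amounts to something like this (the names mirror the PR text; the real formula lives in the connector code and may differ):

```python
# Illustrative sketch of the divide-by-zero fix; num_kv_head, prefill_tp_size
# and the replica count mirror the PR description, not the actual code.
def head_replica_count(num_kv_head: int, prefill_tp_size: int) -> int:
    # Integer division yields 0 when prefill_tp_size > num_kv_head; since
    # the result is later used as a divisor, clamp it to at least 1.
    return max(1, num_kv_head // prefill_tp_size)

print(head_replica_count(16, 4))  # 4
print(head_replica_count(8, 16))  # 1  (would be 0 without the clamp)
```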

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
By CI.

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: liziyu <liziyu16@huawei.com>
Signed-off-by: nwpu-zxr <zhouxuerong2@huawei.com>
Co-authored-by: liziyu <liziyu16@huawei.com>
2025-10-15 08:45:44 +08:00
TaoYu Chen
5fe883fa43 Fix the title of modelrunner's prepare inputs docs (#3457)
### What this PR does / why we need it?
Fix the wrong title of the modelrunner_prepare_inputs docs

### Does this PR introduce _any_ user-facing change?
no

### How was this patch tested?
pass CI

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: ChenTaoyu-SJTU <ctynb@qq.com>
2025-10-14 20:35:58 +08:00