### What this PR does / why we need it?
To support the data collection capabilities of msServiceProfiler on the
vllm-ascend framework and enable customization of data collection points
via a configuration file, a default profiling configuration has been added
to vllm-ascend, facilitating debugging and optimization for developers
and users.
### Does this PR introduce _any_ user-facing change?
None
### How was this patch tested?
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
Signed-off-by: minghangc <29514143@qq.com>
Drop VLLM_USE_V1 usage. This env has been removed from vLLM already.
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
### What this PR does / why we need it?
Use pip installation in the installation doc and update the related
doctest to validate it.
### Does this PR introduce _any_ user-facing change?
No, doc only
### How was this patch tested?
Doctest related CI passed
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
### What this PR does / why we need it?
v0.11.0rc1 will introduce the w4a4 quantization feature, so add this
tutorial.
### Does this PR introduce _any_ user-facing change?
No
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
Signed-off-by: 22dimensions <waitingwind@foxmail.com>
### What this PR does / why we need it?
The first-generation model uses "LLama", while subsequent models use
"Llama"; the second "L" should be lowercase. Other instances of "LLama"
on this page are corrected accordingly.
### Does this PR introduce _any_ user-facing change?
no
### How was this patch tested?
ut
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
Signed-off-by: herizhen <you@example.com>
Co-authored-by: herizhen <you@example.com>
### What this PR does / why we need it?
Closes #3728, #3657.
The main branch is now aligned with the vllm `releases/v0.11.1` branch,
which no longer supports `Python 3.9`. Check it
[here](https://github.com/vllm-project/vllm/blob/releases/v0.11.1/pyproject.toml).
### Does this PR introduce _any_ user-facing change?
The newest version of vllm-ascend doesn't support Python 3.9.
### How was this patch tested?
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
Signed-off-by: gcanlin <canlinguosdu@gmail.com>
### What this PR does / why we need it?
Remove extra MLAPO installation step for DeepSeek-V3.2.
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
Signed-off-by: menogrey <1299267905@qq.com>
### What this PR does / why we need it?
Corrected errors in the information.
### Does this PR introduce _any_ user-facing change?
no
### How was this patch tested?
ut
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
Signed-off-by: lilinsiman <lilinsiman@gmail.com>
### What this PR does / why we need it?
Add model feature matrix table.
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
Signed-off-by: menogrey <1299267905@qq.com>
### What this PR does / why we need it?
Since we have upgraded to CANN 8.3rc1, we no longer use the privately
maintained Mooncake repository and instead use the official Mooncake
release:
https://github.com/kvcache-ai/Mooncake/releases/tag/v0.3.7.post2 .
Next step: this is only a temporary solution. We will integrate Mooncake
into the vllm-ascend base image later for easier use. See
https://github.com/vllm-project/vllm-ascend/pull/3989
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
---------
Signed-off-by: wangli <wangli858794774@gmail.com>
### What this PR does / why we need it?
- `global_segment_size` and `local_buffer_size` now use constants for
unified management.
- Added support for size inputs ending with GB, MB, KB, and B, while
remaining compatible with the existing input format (see the sketch below).
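A minimal sketch of the suffix handling described above, assuming a helper that normalizes a size string to bytes; the function name and exact rules are illustrative, not the actual vllm-ascend code:

```python
import re

# Illustrative helper, not the actual vllm-ascend implementation: parse
# sizes such as "4GB", "512MB", or a plain byte count like "1024".
_UNITS = {"GB": 1024**3, "MB": 1024**2, "KB": 1024, "B": 1}

def parse_size_to_bytes(value: str) -> int:
    match = re.fullmatch(r"(\d+)\s*(GB|MB|KB|B)?", value.strip().upper())
    if match is None:
        raise ValueError(f"invalid size string: {value!r}")
    number, unit = match.groups()
    # A bare number keeps the existing behavior: it is read as bytes.
    return int(number) * _UNITS[unit or "B"]
```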
### Does this PR introduce _any_ user-facing change?
- Users can use the new input formats
- The documentation has been updated accordingly
### How was this patch tested?
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
---------
Signed-off-by: 李子琦 <liziqi_ing@163.com>
### What this PR does / why we need it?
Add the adxl timeout parameter to the KV pool user guide, avoiding timeout
errors when initializing connections between devices.
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
Signed-off-by: Pz1116 <zpbzpb123123@gmail.com>
### What this PR does / why we need it?
Add a developer guide for EPLB.
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
---------
Signed-off-by: offline0806 <3337230449@qq.com>
Co-authored-by: offline0806 <3337230449@qq.com>
Add a version policy for the main branch to clarify how vllm-ascend works
with vllm.
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
### What this PR does / why we need it?
Add aclgraph developer guide.
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
Signed-off-by: zzzzwwjj <1183291235@qq.com>
### What this PR does / why we need it?
Refactor the DeepSeek-V3.2-Exp tutorial.
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
---------
Signed-off-by: menogrey <1299267905@qq.com>
### What this PR does / why we need it?
Set the adxl engine as the default Mooncake backend, because Ascend
Transport is no longer maintained.
Update the README to include instructions for installing Mooncake with
the adxl backend.
### Does this PR introduce _any_ user-facing change?
Users need to compile and install Mooncake with the adxl backend
according to the revised README instructions.
### How was this patch tested?
By CI.
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
Signed-off-by: nwpu-zxr <zhouxuerong2@huawei.com>
### What this PR does / why we need it?
This PR upgrades CANN from 8.2rc1 to 8.3rc1 and removes the CANN version
check logic.
TODO: we noticed that UTs fail with the CANN 8.3 image, so the base image
for UT is still 8.2. We'll fix it later.
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
### What this PR does / why we need it?
Upgrade torch-npu to the official release version 2.7.1
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
---------
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
We noticed that users sometimes build vllm-ascend with an incorrect torch
version. In this case the build passes, but running the code raises
`AttributeError: '_OpNamespace' '_C_ascend' object has no attribute
'weak_ref_tensor'`. Let's force the torch version to 2.7.1 and check the
torch version when building from source to fix the issue (see the sketch
below).
closes: #3342
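A minimal sketch of the kind of build-time guard described above, assuming the check runs at the start of a source build; the names are illustrative, not the actual setup code:

```python
import torch

REQUIRED_TORCH = "2.7.1"

def check_torch_version() -> None:
    # torch.__version__ may carry a local suffix such as "2.7.1+cpu",
    # so compare only the public version part.
    installed = torch.__version__.split("+")[0]
    if installed != REQUIRED_TORCH:
        raise RuntimeError(
            f"vllm-ascend must be built against torch=={REQUIRED_TORCH}, "
            f"but torch=={installed} is installed. Please reinstall torch "
            "before building from source."
        )
```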
- vLLM version: v0.11.0rc3
- vLLM main:
c9461e05a4
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Added instructions for resolving the 'invalid tar header' error during
docker pull on Kylin OS with ARM64 architecture on Atlas 300I hardware,
including steps for offline loading of Docker images.
---
### What this PR does / why we need it?
The primary motivation for this PR is to address a critical `docker
pull` failure that occurs on specific, yet important, enterprise
environments. Specifically, when operating on **Kylin OS with an ARM64
architecture on Atlas 300I hardware**, users frequently
encounter an `archive/tar: invalid tar header` error, which completely
blocks the setup process. This issue has been consistently reproduced,
with multiple retries failing with the same error, confirming that it is
a persistent environmental problem rather than a transient network
issue.
This guide provides a robust, step-by-step workaround using an
offline-loading method (`docker save` on a host machine and `docker
load` on the target machine). This solution is crucial for enabling
users on this platform to use vLLM.
This contribution does not directly fix an existing issue number, but it
proactively solves a significant environmental and usability problem for
a growing user base.
### Does this PR introduce _any_ user-facing change?
No. It does not alter any code, APIs, interfaces, or existing behavior of
the vLLM project.
### How was this patch tested?
The instructions and troubleshooting steps in this guide were validated
through a real-world, end-to-end test on the target hardware and OS.
The testing process was as follows:
1. **Problem Reproduction**: An attempt was made to directly `docker
pull` the `vllm-ascend:v0.10.0rc1-310p` image on a target machine
running Kylin OS (ARM64). The `invalid tar header` failure was
successfully and consistently reproduced, confirming the existence of
the problem.
2. **Solution Implementation**: The workaround detailed in the guide was
executed:
* On a separate host machine (Ubuntu x86_64), the image was successfully
pulled using the `--platform linux/arm64` flag.
* The image was then saved to a `.tar` archive using `docker save`.
* The `.tar` archive was transferred to the target Kylin OS machine.
* The image was successfully loaded from the archive using `docker load
-i ...`.
3. **End-to-End Validation**: After loading the image, the vLLM
container was launched on the target machine following the instructions
in the guide. Both online inference (via `curl` to the API server) and
offline inference (via the Python script) were executed successfully,
confirming that the entire workflow described in the document is
accurate and effective.
Since this is a documentation-only change based on a validated workflow,
no new unit or integration tests were added to the codebase.
- vLLM version: v0.11.0rc3
- vLLM main:
83f478bb19
---------
Signed-off-by: Liwx <liweixuan1014@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Much of the FAQ content is out of date; this PR refreshes it.
- vLLM version: v0.11.0rc3
- vLLM main:
c9461e05a4
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
### What this PR does / why we need it?
This pull request mainly does the following:
1. Add a doc for multi-node CI; the main content covers the underlying
mechanism and how to contribute
2. Simplify the config YAML to be more developer-friendly
3. Optimize the Mooncake installation script to prevent accidental
failures during installation
4. Fix the workflow to ensure the Kubernetes config can be applied
correctly
5. Add Qwen3-235B-W8A8 disaggregated_prefill test
6. Add GLM-4.5 multi-DP test
7. Add 2P1D 4-node disaggregated_prefill test
8. Refactor nightly tests
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
- vLLM version: v0.11.0rc3
- vLLM main:
17c540a993
---------
Signed-off-by: wangli <wangli858794774@gmail.com>
### What this PR does / why we need it?
Update the Pangu Pro MoE tutorials.
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0
Signed-off-by: menogrey <1299267905@qq.com>
### What this PR does / why we need it?
This PR introduces a new model loader called Netloader, which leverages
high-bandwidth P2P direct transfer between NPU cards to achieve weight
loading. Netloader is implemented as a plugin through the newly added
'register_model_loader' function in vLLM 0.10. It facilitates the
process of weight loading by sending weights from a pre-loaded model
(server) to an empty model of a newly started instance (client). The
server operates concurrently with normal inference tasks through
sub-threads and the 'stateless_init_torch_distributed_process_group' in
vLLM. The client initiates a transfer request after verifying that the
model and partitioning method are the same as the server's, and uses
HCCL's collective communication (send/recv) to load the weights in the
order they are stored in the model.
Application scenarios:
1. Significantly reduces inference instance startup time: by reusing the
weights of already loaded instances and performing high-speed transfers
directly between computing cards, this method reduces model loading
latency compared to traditional remote/local pull methods.
2. Reduces network and storage pressure: avoids repeatedly downloading
weight files from remote repositories, reducing the impact on centralized
storage and network traffic, thereby enhancing overall system stability
and service quality.
3. Improves resource utilization and reduces costs: accelerating the
loading process reduces reliance on redundant computing pools, allowing
computing resources to be elastically scaled and reclaimed as needed.
4. Enhances business continuity and high availability: in fault recovery
scenarios, new instances can quickly take over existing services,
avoiding prolonged business interruptions and improving the system's
high availability and user experience.
### Does this PR introduce _any_ user-facing change?
Netloader is activated via the existing `--load-format=netloader` and
`--model-loader-extra-config` flags; the model-loader-extra-config must
be passed as a JSON string (as it is now). A registration sketch follows
below.
Afterwards, you can check whether the outputs for the same sentence are
consistent when the temperature is set to 0.
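A minimal sketch of how a loader such as Netloader can register itself through the `register_model_loader` hook mentioned above; the import paths follow the vLLM 0.10 plugin example, and the method bodies are placeholders, not the actual Netloader implementation:

```python
from vllm.model_executor.model_loader import register_model_loader
from vllm.model_executor.model_loader.base_loader import BaseModelLoader

@register_model_loader("netloader")
class NetLoaderSketch(BaseModelLoader):
    def download_model(self, model_config) -> None:
        # Nothing to download: weights arrive from a peer server over HCCL.
        pass

    def load_weights(self, model, model_config) -> None:
        # The real loader first verifies that the peer uses the same model
        # and partitioning, then receives weights via HCCL send/recv in the
        # order they are stored in the model.
        ...
```

The loader is then selected at startup with `--load-format=netloader`, and its options are supplied through the JSON `--model-loader-extra-config` string described above.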
Signed-off-by: destinysky <kangrui10@126.com>
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0
---------
Signed-off-by: destinysky <kangrui10@126.com>
### What this PR does / why we need it?
Update the docker run command; specifically, add `--shm-size=1g`.
### Does this PR introduce _any_ user-facing change?
For users/developers pulling vllm-ascend via Docker, the container's
shared memory will be increased from the default 64MB to 1GB.
### How was this patch tested?
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0
Signed-off-by: wangli <wangli858794774@gmail.com>
### What this PR does / why we need it?
1. Add EPLB CI to cover changes to the EPLB feature.
2. Add validation of EPLB parameters (see the sketch below).
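A hedged sketch of the kind of parameter validation added here; the parameter names are illustrative, not the actual vllm-ascend EPLB config schema:

```python
# Hypothetical EPLB parameter check; the field names are illustrative only.
def validate_eplb_params(num_redundant_experts: int, num_experts: int) -> None:
    if num_redundant_experts < 0:
        raise ValueError("num_redundant_experts must be non-negative")
    if num_redundant_experts >= num_experts:
        raise ValueError(
            "num_redundant_experts must be smaller than the total "
            "number of experts"
        )
```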
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
Qwen on A3.
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0
---------
Signed-off-by: offline0806 <3337230449@qq.com>
Co-authored-by: offline0806 <3337230449@qq.com>
### What this PR does / why we need it?
We noticed that `patch_main` is never used. Usually a patch applies to all
versions, and when it targets a specific version we can use
`vllm_version_is` instead (see the sketch below). So let's remove the
unused sub-folder in the patch module to keep things clear.
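A short sketch of the version-gated alternative mentioned above, assuming vllm-ascend's `vllm_version_is` helper; the patch bodies are placeholders:

```python
# Version-gated patching instead of a per-version patch sub-folder.
# vllm_version_is compares the installed vLLM version against a literal
# version string.
from vllm_ascend.utils import vllm_version_is

def apply_patches() -> None:
    if vllm_version_is("0.11.0"):
        # patch specific to vLLM 0.11.0
        ...
    else:
        # default path for every other supported version
        ...
```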
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
### What this PR does / why we need it?
The Mooncake connector now supports external DP; also update the README.
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0
---------
Signed-off-by: liziyu <liziyu16@huawei.com>
### What this PR does / why we need it?
Added a shared memory size option to the docker run command. If shm-size
is not specified, Docker uses 64MB by default; in that case the vLLM
EngineCore process may core dump when the workload is high.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Done
Closes: https://github.com/vllm-project/vllm-ascend/issues/3513
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0
---------
Signed-off-by: likeful <irayki@gmail.com>
Signed-off-by: leijie2015 <irayki@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
### What this PR does / why we need it?
Port #1916 and #2157 to the master branch to fuse operators in DeepSeek
MoE layers, which reduces scheduling overhead on devices. Note that this
feature takes effect only when `tp_size = 1` and
`multistream_overlap_shared_expert` is enabled with torchair graph mode.
### Does this PR introduce _any_ user-facing change?
Users can enable this feature with `--additional-config
'{"torchair_graph_config":{"enabled":true, "enable_super_kernel":true},
"multistream_overlap_shared_expert":true}'`.
### How was this patch tested?
E2E deepseek serving with 2P1D disaggregated prefill scenarios.
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0
---------
Signed-off-by: linfeng-yuan <1102311262@qq.com>
### What this PR does / why we need it?
When using dynamic EPLB, patch the v1 executor to avoid failures when
creating child processes.
### How was this patch tested?
DeepSeek-V3 on A3.
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0
---------
Signed-off-by: offline0806 <3337230449@qq.com>
Co-authored-by: offline0806 <3337230449@qq.com>
### What this PR does / why we need it?
Refactor the multi-machine CI use cases. The purpose of this PR is to make
it easier to add multi-machine CI cases, allowing developers to add
multi-machine cluster model tests (including PD disaggregation) by simply
adding a new YAML configuration file.
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0
---------
Signed-off-by: wangli <wangli858794774@gmail.com>
### What this PR does / why we need it?
Fix docker CI for the main release.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0
Signed-off-by: menogrey <1299267905@qq.com>
Pin the version that runs stably on the 310I Duo to vllm-ascend
v0.10.0rc1.
### What this PR does / why we need it?
Since PR #2614, the 310I Duo has been broken. Although we are currently
working on a fix, there is no confirmed timeline in the short term. To
let users quickly find a working version instead of going back and forth
with trial and error, this PR pins the version in the 310I Duo guide.
### Does this PR introduce _any_ user-facing change?
NA
### How was this patch tested?
NA
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0
---------
Signed-off-by: leo-pony <nengjunma@outlook.com>
Optimize the multi-node guide: clarify the correspondence between
configuration items and nodes.
### What this PR does / why we need it?
Some issues have been caused by misunderstandings due to unclear guidance,
for example #3367.
### Does this PR introduce _any_ user-facing change?
NA
### How was this patch tested?
NA
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0
Signed-off-by: leo-pony <nengjunma@outlook.com>
### What this PR does / why we need it?
1. Record the expert map without dynamic EPLB.
2. Add `export PYTHONOPTIMIZE=1` when using dynamic EPLB.
3. Update the EPLB doc.
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
Qwen3_moe on A3.
- vLLM version: v0.11.0
---------
Signed-off-by: offline0806 <3337230449@qq.com>
Co-authored-by: offline0806 <3337230449@qq.com>
### What this PR does / why we need it?
Fix ZeroDivisionError when prefill_tp_size > num_kv_head, in this
situation, num_head_replica can be 0 and used to divide another value,
this PR restricts the minimum value of a to be 1. And this PR fix
tp_resharding README.
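An illustrative guard for the division described above, assuming `num_head_replica` comes from an integer division; the exact expression in the connector may differ:

```python
# When prefill_tp_size > num_kv_head, the integer division yields 0,
# which later triggers ZeroDivisionError; clamp to at least 1.
num_head_replica = max(1, num_kv_head // prefill_tp_size)
```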
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
By CI.
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0
---------
Signed-off-by: liziyu <liziyu16@huawei.com>
Signed-off-by: nwpu-zxr <zhouxuerong2@huawei.com>
Co-authored-by: liziyu <liziyu16@huawei.com>