### What this PR does / why we need it?
Building upon https://github.com/vllm-project/vllm-ascend/pull/5517, which
enables batch-invariant (BI) inference in vllm-ascend, we observed that BI
performance in eager mode remains suboptimal.
This PR further integrates batch-invariant with torch.compile, which
improves inference throughput by roughly 3.5x when tested with Qwen3-0.6B
(see the benchmarks below).
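For reference, a minimal offline sketch of the configuration exercised here (not part of this PR's diff; the model path, capture sizes, and config keys are assumptions that simply mirror the serving command in the Performance section below):
```python
import os

# Batch-invariant kernels are selected via this environment variable;
# set it before vLLM initializes.
os.environ["VLLM_BATCH_INVARIANT"] = "1"

from vllm import LLM, SamplingParams

llm = LLM(
    model="/home/Qwen3-0.6B",  # local model path, as in the benchmarks below
    max_model_len=5500,
    # Capture full graphs for decode only, matching the serving flags below.
    compilation_config={
        "cudagraph_mode": "FULL_DECODE_ONLY",
        "cudagraph_capture_sizes": [1, 2, 4, 8, 16, 32],
    },
)

outputs = llm.generate(["Hello, world"],
                       SamplingParams(temperature=0.0, max_tokens=32))
print(outputs[0].outputs[0].text)
```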
### Does this PR introduce _any_ user-facing change?
Previously, enabling aclgraph together with batch-invariant mode caused an
"ub overflow" error, because transposed input tensors carry stride() values
that do not match the dense layout the kernels assume.
To fix this, we now call .contiguous() on the input tensors before passing
them to the Triton kernels (sketched below). This guarantees a contiguous
memory layout and prevents transposed tensors from producing incorrect
stride calculations.
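As an illustration of the pattern (the kernel below is a hypothetical stand-in, not one of the kernels touched by this PR): the inputs are made contiguous before the launch so that the flat-offset addressing inside the Triton kernel matches the actual memory layout.
```python
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
    # Flat, dense addressing: valid only if the inputs are contiguous.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # A transposed view (e.g. y.t()) shares storage with the original tensor,
    # so flat offsets computed under a dense-layout assumption would address
    # the wrong elements. .contiguous() is a no-op for already-dense tensors
    # and materializes a dense copy otherwise.
    x, y = x.contiguous(), y.contiguous()
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)
    add_kernel[grid](x, y, out, n, BLOCK=1024)
    return out
```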
### Test Plan
pytest -sv --durations=0 tests/e2e/singlecard/test_aclgraph_batch_invariant.py
### Test Result
```
============================================================================ slowest durations ============================================================================
87.37s call tests/e2e/singlecard/test_aclgraph_batch_invariant.py::test_v1_generation_is_deterministic_across_batch_sizes_with_needle
77.39s call tests/e2e/singlecard/test_aclgraph_batch_invariant.py::test_logprobs_bitwise_batch_invariance_bs1_vs_bsN
74.04s call tests/e2e/singlecard/test_aclgraph_batch_invariant.py::test_logprobs_without_batch_invariance_should_fail
73.59s call tests/e2e/singlecard/test_aclgraph_batch_invariant.py::test_simple_generation
(8 durations < 0.005s hidden. Use -vv to show these durations.)
================================================================ 4 passed, 3 warnings in 312.45s (0:05:12) ================================================================
```
### Performance
export VLLM_BATCH_INVARIANT=1
vllm serve /home/Qwen3-0.6B \
--served-model-name qwen \
--port 8000 \
--max-num-seqs 256 \
--tensor-parallel-size 1 \
--max-model-len 5500 \
--max-num-batched-tokens 5500 \
--reasoning-parser qwen3 \
--gpu-memory-utilization 0.9 \
--compilation_config '{"cudagraph_mode":"FULL_DECODE_ONLY",
"cudagraph_capture_sizes":[1,2,4,8,16,32]}' \
--additional-config '{"ascend_scheduler_config":{"enabled":true},"enable_weight_nz_layout":true}'
vllm bench serve --served-model-name qwen --trust-remote-code --backend vllm \
--model /home/Qwen3-0.6B/ --endpoint /v1/completions --dataset-name random \
--random-input-len 512 --random-output-len 256 --num-prompts 800 \
--max-concurrency 8
Batch-invariant with torch.compile (aclgraph) performance:
```
============ Serving Benchmark Result ============
Successful requests: 800
Failed requests: 0
Maximum request concurrency: 8
Benchmark duration (s): 477.21
Total input tokens: 409600
Total generated tokens: 204800
Request throughput (req/s): 1.68
Output token throughput (tok/s): 429.16
Peak output token throughput (tok/s): 472.00
Peak concurrent requests: 16.00
Total token throughput (tok/s): 1287.48
---------------Time to First Token----------------
Mean TTFT (ms): 285.53
Median TTFT (ms): 312.70
P99 TTFT (ms): 324.22
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 17.59
Median TPOT (ms): 17.50
P99 TPOT (ms): 18.44
---------------Inter-token Latency----------------
Mean ITL (ms): 17.59
Median ITL (ms): 17.45
P99 ITL (ms): 18.76
==================================================
```
Eager-mode batch-invariant performance:
```
============ Serving Benchmark Result ============
Successful requests: 800
Failed requests: 0
Maximum request concurrency: 8
Benchmark duration (s): 1694.70
Total input tokens: 409600
Total generated tokens: 204800
Request throughput (req/s): 0.47
Output token throughput (tok/s): 120.85
Peak output token throughput (tok/s): 136.00
Peak concurrent requests: 16.00
Total token throughput (tok/s): 362.54
---------------Time to First Token----------------
Mean TTFT (ms): 164.29
Median TTFT (ms): 129.71
P99 TTFT (ms): 1961.66
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 65.81
Median TPOT (ms): 65.15
P99 TPOT (ms): 72.27
---------------Inter-token Latency----------------
Mean ITL (ms): 65.81
Median ITL (ms): 64.64
P99 ITL (ms): 75.72
==================================================
```
- vLLM version: v0.13.0
- vLLM main: d68209402d
---------
Signed-off-by: huangning1995 <huangning12@huawei.com>
# vLLM Ascend Plugin
## Latest News 🔥
- [2025/12] We released the new official version v0.11.0! Please follow the official guide to start using vLLM Ascend Plugin on Ascend.
- [2025/09] We released the new official version v0.9.1! Please follow the official guide to start deploying large-scale Expert Parallelism (EP) on Ascend.
- [2025/08] We hosted the vLLM Beijing Meetup with vLLM and Tencent! Please find the meetup slides here.
- [2025/06] User stories page is now live! It kicks off with LLaMA-Factory/verl/TRL/GPUStack to demonstrate how vLLM Ascend assists Ascend users in enhancing their experience across fine-tuning, evaluation, reinforcement learning (RL), and deployment scenarios.
- [2025/06] Contributors page is now live! All contributions deserve to be recorded, thanks for all contributors.
- [2025/05] We've released the first official version v0.7.3! We collaborated with the vLLM community to publish a blog post sharing our practice: Introducing vLLM Hardware Plugin, Best Practice from Ascend NPU.
- [2025/03] We hosted the vLLM Beijing Meetup with vLLM team! Please find the meetup slides here.
- [2025/02] vLLM community officially created vllm-project/vllm-ascend repo for running vLLM seamlessly on the Ascend NPU.
- [2024/12] We are working with the vLLM community to support [RFC]: Hardware pluggable.
## Overview
vLLM Ascend (vllm-ascend) is a community maintained hardware plugin for running vLLM seamlessly on the Ascend NPU.
It is the recommended approach for supporting the Ascend backend within the vLLM community. It adheres to the principles outlined in the [RFC]: Hardware pluggable, providing a hardware-pluggable interface that decouples the integration of the Ascend NPU with vLLM.
By using vLLM Ascend plugin, popular open-source models, including Transformer-like, Mixture-of-Experts (MoE), Embedding, Multi-modal LLMs can run seamlessly on the Ascend NPU.
## Prerequisites
- Hardware: Atlas 800I A2 Inference series, Atlas A2 Training series, Atlas 800I A3 Inference series, Atlas A3 Training series, Atlas 300I Duo (Experimental)
- OS: Linux
- Software:
  - Python >= 3.10, < 3.12
  - CANN == 8.5.0 (Ascend HDK version refers to here)
  - PyTorch == 2.9.0, torch-npu == 2.9.0
  - vLLM (the same version as vllm-ascend)
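A quick, hypothetical sanity check of these prerequisites from Python (the version strings and the torch.npu check are assumptions about a typical install, not something prescribed by this README):
```python
import sys

# Python must be >= 3.10 and < 3.12 per the prerequisites above.
assert (3, 10) <= sys.version_info[:2] < (3, 12), "unsupported Python version"

import torch
print("torch:", torch.__version__)  # expected 2.9.0

try:
    import torch_npu  # Ascend adapter for PyTorch; registers the torch.npu backend
    print("torch-npu:", torch_npu.__version__)  # expected 2.9.0
    print("NPU available:", torch.npu.is_available())
except ImportError:
    print("torch-npu is not installed; see the Installation guide.")
```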
## Getting Started
Please use the following recommended versions to get started quickly:
| Version | Release type | Doc |
|---|---|---|
| v0.13.0rc1 | Latest release candidate | See QuickStart and Installation for more details |
| v0.11.0 | Latest stable version | See QuickStart and Installation for more details |
## Contributing
See CONTRIBUTING for more details, which is a step-by-step guide to help you set up the development environment, build and test.
We welcome and value any contributions and collaborations:
- Please let us know if you encounter a bug by filing an issue
- Please use the Users Forum for usage questions and help.
## Branch
vllm-ascend has a main branch and a dev branch.
- main: main branch, corresponds to the vLLM main branch, and is continuously monitored for quality through Ascend CI.
- releases/vX.Y.Z: development branch, created alongside new releases of vLLM. For example,
releases/v0.13.0 is the dev branch for the vLLM v0.13.0 version.
Below are the maintained branches:
| Branch | Status | Note |
|---|---|---|
| main | Maintained | CI commitment for vLLM main branch and vLLM v0.13.0 tag |
| v0.7.1-dev | Unmaintained | Only doc fixes are allowed |
| v0.7.3-dev | Maintained | CI commitment for vLLM 0.7.3 version, only bug fixes are allowed, and no new release tags anymore. |
| v0.9.1-dev | Maintained | CI commitment for vLLM 0.9.1 version |
| v0.11.0-dev | Maintained | CI commitment for vLLM 0.11.0 version |
| releases/v0.13.0 | Maintained | CI commitment for vLLM 0.13.0 version |
| rfc/feature-name | Maintained | Feature branches for collaboration |
Please refer to Versioning policy for more details.
## Weekly Meeting
- vLLM Ascend Weekly Meeting: https://tinyurl.com/vllm-ascend-meeting
- Wednesday, 15:00 - 16:00 (UTC+8, Convert to your timezone)
## License
Apache License 2.0, as found in the LICENSE file.
