seq_lens CPU cache to avoid frequent d2h copy for better performance (#6448)
### What this PR does / why we need it?
Currently, the performance of multi-modal encoding (i.e., the
`AscendMMEncoderAttention` forward pass) is largely bounded by heavy
host-side pre-processing.
As the profiling results below show, there is a long idle gap on the
device before the actual attention computation starts, which leads to
extremely low NPU utilization.
<img width="2264" height="1398" alt="iShot_2026-01-23_16 26 39"
src="https://github.com/user-attachments/assets/37f21d06-e526-4f28-82fe-005746cf13bd"
/>
---
**To optimize this, this PR proposes four changes:**
1. Use a `seq_lens` CPU cache to avoid frequent d2h copies. Before this PR,
`AscendMMEncoderAttention` copied `cu_seqlens` from NPU to CPU on every
forward pass, because the op `_npu_flash_attention_unpad()` requires
`cu_seqlens` on the CPU (otherwise it crashes). We therefore cache this
tensor in `seq_lens_cpu_cache`: it is shared by all layers but may change
between forward steps, so when the current `layer_index` is `0` we update
the cache, and otherwise we reuse it directly, avoiding the costly `diff`
and `copy` operations (see the sketch after this list).
2. Pre-compute the scale value to avoid calculating it in every forward.
3. Move the `enable_pad` check from `forward` to the `__init__` method.
4. Revert https://github.com/vllm-project/vllm-ascend/pull/6204.
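
A minimal Python sketch of how changes 1-3 fit together (the class, cache, and method names below are illustrative, not the actual vllm-ascend implementation): layer 0 performs the `diff` and d2h copy once per forward step and every later layer reuses the cached CPU tensor, while the softmax scale and the `enable_pad` decision are computed once in `__init__`.

```python
import math
from typing import Optional

import torch

# Hypothetical cache shared by all encoder attention layers within one
# forward step (the real PR uses its own cache object and name).
_SEQ_LENS_CPU_CACHE: Optional[torch.Tensor] = None


class MMEncoderAttentionSketch(torch.nn.Module):
    """Illustrative only; not the actual AscendMMEncoderAttention code."""

    def __init__(self, head_dim: int, enable_pad: bool, layer_index: int):
        super().__init__()
        self.layer_index = layer_index
        # Change 2: pre-compute the softmax scale once instead of per forward.
        self.scale = 1.0 / math.sqrt(head_dim)
        # Change 3: decide the padding behaviour once here, not per forward.
        self.enable_pad = enable_pad

    def _get_seq_lens_cpu(self, cu_seqlens: torch.Tensor) -> torch.Tensor:
        """Change 1: only layer 0 pays for the diff + d2h copy each step."""
        global _SEQ_LENS_CPU_CACHE
        if self.layer_index == 0 or _SEQ_LENS_CPU_CACHE is None:
            _SEQ_LENS_CPU_CACHE = torch.diff(cu_seqlens).cpu()
        return _SEQ_LENS_CPU_CACHE

    def forward(self, q, k, v, cu_seqlens: torch.Tensor):
        seq_lens_cpu = self._get_seq_lens_cpu(cu_seqlens)
        # The unpadded flash-attention op needs seq_lens on the CPU; the
        # attention call itself is omitted in this sketch.
        ...
```

Refreshing the cache at `layer_index == 0` works because every forward step enters layer 0 first, so the cache is always updated before deeper layers read it.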
**Performance after these optimizations:**
- **TTFT** has been reduced by **7.43%** ⬇️.
- **Throughput** has been increased by **1.23%** ⬆️.
---
> [!NOTE]
> This PR requires https://github.com/vllm-project/vllm/pull/33674 to be
merged.
---
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Launch the server:
```bash
vllm serve /root/.cache/modelscope/hub/models/Qwen/Qwen3-VL-8B-Instruct \
--dtype bfloat16 \
--limit-mm-per-prompt '{"image": 1}' \
--max-model-len 16384 \
--max-num-batched-tokens 16384 \
--no-async-scheduling
```
Run benchmark:
```bash
vllm bench serve \
--model /root/.cache/modelscope/hub/models/Qwen/Qwen3-VL-8B-Instruct \
--backend openai-chat \
--endpoint /v1/chat/completions \
--dataset-name hf \
--hf-split train \
--dataset-path lmarena-ai/vision-arena-bench-v0.1 \
--num-prompts 500 \
--request-rate 10 \
--burstiness 5 \
--no-stream
```
Before this PR:
```
============ Serving Benchmark Result ============
Successful requests: 500
Failed requests: 0
Request rate configured (RPS): 10.00
Benchmark duration (s): 82.23
Total input tokens: 33418
Total generated tokens: 61543
Request throughput (req/s): 6.08
Output token throughput (tok/s): 748.45
Peak output token throughput (tok/s): 3203.00
Peak concurrent requests: 402.00
Total token throughput (tok/s): 1154.86
---------------Time to First Token----------------
Mean TTFT (ms): 10275.37
Median TTFT (ms): 6297.88
P99 TTFT (ms): 22918.26
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 263.02
Median TPOT (ms): 277.61
P99 TPOT (ms): 483.56
---------------Inter-token Latency----------------
Mean ITL (ms): 257.31
Median ITL (ms): 94.83
P99 ITL (ms): 1773.90
==================================================
```
After this PR:
```
============ Serving Benchmark Result ============
Successful requests: 500
Failed requests: 0
Request rate configured (RPS): 10.00
Benchmark duration (s): 81.20
Total input tokens: 33418
Total generated tokens: 61509
Request throughput (req/s): 6.16
Output token throughput (tok/s): 757.54
Peak output token throughput (tok/s): 2562.00
Peak concurrent requests: 395.00
Total token throughput (tok/s): 1169.11
---------------Time to First Token----------------
Mean TTFT (ms): 9511.91
Median TTFT (ms): 5479.78
P99 TTFT (ms): 21427.21
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 261.12
Median TPOT (ms): 276.03
P99 TPOT (ms): 446.99
---------------Inter-token Latency----------------
Mean ITL (ms): 254.04
Median ITL (ms): 97.71
P99 ITL (ms): 1516.67
==================================================
```
- vLLM version: v0.15.0
- vLLM main: dc917cceb8
Signed-off-by: shen-shanshan <467638484@qq.com>
## vLLM Ascend Plugin
### Latest News 🔥
- [2026/02] We released the new official version v0.13.0! Please follow the official guide to start using vLLM Ascend Plugin on Ascend.
- [2025/12] We released the new official version v0.11.0! Please follow the official guide to start using vLLM Ascend Plugin on Ascend.
- [2025/09] We released the new official version v0.9.1! Please follow the official guide to start deploying large-scale Expert Parallelism (EP) on Ascend.
- [2025/08] We hosted the vLLM Beijing Meetup with vLLM and Tencent! Please find the meetup slides here.
- [2025/06] User stories page is now live! It kicks off with LLaMA-Factory/verl/TRL/GPUStack to demonstrate how vLLM Ascend assists Ascend users in enhancing their experience across fine-tuning, evaluation, reinforcement learning (RL), and deployment scenarios.
- [2025/06] Contributors page is now live! All contributions deserve to be recorded, thanks for all contributors.
- [2025/05] We've released the first official version v0.7.3! We collaborated with the vLLM community to publish a blog post sharing our practice: Introducing vLLM Hardware Plugin, Best Practice from Ascend NPU.
- [2025/03] We hosted the vLLM Beijing Meetup with vLLM team! Please find the meetup slides here.
- [2025/02] vLLM community officially created vllm-project/vllm-ascend repo for running vLLM seamlessly on the Ascend NPU.
- [2024/12] We are working with the vLLM community to support [RFC]: Hardware pluggable.
### Overview
vLLM Ascend (vllm-ascend) is a community maintained hardware plugin for running vLLM seamlessly on the Ascend NPU.
It is the recommended approach for supporting the Ascend backend within the vLLM community. It adheres to the principles outlined in the [RFC]: Hardware pluggable, providing a hardware-pluggable interface that decouples the integration of the Ascend NPU with vLLM.
By using the vLLM Ascend plugin, popular open-source models, including transformer-like, Mixture-of-Experts (MoE), embedding, and multi-modal LLMs, can run seamlessly on the Ascend NPU.
### Prerequisites
- Hardware: Atlas 800I A2 Inference series, Atlas A2 Training series, Atlas 800I A3 Inference series, Atlas A3 Training series, Atlas 300I Duo (Experimental)
- OS: Linux
- Software:
- Python >= 3.10, < 3.12
- CANN == 8.5.0 (Ascend HDK version refers to here)
- PyTorch == 2.9.0, torch-npu == 2.9.0
- vLLM (the same version as vllm-ascend)
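
A minimal, hypothetical sanity check (not part of this repository) that the Python, PyTorch, and torch-npu prerequisites above are satisfied:

```python
import sys

# Python must be >= 3.10 and < 3.12.
assert (3, 10) <= sys.version_info[:2] < (3, 12), sys.version

import torch
import torch_npu  # registers the Ascend NPU backend with PyTorch

print("torch:", torch.__version__)          # expected: 2.9.0
print("torch_npu:", torch_npu.__version__)  # expected: 2.9.0
print("NPU available:", torch.npu.is_available())
```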
### Getting Started
Please use the following recommended versions to get started quickly:
| Version | Release type | Doc |
|---|---|---|
| v0.14.0rc1 | Latest release candidate | See QuickStart and Installation for more details |
| v0.13.0 | Latest stable version | See QuickStart and Installation for more details |
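
Once a recommended version is installed, a quick smoke test is to run the standard vLLM offline-inference API, which works unchanged on the NPU (the model name and sampling settings below are only examples):

```python
from vllm import LLM, SamplingParams

# Any model supported by vllm-ascend can be used here.
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct", dtype="bfloat16")

outputs = llm.generate(
    ["Hello, my name is"],
    SamplingParams(temperature=0.8, max_tokens=32),
)
for output in outputs:
    print(output.outputs[0].text)
```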
### Contributing
See CONTRIBUTING for more details, which is a step-by-step guide to help you set up the development environment, build and test.
We welcome and value any contributions and collaborations:
- Please let us know if you encounter a bug by filing an issue.
- Please use the Users Forum for usage questions and help.
### Branch
vllm-ascend has a main branch and dev branches.
- main: main branch, corresponds to the vLLM main branch, and is continuously monitored for quality through Ascend CI.
- releases/vX.Y.Z: development branch, created alongside new releases of vLLM. For example, `releases/v0.13.0` is the dev branch for the vLLM `v0.13.0` version.
Below are the maintained branches:
| Branch | Status | Note |
|---|---|---|
| main | Maintained | CI commitment for vLLM main branch and vLLM v0.13.0 tag |
| v0.7.1-dev | Unmaintained | Only doc fixes are allowed |
| v0.7.3-dev | Maintained | CI commitment for vLLM 0.7.3 version, only bug fixes are allowed, and no new release tags anymore. |
| v0.9.1-dev | Maintained | CI commitment for vLLM 0.9.1 version |
| v0.11.0-dev | Maintained | CI commitment for vLLM 0.11.0 version |
| releases/v0.13.0 | Maintained | CI commitment for vLLM 0.13.0 version |
| rfc/feature-name | Maintained | Feature branches for collaboration |
Please refer to Versioning policy for more details.
### Weekly Meeting
- vLLM Ascend Weekly Meeting: https://tinyurl.com/vllm-ascend-meeting
- Wednesday, 15:00 - 16:00 (UTC+8, Convert to your timezone)
### License
Apache License 2.0, as found in the LICENSE file.
