### What this PR does / why we need it?
> Need to merge after PR #1322
Based on the benchmark results below, this PR brings approximately a 1% performance gain.
#### Before Improvement
Profiling
<img width="1147" alt="截屏2025-06-22 14 54 47"
src="https://github.com/user-attachments/assets/4a4dc7f1-5b76-45d5-864d-dd7f8faf993c"
/>
Evaluation
```
# server launch command
python -m vllm.entrypoints.openai.api_server --model=/DeepSeek-R1-W8A8 \
--quantization ascend \
--served-model-name auto \
--trust-remote-code \
--distributed-executor-backend=mp \
--port 8006 \
-tp=16 \
--max-num-seqs 24 \
--max-model-len 32768 \
--max-num-batched-tokens 8192 \
--block-size 128 \
--no-enable-prefix-caching \
--additional-config '{"torchair_graph_config":{"enable_multistream_mla": true,"enabled":true,"use_cached_graph":true,"graph_batch_sizes":[24]},"ascend_scheduler_config":{"enabled":true},"expert_tensor_parallel_size":16}' \
--gpu-memory-utilization 0.96
# client benchmark command
python /root/vllm/benchmarks/benchmark_serving.py --backend vllm --dataset-name random \
--random-input-len 4096 \
--random-output-len 1536 \
--num-prompts 200 \
--ignore-eos \
--model auto \
--tokenizer /DeepSeek-R1-W8A8 \
--port 8006 \
--request-rate 1 \
--max-concurrency 24 \
--save-result \
--skip-initial-test \
--metric-percentiles "50,90,99"
```
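For readability, the `--additional-config` JSON passed in the launch command above expands to the following (shown here as an equivalent Python dict; the values are copied verbatim from the command):

```python
# The --additional-config value from the server launch command, expanded for readability.
additional_config = {
    "torchair_graph_config": {
        "enable_multistream_mla": True,
        "enabled": True,
        "use_cached_graph": True,
        "graph_batch_sizes": [24],
    },
    "ascend_scheduler_config": {"enabled": True},
    "expert_tensor_parallel_size": 16,
}
```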
```
============ Serving Benchmark Result ============
Successful requests: 200
Benchmark duration (s): 958.59
Total input tokens: 819200
Total generated tokens: 307200
Request throughput (req/s): 0.2086
Output token throughput (tok/s): 320.47
Total Token throughput (tok/s): 1175.05
---------------Time to First Token----------------
Mean TTFT (ms): 942.70
Median TTFT (ms): 713.87
P50 TTFT (ms): 713.87
P90 TTFT (ms): 1363.88
P99 TTFT (ms): 2008.73
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 68.96
Median TPOT (ms): 69.49
P50 TPOT (ms): 69.49
P90 TPOT (ms): 70.42
P99 TPOT (ms): 70.72
---------------Inter-token Latency----------------
Mean ITL (ms): 68.96
Median ITL (ms): 59.88
P50 ITL (ms): 59.88
P90 ITL (ms): 61.59
P99 ITL (ms): 68.82
==================================================
```
#### After Improvement
Profiling
<img width="1200" alt="截屏2025-06-22 14 55 42"
src="https://github.com/user-attachments/assets/e3eb9dec-0ff0-4e5f-ab94-93c65003e51f"
/>
Evaluation
```
============ Serving Benchmark Result ============
Successful requests: 200
Benchmark duration (s): 948.08
Total input tokens: 819200
Total generated tokens: 307200
Request throughput (req/s): 0.2110
Output token throughput (tok/s): 324.02
Total Token throughput (tok/s): 1188.08
---------------Time to First Token----------------
Mean TTFT (ms): 1019.25
Median TTFT (ms): 714.63
P50 TTFT (ms): 714.63
P90 TTFT (ms): 1367.31
P99 TTFT (ms): 2661.52
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 68.14
Median TPOT (ms): 68.68
P50 TPOT (ms): 68.68
P90 TPOT (ms): 69.33
P99 TPOT (ms): 70.30
---------------Inter-token Latency----------------
Mean ITL (ms): 68.14
Median ITL (ms): 59.04
P50 ITL (ms): 59.04
P90 ITL (ms): 60.93
P99 ITL (ms): 66.89
==================================================
```
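As a quick sanity check of the "approximately 1%" claim, the relative changes can be computed directly from the two result tables above (the dictionary keys here are just local names, not fields of the saved result file):

```python
# Relative change of the key metrics reported in the before/after tables above.
before = {"output_tok_per_s": 320.47, "mean_tpot_ms": 68.96, "total_tok_per_s": 1175.05}
after = {"output_tok_per_s": 324.02, "mean_tpot_ms": 68.14, "total_tok_per_s": 1188.08}

for key in before:
    change = (after[key] - before[key]) / before[key] * 100
    print(f"{key}: {before[key]} -> {after[key]} ({change:+.2f}%)")

# output_tok_per_s: 320.47 -> 324.02 (+1.11%)
# mean_tpot_ms: 68.96 -> 68.14 (-1.19%)
# total_tok_per_s: 1175.05 -> 1188.08 (+1.11%)
```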
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
- vLLM version: v0.9.2
- vLLM main: 65393ee064
Signed-off-by: ApsarasX <apsarax@outlook.com>
### vLLM Ascend Plugin
#### Latest News 🔥
- [2025/03] We hosted the vLLM Beijing Meetup with the vLLM team! Please find the meetup slides here.
- [2025/02] vLLM community officially created vllm-project/vllm-ascend repo for running vLLM seamlessly on the Ascend NPU.
- [2024/12] We are working with the vLLM community to support [RFC]: Hardware pluggable.
#### Overview
vLLM Ascend (vllm-ascend) is a community maintained hardware plugin for running vLLM seamlessly on the Ascend NPU.
It is the recommended approach for supporting the Ascend backend within the vLLM community. It adheres to the principles outlined in the [RFC]: Hardware pluggable, providing a hardware-pluggable interface that decouples Ascend NPU integration from vLLM.
With the vLLM Ascend plugin, popular open-source models, including Transformer-like, Mixture-of-Experts, Embedding, and multi-modal LLMs, can run seamlessly on the Ascend NPU.
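As an illustrative sketch only (the exact module and function names are assumptions, not taken from this repository), a hardware plugin in the spirit of the RFC typically announces itself to vLLM through the `vllm.platform_plugins` entry-point group, roughly like this in a `setup.py`:

```python
# setup.py sketch of how a hardware plugin can register with vLLM.
# Module and function names below are hypothetical placeholders.
from setuptools import setup

setup(
    name="vllm-ascend",
    entry_points={
        # vLLM scans this entry-point group to discover out-of-tree platforms
        # and calls the referenced function to obtain the platform class path.
        "vllm.platform_plugins": [
            "ascend = vllm_ascend:register",  # hypothetical module:function
        ],
    },
)
```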
#### Prerequisites
- Hardware: Atlas 800I A2 Inference series, Atlas A2 Training series
- OS: Linux
- Software:
- Python >= 3.9, < 3.12
- CANN >= 8.1.RC1
- PyTorch >= 2.5.1, torch-npu >= 2.5.1.post1.dev20250619
- vLLM (the same version as vllm-ascend)
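A minimal sanity check of the Python and PyTorch prerequisites listed above might look like the following sketch (CANN itself has to be verified with its own tooling; the version bounds simply mirror the list):

```python
# check_env.py -- quick check of the Python / PyTorch prerequisites listed above.
import sys

assert (3, 9) <= sys.version_info[:2] < (3, 12), "Python >= 3.9, < 3.12 is required"

import torch       # PyTorch >= 2.5.1
import torch_npu   # Ascend adapter, torch-npu >= 2.5.1.post1.dev20250619

print("python   :", sys.version.split()[0])
print("torch    :", torch.__version__)
print("torch_npu:", torch_npu.__version__)
# torch.npu becomes available once torch_npu has been imported.
print("NPU available:", torch.npu.is_available())
```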
#### Getting Started
Please refer to QuickStart and Installation for more details.
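As a rough sketch of a first run after installation (the model name below is a placeholder; see the Installation guide for supported install methods), vLLM's standard offline inference API works unchanged on the Ascend backend:

```python
# offline_example.py -- minimal vLLM offline inference sketch (model name is a placeholder).
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")  # any model supported by vllm-ascend
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["Hello, my name is"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```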
#### Contributing
See CONTRIBUTING for more details; it is a step-by-step guide that helps you set up the development environment, build, and test.
We welcome and value any contributions and collaborations:
- Please let us know if you encounter a bug by filing an issue.
- Please use the Users Forum for usage questions and help.
#### Branch
vllm-ascend has a main branch and dev branches.
- main: the main branch, which corresponds to the vLLM main branch and is continuously monitored for quality through Ascend CI.
- vX.Y.Z-dev: development branches, created alongside selected new vLLM releases. For example, v0.7.3-dev is the dev branch for vLLM v0.7.3.
The maintained branches are listed below:
| Branch | Status | Note |
|---|---|---|
| main | Maintained | CI commitment for vLLM main branch and vLLM 0.9.x branch |
| v0.7.1-dev | Unmaintained | Only doc fixes are allowed |
| v0.7.3-dev | Maintained | CI commitment for vLLM 0.7.3 version |
Please refer to Versioning policy for more details.
#### Weekly Meeting
- vLLM Ascend Weekly Meeting: https://tinyurl.com/vllm-ascend-meeting
- Wednesday, 15:00–16:00 (UTC+8; convert to your timezone)
#### License
Apache License 2.0, as found in the LICENSE file.
