drslark 384d84c7ef [Bugfix] Avoid a drafter bug when dp and sp are enabled (#6226)
### What this PR does / why we need it?

This PR avoids a drafter bug that is triggered when `dp` (data parallelism) and `sp` (sequence parallelism) are both enabled.

Specifically, `sp` is now disabled when the drafter is a dense model.
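
The guard itself is small. Below is a minimal, self-contained sketch of the control flow this PR adds; all class and attribute names here (`ParallelConfig`, `SpecConfig`, `drafter_is_moe`, ...) are illustrative stand-ins, not the actual vllm-ascend identifiers:

```python
# Illustrative sketch only -- the real guard lives in vllm-ascend's
# config/model-runner code and uses different names.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ParallelConfig:
    data_parallel_size: int = 1
    enable_sequence_parallelism: bool = False

@dataclass
class SpecConfig:
    drafter_is_moe: bool = False  # the drafter is dense when False

def maybe_disable_sp(parallel: ParallelConfig, spec: Optional[SpecConfig]) -> None:
    """Disable sp when a dense drafter runs with dp enabled."""
    if spec is None:
        return  # no speculative decoding configured, nothing to do
    if (parallel.data_parallel_size > 1
            and parallel.enable_sequence_parallelism
            and not spec.drafter_is_moe):
        # Under dp, sp splits/pads hidden states across ranks in a way a
        # dense drafter does not handle, so force it off in this case.
        parallel.enable_sequence_parallelism = False
```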

### Does this PR introduce _any_ user-facing change?

N/A

### How was this patch tested?

An AISBench test:

```shell
python3 aisbench_test.py --input_len 3500 --output_len 1000 --data_num 100 --concurrency 320 --request_rate 8
```

The test completes successfully, with no failed requests:

```text
[2026-01-24 22:38:20,256] [ais_bench.benchmark.openicl.icl_inferencer.icl_gen_inferencer] [INFO] Calculate global interval offsets time: 0.5922 s
01/24 22:38:20 - AISBench - INFO - Process 0 using precomputed sleep offsets with 100 requests
Process-0 pid:220279: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 100/100 [09:40<00:00,  5.81s/it]
Pid:      220279 | Post:        100 | Received:    100 | Failed:        0 | Post Time:12.51s | Receive Time:580.92s: 
Encoding output text...: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 100/100 [00:01<00:00, 93.75it/s]
01/24 22:48:02 - AISBench - INFO - Start converting origin data to detailed data ...
01/24 22:48:02 - AISBench - INFO - Finish converting origin data to detailed data█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 100/100 [00:01<00:00, 95.08it/s]
01/24 22:48:02 - AISBench - INFO - Added 'Actual RPS: After Excluding Anomalies' to group 'Time - RPS: ' in legend explanation table
01/24 22:48:02 - AISBench - INFO - Successfully merged chart into position (1, 1)
01/24 22:48:02 - AISBench - INFO - RPS distribution charts saved to outputs/default/20260124_223809/performances/vllm-api-stream-chat/gsm8kdataset_rps_distribution_plot_with_actual_rps.html
01/24 22:48:02 - AISBench - INFO - Updated chart with actual RPS saved to outputs/default/20260124_223809/performances/vllm-api-stream-chat/gsm8kdataset_rps_distribution_plot_with_actual_rps.html
[2026-01-24 22:48:02,557] [ais_bench.benchmark.openicl.icl_inferencer.icl_gen_perf_inferencer] [INFO] Start extracting pref datas ...
[2026-01-24 22:48:02,558] [ais_bench.benchmark.openicl.icl_inferencer.icl_gen_perf_inferencer] [INFO] Finish extracting pref datas!
[2026-01-24 22:48:02,558] [ais_bench.benchmark.openicl.icl_inferencer.icl_gen_perf_inferencer] [INFO] Dumping detail perf data ...
Dumping data to h5: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 75.31it/s]
[2026-01-24 22:48:02,588] [ais_bench.benchmark.openicl.icl_inferencer.icl_gen_perf_inferencer] [INFO] Dump detail perf data cost: 0.02995561994612217(s)
[2026-01-24 22:48:02,588] [ais_bench.benchmark.openicl.icl_inferencer.icl_gen_perf_inferencer] [INFO] Performance task finished, results saved in outputs/default/20260124_223809/performances/vllm-api-stream-chat
01/24 22:48:02 - AISBench - INFO - time elapsed: 586.32s
Running tasks: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [09:55<00:00, 595.91s/it]
01/24 22:48:05 - AISBench - INFO - Performance evaluation tasks completed.
01/24 22:48:05 - AISBench - INFO - Loading detail perf data of model='vllm-api-stream-chat' dataset='gsm8kdataset' ...
01/24 22:48:05 - AISBench - INFO - Starting request timeline processing...
01/24 22:48:05 - AISBench - INFO - Data preprocessing completed in 0.0004s
01/24 22:48:05 - AISBench - INFO - Generating timeline traces for 100 requests...
01/24 22:48:05 - AISBench - INFO - Generated timeline trace chunks in 0.0441s
01/24 22:48:05 - AISBench - INFO - Generating concurrency traces...
01/24 22:48:05 - AISBench - INFO - Generated concurrency trace chunks in 0.0011s
01/24 22:48:05 - AISBench - INFO - Creating figure layout...
01/24 22:48:05 - AISBench - INFO - Figure layout created in 0.0504s
01/24 22:48:05 - AISBench - INFO - Writing to outputs/default/20260124_223809/performances/vllm-api-stream-chat/gsm8kdataset_plot.html...
01/24 22:48:05 - AISBench - INFO - HTML written in 0.0181s
01/24 22:48:05 - AISBench - INFO - Completed! Total execution time: 0.1148s
01/24 22:48:05 - AISBench - INFO - The gsm8kdataset_plot has been saved in outputs/default/20260124_223809/performances/vllm-api-stream-chat/gsm8kdataset_plot.html
01/24 22:48:05 - AISBench - INFO - Converting perf results of stage ...
01/24 22:48:05 - AISBench - INFO - Finish Converting!
01/24 22:48:05 - AISBench - INFO - Start calculating metrics ...
01/24 22:48:05 - AISBench - INFO - Start calculating common metrics ...
01/24 22:48:05 - AISBench - INFO - Start calculating add units ...
01/24 22:48:05 - AISBench - INFO - Finish calculating perf data!
01/24 22:48:05 - AISBench - INFO - Summarizing performance results...
01/24 22:48:05 - AISBench - INFO - Performance Results of task: vllm-api-stream-chat/gsm8kdataset: 
╒══════════════════════════╤═════════╤════════════════╤════════════════╤════════════════╤════════════════╤════════════════╤════════════════╤════════════════╤═════╕
│ Performance Parameters   │ Stage   │ Average        │ Min            │ Max            │ Median         │ P75            │ P90            │ P99            │  N  │
╞══════════════════════════╪═════════╪════════════════╪════════════════╪════════════════╪════════════════╪════════════════╪════════════════╪════════════════╪═════╡
│ E2EL                     │ total   │ 300806.1781 ms │ 189326.0489 ms │ 568345.5121 ms │ 380629.6785 ms │ 384208.3527 ms │ 385363.7709 ms │ 566871.7684 ms │ 100 │
├──────────────────────────┼─────────┼────────────────┼────────────────┼────────────────┼────────────────┼────────────────┼────────────────┼────────────────┼─────┤
│ TTFT                     │ total   │ 107441.2231 ms │ 343.8054 ms    │ 378132.3979 ms │ 188817.4877 ms │ 190985.8451 ms │ 192547.6847 ms │ 378008.356 ms  │ 100 │
├──────────────────────────┼─────────┼────────────────┼────────────────┼────────────────┼────────────────┼────────────────┼────────────────┼────────────────┼─────┤
│ TPOT                     │ total   │ 193.5585 ms    │ 185.1008 ms    │ 197.262 ms     │ 193.8146 ms    │ 195.0803 ms    │ 196.0323 ms    │ 196.9688 ms    │ 100 │
├──────────────────────────┼─────────┼────────────────┼────────────────┼────────────────┼────────────────┼────────────────┼────────────────┼────────────────┼─────┤
│ ITL                      │ total   │ 194.2067 ms    │ 0.0108 ms      │ 2782.7124 ms   │ 184.9998 ms    │ 194.2631 ms    │ 221.2895 ms    │ 304.363 ms     │ 100 │
├──────────────────────────┼─────────┼────────────────┼────────────────┼────────────────┼────────────────┼────────────────┼────────────────┼────────────────┼─────┤
│ InputTokens              │ total   │ 3506.86        │ 3431.0         │ 3508.0         │ 3508.0         │ 3508.0         │ 3508.0         │ 3508.0         │ 100 │
├──────────────────────────┼─────────┼────────────────┼────────────────┼────────────────┼────────────────┼────────────────┼────────────────┼────────────────┼─────┤
│ OutputTokens             │ total   │ 1000.0         │ 1000.0         │ 1000.0         │ 1000.0         │ 1000.0         │ 1000.0         │ 1000.0         │ 100 │
├──────────────────────────┼─────────┼────────────────┼────────────────┼────────────────┼────────────────┼────────────────┼────────────────┼────────────────┼─────┤
│ OutputTokenThroughput    │ total   │ 3.7745 token/s │ 1.7595 token/s │ 5.2819 token/s │ 2.6272 token/s │ 5.1028 token/s │ 5.1502 token/s │ 5.2754 token/s │ 100 │
╘══════════════════════════╧═════════╧════════════════╧════════════════╧════════════════╧════════════════╧════════════════╧════════════════╧════════════════╧═════╛
╒══════════════════════════╤═════════╤══════════════════╕
│ Common Metric            │ Stage   │ Value            │
╞══════════════════════════╪═════════╪══════════════════╡
│ Benchmark Duration       │ total   │ 580456.2704 ms   │
├──────────────────────────┼─────────┼──────────────────┤
│ Total Requests           │ total   │ 100              │
├──────────────────────────┼─────────┼──────────────────┤
│ Failed Requests          │ total   │ 0                │
├──────────────────────────┼─────────┼──────────────────┤
│ Success Requests         │ total   │ 100              │
├──────────────────────────┼─────────┼──────────────────┤
│ Concurrency              │ total   │ 51.8224          │
├──────────────────────────┼─────────┼──────────────────┤
│ Max Concurrency          │ total   │ 320              │
├──────────────────────────┼─────────┼──────────────────┤
│ Request Throughput       │ total   │ 0.1723 req/s     │
├──────────────────────────┼─────────┼──────────────────┤
│ Total Input Tokens       │ total   │ 350686           │
├──────────────────────────┼─────────┼──────────────────┤
│ Prefill Token Throughput │ total   │ 32.6398 token/s  │
├──────────────────────────┼─────────┼──────────────────┤
│ Total generated tokens   │ total   │ 100000           │
├──────────────────────────┼─────────┼──────────────────┤
│ Input Token Throughput   │ total   │ 604.1558 token/s │
├──────────────────────────┼─────────┼──────────────────┤
│ Output Token Throughput  │ total   │ 172.2783 token/s │
├──────────────────────────┼─────────┼──────────────────┤
│ Total Token Throughput   │ total   │ 776.434 token/s  │
╘══════════════════════════╧═════════╧══════════════════╛
01/24 22:48:05 - AISBench - INFO - Performance Result files locate in outputs/default/20260124_223809/performances/vllm-api-stream-chat.
```
- vLLM version: v0.14.0
- vLLM main: d68209402d

Signed-off-by: drslark <slarksblood@qq.com>

vLLM Ascend Plugin


Latest News 🔥

  • [2025/12] We released the new official version v0.11.0! Please follow the official guide to start using vLLM Ascend Plugin on Ascend.
  • [2025/09] We released the new official version v0.9.1! Please follow the official guide to start deploying large-scale Expert Parallelism (EP) on Ascend.
  • [2025/08] We hosted the vLLM Beijing Meetup with vLLM and Tencent! Please find the meetup slides here.
  • [2025/06] User stories page is now live! It kicks off with LLaMA-Factory/verl/TRL/GPUStack to demonstrate how vLLM Ascend assists Ascend users in enhancing their experience across fine-tuning, evaluation, reinforcement learning (RL), and deployment scenarios.
  • [2025/06] Contributors page is now live! Every contribution deserves to be recorded; thanks to all contributors.
  • [2025/05] We've released the first official version v0.7.3! We collaborated with the vLLM community to publish a blog post sharing our practice: Introducing vLLM Hardware Plugin, Best Practice from Ascend NPU.
  • [2025/03] We hosted the vLLM Beijing Meetup with vLLM team! Please find the meetup slides here.
  • [2025/02] vLLM community officially created vllm-project/vllm-ascend repo for running vLLM seamlessly on the Ascend NPU.
  • [2024/12] We are working with the vLLM community to support [RFC]: Hardware pluggable.

Overview

vLLM Ascend (vllm-ascend) is a community maintained hardware plugin for running vLLM seamlessly on the Ascend NPU.

It is the recommended approach for supporting the Ascend backend within the vLLM community. It adheres to the principles outlined in the [RFC]: Hardware pluggable, providing a hardware-pluggable interface that decouples the integration of the Ascend NPU with vLLM.

By using the vLLM Ascend plugin, popular open-source models, including Transformer-like, Mixture-of-Experts (MoE), embedding, and multi-modal LLMs, can run seamlessly on the Ascend NPU.
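
As a taste of what this looks like in practice, here is a standard vLLM offline-inference snippet; on an Ascend machine with vllm-ascend installed, the plugin is discovered automatically, so no Ascend-specific code is required (the model name below is only an example):

```python
from vllm import LLM, SamplingParams

# vllm-ascend is loaded via vLLM's plugin mechanism; this script is
# plain vLLM code and runs unchanged on the Ascend NPU.
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")  # example model
sampling = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["What is the capital of France?"], sampling)
print(outputs[0].outputs[0].text)
```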

Prerequisites

  • Hardware: Atlas 800I A2 Inference series, Atlas A2 Training series, Atlas 800I A3 Inference series, Atlas A3 Training series, Atlas 300I Duo (Experimental)
  • OS: Linux
  • Software (a quick sanity check is sketched after this list):
    • Python >= 3.10, < 3.12
    • CANN == 8.5.0 (see here for the matching Ascend HDK version)
    • PyTorch == 2.9.0, torch-npu == 2.9.0
    • vLLM (the same version as vllm-ascend)
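
A quick way to confirm the stack is wired up correctly (assuming the packages above are installed; importing `torch_npu` registers the `npu` device with PyTorch):

```python
import torch
import torch_npu  # noqa: F401 -- import side effect registers the NPU backend
import vllm

# Versions printed here should match the prerequisites listed above.
print("torch:", torch.__version__)
print("vllm:", vllm.__version__)
print("NPU available:", torch.npu.is_available())
```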

Getting Started

Please use the following recommended versions to get started quickly:

| Version | Release type | Doc |
|---|---|---|
| v0.13.0rc1 | Latest release candidate | See QuickStart and Installation for more details |
| v0.11.0 | Latest stable version | See QuickStart and Installation for more details |

Contributing

See CONTRIBUTING for a step-by-step guide to setting up the development environment, building, and testing.

We welcome and value any contributions and collaborations.

Branch

vllm-ascend has a main branch and release (dev) branches.

  • main: the main branch, corresponding to the vLLM main branch; its quality is continuously monitored through Ascend CI.
  • releases/vX.Y.Z: development branches, created alongside new vLLM releases. For example, releases/v0.13.0 is the dev branch for vLLM v0.13.0.

Below are the maintained branches:

| Branch | Status | Note |
|---|---|---|
| main | Maintained | CI commitment for vLLM main branch and vLLM v0.13.0 tag |
| v0.7.1-dev | Unmaintained | Only doc fixes are allowed |
| v0.7.3-dev | Maintained | CI commitment for vLLM v0.7.3; only bug fixes are allowed, and no new release tags |
| v0.9.1-dev | Maintained | CI commitment for vLLM v0.9.1 |
| v0.11.0-dev | Maintained | CI commitment for vLLM v0.11.0 |
| releases/v0.13.0 | Maintained | CI commitment for vLLM v0.13.0 |
| rfc/feature-name | Maintained | Feature branches for collaboration |

Please refer to Versioning policy for more details.

Weekly Meeting

License

Apache License 2.0, as found in the LICENSE file.
