wanghuanjun2113 dec04ec8d8 [Bugfix] Fix incorrect layer count for MTP models in update_aclgraph_sizes (#7064)
## Summary
- Fix incorrect layer count calculation for MTP (Multi-Token Prediction)
models in `update_aclgraph_sizes()` function
- For MTP models, the draft model's layer count is stored in
`num_nextn_predict_layers` or `mtp_num_hidden_layers` (for Qwen3.5), not
in the standard `num_hidden_layers` field
- Directly accessing `draft.hf_config.num_hidden_layers` returns the
main model's layer count instead of the MTP draft model's layer count

## Bug Description
In `vllm_ascend/utils.py`, the `update_aclgraph_sizes()` function
calculates `resources_per_graph` for speculative decoding scenarios.
When calculating the resources needed for the draft model, the original
code directly accessed:

```python
resources_per_graph += draft.hf_config.num_hidden_layers + 1
```

This works correctly for standard draft models, but **fails for MTP
models** (like DeepSeek-V3's MTP or Qwen3.5's MTP) because:
1. MTP models store their layer count in model-specific fields:
   - `num_nextn_predict_layers` (DeepSeek-V3 MTP)
   - `mtp_num_hidden_layers` (Qwen3.5 MTP)
2. The `num_hidden_layers` field in these models contains the **main
model's** layer count, not the MTP layer count (see the sketch after
this list)
3. This leads to **grossly overestimating** the `resources_per_graph`,
which in turn causes the calculated `max_batch_sizes` to be
unnecessarily small
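
To make points 1 and 2 concrete, the relevant fields of a DeepSeek-V3-style
MTP draft config look roughly like the sketch below (illustrative values
only; check the actual `config.json` of the checkpoint):

```python
# Hypothetical excerpt of a DeepSeek-V3 MTP draft model's hf_config.
# num_hidden_layers still describes the MAIN model's depth; the MTP draft's
# own depth lives in a model-specific field.
deepseek_v3_mtp_config = {
    "num_hidden_layers": 61,        # main model depth, NOT the draft depth
    "num_nextn_predict_layers": 1,  # actual MTP draft depth (DeepSeek-V3)
}
# A Qwen3.5 MTP draft would instead carry its depth in
#   "mtp_num_hidden_layers": <draft depth>
# Reading num_hidden_layers here counts 61 layers instead of 1.
```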

## Fix
Use `draft.get_total_num_hidden_layers()` instead of directly accessing
`draft.hf_config.num_hidden_layers`. This method correctly handles
different model types through the `model_arch_config_convertor`
infrastructure, returning the appropriate layer count for:
- Standard draft models → `num_hidden_layers`
- DeepSeek-V3 MTP → `num_nextn_predict_layers`
- Qwen3.5 MTP → `mtp_num_hidden_layers`
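
In short, the change amounts to a one-line substitution (a sketch of the
relevant line only; the surrounding code of `update_aclgraph_sizes()` is not
shown in this description and is omitted here):

```python
# Before: reads the MAIN model's depth from the draft's hf_config, which
# grossly overestimates the draft's per-graph resources for MTP models.
resources_per_graph += draft.hf_config.num_hidden_layers + 1

# After: resolves the architecture-specific field (num_hidden_layers,
# num_nextn_predict_layers, or mtp_num_hidden_layers) via the convertor.
resources_per_graph += draft.get_total_num_hidden_layers() + 1
```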

🤖 Generated with [Claude Code](https://claude.com/claude-code)
- vLLM version: v0.16.0
- vLLM main: 4034c3d32e

Signed-off-by: wanghuanjun2113 <wanghuanjun2113@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 16:14:51 +08:00

vllm-ascend

vLLM Ascend Plugin


| About Ascend | Documentation | #SIG-Ascend | Users Forum | Weekly Meeting |

English | 中文


Latest News 🔥

  • [2026/02] We released the new official version v0.13.0! Please follow the official guide to start using vLLM Ascend Plugin on Ascend.
  • [2025/12] We released the new official version v0.11.0! Please follow the official guide to start using vLLM Ascend Plugin on Ascend.
  • [2025/09] We released the new official version v0.9.1! Please follow the official guide to start deploying large-scale Expert Parallelism (EP) on Ascend.
  • [2025/08] We hosted the vLLM Beijing Meetup with vLLM and Tencent! Please find the meetup slides here.
  • [2025/06] User stories page is now live! It kicks off with LLaMA-Factory/verl/TRL/GPUStack to demonstrate how vLLM Ascend assists Ascend users in enhancing their experience across fine-tuning, evaluation, reinforcement learning (RL), and deployment scenarios.
  • [2025/06] Contributors page is now live! All contributions deserve to be recorded, thanks for all contributors.
  • [2025/05] We've released the first official version v0.7.3! We collaborated with the vLLM community to publish a blog post sharing our practice: Introducing vLLM Hardware Plugin, Best Practice from Ascend NPU.
  • [2025/03] We hosted the vLLM Beijing Meetup with vLLM team! Please find the meetup slides here.
  • [2025/02] vLLM community officially created vllm-project/vllm-ascend repo for running vLLM seamlessly on the Ascend NPU.
  • [2024/12] We are working with the vLLM community to support [RFC]: Hardware pluggable.

Overview

vLLM Ascend (vllm-ascend) is a community maintained hardware plugin for running vLLM seamlessly on the Ascend NPU.

It is the recommended approach for supporting the Ascend backend within the vLLM community. It adheres to the principles outlined in the [RFC]: Hardware pluggable, providing a hardware-pluggable interface that decouples the integration of the Ascend NPU with vLLM.

By using the vLLM Ascend plugin, popular open-source models, including transformer-like, Mixture-of-Experts (MoE), embedding, and multi-modal LLMs, can run seamlessly on the Ascend NPU.

Prerequisites

  • Hardware: Atlas 800I A2 Inference series, Atlas A2 Training series, Atlas 800I A3 Inference series, Atlas A3 Training series, Atlas 300I Duo (Experimental)
  • OS: Linux
  • Software:
    • Python >= 3.10, < 3.12
    • CANN == 8.5.0 (see here for the matching Ascend HDK version)
    • PyTorch == 2.9.0, torch-npu == 2.9.0
    • vLLM (the same version as vllm-ascend)
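
A quick way to confirm an environment matches these pins is a short check like the one below (a sketch; the version numbers are the ones listed above and will differ for other releases):

```python
# Minimal environment sanity check for the prerequisites listed above.
import sys

assert (3, 10) <= sys.version_info[:2] < (3, 12), "Python must be >= 3.10 and < 3.12"

import torch
print("torch:", torch.__version__)  # expected: 2.9.0

try:
    import torch_npu  # Ascend NPU backend for PyTorch
    print("torch_npu:", torch_npu.__version__)  # expected: 2.9.0
    print("NPU available:", torch.npu.is_available())
except ImportError:
    print("torch_npu is not installed; the Ascend NPU backend will be unavailable")
```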

Getting Started

Please use the following recommended versions to get started quickly:

| Version | Release type | Doc |
| --- | --- | --- |
| v0.14.0rc1 | Latest release candidate | See QuickStart and Installation for more details |
| v0.13.0 | Latest stable version | See QuickStart and Installation for more details |

Contributing

See CONTRIBUTING for a step-by-step guide to setting up the development environment, building, and testing.

We welcome and value any contributions and collaborations.

Branch

vllm-ascend has a main branch and per-release dev branches.

  • main: main branch, corresponds to the vLLM main branch, and is continuously monitored for quality through Ascend CI.
  • releases/vX.Y.Z: development branch, created alongside new releases of vLLM. For example, releases/v0.13.0 is the dev branch for the vLLM v0.13.0 release.

Below are the maintained branches:

| Branch | Status | Note |
| --- | --- | --- |
| main | Maintained | CI commitment for vLLM main branch and vLLM v0.13.0 tag |
| v0.7.1-dev | Unmaintained | Only doc fixes are allowed |
| v0.7.3-dev | Maintained | CI commitment for vLLM 0.7.3 version, only bug fixes are allowed, and no new release tags anymore |
| v0.9.1-dev | Maintained | CI commitment for vLLM 0.9.1 version |
| v0.11.0-dev | Maintained | CI commitment for vLLM 0.11.0 version |
| releases/v0.13.0 | Maintained | CI commitment for vLLM 0.13.0 version |
| rfc/feature-name | Maintained | Feature branches for collaboration |

Please refer to Versioning policy for more details.

Weekly Meeting

License

Apache License 2.0, as found in the LICENSE file.
