### What this PR does / why we need it?

The previous implementation of the Mooncake connector only supported scenarios where the Tensor Parallel size is the same for the Prefill and Decode phases, for both MLA and GQA/MHA. In heterogeneous TP scenarios, a single rank on a decode node needs to pull the KV cache from multiple ranks on the prefill nodes and then merge them (only prefill TP >= decode TP is supported for now). During this merge, a transpose operation is required because the layouts of the KV caches differ. To minimize the transpose overhead, we use the `npu_paged_cache_load` operation to extract the blocks corresponding to the request from the KV cache. After performing the transpose, we use `_npu_reshape_and_cache` to write the blocks back to their original positions.

This process is illustrated in the diagram below: `b` denotes `block_size`, and the diagram shows the KV cache layout transpose for one block. In the implementation, we transpose the KV cache layer by layer for each request (a simplified sketch of this merge path follows this PR description).

<img width="1464" height="916" alt="image" src="https://github.com/user-attachments/assets/09d96a98-e41c-4733-9535-05544163081a" />

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

- vLLM version: v0.11.0

---------

Signed-off-by: chenxiao <Jaychou1620@Gmail.com>
Signed-off-by: zzy-ContiLearn <1831242919@qq.com>
Signed-off-by: zzhx1 <zzh_201018@outlook.com>
Signed-off-by: Kurumi5210 <jaychou1620@gmail.com>
Co-authored-by: zzy-ContiLearn <1831242919@qq.com>
Co-authored-by: chenxiao <cx02308786@antgroup.com>
Co-authored-by: chenxiao <Jaychou1620@Gmail.com>
Co-authored-by: zzhx1 <zzh_201018@outlook.com>
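The following is a minimal, hypothetical sketch of the per-layer merge path described in the PR above, written with plain torch indexing. The tensor shapes, the function name, and the use of `torch.cat` plus index assignment are illustrative assumptions, not the connector's actual code; the real implementation performs the gather with `npu_paged_cache_load` and the scatter with `_npu_reshape_and_cache` on the NPU.

```python
import torch

def merge_pulled_kv_for_layer(
    dst_cache: torch.Tensor,            # decode paged cache: [num_blocks, block_size, H_dec, head_dim]
    pulled_shards: list[torch.Tensor],  # one per prefill rank: [n, block_size, H_pre, head_dim]
    block_ids: torch.Tensor,            # decode-side block ids of this request: [n]
) -> None:
    """Hypothetical per-layer merge for heterogeneous TP (prefill TP >= decode TP).

    With prefill TP = P and decode TP = D, each decode rank owns
    H_dec = H_pre * P / D KV heads and pulls P / D head shards.
    """
    # 1. Gather: the real connector uses npu_paged_cache_load to extract the
    #    request's blocks from the paged cache into contiguous buffers;
    #    pulled_shards stands in for that result here.
    # 2. Transpose/merge: concatenate the head shards so the head axis matches
    #    the decode rank's partitioning; this stands in for the per-block
    #    layout transpose shown in the diagram.
    merged = torch.cat(pulled_shards, dim=2)  # [n, block_size, H_dec, head_dim]
    # 3. Scatter: write each merged block back to its original position; the
    #    real connector does this with _npu_reshape_and_cache.
    dst_cache[block_ids] = merged
```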
# vLLM Ascend Plugin
| About Ascend | Documentation | #sig-ascend | Users Forum | Weekly Meeting |
English | 中文
## Latest News 🔥
- [2025/09] We released the new official version v0.9.1! Please follow the official guide to start deploying large-scale Expert Parallelism (EP) on Ascend.
- [2025/08] We hosted the vLLM Beijing Meetup with vLLM and Tencent! Please find the meetup slides here.
- [2025/06] User stories page is now live! It kicks off with LLaMA-Factory/verl/TRL/GPUStack to demonstrate how vLLM Ascend assists Ascend users in enhancing their experience across fine-tuning, evaluation, reinforcement learning (RL), and deployment scenarios.
- [2025/06] Contributors page is now live! All contributions deserve to be recorded, thanks for all contributors.
- [2025/05] We've released the first official version v0.7.3! We collaborated with the vLLM community to publish a blog post sharing our practice: Introducing vLLM Hardware Plugin, Best Practice from Ascend NPU.
- [2025/03] We hosted the vLLM Beijing Meetup with vLLM team! Please find the meetup slides here.
- [2025/02] The vLLM community officially created the vllm-project/vllm-ascend repo for running vLLM seamlessly on the Ascend NPU.
- [2024/12] We are working with the vLLM community to support [RFC]: Hardware pluggable.
## Overview
vLLM Ascend (vllm-ascend) is a community maintained hardware plugin for running vLLM seamlessly on the Ascend NPU.
It is the recommended approach for supporting the Ascend backend within the vLLM community. It adheres to the principles outlined in the [RFC]: Hardware pluggable, providing a hardware-pluggable interface that decouples the integration of the Ascend NPU with vLLM.
By using the vLLM Ascend plugin, popular open-source models, including Transformer-like, Mixture-of-Experts, Embedding, and Multi-modal LLMs, can run seamlessly on the Ascend NPU.
## Prerequisites
- Hardware: Atlas 800I A2 Inference series, Atlas A2 Training series, Atlas 800I A3 Inference series, Atlas A3 Training series, Atlas 300I Duo (Experimental)
- OS: Linux
- Software:
- Python >= 3.9, < 3.12
- CANN >= 8.2.rc1 (for the matching Ascend HDK version, see here)
- PyTorch >= 2.7.1, torch-npu >= 2.7.1.dev20250724
- vLLM (the same version as vllm-ascend)
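As a quick sanity check of the software requirements above, a short snippet like the following can verify the environment; the version thresholds are copied from the list, and the checks themselves are only a suggested sketch:

```python
import sys

import torch

# Check the interpreter and torch versions against the prerequisites above.
assert (3, 9) <= sys.version_info[:2] < (3, 12), "Python >= 3.9, < 3.12 required"
print("torch:", torch.__version__)  # expect >= 2.7.1

import torch_npu  # raises ImportError if torch-npu is not installed

print("torch_npu:", torch_npu.__version__)  # expect >= 2.7.1.dev20250724
```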
## Getting Started
Please use the following recommended versions to get started quickly:
| Version | Release type | Doc |
|---|---|---|
| v0.11.0rc0 | Latest release candidate | QuickStart and Installation for more details |
| v0.9.1 | Latest stable version | QuickStart and Installation for more details |
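For a first run, the standard vLLM offline-inference API works unchanged once vllm-ascend is installed on an Ascend machine; the NPU backend is picked up via the plugin mechanism. The model name below is only an example:

```python
from vllm import LLM, SamplingParams

# Plain vLLM offline inference; nothing Ascend-specific is needed in user code.
prompts = ["Hello, my name is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")  # example model, swap in your own
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```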
## Contributing
See CONTRIBUTING for more details; it is a step-by-step guide to help you set up the development environment, build, and test.
We welcome and value any contributions and collaborations:
- Please let us know if you encounter a bug by filing an issue
- Please use the Users Forum for usage questions and help.
## Branch
vllm-ascend has a main branch and dev branches.
- main: the main branch, which corresponds to the vLLM main branch and is continuously monitored for quality through Ascend CI.
- vX.Y.Z-dev: development branches, created for selected vLLM releases. For example, `v0.7.3-dev` is the dev branch for vLLM `v0.7.3`.
Below are the maintained branches:
| Branch | Status | Note |
|---|---|---|
| main | Maintained | CI commitment for vLLM main branch and vLLM v0.11.0 tag |
| v0.7.1-dev | Unmaintained | Only doc fixes are allowed |
| v0.7.3-dev | Maintained | CI commitment for vLLM 0.7.3 version; only bug fixes are allowed, and no new release tags anymore |
| v0.9.1-dev | Maintained | CI commitment for vLLM 0.9.1 version |
| rfc/feature-name | Maintained | Feature branches for collaboration |
Please refer to Versioning policy for more details.
## Weekly Meeting
- vLLM Ascend Weekly Meeting: https://tinyurl.com/vllm-ascend-meeting
- Wednesday, 15:00 - 16:00 (UTC+8, Convert to your timezone)
## License
Apache License 2.0, as found in the LICENSE file.
