wangxiyuan 0f571c347b Nominate new maintainers @zzzzwwjj @realliujiaxu @LCAIZJ (#5152)
I'd like to nominate @zzzzwwjj, @realliujiaxu, and @LCAIZJ to join the vLLM
Ascend committer team.

@zzzzwwjj
---
- Review Quality:
He has completed 80+ reviews since April 2025, including high-quality
reviews such as
https://github.com/vllm-project/vllm-ascend/pull/3232#issuecomment-3506110786,
https://github.com/vllm-project/vllm-ascend/pull/4822#discussion_r2601661204, and
https://github.com/vllm-project/vllm-ascend/pull/4768#issuecomment-3644795995.

- Sustained Contributions:
15+ valuable bug fixes and refactors:
https://github.com/vllm-project/vllm-ascend/pulls?q=is%3Apr+author%3Azzzzwwjj+is%3Aclosed+review%3Aapproved
Continuous optimization of the code architecture:
https://github.com/vllm-project/vllm-ascend/pulls?q=author%3Azzzzwwjj+is%3Amerged

- Quality Contributions:
https://github.com/vllm-project/vllm-ascend/pull/1229
https://github.com/vllm-project/vllm-ascend/pull/1979
https://github.com/vllm-project/vllm-ascend/pull/4359
https://github.com/vllm-project/vllm-ascend/pull/4878

- Community Involvement:
He led https://github.com/vllm-project/vllm-ascend/issues/1147, the
first refactor of AscendFusedMoE.
He also shared talks on large-scale distributed inference and
reinforcement learning at the vLLM Ascend meetup on August 2nd.

@realliujiaxu
---
- Review Quality:
He has completed [40+
reviews](https://github.com/vllm-project/vllm-ascend/pulls?q=is%3Apr+commenter%3Arealliujiaxu+-author%3Arealliujiaxu+)
since September, including
https://github.com/vllm-project/vllm-ascend/pull/4868#discussion_r2605549015 and
https://github.com/vllm-project/vllm-ascend/pull/2275#discussion_r2268455665.

- Sustained Contributions:
He has completed [17
commits](https://github.com/vllm-project/vllm-ascend/pulls?q=is%3Apr+author%3Arealliujiaxu+is%3Amerged),
continuously optimizing the performance of MoE models.

- Quality Contributions:

Contributed the Flash Comm1 feature to the community, supporting both
eager and aclgraph execution modes while remaining compatible with
multiple MoE models, including DeepSeek and GLM4.5:
  - https://github.com/vllm-project/vllm-ascend/pull/3334
  - https://github.com/vllm-project/vllm-ascend/pull/3420
  - https://github.com/vllm-project/vllm-ascend/pull/3015
  
  Co-authored:
  - https://github.com/vllm-project/vllm-ascend/pull/3495
  - https://github.com/vllm-project/vllm-ascend/pull/4868

- Community Involvement:
1. Completed two major refactors, enabling vllm-ascend to evolve more
rapidly and robustly: the [Linear
module](https://github.com/vllm-project/vllm-ascend/pull/2867) and the
[rejection
sampler](https://github.com/vllm-project/vllm-ascend/pull/4975).
2. [Fixed 8
bugs](https://github.com/vllm-project/vllm-ascend/pulls?q=is%3Apr+author%3Arealliujiaxu+is%3Amerged+bugfix+)
in graph mode, spec decoding, and async scheduling.

@LCAIZJ
---
- Review Quality: He's been the go-to reviewer for virtually all PD
disaggregation and KV Pool related PRs, having completed [30+
reviews](https://github.com/vllm-project/vllm-ascend/pulls?q=is%3Apr+commenter%3ALCAIZJ+is%3Aopen+-author%3ALCAIZJ+)
since May 2025. Notable examples include
[discussion_r2553887360](https://github.com/vllm-project/vllm-ascend/pull/4345#discussion_r2553887360),
[issuecomment-3540994801](https://github.com/vllm-project/vllm-ascend/pull/4161#issuecomment-3540994801),
and
[discussion_r2492593988](https://github.com/vllm-project/vllm-ascend/pull/3981#discussion_r2492593988),
all demonstrating thorough and insightful feedback.
- Sustained and Quality Contributions: His contributions reflect a
strong grasp of both the vLLM and vLLM Ascend codebases, particularly in
the prefill-decode disaggregation and KV Pool areas ([7 PRs
merged](https://github.com/vllm-project/vllm-ascend/pulls?q=is%3Apr+author%3ALCAIZJ+is%3Amerged+)).
  - Prefill-Decode Disaggregation: delivered KV transfer functionality
    using Mooncake TransferEngine and enabled layerwise KV transfer:
    https://github.com/vllm-project/vllm-ascend/pull/1568
    https://github.com/vllm-project/vllm-ascend/pull/2602
  - KV Pool: developed the foundational KV Pool infrastructure and
    migrated it to the latest ADXL stack:
    https://github.com/vllm-project/vllm-ascend/pull/2913
    https://github.com/vllm-project/vllm-ascend/pull/3350
- Community Involvement:
He actively responds to [community
issues](https://github.com/vllm-project/vllm-ascend/issues?q=is%3Aissue%20commenter%3ALCAIZJ%20is%3Aopen%20-author%3ALCAIZJ),
continuously monitors functionality and accuracy issues related to PD
disaggregation and KV Pool, and proactively delivers [bug
fixes](https://github.com/vllm-project/vllm-ascend/pulls?q=is%3Apr+author%3ALCAIZJ+is%3Amerged+bugfix).
- vLLM version: v0.12.0
- vLLM main: ad32e3e19c

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>


vLLM Ascend Plugin

| About Ascend | Documentation | #sig-ascend | Users Forum | Weekly Meeting |

English | 中文


Latest News 🔥

  • [2025/09] We released the new official version v0.9.1! Please follow the official guide to start deploying large-scale Expert Parallelism (EP) on Ascend.
  • [2025/08] We hosted the vLLM Beijing Meetup with vLLM and Tencent! Please find the meetup slides here.
  • [2025/06] User stories page is now live! It kicks off with LLaMA-Factory/verl/TRL/GPUStack to demonstrate how vLLM Ascend assists Ascend users in enhancing their experience across fine-tuning, evaluation, reinforcement learning (RL), and deployment scenarios.
  • [2025/06] Contributors page is now live! All contributions deserve to be recorded; thanks to all our contributors.
  • [2025/05] We've released the first official version v0.7.3! We collaborated with the vLLM community to publish a blog post sharing our practice: Introducing vLLM Hardware Plugin, Best Practice from Ascend NPU.
  • [2025/03] We hosted the vLLM Beijing Meetup with vLLM team! Please find the meetup slides here.
  • [2025/02] vLLM community officially created vllm-project/vllm-ascend repo for running vLLM seamlessly on the Ascend NPU.
  • [2024/12] We are working with the vLLM community to support [RFC]: Hardware pluggable.

Overview

vLLM Ascend (vllm-ascend) is a community maintained hardware plugin for running vLLM seamlessly on the Ascend NPU.

It is the recommended approach for supporting the Ascend backend within the vLLM community. It adheres to the principles outlined in the [RFC]: Hardware pluggable, providing a hardware-pluggable interface that decouples the integration of the Ascend NPU with vLLM.

By using the vLLM Ascend plugin, popular open-source models, including Transformer-like, Mixture-of-Experts, embedding, and multi-modal LLMs, can run seamlessly on the Ascend NPU.
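
As a rough illustration of how such a hardware plugin hooks into vLLM, here is a minimal sketch following the entry-point mechanism described in the hardware-pluggable RFC. The names `my_npu_plugin` and `MyNPUPlatform` are hypothetical placeholders, not vllm-ascend's actual module layout:

```python
# Sketch of an out-of-tree platform plugin, per the vLLM hardware-pluggable RFC.
# The plugin package declares an entry point in its pyproject.toml, e.g.:
#
#   [project.entry-points."vllm.platform_plugins"]
#   my_npu = "my_npu_plugin:register"
#
# vLLM discovers and calls register() at startup; returning the dotted path
# of a Platform subclass activates this backend.

def register() -> str | None:
    # Return None if the hardware is not present, so vLLM skips this plugin.
    return "my_npu_plugin.platform.MyNPUPlatform"
```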

Prerequisites

  • Hardware: Atlas 800I A2 Inference series, Atlas A2 Training series, Atlas 800I A3 Inference series, Atlas A3 Training series, Atlas 300I Duo (Experimental)
  • OS: Linux
  • Software:
    • Python >= 3.10, < 3.12
    • CANN == 8.3.rc2 (see here for the matching Ascend HDK version)
    • PyTorch == 2.8.0, torch-npu == 2.8.0
    • vLLM (the same version as vllm-ascend)
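
As a quick sanity check of the Python and PyTorch constraints above, here is a hedged sketch; it assumes `torch` and `torch-npu` are the installed distribution names, and CANN must still be verified separately:

```python
# Pre-flight check for the version constraints listed above (illustrative only).
import sys
from importlib.metadata import version

# Python >= 3.10, < 3.12
assert (3, 10) <= sys.version_info[:2] < (3, 12), "Python must be >= 3.10 and < 3.12"

# PyTorch == 2.8.0, torch-npu == 2.8.0
for pkg, expected in (("torch", "2.8.0"), ("torch-npu", "2.8.0")):
    installed = version(pkg)
    assert installed.startswith(expected), f"{pkg} is {installed}, expected {expected}"

print("Python/PyTorch prerequisites look OK (check CANN separately).")
```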

Getting Started

Please use the following recommended versions to get started quickly:

| Version | Release type | Doc |
|---|---|---|
| v0.12.0rc1 | Latest release candidate | QuickStart and Installation for more details |
| v0.11.0 | Latest stable version | QuickStart and Installation for more details |
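
Once installed, inference looks the same as on any other vLLM backend. A minimal sketch using vLLM's offline `LLM` API follows; the model name is only an example, and any model supported on the Ascend backend works:

```python
from vllm import LLM, SamplingParams

# Example model; substitute any model supported on the Ascend backend.
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["Hello, my name is"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```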

Contributing

See CONTRIBUTING for more details; it is a step-by-step guide to help you set up the development environment, build, and test.

We welcome and value any contributions and collaborations.

Branch

vllm-ascend has a main branch and dev branches.

  • main: the main branch corresponds to the vLLM main branch, and is continuously monitored for quality through Ascend CI.
  • vX.Y.Z-dev: development branches, created for some new vLLM releases. For example, v0.7.3-dev is the dev branch for vLLM v0.7.3.

Below are the maintained branches:

| Branch | Status | Note |
|---|---|---|
| main | Maintained | CI commitment for vLLM main branch and vLLM v0.12.0 tag |
| v0.7.1-dev | Unmaintained | Only doc fixes are allowed |
| v0.7.3-dev | Maintained | CI commitment for vLLM 0.7.3 version; only bug fixes are allowed and no new release tags anymore |
| v0.9.1-dev | Maintained | CI commitment for vLLM 0.9.1 version |
| v0.11.0-dev | Maintained | CI commitment for vLLM 0.11.0 version |
| rfc/feature-name | Maintained | Feature branches for collaboration |

Please refer to Versioning policy for more details.

Weekly Meeting

License

Apache License 2.0, as found in the LICENSE file.
