wangxiyuan 49b850270f [Community] Nominate new maintainers: @yiz-liu @paulyu12 @weijinqian0 @nalinaly (#3406)
I'd like to nominate 4 new maintainers for vllm-ascend: 

----

Yizhou Liu [@yiz-liu](https://github.com/yiz-liu)
----

**Review Quality**: He has completed [40+
reviews](https://github.com/vllm-project/vllm-ascend/pulls?q=is%3Apr+commenter%3Ayiz-liu)
and provided solutions or guidance for [10+
issues](https://github.com/vllm-project/vllm-ascend/issues?q=is%3Aissue%20commenter%3Ayiz-liu),
including many high-quality reviews such as
[#issue-3428408401](https://github.com/vllm-project/vllm-ascend/issues/3002#issue-3428408401),
[#discussion_r2224572309](https://github.com/vllm-project/vllm-ascend/pull/1803#discussion_r2224572309),
[#issuecomment-2982470226](https://github.com/vllm-project/vllm-ascend/pull/1261#issuecomment-2982470226),
[#issuecomment-2903621197](https://github.com/vllm-project/vllm-ascend/pull/836#issuecomment-2903621197),
and [#issuecomment-2857678691](https://github.com/vllm-project/vllm-ascend/issues/778#issuecomment-2857678691).

**Sustained and High-Quality Contributions:** He has contributed [30+
commits](https://github.com/vllm-project/vllm-ascend/commits?author=yiz-liu)
since March 2025; his aclgraph-, DP-, and EP-related contributions are
the main reason for this nomination. As the owner of aclgraph support,
he continuously improves aclgraph stability and performance and fixes
key bugs. He also laid the groundwork for EP-related functionality and
delivered multiple foundational improvements.

**Community Involvement:** He has a good habit of logging issues
(e.g., https://github.com/vllm-project/vllm-ascend/issues/1649) and is
also very active in [many
issues](https://github.com/vllm-project/vllm-ascend/issues?q=is%3Aissue%20state%3Aopen%20commenter%3Ayiz-liu%20-author%3Ayiz-liu),
helping users resolve their problems.

----

Peng Yu [@paulyu12](https://github.com/paulyu12)
---
The main reasons for his nomination are his expertise in LoRA and his
sustained, major contributions (initial support, docs, bug fixes)
around LoRA.

**Sustained and Major Contributions:** @paulyu12 started his
contributions with [LoRA and Multi-LoRA
support](697908f5cd)
in April 2025 and has since contributed about [10+ commits and bug
fixes](697908f5cd)
to vllm-ascend.

**Review Quality and Community Involvement:** He has also helped more
than 10 users address [LoRA-related
issues](https://github.com/vllm-project/vllm-ascend/pulls?q=is%3Apr+commenter%3Apaulyu12+-author%3Apaulyu12+is%3Aclosed).

I believe his addition will further improve vLLM Ascend's LoRA support.

----

Jinqian Wei [@weijinqian0](https://github.com/weijinqian0)
---
The main reasons for his nomination are his key contributions to the RL
scene and the high quality of his code reviews.

**Review Quality:** He has completed [60+
reviews](https://github.com/vllm-project/vllm-ascend/pulls?q=is%3Apr+commenter%3Aweijinqian0+is%3Aopen+-author%3Aweijinqian0)
since June 2025, including high-quality reviews such as
[#comment-3284055430](https://github.com/vllm-project/vllm-ascend/pull/2791#issuecomment-3284055430),
[discussion_r2332166704](https://github.com/vllm-project/vllm-ascend/pull/2817#discussion_r2332166704),
and
[discussion_r2343289692](https://github.com/vllm-project/vllm-ascend/pull/2846#discussion_r2343289692).

**Sustained and Quality Contributions:** He has a deep understanding of
the vLLM and vLLM Ascend codebases and has made solid contributions in
the RL scene, with about [10+ PRs
merged](https://github.com/vllm-project/vllm-ascend/pulls?q=is%3Apr+author%3Aweijinqian0+is%3Amerged+)
as author and 10+ PRs merged as co-author:

- Code refactoring: as a co-author, he participated in refactoring the
MoE module
(https://github.com/vllm-project/vllm-ascend/pull/2150,
https://github.com/vllm-project/vllm-ascend/pull/2706,
https://github.com/vllm-project/vllm-ascend/pull/2867).
- Performance enhancement for RL: as a co-author, he participated in
the design and development of the solution and contributed to the
planning of core capabilities
(https://github.com/vllm-project/vllm-ascend/pull/1547,
https://github.com/vllm-project/vllm-ascend/pull/2120, and so on).

So I think he's a great addition to the vLLM Ascend Maintainer team.

----

Chuanyu Qin [@nalinaly](https://github.com/nalinaly)
---
The main reason I nominated Chuanyu is that he is the initial designer
of aclgraph and torch-npu, two key components of vllm-ascend.
Considering that aclgraph will eventually become the main path for
vllm-ascend's graph mode, I propose to nominate him.

**Sustained and Major Contributions:** Chuanyu has actively helped
vllm-ascend users and developers since March 2025
([vllm-discuss#162](https://discuss.vllm.ai/t/can-ascend-officially-draft-a-documentation-on-the-vllm-ascend-adaptation-for-graph-mode/162/5))
and helped early users of vllm-ascend understand aclgraph. He provided
a great deal of help in the process of integrating aclgraph with
vllm-ascend.

**Community Involvement:** As a speaker, he also helps users understand
aclgraph and torch_npu; see his talk ["The design philosophy of
torch_npu and the high-performance principles of
aclGraph"](https://github.com/PyTorch-China/pytorch-meetup/blob/main/beijing-2025/%E3%80%905%E3%80%91torch_npu%20%E7%9A%84%E8%AE%BE%E8%AE%A1%E5%93%B2%E5%AD%A6%E4%B8%8E%20aclGraph%20%E9%AB%98%E6%80%A7%E8%83%BD%E5%8E%9F%E7%90%86-%E7%A7%A6%E4%BC%A0%E7%91%9C-0920.pdf).

----

They have all contributed actively to vllm-ascend or have rich
experience with Ascend AI.

Welcome!
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-10-14 08:51:58 +08:00


vLLM Ascend Plugin



Latest News 🔥

  • [2025/09] We released the new official version v0.9.1! Please follow the official guide to start deploying large-scale Expert Parallelism (EP) on Ascend.
  • [2025/08] We hosted the vLLM Beijing Meetup with vLLM and Tencent! Please find the meetup slides here.
  • [2025/06] User stories page is now live! It kicks off with LLaMA-Factory/verl/TRL/GPUStack to demonstrate how vLLM Ascend assists Ascend users in enhancing their experience across fine-tuning, evaluation, reinforcement learning (RL), and deployment scenarios.
  • [2025/06] Contributors page is now live! All contributions deserve to be recorded, thanks for all contributors.
  • [2025/05] We released the first official version v0.7.3! We collaborated with the vLLM community to publish a blog post sharing our practice: Introducing vLLM Hardware Plugin, Best Practice from Ascend NPU.
  • [2025/03] We hosted the vLLM Beijing Meetup with vLLM team! Please find the meetup slides here.
  • [2025/02] vLLM community officially created vllm-project/vllm-ascend repo for running vLLM seamlessly on the Ascend NPU.
  • [2024/12] We are working with the vLLM community to support [RFC]: Hardware pluggable.

Overview

vLLM Ascend (vllm-ascend) is a community-maintained hardware plugin for running vLLM seamlessly on the Ascend NPU.

It is the recommended approach for supporting the Ascend backend within the vLLM community. It adheres to the principles outlined in the [RFC]: Hardware pluggable, providing a hardware-pluggable interface that decouples the integration of the Ascend NPU with vLLM.

By using the vLLM Ascend plugin, popular open-source models, including Transformer-like, Mixture-of-Experts, embedding, and multi-modal LLMs, can run seamlessly on the Ascend NPU.
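
To make the decoupled integration concrete, here is a minimal sketch of how a platform plugin can announce itself to vLLM through a Python entry point, in the spirit of the hardware-pluggable RFC. The entry-point group and the `NPUPlatform` class path are illustrative assumptions, not a verbatim excerpt of vllm-ascend:

```python
# Sketch of a hardware plugin's registration hook (illustrative).
# vLLM discovers platform plugins via Python entry points and calls
# the registered function to learn which platform class to load.

# vllm_ascend/__init__.py (plugin side)
def register() -> str:
    """Return the fully qualified platform class for vLLM to load."""
    return "vllm_ascend.platform.NPUPlatform"  # assumed class path

# pyproject.toml (plugin side) then exposes the hook to vLLM:
#
# [project.entry-points."vllm.platform_plugins"]
# ascend = "vllm_ascend:register"
```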

Prerequisites

  • Hardware: Atlas 800I A2 Inference series, Atlas A2 Training series, Atlas 800I A3 Inference series, Atlas A3 Training series, Atlas 300I Duo (Experimental)
  • OS: Linux
  • Software:
    • Python >= 3.9, < 3.12
    • CANN >= 8.2.rc1 (for the matching Ascend HDK version, refer here)
    • PyTorch >= 2.7.1, torch-npu >= 2.7.1.dev20250724
    • vLLM (the same version as vllm-ascend)
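
To sanity-check the software prerequisites above on a target machine, a quick script along these lines can help. It is a sketch: the package names are the real ones, and `torch.npu.is_available()` assumes the NPU hooks that importing `torch_npu` installs into PyTorch:

```python
# Hypothetical pre-flight check mirroring the prerequisite list above.
import sys

# Python >= 3.9, < 3.12
assert (3, 9) <= sys.version_info[:2] < (3, 12), "unsupported Python version"

import torch      # PyTorch >= 2.7.1
import torch_npu  # torch-npu >= 2.7.1.dev20250724 (adds NPU support to torch)
import vllm       # must match the installed vllm-ascend version

print("torch:", torch.__version__)
print("torch_npu:", torch_npu.__version__)
print("vllm:", vllm.__version__)
print("NPU available:", torch.npu.is_available())  # provided via torch_npu
```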

Getting Started

Please use the following recommended versions to get started quickly:

| Version | Release type | Doc |
|---------|--------------|-----|
| v0.11.0rc0 | Latest release candidate | QuickStart and Installation for more details |
| v0.9.1 | Latest stable version | QuickStart and Installation for more details |
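
With one of the recommended versions installed, the stock vLLM offline-inference API runs on the NPU without code changes; a minimal sketch (the model name is only an example):

```python
# Minimal offline-inference sketch on vllm-ascend; the standard vLLM
# API is unchanged, and the plugin routes execution to the Ascend NPU.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")  # example model
params = SamplingParams(temperature=0.8, max_tokens=64)

for output in llm.generate(["Hello, my name is"], params):
    print(output.outputs[0].text)
```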

Contributing

See CONTRIBUTING for more details; it is a step-by-step guide that helps you set up the development environment, build, and test.

We welcome and value any contributions and collaborations.

Branch

vllm-ascend has a main branch and dev branches.

  • main: the main branch corresponds to the vLLM main branch and is continuously monitored for quality through Ascend CI.
  • vX.Y.Z-dev: development branches, created for selected new releases of vLLM. For example, v0.7.3-dev is the dev branch for vLLM v0.7.3.

Below are the maintained branches:

| Branch | Status | Note |
|--------|--------|------|
| main | Maintained | CI commitment for vLLM main branch and vLLM v0.11.0 tag |
| v0.7.1-dev | Unmaintained | Only doc fixes are allowed |
| v0.7.3-dev | Maintained | CI commitment for vLLM 0.7.3 version; only bug fixes are allowed, and no new release tags anymore |
| v0.9.1-dev | Maintained | CI commitment for vLLM 0.9.1 version |
| rfc/feature-name | Maintained | Feature branches for collaboration |

Please refer to Versioning policy for more details.

Weekly Meeting

License

Apache License 2.0, as found in the LICENSE file.
