[Doc] Improved English grammar and integrated the DeepWiki badge for Ask AI (#6216)
### What this PR does / why we need it?
README.md: Improved English grammar and integrated the DeepWiki badge for Ask AI
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
None
- vLLM version: v0.14.0
- vLLM main: d68209402d
---------
Signed-off-by: fyfugoyfa <zenghaolong@huawei.com>
Signed-off-by: Mitchell-xiyunfeng <3617237115@qq.com>
Co-authored-by: fyfugoyfa <zenghaolong@huawei.com>
Changed files: README.md (34 changed lines)
@@ -9,8 +9,14 @@
 vLLM Ascend Plugin
 </h3>
 
+<div align="center">
+
+[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/vllm-project/vllm-ascend)
+
+</div>
+
 <p align="center">
-| <a href="https://www.hiascend.com/en/"><b>About Ascend</b></a> | <a href="https://docs.vllm.ai/projects/ascend/en/latest/"><b>Documentation</b></a> | <a href="https://slack.vllm.ai"><b>#sig-ascend</b></a> | <a href="https://discuss.vllm.ai/c/hardware-support/vllm-ascend-support"><b>Users Forum</b></a> | <a href="https://tinyurl.com/vllm-ascend-meeting"><b>Weekly Meeting</b></a> |
+| <a href="https://www.hiascend.com/en/"><b>About Ascend</b></a> | <a href="https://docs.vllm.ai/projects/ascend/en/latest/"><b>Documentation</b></a> | <a href="https://slack.vllm.ai"><b>#SIG-Ascend</b></a> | <a href="https://discuss.vllm.ai/c/hardware-support/vllm-ascend-support"><b>Users Forum</b></a> | <a href="https://tinyurl.com/vllm-ascend-meeting"><b>Weekly Meeting</b></a> |
 </p>
 
 <p align="center">
@@ -21,11 +27,11 @@ vLLM Ascend Plugin
 *Latest News* 🔥
 
 - [2025/12] We released the new official version [v0.11.0](https://github.com/vllm-project/vllm-ascend/releases/tag/v0.11.0)! Please follow the [official guide](https://docs.vllm.ai/projects/ascend/en/v0.11.0/) to start using vLLM Ascend Plugin on Ascend.
-- [2025/09] We released the new official version [v0.9.1](https://github.com/vllm-project/vllm-ascend/releases/tag/v0.9.1)! Please follow the [official guide](https://docs.vllm.ai/projects/ascend/en/v0.9.1/tutorials/large_scale_ep.html) to start deploy large scale Expert Parallelism (EP) on Ascend.
+- [2025/09] We released the new official version [v0.9.1](https://github.com/vllm-project/vllm-ascend/releases/tag/v0.9.1)! Please follow the [official guide](https://docs.vllm.ai/projects/ascend/en/v0.9.1/tutorials/large_scale_ep.html) to start deploying large-scale Expert Parallelism (EP) on Ascend.
 - [2025/08] We hosted the [vLLM Beijing Meetup](https://mp.weixin.qq.com/s/7n8OYNrCC_I9SJaybHA_-Q) with vLLM and Tencent! Please find the meetup slides [here](https://drive.google.com/drive/folders/1Pid6NSFLU43DZRi0EaTcPgXsAzDvbBqF).
-- [2025/06] [User stories](https://docs.vllm.ai/projects/ascend/en/latest/community/user_stories/index.html) page is now live! It kicks off with LLaMA-Factory/verl//TRL/GPUStack to demonstrate how vLLM Ascend assists Ascend users in enhancing their experience across fine-tuning, evaluation, reinforcement learning (RL), and deployment scenarios.
+- [2025/06] [User stories](https://docs.vllm.ai/projects/ascend/en/latest/community/user_stories/index.html) page is now live! It kicks off with LLaMA-Factory/verl/TRL/GPUStack to demonstrate how vLLM Ascend assists Ascend users in enhancing their experience across fine-tuning, evaluation, reinforcement learning (RL), and deployment scenarios.
 - [2025/06] [Contributors](https://docs.vllm.ai/projects/ascend/en/latest/community/contributors.html) page is now live! All contributions deserve to be recorded, thanks for all contributors.
-- [2025/05] We've released first official version [v0.7.3](https://github.com/vllm-project/vllm-ascend/releases/tag/v0.7.3)! We collaborated with the vLLM community to publish a blog post sharing our practice: [Introducing vLLM Hardware Plugin, Best Practice from Ascend NPU](https://blog.vllm.ai/2025/05/12/hardware-plugin.html).
+- [2025/05] We've released the first official version [v0.7.3](https://github.com/vllm-project/vllm-ascend/releases/tag/v0.7.3)! We collaborated with the vLLM community to publish a blog post sharing our practice: [Introducing vLLM Hardware Plugin, Best Practice from Ascend NPU](https://blog.vllm.ai/2025/05/12/hardware-plugin.html).
 - [2025/03] We hosted the [vLLM Beijing Meetup](https://mp.weixin.qq.com/s/VtxO9WXa5fC-mKqlxNUJUQ) with vLLM team! Please find the meetup slides [here](https://drive.google.com/drive/folders/1Pid6NSFLU43DZRi0EaTcPgXsAzDvbBqF).
 - [2025/02] vLLM community officially created [vllm-project/vllm-ascend](https://github.com/vllm-project/vllm-ascend) repo for running vLLM seamlessly on the Ascend NPU.
 - [2024/12] We are working with the vLLM community to support [[RFC]: Hardware pluggable](https://github.com/vllm-project/vllm/issues/11162).
@@ -38,7 +44,7 @@ vLLM Ascend (`vllm-ascend`) is a community maintained hardware plugin for runnin
 
 It is the recommended approach for supporting the Ascend backend within the vLLM community. It adheres to the principles outlined in the [[RFC]: Hardware pluggable](https://github.com/vllm-project/vllm/issues/11162), providing a hardware-pluggable interface that decouples the integration of the Ascend NPU with vLLM.
 
-By using vLLM Ascend plugin, popular open-source models, including Transformer-like, Mixture-of-Expert, Embedding, Multi-modal LLMs can run seamlessly on the Ascend NPU.
+By using vLLM Ascend plugin, popular open-source models, including Transformer-like, Mixture-of-Experts (MoE), Embedding, Multi-modal LLMs can run seamlessly on the Ascend NPU.
 
 ## Prerequisites
 
@@ -56,12 +62,12 @@ Please use the following recommended versions to get started quickly:
 
 | Version | Release type | Doc |
 |------------|--------------|--------------------------------------|
-|v0.13.0rc1|Latest release candidate|[QuickStart](https://docs.vllm.ai/projects/ascend/en/latest/quick_start.html) and [Installation](https://docs.vllm.ai/projects/ascend/en/latest/installation.html) for more details|
-|v0.11.0|Latest stable version|[QuickStart](https://docs.vllm.ai/projects/ascend/en/v0.11.0/quick_start.html) and [Installation](https://docs.vllm.ai/projects/ascend/en/v0.11.0/installation.html) for more details|
+| v0.13.0rc1 | Latest release candidate | See [QuickStart](https://docs.vllm.ai/projects/ascend/en/latest/quick_start.html) and [Installation](https://docs.vllm.ai/projects/ascend/en/latest/installation.html) for more details |
+| v0.11.0 | Latest stable version | See [QuickStart](https://docs.vllm.ai/projects/ascend/en/v0.11.0/quick_start.html) and [Installation](https://docs.vllm.ai/projects/ascend/en/v0.11.0/installation.html) for more details |
 
 ## Contributing
 
-See [CONTRIBUTING](https://docs.vllm.ai/projects/ascend/en/latest/developer_guide/contribution/index.html) for more details, which is a step-by-step guide to help you set up development environment, build and test.
+See [CONTRIBUTING](https://docs.vllm.ai/projects/ascend/en/latest/developer_guide/contribution/index.html) for more details, which is a step-by-step guide to help you set up the development environment, build and test.
 
 We welcome and value any contributions and collaborations:
 
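The recommended-versions table in the hunk above pairs each release with its documentation root. As a minimal illustrative sketch (the `DOC_ROOTS` mapping and `quick_start_url` helper below are hypothetical, not part of the repo; only the version strings and URLs come from the table), the doc links can be derived like this:

```python
# Hypothetical helper: maps each recommended vllm-ascend release (from the
# table above) to its documentation root, then builds the QuickStart URL.
DOC_ROOTS = {
    "v0.13.0rc1": "https://docs.vllm.ai/projects/ascend/en/latest/",
    "v0.11.0": "https://docs.vllm.ai/projects/ascend/en/v0.11.0/",
}

def quick_start_url(version: str) -> str:
    """Return the QuickStart page for a recommended release."""
    return DOC_ROOTS[version] + "quick_start.html"

print(quick_start_url("v0.11.0"))
# https://docs.vllm.ai/projects/ascend/en/v0.11.0/quick_start.html
```

Note the release-candidate docs live under `latest/` while stable releases get a versioned path, which is why a lookup table rather than string interpolation is used here.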
@@ -70,18 +76,18 @@ We welcome and value any contributions and collaborations:
 
 ## Branch
 
-vllm-ascend has main branch and dev branch.
+vllm-ascend has a main branch and a dev branch.
 
-- **main**: main branch,corresponds to the vLLM main branch, and is continuously monitored for quality through Ascend CI.
-- **releases/vX.Y.Z**: development branch, created with part of new releases of vLLM. For example, `releases/v0.13.0` is the dev branch for vLLM `v0.13.0` version.
+- **main**: main branch, corresponds to the vLLM main branch, and is continuously monitored for quality through Ascend CI.
+- **releases/vX.Y.Z**: development branch, created alongside new releases of vLLM. For example, `releases/v0.13.0` is the dev branch for vLLM `v0.13.0` version.
 
-Below is maintained branches:
+Below are the maintained branches:
 
 | Branch | Status | Note |
 |------------|--------------|--------------------------------------|
 | main | Maintained | CI commitment for vLLM main branch and vLLM v0.13.0 tag |
-| v0.7.1-dev | Unmaintained | Only doc fixed is allowed |
-| v0.7.3-dev | Maintained | CI commitment for vLLM 0.7.3 version, only bug fix is allowed and no new release tag any more. |
+| v0.7.1-dev | Unmaintained | Only doc fixes are allowed |
+| v0.7.3-dev | Maintained | CI commitment for vLLM 0.7.3 version, only bug fixes are allowed, and no new release tags anymore. |
 | v0.9.1-dev | Maintained | CI commitment for vLLM 0.9.1 version |
 | v0.11.0-dev | Maintained | CI commitment for vLLM 0.11.0 version |
 | releases/v0.13.0 | Maintained | CI commitment for vLLM 0.13.0 version |
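The branch scheme described in the final hunk can be sketched as a tiny helper (the function below is hypothetical, for illustration only): a `releases/vX.Y.Z` dev branch tracks vLLM `vX.Y.Z`.

```python
# Hypothetical helper illustrating the naming scheme from the diff above:
# the vllm-ascend dev branch for vLLM vX.Y.Z is releases/vX.Y.Z.
def dev_branch_for(vllm_version: str) -> str:
    """Return the vllm-ascend dev branch tracking a given vLLM release."""
    if not vllm_version.startswith("v"):
        vllm_version = "v" + vllm_version  # normalize "0.13.0" -> "v0.13.0"
    return f"releases/{vllm_version}"

print(dev_branch_for("v0.13.0"))  # releases/v0.13.0, per the example in the diff
```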