[Doc] Update readme (#5226)
Add 0.11.0 news to the README and correct the main branch maintenance rule.
- vLLM version: release/v0.13.0
- vLLM main: ad32e3e19c
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
@@ -19,6 +19,7 @@ vLLM Ascend Plugin
---
*Latest News* 🔥
+- [2025/12] We released the new official version [v0.11.0](https://github.com/vllm-project/vllm-ascend/releases/tag/v0.11.0)! Please follow the [official guide](https://docs.vllm.ai/projects/ascend/en/v0.11.0-dev/) to start using the vLLM Ascend Plugin on Ascend.
- [2025/09] We released the new official version [v0.9.1](https://github.com/vllm-project/vllm-ascend/releases/tag/v0.9.1)! Please follow the [official guide](https://vllm-ascend.readthedocs.io/en/v0.9.1-dev/tutorials/large_scale_ep.html) to start deploying large-scale Expert Parallelism (EP) on Ascend.
- [2025/08] We hosted the [vLLM Beijing Meetup](https://mp.weixin.qq.com/s/7n8OYNrCC_I9SJaybHA_-Q) with vLLM and Tencent! Please find the meetup slides [here](https://drive.google.com/drive/folders/1Pid6NSFLU43DZRi0EaTcPgXsAzDvbBqF).
- [2025/06] The [User stories](https://vllm-ascend.readthedocs.io/en/latest/community/user_stories/index.html) page is now live! It kicks off with LLaMA-Factory/verl/TRL/GPUStack to demonstrate how vLLM Ascend assists Ascend users in enhancing their experience across fine-tuning, evaluation, reinforcement learning (RL), and deployment scenarios.
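For context on the quick-start and EP news items above, here is a minimal offline-inference sketch using the standard vLLM Python API; it is not part of this diff. It assumes vllm-ascend has already been installed on an Ascend NPU host following the linked official guide, and the model name is only a placeholder.

```python
# Minimal offline-inference sketch with the standard vLLM Python API.
# Assumes vllm-ascend is installed per the official guide linked above,
# so the Ascend platform plugin is picked up automatically.
from vllm import LLM, SamplingParams

prompts = ["Hello, my name is", "The future of AI is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# The model name below is a placeholder; use any model supported by your setup.
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```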
@@ -73,7 +74,7 @@ Below are the maintained branches:
| Branch | Status | Note |
|------------|--------------|--------------------------------------|
-| main | Maintained | CI commitment for vLLM main branch and vLLM v0.12.0 tag |
+| main | Maintained | CI commitment for vLLM main branch and vLLM v0.13.0 tag |
| v0.7.1-dev | Unmaintained | Only doc fixes are allowed |
| v0.7.3-dev | Maintained | CI commitment for vLLM 0.7.3 version; only bug fixes are allowed and no new release tags anymore. |
| v0.9.1-dev | Maintained | CI commitment for vLLM 0.9.1 version |
@@ -20,6 +20,7 @@ vLLM Ascend Plugin
---
*Latest News* 🔥
+- [2025/12] We released the new official version [v0.11.0](https://github.com/vllm-project/vllm-ascend/releases/tag/v0.11.0)! Please follow the [official guide](https://docs.vllm.ai/projects/ascend/en/v0.11.0-dev/) to start deploying the vLLM Ascend Plugin on Ascend.
- [2025/09] We released the new official version [v0.9.1](https://github.com/vllm-project/vllm-ascend/releases/tag/v0.9.1)! Please follow the [official guide](https://vllm-ascend.readthedocs.io/en/v0.9.1-dev/tutorials/large_scale_ep.html) to start deploying large-scale Expert Parallelism (EP) on Ascend.
- [2025/08] We hosted the [vLLM Beijing Meetup](https://mp.weixin.qq.com/s/7n8OYNrCC_I9SJaybHA_-Q) together with vLLM and Tencent! The meetup slides are available [here](https://drive.google.com/drive/folders/1Pid6NSFLU43DZRi0EaTcPgXsAzDvbBqF).
- [2025/06] The [User stories](https://vllm-ascend.readthedocs.io/en/latest/community/user_stories/index.html) page is now live! It kicks off with LLaMA-Factory/verl/TRL/GPUStack and shows how vLLM Ascend helps Ascend users improve their experience across fine-tuning, evaluation, reinforcement learning (RL), and deployment scenarios.
@@ -73,7 +74,7 @@ vllm-ascend has a main branch and development branches.
| Branch | Status | Note |
|------------|------------|---------------------|
-| main | Maintained | CI commitment for the vLLM main branch and the latest vLLM release (v0.12.0) |
+| main | Maintained | CI commitment for the vLLM main branch and the latest vLLM release (v0.13.0) |
| v0.7.1-dev | Unmaintained | Only doc fixes are allowed |
| v0.7.3-dev | Maintained | CI commitment for vLLM v0.7.3; only bug fixes are allowed and no new releases will be published |
| v0.9.1-dev | Maintained | CI commitment for vLLM v0.9.1 |