Bump v0.9.1rc1 release (#1349)

### What this PR does / why we need it?
Bump v0.9.1rc1 release

Closes: https://github.com/vllm-project/vllm-ascend/pull/1341
Closes: https://github.com/vllm-project/vllm-ascend/pull/1334

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
CI passed


---------

Signed-off-by: Shanshan Shen <87969357+shen-shanshan@users.noreply.github.com>
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
Signed-off-by: leo-pony <nengjunma@outlook.com>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
Co-authored-by: leo-pony <nengjunma@outlook.com>
Co-authored-by: shen-shanshan <467638484@qq.com>
This commit is contained in:
Yikun Jiang, 2025-06-22 13:15:36 +08:00, committed by GitHub
parent 097e7149f7, commit c30ddb8331
9 changed files with 474 additions and 13 deletions


@@ -51,9 +51,6 @@ jobs:
       matrix:
         include:
           - vllm_branch: v0.9.1
-            vllm_ascend_branch: main
-            vllm_use_v1: 0
-          - vllm_branch: v0.9.0
             vllm_ascend_branch: main
             vllm_use_v1: 1
       max-parallel: 1


@@ -65,15 +65,15 @@ myst_substitutions = {
     # the branch of vllm, used in vllm clone
     # - main branch: 'main'
     # - vX.Y.Z branch: 'vX.Y.Z'
-    'vllm_version': 'v0.9.0',
+    'vllm_version': 'v0.9.1',
     # the branch of vllm-ascend, used in vllm-ascend clone and image tag
     # - main branch: 'main'
     # - vX.Y.Z branch: latest vllm-ascend release tag
-    'vllm_ascend_version': 'v0.9.0rc2',
+    'vllm_ascend_version': 'v0.9.1rc1',
     # the newest release version of vllm-ascend and matched vLLM, used in pip install.
     # This value should be updated when cut down release.
-    'pip_vllm_ascend_version': "0.9.0rc2",
-    'pip_vllm_version': "0.9.0",
+    'pip_vllm_ascend_version': "0.9.1rc1",
+    'pip_vllm_version': "0.9.1",
     # CANN image tag
     'cann_image_tag': "8.1.rc1-910b-ubuntu22.04-py3.10",
 }


@@ -22,6 +22,7 @@ Following is the Release Compatibility Matrix for vLLM Ascend Plugin:
 | vLLM Ascend | vLLM | Python | Stable CANN | PyTorch/torch_npu | MindIE Turbo |
 |-------------|--------------|------------------|-------------|--------------------|--------------|
+| v0.9.1rc1 | v0.9.1 | >= 3.9, < 3.12 | 8.1.RC1 | 2.5.1 / 2.5.1.post1.dev20250528 | |
 | v0.9.0rc2 | v0.9.0 | >= 3.9, < 3.12 | 8.1.RC1 | 2.5.1 / 2.5.1 | |
 | v0.9.0rc1 | v0.9.0 | >= 3.9, < 3.12 | 8.1.RC1 | 2.5.1 / 2.5.1 | |
 | v0.8.5rc1 | v0.8.5.post1 | >= 3.9, < 3.12 | 8.1.RC1 | 2.5.1 / 2.5.1 | |

@@ -35,6 +36,7 @@ Following is the Release Compatibility Matrix for vLLM Ascend Plugin:
 | Date | Event |
 |------------|-------------------------------------------|
+| 2025.06.22 | Release candidates, v0.9.1rc1 |
 | 2025.06.10 | Release candidates, v0.9.0rc2 |
 | 2025.06.09 | Release candidates, v0.9.0rc1 |
 | 2025.05.29 | v0.7.x post release, v0.7.3.post1 |


@@ -3,7 +3,7 @@
 ## Version Specific FAQs

 - [[v0.7.3.post1] FAQ & Feedback](https://github.com/vllm-project/vllm-ascend/issues/1007)
-- [[v0.9.0rc2] FAQ & Feedback](https://github.com/vllm-project/vllm-ascend/issues/1115)
+- [[v0.9.1rc1] FAQ & Feedback](https://github.com/vllm-project/vllm-ascend/issues/1351)

 ## General FAQs


@@ -6,6 +6,8 @@
 single_npu
 single_npu_multimodal
 multi_npu
+multi_npu_moge
 multi_npu_quantization
+single_node_300i
 multi_node
 :::


@@ -0,0 +1,117 @@
# Multi-NPU (Pangu Pro MoE 72B)
## Run vllm-ascend on Multi-NPU
Run docker container:
```{code-block} bash
:substitutions:
# Update the vllm-ascend image
export IMAGE=quay.io/ascend/vllm-ascend:|vllm_ascend_version|
docker run --rm \
--name vllm-ascend \
--device /dev/davinci0 \
--device /dev/davinci1 \
--device /dev/davinci2 \
--device /dev/davinci3 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /root/.cache:/root/.cache \
-p 8000:8000 \
-it $IMAGE bash
```
Setup environment variables:
```bash
# Set `max_split_size_mb` to reduce memory fragmentation and avoid out of memory
export PYTORCH_NPU_ALLOC_CONF=max_split_size_mb:256
```
### Online Inference on Multi-NPU
Run the following script to start the vLLM server on Multi-NPU:
```bash
vllm serve /path/to/pangu-pro-moe-model \
--tensor-parallel-size 4 \
--trust-remote-code \
--enforce-eager
```
Once your server is started, you can query the model with input prompts:
```bash
curl http://localhost:8000/v1/completions \
-H "Content-Type: application/json" \
-d '{
"model": "/path/to/pangu-pro-moe-model",
"prompt": "The future of AI is",
"max_tokens": 128,
"temperature": 0
}'
```
If the request succeeds, you will see a response like the one below:
```json
{"id":"cmpl-013558085d774d66bf30c704decb762a","object":"text_completion","created":1750472788,"model":"/path/to/pangu-pro-moe-model","choices":[{"index":0,"text":" not just about creating smarter machines but about fostering collaboration between humans and AI systems. This partnership can lead to more efficient problem-solving, innovative solutions, and a better quality of life for people around the globe.\n\nHowever, achieving this future requires addressing several challenges. Ethical considerations, such as bias in AI algorithms and privacy concerns, must be prioritized. Additionally, ensuring that AI technologies are accessible to all and do not exacerbate existing inequalities is crucial.\n\nIn conclusion, AI stands at the forefront of technological advancement, with vast potential to transform industries and everyday life. By embracing its opportunities while responsibly managing its risks, we can harn","logprobs":null,"finish_reason":"length","stop_reason":null,"prompt_logprobs":null}],"usage":{"prompt_tokens":6,"total_tokens":134,"completion_tokens":128,"prompt_tokens_details":null},"kv_transfer_params":null}
```
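If you prefer to query the endpoint from Python instead of curl, the request body and response parsing can be sketched as follows (the helper names are illustrative, not part of vLLM; the model path is a placeholder as above):

```python
import json

# Hypothetical helper mirroring the `curl -d` payload for /v1/completions.
def build_completion_request(model, prompt, max_tokens=128, temperature=0):
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

# Pull the generated text out of an OpenAI-style completion response.
def extract_text(response):
    return response["choices"][0]["text"]

payload = build_completion_request("/path/to/pangu-pro-moe-model",
                                   "The future of AI is")
print(json.dumps(payload))
```

Send `payload` with any HTTP client (for example `requests.post("http://localhost:8000/v1/completions", json=payload)`) and pass the decoded JSON response to `extract_text`.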
### Offline Inference on Multi-NPU
Run the following script to execute offline inference on multi-NPU:
```python
import gc

import torch
from vllm import LLM, SamplingParams
from vllm.distributed.parallel_state import (destroy_distributed_environment,
                                             destroy_model_parallel)


def clean_up():
    destroy_model_parallel()
    destroy_distributed_environment()
    gc.collect()
    torch.npu.empty_cache()


if __name__ == "__main__":
    prompts = [
        "Hello, my name is",
        "The future of AI is",
    ]
    sampling_params = SamplingParams(temperature=0.6, top_p=0.95, top_k=40)
    llm = LLM(model="/path/to/pangu-pro-moe-model",
              tensor_parallel_size=4,
              distributed_executor_backend="mp",
              max_model_len=1024,
              trust_remote_code=True,
              enforce_eager=True)

    outputs = llm.generate(prompts, sampling_params)
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

    del llm
    clean_up()
```
If the script runs successfully, you will see output like the following:
```bash
Prompt: 'Hello, my name is', Generated text: ' Daniel and I am an 8th grade student at York Middle School. I'
Prompt: 'The future of AI is', Generated text: ' following you. As the technology advances, a new report from the Institute for the'
```


@@ -0,0 +1,304 @@
# Single Node (Atlas 300I series)
## Run vLLM on Atlas 300I series
Run docker container:
```{code-block} bash
:substitutions:
# Update the vllm-ascend image
export IMAGE=quay.io/ascend/vllm-ascend:|vllm_ascend_version|-310p
docker run --rm \
--name vllm-ascend \
--device /dev/davinci0 \
--device /dev/davinci1 \
--device /dev/davinci2 \
--device /dev/davinci3 \
--device /dev/davinci4 \
--device /dev/davinci5 \
--device /dev/davinci6 \
--device /dev/davinci7 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /root/.cache:/root/.cache \
-p 8000:8000 \
-it $IMAGE bash
```
Setup environment variables:
```bash
# Load model from ModelScope to speed up download
export VLLM_USE_MODELSCOPE=True
# Set `max_split_size_mb` to reduce memory fragmentation and avoid out of memory
export PYTORCH_NPU_ALLOC_CONF=max_split_size_mb:256
```
### Online Inference on NPU
Run the following script to start the vLLM server on NPU (Qwen3-0.6B: 1 card, Qwen2.5-7B-Instruct: 2 cards, Pangu-Pro-MoE-72B: 8 cards):
:::::{tab-set}
::::{tab-item} Qwen3-0.6B
```{code-block} bash
:substitutions:
export VLLM_USE_V1=1
export MODEL="Qwen/Qwen3-0.6B"
python -m vllm.entrypoints.api_server \
--model $MODEL \
--tensor-parallel-size 1 \
--max-num-batched-tokens 2048 \
--gpu-memory-utilization 0.5 \
--max-num-seqs 4 \
--enforce-eager \
--trust-remote-code \
--max-model-len 1024 \
--disable-custom-all-reduce \
--dtype float16 \
--port 8000 \
--compilation-config '{"custom_ops":["+rms_norm", "+rotary_embedding"]}'
```
::::
::::{tab-item} Qwen/Qwen2.5-7B-Instruct
```{code-block} bash
:substitutions:
export VLLM_USE_V1=1
export MODEL="Qwen/Qwen2.5-7B-Instruct"
python -m vllm.entrypoints.api_server \
--model $MODEL \
--tensor-parallel-size 2 \
--max-num-batched-tokens 2048 \
--gpu-memory-utilization 0.5 \
--max-num-seqs 4 \
--enforce-eager \
--trust-remote-code \
--max-model-len 1024 \
--disable-custom-all-reduce \
--dtype float16 \
--port 8000 \
--compilation-config '{"custom_ops":["+rms_norm", "+rotary_embedding"]}'
```
::::
::::{tab-item} Pangu-Pro-MoE-72B
```{code-block} bash
:substitutions:
# Update the MODEL
export MODEL="/path/to/pangu-pro-moe-model"
export VLLM_USE_V1=1
python -m vllm.entrypoints.api_server \
--model $MODEL \
--tensor-parallel-size 8 \
--max-num-batched-tokens 2048 \
--gpu-memory-utilization 0.5 \
--max-num-seqs 4 \
--enforce-eager \
--trust-remote-code \
--max-model-len 1024 \
--disable-custom-all-reduce \
--enable-expert-parallel \
--dtype float16 \
--port 8000 \
--compilation-config '{"custom_ops":["+rms_norm", "+rotary_embedding"]}' \
--additional-config '{"ascend_scheduler_config": {"enabled": true, "enable_chunked_prefill": false, "chunked_prefill_enabled": false}}'
```
::::
:::::
Once your server is started, you can query the model with input prompts:
```bash
curl http://localhost:8000/generate \
-H "Content-Type: application/json" \
-d '{
"prompt": "Hello, my name is ",
"max_tokens": 20,
"temperature": 0
}'
```
If the request succeeds, you will see a response like the one below:
```bash
{"text":["The future of AI is \nA. 充满希望的 \nB. 不确定的 \nC. 危险的 \nD. 无法预测的 \n答案A \n解析"]}
```
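The same `/generate` request can also be issued from Python with only the standard library; a minimal sketch (the helper names are illustrative, and the actual call requires the server started above to be running):

```python
import json
from urllib.request import Request, urlopen

# Hypothetical helper mirroring the `curl -d` payload for /generate.
def build_generate_payload(prompt, max_tokens=20, temperature=0):
    return {"prompt": prompt, "max_tokens": max_tokens,
            "temperature": temperature}

def post_generate(payload, base_url="http://localhost:8000"):
    # Requires the vLLM server started above to be running.
    req = Request(f"{base_url}/generate",
                  data=json.dumps(payload).encode("utf-8"),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)

payload = build_generate_payload("Hello, my name is ")
print(json.dumps(payload))
```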
### Offline Inference
Run the following script to execute offline inference on NPU:
:::::{tab-set}
::::{tab-item} Qwen3-0.6B
```{code-block} python
:substitutions:
import gc
import os

import torch
from vllm import LLM, SamplingParams
from vllm.distributed.parallel_state import (destroy_distributed_environment,
                                             destroy_model_parallel)

os.environ["VLLM_USE_V1"] = "1"


def clean_up():
    destroy_model_parallel()
    destroy_distributed_environment()
    gc.collect()
    torch.npu.empty_cache()


prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
# Create a sampling params object.
sampling_params = SamplingParams(max_tokens=100, temperature=0.0)
# Create an LLM.
llm = LLM(
    model="Qwen/Qwen3-0.6B",
    max_model_len=4096,
    max_num_seqs=4,
    trust_remote_code=True,
    tensor_parallel_size=1,
    enforce_eager=True,  # For 300I series, only eager mode is supported.
    dtype="float16",  # IMPORTANT: some ATB ops do not support bf16 on 300I series
    disable_custom_all_reduce=True,  # IMPORTANT: required on 300I series
    compilation_config={"custom_ops": ["+rms_norm", "+rotary_embedding"]},  # IMPORTANT: 300I series requires custom ops
)
# Generate texts from the prompts.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

del llm
clean_up()
```
::::
::::{tab-item} Qwen2.5-7B-Instruct
```{code-block} python
:substitutions:
import gc
import os

import torch
from vllm import LLM, SamplingParams
from vllm.distributed.parallel_state import (destroy_distributed_environment,
                                             destroy_model_parallel)

os.environ["VLLM_USE_V1"] = "1"


def clean_up():
    destroy_model_parallel()
    destroy_distributed_environment()
    gc.collect()
    torch.npu.empty_cache()


prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
# Create a sampling params object.
sampling_params = SamplingParams(max_tokens=100, temperature=0.0)
# Create an LLM.
llm = LLM(
    model="Qwen/Qwen2.5-7B-Instruct",
    max_model_len=4096,
    max_num_seqs=4,
    trust_remote_code=True,
    tensor_parallel_size=2,
    enforce_eager=True,  # For 300I series, only eager mode is supported.
    dtype="float16",  # IMPORTANT: some ATB ops do not support bf16 on 300I series
    disable_custom_all_reduce=True,  # IMPORTANT: required on 300I series
    compilation_config={"custom_ops": ["+rms_norm", "+rotary_embedding"]},  # IMPORTANT: 300I series requires custom ops
)
# Generate texts from the prompts.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

del llm
clean_up()
```
::::
::::{tab-item} Pangu-Pro-MoE-72B
```{code-block} python
:substitutions:
import gc
import os

import torch
from vllm import LLM, SamplingParams
from vllm.distributed.parallel_state import (destroy_distributed_environment,
                                             destroy_model_parallel)

os.environ["VLLM_USE_V1"] = "1"
os.environ["VLLM_WORKER_MULTIPROC_METHOD"] = "spawn"


def clean_up():
    destroy_model_parallel()
    destroy_distributed_environment()
    gc.collect()
    torch.npu.empty_cache()


if __name__ == "__main__":
    # Update the model path
    model_path = "/path/to/pangu-pro-moe-model"
    prompts = [
        "Hello, my name is",
        "The future of AI is",
    ]
    sampling_params = SamplingParams(min_tokens=8, max_tokens=8, temperature=0.0)
    llm = LLM(model=model_path,
              tensor_parallel_size=8,
              max_num_batched_tokens=2048,
              gpu_memory_utilization=0.5,
              max_num_seqs=4,
              enforce_eager=True,  # For 300I series, only eager mode is supported.
              trust_remote_code=True,
              max_model_len=1024,
              disable_custom_all_reduce=True,  # IMPORTANT: required on 300I series
              enable_expert_parallel=True,
              dtype="float16",  # IMPORTANT: some ATB ops do not support bf16 on 300I series
              compilation_config={"custom_ops": ["+rms_norm", "+rotary_embedding"]},  # IMPORTANT: 300I series requires custom ops
              additional_config={
                  'ascend_scheduler_config': {
                      'enabled': True,
                      'enable_chunked_prefill': False,
                      'chunked_prefill_enabled': False,
                  }
              })

    outputs = llm.generate(prompts, sampling_params)
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

    del llm
    clean_up()
```
::::
:::::
If the script runs successfully, you will see output like the following:
```bash
Prompt: 'Hello, my name is', Generated text: " Lina. I'm a 22-year-old student from China. I'm interested in studying in the US. I'm looking for a job in the US. I want to know if there are any opportunities in the US for me to work. I'm also interested in the culture and lifestyle in the US. I want to know if there are any opportunities for me to work in the US. I'm also interested in the culture and lifestyle in the US. I'm interested in the culture"
Prompt: 'The president of the United States is', Generated text: ' the same as the president of the United Nations. This is because the president of the United States is the same as the president of the United Nations. The president of the United States is the same as the president of the United Nations. The president of the United States is the same as the president of the United Nations. The president of the United States is the same as the president of the United Nations. The president of the United States is the same as the president of the United Nations. The president'
Prompt: 'The capital of France is', Generated text: ' Paris. The capital of Italy is Rome. The capital of Spain is Madrid. The capital of China is Beijing. The capital of Japan is Tokyo. The capital of India is New Delhi. The capital of Brazil is Brasilia. The capital of Egypt is Cairo. The capital of South Africa is Cape Town. The capital of Nigeria is Abuja. The capital of Lebanon is Beirut. The capital of Morocco is Rabat. The capital of Indonesia is Jakarta. The capital of Peru is Lima. The'
Prompt: 'The future of AI is', Generated text: " not just about the technology itself, but about how we use it to solve real-world problems. As AI continues to evolve, it's important to consider the ethical implications of its use. AI has the potential to bring about significant changes in society, but it also has the power to create new challenges. Therefore, it's crucial to develop a comprehensive approach to AI that takes into account both the benefits and the risks associated with its use. This includes addressing issues such as bias, privacy, and accountability."
```


@@ -7,11 +7,11 @@ This guide provides instructions for using Ascend Graph Mode with vLLM Ascend. P
 ## Getting Started

-From v0.9.0rc1 with V1 Engine, vLLM Ascend will run models in graph mode by default to keep the same behavior with vLLM. If you hit any issues, please feel free to open an issue on GitHub and fallback to eager mode temporarily by set `enforce_eager=True` when initializing the model.
+From v0.9.1rc1 with V1 Engine, vLLM Ascend will run models in graph mode by default to keep the same behavior with vLLM. If you hit any issues, please feel free to open an issue on GitHub and fallback to eager mode temporarily by set `enforce_eager=True` when initializing the model.

 There are two kinds for graph mode supported by vLLM Ascend:

-- **ACLGraph**: This is the default graph mode supported by vLLM Ascend. In v0.9.0rc1, only Qwen series models are well tested.
-- **TorchAirGraph**: This is the GE graph mode. In v0.9.0rc1, only DeepSeek series models are supported.
+- **ACLGraph**: This is the default graph mode supported by vLLM Ascend. In v0.9.1rc1, only Qwen series models are well tested.
+- **TorchAirGraph**: This is the GE graph mode. In v0.9.1rc1, only DeepSeek series models are supported.

 ## Using ACLGraph

 ACLGraph is enabled by default. Take Qwen series models as an example, just set to use V1 Engine is enough.

@@ -55,7 +55,7 @@ outputs = model.generate("Hello, how are you?")
 online example:

 ```shell
-vllm serve Qwen/Qwen2-7B-Instruct --additional-config='{"torchair_graph_config": {"enabled": True},"ascend_scheduler_config": {"enabled": True,}}'
+vllm serve Qwen/Qwen2-7B-Instruct --additional-config='{"torchair_graph_config": {"enabled": true},"ascend_scheduler_config": {"enabled": true,}}'
 ```

 You can find more detail about additional config [here](./additional_config.md)


@@ -1,5 +1,44 @@
@@ -1,5 +1,44 @@
 # Release note

+## v0.9.1rc1 - 2025.06.22
+
+This is the 1st release candidate of v0.9.1 for vLLM Ascend. Please follow the [official doc](https://vllm-ascend.readthedocs.io/en/) to get started.
+
+### Highlights
+
+- Atlas 300I series is experimentally supported in this release. [#1333](https://github.com/vllm-project/vllm-ascend/pull/1333) After careful consideration, this feature **will NOT be included in the v0.9.1-dev branch**, taking into account the v0.9.1 release quality and the rapid iteration needed to improve performance on the Atlas 300I series. We will continue to improve it from v0.9.2rc1 onward.
+- Support EAGLE-3 for speculative decoding. [#1032](https://github.com/vllm-project/vllm-ascend/pull/1032)
+
+### Model
+
+- MoGE model is now supported. You can try Pangu Pro MoE-72B on Atlas A2 series and Atlas 300I series. Please follow the official [tutorials](https://vllm-ascend.readthedocs.io/en/latest/tutorials/multi_npu_moge.html) and [300I series tutorials](https://vllm-ascend.readthedocs.io/en/latest/tutorials/single_node_300i.html). [#1204](https://github.com/vllm-project/vllm-ascend/pull/1204)
+
+### Core
+
+- Ascend PyTorch adapter (torch_npu) has been upgraded to `2.5.1.post1.dev20250528`. Don't forget to update it in your environment. [#1235](https://github.com/vllm-project/vllm-ascend/pull/1235)
+- Support Atlas 300I series container image. You can get it from [quay.io](https://quay.io/repository/vllm/vllm-ascend).
+- Fix token-wise padding mechanism to make multi-card graph mode work. [#1300](https://github.com/vllm-project/vllm-ascend/pull/1300)
+- Upgrade vLLM to 0.9.1. [#1165](https://github.com/vllm-project/vllm-ascend/pull/1165)
+
+### Other Improvements
+
+- Initial support of chunked prefill for MLA. [#1172](https://github.com/vllm-project/vllm-ascend/pull/1172)
+- An example of best practices to run DeepSeek with ETP has been added. [#1101](https://github.com/vllm-project/vllm-ascend/pull/1101)
+- Performance improvements for DeepSeek using the TorchAir graph. [#1098](https://github.com/vllm-project/vllm-ascend/pull/1098), [#1131](https://github.com/vllm-project/vllm-ascend/pull/1131)
+- Support the speculative decoding feature with AscendScheduler. [#943](https://github.com/vllm-project/vllm-ascend/pull/943)
+- Improve `VocabParallelEmbedding` custom op performance. It will be enabled in the next release. [#796](https://github.com/vllm-project/vllm-ascend/pull/796)
+- Fixed a device discovery and setup bug when running vLLM Ascend on Ray. [#884](https://github.com/vllm-project/vllm-ascend/pull/884)
+- DeepSeek with [MC2](https://www.hiascend.com/document/detail/zh/canncommercial/81RC1/developmentguide/opdevg/ascendcbestP/atlas_ascendc_best_practices_10_0043.html) (Merged Compute and Communication) now works properly. [#1268](https://github.com/vllm-project/vllm-ascend/pull/1268)
+- Fixed a log2phy NoneType bug with the static EPLB feature. [#1186](https://github.com/vllm-project/vllm-ascend/pull/1186)
+- Improved performance for DeepSeek with DBO enabled. [#997](https://github.com/vllm-project/vllm-ascend/pull/997), [#1135](https://github.com/vllm-project/vllm-ascend/pull/1135)
+- Refactor AscendFusedMoE. [#1229](https://github.com/vllm-project/vllm-ascend/pull/1229)
+- Add initial user stories page (including LLaMA-Factory/TRL/verl/MindIE Turbo/GPUStack). [#1224](https://github.com/vllm-project/vllm-ascend/pull/1224)
+- Add unit test framework. [#1201](https://github.com/vllm-project/vllm-ascend/pull/1201)
+
+### Known Issues
+
+- In some cases, the vLLM process may crash with a **GatherV3** error when **aclgraph** is enabled. We are working on this issue and will fix it in the next release. [#1038](https://github.com/vllm-project/vllm-ascend/issues/1038)
+- The prefix cache feature does not work when the Ascend Scheduler is enabled without chunked prefill. This will be fixed in the next release. [#1350](https://github.com/vllm-project/vllm-ascend/issues/1350)
+
+### Full Changelog
+
+https://github.com/vllm-project/vllm-ascend/compare/v0.9.0rc2...v0.9.1rc1
+
 ## v0.9.0rc2 - 2025.06.10

 This release contains some quick fixes for v0.9.0rc1. Please use this release instead of v0.9.0rc1.
@@ -21,7 +60,7 @@ This is the 1st release candidate of v0.9.0 for vllm-ascend. Please follow the [
 - The performance of multi-step scheduler has been improved. Thanks for the contribution from China Merchants Bank. [#814](https://github.com/vllm-project/vllm-ascend/pull/814)
 - LoRA、Multi-LoRA And Dynamic Serving is supported for V1 Engine now. Thanks for the contribution from China Merchants Bank. [#893](https://github.com/vllm-project/vllm-ascend/pull/893)
-- prefix cache and chunked prefill feature works now [#782](https://github.com/vllm-project/vllm-ascend/pull/782) [#844](https://github.com/vllm-project/vllm-ascend/pull/844)
+- Prefix cache and chunked prefill feature works now [#782](https://github.com/vllm-project/vllm-ascend/pull/782) [#844](https://github.com/vllm-project/vllm-ascend/pull/844)
 - Spec decode and MTP features work with V1 Engine now. [#874](https://github.com/vllm-project/vllm-ascend/pull/874) [#890](https://github.com/vllm-project/vllm-ascend/pull/890)
 - DP feature works with DeepSeek now. [#1012](https://github.com/vllm-project/vllm-ascend/pull/1012)
 - Input embedding feature works with V0 Engine now. [#916](https://github.com/vllm-project/vllm-ascend/pull/916)