[v0.11.0][Doc] Update doc (#3852)
### What this PR does / why we need it?

Update doc

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
```{note}
This feature is currently experimental. In future versions, there may be behavioral changes around configuration, coverage, and performance.
```
This guide provides instructions for using Ascend Graph Mode with vLLM Ascend. Please note that graph mode is only available on the V1 Engine, and only the Qwen and DeepSeek series models have been well tested since v0.9.0rc1. We will make it stable and generalized in the next release.
## Getting Started
From v0.9.1rc1 with the V1 Engine, vLLM Ascend runs models in graph mode by default to keep the same behavior as vLLM. If you hit any issues, please feel free to open an issue on GitHub and fall back to eager mode temporarily by setting `enforce_eager=True` when initializing the model.
There are two kinds of graph mode supported by vLLM Ascend:
- **ACLGraph**: This is the default graph mode supported by vLLM Ascend. In v0.9.1rc1, only Qwen series models are well tested.
## Using ACLGraph
ACLGraph is enabled by default. Taking the Qwen series models as an example, using the V1 Engine is enough.
Offline example:
```python
import os

from vllm import LLM

# Select the V1 Engine (graph mode is only available on V1).
os.environ["VLLM_USE_V1"] = "1"

model = LLM(model="Qwen/Qwen2-7B-Instruct")
outputs = model.generate("Hello, how are you?")
```
Online example:
```shell
vllm serve Qwen/Qwen2-7B-Instruct
```
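Once the server is up, you can sanity-check it with an OpenAI-compatible completion request. The endpoint, port, and payload below assume vLLM's serving defaults:

```shell
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen2-7B-Instruct", "prompt": "Hello, how are you?", "max_tokens": 32}'
```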
## Using TorchAirGraph
If you want to run the DeepSeek series models in graph mode, you should use [TorchAirGraph](https://www.hiascend.com/document/detail/zh/Pytorch/700/modthirdparty/torchairuseguide/torchair_0002.html). In this case, additional configuration is required.
Offline example:
```python
import os

from vllm import LLM

# Select the V1 Engine (graph mode is only available on V1).
os.environ["VLLM_USE_V1"] = "1"

# Enable TorchAirGraph through additional_config.
model = LLM(model="deepseek-ai/DeepSeek-R1-0528",
            additional_config={"torchair_graph_config": {"enabled": True}})
outputs = model.generate("Hello, how are you?")
```
Online example:
```shell
vllm serve Qwen/Qwen2-7B-Instruct --additional-config='{"torchair_graph_config": {"enabled": true},"ascend_scheduler_config": {"enabled": true}}'
```
You can find more details about the additional configuration [here](../configuration/additional_config.md).
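Because `--additional-config` is parsed as JSON, the string must be strictly valid (for example, trailing commas are not allowed). A quick way to check a config string before passing it to `vllm serve` (a generic sketch, not a vLLM API):

```python
import json

# The TorchAirGraph additional config from this guide, as a JSON string.
config_json = '{"torchair_graph_config": {"enabled": true}, "ascend_scheduler_config": {"enabled": true}}'

# json.loads raises an error on invalid JSON, such as trailing commas.
additional_config = json.loads(config_json)
print(additional_config["torchair_graph_config"]["enabled"])  # True
```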
## Fallback to Eager Mode
If both `ACLGraph` and `TorchAirGraph` fail to run, you should fall back to eager mode.
Offline example:
```python
import os

from vllm import LLM

model = LLM(model="someother_model_weight", enforce_eager=True)
outputs = model.generate("Hello, how are you?")
```
Online example:
```shell
vllm serve Qwen/Qwen2-7B-Instruct --enforce-eager
```