# Graph Mode Guide

```{note}
This feature is currently experimental. In future versions, there may be behavioral changes around configuration, coverage, and performance.
```
```{note}
In a context parallel scenario (i.e. `prefill_context_parallel_size * decode_context_parallel_size > 1`), setting `cudagraph_mode` to `"FULL"` is not yet fully supported.
```
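How the capture mode is configured depends on your vLLM version. As a hedged sketch (the `--compilation-config` flag and the `PIECEWISE` mode value are taken from upstream vLLM and are assumptions, not something this guide confirms), a context parallel deployment can keep a partial capture mode instead of `FULL`:

```shell
# Sketch: pin cudagraph_mode to a partial mode (assumed value) rather than FULL
# when prefill_context_parallel_size * decode_context_parallel_size > 1.
vllm serve Qwen/Qwen3-32B \
  --compilation-config '{"cudagraph_mode": "PIECEWISE"}'
```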
This guide provides instructions for using Ascend Graph Mode with vLLM Ascend. Please note that graph mode is only available on the V1 Engine, and only Qwen and DeepSeek series models have been well tested since 0.9.0rc1. We will make it stable and generalized in the next release.
## Getting Started
From v0.9.1rc1 with the V1 Engine, vLLM Ascend runs models in graph mode by default to keep the same behavior as vLLM. If you hit any issues, please feel free to open an issue on GitHub and temporarily fall back to eager mode by setting `enforce_eager=True` when initializing the model.

There are two kinds of graph mode supported by vLLM Ascend:

- **ACLGraph**: the default graph mode in vLLM Ascend. As of v0.9.1rc1, Qwen and DeepSeek series models are well tested.
- **XliteGraph**: the openEuler Xlite graph mode. As of v0.11.0, only Llama, Qwen dense series models, Qwen MoE series models, and Qwen3-VL are supported.
## Using ACLGraph
ACLGraph is enabled by default. Taking Qwen series models as an example, no extra configuration is required; just make sure the V1 Engine is in use.

Offline example:
```python
from vllm import LLM

model = LLM(model="path/to/Qwen2-7B-Instruct")
outputs = model.generate("Hello, how are you?")
```
Online example:
```shell
vllm serve Qwen/Qwen2-7B-Instruct
```
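Once the server is up, it exposes vLLM's OpenAI-compatible API (port 8000 by default), so you can smoke-test it with a plain HTTP request; the prompt and `max_tokens` values below are just placeholders:

```shell
# Requires the `vllm serve` process above to be running.
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen2-7B-Instruct", "prompt": "Hello, how are you?", "max_tokens": 64}'
```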
## Using XliteGraph
If you want to run Llama, Qwen dense series models, Qwen MoE series models, or Qwen3-VL in Xlite graph mode, install `xlite` and set `xlite_graph_config`:
```bash
pip install xlite
```
Offline example:
```python
from vllm import LLM

# Xlite runs in decode-only mode by default; the full mode can be enabled
# by setting "full_mode": True.
model = LLM(
    model="path/to/Qwen3-32B",
    tensor_parallel_size=8,
    additional_config={"xlite_graph_config": {"enabled": True, "full_mode": True}},
)
outputs = model.generate("Hello, how are you?")
```
Online example:
```shell
vllm serve path/to/Qwen3-32B --tensor-parallel-size 8 --additional-config='{"xlite_graph_config": {"enabled": true, "full_mode": true}}'
```
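If you script both deployment modes, the `additional_config` dict from the offline example can be serialized into the `--additional-config` CLI argument so the two stay in sync. A small sketch (illustrative, not part of vLLM Ascend); note that `json.dumps` converts Python's `True` into the lowercase JSON `true` the CLI expects:

```python
import json

# Same dict as in the offline example above.
additional_config = {"xlite_graph_config": {"enabled": True, "full_mode": True}}

# Serialize for the CLI: Python's True becomes JSON's lowercase true.
cli_arg = json.dumps(additional_config)
print(cli_arg)  # {"xlite_graph_config": {"enabled": true, "full_mode": true}}
```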
You can find more details in the [Xlite README](https://atomgit.com/openeuler/GVirt/blob/master/xlite/README.md).
## Fallback to the Eager Mode
If both `ACLGraph` and `XliteGraph` fail to run, fall back to eager mode.
Offline example:
```python
from vllm import LLM

model = LLM(model="someother_model_weight", enforce_eager=True)
outputs = model.generate("Hello, how are you?")
```
Online example:
```shell
vllm serve someother_model_weight --enforce-eager
```