### What this PR does / why we need it?

This patch adds support for the Qwen3-MoE model in Xlite. For more details about Xlite, please refer to the following link: https://atomgit.com/openeuler/GVirt/blob/master/xlite/README.md

Qwen3-MoE TODO list:

- [ ] Qwen3-235B-A22B support
- [ ] Qwen3-MoE weights NZ support
- [ ] Qwen3-MoE data parallel support

## Qwen3-30B-A3B-Instruct-2507 910B3(A2) Online Inference Performance Comparison

- aclgraph: main (69b170b8b5)
- xlite-full: main + xlite-full
- xlite-decode-only: main + xlite-decode-only
- diff1: performance comparison between xlite-full and aclgraph
- diff2: performance comparison between xlite-decode-only and aclgraph

| maxconcurrency | item | TTFT Avg (ms) | TTFT P99 (ms) | TPOT Avg (ms) | TPOT P99 (ms) | QPS (req/s) | OutputSpeed (token/s) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | baseline-aclgraph | 205.07 | 287.29 | 12.34 | 12.65 | 0.14 | 78.81 |
| 1 | xlite-full | 66.40 | 113.69 | 11.71 | 12.40 | 0.15 | 84.73 |
| 1 | xlite-decode-only | 221.15 | 316.40 | 12.16 | 12.91 | 0.14 | 79.70 |
| 1 | diff1 | -67.62% | -60.43% | -5.11% | -1.98% | 7.14% | 7.51% |
| 1 | diff2 | 7.84% | 10.13% | -1.46% | 2.06% | 0.00% | 1.13% |
| 16 | baseline-aclgraph | 1892.16 | 13916.86 | 22.78 | 39.28 | 1.15 | 589.89 |
| 16 | xlite-full | 1355.40 | 8907.45 | 15.96 | 25.15 | 1.65 | 850.21 |
| 16 | xlite-decode-only | 1519.42 | 8711.64 | 19.23 | 29.73 | 1.38 | 711.60 |
| 16 | diff1 | -28.37% | -36.00% | -29.94% | -35.97% | 43.48% | 44.13% |
| 16 | diff2 | -19.70% | -37.40% | -15.58% | -24.31% | 20.00% | 20.63% |
| 32 | baseline-aclgraph | 673.80 | 3914.90 | 32.20 | 37.95 | 1.80 | 928.54 |
| 32 | xlite-full | 481.65 | 2710.50 | 19.95 | 25.35 | 2.91 | 1506.67 |
| 32 | xlite-decode-only | 372.22 | 1095.25 | 25.19 | 28.47 | 2.33 | 1202.82 |
| 32 | diff1 | -28.52% | -30.76% | -38.04% | -33.20% | 61.67% | 62.26% |
| 32 | diff2 | -44.76% | -72.02% | -21.77% | -24.98% | 29.44% | 29.54% |
| 48 | baseline-aclgraph | 583.18 | 3277.65 | 41.02 | 46.05 | 2.17 | 1115.08 |
| 48 | xlite-full | 973.42 | 8237.33 | 23.29 | 30.50 | 3.71 | 1908.09 |
| 48 | xlite-decode-only | 480.79 | 2026.98 | 31.48 | 35.41 | 2.83 | 1453.75 |
| 48 | diff1 | 66.92% | 151.32% | -43.22% | -33.77% | 70.97% | 71.12% |
| 48 | diff2 | -17.56% | -38.16% | -23.26% | -23.11% | 30.41% | 30.37% |
| 64 | baseline-aclgraph | 742.74 | 5953.39 | 47.79 | 53.15 | 2.48 | 1272.37 |
| 64 | xlite-full | 545.22 | 3941.34 | 25.09 | 30.41 | 4.64 | 2376.44 |
| 64 | xlite-decode-only | 752.40 | 4534.29 | 38.67 | 43.28 | 3.06 | 1567.94 |
| 64 | diff1 | -26.59% | -33.80% | -47.50% | -42.78% | 87.10% | 86.77% |
| 64 | diff2 | 1.30% | -23.84% | -19.08% | -18.57% | 23.39% | 23.23% |
| 100 | baseline-aclgraph | 565.52 | 1716.81 | 60.89 | 68.69 | 3.08 | 1580.64 |
| 100 | xlite-full | 398.14 | 2328.88 | 30.70 | 32.45 | 6.01 | 3086.42 |
| 100 | xlite-decode-only | 712.53 | 4875.94 | 52.71 | 60.78 | 3.53 | 1813.58 |
| 100 | diff1 | -29.60% | 35.65% | -49.58% | -52.76% | 95.13% | 95.26% |
| 100 | diff2 | 26.00% | 184.01% | -13.43% | -11.52% | 14.61% | 14.74% |
| 150 | baseline-aclgraph | 842.42 | 5175.01 | 73.60 | 88.18 | 3.80 | 1952.26 |
| 150 | xlite-full | 568.52 | 4204.33 | 37.90 | 40.01 | 7.27 | 3734.72 |
| 150 | xlite-decode-only | 654.43 | 2504.06 | 67.40 | 77.00 | 4.18 | 2145.11 |
| 150 | diff1 | -32.51% | -18.76% | -48.51% | -54.63% | 91.32% | 91.30% |
| 150 | diff2 | -22.32% | -51.61% | -8.42% | -12.68% | 10.00% | 9.88% |
| 200 | baseline-aclgraph | 750.63 | 3049.91 | 88.26 | 101.95 | 4.28 | 2189.72 |
| 200 | xlite-full | 558.48 | 3791.98 | 45.54 | 49.04 | 8.17 | 4175.52 |
| 200 | xlite-decode-only | 807.09 | 4254.95 | 85.18 | 101.79 | 4.44 | 2271.52 |
| 200 | diff1 | -25.60% | 24.33% | -48.40% | -51.90% | 90.89% | 90.69% |
| 200 | diff2 | 7.52% | 39.51% | -3.49% | -0.16% | 3.74% | 3.74% |

### How was this patch tested?

- vLLM version: v0.13.0
- vLLM main: 2c24bc6996

---------

Signed-off-by: changdawei1 <changdawei3@huawei.com>
Co-authored-by: LVYANGGUO <275926687@qq.com>
Co-authored-by: lulina <lina.lulina@huawei.com>
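The diff1/diff2 rows above are relative changes against the aclgraph baseline. A minimal sketch of that computation, assuming the formula `(candidate - baseline) / baseline` (inferred from the numbers, not stated in the patch):

```python
def relative_diff(baseline: float, candidate: float) -> float:
    """Percentage change of `candidate` relative to `baseline`,
    rounded to two decimals as in the tables above."""
    return round((candidate - baseline) / baseline * 100, 2)

# TTFT Avg at maxconcurrency=1: aclgraph 205.07 ms vs xlite-full 66.40 ms
print(relative_diff(205.07, 66.40))  # -67.62, matching the diff1 row
# QPS at maxconcurrency=1: aclgraph 0.14 vs xlite-full 0.15
print(relative_diff(0.14, 0.15))     # 7.14
```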
# Graph Mode Guide

This feature is currently experimental. In future versions, there may be behavioral changes around configuration, coverage, and performance.

This guide provides instructions for using Ascend graph mode with vLLM Ascend. Please note that graph mode is only available on the V1 Engine, and only the Qwen and DeepSeek series models have been well tested since v0.9.0rc1. We will make it stable and generalized in the next release.
## Getting Started
From v0.9.1rc1 with the V1 Engine, vLLM Ascend runs models in graph mode by default to keep the same behavior as vLLM. If you hit any issues, please feel free to open an issue on GitHub, and fall back to eager mode temporarily by setting `enforce_eager=True` when initializing the model.
There are two kinds of graph mode supported by vLLM Ascend:

- **ACLGraph**: This is the default graph mode supported by vLLM Ascend. In v0.9.1rc1, the Qwen and DeepSeek series models are well tested.
- **XliteGraph**: This is the openEuler xlite graph mode. In v0.11.0, only Llama, Qwen dense series models, Qwen MoE series models, and Qwen3-vl are supported.
## Using ACLGraph
ACLGraph is enabled by default. Taking the Qwen series models as an example, simply running on the V1 Engine is enough.
Offline example:

```python
from vllm import LLM

model = LLM(model="path/to/Qwen2-7B-Instruct")
outputs = model.generate("Hello, how are you?")
```
Online example:

```bash
vllm serve Qwen/Qwen2-7B-Instruct
```
## Using XliteGraph
If you want to run Llama, Qwen dense series models, Qwen MoE series models, or Qwen3-vl with xlite graph mode, install xlite and set `xlite_graph_config`:

```bash
pip install xlite
```
Offline example:

```python
from vllm import LLM

# xlite runs in decode-only mode by default; full mode can be enabled
# by setting "full_mode": True.
model = LLM(
    model="path/to/Qwen3-32B",
    tensor_parallel_size=8,
    additional_config={"xlite_graph_config": {"enabled": True, "full_mode": True}},
)
outputs = model.generate("Hello, how are you?")
```
Online example:

```bash
vllm serve path/to/Qwen3-32B --tensor-parallel-size 8 --additional-config='{"xlite_graph_config": {"enabled": true, "full_mode": true}}'
```
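The `--additional-config` value is plain JSON, so it can be generated rather than hand-written when scripting launches. The helper below is a hypothetical convenience sketch, not part of vLLM or xlite:

```python
import json

def xlite_cli_arg(enabled: bool = True, full_mode: bool = False) -> str:
    """Build the JSON string for --additional-config.

    Illustrative helper only; not provided by vLLM or xlite.
    """
    cfg = {"xlite_graph_config": {"enabled": enabled, "full_mode": full_mode}}
    return json.dumps(cfg)

print(xlite_cli_arg(full_mode=True))
# {"xlite_graph_config": {"enabled": true, "full_mode": true}}
```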
You can find more details about xlite [here](https://atomgit.com/openeuler/GVirt/blob/master/xlite/README.md).
## Fallback to Eager Mode
If both ACLGraph and XliteGraph fail to run, you should fall back to eager mode.
Offline example:

```python
from vllm import LLM

model = LLM(model="someother_model_weight", enforce_eager=True)
outputs = model.generate("Hello, how are you?")
```
Online example:

```bash
vllm serve someother_model_weight --enforce-eager
```