From b5ccef6115bf5599b66def82a042c10d091e3708 Mon Sep 17 00:00:00 2001
From: Yikun Jiang
Date: Tue, 16 Sep 2025 01:16:06 +0800
Subject: [PATCH] [Doc] Add doc for Qwen3 Next (#2916)

### What this PR does / why we need it?
Add doc for Qwen3 Next

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Doc CI passed

Related: https://github.com/vllm-project/vllm-ascend/issues/2884

- vLLM version: v0.10.2
- vLLM main: https://github.com/vllm-project/vllm/commit/01413e0cf5a04da4049ffa38b6ff3df27ccabd06

Signed-off-by: Yikun Jiang
---
 docs/source/tutorials/index.md                |   1 +
 docs/source/tutorials/multi_npu_qwen3_next.md | 156 ++++++++++++++++++
 2 files changed, 157 insertions(+)
 create mode 100644 docs/source/tutorials/multi_npu_qwen3_next.md

diff --git a/docs/source/tutorials/index.md b/docs/source/tutorials/index.md
index 6bd52be..0b0b7cd 100644
--- a/docs/source/tutorials/index.md
+++ b/docs/source/tutorials/index.md
@@ -8,6 +8,7 @@ single_npu_multimodal
 single_npu_audio
 single_npu_qwen3_embedding
 single_npu_qwen3_quantization
+multi_npu_qwen3_next
 multi_npu
 multi_npu_moge
 multi_npu_qwen3_moe

diff --git a/docs/source/tutorials/multi_npu_qwen3_next.md b/docs/source/tutorials/multi_npu_qwen3_next.md
new file mode 100644
index 0000000..e24f28a
--- /dev/null
+++ b/docs/source/tutorials/multi_npu_qwen3_next.md
@@ -0,0 +1,156 @@
# Multi-NPU (Qwen3-Next)

```{note}
Qwen3 Next uses [Triton Ascend](https://gitee.com/ascend/triton-ascend), which is currently experimental. Future versions may bring behavioral changes around stability, accuracy, and performance.
```

## Run vllm-ascend on Multi-NPU with Qwen3 Next

Run the docker container:

```{code-block} bash
   :substitutions:
# Update the vllm-ascend image
export IMAGE=quay.io/ascend/vllm-ascend:|vllm_ascend_version|
docker run --rm \
--name vllm-ascend-qwen3 \
--device /dev/davinci0 \
--device /dev/davinci1 \
--device /dev/davinci2 \
--device /dev/davinci3 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /root/.cache:/root/.cache \
-p 8000:8000 \
-it $IMAGE bash
```

Set up environment variables:

```bash
# Load the model from ModelScope to speed up the download
export VLLM_USE_MODELSCOPE=True
```

### Install Triton Ascend

:::::{tab-set}
::::{tab-item} Linux (aarch64)

[Triton Ascend](https://gitee.com/ascend/triton-ascend) is required to run Qwen3 Next. Please follow the instructions below to install it and its dependencies.

Install the Ascend BiSheng toolkit:

```bash
wget https://vllm-ascend.obs.cn-north-4.myhuaweicloud.com/vllm-ascend/Ascend-BiSheng-toolkit_aarch64.run
chmod a+x Ascend-BiSheng-toolkit_aarch64.run
./Ascend-BiSheng-toolkit_aarch64.run --install
source /usr/local/Ascend/8.3.RC1/bisheng_toolkit/set_env.sh
```

Install Triton Ascend:

```bash
wget https://vllm-ascend.obs.cn-north-4.myhuaweicloud.com/vllm-ascend/triton_ascend-3.2.0.dev20250914-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl
pip install triton_ascend-3.2.0.dev20250914-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl
```

::::

::::{tab-item} Linux (x86_64)

Coming soon ...

::::
:::::
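Before moving on, you can optionally verify the installation. This quick sanity check is not part of the official setup; it assumes the `triton_ascend` wheel exposes the standard `triton` module name:

```python
# Sanity check: confirm Triton Ascend is importable after installation.
# Assumes the triton_ascend wheel provides the usual `triton` module (an assumption,
# not documented behavior) -- if the import fails, re-check the wheel install above.
import triton

print(triton.__version__)  # expected to print something like 3.2.0.dev20250914
```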
+ +:::: +::::: + +### Inference on Multi-NPU + +Please make sure you already executed the command: + +```bash +source /usr/local/Ascend/8.3.RC1/bisheng_toolkit/set_env.sh +``` + +:::::{tab-set} +::::{tab-item} Online Inference + +Run the following script to start the vLLM server on Multi-NPU: + +For an Atlas A2 with 64GB of NPU card memory, tensor-parallel-size should be at least 4, and for 32GB of memory, tensor-parallel-size should be at least 8. + +```bash +vllm serve Qwen/Qwen3-Next-80B-A3B-Instruct --tensor-parallel-size 4 --max-model-len 4096 --gpu-memory-utilization 0.7 --enforce-eager +``` + +Once your server is started, you can query the model with input prompts + +```bash +curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{ + "model": "Qwen/Qwen3-Next-80B-A3B-Instruct", + "messages": [ + {"role": "user", "content": "Give me a short introduction to large language models."} + ], + "temperature": 0.6, + "top_p": 0.95, + "top_k": 20, + "max_tokens": 4096 +}' +``` + +:::: + +::::{tab-item} Offline Inference + +Run the following script to execute offline inference on multi-NPU: + +```python +import gc +import torch + +from vllm import LLM, SamplingParams +from vllm.distributed.parallel_state import (destroy_distributed_environment, + destroy_model_parallel) + +def clean_up(): + destroy_model_parallel() + destroy_distributed_environment() + gc.collect() + torch.npu.empty_cache() + +if __name__ == '__main__': + prompts = [ + "Who are you?", + ] + sampling_params = SamplingParams(temperature=0.6, top_p=0.95, top_k=40, max_tokens=32) + llm = LLM(model="Qwen/Qwen3-Next-80B-A3B-Instruct", + tensor_parallel_size=4, + enforce_eager=True, + distributed_executor_backend="mp", + gpu_memory_utilization=0.7, + max_model_len=4096) + + outputs = llm.generate(prompts, sampling_params) + for output in outputs: + prompt = output.prompt + generated_text = output.outputs[0].text + print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") + + del llm + clean_up() +``` + +If you run this script successfully, you can see the info shown below: + +```bash +Prompt: 'Who are you?', Generated text: ' What do you know about me?\n\nHello! I am Qwen, a large-scale language model independently developed by the Tongyi Lab under Alibaba Group. I am' +``` + +:::: +:::::