From a470452871dc7dc79877e796f10b99e91f9aee57 Mon Sep 17 00:00:00 2001 From: Xinyu Dong Date: Tue, 17 Feb 2026 16:17:25 +0800 Subject: [PATCH] [Docs] Fix app.readthedocs building (#210) Signed-off-by: dongxinyu03 --- docs/source/tutorials/index.md | 2 ++ .../multi_xpu_DeepSeek-V3.2-Exp-w8a8.md | 14 +++++--- .../tutorials/single_xpu_InternVL2_5-26B.md | 34 ++++++++++++++----- .../tutorials/single_xpu_Qwen3-VL-32B.md | 34 ++++++++++++++----- 4 files changed, 63 insertions(+), 21 deletions(-) diff --git a/docs/source/tutorials/index.md b/docs/source/tutorials/index.md index 0dcfcd0..23415b7 100644 --- a/docs/source/tutorials/index.md +++ b/docs/source/tutorials/index.md @@ -8,5 +8,7 @@ single_xpu_Qwen3-VL-32B single_xpu_InternVL2_5-26B multi_xpu_Qwen2.5-VL-32B multi_xpu_GLM-4.5 +multi_xpu_GLM-5-W8A8-INT8 +multi_xpu_DeepSeek-V3.2-Exp-w8a8 multi_xpu_Qwen3-Coder-480B-A35B(W8A8) ::: diff --git a/docs/source/tutorials/multi_xpu_DeepSeek-V3.2-Exp-w8a8.md b/docs/source/tutorials/multi_xpu_DeepSeek-V3.2-Exp-w8a8.md index e4f2b6f..48a0ec3 100644 --- a/docs/source/tutorials/multi_xpu_DeepSeek-V3.2-Exp-w8a8.md +++ b/docs/source/tutorials/multi_xpu_DeepSeek-V3.2-Exp-w8a8.md @@ -7,6 +7,7 @@ Setup environment using container: Please follow the [installation.md](../installation.md) document to set up the environment first. Create a container + ```bash # !/bin/bash # rundocker.sh @@ -36,13 +37,16 @@ docker run -itd ${DOCKER_DEVICE_CONFIG} \ ### Preparation Weight - Pull DeepSeek-V3.2-Exp-w8a8-int8 weights + ``` wget -O DeepSeek-V3.2-Exp-w8a8-int8.tar.gz https://aihc-private-hcd.bj.bcebos.com/v1/LLM/DeepSeek/DeepSeek-V3.2-Exp-w8a8-int8.tar.gz?authorization=bce-auth-v1%2FALTAKvz6x4eqcmSsKjQxq3vZdB%2F2025-12-24T06%3A07%3A10Z%2F-1%2Fhost%2Fa324bf469176934a05f75d3acabc3c1fb891be150f43fb1976e65b7ec68733db ``` + - Ensure that the field "quantization_config" is included. If not, deployment will result in an OOM (Out of Memory) error. 
vim model/DeepSeek-V3.2-Exp-w8a8-int8/config.json -```config.json + +```json "quantization_config": { "config_groups": { "group_0": { @@ -108,7 +112,7 @@ export CUDA_GRAPH_OPTIMIZE_STREAM=1 && \ export XMLIR_ENABLE_MOCK_TORCH_COMPILE=false && \ export XPU_USE_MOE_SORTED_THRES=1 && \ export USE_ORI_ROPE=1 && \ -export VLLM_USE_V1=1 +export VLLM_USE_V1=1 python -m vllm.entrypoints.openai.api_server \ --host 0.0.0.0 \ @@ -129,9 +133,9 @@ python -m vllm.entrypoints.openai.api_server \ --compilation-config '{"splitting_ops":["vllm.unified_attention", "vllm.unified_attention_with_output", "vllm.unified_attention_with_output_kunlun", - "vllm.mamba_mixer2", - "vllm.mamba_mixer", - "vllm.short_conv", + "vllm.mamba_mixer2", + "vllm.mamba_mixer", + "vllm.short_conv", "vllm.linear_attention", "vllm.plamo2_mamba_mixer", "vllm.gdn_attention", diff --git a/docs/source/tutorials/single_xpu_InternVL2_5-26B.md b/docs/source/tutorials/single_xpu_InternVL2_5-26B.md index c33084d..5d639d1 100644 --- a/docs/source/tutorials/single_xpu_InternVL2_5-26B.md +++ b/docs/source/tutorials/single_xpu_InternVL2_5-26B.md @@ -86,8 +86,10 @@ if __name__ == "__main__": main() ``` + ::::: If you run this script successfully, you can see the info shown below: + ```bash ================================================== Input content: [{'role': 'user', 'content': [{'type': 'text', 'text': '你好!你是谁?'}]}] @@ -95,9 +97,11 @@ Model response: 你好!我是一个由人工智能驱动的助手,旨在帮助回答问题、提供信息和解决日常问题。请问有什么我可以帮助你的? 
================================================== ``` + ### Online Serving on Single XPU Start the vLLM server on a single XPU: -```bash + +```text python -m vllm.entrypoints.openai.api_server \ --host 0.0.0.0 \ --port 9988 \ @@ -114,25 +118,29 @@ python -m vllm.entrypoints.openai.api_server \ --no-enable-chunked-prefill \ --distributed-executor-backend mp \ --served-model-name InternVL2_5-26B \ - --compilation-config '{"splitting_ops": ["vllm.unified_attention", + --compilation-config '{"splitting_ops": ["vllm.unified_attention", "vllm.unified_attention_with_output", "vllm.unified_attention_with_output_kunlun", "vllm.mamba_mixer2", "vllm.mamba_mixer", - "vllm.short_conv", - "vllm.linear_attention", - "vllm.plamo2_mamba_mixer", - "vllm.gdn_attention", + "vllm.short_conv", + "vllm.linear_attention", + "vllm.plamo2_mamba_mixer", + "vllm.gdn_attention", "vllm.sparse_attn_indexer"]} - #Version 0.11.0 + #Version 0.11.0 ``` + If your service starts successfully, you can see the info shown below: + ```bash (APIServer pid=157777) INFO: Started server process [157777] (APIServer pid=157777) INFO: Waiting for application startup. (APIServer pid=157777) INFO: Application startup complete. 
``` + Once your server is started, you can query the model with input prompts: + ```bash curl http://localhost:9988/v1/completions \ -H "Content-Type: application/json" \ @@ -145,17 +153,23 @@ curl http://localhost:9988/v1/completions \ "top_k": 50 }' ``` + If you query the server successfully, you can see the info shown below (client): + ```bash {"id":"cmpl-23a24afd616d4a47910aeeccb20921ed","object":"text_completion","created":1768891222,"model":"InternVL2_5-26B","choices":[{"index":0,"text":" 你有什么问题吗?\n\n你好!我是书生·AI,很高兴能与你交流。请问有什么我可以帮助你的吗?无论是解答问题、提供信息还是其他方面的帮助,我都会尽力而为。请告诉我你的需求。","logprobs":null,"finish_reason":"stop","stop_reason":92542,"token_ids":null,"prompt_logprobs":null,"prompt_token_ids":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":6,"total_tokens":53,"completion_tokens":47,"prompt_tokens_details":null},"kv_transfer_params":null} ``` + Logs of the vllm server: + ```bash (APIServer pid=161632) INFO: 127.0.0.1:56708 - "POST /v1/completions HTTP/1.1" 200 OK (APIServer pid=161632) INFO 01-20 14:40:25 [loggers.py:127] Engine 000: Avg prompt throughput: 0.6 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0% (APIServer pid=161632) INFO 01-20 14:40:35 [loggers.py:127] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0% ``` + Input an image for testing. Here, a Python script is used: + ```python import requests import base64 @@ -193,13 +207,17 @@ payload = { response = requests.post(API_URL, json=payload) print(response.json()) ``` + If you query the server successfully, you can see the info shown below (client): + ```bash {'id': 'chatcmpl-9aeab6044795458da04f2fdcf1d0445d', 'object': 'chat.completion', 'created': 1768891349, 'model': 'InternVL2_5-26B', 'choices': [{'index': 0, 'message': {'role': 'assistant', 'content': 
'你好!这张图片上有一个黄色的笑脸表情符号,双手合十,旁边写着“Hugging Face”。这个表情符号看起来很开心,似乎在表示拥抱或欢迎。', 'refusal': None, 'annotations': None, 'audio': None, 'function_call': None, 'tool_calls': [], 'reasoning_content': None}, 'logprobs': None, 'finish_reason': 'stop', 'stop_reason': 92542, 'token_ids': None}], 'service_tier': None, 'system_fingerprint': None, 'usage': {'prompt_tokens': 790, 'total_tokens': 827, 'completion_tokens': 37, 'prompt_tokens_details': None}, 'prompt_logprobs': None, 'prompt_token_ids': None, 'kv_transfer_params': None} ``` + Logs of the vllm server: + ```bash (APIServer pid=161632) INFO: 127.0.0.1:58686 - "POST /v1/chat/completions HTTP/1.1" 200 OK (APIServer pid=161632) INFO 01-20 14:42:35 [loggers.py:127] Engine 000: Avg prompt throughput: 79.0 tokens/s, Avg generation throughput: 3.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0% (APIServer pid=161632) INFO 01-20 14:42:45 [loggers.py:127] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0% -``` \ No newline at end of file +``` diff --git a/docs/source/tutorials/single_xpu_Qwen3-VL-32B.md b/docs/source/tutorials/single_xpu_Qwen3-VL-32B.md index 81d322b..5fba0fb 100644 --- a/docs/source/tutorials/single_xpu_Qwen3-VL-32B.md +++ b/docs/source/tutorials/single_xpu_Qwen3-VL-32B.md @@ -85,19 +85,23 @@ if __name__ == "__main__": main() ``` + ::::: If you run this script successfully, you can see the info shown below: + ```bash ================================================== Input content: [{'role': 'user', 'content': [{'type': 'text', 'text': 'tell a joke'}]}] Model response: - Why don’t skeletons fight each other? + Why don’t skeletons fight each other? Because they don’t have the guts! 
🦴😄 ================================================== ``` + ### Online Serving on Single XPU Start the vLLM server on a single XPU: -```bash + +```text python -m vllm.entrypoints.openai.api_server \ --host 0.0.0.0 \ --port 9988 \ @@ -114,25 +118,29 @@ python -m vllm.entrypoints.openai.api_server \ --no-enable-chunked-prefill \ --distributed-executor-backend mp \ --served-model-name Qwen3-VL-32B \ - --compilation-config '{"splitting_ops": ["vllm.unified_attention", + --compilation-config '{"splitting_ops": ["vllm.unified_attention", "vllm.unified_attention_with_output", "vllm.unified_attention_with_output_kunlun", "vllm.mamba_mixer2", "vllm.mamba_mixer", - "vllm.short_conv", - "vllm.linear_attention", - "vllm.plamo2_mamba_mixer", - "vllm.gdn_attention", + "vllm.short_conv", + "vllm.linear_attention", + "vllm.plamo2_mamba_mixer", + "vllm.gdn_attention", "vllm.sparse_attn_indexer"]} - #Version 0.11.0 + #Version 0.11.0 ``` + If your service starts successfully, you can see the info shown below: + ```bash (APIServer pid=109442) INFO: Started server process [109442] (APIServer pid=109442) INFO: Waiting for application startup. (APIServer pid=109442) INFO: Application startup complete. 
``` + Once your server is started, you can query the model with input prompts: + ```bash curl http://localhost:9988/v1/completions \ -H "Content-Type: application/json" \ @@ -143,11 +151,15 @@ curl http://localhost:9988/v1/completions \ "temperature": 0 }' ``` + If you query the server successfully, you can see the info shown below (client): + ```bash {"id":"cmpl-4f61fe821ff34f23a91baade5de5103e","object":"text_completion","created":1768876583,"model":"Qwen3-VL-32B","choices":[{"index":0,"text":" 你好!我是通义千问,是阿里云研发的超大规模语言模型。我能够回答问题、创作文字、编程等,还能根据你的需求进行多轮对话。有什么我可以帮你的吗?😊\n\n(温馨提示:我是一个AI助手,虽然我尽力提供准确和有用的信息,但请记得在做重要决策时,最好结合专业意见或进一步核实信息哦!)","logprobs":null,"finish_reason":"stop","stop_reason":null,"token_ids":null,"prompt_logprobs":null,"prompt_token_ids":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":5,"total_tokens":90,"completion_tokens":85,"prompt_tokens_details":null},"kv_transfer_params":null} ``` + Logs of the vllm server: + ```bash (APIServer pid=109442) INFO: 127.0.0.1:19962 - "POST /v1/completions HTTP/1.1" 200 OK (APIServer pid=109442) INFO 01-20 10:36:28 [loggers.py:127] Engine 000: Avg prompt throughput: 0.5 tokens/s, Avg generation throughput: 8.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0% @@ -155,7 +167,9 @@ Logs of the vllm server: (APIServer pid=109442) INFO 01-20 10:43:23 [chat_utils.py:560] Detected the chat template content format to be 'openai'. You can set `--chat-template-content-format` to override this. 
(APIServer pid=109442) INFO 01-20 10:43:28 [loggers.py:127] Engine 000: Avg prompt throughput: 9.0 tokens/s, Avg generation throughput: 6.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.5%, Prefix cache hit rate: 0.0% ``` + Input an image for testing. Here, a Python script is used: + ```python import requests import base64 @@ -191,11 +205,15 @@ payload = { response = requests.post(API_URL, json=payload) print(response.json()) ``` + If you query the server successfully, you can see the info shown below (client): + ```bash {'id': 'chatcmpl-4b42fe46f2c84991b0af5d5e1ffad9ba', 'object': 'chat.completion', 'created': 1768877003, 'model': 'Qwen3-VL-32B', 'choices': [{'index': 0, 'message': {'role': 'assistant', 'content': '你好!这张图片展示的是“Hugging Face”的标志。\n\n图片左侧是一个黄色的圆形表情符号(emoji),它有着圆圆的眼睛、张开的嘴巴露出微笑,双手合拢在脸颊两侧,做出一个拥抱或欢迎的姿态,整体传达出友好、温暖和亲切的感觉。\n\n图片右侧是黑色的英文文字“Hugging Face”,字体简洁现代,与左侧的表情符号相呼应。\n\n整个标志设计简洁明了,背景为纯白色,突出了标志本身。这个标志属于Hugging Face公司,它是一家知名的开源人工智能公司,尤其在自然语言处理(NLP)领域以提供预训练模型(如Transformers库)和模型托管平台而闻名。\n\n整体来看,这个标志通过可爱的表情符号和直白的文字,成功传达了公司“拥抱”技术、开放共享、友好的品牌理念。', 'refusal': None, 'annotations': None, 'audio': None, 'function_call': None, 'tool_calls': [], 'reasoning_content': None}, 'logprobs': None, 'finish_reason': 'stop', 'stop_reason': None, 'token_ids': None}], 'service_tier': None, 'system_fingerprint': None, 'usage': {'prompt_tokens': 90, 'total_tokens': 266, 'completion_tokens': 176, 'prompt_tokens_details': None}, 'prompt_logprobs': None, 'prompt_token_ids': None, 'kv_transfer_params': None} ``` + Logs of the vllm server: + ```bash (APIServer pid=109442) INFO: 127.0.0.1:26854 - "POST /v1/chat/completions HTTP/1.1" 200 OK (APIServer pid=109442) INFO 01-20 10:43:38 [loggers.py:127] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 10.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%