[Docs] Add XPU tutorials for Qwen / InternVL (#140)

Signed-off-by: Joeegin <3318329726@qq.com>
Joeegin
2026-01-22 13:50:49 +08:00
committed by GitHub
parent 74d4f804e8
commit 58f570ddea
4 changed files with 620 additions and 0 deletions


@@ -4,6 +4,9 @@
:caption: Deployment
:maxdepth: 1
single_xpu_Qwen3-8B
single_xpu_Qwen3-VL-32B
single_xpu_InternVL2_5-26B
multi_xpu_Qwen2.5-VL-32B
multi_xpu_GLM-4.5
multi_xpu_Qwen3-Coder-480B-A35B(W8A8)
:::


@@ -0,0 +1,209 @@
# Multi XPU (Qwen2.5-VL-32B)
## Run vllm-kunlun on Multi XPU
Set up the environment using a container:
```bash
#!/bin/bash
# rundocker.sh
XPU_NUM=8
DOCKER_DEVICE_CONFIG=""
if [ $XPU_NUM -gt 0 ]; then
    for idx in $(seq 0 $((XPU_NUM-1))); do
        DOCKER_DEVICE_CONFIG="${DOCKER_DEVICE_CONFIG} --device=/dev/xpu${idx}:/dev/xpu${idx}"
    done
    DOCKER_DEVICE_CONFIG="${DOCKER_DEVICE_CONFIG} --device=/dev/xpuctrl:/dev/xpuctrl"
fi
# Set this to your vllm-kunlun image.
export build_image="xxxxxxxxxxxxxxxxx"
docker run -itd ${DOCKER_DEVICE_CONFIG} \
    --net=host \
    --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
    --tmpfs /dev/shm:rw,nosuid,nodev,exec,size=32g \
    -v /home/users/vllm-kunlun:/home/vllm-kunlun \
    -v /usr/local/bin/xpu-smi:/usr/local/bin/xpu-smi \
    --name "$1" \
    -w /workspace \
    "$build_image" /bin/bash
```
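The script takes the container name as its first argument. To create the container and open a shell in it (the name below is illustrative):
```bash
# Create the container; "vllm-xpu" is an example name.
bash rundocker.sh vllm-xpu
# Attach a shell to the running container.
docker exec -it vllm-xpu /bin/bash
```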
### Offline Inference on Multi XPU
Run the following script inside the container:
```python
from vllm import LLM, SamplingParams


def main():
    model_path = "/models/Qwen2.5-VL-32B-Instruct"
    # Engine arguments; tensor_parallel_size=2 shards the model across two XPUs.
    llm_params = {
        "model": model_path,
        "tensor_parallel_size": 2,
        "trust_remote_code": True,
        "dtype": "float16",
        "enable_chunked_prefill": False,
        "enable_prefix_caching": False,
        "distributed_executor_backend": "mp",
        "max_model_len": 16384,
        "gpu_memory_utilization": 0.9,
    }
    llm = LLM(**llm_params)
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "你好!你是谁?"}
            ]
        }
    ]
    sampling_params = SamplingParams(
        max_tokens=200,
        temperature=0.7,
        top_k=50,
        top_p=0.9,
    )
    outputs = llm.chat(messages, sampling_params=sampling_params)
    response = outputs[0].outputs[0].text
    print("=" * 50)
    print("Input content:", messages)
    print("Model response:\n", response)
    print("=" * 50)


if __name__ == "__main__":
    main()
```
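Save the script above (for example as `offline_infer.py`, an illustrative name) and run it inside the container:
```bash
python offline_infer.py
```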
If the script runs successfully, you will see output like the following:
```bash
==================================================
Input content: [{'role': 'user', 'content': [{'type': 'text', 'text': '你好!你是谁?'}]}]
Model response:
你好我是通义千问阿里巴巴集团旗下的超大规模语言模型。你可以叫我Qwen。我能够回答问题、创作文字比如写故事、写公文、写邮件、写剧本、逻辑推理、编程等等还能表达观点玩游戏等。有什么我可以帮助你的吗 😊
==================================================
```
### Online Serving on Multi XPU
Start the vLLM server across multiple XPUs:
```bash
python -m vllm.entrypoints.openai.api_server \
    --host 0.0.0.0 \
    --port 9988 \
    --model /models/Qwen2.5-VL-32B-Instruct \
    --gpu-memory-utilization 0.9 \
    --trust-remote-code \
    --max-model-len 32768 \
    --tensor-parallel-size 2 \
    --dtype float16 \
    --max-num-seqs 128 \
    --max-num-batched-tokens 32768 \
    --block-size 128 \
    --no-enable-prefix-caching \
    --no-enable-chunked-prefill \
    --distributed-executor-backend mp \
    --served-model-name Qwen2.5-VL-32B-Instruct \
    --compilation-config '{"splitting_ops": ["vllm.unified_attention",
        "vllm.unified_attention_with_output",
        "vllm.unified_attention_with_output_kunlun",
        "vllm.mamba_mixer2",
        "vllm.mamba_mixer",
        "vllm.short_conv",
        "vllm.linear_attention",
        "vllm.plamo2_mamba_mixer",
        "vllm.gdn_attention",
        "vllm.sparse_attn_indexer"]}'
# vLLM version 0.11.0
```
If the service starts successfully, you will see output like the following:
```bash
(APIServer pid=110552) INFO: Started server process [110552]
(APIServer pid=110552) INFO: Waiting for application startup.
(APIServer pid=110552) INFO: Application startup complete.
```
Once the server is running, you can query the model with an input prompt:
```bash
curl http://localhost:9988/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen2.5-VL-32B-Instruct",
    "prompt": "你好!你是谁?",
    "max_tokens": 100,
    "temperature": 0.7
  }'
```
If the query succeeds, the client receives a response like the following:
```bash
{"id":"cmpl-9784668ac5bc4b4e975d0aa5ee8377c6","object":"text_completion","created":1768898088,"model":"Qwen2.5-VL-32B-Instruct","choices":[{"index":0,"text":" 你好!我是通义千问,阿里巴巴集团旗下的超大规模语言模型。你可以回答问题、创作文字,如写故事、公文、邮件、剧本等,还能表达\n","logprobs":null,"finish_reason":"stop","stop_reason":null,"token_ids":null,"prompt_logprobs":null,"prompt_token_ids":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":5,"total_tokens":45,"completion_tokens":40,"prompt_tokens_details":null},"kv_transfer_params":null}
```
The corresponding vLLM server logs:
```bash
(APIServer pid=110552) INFO 01-20 16:34:48 [loggers.py:127] Engine 000: Avg prompt throughput: 0.5 tokens/s, Avg generation throughput: 0.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
(APIServer pid=110552) INFO: 127.0.0.1:17988 - "POST /v1/completions HTTP/1.1" 200 OK
(APIServer pid=110552) INFO 01-20 16:34:58 [loggers.py:127] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 3.4 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
(APIServer pid=110552) INFO 01-20 16:35:08 [loggers.py:127] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
```
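Because the server exposes the OpenAI-compatible API, you can also issue the same completion request with the official `openai` Python client (a minimal sketch; assumes the `openai` package is installed on the client side):
```python
from openai import OpenAI

# Point the client at the local vLLM server; vLLM ignores the API key by default.
client = OpenAI(base_url="http://localhost:9988/v1", api_key="EMPTY")

completion = client.completions.create(
    model="Qwen2.5-VL-32B-Instruct",
    prompt="你好!你是谁?",
    max_tokens=100,
    temperature=0.7,
)
print(completion.choices[0].text)
```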
Next, test with an image input. Here, a Python script is used:
```python
import requests
import base64

API_URL = "http://localhost:9988/v1/chat/completions"
MODEL_NAME = "Qwen2.5-VL-32B-Instruct"
IMAGE_PATH = "/images.jpeg"


def encode_image(image_path):
    # Read the image and encode it as base64 for the data URL.
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")


base64_image = encode_image(IMAGE_PATH)
payload = {
    "model": MODEL_NAME,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "你好!请描述一下这张图片。"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"},
                },
            ],
        }
    ],
    "max_tokens": 300,
    "temperature": 0.1,
    "top_p": 0.9,
    "top_k": 50,
}
response = requests.post(API_URL, json=payload)
print(response.json())
```
If the query succeeds, the client receives a response like the following:
```bash
{'id': 'chatcmpl-9857119aed664a3e8f078efd90defdca', 'object': 'chat.completion', 'created': 1768898198, 'model': 'Qwen2.5-VL-32B-Instruct', 'choices': [{'index': 0, 'message': {'role': 'assistant', 'content': '你好!这张图片展示了一个标志,内容如下:\n\n1. **左侧图标**\n - 一个黄色的圆形笑脸表情符号。\n - 笑脸的表情非常开心,眼睛眯成弯弯的形状,嘴巴张开露出牙齿,显得非常愉快。\n - 笑脸的双手在胸前做出拥抱的动作,手掌朝外,象征着“拥抱”或“友好的姿态”。\n\n2. **右侧文字**\n - 文字是英文单词“Hugging Face”。\n - 字体为黑色字体风格简洁、现代看起来像是无衬线字体sans-serif。\n\n3. **整体设计**\n - 整个标志的设计非常简洁明了,颜色对比鲜明(黄色笑脸和黑色文字),背景为纯白色,给人一种干净、友好的感觉。\n - 笑脸和文字之间的间距适中,布局平衡。\n\n这个标志可能属于某个品牌或组织名字为“Hugging Face”从设计来看它传达了一种友好、开放和积极的形象。', 'refusal': None, 'annotations': None, 'audio': None, 'function_call': None, 'tool_calls': [], 'reasoning_content': None}, 'logprobs': None, 'finish_reason': 'stop', 'stop_reason': None, 'token_ids': None}], 'service_tier': None, 'system_fingerprint': None, 'usage': {'prompt_tokens': 95, 'total_tokens': 311, 'completion_tokens': 216, 'prompt_tokens_details': None}, 'prompt_logprobs': None, 'prompt_token_ids': None, 'kv_transfer_params': None}
```
The corresponding vLLM server logs:
```bash
(APIServer pid=110552) INFO: 127.0.0.1:19378 - "POST /v1/chat/completions HTTP/1.1" 200 OK
(APIServer pid=110552) INFO 01-20 16:36:49 [loggers.py:127] Engine 000: Avg prompt throughput: 9.5 tokens/s, Avg generation throughput: 21.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
(APIServer pid=110552) INFO 01-20 16:36:59 [loggers.py:127] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
```


@@ -0,0 +1,205 @@
# Single XPU (InternVL2_5-26B)
## Run vllm-kunlun on Single XPU
Set up the environment using a container:
```bash
#!/bin/bash
# rundocker.sh
XPU_NUM=8
DOCKER_DEVICE_CONFIG=""
if [ $XPU_NUM -gt 0 ]; then
    for idx in $(seq 0 $((XPU_NUM-1))); do
        DOCKER_DEVICE_CONFIG="${DOCKER_DEVICE_CONFIG} --device=/dev/xpu${idx}:/dev/xpu${idx}"
    done
    DOCKER_DEVICE_CONFIG="${DOCKER_DEVICE_CONFIG} --device=/dev/xpuctrl:/dev/xpuctrl"
fi
# Set this to your vllm-kunlun image.
export build_image="xxxxxxxxxxxxxxxxx"
docker run -itd ${DOCKER_DEVICE_CONFIG} \
    --net=host \
    --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
    --tmpfs /dev/shm:rw,nosuid,nodev,exec,size=32g \
    -v /home/users/vllm-kunlun:/home/vllm-kunlun \
    -v /usr/local/bin/xpu-smi:/usr/local/bin/xpu-smi \
    --name "$1" \
    -w /workspace \
    "$build_image" /bin/bash
```
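Since the script mounts the host's `xpu-smi` binary into the container, you can check that the XPUs are visible before starting vLLM (a quick check; assumes `xpu-smi` prints device status when run without arguments, and the container name is illustrative):
```bash
# List the XPU devices mapped into the container.
docker exec -it vllm-xpu xpu-smi
```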
### Offline Inference on Single XPU
Run the following script inside the container:
```python
from vllm import LLM, SamplingParams


def main():
    model_path = "/models/InternVL2_5-26B"
    # Engine arguments for a single XPU.
    llm_params = {
        "model": model_path,
        "tensor_parallel_size": 1,
        "trust_remote_code": True,
        "dtype": "float16",
        "enable_chunked_prefill": False,
        "enable_prefix_caching": False,
        "distributed_executor_backend": "mp",
        "max_model_len": 16384,
        "gpu_memory_utilization": 0.9,
    }
    llm = LLM(**llm_params)
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "你好!你是谁?"}
            ]
        }
    ]
    sampling_params = SamplingParams(
        max_tokens=200,
        temperature=0.7,
        top_k=50,
        top_p=0.9,
    )
    outputs = llm.chat(messages, sampling_params=sampling_params)
    response = outputs[0].outputs[0].text
    print("=" * 50)
    print("Input content:", messages)
    print("Model response:\n", response)
    print("=" * 50)


if __name__ == "__main__":
    main()
```
If the script runs successfully, you will see output like the following:
```bash
==================================================
Input content: [{'role': 'user', 'content': [{'type': 'text', 'text': '你好!你是谁?'}]}]
Model response:
你好!我是一个由人工智能驱动的助手,旨在帮助回答问题、提供信息和解决日常问题。请问有什么我可以帮助你的?
==================================================
```
### Online Serving on Single XPU
Start the vLLM server on a single XPU:
```bash
python -m vllm.entrypoints.openai.api_server \
    --host 0.0.0.0 \
    --port 9988 \
    --model /models/InternVL2_5-26B \
    --gpu-memory-utilization 0.9 \
    --trust-remote-code \
    --max-model-len 32768 \
    --tensor-parallel-size 1 \
    --dtype float16 \
    --max-num-seqs 128 \
    --max-num-batched-tokens 32768 \
    --block-size 128 \
    --no-enable-prefix-caching \
    --no-enable-chunked-prefill \
    --distributed-executor-backend mp \
    --served-model-name InternVL2_5-26B \
    --compilation-config '{"splitting_ops": ["vllm.unified_attention",
        "vllm.unified_attention_with_output",
        "vllm.unified_attention_with_output_kunlun",
        "vllm.mamba_mixer2",
        "vllm.mamba_mixer",
        "vllm.short_conv",
        "vllm.linear_attention",
        "vllm.plamo2_mamba_mixer",
        "vllm.gdn_attention",
        "vllm.sparse_attn_indexer"]}'
# vLLM version 0.11.0
```
If the service starts successfully, you will see output like the following:
```bash
(APIServer pid=157777) INFO: Started server process [157777]
(APIServer pid=157777) INFO: Waiting for application startup.
(APIServer pid=157777) INFO: Application startup complete.
```
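You can also verify that the model is registered by listing the standard OpenAI-compatible models endpoint:
```bash
curl http://localhost:9988/v1/models
```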
Once the server is running, you can query the model with an input prompt:
```bash
curl http://localhost:9988/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "InternVL2_5-26B",
    "prompt": "你好!你是谁?",
    "max_tokens": 100,
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 50
  }'
```
If the query succeeds, the client receives a response like the following:
```bash
{"id":"cmpl-23a24afd616d4a47910aeeccb20921ed","object":"text_completion","created":1768891222,"model":"InternVL2_5-26B","choices":[{"index":0,"text":" 你有什么问题吗?\n\n你好我是书生·AI很高兴能与你交流。请问有什么我可以帮助你的吗无论是解答问题、提供信息还是其他方面的帮助我都会尽力而为。请告诉我你的需求。","logprobs":null,"finish_reason":"stop","stop_reason":92542,"token_ids":null,"prompt_logprobs":null,"prompt_token_ids":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":6,"total_tokens":53,"completion_tokens":47,"prompt_tokens_details":null},"kv_transfer_params":null}
```
The corresponding vLLM server logs:
```bash
(APIServer pid=161632) INFO: 127.0.0.1:56708 - "POST /v1/completions HTTP/1.1" 200 OK
(APIServer pid=161632) INFO 01-20 14:40:25 [loggers.py:127] Engine 000: Avg prompt throughput: 0.6 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
(APIServer pid=161632) INFO 01-20 14:40:35 [loggers.py:127] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
```
Next, test with an image input. Here, a Python script is used:
```python
import requests
import base64

API_URL = "http://localhost:9988/v1/chat/completions"
MODEL_NAME = "InternVL2_5-26B"
IMAGE_PATH = "/images.jpeg"


def encode_image(image_path):
    # Read the image and encode it as base64 for the data URL.
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")


base64_image = encode_image(IMAGE_PATH)
payload = {
    "model": MODEL_NAME,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "你好!请描述一下这张图片。"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"},
                },
            ],
        }
    ],
    "max_tokens": 300,
    "temperature": 0.1,
    "top_p": 0.9,
    "top_k": 50,
}
response = requests.post(API_URL, json=payload)
print(response.json())
```
If the query succeeds, the client receives a response like the following:
```bash
{'id': 'chatcmpl-9aeab6044795458da04f2fdcf1d0445d', 'object': 'chat.completion', 'created': 1768891349, 'model': 'InternVL2_5-26B', 'choices': [{'index': 0, 'message': {'role': 'assistant', 'content': '你好这张图片上有一个黄色的笑脸表情符号双手合十旁边写着“Hugging Face”。这个表情符号看起来很开心似乎在表示拥抱或欢迎。', 'refusal': None, 'annotations': None, 'audio': None, 'function_call': None, 'tool_calls': [], 'reasoning_content': None}, 'logprobs': None, 'finish_reason': 'stop', 'stop_reason': 92542, 'token_ids': None}], 'service_tier': None, 'system_fingerprint': None, 'usage': {'prompt_tokens': 790, 'total_tokens': 827, 'completion_tokens': 37, 'prompt_tokens_details': None}, 'prompt_logprobs': None, 'prompt_token_ids': None, 'kv_transfer_params': None}
```
The corresponding vLLM server logs:
```bash
(APIServer pid=161632) INFO: 127.0.0.1:58686 - "POST /v1/chat/completions HTTP/1.1" 200 OK
(APIServer pid=161632) INFO 01-20 14:42:35 [loggers.py:127] Engine 000: Avg prompt throughput: 79.0 tokens/s, Avg generation throughput: 3.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
(APIServer pid=161632) INFO 01-20 14:42:45 [loggers.py:127] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
```


@@ -0,0 +1,203 @@
# Single XPU (Qwen3-VL-32B)
## Run vllm-kunlun on Single XPU
Set up the environment using a container:
```bash
#!/bin/bash
# rundocker.sh
XPU_NUM=8
DOCKER_DEVICE_CONFIG=""
if [ $XPU_NUM -gt 0 ]; then
    for idx in $(seq 0 $((XPU_NUM-1))); do
        DOCKER_DEVICE_CONFIG="${DOCKER_DEVICE_CONFIG} --device=/dev/xpu${idx}:/dev/xpu${idx}"
    done
    DOCKER_DEVICE_CONFIG="${DOCKER_DEVICE_CONFIG} --device=/dev/xpuctrl:/dev/xpuctrl"
fi
# Set this to your vllm-kunlun image.
export build_image="xxxxxxxxxxxxxxxxx"
docker run -itd ${DOCKER_DEVICE_CONFIG} \
    --net=host \
    --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
    --tmpfs /dev/shm:rw,nosuid,nodev,exec,size=32g \
    -v /home/users/vllm-kunlun:/home/vllm-kunlun \
    -v /usr/local/bin/xpu-smi:/usr/local/bin/xpu-smi \
    --name "$1" \
    -w /workspace \
    "$build_image" /bin/bash
```
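The script above maps all eight XPUs into the container. For a strictly single-XPU container, you can lower `XPU_NUM` so that only `/dev/xpu0` (plus the control device) is passed through:
```bash
# In rundocker.sh: the device loop then adds only --device=/dev/xpu0:/dev/xpu0.
XPU_NUM=1
```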
### Offline Inference on Single XPU
Run the following script inside the container:
```python
from vllm import LLM, SamplingParams


def main():
    model_path = "/models/Qwen3-VL-32B"
    # Engine arguments for a single XPU.
    llm_params = {
        "model": model_path,
        "tensor_parallel_size": 1,
        "trust_remote_code": True,
        "dtype": "float16",
        "enable_chunked_prefill": False,
        "distributed_executor_backend": "mp",
        "max_model_len": 16384,
        "gpu_memory_utilization": 0.9,
    }
    llm = LLM(**llm_params)
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "tell a joke"}
            ]
        }
    ]
    sampling_params = SamplingParams(
        max_tokens=200,
        temperature=1.0,
        top_k=50,
        top_p=1.0,
    )
    outputs = llm.chat(messages, sampling_params=sampling_params)
    response = outputs[0].outputs[0].text
    print("=" * 50)
    print("Input content:", messages)
    print("Model response:\n", response)
    print("=" * 50)


if __name__ == "__main__":
    main()
```
If the script runs successfully, you will see output like the following:
```bash
==================================================
Input content: [{'role': 'user', 'content': [{'type': 'text', 'text': 'tell a joke'}]}]
Model response:
Why dont skeletons fight each other?
Because they dont have the guts! 🦴😄
==================================================
```
### Online Serving on Single XPU
Start the vLLM server on a single XPU:
```bash
python -m vllm.entrypoints.openai.api_server \
    --host 0.0.0.0 \
    --port 9988 \
    --model /models/Qwen3-VL-32B \
    --gpu-memory-utilization 0.9 \
    --trust-remote-code \
    --max-model-len 32768 \
    --tensor-parallel-size 1 \
    --dtype float16 \
    --max-num-seqs 128 \
    --max-num-batched-tokens 32768 \
    --block-size 128 \
    --no-enable-prefix-caching \
    --no-enable-chunked-prefill \
    --distributed-executor-backend mp \
    --served-model-name Qwen3-VL-32B \
    --compilation-config '{"splitting_ops": ["vllm.unified_attention",
        "vllm.unified_attention_with_output",
        "vllm.unified_attention_with_output_kunlun",
        "vllm.mamba_mixer2",
        "vllm.mamba_mixer",
        "vllm.short_conv",
        "vllm.linear_attention",
        "vllm.plamo2_mamba_mixer",
        "vllm.gdn_attention",
        "vllm.sparse_attn_indexer"]}'
# vLLM version 0.11.0
```
If the service starts successfully, you will see output like the following:
```bash
(APIServer pid=109442) INFO: Started server process [109442]
(APIServer pid=109442) INFO: Waiting for application startup.
(APIServer pid=109442) INFO: Application startup complete.
```
Once the server is running, you can query the model with an input prompt:
```bash
curl http://localhost:9988/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen3-VL-32B",
    "prompt": "你好!你是谁?",
    "max_tokens": 100,
    "temperature": 0
  }'
```
If the query succeeds, the client receives a response like the following:
```bash
{"id":"cmpl-4f61fe821ff34f23a91baade5de5103e","object":"text_completion","created":1768876583,"model":"Qwen3-VL-32B","choices":[{"index":0,"text":" 你好!我是通义千问,是阿里云研发的超大规模语言模型。我能够回答问题、创作文字、编程等,还能根据你的需求进行多轮对话。有什么我可以帮你的吗?😊\n\n温馨提示我是一个AI助手虽然我尽力提供准确和有用的信息但请记得在做重要决策时最好结合专业意见或进一步核实信息哦","logprobs":null,"finish_reason":"stop","stop_reason":null,"token_ids":null,"prompt_logprobs":null,"prompt_token_ids":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":5,"total_tokens":90,"completion_tokens":85,"prompt_tokens_details":null},"kv_transfer_params":null}
```
The corresponding vLLM server logs:
```bash
(APIServer pid=109442) INFO: 127.0.0.1:19962 - "POST /v1/completions HTTP/1.1" 200 OK
(APIServer pid=109442) INFO 01-20 10:36:28 [loggers.py:127] Engine 000: Avg prompt throughput: 0.5 tokens/s, Avg generation throughput: 8.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
(APIServer pid=109442) INFO 01-20 10:36:38 [loggers.py:127] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
(APIServer pid=109442) INFO 01-20 10:43:23 [chat_utils.py:560] Detected the chat template content format to be 'openai'. You can set `--chat-template-content-format` to override this.
(APIServer pid=109442) INFO 01-20 10:43:28 [loggers.py:127] Engine 000: Avg prompt throughput: 9.0 tokens/s, Avg generation throughput: 6.9 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.5%, Prefix cache hit rate: 0.0%
```
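The endpoint also supports token streaming via the standard OpenAI `stream` flag; the response then arrives as server-sent events (a minimal sketch of the same request with streaming enabled):
```bash
curl http://localhost:9988/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen3-VL-32B",
    "prompt": "你好!你是谁?",
    "max_tokens": 100,
    "temperature": 0,
    "stream": true
  }'
```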
Next, test with an image input. Here, a Python script is used:
```python
import requests
import base64

API_URL = "http://localhost:9988/v1/chat/completions"
MODEL_NAME = "Qwen3-VL-32B"
IMAGE_PATH = "/images.jpeg"


def encode_image(image_path):
    # Read the image and encode it as base64 for the data URL.
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")


base64_image = encode_image(IMAGE_PATH)
payload = {
    "model": MODEL_NAME,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "你好!请描述一下这张图片。"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"},
                },
            ],
        }
    ],
    "max_tokens": 300,
    "temperature": 0.1,
}
response = requests.post(API_URL, json=payload)
print(response.json())
```
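Equivalently, the same multimodal request can be sent with the official `openai` Python client (a minimal sketch; assumes the `openai` package is installed on the client side):
```python
import base64

from openai import OpenAI

# vLLM ignores the API key by default; any placeholder works.
client = OpenAI(base_url="http://localhost:9988/v1", api_key="EMPTY")

with open("/images.jpeg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

resp = client.chat.completions.create(
    model="Qwen3-VL-32B",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "你好!请描述一下这张图片。"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
    max_tokens=300,
    temperature=0.1,
)
print(resp.choices[0].message.content)
```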
If the query succeeds, the client receives a response like the following:
```bash
{'id': 'chatcmpl-4b42fe46f2c84991b0af5d5e1ffad9ba', 'object': 'chat.completion', 'created': 1768877003, 'model': 'Qwen3-VL-32B', 'choices': [{'index': 0, 'message': {'role': 'assistant', 'content': '你好这张图片展示的是“Hugging Face”的标志。\n\n图片左侧是一个黄色的圆形表情符号emoji它有着圆圆的眼睛、张开的嘴巴露出微笑双手合拢在脸颊两侧做出一个拥抱或欢迎的姿态整体传达出友好、温暖和亲切的感觉。\n\n图片右侧是黑色的英文文字“Hugging Face”字体简洁现代与左侧的表情符号相呼应。\n\n整个标志设计简洁明了背景为纯白色突出了标志本身。这个标志属于Hugging Face公司它是一家知名的开源人工智能公司尤其在自然语言处理NLP领域以提供预训练模型如Transformers库和模型托管平台而闻名。\n\n整体来看这个标志通过可爱的表情符号和直白的文字成功传达了公司“拥抱”技术、开放共享、友好的品牌理念。', 'refusal': None, 'annotations': None, 'audio': None, 'function_call': None, 'tool_calls': [], 'reasoning_content': None}, 'logprobs': None, 'finish_reason': 'stop', 'stop_reason': None, 'token_ids': None}], 'service_tier': None, 'system_fingerprint': None, 'usage': {'prompt_tokens': 90, 'total_tokens': 266, 'completion_tokens': 176, 'prompt_tokens_details': None}, 'prompt_logprobs': None, 'prompt_token_ids': None, 'kv_transfer_params': None}
```
The corresponding vLLM server logs:
```bash
(APIServer pid=109442) INFO: 127.0.0.1:26854 - "POST /v1/chat/completions HTTP/1.1" 200 OK
(APIServer pid=109442) INFO 01-20 10:43:38 [loggers.py:127] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 10.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
(APIServer pid=109442) INFO 01-20 10:43:48 [loggers.py:127] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
```