[Docs] Fix app.readthedocs building (#210)

Signed-off-by: dongxinyu03 <dongxinyu03@baidu.com>
Xinyu Dong
2026-02-17 16:17:25 +08:00
committed by GitHub
parent d9ad42a174
commit a470452871
4 changed files with 63 additions and 21 deletions

@@ -86,8 +86,10 @@ if __name__ == "__main__":
main()
```
:::::
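For context, a minimal sketch of offline chat inference with vLLM is shown below; the model path and the English prompt are placeholder assumptions, not the exact script from this document:

```python
# Minimal sketch (not the exact script above): offline chat inference
# with vLLM. The model path is a hypothetical placeholder.
from vllm import LLM, SamplingParams


def main():
    llm = LLM(model="/path/to/InternVL2_5-26B")  # assumed local model path
    sampling_params = SamplingParams(temperature=0.7, max_tokens=128)
    messages = [
        {"role": "user", "content": [{"type": "text", "text": "Hello! Who are you?"}]}
    ]
    # LLM.chat applies the model's chat template before generation.
    outputs = llm.chat(messages, sampling_params=sampling_params)
    print(outputs[0].outputs[0].text)


if __name__ == "__main__":
    main()
```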
If the script runs successfully, you will see output like the following:
```bash
==================================================
Input content: [{'role': 'user', 'content': [{'type': 'text', 'text': '你好!你是谁?'}]}]
@@ -95,9 +97,11 @@ Model response:
你好!我是一个由人工智能驱动的助手,旨在帮助回答问题、提供信息和解决日常问题。请问有什么我可以帮助你的?
==================================================
```
### Online Serving on Single XPU
Start the vLLM server on a single XPU:
```text
python -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 9988 \
@@ -114,25 +118,29 @@ python -m vllm.entrypoints.openai.api_server \
--no-enable-chunked-prefill \
--distributed-executor-backend mp \
--served-model-name InternVL2_5-26B \
    --compilation-config '{"splitting_ops": ["vllm.unified_attention",
"vllm.unified_attention_with_output",
"vllm.unified_attention_with_output_kunlun",
"vllm.mamba_mixer2",
"vllm.mamba_mixer",
"vllm.short_conv",
"vllm.linear_attention",
"vllm.plamo2_mamba_mixer",
"vllm.gdn_attention",
"vllm.short_conv",
"vllm.linear_attention",
"vllm.plamo2_mamba_mixer",
"vllm.gdn_attention",
"vllm.sparse_attn_indexer"]}
#Version 0.11.0
#Version 0.11.0
```
If the service starts successfully, you will see output like the following:
```bash
(APIServer pid=157777) INFO: Started server process [157777]
(APIServer pid=157777) INFO: Waiting for application startup.
(APIServer pid=157777) INFO: Application startup complete.
```
Once the server is running, you can query the model with input prompts:
```bash
curl http://localhost:9988/v1/completions \
-H "Content-Type: application/json" \
@@ -145,17 +153,23 @@ curl http://localhost:9988/v1/completions \
"top_k": 50
}'
```
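Equivalently, since vLLM exposes an OpenAI-compatible API, the same request can be sent from Python with the official OpenAI client; a minimal sketch (the base URL, dummy API key, and prompt are assumptions):

```python
# Hedged sketch: the same /v1/completions request via the OpenAI Python
# client. vLLM's server is OpenAI-compatible; the api_key is a dummy
# value, since no key is configured in this setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:9988/v1", api_key="EMPTY")

completion = client.completions.create(
    model="InternVL2_5-26B",
    prompt="Hello! Who are you?",  # placeholder prompt
    max_tokens=64,
    temperature=0.7,
)
print(completion.choices[0].text)
```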
If the query succeeds, the client receives a response like the following:
```bash
{"id":"cmpl-23a24afd616d4a47910aeeccb20921ed","object":"text_completion","created":1768891222,"model":"InternVL2_5-26B","choices":[{"index":0,"text":" 你有什么问题吗?\n\n你好我是书生·AI很高兴能与你交流。请问有什么我可以帮助你的吗无论是解答问题、提供信息还是其他方面的帮助我都会尽力而为。请告诉我你的需求。","logprobs":null,"finish_reason":"stop","stop_reason":92542,"token_ids":null,"prompt_logprobs":null,"prompt_token_ids":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":6,"total_tokens":53,"completion_tokens":47,"prompt_tokens_details":null},"kv_transfer_params":null}
```
The corresponding vLLM server logs:
```bash
(APIServer pid=161632) INFO: 127.0.0.1:56708 - "POST /v1/completions HTTP/1.1" 200 OK
(APIServer pid=161632) INFO 01-20 14:40:25 [loggers.py:127] Engine 000: Avg prompt throughput: 0.6 tokens/s, Avg generation throughput: 4.6 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
(APIServer pid=161632) INFO 01-20 14:40:35 [loggers.py:127] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
```
To test image input, a Python script like the following can be used:
```python
import requests
import base64
@@ -193,13 +207,17 @@ payload = {
response = requests.post(API_URL, json=payload)
print(response.json())
```
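The script above is shown only in part; a complete, self-contained version might look like the sketch below, where the local image path, the prompt text, and `max_tokens` are assumptions based on the OpenAI-compatible chat API:

```python
# Hedged sketch of a full image-query client for the OpenAI-compatible
# /v1/chat/completions endpoint. The local image path, prompt, and
# max_tokens are placeholder assumptions.
import base64

import requests

API_URL = "http://localhost:9988/v1/chat/completions"

# Encode a local test image as a base64 data URL (path is hypothetical).
with open("test.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": "InternVL2_5-26B",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
    "max_tokens": 128,
}

response = requests.post(API_URL, json=payload)
print(response.json())
```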
If the query succeeds, the client receives a response like the following:
```bash
{'id': 'chatcmpl-9aeab6044795458da04f2fdcf1d0445d', 'object': 'chat.completion', 'created': 1768891349, 'model': 'InternVL2_5-26B', 'choices': [{'index': 0, 'message': {'role': 'assistant', 'content': '你好这张图片上有一个黄色的笑脸表情符号双手合十旁边写着“Hugging Face”。这个表情符号看起来很开心似乎在表示拥抱或欢迎。', 'refusal': None, 'annotations': None, 'audio': None, 'function_call': None, 'tool_calls': [], 'reasoning_content': None}, 'logprobs': None, 'finish_reason': 'stop', 'stop_reason': 92542, 'token_ids': None}], 'service_tier': None, 'system_fingerprint': None, 'usage': {'prompt_tokens': 790, 'total_tokens': 827, 'completion_tokens': 37, 'prompt_tokens_details': None}, 'prompt_logprobs': None, 'prompt_token_ids': None, 'kv_transfer_params': None}
```
The corresponding vLLM server logs:
```bash
(APIServer pid=161632) INFO: 127.0.0.1:58686 - "POST /v1/chat/completions HTTP/1.1" 200 OK
(APIServer pid=161632) INFO 01-20 14:42:35 [loggers.py:127] Engine 000: Avg prompt throughput: 79.0 tokens/s, Avg generation throughput: 3.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
(APIServer pid=161632) INFO 01-20 14:42:45 [loggers.py:127] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
```