[Lint] Style: reformat markdown files via markdownlint (#5884)

### What this PR does / why we need it?
Reformat markdown files via markdownlint.
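The hunks below mostly add a language to bare code fences, which corresponds to markdownlint's MD040 rule ("fenced code blocks should have a language specified"). A minimal config illustrating the kind of rules involved; the rule selection here is an assumption for illustration, not the repository's actual markdownlint configuration:

```yaml
# Hypothetical .markdownlint.yaml -- the repository's real rule set may differ.
default: true   # enable all rules by default
MD040: true     # fenced code blocks must declare a language
MD013: false    # don't enforce line length in long documentation lines
```

A cleanup like this PR is typically applied with a command along the lines of `npx markdownlint-cli2 --fix "**/*.md"` (shown only as an illustration of the workflow).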

- vLLM version: v0.13.0
- vLLM main: bde38c11df

---------

Signed-off-by: root <root@LAPTOP-VQKDDVMG.localdomain>
Signed-off-by: MrZ20 <2609716663@qq.com>
Co-authored-by: root <root@LAPTOP-VQKDDVMG.localdomain>
Commit: 4811ba62e0 (parent 96edd4673f)
Author: SILONG ZENG, 2026-01-15 09:06:01 +08:00, committed by GitHub
75 changed files with 711 additions and 308 deletions


@@ -19,6 +19,7 @@ Refer to [feature guide](../user_guide/feature_guide/index.md) to get the featur
### Model Weight
Requires 1 Atlas 800I A2 (64G × 8) node or 1 Atlas 800 A3 (64G × 16) node:
- `Qwen2.5-VL-3B-Instruct`: [Download model weight](https://modelscope.cn/models/Qwen/Qwen2.5-VL-3B-Instruct)
- `Qwen2.5-VL-7B-Instruct`: [Download model weight](https://modelscope.cn/models/Qwen/Qwen2.5-VL-7B-Instruct)
- `Qwen2.5-VL-32B-Instruct`: [Download model weight](https://modelscope.cn/models/Qwen/Qwen2.5-VL-32B-Instruct)
@@ -188,7 +189,7 @@ print(generated_text)
If the script runs successfully, you will see output like the following:
-```
+```shell
**Visual Components:**
1. **Abstract Geometric Icon (Left Side):**
@@ -284,7 +285,7 @@ print(generated_text)
If the script runs successfully, you will see output like the following:
-```
+```shell
The image displays a logo and text related to the Qwen model, which is an artificial intelligence (AI) language model developed by Alibaba Cloud. Here is a detailed description of the elements in the image:
### **1. Logo:**
@@ -469,6 +470,7 @@ INFO 12-05 08:51:20 [loggers.py:127] Engine 000: Avg prompt throughput: 0.0 toke
### Using Language Model Evaluation Harness
The accuracy of some models is already within our CI monitoring scope, including:
- `Qwen2.5-VL-7B-Instruct`
- `Qwen3-VL-8B-Instruct`
@@ -547,6 +549,7 @@ lm_eval \
Refer to [vllm benchmark](https://docs.vllm.ai/en/latest/contributing/benchmarks.html) for more details.
There are three `vllm bench` subcommands:
- `latency`: Benchmark the latency of a single batch of requests.
- `serve`: Benchmark the online serving throughput.
- `throughput`: Benchmark offline inference throughput.
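The three subcommands above can be sketched as shell invocations. The model name and the minimal `--model` flag are illustrative assumptions, not taken from this document; the commands are commented out because they require vLLM installed and accelerator hardware:

```shell
# Illustrative sketch only: model name and flags are assumptions, not from this PR.
MODEL="Qwen/Qwen2.5-VL-7B-Instruct"

# Latency of a single batch of requests (offline):
# vllm bench latency --model "$MODEL"

# Online serving throughput (run against a server started with `vllm serve`):
# vllm bench serve --model "$MODEL"

# Offline inference throughput:
# vllm bench throughput --model "$MODEL"

echo "benchmark target: $MODEL"
```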