### What this PR does / why we need it?

1. This PR cherry-picks the commit containing the current best performance at 3.5k/1.5k and 128k/1k from main to the 0.18.0 branch.
2. This PR introduces MiniMax-M2.7 0day information to users.
3. To complete the previous step, it also renames the MiniMax doc from MiniMax-M2.5.md to MiniMax-M2.md.

---------

Signed-off-by: limuyuan <limuyuan3@huawei.com>
Co-authored-by: limuyuan <limuyuan3@huawei.com>
# Model Tutorials
This section provides tutorials for different models of vLLM Ascend.

:::{toctree}
:caption: Model Tutorials
:maxdepth: 1
Qwen2.5-Omni.md
Qwen2.5-7B.md
Qwen3-Dense.md
Qwen-VL-Dense.md
Qwen3-30B-A3B.md
Qwen3-235B-A22B.md
Qwen3-VL-30B-A3B-Instruct.md
Qwen3-VL-235B-A22B-Instruct.md
Qwen3-Coder-30B-A3B.md
Qwen3_embedding.md
Qwen3-VL-Embedding.md
Qwen3_reranker.md
Qwen3-VL-Reranker.md
Qwen3-8B-W4A8.md
Qwen3-32B-W4A4.md
Qwen3-Next.md
Qwen3-Omni-30B-A3B-Thinking.md
Qwen3.5-27B.md
Qwen3.5-397B-A17B.md
DeepSeek-V3.1.md
DeepSeek-V3.2.md
DeepSeek-R1.md
GLM4.x.md
GLM5.md
Kimi-K2-Thinking.md
Kimi-K2.5.md
PaddleOCR-VL.md
MiniMax-M2.md
:::