### What this PR does / why we need it?
Provides sample guidance for running long-sequence DeepSeek inference across
multiple nodes. A practical example is included to guide users in using the
context parallel feature.
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
- vLLM version: release/v0.13.0
- vLLM main: bc0a5a0c08
Signed-off-by: weiguihua2 <weiguihua2@huawei.com>
# Tutorials
:::{toctree}
:caption: Deployment
:maxdepth: 1
Qwen2.5-Omni.md
Qwen2.5-7B
Qwen3-Dense
Qwen-VL-Dense.md
Qwen3-30B-A3B.md
Qwen3-235B-A22B.md
Qwen3-VL-235B-A22B-Instruct.md
Qwen3-Coder-30B-A3B
Qwen3_embedding
Qwen3_reranker
Qwen3-8B-W4A8
Qwen3-32B-W4A4
Qwen3-Next
DeepSeek-V3.1.md
DeepSeek-V3.2.md
DeepSeek-R1.md
Kimi-K2-Thinking
pd_disaggregation_mooncake_single_node
pd_disaggregation_mooncake_multi_node
long_sequence_context_parallel_multi_node
ray
310p
:::