[Doc] Update doc (#3836)

### What this PR does / why we need it?

Update documentation. The diff shown below corrects the TPOT expansion (Time Per Output Token) in the EPLB/SwiftBalancer guide and fixes minor wording and spacing issues in its Critical Considerations section.

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0rc3
- vLLM main:
https://github.com/vllm-project/vllm/commit/releases/v0.11.1

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
Author: zhangxinyuehfad
Date: 2025-10-29 11:03:39 +08:00 (committed by GitHub)
Parent: 1e31b07fa7
Commit: 789ba4c5c2
47 changed files with 583 additions and 566 deletions

@@ -2,7 +2,7 @@
## Overview
-Expert balancing for MoE models in LLM serving is essential for optimal performance. Dynamically changing experts during inference can negatively impact TTFT (Time To First Token) and TPOT (Tokens Per Output Token) due to stop-the-world operations. SwiftBalancer enables asynchronous expert load balancing with zero-overhead expert movement, ensuring seamless service continuity.
+Expert balancing for MoE models in LLM serving is essential for optimal performance. Dynamically changing experts during inference can negatively impact TTFT (Time To First Token) and TPOT (Time Per Output Token) due to stop-the-world operations. SwiftBalancer enables asynchronous expert load balancing with zero-overhead expert movement, ensuring seamless service continuity.
## EPLB Effects
@@ -61,16 +61,16 @@ vllm serve Qwen/Qwen3-235B-A22 \
## Critical Considerations
1. Parameter Tuning:
- num_iterations_eplb_update: Higher values (e.g., 400+) for stable workloads; lower values (e.g., 100-200) for fluctuating traffic.
-  - num_wait_worker_iterations: Should be ≥30 to avoid premature balancing during startup.
+  - num_wait_worker_iterations: Should be ≥ 30 to avoid premature balancing during startup.
- init_redundancy_expert: Must match tensor-parallel size (e.g., 16 for 16 GPUs) to ensure sufficient redundancy.
2. Hardware Requirements:
-  - Ensure all GPUs have identical memory capacity and compute capabilities.
-  - Network bandwidth must support expert redistribution traffic (≥10Gbps recommended).
+  - Ensure that all GPUs have identical memory capacity and compute capabilities.
+  - Network bandwidth must support expert redistribution traffic (≥ 10 Gbps recommended).
3. Model Compatibility:
- Only MoE models with explicit expert parallelism support (e.g., Qwen3-235B-A22) are compatible.
-  - Verify model architecture supports dynamic expert routing via --enable-expert-parallel.
+  - Verify model architecture supports dynamic expert routing through --enable-expert-parallel.
4. Gating Configuration:
- When gate_eplb=true, validate that the gating mechanism can handle expert movement without routing errors.
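
Taken together, the parameters above attach to the `vllm serve Qwen/Qwen3-235B-A22 \` command referenced in the hunk headers. The sketch below shows one way they might be assembled: only `--tensor-parallel-size`, `--enable-expert-parallel`, and the parameter names and example values come from this section; passing the EPLB knobs as JSON through `--additional-config` is an assumption of mine and should be verified against the full EPLB guide.

```bash
# Hypothetical sketch of the serve command described in this doc section.
# Only --tensor-parallel-size and --enable-expert-parallel are flags named in
# the doc; routing the EPLB parameters through --additional-config is an assumption.
vllm serve Qwen/Qwen3-235B-A22 \
  --tensor-parallel-size 16 \
  --enable-expert-parallel \
  --additional-config '{
    "num_iterations_eplb_update": 400,
    "num_wait_worker_iterations": 30,
    "init_redundancy_expert": 16,
    "gate_eplb": true
  }'
```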
@@ -83,7 +83,7 @@ vllm serve Qwen/Qwen3-235B-A22 \
6. Startup Behavior:
- Initial requests may experience higher latency during the first balancing cycle (typically 1-2 minutes).
-  - Avoid sudden traffic spikes during warm-up phase.
+  - Avoid sudden traffic spikes during the warm-up phase.
7. Common Pitfalls:
- Incorrect tensor-parallel-size vs. actual GPU count → causes resource underutilization.
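
The tensor-parallel-size mismatch called out in the first pitfall can be caught before launch. Below is a minimal, hypothetical pre-flight check, not part of the original doc; the device-count query is an assumption and should be swapped for whatever tool reports accelerators on your platform (e.g. `npu-smi info` on Ascend).

```bash
# Hypothetical pre-flight check: warn if the intended --tensor-parallel-size
# does not match the number of visible accelerators (a common EPLB pitfall).
TP_SIZE=16  # the value you plan to pass as --tensor-parallel-size

# Device-count query is an assumption; substitute the correct tool for your hardware.
DEVICE_COUNT=$(python3 -c "import torch; print(torch.cuda.device_count())")

if [ "$TP_SIZE" -ne "$DEVICE_COUNT" ]; then
  echo "WARNING: tensor-parallel-size ($TP_SIZE) != visible devices ($DEVICE_COUNT)" >&2
fi
```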