### What this PR does / why we need it?
Fix configuration errors in our documentation.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
NA.
Signed-off-by: linfeng-yuan <1102311262@qq.com>
Optimize multi-node guide: make the correspondence between configuration
items and nodes clearer
### What this PR does / why we need it?
Some issues were caused by misunderstandings stemming from unclear guidance
content, for example: #3367
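To illustrate the kind of correspondence the guide now spells out, the sketch below splits the configuration items of a hypothetical two-node data-parallel deployment into values that must match on every node and values that are node-specific. The flag names, IP address, ports, and parallel sizes are assumptions for illustration only, not taken from the guide itself.

```python
# Illustrative only: which configuration items are shared across nodes and which
# are node-specific in a hypothetical two-node deployment. The IP, ports, flag
# set, and parallel sizes below are assumptions, not values from the guide.
nodes = {
    "node0 (head)": {
        "--data-parallel-size": 2,                # total DP size, same on every node
        "--data-parallel-size-local": 1,          # DP ranks hosted on this node
        "--data-parallel-address": "192.0.2.10",  # head-node IP, same on every node
        "--data-parallel-rpc-port": 13389,        # same on every node
        "--tensor-parallel-size": 8,              # same on every node
    },
    "node1 (worker)": {
        "--headless": True,                       # worker only: no API server
        "--data-parallel-size": 2,
        "--data-parallel-size-local": 1,
        "--data-parallel-start-rank": 1,          # worker only: first DP rank hosted here
        "--data-parallel-address": "192.0.2.10",
        "--data-parallel-rpc-port": 13389,
        "--tensor-parallel-size": 8,
    },
}

for node, opts in nodes.items():
    args = " ".join(k if v is True else f"{k} {v}" for k, v in opts.items())
    print(f"{node}: vllm serve <model> {args}")
```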
### Does this PR introduce _any_ user-facing change?
NA
### How was this patch tested?
NA
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0
Signed-off-by: leo-pony <nengjunma@outlook.com>
### What this PR does / why we need it?
Fix the wording of the msit code clone instructions
- vLLM version: v0.10.0
- vLLM main: afa5b7ca0b
---------
Signed-off-by: wangli <wangli858794774@gmail.com>
### What this PR does / why we need it?
In fact, the kimi-k2 model is similar to the deepseek model, and we only
need a few changes to support it. What this PR does:
1. Add kimi-k2-w8a8 deployment doc (an illustrative serving sketch follows this list)
2. Update quantization doc
3. Update torchair support list
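As a rough illustration of the kind of deployment the new doc covers, the sketch below loads a w8a8-quantized checkpoint through vLLM's offline `LLM` API. The local model path, the `quantization="ascend"` argument, and the parallel size are assumptions for illustration; the actual commands and values are in the deployment doc.

```python
# A minimal sketch, assuming a locally available kimi-k2 w8a8 checkpoint and the
# "ascend" quantization method registered by vllm-ascend; names are illustrative.
from vllm import LLM, SamplingParams

llm = LLM(
    model="/path/to/Kimi-K2-W8A8",  # hypothetical local checkpoint path
    quantization="ascend",          # assumption: Ascend w8a8 quantization backend
    tensor_parallel_size=16,        # assumption: adjust to the actual topology
    trust_remote_code=True,
)

outputs = llm.generate(
    ["Hello, my name is"],
    SamplingParams(temperature=0.6, max_tokens=32),
)
print(outputs[0].outputs[0].text)
```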
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
- vLLM version: v0.10.0
- vLLM main: 9edd1db02b
---------
Signed-off-by: wangli <wangli858794774@gmail.com>