xc-llm-ascend/docs/source/tutorials/index.md
Li Wang bf84f2dbfa [Doc] Support kimi-k2-w8a8 (#2162)
### What this PR does / why we need it?
The kimi-k2 model is similar to the deepseek model, so only a few changes are needed to support it. What this PR does:
1. Add a kimi-k2-w8a8 deployment doc (see the sketch after this list)
2. Update the quantization doc
3. Update the torchair support list
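
For orientation, a minimal sketch of what the kimi-k2-w8a8 deployment could look like. The model path and parallelism values are placeholders, and the `--quantization ascend` flag is an assumption carried over from vLLM Ascend's existing w8a8 tutorials, not something this PR specifies:

```bash
# Hypothetical example: serve a w8a8-quantized kimi-k2 checkpoint on Ascend
# NPUs. Values are placeholders; kimi-k2 is large enough that a real
# deployment typically spans multiple nodes (see the multi_node_kimi
# tutorial added by this PR).
vllm serve /path/to/Kimi-K2-w8a8 \
  --quantization ascend \
  --tensor-parallel-size 16 \
  --max-model-len 32768
```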
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?


- vLLM version: v0.10.0
- vLLM main: 9edd1db02b

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
2025-08-06 19:28:47 +08:00


# Tutorials

:::{toctree}
:caption: Deployment
:maxdepth: 1
single_npu
single_npu_multimodal
single_npu_audio
single_npu_qwen3_embedding
multi_npu
multi_npu_moge
multi_npu_qwen3_moe
multi_npu_quantization
single_node_300i
multi_node
multi_node_kimi
:::
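
Each name in the toctree above is a Sphinx document reference that resolves to a markdown file of the same name next to `index.md`; here is a sketch of the assumed layout (file names inferred from the entries, not verified against the repo):

```bash
# Assumed layout: every toctree entry maps to <entry>.md in this directory,
# so adding multi_node_kimi to the list wires the new page into the sidebar.
ls docs/source/tutorials/
# index.md  multi_node.md  multi_node_kimi.md  multi_npu.md  single_npu.md ...
```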