[Doc][Misc][v0.18.0] Updated the document configuration for DeepSeek-V3.2 (#7970)

### What this PR does / why we need it?

To avoid misleading users, this PR removes the unmaintained DeepSeek-V3.2 entries, such as the
BF16 (floating-point) model, from the documentation.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Documentation update only.

Signed-off-by: wyh145 <1987244901@qq.com>
Commit fbd5d0fd55 (parent 6c19270498), authored by Nagisa125, committed via GitHub on 2026-04-07 16:17:28 +08:00.


@@ -16,9 +16,7 @@ Refer to [feature guide](../../user_guide/feature_guide/index.md) to get the fea
### Model Weight
- `DeepSeek-V3.2-Exp`(BF16 version): require 2 Atlas 800 A3 (64G × 16) nodes or 4 Atlas 800 A2 (64G × 8) nodes. [Download model weight](https://modelers.cn/models/Modelers_Park/DeepSeek-V3.2-Exp-BF16)
- `DeepSeek-V3.2-Exp-W8A8`(Quantized version): require 1 Atlas 800 A3 (64G × 16) node or 2 Atlas 800 A2 (64G × 8) nodes. [Download model weight](https://www.modelscope.cn/models/vllm-ascend/DeepSeek-V3.2-Exp-W8A8)
- `DeepSeek-V3.2`(BF16 version): require 2 Atlas 800 A3 (64G × 16) nodes or 4 Atlas 800 A2 (64G × 8) nodes. The BF16 model weight is not currently available.
- `DeepSeek-V3.2-w8a8`(Quantized version): require 1 Atlas 800 A3 (64G × 16) node or 2 Atlas 800 A2 (64G × 8) nodes. [Download model weight](https://www.modelscope.cn/models/vllm-ascend/DeepSeek-V3.2-W8A8/)
It is recommended to download the model weight to the shared directory of multiple nodes, such as `/root/.cache/`.
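For reference, the recommended download into a node-shared cache directory can be sketched as below. This is a minimal example, not part of the patch: it assumes the `modelscope` CLI (installed via `pip install modelscope`) is available and that `/root/.cache/` is mounted on all nodes; the target subdirectory name is illustrative.

```shell
# Shared directory visible to every node (illustrative path).
WEIGHT_DIR=/root/.cache/DeepSeek-V3.2-Exp-W8A8

# Create the target directory if it does not exist yet.
mkdir -p "$WEIGHT_DIR"

# Fetch the quantized weights from ModelScope into the shared cache.
# Requires network access and the `modelscope` CLI on this node.
modelscope download \
    --model vllm-ascend/DeepSeek-V3.2-Exp-W8A8 \
    --local_dir "$WEIGHT_DIR"
```

Downloading once into the shared directory avoids pulling the multi-hundred-gigabyte weights separately on each node.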