From fbd5d0fd55f8e58430e0eef8e606a082e2bc3dad Mon Sep 17 00:00:00 2001
From: Nagisa125 <166619298+Nagisa125@users.noreply.github.com>
Date: Tue, 7 Apr 2026 16:17:28 +0800
Subject: [PATCH] [Doc][Misc][v0.18.0] Updated the document configuration for
 DeepSeek-V3.2 (#7970)

### What this PR does / why we need it?

To avoid misleading users, this PR removes the entries for the unmaintained BF16 versions of the DeepSeek-V3.2 models from the documentation.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Documentation update only.

Signed-off-by: wyh145 <1987244901@qq.com>
---
 docs/source/tutorials/models/DeepSeek-V3.2.md | 2 --
 1 file changed, 2 deletions(-)

diff --git a/docs/source/tutorials/models/DeepSeek-V3.2.md b/docs/source/tutorials/models/DeepSeek-V3.2.md
index 18817647..cc8ed00d 100644
--- a/docs/source/tutorials/models/DeepSeek-V3.2.md
+++ b/docs/source/tutorials/models/DeepSeek-V3.2.md
@@ -16,9 +16,7 @@ Refer to [feature guide](../../user_guide/feature_guide/index.md) to get the fea
 
 ### Model Weight
 
-- `DeepSeek-V3.2-Exp`(BF16 version): require 2 Atlas 800 A3 (64G × 16) nodes or 4 Atlas 800 A2 (64G × 8) nodes. [Download model weight](https://modelers.cn/models/Modelers_Park/DeepSeek-V3.2-Exp-BF16)
 - `DeepSeek-V3.2-Exp-W8A8`(Quantized version): require 1 Atlas 800 A3 (64G × 16) node or 2 Atlas 800 A2 (64G × 8) nodes. [Download model weight](https://www.modelscope.cn/models/vllm-ascend/DeepSeek-V3.2-Exp-W8A8)
-- `DeepSeek-V3.2`(BF16 version): require 2 Atlas 800 A3 (64G × 16) nodes or 4 Atlas 800 A2 (64G × 8) nodes. Model weight in BF16 not found now.
 - `DeepSeek-V3.2-w8a8`(Quantized version): require 1 Atlas 800 A3 (64G × 16) node or 2 Atlas 800 A2 (64G × 8) nodes. [Download model weight](https://www.modelscope.cn/models/vllm-ascend/DeepSeek-V3.2-W8A8/)
 It is recommended to download the model weight to the shared directory of multiple nodes, such as `/root/.cache/`.