[Doc] Fix DeepSeek-V3.2-Exp doc, remove outdated v0.11.0rc0 info. (#4095)

### What this PR does / why we need it?
Fix the DeepSeek-V3.2-Exp doc and remove outdated v0.11.0rc0 info.

- vLLM version: v0.11.0
- vLLM main: 83f478bb19

---------

Signed-off-by: menogrey <1299267905@qq.com>
Author: zhangyiming
Committed: 2025-11-12 09:11:31 +08:00 by GitHub
Parent: 638dbcdb32
Commit: c9e5b90f53
4 changed files with 48 additions and 31 deletions


@@ -305,7 +305,10 @@ First, check physical layer connectivity, then verify each node, and finally ver
Execute the following commands on each node in sequence. The results must all be `success` and the status must be `UP`:
:::::{tab-set}
:sync-group: multi-node
::::{tab-item} A2 series
:sync: A2
```bash
# Check the remote switch ports
@@ -324,6 +327,7 @@ Execute the following commands on each node in sequence. The results must all be
::::
::::{tab-item} A3 series
:sync: A3
```bash
# Check the remote switch ports
@@ -346,7 +350,10 @@ Execute the following commands on each node in sequence. The results must all be
#### Interconnect Verification:
##### 1. Get NPU IP Addresses
:::::{tab-set}
:sync-group: multi-node
::::{tab-item} A2 series
:sync: A2
```bash
for i in {0..7}; do hccn_tool -i $i -ip -g | grep ipaddr; done
@@ -354,6 +361,7 @@ for i in {0..7}; do hccn_tool -i $i -ip -g | grep ipaddr; done
::::
::::{tab-item} A3 series
:sync: A3
```bash
for i in {0..15}; do hccn_tool -i $i -ip -g | grep ipaddr; done
@@ -376,7 +384,10 @@ Using vLLM-ascend official container is more efficient to run multi-node environ
Run the following command to start the container on each node (you should download the weights to /root/.cache in advance):
:::::{tab-set}
:sync-group: multi-node
::::{tab-item} A2 series
:sync: A2
```{code-block} bash
:substitutions:
@@ -417,6 +428,7 @@ docker run --rm \
::::
::::{tab-item} A3 series
:sync: A3
```{code-block} bash
:substitutions:
@@ -465,7 +477,3 @@ docker run --rm \
::::
:::::
### Verify installation
TODO
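The verification steps above are still marked TODO; until they are written, one minimal sanity check is to confirm the key packages are importable inside each node's container. This is only a sketch, not the project's official procedure, and the package names `vllm` and `vllm_ascend` are the conventional ones rather than something this doc specifies:

```python
# Sketch: post-install sanity check (not the official verification procedure).
# The package names below are assumptions; adjust them to your environment.
import importlib.util

def found(name: str) -> bool:
    """Return True if Python can locate a top-level package with this name."""
    return importlib.util.find_spec(name) is not None

for pkg in ("vllm", "vllm_ascend"):
    print(pkg, "found" if found(pkg) else "missing")
```

Run it on every node; if either package is reported missing, revisit the installation steps above.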


@@ -6,8 +6,6 @@ DeepSeek-V3.2-Exp is a sparse attention model. The main architecture is similar
This document will show the main verification steps of the model, including supported features, feature configuration, environment preparation, single-node and multi-node deployment, accuracy and performance evaluation.
The `DeepSeek-V3.2-Exp` model was first supported in `vllm-ascend:v0.11.0rc0`.
## Supported Features
Refer to [supported features](../user_guide/support_matrix/supported_models.md) to get the model's supported feature matrix.
@@ -29,28 +27,17 @@ If you want to deploy multi-node environment, you need to verify multi-node comm
### Installation
:::::{tab-set}
::::{tab-item} Use deepseek-v3.2 docker image
In the `vllm-ascend:v0.11.0rc0` release, we provide the all-in-one images `quay.io/ascend/vllm-ascend:v0.11.0rc0-deepseek-v3.2-exp` (for Atlas 800 A2) and `quay.io/ascend/vllm-ascend:v0.11.0rc0-a3-deepseek-v3.2-exp` (for Atlas 800 A3).
Refer to [using docker](../installation.md#set-up-using-docker) to set up the environment using Docker; remember to replace the image with the deepseek-v3.2 docker image.
:::{note}
- The image is based on the fixed `vllm-ascend:v0.11.0rc0` release and new versions of it will not be published. Switch to the `Use vllm-ascend docker image` tab for the latest deepseek-v3.2 support on vllm-ascend.
- Only the AArch64 architecture is currently supported due to the extra operator's installation limitations.
:::
::::
::::{tab-item} Use vllm-ascend docker image
You can use our official docker image and install the extra operator to support `DeepSeek-V3.2-Exp`.
:::{note}
Only the AArch64 architecture is currently supported due to the extra operator's installation limitations.
:::
For `A3` image:
:::::{tab-set}
:sync-group: install
::::{tab-item} A3 series
:sync: A3
1. Start a container from the docker image on your node, refer to [using docker](../installation.md#set-up-using-docker).
@@ -66,7 +53,9 @@ wget https://vllm-ascend.obs.cn-north-4.myhuaweicloud.com/vllm-ascend/a3/custom_
pip install custom_ops-1.0-cp311-cp311-linux_aarch64.whl
```
For `A2` image:
::::
::::{tab-item} A2 series
:sync: A2
1. Start a container from the docker image on your node, refer to [using docker](../installation.md#set-up-using-docker).
@@ -82,17 +71,15 @@ wget https://vllm-ascend.obs.cn-north-4.myhuaweicloud.com/vllm-ascend/a2/custom_
pip install custom_ops-1.0-cp311-cp311-linux_aarch64.whl
```
::::
::::{tab-item} Build from source
You can build everything from source.
- Install `vllm-ascend`, refer to [set up using python](../installation.md#set-up-using-python).
- Install the extra operator supporting `DeepSeek-V3.2-Exp`, refer to the `Use vllm-ascend docker image` tab.
::::
:::::
In addition, if you don't want to use the docker images above, you can also build everything from source:
- Install `vllm-ascend` from source, refer to [installation](../installation.md).
- Install the extra operator supporting `DeepSeek-V3.2-Exp`, refer to the tab above.
If you want to deploy a multi-node environment, you need to set up the environment on each node.
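After installing the `custom_ops` wheel on a node, you can confirm that it registered with pip by querying its distribution metadata. A minimal sketch, assuming the distribution name matches the wheel filename above (`custom_ops`); adjust the name if your wheel differs:

```python
# Sketch: look up an installed distribution's version by name.
# "custom_ops" is taken from the wheel filename above; this is an
# assumption, not an officially documented check.
from importlib.metadata import PackageNotFoundError, version

def dist_version(name: str):
    """Return the installed version string, or None if not installed."""
    try:
        return version(name)
    except PackageNotFoundError:
        return None

print("custom_ops:", dist_version("custom_ops"))
```

A `None` result means the wheel did not install into the interpreter you are checking.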
## Deployment
@@ -130,7 +117,10 @@ vllm serve vllm-ascend/DeepSeek-V3.2-Exp-W8A8 \
- `DeepSeek-V3.2-Exp-w8a8`: requires 2 Atlas 800 A2 nodes (64 GB × 8).
:::::{tab-set}
:sync-group: install
::::{tab-item} DeepSeek-V3.2-Exp A3 series
:sync: A3
Run the following scripts on the two nodes respectively.
@@ -219,6 +209,7 @@ vllm serve /root/.cache/Modelers_Park/DeepSeek-V3.2-Exp \
::::
::::{tab-item} DeepSeek-V3.2-Exp-W8A8 A2 series
:sync: A2
Run the following scripts on the two nodes respectively.


@@ -96,8 +96,10 @@ We can run the following scripts to launch a server on the prefiller/decoder nod
### Layerwise
:::::{tab-set}
:sync-group: nodes
::::{tab-item} Prefiller node 1
:sync: prefill node1
```shell
unset ftp_proxy
@@ -152,6 +154,7 @@ vllm serve /model/Qwen3-235B-A22B-W8A8 \
::::
::::{tab-item} Prefiller node 2
:sync: prefill node2
```shell
unset ftp_proxy
@@ -206,6 +209,7 @@ vllm serve /model/Qwen3-235B-A22B-W8A8 \
::::
::::{tab-item} Decoder node 1 (master node)
:sync: decoder node1
```shell
unset ftp_proxy
@@ -262,6 +266,7 @@ vllm serve /model/Qwen3-235B-A22B-W8A8 \
::::
::::{tab-item} Decoder node 2 (primary node)
:sync: decoder node2
```shell
unset ftp_proxy
@@ -323,8 +328,10 @@ vllm serve /model/Qwen3-235B-A22B-W8A8 \
### Non-layerwise
:::::{tab-set}
:sync-group: nodes
::::{tab-item} Prefiller node 1
:sync: prefill node1
```shell
unset ftp_proxy
@@ -379,6 +386,7 @@ vllm serve /model/Qwen3-235B-A22B-W8A8 \
::::
::::{tab-item} Prefiller node 2
:sync: prefill node2
```shell
unset ftp_proxy
@@ -433,6 +441,7 @@ vllm serve /model/Qwen3-235B-A22B-W8A8 \
::::
::::{tab-item} Decoder node 1 (master node)
:sync: decoder node1
```shell
unset ftp_proxy
@@ -489,6 +498,7 @@ vllm serve /model/Qwen3-235B-A22B-W8A8 \
::::
::::{tab-item} Decoder node 2 (primary node)
:sync: decoder node2
```shell
unset ftp_proxy


@@ -44,7 +44,10 @@ export PYTORCH_NPU_ALLOC_CONF=max_split_size_mb:256
Run the following script to execute offline inference on a single NPU:
:::::{tab-set}
:sync-group: inference
::::{tab-item} Graph Mode
:sync: graph mode
```{code-block} python
:substitutions:
@@ -71,6 +74,7 @@ for output in outputs:
::::
::::{tab-item} Eager Mode
:sync: eager mode
```{code-block} python
:substitutions:
@@ -110,7 +114,10 @@ Prompt: 'The future of AI is', Generated text: ' following you. As the technolog
Run the docker container to start the vLLM server on a single NPU:
:::::{tab-set}
:sync-group: inference
::::{tab-item} Graph Mode
:sync: graph mode
```{code-block} bash
:substitutions:
@@ -139,6 +146,7 @@ vllm serve Qwen/Qwen3-8B --max_model_len 26240
::::
::::{tab-item} Eager Mode
:sync: eager mode
```{code-block} bash
:substitutions: