Included multi-node DeepSeekv3 example (#2707)
@@ -69,6 +69,55 @@ python -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V3 --tp 16 --di
If you have two H100 nodes, the usage is similar to that of the two H20 nodes described above.
### Example: serving with Docker on two H200*8 nodes
There are two H200 nodes, each with 8 GPUs. The first node's IP is `192.168.114.10`, and the second node's IP is `192.168.114.11`. Expose the endpoint to other Docker containers with `--host 0.0.0.0` and `--port 40000`, and set up inter-node communication with `--dist-init-addr 192.168.114.10:20000`.
A single H200 node with 8 GPUs can already run DeepSeek V3; the dual-H200 setup here only demonstrates multi-node usage.
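With `--tp 16` spread over `--nnodes 2`, each node hosts 16 / 2 = 8 tensor-parallel shards, and `--node-rank` decides which global ranks land on which machine. A minimal sketch of that arithmetic (illustrative only, not SGLang's actual placement code):

```python
# Illustrative sketch of how tensor-parallel ranks map onto nodes.
# Mirrors the launch flags (--tp 16 --nnodes 2 --node-rank N);
# this is NOT SGLang's internal placement logic.
tp_size = 16   # --tp
nnodes = 2     # --nnodes
gpus_per_node = tp_size // nnodes  # 8 GPUs on each H200 node

for node_rank in range(nnodes):
    first = node_rank * gpus_per_node
    ranks = list(range(first, first + gpus_per_node))
    print(f"node-rank {node_rank}: global TP ranks {ranks[0]}..{ranks[-1]}")
```

This is why node 1 launches with `--node-rank 0` and node 2 with `--node-rank 1`: together they cover all 16 tensor-parallel ranks.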
```bash
# node 1
docker run --gpus all \
    --shm-size 32g \
    --network=host \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --name sglang_multinode1 \
    -it \
    --rm \
    --env "HF_TOKEN=$HF_TOKEN" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V3 --tp 16 --dist-init-addr 192.168.114.10:20000 --nnodes 2 --node-rank 0 --trust-remote-code --host 0.0.0.0 --port 40000
```
```bash
# node 2
docker run --gpus all \
    --shm-size 32g \
    --network=host \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --name sglang_multinode2 \
    -it \
    --rm \
    --env "HF_TOKEN=$HF_TOKEN" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V3 --tp 16 --dist-init-addr 192.168.114.10:20000 --nnodes 2 --node-rank 1 --trust-remote-code --host 0.0.0.0 --port 40000
```
To verify that the deployment works, run a benchmark from a client Docker container.
```bash
docker run --gpus all \
    --shm-size 32g \
    --network=host \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --name sglang_multinode_client \
    -it \
    --rm \
    --env "HF_TOKEN=$HF_TOKEN" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.bench_serving --backend sglang --dataset-name random --random-input 1 --random-output 512 --random-range-ratio 1 --num-prompts 1 --host 0.0.0.0 --port 40000 --output-file "deepseekv3_multinode.jsonl"
```
## DeepSeek V3 Optimization Plan
https://github.com/sgl-project/sglang/issues/2591