From 098d659c0e809a6cb4a6a0792cbbf159db011c10 Mon Sep 17 00:00:00 2001
From: Yineng Zhang
Date: Mon, 30 Dec 2024 13:33:29 +0800
Subject: [PATCH] docs: update README (#2651)

---
 benchmark/deepseek_v3/README.md                  | 9 ++++++++-
 docs/developer/development_guide_using_docker.md | 2 ++
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/benchmark/deepseek_v3/README.md b/benchmark/deepseek_v3/README.md
index 0343de33b..8bd8fe974 100644
--- a/benchmark/deepseek_v3/README.md
+++ b/benchmark/deepseek_v3/README.md
@@ -7,7 +7,7 @@ Special thanks to Meituan's Search & Recommend Platform Team and Baseten's Model
 ## Hardware Recommendation
 - 8 x NVIDIA H200 GPUs
 
-If you do not have GPUs with large enough memory, please try multi-node tensor parallelism ([help 1](https://github.com/sgl-project/sglang/blob/637de9e8ce91fd3e92755eb2a842860925954ab1/docs/backend/backend.md?plain=1#L88-L95) [help 2](https://github.com/sgl-project/sglang/blob/637de9e8ce91fd3e92755eb2a842860925954ab1/docs/backend/backend.md?plain=1#L152-L168)).
+If you do not have GPUs with large enough memory, please try multi-node tensor parallelism ([help 1](https://github.com/sgl-project/sglang/blob/637de9e8ce91fd3e92755eb2a842860925954ab1/docs/backend/backend.md?plain=1#L88-L95) [help 2](https://github.com/sgl-project/sglang/blob/637de9e8ce91fd3e92755eb2a842860925954ab1/docs/backend/backend.md?plain=1#L152-L168)). Here is an example serving with [2 H20 node](https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3#example-serving-with-2-h208)
 
 ## Installation & Launch
@@ -15,6 +15,11 @@ If you encounter errors when starting the server, ensure the weights have finish
 
 ### Using Docker (Recommended)
 ```bash
+# Pull latest image
+# https://hub.docker.com/r/lmsysorg/sglang/tags
+docker pull lmsysorg/sglang:latest
+
+# Launch
 docker run --gpus all --shm-size 32g -p 30000:30000 -v ~/.cache/huggingface:/root/.cache/huggingface --ipc=host lmsysorg/sglang:latest \
     python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-V3 --tp 8 --trust-remote-code --port 30000
 ```
@@ -62,6 +67,8 @@ GLOO_SOCKET_IFNAME=eth0 python -m sglang.launch_server --model-path deepseek-ai/
 GLOO_SOCKET_IFNAME=eth0 python -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V3 --tp 16 --nccl-init 10.0.0.1:5000 --nnodes 2 --node-rank 1 --trust-remote-code
 ```
 
+If you have two H100 nodes, the usage is similar to the aforementioned H20.
+
 ## DeepSeek V3 Optimization Plan
 
 https://github.com/sgl-project/sglang/issues/2591
diff --git a/docs/developer/development_guide_using_docker.md b/docs/developer/development_guide_using_docker.md
index c6990f780..918057d0e 100644
--- a/docs/developer/development_guide_using_docker.md
+++ b/docs/developer/development_guide_using_docker.md
@@ -14,6 +14,8 @@ tar xf vscode_cli_alpine_x64_cli.tar.gz
 
 ## Setup Docker Container
 
+The following startup command is an example for internal development by the SGLang team. You can **modify or add directory mappings as needed**, especially for model weight downloads, to prevent repeated downloads by different Docker containers.
+
 ### H100
 
 ```bash
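
The README hunk above shows one launch command per node for 2-node tensor parallelism, differing only in `--node-rank`. As a minimal sketch of that pattern (the `launch_cmd` helper is hypothetical; the model name, `--tp 16`, and the `10.0.0.1:5000` rendezvous address are copied verbatim from the diff), the per-node commands can be generated rather than hand-edited:

```shell
# Sketch: build the SGLang multi-node launch command for a given node rank.
# All flags mirror the README's 2-node example; only --node-rank varies.
MODEL="deepseek-ai/DeepSeek-V3"
INIT_ADDR="10.0.0.1:5000"

launch_cmd() {
  # $1 = node rank (0 on the head node, 1 on the second node)
  echo "GLOO_SOCKET_IFNAME=eth0 python -m sglang.launch_server --model-path $MODEL --tp 16 --nccl-init $INIT_ADDR --nnodes 2 --node-rank $1 --trust-remote-code"
}

# Print the command to run on each node (echoed here, not executed).
launch_cmd 0
launch_cmd 1
```

Run `eval "$(launch_cmd 0)"` on the head node and `eval "$(launch_cmd 1)"` on the other; both nodes must reach the rendezvous address on the chosen port.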