update toc for doc and dockerfile code style format (#6450)
Co-authored-by: Chayenne <zhaochen20@outlook.com>
docs/references/developer.rst
Developer Reference
==========================

.. toctree::
   :maxdepth: 1

   development_guide_using_docker.md
   release_process.md
   setup_github_runner.md
docs/references/development_guide_using_docker.md
# Development Guide Using Docker

## Setup VSCode on a Remote Host

(Optional: you can skip this step if you plan to run the sglang dev container locally.)

1. On the remote host, download `code` from [https://code.visualstudio.com/download](https://code.visualstudio.com/download) and run `code tunnel` in a shell.

   Example:

   ```bash
   wget https://vscode.download.prss.microsoft.com/dbazure/download/stable/fabdb6a30b49f79a7aba0f2ad9df9b399473380f/vscode_cli_alpine_x64_cli.tar.gz
   tar xf vscode_cli_alpine_x64_cli.tar.gz

   # https://code.visualstudio.com/docs/remote/tunnels
   ./code tunnel
   ```

2. On your local machine, press F1 in VSCode and choose "Remote Tunnels: Connect to Tunnel".

## Setup Docker Container

### Option 1. Use the default dev container automatically from VSCode

There is a `.devcontainer` folder in the sglang repository root that allows VSCode to start up automatically inside a dev container. You can read more about this VSCode extension in the official documentation: [Developing inside a Container](https://code.visualstudio.com/docs/devcontainers/containers).



(*Figure 1: Diagram from the VSCode official documentation, [Developing inside a Container](https://code.visualstudio.com/docs/devcontainers/containers).*)

To enable this, you only need to:

1. Start Visual Studio Code and install the [VSCode dev container extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers).
2. Press F1, then type and choose "Dev Containers: Open Folder in Container...".
3. Enter your local `sglang` repository path and press Enter.
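For orientation, the dev container is described by a `devcontainer.json` file inside the `.devcontainer` folder. The sketch below is only a hypothetical minimal example; the image name, run arguments, and extension list are assumptions for illustration, and the real configuration is the one shipped in the sglang repository:

```json
{
    // Hypothetical sketch; see the .devcontainer folder in the repo for the real file.
    "name": "sglang-dev",
    "image": "lmsysorg/sglang:dev",
    "runArgs": ["--gpus", "all", "--ipc=host", "--shm-size=32g"],
    "customizations": {
        "vscode": {
            "extensions": ["ms-python.python"]
        }
    }
}
```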
The first time you open the folder in the dev container, it might take longer due to the Docker pull and build. Once it succeeds, you should see an indicator on the status bar at the bottom left showing that you are in a dev container:



Now when you run `sglang.launch_server` in the VSCode terminal or start debugging using F5, the sglang server will be started in the dev container with all your local changes applied automatically:


### Option 2. Start up containers manually (advanced)

The following startup command is an example for internal development by the SGLang team. You can **modify or add directory mappings as needed**, especially for model weight downloads, to prevent repeated downloads by different Docker containers.

❗️ **Note on RDMA**

1. `--network host` and `--privileged` are required by RDMA. If you don't need RDMA, you can remove them, but keeping them does no harm. We therefore enable these two flags by default in the commands below.
2. You may need to set `NCCL_IB_GID_INDEX` if you are using RoCE, for example: `export NCCL_IB_GID_INDEX=3`.
```bash
# Change the container name to yours
docker run -itd --shm-size 32g --gpus all -v <volumes-to-mount> --ipc=host --network=host --privileged --name sglang_dev lmsysorg/sglang:dev /bin/zsh
docker exec -it sglang_dev /bin/zsh
```

Some useful volumes to mount are:

1. **Hugging Face model cache**: mounting the model cache avoids re-downloading models every time the container restarts. The default location on Linux is `~/.cache/huggingface/`.
2. **SGLang repository**: code changes in the local SGLang repository will be automatically synced to the dev container.

Example 1: Mounting the local cache folder `/opt/dlami/nvme/.cache` but not the SGLang repo. Use this when you prefer to manually transfer local code changes to the dev container.

```bash
docker run -itd --shm-size 32g --gpus all -v /opt/dlami/nvme/.cache:/root/.cache --ipc=host --network=host --privileged --name sglang_zhyncs lmsysorg/sglang:dev /bin/zsh
docker exec -it sglang_zhyncs /bin/zsh
```

Example 2: Mounting both the Hugging Face cache and the local SGLang repo. Local code changes are automatically synced to the dev container because SGLang is installed in editable mode in the dev image.

```bash
docker run -itd --shm-size 32g --gpus all -v $HOME/.cache/huggingface/:/root/.cache/huggingface -v $HOME/src/sglang:/sgl-workspace/sglang --ipc=host --network=host --privileged --name sglang_zhyncs lmsysorg/sglang:dev /bin/zsh
docker exec -it sglang_zhyncs /bin/zsh
```
## Debug SGLang with VSCode Debugger

1. Open `launch.json` in VSCode (create it if it does not exist).
2. Add the following configuration and save. Note that you can edit the config as needed to apply different parameters or debug a different program (e.g., a benchmark script).

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python Debugger: launch_server",
            "type": "debugpy",
            "request": "launch",
            "module": "sglang.launch_server",
            "console": "integratedTerminal",
            "args": [
                "--model-path", "meta-llama/Llama-3.2-1B",
                "--host", "0.0.0.0",
                "--port", "30000",
                "--trust-remote-code"
            ],
            "justMyCode": false
        }
    ]
}
```

3. Press F5 to start. The VSCode debugger ensures that the program pauses at breakpoints even when it is running on a remote SSH/Tunnel host inside a dev container.
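Besides launching the server from VSCode, the debugger can also attach to a server that is already running under debugpy (for example, one started with `python3 -m debugpy --listen 5678 -m sglang.launch_server ...`). The configuration below is a hedged sketch; the port `5678` is an arbitrary choice that must match the `--listen` argument:

```json
{
    "name": "Python Debugger: attach to server",
    "type": "debugpy",
    "request": "attach",
    "connect": {
        "host": "localhost",
        "port": 5678
    },
    "justMyCode": false
}
```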
## Profile

```bash
# Adjust batch size, input, and output as needed; add `--disable-cuda-graph` for easier analysis
# e.g., DeepSeek V3
nsys profile -o deepseek_v3 python3 -m sglang.bench_one_batch --batch-size 1 --input 128 --output 256 --model deepseek-ai/DeepSeek-V3 --trust-remote-code --tp 8 --disable-cuda-graph
```

## Evaluation

```bash
# e.g., gsm8k 8-shot
python3 benchmark/gsm8k/bench_sglang.py --num-questions 2000 --parallel 2000 --num-shots 8
```
docs/references/release_process.md
# PyPI Package Release Process

## Update the version in code

Update the package version in `python/pyproject.toml` and `python/sglang/__init__.py`.
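Since the two files must stay in sync, a quick consistency check can help before uploading. The snippet below is a hypothetical helper, not part of the repository; it assumes the usual `version = "..."` and `__version__ = "..."` assignment formats:

```python
import re

def extract_version(text: str) -> str:
    """Pull the first quoted version assignment out of a file's contents."""
    match = re.search(r'version\S*\s*=\s*["\']([^"\']+)["\']', text)
    if match is None:
        raise ValueError("no version assignment found")
    return match.group(1)

# Illustrative contents; in practice read python/pyproject.toml and
# python/sglang/__init__.py from disk instead.
pyproject_snippet = 'version = "0.4.6.post5"'
init_snippet = '__version__ = "0.4.6.post5"'

assert extract_version(pyproject_snippet) == extract_version(init_snippet)
print("versions agree:", extract_version(pyproject_snippet))
```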
## Upload the PyPI package

```bash
pip install build twine
```

```bash
cd python
bash upload_pypi.sh
```

## Make a release in GitHub

Make a new release at https://github.com/sgl-project/sglang/releases/new.
docs/references/setup_github_runner.md
# Set Up Self-Hosted Runners for GitHub Actions

## Add a Runner

### Step 1: Start a Docker container

You can mount a folder for the shared Hugging Face model weights cache. The command below uses `/tmp/huggingface` as an example.

```bash
docker pull nvidia/cuda:12.1.1-devel-ubuntu22.04
# NVIDIA
docker run --shm-size 128g -it -v /tmp/huggingface:/hf_home --gpus all nvidia/cuda:12.1.1-devel-ubuntu22.04 /bin/bash
# AMD
docker run --rm --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.4.6.post5-rocm630 /bin/bash
# AMD, using just the last 2 GPUs
docker run --rm --device=/dev/kfd --device=/dev/dri/renderD176 --device=/dev/dri/renderD184 --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.4.6.post5-rocm630 /bin/bash
```

### Step 2: Configure the runner with `config.sh`

Run these commands inside the container.

```bash
apt update && apt install -y curl python3-pip git
export RUNNER_ALLOW_RUNASROOT=1
```

Then follow https://github.com/sgl-project/sglang/settings/actions/runners/new?arch=x64&os=linux to run `config.sh`.

**Notes**

- You do not need to specify a runner group.
- Give the runner a name (e.g., `test-sgl-gpu-0`) and some labels (e.g., `1-gpu-runner`). The labels can be edited later in GitHub settings.
- You do not need to change the work folder.

### Step 3: Run the runner with `run.sh`

- Set up environment variables:

```bash
export HF_HOME=/hf_home
export SGLANG_IS_IN_CI=true
export HF_TOKEN=hf_xxx
export OPENAI_API_KEY=sk-xxx
export CUDA_VISIBLE_DEVICES=0
```

- Run it forever:

```bash
while true; do ./run.sh; echo "Restarting..."; sleep 2; done
```
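If the runner should also survive host reboots, the restart loop above can be expressed as a systemd service instead. This is a hypothetical sketch: the `/etc/systemd/system` unit path is standard, but the `/opt/actions-runner` directory and the `runner` user are assumptions, so adjust them to wherever you configured the runner:

```ini
# /etc/systemd/system/github-runner.service (hypothetical unit)
[Unit]
Description=GitHub Actions self-hosted runner
After=network.target

[Service]
User=runner
WorkingDirectory=/opt/actions-runner
Environment=HF_HOME=/hf_home
Environment=SGLANG_IS_IN_CI=true
ExecStart=/opt/actions-runner/run.sh
Restart=always
RestartSec=2

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now github-runner`.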