Fix warnings in doc build (#1852)
@@ -1,7 +1,7 @@
 # Backend: SGLang Runtime (SRT)
 The SGLang Runtime (SRT) is an efficient serving engine.

-### Quick Start
+## Quick Start
 Launch a server
 ```
 python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --port 30000
@@ -22,7 +22,7 @@ curl http://localhost:30000/generate \

 Learn more about the argument specification, streaming, and multi-modal support [here](https://sgl-project.github.io/sampling_params.html).

-### OpenAI Compatible API
+## OpenAI Compatible API
 In addition, the server supports OpenAI-compatible APIs.

 ```python
@@ -61,7 +61,7 @@ print(response)

 It supports streaming, vision, and almost all features of the Chat/Completions/Models/Batch endpoints specified by the [OpenAI API Reference](https://platform.openai.com/docs/api-reference/).

-### Additional Server Arguments
+## Additional Server Arguments
 - To enable multi-GPU tensor parallelism, add `--tp 2`. If it reports the error "peer access is not supported between these two devices", add `--enable-p2p-check` to the server launch command.
 ```
 python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --tp 2
@@ -94,7 +94,7 @@ python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct
 python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --tp 4 --nccl-init sgl-dev-0:50000 --nnodes 2 --node-rank 1
 ```

-### Engine Without HTTP Server
+## Engine Without HTTP Server

 We also provide an inference engine **without an HTTP server**. For example,
@@ -123,7 +123,7 @@ if __name__ == "__main__":
 This can be used for offline batch inference and building custom servers.
 You can view the full example [here](https://github.com/sgl-project/sglang/tree/main/examples/runtime/engine).

-### Supported Models
+## Supported Models

 **Generative Models**
 - Llama / Llama 2 / Llama 3 / Llama 3.1
@@ -162,7 +162,7 @@ You can view the full example [here](https://github.com/sgl-project/sglang/tree/

 Instructions for supporting a new model are [here](https://sgl-project.github.io/model_support.html).

-#### Use Models From ModelScope
+### Use Models From ModelScope
 <details>
 <summary>More</summary>
@@ -188,7 +188,7 @@ docker run --gpus all \

 </details>

-#### Run Llama 3.1 405B
+### Run Llama 3.1 405B
 <details>
 <summary>More</summary>
@@ -206,7 +206,7 @@ GLOO_SOCKET_IFNAME=eth0 python3 -m sglang.launch_server --model-path meta-llama/

 </details>

-### Benchmark Performance
+## Benchmark Performance

 - Benchmark a single static batch by running the following command without launching a server. The arguments are the same as for `launch_server.py`.
   Note that this is not a dynamic batching server, so it may run out of memory for a batch size that a real server can handle.
@@ -1,10 +1,10 @@
 # Frontend: Structured Generation Language (SGLang)
 The frontend language can be used with local models or API models. It is an alternative to the OpenAI API. You may find it easier to use for complex prompting workflows.

-### Quick Start
+## Quick Start
 The example below shows how to use SGLang to answer a multi-turn question.

-#### Using Local Models
+### Using Local Models
 First, launch a server with
 ```
 python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --port 30000
@@ -36,7 +36,7 @@ for m in state.messages():
 print(state["answer_1"])
 ```

-#### Using OpenAI Models
+### Using OpenAI Models
 Set the OpenAI API Key
 ```
 export OPENAI_API_KEY=sk-******
@@ -67,11 +67,11 @@ for m in state.messages():
 print(state["answer_1"])
 ```

-#### More Examples
+### More Examples
 Anthropic and VertexAI (Gemini) models are also supported.
 You can find more examples at [examples/quick_start](https://github.com/sgl-project/sglang/tree/main/examples/frontend_language/quick_start).

-### Language Feature
+## Language Feature
 To begin with, import sglang.
 ```python
 import sglang as sgl
@@ -84,7 +84,7 @@ The system will manage the state, chat template, parallelism and batching for yo

 The complete code for the examples below can be found at [readme_examples.py](https://github.com/sgl-project/sglang/blob/main/examples/frontend_language/usage/readme_examples.py)

-#### Control Flow
+### Control Flow
 You can use any Python code within the function body, including control flow, nested function calls, and external libraries.

 ```python
@@ -99,7 +99,7 @@ def tool_use(s, question):
 s += "The key word to search is" + sgl.gen("word")
 ```

-#### Parallelism
+### Parallelism
 Use `fork` to launch parallel prompts.
 Because `sgl.gen` is non-blocking, the for loop below issues two generation calls in parallel.
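The issue-now-join-later behavior described above can be illustrated with plain Python `concurrent.futures` (an analogy only, not SGLang's API; `slow_gen` is a made-up stand-in for a generation call):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_gen(prompt: str) -> str:
    # Stand-in for a generation call; sleeps to simulate latency.
    time.sleep(0.2)
    return f"answer to {prompt!r}"

start = time.time()
with ThreadPoolExecutor(max_workers=2) as pool:
    # Both calls are issued immediately (non-blocking), like sgl.gen after fork.
    futures = [pool.submit(slow_gen, p) for p in ["tip 1", "tip 2"]]
    # Blocking happens only here, when the values are actually needed.
    results = [f.result() for f in futures]
elapsed = time.time() - start

assert elapsed < 0.39  # ran concurrently, not 0.4 s sequentially
```

The same shape applies to `fork`: each forked branch issues its generations eagerly, and the program blocks only when a branch's value is read.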
@@ -121,7 +121,7 @@ def tip_suggestion(s):
 s += "In summary" + sgl.gen("summary")
 ```

-#### Multi-Modality
+### Multi-Modality
 Use `sgl.image` to pass an image as input.

 ```python
@@ -133,7 +133,7 @@ def image_qa(s, image_file, question):

 See also [local_example_llava_next.py](https://github.com/sgl-project/sglang/blob/main/examples/frontend_language/quick_start/local_example_llava_next.py).

-#### Constrained Decoding
+### Constrained Decoding
 Use `regex` to specify a regular expression as a decoding constraint.
 This is only supported for local models.
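To make the constraint concrete, here is a plain-`re` sketch, independent of SGLang, using an illustrative IPv4-shaped pattern (an assumption, not taken from the docs above). Under a regex constraint, the backend masks logits so every sampled continuation can still match the pattern:

```python
import re

# Illustrative pattern (assumption): a dotted quad like "192.168.0.1".
ip_pattern = r"(\d{1,3}\.){3}\d{1,3}"

# Outputs the constrained decoder could produce must fully match:
assert re.fullmatch(ip_pattern, "192.168.0.1")
# ...while free-form text is impossible to generate under the constraint:
assert re.fullmatch(ip_pattern, "the address is 192.168.0.1") is None
```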
@@ -148,7 +148,7 @@ def regular_expression_gen(s):
 )
 ```

-#### JSON Decoding
+### JSON Decoding
 Use `regex` to specify a JSON schema with a regular expression.

 ```python
@@ -177,7 +177,7 @@ def character_gen(s, name):

 See also [json_decode.py](https://github.com/sgl-project/sglang/blob/main/examples/frontend_language/usage/json_decode.py) for an additional example of specifying formats with Pydantic models.

-#### Batching
+### Batching
 Use `run_batch` to run a batch of requests with continuous batching.

 ```python
@@ -196,7 +196,7 @@ states = text_qa.run_batch(
 )
 ```

-#### Streaming
+### Streaming
 Add `stream=True` to enable streaming.

 ```python
@@ -215,7 +215,7 @@ for out in state.text_iter():
 print(out, end="", flush=True)
 ```

-#### Roles
+### Roles

 Use `sgl.system`, `sgl.user` and `sgl.assistant` to set roles when using Chat models. You can also define more complex role prompts using begin and end tokens.
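Conceptually, role helpers wrap each message in the model's chat-template delimiters. A toy sketch with made-up tokens (`<|role|>` / `<|end|>` are hypothetical, not any real model's template):

```python
def wrap(role: str, content: str) -> str:
    # Hypothetical begin/end tokens; real chat templates differ per model.
    return f"<|{role}|>{content}<|end|>"

# A conversation becomes one flat prompt string for the model:
prompt = "".join([
    wrap("system", "You are helpful."),
    wrap("user", "Hi"),
])
assert prompt == "<|system|>You are helpful.<|end|><|user|>Hi<|end|>"
```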
@@ -233,6 +233,6 @@ def chat_example(s):
 s += sgl.assistant_end()
 ```

-#### Tips and Implementation Details
+### Tips and Implementation Details
 - The `choices` argument in `sgl.gen` is implemented by computing the [token-length normalized log probabilities](https://blog.eleuther.ai/multiple-choice-normalization/) of all choices and selecting the one with the highest probability.
 - The `regex` argument in `sgl.gen` is implemented through autoregressive decoding with logit bias masking, according to the constraints set by the regex. It is compatible with `temperature=0` and `temperature != 0`.
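The token-length normalization in the first tip can be sketched in a few lines of plain Python (the per-choice log-probabilities below are made-up numbers for illustration):

```python
# Assumed shapes: per-token log-probabilities for each candidate choice,
# as a backend might return after scoring each continuation.
choice_token_logprobs = {
    "Paris": [-0.1, -0.2],                                # 2 tokens, sum -0.3
    "The city of Paris": [-0.1, -0.2, -0.3, -0.4, -0.5],  # 5 tokens, sum -1.5
}

def normalized_logprob(logprobs):
    # Average log-prob per token, so longer choices are not penalized
    # merely for containing more tokens.
    return sum(logprobs) / len(logprobs)

best = max(choice_token_logprobs, key=lambda c: normalized_logprob(choice_token_logprobs[c]))
assert best == "Paris"  # average -0.15 beats average -0.3
```

Without the length normalization, the raw sums (-0.3 vs. -1.5) would bias selection toward shorter choices even more strongly; normalization makes choices of different token lengths comparable.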
@@ -1,8 +1,8 @@
-# Install
+# Install SGLang

 You can install SGLang using any of the methods below.

-### Method 1: With pip
+## Method 1: With pip
 ```
 pip install --upgrade pip
 pip install "sglang[all]"
@@ -13,7 +13,7 @@ pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.4/

 Note: Please check the [FlashInfer installation doc](https://docs.flashinfer.ai/installation.html) to install the proper version according to your PyTorch and CUDA versions.

-### Method 2: From source
+## Method 2: From source
 ```
 # Use the last release branch
 git clone -b v0.3.4.post2 https://github.com/sgl-project/sglang.git
@@ -28,7 +28,7 @@ pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.4/

 Note: Please check the [FlashInfer installation doc](https://docs.flashinfer.ai/installation.html) to install the proper version according to your PyTorch and CUDA versions.

-### Method 3: Using docker
+## Method 3: Using docker
 The docker images are available on Docker Hub as [lmsysorg/sglang](https://hub.docker.com/r/lmsysorg/sglang/tags), built from [Dockerfile](https://github.com/sgl-project/sglang/tree/main/docker).
 Replace `<secret>` below with your huggingface hub [token](https://huggingface.co/docs/hub/en/security-tokens).
@@ -42,7 +42,7 @@ docker run --gpus all \
 python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 30000
 ```

-### Method 4: Using docker compose
+## Method 4: Using docker compose

 <details>
 <summary>More</summary>
@@ -54,7 +54,7 @@ docker run --gpus all \
 2. Execute the command `docker compose up -d` in your terminal.
 </details>

-### Method 5: Run on Kubernetes or Clouds with SkyPilot
+## Method 5: Run on Kubernetes or Clouds with SkyPilot

 <details>
 <summary>More</summary>
@@ -95,7 +95,7 @@ sky status --endpoint 30000 sglang
 3. To further scale up your deployment with autoscaling and failure recovery, check out the [SkyServe + SGLang guide](https://github.com/skypilot-org/skypilot/tree/master/llm/sglang#serving-llama-2-with-sglang-for-more-traffic-using-skyserve).
 </details>

-### Common Notes
+## Common Notes
 - [FlashInfer](https://github.com/flashinfer-ai/flashinfer) is the default attention kernel backend. It only supports sm75 and above. If you encounter any FlashInfer-related issues on sm75+ devices (e.g., T4, A10, A100, L4, L40S, H100), please switch to other kernels by adding `--attention-backend triton --sampling-backend pytorch` and open an issue on GitHub.
 - If you only need to use the OpenAI backend, you can avoid installing other dependencies by using `pip install "sglang[openai]"`.
 - The language frontend operates independently of the backend runtime. You can install the frontend locally without needing a GPU, while the backend can be set up on a GPU-enabled machine. To install the frontend, run `pip install sglang`, and for the backend, use `pip install sglang[srt]`. This allows you to build SGLang programs locally and execute them by connecting to the remote backend.