<div align="center">
<img src="assets/logo.png" alt="logo" width="400">
</div>

--------------------------------------------------------------------------------

| [**Blog**](https://lmsys.org/blog/2024-01-17-sglang/) | [**Paper**](https://arxiv.org/abs/2312.07104) |
SGLang is a structured generation language designed for large language models (LLMs).
It makes your interaction with LLMs faster and more controllable by co-designing the frontend language and the runtime system.

The core features include:

- **Flexible Frontend Language**: Enables easy programming of LLM applications with chained generation calls, advanced prompting, control flow, multiple modalities, parallelism, and external interactions.
- **High-Performance Backend Runtime**: Features RadixAttention for accelerating complex LLM programs by reusing the KV cache across multiple calls. It can also serve as a standalone inference engine with all common techniques implemented (e.g., continuous batching and tensor parallelism).

## News
- [2024/02] 🔥 SGLang enables **3x faster JSON decoding** with a compressed finite state machine ([blog](https://lmsys.org/blog/2024-02-05-compressed-fsm/)).
- [2024/01] 🔥 SGLang powers the serving of the official **LLaVA v1.6** release demo ([usage](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#demo)).
- [2024/01] SGLang provides up to **5x faster inference** with RadixAttention ([blog](https://lmsys.org/blog/2024-01-17-sglang/)).

## Contents
- [Install](#install)
- [Quick Start](#quick-start)
- [Frontend: Structured Generation Language (SGLang)](#frontend-structured-generation-language-sglang)
- [Backend: SGLang Runtime (SRT)](#backend-sglang-runtime-srt)
- [Benchmark And Performance](#benchmark-and-performance)
- [Roadmap](#roadmap)
- [Citation And Acknowledgment](#citation-and-acknowledgment)
## Install
### Method 1: With pip
```
pip install "sglang[all]"

# Install FlashInfer CUDA kernels
pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.3/
```

### Method 2: From source
```
git clone https://github.com/sgl-project/sglang.git
cd sglang
pip install --upgrade pip
pip install -e "python[all]"

# Install FlashInfer CUDA kernels
pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.3/
```

### Method 3: Using docker
The docker images are available on Docker Hub as [lmsysorg/sglang](https://hub.docker.com/r/lmsysorg/sglang/tags).

### Common Notes
- If you see errors from the Triton compiler, please install [Triton Nightly](https://triton-lang.org/main/getting-started/installation.html).
- If you cannot install FlashInfer, check out its [installation](https://docs.flashinfer.ai/installation.html) page. If you still cannot install it, you can use the slower Triton kernels by adding `--disable-flashinfer` when launching the server.
- If you only need to use the OpenAI backend, you can avoid installing other dependencies by using `pip install "sglang[openai]"`.

## Quick Start
The example below shows how to use SGLang to answer a multi-turn question.
### Using Local Models
First, launch a server with
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
Then, connect to the server and answer a multi-turn question.
```python
from sglang import function, system, user, assistant, gen, set_default_backend, RuntimeEndpoint

@function
def multi_turn_question(s, question_1, question_2):
    s += system("You are a helpful assistant.")
    s += user(question_1)
    s += assistant(gen("answer_1", max_tokens=256))
    s += user(question_2)
    s += assistant(gen("answer_2", max_tokens=256))

set_default_backend(RuntimeEndpoint("http://localhost:30000"))

state = multi_turn_question.run(
    question_1="What is the capital of the United States?",
    question_2="List two local attractions.",
)

for m in state.messages():
    print(m["role"], ":", m["content"])

print(state["answer_1"])
```
### Using OpenAI Models
Set the OpenAI API Key
```
export OPENAI_API_KEY=sk-******
```
Then, answer a multi-turn question.
```python
from sglang import function, system, user, assistant, gen, set_default_backend, OpenAI

@function
def multi_turn_question(s, question_1, question_2):
    s += system("You are a helpful assistant.")
    s += user(question_1)
    s += assistant(gen("answer_1", max_tokens=256))
    s += user(question_2)
    s += assistant(gen("answer_2", max_tokens=256))

set_default_backend(OpenAI("gpt-3.5-turbo"))

state = multi_turn_question.run(
    question_1="What is the capital of the United States?",
    question_2="List two local attractions.",
)

for m in state.messages():
    print(m["role"], ":", m["content"])

print(state["answer_1"])
```
### More Examples
Anthropic and VertexAI (Gemini) models are also supported.
You can find more examples at [examples/quick_start](examples/quick_start).
## Frontend: Structured Generation Language (SGLang)

To begin with, import sglang.
```python
import sglang as sgl
```

`sglang` provides simple primitives such as `gen`, `select`, `fork`, and `image`.
You can implement your prompt flow in a function decorated by `sgl.function`.
You can then invoke the function with `run` or `run_batch`.
The system will manage the state, chat template, parallelism, and batching for you.

The complete code for the examples below can be found at [readme_examples.py](examples/usage/readme_examples.py).
### Control Flow
You can use any Python code within the function body, including control flow, nested function calls, and external libraries.
```python
@sgl.function
def tool_use(s, question):
    s += "To answer this question: " + question + ". "
    s += "I need to use a " + sgl.gen("tool", choices=["calculator", "search engine"]) + ". "

    if s["tool"] == "calculator":
        s += "The math expression is" + sgl.gen("expression")
    elif s["tool"] == "search engine":
        s += "The key word to search is" + sgl.gen("word")
```

### Parallelism
Use `fork` to launch parallel prompts.
Because `sgl.gen` is non-blocking, the for loop below issues two generation calls in parallel.
```python
@sgl.function
def tip_suggestion(s):
    s += (
        "Here are two tips for staying healthy: "
        "1. Balanced Diet. 2. Regular Exercise.\n\n"
    )

    forks = s.fork(2)
    for i, f in enumerate(forks):
        f += f"Now, expand tip {i+1} into a paragraph:\n"
        f += sgl.gen(f"detailed_tip", max_tokens=256, stop="\n\n")

    s += "Tip 1:" + forks[0]["detailed_tip"] + "\n"
    s += "Tip 2:" + forks[1]["detailed_tip"] + "\n"
    s += "In summary" + sgl.gen("summary")
```

### Multi-Modality
Use `sgl.image` to pass an image as input.
```python
@sgl.function
def image_qa(s, image_file, question):
    s += sgl.user(sgl.image(image_file) + question)
    s += sgl.assistant(sgl.gen("answer", max_tokens=256))
```

See also [srt_example_llava.py](examples/quick_start/srt_example_llava.py).
### Constrained Decoding
Use `regex` to specify a regular expression as a decoding constraint.
This is only supported for local models.

```python
@sgl.function
def regular_expression_gen(s):
    s += "Q: What is the IP address of the Google DNS servers?\n"
    s += "A: " + sgl.gen(
        "answer",
        temperature=0,
        regex=r"((25[0-5]|2[0-4]\d|[01]?\d\d?)\.){3}(25[0-5]|2[0-4]\d|[01]?\d\d?)",
    )
```

### JSON Decoding
Use `regex` to specify a JSON schema with a regular expression.
```python
character_regex = (
    r"""\{\n"""
    + r"""    "name": "[\w\d\s]{1,16}",\n"""
    + r"""    "house": "(Gryffindor|Slytherin|Ravenclaw|Hufflepuff)",\n"""
    + r"""    "blood status": "(Pure-blood|Half-blood|Muggle-born)",\n"""
    + r"""    "occupation": "(student|teacher|auror|ministry of magic|death eater|order of the phoenix)",\n"""
    + r"""    "wand": \{\n"""
    + r"""        "wood": "[\w\d\s]{1,16}",\n"""
    + r"""        "core": "[\w\d\s]{1,16}",\n"""
    + r"""        "length": [0-9]{1,2}\.[0-9]{0,2}\n"""
    + r"""    \},\n"""
    + r"""    "alive": "(Alive|Deceased)",\n"""
    + r"""    "patronus": "[\w\d\s]{1,16}",\n"""
    + r"""    "bogart": "[\w\d\s]{1,16}"\n"""
    + r"""\}"""
)

@sgl.function
def character_gen(s, name):
    s += name + " is a character in Harry Potter. Please fill in the following information about this character.\n"
    s += sgl.gen("json_output", max_tokens=256, regex=character_regex)
```

See also [json_decode.py](examples/usage/json_decode.py) for an additional example of specifying formats with Pydantic models.

### Batching
Use `run_batch` to run a batch of requests with continuous batching.
```python
@sgl.function
def text_qa(s, question):
    s += "Q: " + question + "\n"
    s += "A:" + sgl.gen("answer", stop="\n")

states = text_qa.run_batch(
    [
        {"question": "What is the capital of the United Kingdom?"},
        {"question": "What is the capital of France?"},
        {"question": "What is the capital of Japan?"},
    ],
    progress_bar=True,
)
```

### Streaming
Add `stream=True` to enable streaming.
```python
@sgl.function
def text_qa(s, question):
    s += "Q: " + question + "\n"
    s += "A:" + sgl.gen("answer", stop="\n")

state = text_qa.run(
    question="What is the capital of France?",
    temperature=0.1,
    stream=True,
)

for out in state.text_iter():
    print(out, end="", flush=True)
```

### Tips and Implementation Details
- The `choices` argument in `sgl.gen` is implemented by computing the normalized log probabilities of all choices and selecting the one with the highest probability.
- The `regex` argument in `sgl.gen` is implemented through autoregressive decoding with logit bias masking, according to the constraints set by the regex.
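
The `choices` mechanism can be sketched in plain Python. This is a simplified illustration, not SGLang's implementation: the token log probabilities here are made-up inputs, whereas SGLang computes them from the model internally.

```python
def select_choice(choices, token_logprobs):
    """Pick the choice with the highest length-normalized log probability.

    `token_logprobs` maps each choice string to the (hypothetical) logprobs
    the model assigned to that choice's tokens.
    """
    def normalized(choice):
        lps = token_logprobs[choice]
        return sum(lps) / len(lps)  # normalize by token count
    return max(choices, key=normalized)

# Normalizing by token count keeps longer choices from being
# penalized simply for containing more tokens.
scores = {
    "calculator": [-0.2, -0.1],             # mean logprob: -0.15
    "search engine": [-0.05, -0.1, -0.05],  # mean logprob: ~-0.067
}
print(select_choice(["calculator", "search engine"], scores))  # search engine
```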
## Backend: SGLang Runtime (SRT)
The SGLang Runtime (SRT) is designed to work best with the SGLang frontend.
However, it can also be used as a standalone API server.
In this case, [RadixAttention](https://arxiv.org/abs/2312.07104) can still greatly accelerate many use cases with automatic KV cache reuse.
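
The intuition behind this reuse can be sketched with a toy prefix cache. This is a deliberate simplification for illustration only: the real system maintains a radix tree over token sequences, not a single cached entry.

```python
class ToyPrefixCache:
    """A toy, single-entry stand-in for the prefix reuse idea."""

    def __init__(self):
        self.cached = []  # token ids whose KV entries are assumed cached

    def match_prefix(self, tokens):
        # Count how many leading tokens are already cached; their KV
        # entries can be reused instead of recomputed.
        n = 0
        limit = min(len(self.cached), len(tokens))
        while n < limit and self.cached[n] == tokens[n]:
            n += 1
        return n

    def insert(self, tokens):
        self.cached = list(tokens)

cache = ToyPrefixCache()
cache.insert([101, 7, 42, 9])                 # first request populates the cache
print(cache.match_prefix([101, 7, 42, 500]))  # 3 leading tokens reused
```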
### Usage
Launch a server
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
Send a request
```
curl http://localhost:30000/generate \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Once upon a time,",
    "sampling_params": {
      "max_new_tokens": 16,
      "temperature": 0
    }
  }'
```
Learn more about the argument format [here](docs/sampling_params.md).
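
The same request body can be built from Python. The helper below simply mirrors the curl payload above (the helper name is our own; sending it requires a running server, so only construction is shown here):

```python
import json

def build_generate_request(text, max_new_tokens=16, temperature=0):
    # Mirrors the JSON body of the curl example; field names follow
    # the /generate endpoint shown above.
    return {
        "text": text,
        "sampling_params": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
        },
    }

payload = build_generate_request("Once upon a time,")
print(json.dumps(payload, indent=2))
```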
### OpenAI Compatible API
In addition, the server supports an experimental OpenAI-compatible API.
```python
import openai

client = openai.Client(
    base_url="http://127.0.0.1:30000/v1", api_key="EMPTY")

# Text completion
response = client.completions.create(
    model="default",
    prompt="The capital of France is",
    temperature=0,
    max_tokens=32,
)
print(response)

# Chat completion
response = client.chat.completions.create(
    model="default",
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant"},
        {"role": "user", "content": "List 3 countries and their capitals."},
    ],
    temperature=0,
    max_tokens=64,
)
print(response)
```
By default, the server uses the chat template specified in the model tokenizer from Hugging Face. It should just work for most official models such as Llama-2/Llama-3.
If needed, you can also override the chat template when launching the server:
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --chat-template llama-2
```
If the chat template you are looking for is missing, you are welcome to contribute it.
Meanwhile, you can also temporarily register your chat template as follows:
```json
{
  "name": "my_model",
  "system": "<|im_start|>system",
  "user": "<|im_start|>user",
  "assistant": "<|im_start|>assistant",
  "sep_style": "CHATML",
  "sep": "<|im_end|>",
  "stop_str": ["<|im_end|>", "<|im_start|>"]
}
```
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --chat-template ./my_model_template.json
```
### Additional Arguments
- Add `--tp 2` to enable tensor parallelism. If it reports `peer access is not supported between these two devices`, add the `--enable-p2p-check` option.
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --tp 2
```
- Add `--dp 2` to enable data parallelism. It can also be used together with tensor parallelism. Data parallelism is better for throughput if there is enough memory.
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --dp 2 --tp 2
```
- If you see out-of-memory errors during serving, try to reduce the memory usage of the KV cache pool by setting a smaller value of `--mem-fraction-static`. The default value is `0.9`.
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --mem-fraction-static 0.7
```
- See [hyperparameter_tuning.md](docs/hyperparameter_tuning.md) on tuning hyperparameters for better performance.
### Supported Models
- Llama
- Mistral
- Mixtral
- Qwen / Qwen 2
- Gemma
  - Please add the flag `--attention-reduce-in-fp32` to avoid some precision errors.
  - `python -m sglang.launch_server --model-path google/gemma-7b-it --port 30000 --attention-reduce-in-fp32`
- LLaVA
  - `python3 -m sglang.launch_server --model-path liuhaotian/llava-v1.5-7b --tokenizer-path llava-hf/llava-1.5-7b-hf --chat-template vicuna_v1.1 --port 30000`
  - `python3 -m sglang.launch_server --model-path liuhaotian/llava-v1.6-vicuna-7b --tokenizer-path llava-hf/llava-1.5-7b-hf --chat-template vicuna_v1.1 --port 30000`
  - `python3 -m sglang.launch_server --model-path liuhaotian/llava-v1.6-34b --tokenizer-path liuhaotian/llava-v1.6-34b-tokenizer --port 30000`
- LLaVA-NeXT-Video
  - See [srt_example_llava_v.sh](examples/usage/llava_video/srt_example_llava_v.sh).
- Yi-VL
  - See [srt_example_yi_vl.py](examples/quick_start/srt_example_yi_vl.py).
- StableLM
- Command-R
- DBRX
- AWQ/GPTQ/Marlin quantization

Instructions for supporting a new model are [here](https://github.com/sgl-project/sglang/blob/main/docs/model_support.md).
## Benchmark And Performance
- Llama-7B on NVIDIA A10G, FP16, Tensor Parallelism=1

- Mixtral-8x7B on NVIDIA A10G, FP16, Tensor Parallelism=8


- Learn more about the above [results](docs/benchmark_results.md).
- Synthetic latency and throughput benchmark [scripts](https://github.com/sgl-project/sglang/tree/main/benchmark/latency_throughput).

## Roadmap
https://github.com/sgl-project/sglang/issues/157
## Citation And Acknowledgment
```
@misc{zheng2024sglang,
      title={SGLang: Efficient Execution of Structured Language Model Programs},
      author={Lianmin Zheng and Liangsheng Yin and Zhiqiang Xie and Chuyue Sun and Jeff Huang and Cody Hao Yu and Shiyi Cao and Christos Kozyrakis and Ion Stoica and Joseph E. Gonzalez and Clark Barrett and Ying Sheng},
      year={2024},
      eprint={2312.07104},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
```

We learned from the design and reused some code of the following projects: [Guidance](https://github.com/guidance-ai/guidance), [vLLM](https://github.com/vllm-project/vllm), [LightLLM](https://github.com/ModelTC/lightllm), [FlashInfer](https://github.com/flashinfer-ai/flashinfer), [Outlines](https://github.com/outlines-dev/outlines), [LMQL](https://github.com/eth-sri/lmql).