<div align="center" id="sglangtop">
<img src="https://raw.githubusercontent.com/sgl-project/sglang/main/assets/logo.png" alt="logo" width="400" margin="10px"></img>
[PyPI](https://pypi.org/project/sglang)
[License](https://github.com/sgl-project/sglang/tree/main/LICENSE)
[Issues](https://github.com/sgl-project/sglang/issues)
[Gurubase](https://gurubase.io/g/sglang)

</div>
--------------------------------------------------------------------------------
| [**Blog**](https://lmsys.org/blog/2024-07-25-sglang-llama3/)
| [**Documentation**](https://sgl-project.github.io/)
| [**Join Slack**](https://join.slack.com/t/sgl-fru7574/shared_invite/zt-2tmmp6flg-89dOlJW2TjnBrTRk1I_~GA)
| [**Join Bi-Weekly Development Meeting**](https://docs.google.com/document/d/1xEow4eIM152xNcRxqZz9VEcOiTQo8-CEuuQ5qTmkt-E/edit?usp=sharing)
| [**Slides**](https://github.com/sgl-project/sgl-learning-materials?tab=readme-ov-file#slides) |
## News
- [2024/12] 🔥 SGLang v0.4: Zero-Overhead Batch Scheduler, Cache-Aware Load Balancer, Faster Structured Outputs ([blog](https://lmsys.org/blog/2024-12-04-sglang-v0-4/)).
- [2024/10] 🔥 The First SGLang Online Meetup ([slides](https://github.com/sgl-project/sgl-learning-materials?tab=readme-ov-file#the-first-sglang-online-meetup)).
- [2024/09] SGLang v0.3 Release: 7x Faster DeepSeek MLA, 1.5x Faster torch.compile, Multi-Image/Video LLaVA-OneVision ([blog](https://lmsys.org/blog/2024-09-04-sglang-v0-3/)).
- [2024/07] Faster Llama3 Serving with SGLang Runtime (vs. TensorRT-LLM, vLLM) ([blog](https://lmsys.org/blog/2024-07-25-sglang-llama3/)).
<details>
<summary>More</summary>

- [2024/04] SGLang is used by the official **LLaVA-NeXT (video)** release ([blog](https://llava-vl.github.io/blog/2024-04-30-llava-next-video/)).
- [2024/02] SGLang enables **3x faster JSON decoding** with compressed finite state machine ([blog](https://lmsys.org/blog/2024-02-05-compressed-fsm/)).
- [2024/01] SGLang provides up to **5x faster inference** with RadixAttention ([blog](https://lmsys.org/blog/2024-01-17-sglang/)).
- [2024/01] SGLang powers the serving of the official **LLaVA v1.6** release demo ([usage](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#demo)).
</details>
## About
SGLang is a fast serving framework for large language models and vision language models.
It makes your interaction with models faster and more controllable by co-designing the backend runtime and frontend language.
The core features include:
- **Fast Backend Runtime**: Provides efficient serving with RadixAttention for prefix caching, jump-forward constrained decoding, overhead-free CPU scheduler, continuous batching, token attention (paged attention), tensor parallelism, FlashInfer kernels, chunked prefill, and quantization (FP8/INT4/AWQ/GPTQ).
- **Flexible Frontend Language**: Offers an intuitive interface for programming LLM applications, including chained generation calls, advanced prompting, control flow, multi-modal inputs, parallelism, and external interactions (see the sketch after this list).
- **Extensive Model Support**: Supports a wide range of generative models (Llama, Gemma, Mistral, Qwen, DeepSeek, LLaVA, etc.), embedding models (e5-mistral, gte, mcdse), and reward models (Skywork), with easy extensibility for integrating new models.
- **Active Community**: SGLang is open-source and backed by an active community with industry adoption.
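
To make the frontend language concrete, here is a minimal sketch of a multi-turn program. The endpoint URL, port, and generation parameters are illustrative assumptions; it presumes an SGLang server is already running locally.

```python
# A minimal sketch of the SGLang frontend language. Assumes a local
# SGLang server is already running at the (illustrative) endpoint below.
import sglang as sgl

@sgl.function
def multi_turn_qa(s, question_1, question_2):
    # Chained generation: the first answer stays in context
    # when the second question is asked.
    s += sgl.user(question_1)
    s += sgl.assistant(sgl.gen("answer_1", max_tokens=128))
    s += sgl.user(question_2)
    s += sgl.assistant(sgl.gen("answer_2", max_tokens=128))

# Point the frontend at a running SGLang runtime (port is an assumption).
sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))

state = multi_turn_qa.run(
    question_1="What is the capital of France?",
    question_2="How many people live there?",
)
print(state["answer_1"])
print(state["answer_2"])
```

Because both generation calls run in one program, the runtime can reuse the shared prefix via RadixAttention instead of recomputing it.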
## Getting Started
- [Install SGLang](https://sgl-project.github.io/start/install.html)
- [Send requests](https://sgl-project.github.io/start/send_request.html)
- [Backend: SGLang Runtime (SRT)](https://sgl-project.github.io/backend/backend.html)
- [Frontend: Structured Generation Language (SGLang)](https://sgl-project.github.io/frontend/frontend.html)
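
For a sense of the basic workflow covered in the guides above, the sketch below launches a server and sends one request through the OpenAI-compatible API. The model path and port are placeholders, not recommendations; substitute your own.

```python
# A hedged end-to-end sketch. First launch a server in another terminal
# (model path and port are placeholders):
#   python -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --port 30000
import requests

# Query the OpenAI-compatible chat completions endpoint.
response = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "meta-llama/Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "List 3 countries and their capitals."}],
        "temperature": 0,
        "max_tokens": 128,
    },
)
print(response.json()["choices"][0]["message"]["content"])
```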
## Benchmark and Performance
Learn more in our release blogs: [v0.2 blog](https://lmsys.org/blog/2024-07-25-sglang-llama3/), [v0.3 blog](https://lmsys.org/blog/2024-09-04-sglang-v0-3/), [v0.4 blog](https://lmsys.org/blog/2024-12-04-sglang-v0-4/).
## Roadmap
[Development Roadmap (2024 Q4)](https://github.com/sgl-project/sglang/issues/1487)
## Adoption and Sponsorship
The project is supported by (alphabetically): AMD, Baseten, Etched, Hyperbolic, Jam & Tea Studios, LinkedIn, Meituan, NVIDIA, RunPod, Stanford, UC Berkeley, xAI, and 01.AI.
## Acknowledgment and Citation
We learned from the design and reused code from the following projects: [Guidance](https://github.com/guidance-ai/guidance), [vLLM](https://github.com/vllm-project/vllm), [LightLLM](https://github.com/ModelTC/lightllm), [FlashInfer](https://github.com/flashinfer-ai/flashinfer), [Outlines](https://github.com/outlines-dev/outlines), and [LMQL](https://github.com/eth-sri/lmql).
Please cite our paper, [SGLang: Efficient Execution of Structured Language Model Programs](https://arxiv.org/abs/2312.07104), if you find the project useful.