# SGLang Engine
SGLang provides a direct inference engine without the need for an HTTP server. There are generally these use cases:
- [Offline Batch Inference](#offline-batch-inference)
- [Embedding Generation](#embedding-generation)
- [Custom Server](#custom-server)
- [Token-In-Token-Out for RLHF](#token-in-token-out-for-rlhf)
- [Inference Using FastAPI](#inference-using-fastapi)
## Examples
### [Offline Batch Inference](./offline_batch_inference.py)
In this example, we launch an SGLang engine and feed it a batch of inputs for inference. If you provide a very large batch, the engine schedules the requests so they are processed efficiently without out-of-memory (OOM) errors.
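A minimal sketch of the flow described above: construct an engine, pass a batch of prompts to `generate`, and read the generated text back. The model path and sampling parameters here are placeholders; substitute your own.

```python
import sglang as sgl

if __name__ == "__main__":
    # Load a model into the offline engine (no HTTP server involved).
    llm = sgl.Engine(model_path="meta-llama/Llama-3.1-8B-Instruct")

    prompts = [
        "Hello, my name is",
        "The capital of France is",
        "The future of AI is",
    ]
    sampling_params = {"temperature": 0.8, "top_p": 0.95}

    # The engine batches and schedules these requests internally.
    outputs = llm.generate(prompts, sampling_params)
    for prompt, output in zip(prompts, outputs):
        print(prompt, "->", output["text"])

    llm.shutdown()
```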
### [Embedding Generation](./embedding.py)
In this example, we launch an SGLang engine and feed a batch of inputs for embedding generation.
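A short sketch of embedding generation, assuming an embedding-capable model (the model name below is only an example): the engine is created in embedding mode and `encode` returns one embedding vector per input.

```python
import sglang as sgl

if __name__ == "__main__":
    # is_embedding=True switches the engine into embedding mode.
    llm = sgl.Engine(
        model_path="Alibaba-NLP/gte-Qwen2-7B-instruct",
        is_embedding=True,
    )

    texts = [
        "SGLang is a fast serving framework.",
        "Embeddings map text to vectors.",
    ]
    outputs = llm.encode(texts)
    for text, output in zip(texts, outputs):
        # Each output carries the embedding vector for its input.
        print(text, "->", len(output["embedding"]), "dims")

    llm.shutdown()
```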
### [Custom Server](./custom_server.py)
This example demonstrates how to create a custom server on top of the SGLang Engine. We use [Sanic](https://sanic.dev/en/) as an example. The server supports both non-streaming and streaming endpoints.
#### Steps
1. Install Sanic:
```bash
pip install sanic
```
2. Run the server:
```bash
python custom_server.py
```
3. Send requests:
```bash
curl -X POST http://localhost:8000/generate -H "Content-Type: application/json" -d '{"prompt": "The Transformer architecture is..."}'
curl -X POST http://localhost:8000/generate_stream -H "Content-Type: application/json" -d '{"prompt": "The Transformer architecture is..."}' --no-buffer
```
This will send both non-streaming and streaming requests to the server.
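The non-streaming half of such a server can be sketched as follows. This is a simplified outline, not the actual `custom_server.py`; the model path is a placeholder, and the engine is created once at server startup so all requests share it.

```python
import sglang as sgl
from sanic import Sanic
from sanic.response import json as json_response

app = Sanic("sglang-custom-server")

@app.before_server_start
async def init_engine(app, _loop):
    # Create one shared engine before the server starts accepting requests.
    app.ctx.engine = sgl.Engine(model_path="meta-llama/Llama-3.1-8B-Instruct")

@app.post("/generate")
async def generate(request):
    prompt = request.json["prompt"]
    # async_generate avoids blocking Sanic's event loop.
    result = await request.app.ctx.engine.async_generate(
        prompt, {"temperature": 0.8}
    )
    return json_response({"text": result["text"]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

A streaming endpoint would additionally pass `stream=True` to the engine and write each chunk to a Sanic streaming response.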
### [Token-In-Token-Out for RLHF](../token_in_token_out)
In this example, we launch an SGLang engine, feed it token IDs as input, and receive generated token IDs as output. This is useful for RLHF pipelines, where rollouts are typically kept in token space.
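The pattern can be sketched as below. This is an illustrative outline, assuming `skip_tokenizer_init=True` makes the engine accept and return raw token IDs; the model name and the exact output field are placeholders to check against the linked example.

```python
import sglang as sgl
from transformers import AutoTokenizer

if __name__ == "__main__":
    model = "meta-llama/Llama-3.1-8B-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(model)

    # Skip the engine's own tokenizer so I/O stays in token space.
    llm = sgl.Engine(model_path=model, skip_tokenizer_init=True)

    # Tokenize outside the engine, as an RLHF trainer would.
    input_ids = tokenizer("The capital of France is")["input_ids"]

    outputs = llm.generate(
        input_ids=[input_ids],
        sampling_params={"temperature": 0.0, "max_new_tokens": 16},
    )
    # Generated token IDs, e.g. for a rollout buffer (field name may differ).
    print(outputs[0]["output_ids"])

    llm.shutdown()
```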
### [Inference Using FastAPI](fastapi_engine_inference.py)
This example demonstrates how to create a FastAPI server that uses the SGLang engine for text generation.
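A minimal sketch of this pattern, assuming a placeholder model path: the engine is created in FastAPI's lifespan handler so it is loaded once and shut down cleanly.

```python
from contextlib import asynccontextmanager

import sglang as sgl
from fastapi import FastAPI
from pydantic import BaseModel

engine = None

@asynccontextmanager
async def lifespan(app: FastAPI):
    global engine
    # Load the model once at startup, release it at shutdown.
    engine = sgl.Engine(model_path="meta-llama/Llama-3.1-8B-Instruct")
    yield
    engine.shutdown()

app = FastAPI(lifespan=lifespan)

class GenerateRequest(BaseModel):
    prompt: str

@app.post("/generate")
async def generate(req: GenerateRequest):
    result = await engine.async_generate(req.prompt, {"temperature": 0.8})
    return {"text": result["text"]}
```

Run it with `uvicorn fastapi_engine_inference:app --port 8000` and query it with the same `curl` commands shown for the custom server above.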