example: add vlm to token in & out example (#3941)

Co-authored-by: zhaochenyang20 <zhaochen20@outlook.com>
Mick · 2025-03-05 14:18:26 +08:00 · committed by GitHub
parent e074d84e5b
commit 583d6af71b
9 changed files with 154 additions and 29 deletions


@@ -9,15 +9,15 @@ SGLang provides a direct inference engine without the need for an HTTP server. T
## Examples
-### 1. [Offline Batch Inference](./offline_batch_inference.py)
+### [Offline Batch Inference](./offline_batch_inference.py)
In this example, we launch an SGLang engine and feed it a batch of inputs for inference. Even with a very large batch, the engine intelligently schedules the requests so they are processed efficiently without OOM (Out of Memory) errors.
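A minimal sketch of this flow is below; the model path and sampling values are illustrative assumptions, not taken from the example script:

```python
# Offline batch inference sketch: model path and sampling values are
# illustrative assumptions, not copied from offline_batch_inference.py.
import sglang as sgl

if __name__ == "__main__":
    # Launch an in-process engine; no HTTP server is involved.
    llm = sgl.Engine(model_path="meta-llama/Llama-3.1-8B-Instruct")

    prompts = [
        "Hello, my name is",
        "The capital of France is",
        "The future of AI is",
    ]
    sampling_params = {"temperature": 0.8, "top_p": 0.95}

    # The engine batches and schedules all requests internally,
    # so even very large prompt lists are processed without OOM.
    outputs = llm.generate(prompts, sampling_params)
    for prompt, out in zip(prompts, outputs):
        print(prompt, "->", out["text"])

    llm.shutdown()
```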
-### 2. [Embedding Generation](./embedding.py)
+### [Embedding Generation](./embedding.py)
In this example, we launch an SGLang engine and feed a batch of inputs for embedding generation.
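A sketch of the same pattern for embeddings; `is_embedding=True` switches the engine into embedding mode, and the model name here is only an illustrative assumption:

```python
# Embedding generation sketch: the model name is an illustrative assumption.
import sglang as sgl

if __name__ == "__main__":
    llm = sgl.Engine(
        model_path="intfloat/e5-mistral-7b-instruct",
        is_embedding=True,  # run the engine in embedding mode
    )

    texts = [
        "What is the capital of France?",
        "Paris is the capital of France.",
    ]
    # encode() returns one result per input text.
    outputs = llm.encode(texts)
    for text, out in zip(texts, outputs):
        print(text, "->", len(out["embedding"]), "dimensions")

    llm.shutdown()
```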
-### 3. [Custom Server](./custom_server.py)
+### [Custom Server](./custom_server.py)
This example demonstrates how to build a custom server on top of the SGLang Engine, using [Sanic](https://sanic.dev/en/) as the web framework. The server supports both non-streaming and streaming endpoints; a minimal sketch follows the curl commands below.
@@ -43,3 +43,7 @@ curl -X POST http://localhost:8000/generate_stream -H "Content-Type: applicatio
```
This will send both non-streaming and streaming requests to the server.
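A rough sketch of the custom Sanic server described above; the endpoint names mirror the curl commands, while `async_generate` and the chunk/result fields are assumptions about the Engine API rather than guaranteed signatures:

```python
# Custom Sanic server sketch: async_generate and the chunk/result fields
# are assumptions about the Engine API, not guaranteed signatures.
import sglang as sgl
from sanic import Sanic, response

app = Sanic("sglang-custom-server")
llm = sgl.Engine(model_path="meta-llama/Llama-3.1-8B-Instruct")

@app.post("/generate")
async def generate(request):
    prompt = request.json["prompt"]
    result = await llm.async_generate(prompt, {"max_new_tokens": 64})
    return response.json({"text": result["text"]})

@app.post("/generate_stream")
async def generate_stream(request):
    prompt = request.json["prompt"]
    resp = await request.respond(content_type="text/plain")
    # With stream=True the call yields partial results as they arrive.
    generator = await llm.async_generate(prompt, {"max_new_tokens": 64}, stream=True)
    async for chunk in generator:
        await resp.send(chunk["text"])
    await resp.eof()

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000, single_process=True)
```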
+### [Token-In-Token-Out for RLHF](./token_in_token_out)
+In this example, we launch an SGLang engine, feed token IDs in as input, and get generated token IDs back as output, so no tokenization or detokenization round-trip is needed inside the RLHF loop.
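A sketch of the token-in-token-out flow; the `skip_tokenizer_init` flag and the `output_ids` output field are assumptions about how the engine exposes raw token IDs, and the model and sampling values are illustrative:

```python
# Token-in-token-out sketch: skip_tokenizer_init and the "output_ids"
# output field are assumptions; model and sampling values are illustrative.
import sglang as sgl
from transformers import AutoTokenizer

if __name__ == "__main__":
    model = "meta-llama/Llama-3.1-8B-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(model)

    # Ask the engine to accept and return raw token IDs directly.
    llm = sgl.Engine(model_path=model, skip_tokenizer_init=True)

    input_ids = tokenizer("The capital of France is")["input_ids"]
    sampling_params = {"temperature": 0.8, "top_p": 0.95, "max_new_tokens": 16}

    outputs = llm.generate(input_ids=[input_ids], sampling_params=sampling_params)
    for out in outputs:
        print(out["output_ids"])                    # generated token IDs
        print(tokenizer.decode(out["output_ids"]))  # decode only for display

    llm.shutdown()
```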