{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Offline Engine API\n",
"\n",
"SGLang provides a direct inference engine without the need for an HTTP server, especially for use cases where additional HTTP server adds unnecessary complexity or overhead. Here are two general use cases:\n",
"\n",
"- Offline Batch Inference\n",
"- Custom Server on Top of the Engine\n",
"\n",
"This document focuses on the offline batch inference, demonstrating four different inference modes:\n",
"\n",
"- Non-streaming synchronous generation\n",
"- Streaming synchronous generation\n",
"- Non-streaming asynchronous generation\n",
"- Streaming asynchronous generation\n",
"\n",
"Additionally, you can easily build a custom server on top of the SGLang offline engine. A detailed example working in a python script can be found in [custom_server](https://github.com/sgl-project/sglang/blob/main/examples/runtime/engine/custom_server.py).\n",
"\n"
]
},
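{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal, hedged sketch (the linked example may use a different web framework and request schema), a custom server can wrap the engine's `async_generate` method, for instance with FastAPI:\n",
"```python\n",
"# Hedged sketch of a custom server on top of the offline engine.\n",
"# Assumes FastAPI and uvicorn are installed; the endpoint name and the\n",
"# request shape are illustrative, not taken from the linked example.\n",
"import sglang as sgl\n",
"from fastapi import FastAPI\n",
"\n",
"app = FastAPI()\n",
"llm = sgl.Engine(model_path=\"qwen/qwen2.5-0.5b-instruct\")\n",
"\n",
"\n",
"@app.post(\"/generate\")\n",
"async def generate(prompt: str):\n",
"    # Assumes a single string prompt returns a single result dict.\n",
"    output = await llm.async_generate(prompt, {\"temperature\": 0.8, \"top_p\": 0.95})\n",
"    return {\"text\": output[\"text\"]}\n",
"```"
]
},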
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Nest Asyncio\n",
"Note that if you want to use **Offline Engine** in ipython or some other nested loop code, you need to add the following code:\n",
"```python\n",
"import nest_asyncio\n",
"\n",
"nest_asyncio.apply()\n",
"\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Advanced Usage\n",
"\n",
"The engine supports [vlm inference](https://github.com/sgl-project/sglang/blob/main/examples/runtime/engine/offline_batch_inference_vlm.py) as well as [extracting hidden states](https://github.com/sgl-project/sglang/blob/main/examples/runtime/hidden_states). \n",
"\n",
"Please see [the examples](https://github.com/sgl-project/sglang/tree/main/examples/runtime/engine) for further use cases."
]
},
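{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a heavily hedged sketch of VLM batch inference: the model checkpoint, image path, and prompt below are illustrative, and the image placeholder tokens depend on the model's chat template, so follow the linked example for the exact format.\n",
"```python\n",
"import sglang as sgl\n",
"\n",
"# Hypothetical vision-language checkpoint chosen for illustration.\n",
"vlm = sgl.Engine(model_path=\"Qwen/Qwen2-VL-2B-Instruct\")\n",
"\n",
"# Add the model's image placeholder tokens to the prompt as shown in the example.\n",
"prompt = \"Describe this image in one sentence.\"\n",
"image = \"/path/to/image.png\"  # hypothetical path; see the example for supported formats\n",
"\n",
"# Assumes Engine.generate accepts an image_data argument, as in the linked example.\n",
"output = vlm.generate(prompt, {\"temperature\": 0.8, \"top_p\": 0.95}, image_data=image)\n",
"print(output[\"text\"])\n",
"\n",
"vlm.shutdown()\n",
"```"
]
},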
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Offline Batch Inference\n",
"\n",
"SGLang offline engine supports batch inference with efficient scheduling."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# launch the offline engine\n",
"import asyncio\n",
"\n",
"import sglang as sgl\n",
"import sglang.test.doc_patch\n",
"from sglang.utils import async_stream_and_merge, stream_and_merge\n",
"\n",
"llm = sgl.Engine(model_path=\"qwen/qwen2.5-0.5b-instruct\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Non-streaming Synchronous Generation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"prompts = [\n",
" \"Hello, my name is\",\n",
" \"The president of the United States is\",\n",
" \"The capital of France is\",\n",
" \"The future of AI is\",\n",
"]\n",
"\n",
"sampling_params = {\"temperature\": 0.8, \"top_p\": 0.95}\n",
"\n",
"outputs = llm.generate(prompts, sampling_params)\n",
"for prompt, output in zip(prompts, outputs):\n",
" print(\"===============================\")\n",
" print(f\"Prompt: {prompt}\\nGenerated text: {output['text']}\")"
]
},
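{
"cell_type": "markdown",
"metadata": {},
"source": [
"Each element of `outputs` is a dictionary. Besides the generated `text`, it typically carries a `meta_info` entry with details such as token counts and the finish reason (a hedged note: the exact field names may vary between SGLang versions):\n",
"```python\n",
"for output in outputs:\n",
"    meta = output.get(\"meta_info\", {})\n",
"    print(meta.get(\"prompt_tokens\"), meta.get(\"completion_tokens\"), meta.get(\"finish_reason\"))\n",
"```"
]
},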
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Streaming Synchronous Generation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"prompts = [\n",
" \"Write a short, neutral self-introduction for a fictional character. Hello, my name is\",\n",
" \"Provide a concise factual statement about Frances capital city. The capital of France is\",\n",
" \"Explain possible future trends in artificial intelligence. The future of AI is\",\n",
"]\n",
"\n",
"sampling_params = {\n",
" \"temperature\": 0.2,\n",
" \"top_p\": 0.9,\n",
"}\n",
"\n",
"print(\"\\n=== Testing synchronous streaming generation with overlap removal ===\\n\")\n",
"\n",
"for prompt in prompts:\n",
" print(f\"Prompt: {prompt}\")\n",
" merged_output = stream_and_merge(llm, prompt, sampling_params)\n",
" print(\"Generated text:\", merged_output)\n",
" print()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Non-streaming Asynchronous Generation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"prompts = [\n",
" \"Write a short, neutral self-introduction for a fictional character. Hello, my name is\",\n",
" \"Provide a concise factual statement about Frances capital city. The capital of France is\",\n",
" \"Explain possible future trends in artificial intelligence. The future of AI is\",\n",
"]\n",
"\n",
"sampling_params = {\"temperature\": 0.8, \"top_p\": 0.95}\n",
"\n",
"print(\"\\n=== Testing asynchronous batch generation ===\")\n",
"\n",
"\n",
"async def main():\n",
" outputs = await llm.async_generate(prompts, sampling_params)\n",
"\n",
" for prompt, output in zip(prompts, outputs):\n",
" print(f\"\\nPrompt: {prompt}\")\n",
" print(f\"Generated text: {output['text']}\")\n",
"\n",
"\n",
"asyncio.run(main())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Streaming Asynchronous Generation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"prompts = [\n",
" \"Write a short, neutral self-introduction for a fictional character. Hello, my name is\",\n",
" \"Provide a concise factual statement about Frances capital city. The capital of France is\",\n",
" \"Explain possible future trends in artificial intelligence. The future of AI is\",\n",
"]\n",
"\n",
"sampling_params = {\"temperature\": 0.8, \"top_p\": 0.95}\n",
"\n",
"print(\"\\n=== Testing asynchronous streaming generation (no repeats) ===\")\n",
"\n",
"\n",
"async def main():\n",
" for prompt in prompts:\n",
" print(f\"\\nPrompt: {prompt}\")\n",
" print(\"Generated text: \", end=\"\", flush=True)\n",
"\n",
" # Replace direct calls to async_generate with our custom overlap-aware version\n",
" async for cleaned_chunk in async_stream_and_merge(llm, prompt, sampling_params):\n",
" print(cleaned_chunk, end=\"\", flush=True)\n",
"\n",
" print() # New line after each prompt\n",
"\n",
"\n",
"asyncio.run(main())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm.shutdown()"
]
}
],
"metadata": {
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}