Lianmin Zheng
2024-11-02 11:46:00 -07:00
committed by GitHub
parent 3b60558dd7
commit 7b394e5f2b
6 changed files with 87 additions and 265 deletions

View File

@@ -97,5 +97,5 @@ sky status --endpoint 30000 sglang
## Common Notes
- [FlashInfer](https://github.com/flashinfer-ai/flashinfer) is the default attention kernel backend. It only supports sm75 and above. If you encounter any FlashInfer-related issues on sm75+ devices (e.g., T4, A10, A100, L4, L40S, H100), switch to other kernels by adding `--attention-backend triton --sampling-backend pytorch` to your launch command (e.g., `python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct --attention-backend triton --sampling-backend pytorch`), and please open an issue on GitHub.
- If you only need to use the OpenAI backend, you can avoid installing other dependencies by using `pip install "sglang[openai]"`.
- If you only need to use OpenAI models with the frontend language, you can avoid installing other dependencies by using `pip install "sglang[openai]"`.
- The language frontend operates independently of the backend runtime. You can install the frontend on a local machine without a GPU, while the backend runs on a GPU-enabled machine. To install the frontend, run `pip install sglang`; for the backend, use `pip install sglang[srt]`. This lets you write SGLang programs locally and execute them against the remote backend, as in the sketch below.
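
Below is a minimal sketch of this split setup. It assumes a backend is already running at `http://remote-host:30000` (the hostname is a placeholder; point it at your own server) and uses the frontend's `@sgl.function` / `RuntimeEndpoint` API:

```python
# A sketch of driving a remote SGLang backend from a local, GPU-free frontend.
# "remote-host" is a placeholder; replace it with your backend machine.
import sglang as sgl


@sgl.function
def qa(s, question):
    s += sgl.user(question)
    s += sgl.assistant(sgl.gen("answer", max_tokens=64))


# Connect the local frontend to the remote backend runtime
sgl.set_default_backend(sgl.RuntimeEndpoint("http://remote-host:30000"))

state = qa.run(question="What is the capital of France?")
print(state["answer"])
```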

View File

@@ -5,7 +5,6 @@
"metadata": {},
"source": [
"# Quick Start: Sending Requests\n",
"\n",
"This notebook provides a quick-start guide for using SGLang after installation."
]
},
@@ -14,7 +13,6 @@
"metadata": {},
"source": [
"## Launch a server\n",
"\n",
"This code block is equivalent to executing \n",
"\n",
"```bash\n",
@@ -83,7 +81,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using OpenAI Compatible API w/ Requests"
"## Using Requests"
]
},
{
@@ -119,9 +117,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using OpenAI Python Client\n",
"\n",
"You can also use the OpenAI Python API library to send requests."
"## Using OpenAI Python Client"
]
},
{
@@ -153,6 +149,41 @@
"print_highlight(response)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Streaming"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import openai\n",
"\n",
"client = openai.Client(base_url=\"http://127.0.0.1:30000/v1\", api_key=\"None\")\n",
"\n",
"# Use stream=True for streaming responses\n",
"response = client.chat.completions.create(\n",
" model=\"meta-llama/Meta-Llama-3.1-8B-Instruct\",\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": \"You are a helpful AI assistant\"},\n",
" {\"role\": \"user\", \"content\": \"List 3 countries and their capitals.\"},\n",
" ],\n",
" temperature=0,\n",
" max_tokens=64,\n",
" stream=True,\n",
")\n",
"\n",
"# Handle the streaming output\n",
"for chunk in response:\n",
" if chunk.choices[0].delta.content:\n",
" print(chunk.choices[0].delta.content, end='', flush=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -184,6 +215,46 @@
"print_highlight(response.json())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Streaming"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import requests, json\n",
"\n",
"response = requests.post(\n",
" \"http://localhost:30000/generate\",\n",
" json={\n",
" \"text\": \"The capital of France is\",\n",
" \"sampling_params\": {\n",
" \"temperature\": 0,\n",
" \"max_new_tokens\": 32,\n",
" },\n",
" \"stream\": True,\n",
" },\n",
" stream=True,\n",
")\n",
"\n",
"prev = 0\n",
"for chunk in response.iter_lines(decode_unicode=False):\n",
" chunk = chunk.decode(\"utf-8\")\n",
" if chunk and chunk.startswith(\"data:\"):\n",
" if chunk == \"data: [DONE]\":\n",
" break\n",
" data = json.loads(chunk[5:].strip(\"\\n\"))\n",
" output = data[\"text\"]\n",
" print(output[prev:], end=\"\", flush=True)\n",
" prev = len(output)"
]
},
{
"cell_type": "code",
"execution_count": 6,