{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Native APIs\n",
"\n",
"Apart from the OpenAI compatible APIs, the SGLang Runtime also provides its native server APIs. We introduce these following APIs:\n",
"\n",
"- `/generate` (text generation model)\n",
"- `/get_server_args`\n",
"- `/get_model_info`\n",
"- `/health`\n",
"- `/health_generate`\n",
"- `/flush_cache`\n",
"- `/get_memory_pool_size`\n",
"- `/update_weights`\n",
"- `/encode`(embedding model)\n",
"- `/classify`(reward model)\n",
"\n",
"We mainly use `requests` to test these APIs in the following examples. You can also use `curl`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Launch A Server"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {
"iopub.execute_input": "2024-11-07T18:44:42.063503Z",
"iopub.status.busy": "2024-11-07T18:44:42.063379Z",
"iopub.status.idle": "2024-11-07T18:45:07.255300Z",
"shell.execute_reply": "2024-11-07T18:45:07.254547Z"
}
},
"outputs": [],
"source": [
"from sglang.utils import (\n",
" execute_shell_command,\n",
" wait_for_server,\n",
" terminate_process,\n",
" print_highlight,\n",
")\n",
"\n",
"import requests\n",
"\n",
"server_process = execute_shell_command(\n",
" \"\"\"\n",
"python3 -m sglang.launch_server --model-path meta-llama/Llama-3.2-1B-Instruct --port=30010\n",
"\"\"\"\n",
")\n",
"\n",
"wait_for_server(\"http://localhost:30010\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Generate (text generation model)\n",
"Generate completions. This is similar to the `/v1/completions` in OpenAI API. Detailed parameters can be found in the [sampling parameters](../references/sampling_params.md)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {
"iopub.execute_input": "2024-11-07T18:45:07.258292Z",
"iopub.status.busy": "2024-11-07T18:45:07.257710Z",
"iopub.status.idle": "2024-11-07T18:45:07.611559Z",
"shell.execute_reply": "2024-11-07T18:45:07.610842Z"
}
},
"outputs": [],
"source": [
"url = \"http://localhost:30010/generate\"\n",
"data = {\"text\": \"What is the capital of France?\"}\n",
"\n",
"response = requests.post(url, json=data)\n",
"print_highlight(response.json())"
]
},
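{
"cell_type": "markdown",
"metadata": {},
"source": [
"The request can also carry a `sampling_params` field. Below is a minimal sketch that sets `temperature` and `max_new_tokens`; the full list of options is in the sampling parameters documentation linked above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# /generate with explicit sampling parameters\n",
"url = \"http://localhost:30010/generate\"\n",
"data = {\n",
"    \"text\": \"What is the capital of France?\",\n",
"    \"sampling_params\": {\"temperature\": 0, \"max_new_tokens\": 32},\n",
"}\n",
"\n",
"response = requests.post(url, json=data)\n",
"print_highlight(response.json())"
]
},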
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Get Server Args\n",
"Get the arguments of a server."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {
"iopub.execute_input": "2024-11-07T18:45:07.613911Z",
"iopub.status.busy": "2024-11-07T18:45:07.613746Z",
"iopub.status.idle": "2024-11-07T18:45:07.620286Z",
"shell.execute_reply": "2024-11-07T18:45:07.619779Z"
}
},
"outputs": [],
"source": [
"url = \"http://localhost:30010/get_server_args\"\n",
"\n",
"response = requests.get(url)\n",
"print_highlight(response.json())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Get Model Info\n",
"\n",
"Get the information of the model.\n",
"\n",
"- `model_path`: The path/name of the model.\n",
"- `is_generation`: Whether the model is used as generation model or embedding model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {
"iopub.execute_input": "2024-11-07T18:45:07.622407Z",
"iopub.status.busy": "2024-11-07T18:45:07.622267Z",
"iopub.status.idle": "2024-11-07T18:45:07.628290Z",
"shell.execute_reply": "2024-11-07T18:45:07.627793Z"
}
},
"outputs": [],
"source": [
"url = \"http://localhost:30010/get_model_info\"\n",
"\n",
"response = requests.get(url)\n",
"response_json = response.json()\n",
"print_highlight(response_json)\n",
"assert response_json[\"model_path\"] == \"meta-llama/Llama-3.2-1B-Instruct\"\n",
"assert response_json[\"is_generation\"] is True\n",
"assert response_json.keys() == {\"model_path\", \"is_generation\"}"
]
},
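{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an illustration, a client can use `is_generation` to decide which endpoint to call. The helper below is a hypothetical sketch, not part of SGLang."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical helper: route to /generate or /encode based on the model type.\n",
"def pick_endpoint(base_url):\n",
"    info = requests.get(f\"{base_url}/get_model_info\").json()\n",
"    return f\"{base_url}/generate\" if info[\"is_generation\"] else f\"{base_url}/encode\"\n",
"\n",
"\n",
"print_highlight(pick_endpoint(\"http://localhost:30010\"))"
]
},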
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Health Check\n",
"- `/health`: Check the health of the server.\n",
"- `/health_generate`: Check the health of the server by generating one token."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {
"iopub.execute_input": "2024-11-07T18:45:07.630585Z",
"iopub.status.busy": "2024-11-07T18:45:07.630235Z",
"iopub.status.idle": "2024-11-07T18:45:07.643498Z",
"shell.execute_reply": "2024-11-07T18:45:07.643007Z"
}
},
"outputs": [],
"source": [
"url = \"http://localhost:30010/health_generate\"\n",
"\n",
"response = requests.get(url)\n",
"print_highlight(response.text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {
"iopub.execute_input": "2024-11-07T18:45:07.645336Z",
"iopub.status.busy": "2024-11-07T18:45:07.645196Z",
"iopub.status.idle": "2024-11-07T18:45:07.650363Z",
"shell.execute_reply": "2024-11-07T18:45:07.649837Z"
}
},
"outputs": [],
"source": [
"url = \"http://localhost:30010/health\"\n",
"\n",
"response = requests.get(url)\n",
"print_highlight(response.text)"
]
},
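{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you manage servers programmatically, you can poll `/health` until it returns HTTP 200. The helper below is a hypothetical sketch; SGLang's own `wait_for_server` serves the same purpose."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"\n",
"\n",
"# Hypothetical helper: poll /health until the server answers with HTTP 200.\n",
"def wait_until_healthy(base_url, timeout=60.0, interval=1.0):\n",
"    deadline = time.time() + timeout\n",
"    while time.time() < deadline:\n",
"        try:\n",
"            if requests.get(f\"{base_url}/health\", timeout=5).status_code == 200:\n",
"                return True\n",
"        except requests.RequestException:\n",
"            pass\n",
"        time.sleep(interval)\n",
"    return False\n",
"\n",
"\n",
"print_highlight(wait_until_healthy(\"http://localhost:30010\"))"
]
},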
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Flush Cache\n",
"\n",
"Flush the radix cache. It will be automatically triggered when the model weights are updated by the `/update_weights` API."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {
"iopub.execute_input": "2024-11-07T18:45:07.652212Z",
"iopub.status.busy": "2024-11-07T18:45:07.652076Z",
"iopub.status.idle": "2024-11-07T18:45:07.658633Z",
"shell.execute_reply": "2024-11-07T18:45:07.658119Z"
}
},
"outputs": [],
"source": [
"# flush cache\n",
"\n",
"url = \"http://localhost:30010/flush_cache\"\n",
"\n",
"response = requests.post(url)\n",
"print_highlight(response.text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Get Memory Pool Size\n",
"\n",
"Get the memory pool size in number of tokens.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {
"iopub.execute_input": "2024-11-07T18:45:07.660468Z",
"iopub.status.busy": "2024-11-07T18:45:07.660325Z",
"iopub.status.idle": "2024-11-07T18:45:07.666476Z",
"shell.execute_reply": "2024-11-07T18:45:07.665984Z"
}
},
"outputs": [],
"source": [
"# get_memory_pool_size\n",
"\n",
"url = \"http://localhost:30010/get_memory_pool_size\"\n",
"\n",
"response = requests.get(url)\n",
"print_highlight(response.text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Update Weights\n",
"\n",
"Update model weights without restarting the server. Use for continuous evaluation during training. Only applicable for models with the same architecture and parameter size."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {
"iopub.execute_input": "2024-11-07T18:45:07.668242Z",
"iopub.status.busy": "2024-11-07T18:45:07.668108Z",
"iopub.status.idle": "2024-11-07T18:45:08.725709Z",
"shell.execute_reply": "2024-11-07T18:45:08.725021Z"
}
},
"outputs": [],
"source": [
"# successful update with same architecture and size\n",
"\n",
"url = \"http://localhost:30010/update_weights\"\n",
"data = {\"model_path\": \"meta-llama/Llama-3.2-1B\"}\n",
"\n",
"response = requests.post(url, json=data)\n",
"print_highlight(response.text)\n",
"assert response.json()[\"success\"] is True\n",
"assert response.json()[\"message\"] == \"Succeeded to update model weights.\"\n",
"assert response.json().keys() == {\"success\", \"message\"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {
"iopub.execute_input": "2024-11-07T18:45:08.727865Z",
"iopub.status.busy": "2024-11-07T18:45:08.727721Z",
"iopub.status.idle": "2024-11-07T18:45:11.165841Z",
"shell.execute_reply": "2024-11-07T18:45:11.165282Z"
}
},
"outputs": [],
"source": [
"# failed update with different parameter size\n",
"\n",
"url = \"http://localhost:30010/update_weights\"\n",
"data = {\"model_path\": \"meta-llama/Llama-3.2-3B\"}\n",
"\n",
"response = requests.post(url, json=data)\n",
"response_json = response.json()\n",
"print_highlight(response_json)\n",
"assert response_json[\"success\"] is False\n",
"assert response_json[\"message\"] == (\n",
" \"Failed to update weights: The size of tensor a (2048) must match \"\n",
" \"the size of tensor b (3072) at non-singleton dimension 1.\\n\"\n",
" \"Rolling back to original weights.\"\n",
")"
]
},
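{
"cell_type": "markdown",
"metadata": {},
"source": [
"In a training loop you would typically wrap this call in a small helper that checks the `success` flag before resuming evaluation. Below is a minimal sketch (the helper name is ours, not part of SGLang); we use it here to restore the original instruct weights."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical helper: swap in a new checkpoint and verify the update succeeded.\n",
"def update_server_weights(base_url, model_path):\n",
"    result = requests.post(\n",
"        f\"{base_url}/update_weights\", json={\"model_path\": model_path}\n",
"    ).json()\n",
"    if not result[\"success\"]:\n",
"        raise RuntimeError(f\"Weight update failed: {result['message']}\")\n",
"\n",
"\n",
"# Restore the original instruct weights before moving on.\n",
"update_server_weights(\"http://localhost:30010\", \"meta-llama/Llama-3.2-1B-Instruct\")"
]
},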
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Encode (embedding model)\n",
"\n",
"Encode text into embeddings. Note that this API is only available for [embedding models](openai_api_embeddings.html#openai-apis-embedding) and will raise an error for generation models.\n",
"Therefore, we launch a new server to server an embedding model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {
"iopub.execute_input": "2024-11-07T18:45:11.167853Z",
"iopub.status.busy": "2024-11-07T18:45:11.167711Z",
"iopub.status.idle": "2024-11-07T18:45:39.542988Z",
"shell.execute_reply": "2024-11-07T18:45:39.542135Z"
}
},
"outputs": [],
"source": [
"terminate_process(server_process)\n",
"\n",
"embedding_process = execute_shell_command(\n",
" \"\"\"\n",
"python -m sglang.launch_server --model-path Alibaba-NLP/gte-Qwen2-7B-instruct \\\n",
" --port 30020 --host 0.0.0.0 --is-embedding\n",
"\"\"\"\n",
")\n",
"\n",
"wait_for_server(\"http://localhost:30020\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {
"iopub.execute_input": "2024-11-07T18:45:39.545416Z",
"iopub.status.busy": "2024-11-07T18:45:39.545005Z",
"iopub.status.idle": "2024-11-07T18:45:39.588793Z",
"shell.execute_reply": "2024-11-07T18:45:39.588054Z"
}
},
"outputs": [],
"source": [
"# successful encode for embedding model\n",
"\n",
"url = \"http://localhost:30020/encode\"\n",
"data = {\"model\": \"Alibaba-NLP/gte-Qwen2-7B-instruct\", \"text\": \"Once upon a time\"}\n",
"\n",
"response = requests.post(url, json=data)\n",
"response_json = response.json()\n",
"print_highlight(f\"Text embedding (first 10): {response_json['embedding'][:10]}\")"
]
},
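{
"cell_type": "markdown",
"metadata": {},
"source": [
"Embeddings are typically compared with cosine similarity. The sketch below encodes a second, arbitrary text and compares the two vectors."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"\n",
"# Encode a second text and compare the two embeddings with cosine similarity.\n",
"data2 = {\"model\": \"Alibaba-NLP/gte-Qwen2-7B-instruct\", \"text\": \"Long ago and far away\"}\n",
"embedding1 = response_json[\"embedding\"]\n",
"embedding2 = requests.post(url, json=data2).json()[\"embedding\"]\n",
"\n",
"dot = sum(a * b for a, b in zip(embedding1, embedding2))\n",
"norm1 = math.sqrt(sum(a * a for a in embedding1))\n",
"norm2 = math.sqrt(sum(b * b for b in embedding2))\n",
"print_highlight(f\"Cosine similarity: {dot / (norm1 * norm2):.4f}\")"
]
},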
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Classify (reward model)\n",
"\n",
"SGLang Runtime also supports reward models. Here we use a reward model to classify the quality of pairwise generations."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {
"iopub.execute_input": "2024-11-07T18:45:39.590729Z",
"iopub.status.busy": "2024-11-07T18:45:39.590446Z",
"iopub.status.idle": "2024-11-07T18:45:59.660376Z",
"shell.execute_reply": "2024-11-07T18:45:59.659992Z"
}
},
"outputs": [],
"source": [
"terminate_process(embedding_process)\n",
"\n",
"# Note that SGLang now treats embedding models and reward models as the same type of models.\n",
"# This will be updated in the future.\n",
"\n",
"reward_process = execute_shell_command(\n",
" \"\"\"\n",
"python -m sglang.launch_server --model-path Skywork/Skywork-Reward-Llama-3.1-8B-v0.2 --port 30030 --host 0.0.0.0 --is-embedding\n",
"\"\"\"\n",
")\n",
"\n",
"wait_for_server(\"http://localhost:30030\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {
"iopub.execute_input": "2024-11-07T18:45:59.661779Z",
"iopub.status.busy": "2024-11-07T18:45:59.661641Z",
"iopub.status.idle": "2024-11-07T18:46:00.475726Z",
"shell.execute_reply": "2024-11-07T18:46:00.475269Z"
}
},
"outputs": [],
"source": [
"from transformers import AutoTokenizer\n",
"\n",
"PROMPT = (\n",
" \"What is the range of the numeric output of a sigmoid node in a neural network?\"\n",
")\n",
"\n",
"RESPONSE1 = \"The output of a sigmoid node is bounded between -1 and 1.\"\n",
"RESPONSE2 = \"The output of a sigmoid node is bounded between 0 and 1.\"\n",
"\n",
"CONVS = [\n",
" [{\"role\": \"user\", \"content\": PROMPT}, {\"role\": \"assistant\", \"content\": RESPONSE1}],\n",
" [{\"role\": \"user\", \"content\": PROMPT}, {\"role\": \"assistant\", \"content\": RESPONSE2}],\n",
"]\n",
"\n",
"tokenizer = AutoTokenizer.from_pretrained(\"Skywork/Skywork-Reward-Llama-3.1-8B-v0.2\")\n",
"prompts = tokenizer.apply_chat_template(CONVS, tokenize=False)\n",
"\n",
"url = \"http://localhost:30030/classify\"\n",
"data = {\"model\": \"Skywork/Skywork-Reward-Llama-3.1-8B-v0.2\", \"text\": prompts}\n",
"\n",
"responses = requests.post(url, json=data).json()\n",
"for response in responses:\n",
" print_highlight(f\"reward: {response['embedding'][0]}\")"
]
},
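{
"cell_type": "markdown",
"metadata": {},
"source": [
"The reward scores can be compared directly, for example to pick the preferred response in a pair. A minimal sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Pick the response with the higher reward score.\n",
"rewards = [response[\"embedding\"][0] for response in responses]\n",
"best = max(range(len(rewards)), key=rewards.__getitem__)\n",
"print_highlight(f\"Preferred response: RESPONSE{best + 1} (reward {rewards[best]:.3f})\")"
]
},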
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"execution": {
"iopub.execute_input": "2024-11-07T18:46:00.477283Z",
"iopub.status.busy": "2024-11-07T18:46:00.477025Z",
"iopub.status.idle": "2024-11-07T18:46:00.525758Z",
"shell.execute_reply": "2024-11-07T18:46:00.525236Z"
}
},
"outputs": [],
"source": [
"terminate_process(reward_process)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "AlphaMeemory",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}