Fix docs (#1890)
@@ -5,7 +5,6 @@
 "metadata": {},
 "source": [
 "# Native APIs\n",
 "\n",
 "Apart from the OpenAI compatible APIs, the SGLang Runtime also provides its native server APIs. We introduce the following APIs:\n",
 "\n",
 "- `/generate`\n",
@@ -40,7 +39,6 @@
 " terminate_process,\n",
 " print_highlight,\n",
 ")\n",
 "import subprocess, json\n",
 "\n",
 "server_process = execute_shell_command(\n",
 "\"\"\"\n",
@@ -56,8 +54,7 @@
 "metadata": {},
 "source": [
 "## Generate\n",
 "\n",
-"Used to generate completion from the model, similar to the `/v1/completions` API in OpenAI. Detailed parameters can be found in the [sampling parameters](https://sgl-project.github.io/references/sampling_params.html)."
+"Generate completions. This is similar to the `/v1/completions` API in OpenAI. Detailed parameters can be found in the [sampling parameters](https://sgl-project.github.io/references/sampling_params.html)."
 ]
 },
 {
@@ -72,7 +69,7 @@
 "data = {\"text\": \"What is the capital of France?\"}\n",
 "\n",
 "response = requests.post(url, json=data)\n",
-"print_highlight(response.text)"
+"print_highlight(response.json())"
 ]
 },
 {
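The `/generate` request above can also carry the sampling parameters from the linked docs. Below is a minimal sketch of building such a payload; the `sampling_params` field name follows the sampling-parameters documentation, while the concrete values and the port 30000 are illustrative assumptions, not taken from this diff.

```python
# Sketch: build a /generate payload with optional sampling parameters.
# The "text" field mirrors the notebook's example; temperature and
# max_new_tokens values below are illustrative assumptions.

def build_generate_payload(text, **sampling_params):
    payload = {"text": text}
    if sampling_params:
        payload["sampling_params"] = dict(sampling_params)
    return payload

payload = build_generate_payload(
    "What is the capital of France?", temperature=0.0, max_new_tokens=32
)
# With the server from the launch cell running, you would send it via:
#   requests.post("http://localhost:30000/generate", json=payload).json()
print(payload)
```

The helper only builds the request body, so it runs without a live server; the actual POST is shown in the comment.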
@@ -80,8 +77,7 @@
 "metadata": {},
 "source": [
 "## Get Server Args\n",
 "\n",
-"Used to get the serving args when the server is launched."
+"Get the arguments of a server."
 ]
 },
 {
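Fetching the launch arguments is a plain GET. A sketch, assuming the endpoint path matches this section's heading and the server from the launch cell listens on port 30000; verify the path against your SGLang version.

```python
# Sketch: derive the URL for the server-args endpoint. The path
# "/get_server_args" is inferred from the section heading above and
# is an assumption, not confirmed by this diff.

def server_args_url(base_url):
    return base_url.rstrip("/") + "/get_server_args"

url = server_args_url("http://localhost:30000")
# With a running server:
#   requests.get(url).json()  # returns the arguments the server was launched with
print(url)
```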
@@ -102,7 +98,7 @@
 "source": [
 "## Get Model Info\n",
 "\n",
-"Used to get the model info.\n",
+"Get the information of the model.\n",
 "\n",
 "- `model_path`: The path/name of the model.\n",
 "- `is_generation`: Whether the model is used as a generation model or an embedding model."
@@ -120,7 +116,7 @@
 "response_json = response.json()\n",
 "print_highlight(response_json)\n",
 "assert response_json[\"model_path\"] == \"meta-llama/Llama-3.2-1B-Instruct\"\n",
-"assert response_json[\"is_generation\"] == True\n",
+"assert response_json[\"is_generation\"] is True\n",
 "assert response_json.keys() == {\"model_path\", \"is_generation\"}"
 ]
 },
@@ -128,8 +124,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Health and Health Generate\n",
-"\n",
+"## Health Check\n",
 "- `/health`: Check the health of the server.\n",
 "- `/health_generate`: Check the health of the server by generating one token."
 ]
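The two health endpoints differ only in depth: `/health` checks that the server process responds, while `/health_generate` also generates one token, so it exercises the model itself. A sketch of building both probe URLs (port 30000 assumed from the launch cell; the live calls are shown as comments):

```python
# Sketch: build the two health-check URLs described above. With a running
# server, a GET to either returns HTTP 200 when the server is healthy.

def health_urls(base_url):
    base = base_url.rstrip("/")
    return {
        "health": base + "/health",                   # cheap liveness probe
        "health_generate": base + "/health_generate",  # one-token generation probe
    }

urls = health_urls("http://localhost:30000")
# for name, url in urls.items():
#     assert requests.get(url).status_code == 200  # with a live server
print(urls)
```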
@@ -164,7 +159,7 @@
 "source": [
 "## Flush Cache\n",
 "\n",
-"Used to flush the radix cache. It will be automatically triggered when the model weights are updated by the `/update_weights` API."
+"Flush the radix cache. It will be automatically triggered when the model weights are updated by the `/update_weights` API."
 ]
 },
 {
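Besides the automatic flush on weight updates, the cache can be flushed manually. A sketch (port 30000 assumed; the helper returns the request rather than sending it, so it runs without a server):

```python
# Sketch: a manual flush is a POST to /flush_cache with no body, mirroring
# the automatic flush performed when /update_weights replaces the weights.

def flush_cache_request(base_url):
    # Return (method, url) instead of sending, so this runs offline.
    return ("POST", base_url.rstrip("/") + "/flush_cache")

method, url = flush_cache_request("http://localhost:30000")
# With a running server:
#   requests.post(url)  # responds once the radix cache has been cleared
print(method, url)
```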
@@ -259,7 +254,7 @@
 "source": [
 "## Encode\n",
 "\n",
-"Used to encode text into embeddings. Note that this API is only available for [embedding models](./openai_embedding_api.ipynb) and will raise an error for generation models.\n",
+"Encode text into embeddings. Note that this API is only available for [embedding models](./openai_embedding_api.ipynb) and will raise an error for generation models.\n",
 "Therefore, we launch a new server to serve an embedding model.\n"
 ]
 },
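The `/encode` request follows the same shape as the other native APIs: a JSON body with a `"text"` field, sent to the embedding server. A sketch (the port 30010 for the new embedding server is an assumption made for this example; the live call is shown as a comment):

```python
# Sketch: build an /encode payload. The "text" field follows the pattern of
# the other native APIs in this notebook; the port below is an assumption.

def build_encode_payload(text):
    return {"text": text}

payload = build_encode_payload("Once upon a time")
# With an embedding-model server running:
#   requests.post("http://localhost:30010/encode", json=payload).json()
print(payload)
```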