{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# OpenAI APIs - Embedding\n",
"\n",
"SGLang provides OpenAI-compatible APIs to enable a smooth transition from OpenAI services to self-hosted local models.\n",
"A complete reference for the API is available in the [OpenAI API Reference](https://platform.openai.com/docs/guides/embeddings).\n",
"\n",
"This tutorial covers the embedding APIs for embedding models. For a list of supported models, see the [corresponding overview page](../supported_models/embedding_models.md).\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Launch a Server\n",
"\n",
"Launch the server in your terminal and wait for it to initialize. Remember to add `--is-embedding` to the command."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sglang.test.doc_patch import launch_server_cmd\n",
"from sglang.utils import wait_for_server, print_highlight, terminate_process\n",
"\n",
"embedding_process, port = launch_server_cmd(\n",
"    \"\"\"\n",
"python3 -m sglang.launch_server --model-path Alibaba-NLP/gte-Qwen2-1.5B-instruct \\\n",
"    --host 0.0.0.0 --is-embedding --log-level warning\n",
"\"\"\"\n",
")\n",
"\n",
"wait_for_server(f\"http://localhost:{port}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using cURL"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import subprocess, json\n",
"\n",
"text = \"Once upon a time\"\n",
"\n",
"curl_text = f\"\"\"curl -s http://localhost:{port}/v1/embeddings \\\n",
"  -H \"Content-Type: application/json\" \\\n",
"  -d '{{\"model\": \"Alibaba-NLP/gte-Qwen2-1.5B-instruct\", \"input\": \"{text}\"}}'\"\"\"\n",
"\n",
"result = subprocess.check_output(curl_text, shell=True)\n",
"\n",
"print(result)\n",
"\n",
"text_embedding = json.loads(result)[\"data\"][0][\"embedding\"]\n",
"\n",
"print_highlight(f\"Text embedding (first 10): {text_embedding[:10]}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using Python Requests"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"outputs": [],
"source": [
"import requests\n",
"\n",
"text = \"Once upon a time\"\n",
"\n",
"response = requests.post(\n",
"    f\"http://localhost:{port}/v1/embeddings\",\n",
"    json={\"model\": \"Alibaba-NLP/gte-Qwen2-1.5B-instruct\", \"input\": text},\n",
")\n",
"\n",
"text_embedding = response.json()[\"data\"][0][\"embedding\"]\n",
"\n",
"print_highlight(f\"Text embedding (first 10): {text_embedding[:10]}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using OpenAI Python Client"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import openai\n",
"\n",
"client = openai.Client(base_url=f\"http://127.0.0.1:{port}/v1\", api_key=\"None\")\n",
"\n",
"# Text embedding example\n",
"response = client.embeddings.create(\n",
"    model=\"Alibaba-NLP/gte-Qwen2-1.5B-instruct\",\n",
"    input=text,\n",
")\n",
"\n",
"embedding = response.data[0].embedding[:10]\n",
"print_highlight(f\"Text embedding (first 10): {embedding}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using Input IDs\n",
"\n",
"SGLang also accepts token IDs (`input_ids`) as input when computing embeddings."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"import subprocess\n",
"import os\n",
"from transformers import AutoTokenizer\n",
"\n",
"os.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\"\n",
"\n",
"tokenizer = AutoTokenizer.from_pretrained(\"Alibaba-NLP/gte-Qwen2-1.5B-instruct\")\n",
"input_ids = tokenizer.encode(text)\n",
"\n",
"curl_ids = f\"\"\"curl -s http://localhost:{port}/v1/embeddings \\\n",
"  -H \"Content-Type: application/json\" \\\n",
"  -d '{{\"model\": \"Alibaba-NLP/gte-Qwen2-1.5B-instruct\", \"input\": {json.dumps(input_ids)}}}'\"\"\"\n",
"\n",
"input_ids_embedding = json.loads(subprocess.check_output(curl_ids, shell=True))[\"data\"][\n",
" 0\n",
"][\"embedding\"]\n",
"\n",
"print_highlight(f\"Input IDs embedding (first 10): {input_ids_embedding[:10]}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"terminate_process(embedding_process)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Multi-Modal Embedding Model\n",
"Please refer to [Multi-Modal Embedding Model](../supported_models/embedding_models.md)."
]
}
],
"metadata": {
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}