{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# OpenAI APIs - Vision\n",
    "\n",
    "SGLang provides OpenAI-compatible APIs to enable a smooth transition from OpenAI services to self-hosted local models.\n",
    "A complete reference for the API is available in the [OpenAI API Reference](https://platform.openai.com/docs/guides/vision).\n",
    "This tutorial covers the vision APIs for vision language models.\n",
    "\n",
    "SGLang supports vision language models such as Llama 3.2, LLaVA-OneVision, and Qwen2-VL:\n",
    "\n",
    "- [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct)\n",
    "- [lmms-lab/llava-onevision-qwen2-72b-ov-chat](https://huggingface.co/lmms-lab/llava-onevision-qwen2-72b-ov-chat)\n",
    "- [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Launch a Server\n",
    "\n",
    "This code block is equivalent to executing\n",
    "\n",
    "```bash\n",
    "python3 -m sglang.launch_server --model-path meta-llama/Llama-3.2-11B-Vision-Instruct \\\n",
    "  --port 30000 --chat-template llama_3_vision\n",
    "```\n",
    "\n",
    "in your terminal and waiting for the server to be ready.\n",
    "\n",
    "Remember to add `--chat-template llama_3_vision` to specify the vision chat template; otherwise, the server only supports text.\n",
    "We need to specify `--chat-template` for vision language models because the chat template provided by the Hugging Face tokenizer only supports text."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-11-07T18:43:47.311708Z",
     "iopub.status.busy": "2024-11-07T18:43:47.311517Z",
     "iopub.status.idle": "2024-11-07T18:44:18.512576Z",
     "shell.execute_reply": "2024-11-07T18:44:18.511909Z"
    }
   },
   "outputs": [],
   "source": [
    "from sglang.utils import (\n",
    "    execute_shell_command,\n",
    "    wait_for_server,\n",
    "    terminate_process,\n",
    "    print_highlight,\n",
    ")\n",
    "\n",
    "# Launch the vision server in the background and block until it is ready.\n",
    "embedding_process = execute_shell_command(\n",
    "    \"\"\"\n",
    "python3 -m sglang.launch_server --model-path meta-llama/Llama-3.2-11B-Vision-Instruct \\\n",
    "  --port 30000 --chat-template llama_3_vision\n",
    "\"\"\"\n",
    ")\n",
    "\n",
    "wait_for_server(\"http://localhost:30000\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Using cURL\n",
    "\n",
    "Once the server is up, you can send test requests using `curl` or Python `requests`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-11-07T18:44:18.515678Z",
     "iopub.status.busy": "2024-11-07T18:44:18.515314Z",
     "iopub.status.idle": "2024-11-07T18:44:22.880793Z",
     "shell.execute_reply": "2024-11-07T18:44:22.880303Z"
    }
   },
   "outputs": [],
   "source": [
    "import subprocess\n",
    "\n",
    "# Send a chat completion request containing an image URL.\n",
    "curl_command = \"\"\"\n",
    "curl -s http://localhost:30000/v1/chat/completions \\\n",
    "  -H \"Content-Type: application/json\" \\\n",
    "  -d '{\n",
    "    \"model\": \"meta-llama/Llama-3.2-11B-Vision-Instruct\",\n",
    "    \"messages\": [\n",
    "      {\n",
    "        \"role\": \"user\",\n",
    "        \"content\": [\n",
    "          {\n",
    "            \"type\": \"text\",\n",
    "            \"text\": \"What is in this image?\"\n",
    "          },\n",
    "          {\n",
    "            \"type\": \"image_url\",\n",
    "            \"image_url\": {\n",
    "              \"url\": \"https://github.com/sgl-project/sglang/blob/main/test/lang/example_image.png?raw=true\"\n",
    "            }\n",
    "          }\n",
    "        ]\n",
    "      }\n",
    "    ],\n",
    "    \"max_tokens\": 300\n",
    "  }'\n",
    "\"\"\"\n",
    "\n",
    "response = subprocess.check_output(curl_command, shell=True).decode()\n",
    "print_highlight(response)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Using Python Requests"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-11-07T18:44:22.883309Z",
     "iopub.status.busy": "2024-11-07T18:44:22.883160Z",
     "iopub.status.idle": "2024-11-07T18:44:27.048810Z",
     "shell.execute_reply": "2024-11-07T18:44:27.048074Z"
    }
   },
   "outputs": [],
   "source": [
    "import requests\n",
    "\n",
    "url = \"http://localhost:30000/v1/chat/completions\"\n",
    "\n",
    "data = {\n",
    "    \"model\": \"meta-llama/Llama-3.2-11B-Vision-Instruct\",\n",
    "    \"messages\": [\n",
    "        {\n",
    "            \"role\": \"user\",\n",
    "            \"content\": [\n",
    "                {\"type\": \"text\", \"text\": \"What is in this image?\"},\n",
    "                {\n",
    "                    \"type\": \"image_url\",\n",
    "                    \"image_url\": {\n",
    "                        \"url\": \"https://github.com/sgl-project/sglang/blob/main/test/lang/example_image.png?raw=true\"\n",
    "                    },\n",
    "                },\n",
    "            ],\n",
    "        }\n",
    "    ],\n",
    "    \"max_tokens\": 300,\n",
    "}\n",
    "\n",
    "response = requests.post(url, json=data)\n",
    "print_highlight(response.text)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Using the OpenAI Python Client"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-11-07T18:44:27.051312Z",
     "iopub.status.busy": "2024-11-07T18:44:27.051190Z",
     "iopub.status.idle": "2024-11-07T18:44:32.358097Z",
     "shell.execute_reply": "2024-11-07T18:44:32.357628Z"
    }
   },
   "outputs": [],
   "source": [
    "from openai import OpenAI\n",
    "\n",
    "# Point the OpenAI client at the local SGLang server; no real API key is needed.\n",
    "client = OpenAI(base_url=\"http://localhost:30000/v1\", api_key=\"None\")\n",
    "\n",
    "response = client.chat.completions.create(\n",
    "    model=\"meta-llama/Llama-3.2-11B-Vision-Instruct\",\n",
    "    messages=[\n",
    "        {\n",
    "            \"role\": \"user\",\n",
    "            \"content\": [\n",
    "                {\n",
    "                    \"type\": \"text\",\n",
    "                    \"text\": \"What is in this image?\",\n",
    "                },\n",
    "                {\n",
    "                    \"type\": \"image_url\",\n",
    "                    \"image_url\": {\n",
    "                        \"url\": \"https://github.com/sgl-project/sglang/blob/main/test/lang/example_image.png?raw=true\"\n",
    "                    },\n",
    "                },\n",
    "            ],\n",
    "        }\n",
    "    ],\n",
    "    max_tokens=300,\n",
    ")\n",
    "\n",
    "print_highlight(response.choices[0].message.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Multiple-Image Inputs\n",
    "\n",
    "The server also supports multiple images and interleaved text and images, as long as the model supports it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-11-07T18:44:32.359532Z",
     "iopub.status.busy": "2024-11-07T18:44:32.359413Z",
     "iopub.status.idle": "2024-11-07T18:44:36.164664Z",
     "shell.execute_reply": "2024-11-07T18:44:36.164005Z"
    }
   },
   "outputs": [],
   "source": [
    "from openai import OpenAI\n",
    "\n",
    "client = OpenAI(base_url=\"http://localhost:30000/v1\", api_key=\"None\")\n",
    "\n",
    "response = client.chat.completions.create(\n",
    "    model=\"meta-llama/Llama-3.2-11B-Vision-Instruct\",\n",
    "    messages=[\n",
    "        {\n",
    "            \"role\": \"user\",\n",
    "            \"content\": [\n",
    "                {\n",
    "                    \"type\": \"image_url\",\n",
    "                    \"image_url\": {\n",
    "                        \"url\": \"https://github.com/sgl-project/sglang/blob/main/test/lang/example_image.png?raw=true\"\n",
    "                    },\n",
    "                },\n",
    "                {\n",
    "                    \"type\": \"image_url\",\n",
    "                    \"image_url\": {\n",
    "                        \"url\": \"https://raw.githubusercontent.com/sgl-project/sglang/main/assets/logo.png\"\n",
    "                    },\n",
    "                },\n",
    "                {\n",
    "                    \"type\": \"text\",\n",
    "                    \"text\": \"I have two very different images. They are not related at all. \"\n",
    "                    \"Please describe the first image in one sentence, and then describe the second image in another sentence.\",\n",
    "                },\n",
    "            ],\n",
    "        }\n",
    "    ],\n",
    "    temperature=0,\n",
    ")\n",
    "\n",
    "print_highlight(response.choices[0].message.content)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-11-07T18:44:36.167123Z",
     "iopub.status.busy": "2024-11-07T18:44:36.166535Z",
     "iopub.status.idle": "2024-11-07T18:44:37.743761Z",
     "shell.execute_reply": "2024-11-07T18:44:37.742510Z"
    }
   },
   "outputs": [],
   "source": [
    "terminate_process(embedding_process)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Chat Template\n",
    "\n",
    "As mentioned before, if you do not specify a vision model's `--chat-template`, the server uses Hugging Face's default template, which only supports text.\n",
    "\n",
    "Popular vision models and their chat templates:\n",
    "\n",
    "- [meta-llama/Llama-3.2-Vision](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) uses `llama_3_vision`.\n",
    "- [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) uses `qwen2-vl`.\n",
    "- [LLaVA-OneVision](https://huggingface.co/lmms-lab/llava-onevision-qwen2-7b-ov) uses `chatml-llava`.\n",
    "- [LLaVA-NeXT](https://huggingface.co/collections/lmms-lab/llava-next-6623288e2d61edba3ddbf5ff) uses `chatml-llava`.\n",
    "- [Llama3-LLaVA-NeXT](https://huggingface.co/lmms-lab/llama3-llava-next-8b) uses `llava_llama_3`.\n",
    "- [LLaVA-v1.5 / 1.6](https://huggingface.co/liuhaotian/llava-v1.6-34b) uses `vicuna_v1.1`."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}