From 44c998fcb553f5210a96f1dc033d24f15003486c Mon Sep 17 00:00:00 2001
From: Yuanhan Zhang
Date: Fri, 24 May 2024 18:38:20 +0800
Subject: [PATCH] Add the instruction link to the LLaVA-NeXT-Video at README
 (#463)

---
 README.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index a0df39622..8d2ebc601 100644
--- a/README.md
+++ b/README.md
@@ -377,6 +377,8 @@ python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port
 - `python3 -m sglang.launch_server --model-path liuhaotian/llava-v1.5-7b --tokenizer-path llava-hf/llava-1.5-7b-hf --chat-template vicuna_v1.1 --port 30000`
 - `python3 -m sglang.launch_server --model-path liuhaotian/llava-v1.6-vicuna-7b --tokenizer-path llava-hf/llava-1.5-7b-hf --chat-template vicuna_v1.1 --port 30000`
 - `python3 -m sglang.launch_server --model-path liuhaotian/llava-v1.6-34b --tokenizer-path liuhaotian/llava-v1.6-34b-tokenizer --port 3000`
+- LLaVA-NeXT-Video
+  - see [srt_example_llava_v.sh](examples/usage/llava_video/srt_example_llava_v.sh)
 - Yi-VL
   - see [srt_example_yi_vl.py](examples/quick_start/srt_example_yi_vl.py).
 - StableLM
@@ -410,4 +412,4 @@ https://github.com/sgl-project/sglang/issues/157
 }
 ```
 
-We learned from the design and reused some code of the following projects: [Guidance](https://github.com/guidance-ai/guidance), [vLLM](https://github.com/vllm-project/vllm), [LightLLM](https://github.com/ModelTC/lightllm), [FlashInfer](https://github.com/flashinfer-ai/flashinfer), [Outlines](https://github.com/outlines-dev/outlines), [LMQL](https://github.com/eth-sri/lmql).
\ No newline at end of file
+We learned from the design and reused some code of the following projects: [Guidance](https://github.com/guidance-ai/guidance), [vLLM](https://github.com/vllm-project/vllm), [LightLLM](https://github.com/ModelTC/lightllm), [FlashInfer](https://github.com/flashinfer-ai/flashinfer), [Outlines](https://github.com/outlines-dev/outlines), [LMQL](https://github.com/eth-sri/lmql).