Update model support in readme (#370)
@@ -378,7 +378,12 @@ python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port
 - `python3 -m sglang.launch_server --model-path liuhaotian/llava-v1.6-34b --tokenizer-path liuhaotian/llava-v1.6-34b-tokenizer --port 3000`
 - Yi-VL
 - see [srt_example_yi_vl.py](examples/quick_start/srt_example_yi_vl.py).
-- AWQ/GPTQ quantization
+- StableLM
+- Command-R
+- DBRX
+- AWQ/GPTQ/Marlin quantization
 
 Instructions for supporting a new model are [here](https://github.com/sgl-project/sglang/blob/main/docs/model_support.md).
 
 ## Benchmark And Performance