Improve gemma and documentations (#278)
@@ -369,8 +369,13 @@ python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port
- Mistral
- Mixtral
- Qwen / Qwen 2
- Gemma
  - Add the flag `--attention-reduce-in-fp32` to avoid some precision errors.
  - `python -m sglang.launch_server --model-path google/gemma-7b-it --port 30000 --attention-reduce-in-fp32`
- LLaVA
  - `python3 -m sglang.launch_server --model-path liuhaotian/llava-v1.5-7b --tokenizer-path llava-hf/llava-1.5-7b-hf --chat-template vicuna_v1.1 --port 30000`
  - `python3 -m sglang.launch_server --model-path liuhaotian/llava-v1.6-vicuna-7b --tokenizer-path llava-hf/llava-1.5-7b-hf --chat-template vicuna_v1.1 --port 30000`
  - `python3 -m sglang.launch_server --model-path liuhaotian/llava-v1.6-34b --tokenizer-path liuhaotian/llava-v1.6-34b-tokenizer --port 30000`
- Yi-VL
  - See [srt_example_yi_vl.py](examples/quick_start/srt_example_yi_vl.py).
- AWQ/GPTQ quantization
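Once one of the servers above is running, it can be queried over plain HTTP. The sketch below builds a request for the runtime's `/generate` endpoint; the host, port, prompt, and sampling parameters are illustrative assumptions, not values taken from this commit.

```python
import json

def build_generate_request(prompt, host="localhost", port=30000,
                           max_new_tokens=64, temperature=0.7):
    """Build the URL and JSON payload for the SRT /generate endpoint.

    The payload shape (a `text` field plus a `sampling_params` dict) is
    an assumption based on sglang's runtime API; adjust for your version.
    """
    url = f"http://{host}:{port}/generate"
    payload = {
        "text": prompt,
        "sampling_params": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
        },
    }
    return url, payload

url, payload = build_generate_request("What is the capital of France?")
print(url)
print(json.dumps(payload, indent=2))

# With a live server, the request could then be sent with, e.g.:
#   import requests
#   response = requests.post(url, json=payload)
#   print(response.json())
```

This mirrors the launch commands above: whichever model you start, only `--port` (and therefore `port=` here) changes; the request format stays the same.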