diff --git a/README.md b/README.md
index a7c50fa..8e88ef2 100644
--- a/README.md
+++ b/README.md
@@ -33,6 +33,7 @@ To demonstrate their model quality, we follow [`llama.cpp`](https://github.com/g
 |4B | 13.20 | 13.21 | 13.28 | 13.24 | 13.27 | 13.61 | 13.44 | 13.67 | 15.65 |
 |7B | 14.21 | 14.24 | 14.35 | 14.32 | 14.12 | 14.35 | 14.47 | 15.11 | 16.57 |
 |14B | 10.91 | 10.91 | 10.93 | 10.98 | 10.88 | 10.92 | 10.92 | 11.24 | 12.27 |
+|32B | 8.87 | 8.89 | 8.91 | 8.94 | 8.93 | 8.96 | 9.17 | 9.14 | 10.51 |
 |72B | 7.97 | 7.99 | 7.99 | 7.99 | 8.01 | 8.00 | 8.01 | 8.06 | 8.63 |
 
 ## Model Details
@@ -48,17 +49,14 @@ We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and
 
 ## How to use
 
-Cloning the repo may be inefficient, and thus you can manually download the GGUF file that you need or use `modelscope` (`pip install modelscope`) as shown below:
-```python
-from modelscope.hub.file_download import model_file_download
-model_dir = model_file_download(model_id='qwen/Qwen1.5-32B-Chat-GGUF',file_path='qwen1_5-32b-chat-q5_k_m.gguf',revision='master',cache_dir='/mnt/workspace/')
+Cloning the repo may be inefficient; instead, you can manually download just the GGUF file you need, or fetch it with `huggingface-cli` (`pip install huggingface_hub`) as shown below:
+```shell
+huggingface-cli download Qwen/Qwen1.5-32B-Chat-GGUF qwen1_5-32b-chat-q5_k_m.gguf --local-dir . --local-dir-use-symlinks False
 ```
-We demonstrate how to install and use `llama.cpp` to run Qwen1.5:
+We demonstrate how to use `llama.cpp` to run Qwen1.5 in interactive chat mode:
 ```shell
-git clone https://github.com/ggerganov/llama.cpp.git
-cd llama.cpp
-make -j && ./main -m /mnt/workspace/qwen/Qwen1.5-32B-Chat-GGUF/qwen1_5-32b-chat-q5_k_m.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e
+./main -m qwen1_5-32b-chat-q5_k_m.gguf -n 512 --color -i -cml -f prompts/chat-with-qwen.txt
 ```
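
On the new 32B perplexity row: the figures in that table follow `llama.cpp`'s standard evaluation, per the README's own note. Below is a minimal sketch of how a reader could reproduce such a number, assuming a built `llama.cpp` checkout (it ships a `perplexity` tool alongside `./main`) and a local copy of the WikiText-2 raw test set; the `wiki.test.raw` path is an assumption, not part of this PR:

```shell
# Fetch the quantized model, as shown in the README above.
huggingface-cli download Qwen/Qwen1.5-32B-Chat-GGUF qwen1_5-32b-chat-q5_k_m.gguf --local-dir . --local-dir-use-symlinks False

# Run llama.cpp's bundled perplexity tool. wiki.test.raw is an assumed
# local copy of the WikiText-2 raw test set; lower perplexity is better.
./perplexity -m qwen1_5-32b-chat-q5_k_m.gguf -f wiki.test.raw
```

As a sanity check, the added row is consistent with the table's trend: the 32B figures (8.87 and up) fall between the 14B (10.91 and up) and 72B (7.97 and up) rows.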