diff --git a/README.md b/README.md
index 2cc80e6..c6d9d97 100644
--- a/README.md
+++ b/README.md
@@ -40,14 +40,14 @@ Check out our [llama.cpp documentation](https://qwen.readthedocs.io/en/latest/ru
 We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and install it following the official guide. We follow the latest version of llama.cpp.
 In the following demonstration, we assume that you are running commands under the repository `llama.cpp`.
 
-Since cloning the entire repo may be inefficient, you can manually download the GGUF file that you need or use `modelscope`:
+Since cloning the entire repo may be inefficient, you can manually download the GGUF file that you need or use `huggingface-cli`:
 1. Install
    ```shell
-   pip install -U modelscope
+   pip install -U huggingface_hub
    ```
 2. Download:
    ```shell
-   modelscope download --model=qwen/Qwen2.5-1.5B-Instruct-GGUF --local_dir . qwen2.5-1.5b-instruct-q5_k_m.gguf
+   huggingface-cli download Qwen/Qwen2.5-1.5B-Instruct-GGUF qwen2.5-1.5b-instruct-q5_k_m.gguf --local-dir . --local-dir-use-symlinks False
    ```
 
 For users, to achieve chatbot-like experience, it is recommended to commence in the conversation mode: