diff --git a/README.md b/README.md
index 3e25392..29c6e4d 100644
--- a/README.md
+++ b/README.md
@@ -1,47 +1,60 @@
 ---
-license: Apache License 2.0
-
-#model-type:
-## e.g. gpt, phi, llama, chatglm, baichuan
-#- gpt
-
-#domain:
-## e.g. nlp, cv, audio, multi-modal
-#- nlp
-
-#language:
-## language code list: https://help.aliyun.com/document_detail/215387.html?spm=a2c4g.11186623.0.0.9f8d7467kni6Aa
-#- cn
-
-#metrics:
-## e.g. CIDEr, BLEU, ROUGE
-#- CIDEr
-
-#tags:
-## custom tags, including training methods such as pretrained, fine-tuned, instruction-tuned, RL-tuned, and others
-#- pretrained
-
-#tools:
-## e.g. vllm, fastchat, llamacpp, AdaSeq
-#- vllm
+base_model: prithivMLmods/Horologium-QwenC-1.5B
+language:
+- en
+library_name: transformers
+license: apache-2.0
+pipeline_tag: text-generation
+tags:
+- text-generation-inference
+- code
+- math
+- RL
+- QwenC
+- llama-cpp
+- gguf-my-repo
 ---
-### The contributor of this model has not provided a more detailed introduction. The model files and weights are available on the "Model Files" page.
-#### You can download the model with the git clone command below, or through the ModelScope SDK
-SDK download
+# prithivMLmods/Horologium-QwenC-1.5B-Q8_0-GGUF
+This model was converted to GGUF format from [`prithivMLmods/Horologium-QwenC-1.5B`](https://huggingface.co/prithivMLmods/Horologium-QwenC-1.5B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+Refer to the [original model card](https://huggingface.co/prithivMLmods/Horologium-QwenC-1.5B) for more details on the model.
+
+## Use with llama.cpp
+Install llama.cpp through brew (works on Mac and Linux):
+
 ```bash
-# Install ModelScope
-pip install modelscope
+brew install llama.cpp
+
 ```
-```python
-# SDK model download
-from modelscope import snapshot_download
-model_dir = snapshot_download('prithivMLmods/Horologium-QwenC-1.5B-Q8_0-GGUF')
-```
-Git download
-```
-# Git model download
-git clone https://www.modelscope.cn/prithivMLmods/Horologium-QwenC-1.5B-Q8_0-GGUF.git
+Invoke the llama.cpp server or the CLI.
+
+### CLI:
+```bash
+llama-cli --hf-repo prithivMLmods/Horologium-QwenC-1.5B-Q8_0-GGUF --hf-file horologium-qwenc-1.5b-q8_0.gguf -p "The meaning to life and the universe is"
 ```
-
-If you are a contributor to this model, we invite you to complete the model card promptly in accordance with the model contribution documentation.
\ No newline at end of file
+### Server:
+```bash
+llama-server --hf-repo prithivMLmods/Horologium-QwenC-1.5B-Q8_0-GGUF --hf-file horologium-qwenc-1.5b-q8_0.gguf -c 2048
+```
+
+Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
+
+Step 1: Clone llama.cpp from GitHub.
+```bash
+git clone https://github.com/ggerganov/llama.cpp
+```
+
+Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
+```bash
+cd llama.cpp && LLAMA_CURL=1 make
+```
+
+Step 3: Run inference through the main binary.
+```bash
+./llama-cli --hf-repo prithivMLmods/Horologium-QwenC-1.5B-Q8_0-GGUF --hf-file horologium-qwenc-1.5b-q8_0.gguf -p "The meaning to life and the universe is"
+```
+or
+```bash
+./llama-server --hf-repo prithivMLmods/Horologium-QwenC-1.5B-Q8_0-GGUF --hf-file horologium-qwenc-1.5b-q8_0.gguf -c 2048
+```
diff --git a/configuration.json b/configuration.json
new file mode 100644
index 0000000..bbeeda1
--- /dev/null
+++ b/configuration.json
@@ -0,0 +1 @@
+{"framework": "pytorch", "task": "text-generation", "allow_remote": true}
\ No newline at end of file
diff --git a/horologium-qwenc-1.5b-q8_0.gguf b/horologium-qwenc-1.5b-q8_0.gguf
new file mode 100644
index 0000000..9a6d185
--- /dev/null
+++ b/horologium-qwenc-1.5b-q8_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7a1e2717f0f21a0c34447e2ce6169c9dc05611731a5ce9292fad3df3e62da423
+size 135
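
One note on the final hunk: `horologium-qwenc-1.5b-q8_0.gguf` is committed as a Git LFS pointer (the 135-byte `version`/`oid`/`size` stub above), not as the actual weights, so cloning the repo directly requires git-lfs. A minimal sketch, reusing the ModelScope clone URL from the removed card:

```bash
# Fetch the repo with the real GGUF weights instead of the 135-byte LFS pointer.
git lfs install
git clone https://www.modelscope.cn/prithivMLmods/Horologium-QwenC-1.5B-Q8_0-GGUF.git
```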
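
For programmatic use beyond `llama-cli`/`llama-server`, the same GGUF file can also be loaded from Python. This is a sketch, not part of the card: it assumes the third-party `llama-cpp-python` bindings (plus `huggingface_hub`) are installed, and mirrors the prompt and context size used in the examples above.

```python
# Sketch using the third-party llama-cpp-python bindings (assumed installed via
# `pip install llama-cpp-python huggingface_hub`); not part of the original card.
from llama_cpp import Llama

# from_pretrained downloads the GGUF file from the Hugging Face Hub on first use.
llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Horologium-QwenC-1.5B-Q8_0-GGUF",
    filename="horologium-qwenc-1.5b-q8_0.gguf",
    n_ctx=2048,  # same context size as the -c 2048 server examples
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```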