Llamacpp quants

ai-modelscope
2024-11-26 17:22:52 +08:00
parent d0529223ae
commit c16bf4ccf6
22 changed files with 200 additions and 63 deletions

49
.gitattributes vendored
View File

@@ -1,47 +1,54 @@
 *.7z filter=lfs diff=lfs merge=lfs -text
 *.arrow filter=lfs diff=lfs merge=lfs -text
 *.bin filter=lfs diff=lfs merge=lfs -text
-*.bin.* filter=lfs diff=lfs merge=lfs -text
 *.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
 *.ftz filter=lfs diff=lfs merge=lfs -text
 *.gz filter=lfs diff=lfs merge=lfs -text
 *.h5 filter=lfs diff=lfs merge=lfs -text
 *.joblib filter=lfs diff=lfs merge=lfs -text
 *.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
 *.model filter=lfs diff=lfs merge=lfs -text
 *.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
 *.onnx filter=lfs diff=lfs merge=lfs -text
 *.ot filter=lfs diff=lfs merge=lfs -text
 *.parquet filter=lfs diff=lfs merge=lfs -text
 *.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
 *.pt filter=lfs diff=lfs merge=lfs -text
 *.pth filter=lfs diff=lfs merge=lfs -text
 *.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
 saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
 *.tflite filter=lfs diff=lfs merge=lfs -text
 *.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
 *.xz filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
-*.zstandard filter=lfs diff=lfs merge=lfs -text
-*.tfevents* filter=lfs diff=lfs merge=lfs -text
-*.db* filter=lfs diff=lfs merge=lfs -text
-*.ark* filter=lfs diff=lfs merge=lfs -text
-**/*ckpt*data* filter=lfs diff=lfs merge=lfs -text
-**/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
-**/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
-*.safetensors filter=lfs diff=lfs merge=lfs -text
-*.ckpt filter=lfs diff=lfs merge=lfs -text
-*.gguf* filter=lfs diff=lfs merge=lfs -text
-*.ggml filter=lfs diff=lfs merge=lfs -text
-*.llamafile* filter=lfs diff=lfs merge=lfs -text
-*.pt2 filter=lfs diff=lfs merge=lfs -text
-*.mlmodel filter=lfs diff=lfs merge=lfs -text
-*.npy filter=lfs diff=lfs merge=lfs -text
-*.npz filter=lfs diff=lfs merge=lfs -text
-*.pickle filter=lfs diff=lfs merge=lfs -text
-*.pkl filter=lfs diff=lfs merge=lfs -text
-*.tar filter=lfs diff=lfs merge=lfs -text
-*.wasm filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Oumuamua-7b-instruct-v2-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
+Oumuamua-7b-instruct-v2-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
+Oumuamua-7b-instruct-v2-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Oumuamua-7b-instruct-v2-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
+Oumuamua-7b-instruct-v2-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Oumuamua-7b-instruct-v2-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+Oumuamua-7b-instruct-v2-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Oumuamua-7b-instruct-v2-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+Oumuamua-7b-instruct-v2-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+Oumuamua-7b-instruct-v2-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Oumuamua-7b-instruct-v2-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Oumuamua-7b-instruct-v2-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Oumuamua-7b-instruct-v2-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Oumuamua-7b-instruct-v2-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Oumuamua-7b-instruct-v2-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Oumuamua-7b-instruct-v2-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Oumuamua-7b-instruct-v2-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+Oumuamua-7b-instruct-v2-f32.gguf filter=lfs diff=lfs merge=lfs -text
+Oumuamua-7b-instruct-v2.imatrix filter=lfs diff=lfs merge=lfs -text

3
Oumuamua-7b-instruct-v2-IQ2_M.gguf
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:61afb0d76de60d1e55e99ed457d3d4bb7d5ccc96273d4b9c65332969bc16de48
size 3055763552
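
Each file section below is a Git LFS pointer rather than the model weights themselves: the three-line pointer records the LFS spec version, the sha256 digest (`oid`) of the real file, and its size in bytes. As a minimal sketch (not part of this commit), a downloaded quant can be checked against its pointer like this, using the IQ2_M digest above; swap in the filename and digest for whichever file you actually fetched:

```python
import hashlib

# Expected digest and filename taken from the LFS pointer above (IQ2_M).
EXPECTED_OID = "61afb0d76de60d1e55e99ed457d3d4bb7d5ccc96273d4b9c65332969bc16de48"
FILENAME = "Oumuamua-7b-instruct-v2-IQ2_M.gguf"

h = hashlib.sha256()
with open(FILENAME, "rb") as f:
    # Hash in 1 MiB chunks so multi-gigabyte files don't need to fit in RAM.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

assert h.hexdigest() == EXPECTED_OID, "checksum mismatch - re-download the file"
print(f"{FILENAME}: checksum OK")
```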

3
Oumuamua-7b-instruct-v2-IQ2_S.gguf
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fcf046cfaf6ca4e9b629bb3f434f4dbfea252107737f7abeef2ebaf8f10c5b6a
size 2865971296

3
Oumuamua-7b-instruct-v2-IQ2_XS.gguf
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0b1f2c67d5411b48449b600bfbe503968e261c0de990e6366ed77b1b785cad96
size 2766618720

3
Oumuamua-7b-instruct-v2-IQ3_M.gguf
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa462786026393a1da9bde8f298f183ab246bc78beb25cccc6ed60f856852cb1
size 3822534752

3
Oumuamua-7b-instruct-v2-IQ3_XS.gguf
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8610da98f28ce0f37dbb9c783180f8b42ab490c674a305fe7795f953b5c4c2af
size 3556458592

3
Oumuamua-7b-instruct-v2-IQ3_XXS.gguf
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2d780703b55cd36f87360411024b68989760bc55777d544711cd7ef95d6a96b5
size 3382394976

3
Oumuamua-7b-instruct-v2-IQ4_XS.gguf
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:64e94ec5a74b15a206c87494ce36f1d1645eb165e94215941f18590c04c7b8f3
size 4432019552

3
Oumuamua-7b-instruct-v2-Q2_K.gguf
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6d4215b9c9e072dd235e32ecfbba6d34e221a3d341fe25e139ab868073e1f2bc
size 3270197344

3
Oumuamua-7b-instruct-v2-Q3_K_L.gguf
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a2a41f0fc8dce8a25900c531739cfbc52da1df421a3601066f32738fb437828b
size 4359667808

3
Oumuamua-7b-instruct-v2-Q3_K_M.gguf
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:17978e9bcdf8ca9ff6e4b416c7a3501720299d18db474becd17e16e61ee49b5f
size 4056629344

3
Oumuamua-7b-instruct-v2-Q3_K_S.gguf
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9f895c2e33882b11da32c8a94ff7791607e1527e38cdab2f9b97e91e7c3dda87
size 3702210656

3
Oumuamua-7b-instruct-v2-Q4_K_M.gguf
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4b7eb29eb70e8b7c6f56fb2ba80b378bec3269d9e7f006a02dfba6b88428468d
size 4888674400

3
Oumuamua-7b-instruct-v2-Q4_K_S.gguf
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f85f8e39545cc5d4779448a01f350777e1bf1d083eb2ae0e1e8b074bfd2df412
size 4660609120

3
Oumuamua-7b-instruct-v2-Q5_K_M.gguf
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a6dfd2d3d9cc41c6233caa34aa0ab79da1c964a11b81c376afbcce7527ba326f
size 5635260512

3
Oumuamua-7b-instruct-v2-Q5_K_S.gguf
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:561c406f8423512eec6d34ea5ca816f6e90132de26cef632f2e9a7276058667f
size 5501567072

3
Oumuamua-7b-instruct-v2-Q6_K.gguf
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:52f9411f38e10aaf091e17c348c16138b435433a0c8a22deebf5499a0b94797d
size 6428508256

3
Oumuamua-7b-instruct-v2-Q8_0.gguf
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7eed1c5b5fc4904441d152027600da750afa6b46df677ad5e928e2845157a3d1
size 8118812768

3
Oumuamua-7b-instruct-v2-f32.gguf
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8bed186f59beb25664b1e5d849af29693f53a7011906d8f6c2565c2bb59989f4
size 29321805632

3
Oumuamua-7b-instruct-v2.imatrix
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:69fedc9d5a58c83e273c18546ed0ebb7b64f4c24f1fbdbfaa43ddaba6d1302fc
size 4988171

154
README.md
View File

@@ -1,47 +1,119 @@
 ---
-license: Apache License 2.0
-
-#model-type:
-## e.g. gpt, phi, llama, chatglm, baichuan, etc.
-#- gpt
-
-#domain:
-## e.g. nlp, cv, audio, multi-modal
-#- nlp
-
-#language:
-## language code list: https://help.aliyun.com/document_detail/215387.html?spm=a2c4g.11186623.0.0.9f8d7467kni6Aa
-#- cn
-
-#metrics:
-## e.g. CIDEr, BLEU, ROUGE, etc.
-#- CIDEr
-
-#tags:
-## custom tags, including training methods such as pretrained, fine-tuned, instruction-tuned, RL-tuned, and others
-#- pretrained
-
-#tools:
-## e.g. vllm, fastchat, llamacpp, AdaSeq, etc.
-#- vllm
+base_model:
+- nitky/Oumuamua-7b-base
+- nitky/Oumuamua-7b-instruct
+- tokyotech-llm/Swallow-MS-7b-v0.1
+- mistralai/Mistral-7B-v0.1
+- prometheus-eval/prometheus-7b-v2.0
+- cognitivecomputations/dolphin-2.8-mistral-7b-v02
+- ZhangShenao/SELM-Zephyr-7B-iter-3
+- HachiML/Mistral-7B-v0.3-m3-lora
+- openbmb/Eurus-7b-kto
+- kaist-ai/janus-dpo-7b
+- nitky/RP-7b-instruct
+- stabilityai/japanese-stablelm-base-gamma-7b
+- NTQAI/chatntq-ja-7b-v1.0
+- Weyaxi/Einstein-v6-7B
+- internistai/base-7b-v0.2
+- ZySec-AI/ZySec-7B
+library_name: transformers
+tags:
+- mergekit
+- merge
+language:
+- ja
+- en
+pipeline_tag: text-generation
+license: apache-2.0
+quantized_by: bartowski
 ---
-### The contributor of this model has not provided a more detailed model description. Model files and weights can be browsed on the "Model files" page.
-#### You can download the model with the git clone command below, or via the ModelScope SDK
-SDK download
-```bash
-# Install ModelScope
-pip install modelscope
+
+## Llamacpp imatrix Quantizations of Oumuamua-7b-instruct-v2
+
+Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3152">b3152</a> for quantization.
+
+Original model: https://huggingface.co/nitky/Oumuamua-7b-instruct-v2
+
+All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
+
+## Prompt format
+
+No chat template specified so default is used. This may be incorrect, check original model card for details.
+
 ```
-```python
-# Download the model via the SDK
-from modelscope import snapshot_download
-model_dir = snapshot_download('bartowski/Oumuamua-7b-instruct-v2-GGUF')
-```
-Git download
-```
-# Download the model via git
-git clone https://www.modelscope.cn/bartowski/Oumuamua-7b-instruct-v2-GGUF.git
+<s> [INST] <<SYS>>
+{system_prompt}
+<</SYS>>
+
+{prompt} [/INST] </s>
 ```
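
As a quick illustration (not part of the original model card), here is a minimal sketch of filling that default Mistral-style template in Python; the system and user strings are placeholders:

```python
# Hypothetical helper: render the default [INST] template shown above.
TEMPLATE = "<s> [INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{prompt} [/INST] </s>"

text = TEMPLATE.format(
    system_prompt="You are a helpful assistant.",  # placeholder
    prompt="Introduce yourself.",                  # placeholder
)
print(text)
```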
-<p style="color: lightgrey;">If you are a contributor of this model, we invite you to complete the model card promptly in accordance with the <a href="https://modelscope.cn/docs/ModelScope%E6%A8%A1%E5%9E%8B%E6%8E%A5%E5%85%A5%E6%B5%81%E7%A8%8B%E6%A6%82%E8%A7%88" style="color: lightgrey; text-decoration: underline;">model contribution documentation</a>.</p>
+
+## Download a file (not the whole branch) from below:
+
+| Filename | Quant type | File Size | Description |
+| -------- | ---------- | --------- | ----------- |
+| [Oumuamua-7b-instruct-v2-Q8_0.gguf](https://huggingface.co/bartowski/Oumuamua-7b-instruct-v2-GGUF/blob/main/Oumuamua-7b-instruct-v2-Q8_0.gguf) | Q8_0 | 8.11GB | Extremely high quality, generally unneeded but max available quant. |
+| [Oumuamua-7b-instruct-v2-Q6_K.gguf](https://huggingface.co/bartowski/Oumuamua-7b-instruct-v2-GGUF/blob/main/Oumuamua-7b-instruct-v2-Q6_K.gguf) | Q6_K | 6.42GB | Very high quality, near perfect, *recommended*. |
+| [Oumuamua-7b-instruct-v2-Q5_K_M.gguf](https://huggingface.co/bartowski/Oumuamua-7b-instruct-v2-GGUF/blob/main/Oumuamua-7b-instruct-v2-Q5_K_M.gguf) | Q5_K_M | 5.63GB | High quality, *recommended*. |
+| [Oumuamua-7b-instruct-v2-Q5_K_S.gguf](https://huggingface.co/bartowski/Oumuamua-7b-instruct-v2-GGUF/blob/main/Oumuamua-7b-instruct-v2-Q5_K_S.gguf) | Q5_K_S | 5.50GB | High quality, *recommended*. |
+| [Oumuamua-7b-instruct-v2-Q4_K_M.gguf](https://huggingface.co/bartowski/Oumuamua-7b-instruct-v2-GGUF/blob/main/Oumuamua-7b-instruct-v2-Q4_K_M.gguf) | Q4_K_M | 4.88GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
+| [Oumuamua-7b-instruct-v2-Q4_K_S.gguf](https://huggingface.co/bartowski/Oumuamua-7b-instruct-v2-GGUF/blob/main/Oumuamua-7b-instruct-v2-Q4_K_S.gguf) | Q4_K_S | 4.66GB | Slightly lower quality with more space savings, *recommended*. |
+| [Oumuamua-7b-instruct-v2-IQ4_XS.gguf](https://huggingface.co/bartowski/Oumuamua-7b-instruct-v2-GGUF/blob/main/Oumuamua-7b-instruct-v2-IQ4_XS.gguf) | IQ4_XS | 4.43GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
+| [Oumuamua-7b-instruct-v2-Q3_K_L.gguf](https://huggingface.co/bartowski/Oumuamua-7b-instruct-v2-GGUF/blob/main/Oumuamua-7b-instruct-v2-Q3_K_L.gguf) | Q3_K_L | 4.35GB | Lower quality but usable, good for low RAM availability. |
+| [Oumuamua-7b-instruct-v2-Q3_K_M.gguf](https://huggingface.co/bartowski/Oumuamua-7b-instruct-v2-GGUF/blob/main/Oumuamua-7b-instruct-v2-Q3_K_M.gguf) | Q3_K_M | 4.05GB | Even lower quality. |
+| [Oumuamua-7b-instruct-v2-IQ3_M.gguf](https://huggingface.co/bartowski/Oumuamua-7b-instruct-v2-GGUF/blob/main/Oumuamua-7b-instruct-v2-IQ3_M.gguf) | IQ3_M | 3.82GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
+| [Oumuamua-7b-instruct-v2-Q3_K_S.gguf](https://huggingface.co/bartowski/Oumuamua-7b-instruct-v2-GGUF/blob/main/Oumuamua-7b-instruct-v2-Q3_K_S.gguf) | Q3_K_S | 3.70GB | Low quality, not recommended. |
+| [Oumuamua-7b-instruct-v2-IQ3_XS.gguf](https://huggingface.co/bartowski/Oumuamua-7b-instruct-v2-GGUF/blob/main/Oumuamua-7b-instruct-v2-IQ3_XS.gguf) | IQ3_XS | 3.55GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
+| [Oumuamua-7b-instruct-v2-IQ3_XXS.gguf](https://huggingface.co/bartowski/Oumuamua-7b-instruct-v2-GGUF/blob/main/Oumuamua-7b-instruct-v2-IQ3_XXS.gguf) | IQ3_XXS | 3.38GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
+| [Oumuamua-7b-instruct-v2-Q2_K.gguf](https://huggingface.co/bartowski/Oumuamua-7b-instruct-v2-GGUF/blob/main/Oumuamua-7b-instruct-v2-Q2_K.gguf) | Q2_K | 3.27GB | Very low quality but surprisingly usable. |
+| [Oumuamua-7b-instruct-v2-IQ2_M.gguf](https://huggingface.co/bartowski/Oumuamua-7b-instruct-v2-GGUF/blob/main/Oumuamua-7b-instruct-v2-IQ2_M.gguf) | IQ2_M | 3.05GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
+| [Oumuamua-7b-instruct-v2-IQ2_S.gguf](https://huggingface.co/bartowski/Oumuamua-7b-instruct-v2-GGUF/blob/main/Oumuamua-7b-instruct-v2-IQ2_S.gguf) | IQ2_S | 2.86GB | Very low quality, uses SOTA techniques to be usable. |
+| [Oumuamua-7b-instruct-v2-IQ2_XS.gguf](https://huggingface.co/bartowski/Oumuamua-7b-instruct-v2-GGUF/blob/main/Oumuamua-7b-instruct-v2-IQ2_XS.gguf) | IQ2_XS | 2.76GB | Very low quality, uses SOTA techniques to be usable. |
+
+## Downloading using huggingface-cli
+
+First, make sure you have huggingface-cli installed:
+
+```
+pip install -U "huggingface_hub[cli]"
+```
+
+Then, you can target the specific file you want:
+
+```
+huggingface-cli download bartowski/Oumuamua-7b-instruct-v2-GGUF --include "Oumuamua-7b-instruct-v2-Q4_K_M.gguf" --local-dir ./
+```
+
+If the model is bigger than 50GB, it will have been split into multiple files. To download them all to a local folder, run:
+
+```
+huggingface-cli download bartowski/Oumuamua-7b-instruct-v2-GGUF --include "Oumuamua-7b-instruct-v2-Q8_0.gguf/*" --local-dir Oumuamua-7b-instruct-v2-Q8_0
+```
+
+You can either specify a new local-dir (Oumuamua-7b-instruct-v2-Q8_0) or download them all in place (./)
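
If you would rather script the download than shell out to the CLI, the same file can be fetched with the `huggingface_hub` Python API; a minimal sketch using the same repo and filename as the command above:

```python
from huggingface_hub import hf_hub_download

# Downloads one quant from the repo referenced in this README
# and returns the local path to the file.
path = hf_hub_download(
    repo_id="bartowski/Oumuamua-7b-instruct-v2-GGUF",
    filename="Oumuamua-7b-instruct-v2-Q4_K_M.gguf",
    local_dir=".",
)
print(path)
```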
+
+## Which file should I choose?
+
+A great write-up with charts comparing the performance of the various quant types is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
+
+The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
+
+If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
+
+If you want the absolute maximum quality, add your system RAM and your GPU's VRAM together, then grab a quant with a file size 1-2GB smaller than that total.
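
As a rough illustration of that sizing rule (not part of the original README), here is a sketch that picks the largest quant from the table above fitting a given VRAM budget; the 1.5GB headroom is an assumed midpoint of the suggested 1-2GB margin:

```python
# File sizes (GB) copied from the quant table in this README.
QUANT_SIZES_GB = {
    "Q8_0": 8.11, "Q6_K": 6.42, "Q5_K_M": 5.63, "Q5_K_S": 5.50,
    "Q4_K_M": 4.88, "Q4_K_S": 4.66, "IQ4_XS": 4.43, "Q3_K_L": 4.35,
    "Q3_K_M": 4.05, "IQ3_M": 3.82, "Q3_K_S": 3.70, "IQ3_XS": 3.55,
    "IQ3_XXS": 3.38, "Q2_K": 3.27, "IQ2_M": 3.05, "IQ2_S": 2.86,
    "IQ2_XS": 2.76,
}

def pick_quant(vram_gb: float, headroom_gb: float = 1.5) -> str:
    """Return the largest quant whose file fits under vram_gb - headroom_gb."""
    budget = vram_gb - headroom_gb
    fitting = {name: size for name, size in QUANT_SIZES_GB.items() if size <= budget}
    if not fitting:
        raise ValueError("nothing fits - consider partial GPU offload instead")
    return max(fitting, key=fitting.get)

print(pick_quant(8.0))  # -> 'Q6_K' (6.42GB fits an 8GB card with 1.5GB headroom)
```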
+
+Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
+
+If you don't want to think too much, grab one of the K-quants. These are in the format 'QX_K_X', like Q5_K_M.
+
+If you want to get more into the weeds, you can check out this extremely useful feature chart:
+
+[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
+
+But basically, if you're aiming for below Q4 and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in the format IQX_X, like IQ3_M. They are newer and offer better performance for their size.
+
+The I-quants can also be used on CPU and Apple Metal, but they will be slower than their K-quant equivalents, so speed versus quality is a tradeoff you'll have to decide on.
+
+The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
+
+Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski

1
configuration.json Normal file
View File

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}