Update metadata with huggingface_hub

ai-modelscope
2024-11-30 14:52:33 +08:00
parent 7ead34f997
commit 44c9a435d8
28 changed files with 237 additions and 63 deletions

57
.gitattributes vendored

@@ -1,47 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*.tfevents* filter=lfs diff=lfs merge=lfs -text
*.db* filter=lfs diff=lfs merge=lfs -text
*.ark* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*data* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.gguf* filter=lfs diff=lfs merge=lfs -text
*.ggml filter=lfs diff=lfs merge=lfs -text
*.llamafile* filter=lfs diff=lfs merge=lfs -text
*.pt2 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-Q6_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-Q5_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-Q4_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-Q4_0_8_8.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-Q4_0_4_8.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-Q4_0_4_4.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-Q3_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-Q2_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO-f16.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Crab-DPO.imatrix filter=lfs diff=lfs merge=lfs -text

3
Mistral-Crab-DPO-IQ2_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fa807e4046329713c9b2fc2454e87f45e4a572a6186c1a3a86a7a94b3c96ff7d
size 2504249952

3
Mistral-Crab-DPO-IQ3_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9b2cb6f3a35e73620cdd42b57bec8f82265d6d3b3892d9041801bcd9dde9284c
size 3288846944

3
Mistral-Crab-DPO-IQ3_XS.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fa3d277ce5f8765cca045e69a6cdeecafc25084dae9086783f67bd1874b9f127
size 3022770784

3
Mistral-Crab-DPO-IQ4_XS.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fbf688e585b9731ff839c0eb7ae8b4e16a64386eaafdf8955c62ac814d7cdefc
size 3911963232

3
Mistral-Crab-DPO-Q2_K.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0f03f0dc3a8ab5f3cfa43c436ca5cfc79e385a2e37c1c974975b4dea3df205d2
size 2722878048

3
Mistral-Crab-DPO-Q2_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:423da79998dab9da48197fec82d7eaf0f0b3cc7e0c5aa5043789c3af5dd364ef
size 2853950048

3
Mistral-Crab-DPO-Q3_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a33329432ad4eab115b93c9454ad395570c50ac66558313844072528e09d784a
size 3825980000

3
Mistral-Crab-DPO-Q3_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:63df1798e22616ca0b0ea64bddd68229b6fe66ca68b04dda0c930efc4a5818a3
size 3522941536

3
Mistral-Crab-DPO-Q3_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:44e4352f015061fd51bf218a02dca3dff2e20950e33372ee70217e16fec927fc
size 3168522848

3
Mistral-Crab-DPO-Q3_K_XL.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:34fa7b77cab4786c7060dae3898319dde602d4b0175450fccc5cf75a0eb2a7e4
size 3943420512

3
Mistral-Crab-DPO-Q4_0.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5cddf7ee69cc105dad7b34aa11bd70549c314f4c1d92218cdde22a78fcfb923c
size 4127969888

3
Mistral-Crab-DPO-Q4_0_4_4.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:30aa1f32a9f83a2bd36366beebcc2a80297d8adec599f8600148c7458b928d2d
size 4113289824

3
Mistral-Crab-DPO-Q4_0_4_8.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:71465ae73779476ce8803967265a849fcb1a8768411bc8ac4042a6012c8f3fb9
size 4113289824

3
Mistral-Crab-DPO-Q4_0_8_8.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ce31b102b7355cbe1bc6a6e62ac91f1af8ad0e0f9dd377e91f4be4b03d9b88da
size 4113289824

3
Mistral-Crab-DPO-Q4_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:50ee63189027f47516107eef54abbad97c14556c04c347ffdc55b2bae5244f65
size 4472427104

3
Mistral-Crab-DPO-Q4_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2c8ac29442d0e484c28a8780d145f6de28d6713b9236f50d5e5c7d909534298f
size 4372812384

3
Mistral-Crab-DPO-Q4_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d41af9cb30cd8e7ce97e31f42eff479d6e9ae8190ba9a4a1c4cfd9917a0e53aa
size 4144747104

3
Mistral-Crab-DPO-Q5_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:894157694ee5c1cfc5cca0a78b1485b9e98236268ae73dca6effde3a293b37b6
size 5219013216

3
Mistral-Crab-DPO-Q5_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:76bc94f27e3eb6cb280b48ab17c8eab6c48caa6355988ac59d3f2f568750e3cf
size 5136175712

3
Mistral-Crab-DPO-Q5_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ca108f7535664898a587acd9c149435178de4aeec3eea0614ada44f723ee7469
size 5002482272

3
Mistral-Crab-DPO-Q6_K.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:484e4c9294c47909f5298890bf2711bb69349ca494dbaab97eea46bef6923c6f
size 5947249248

3
Mistral-Crab-DPO-Q6_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dcfa6d6b29494c85d97ce9838836838f91bf4041665b994a6339f3bd050c4a36
size 6012260960

3
Mistral-Crab-DPO-Q8_0.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:26b0ee2e13387e1fb9f3673ccc04ef1387919bdf0e229038251bf702533b827a
size 7702565472

3
Mistral-Crab-DPO-f16.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b56143d7abe81fa16f1d88f0a522649d715ef95ce094fd9d8f4974700fc7b39e
size 14497337696

3
Mistral-Crab-DPO.imatrix Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8b93e2e422f9480918009a6071d5b02998980924e442bd30f3a558bc99841680
size 4988170

167
README.md

@@ -1,47 +1,132 @@
---
license: Apache License 2.0
#model-type:
## e.g. gpt, phi, llama, chatglm, baichuan, etc.
#- gpt
#domain:
## e.g. nlp, cv, audio, multi-modal
#- nlp
#language:
## list of language codes: https://help.aliyun.com/document_detail/215387.html?spm=a2c4g.11186623.0.0.9f8d7467kni6Aa
#- cn
#metrics:
## e.g. CIDEr, BLEU, ROUGE, etc.
#- CIDEr
#tags:
## any custom tags, including training methods such as pretrained, fine-tuned, instruction-tuned, RL-tuned, and others
#- pretrained
#tools:
## e.g. vllm, fastchat, llamacpp, AdaSeq, etc.
#- vllm
base_model: THU-KEG/Mistral-Crab-DPO
language:
- en
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- alignment-handbook
- generated_from_trainer
quantized_by: bartowski
---
### The contributor of this model has not provided a more detailed model description. The model files and weights can be found on the "Model Files" page.
#### You can download the model with the git clone command below or via the ModelScope SDK
SDK download
```bash
# Install ModelScope
pip install modelscope
```
```python
# Download the model via the SDK
from modelscope import snapshot_download
model_dir = snapshot_download('bartowski/Mistral-Crab-DPO-GGUF')
```
Git download
```
# Download the model via Git
git clone https://www.modelscope.cn/bartowski/Mistral-Crab-DPO-GGUF.git
```
<p style="color: lightgrey;">If you are a contributor to this model, we invite you to complete the model card promptly, following the <a href="https://modelscope.cn/docs/ModelScope%E6%A8%A1%E5%9E%8B%E6%8E%A5%E5%85%A5%E6%B5%81%E7%A8%8B%E6%A6%82%E8%A7%88" style="color: lightgrey; text-decoration: underline;">model contribution documentation</a>.</p>
## Llamacpp imatrix Quantizations of Mistral-Crab-DPO
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4014">b4014</a> for quantization.
Original model: https://huggingface.co/THU-KEG/Mistral-Crab-DPO
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
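If your inference tool expects a fully formatted string rather than applying a chat template for you, you can fill the template in yourself. A minimal Python sketch (the system/user strings are just placeholder examples):
```python
# Minimal sketch: assemble a prompt in the format shown above.
system_prompt = "You are a helpful assistant."  # placeholder
prompt = "Summarize what an imatrix quantization is."  # placeholder

full_prompt = (
    f"<|system|>\n{system_prompt}</s>\n"
    f"<|user|>\n{prompt}</s>\n"
    f"<|assistant|>\n"
)
print(full_prompt)
```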
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Mistral-Crab-DPO-f16.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-f16.gguf) | f16 | 14.50GB | false | Full F16 weights. |
| [Mistral-Crab-DPO-Q8_0.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q8_0.gguf) | Q8_0 | 7.70GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Mistral-Crab-DPO-Q6_K_L.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q6_K_L.gguf) | Q6_K_L | 6.01GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Mistral-Crab-DPO-Q6_K.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q6_K.gguf) | Q6_K | 5.95GB | false | Very high quality, near perfect, *recommended*. |
| [Mistral-Crab-DPO-Q5_K_L.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q5_K_L.gguf) | Q5_K_L | 5.22GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Mistral-Crab-DPO-Q5_K_M.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q5_K_M.gguf) | Q5_K_M | 5.14GB | false | High quality, *recommended*. |
| [Mistral-Crab-DPO-Q5_K_S.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q5_K_S.gguf) | Q5_K_S | 5.00GB | false | High quality, *recommended*. |
| [Mistral-Crab-DPO-Q4_K_L.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q4_K_L.gguf) | Q4_K_L | 4.47GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Mistral-Crab-DPO-Q4_K_M.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q4_K_M.gguf) | Q4_K_M | 4.37GB | false | Good quality, default size for most use cases, *recommended*. |
| [Mistral-Crab-DPO-Q4_K_S.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q4_K_S.gguf) | Q4_K_S | 4.14GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Mistral-Crab-DPO-Q4_0.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q4_0.gguf) | Q4_0 | 4.13GB | false | Legacy format, generally not worth using over similarly sized formats. |
| [Mistral-Crab-DPO-Q4_0_8_8.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q4_0_8_8.gguf) | Q4_0_8_8 | 4.11GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). *Don't use on Mac or Windows*. |
| [Mistral-Crab-DPO-Q4_0_4_8.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q4_0_4_8.gguf) | Q4_0_4_8 | 4.11GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). *Don't use on Mac or Windows*. |
| [Mistral-Crab-DPO-Q4_0_4_4.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q4_0_4_4.gguf) | Q4_0_4_4 | 4.11GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. *Don't use on Mac or Windows*. |
| [Mistral-Crab-DPO-Q3_K_XL.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q3_K_XL.gguf) | Q3_K_XL | 3.94GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Mistral-Crab-DPO-IQ4_XS.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-IQ4_XS.gguf) | IQ4_XS | 3.91GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Mistral-Crab-DPO-Q3_K_L.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q3_K_L.gguf) | Q3_K_L | 3.83GB | false | Lower quality but usable, good for low RAM availability. |
| [Mistral-Crab-DPO-Q3_K_M.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q3_K_M.gguf) | Q3_K_M | 3.52GB | false | Low quality. |
| [Mistral-Crab-DPO-IQ3_M.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-IQ3_M.gguf) | IQ3_M | 3.29GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Mistral-Crab-DPO-Q3_K_S.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q3_K_S.gguf) | Q3_K_S | 3.17GB | false | Low quality, not recommended. |
| [Mistral-Crab-DPO-IQ3_XS.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-IQ3_XS.gguf) | IQ3_XS | 3.02GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Mistral-Crab-DPO-Q2_K_L.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q2_K_L.gguf) | Q2_K_L | 2.85GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Mistral-Crab-DPO-Q2_K.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q2_K.gguf) | Q2_K | 2.72GB | false | Very low quality but surprisingly usable. |
| [Mistral-Crab-DPO-IQ2_M.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-IQ2_M.gguf) | IQ2_M | 2.50GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
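For a quick local test outside LM Studio, a downloaded file can be loaded with, for example, the llama-cpp-python bindings. A sketch, not part of this model card's instructions; it assumes `pip install llama-cpp-python` and the Q4_K_M file from the table:
```python
from llama_cpp import Llama

# Load one of the quants from the table above; n_gpu_layers=-1 offloads
# as many layers as possible to the GPU when built with GPU support.
llm = Llama(model_path="Mistral-Crab-DPO-Q4_K_M.gguf", n_gpu_layers=-1)

# Use the prompt format shown earlier in this card.
out = llm(
    "<|system|>\nYou are a helpful assistant.</s>\n"
    "<|user|>\nHello!</s>\n"
    "<|assistant|>\n",
    max_tokens=64,
    stop=["</s>"],
)
print(out["choices"][0]["text"])
```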
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method with the embedding and output weights quantized to Q8_0 instead of their usual default.
Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.
Thanks!
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Mistral-Crab-DPO-GGUF --include "Mistral-Crab-DPO-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Mistral-Crab-DPO-GGUF --include "Mistral-Crab-DPO-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (Mistral-Crab-DPO-Q8_0) or download them all in place (./)
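The same downloads also work from Python via the `huggingface_hub` library that the CLI wraps; a minimal sketch (the filename is one of the quants listed above):
```python
from huggingface_hub import hf_hub_download

# Fetch a single quant file from the repo into the current directory.
path = hf_hub_download(
    repo_id="bartowski/Mistral-Crab-DPO-GGUF",
    filename="Mistral-Crab-DPO-Q4_K_M.gguf",
    local_dir=".",
)
print(path)

# For quants split into multiple parts, huggingface_hub.snapshot_download with
# allow_patterns=["Mistral-Crab-DPO-Q8_0/*"] fetches the whole folder instead.
```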
## Q4_0_X_X
These are *NOT* for Metal (Apple) offloading, only ARM chips.
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
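On an ARM Linux machine you can also check the relevant CPU flags locally; a rough sketch (reads /proc/cpuinfo, so Linux only):
```python
# Rough check for the ARM features the Q4_0_X_X variants rely on:
# 'i8mm' (Q4_0_4_8) and 'sve' (Q4_0_8_8) appear as flags in /proc/cpuinfo.
features: set[str] = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.lower().startswith("features"):
            features.update(line.split(":", 1)[1].split())

print("i8mm:", "i8mm" in features)
print("sve:", "sve" in features)
```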
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
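As a back-of-the-envelope aid, the sizing rule above is easy to script; a hypothetical helper (not from any library):
```python
def fits_in_vram(file_size_gb: float, vram_gb: float, headroom_gb: float = 1.5) -> bool:
    """Rule of thumb from above: leave 1-2 GB of headroom below total VRAM."""
    return file_size_gb <= vram_gb - headroom_gb

# Example against the table above: an 8 GB GPU comfortably fits
# Q6_K (5.95 GB) but not f16 (14.50 GB).
print(fits_in_vram(5.95, 8.0))   # True
print(fits_in_vram(14.50, 8.0))  # False
```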
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset
Thank you ZeroWw for the inspiration to experiment with embed/output
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski

1
configuration.json Normal file

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}