Initialize project; model provided by the ModelHub XC community.
Model: bartowski/Mawdistical-S1_Infracelestial-7B-GGUF (Source: Original Platform)
.gitattributes (vendored, new file, 60 lines)
@@ -0,0 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-Q6_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-Q5_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-Q4_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-Q3_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-Q2_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-bf16.gguf filter=lfs diff=lfs merge=lfs -text
Mawdistical-S1_Infracelestial-7B-imatrix.gguf filter=lfs diff=lfs merge=lfs -text

Mawdistical-S1_Infracelestial-7B-IQ2_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6bf970ca61fa03c3e9603ae541b1614c0b17854e27097fc8d61c2dd534c520e5
size 3168859456

Mawdistical-S1_Infracelestial-7B-IQ3_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:625895f4de9ad79669775c9f1df04db5880702ac60dddae81f210565a9f9c6b2
size 3786900800

Mawdistical-S1_Infracelestial-7B-IQ3_XS.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9767c76913f17af745bd21816a5485eca2be0d3207c191b47ba058aa9e0a5cb5
size 3549726016

Mawdistical-S1_Infracelestial-7B-IQ3_XXS.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1156367dcbd36b02586e4e0caefa022435cbd3c6f1625b6b2d63a29e5a866cb1
size 3331847488

Mawdistical-S1_Infracelestial-7B-IQ4_NL.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c495d255535c74ef2050a76048ca7e2bb869775133b0095af41f18385bdc95fd
size 4572419392

Mawdistical-S1_Infracelestial-7B-IQ4_XS.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d27c35b30c3d8e093345c9efddffd6dc1fd3b98877e4a76f63aad53472905789
size 4367799616

Mawdistical-S1_Infracelestial-7B-Q2_K.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a38b31cec0f1c26c15cc53728b0b693dd1b6b298efdf1da499d724b97813a22
size 3225040192

Mawdistical-S1_Infracelestial-7B-Q2_K_L.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ee19ce124ff29335b7a1ffdd7212456e29c89ff0204aa62e7f1153dba1343b7a
size 3831760192

Mawdistical-S1_Infracelestial-7B-Q3_K_L.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a3cdca88b2a86c22e4843c72e12a5584040f2b2cfb5a1262ddfc98c5a38e7694
size 4181427520

Mawdistical-S1_Infracelestial-7B-Q3_K_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:09236beac44d0134bd41763e18144251112604f34b29c7a46f93c7de57c41353
size 3980363072

Mawdistical-S1_Infracelestial-7B-Q3_K_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bef476fe344d270ae38258d5e80a5a13fa49e062326c954ee0a0f53dec7fcaa6
size 3643802944

Mawdistical-S1_Infracelestial-7B-Q3_K_XL.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ad0612f1025a8e00db25f94c2f90ba8f7cafbe2c7b27a2abe7dc8168b149f474
size 4725048640

Mawdistical-S1_Infracelestial-7B-Q4_0.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c02eeefb40a31439128483a4a49c431efcb29b1c5844807821c832145dee46f1
size 4544763200

Mawdistical-S1_Infracelestial-7B-Q4_1.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c171df2d80ad9d13a0393dadedaab6ee3a1c97a6bc07e2768201960155dd0e21
size 4952167744

Mawdistical-S1_Infracelestial-7B-Q4_K_L.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5fe60421b9ffe597541ac3adbc4140caba818def51357a441d1e700a34c3bf2d
size 5317674304

Mawdistical-S1_Infracelestial-7B-Q4_K_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2cc112427d3d799e757339399b441422bb0fe5d7d9a4ae801f921143d6eebb6c
size 4856567104

Mawdistical-S1_Infracelestial-7B-Q4_K_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0ff0bb4446f625ff469c268704293fe5c967bc556b99335e866e7189c51afba3
size 4565472576

Mawdistical-S1_Infracelestial-7B-Q5_K_L.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:84e71a2d86bfdfc2f0570319e684d3c389618fd6ace4e6d0be52ff120f3d3995
size 5972378944

Mawdistical-S1_Infracelestial-7B-Q5_K_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:772fec3edfe44e6f1c749cbec3f7fdcafe3a94b436d6d767f0da272a518c1564
size 5588931904

Mawdistical-S1_Infracelestial-7B-Q5_K_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f68d9111dff7c37a21ca331579f1cccf5428837a876a2228db1620bd1ebfb4cf
size 5370844480

Mawdistical-S1_Infracelestial-7B-Q6_K.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:14333b283e1cd12232d6243eeead17193fde2650852625537dae70925aacc7c2
size 6534800704

Mawdistical-S1_Infracelestial-7B-Q6_K_L.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0741c02c9ad3bcbc955ed4a14eeaeb17898f20ae7b404e72403605db6457f20b
size 6835733824

Mawdistical-S1_Infracelestial-7B-Q8_0.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7633fd88076128315aeda2bf816a58da025f9689e9691cb97b5ec6f3a8287f74
size 8106509632

Mawdistical-S1_Infracelestial-7B-bf16.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:19f495cd207da82bba2042b33bb4bf3681378cad14da9daf0301add2f35be767
size 15252227072

Mawdistical-S1_Infracelestial-7B-imatrix.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9688016f634fa7b4a4c30d23ba90673cb2fb7d9f8621ae277880cd1b2f439ef4
size 5162880

README.md (new file, 193 lines)
@@ -0,0 +1,193 @@
---
quantized_by: bartowski
pipeline_tag: text-generation
tags:
- nsfw
- explicit
- roleplay
- mixed-AI
- furry
- anthro
- chat
- manipulation
- sfw
license: other
language:
- en
base_model_relation: quantized
inference: false
thumbnail: https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/jk4OFD0hJma2GT9vTqlRR.jpeg
base_model: Mawdistical-S1/Infracelestial-7B
---

## Llamacpp imatrix Quantizations of Infracelestial-7B by Mawdistical-S1

Using <a href="https://github.com/ggml-org/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b8779">b8779</a> for quantization.

Original model: https://huggingface.co/Mawdistical-S1/Infracelestial-7B

All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/82ae9b520227f57d79ba04add13d0d0d)

Run them in your choice of tools:

- [llama.cpp](https://github.com/ggml-org/llama.cpp)
- [ramalama](https://github.com/containers/ramalama)
- [LM Studio](https://lmstudio.ai/)
- [koboldcpp](https://github.com/LostRuins/koboldcpp)
- [Jan AI](https://www.jan.ai/)
- [Text Generation Web UI](https://github.com/oobabooga/text-generation-webui)
- [LoLLMs](https://github.com/ParisNeo/lollms)

Note: if it's a newly supported model, you may need to wait for an update from the developers of these tools.

## Prompt format

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
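
For a quick smoke test after downloading, you can run the model with llama.cpp's `llama-cli`. This is a minimal sketch, assuming a recent build where `-cnv` enables conversation mode (which applies the chat template above automatically) and `-p` supplies the system prompt; swap in whichever quant you downloaded:

```
llama-cli -m Mawdistical-S1_Infracelestial-7B-Q4_K_M.gguf -cnv -p "You are a helpful assistant."
```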

## Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Infracelestial-7B-bf16.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-bf16.gguf) | bf16 | 15.25GB | false | Full BF16 weights. |
| [Infracelestial-7B-Q8_0.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-Q8_0.gguf) | Q8_0 | 8.11GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Infracelestial-7B-Q6_K_L.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-Q6_K_L.gguf) | Q6_K_L | 6.84GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Infracelestial-7B-Q6_K.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-Q6_K.gguf) | Q6_K | 6.53GB | false | Very high quality, near perfect, *recommended*. |
| [Infracelestial-7B-Q5_K_L.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-Q5_K_L.gguf) | Q5_K_L | 5.97GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Infracelestial-7B-Q5_K_M.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-Q5_K_M.gguf) | Q5_K_M | 5.59GB | false | High quality, *recommended*. |
| [Infracelestial-7B-Q5_K_S.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-Q5_K_S.gguf) | Q5_K_S | 5.37GB | false | High quality, *recommended*. |
| [Infracelestial-7B-Q4_K_L.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-Q4_K_L.gguf) | Q4_K_L | 5.32GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Infracelestial-7B-Q4_1.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-Q4_1.gguf) | Q4_1 | 4.95GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [Infracelestial-7B-Q4_K_M.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-Q4_K_M.gguf) | Q4_K_M | 4.86GB | false | Good quality, default size for most use cases, *recommended*. |
| [Infracelestial-7B-Q3_K_XL.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-Q3_K_XL.gguf) | Q3_K_XL | 4.73GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Infracelestial-7B-Q4_K_S.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-Q4_K_S.gguf) | Q4_K_S | 4.57GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Infracelestial-7B-IQ4_NL.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-IQ4_NL.gguf) | IQ4_NL | 4.57GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [Infracelestial-7B-Q4_0.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-Q4_0.gguf) | Q4_0 | 4.54GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [Infracelestial-7B-IQ4_XS.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-IQ4_XS.gguf) | IQ4_XS | 4.37GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Infracelestial-7B-Q3_K_L.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-Q3_K_L.gguf) | Q3_K_L | 4.18GB | false | Lower quality but usable, good for low RAM availability. |
| [Infracelestial-7B-Q3_K_M.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-Q3_K_M.gguf) | Q3_K_M | 3.98GB | false | Low quality. |
| [Infracelestial-7B-Q2_K_L.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-Q2_K_L.gguf) | Q2_K_L | 3.83GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Infracelestial-7B-IQ3_M.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-IQ3_M.gguf) | IQ3_M | 3.79GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Infracelestial-7B-Q3_K_S.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-Q3_K_S.gguf) | Q3_K_S | 3.64GB | false | Low quality, not recommended. |
| [Infracelestial-7B-IQ3_XS.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-IQ3_XS.gguf) | IQ3_XS | 3.55GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Infracelestial-7B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-IQ3_XXS.gguf) | IQ3_XXS | 3.33GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Infracelestial-7B-Q2_K.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-Q2_K.gguf) | Q2_K | 3.23GB | false | Very low quality but surprisingly usable. |
| [Infracelestial-7B-IQ2_M.gguf](https://huggingface.co/bartowski/Mawdistical-S1_Infracelestial-7B-GGUF/blob/main/Mawdistical-S1_Infracelestial-7B-IQ2_M.gguf) | IQ2_M | 3.17GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |

## Embed/output weights

Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method, but with the embedding and output weights quantized to Q8_0 instead of their usual default.
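
For illustration only, here is a hedged sketch of how such a variant could be produced with llama.cpp's `llama-quantize`; the flag names are assumptions based on recent builds, not the exact commands used to make these files:

```
# Quantize to Q4_K_M overall, but keep token embedding and output tensors at Q8_0 (i.e. "Q4_K_L")
llama-quantize --imatrix Mawdistical-S1_Infracelestial-7B-imatrix.gguf \
  --token-embedding-type q8_0 --output-tensor-type q8_0 \
  Mawdistical-S1_Infracelestial-7B-bf16.gguf Mawdistical-S1_Infracelestial-7B-Q4_K_L.gguf Q4_K_M
```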

## Downloading using huggingface-cli

<details>
<summary>Click to view download instructions</summary>

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download bartowski/Mawdistical-S1_Infracelestial-7B-GGUF --include "Mawdistical-S1_Infracelestial-7B-Q4_K_M.gguf" --local-dir ./
```

If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:

```
huggingface-cli download bartowski/Mawdistical-S1_Infracelestial-7B-GGUF --include "Mawdistical-S1_Infracelestial-7B-Q8_0/*" --local-dir ./
```

You can either specify a new local-dir (Mawdistical-S1_Infracelestial-7B-Q8_0) or download them all in place (./)
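
After downloading, you can optionally verify file integrity: every file here is stored with Git LFS, and the LFS pointer records a sha256 you can compare against. For example, for the Q4_K_M file, the checksum printed by the command below should match the oid in this repo's pointer (2cc112427d3d799e757339399b441422bb0fe5d7d9a4ae801f921143d6eebb6c):

```
sha256sum Mawdistical-S1_Infracelestial-7B-Q4_K_M.gguf
```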

</details>

## ARM/AVX information

Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.

Now, however, there is something called "online repacking" for weights; details are in [this PR](https://github.com/ggml-org/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.

As of llama.cpp build [b4282](https://github.com/ggml-org/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.

Additionally, if you want slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggml-org/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 variant for now. Loading may be slower, but it will result in an overall speed increase.
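
To see what repacking does on your own hardware, you can reproduce numbers like the table below with llama.cpp's `llama-bench`. A minimal sketch; the thread count and test sizes here are just example values:

```
llama-bench -m Mawdistical-S1_Infracelestial-7B-Q4_0.gguf -t 8 -p 512 -n 128
```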

<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>

I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.

<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>

| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |

Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation.

</details>

</details>

## Which file should I choose?

<details>
<summary>Click here for details</summary>

A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)

The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.

If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.

If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
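
As a worked example with this model's sizes from the table above: a 6GB GPU comfortably fits Q4_K_S (4.57GB) or IQ4_XS (4.37GB) with roughly 1.4-1.6GB of headroom for context, while 8GB of VRAM fits Q6_K (6.53GB).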

Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.

If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.

If you want to get more into the weeds, you can check out this extremely useful feature chart:

[llama.cpp feature matrix](https://github.com/ggml-org/llama.cpp/wiki/Feature-matrix)

But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.

These I-quants can also be used on CPU, but will be slower than their K-quant equivalents, so the speed vs performance tradeoff is something you'll have to decide.

</details>

## Credits

Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.

Thank you ZeroWw for the inspiration to experiment with embed/output.

Thank you to LM Studio for sponsoring my work.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski