Initialize project; model provided by the ModelHub XC community
Model: mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF Source: Original Platform
60 .gitattributes vendored Normal file
@@ -0,0 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
imatrix.dat filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
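Each line in .gitattributes above follows the standard Git LFS attribute format: a glob pattern followed by `filter`/`diff`/`merge` assignments routing matching files through the LFS filter. As a small illustrative sketch (not part of the commit), the LFS-tracked patterns can be extracted like so:

```python
def parse_lfs_patterns(text: str) -> list[str]:
    """Return the glob patterns in a .gitattributes file that are
    routed through the Git LFS filter (filter=lfs)."""
    patterns = []
    for line in text.splitlines():
        parts = line.split()
        # First token is the pattern; remaining tokens are attributes.
        if len(parts) >= 2 and "filter=lfs" in parts[1:]:
            patterns.append(parts[0])
    return patterns

sample = "*.gguf filter=lfs diff=lfs merge=lfs -text\nREADME.md text"
print(parse_lfs_patterns(sample))  # → ['*.gguf']
```

Lines such as these are what `git lfs track "*.gguf"` appends to .gitattributes.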
3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ1_M.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bff7a675d6ba9cd52fcb3475b2dc6556a15f2b0b45234ff011a7eaea03600550
size 924191904

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ1_S.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dd8212d7d8cfaa77426fc3fb0c92b639a811696d376dce69f089a5a7506ec7ab
size 868158624

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ2_M.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8a4f2a6df86d126b987d130da9b416de617692a0c98694b22831f2f8b864244e
size 1229032608

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ2_S.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f7aba0f0093748d225a1f1b6ebe8b04b9dcafee4e576a60b26996254c1ab58f3
size 1154321568

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ2_XS.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ec9008b1bd5f19ab6ac4ebb96cf3b3efee25ab01c0f21e7c41caf4697517451b
size 1100549280

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ2_XXS.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f1b5284a21f00064a20dec53d544fb485f01d6d89f13c93454d8dedb564abda5
size 1017580704

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ3_M.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:de4b7e586090d6aea8466bf375fb0af12837bd11d22c98b79d36d6262ddaa020
size 1599669408

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ3_S.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f1fb695278d4ef7864fa47827b878c92b5855c7d9db741afbda31288c5c14a04
size 1542849696

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ3_XS.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8af942f8da43cd4d164a095f8bbe0a485ced213a8483ada6a59533b34314736a
size 1476789408

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ3_XXS.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:205f74764d3dfb1b945c4b3b39ee4bfcc2166f90e333b2990a27bdab984d51af
size 1348766880

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ4_NL.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:23081f5a7d6ebfc7dfe632e6be99f9e9272461d7fd9da2504a2652e7ccfe184e
size 1917191328

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ4_XS.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:117cc86e18024a68e8f0cf705f2028f9a791d8fa4b13de86a439167f0b2ee186
size 1829110944

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q2_K.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a2bee4d6ba0d99039e11abe1c92e4536724e306e29567c17daf01c8b62522791
size 1363936416

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q2_K_S.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e9730f4fd6db3229f090e5fbe375416ef3eee28a764417f60e49666bb9f0b467
size 1274283168

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q3_K_L.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3147edd72ae4a6347305d057a2b2ca0fa1753d0c4960b3f293405e7ed45a3503
size 1815348384

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q3_K_M.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:108c00d8733e178c105723dddc852b4c58aff2e32b68a8a6242f0e19efcc9fe3
size 1687159968

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q3_K_S.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5db2f7b9d01e18cbc017c7771f3e868f96ca2d9ad15a35956fcadcedb8f7a6d6
size 1542849696

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q4_0.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:012c1754a740e621d3942363b421e2689ad99f31aabb677cef47bb30722dc0a4
size 1921909920

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q4_1.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ec9e1d0c4762fac27d2b0253a74783a56d8d028f1f11966379ac287bcd56c5b2
size 2093352096

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q4_K_M.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bed4fbde6f11546cf6b2acf32bbd52d3f25a68e3a442357bcefae821e08f6498
size 2019378336

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q4_K_S.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5c40835936a0839e0f28bc296a121dc6d80a07612103e63da0d4e39ea3a43007
size 1928201376

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q5_K_M.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a20aba4aa5faa5cd71d7a521deb5f9c5d6307175904c7ddd0a9395feea9729bd
size 2322154656

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q5_K_S.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:67277001b3c565c6f5e7743e13d26c8dbd9cfadf49520c5e9db478e6bf7a1469
size 2269512864

3 Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q6_K.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:99333d4a74b62d03d649ecc695d46c3352724c960dc16a996d74ec828d986286
size 2643854496
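Each of the .gguf entries above is a Git LFS pointer file, not the model itself: the repository stores only a `version` line, the `oid` (SHA-256 of the real file), and its `size` in bytes, while the actual GGUF bytes live in LFS storage. A minimal parser for this spec-v1 format (my own sketch, not tooling from this repo) looks like:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file (spec v1): one 'key value'
    pair per line (version, oid, size)."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["size"] = int(fields["size"])  # byte count of the real file
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:bff7a675d6ba9cd52fcb3475b2dc6556a15f2b0b45234ff011a7eaea03600550\n"
    "size 924191904\n"
)
print(parse_lfs_pointer(pointer)["size"])  # → 924191904
```

The `size` field is handy for checking that a download completed, and the `oid` for verifying its SHA-256 checksum.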
85 README.md Normal file
@@ -0,0 +1,85 @@
---
base_model: DavidAU/Deep-Reasoning-Llama-3.2-BlackSheep-3B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- reasoning
- thinking
- cot
- deepseek
- Llama 3.2
- 128k context
- fine tune
- llama-3
- llama-3.2
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->

weighted/imatrix quants of https://huggingface.co/DavidAU/Deep-Reasoning-Llama-3.2-BlackSheep-3B

<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.
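Multi-part GGUF uploads are plain byte-wise splits, so joining them is just in-order concatenation. A quick sketch using dummy data and hypothetical part names (the quants in this repo are single files and do not need this):

```shell
# Simulate a split download with two dummy parts, then join them
# in order with cat; the result is the original file's bytes.
printf 'GGUF-first-half-' > model.gguf.part1of2
printf 'second-half'      > model.gguf.part2of2
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
cat model.gguf  # prints: GGUF-first-half-second-half
```

The part order matters; concatenating out of order produces a corrupt file.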
## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q4_0.gguf) | i1-Q4_0 | 2.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q4_1.gguf) | i1-Q4_1 | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-BlackSheep-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-BlackSheep-3B.i1-Q6_K.gguf) | i1-Q6_K | 2.7 | practically like static Q6_K |
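As a rough sanity check (my own sketch, not from the model card), the Size/GB column maps to approximate bits per weight: file bytes × 8 divided by the parameter count. The ~3.2e9 parameter figure below is an assumption for Llama 3.2 3B:

```python
def bits_per_weight(size_bytes: int, n_params: float) -> float:
    """Approximate bits per weight of a quantized model file."""
    return size_bytes * 8 / n_params

# Q4_K_M file size taken from this commit's LFS pointer;
# ~3.2e9 parameters is an assumed count for Llama 3.2 3B.
q4_k_m_bytes = 2019378336
print(round(bits_per_weight(q4_k_m_bytes, 3.2e9), 2))  # → 5.05
```

This is only approximate, since GGUF files also carry metadata and some tensors (e.g. embeddings) are quantized at different widths.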
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
3 imatrix.dat Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cdc37f4720478a72dcf265fc7017e641446e946dacaed09d11a37e78be23526b
size 2988377