Initialize the project; model provided by the ModelHub XC community

Model: mradermacher/Q2.5-R1-3B-i1-GGUF
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-04-30 23:30:30 +08:00
commit 952cc62be4
26 changed files with 210 additions and 0 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,59 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Q2.5-R1-3B.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
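Each pattern above routes matching paths through the Git LFS filter. As a rough sketch, Python's `fnmatch` can approximate which paths would be LFS-tracked (gitattributes glob semantics differ in detail, e.g. for `saved_model/**/*`, so this is an approximation, not a reimplementation of git's matcher):

```python
from fnmatch import fnmatch

# Subset of the LFS-tracked patterns listed above.
LFS_PATTERNS = [
    "*.bin",
    "*.safetensors",
    "*.tar.*",
    "*tfevents*",
    "Q2.5-R1-3B.i1-Q6_K.gguf",
]

def tracked_by_lfs(path: str) -> bool:
    """Return True if `path` matches any LFS-tracked pattern.

    fnmatch only approximates gitattributes matching; '**' and
    directory-aware rules are not reproduced here.
    """
    return any(fnmatch(path, pat) for pat in LFS_PATTERNS)

print(tracked_by_lfs("model.safetensors"))  # True
print(tracked_by_lfs("README.md"))          # False
```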

Q2.5-R1-3B.i1-IQ1_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6be4ea21adcc37035279006ac6930f65aa2f3e881e7ca4c69e601bc59238a270
size 849637792
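Each of the .gguf entries in this commit is such a three-line LFS pointer: a spec version, a `sha256` object id, and the blob size in bytes. A minimal sketch of how a client could parse one and check a downloaded blob against it (`verify_blob` is an illustrative helper, not part of any Git tooling):

```python
import hashlib

# The pointer file for Q2.5-R1-3B.i1-IQ1_M.gguf, verbatim from the diff above.
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:6be4ea21adcc37035279006ac6930f65aa2f3e881e7ca4c69e601bc59238a270
size 849637792
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split the key/value lines of a Git LFS v1 pointer file."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

def verify_blob(blob: bytes, pointer: dict) -> bool:
    """Check a downloaded blob against the pointer's size and digest."""
    return (len(blob) == pointer["size"]
            and hashlib.sha256(blob).hexdigest() == pointer["digest"])

info = parse_lfs_pointer(POINTER)
print(info["algo"], info["size"])  # sha256 849637792
```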

Q2.5-R1-3B.i1-IQ1_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f1ad66628ac3489278e0f1db35cfa1c1d982ce28d756b6bdddeb686d970b2a6f
size 790704544

Q2.5-R1-3B.i1-IQ2_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7127131189765cbb144903b3e79deb480e8fca9e9d4090d07e759687af8587c8
size 1140126112

Q2.5-R1-3B.i1-IQ2_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:636a7a91725fe37980f6b95175c67f2a1d5b2ccd6fd1cff5f3d3c218f34d70bb
size 1061548448

Q2.5-R1-3B.i1-IQ2_XS.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c7fca0c521ade7e5d06f866aefb678ad47be5c77aa7ad19a9f1c719aee57e56a
size 1031156128

Q2.5-R1-3B.i1-IQ2_XXS.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5d91241571be0824ea176a9346062e1b5b9e4747a8a9679b204c335d4ac6c7d8
size 947859872

Q2.5-R1-3B.i1-IQ3_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:10e6a54d0446304e887e3b8a455d98fc5b5fbcb15be4e66765b2d8e0ffef6c3c
size 1488431552

Q2.5-R1-3B.i1-IQ3_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:404c45dedc67b838be524ad6c376b61b0760e4831ecb354e7354d0a369a0017d
size 1456400832

Q2.5-R1-3B.i1-IQ3_XS.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1f8e70d7493ffea4ebfa99b930be70bec9c4ddd527986a9a23ad177d416afba6
size 1391372736

Q2.5-R1-3B.i1-IQ3_XXS.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a34e7ea57870da35726189c18786bdea9e4f30d9cb74536df4c080e55ca992ed
size 1282437536

Q2.5-R1-3B.i1-IQ4_NL.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:55ea5f071a80dcb54cfaaece8636a3e64afdee81ad23be595475e6deb1e13199
size 1824745920

Q2.5-R1-3B.i1-IQ4_XS.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e87b64daafa03b286c7ab8e2edc0d346ca5ede8e9373bd269024cd46452e0512
size 1738631616

Q2.5-R1-3B.i1-Q2_K.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e4361b0dc8a4b1b31d50c10317a913e7de785e740328f254d154feacbe01ef0
size 1274292672

Q2.5-R1-3B.i1-Q2_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e84f76560f06dab81dbd87831e18fcab5cdd2c37c3f8a96ecf9102ee4e755fb7
size 1197664704

Q2.5-R1-3B.i1-Q3_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cca9fd1c8f304cfe97bee442f7993bac92189db723780019493a2a9dd84d7b8a
size 1706928576

Q2.5-R1-3B.i1-Q3_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa6d8f4974c83d22daebc76ce85feb2ce57add480dd1840f51f83804049b68d6
size 1590012352

Q2.5-R1-3B.i1-Q3_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:367b00aa7cafd6731e69c2a177b1204b42433d4b7f8960b175c808cafe5921d0
size 1453894080

Q2.5-R1-3B.i1-Q4_0.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4b606452fbe9d9aa32f1b29a456e9495dafdd63a3bfadc974e2f9dda98e1eb28
size 1828022720

Q2.5-R1-3B.i1-Q4_1.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:087d38fb9b7bcb89f1173543cf1a1d3a56273e96a2edabd61fec18409b045f17
size 1995794880

Q2.5-R1-3B.i1-Q4_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9d1dfeaa4ee31dc656312bf5358ec64d4950a03b34216b3a2977a4c70065f9b0
size 1929439680

Q2.5-R1-3B.i1-Q4_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0ef7c75c1041e267b7aeb5aae7ed0bdb74b05c0d85539563d0636d5f8b96b1cf
size 1833920960

Q2.5-R1-3B.i1-Q5_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:49b93b782e37e425a193846df5fb7c3ac1852bd08a866384683f07b2e7a4c488
size 2224351680

Q2.5-R1-3B.i1-Q5_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:708cb5e734d2f8a9bc369e453757040a483e26a9f80a295ed38aec79f588ef4f
size 2169203136

Q2.5-R1-3B.i1-Q6_K.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e5d243482cd9b4eb34283a46ff7f25370bef1cd659f48b4e75d27b8cb471032c
size 2537695680

README.md Normal file

@@ -0,0 +1,79 @@
---
base_model: Triangle104/Q2.5-R1-3B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Triangle104/Q2.5-R1-3B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Q2.5-R1-3B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
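Multi-part files are joined by simply appending the parts in order (this repo ships single files, so the `partXofY` filenames below are illustrative assumptions, not names used here):

```python
import shutil
import tempfile
from pathlib import Path

def concatenate_parts(parts: list[Path], out: Path) -> None:
    """Append each part, in the given order, onto one output file."""
    with out.open("wb") as dst:
        for part in parts:
            with part.open("rb") as src:
                shutil.copyfileobj(src, dst)

# Demonstration with throwaway files in a temp directory.
tmp = Path(tempfile.mkdtemp())
(tmp / "m.gguf.part1of2").write_bytes(b"GGUF-head")
(tmp / "m.gguf.part2of2").write_bytes(b"-tail")
concatenate_parts(sorted(tmp.glob("m.gguf.part*")), tmp / "m.gguf")
print((tmp / "m.gguf").read_bytes())  # b'GGUF-head-tail'
```

Lexicographic sorting works for this naming scheme because the part index comes before `of`; always confirm the order before joining real shards.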
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-IQ1_M.gguf) | i1-IQ1_M | 0.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-Q2_K.gguf) | i1-Q2_K | 1.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-IQ3_M.gguf) | i1-IQ3_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-Q4_0.gguf) | i1-Q4_0 | 1.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-Q4_1.gguf) | i1-Q4_1 | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Q2.5-R1-3B-i1-GGUF/resolve/main/Q2.5-R1-3B.i1-Q6_K.gguf) | i1-Q6_K | 2.6 | practically like static Q6_K |
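The table trades file size against quality. As a small sketch, one can pick the largest quant that fits a memory budget (sizes copied from the table above; "largest fitting" is a heuristic of mine, not the author's recommendation — the notes column still matters):

```python
# Size/GB per quant type, copied from the table above.
QUANT_SIZES = {
    "i1-IQ1_S": 0.9, "i1-IQ1_M": 0.9, "i1-IQ2_XXS": 1.0, "i1-IQ2_XS": 1.1,
    "i1-IQ2_S": 1.2, "i1-IQ2_M": 1.2, "i1-Q2_K_S": 1.3, "i1-Q2_K": 1.4,
    "i1-IQ3_XXS": 1.4, "i1-IQ3_XS": 1.5, "i1-Q3_K_S": 1.6, "i1-IQ3_S": 1.6,
    "i1-IQ3_M": 1.6, "i1-Q3_K_M": 1.7, "i1-Q3_K_L": 1.8, "i1-IQ4_XS": 1.8,
    "i1-IQ4_NL": 1.9, "i1-Q4_0": 1.9, "i1-Q4_K_S": 1.9, "i1-Q4_K_M": 2.0,
    "i1-Q4_1": 2.1, "i1-Q5_K_S": 2.3, "i1-Q5_K_M": 2.3, "i1-Q6_K": 2.6,
}

def largest_quant_under(budget_gb: float) -> str:
    """Pick the largest quant type whose file fits the budget."""
    fitting = [(size, name) for name, size in QUANT_SIZES.items()
               if size <= budget_gb]
    if not fitting:
        raise ValueError("no quant fits the given budget")
    return max(fitting)[1]

print(largest_quant_under(2.0))  # i1-Q4_K_M
```

Remember the model's KV cache and runtime overhead need headroom on top of the file size, so budget below your actual free RAM/VRAM.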
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->