Initialize the project; model provided by the ModelHub XC community

Model: mradermacher/Aira-2-1B1-i1-GGUF
Source: Original Platform
This commit is contained in: commit bd9a8665dd, authored by ModelHub XC on 2026-04-11 17:49:00 +08:00
27 changed files with 219 additions and 0 deletions

.gitattributes (vendored) — new file, 60 lines

@@ -0,0 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
imatrix.dat filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Aira-2-1B1.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
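Each line in the hunk above tells Git to route files matching the glob pattern through the LFS filter instead of storing them in the repository directly. A minimal sketch of how such patterns select filenames (simplified: real gitattributes matching has extra rules for paths, `**`, and negation that `fnmatch` does not implement):

```python
from fnmatch import fnmatch

# A few patterns copied from the .gitattributes hunk above.
LFS_PATTERNS = [
    "*.bin",
    "*.gz",
    "*.safetensors",
    "imatrix.dat",
    "Aira-2-1B1.i1-Q4_0.gguf",
]

def is_lfs_tracked(filename: str) -> bool:
    """True if the bare filename matches any of the LFS patterns.

    Simplified illustration only: real .gitattributes matching also
    handles directory components and '**' globs."""
    return any(fnmatch(filename, pattern) for pattern in LFS_PATTERNS)
```

So `is_lfs_tracked("model.safetensors")` is `True`, while `is_lfs_tracked("README.md")` is `False` — which is why the README below appears as plain text in this commit while every `.gguf` file appears only as a pointer.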

Aira-2-1B1.i1-IQ1_M.gguf — new file, 3 lines

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ed38765ad4194b7c1a4b5e1e55d5d987e2a9ff5362f1d2652d09c2d6a24d8f6e
size 289712512
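The three lines above are the entire on-disk content of a Git LFS pointer: a spec version, a sha256 object id, and the byte size of the real file. The format is simple `key value` pairs, so a pointer can be parsed in a few lines (sketch, using the IQ1_M pointer above as the example):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split an LFS pointer file into its 'key value' fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:ed38765ad4194b7c1a4b5e1e55d5d987e2a9ff5362f1d2652d09c2d6a24d8f6e\n"
    "size 289712512\n"
)
info = parse_lfs_pointer(pointer)
print(info["size"])  # → 289712512
```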

Aira-2-1B1.i1-IQ1_S.gguf — new file, 3 lines

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fb08f0382e38d10be0ad18c84595d3d435bfbd3fd202e8dfb588e32fdf290bd3
size 269977984

Aira-2-1B1.i1-IQ2_M.gguf — new file, 3 lines

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a45c778dc3d325ba8c318cd61f0f60ce53ae44cb3fcc42fc3917f75d32613e90
size 400088480

Aira-2-1B1.i1-IQ2_S.gguf — new file, 3 lines

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:27c35f2bafc68a5677cd72ad61f2724b66f872d7221df1e8b4a33f0524348902
size 373775776


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8bf1084d33dea91ada2d664ea513d1159256c0dcf0903219e1b54386cd323d6d
size 351799680


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ac7b52be9a446d173c7aac7ef872de8b2b16db1a78c83caddffae2bcc6c591be
size 322603392

Aira-2-1B1.i1-IQ3_M.gguf — new file, 3 lines

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9ca6271aa3e08ebb9d532fbc9d855e6568f73e75e494d6bbc78db50b31854c4d
size 516207360

Aira-2-1B1.i1-IQ3_S.gguf — new file, 3 lines

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0353df226e3a008a8d99d221900755bd34687393f8a4d430028bd9497e633fd0
size 500888320


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:53253bd27728ecc6b8ecdb145a54f753c177db6e68870f04d63533be17f5bb5c
size 477639424


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ba169f9bbb322f8caa785c98efddd8b3082bcfcfe8de710dc9e2f78aed5a616
size 445144480


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:abad50b767b85c7435e9e8dcb2d5c8e25c293c8f04238d1421b3be5ecd004354
size 638183488


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:59226b25441a9e9c2cea86a0171adb63e82d0b38b87a3c8527baae47fe7e97f8
size 606217984

Aira-2-1B1.i1-Q2_K.gguf — new file, 3 lines

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7c50e7b1d8c05bf72a1045379743cde4d7447f3291337abe4742d78853998efe
size 432144096


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5af26b73bf42366defba195feec0067f15583ae467d824010ea8d3ac02fcb9d4
size 402407136


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e5ab8f24d636996dab28e3b8579e835cdba1cc5e2cc1f0e6b6200eec7fb3cd7c
size 591540992


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:959b8e6fadef70388bc6f7201593101433fce6368e6ade8dc99b92c057bd74d5
size 548418304


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:93e1f8d5af880c75e45f1b577b28ba07a8ceb224518c5afb9ee90737b5d41567
size 499356416

Aira-2-1B1.i1-Q4_0.gguf — new file, 3 lines

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:92d9366906dcaa55191e07727e2da6c49262b30b6df6d475c7a503e2805cb841
size 638183488

Aira-2-1B1.i1-Q4_1.gguf — new file, 3 lines

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d668abc7033b238999ec6d7ab08b3cfa8960176b7746c83178d977464796b385
size 701393600


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:96a58734d3f7f4a36ba6660d18e1c1d538ca75c4aab2cdb33031749c15b4df25
size 667830336


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:438c6a896d57733cb921efe5e0a6a7e01fecc9e7c6ed202b3c24fd0431a1be4e
size 639887424


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8b864f1da5227451ae96b2c843e3908f262bd8435655ba8f3751545be6cc8360
size 782060864


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:85c26ab7a5c8097c20b34d06137a00e76957d484c671dde9d38f9a6f30ce2f34
size 766045504

Aira-2-1B1.i1-Q6_K.gguf — new file, 3 lines

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:52c8bb34de9a3d207608a94d1757690bb849edbf4a50ce2750555d053bc58bd3
size 903430816

README.md — new file, 84 lines

@@ -0,0 +1,84 @@
---
base_model: nicholasKluge/Aira-2-1B1
datasets:
- nicholasKluge/instruct-aira-dataset
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- alignment
- instruction tuned
- text generation
- conversation
- assistant
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/nicholasKluge/Aira-2-1B1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Aira-2-1B1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
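Multi-part GGUF uploads are plain byte-splits of a single file, so joining them is straight concatenation (the shell equivalent is `cat part1 part2 > model.gguf`). A hedged sketch; the part names in the usage example are hypothetical, and the quants in this repo are all single-file:

```python
import shutil

def concat_gguf_parts(parts: list[str], dest: str) -> None:
    """Byte-concatenate split GGUF parts, in the given order, into one file."""
    with open(dest, "wb") as out:
        for part in parts:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)
```

For example, `concat_gguf_parts(["model.gguf.part1", "model.gguf.part2"], "model.gguf")` — the exact part-naming convention varies by uploader, so check the file listing first.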
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-IQ1_S.gguf) | i1-IQ1_S | 0.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-IQ1_M.gguf) | i1-IQ1_M | 0.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-IQ2_S.gguf) | i1-IQ2_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-IQ2_M.gguf) | i1-IQ2_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-Q2_K.gguf) | i1-Q2_K | 0.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-IQ3_S.gguf) | i1-IQ3_S | 0.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-IQ3_M.gguf) | i1-IQ3_M | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.7 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-Q4_0.gguf) | i1-Q4_0 | 0.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-Q4_1.gguf) | i1-Q4_1 | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Aira-2-1B1-i1-GGUF/resolve/main/Aira-2-1B1.i1-Q6_K.gguf) | i1-Q6_K | 1.0 | practically like static Q6_K |
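All of the table's links follow one URL scheme, so the direct-download URL for any listed quant can be built mechanically (sketch; `quant` must be one of the type names listed above, minus the `i1-` prefix):

```python
REPO = "mradermacher/Aira-2-1B1-i1-GGUF"

def quant_url(quant: str) -> str:
    """Build the resolve URL used by the download links in the table above."""
    return (
        f"https://huggingface.co/{REPO}/resolve/main/"
        f"Aira-2-1B1.i1-{quant}.gguf"
    )

print(quant_url("Q4_K_M"))
```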
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->

imatrix.dat — new file, 3 lines

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:420e7beedc0726f05d04d969583d5e15f3bf18849a90c4f11a760390b2f50840
size 1582051
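Since every large file in this commit exists only as a pointer, a downloaded copy can be checked against the pointer's `oid` and `size` fields. A minimal sketch of that verification:

```python
import hashlib
import os

def verify_lfs_object(path: str, oid: str, size: int) -> bool:
    """Check a file against the sha256 oid and byte size from its LFS pointer."""
    if os.path.getsize(path) != size:
        return False
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks so multi-hundred-MB quants don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return "sha256:" + digest.hexdigest() == oid
```

For an intact download of `imatrix.dat`, passing the `oid` and `size` values from the pointer above should return `True`.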