Initialize project; model provided by the ModelHub XC community

Model: mradermacher/Ahma-7B-i1-GGUF
Source: Original Platform
commit 0be2d74463
Author: ModelHub XC
Date: 2026-05-08 19:29:05 +08:00
27 changed files with 219 additions and 0 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,59 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
imatrix.dat filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-Q4_0_4_4.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-Q4_0_4_8.gguf filter=lfs diff=lfs merge=lfs -text
Ahma-7B.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
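The patterns above are ordinary gitattributes globs routing matching files through Git LFS; note that this repo lists each `.gguf` file explicitly rather than using a `*.gguf` wildcard. As a rough sanity check, Python's `fnmatch` (which only approximates gitattributes matching semantics) can show which names would be picked up:

```python
from fnmatch import fnmatch

# A few of the entries from the .gitattributes above.
lfs_patterns = [
    "*.bin",
    "*.safetensors",
    "*tfevents*",
    "imatrix.dat",
    "Ahma-7B.i1-Q2_K.gguf",
]

def tracked_by_lfs(name: str) -> bool:
    """Return True if the file name matches any LFS pattern."""
    return any(fnmatch(name, p) for p in lfs_patterns)

print(tracked_by_lfs("Ahma-7B.i1-Q2_K.gguf"))  # True
print(tracked_by_lfs("README.md"))             # False
```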

Ahma-7B.i1-IQ1_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b5e00089858f5793919dcb0719373fc42ded9df5555af06146f89b69f356fe8f
size 1786046944
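Each of these three-line files is a Git LFS pointer in exactly the format shown: a spec version, a `sha256` oid, and the byte size of the real object. A minimal, self-contained Python sketch (using a synthetic pointer, not one of the real blobs above) of parsing that format and checking a downloaded file against it:

```python
import hashlib
from pathlib import Path

def parse_lfs_pointer(text: str) -> dict:
    """Parse the 'key value' lines of a Git LFS pointer file."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "oid": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

def verify(path: Path, pointer: dict) -> bool:
    """Check a downloaded blob against the pointer's size and sha256."""
    data = path.read_bytes()
    return (len(data) == pointer["size"]
            and hashlib.sha256(data).hexdigest() == pointer["oid"])

# Synthetic demo blob and matching pointer text.
blob = b"hello gguf"
pointer = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    f"oid sha256:{hashlib.sha256(blob).hexdigest()}\n"
    f"size {len(blob)}\n"
)
Path("demo.bin").write_bytes(blob)
print(verify(Path("demo.bin"), pointer))  # True
```

The same check applies to the real quants: after downloading, the file's size and sha256 digest should match the pointer committed here.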

Ahma-7B.i1-IQ1_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e9fb981a8826d2edcfcdc85f979f1e0e4c632e44e789cd9a5c27d26f8eae2762
size 1663658464

Ahma-7B.i1-IQ2_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8aa08843bc5144a2c937dd99a97e958426f17b68c28296b032a204f033cb2b2a
size 2508245472

Ahma-7B.i1-IQ2_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ff427cee4ee62403d99e377a19527ed003100f7c202e5a3e7f8b20efdbcdb25
size 2345060832

Ahma-7B.i1-IQ2_XS.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:835cdf2de6b9a63af1dd199e022259fc759ec8913d79f75b67d403611cac7317
size 2169989600

Ahma-7B.i1-IQ2_XXS.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2dd360bb5cb2a6669f86d787eb6543ebdaa9235dcc9973eb784ad97857543e29
size 1990027744

Ahma-7B.i1-IQ3_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6369911d91babd876884a52ef2f5e129ed22da5f984c3ddd240e4039bb4a0938
size 3280906720

Ahma-7B.i1-IQ3_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c0d66af6c09069072089eeae0e0ca6a1c346725d72e60416532775840dfbee21
size 3114346976

Ahma-7B.i1-IQ3_XS.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:541f29ed52fd1894dfa878de946b69d6cbf4046c64d23259c83a4ea9a55c3ab5
size 2962565600

Ahma-7B.i1-IQ3_XXS.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e43df5278af28a5c8a18ece5bb2d5c47cca415c4e4b48f99e27f0f0893ed9b2b
size 2733885920

Ahma-7B.i1-IQ4_XS.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a15de0bb6b659e9c6dbbbbf625a2e97405c09fdf3e711e8c2ac1862d0b0ff5d1
size 3798796768

Ahma-7B.i1-Q2_K.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4eb305684475ea2b9800e4369adbd4c7b3493c9dc3ebd5ec654451e4f7a8f6ee
size 2685487584

Ahma-7B.i1-Q3_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cc0409d655dd9f76abfd699a6939be5b00d032b2af4bac0fcbbba231c8fb6999
size 3763153376

Ahma-7B.i1-Q3_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:084d5612937f57cc69c7eac694c30347a568a4050da05cac5c7090cc1c4cd2ab
size 3464047072

Ahma-7B.i1-Q3_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5989a6120ea5941fb1efba511d04f5851f00b11322d62c609a11c6e720664d82
size 3114346976

Ahma-7B.i1-Q4_0.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d1477fb3fe8e11e7a2a4fd3bae1b48815c592a9974232976ecb16139da8e59d1
size 4020668896

Ahma-7B.i1-Q4_0_4_4.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2250dc45dd6c2769bc37a8de9710097cd3faf481dc1133c6971dc25c22b18ebd
size 4009396704

Ahma-7B.i1-Q4_0_4_8.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e3e525681e31224c518c0f718d94cfa5237a823e508e3790d9f0f1e20320bcf4
size 4009396704

Ahma-7B.i1-Q4_0_8_8.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0df1f9f0de104ea453e4524ebc092fa6680ed94b6d505b6486dffcf4996452e3
size 4009396704

Ahma-7B.i1-Q4_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7f6441730e82b214a0cef104591d0857469ac7d1059bbb5bb8e5dfd40cd5c2ab
size 4264593888

Ahma-7B.i1-Q4_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:38d572d4a88b4be7dca0da78a5218eb34d5e155ef945620b520422fc305972bd
size 4040329696

Ahma-7B.i1-Q5_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7f9a7764296e672cafe15701a3b122703649eb2d621a0a90952f65cdc2e92da6
size 4983261664

Ahma-7B.i1-Q5_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bf4743cd83ee26ab35fe4cf5f0bb20628a39bb320b2c9bb4a3aa0899a170e710
size 4851796448

Ahma-7B.i1-Q6_K.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:08abb745889f62f39afca4cd4c7fbf0d9020de59b839b487aaab1ad126277ed8
size 5746846176

README.md Normal file

@@ -0,0 +1,85 @@
---
base_model: Finnish-NLP/Ahma-7B
datasets:
- Finnish-NLP/CulturaX_fi_cleaned
- Finnish-NLP/HPLT_1.2_fi_cleaned
- Finnish-NLP/wikipedia_20231101_fi_cleaned
- Finnish-NLP/Reddit_fi_2006_2022
- intfloat/multilingual_cc_news
language:
- fi
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- finnish
- llama
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Finnish-NLP/Ahma-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Ahma-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
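For the single-file quants in this repo no concatenation is needed; multi-part GGUFs are simply joined byte-for-byte. A small shell sketch of the round trip (the `demo.*` part names are made up for illustration):

```shell
# Multi-part GGUFs are rejoined with plain cat, parts in order.
printf 'AAA' > demo.gguf.part1of2
printf 'BBB' > demo.gguf.part2of2
cat demo.gguf.part1of2 demo.gguf.part2of2 > demo.gguf
cat demo.gguf   # prints AAABBB
# A joined (or single-part) file can then be loaded by a GGUF runtime,
# e.g. llama.cpp: ./llama-cli -m Ahma-7B.i1-Q4_K_M.gguf -p "Hei!"
```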
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.1 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.1 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.1 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-Q6_K.gguf) | i1-Q6_K | 5.8 | practically like static Q6_K |
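The download links in the table all follow the Hugging Face resolve-URL pattern, so they can also be built programmatically. A small sketch (repo id and filename taken from this page):

```python
def gguf_url(repo: str, filename: str, revision: str = "main") -> str:
    """Build a Hugging Face direct-download (resolve) URL."""
    return f"https://huggingface.co/{repo}/resolve/{revision}/{filename}"

url = gguf_url("mradermacher/Ahma-7B-i1-GGUF", "Ahma-7B.i1-Q4_K_M.gguf")
print(url)
# https://huggingface.co/mradermacher/Ahma-7B-i1-GGUF/resolve/main/Ahma-7B.i1-Q4_K_M.gguf
```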
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->

imatrix.dat Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d61d44571551f0df7968cfaeed3c41b21ac34fa6c4109b07e0a760bc6fa1a9cc
size 4562173