Initialize project; model provided by the ModelHub XC community

Model: mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF
Source: Original Platform
Commit 2dbdb45926 by ModelHub XC
Date: 2026-05-09 01:33:54 +08:00
27 changed files with 223 additions and 0 deletions

.gitattributes (vendored, new file, 60 lines)

@@ -0,0 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.imatrix.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
Aisha-Llama-3.1-8B-Complete.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
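The patterns above route large binaries through Git LFS; note that the `.gguf` files are tracked by exact filename rather than a `*.gguf` glob. As an illustrative sketch (not part of this repo, and `fnmatch` is only an approximation of gitattributes' gitignore-style matching), Python can mimic how a filename is checked against these patterns:

```python
from fnmatch import fnmatch

# A few of the patterns tracked by the .gitattributes above: globs plus
# one exact .gguf filename (fnmatch treats a pattern with no wildcards
# as an exact match).
lfs_patterns = [
    "*.bin",
    "*.safetensors",
    "*tfevents*",
    "Aisha-Llama-3.1-8B-Complete.i1-Q4_K_M.gguf",
]

def is_lfs_tracked(filename: str) -> bool:
    """Return True if the filename matches any LFS-tracked pattern."""
    return any(fnmatch(filename, p) for p in lfs_patterns)

print(is_lfs_tracked("Aisha-Llama-3.1-8B-Complete.i1-Q4_K_M.gguf"))  # True
print(is_lfs_tracked("README.md"))                                   # False
```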

25 GGUF files (the 24 quants plus the imatrix file) were added as Git LFS pointers. Each pointer file is three lines: the LFS spec version (`https://git-lfs.github.com/spec/v1`), the object's sha256, and its size in bytes. Their contents, in commit order:

| # | oid (sha256) | size (bytes) |
|--:|:-------------|-------------:|
| 1 | 01c3b1742516b610c2740304fa711659366784481bb6431bb234e4d22ecbb745 | 2161977184 |
| 2 | a5aaa1e5dee89775e526ca91e0e7428b3bbe115f0b7cc37bf65027dd3bbf5a6e | 2019632992 |
| 3 | 2704cea1ecc4784a605864531a562335813179e8ab72121caf8a3ea6f5888bf6 | 2948286304 |
| 4 | 75d236ddfa56fb5fae41312be1c674eb907e9cf1cbf6271d21de2f5b8404e47c | 2758494048 |
| 5 | e18bd5645f9d1912b95b9972ed292dd0631cef503bc19dbc289ea927c05ab513 | 2605786976 |
| 6 | d0768c5275d8f2b88c3d7472ab0004dd2068eb833aa822f76f565170318ba22c | 2399217504 |
| 7 | d74f0a1c08981195f749ef552a4acde2f7bf0814cae8faf4d6d9c083ff922c51 | 3784828768 |
| 8 | 34d4131dd66ae3b7a46808f39f706e904f792a28a38e74d10d371c6a8fc4aba5 | 3682330464 |
| 9 | 9494128cfb1a203fc1d3a6f5ce83b1a037c2c17ed627b2720c2c95430c731b91 | 3518752608 |
| 10 | a2bcf14a0f833cc7a97141438ffb355a035ebfe1e311fb75439ed5a8d2d0cd28 | 3274917728 |
| 11 | e942d9646622cfbc75eb900eb87656ce47771cbe65128ce668e87fb911cb22b4 | 4677994336 |
| 12 | e80541c2fe51c7138db5d358fb8bd52c51f1d569ee0f03826df4e47a480e43e0 | 4447668064 |
| 13 | 753e3476019065e76ec5c495c2dc1167a80221174d85698aa3256760af3be160 | 3179136864 |
| 14 | b18fdbe9de2f507e7e5aa04d8b29bc568fc51369df55e70e16e760af96307ddc | 2988820320 |
| 15 | caca4de8d8c1d4ff138b3f48ebf7bdaadb775503e6f20db9ca3e4f2a5304deda | 4321961824 |
| 16 | b08fa228380a6e87b1319cc0d4ff0bfbfe9e15155e0cb0d62421226230dc5d18 | 4018923360 |
| 17 | e1c95866f0fddb1b6387e7253fcae2296bc7bf6873e5b02eaccbf92d60a0c71f | 3664504672 |
| 18 | 03ac0b1dab58f59f993615e775f089a4dd6252cd3568f1dd77efdaf2e57485c4 | 4675897184 |
| 19 | 8b60a75f4ce4773482e24cf63660199cfd5ed94ba949425942e674eadbb00b84 | 5130258272 |
| 20 | d282f87fd5aed9bceed3c1c113862f9a1a13179ebe91af81e7d076bce0de8e84 | 4920739680 |
| 21 | 469495e328ef18782ecc6d2ff20288b175e50215c2fbbec4ef4a18273aab1dfb | 4692674400 |
| 22 | 0de7592cbae56dd4cacf17fae17ffef0ddb04079574b03a77de806f3fbbad814 | 5732992864 |
| 23 | 4b6691709e2757bd05532183b13bad5310e19e80c25cac65bb8f81ba77d9b23e | 5599299424 |
| 24 | 0ffba0abb9b0f2cdaa7356d673467b0c52f16bb761eddc0d09bc8b0a339332ee | 6596011872 |
| 25 | 3dd5f4fd12feff626d8f6b35610c6facd6b5059a6441e7d36f2c8970a1d860ea | 5015200 |
README.md (new file, 88 lines)

@@ -0,0 +1,88 @@
---
base_model: V3N0M/Aisha-Llama-3.1-8B-Complete
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- gguf
- llama.cpp
- unsloth
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/V3N0M/Aisha-Llama-3.1-8B-Complete
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Aisha-Llama-3.1-8B-Complete-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
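Any single-file quant in this repo can also be fetched directly via its `resolve` URL. A small Python sketch (repo name and filename taken from this model card; the `llama-cli` step assumes a local llama.cpp build) that constructs such a URL:

```python
def quant_url(repo: str, filename: str) -> str:
    """Direct-download URL for a single file in a Hugging Face repo."""
    return f"https://huggingface.co/{repo}/resolve/main/{filename}"

url = quant_url(
    "mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF",
    "Aisha-Llama-3.1-8B-Complete.i1-Q4_K_M.gguf",
)
print(url)

# The file can then be downloaded with any HTTP client and loaded by
# llama.cpp, e.g.:
#   ./llama-cli -m Aisha-Llama-3.1-8B-Complete.i1-Q4_K_M.gguf -p "Hello"
```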
## Provided Quants
(sorted by size, not necessarily quality; IQ quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Aisha-Llama-3.1-8B-Complete-i1-GGUF/resolve/main/Aisha-Llama-3.1-8B-Complete.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
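As a rough sanity check on the sizes in the table above, a quant's file size implies its average bits per weight (assuming the usual ~8.03B parameters for a Llama 3.1 8B model; metadata and embedding tensors add some overhead, so this is only an estimate):

```python
def bits_per_weight(size_gb: float, n_params: float = 8.03e9) -> float:
    """Approximate average bits per weight of a quantized model file.

    size_gb is interpreted as decimal gigabytes, as in the table above.
    """
    return size_gb * 1e9 * 8 / n_params

print(round(bits_per_weight(5.0), 2))  # i1-Q4_K_M: ~4.98 bits/weight
print(round(bits_per_weight(6.7), 2))  # i1-Q6_K:   ~6.67 bits/weight
```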
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
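The comparisons linked above are perplexity-based. Perplexity is simply the exponential of the mean per-token negative log-likelihood, as this toy sketch (with made-up NLL values) shows:

```python
import math

def perplexity(nlls: list[float]) -> float:
    """Perplexity = exp(mean negative log-likelihood), NLLs in nats."""
    return math.exp(sum(nlls) / len(nlls))

# Toy values: a model whose per-token NLL is uniformly 2.0 nats
# has perplexity e^2 ~= 7.389; lower is better.
print(perplexity([2.0, 2.0, 2.0]))
```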
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->