Initialize project; model provided by the ModelHub XC community

Model: mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF
Source: Original Platform
commit 99fea18471
Author: ModelHub XC
Date: 2026-04-09 13:12:21 +08:00
27 changed files with 230 additions and 0 deletions

.gitattributes (vendored, new file, 60 lines)

@@ -0,0 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.imatrix.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
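
The patterns above route matching paths through Git LFS (`filter=lfs diff=lfs merge=lfs -text`), so large binaries are stored as pointers rather than in the Git object database. A minimal sketch of checking a path against a few of these patterns (using Python's `fnmatch`, which only approximates gitattributes glob semantics and ignores directory rules like `saved_model/**/*`):

```python
from fnmatch import fnmatch

# A small subset of the patterns from the .gitattributes above.
LFS_PATTERNS = [
    "*.bin",
    "*.safetensors",
    "*.zip",
    "HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q4_K_M.gguf",
]

def tracked_by_lfs(path: str) -> bool:
    """Rough check: does any listed LFS pattern match this path?"""
    return any(fnmatch(path, pattern) for pattern in LFS_PATTERNS)
```

For authoritative answers, `git check-attr filter -- <path>` asks Git itself rather than approximating the match.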

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e911470cce3fe7d9b087ce83ae5755f75644d6b2f3186c3a167c8ef244f7f009
size 2161977728

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c0f644789cd2b103ecbd34909f80ff4d016aaffb11aa7fada25108c6e4767a8a
size 2019633536

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f7fbb0a640dc152df6e0c1f7bf0ed0bbcf1004aafaa111e317df1105df03c871
size 2948286848

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fcfe4c1fdfe851399b3137649e7f53e8a41c476c7d3d42aa1c83601f36b9c49b
size 2758494592

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3ee3a752e9ed657b400bb65b0dc217f770cabdf758c0242370aebb302885e2d6
size 2605787520

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:55be7f9d6292ed2b51ba5ca434ba4d50c4ac44d19eb02b0809cae8efd1719484
size 2399218048

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6f1a214df680011851990cf8eee230d13e17c4d2b927e66a239eb01ed92c7eec
size 3784829312

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:78eddb112ffdcec263c608cb9f1e7a5157d56f80531a4b50e6b41750fbadbae7
size 3682331008

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1573badd1d71ae3a4c61e499e4def71207639094649f98c7798d65bd16980b82
size 3518753152

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d2cf42186189c7127511127269caf3881a0add93773b7ad59e6a9d0f17a19157
size 3274918272

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e09fb13a066a415f03ce22a079f296885f6362477e75bcec817a3cb2663479a5
size 4677994880

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:899a36764877395fdd21684ceb19cbfadd772e1a47b4dc99bfc09a5c21a930b9
size 4447668608

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:89c3285db7e6552b028332acdfc8b812f54b1af599735cb54f9dabcaed754e0a
size 3179137408

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9f9ca92d278d793ae29681e6efca88683ffb5cd1faca83c305cc9b1d28a10b6f
size 2988820864

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b784e9fb9304440d307894f5882a690bb3bb174ad9b44b819fc07e56e71b1220
size 4321962368

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4015614eceda75a7a1e01040f58e1b55364d6c7e635f5db7baf1967d0a8a4c87
size 4018923904

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6deaedbb8de3bf189aa6a8e8ecfcd21333e4820ef95e0ac1fd4cabc434e74a6c
size 3664505216

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ea18e8925f97b750b7e65ab30776fda461045719817bd20a03ac54896eb06655
size 4675897728

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2d0a62f0594f6ecb0ce1cc899e99eea8c61308ac882e3fbc7a7555970b231f51
size 5130258816

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:039b3a20f5d9bf0652bccb0bdd2de693ffc289967279bcf775d8626a6b943a27
size 4920740224

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7153762672b9b549398f9126f1cdc8020c72ac2c83ce6e204fbb333a172004df
size 4692674944

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fcb64d7e249646df0c8c54b312e1d26bbce4a92606cb1149be496f1b29541de0
size 5732993408

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4580a0d79ad81b7bbd955efab84d93ff744c1c6d24e8e927eacca586d79520a5
size 5599299968

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:19114f0183a5b1eabf1373ab4fb7a0602391c92fc2c1f0ddfc37c3f1bf02574b
size 6596012416

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f59fc2a817ee6e8530697509f72d1f29c017dc451d4c1dc7c1965372fc7fdb1d
size 5015200
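
Each of the blobs in this commit is stored as a Git LFS pointer: three text lines giving the spec version, a sha256 object id, and the byte size (the actual file contents live on the LFS server). A minimal parser sketch, fed the last pointer above (the filename associations are not shown in this view):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:f59fc2a817ee6e8530697509f72d1f29c017dc451d4c1dc7c1965372fc7fdb1d
size 5015200"""

info = parse_lfs_pointer(pointer)
```

`info["size"]` is the on-disk byte count of the real object, which is how hosting UIs display file sizes without downloading the blob.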

README.md (new file, 95 lines)

@@ -0,0 +1,95 @@
---
base_model: s21mind/HexaMind-Llama-3.1-8B-v25-Generalist
language:
- en
library_name: transformers
license: llama3.1
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- safety
- truthfulqa
- mathematics
- reasoning
- open-llm-leaderboard
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/s21mind/HexaMind-Llama-3.1-8B-v25-Generalist
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
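The quants in this repo appear to be single files, but for split GGUFs the concatenation those READMEs describe is a plain byte-level join of the parts in order. A minimal sketch (the `concatenate_parts` helper and the part-naming convention are illustrative, not from this repo):

```python
import shutil

def concatenate_parts(parts, output):
    """Byte-concatenate split GGUF parts into a single file.

    `parts` is assumed to already be in order, e.g.
    ["m.gguf.part1of2", "m.gguf.part2of2"].
    """
    with open(output, "wb") as out:
        for part in parts:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)
```

This is equivalent to `cat part1 part2 > whole.gguf` on a Unix shell; no header rewriting is needed for simple split archives of this kind.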
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/HexaMind-Llama-3.1-8B-v25-Generalist-i1-GGUF/resolve/main/HexaMind-Llama-3.1-8B-v25-Generalist.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
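
In practice the table reads as a memory-budget question: pick the largest quant whose file fits your RAM/VRAM with room left for the KV cache and runtime overhead. A hypothetical helper (the quant subset comes from the table above; the 1.0 GB headroom default is an assumption, not a measurement):

```python
# Approximate download sizes in GB, taken from the table above.
QUANT_SIZES_GB = {
    "i1-IQ1_S": 2.1, "i1-Q2_K": 3.3, "i1-IQ3_M": 3.9,
    "i1-Q4_K_S": 4.8, "i1-Q4_K_M": 5.0, "i1-Q5_K_M": 5.8,
    "i1-Q6_K": 6.7,
}

def largest_fitting_quant(budget_gb, headroom_gb=1.0):
    """Return the largest listed quant fitting the memory budget.

    headroom_gb is a rough allowance for KV cache and runtime
    overhead; None is returned if nothing fits.
    """
    candidates = {q: s for q, s in QUANT_SIZES_GB.items()
                  if s + headroom_gb <= budget_gb}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)
```

With an 8 GB card this selects i1-Q6_K; a 6.5 GB budget lands on i1-Q4_K_M, matching the table's "fast, recommended" note.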
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to common
questions and for requesting quantization of other models.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->