Initialize project; model provided by the ModelHub XC community

Model: mradermacher/Llama-3-Obsidian-i1-GGUF
Source: Original Platform
Author: ModelHub XC
Date: 2026-04-20 04:01:22 +08:00
Commit: b2539edcb5
24 changed files with 198 additions and 0 deletions

.gitattributes vendored Normal file (57 additions)

@@ -0,0 +1,57 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
imatrix.dat filter=lfs diff=lfs merge=lfs -text
Llama-3-Obsidian.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3-Obsidian.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3-Obsidian.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3-Obsidian.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3-Obsidian.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3-Obsidian.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3-Obsidian.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3-Obsidian.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3-Obsidian.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3-Obsidian.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3-Obsidian.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3-Obsidian.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3-Obsidian.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3-Obsidian.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3-Obsidian.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3-Obsidian.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3-Obsidian.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3-Obsidian.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3-Obsidian.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3-Obsidian.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3-Obsidian.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
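The attribute lines above route every matching file through Git LFS instead of storing it directly in the repository. The matching rule can be sketched in Python (a simplified model: in `.gitattributes`, a pattern containing no slash is matched against the basename of a path at any depth; this sketch uses only an excerpt of the patterns and ignores full-path patterns like `saved_model/**/*`):

```python
from fnmatch import fnmatchcase
from pathlib import PurePosixPath

# Excerpt of the patterns listed above. Patterns without a "/" are
# matched against a path's basename at any directory depth.
LFS_PATTERNS = [
    "*.bin", "*.safetensors", "*.zip", "*tfevents*",
    "imatrix.dat",
    "Llama-3-Obsidian.i1-Q2_K.gguf",
]

def is_lfs_tracked(path: str) -> bool:
    """Return True if the path's basename matches any LFS pattern."""
    name = PurePosixPath(path).name
    return any(fnmatchcase(name, pat) for pat in LFS_PATTERNS)

print(is_lfs_tracked("Llama-3-Obsidian.i1-Q2_K.gguf"))  # True
print(is_lfs_tracked("README.md"))                      # False
```

Note the `.gguf` files are listed by exact name rather than via a `*.gguf` wildcard, so each new quant file needs its own attribute line.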


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b8f65cef134d111f91e658191fd48c0435e0bd577e69c2da53b3f089587f6f63
size 2161971776
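Each of the blobs in this commit is stored as a Git LFS pointer file in exactly the three-line format shown above: a spec version, a `sha256` OID, and the payload size in bytes. A minimal parse-and-verify sketch (the helper names are illustrative, not part of any real tool):

```python
import hashlib

def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, _, digest = fields["oid"].partition(":")
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

def matches_pointer(blob: bytes, pointer: dict) -> bool:
    """Check a downloaded payload against the pointer's size and hash."""
    return (len(blob) == pointer["size"]
            and hashlib.new(pointer["algo"], blob).hexdigest() == pointer["digest"])

pointer = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:b8f65cef134d111f91e658191fd48c0435e0bd577e69c2da53b3f089587f6f63\n"
    "size 2161971776\n"
)
print(pointer["size"])  # 2161971776
```

This is handy for confirming that a multi-gigabyte GGUF download completed intact before loading it.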


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ccc8d60d6faf705485b9b8925cc81d1c8db9328d5bccddc38f2574b7266f969
size 2019627584


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aff6a2f4a2ec2c1f6a98393dec3c545c862bb88c477fa9e751018f5c8274e27c
size 2948280896


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e08d2d35198f4e3daca3224078fcc096df6691fac906537b3a9f689de9e5e3da
size 2758488640


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:663fded5a30c501c2c13fef25c9fe28e8fd95ff5a2b2db09149a531b675e6b4e
size 2605781568


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:60a43f03665b84407d7f3d2b3ed3842174f350860d47aca1453acdf6be343c09
size 2399212096


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dd0693e2b6ee550b4229b574c109d1eed955b543136e5c1af3a9eb87279ab258
size 3784823360


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:759d427d082a8e5ea6d5f0c96e907022f0228aa6688ec2fdef738aeb1086710b
size 3682325056


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cc0265fe23395c1eece806bc3a995bdf5383318d8f69bb041838e7c47263bfab
size 3518747200


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e5d2dec017d5d3ca855751ce9170ea05a2bc9d277fad0044c85732c70ef9e822
size 3274912320


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2aead16326d61d1a909854a2436b38f2ac5dfd5a5d6c1e80922eebe6d37df777
size 4447662656


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b0092af9d94f9a712b09be70e4fe4d40d3ebfcdeb530034c2e39da19d0d1323a
size 3179131456


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:463a2d318150c228645720d2847bef98ebaa0ad275d99e0285db0b86b5b2ca78
size 4321956416


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:690365cafa84ddae75c5f11c43fd7a3d4598bc07a443c78719122701e010b248
size 4018917952


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:80a7ae2c54f4dbe829234ea8a1e1757a3ad40097f3f6be8148184b66e1dde174
size 3664499264


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:28d2883ae148f1cc7e28c6e90cf4f6f7b6a46c3f61effcacdd7fc32b87bd3ab8
size 4675891776


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3b3be3e94b1631f22ab217930df117ffd2498d2caca415f2e2ccdf976fba7c37
size 4920734272


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:146716b916517566281b3b53ef86b3d4f4a40dd8f3a8a48625ca72d2a17ef3b6
size 4692668992


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4cdcf9111f547b2f01477e9d110ac0cc7319e05e8a42440fef3f83396efafa9e
size 5732987456


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cad721ae991b739d7dce7ff20eae1d923881004eb1c2a845b5dfcbd93a9ff759
size 5599294016


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3306bad063fefd97a76afba407b6e914d6c90b480886fea0f4d6588bae62fe0f
size 6596006464

README.md Normal file (75 additions)

@@ -0,0 +1,75 @@
---
base_model: Capx/Llama-3-Obsidian
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- general purpose
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->

weighted/imatrix quants of https://huggingface.co/Capx/Llama-3-Obsidian

<!-- provided-files -->

static quants are available at https://huggingface.co/mradermacher/Llama-3-Obsidian-GGUF
## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
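The multi-part concatenation mentioned above is a plain byte-level join of the parts in order. A minimal sketch (this repo's files are single-part, so the `.partNofM` names below are hypothetical stand-ins):

```python
import os
import shutil
import tempfile

def join_parts(parts, dest):
    """Concatenate split files, in order, into one output file."""
    with open(dest, "wb") as out:
        for part in parts:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)  # streams; never loads a whole part into RAM

# Demo with tiny stand-in parts; real multi-part uploads typically use
# suffixes like ".part1of2", ".part2of2".
tmp = tempfile.mkdtemp()
for i, chunk in enumerate([b"GGUF-head", b"-tail"], start=1):
    with open(os.path.join(tmp, f"model.gguf.part{i}of2"), "wb") as f:
        f.write(chunk)

parts = [os.path.join(tmp, f"model.gguf.part{i}of2") for i in (1, 2)]
join_parts(parts, os.path.join(tmp, "model.gguf"))
print(open(os.path.join(tmp, "model.gguf"), "rb").read())  # b'GGUF-head-tail'
```

The same effect is usually achieved with `cat part1 part2 > model.gguf` on Unix; the order of the parts is what matters.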
## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Obsidian-i1-GGUF/resolve/main/Llama-3-Obsidian.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Obsidian-i1-GGUF/resolve/main/Llama-3-Obsidian.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Obsidian-i1-GGUF/resolve/main/Llama-3-Obsidian.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Obsidian-i1-GGUF/resolve/main/Llama-3-Obsidian.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Obsidian-i1-GGUF/resolve/main/Llama-3-Obsidian.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Obsidian-i1-GGUF/resolve/main/Llama-3-Obsidian.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Obsidian-i1-GGUF/resolve/main/Llama-3-Obsidian.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Obsidian-i1-GGUF/resolve/main/Llama-3-Obsidian.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Obsidian-i1-GGUF/resolve/main/Llama-3-Obsidian.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Obsidian-i1-GGUF/resolve/main/Llama-3-Obsidian.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Obsidian-i1-GGUF/resolve/main/Llama-3-Obsidian.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Obsidian-i1-GGUF/resolve/main/Llama-3-Obsidian.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Obsidian-i1-GGUF/resolve/main/Llama-3-Obsidian.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Obsidian-i1-GGUF/resolve/main/Llama-3-Obsidian.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Obsidian-i1-GGUF/resolve/main/Llama-3-Obsidian.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Obsidian-i1-GGUF/resolve/main/Llama-3-Obsidian.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Obsidian-i1-GGUF/resolve/main/Llama-3-Obsidian.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Obsidian-i1-GGUF/resolve/main/Llama-3-Obsidian.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Obsidian-i1-GGUF/resolve/main/Llama-3-Obsidian.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Obsidian-i1-GGUF/resolve/main/Llama-3-Obsidian.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Obsidian-i1-GGUF/resolve/main/Llama-3-Obsidian.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
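A quick way to read the table: dividing file size by parameter count gives an approximate bits-per-weight figure for each quant. A sketch, assuming the roughly 8.03B parameters of a Llama-3-8B-class base model (an assumption, not stated in this card) and decimal gigabytes:

```python
# Sizes (GB) taken from the table above; the parameter count is an
# assumption based on the Llama-3-8B architecture.
N_PARAMS = 8.03e9

sizes_gb = {"i1-IQ1_S": 2.1, "i1-Q4_K_M": 5.0, "i1-Q6_K": 6.7}

for name, gb in sizes_gb.items():
    bpw = gb * 1e9 * 8 / N_PARAMS  # bytes -> bits, per parameter
    print(f"{name}: ~{bpw:.1f} bits/weight")
```

This lines up with the quant names: Q4_K_M lands near 5 bits/weight (the K-quant mixes keep some tensors at higher precision than the nominal 4 bits), and IQ1_S near 2 bits/weight.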
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->

imatrix.dat Normal file (3 additions)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fab2674848e64fb2dd65bb40731f821f5bfddcc37733976cf065216aa6cdbb56
size 4988143