Compare commits


10 Commits

Author             SHA1        Message               Date
team mradermacher  f5c59d7804  auto-patch README.md  2025-01-30 15:09:27 +00:00
team mradermacher  b86b8514df  uploaded from rich1   2025-01-30 15:05:57 +00:00
team mradermacher  09044cda21  uploaded from rich1   2025-01-30 15:05:24 +00:00
team mradermacher  a7daa23f4c  uploaded from rich1   2025-01-30 14:55:47 +00:00
team mradermacher  aee9838b60  uploaded from rich1   2025-01-30 14:54:53 +00:00
team mradermacher  84608ff2c7  uploaded from rich1   2025-01-30 14:47:08 +00:00
team mradermacher  9f7b37cc4f  uploaded from rich1   2025-01-30 14:43:07 +00:00
team mradermacher  0af43b97c2  uploaded from rich1   2025-01-30 14:40:29 +00:00
team mradermacher  2e21b064f6  uploaded from rich1   2025-01-30 14:39:37 +00:00
team mradermacher  c17da33693  uploaded from rich1   2025-01-30 14:34:47 +00:00
11 changed files with 108 additions and 0 deletions

9
.gitattributes vendored

@@ -49,3 +49,12 @@ L3.1-Artemis-h-8B.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
L3.1-Artemis-h-8B.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
L3.1-Artemis-h-8B.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
L3.1-Artemis-h-8B.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
L3.1-Artemis-h-8B.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
L3.1-Artemis-h-8B.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
L3.1-Artemis-h-8B.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
L3.1-Artemis-h-8B.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
L3.1-Artemis-h-8B.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
L3.1-Artemis-h-8B.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
L3.1-Artemis-h-8B.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
L3.1-Artemis-h-8B.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
L3.1-Artemis-h-8B.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
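Each added line above follows the Git LFS tracking pattern `<path> filter=lfs diff=lfs merge=lfs -text`, which routes the file through the LFS clean/smudge filter and marks it as binary. Such rules are normally appended with `git lfs track "<pattern>"`; as a minimal sketch of the same bookkeeping (the `track_with_lfs` helper is hypothetical, not part of any LFS tooling):

```python
from pathlib import Path

def track_with_lfs(pattern: str, gitattributes: Path = Path(".gitattributes")) -> None:
    """Append an LFS tracking rule, mirroring what `git lfs track` writes."""
    rule = f"{pattern} filter=lfs diff=lfs merge=lfs -text\n"
    existing = gitattributes.read_text() if gitattributes.exists() else ""
    if rule not in existing:  # keep the file idempotent across repeated uploads
        with gitattributes.open("a") as fh:
            fh.write(rule)

track_with_lfs("L3.1-Artemis-h-8B.i1-Q5_K_S.gguf")
```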

3
L3.1-Artemis-h-8B.i1-IQ1_S.gguf

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0792eb88a62e5fbde7e774e28e691110611bb0c5594105d43de5f222490926b7
size 2019630080
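Each of the nine GGUF additions is stored as a Git LFS pointer like the one above: a three-line stub recording the spec version, the SHA-256 of the actual blob, and its size in bytes, while the blob itself lives in LFS storage. A minimal sketch of reading those fields (the `parse_lfs_pointer` helper is illustrative, not part of Git LFS):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS v1 pointer into its version, hash, and size fields."""
    fields = dict(line.partition(" ")[::2] for line in text.strip().splitlines())
    algo, _, digest = fields["oid"].partition(":")  # e.g. "sha256:0792eb88..."
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size_bytes": int(fields["size"])}

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:0792eb88a62e5fbde7e774e28e691110611bb0c5594105d43de5f222490926b7
size 2019630080"""
print(parse_lfs_pointer(pointer))  # size_bytes == 2019630080, ~2.0 GB on disk
```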

3
L3.1-Artemis-h-8B.i1-IQ2_S.gguf

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f2ee281c1fb5f33ca5257574d98de9d8b4e148ffe72fbd572be4353ea59a67cd
size 2758491136

3
L3.1-Artemis-h-8B.i1-IQ2_XS.gguf

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:49c50357781da9cbf7d8d171f9c7d3d4ac50e8e7ee91707064fab527b25655f7
size 2605784064

3
L3.1-Artemis-h-8B.i1-IQ3_S.gguf

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bf0e723fd7e4f1343c0deec3fe1dd8c28b04dfa6805b72fd17766ed28b37933b
size 3682327552

3
L3.1-Artemis-h-8B.i1-IQ3_XS.gguf

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5626e0b2ef68d5b27b199aeb14d9b4a31cc2d3ca0e79415a377ec093ac3d5d19
size 3518749696

3
L3.1-Artemis-h-8B.i1-Q4_0.gguf

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3c55cad282ad0c043b3f43009bc4972e5a1785c0c46b415a6f05ea9da9b958b8
size 4675894272

3
L3.1-Artemis-h-8B.i1-Q4_1.gguf

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:83de369cbd7e1dbcb2b5c3d085ea994bc1ae3e7211b49c0978938ead496524b7
size 5130255360

3
L3.1-Artemis-h-8B.i1-Q5_K_M.gguf

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:07dcc898e47a20191339540aeb0fffc3b40cad134eb4bed7a64bbd896eec8630
size 5732989952

3
L3.1-Artemis-h-8B.i1-Q5_K_S.gguf

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0ecec39a7518f0d10f69f5dc0bcc10617e8197dd483250faa8541dee8038069b
size 5599296512

72
README.md

@@ -1,6 +1,78 @@
---
base_model: mergekit-community/L3.1-Artemis-h-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mergekit-community/L3.1-Artemis-h-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
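For multi-part files in these repos (named like `model.gguf.part1of2`), the parts are plain byte-splits, so rejoining them amounts to concatenation (`cat model.gguf.part* > model.gguf`). A Python sketch, assuming that `partNofM` naming convention:

```python
import re
import shutil
from pathlib import Path

def join_parts(first_part: Path) -> Path:
    """Rejoin a byte-split GGUF: model.gguf.part1of2 + part2of2 -> model.gguf."""
    m = re.fullmatch(r"(.+\.gguf)\.part1of(\d+)", first_part.name)
    if m is None:
        raise ValueError("expected a name like model.gguf.part1ofN")
    stem, total = m.group(1), int(m.group(2))
    out = first_part.with_name(stem)
    with out.open("wb") as dst:
        for i in range(1, total + 1):
            with first_part.with_name(f"{stem}.part{i}of{total}").open("rb") as src:
                shutil.copyfileobj(src, dst)  # stream each part in order
    return out
```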
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Artemis-h-8B-i1-GGUF/resolve/main/L3.1-Artemis-h-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
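Any single quant from the table can also be fetched programmatically with the `huggingface_hub` client; the Q4_K_M file below is just an example choice:

```python
from huggingface_hub import hf_hub_download

# Download one quant from this repo; take the filename from the table above.
path = hf_hub_download(
    repo_id="mradermacher/L3.1-Artemis-h-8B-i1-GGUF",
    filename="L3.1-Artemis-h-8B.i1-Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF
```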
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to
questions you might have, or if you want another model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->