Initialize the project; model provided by the ModelHub XC community
Model: mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF Source: Original Platform
.gitattributes (vendored, new file, 60 lines)
@@ -0,0 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.imatrix.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
Llama3.1-DeluXeOne-8B.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
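Each of these rules routes matching files through Git LFS instead of storing them in the repository directly. A minimal sketch of how such basename globs select files, using Python's `fnmatch` as a stand-in for gitattributes matching (an approximation: it handles simple `*.ext` patterns but not the `**` directory glob, and the pattern list here is a hypothetical subset):

```python
from fnmatch import fnmatch

# A hypothetical subset of the `*.ext`-style patterns from the .gitattributes
# above; fnmatch approximates gitattributes matching for plain basename globs.
LFS_PATTERNS = ["*.gguf", "*.safetensors", "*.bin", "*.zip"]

def routed_through_lfs(path: str) -> bool:
    """Return True if the file's basename matches any LFS pattern."""
    basename = path.rsplit("/", 1)[-1]
    return any(fnmatch(basename, pat) for pat in LFS_PATTERNS)

print(routed_through_lfs("Llama3.1-DeluXeOne-8B.i1-Q4_K_M.gguf"))  # True
print(routed_through_lfs("README.md"))  # False
```

In practice these lines are usually generated by `git lfs track "*.gguf"` rather than written by hand.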
Llama3.1-DeluXeOne-8B.i1-IQ1_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bd890be2fbde7cd600c9faa145a802c4fc7547d89d3f05bf84424dc4a14618a3
size 2161977408
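What the diff shows for each model file is not the weights themselves but a three-line git-lfs pointer: a spec version, the SHA-256 digest of the real blob, and its size in bytes. A small sketch that parses such a pointer (the parser name is ours, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a git-lfs pointer file into its space-separated key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer from the IQ1_M file above
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:bd890be2fbde7cd600c9faa145a802c4fc7547d89d3f05bf84424dc4a14618a3
size 2161977408"""

info = parse_lfs_pointer(pointer)
print(info["oid"][:13])     # sha256:bd890b
print(int(info["size"]))    # 2161977408  (about 2.0 GiB)
```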
Llama3.1-DeluXeOne-8B.i1-IQ1_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ad7f1eed46afb76694aaee631ec57ec0ba1c3b235cde101ae1f46e8735cf75f5
size 2019633216

Llama3.1-DeluXeOne-8B.i1-IQ2_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:295cfbf25ad6524390433f15dbc0a2b5561009a0adde0cc43be4db096477ecac
size 2948286528

Llama3.1-DeluXeOne-8B.i1-IQ2_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2a0d4e8109c66814cde00144039c994c09396681454cd41a3c9825b0b2ce0ec3
size 2758494272

Llama3.1-DeluXeOne-8B.i1-IQ2_XS.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1efa0868444fcdc8b8b71b90592f5ea25d8eb9b4fa7a7f607feefc38a962ae30
size 2605787200

Llama3.1-DeluXeOne-8B.i1-IQ2_XXS.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fb1823031ea5ec585532d499e8acf6b79ff6e8a0acd4a83e955af3cd6d113df6
size 2399217728

Llama3.1-DeluXeOne-8B.i1-IQ3_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:110de970eea20b1872e78e127d70238101d8911dbe7009d4ecd9263dad57f40b
size 3784828992

Llama3.1-DeluXeOne-8B.i1-IQ3_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fc8ee51f277bb0c6f10e1a0bd37d8d2cd0c34f3bd448e7d16fe0736e143e50aa
size 3682330688

Llama3.1-DeluXeOne-8B.i1-IQ3_XS.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:61a5d2c556a87634d7e68ae7f8b821e3b5c23ae9a61fbf94b2471363d9a406d1
size 3518752832

Llama3.1-DeluXeOne-8B.i1-IQ3_XXS.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c5103727d8bb6d4f60c165da8cdd5ab6a776bbe2e813a54050a6966716a8612e
size 3274917952

Llama3.1-DeluXeOne-8B.i1-IQ4_NL.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa83e3928ead7e7468357ba7f2765a0ead6a6f6edcac76c2afb4c3a88e15b147
size 4677994560

Llama3.1-DeluXeOne-8B.i1-IQ4_XS.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5cbe63700d22be819dbd98a1c02ee892f8a255956244a26f4e23f476e9257f76
size 4447668288

Llama3.1-DeluXeOne-8B.i1-Q2_K.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:21bb5314ce26e90f2122ecc205051ca8d03d2e7242f6923baee16b0067ae6e2d
size 3179137088

Llama3.1-DeluXeOne-8B.i1-Q2_K_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:43fc93af8d14a005264af6f76360eecb13d87c2f223dd6b52fe1c95284ac1f02
size 2988820544

Llama3.1-DeluXeOne-8B.i1-Q3_K_L.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:76d7984b128998b6459746f3e11292f3decc6c9ea5fc1e06283367809df1997c
size 4321962048

Llama3.1-DeluXeOne-8B.i1-Q3_K_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8e7f6f996a6688dd627265511661ffb8811d84ccbaa948c3dffe60365257d2c9
size 4018923584

Llama3.1-DeluXeOne-8B.i1-Q3_K_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:640f6fe1a4af66e68f91b33c0d07ff8b9f4e920ef7fae737fd3642ac3ff0d843
size 3664504896

Llama3.1-DeluXeOne-8B.i1-Q4_0.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f16461c736741b8211d2318559b5ca224b6a7226cef4b38cc6167336e6234d8b
size 4675897408

Llama3.1-DeluXeOne-8B.i1-Q4_1.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e54dd76051b108bdb618aac95095cee4dd95c301429edf9de581db67707889bb
size 5130258496

Llama3.1-DeluXeOne-8B.i1-Q4_K_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9ad34ff4dcd416249bf951834fa0a8bbf3ab1392579d642c85a799b713f1323c
size 4920739904

Llama3.1-DeluXeOne-8B.i1-Q4_K_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5449782acb317e8eda37133cdbb7f598b4dcdd0ac9acce9c30a0ef0834ffa4a4
size 4692674624

Llama3.1-DeluXeOne-8B.i1-Q5_K_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5f38e3dbaecdf437e9000479bb4a6d9cdd946cbe063dd3736b0c35dc4ebb2703
size 5732993088

Llama3.1-DeluXeOne-8B.i1-Q5_K_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e6f4d800b782b78dd728a03ddcba387aff94bf054bf731e5e0acf3b087cfdab9
size 5599299648

Llama3.1-DeluXeOne-8B.i1-Q6_K.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c7004e669dcc85b9095fcc42dab9b1c9a0b1bfc86f09c882cf24d4e2d472f0e3
size 6596012096

Llama3.1-DeluXeOne-8B.imatrix.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:68c6897a5ccb8158ed51b6a0e994f939b5ba7dacaa3649d830e1733c3c19bb79
size 5015200
README.md (new file, 89 lines)
@@ -0,0 +1,89 @@
---
base_model: Yuma42/Llama3.1-DeluXeOne-8B
language:
- en
library_name: transformers
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- dphn/Dolphin-X1-8B
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/Yuma42/Llama3.1-DeluXeOne-8B

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama3.1-DeluXeOne-8B-i1-GGUF).***

static quants are available at https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-GGUF
## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
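Every quant in this repo ships as a single file, but when a GGUF is split into parts, the parts are simply joined byte-for-byte, in order. A minimal sketch with hypothetical filenames and dummy contents (a real join would use the actual downloaded part files):

```python
# Hypothetical part names for illustration; this repo's quants are single files.
# Multi-part GGUF downloads are reassembled by concatenating the parts in order.
with open("model.gguf.part1of2", "wb") as f:
    f.write(b"PART1-")
with open("model.gguf.part2of2", "wb") as f:
    f.write(b"PART2")

with open("model.gguf", "wb") as out:
    for part in ["model.gguf.part1of2", "model.gguf.part2of2"]:
        with open(part, "rb") as src:
            out.write(src.read())  # append each part's bytes verbatim

with open("model.gguf", "rb") as f:
    print(f.read())  # b'PART1-PART2'
```

On a Unix shell the same join is a one-liner with `cat part1 part2 > model.gguf`.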
## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DeluXeOne-8B-i1-GGUF/resolve/main/Llama3.1-DeluXeOne-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
|
||||||
|
types (lower is better):
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
And here are Artefact2's thoughts on the matter:
|
||||||
|
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
|
||||||
|
|
||||||
|
## FAQ / Model Request
|
||||||
|
|
||||||
|
See https://huggingface.co/mradermacher/model_requests for some answers to
|
||||||
|
questions you might have and/or if you want some other model quantized.
|
||||||
|
|
||||||
|
## Thanks
|
||||||
|
|
||||||
|
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
|
||||||
|
me use its servers and providing upgrades to my workstation to enable
|
||||||
|
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
|
||||||
|
|
||||||
|
<!-- end -->
|
||||||