commit 1ffacc10ce701217b98c66d57e32bc4381cce340
Author: ModelHub XC
Date:   Wed Apr 22 12:22:47 2026 +0800

    Initialize project; model provided by the ModelHub XC community
    Model: mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF
    Source: Original Platform

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..996b085
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,60 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+imatrix.dat filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Uncensored-0.3c.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-IQ1_M.gguf b/Llama-3-8B-Uncensored-0.3c.i1-IQ1_M.gguf
new file mode 100644
index 0000000..e5cdfb1
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-IQ1_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b7bf3ae2441858062e48431a24c40bf96fe0900caadba921b33b2dc6df509079
+size 2161975328
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-IQ1_S.gguf b/Llama-3-8B-Uncensored-0.3c.i1-IQ1_S.gguf
new file mode 100644
index 0000000..40c8486
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-IQ1_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e5f986200d485578337d01058b89e283dd788f4ba64e5244aeed7eb5d8b92e46
+size 2019631136
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-IQ2_M.gguf b/Llama-3-8B-Uncensored-0.3c.i1-IQ2_M.gguf
new file mode 100644
index 0000000..17b4288
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-IQ2_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bb7a45cc51a4a4c6db1baeb1d9eb55c6da044b151c46fd59ab347f094a81cdb6
+size 2948284448
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-IQ2_S.gguf b/Llama-3-8B-Uncensored-0.3c.i1-IQ2_S.gguf
new file mode 100644
index 0000000..3f3c6c2
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-IQ2_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2d8041374398b6052e53beee6674062625d30994d191f59b1d1d9e9fd0571b73
+size 2758492192
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-IQ2_XS.gguf b/Llama-3-8B-Uncensored-0.3c.i1-IQ2_XS.gguf
new file mode 100644
index 0000000..eba54fa
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-IQ2_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:48ccca0cc70a7d0882235aa33141edd711b6486066c0a5ea69a5b256fe2c55f7
+size 2605785120
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-IQ2_XXS.gguf b/Llama-3-8B-Uncensored-0.3c.i1-IQ2_XXS.gguf
new file mode 100644
index 0000000..f03b40c
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-IQ2_XXS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:407636ffcbaf6f686d79729015a4cabc8f655e547a4e737eddbc935e35bd3e56
+size 2399215648
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-IQ3_M.gguf b/Llama-3-8B-Uncensored-0.3c.i1-IQ3_M.gguf
new file mode 100644
index 0000000..f4ec19b
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-IQ3_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2493c33db25f147e2396acc4117747e3dfde2452920178a45cd42098b06650e5
+size 3784826912
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-IQ3_S.gguf b/Llama-3-8B-Uncensored-0.3c.i1-IQ3_S.gguf
new file mode 100644
index 0000000..cdc8326
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-IQ3_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:86fa5c400308f14933ab6258ad9eeb3571d310e5246d2cec185a1c7a599c0de8
+size 3682328608
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-IQ3_XS.gguf b/Llama-3-8B-Uncensored-0.3c.i1-IQ3_XS.gguf
new file mode 100644
index 0000000..72e28a5
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-IQ3_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e043d3320c73bcee67f12652db277df2b93cd503bfff6ab3507c4986a14ea167
+size 3518750752
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-IQ3_XXS.gguf b/Llama-3-8B-Uncensored-0.3c.i1-IQ3_XXS.gguf
new file mode 100644
index 0000000..610b6ed
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-IQ3_XXS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:007551412852fe9fc8550e5be396a174733e1f58b423d82e6b2e044df263fb28
+size 3274915872
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-IQ4_NL.gguf b/Llama-3-8B-Uncensored-0.3c.i1-IQ4_NL.gguf
new file mode 100644
index 0000000..1f7c65a
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-IQ4_NL.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8faea531f79a3bb5846b5ab767252a0f3b37291fe78e99cbc4abb865086346c0
+size 4677992480
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-IQ4_XS.gguf b/Llama-3-8B-Uncensored-0.3c.i1-IQ4_XS.gguf
new file mode 100644
index 0000000..a99bcda
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-IQ4_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8b05d114a05d9ba1679f93cc37970ef3fa05f04052faced592a03aa2bf6a4ad1
+size 4447666208
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-Q2_K.gguf b/Llama-3-8B-Uncensored-0.3c.i1-Q2_K.gguf
new file mode 100644
index 0000000..374dfed
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-Q2_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e9436422e06da519b22d4aca9b473057fd03eb2b4fdd9a941be0ce17bd87ddf7
+size 3179135008
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-Q2_K_S.gguf b/Llama-3-8B-Uncensored-0.3c.i1-Q2_K_S.gguf
new file mode 100644
index 0000000..c131258
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-Q2_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c52b6f199ea7b354714fcedf79764806a418f1d243f63a559f3bb3a10e885510
+size 2988818464
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-Q3_K_L.gguf b/Llama-3-8B-Uncensored-0.3c.i1-Q3_K_L.gguf
new file mode 100644
index 0000000..58edfad
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-Q3_K_L.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:68c9a8369053c4a65368ac15c90cd99da783d940104cdcc96bca0566c79bb233
+size 4321959968
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-Q3_K_M.gguf b/Llama-3-8B-Uncensored-0.3c.i1-Q3_K_M.gguf
new file mode 100644
index 0000000..05719c7
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-Q3_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:940dbf57b4e1f7beadbf12cc59cb452e9b8aa6630f3b0fbb493b95580f916818
+size 4018921504
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-Q3_K_S.gguf b/Llama-3-8B-Uncensored-0.3c.i1-Q3_K_S.gguf
new file mode 100644
index 0000000..b08ca7f
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-Q3_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bfdccbfd6f41af90c74cdbc290ee7dd3d6861398ce3214d06346979af77bd7cd
+size 3664502816
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-Q4_0.gguf b/Llama-3-8B-Uncensored-0.3c.i1-Q4_0.gguf
new file mode 100644
index 0000000..a0f486f
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-Q4_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:64434c9f6817d61c6b48e049b2434e7543079e1d00673d8d3306ad1083a9f2f8
+size 4675895328
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-Q4_1.gguf b/Llama-3-8B-Uncensored-0.3c.i1-Q4_1.gguf
new file mode 100644
index 0000000..61d6872
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-Q4_1.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:992f06c5fad9d0439a6e8ca501faa316bb112ef677772392904945fe8dd50e8e
+size 5130256416
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-Q4_K_M.gguf b/Llama-3-8B-Uncensored-0.3c.i1-Q4_K_M.gguf
new file mode 100644
index 0000000..ee42310
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-Q4_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cd88c4bb1e544cf4d17f8bec0c19687b849c83cc5657ef0ce357c0f36bbe881c
+size 4920737824
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-Q4_K_S.gguf b/Llama-3-8B-Uncensored-0.3c.i1-Q4_K_S.gguf
new file mode 100644
index 0000000..7d4a615
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-Q4_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4de85960d05ee24c32b473db92df571ebcbc6ff7c2f0c5a8575400d5d61d71e0
+size 4692672544
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-Q5_K_M.gguf b/Llama-3-8B-Uncensored-0.3c.i1-Q5_K_M.gguf
new file mode 100644
index 0000000..5497c88
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-Q5_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7111269c83f39fb016cc804f6b09a6086edc91db4d7afaf8c701209779da9cd
+size 5732991008
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-Q5_K_S.gguf b/Llama-3-8B-Uncensored-0.3c.i1-Q5_K_S.gguf
new file mode 100644
index 0000000..411c7d0
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-Q5_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a96a9d7b33652f15163a74524b2ecaa70450f3220ab4072dae56b070688ec7af
+size 5599297568
diff --git a/Llama-3-8B-Uncensored-0.3c.i1-Q6_K.gguf b/Llama-3-8B-Uncensored-0.3c.i1-Q6_K.gguf
new file mode 100644
index 0000000..85cdebf
--- /dev/null
+++ b/Llama-3-8B-Uncensored-0.3c.i1-Q6_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:90f4c663d3a50078d96b47a9d69ec1d6c25c0ba4390478e838ffe587e9b39b60
+size 6596010016
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..9eb4139
--- /dev/null
+++ b/README.md
@@ -0,0 +1,78 @@
+---
+base_model: MrRobotoAI/Llama-3-8B-Uncensored-0.3c
+language:
+- en
+library_name: transformers
+quantized_by: mradermacher
+tags:
+- mergekit
+- merge
+---
+## About
+
+weighted/imatrix quants of https://huggingface.co/MrRobotoAI/Llama-3-8B-Uncensored-0.3c
+
+static quants are available at https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-GGUF
+
+## Usage
+
+If you are unsure how to use GGUF files, refer to one of [TheBloke's
+READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
+more details, including on how to concatenate multi-part files.
+
+## Provided Quants
+
+(sorted by size, not necessarily quality. IQ-quants are often preferable
+over similar-sized non-IQ quants)
+
+| Link | Type | Size/GB | Notes |
+|:-----|:-----|--------:|:------|
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Uncensored-0.3c-i1-GGUF/resolve/main/Llama-3-8B-Uncensored-0.3c.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
+
+Here is a handy graph by ikawrakow comparing some lower-quality quant
+types (lower is better):
+
+![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
+
+And here are Artefact2's thoughts on the matter:
+https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
+
+## FAQ / Model Request
+
+See https://huggingface.co/mradermacher/model_requests for some answers to
+questions you might have and/or if you want some other model quantized.
+
+## Thanks
+
+I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
+me use its servers and providing upgrades to my workstation to enable
+this work in my free time. Additional thanks to
+[@nicoboss](https://huggingface.co/nicoboss) for giving me access to his
+private supercomputer, enabling me to provide many more imatrix quants,
+at much higher quality, than I would otherwise be able to.
diff --git a/imatrix.dat b/imatrix.dat
new file mode 100644
index 0000000..a4e8008
--- /dev/null
+++ b/imatrix.dat
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d970cf2f672ba4b1948100eb1741da1459e1bd4be21b91f908f148cf6a76819
+size 4988157
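The "Provided Quants" table above lists file sizes in GB; a quick way to sanity-check what a quant type actually delivers is to convert the exact GGUF file sizes (the `size` fields in the LFS pointers) into effective bits per weight. The sketch below is illustrative only: the file sizes are taken from this commit, while the parameter count of ~8.03B is an assumption for a Llama 3 8B model, and the ratio includes non-weight data (metadata, higher-precision embedding/output tensors), so it runs a bit above a quant's nominal bit width.

```python
# Effective whole-file bits per weight for a few of the provided quants.
# File sizes (bytes) come from the LFS pointers in this commit; the
# parameter count is an ASSUMPTION (~8.03B for Llama 3 8B, approximate).
SIZES_BYTES = {
    "i1-IQ1_S": 2_019_631_136,
    "i1-Q4_K_M": 4_920_737_824,
    "i1-Q6_K": 6_596_010_016,
}
PARAMS = 8.03e9  # assumed parameter count, not taken from this repo


def bits_per_weight(size_bytes: int, n_params: float = PARAMS) -> float:
    """Bits per weight averaged over the whole file.

    This includes GGUF metadata and any tensors stored at higher
    precision, so it overstates the nominal quant bit width slightly.
    """
    return size_bytes * 8 / n_params


for name, size in SIZES_BYTES.items():
    print(f"{name}: ~{bits_per_weight(size):.2f} bits/weight")
# i1-IQ1_S:  ~2.01 bits/weight
# i1-Q4_K_M: ~4.90 bits/weight
# i1-Q6_K:   ~6.57 bits/weight
```

This is why, for example, the ~4.9 GB Q4_K_M file lands near 5 bits per weight rather than exactly 4: K-quants mix block scales and some higher-precision tensors into the file.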