From 82661ca07bf1ff29213efd39b37c0e6456b74440 Mon Sep 17 00:00:00 2001
From: ModelHub XC
Date: Sat, 9 May 2026 15:08:40 +0800
Subject: [PATCH] Initialize project; model provided by the ModelHub XC
 community
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Model: Melvin56/Qwen3-0.6B-abliterated-GGUF
Source: Original Platform
---
 .gitattributes                     | 44 ++++++++++++++++++++
 README.md                          | 66 ++++++++++++++++++++++++++++++
 imatrix.dat                        |  3 ++
 qwen3-0.6b-abliterated-BF16.gguf   |  3 ++
 qwen3-0.6b-abliterated-IQ4_XS.gguf |  3 ++
 qwen3-0.6b-abliterated-Q2_K.gguf   |  3 ++
 qwen3-0.6b-abliterated-Q3_K_M.gguf |  3 ++
 qwen3-0.6b-abliterated-Q4_K_M.gguf |  3 ++
 qwen3-0.6b-abliterated-Q5_K_M.gguf |  3 ++
 qwen3-0.6b-abliterated-Q6_K.gguf   |  3 ++
 qwen3-0.6b-abliterated-Q8_0.gguf   |  3 ++
 11 files changed, 137 insertions(+)
 create mode 100644 .gitattributes
 create mode 100644 README.md
 create mode 100644 imatrix.dat
 create mode 100644 qwen3-0.6b-abliterated-BF16.gguf
 create mode 100644 qwen3-0.6b-abliterated-IQ4_XS.gguf
 create mode 100644 qwen3-0.6b-abliterated-Q2_K.gguf
 create mode 100644 qwen3-0.6b-abliterated-Q3_K_M.gguf
 create mode 100644 qwen3-0.6b-abliterated-Q4_K_M.gguf
 create mode 100644 qwen3-0.6b-abliterated-Q5_K_M.gguf
 create mode 100644 qwen3-0.6b-abliterated-Q6_K.gguf
 create mode 100644 qwen3-0.6b-abliterated-Q8_0.gguf

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..110bbcd
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,44 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+qwen3-0.6b-abliterated-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+qwen3-0.6b-abliterated-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+qwen3-0.6b-abliterated-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+qwen3-0.6b-abliterated-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+qwen3-0.6b-abliterated-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+qwen3-0.6b-abliterated-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+qwen3-0.6b-abliterated-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+imatrix.dat filter=lfs diff=lfs merge=lfs -text
+qwen3-0.6b-abliterated-BF16.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..a869b70
--- /dev/null
+++ b/README.md
@@ -0,0 +1,66 @@
+---
+license: apache-2.0
+license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
+pipeline_tag: text-generation
+base_model:
+- huihui-ai/Qwen3-0.6B-abliterated
+tags:
+- chat
+- abliterated
+- uncensored
+extra_gated_prompt: >-
+  **Usage Warnings**
+
+
+  **Risk of Sensitive or Controversial Outputs**: This model's safety
+  filtering has been significantly reduced, so it may generate sensitive,
+  controversial, or inappropriate content. Users should exercise caution and
+  rigorously review generated outputs.
+
+  **Not Suitable for All Audiences**: Due to limited content filtering, the
+  model's outputs may be inappropriate for public settings, underage users, or
+  applications requiring high security.
+
+  **Legal and Ethical Responsibilities**: Users must ensure their usage
+  complies with local laws and ethical standards. Generated content may carry
+  legal or ethical risks, and users are solely responsible for any consequences.
+
+  **Research and Experimental Use**: This model is recommended for research,
+  testing, or controlled environments; avoid using it directly in production
+  or public-facing commercial applications.
+
+  **Monitoring and Review Recommendations**: Users are strongly advised to
+  monitor model outputs in real time and conduct manual reviews when necessary
+  to prevent the dissemination of inappropriate content.
+
+  **No Default Safety Guarantees**: Unlike standard models, this model has not
+  undergone rigorous safety optimization. huihui.ai bears no responsibility for
+  any consequences arising from its use.
+---
+
+# Melvin56/Qwen3-0.6B-abliterated-GGUF
+
+Original Model: [huihui-ai/Qwen3-0.6B-abliterated](https://huggingface.co/huihui-ai/Qwen3-0.6B-abliterated)
+
+Llama.cpp build: 0208355 (5342)
+
+I used imatrix to create all these quants using this [Dataset](https://gist.github.com/tristandruyen/9e207a95c7d75ddf37525d353e00659c/#file-calibration_data_v5_rc-txt).
+
+---
+
+|          | CPU (AVX2) | CPU (ARM NEON) | Metal  | cuBLAS | rocBLAS | SYCL     | CLBlast | Vulkan | Kompute |
+| :------- | :--------: | :------------: | :----: | :----: | :-----: | :------: | :-----: | :----: | :-----: |
+| K-quants |     ✅     |       ✅       |   ✅   |   ✅   |   ✅    |    ✅    | ✅ 🐢⁵  | ✅ 🐢⁵ |   ❌    |
+| I-quants |   ✅ 🐢⁴   |     ✅ 🐢⁴     | ✅ 🐢⁴ |   ✅   |   ✅    | Partial¹ |   ❌    |   ❌   |   ❌    |
+```
+✅: feature works
+❌: feature does not work
+❓: unknown, please contribute if you can test it yourself
+🐢: feature is slow
+¹: IQ3_S and IQ1_S, see #5886
+²: Only with -ngl 0
+³: Inference is 50% slower
+⁴: Slower than K-quants of comparable size
+⁵: Slower than cuBLAS/rocBLAS on similar cards
+⁶: Only q8_0 and iq4_nl
+```
\ No newline at end of file
diff --git a/imatrix.dat b/imatrix.dat
new file mode 100644
index 0000000..23d183a
--- /dev/null
+++ b/imatrix.dat
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aa1d71c1015aa47e030a20ffc332e5391baab3be40b46c150104e15fc32b2ec1
+size 1153405
diff --git a/qwen3-0.6b-abliterated-BF16.gguf b/qwen3-0.6b-abliterated-BF16.gguf
new file mode 100644
index 0000000..05da6a2
--- /dev/null
+++ b/qwen3-0.6b-abliterated-BF16.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3b3b7ba1f4675f786e7f9b5196cc329707d9189304c2553733ecaa187b56b8fa
+size 1198178400
diff --git a/qwen3-0.6b-abliterated-IQ4_XS.gguf b/qwen3-0.6b-abliterated-IQ4_XS.gguf
new file mode 100644
index 0000000..622a52f
--- /dev/null
+++ b/qwen3-0.6b-abliterated-IQ4_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:efdbd61f9d6f0832c0e75afe9bf9374a1ed6e9ed22fb31f7d2ae4bd863b68e9a
+size 367799648
diff --git a/qwen3-0.6b-abliterated-Q2_K.gguf b/qwen3-0.6b-abliterated-Q2_K.gguf
new file mode 100644
index 0000000..fea544c
--- /dev/null
+++ b/qwen3-0.6b-abliterated-Q2_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:50cfed0a8e2d82957ac8951d0c53ccd8febe37fc143fb7a0a1cae69b5a6d3b3d
+size 296234336
diff --git a/qwen3-0.6b-abliterated-Q3_K_M.gguf b/qwen3-0.6b-abliterated-Q3_K_M.gguf
new file mode 100644
index 0000000..452380b
--- /dev/null
+++ b/qwen3-0.6b-abliterated-Q3_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:40148c5adada8d41a64ff83f6f94669f3b880682aea2ccc2d4bed0881891ccf5
+size 347123040
diff --git a/qwen3-0.6b-abliterated-Q4_K_M.gguf b/qwen3-0.6b-abliterated-Q4_K_M.gguf
new file mode 100644
index 0000000..d212a79
--- /dev/null
+++ b/qwen3-0.6b-abliterated-Q4_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:64964cc6487de249fdb86f7cbc47942705ec9ca42ac1c624c2ad487d5569d6c1
+size 396701024
diff --git a/qwen3-0.6b-abliterated-Q5_K_M.gguf b/qwen3-0.6b-abliterated-Q5_K_M.gguf
new file mode 100644
index 0000000..39686f3
--- /dev/null
+++ b/qwen3-0.6b-abliterated-Q5_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d53f9f874fcd218851d06d91c073b8bf8c0795dcf947bde5586de40bc75e64f
+size 444411232
diff --git a/qwen3-0.6b-abliterated-Q6_K.gguf b/qwen3-0.6b-abliterated-Q6_K.gguf
new file mode 100644
index 0000000..2ba7dda
--- /dev/null
+++ b/qwen3-0.6b-abliterated-Q6_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d6edd26a6fa06a478801766cd1f2a06683e097a191f5a1029a9ba55ac6e011f2
+size 495103328
diff --git a/qwen3-0.6b-abliterated-Q8_0.gguf b/qwen3-0.6b-abliterated-Q8_0.gguf
new file mode 100644
index 0000000..b88d38e
--- /dev/null
+++ b/qwen3-0.6b-abliterated-Q8_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:56b55c2338378e21dcb0da7820f230b022609f4d3146bcc7c52c3cbd1cf3a220
+size 639443296
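
The LFS pointers in this patch record each file's exact byte size, which is enough to estimate the effective bits per weight of every quant. A minimal sketch, assuming the BF16 file stores roughly 2 bytes per parameter (so the parameter count is approximately the BF16 size divided by 2; GGUF metadata overhead is ignored):

```python
# Sizes in bytes, copied from the LFS pointers in the patch above.
SIZES = {
    "BF16":   1_198_178_400,
    "Q8_0":     639_443_296,
    "Q6_K":     495_103_328,
    "Q5_K_M":   444_411_232,
    "Q4_K_M":   396_701_024,
    "IQ4_XS":   367_799_648,
    "Q3_K_M":   347_123_040,
    "Q2_K":     296_234_336,
}

# Assumption: BF16 ≈ 2 bytes/parameter, so this is a rough parameter count.
params = SIZES["BF16"] / 2

for name, size in SIZES.items():
    bpw = size * 8 / params        # effective bits per weight
    ratio = size / SIZES["BF16"]   # fraction of the BF16 footprint
    print(f"{name:7s} {size / 1e6:8.1f} MB  ~{bpw:4.1f} bpw  {ratio:5.1%} of BF16")
```

This is only a sanity check on file sizes; the effective bpw comes out above the nominal quant bit-width because embeddings and some tensors are kept at higher precision, and size alone says nothing about the quality differences between quants.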