commit c03ad6052a97a4888038db8d191c6b2342ed60bd
Author: ModelHub XC
Date:   Tue Apr 28 16:21:06 2026 +0800

    Initialize project; model provided by the ModelHub XC community
    Model: mradermacher/Llama-3-8B-Instruct-RR-Abliterated-GGUF
    Source: Original Platform

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..dd43258
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,47 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Instruct-RR-Abliterated.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Instruct-RR-Abliterated.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Instruct-RR-Abliterated.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Instruct-RR-Abliterated.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Instruct-RR-Abliterated.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Instruct-RR-Abliterated.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Instruct-RR-Abliterated.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Instruct-RR-Abliterated.f16.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Instruct-RR-Abliterated.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Instruct-RR-Abliterated.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Instruct-RR-Abliterated.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Instruct-RR-Abliterated.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/Llama-3-8B-Instruct-RR-Abliterated.IQ4_XS.gguf b/Llama-3-8B-Instruct-RR-Abliterated.IQ4_XS.gguf
new file mode 100644
index 0000000..31d203c
--- /dev/null
+++ b/Llama-3-8B-Instruct-RR-Abliterated.IQ4_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d214dafc6ddac0fb86aa5aa9632c3de0d37a9057afea32b3c7ff04d9aeec0b3e
+size 4484364064
diff --git a/Llama-3-8B-Instruct-RR-Abliterated.Q2_K.gguf b/Llama-3-8B-Instruct-RR-Abliterated.Q2_K.gguf
new file mode 100644
index 0000000..421aa30
--- /dev/null
+++ b/Llama-3-8B-Instruct-RR-Abliterated.Q2_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d798f8d1ef07a2ad83a1d6faaf06f91e871b69dfe674fb2e76adef39dda781f5
+size 3179132704
diff --git a/Llama-3-8B-Instruct-RR-Abliterated.Q3_K_L.gguf b/Llama-3-8B-Instruct-RR-Abliterated.Q3_K_L.gguf
new file mode 100644
index 0000000..49f4712
--- /dev/null
+++ b/Llama-3-8B-Instruct-RR-Abliterated.Q3_K_L.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:09091f7a6d441105bb3ab59a49d0d1d22bc4600be99d35a898aa07e8846ea34b
+size 4321957664
diff --git a/Llama-3-8B-Instruct-RR-Abliterated.Q3_K_M.gguf b/Llama-3-8B-Instruct-RR-Abliterated.Q3_K_M.gguf
new file mode 100644
index 0000000..7e7556d
--- /dev/null
+++ b/Llama-3-8B-Instruct-RR-Abliterated.Q3_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:41b5c8df2d98ec908d01edc19771a0bcac5a4d79777fc0c141991ee2d4e03b3a
+size 4018919200
diff --git a/Llama-3-8B-Instruct-RR-Abliterated.Q3_K_S.gguf b/Llama-3-8B-Instruct-RR-Abliterated.Q3_K_S.gguf
new file mode 100644
index 0000000..622d38e
--- /dev/null
+++ b/Llama-3-8B-Instruct-RR-Abliterated.Q3_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:38421f2752a7c482917dbe1cc369ab2400b8820284de784e108a24f31e544ea9
+size 3664500512
diff --git a/Llama-3-8B-Instruct-RR-Abliterated.Q4_K_M.gguf b/Llama-3-8B-Instruct-RR-Abliterated.Q4_K_M.gguf
new file mode 100644
index 0000000..0c05a7b
--- /dev/null
+++ b/Llama-3-8B-Instruct-RR-Abliterated.Q4_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:edff2372bdfbf70285da09d4b5c4c0c227974fa91ecf863398dc937031158cec
+size 4920735520
diff --git a/Llama-3-8B-Instruct-RR-Abliterated.Q4_K_S.gguf b/Llama-3-8B-Instruct-RR-Abliterated.Q4_K_S.gguf
new file mode 100644
index 0000000..a97fcc4
--- /dev/null
+++ b/Llama-3-8B-Instruct-RR-Abliterated.Q4_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2c49d0525cdd6f0e3f851b96aac94850e0ed5e69b3b2faf6ccc24eafb7019cda
+size 4692670240
diff --git a/Llama-3-8B-Instruct-RR-Abliterated.Q5_K_M.gguf b/Llama-3-8B-Instruct-RR-Abliterated.Q5_K_M.gguf
new file mode 100644
index 0000000..446781a
--- /dev/null
+++ b/Llama-3-8B-Instruct-RR-Abliterated.Q5_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6e5bfa681dde027f567217ab3321fce94a043ee6ba8ea2899641c3ebaf3a937d
+size 5732988704
diff --git a/Llama-3-8B-Instruct-RR-Abliterated.Q5_K_S.gguf b/Llama-3-8B-Instruct-RR-Abliterated.Q5_K_S.gguf
new file mode 100644
index 0000000..1a6ad9c
--- /dev/null
+++ b/Llama-3-8B-Instruct-RR-Abliterated.Q5_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c73a9561707306672c9118bf9ecb69f2dd79e34d3bc6140b329b15c004d31b63
+size 5599295264
diff --git a/Llama-3-8B-Instruct-RR-Abliterated.Q6_K.gguf b/Llama-3-8B-Instruct-RR-Abliterated.Q6_K.gguf
new file mode 100644
index 0000000..32480c5
--- /dev/null
+++ b/Llama-3-8B-Instruct-RR-Abliterated.Q6_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a1e4645fe36c3ff1f916de5543c3b3aa67dcfd4e1e9bfd77cb6849ca2341f378
+size 6596007712
diff --git a/Llama-3-8B-Instruct-RR-Abliterated.Q8_0.gguf b/Llama-3-8B-Instruct-RR-Abliterated.Q8_0.gguf
new file mode 100644
index 0000000..ad963eb
--- /dev/null
+++ b/Llama-3-8B-Instruct-RR-Abliterated.Q8_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6525d958a3e6a5ea9be05dd026104671ce756f147a5db9ccadead3fff6cecf2f
+size 8540772128
diff --git a/Llama-3-8B-Instruct-RR-Abliterated.f16.gguf b/Llama-3-8B-Instruct-RR-Abliterated.f16.gguf
new file mode 100644
index 0000000..de9fcf4
--- /dev/null
+++ b/Llama-3-8B-Instruct-RR-Abliterated.f16.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a13c66b47d2a4d5a0a9a2ce9fcda989cb7a850a2240b7f765862ea4ce8b74bdd
+size 16068892448
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..912fb20
--- /dev/null
+++ b/README.md
@@ -0,0 +1,80 @@
+---
+base_model: wangzhang/Llama-3-8B-Instruct-RR-Abliterated
+language:
+- en
+- zh
+library_name: transformers
+license: llama3
+mradermacher:
+  readme_rev: 1
+quantized_by: mradermacher
+tags:
+- abliterated
+- abliterix
+- circuit-breakers
+- representation-rerouting
+- safety-removed
+- llama3
+---
+## About
+
+static quants of https://huggingface.co/wangzhang/Llama-3-8B-Instruct-RR-Abliterated
+
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-3-8B-Instruct-RR-Abliterated-GGUF).***
+
+weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-8B-Instruct-RR-Abliterated-i1-GGUF
+
+## Usage
+
+If you are unsure how to use GGUF files, refer to one of [TheBloke's
+READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
+more details, including on how to concatenate multi-part files.
+
+## Provided Quants
+
+(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
+
+| Link | Type | Size/GB | Notes |
+|:-----|:-----|--------:|:------|
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-RR-Abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-RR-Abliterated.Q2_K.gguf) | Q2_K | 3.3 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-RR-Abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-RR-Abliterated.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-RR-Abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-RR-Abliterated.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-RR-Abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-RR-Abliterated.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-RR-Abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-RR-Abliterated.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-RR-Abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-RR-Abliterated.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-RR-Abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-RR-Abliterated.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-RR-Abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-RR-Abliterated.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-RR-Abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-RR-Abliterated.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-RR-Abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-RR-Abliterated.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-RR-Abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-RR-Abliterated.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-RR-Abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-RR-Abliterated.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
+
+Here is a handy graph by ikawrakow comparing some lower-quality quant
+types (lower is better):
+
+![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
+
+And here are Artefact2's thoughts on the matter:
+https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
+
+## FAQ / Model Request
+
+See https://huggingface.co/mradermacher/model_requests for answers to
+questions you might have and/or if you want some other model quantized.
+
+## Thanks
+
+I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
+me use its servers and providing upgrades to my workstation, enabling
+this work in my free time.
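The Usage section of the README mentions concatenating multi-part GGUF files; the files in this repo are single-part, but for larger quants split into parts, joining them is plain byte concatenation in part order. A minimal sketch (the part names below are hypothetical and the contents are dummy stand-ins, only to make the example self-contained):

```shell
# Illustrative only: simulate a split model with two dummy parts.
# Real multi-part quants use names like model.gguf.part1of2.
printf 'part1-' > model.gguf.part1of2
printf 'part2'  > model.gguf.part2of2

# Join the parts in order; the result is the usable single GGUF file.
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
```

The joined file can then be loaded as usual; the part files may be deleted afterwards.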