commit bc57c362b515a64208eb7d87f2d423d35dc8f639
Author: ModelHub XC
Date:   Thu May 7 09:44:12 2026 +0800

    Initialize project; model provided by the ModelHub XC community

    Model: mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF
    Source: Original Platform

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..7a4c923
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,60 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.imatrix.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-8B-vl-instruct-abliterated.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-IQ1_M.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-IQ1_M.gguf
new file mode 100644
index 0000000..2b93471
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-IQ1_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1dd882106073b74494577578ffc1fed24b8633a29196e1a86644a1230960a6dc
+size 2256149440
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-IQ1_S.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-IQ1_S.gguf
new file mode 100644
index 0000000..aa9ef50
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-IQ1_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8c820e2116a11b50ad6ac9f88b306223fe2ec7d50d5d2479f4a7c5fa5712cce5
+size 2115771328
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-IQ2_M.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-IQ2_M.gguf
new file mode 100644
index 0000000..b1d624c
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-IQ2_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0f19347b7097d04fdeebc12adb69f674fad46975f28dbd33806f81bad175df10
+size 3051916224
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-IQ2_S.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-IQ2_S.gguf
new file mode 100644
index 0000000..76927e7
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-IQ2_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2d06740dae173a3577c0afdc4d8784e2d40ca16c3922cb14263577ed5dc5df47
+size 2864745408
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-IQ2_XS.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-IQ2_XS.gguf
new file mode 100644
index 0000000..d4d90d0
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-IQ2_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9474d32b172c3aa00e598b80a205f067cd17d7ee2311e227b5b7e24a78d70c86
+size 2696158144
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-IQ2_XXS.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-IQ2_XXS.gguf
new file mode 100644
index 0000000..680561a
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-IQ2_XXS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5c811024f5631dbcb092432832bf1c668c2ee431f0d242e091ccc8721046f79e
+size 2490112960
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-IQ3_M.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-IQ3_M.gguf
new file mode 100644
index 0000000..2148ac6
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-IQ3_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:99e4895629d4a3ad06e7daf4bc7edc1d691f8b3628f103052ef72524de1567b2
+size 3896622016
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-IQ3_S.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-IQ3_S.gguf
new file mode 100644
index 0000000..cce4fe1
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-IQ3_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d3d9f0f4762b96069aaede48c192e0ea14514d58411008ffffa5dbdeb2e0594c
+size 3789667264
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-IQ3_XS.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-IQ3_XS.gguf
new file mode 100644
index 0000000..e6ea01b
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-IQ3_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8dafb363c9bd94e10dd005c0537e289b927796661ae47f792b1b3118b1ef91ab
+size 3626875840
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-IQ3_XXS.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-IQ3_XXS.gguf
new file mode 100644
index 0000000..5789e28
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-IQ3_XXS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:274cc544b4a980d540fc92651275e73a854993c75102e487dab02ceb085edafa
+size 3369634752
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-IQ4_NL.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-IQ4_NL.gguf
new file mode 100644
index 0000000..6dcb598
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-IQ4_NL.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:659dbb62856b4db047db9dbc664ecb16c1661da1e55c1591c3314038818bf9e9
+size 4793625536
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-IQ4_XS.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-IQ4_XS.gguf
new file mode 100644
index 0000000..696f17a
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-IQ4_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f0bbe32a36b7084124ebf6b0367d8156422b51a66e87967a7744c9e32363ca25
+size 4561841088
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-Q2_K.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-Q2_K.gguf
new file mode 100644
index 0000000..18b150a
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-Q2_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0be6bbc4d6c9e021f264c4cf336cad5e8f67a4f5dacd0f18bf52e97714a9bb0e
+size 3281734592
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-Q2_K_S.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-Q2_K_S.gguf
new file mode 100644
index 0000000..40fc1a0
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-Q2_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e35f35ab21e59fbcab3284e4ebacbc171078e0a9bd29fa9a8ccb501838b09ebd
+size 3083553728
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-Q3_K_L.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-Q3_K_L.gguf
new file mode 100644
index 0000000..3e4db59
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-Q3_K_L.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:61e90eb478b797306bb09e223ee663da93d3735b4d9f52bc6a94ff365730e4d0
+size 4431395776
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-Q3_K_M.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-Q3_K_M.gguf
new file mode 100644
index 0000000..33e922d
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-Q3_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9e6a0c4593029077b6b16ce4463ee531f90853c0f46bebda702168d89f3e326d
+size 4124163008
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-Q3_K_S.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-Q3_K_S.gguf
new file mode 100644
index 0000000..6a7e322
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-Q3_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:090e9f6f54b8bfe16faf5f75bf56dd2c65eeb97298c5e4a7ee7cf6be49db3373
+size 3769613248
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-Q4_0.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-Q4_0.gguf
new file mode 100644
index 0000000..b0a551b
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-Q4_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e049fe7d296e9fe43f10e3d2bfee3cce832216e53814c4db05c0fa39e5db2fcb
+size 4787334080
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-Q4_1.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-Q4_1.gguf
new file mode 100644
index 0000000..9efeebb
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-Q4_1.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e4cee69476aa7ac689f06bbf9573b350430b3334e698c2823e95be43fbf68ae0
+size 5247757248
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-Q4_K_M.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-Q4_K_M.gguf
new file mode 100644
index 0000000..dd16b2f
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-Q4_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1a69369be086cfb821e82a1f1046c5de51c60d4e747a2348f7917cf118ca4265
+size 5027785664
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-Q4_K_S.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-Q4_K_S.gguf
new file mode 100644
index 0000000..a91b743
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-Q4_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:02dfb4561ee46c5f44ce34cef507f1ea576833dbff671abf7513b8a497d15678
+size 4802014144
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-Q5_K_M.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-Q5_K_M.gguf
new file mode 100644
index 0000000..65c4248
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-Q5_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1ae0d87481f9033a9147f8bc1188d1f103104779720ce780a2b1e19385781f92
+size 5851114432
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-Q5_K_S.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-Q5_K_S.gguf
new file mode 100644
index 0000000..9c7aa9c
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-Q5_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:34a4bc27d106ed707d9213f972418a37ac23d32c43573eb9c338558653c9d3fa
+size 5720763328
diff --git a/Qwen3-8B-vl-instruct-abliterated.i1-Q6_K.gguf b/Qwen3-8B-vl-instruct-abliterated.i1-Q6_K.gguf
new file mode 100644
index 0000000..639a58e
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.i1-Q6_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:344ba40a3fed35cfdb949ced4ea92ca98ee62cd0e4959cebbb8e9fcbdcba5b9b
+size 6725901248
diff --git a/Qwen3-8B-vl-instruct-abliterated.imatrix.gguf b/Qwen3-8B-vl-instruct-abliterated.imatrix.gguf
new file mode 100644
index 0000000..2be7692
--- /dev/null
+++ b/Qwen3-8B-vl-instruct-abliterated.imatrix.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6cadbaa53cab2464cec62ce8b11b4ab186f2f491fec399cd74be18e9eb41e240
+size 5347200
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..b4dba63
--- /dev/null
+++ b/README.md
@@ -0,0 +1,86 @@
+---
+base_model: Nitral-Archive/Qwen3-8B-vl-instruct-abliterated
+language:
+- en
+library_name: transformers
+mradermacher:
+  readme_rev: 1
+quantized_by: mradermacher
+---
+## About
+
+weighted/imatrix quants of https://huggingface.co/Nitral-Archive/Qwen3-8B-vl-instruct-abliterated
+
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-8B-vl-instruct-abliterated-i1-GGUF).***
+
+static quants are available at https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-GGUF
+
+**This is a vision model - mmproj files (if any) will be in the [static repository](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-GGUF).**
+
+## Usage
+
+If you are unsure how to use GGUF files, refer to one of [TheBloke's
+READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
+more details, including on how to concatenate multi-part files.
+
+## Provided Quants
+
+(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
+
+| Link | Type | Size/GB | Notes |
+|:-----|:-----|--------:|:------|
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-IQ1_S.gguf) | i1-IQ1_S | 2.2 | for the desperate |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-IQ1_M.gguf) | i1-IQ1_M | 2.4 | mostly desperate |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.6 | |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.8 | |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.2 | very low quality |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-Q2_K.gguf) | i1-Q2_K | 3.4 | IQ3_XXS probably better |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.5 | lower quality |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.7 | |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.9 | IQ3_XS probably better |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-IQ3_S.gguf) | i1-IQ3_S | 3.9 | beats Q3_K* |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-IQ3_M.gguf) | i1-IQ3_M | 4.0 | |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.2 | IQ3_S probably better |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.5 | IQ3_M probably better |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.7 | |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-Q4_0.gguf) | i1-Q4_0 | 4.9 | fast, low quality |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.9 | prefer IQ4_XS |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.9 | optimal size/speed/quality |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.1 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-Q4_1.gguf) | i1-Q4_1 | 5.3 | |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.8 | |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.0 | |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-vl-instruct-abliterated-i1-GGUF/resolve/main/Qwen3-8B-vl-instruct-abliterated.i1-Q6_K.gguf) | i1-Q6_K | 6.8 | practically like static Q6_K |
+
+Here is a handy graph by ikawrakow comparing some lower-quality quant
+types (lower is better):
+
+![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
+
+And here are Artefact2's thoughts on the matter:
+https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
+
+## FAQ / Model Request
+
+See https://huggingface.co/mradermacher/model_requests for some answers to
+questions you might have and/or if you want some other model quantized.
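Each quant above is stored in the repository as a Git LFS pointer that records the real file's sha256 and byte size. After downloading a multi-gigabyte GGUF, those two values can be used to confirm the download is intact. A minimal sketch (the function name is illustrative, not part of any upstream tooling; the commented example values are taken from the i1-Q4_K_M pointer in this commit):

```python
import hashlib
from pathlib import Path

def verify_download(path: str, expected_sha256: str, expected_size: int) -> bool:
    """Check a downloaded file against its Git LFS pointer metadata."""
    p = Path(path)
    # Cheap size check first, before hashing gigabytes of data.
    if p.stat().st_size != expected_size:
        return False
    h = hashlib.sha256()
    with p.open("rb") as f:
        # Hash in 1 MiB chunks to keep memory use flat for large files.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

# Example values from the i1-Q4_K_M pointer above:
# verify_download("Qwen3-8B-vl-instruct-abliterated.i1-Q4_K_M.gguf",
#                 "1a69369be086cfb821e82a1f1046c5de51c60d4e747a2348f7917cf118ca4265",
#                 5027785664)
```

Note that `git lfs pull` performs an equivalent oid check itself; manual verification is mainly useful for files fetched over plain HTTP (e.g. via the `resolve/main/` links in the table).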
+
+## Thanks
+
+I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
+me use its servers and providing upgrades to my workstation to enable
+this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.