Initialize project; model provided by the ModelHub XC community

Model: mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF
Source: Original Platform
ModelHub XC
2026-05-06 06:24:43 +08:00
commit 4d02e77304
27 changed files with 226 additions and 0 deletions

.gitattributes — vendored, normal file, 60 lines
@@ -0,0 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.imatrix.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-8B-Instruct-abliterated.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
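Each line above is a `.gitattributes` rule: any file matching the glob pattern is routed through the Git LFS filter rather than stored directly in the repository. As a rough illustration, the matching can be sketched with Python's `fnmatch` (an approximation only — Git's own pattern syntax differs in details such as `saved_model/**/*`; the pattern list below is a subset of the file above):

```python
from fnmatch import fnmatch

# A subset of the LFS rules from the .gitattributes above; the full file
# also pins each individual .gguf filename explicitly.
LFS_PATTERNS = [
    "*.safetensors",
    "*.bin",
    "*.tar.*",
    "*tfevents*",
    "Qwen3-VL-8B-Instruct-abliterated.imatrix.gguf",
]

def is_lfs_tracked(filename: str) -> bool:
    """True if the filename matches any of the LFS glob patterns."""
    return any(fnmatch(filename, pattern) for pattern in LFS_PATTERNS)

print(is_lfs_tracked("model.safetensors"))  # True
print(is_lfs_tracked("README.md"))          # False
```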

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f1b94a4f3267a6e669e6a261a80b32151ccf3eed010f4d6eeab5e81e299636b2
size 2256149856

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7a0020afcf09e09d36bfa10ed6b91de28de4294915ed3c56d42e64e4ff340312
size 2115771744

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d4430b19a0ff7284fd7966577697a82fdbf8038bee66eadf72feb754ee0056f3
size 3051916640

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dde4e65a2fec7d3b407dec81ae9db8b1ee9a71ef3cd97b9ea4fb8be71047e266
size 2864745824

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ac71d405fdede33e70a8fb97cf33c39bcc7a8568f79254aa3723c4665d5d4808
size 2696158560

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:703af8e2311d0c6724d69d92bc5916326bd2e3a1cdc9022ee279d60df015f874
size 2490113376

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:66357fd9387954c8535b6840cc417fedb1495eacbadfcde664ebf54d34f1186d
size 3896622432

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e6f9b8851ddcdeaa274999c43bedd3de82dcc66fe993f45e2d8a6fe55532ca3c
size 3789667680

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c7b215e4dfa055d30065a44b3907b7ec672f9716e66319c3231971800b3411b4
size 3626876256

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dd1dff34d078136c70cd3d447341f2705d2f97988a580eb2885057a9eb687ed9
size 3369635168

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4f83f94fbaedf6b8291f61ac9e4ab2fd48d1a0983929acbfb25fb6f6dfc09ee2
size 4793625952

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:21b4b5fe8def573ca966cc5977db13a6563b734371808609de8671d79cefc034
size 4561841504

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:77ac47d97757cb57d3495e4c67677a53cf47e8fd54cf59b17bd63a44790f890c
size 3281735008

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:21a66e6f85a37a57db7217dab9d90a5046c169775af2a51f14c3ac95ffbfa44f
size 3083554144

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a8b0203557270df71baa1b9cfa11b5dca560999d110f4585c85535101e44e732
size 4431396192

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:14832d6ea7965f719beddb7b9a0fd7500c07ff0d0170986017ab4b159ce8da9d
size 4124163424

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eca199d5a4488b9ec7de7fa80ec5bdfc6f8a20ab0c8fa0458b21dc2af6e70331
size 3769613664

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:29aa8b06bd2e97b3d5b6875ef1b96fb30e26c5f154fa3844551fe543b9b6b39f
size 4787334496

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f485d637a38d71f5cf9c3167d1ad679f6fab83b0f1597c0dcdbe3e59a008b2e9
size 5247757664

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1b5cbccde9f58a5d4d8f23fcbad70de2c9f583892971905436030a939da642a8
size 5027786080

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4b3e87dff4e6dbe24a69ad3efb24173a3b21330722db9c8e11b2936f669bd659
size 4802014560

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e17c0128d232716e11e09333329919cf03165f8f1526f1502b2d236f00a9cfca
size 5851114848

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d7b36482d684e643012d299f2dab5391741a9a9773fcfc5f1deba227b7e4447e
size 5720763744

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e6cd7d618438fc2077931a999a728a261372e42aafccf217a4434862acf28142
size 6725901664

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e77782a0ee44e1f03d597f134660e143e1d4ca9fe8493759db19762dea27a2c3
size 5347200

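The three-line bodies above are Git LFS pointer files: the binary itself lives in LFS storage, and the repository records only a spec-version line, a SHA-256 object id, and the byte size. A minimal parser for this format (a sketch based on the v1 pointer layout shown above):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs v1 pointer file into algo, digest, and size."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # Sanity check: the version line names the v1 pointer spec.
    assert fields["version"].startswith("https://git-lfs.github.com/spec/v1")
    algo, _, digest = fields["oid"].partition(":")
    return {"algo": algo, "digest": digest, "size": int(fields["size"])}

# The last pointer in this commit (5347200 bytes).
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:e77782a0ee44e1f03d597f134660e143e1d4ca9fe8493759db19762dea27a2c3
size 5347200
"""
print(parse_lfs_pointer(pointer)["size"])  # 5347200
```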
README.md — normal file, 91 lines
@@ -0,0 +1,91 @@
---
base_model: prithivMLmods/Qwen3-VL-8B-Instruct-abliterated-v1
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- abliterated
- v1.0
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/prithivMLmods/Qwen3-VL-8B-Instruct-abliterated-v1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-VL-8B-Instruct-abliterated-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-GGUF
**This is a vision model - mmproj files (if any) will be in the [static repository](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-GGUF).**
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
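The per-file download links in the table below all follow a single naming scheme. A small helper (hypothetical, but reproducing the URL pattern used by the links in this README) that reconstructs the direct download URL for any listed quant type:

```python
# Repository and base filename for this quant collection, as used in
# the download links below.
REPO = "mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF"
BASE = "Qwen3-VL-8B-Instruct-abliterated"

def quant_url(quant_type: str) -> str:
    """e.g. quant_type='i1-Q4_K_M' -> direct resolve URL for that GGUF."""
    return f"https://huggingface.co/{REPO}/resolve/main/{BASE}.{quant_type}.gguf"

print(quant_url("i1-Q4_K_M"))
```

The resulting URL can be fetched directly (e.g. with curl or wget), or the same file can be downloaded through the Hugging Face Hub client if you prefer.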
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-IQ1_S.gguf) | i1-IQ1_S | 2.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-IQ1_M.gguf) | i1-IQ1_M | 2.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-Q2_K.gguf) | i1-Q2_K | 3.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-IQ3_S.gguf) | i1-IQ3_S | 3.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-IQ3_M.gguf) | i1-IQ3_M | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-Q4_0.gguf) | i1-Q4_0 | 4.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-Q4_1.gguf) | i1-Q4_1 | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-8B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen3-VL-8B-Instruct-abliterated.i1-Q6_K.gguf) | i1-Q6_K | 6.8 | practically like static Q6_K |
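As a rough way to compare the rows above, the Size/GB column can be converted to an approximate bits-per-weight figure. This is only a back-of-the-envelope sketch: the ~8.2e9 parameter count is an assumption inferred from the "8B" in the model name, not stated in this README, and file size includes metadata beyond the weights.

```python
# Assumed parameter count (from the "8B" in the model name; not exact).
PARAMS = 8.2e9

def bits_per_weight(size_gb: float) -> float:
    """Approximate bits per weight for a file of the given size in GB."""
    return size_gb * 1e9 * 8 / PARAMS

for quant, gb in [("i1-Q2_K", 3.4), ("i1-Q4_K_M", 5.1), ("i1-Q6_K", 6.8)]:
    print(f"{quant}: ~{bits_per_weight(gb):.1f} bpw")
```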
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->