Initialize the project; model provided by the ModelHub XC community

Model: mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF
Source: Original Platform
ModelHub XC
2026-04-30 15:07:12 +08:00
commit cf55675687
27 changed files with 227 additions and 0 deletions

.gitattributes vendored Normal file (60 lines)

@@ -0,0 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.imatrix.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-VL-2B-Ties-Mix.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d4cd0fc0850316ee42c460141d1ed67a3f638a112a95860d8e7a00370db35069
size 543790240

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:53faadb060629eaf37776f8d153a6d72a9c4f1eff0af821e34ebfbcbb4a48364
size 515773600

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:82d6353892954d5608e901542e5b860b763c6354c14f04abed5a75b8b3c04e7d
size 695178400

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:67a30d11d2436559e309498292eb90a5c3a366597a15de47aa4c78c2ce59dc68
size 657822880

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4575e1ffb58604e0eb2ae1f5106bc7acbd61c9bdba044f8c0d8e2dcf71fbab0e
size 631510176

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c09badb637eba048a43c05e8c89f3e988799c1ce17fc3d300f280450aa7148d0
size 590484640

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:524d2eb926b4c4bc71980b6382a915c32af4eb75d12b3e927c14db86842fd2c9
size 895659168

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:19bb63b6577b6c62836b1be9afbf14d3906c0741a8f0ee03ba338656571d04de
size 867249312

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9aa7cf6edb6577adb806882623f9bd8c8b2af6ba9f6f2c791a9840ee9cf9f24f
size 834219168

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3b9253128f2921ead3bdf1adf454ca16799cf698fb7f6403bd70f9639369c7ea
size 754357408

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1ef39f1a1053232a2e3b853361d23eef66930f7fceebe0c15310b3878511a93d
size 1054420128

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f5264fe85f53fd1c1a6e48af8814b7407a401250c36137c689ebf2c7669b251f
size 1010379936

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f2eeb70a7ed0bb36d6dba214ccda1ce71e7d416359f0dfc64841b4b715309542
size 777792672

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:33a0451c255a0bd7b6376a71b953b353b6b0fd0c9add276a27f5145402103ff9
size 732966048

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:666e8051588b7a9c27126262729633ab05646ebc6424e05597277b7fa224b868
size 1003498656

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bcb970a8cd16cfb4febee4a263fc676ec22f8207dd50b55ea215e508e64eb2ae
size 939535520

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:37a8382b924305af7f18eb437c0542e7afaf80c9b69380fc04aeff40fb19c088
size 867249312

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:749f056766b00ab84effd0899d3ee9883f20516d6776f6779407ff2eb172f638
size 1056779424

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:27aab44df3139fe5d4ee82f8d03f5884cbf1358f28e16df293bcd727dd854ce6
size 1142500512

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ebcd9d4a07b77371945855bf89200cc9e71753733dd1e849f4e41a1f324d798
size 1107405984

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6d6275e6c7d5b61f98b906af2f6143faeb62630e3f464d86ad6be79d708f36ba
size 1060187296

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:868f9f5391f0a6915d92383cb889e67cd37a1f50ca33998a26d7ef56da7a7a26
size 1257876640

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2f4d993448ae52bac961364d80d2e3d50b1760cfbef9e5543ecb81998e562798
size 1230580896

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:04758c005f53ec24c0403e73b5fd7fa96b9ceea50c9fd055e562b46b89f997e1
size 1417751712

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e0b071a02faacdbe7521789435facc33a3f3b9acb96111a73f1296c695a2693e
size 2094560
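Each quantized file in this commit is stored as a Git LFS pointer like the ones above: three `key value` lines giving the spec version, a sha256 object ID, and the size of the real file in bytes. A minimal sketch of reading such a pointer (the `parse_lfs_pointer` helper is illustrative, not part of Git LFS itself):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse the key/value lines of a Git LFS pointer file into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # "size" is the byte count of the actual object, not the pointer file.
    fields["size"] = int(fields["size"])
    return fields

# Example: the first pointer shown above.
pointer_text = """version https://git-lfs.github.com/spec/v1
oid sha256:d4cd0fc0850316ee42c460141d1ed67a3f638a112a95860d8e7a00370db35069
size 543790240"""

info = parse_lfs_pointer(pointer_text)
print(info["oid"], info["size"])
```

Checking out the repo without LFS installed yields only these small pointer files; `git lfs pull` fetches the objects they reference.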

README.md Normal file (92 lines)

@@ -0,0 +1,92 @@
---
base_model: bunnycore/Qwen3-VL-2B-Ties-Mix
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- remyxai/SpaceQwen3-VL-2B-Thinking
- huihui-ai/Huihui-Qwen3-VL-2B-Thinking-abliterated
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: 1 -->
weighted/imatrix quants of https://huggingface.co/bunnycore/Qwen3-VL-2B-Ties-Mix
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-VL-2B-Ties-Mix-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-GGUF
**This is a vision model - mmproj files (if any) will be in the [static repository](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-GGUF).**
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
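The files in this particular repo are single-part, but for repos that do split large quants, the older split style described in those READMEs is reassembled by byte-concatenating the parts in name order. A rough sketch under that assumption (the `concat_parts` helper is illustrative; splits produced by llama.cpp's newer `gguf-split` tool should be merged with that tool instead):

```python
import shutil

def concat_parts(part_paths, out_path):
    """Byte-concatenate split files (e.g. *.gguf.part1of2, *.gguf.part2of2).

    Sorting the paths restores part order regardless of how they were listed.
    """
    with open(out_path, "wb") as out:
        for part in sorted(part_paths):
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)
```

On Unix this is equivalent to `cat model.gguf.part* > model.gguf`.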
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-IQ1_S.gguf) | i1-IQ1_S | 0.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-IQ2_S.gguf) | i1-IQ2_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-IQ2_M.gguf) | i1-IQ2_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-Q2_K.gguf) | i1-Q2_K | 0.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-IQ3_S.gguf) | i1-IQ3_S | 1.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-IQ3_M.gguf) | i1-IQ3_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-Q4_0.gguf) | i1-Q4_0 | 1.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-Q4_1.gguf) | i1-Q4_1 | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF/resolve/main/Qwen3-VL-2B-Ties-Mix.i1-Q6_K.gguf) | i1-Q6_K | 1.5 | practically like static Q6_K |
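Every link in the table follows the same `resolve` URL pattern, so any single quant can be fetched directly (e.g. with `wget` or `huggingface_hub.hf_hub_download`). A small sketch of building those download URLs by hand (the `gguf_url` helper is hypothetical, not a Hugging Face API):

```python
def gguf_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct-download URL pattern used by the links in the table."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

url = gguf_url(
    "mradermacher/Qwen3-VL-2B-Ties-Mix-i1-GGUF",
    "Qwen3-VL-2B-Ties-Mix.i1-Q4_K_M.gguf",  # the "fast, recommended" pick
)
print(url)
```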
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->