Initialize project; model provided by the ModelHub XC community

Model: mradermacher/InternVL3-2B-i1-GGUF
Source: Original Platform
Commit 54e599b060 by ModelHub XC, 2026-05-05 07:51:33 +08:00
27 changed files with 225 additions and 0 deletions

.gitattributes vendored Normal file (60 lines added)

@@ -0,0 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
imatrix.dat filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
InternVL3-2B.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
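The rules above route every large binary artifact through Git LFS instead of plain git storage. A minimal sketch of how such patterns select filenames, using Python's `fnmatchcase` as an approximation of git's wildmatch globbing (the semantics differ slightly, e.g. around `**`); the pattern subset is copied from the `.gitattributes` above:

```python
from fnmatch import fnmatchcase

# Patterns copied from the .gitattributes rules above (subset for brevity).
LFS_PATTERNS = [
    "*.bin", "*.safetensors", "*.gz", "*tfevents*",
    "imatrix.dat", "InternVL3-2B.i1-Q4_K_M.gguf",
]

def routed_through_lfs(filename: str) -> bool:
    """True if the filename matches any LFS-tracked pattern.

    Note: fnmatchcase only approximates git's wildmatch rules.
    """
    return any(fnmatchcase(filename, pattern) for pattern in LFS_PATTERNS)

print(routed_through_lfs("imatrix.dat"))  # True
print(routed_through_lfs("README.md"))    # False
```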


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:76472599897a15d428cc8de020f8d39031937aed1805949167d649031c187ec6
size 540623328
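Each LFS-tracked file is stored in git as a small three-line pointer like the one above; the actual bytes live in LFS object storage, addressed by the `oid`. A minimal parser sketch for this pointer format (field names per the git-lfs pointer file spec):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer shown above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:76472599897a15d428cc8de020f8d39031937aed1805949167d649031c187ec6
size 540623328"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 540623328
```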


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c282d5cb850b92d734938ddea4bc49acdb0d88feb95c4bbe8a36527ce7534f24
size 512689632


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1867839995ad5a308bd91d89be08296289828a22f40fe8a375c1912a3c8ee5b6
size 700877888


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:21bd4fa469d61aa6224ee3c43f5f85f1ea3b0bceb87ffd56d513f5ba6d81ec07
size 663632960


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1465449241e8b6d756002d16a3ce9d12e3ecd4b39c0b429ab721e37bec8cede3
size 626488800


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e41cef739b69634290fdbc52bfb1b060d8f354b003158271a8b8918860f8a39b
size 587179488


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ccd8af71474385d90e009d271e452e4ba207fb399c658786c8afacbbcaaecac1
size 876433792


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e975fea2e2be76494de335a9a7fd1a5dda2e7d591b70547a80c3a00b3f2257ed
size 862176640


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1ecb5025f7be3e04a437aa369df15bca608b0fa89c7c894477d9d8456216669e
size 831468928


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:461c679e58153e25d45de093a9cb1df7736011a859d81a8e6326c83372e87e7b
size 768615488


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:299c59f2e25b5c201cdeaa7b955d2a97c8730fcefd1f852d95af46751a4fff29
size 1067042464


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:958052fdcbda466ae2bfdae812f89408558b63c4822bd235f2ffff0a794d755b
size 1019162560


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7c5c002e4a1ef8afabeb4bfc71421821e1e51ee910787b1c1ff5dffe95bb06bb
size 752413472


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d944518a785869ddaf35c57fb3095ea387eb5cfb5abfb9304e33e88074703269
size 716243744


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:82a7d2e40e54a4b46f500364c50efa51df594ba71b0ca64df97863393c27a211
size 979932544


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2ce99b68372a10c356d26af56b8819b8d35614f3a31fb0661996c7529ffb6c3c
size 923948416


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2600670361bac940c95dc4182e86ca07534f552fee67d63d9e0282867d0f2ea7
size 860714368


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d2432793643e040b1ba9e6c094a05625da4102d1a172e89e94190570276eacd4
size 1068246688


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b4c63bffe649461d092e959eccbd60095f0b572d22807795fb93f58dcab489d6
size 1162114144


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c025dc90cb7db7c91574c8d7b6315f50ad4c1dfbd76abbc179b538b01cabf5d2
size 1116759712


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:db5ba889ab5bfa96a2bd0f46b9ddd0eb07918f973869608c7837383ddf7ccc61
size 1071023776


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4adb00bad0db57404426534eaf316a07a407758e14e42c7524fd133f6054aebb
size 1284882976


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a104b7faf0153b9acae6629c748da0f0fdbc6740840037fb383a3e2fca6352c0
size 1258562080


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ec22f2d8569ec4ea5127aef7c1567a4ecf494f37c83cff2fe6eb09061f7ce2b4
size 1463513952

README.md Normal file (90 lines added)

@@ -0,0 +1,90 @@
---
base_model: OpenGVLab/InternVL3-2B
datasets:
- OpenGVLab/MMPR-v1.2
language:
- multilingual
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
license_name: qwen
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- internvl
- custom_code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/OpenGVLab/InternVL3-2B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#InternVL3-2B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/InternVL3-2B-GGUF
**This is a vision model - mmproj files (if any) will be in the [static repository](https://huggingface.co/mradermacher/InternVL3-2B-GGUF).**
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
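The quants in this repo are single-file, but multi-part GGUFs of the split-and-cat kind are reassembled by plain byte concatenation in part order. A minimal sketch, with hypothetical part filenames for illustration:

```python
import shutil
from pathlib import Path

def concatenate_parts(parts: list[Path], output: Path) -> None:
    """Reassemble a split GGUF by concatenating its parts in order.

    parts must be given in order, e.g. the hypothetical
    [model.gguf.part1of2, model.gguf.part2of2].
    """
    with output.open("wb") as out:
        for part in parts:
            with part.open("rb") as src:
                shutil.copyfileobj(src, out)
```

(Files split with llama.cpp's gguf-split tool are a different mechanism and are loaded as-is rather than concatenated.)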
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-IQ1_S.gguf) | i1-IQ1_S | 0.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-IQ2_S.gguf) | i1-IQ2_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-IQ2_M.gguf) | i1-IQ2_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-Q2_K.gguf) | i1-Q2_K | 0.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-IQ3_S.gguf) | i1-IQ3_S | 1.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-IQ3_M.gguf) | i1-IQ3_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-Q4_0.gguf) | i1-Q4_0 | 1.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-Q4_1.gguf) | i1-Q4_1 | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3-2B-i1-GGUF/resolve/main/InternVL3-2B.i1-Q6_K.gguf) | i1-Q6_K | 1.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->

imatrix.dat Normal file (3 lines added)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8c78e82b7edc323a4591d06a7352afb5717e09b652ea6f7f1ee01a93cbf93084
size 2042201
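A downloaded LFS object can be checked against its pointer's `oid` (a sha256 digest) and `size`, as in the pointer above. A minimal verification sketch:

```python
import hashlib
from pathlib import Path

def verify_lfs_object(path: Path, expected_sha256: str, expected_size: int) -> bool:
    """Check a downloaded file against its LFS pointer's sha256 oid and size."""
    if path.stat().st_size != expected_size:
        return False
    digest = hashlib.sha256()
    with path.open("rb") as f:
        # Hash in 1 MiB chunks so multi-GB GGUF files fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```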