Initialize project; model provided by the ModelHub XC community

Model: mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF
Source: Original Platform
ModelHub XC
2026-05-04 13:36:43 +08:00
commit 49d00b55a4
27 changed files with 228 additions and 0 deletions

.gitattributes (vendored, new file, 60 lines added)

@@ -0,0 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.imatrix.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Capella-Qwen3-DS-V3.1-4B.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
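The patterns above route matching files through Git LFS. As a quick sanity check (not git itself), Python's `fnmatch` approximates gitattributes glob matching for the simple `*.ext` patterns; the `saved_model/**/*` rule uses double-star semantics that `fnmatch` does not replicate, so it is omitted in this sketch.

```python
# Sketch: which filenames would a sample of the LFS rules above capture?
# fnmatch approximates gitattributes globbing for flat "*.ext" patterns.
import fnmatch

patterns = ["*.safetensors", "*.bin", "*.gz", "*tfevents*"]

def tracked_by_lfs(name: str) -> bool:
    """Return True if `name` matches any of the sampled LFS patterns."""
    return any(fnmatch.fnmatch(name, p) for p in patterns)

print(tracked_by_lfs("model.safetensors"))  # True
print(tracked_by_lfs("README.md"))          # False
```

Note that the `.gguf` files in this repository are tracked by explicit filename rather than a wildcard, so each new quant file needs its own `.gitattributes` line.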


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:47f39684c0e0dc6f05d0f21add67387f57d372766236ca88373ccfbd7e9ce70a
size 1127018976
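The three-line block above (and each block that follows) is a Git LFS pointer file: the repository stores only this small stub, while the actual multi-gigabyte GGUF blob lives on the LFS server. The format is line-oriented `key value` pairs, which a few lines of Python can pick apart; `parse_lfs_pointer` here is an illustrative helper, not part of any git tooling.

```python
# Sketch: parse a Git LFS pointer file (the three-line format above)
# into a dict of its fields: version, oid, and size.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # each line is "key value"
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:47f39684c0e0dc6f05d0f21add67387f57d372766236ca88373ccfbd7e9ce70a
size 1127018976
"""
info = parse_lfs_pointer(pointer)
print(info["oid"][:13], int(info["size"]))  # sha256 prefix and byte count
```

The `size` field is the byte count of the real blob, so the ~1.1 GB figure above refers to the GGUF file itself, not the pointer.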


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5b8741807a11617035310781830f219a4c99a7249bed3064ad7c25bdba61ac2a
size 1055257056


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fdf6a7d7b55d2426b6a5fe761ba54c4c12e06d3926157325559ac8c794305126
size 1512985056


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e60e88345e8ae9bd1a21f2462640f638c856e552d357065473431183756eab8e
size 1417302496


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f71268ddde68d6b8b852f2da198bbaa4ec50ae4eb1c4a1cfd0d62d22e4f4f132
size 1354101216


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7b952cc149e3aa8124fd33d27c98cada60471105b80ebecfe14bdaca3817131f
size 1246622176


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e701712bada86f2343afb3b575418b9d3ffcee288a04dd44a416d908fba97467
size 1962897376


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8d5aeb2c2d55b8f289e59c482182b36756c5fa450150269bc7a22d3c3eb2158e
size 1899532256


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4252769592adb7e9a61bcd1ad75f31a33b3592a33e2f29cff1302500457b9615
size 1814376416


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7293313006d0003499f1d1a112e47d2d86baf1bc8d816476c787be60c3668567
size 1670189536


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c6b8dd2fe61ebedbbaa68fc777504ae455921412f4ef41fe3a066c33d33a21e7
size 2381344736


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:99f493a66b419304bad47a1c021c48be650d637a2b3c63d8eb28776f3d6ac9c2
size 2270752736


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f01bc29d0229e365bb9e2b03419b47f9f17ecf48ba3e4b346d964ce636aa98ba
size 1669500896


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ac7ea8783a0ab9b74aa6e92e840c9711f6f8194dca998c4ec580d7ef1dcb70c3
size 1563455456


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:601ea99e28912a34d380aaa5817c0a046ff68dcf944539faee9da49b877ddf29
size 2239786976


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:49b74b8c6b8f096226f4c0bc90b6c628c742de3f33c8f2e853c511657a471844
size 2075619296


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ee673b8bd6848f8ff822adc35ee6062a21f442a61b536328f941a043cdd17823
size 1886998496


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:186d11c0851ae6dc8848c8881a2450e1f8e11a88c5b8097f3bb9d0470c527cf1
size 2375774176


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1ff45cb1a83d59894115fff085524fa924a6168e29930150e0ffe5ddda9e4fb7
size 2596630496


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:03ce76074b188c391454ea6697bf033cf810b568b45141406df7408257464e29
size 2497282016


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:64dd6aaff321817b96c5da7eeb600089e6fae2a897823cfb8a187d0a892351c4
size 2383310816


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b7b9cb6736663ca43ffd531cfed9a446537605d06f3e0e16f26dc143cfd09bc2
size 2889514976


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c739a3b4e7bf038dd3304538cf972c60a7a0cc22f9a07b9fc8e49105487fa5af
size 2823712736


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e1274e2e2dadee39070833d06ab212a1939003bade70c499c189b16530ae105f
size 3306262496


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c46ffbd1b037dcf689551ea78f92d839144dd33e1282a550ed01a56650e44e46
size 3872640

README.md (new file, 93 lines added)

@@ -0,0 +1,93 @@
---
base_model: prithivMLmods/Capella-Qwen3-DS-V3.1-4B
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- trl
- text-generation-inference
- math
- science
- code
- v3.1
- stem
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/prithivMLmods/Capella-Qwen3-DS-V3.1-4B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Capella-Qwen3-DS-V3.1-4B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
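Merging multi-part files is plain byte-wise concatenation (the equivalent of `cat part1 part2 > whole` in a shell). The sketch below demonstrates this with throwaway dummy files; the actual split-file naming scheme varies by uploader, so `demo.gguf.part1of2` is an illustrative name, not a file in this repository.

```python
# Demo: merge multi-part files by byte-wise concatenation, using tiny
# throwaway "parts" in place of real multi-GB GGUF shards.
from pathlib import Path

parts = [Path("demo.gguf.part1of2"), Path("demo.gguf.part2of2")]
parts[0].write_bytes(b"AAA")  # stand-ins for the real shards
parts[1].write_bytes(b"BBB")

with open("demo.gguf", "wb") as out:
    for part in parts:          # order matters: part1, then part2, ...
        out.write(part.read_bytes())

print(Path("demo.gguf").stat().st_size)  # merged size is the sum of the parts
```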
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q6_K.gguf) | i1-Q6_K | 3.4 | practically like static Q6_K |
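Every link in the table follows the same `/resolve/main/<filename>` pattern, so a single quant can be fetched directly without cloning the whole repository. A small sketch (choosing the Q4_K_M row marked "recommended"; substitute any filename from the table):

```python
# Sketch: build the direct-download URL for one quant from the table.
BASE = "https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main"
FILE = "Capella-Qwen3-DS-V3.1-4B.i1-Q4_K_M.gguf"  # the "recommended" row
url = f"{BASE}/{FILE}"
print(url)
# Fetch (~2.6 GB) with any HTTP client that follows redirects, e.g.
# urllib.request.urlretrieve(url, FILE) or `curl -L -O <url>`.
```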
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->