commit 740b26fa8690fdff0fae039768ea314e60532f1c
Author: ModelHub XC
Date:   Sun Apr 12 05:55:55 2026 +0800

    Initialize the project; model provided by the ModelHub XC community
    Model: mradermacher/komit_think_0.5b-GGUF
    Source: Original Platform

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..e0a5ee4
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,47 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+komit_think_0.5b.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+komit_think_0.5b.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+komit_think_0.5b.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+komit_think_0.5b.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+komit_think_0.5b.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+komit_think_0.5b.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+komit_think_0.5b.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+komit_think_0.5b.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+komit_think_0.5b.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+komit_think_0.5b.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+komit_think_0.5b.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+komit_think_0.5b.f16.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..243ba8d
--- /dev/null
+++ b/README.md
@@ -0,0 +1,76 @@
+---
+base_model: foryui/komit_think_0.5b
+language:
+- en
+library_name: transformers
+model_name: komit_think
+mradermacher:
+  readme_rev: 1
+quantized_by: mradermacher
+tags:
+- generated_from_trainer
+- sft
+- trl
+---
+## About
+
+static quants of https://huggingface.co/foryui/komit_think_0.5b
+
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#komit_think_0.5b-GGUF).***
+
+Weighted/imatrix quants do not appear to be available (from me) at this time. If they have not shown up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
+
+## Usage
+
+If you are unsure how to use GGUF files, refer to one of [TheBloke's
+READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
+more details, including how to concatenate multi-part files.
+
+## Provided Quants
+
+(sorted by size, not necessarily quality.
+IQ-quants are often preferable to similar-sized non-IQ quants.)
+
+| Link | Type | Size/GB | Notes |
+|:-----|:-----|--------:|:------|
+| [GGUF](https://huggingface.co/mradermacher/komit_think_0.5b-GGUF/resolve/main/komit_think_0.5b.Q2_K.gguf) | Q2_K | 0.4 | |
+| [GGUF](https://huggingface.co/mradermacher/komit_think_0.5b-GGUF/resolve/main/komit_think_0.5b.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
+| [GGUF](https://huggingface.co/mradermacher/komit_think_0.5b-GGUF/resolve/main/komit_think_0.5b.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
+| [GGUF](https://huggingface.co/mradermacher/komit_think_0.5b-GGUF/resolve/main/komit_think_0.5b.Q3_K_L.gguf) | Q3_K_L | 0.4 | |
+| [GGUF](https://huggingface.co/mradermacher/komit_think_0.5b-GGUF/resolve/main/komit_think_0.5b.IQ4_XS.gguf) | IQ4_XS | 0.4 | |
+| [GGUF](https://huggingface.co/mradermacher/komit_think_0.5b-GGUF/resolve/main/komit_think_0.5b.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/komit_think_0.5b-GGUF/resolve/main/komit_think_0.5b.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/komit_think_0.5b-GGUF/resolve/main/komit_think_0.5b.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
+| [GGUF](https://huggingface.co/mradermacher/komit_think_0.5b-GGUF/resolve/main/komit_think_0.5b.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
+| [GGUF](https://huggingface.co/mradermacher/komit_think_0.5b-GGUF/resolve/main/komit_think_0.5b.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
+| [GGUF](https://huggingface.co/mradermacher/komit_think_0.5b-GGUF/resolve/main/komit_think_0.5b.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality |
+| [GGUF](https://huggingface.co/mradermacher/komit_think_0.5b-GGUF/resolve/main/komit_think_0.5b.f16.gguf) | f16 | 1.2 | 16 bpw, overkill |
+
+Here is a handy graph by ikawrakow comparing some lower-quality quant
+types (lower is better):
+
+![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
+
+And here are Artefact2's thoughts on the matter:
+https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
+
+## FAQ / Model Request
+
+See https://huggingface.co/mradermacher/model_requests for some answers to
+questions you might have and/or if you want some other model quantized.
+
+## Thanks
+
+I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
+me use its servers and providing upgrades to my workstation to enable
+this work in my free time.
diff --git a/komit_think_0.5b.IQ4_XS.gguf b/komit_think_0.5b.IQ4_XS.gguf
new file mode 100644
index 0000000..65db296
--- /dev/null
+++ b/komit_think_0.5b.IQ4_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:69fc4d2c7b460ed44a503d54036f68460564b68edf6119b4e0c18b8fb227f68e
+size 339772768
diff --git a/komit_think_0.5b.Q2_K.gguf b/komit_think_0.5b.Q2_K.gguf
new file mode 100644
index 0000000..70a7151
--- /dev/null
+++ b/komit_think_0.5b.Q2_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d4d9d9fd959f8c78e1997c4e46031b47cb7d4306886eabb7e9c1a847f9a4dc23
+size 263685472
diff --git a/komit_think_0.5b.Q3_K_L.gguf b/komit_think_0.5b.Q3_K_L.gguf
new file mode 100644
index 0000000..85b6b59
--- /dev/null
+++ b/komit_think_0.5b.Q3_K_L.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e30be612d2beff8b6f6e2da7038967b6fee6d3ad1403d2474ddf479bfb2317c1
+size 337216864
diff --git a/komit_think_0.5b.Q3_K_M.gguf b/komit_think_0.5b.Q3_K_M.gguf
new file mode 100644
index 0000000..11b2712
--- /dev/null
+++ b/komit_think_0.5b.Q3_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:49225d7fd2d3f8c5e9c056c03b5c17c528f5a8b8f14e3a74fc6b6daba17a2c45
+size 315983200
diff --git a/komit_think_0.5b.Q3_K_S.gguf b/komit_think_0.5b.Q3_K_S.gguf
new file mode 100644
index 0000000..a173828
--- /dev/null
+++ b/komit_think_0.5b.Q3_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3308dbb7f49606cbf142f88c353992f2377fc6f3faa436a2d6f0eff7778c373b
+size 291800416
diff --git a/komit_think_0.5b.Q4_K_M.gguf b/komit_think_0.5b.Q4_K_M.gguf
new file mode 100644
index 0000000..649817d
--- /dev/null
+++ b/komit_think_0.5b.Q4_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f543fbb3a07997e05096588cb424cc2794c385c45c7296e53230f40be17139d2
+size 368182624
diff --git a/komit_think_0.5b.Q4_K_S.gguf b/komit_think_0.5b.Q4_K_S.gguf
new file mode 100644
index 0000000..b6fd913
--- /dev/null
+++ b/komit_think_0.5b.Q4_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ae551569fcd3e7fcda3e4d2cf5a9377091fb47558b46cecac1a1828eead3dab7
+size 354059616
diff --git a/komit_think_0.5b.Q5_K_M.gguf b/komit_think_0.5b.Q5_K_M.gguf
new file mode 100644
index 0000000..dad7f63
--- /dev/null
+++ b/komit_think_0.5b.Q5_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7684ac43e4a6727e67b1ca3931704107f53e9a45376a825435e8a2b1a9ab069
+size 416941408
diff --git a/komit_think_0.5b.Q5_K_S.gguf b/komit_think_0.5b.Q5_K_S.gguf
new file mode 100644
index 0000000..487d3ec
--- /dev/null
+++ b/komit_think_0.5b.Q5_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a53b3051db9dccdcb0962e124a8f4ca348b75b8dd7e08fd8a82f9d180b0d5adc
+size 408585568
diff --git a/komit_think_0.5b.Q6_K.gguf b/komit_think_0.5b.Q6_K.gguf
new file mode 100644
index 0000000..33fe3a9
--- /dev/null
+++ b/komit_think_0.5b.Q6_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:af1adc97d01d10d439e2f3f38d80194bf5ffdbb9b29eca29d789c35ef9b67f0c
+size 468747616
diff --git a/komit_think_0.5b.Q8_0.gguf b/komit_think_0.5b.Q8_0.gguf
new file mode 100644
index 0000000..5f9dff1
--- /dev/null
+++ b/komit_think_0.5b.Q8_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cfb434a540973517bc42b58568567cac9f00ad18367f523e1b8cff3209f73838
+size 605881696
diff --git a/komit_think_0.5b.f16.gguf b/komit_think_0.5b.f16.gguf
new file mode 100644
index 0000000..1e8a525
--- /dev/null
+++ b/komit_think_0.5b.f16.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c7bde0211b41d8c27cf4f2425ff976b68c55ac0adcf1e5a3b7cbcd093db9568
+size 1136723296
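The README's Usage section defers to TheBloke's READMEs for how to use GGUF files and how to concatenate multi-part files. Here is a minimal sketch: the download and inference commands are assumptions based on common Hugging Face / llama.cpp tooling (`huggingface-cli`, `llama-cli`) and are not part of this repository; the quants in this repo are single-part, so the concatenation step is demonstrated on hypothetical dummy files only.

```shell
# Download one quant from this repo, then run it with llama.cpp.
# (Assumed tool names; shown as comments since they need network and a build.)
#   huggingface-cli download mradermacher/komit_think_0.5b-GGUF \
#       komit_think_0.5b.Q4_K_M.gguf --local-dir .
#   llama-cli -m komit_think_0.5b.Q4_K_M.gguf -p "Hello" -n 64

# Some larger repos split a quant into parts (file.gguf.part1ofN, ...).
# Those must be concatenated in order into one file before loading.
# Demonstrated here on dummy files (hypothetical names):
printf 'AAA' > demo.gguf.part1of2
printf 'BBB' > demo.gguf.part2of2
cat demo.gguf.part1of2 demo.gguf.part2of2 > demo.gguf
cat demo.gguf   # AAABBB
```

The shell glob `demo.gguf.part*` would also work here, since the part names sort lexically, but listing the parts explicitly makes the required order unambiguous.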