Initialize project; model provided by the ModelHub XC community

Model: mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-04-10 22:02:57 +08:00
commit 4d16cf820d
27 changed files with 218 additions and 0 deletions

.gitattributes (vendored, new file, +60 lines)

@@ -0,0 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
imatrix.dat filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
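Each pattern above routes matching files through the Git LFS filter instead of storing them directly in Git. A minimal sketch of reproducing one such rule by hand (this is equivalent to running `git lfs track "*.gguf"` when git-lfs is installed; here the line is simply written directly):

```shell
# Append an LFS tracking rule like the ones above and confirm it is present.
mkdir -p demo-repo && cd demo-repo
echo '*.gguf filter=lfs diff=lfs merge=lfs -text' >> .gitattributes
grep -F 'filter=lfs' .gitattributes
```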


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6e581a20cfb22792488804bf67f8e4cb4cabd743f5aa7a8a0bcd4eea661e3494
size 2042197184
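The three-line blocks in this commit are Git LFS pointer files: a spec version, the SHA-256 object ID, and the payload size in bytes. A small sketch of parsing one (a local copy of the pointer above is written first, since the real payload is a ~2 GB model file):

```shell
# Extract the oid and size fields from a Git LFS pointer file with awk.
cat > pointer.txt <<'EOF'
version https://git-lfs.github.com/spec/v1
oid sha256:6e581a20cfb22792488804bf67f8e4cb4cabd743f5aa7a8a0bcd4eea661e3494
size 2042197184
EOF
oid=$(awk '$1 == "oid" { sub(/^sha256:/, "", $2); print $2 }' pointer.txt)
size=$(awk '$1 == "size" { print $2 }' pointer.txt)
echo "oid=$oid"
echo "size=$size bytes"
```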


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:40f0e5fab0accf83e1fa18a9799f45eb40b145d3ec0a5e028dfae5d39b5135f1
size 1903668416


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:88c7f918c5d18238f9ee34868fe2d11863914b3937167e957f5f5c4e680560d4
size 2780343488


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:91a585d8e5d2169520a36c6728aed233da9fa4f4d6f333a7b5d7ba75e705546a
size 2595638464


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3f1b5b90726833acb39f0a807e69eefc7e93278cb3c4a27618f37e111849ce46
size 2469022912


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:864ad0c07d0688e02899058f3d0fce754f080ba3eaa79f03eacad937a1b8f042
size 2273078464


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0045f0d1c9018e9fd912324aa5703883dbb6f9216344f7ea3da464a6b4f95249
size 3574013120


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e24a892cb1285ebe80bf807fcbc3518d5346da48b091445df036125a10959189
size 3499193536


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a52de2f427d21a92a397fd7d97f20c61d760b955c67512fd779126d8159744a8
size 3346257088


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2017bf246a13e8d0fdd5fd6be2e3e51bf5505ef7df65a38059cdb82860959dd1
size 3114515648


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:24ee90d0f0ebdaccd364acd7b7ad913f61917298eedad0e08db6b1ed1f885e7f
size 4437814464


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:624457d42fe28fbd8515db7cd50cdd7a6c6e94055c990931d61f475fe9dc53d1
size 4218473664


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f1aa3b0643419d7621b0029d7ef5c16e5543baaa95cef99ba1e053ed4eebd091
size 3015941312


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:741f014aea1c0d82e9b9918e1e38c357fe9febf918364a784d5ecbd213835e79
size 2834074816


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3ed8bb0b26f133bfd36b383a92efc2796b76ca1de67ff123243fd80a0b8da0c9
size 4088460480


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c1f81052d1769e80d1ab58b700c464d2a0586809ceb7651ed5194730de2e2e98
size 3808392384


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:29d8079a5c1dfc0f1827584686a566899f2f904f3d8dae6e8f731ac724070a56
size 3492369600


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3a016b8d52ceb5b5bbd6d379b388494308893764a16c5af2c91aa85a32e7a195
size 4444122304


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b22b394e2a0d08ad2a4bb2b1641de9a63d60c5fbc81d2a56c23f45093392bb15
size 4873284800


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9366755f9860d1e59dcd377ecf832ea47042fc7fbd38deec0df9ca5dfaa6d4c3
size 4683074752


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f996f0e64f348bfdd90a9df6df34c88e1254d600bc0741c0a3beb5287d8dfbf0
size 4457770176


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eed791374a210bb9e40834a398e1e590f68e3348bcd6474af3179d59c01d53a6
size 5444832448


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c9627537e47de750b72c6c4a0169cd1bc572bcfb6f88cf01ac9a4762b70ea366
size 5315177664


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b3bf0b0c3aace5a3851e38fbfef76d108abb5a8ae5ba66d27efd136954a06f9a
size 6254200000

README.md (new file, +83 lines)

@@ -0,0 +1,83 @@
---
base_model: JosephNguyen/Qwen2.5-7B-Instruct-reasoning-finetuned
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/JosephNguyen/Qwen2.5-7B-Instruct-reasoning-finetuned
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-reasoning-finetuned-i1-GGUF/resolve/main/Qwen2.5-7B-Instruct-reasoning-finetuned.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
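One way to use the Size/GB column is to pick the largest quant that fits a memory budget. A sketch (sizes copied from the table above; treating "largest that fits" as the whole criterion is a rough heuristic, not guidance from the quant author):

```shell
# Select the largest quant whose file size fits within the budget (in GB).
budget=5.0
pick=$(awk -v b="$budget" '$2 <= b && $2 > best { best = $2; name = $1 }
                           END { print name }' <<'EOF'
i1-IQ1_S 2.0
i1-IQ1_M 2.1
i1-IQ2_XXS 2.4
i1-IQ2_XS 2.6
i1-IQ2_S 2.7
i1-IQ2_M 2.9
i1-Q2_K_S 2.9
i1-Q2_K 3.1
i1-IQ3_XXS 3.2
i1-IQ3_XS 3.4
i1-Q3_K_S 3.6
i1-IQ3_S 3.6
i1-IQ3_M 3.7
i1-Q3_K_M 3.9
i1-Q3_K_L 4.2
i1-IQ4_XS 4.3
i1-IQ4_NL 4.5
i1-Q4_0 4.5
i1-Q4_K_S 4.6
i1-Q4_K_M 4.8
i1-Q4_1 5.0
i1-Q5_K_S 5.4
i1-Q5_K_M 5.5
i1-Q6_K 6.4
EOF
)
echo "$pick"   # i1-Q4_1
```

Note the file sizes are the on-disk quant sizes; actual runtime memory use also depends on context length and KV-cache settings.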
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->

imatrix.dat (new file, +3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:db500a56f46b319f0c40a38abc28bc809ba33cb57b6df2d9c859738bf62050da
size 4536665