Initialize project; model provided by the ModelHub XC community

Model: mradermacher/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B-GGUF
Source: Original Platform
Commit faac9e246b by ModelHub XC, 2026-04-11 03:04:56 +08:00
14 changed files with 163 additions and 0 deletions

.gitattributes (vendored, new file, 47 lines added)

@@ -0,0 +1,47 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.f16.gguf filter=lfs diff=lfs merge=lfs -text
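The patterns above route matching files through Git LFS instead of storing them in the repository directly; note the GGUF files are listed by exact name rather than with a `*.gguf` wildcard. As a quick sanity check, a hypothetical helper using Python's `fnmatch` (whose globbing is close to, but not identical to, gitattributes matching):

```python
from fnmatch import fnmatch

# A few patterns copied from the .gitattributes above.
patterns = [
    "*.bin",
    "*.safetensors",
    "*tfevents*",
    "Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q4_K_M.gguf",
]

def lfs_tracked(name):
    """Return True if `name` matches any of the LFS patterns (fnmatch approximation)."""
    return any(fnmatch(name, p) for p in patterns)

print(lfs_tracked("Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q4_K_M.gguf"))  # True
print(lfs_tracked("README.md"))                                   # False
```

`fnmatch` does not handle the `saved_model/**/*` recursive pattern the way Git does, so this sketch sticks to the simpler wildcards.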


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ff9ee563c777dbf4e021aa3a6cbcee761ee3083ffbcd9be69124cc94753a1124
size 351448160
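Each of the quantized GGUF files in this commit is checked in as a Git LFS pointer in spec-v1 format: a version line, the sha256 oid of the real blob, and its byte size. A minimal parser sketch (hypothetical helper, not part of this repo):

```python
def parse_lfs_pointer(text):
    """Parse a git-lfs spec v1 pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The first pointer from this commit.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:ff9ee563c777dbf4e021aa3a6cbcee761ee3083ffbcd9be69124cc94753a1124
size 351448160
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])        # sha256:ff9e...
print(int(info["size"]))  # 351448160 bytes, ~335 MiB
```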


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:00fd06abb22ceef57403b0c3823522112106e0c0368f4452abf5edbc89769bf9
size 338610272


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6751e128e04fd4574c0e40912ecade87360dfe0057f1a2e08038c7e018ed3619
size 369360992


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:13074c197fe6e9b2250ef61fdcd0367fb9fb1241129a1134537bae1581ca8ca8
size 355469408


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5cb0ff68af12f8ad2b0d59295253c9f40a4a3dbf6b5f44a1e6171429a80e1b3e
size 338266208


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:23b0e2e8a7ead54ae1f164e62d18bbf8be4d027351d8129fcb7a29fa601bb11b
size 397810784


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b243c5158384e5b29f5b7380561cf9f1bea77b06e94315a62753d5f1d6d4f83f
size 385474656


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7ffb931db9c17f900a9ea174c69e7e5be178c26fd5637cbcc9c914125fd78b3d
size 420088928


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:627f97a5d32ed260c2bb2a3d716cbf4d4e6703f886a6367ce285f86a298e864e
size 412713056


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d71449897c8f749e9111aaffc9c8b93fad273a97482aba9fa02dcc385e6e2f44
size 505739360


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:adaf8453782ca2047abe7aa22ff01889671c851017e438d505f3825c3ae3be89
size 531071072


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:790e6d542e6104cdb78fd2b0c53ea4f748b772cbcbceb36d09cdda5dfa96e2f6
size 994159712

README.md (new file, 80 lines added)

@@ -0,0 +1,80 @@
---
base_model: PJMixers-Dev/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B
datasets:
- PJMixers-Dev/allura-org_gryphe-sonnet-3.5-charcards-names-added-qwq-all-aphrodite
- PJMixers-Dev/anthracite-org_c2_logs_32k_llama3_qwen2_v1.3-qwq-all-aphrodite
- PJMixers-Dev/grimulkan_aicg-logs-augmented-system-qwq-all-aphrodite
- PJMixers-Dev/grimulkan_jannie-log-augmented-system-qwq-all-aphrodite
- PJMixers-Dev/grimulkan_PIPPA-augmented-dedup-system-qwq-all-aphrodite
- PJMixers-Dev/lemonilia_LimaRP-Only-NonSus-Simple-CustomShareGPT-qwq-all-aphrodite
- PJMixers-Dev/MinervaAI_Aesir-Preview-Anon-qwq-all-aphrodite
- PJMixers-Dev/NyxKrage_chub-logs-sharegpt-longest-CustomShareGPT-qwq-all-aphrodite
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/PJMixers-Dev/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen2.5-QwQ-RP-Draft-v0.1-0.5B-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
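For multi-part quants (this repo's files are all single-part, so the filenames below are hypothetical), the split pieces are plain byte-level chunks and can be joined with `cat` before loading. A demo with stand-in part files:

```shell
# Stand-in part files; real parts from this uploader are typically named
# like model.Q8_0.gguf.part1of2 and downloaded from the repo.
printf 'first-half-' > model.Q8_0.gguf.part1of2
printf 'second-half' > model.Q8_0.gguf.part2of2

# Concatenate the parts in order to restore the original file.
cat model.Q8_0.gguf.part1of2 model.Q8_0.gguf.part2of2 > model.Q8_0.gguf
cat model.Q8_0.gguf
```

This applies only to byte-split parts; files produced by llama.cpp's gguf-split tool are separate GGUF shards and must be merged with that tool instead.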
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B-GGUF/resolve/main/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B-GGUF/resolve/main/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B-GGUF/resolve/main/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B-GGUF/resolve/main/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B-GGUF/resolve/main/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B-GGUF/resolve/main/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B-GGUF/resolve/main/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B-GGUF/resolve/main/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B-GGUF/resolve/main/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B-GGUF/resolve/main/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B-GGUF/resolve/main/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B-GGUF/resolve/main/Qwen2.5-QwQ-RP-Draft-v0.1-0.5B.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->