Initialize the project; model provided by the ModelHub XC community

Model: mradermacher/Qwen3-30B-A3B-CoderThinking-YOYO-linear-GGUF
Source: Original Platform
Author: ModelHub XC
Date: 2026-05-09 05:29:49 +08:00
Commit: e2917d9696
13 changed files with 153 additions and 0 deletions

.gitattributes (vendored, new file, +46)

@@ -0,0 +1,46 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-30B-A3B-CoderThinking-YOYO-linear.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0002c16feaa2380b1f436eead7c4b732b5bcbf21e8f7bffa44232c3b1a7f42d4
size 16556351072

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:80b5efadd78608959dcd3f6257cb303fa78d58318e2e031476a3bf266a258b28
size 11257978944

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:afa968bb26b133af76aa917a228f3f1b15b67e0a641f595c5c093b3e05d00db0
size 15899983712

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d3c67518f33118ee21db1a90632fb17dfe5ab776446bf06ebe85cb9c9f65b5af
size 14711160672

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3cf27e79275dbdaf2c161ad2fd24366b0e06e071da2f9b74d49042c059acd93f
size 13291781984

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:951558a5121656a0adb3e2f3457d5cd668bed17e626065695378733eb71b6c66
size 18555927456

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fe7b1c81940caca58799415f6abb7fd320e5ee4a416e45ea7fcb857185d6dd6e
size 17455250336

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0cd0761838e40932d6e1c37b2df5836ba2d92b849587ab1f802cfb69f835f1b3
size 21724754080

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ea7736003bd211466711117db106c74235b36e28fb6850e6279a0765ecef3687
size 21079683232

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:842dc397faf8d1c0d3c8f3b621fe156f1bfa8e6ea6953acb50f8376f227c7cee
size 25091632384

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:477eb10755449ab6df221789fa1e72bdbc6aaad5b57e4a54ed88a56200ccd0cc
size 32482767424
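
Each of the eleven GGUF entries above is committed as a Git LFS pointer rather than the blob itself: a three-line stub recording the LFS spec version, the sha256 object id, and the byte size, while the actual data lives on the LFS server. A minimal Python sketch (the file paths are hypothetical and not part of this commit) of how such a pointer can be parsed and used to verify a completed download:

```python
import hashlib
from pathlib import Path

def parse_lfs_pointer(pointer_path: str) -> dict:
    """Parse a Git LFS pointer file into its version/oid/size fields."""
    fields = {}
    for line in Path(pointer_path).read_text().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return {
        "version": fields["version"],
        "sha256": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

def verify_download(pointer_path: str, blob_path: str, chunk=1 << 20) -> bool:
    """Check a downloaded blob against the size and sha256 recorded in the pointer."""
    meta = parse_lfs_pointer(pointer_path)
    blob = Path(blob_path)
    if blob.stat().st_size != meta["size"]:
        return False
    digest = hashlib.sha256()
    with blob.open("rb") as f:
        while block := f.read(chunk):
            digest.update(block)
    return digest.hexdigest() == meta["sha256"]

# Hypothetical paths; the pointer contents would match the entries in this commit.
# print(verify_download("pointer.txt", "Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q8_0.gguf"))
```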

README.md (new file, +74)

@@ -0,0 +1,74 @@
---
base_model: YOYO-AI/Qwen3-30B-A3B-CoderThinking-YOYO-linear
language:
- en
- zh
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/YOYO-AI/Qwen3-30B-A3B-CoderThinking-YOYO-linear
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-30B-A3B-CoderThinking-YOYO-linear-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3-30B-A3B-CoderThinking-YOYO-linear-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
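
The files in this repository are single-part, so no concatenation is needed. As a minimal sketch (assuming the huggingface_hub and llama-cpp-python packages; the README itself does not prescribe a runtime), one of the quants listed below can be fetched and loaded like this:

```python
# Minimal sketch, assuming llama-cpp-python and huggingface_hub are installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one of the single-file quants from this repo (Q4_K_S, ~17.6 GB).
model_path = hf_hub_download(
    repo_id="mradermacher/Qwen3-30B-A3B-CoderThinking-YOYO-linear-GGUF",
    filename="Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q4_K_S.gguf",
)

# Load the GGUF file and run a single chat completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}]
)
print(out["choices"][0]["message"]["content"])
```

Any llama.cpp-compatible runtime will accept the same GGUF file; Q4_K_S is used here only because the table below marks it as a fast, recommended default.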
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-CoderThinking-YOYO-linear-GGUF/resolve/main/Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q2_K.gguf) | Q2_K | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-CoderThinking-YOYO-linear-GGUF/resolve/main/Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q3_K_S.gguf) | Q3_K_S | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-CoderThinking-YOYO-linear-GGUF/resolve/main/Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q3_K_M.gguf) | Q3_K_M | 14.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-CoderThinking-YOYO-linear-GGUF/resolve/main/Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q3_K_L.gguf) | Q3_K_L | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-CoderThinking-YOYO-linear-GGUF/resolve/main/Qwen3-30B-A3B-CoderThinking-YOYO-linear.IQ4_XS.gguf) | IQ4_XS | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-CoderThinking-YOYO-linear-GGUF/resolve/main/Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q4_K_S.gguf) | Q4_K_S | 17.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-CoderThinking-YOYO-linear-GGUF/resolve/main/Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q4_K_M.gguf) | Q4_K_M | 18.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-CoderThinking-YOYO-linear-GGUF/resolve/main/Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q5_K_S.gguf) | Q5_K_S | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-CoderThinking-YOYO-linear-GGUF/resolve/main/Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q5_K_M.gguf) | Q5_K_M | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-CoderThinking-YOYO-linear-GGUF/resolve/main/Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q6_K.gguf) | Q6_K | 25.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-CoderThinking-YOYO-linear-GGUF/resolve/main/Qwen3-30B-A3B-CoderThinking-YOYO-linear.Q8_0.gguf) | Q8_0 | 32.6 | fast, best quality |
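
As a rough aid for choosing among the entries above (illustrative only: the sizes are copied from the table, and real memory use also includes context and runtime overhead), a small helper that picks the largest listed quant under a given budget:

```python
# Sizes in GB, copied from the table above; the budget figure is illustrative.
QUANT_SIZES_GB = {
    "Q2_K": 11.4, "Q3_K_S": 13.4, "Q3_K_M": 14.8, "Q3_K_L": 16.0,
    "IQ4_XS": 16.7, "Q4_K_S": 17.6, "Q4_K_M": 18.7, "Q5_K_S": 21.2,
    "Q5_K_M": 21.8, "Q6_K": 25.2, "Q8_0": 32.6,
}

def largest_quant_under(budget_gb: float):
    """Return the biggest listed quant whose file size stays within budget_gb."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(largest_quant_under(20.0))  # -> "Q4_K_M" with the sizes above
```
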
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->