Initialize project; model provided by the ModelHub XC community

Model: mradermacher/LinalgZero-GRPO-merged-GGUF
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-07 21:19:34 +08:00
commit f1b44a3544
14 changed files with 162 additions and 0 deletions

.gitattributes (vendored, new file, 47 lines)

@@ -0,0 +1,47 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
LinalgZero-GRPO-merged.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
LinalgZero-GRPO-merged.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
LinalgZero-GRPO-merged.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
LinalgZero-GRPO-merged.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
LinalgZero-GRPO-merged.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
LinalgZero-GRPO-merged.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
LinalgZero-GRPO-merged.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
LinalgZero-GRPO-merged.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
LinalgZero-GRPO-merged.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
LinalgZero-GRPO-merged.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
LinalgZero-GRPO-merged.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
LinalgZero-GRPO-merged.f16.gguf filter=lfs diff=lfs merge=lfs -text
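The lines above are standard Git LFS attribute rules: any path matching a pattern is stored as an LFS pointer instead of a regular blob. As a rough illustration (Python's `fnmatch` is only an approximation of Git's attribute matcher, e.g. it does not treat `**` specially), checking whether a path falls under a few of these rules might look like:

```python
from fnmatch import fnmatch

# A few of the patterns from the .gitattributes above.
LFS_PATTERNS = [
    "*.safetensors",
    "*.bin",
    "*tfevents*",
    "LinalgZero-GRPO-merged.Q4_K_M.gguf",
]

def is_lfs_tracked(path: str) -> bool:
    """Return True if the path matches any LFS-tracked pattern.

    Illustrative only: real gitattributes matching has extra rules
    (directory patterns, '**', attribute precedence) not modeled here.
    """
    return any(fnmatch(path, pat) for pat in LFS_PATTERNS)

print(is_lfs_tracked("LinalgZero-GRPO-merged.Q4_K_M.gguf"))  # True
print(is_lfs_tracked("README.md"))                           # False
```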


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:58b15ddac19ccc75d27a88cf5918388243706fadc49b77c24f4714cd7392990f
size 1753184864
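Each of the three-line blocks in this commit is a Git LFS pointer file: the spec version, the SHA-256 of the real blob (prefixed with the hash algorithm), and its size in bytes. A minimal parser sketch (illustrative, not the official `git-lfs` implementation):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # 'oid' carries the hash algorithm as a prefix, e.g. 'sha256:<hex>'.
    algo, _, digest = fields["oid"].partition(":")
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

# The first pointer file from this commit:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:58b15ddac19ccc75d27a88cf5918388243706fadc49b77c24f4714cd7392990f
size 1753184864
"""
info = parse_lfs_pointer(pointer)
print(info["algo"], info["size"])  # sha256 1753184864
```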


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d02cda13897b5d9b592a87f58b3365b86ea4898c56c81cf3c20144db09f912c7
size 1274755680


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cf6853d452650de85943b18e34bd6cc1dcc7855236626d3d0494b4a61a9fee3d
size 1707391584


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a5b78bb58a1c30ce90751256ee3ea27d349a5e366d614e2a7af8f3423bc6dcad
size 1590475360


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7dfeac33a3c2eb74027fa039977cd3cd360c44f8aa7ba10a230f43abfeec8875
size 1454357088


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4c59cc884c46de16f6b6957b2aaf63d73ba00c3dc9039bc43e1ca7a552cbbb40
size 1929902688


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:64f4feaf5e9f2e55845a07a621dbacadbe8d26d2f84669f0b6f92a6c8d36ef52
size 1834383968


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fe8b0e39cd0c557c083576398533417556df450fec70be64fc98a54d2cdfce68
size 2224814688


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:241a9b706d292d98f793714db87b665519d7bc739605e6d0fdbabb95d4ac2a6a
size 2169666144


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:13a042bd76de5cd06cdde2afa2825aca9d7b4b9692ecf78f916d227def8eec16
size 2538158688


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:160355237fb26e0b028665fa503e26944fb2a151d1f8ac073b5ce03a666c4b54
size 3285475936


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a2b5164056818ee484dda87bf23d39a69edafb8684739218a6aed173d6e64296
size 6178316896

README.md (new file, 79 lines)

@@ -0,0 +1,79 @@
---
base_model: rfvasile/LinalgZero-GRPO-merged
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- base_model:adapter:atomwalk12/LinalgZero-SFT
- grpo
- lora
- transformers
- trl
- unsloth
- step1000
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/rfvasile/LinalgZero-GRPO-merged
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#LinalgZero-GRPO-merged-GGUF).***
Weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
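When a quant is split into multiple parts, reassembly is plain byte concatenation in part order. A minimal sketch (the part naming below is hypothetical; check the actual file names in the repo):

```python
def concat_parts(parts: list, out_path: str, chunk: int = 1 << 20) -> None:
    """Byte-concatenate split GGUF parts, in order, into one file."""
    with open(out_path, "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                while block := f.read(chunk):
                    out.write(block)

# Usage (hypothetical file names; equivalent to `cat part1 part2 > model.gguf`):
# concat_parts(["model.gguf.part1of2", "model.gguf.part2of2"], "model.gguf")
```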
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LinalgZero-GRPO-merged-GGUF/resolve/main/LinalgZero-GRPO-merged.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/LinalgZero-GRPO-merged-GGUF/resolve/main/LinalgZero-GRPO-merged.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/LinalgZero-GRPO-merged-GGUF/resolve/main/LinalgZero-GRPO-merged.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LinalgZero-GRPO-merged-GGUF/resolve/main/LinalgZero-GRPO-merged.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/LinalgZero-GRPO-merged-GGUF/resolve/main/LinalgZero-GRPO-merged.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/LinalgZero-GRPO-merged-GGUF/resolve/main/LinalgZero-GRPO-merged.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LinalgZero-GRPO-merged-GGUF/resolve/main/LinalgZero-GRPO-merged.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LinalgZero-GRPO-merged-GGUF/resolve/main/LinalgZero-GRPO-merged.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/LinalgZero-GRPO-merged-GGUF/resolve/main/LinalgZero-GRPO-merged.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/LinalgZero-GRPO-merged-GGUF/resolve/main/LinalgZero-GRPO-merged.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LinalgZero-GRPO-merged-GGUF/resolve/main/LinalgZero-GRPO-merged.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LinalgZero-GRPO-merged-GGUF/resolve/main/LinalgZero-GRPO-merged.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
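As a rough illustration of how one might use the table, the sketch below picks the largest quant whose file fits a given download or memory budget. The sizes are the Size/GB column above; the selection heuristic is mine, not the author's recommendation:

```python
# (quant type, size in GB) from the table above, sorted ascending by size.
QUANTS = [("Q2_K", 1.4), ("Q3_K_S", 1.6), ("Q3_K_M", 1.7), ("Q3_K_L", 1.8),
          ("IQ4_XS", 1.9), ("Q4_K_S", 1.9), ("Q4_K_M", 2.0), ("Q5_K_S", 2.3),
          ("Q5_K_M", 2.3), ("Q6_K", 2.6), ("Q8_0", 3.4), ("f16", 6.3)]

def pick_quant(budget_gb: float):
    """Return the largest quant whose file size fits budget_gb, else None."""
    fitting = [name for name, size in QUANTS if size <= budget_gb]
    return fitting[-1] if fitting else None

print(pick_quant(2.5))  # Q5_K_M
print(pick_quant(1.0))  # None
```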
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->