Initialize the project; model provided by the ModelHub XC community

Model: prithivMLmods/Procyon-1.5B-Theorem-GGUF
Source: Original Platform
ModelHub XC
2026-04-24 00:57:56 +08:00
commit 60131e8c1a
17 changed files with 132 additions and 0 deletions

.gitattributes (vendored, new file, 47 lines added)

@@ -0,0 +1,47 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*.tfevents* filter=lfs diff=lfs merge=lfs -text
*.db* filter=lfs diff=lfs merge=lfs -text
*.ark* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*data* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.gguf* filter=lfs diff=lfs merge=lfs -text
*.ggml filter=lfs diff=lfs merge=lfs -text
*.llamafile* filter=lfs diff=lfs merge=lfs -text
*.pt2 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
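Every pattern above routes matching files through Git LFS instead of storing them in the regular object database. As a rough illustration (not part of this repo), Python's `fnmatch` can approximate which files the patterns catch — note this is an assumption for sketch purposes, since gitattributes glob semantics (e.g. `**/` handling) differ from `fnmatch` in edge cases:

```python
from fnmatch import fnmatch

# A few of the .gitattributes patterns listed above. fnmatch globbing is
# only an approximation of gitattributes matching ('**' semantics differ).
lfs_patterns = ["*.gguf*", "*.safetensors", "*.bin", "*tfevents*"]

def tracked_by_lfs(filename: str) -> bool:
    """Return True if any of the sampled LFS patterns matches the name."""
    return any(fnmatch(filename, p) for p in lfs_patterns)

print(tracked_by_lfs("Procyon-1.5B-Qwen2-Theorem.Q8_0.gguf"))  # True
print(tracked_by_lfs("README.md"))                             # False
```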


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5e11da1361db7725d02e30d50a854849dd3533b1a26b998a746a5b6139422f0a
size 3560413632
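Each three-line block like the one above is a Git LFS pointer: the repository stores only the spec version, a `sha256` object ID, and the byte size, while the actual model weights live in LFS storage. A minimal sketch of parsing one of these pointers (the pointer text is copied verbatim from this commit):

```python
# Parse a Git LFS pointer file (the small three-line stub kept in the repo).
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # The oid is prefixed with the hash algorithm, e.g. "sha256:<hex digest>"
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "hash_algo": algo,
        "oid": digest,
        "size": int(fields["size"]),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:5e11da1361db7725d02e30d50a854849dd3533b1a26b998a746a5b6139422f0a
size 3560413632
"""
info = parse_lfs_pointer(pointer)
print(info["hash_algo"], info["size"])  # sha256 3560413632
```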


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9bfda94769da7191cb33d6128468d04d6a781f44d9191d8c5df42c21bb58a669
size 3560413632


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a91fa769528747851393f0283fc6fbabdf033a6e1768f3c411dc96518d3c2b4c
size 7114299840


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:722422a26b0fc07fdd8b3aa0613b9482126af928b1c9e61bae73a11f5898b06c
size 752877504


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:158f863a66b803a0c8cf7ea64c35fb62efd7e52289518463d72280c6e9bb36c4
size 980437440


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:92b3649031fd392c3097f2b7a793c0b5a590dd814fa946c7f94c63ed0aba8357
size 924453312


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aec3aff89643a60500fb7ecc6f51c8c62521ab1b7c24ea2a1e1166e04d40c755
size 861219264


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:99798096d2d2de1228076518a202334eccdd95a18755f4c664927f39484c5ee5
size 1117318080


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e41ff83241bb08a0b19e23f4b3eb13373abc8079b35f1f6adfb1c20cde3b3516
size 1071582144


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3fc72e6f029f57d89a58981141d054e7a19e3c1fbc01af8ca4b4da5d3764ead2
size 1285491648


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1214c4d474d8a71e337eb20fcd2541b19a3b28885c1d4c78460412515b2ffa43
size 1259170752


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ba4eac23829073d30da3d9f8c6edcb0e3d0be70ac86f1646eb491099313233c9
size 1464176064


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d82f4567f624bf091e02c0af13b528fb41821ecd9a00b9bb084cf55dabfeadaa
size 1894529472

README.md (new file, 42 lines added)

@@ -0,0 +1,42 @@
---
license: apache-2.0
language:
- en
base_model:
- prithivMLmods/Procyon-1.5B-Qwen2-Theorem
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- theorem
---
# **Procyon-1.5B-Qwen2-Theorem-GGUF**
> **Procyon-1.5B-Qwen2-Theorem** is an **experimental theorem explanation model** fine-tuned from **Qwen2-1.5B**. Purpose-built for mathematical theorem understanding, structured concept breakdowns, and non-reasoning explanation tasks, it targets domains where clarity and formal structure take precedence over freeform reasoning.
## Model Files
| File Name | Size | Format | Description |
|-----------|------|--------|-------------|
| Procyon-1.5B-Qwen2-Theorem.F32.gguf | 7.11 GB | F32 | Full precision 32-bit floating point |
| Procyon-1.5B-Qwen2-Theorem.F16.gguf | 3.56 GB | F16 | Half precision 16-bit floating point |
| Procyon-1.5B-Qwen2-Theorem.BF16.gguf | 3.56 GB | BF16 | Brain floating point 16-bit |
| Procyon-1.5B-Qwen2-Theorem.Q8_0.gguf | 1.89 GB | Q8_0 | 8-bit quantized |
| Procyon-1.5B-Qwen2-Theorem.Q6_K.gguf | 1.46 GB | Q6_K | 6-bit quantized |
| Procyon-1.5B-Qwen2-Theorem.Q5_K_M.gguf | 1.29 GB | Q5_K_M | 5-bit quantized, medium quality |
| Procyon-1.5B-Qwen2-Theorem.Q5_K_S.gguf | 1.26 GB | Q5_K_S | 5-bit quantized, small size, lower quality |
| Procyon-1.5B-Qwen2-Theorem.Q4_K_M.gguf | 1.12 GB | Q4_K_M | 4-bit quantized, medium quality |
| Procyon-1.5B-Qwen2-Theorem.Q4_K_S.gguf | 1.07 GB | Q4_K_S | 4-bit quantized, small size, lower quality |
| Procyon-1.5B-Qwen2-Theorem.Q3_K_L.gguf | 980 MB | Q3_K_L | 3-bit quantized, large size, higher quality |
| Procyon-1.5B-Qwen2-Theorem.Q3_K_M.gguf | 924 MB | Q3_K_M | 3-bit quantized, medium quality |
| Procyon-1.5B-Qwen2-Theorem.Q3_K_S.gguf | 861 MB | Q3_K_S | 3-bit quantized, small size, lower quality |
| Procyon-1.5B-Qwen2-Theorem.Q2_K.gguf | 753 MB | Q2_K | 2-bit quantized |
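The file sizes in the table map roughly onto effective bits per weight. A back-of-envelope sketch, assuming the F16 file stores about 2 bytes per weight (so the parameter count is derived from the F16 size rather than taken from any official spec):

```python
# Rough bits-per-weight estimates from the GGUF file sizes listed above.
# Assumption: the F16 file is ~2 bytes/weight, giving ~1.78e9 stored weights.
F16_SIZE = 3_560_413_632          # bytes, from the table above
params = F16_SIZE / 2             # ~1.78e9 weights (estimate, not official)

def bits_per_weight(size_bytes: int) -> float:
    return size_bytes * 8 / params

print(round(bits_per_weight(1_894_529_472), 2))  # Q8_0   ~8.51
print(round(bits_per_weight(1_117_318_080), 2))  # Q4_K_M ~5.02
print(round(bits_per_weight(752_877_504), 2))    # Q2_K   ~3.38
```

The K-quants land slightly above their nominal bit widths because some tensors (embeddings, scales) are kept at higher precision.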
## Quants Usage
(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similarly sized non-IQ quants.)
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

config.json (new file, 3 lines added)

@@ -0,0 +1,3 @@
{
"model_type": "qwen2"
}

configuration.json (new file, 1 line added)

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "others", "allow_remote": true}