Initialize the project; model provided by the ModelHub XC community
Model: prithivMLmods/Procyon-1.5B-Theorem-GGUF
Source: Original Platform
This commit is contained in:
42
README.md
Normal file
@@ -0,0 +1,42 @@
---
license: apache-2.0
language:
- en
base_model:
- prithivMLmods/Procyon-1.5B-Qwen2-Theorem
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- theorem
---

# **Procyon-1.5B-Qwen2-Theorem-GGUF**

> **Procyon-1.5B-Qwen2-Theorem** is an experimental theorem-explanation model fine-tuned from **Qwen2-1.5B**. Purpose-built for mathematical theorem understanding, structured concept breakdowns, and explanation tasks that do not rely on freeform reasoning, it targets domains where clarity and formal structure take precedence.

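As a quick way to try the model locally, here is a minimal sketch using llama-cpp-python. The quant filename, prompt, and generation parameters are illustrative; pick any file from the Model Files table below.

```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The filename below is one of the quants listed in the Model Files table;
# swap in whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Procyon-1.5B-Qwen2-Theorem.Q5_K_M.gguf",
    n_ctx=2048,  # context window; raise or lower to fit available memory
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "State the Pythagorean theorem and break it down step by step."}
    ],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```
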
## Model Files

| File Name | Size | Format | Description |
|-----------|------|--------|-------------|
| Procyon-1.5B-Qwen2-Theorem.F32.gguf | 7.11 GB | F32 | Full-precision 32-bit floating point |
| Procyon-1.5B-Qwen2-Theorem.F16.gguf | 3.56 GB | F16 | Half-precision 16-bit floating point |
| Procyon-1.5B-Qwen2-Theorem.BF16.gguf | 3.56 GB | BF16 | Brain floating point, 16-bit |
| Procyon-1.5B-Qwen2-Theorem.Q8_0.gguf | 1.89 GB | Q8_0 | 8-bit quantized |
| Procyon-1.5B-Qwen2-Theorem.Q6_K.gguf | 1.46 GB | Q6_K | 6-bit quantized |
| Procyon-1.5B-Qwen2-Theorem.Q5_K_M.gguf | 1.29 GB | Q5_K_M | 5-bit quantized, medium quality |
| Procyon-1.5B-Qwen2-Theorem.Q5_K_S.gguf | 1.26 GB | Q5_K_S | 5-bit quantized, small, lower quality |
| Procyon-1.5B-Qwen2-Theorem.Q4_K_M.gguf | 1.12 GB | Q4_K_M | 4-bit quantized, medium quality |
| Procyon-1.5B-Qwen2-Theorem.Q4_K_S.gguf | 1.07 GB | Q4_K_S | 4-bit quantized, small, lower quality |
| Procyon-1.5B-Qwen2-Theorem.Q3_K_L.gguf | 980 MB | Q3_K_L | 3-bit quantized, large, higher quality |
| Procyon-1.5B-Qwen2-Theorem.Q3_K_M.gguf | 924 MB | Q3_K_M | 3-bit quantized, medium quality |
| Procyon-1.5B-Qwen2-Theorem.Q3_K_S.gguf | 861 MB | Q3_K_S | 3-bit quantized, small, lower quality |
| Procyon-1.5B-Qwen2-Theorem.Q2_K.gguf | 753 MB | Q2_K | 2-bit quantized |

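To fetch a single quant rather than cloning the whole repository, a sketch like the following with huggingface_hub should work; the repo id is taken from this card's metadata and may need adjusting for the hosting platform.

```python
# Hedged download sketch using huggingface_hub (pip install huggingface_hub).
# repo_id is assumed from this card's model line; adjust it if this mirror
# hosts the files under a different path.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="prithivMLmods/Procyon-1.5B-Theorem-GGUF",
    filename="Procyon-1.5B-Qwen2-Theorem.Q4_K_M.gguf",
)
print(local_path)  # path to the cached GGUF file
```
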
## Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)