---
license: apache-2.0
language:
- en
base_model:
- prithivMLmods/Oganesson-TinyLlama-1.2B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- code
- math
- llama-3.2
---
# **Oganesson-TinyLlama-1.2B-GGUF**
> **Oganesson-TinyLlama-1.2B** is a lightweight and efficient language model built on the **LLaMA 3.2 1.2B** architecture. Fine-tuned for **general-purpose inference**, **mathematical reasoning**, and **code generation**, it's ideal for edge devices, personal assistants, and educational applications requiring a compact yet capable model.
## Model Files
| File Name | Size | Format |
|-----------------------------------------------|---------|--------|
| Oganesson-TinyLlama-1.2B.BF16.gguf | 2.48 GB | BF16 |
| Oganesson-TinyLlama-1.2B.F16.gguf | 2.48 GB | F16 |
| Oganesson-TinyLlama-1.2B.F32.gguf | 4.95 GB | F32 |
| Oganesson-TinyLlama-1.2B.Q4_K_M.gguf | 808 MB | Q4_K_M |
| .gitattributes | 1.8 kB | - |
| README.md | 212 B | - |
| config.json | 31 B | JSON |
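As a quick sanity check, the file sizes in the table are consistent with a roughly 1.2B-parameter model. A minimal sketch of the arithmetic (decimal gigabytes assumed; the result is approximate because GGUF files also carry metadata overhead):

```python
# Infer parameter count and effective bits-per-weight from the table's
# file sizes. Decimal (SI) units are an assumption; figures are rough.

BYTES_F32 = 4.95e9    # F32 file size from the table (4.95 GB)
BYTES_Q4KM = 808e6    # Q4_K_M file size from the table (808 MB)

params = BYTES_F32 / 4             # F32 stores 4 bytes per weight
bpw = BYTES_Q4KM * 8 / params      # effective bits per weight of Q4_K_M

print(f"~{params / 1e9:.2f}B parameters")
print(f"Q4_K_M effective bits/weight: {bpw:.1f}")
```

This lines up with the "1.2B" in the model name, and the roughly 5 bits per weight is in the expected range for a Q4_K_M quant (4-bit base with some tensors kept at higher precision).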
## Quants Usage
The table above is sorted by size, which does not necessarily reflect quality; IQ-quants are often preferable to similarly sized non-IQ quants. Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)