Model: prithivMLmods/Oganesson-TinyLlama-1.2B-GGUF
| license | pipeline_tag | library_name |
|---|---|---|
| apache-2.0 | text-generation | transformers |
# Oganesson-TinyLlama-1.2B-GGUF
Oganesson-TinyLlama-1.2B is a lightweight, efficient language model built on the LLaMA 3.2 1.2B architecture. Fine-tuned for general-purpose inference, mathematical reasoning, and code generation, it is well suited to edge devices, personal assistants, and educational applications that need a compact yet capable model.
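For local inference, the 4-bit quant can be run with llama.cpp. A minimal sketch, assuming `huggingface-cli` is installed and a llama.cpp build with `llama-cli` is on the PATH (repo id and file name are taken from this card; adjust paths as needed):

```shell
# Fetch the Q4_K_M quant from the Hub into the working directory,
# then run a short generation with llama.cpp's CLI.
huggingface-cli download prithivMLmods/Oganesson-TinyLlama-1.2B-GGUF \
  Oganesson-TinyLlama-1.2B.Q4_K_M.gguf --local-dir .

llama-cli -m Oganesson-TinyLlama-1.2B.Q4_K_M.gguf \
  -p "Explain what a GGUF file is in one sentence." \
  -n 128 --temp 0.7
```

The Q4_K_M file is the smallest listed below (808 MB) and is usually the practical choice for CPU-only or memory-constrained machines; the F16/BF16 files trade size for fidelity.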
## Model File
| File Name | Size | Format |
|---|---|---|
| Oganesson-TinyLlama-1.2B.BF16.gguf | 2.48 GB | BF16 |
| Oganesson-TinyLlama-1.2B.F16.gguf | 2.48 GB | F16 |
| Oganesson-TinyLlama-1.2B.F32.gguf | 4.95 GB | F32 |
| Oganesson-TinyLlama-1.2B.Q4_K_M.gguf | 808 MB | Q4_K_M |
| .gitattributes | 1.8 kB | - |
| README.md | 212 B | - |
| config.json | 31 B | JSON |
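The file sizes above map directly to storage cost per weight. A rough sketch of that arithmetic, back-deriving the parameter count from the F16 file (2 bytes per weight) and assuming decimal units (1 GB = 10^9 bytes), since exact counts are not listed on this card:

```python
# Estimate effective bits per weight for each GGUF file from its size.
# Sizes come from the model-file table above; the parameter count is
# inferred from the F16 file, so these are approximations.

sizes_bytes = {
    "F32": 4.95e9,
    "BF16": 2.48e9,
    "F16": 2.48e9,
    "Q4_K_M": 808e6,
}

n_params = sizes_bytes["F16"] / 2  # F16 stores 2 bytes per weight

for name, size in sizes_bytes.items():
    bpw = size * 8 / n_params
    print(f"{name}: ~{bpw:.1f} bits/weight")
```

This puts Q4_K_M at roughly 5 bits per weight (K-quants carry some overhead beyond the nominal 4 bits for scales and higher-precision blocks), which is why it is about a third the size of the F16 file rather than a quarter.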
## Quants Usage
(Sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants.)

A handy graph by ikawrakow comparing some lower-quality quant types (lower is better) is referenced here; the image is not included in this copy.