Model: prithivMLmods/Oganesson-TinyLlama-1.2B-GGUF
Source: Original Platform
2026-04-22 01:40:51 +08:00

---
license: apache-2.0
language:
- en
base_model:
- prithivMLmods/Oganesson-TinyLlama-1.2B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- code
- math
- llama-3.2
---

# Oganesson-TinyLlama-1.2B-GGUF

Oganesson-TinyLlama-1.2B is a lightweight and efficient language model built on the LLaMA 3.2 1.2B architecture. Fine-tuned for general-purpose inference, mathematical reasoning, and code generation, it's ideal for edge devices, personal assistants, and educational applications that require a compact yet capable model.
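A minimal sketch of running the smallest quant in this repo with `llama-cpp-python` (an assumption on my part: the original card does not prescribe a runtime; any llama.cpp-compatible loader should work, and the prompt text here is purely illustrative):

```python
# Sketch: load the Q4_K_M quant from the Hub via llama-cpp-python.
# Requires `pip install llama-cpp-python huggingface_hub`; downloads ~808 MB.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Oganesson-TinyLlama-1.2B-GGUF",
    filename="Oganesson-TinyLlama-1.2B.Q4_K_M.gguf",
    n_ctx=2048,       # context window; raise if your prompts are longer
    verbose=False,
)

out = llm("Q: What is 12 * 7?\nA:", max_tokens=32, stop=["\n"])
print(out["choices"][0]["text"])
```

Swap the `filename` for any entry in the table below to trade memory for precision.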

## Model Files

| File Name | Size | Format |
|---|---|---|
| Oganesson-TinyLlama-1.2B.BF16.gguf | 2.48 GB | BF16 |
| Oganesson-TinyLlama-1.2B.F16.gguf | 2.48 GB | F16 |
| Oganesson-TinyLlama-1.2B.F32.gguf | 4.95 GB | F32 |
| Oganesson-TinyLlama-1.2B.Q4_K_M.gguf | 808 MB | Q4_K_M |
| .gitattributes | 1.8 kB | - |
| README.md | 212 B | - |
| config.json | 31 B | JSON |

## Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similarly sized non-IQ quants.)
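To make the size trade-off concrete, a back-of-envelope calculation of effective bits per weight from the file sizes listed above (assumptions: ~1.2B parameters as the model name suggests, and GGUF metadata overhead is ignored, so the figures are approximate):

```python
# Approximate bits-per-weight for each quant, derived from file sizes.
PARAMS = 1.2e9  # assumed parameter count (from the model name)

sizes_bytes = {
    "F32": 4.95e9,     # 4.95 GB
    "F16": 2.48e9,     # 2.48 GB
    "Q4_K_M": 808e6,   # 808 MB
}

bits_per_weight = {name: size * 8 / PARAMS for name, size in sizes_bytes.items()}

for name, bpw in sorted(bits_per_weight.items(), key=lambda kv: -kv[1]):
    print(f"{name:7s} ~{bpw:.1f} bits/weight")
```

The Q4_K_M file works out to roughly 5.4 bits per weight, about a third of F16, which is why it is the usual starting point on memory-constrained hardware.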

ikawrakow has published a handy graph comparing some lower-quality quant types (lower is better); the image is not reproduced here.