| base_model | license | datasets | pipeline_tag |
|---|---|---|---|
| ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth | apache-2.0 | ermiaazarkhalili/Claude-Opus-4.7-Reasoning | text-generation |
# LFM2.5-1.2B-SFT-Unsloth — GGUF quantized
GGUF quantizations of `ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth`, produced with Unsloth and llama.cpp's conversion scripts.
| Field | Value |
|---|---|
| Source checkpoint | ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth |
| Base model | LiquidAI/LFM2.5-1.2B-Instruct |
| Dataset | ermiaazarkhalili/Claude-Opus-4.7-Reasoning |
| Training | 1 full epoch (effective batch size 8) |
| Conversion | Unsloth save_pretrained_gguf → llama.cpp GGUF |
| Quantization tool | llama.cpp llama-quantize |
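
The conversion row above corresponds to a short Unsloth export step. Below is a minimal sketch, assuming Unsloth's `FastLanguageModel` API and its `save_pretrained_gguf` helper; the output directory name is hypothetical:

```python
from unsloth import FastLanguageModel

# Load the merged SFT checkpoint listed as the source in the table above.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth",
    max_seq_length=2048,
    load_in_4bit=False,  # keep full precision before the GGUF export
)

# Export to GGUF; Unsloth invokes llama.cpp's conversion scripts internally.
model.save_pretrained_gguf(
    "lfm2.5-1.2b-gguf",            # hypothetical output directory
    tokenizer,
    quantization_method="q4_k_m",  # one of the quants listed below
)
```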
## Available quantizations
| File | Size | Notes |
|---|---|---|
| `LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth.Q2_K.gguf` | smallest | 2-bit; extreme compression, quality loss |
| `LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth.Q3_K_M.gguf` | small | 3-bit; modest quality trade-off |
| `LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth.Q4_K_M.gguf` | recommended | 4-bit; best size/quality balance |
| `LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth.Q5_K_M.gguf` | balanced | 5-bit; near-full quality |
| `LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth.Q6_K.gguf` | high quality | 6-bit; minimal degradation |
| `LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth.Q8_0.gguf` | largest | 8-bit; closest to bf16 source |
Recommended default: `Q4_K_M` (4-bit, K-quant medium). For memory-constrained deployment, try `Q2_K` or `Q3_K_M`. For maximum fidelity, use `Q8_0`.
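
If you want to pin an exact file rather than rely on a wildcard, you can fetch a single quant with `huggingface_hub`. A small sketch; the filename comes straight from the table above:

```python
from huggingface_hub import hf_hub_download

# Download only the recommended Q4_K_M file instead of the whole repo.
path = hf_hub_download(
    repo_id="ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth-GGUF",
    filename="LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth.Q4_K_M.gguf",
)
print(path)  # local cache path, usable with llama.cpp or llama-cpp-python
```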
## Usage

### llama.cpp
```bash
# Text-only generation
llama-cli -hf ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth-GGUF --jinja \
  -p "Explain step-by-step: if a train travels 60 mph for 2.5 hours, how far does it go?" -n 256

# Interactive chat
llama-cli -hf ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth-GGUF --jinja -cnv
```
### Ollama

```bash
ollama run hf.co/ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth-GGUF:Q4_K_M
```
### llama-cpp-python
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain step-by-step: if a train travels 60 mph for 2.5 hours, how far does it go?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```
## Intended use
For research and non-commercial experimentation only. Outputs should be independently verified before any downstream use.
## Limitations

- GGUF quantization incurs unavoidable quality loss relative to the source bfloat16 checkpoint. Use `Q5_K_M` or `Q8_0` for best fidelity; a quick side-by-side check across quants is sketched below.
- Inherits all limitations of the source merged checkpoint (`ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth`).
- Distilled reasoning traces reflect patterns from Claude Opus 4.7 and may not generalize to domains outside the distillation corpus.
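
One quick, informal way to gauge the fidelity gap is to run the same prompt at temperature 0 through a small and a large quant and compare the outputs. A rough sketch with llama-cpp-python, not a rigorous evaluation:

```python
from llama_cpp import Llama

PROMPT = [{"role": "user", "content": "Explain step-by-step: if a train travels 60 mph for 2.5 hours, how far does it go?"}]

# Compare the most aggressive quant against the most faithful one.
for pattern in ("*Q2_K.gguf", "*Q8_0.gguf"):
    llm = Llama.from_pretrained(
        repo_id="ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth-GGUF",
        filename=pattern,
        n_ctx=2048,
        verbose=False,
    )
    out = llm.create_chat_completion(messages=PROMPT, max_tokens=200, temperature=0.0)
    print(f"--- {pattern} ---")
    print(out["choices"][0]["message"]["content"])
```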
Citation
@misc{ lfm25_12b_sft_claude_opus_2026_gguf ,
author = {Ermia Azarkhalili},
title = { LFM2.5-1.2B-SFT-Unsloth — GGUF quantized },
year = {2026},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth-GGUF}}
}
This LFM2 model was trained 2× faster with Unsloth and Hugging Face's TRL library.