
---
base_model: ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth
tags:
- gguf
- llama.cpp
- unsloth
- lfm2
- reasoning
- quantized
license: apache-2.0
language:
- en
datasets:
- ermiaazarkhalili/Claude-Opus-4.7-Reasoning
pipeline_tag: text-generation
---
# LFM2.5-1.2B-SFT-Unsloth — GGUF quantized
GGUF quantizations of [`ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth`](https://huggingface.co/ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth),
produced via [Unsloth](https://github.com/unslothai/unsloth) + llama.cpp's conversion scripts.
| Field | Value |
|---|---|
| **Source checkpoint** | [`ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth`](https://huggingface.co/ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth) |
| **Base model** | [`LiquidAI/LFM2.5-1.2B-Instruct`](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct) |
| **Dataset** | [`ermiaazarkhalili/Claude-Opus-4.7-Reasoning`](https://huggingface.co/datasets/ermiaazarkhalili/Claude-Opus-4.7-Reasoning) |
| **Training** | 1 full epoch, effective batch size 8 |
| **Conversion** | Unsloth `save_pretrained_gguf` → llama.cpp GGUF |
| **Quantization tool** | llama.cpp `llama-quantize` |
## Available quantizations
| File | Relative size | Notes |
|---|---|---|
| `LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth.Q2_K.gguf` | smallest | 2-bit; extreme compression, noticeable quality loss |
| `LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth.Q3_K_M.gguf` | small | 3-bit; modest quality trade-off |
| `LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth.Q4_K_M.gguf` | medium | 4-bit; best size/quality balance (recommended) |
| `LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth.Q5_K_M.gguf` | larger | 5-bit; near-full quality |
| `LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth.Q6_K.gguf` | large | 6-bit; minimal degradation |
| `LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth.Q8_0.gguf` | largest | 8-bit; closest to the bf16 source |
**Recommended default:** `Q4_K_M` (4-bit, K-quant medium). For memory-constrained deployment, try `Q2_K` or `Q3_K_M`. For maximum fidelity, use `Q8_0`.
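As a rough guide for choosing a quant under a memory budget, the on-disk footprint can be estimated from the parameter count and a nominal bits-per-weight figure. This is a back-of-the-envelope sketch: the bits-per-weight values below are approximations I am assuming (K-quants mix precisions per layer, and GGUF files carry metadata), so actual file sizes will differ somewhat.

```python
# Ballpark on-disk size for each quantization of a ~1.2B-parameter model.
# The bits-per-weight values are assumed nominal figures, not exact.
NOMINAL_BITS = {
    "Q2_K": 3.35, "Q3_K_M": 3.9, "Q4_K_M": 4.8,
    "Q5_K_M": 5.7, "Q6_K": 6.6, "Q8_0": 8.5,
}

def estimate_size_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate file size in GiB: params * bits / 8 bytes per weight."""
    return n_params * bits_per_weight / 8 / 2**30

for quant, bits in NOMINAL_BITS.items():
    print(f"{quant:>7}: ~{estimate_size_gib(1.2e9, bits):.2f} GiB")
```

For a 1.2B model even `Q8_0` stays well under 2 GiB, so on most desktop hardware the higher-fidelity quants are affordable; the smaller quants matter mainly for mobile or embedded targets.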
## Usage
### llama.cpp
```bash
# One-shot generation
llama-cli -hf ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth-GGUF --jinja -p "Explain step-by-step: if a train travels 60 mph for 2.5 hours, how far does it go?" -n 256
# Interactive chat
llama-cli -hf ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth-GGUF --jinja -cnv
```
### Ollama
```bash
ollama run hf.co/ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth-GGUF:Q4_K_M
```
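If you have downloaded a `.gguf` file locally rather than pulling from hf.co, you can register it with a minimal Modelfile (a sketch; the parameter values are illustrative, not tuned for this model):

```
FROM ./LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth.Q4_K_M.gguf
PARAMETER temperature 0.7
PARAMETER num_ctx 2048
```

Then build and run it under any name you like, e.g. `ollama create lfm2-reasoning -f Modelfile` followed by `ollama run lfm2-reasoning`.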
### llama-cpp-python
```python
from llama_cpp import Llama
llm = Llama.from_pretrained(
    repo_id="ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain step-by-step: if a train travels 60 mph for 2.5 hours, how far does it go?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```
## Intended use
For research and non-commercial experimentation only. Outputs should be independently verified before any downstream use.
## Limitations
- GGUF quantizations have unavoidable quality loss relative to the source bfloat16 checkpoint. Use `Q5_K_M` or `Q8_0` for best fidelity.
- Inherits all limitations of the source merged checkpoint ([`ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth`](https://huggingface.co/ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth)).
- Distilled reasoning traces reflect patterns from Claude Opus 4.7 and may not generalize to domains outside the distillation corpus.
## Citation
```bibtex
@misc{lfm25_12b_sft_claude_opus_2026_gguf,
  author       = {Ermia Azarkhalili},
  title        = {LFM2.5-1.2B-SFT-Unsloth — GGUF quantized},
  year         = {2026},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/ermiaazarkhalili/LFM2.5-1.2B-SFT-Claude-Opus-Reasoning-Unsloth-GGUF}}
}
```
---
This lfm2 model was trained 2× faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)