---
base_model: ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth
tags:
- gguf
- llama.cpp
- unsloth
- lfm2
- function-calling
- quantized
license: apache-2.0
language:
- en
datasets:
- Salesforce/xlam-function-calling-60k
pipeline_tag: text-generation
---

# LFM2.5-1.2B-xLAM-Unsloth — GGUF quantized

GGUF quantizations of [`ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth`](https://huggingface.co/ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth), produced with [Unsloth](https://github.com/unslothai/unsloth) and llama.cpp's conversion scripts.

| Field | Value |
|---|---|
| **Source checkpoint** | [`ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth`](https://huggingface.co/ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth) |
| **Base model** | [`LiquidAI/LFM2.5-1.2B-Instruct`](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct) |
| **Dataset** | [`Salesforce/xlam-function-calling-60k`](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k) |
| **Training** | 1 full epoch (7,500 steps, effective batch size 8) |
| **Conversion** | Unsloth `save_pretrained_gguf` → llama.cpp GGUF |
| **Quantization tool** | llama.cpp `llama-quantize` |

## Available quantizations

| File | Relative size | Notes |
|---|---|---|
| `LFM2.5-1.2B-Function-Calling-xLAM-Unsloth.Q2_K.gguf` | smallest | 2-bit; extreme compression, noticeable quality loss |
| `LFM2.5-1.2B-Function-Calling-xLAM-Unsloth.Q3_K_M.gguf` | small | 3-bit; modest quality trade-off |
| `LFM2.5-1.2B-Function-Calling-xLAM-Unsloth.Q4_K_M.gguf` | medium | 4-bit; best size/quality balance (recommended) |
| `LFM2.5-1.2B-Function-Calling-xLAM-Unsloth.Q5_K_M.gguf` | large | 5-bit; near-full quality |
| `LFM2.5-1.2B-Function-Calling-xLAM-Unsloth.Q6_K.gguf` | larger | 6-bit; minimal degradation |
| `LFM2.5-1.2B-Function-Calling-xLAM-Unsloth.Q8_0.gguf` | largest | 8-bit; closest to the bf16 source |

**Recommended default:** `Q4_K_M` (4-bit, K-quant medium). For memory-constrained deployment, try `Q2_K` or `Q3_K_M`; for maximum fidelity, use `Q8_0`.

## Usage

### llama.cpp

```bash
# One-shot prompt
llama-cli -hf ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth-GGUF --jinja -p "Find flights from SFO to NYC on December 25th" -n 256

# Interactive chat
llama-cli -hf ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth-GGUF --jinja -cnv
```

### Ollama

```bash
ollama run hf.co/ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth-GGUF:Q4_K_M
```

### llama-cpp-python

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Find flights from SFO to NYC on December 25th"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

A tool-calling sketch is shown under "Function-calling example" below.

## Intended use

For research and non-commercial experimentation only. Outputs should be independently verified before any downstream use.

## Limitations

- GGUF quantization incurs unavoidable quality loss relative to the source bfloat16 checkpoint. Use `Q5_K_M` or `Q8_0` for best fidelity.
- Inherits all limitations of the source merged checkpoint ([`ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth`](https://huggingface.co/ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth)).
- Coverage is limited to the function schemas represented in the 60k-example training dataset; performance on novel APIs may degrade.
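## Function-calling example

Since the source checkpoint was fine-tuned for function calling, you will usually want to pass a tool schema rather than plain chat. Below is a minimal sketch using llama-cpp-python's OpenAI-style `tools` parameter; the `search_flights` schema is hypothetical, and whether the model returns a structured `tool_calls` entry or JSON text in `content` depends on the chat template bundled in the GGUF and your llama-cpp-python version.

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,
)

# Hypothetical tool schema (OpenAI function-calling format); replace with your own API.
tools = [{
    "type": "function",
    "function": {
        "name": "search_flights",
        "description": "Search for flights between two airports on a given date.",
        "parameters": {
            "type": "object",
            "properties": {
                "origin": {"type": "string", "description": "Origin IATA code, e.g. SFO"},
                "destination": {"type": "string", "description": "Destination IATA code, e.g. JFK"},
                "date": {"type": "string", "description": "Departure date, e.g. 2025-12-25"},
            },
            "required": ["origin", "destination", "date"],
        },
    },
}]

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Find flights from SFO to NYC on December 25th"}],
    tools=tools,
    max_tokens=256,
)

msg = out["choices"][0]["message"]
# Structured tool calls if the chat handler produced them, otherwise raw text.
print(msg.get("tool_calls") or msg.get("content"))
```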
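## Reproducing the quantization

The table above names Unsloth's `save_pretrained_gguf` and llama.cpp's `llama-quantize`. The exact commands used for this repo are not recorded here, but a typical llama.cpp pipeline looks like the following sketch (paths and the f16 intermediate filename are placeholders):

```bash
# Convert the merged HF checkpoint to an f16 GGUF (run from a llama.cpp checkout).
python convert_hf_to_gguf.py /path/to/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth \
  --outfile model-f16.gguf --outtype f16

# Quantize to each published format, e.g. the recommended Q4_K_M.
llama-quantize model-f16.gguf LFM2.5-1.2B-Function-Calling-xLAM-Unsloth.Q4_K_M.gguf Q4_K_M
```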
## Citation

```bibtex
@misc{lfm25_12b_xlam_unsloth_2026_gguf,
  author       = {Ermia Azarkhalili},
  title        = {LFM2.5-1.2B-xLAM-Unsloth — GGUF quantized},
  year         = {2026},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth-GGUF}}
}
```

---

This lfm2 model was trained 2× faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.