---
base_model: ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth
tags:
- gguf
- llama.cpp
- unsloth
- lfm2
- function-calling
- quantized
license: apache-2.0
language:
- en
datasets:
- Salesforce/xlam-function-calling-60k
pipeline_tag: text-generation
---

# LFM2.5-1.2B-xLAM-Unsloth — GGUF quantized

GGUF quantizations of ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth, produced via Unsloth + llama.cpp's conversion scripts.

| Field | Value |
|---|---|
| Source checkpoint | ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth |
| Base model | LiquidAI/LFM2.5-1.2B-Instruct |
| Dataset | Salesforce/xlam-function-calling-60k |
| Training | 1 full epoch (7,500 steps, effective batch size 8) |
| Conversion | Unsloth `save_pretrained_gguf` → llama.cpp GGUF |
| Quantization tool | llama.cpp `llama-quantize` |
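
For reference, the conversion step likely looked roughly like the sketch below. This is a minimal reconstruction assuming Unsloth's `save_pretrained_gguf` API; the exact arguments used for this repo are not published in this card.

```python
# Minimal sketch of the Unsloth -> GGUF conversion pipeline named above.
# The output directory and quantization level here are assumptions.
from unsloth import FastLanguageModel

# Load the merged fine-tuned checkpoint.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth",
)

# save_pretrained_gguf wraps llama.cpp's conversion script plus
# llama-quantize and writes a GGUF at the requested quantization level.
model.save_pretrained_gguf(
    "gguf-out",
    tokenizer,
    quantization_method="q4_k_m",
)
```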

## Available quantizations

| File | Size | Notes |
|---|---|---|
| LFM2.5-1.2B-Function-Calling-xLAM-Unsloth.Q2_K.gguf | smallest | 2-bit; extreme compression, quality loss |
| LFM2.5-1.2B-Function-Calling-xLAM-Unsloth.Q3_K_M.gguf | small | 3-bit; modest quality trade-off |
| LFM2.5-1.2B-Function-Calling-xLAM-Unsloth.Q4_K_M.gguf | medium | 4-bit; best size/quality balance (recommended) |
| LFM2.5-1.2B-Function-Calling-xLAM-Unsloth.Q5_K_M.gguf | medium-large | 5-bit; near-full quality |
| LFM2.5-1.2B-Function-Calling-xLAM-Unsloth.Q6_K.gguf | large | 6-bit; minimal degradation |
| LFM2.5-1.2B-Function-Calling-xLAM-Unsloth.Q8_0.gguf | largest | 8-bit; closest to the bf16 source |

Recommended default: Q4_K_M (4-bit, K-quant medium). For memory-constrained deployment, try Q2_K or Q3_K_M. For maximum fidelity, use Q8_0.
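
To fetch a single quantization rather than cloning the whole repo, a small `huggingface_hub` snippet like the following should work (filenames follow the table above):

```python
from huggingface_hub import hf_hub_download

# Downloads one GGUF file into the local HF cache and returns its path.
gguf_path = hf_hub_download(
    repo_id="ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth-GGUF",
    filename="LFM2.5-1.2B-Function-Calling-xLAM-Unsloth.Q4_K_M.gguf",
)
print(gguf_path)  # pass this path to llama.cpp, Ollama, or bindings
```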

## Usage

### llama.cpp

```bash
# Text-only
llama-cli -hf ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth-GGUF --jinja -p "Find flights from SFO to NYC on December 25th" -n 256

# Interactive chat
llama-cli -hf ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth-GGUF --jinja -cnv
```

### Ollama

```bash
ollama run hf.co/ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth-GGUF:Q4_K_M
```
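
The same model can also be driven from Python through the official `ollama` client; a minimal sketch, assuming the Ollama server is running locally:

```python
import ollama

# Ollama pulls the GGUF from Hugging Face on first use of this tag.
response = ollama.chat(
    model="hf.co/ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth-GGUF:Q4_K_M",
    messages=[{"role": "user", "content": "Find flights from SFO to NYC on December 25th"}],
)
print(response["message"]["content"])
```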

### llama-cpp-python

```python
from llama_cpp import Llama

# Downloads the matching GGUF from the Hub and loads it.
llm = Llama.from_pretrained(
    repo_id="ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Find flights from SFO to NYC on December 25th"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```
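
Because the checkpoint is tuned for function calling, you will usually want to pass tool schemas as well. llama-cpp-python's `create_chat_completion` accepts OpenAI-style `tools`/`tool_choice` arguments; whether the model emits structured tool calls depends on the chat template embedded in the GGUF. The `search_flights` schema below is purely hypothetical and reuses the `llm` instance from the snippet above:

```python
# Hypothetical tool schema, for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "search_flights",
        "description": "Search for flights between two airports on a date.",
        "parameters": {
            "type": "object",
            "properties": {
                "origin": {"type": "string"},
                "destination": {"type": "string"},
                "date": {"type": "string", "description": "YYYY-MM-DD"},
            },
            "required": ["origin", "destination", "date"],
        },
    },
}]

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Find flights from SFO to NYC on December 25th"}],
    tools=tools,
    tool_choice="auto",
    max_tokens=256,
)
print(out["choices"][0]["message"])  # may include a tool_calls entry
```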

## Intended use

For research and non-commercial experimentation only. Outputs should be independently verified before any downstream use.

## Limitations

- GGUF quantization introduces unavoidable quality loss relative to the source bfloat16 checkpoint. Use Q5_K_M or Q8_0 for the best fidelity.
- Inherits all limitations of the source merged checkpoint (ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth).
- Fine-tuning covers only the function schemas present in the 60k-example training dataset; performance on novel APIs may degrade.

## Citation

```bibtex
@misc{lfm25_12b_xlam_unsloth_2026_gguf,
  author       = {Ermia Azarkhalili},
  title        = {LFM2.5-1.2B-xLAM-Unsloth — GGUF quantized},
  year         = {2026},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/ermiaazarkhalili/LFM2.5-1.2B-Function-Calling-xLAM-Unsloth-GGUF}}
}
```

This lfm2 model was trained 2× faster with Unsloth and Hugging Face's TRL library.
