Model: prithivMLmods/LFM2.5-350M-F32-GGUF
---
license: apache-2.0
language:
- en
base_model:
- LiquidAI/LFM2.5-350M
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- edge
- llama.cpp
---

# LFM2.5-350M-F32-GGUF

LiquidAI/LFM2.5-350M is an ultra-compact 350M-parameter model from Liquid AI's LFM2.5 series. It uses a hybrid architecture that combines 10 double-gated Linear Input-Varying (LIV) convolution blocks for efficient sequence processing with 6 Grouped Query Attention (GQA) blocks for precise long-range context handling. Trained on 28T tokens (an 80K:1 token-to-parameter ratio) with extensive reinforcement learning, it excels at agentic tasks such as tool calling, data extraction, structured JSON output, and multi-step reasoning, outperforming models twice its size on GPQA Diamond, MMLU-Pro, IFEval, BFCLv3/4, and CaseReportBench while achieving blazing-fast inference (313 tok/s on AMD CPUs, 188 tok/s on Snapdragon Gen4). Optimized for edge deployment in under 1 GB of memory with native llama.cpp/MLX/vLLM support, it represents peak "intelligence density" for running reliable agent loops on mobile, IoT, and low-power devices where larger Transformers are impractical, making high-quality structured data processing and function calling viable on consumer-grade hardware.
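The agentic use cases above hinge on the model emitting structured JSON for tool calls. A minimal sketch of the consuming side, assuming a hypothetical response format with `name` and `arguments` fields (the actual LFM2.5 chat template and tool-call schema may differ):

```python
import json

# Hypothetical tool registry; the tool name and signature are
# illustrative, not part of the LFM2.5 specification.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def dispatch_tool_call(model_output: str) -> str:
    """Parse a JSON tool call emitted by the model and invoke the tool."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Example: a structured JSON output as the model might produce it.
response = '{"name": "get_weather", "arguments": {"city": "Berlin"}}'
print(dispatch_tool_call(response))  # Sunny in Berlin
```

Reliable loops like this are exactly where a small model's instruction-following scores (IFEval, BFCL) matter more than raw size.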

## Model Files

| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| LFM2.5-350M.BF16.gguf | BF16 | 711 MB | Download |
| LFM2.5-350M.F16.gguf | F16 | 711 MB | Download |
| LFM2.5-350M.F32.gguf | F32 | 1.42 GB | Download |
| LFM2.5-350M.Q8_0.gguf | Q8_0 | 379 MB | Download |
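The file sizes follow directly from the parameter count: roughly 350M weights times the storage cost per weight of each format (Q8_0 stores an 8-bit value plus a per-block scale, about 8.5 bits per weight). A back-of-envelope check, using the nominal 350M figure (the exact count, including embeddings, is slightly higher, which is why the table's numbers come out a bit larger):

```python
params = 350e6  # nominal parameter count

# Approximate on-disk size per quant type, in MB.
sizes_mb = {
    "F32":  params * 4      / 1e6,  # 32-bit floats
    "F16":  params * 2      / 1e6,  # 16-bit floats
    "BF16": params * 2      / 1e6,  # 16-bit brain floats
    "Q8_0": params * 8.5 / 8 / 1e6, # ~8.5 bits/weight incl. block scales
}
for name, mb in sizes_mb.items():
    print(f"{name}: ~{mb:.0f} MB")
```

This also explains why BF16 and F16 files are byte-identical in size, and why Q8_0 is the only variant here that comfortably fits the "under 1 GB" edge budget with room to spare for the KV cache.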

## Quants Usage

(Sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![Quant type comparison graph by ikawrakow](image.png)