commit 49be3eb0761ddd24797b7198a194ae3bceecbadc
Author: ModelHub XC
Date:   Sun Apr 12 15:43:59 2026 +0800

    Initialize project; model provided by the ModelHub XC community

    Model: prithivMLmods/LFM2.5-350M-F32-GGUF
    Source: Original Platform

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..d27797f
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,39 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+GGUF/LFM2.5-350M.BF16.gguf filter=lfs diff=lfs merge=lfs -text
+GGUF/LFM2.5-350M.F16.gguf filter=lfs diff=lfs merge=lfs -text
+GGUF/LFM2.5-350M.F32.gguf filter=lfs diff=lfs merge=lfs -text
+GGUF/LFM2.5-350M.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/GGUF/LFM2.5-350M.BF16.gguf b/GGUF/LFM2.5-350M.BF16.gguf
new file mode 100644
index 0000000..2cd6b39
--- /dev/null
+++ b/GGUF/LFM2.5-350M.BF16.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:db9f562207ff9999340384853881ba2c9003ded438d740f9ec4bb86a7bacfa60
+size 711485152
diff --git a/GGUF/LFM2.5-350M.F16.gguf b/GGUF/LFM2.5-350M.F16.gguf
new file mode 100644
index 0000000..e3c0f2c
--- /dev/null
+++ b/GGUF/LFM2.5-350M.F16.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:93fa9c0d818386fae26dcb89d54deac6c7e5cb170c94e51a1fe6cbcacb591e09
+size 711485152
diff --git a/GGUF/LFM2.5-350M.F32.gguf b/GGUF/LFM2.5-350M.F32.gguf
new file mode 100644
index 0000000..0066c5e
--- /dev/null
+++ b/GGUF/LFM2.5-350M.F32.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0be5f249f0411506de78c055523d85114413e1d32a8797a3a16891eda623f512
+size 1420322528
diff --git a/GGUF/LFM2.5-350M.Q8_0.gguf b/GGUF/LFM2.5-350M.Q8_0.gguf
new file mode 100644
index 0000000..394f267
--- /dev/null
+++ b/GGUF/LFM2.5-350M.Q8_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6fbdd94ce5c61f9be4f843bd49d9c86f5a3295e804ecf6c1a3f5ade12f9969f5
+size 379217632
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..bced9d4
--- /dev/null
+++ b/README.md
@@ -0,0 +1,35 @@
+---
+license: apache-2.0
+language:
+- en
+base_model:
+- LiquidAI/LFM2.5-350M
+pipeline_tag: text-generation
+library_name: transformers
+tags:
+- text-generation-inference
+- edge
+- llama.cpp
+---
+
+# **LFM2.5-350M-F32-GGUF**
+
+> LiquidAI/LFM2.5-350M is an ultra-compact 350M-parameter model from Liquid AI's LFM2.5 series. It combines a hybrid architecture of 10 double-gated Linear Input-Varying (LIV) convolution blocks for efficient sequence processing with 6 Grouped Query Attention (GQA) blocks for precise long-range context handling, and was trained on 28T tokens (an 80K:1 token-to-parameter ratio) with extensive reinforcement learning to excel at agentic tasks such as tool calling, data extraction, structured JSON output, and multi-step reasoning. It outperforms models twice its size on GPQA Diamond, MMLU-Pro, IFEval, BFCLv3/4, and CaseReportBench while delivering fast inference (313 tok/s on AMD CPUs, 188 tok/s on Snapdragon Gen4). Optimized for edge deployment in under 1 GB of memory, with native llama.cpp/MLX/vLLM support, it offers high "intelligence density" for running reliable agent loops on mobile, IoT, and low-power devices where larger Transformers are impractical, making structured data processing and function calling viable on consumer-grade hardware.
+
+## Model Files
+
+| File Name | Quant Type | File Size | File Link |
+| - | - | - | - |
+| LFM2.5-350M.BF16.gguf | BF16 | 711 MB | [Download](https://huggingface.co/prithivMLmods/LFM2.5-350M-F32-GGUF/blob/main/GGUF/LFM2.5-350M.BF16.gguf) |
+| LFM2.5-350M.F16.gguf | F16 | 711 MB | [Download](https://huggingface.co/prithivMLmods/LFM2.5-350M-F32-GGUF/blob/main/GGUF/LFM2.5-350M.F16.gguf) |
+| LFM2.5-350M.F32.gguf | F32 | 1.42 GB | [Download](https://huggingface.co/prithivMLmods/LFM2.5-350M-F32-GGUF/blob/main/GGUF/LFM2.5-350M.F32.gguf) |
+| LFM2.5-350M.Q8_0.gguf | Q8_0 | 379 MB | [Download](https://huggingface.co/prithivMLmods/LFM2.5-350M-F32-GGUF/blob/main/GGUF/LFM2.5-350M.Q8_0.gguf) |
+
+## Quants Usage
+
+(Sorted by size, which does not necessarily track quality; IQ-quants are often preferable to similarly sized non-IQ quants.)
+
+Here is a handy graph by ikawrakow comparing some lower-quality quant
+types (lower is better):
+
+![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
\ No newline at end of file
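
Note that each `.gguf` file in the diff above is committed as a Git LFS pointer rather than the binary itself: a small three-field text stub (spec `version`, SHA-256 `oid`, byte `size`). A minimal sketch of parsing such a pointer in Python, using the Q8_0 pointer content reproduced verbatim from the diff:

```python
# Parse a Git LFS pointer file (version / oid / size key-value lines),
# the format in which this repo stores its .gguf weights.
# The content below is the Q8_0 pointer from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:6fbdd94ce5c61f9be4f843bd49d9c86f5a3295e804ecf6c1a3f5ade12f9969f5
size 379217632
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of an LFS pointer into a dict entry."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    fields["size"] = int(fields["size"])  # byte count of the real artifact
    return fields

info = parse_lfs_pointer(pointer)
print(info["oid"])                     # sha256:6fbdd94c...
print(f"{info['size'] / 1e6:.0f} MB")  # 379 MB
```

The parsed `size` (379,217,632 bytes) matches the 379 MB listed for Q8_0 in the README's Model Files table, a quick sanity check that a pointer corresponds to the expected artifact.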