Initialize the project; model provided by the ModelHub XC community.
Model: prithivMLmods/LFM2.5-350M-F32-GGUF (Source: Original Platform)
39
.gitattributes
vendored
Normal file
@@ -0,0 +1,39 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
GGUF/LFM2.5-350M.BF16.gguf filter=lfs diff=lfs merge=lfs -text
GGUF/LFM2.5-350M.F16.gguf filter=lfs diff=lfs merge=lfs -text
GGUF/LFM2.5-350M.F32.gguf filter=lfs diff=lfs merge=lfs -text
GGUF/LFM2.5-350M.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
3
GGUF/LFM2.5-350M.BF16.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:db9f562207ff9999340384853881ba2c9003ded438d740f9ec4bb86a7bacfa60
size 711485152
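
The `.gguf` binaries in this commit are stored as Git LFS pointer files rather than raw bytes; each pointer records the spec version, a SHA-256 object id, and the file size in bytes. As a hedged illustration (a hypothetical helper, not part of this repository), here is a minimal Python sketch that parses such a pointer:

```python
# Sketch: parse a Git LFS v1 pointer (like the blocks in this commit)
# into its three fields. Hypothetical helper, not repo code.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    assert fields["version"] == "https://git-lfs.github.com/spec/v1"
    fields["oid"] = fields["oid"].removeprefix("sha256:")  # bare SHA-256 hex digest
    fields["size"] = int(fields["size"])                   # size in bytes
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:db9f562207ff9999340384853881ba2c9003ded438d740f9ec4bb86a7bacfa60
size 711485152"""
print(parse_lfs_pointer(pointer))  # oid and size match the BF16 file above
```
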
3
GGUF/LFM2.5-350M.F16.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:93fa9c0d818386fae26dcb89d54deac6c7e5cb170c94e51a1fe6cbcacb591e09
size 711485152
3
GGUF/LFM2.5-350M.F32.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0be5f249f0411506de78c055523d85114413e1d32a8797a3a16891eda623f512
size 1420322528
3
GGUF/LFM2.5-350M.Q8_0.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6fbdd94ce5c61f9be4f843bd49d9c86f5a3295e804ecf6c1a3f5ade12f9969f5
size 379217632
35
README.md
Normal file
@@ -0,0 +1,35 @@
---
license: apache-2.0
language:
- en
base_model:
- LiquidAI/LFM2.5-350M
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- edge
- llama.cpp
---

# **LFM2.5-350M-F32-GGUF**

> LiquidAI/LFM2.5-350M is an ultra-compact 350M-parameter model from Liquid AI's LFM2.5 series. It uses a hybrid architecture that pairs 10 double-gated Linear Input-Varying (LIV) convolution blocks for efficient sequence processing with 6 Grouped Query Attention (GQA) blocks for precise long-range context handling. Trained on 28T tokens (an 80K:1 token-to-parameter ratio) with extensive reinforcement learning, it excels at agentic tasks such as tool calling, data extraction, structured JSON output, and multi-step reasoning, and it outperforms models twice its size on GPQA Diamond, MMLU-Pro, IFEval, BFCLv3/4, and CaseReportBench while delivering fast inference (313 tok/s on AMD CPUs, 188 tok/s on Snapdragon Gen4). Optimized for edge deployment in under 1 GB of memory, with native llama.cpp/MLX/vLLM support, it offers high "intelligence density" for running reliable agent loops on mobile, IoT, and low-power server hardware where larger Transformers are impractical, making structured data processing and function calling viable on consumer-grade hardware.

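Since the card notes native llama.cpp support, a quick way to try a quant locally is through the llama-cpp-python bindings. This is a minimal sketch, not an official recipe: the prompt is invented, and the model path assumes you have a local copy of `GGUF/LFM2.5-350M.Q8_0.gguf` from this repository.

```python
# Sketch: run the Q8_0 quant with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="GGUF/LFM2.5-350M.Q8_0.gguf",  # local copy of the quant from this repo
    n_ctx=4096,  # context window; shrink to fit tighter memory budgets
)

# The model card highlights structured extraction, so ask for JSON output.
out = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Return JSON with the date from: 'Invoice issued 2024-07-01'."}
    ],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```
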
## Model Files

| File Name | Quant Type | File Size | File Link |
| - | - | - | - |
| LFM2.5-350M.BF16.gguf | BF16 | 711 MB | [Download](https://huggingface.co/prithivMLmods/LFM2.5-350M-F32-GGUF/blob/main/GGUF/LFM2.5-350M.BF16.gguf) |
| LFM2.5-350M.F16.gguf | F16 | 711 MB | [Download](https://huggingface.co/prithivMLmods/LFM2.5-350M-F32-GGUF/blob/main/GGUF/LFM2.5-350M.F16.gguf) |
| LFM2.5-350M.F32.gguf | F32 | 1.42 GB | [Download](https://huggingface.co/prithivMLmods/LFM2.5-350M-F32-GGUF/blob/main/GGUF/LFM2.5-350M.F32.gguf) |
| LFM2.5-350M.Q8_0.gguf | Q8_0 | 379 MB | [Download](https://huggingface.co/prithivMLmods/LFM2.5-350M-F32-GGUF/blob/main/GGUF/LFM2.5-350M.Q8_0.gguf) |
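
To fetch a single quant programmatically rather than through the links above, the `huggingface_hub` client works with this repository layout. A minimal sketch, assuming the package is installed and using the repo id from this card:

```python
# Sketch: download one quant file from this repo (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="prithivMLmods/LFM2.5-350M-F32-GGUF",
    filename="GGUF/LFM2.5-350M.Q8_0.gguf",  # pick any file from the table above
)
print(path)  # local cache path of the downloaded GGUF
```
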
## Quants Usage

(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[hf.co/datasets/ikawrakow/validation-datasets-for-llama.cpp](https://hf.co/datasets/ikawrakow/validation-datasets-for-llama.cpp)