# LFM2-2.6B-GGUF

LFM2 is a new generation of hybrid models developed by [Liquid AI](https://www.liquid.ai/), specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.

Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2-2.6B

## 🏃 How to run LFM2

Example usage with [llama.cpp](https://github.com/ggml-org/llama.cpp):

```shell
llama-cli -hf LiquidAI/LFM2-2.6B-GGUF
```
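If you prefer an OpenAI-compatible HTTP endpoint instead of the interactive CLI, llama.cpp's `llama-server` can load the same GGUF repo. A minimal sketch (the port and context-size values here are illustrative choices, not requirements from the model card):

```shell
# Start a local OpenAI-compatible server on port 8080
# -c 4096 sets the context window; adjust to your hardware
llama-server -hf LiquidAI/LFM2-2.6B-GGUF -c 4096 --port 8080

# In another terminal, query the chat completions endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```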