From ce4d10db2db9f8953e4b3adb2faec2808b9be59e Mon Sep 17 00:00:00 2001
From: Maxime Labonne
Date: Wed, 24 Sep 2025 10:44:01 +0000
Subject: [PATCH] Update README.md

---
 README.md | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 62 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index c756d3c..935f19f 100644
--- a/README.md
+++ b/README.md
@@ -1,12 +1,73 @@
-
 ---
 license: other
 license_name: lfm1.0
 license_link: LICENSE
+language:
+- en
+- ar
+- zh
+- fr
+- de
+- ja
+- ko
+- es
+pipeline_tag: text-generation
 tags:
 - liquid
 - lfm2
 - edge
+- llama.cpp
+- gguf
 base_model:
 - LiquidAI/LFM2-2.6B
 ---
+
+<div align="center">
+<!-- Header images: the Liquid AI logo and the "Liquid: Playground" badge,
+     plus the other badge links; the image markup was lost in extraction. -->
+</div>
+
+# LFM2-2.6B-GGUF
+
+LFM2 is a new generation of hybrid models developed by [Liquid AI](https://www.liquid.ai/), specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.
+
+Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2-2.6B
+
+## 🏃 How to run LFM2
+
+Example usage with [llama.cpp](https://github.com/ggml-org/llama.cpp):
+
+```shell
+llama-cli -hf LiquidAI/LFM2-2.6B-GGUF
+```
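Beyond the interactive one-liner in the patched README, llama.cpp exposes the same GGUF checkpoint through a few other entry points. A hedged sketch follows: the flag names reflect recent llama.cpp builds, and the prompt text, token limit, and port are illustrative choices, not part of the patch.

```shell
# One-shot generation: -hf downloads the GGUF from the Hugging Face repo on
# first use, -p supplies a prompt, and -n caps the number of new tokens
llama-cli -hf LiquidAI/LFM2-2.6B-GGUF -p "Summarize what GGUF is in one sentence." -n 256

# Serve the same model over an OpenAI-compatible HTTP API on port 8080
llama-server -hf LiquidAI/LFM2-2.6B-GGUF --port 8080

# From another shell: query the server's chat completions endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```

The `-hf` shortcut caches the download locally, so subsequent runs of either command start without re-fetching the 2.6B-parameter weights.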