diff --git a/README.md b/README.md
index 6be1c70..9722b2b 100644
--- a/README.md
+++ b/README.md
@@ -31,11 +31,12 @@ base_model:
 />
-
-Try LFMDocumentationLEAP
+Try LFMDocsLEAPDiscord
+
+
 # LFM2-2.6B-GGUF
 
 LFM2 is a new generation of hybrid models developed by [Liquid AI](https://www.liquid.ai/), specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.
@@ -49,3 +50,8 @@ Example usage with [llama.cpp](https://github.com/ggml-org/llama.cpp):
 ```
 llama-cli -hf LiquidAI/LFM2-2.6B-GGUF
 ```
+
+## 📬 Contact
+
+- Got questions or want to connect? [Join our Discord community](https://discord.com/invite/liquid-ai)
+- If you are interested in custom solutions with edge deployment, please contact [our sales team](https://www.liquid.ai/contact).