Add Discord community link to README

Paulescu
2026-03-30 14:59:16 +02:00
parent 7949a4d182
commit a759abdc59


@@ -31,11 +31,12 @@ base_model:
 />
 </div>
 <div style="display: flex; justify-content: center; gap: 0.5em;">
-<a href="https://playground.liquid.ai/chat">
-<a href="https://playground.liquid.ai/"><strong>Try LFM</strong></a><a href="https://docs.liquid.ai/lfm"><strong>Documentation</strong></a><a href="https://leap.liquid.ai/"><strong>LEAP</strong></a></a>
+<a href="https://playground.liquid.ai/"><strong>Try LFM</strong></a><a href="https://docs.liquid.ai/lfm/getting-started/welcome"><strong>Docs</strong></a><a href="https://leap.liquid.ai/"><strong>LEAP</strong></a><a href="https://discord.com/invite/liquid-ai"><strong>Discord</strong></a>
 </div>
 </center>
 <br>
 # LFM2-2.6B-GGUF
 LFM2 is a new generation of hybrid models developed by [Liquid AI](https://www.liquid.ai/), specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.
@@ -49,3 +50,8 @@ Example usage with [llama.cpp](https://github.com/ggml-org/llama.cpp):
 ```
 llama-cli -hf LiquidAI/LFM2-2.6B-GGUF
 ```
+
+## 📬 Contact
+
+- Got questions or want to connect? [Join our Discord community](https://discord.com/invite/liquid-ai)
+- If you are interested in custom solutions with edge deployment, please contact [our sales team](https://www.liquid.ai/contact).
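
Beyond the one-shot `llama-cli` invocation shown in the diff above, the same GGUF can also be served over HTTP. A minimal sketch using llama.cpp's `llama-server` (the port number is an arbitrary example, not part of this commit):

```
# Download LFM2-2.6B-GGUF from Hugging Face (cached after the first run)
# and expose it through llama.cpp's OpenAI-compatible HTTP API.
# The port is an arbitrary example value, not taken from this commit.
llama-server -hf LiquidAI/LFM2-2.6B-GGUF --port 8080
```

Once the server is running, any OpenAI-compatible client can point at `http://localhost:8080/v1` to chat with the model.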