---
base_model: LiquidAI/LFM2-2.6B
license: apache-2.0
---
# MAJESTIC-FIN-R1 GGUF
**MAJESTIC-FIN-R1** is a fine-tuned `LiquidAI/LFM2-2.6B` model exported to GGUF for Ollama, llama.cpp, and lightweight CPU deployment.
## Available files
- `MAJESTIC-FIN-R1-F16.gguf`: highest-fidelity GGUF export.
- `MAJESTIC-FIN-R1-Q8_0.gguf`: smaller GGUF export for Ollama and free CPU hosting.
- `template`: Ollama chat template for this model family.
- `params`: default Ollama runtime parameters.
- `Modelfile`: local Ollama import file.
## Run with Ollama from Hugging Face
```shell
ollama run hf.co/EREN121232/MAJESTIC-FIN-R1-gguf:Q8_0
```
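Once the model has been pulled, the same `hf.co` model reference can be used programmatically through Ollama's local HTTP API. A minimal sketch, assuming a running Ollama server on its default port; the helper name and prompt are illustrative, only the request body is actually built here:

```python
# Build the JSON body that Ollama's /api/chat endpoint expects.
# Sending it requires a running Ollama server (default: localhost:11434).
import json

MODEL = "hf.co/EREN121232/MAJESTIC-FIN-R1-gguf:Q8_0"

def build_chat_request(prompt: str) -> dict:
    """Return the request body for a single non-streaming chat turn."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

if __name__ == "__main__":
    body = build_chat_request("Summarize the main drivers of free cash flow.")
    print(json.dumps(body, indent=2))
    # To send it (network/server required):
    #   import requests
    #   r = requests.post("http://localhost:11434/api/chat", json=body)
    #   print(r.json()["message"]["content"])
```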
## Run with Ollama locally
1. Download `MAJESTIC-FIN-R1-Q8_0.gguf` and `Modelfile`.
2. Keep them in the same folder.
3. Run:

```shell
ollama create majestic-fin-r1 -f Modelfile
ollama run majestic-fin-r1
```
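After `ollama create`, the local model name also works against Ollama's `/api/generate` endpoint for one-shot completions. A hedged sketch (the helper and prompt are illustrative; only the request body is constructed):

```python
# Build the JSON body for Ollama's /api/generate endpoint,
# targeting the model created from the Modelfile above.
import json

def build_generate_request(prompt: str, model: str = "majestic-fin-r1") -> dict:
    """Return the body for a single non-streaming completion request."""
    return {"model": model, "prompt": prompt, "stream": False}

if __name__ == "__main__":
    body = build_generate_request("List three common liquidity ratios.")
    print(json.dumps(body, indent=2))
    # To send it (requires the model created above and a running server):
    #   import requests
    #   r = requests.post("http://localhost:11434/api/generate", json=body)
    #   print(r.json()["response"])
```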
## Free hosted demo and API
A public Hugging Face Space can serve the Q8_0 build on free CPU hardware. The companion Space for this repo is:
https://huggingface.co/spaces/EREN121232/MAJESTIC-FIN-R1-Free-API
Once the Space is live, use the **Use via API** footer link to inspect the endpoints, or call the `/chat` endpoint directly from Python, JavaScript, or curl.
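As a starting point, here is a sketch of a direct HTTP call to the Space. The subdomain form and the `{"data": [...]}` payload follow the usual Gradio Space conventions, but both are assumptions; confirm the real endpoint signature on the Space's **Use via API** page:

```python
# Build the URL and JSON body for a call to the Space's /chat endpoint.
# ASSUMPTION: the Space exposes a Gradio-style /call/chat route and
# accepts a {"data": [...]} payload; verify via "Use via API".
import json

SPACE_URL = "https://eren121232-majestic-fin-r1-free-api.hf.space"

def build_space_request(message: str) -> tuple[str, dict]:
    """Return (url, body) for one chat message to the Space."""
    return f"{SPACE_URL}/call/chat", {"data": [message]}

if __name__ == "__main__":
    url, body = build_space_request("Explain operating leverage briefly.")
    print(url)
    print(json.dumps(body))
    # To send it (network required):
    #   import requests
    #   resp = requests.post(url, json=body)
    #   print(resp.json())
```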