base_model: LiquidAI/LFM2-2.6B
license: apache-2.0
language: en
tags: gguf, llama.cpp, ollama, lfm2, unsloth, conversational, text-generation

MAJESTIC-FIN-R1 GGUF

MAJESTIC-FIN-R1 is a fine-tuned LiquidAI/LFM2-2.6B model exported to GGUF for Ollama, llama.cpp, and lightweight CPU deployment.

Available files

  • MAJESTIC-FIN-R1-F16.gguf: full-precision (FP16) export; highest fidelity, largest file.
  • MAJESTIC-FIN-R1-Q8_0.gguf: 8-bit quantized export; smaller, and suited to Ollama and free CPU hosting.
  • template: Ollama chat template for this model family.
  • params: default Ollama runtime parameters.
  • Modelfile: local Ollama import file.

Run with Ollama from Hugging Face

ollama run hf.co/EREN121232/MAJESTIC-FIN-R1-gguf:Q8_0

Run with Ollama locally

  1. Download MAJESTIC-FIN-R1-Q8_0.gguf and Modelfile.
  2. Keep them in the same folder.
  3. Run:
ollama create majestic-fin-r1 -f Modelfile
ollama run majestic-fin-r1
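
The repo ships its own Modelfile, but for reference, a minimal Modelfile for this build might look like the sketch below. The PARAMETER line is illustrative only and is not taken from this repo's params file:

```
FROM ./MAJESTIC-FIN-R1-Q8_0.gguf
# Illustrative default; the repo's own Modelfile and params may differ
PARAMETER temperature 0.7
```

The FROM path is resolved relative to the Modelfile, which is why step 2 keeps both files in the same folder.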

Free hosted demo and API

A public Hugging Face Space can serve the Q8_0 build on free CPU hardware. The companion Space for this repo is:

  • https://huggingface.co/spaces/EREN121232/MAJESTIC-FIN-R1-Free-API

Once the Space is live, open the "Use via API" link in its footer to inspect the available endpoints, or call the /chat endpoint directly from Python, JavaScript, or curl.
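
As a starting point, a minimal Python client for that endpoint might look like the sketch below. The Space subdomain, the /chat path, and the request/response JSON keys ("message", "response") are all assumptions; verify them against the Space's "Use via API" page before relying on this:

```python
import json
import urllib.request

# Assumed direct URL of the Space (derived from the owner/Space name); verify it.
SPACE_URL = "https://eren121232-majestic-fin-r1-free-api.hf.space"


def build_request(prompt: str) -> urllib.request.Request:
    """Build a POST request for the assumed /chat endpoint."""
    body = json.dumps({"message": prompt}).encode("utf-8")  # assumed schema
    return urllib.request.Request(
        f"{SPACE_URL}/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def chat(prompt: str) -> str:
    """Send a prompt to the Space and return the model's reply."""
    with urllib.request.urlopen(build_request(prompt), timeout=120) as resp:
        # "response" is an assumed key in the returned JSON.
        return json.loads(resp.read().decode("utf-8"))["response"]
```

Usage, once the Space is up: `print(chat("Summarize the latest quarterly report."))`. A free CPU Space may cold-start slowly, hence the generous timeout.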