Initialize project; model provided by the ModelHub XC community
Model: EREN121232/MAJESTIC-FIN-R1-gguf Source: Original Platform
---
base_model: LiquidAI/LFM2-2.6B
license: apache-2.0
language:
- en
tags:
- gguf
- llama.cpp
- ollama
- lfm2
- unsloth
- conversational
- text-generation
---
# MAJESTIC-FIN-R1 GGUF
MAJESTIC-FIN-R1 is a fine-tuned `LiquidAI/LFM2-2.6B` model exported to GGUF for Ollama, llama.cpp, and lightweight CPU deployment.

## Available files

- `MAJESTIC-FIN-R1-F16.gguf`: highest-fidelity GGUF export.
- `MAJESTIC-FIN-R1-Q8_0.gguf`: smaller GGUF export for Ollama and free CPU hosting.
- `template`: Ollama chat template for this model family.
- `params`: default Ollama runtime parameters.
- `Modelfile`: local Ollama import file.
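
The GGUF files also load directly in llama.cpp. A minimal sketch, assuming `llama-cli` from a recent llama.cpp build is on your `PATH` and the Q8_0 file has been downloaded into the current directory (the prompt text is just an illustration):

```shell
# Assumes llama-cli (llama.cpp) is installed and the GGUF file is local.
# -m selects the model file, -p gives the prompt, -n caps generated tokens.
llama-cli -m MAJESTIC-FIN-R1-Q8_0.gguf -p "Summarize the key risks in this quarterly report." -n 256
```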

## Run with Ollama from Hugging Face

```bash
ollama run hf.co/EREN121232/MAJESTIC-FIN-R1-gguf:Q8_0
```

## Run with Ollama locally

1. Download `MAJESTIC-FIN-R1-Q8_0.gguf` and `Modelfile`.
2. Keep them in the same folder.
3. Run:

```bash
ollama create majestic-fin-r1 -f Modelfile
ollama run majestic-fin-r1
```
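
For step 1, the `Modelfile` shipped in this repo is the authoritative one; as a rough sketch of what such a file looks like (the `FROM` path must point at the downloaded GGUF, and the real file likely also sets the chat template from `template` and parameters from `params`):

```
FROM ./MAJESTIC-FIN-R1-Q8_0.gguf
PARAMETER temperature 0.7
```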

## Free hosted demo and API

A public Hugging Face Space can serve the `Q8_0` build on free CPU hardware. The companion Space for this repo is:

- `https://huggingface.co/spaces/EREN121232/MAJESTIC-FIN-R1-Free-API`

Once the Space is live, use the footer link `Use via API` to inspect endpoints, or call the `/chat` endpoint directly from Python, JavaScript, or curl.