Project initialized; model provided by the ModelHub XC community.
Model: Sathman/Meditation-Agent-SmolLM3-3B-GGUF (Source: Original Platform)
---
license: apache-2.0
base_model: HuggingFaceTB/SmolLM3-3B-Base
tags:
- contemplative-ai
- fine-tuned
- gguf
- lora
- qlora
- smollm3
- nondual
- teaching
- spirituality
- awareness
- advaita
- meditation
language:
- en
pipeline_tag: text-generation
---

# Meditation Agent (SmolLM3 3B) — Contemplative Teaching AI

This is the 3B branch of the Meditation Agent series, built on
[HuggingFaceTB/SmolLM3-3B-Base](https://huggingface.co/HuggingFaceTB/SmolLM3-3B-Base)
and fine-tuned with the A-LoRA V6 recipe for contemplative teaching.

All nine teachers are blended — Osho, Thich Nhat Hanh, Nisargadatta, Krishnamurti, Eckhart Tolle, Alan Watts, Atmananda, Rupert Spira, and Pema Chodron. No system prompt required: question in, teaching out.

## 50-question eval summary

This 3B branch was run through a raw 50-question eval after GGUF conversion.

- `Q8_0`: completed `50/50` with `0` request failures; the highest-fidelity public quant
- `Q5_K_M`: completed `50/50` with `0` request failures; the recommended default public quant
- `Q3_K_M`: completed `50/50` with `0` request failures, but weaker and more generic than `Q5_K_M`
- Overall read: strong stability for a 3B model, but still below the larger Meditation Agent branches in teacher-specific nuance and factual reliability

## Final training setup

| Setting | Value |
|---------|-------|
| Base model | [HuggingFaceTB/SmolLM3-3B-Base](https://huggingface.co/HuggingFaceTB/SmolLM3-3B-Base) |
| Method | A-LoRA V6 |
| Format | Question + concept arrows in, pure teaching passage out |
| Data exported | 24,031 atoms |
| V6 formatted set | 17,088 examples after opener cap |
| Train / eval split | 16,233 / 855 |
| Adapter recipe | QDoRA + rsLoRA, rank 32, alpha 32 |
| Epochs | 1 |
| Max sequence length | 1536 |
| Completion-only loss | Yes |
| NEFTune | alpha 5 |
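The adapter recipe above pairs rank 32 with alpha 32 under rsLoRA. A minimal sketch of what that flag changes, following the published LoRA and rsLoRA scaling conventions (illustrative, not lifted from the training code):

```python
import math

# Adapter hyperparameters from the table above.
rank, alpha = 32, 32

# Classic LoRA scales the adapter update by alpha / r.
lora_scale = alpha / rank  # 1.0 for this recipe

# rsLoRA (rank-stabilized LoRA) scales by alpha / sqrt(r) instead,
# which keeps the update magnitude stable as rank grows.
rslora_scale = alpha / math.sqrt(rank)

print(f"LoRA scale: {lora_scale}, rsLoRA scale: {rslora_scale:.3f}")
```

With rank equal to alpha, classic LoRA would scale the update by exactly 1.0, while rsLoRA scales it by sqrt(32) ≈ 5.66, so the two flags are not interchangeable at this setting.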
## Training result

Merged checkpoint: `checkpoint-2000`

| Checkpoint | Eval loss | Eval token accuracy |
|------------|-----------|---------------------|
| 500 | 1.7580 | 0.5554 |
| 1000 | 1.6840 | 0.5686 |
| 1500 | 1.6396 | 0.5771 |
| 2000 | 1.6338 | 0.5781 |
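If the eval losses above are mean token cross-entropies in nats (the usual convention for causal-LM trainers, though not stated explicitly here), they map to token perplexity via `exp(loss)`:

```python
import math

# Eval losses per checkpoint, from the table above.
eval_loss = {500: 1.7580, 1000: 1.6840, 1500: 1.6396, 2000: 1.6338}

# Assuming mean cross-entropy in nats, perplexity = exp(loss).
perplexity = {step: math.exp(loss) for step, loss in eval_loss.items()}

for step, ppl in perplexity.items():
    print(f"checkpoint-{step}: perplexity ≈ {ppl:.2f}")
```

Under that assumption, the merged `checkpoint-2000` lands at roughly exp(1.6338) ≈ 5.12 perplexity, down from about 5.80 at step 500.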
## Files

| File | Size | Use |
|------|------|-----|
| `Meditation_Agent-SmolLM3-3B-Q8_0.gguf` | 3.05 GB | Highest fidelity |
| `Meditation_Agent-SmolLM3-3B-Q5_K_M.gguf` | 2.06 GB | Recommended default |
| `Meditation_Agent-SmolLM3-3B-Q3_K_M.gguf` | 1.46 GB | Smallest, most brittle |
| `Meditation_Agent-SmolLM3-3B-BF16.gguf` | 5.74 GB | Archive / conversion source |

## Individual Teacher 3B Specialists

Each teacher also has a dedicated 3B model — same SmolLM3-3B base, trained on single-teacher data only. Use these when you want one specific voice rather than the blended multi-teacher model.

| Teacher | Repo |
|---------|------|
| Osho | [Osho-Agent-SmolLM3-3B-GGUF](https://huggingface.co/Sathman/Osho-Agent-SmolLM3-3B-GGUF) |
| Thich Nhat Hanh | [TNH-Agent-SmolLM3-3B-GGUF](https://huggingface.co/Sathman/TNH-Agent-SmolLM3-3B-GGUF) |
| Nisargadatta | [Nisargadatta-Agent-SmolLM3-3B-GGUF](https://huggingface.co/Sathman/Nisargadatta-Agent-SmolLM3-3B-GGUF) |
| Atmananda | [Atmananda-Agent-SmolLM3-3B-GGUF](https://huggingface.co/Sathman/Atmananda-Agent-SmolLM3-3B-GGUF) |
| Krishnamurti | [Krishnamurti-Agent-SmolLM3-3B-GGUF](https://huggingface.co/Sathman/Krishnamurti-Agent-SmolLM3-3B-GGUF) |
| Eckhart Tolle | [Tolle-Agent-SmolLM3-3B-GGUF](https://huggingface.co/Sathman/Tolle-Agent-SmolLM3-3B-GGUF) |
| Alan Watts | [Watts-Agent-SmolLM3-3B-GGUF](https://huggingface.co/Sathman/Watts-Agent-SmolLM3-3B-GGUF) |
| Rupert Spira | [Spira-Agent-SmolLM3-3B-GGUF](https://huggingface.co/Sathman/Spira-Agent-SmolLM3-3B-GGUF) |

## Release recommendation

Recommended default: `Meditation_Agent-SmolLM3-3B-Q5_K_M.gguf`

Use `Q8_0` when you want the strongest public 3B quant, `Q5_K_M` as the balanced default, and `Q3_K_M` when size matters most. `BF16` is the archive and further-conversion source.
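As a quick-start sketch for the recommended quant — assuming `huggingface_hub` and `llama-cpp-python` are installed; the context length and token budget below are illustrative, not values from this repo:

```python
def ask(question: str, max_tokens: int = 256) -> str:
    """Download the recommended quant (cached after the first call)
    and query it locally via llama.cpp bindings.

    Assumes `huggingface_hub` and `llama-cpp-python` are installed.
    """
    from huggingface_hub import hf_hub_download
    from llama_cpp import Llama

    path = hf_hub_download(
        repo_id="Sathman/Meditation-Agent-SmolLM3-3B-GGUF",
        filename="Meditation_Agent-SmolLM3-3B-Q5_K_M.gguf",
    )
    llm = Llama(model_path=path, n_ctx=1536, verbose=False)
    # No system prompt: question in, teaching out.
    out = llm(question, max_tokens=max_tokens)
    return out["choices"][0]["text"]

# Example (requires the libraries and a ~2 GB download):
# print(ask("What is awareness?"))
```

The bare-question prompt matches how this branch was trained; no chat template or system scaffolding is needed.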
## Positioning

This is the lightweight 3B Meditation Agent:

- much smaller than the 8B/Phi4 branches
- capable of direct contemplative answers without prompt scaffolding
- best suited for local inference where memory footprint matters

## Related Models

- [Full series — Meditation Agent Collection](https://huggingface.co/collections/Sathman/meditation-agent-contemplative-teacher-series-69c0ceca6e74d6f18c1445a8) — all 19 models
- [GitHub Source / Training Repo](https://github.com/Sathman-1/Alora---Expert-Voice) — training pipeline, configs, and release scripts
- [Meditation Agent 8B](https://huggingface.co/Sathman/Meditation-Agent-8B-GGUF) — larger Qwen3 branch with stronger teacher fidelity
- [Meditation Agent Phi4 14B](https://huggingface.co/Sathman/Meditation-Agent-Phi4-GGUF) — strongest larger branch with richer cross-tradition depth

---

*ellam sivamayam* — Everything is Shiva's expression.

*எல்லாம் சிவமயம்*