From 7654776dd4771740721ceafa8312916e02bb8581 Mon Sep 17 00:00:00 2001
From: ModelHub XC
Date: Sun, 12 Apr 2026 04:10:54 +0800
Subject: [PATCH] Initialize project; model provided by the ModelHub XC community
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Model: Sathman/Meditation-Agent-SmolLM3-3B-GGUF
Source: Original Platform
---
 .gitattributes                          |  40 ++++++++
 Meditation_Agent-SmolLM3-3B-BF16.gguf   |   3 +
 Meditation_Agent-SmolLM3-3B-Q3_K_M.gguf |   3 +
 Meditation_Agent-SmolLM3-3B-Q5_K_M.gguf |   3 +
 Meditation_Agent-SmolLM3-3B-Q8_0.gguf   |   3 +
 README.md                               | 121 ++++++++++++++++++++++++
 6 files changed, 173 insertions(+)
 create mode 100644 .gitattributes
 create mode 100644 Meditation_Agent-SmolLM3-3B-BF16.gguf
 create mode 100644 Meditation_Agent-SmolLM3-3B-Q3_K_M.gguf
 create mode 100644 Meditation_Agent-SmolLM3-3B-Q5_K_M.gguf
 create mode 100644 Meditation_Agent-SmolLM3-3B-Q8_0.gguf
 create mode 100644 README.md

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..29a8350
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,40 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+Meditation_Agent-SmolLM3-3B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+Meditation_Agent-SmolLM3-3B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Meditation_Agent-SmolLM3-3B-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Meditation_Agent-SmolLM3-3B-BF16.gguf filter=lfs diff=lfs merge=lfs -text
+individual-authors/osho/Osho_Agent-SmolLM3-3B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/Meditation_Agent-SmolLM3-3B-BF16.gguf b/Meditation_Agent-SmolLM3-3B-BF16.gguf
new file mode 100644
index 0000000..f25e240
--- /dev/null
+++ b/Meditation_Agent-SmolLM3-3B-BF16.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:df26fdb388ad63ad069fcd32ca11f776c3bbbddb82d03d3db4a46f91f435a031
+size 6158333824
diff --git a/Meditation_Agent-SmolLM3-3B-Q3_K_M.gguf b/Meditation_Agent-SmolLM3-3B-Q3_K_M.gguf
new file mode 100644
index 0000000..6fd158c
--- /dev/null
+++ b/Meditation_Agent-SmolLM3-3B-Q3_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c4fec665c65b58f2397b5536f6800d3dc4ff6d7037abe0d458eb14013b33f133
+size 1571063680
diff --git a/Meditation_Agent-SmolLM3-3B-Q5_K_M.gguf b/Meditation_Agent-SmolLM3-3B-Q5_K_M.gguf
new file mode 100644
index 0000000..25f8147
--- /dev/null
+++ b/Meditation_Agent-SmolLM3-3B-Q5_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f86e200188d8894662ad88ef0aedec2faa6aaaa0d4db35cfbd1a672ee72ba051
+size 2213750656
diff --git a/Meditation_Agent-SmolLM3-3B-Q8_0.gguf b/Meditation_Agent-SmolLM3-3B-Q8_0.gguf
new file mode 100644
index 0000000..f675a2b
--- /dev/null
+++ b/Meditation_Agent-SmolLM3-3B-Q8_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:846e2c24d0941b7358094370d96ada65d7ceb1e791295c7f3d376b6a81b4e0b8
+size 3275569024
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..d49c7cd
--- /dev/null
+++ b/README.md
@@ -0,0 +1,121 @@
+---
+license: apache-2.0
+base_model: HuggingFaceTB/SmolLM3-3B-Base
+tags:
+  - contemplative-ai
+  - fine-tuned
+  - gguf
+  - lora
+  - qlora
+  - smollm3
+  - nondual
+  - teaching
+  - spirituality
+  - awareness
+  - advaita
+  - meditation
+language:
+  - en
+pipeline_tag: text-generation
+---
+
+# Meditation Agent (SmolLM3 3B) — Contemplative Teaching AI
+
+This is the 3B branch of the Meditation Agent series, built on
+[HuggingFaceTB/SmolLM3-3B-Base](https://huggingface.co/HuggingFaceTB/SmolLM3-3B-Base)
+and fine-tuned with the A-LoRA V6 recipe for contemplative teaching.
+
+All nine teachers are blended: Osho, Thich Nhat Hanh, Nisargadatta, Krishnamurti, Eckhart Tolle, Alan Watts, Atmananda, Rupert Spira, and Pema Chodron. No system prompt is required: question in, teaching out.
+
+## 50-question eval summary
+
+This 3B branch was run through a raw 50-question eval after GGUF conversion.
+
+- `Q8_0`: completed `50/50` with `0` request failures and is the
+  highest-fidelity public quant
+- `Q5_K_M`: completed `50/50` with `0` request failures and is the recommended
+  default public quant
+- `Q3_K_M`: completed `50/50` with `0` request failures, but is weaker and more
+  generic than `Q5_K_M`
+- overall read: strong stability for a 3B model, but still below the larger
+  Meditation Agent branches in teacher-specific nuance and factual reliability
+
+## Final training setup
+
+| Setting | Value |
+|---------|-------|
+| Base model | [HuggingFaceTB/SmolLM3-3B-Base](https://huggingface.co/HuggingFaceTB/SmolLM3-3B-Base) |
+| Method | A-LoRA V6 |
+| Format | Question + concept arrows in, pure teaching passage out |
+| Data exported | 24,031 atoms |
+| V6 formatted set | 17,088 examples after opener cap |
+| Train / eval split | 16,233 / 855 |
+| Adapter recipe | QDoRA + rsLoRA, rank 32, alpha 32 |
+| Epochs | 1 |
+| Max sequence length | 1536 |
+| Completion-only loss | Yes |
+| NEFTune | alpha 5 |
+
+## Training result
+
+Merged checkpoint: `checkpoint-2000`
+
+| Checkpoint | Eval loss | Eval token accuracy |
+|------------|-----------|---------------------|
+| 500 | 1.7580 | 0.5554 |
+| 1000 | 1.6840 | 0.5686 |
+| 1500 | 1.6396 | 0.5771 |
+| 2000 | 1.6338 | 0.5781 |
+
+## Files
+
+| File | Size | Use |
+|------|------|-----|
+| `Meditation_Agent-SmolLM3-3B-Q8_0.gguf` | 3.05 GB | Highest fidelity |
+| `Meditation_Agent-SmolLM3-3B-Q5_K_M.gguf` | 2.06 GB | Recommended default |
+| `Meditation_Agent-SmolLM3-3B-Q3_K_M.gguf` | 1.46 GB | Smallest, most brittle |
+| `Meditation_Agent-SmolLM3-3B-BF16.gguf` | 5.74 GB | Archive / conversion source |
+
+## Individual Teacher 3B Specialists
+
+Each teacher also has a dedicated 3B model: same SmolLM3-3B base, trained on single-teacher data only. Use these when you want one specific voice rather than the blended multi-teacher model.
+
+| Teacher | Repo |
+|---------|------|
+| Osho | [Osho-Agent-SmolLM3-3B-GGUF](https://huggingface.co/Sathman/Osho-Agent-SmolLM3-3B-GGUF) |
+| Thich Nhat Hanh | [TNH-Agent-SmolLM3-3B-GGUF](https://huggingface.co/Sathman/TNH-Agent-SmolLM3-3B-GGUF) |
+| Nisargadatta | [Nisargadatta-Agent-SmolLM3-3B-GGUF](https://huggingface.co/Sathman/Nisargadatta-Agent-SmolLM3-3B-GGUF) |
+| Atmananda | [Atmananda-Agent-SmolLM3-3B-GGUF](https://huggingface.co/Sathman/Atmananda-Agent-SmolLM3-3B-GGUF) |
+| Krishnamurti | [Krishnamurti-Agent-SmolLM3-3B-GGUF](https://huggingface.co/Sathman/Krishnamurti-Agent-SmolLM3-3B-GGUF) |
+| Eckhart Tolle | [Tolle-Agent-SmolLM3-3B-GGUF](https://huggingface.co/Sathman/Tolle-Agent-SmolLM3-3B-GGUF) |
+| Alan Watts | [Watts-Agent-SmolLM3-3B-GGUF](https://huggingface.co/Sathman/Watts-Agent-SmolLM3-3B-GGUF) |
+| Rupert Spira | [Spira-Agent-SmolLM3-3B-GGUF](https://huggingface.co/Sathman/Spira-Agent-SmolLM3-3B-GGUF) |
+
+## Release recommendation
+
+Recommended default: `Meditation_Agent-SmolLM3-3B-Q5_K_M.gguf`
+
+Use `Q8_0` when you want the strongest public 3B quant, `Q5_K_M` as the
+balanced default, and `Q3_K_M` when size matters most. `BF16` is the archive
+and further-conversion source.
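
As a quick-start sketch, the recommended quant can be run locally with llama.cpp's `llama-cli`. This assumes a working llama.cpp install and a locally downloaded GGUF; the prompt text is only illustrative:

```shell
# Run the recommended Q5_K_M quant with llama.cpp.
# -c 1536 matches the model's training max sequence length.
llama-cli \
  -m Meditation_Agent-SmolLM3-3B-Q5_K_M.gguf \
  -c 1536 \
  -n 256 \
  --temp 0.7 \
  -p "How do I watch my thoughts without getting lost in them?"
```

Since the model was trained without a system prompt, the question goes directly into `-p`.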
+
+## Positioning
+
+This is the lightweight 3B Meditation Agent:
+
+- much smaller than the 8B/Phi4 branches
+- capable of direct contemplative answers without prompt scaffolding
+- best suited for local inference where memory footprint matters
+
+## Related Models
+
+- [Full series — Meditation Agent Collection](https://huggingface.co/collections/Sathman/meditation-agent-contemplative-teacher-series-69c0ceca6e74d6f18c1445a8): all 19 models
+- [GitHub Source / Training Repo](https://github.com/Sathman-1/Alora---Expert-Voice): training pipeline, configs, and release scripts
+- [Meditation Agent 8B](https://huggingface.co/Sathman/Meditation-Agent-8B-GGUF): larger Qwen3 branch with stronger teacher fidelity
+- [Meditation Agent Phi4 14B](https://huggingface.co/Sathman/Meditation-Agent-Phi4-GGUF): strongest larger branch with richer cross-tradition depth
+
+---
+
+*ellam sivamayam*: everything is Shiva's expression.
+
+*எல்லாம் சிவமயம்*
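
Since every GGUF in this patch is committed as a Git LFS pointer, a downloaded file can be checked against the pointer's `oid` and `size` fields before use. A minimal sketch (the helper names `parse_lfs_pointer` and `verify_gguf` are illustrative, not part of this repo):

```python
import hashlib

def parse_lfs_pointer(text: str) -> tuple[str, int]:
    """Extract the sha256 oid and byte size from a git-lfs pointer file."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return fields["oid"].removeprefix("sha256:"), int(fields["size"])

def verify_gguf(path: str, pointer_text: str) -> bool:
    """Return True if the file at `path` matches the pointer's oid and size."""
    oid, size = parse_lfs_pointer(pointer_text)
    digest = hashlib.sha256()
    total = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
            total += len(chunk)
    return digest.hexdigest() == oid and total == size

# Pointer content for Meditation_Agent-SmolLM3-3B-Q8_0.gguf, as committed above:
Q8_0_POINTER = """version https://git-lfs.github.com/spec/v1
oid sha256:846e2c24d0941b7358094370d96ada65d7ceb1e791295c7f3d376b6a81b4e0b8
size 3275569024
"""
```

A mismatch usually means a partial download (e.g. the pointer file itself was fetched instead of the LFS object).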