---
language:
- en
license: apache-2.0
base_model: HuggingFaceTB/SmolLM2-360M-Instruct
tags:
- philosophy
- fine-tuned
- qlora
- smollm2
- literature
- text-generation
- llama
- conversational
pipeline_tag: text-generation
---

# Philosophy Oracle — SmolLM2 360M

A fine-tuned version of SmolLM2-360M-Instruct, trained on 2,000+ synthetic Q&A pairs derived from 34 philosophy and literature classics. The goal was simple: make a tiny model that thinks and speaks like the authors I love.

## Model Details

| Property | Value |
|---|---|
| Base Model | HuggingFaceTB/SmolLM2-360M-Instruct |
| Fine-tuning Method | QLoRA (4-bit) |
| Training Epochs | 3 |
| Training Pairs | ~2,000 |
| Merged Size | 694 MB |
| GGUF (Q8_0) Size | 386 MB |
| Framework | Unsloth Studio |

## Training Corpus

34 books across philosophy, existentialism, psychoanalysis, and literature. Pair counts weighted by philosophical depth:

| Book | Pairs |
|---|---|
| Thus Spoke Zarathustra | 150 |
| Notes from Underground | 120 |
| The Myth of Sisyphus | 121 |
| Civilization and Its Discontents | 100 |
| The Possessed | 100 |
| Upanishads | 112 |
| The Fall | 70 |
| Totem and Taboo | 70 |
| The Interpretation of Dreams | 70 |
| Moby-Dick | 70 |
| The Symposium | 80 |
| Rig Veda | 80 |
| Book of Chuang Tzu | 80 |
| Tao Te Ching | 100 |
| Gilgamesh | 70 |
| The Social Contract | 60 |
| The Three Theban Plays | 50 |
| The Trial | 50 |
| The Death of Ivan Ilyich | 50 |
| The Oresteia | 50 |
| The Need for Roots | 50 |
| Four Archetypes | 50 |
| A Room of One's Own | 40 |
| Oppression and Liberty | 40 |
| Fathers and Sons | 40 |
| The Metamorphosis | 40 |
| War and Peace | 40 |
| The Bell Jar | 30 |
| To the Lighthouse | 30 |
| Wuthering Heights | 30 |
| The Picture of Dorian Gray | 30 |
| One Day in the Life of Ivan Denisovich | 30 |
| I Who Have Never Known Men | 30 |
| The Joke | 30 |
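Read as data, the table above can be checked against the "~2,000 pairs" figure quoted in the intro. A quick sketch, with the counts copied verbatim from the table:

```python
# Pair counts per book, copied from the training-corpus table above.
pair_counts = {
    "Thus Spoke Zarathustra": 150,
    "Notes from Underground": 120,
    "The Myth of Sisyphus": 121,
    "Civilization and Its Discontents": 100,
    "The Possessed": 100,
    "Upanishads": 112,
    "The Fall": 70,
    "Totem and Taboo": 70,
    "The Interpretation of Dreams": 70,
    "Moby-Dick": 70,
    "The Symposium": 80,
    "Rig Veda": 80,
    "Book of Chuang Tzu": 80,
    "Tao Te Ching": 100,
    "Gilgamesh": 70,
    "The Social Contract": 60,
    "The Three Theban Plays": 50,
    "The Trial": 50,
    "The Death of Ivan Ilyich": 50,
    "The Oresteia": 50,
    "The Need for Roots": 50,
    "Four Archetypes": 50,
    "A Room of One's Own": 40,
    "Oppression and Liberty": 40,
    "Fathers and Sons": 40,
    "The Metamorphosis": 40,
    "War and Peace": 40,
    "The Bell Jar": 30,
    "To the Lighthouse": 30,
    "Wuthering Heights": 30,
    "The Picture of Dorian Gray": 30,
    "One Day in the Life of Ivan Denisovich": 30,
    "I Who Have Never Known Men": 30,
    "The Joke": 30,
}

print(len(pair_counts))           # 34 books
print(sum(pair_counts.values()))  # 2163 pairs total (the "~2,000" above)
```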

## Intended Use

- Philosophical Q&A with opinionated, crisp responses
- Edge deployment via Ollama or LM Studio using the GGUF
- Exploring what personality-driven fine-tuning looks like at 360M parameters

## How to Use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amitbehura/philosophy-oracle-smollm2-360m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [
    {"role": "user", "content": "How should one live in the face of the certainty of death?"}
]

# add_generation_prompt=True appends the assistant turn header,
# so the model answers rather than continuing the user turn
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
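Under the hood, `apply_chat_template` for SmolLM2-Instruct renders the messages into a ChatML-style prompt string. A rough plain-Python sketch of that expansion (an approximation for illustration only; the tokenizer's real template may also inject a default system message):

```python
def chatml_prompt(messages):
    """Approximate the ChatML-style prompt SmolLM2's chat template builds.

    Illustrative only: this mirrors the <|im_start|>/<|im_end|> structure,
    not the tokenizer's exact template.
    """
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    ]
    parts.append("<|im_start|>assistant\n")  # cue the model to answer
    return "".join(parts)


prompt = chatml_prompt([{"role": "user", "content": "What is virtue?"}])
print(prompt)
```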

## GGUF (Local Use)

Use `philosophy-oracle-smollm2-360m-Q8_0.gguf` with Ollama or LM Studio directly.
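For Ollama, a GGUF is registered through a Modelfile whose `FROM` line points at the file. A minimal setup sketch (the model name `philosophy-oracle` and the local path are illustrative, not part of this release):

```shell
# Write a one-line Modelfile pointing at the downloaded GGUF,
# then register and run it under an illustrative local name.
cat > Modelfile <<'EOF'
FROM ./philosophy-oracle-smollm2-360m-Q8_0.gguf
EOF

ollama create philosophy-oracle -f Modelfile
ollama run philosophy-oracle "How should one live in the face of death?"
```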

## Limitations

- 360M parameters: reasoning depth is limited by model size
- Occasionally invents philosophical terminology to maintain its persona
- Not suitable for factual Q&A; designed for philosophical voice, not accuracy
- Responses may reflect translator style rather than the original author's intent

## Sample Output

**Q:** How should one live in the face of the certainty of death?

> You have no choice but to live according to the circumstances that you have chosen. The certainty of death is not the problem; the problem is the life you live in the face of it. Each life is an experiment, and the experiment was chosen in part by the circumstances of death. Live according to the circumstances of your experiment, and then live.

## Author

Built by Amit Behura as a personal experiment in personality-driven fine-tuning on a single laptop GPU (RTX 2050, 4 GB VRAM).

A full writeup of the methodology, training decisions, and results is available on Substack.