---
library_name: transformers
tags:
- unsloth
- philosophy
- debate
- sft
license: apache-2.0
datasets:
- mattwesney/CoT_Philosophical_Understanding
language:
- en
base_model:
- unsloth/Qwen3-1.7B
pipeline_tag: text-generation
---

# Model Card for Averroes-R1

## Model Details

### Model Description

- **Base Model:** Qwen3-1.7B
- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Task:** Foundational Philosophical Reasoning with Chain-of-Thought (CoT)
- **Model Type:** Instruction-tuned model emphasizing logical and conceptual reasoning in philosophy
- **Dataset:** [moremilk/CoT_Philosophical_Understanding](https://huggingface.co/datasets/moremilk/CoT_Philosophical_Understanding)

## Uses

### Direct Use

The model is designed for:

- Educational use in teaching and learning philosophy
- Supporting AI assistants and chatbots focused on structured reasoning and conceptual understanding
- Serving as a tool for structured philosophical explanation
- Enhancing automated reasoning systems in conceptual and abstract domains

It is not intended to replace human philosophical analysis or provide moral or personal advice.

## Out of Scope

This model is not designed for:

- In-depth exploration of specialized or niche philosophical debates
- Providing personal philosophical advice or opinions
- Real-time discussion or analysis of ongoing philosophical issues
- Handling highly subjective or interpretive arguments lacking foundational grounding

## Bias, Risks, and Limitations

- May simplify nuanced philosophical perspectives
- Not suitable for advanced research or subjective debate
- Outputs depend on prompt clarity; ambiguous inputs may yield incomplete reasoning
- Trained for foundational reasoning, not for exhaustive domain knowledge

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("khazarai/Averroes-R1")
model = AutoModelForCausalLM.from_pretrained(
    "khazarai/Averroes-R1",
    device_map={"": 0},  # place the whole model on GPU 0
)

question = """
What is the existentialist dilemma of freedom, and how do concepts like responsibility, anguish, and bad faith relate to it, according to Sartre?
"""

messages = [
    {"role": "user", "content": question},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # let the model emit its intermediate reasoning
)

# Stream the response token by token, skipping the echoed prompt
_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=2200,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```

## Training Data

**Scope**

This model was fine-tuned on tasks emphasizing foundational philosophical reasoning, focusing on:

- Understanding key philosophical concepts across major branches (ethics, epistemology, metaphysics, logic, etc.)
- Explaining philosophical principles through clear examples and structured reasoning
- Highlighting the logical and conceptual steps behind philosophical inquiry
- Building a strong foundational understanding of philosophical thought

**Illustrative Examples**

- Explaining the difference between empiricism and rationalism
- Describing the reasoning behind the categorical imperative
- Analyzing simple logical fallacies within philosophical arguments

**Emphasis on Chain-of-Thought (CoT)**

The dataset explicitly teaches step-by-step reasoning, allowing the model to show intermediate thoughts when analyzing or explaining philosophical ideas.
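With thinking enabled, Qwen3-style models wrap these intermediate thoughts in a `<think>...</think>` block before the final answer. A minimal sketch of separating the two in post-processing (the helper name is illustrative; it assumes the output follows that tag convention):

```python
def split_thinking(generated: str) -> tuple[str, str]:
    """Split Qwen3-style output into (reasoning, answer).

    Assumes the reasoning is wrapped in <think>...</think>; if the tags
    are absent, the whole text is treated as the answer.
    """
    start_tag, end_tag = "<think>", "</think>"
    if start_tag in generated and end_tag in generated:
        head, _, rest = generated.partition(start_tag)
        reasoning, _, answer = rest.partition(end_tag)
        return reasoning.strip(), (head + answer).strip()
    return "", generated.strip()


reasoning, answer = split_thinking(
    "<think>Sartre ties freedom to responsibility.</think>Freedom entails anguish."
)
```

This makes it easy to display only the final answer while keeping the chain-of-thought available for inspection.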

**Focus on Foundational Knowledge**

Rather than diving into complex, specialized debates, the dataset helps build a broad, structured foundation for philosophical reasoning.