---
license: apache-2.0
base_model:
- Qwen/Qwen3-1.7B
datasets:
- prithivMLmods/Demeter-LongCoT-400K
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- LongCoT
- trl
- math
- code
- stem
---
![1.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/YL9ww0vwTra8q-9b8wGqd.png)
# **Demeter-LongCoT-Qwen3-1.7B**
> **Demeter-LongCoT-Qwen3-1.7B** is a reasoning-focused model fine-tuned on **Qwen/Qwen3-1.7B** using the **Demeter-LongCoT-400K** dataset.
> It is designed for **math and code chain-of-thought reasoning**, blending symbolic precision, scientific logic, and structured output fluency—making it an effective tool for developers, educators, and researchers seeking reliable step-by-step reasoning.
> [!note]
> GGUF: [https://huggingface.co/prithivMLmods/Demeter-LongCoT-Qwen3-1.7B-GGUF](https://huggingface.co/prithivMLmods/Demeter-LongCoT-Qwen3-1.7B-GGUF)
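
For CPU or low-VRAM setups, the GGUF build can be run through the `llama-cpp-python` bindings. A minimal sketch, assuming a locally downloaded quant file (the filename below is hypothetical; use an actual file from the GGUF repo linked above):

```python
from llama_cpp import Llama

# Hypothetical quant filename; download an actual .gguf file from the repo above
llm = Llama(model_path="Demeter-LongCoT-Qwen3-1.7B.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Prove that the sum of two even integers is even."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```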
---
## **Key Features**
1. **Unified Reasoning in Math & Code**
Fine-tuned on **Demeter-LongCoT-400K**, which emphasizes extended chain-of-thought reasoning in mathematics, algorithms, and programming workflows.
2. **Advanced Code Understanding & Generation**
Handles multi-language programming tasks with explanations, optimization hints, and error detection—suited for algorithm synthesis, debugging, and prototyping.
3. **Mathematical Problem Solving**
Excels at step-by-step derivations, symbolic manipulations, and applied problem solving across calculus, algebra, and logic-based reasoning.
4. **Chain-of-Thought Focused Reasoning**
Optimized to produce clear, structured thought processes for both **STEM explanations** and **computational logic** tasks.
5. **Structured Output Mastery**
   Generates well-formed outputs in **LaTeX**, **Markdown**, **JSON**, **CSV**, and **YAML**, enabling smooth integration with research pipelines and technical documentation (see the sketch after this list).
6. **Balanced Performance for Deployment**
Designed to deliver strong reasoning under moderate compute budgets, deployable on **mid-range GPUs**, **offline clusters**, and **specialized edge AI systems**.
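
The structured-output behavior in item 5 is easiest to exercise with an explicit instruction in the system prompt. A minimal, illustrative sketch using the `transformers` chat pipeline (the prompt and the requested JSON shape are assumptions for demonstration, not taken from the model card):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="prithivMLmods/Demeter-LongCoT-Qwen3-1.7B",
    torch_dtype="auto",
    device_map="auto",
)

# Illustrative prompt: constrain the final answer to a JSON object
messages = [
    {"role": "system", "content": "Reason step by step, then answer with a single JSON object only."},
    {"role": "user", "content": 'Factor x^2 - 5x + 6 and report {"roots": [...], "factored_form": "..."}.'},
]

out = generator(messages, max_new_tokens=512)
# The chat pipeline returns the full conversation; the last turn is the model's reply
print(out[0]["generated_text"][-1]["content"])
```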
---
## **Quickstart with Transformers**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Demeter-LongCoT-Qwen3-1.7B"

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build a chat-formatted prompt
prompt = "Solve the integral of x^2 * e^x step by step."
messages = [
    {"role": "system", "content": "You are a tutor skilled in math, code, and step-by-step reasoning."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens so only the reply is decoded
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
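
Decoding settings matter for long chain-of-thought traces. The base **Qwen/Qwen3-1.7B** card recommends `temperature=0.6`, `top_p=0.95`, and `top_k=20` for thinking mode; assuming those values transfer to this fine-tune, the `generate` call above can be adjusted as follows:

```python
# Sampling values from the base Qwen3 card; their fit for this fine-tune is an assumption
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
)
```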
---
## **Intended Use**
* Step-by-step math tutoring and symbolic derivation
* Advanced coding assistant for algorithms, debugging, and structured reasoning
* Chain-of-thought generation for research and education tools
* Producing structured outputs for technical documentation and computational pipelines
* Deployments requiring reliable reasoning under constrained compute
## **Limitations**
* Not tuned for general-purpose or conversational tasks
* May underperform in long-form multi-document contexts
* Specialized in math and code—general writing or casual dialogue may be weak
* Prioritizes structured reasoning over natural or emotional tone generation