Initialize the project; model provided by the ModelHub XC community
Model: prithivMLmods/Demeter-LongCoT-Qwen3-1.7B Source: Original Platform
---
license: apache-2.0
base_model:
- Qwen/Qwen3-1.7B
datasets:
- prithivMLmods/Demeter-LongCoT-400K
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- LongCoT
- trl
- math
- code
- stem
---
# **Demeter-LongCoT-Qwen3-1.7B**

> **Demeter-LongCoT-Qwen3-1.7B** is a reasoning-focused model fine-tuned from **Qwen/Qwen3-1.7B** on the **Demeter-LongCoT-400K** dataset.
> It is designed for **math and code chain-of-thought reasoning**, blending symbolic precision, scientific logic, and structured output fluency, which makes it an effective tool for developers, educators, and researchers who need reliable step-by-step reasoning.

> [!note]
> GGUF: [https://huggingface.co/prithivMLmods/Demeter-LongCoT-Qwen3-1.7B-GGUF](https://huggingface.co/prithivMLmods/Demeter-LongCoT-Qwen3-1.7B-GGUF)

---
## **Key Features**

1. **Unified Reasoning in Math & Code**
   Fine-tuned on **Demeter-LongCoT-400K**, which emphasizes extended chain-of-thought reasoning across mathematics, algorithms, and programming workflows.

2. **Advanced Code Understanding & Generation**
   Handles multi-language programming tasks with explanations, optimization hints, and error detection, making it suited to algorithm synthesis, debugging, and prototyping.

3. **Mathematical Problem Solving**
   Excels at step-by-step derivations, symbolic manipulation, and applied problem solving across calculus, algebra, and logic-based reasoning.

4. **Chain-of-Thought Focused Reasoning**
   Optimized to produce clear, structured thought processes for both **STEM explanations** and **computational logic** tasks.

5. **Structured Output Mastery**
   Generates well-formed outputs in **LaTeX**, **Markdown**, **JSON**, **CSV**, and **YAML**, enabling smooth integration with research pipelines and technical documentation (see the prompt sketch after this list).

6. **Balanced Performance for Deployment**
   Designed to deliver strong reasoning under moderate compute budgets, deployable on **mid-range GPUs**, **offline clusters**, and **specialized edge AI systems**.
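To illustrate the structured-output behavior in item 5, a request can pin the format down explicitly. The prompt and JSON keys below are illustrative assumptions, not from the model card; the `messages` list plugs directly into the chat template used in the Quickstart.

```python
# Hypothetical structured-output request. The schema and key names are
# assumptions for illustration; reuse these messages with the Quickstart code.
messages = [
    {"role": "system", "content": "You are a precise assistant. Respond with valid JSON only."},
    {"role": "user", "content": (
        "Describe binary search as JSON with the keys "
        "'algorithm', 'best_case', 'average_case', and 'worst_case'."
    )},
]
```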
---
## **Quickstart with Transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Demeter-LongCoT-Qwen3-1.7B"

# Load the model with automatic dtype selection and device placement.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Solve the integral of x^2 * e^x step by step."

messages = [
    {"role": "system", "content": "You are a tutor skilled in math, code, and step-by-step reasoning."},
    {"role": "user", "content": prompt}
]

# Render the chat messages into the model's expected prompt format.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated answer remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
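As a sanity check for the sample prompt, integration by parts gives $\int x^2 e^x \, dx = (x^2 - 2x + 2)e^x + C$; a correct step-by-step completion should land on this result.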
---
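
## **Quickstart with llama.cpp (GGUF)**

The note above links a GGUF build of this model. The sketch below loads it with the `llama-cpp-python` bindings; the quantization filename pattern is an assumption, so check the GGUF repository for the exact files available.

```python
# Minimal sketch, assuming the llama-cpp-python package is installed and the
# GGUF repo contains a Q4_K_M quant; the filename glob is an assumption.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Demeter-LongCoT-Qwen3-1.7B-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern; pick a quant present in the repo
    n_ctx=4096,               # long chain-of-thought answers need a roomy context window
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a tutor skilled in math, code, and step-by-step reasoning."},
        {"role": "user", "content": "Solve the integral of x^2 * e^x step by step."},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```

---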
## **Intended Use**

* Step-by-step math tutoring and symbolic derivation
* Advanced coding assistant for algorithms, debugging, and structured reasoning
* Chain-of-thought generation for research and education tools
* Producing structured outputs for technical documentation and computational pipelines
* Deployments requiring reliable reasoning under constrained compute

## **Limitations**

* Not tuned for general-purpose or conversational tasks
* May underperform in long-form, multi-document contexts
* Specialized in math and code; general writing and casual dialogue may be weak
* Prioritizes structured reasoning over a natural or emotional tone