---
license: apache-2.0
datasets:
- TAUR-dev/STEPS__r1_4d_eval__mini_all
- TAUR-dev/STEPS__r1_8d_eval__v3_mini_all
- TAUR-dev/STEPS__r1_8d_eval__v4
- TAUR-dev/STEPS__r1_8d_eval__v3_4o
language:
- en
library_name: transformers
base_model:
- prithivMLmods/Qwen3-4B-ft-bf16
pipeline_tag: text-generation
tags:
- text-generation-inference
- trl
- moe
- code
- math
---

# Canum-Qwen3\_R1-4B-iCoT

> **Canum-Qwen3\_R1-4B-iCoT** is a precision-tuned variant of the Qwen3-4B architecture, explicitly aligned with **internal chain-of-thought (iCoT)** methodologies. Trained on the **TAUR-dev/STEPS\_\_r1\_4d\_eval\_\_mini\_all** dataset, this model excels at **long-form mathematical reasoning**, **progressive symbolic logic**, and **multi-stage problem decomposition**, all within a compact 4B-parameter footprint.

> [!note]
> GGUF: https://huggingface.co/prithivMLmods/Canum-Qwen3_R1-4B-iCoT-Q4_K_M-GGUF
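
For CPU-friendly local inference with the GGUF build, here is a minimal sketch using `llama-cpp-python`. The glob filename is an assumption; check the GGUF repository for the exact quant name.

```python
# Minimal sketch: running the Q4_K_M GGUF build via llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Canum-Qwen3_R1-4B-iCoT-Q4_K_M-GGUF",
    filename="*q4_k_m.gguf",  # assumed glob; verify the published filename
    n_ctx=4096,               # context window for local inference
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Solve step by step: 17 x 23"}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```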
## Key Features
1. **Internal Chain-of-Thought Reasoning (iCoT)**
   Enables deeper logical progression through internally coherent reasoning steps, ideal for complex mathematical derivations and multivariable algebraic thinking.

2. **Dataset: TAUR-dev/STEPS\_\_r1\_4d\_eval\_\_mini\_all**
   Fine-tuned on structured evaluation sequences to build resilience in multi-step problem solving and to improve interpretability in math-focused tasks.

3. **Long Reasoning Paths in STEM Domains**
   Suited for long-chain logical flows in geometry, number theory, calculus, and symbolic manipulation, including proofs and multi-stage equation solving.

4. **Lightweight Yet Capable (4B)**
   Maintains strong reasoning and instruction-following ability at a lower computational cost than larger models, making it suitable for single-GPU deployments.

5. **Instruction-Following and Step-by-Step Alignment**
   Follows complex instructions with multi-turn dependencies and produces granular output aligned with the internal steps of its reasoning process.

6. **Technical Format Adaptability**
   Outputs answers in clean Markdown, LaTeX, JSON, or table formats for academic, development, and notebook-based use cases (see the sketch after this list).
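
As a sketch of the format adaptability described in point 6, the snippet below asks the model for a JSON-only answer. The prompt wording and the high-level `pipeline` setup are illustrative assumptions, not a schema the model is documented to have been trained against.

```python
from transformers import pipeline

# Illustrative only: a high-level pipeline call requesting JSON output.
pipe = pipeline(
    "text-generation",
    model="prithivMLmods/Canum-Qwen3_R1-4B-iCoT",
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": (
        'Solve x^2 - 5x + 6 = 0 and answer with only a JSON object '
        'of the form {"roots": [...]}.'
    )},
]

# The pipeline returns the full chat; the last message is the model's reply.
result = pipe(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])
```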
## Quickstart with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Canum-Qwen3_R1-4B-iCoT"

# Load the model and tokenizer; device_map="auto" places weights on the
# available GPU(s), and torch_dtype="auto" keeps the checkpoint's dtype.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Use internal CoT to solve: A rectangle has a length that is 3 times its width. If the perimeter is 48 units, what are the dimensions?"

messages = [
    {"role": "system", "content": "You are a reasoning assistant trained to use internal chain-of-thought (iCoT) for multi-step mathematical problems."},
    {"role": "user", "content": prompt}
]

# Render the chat into the model's expected prompt format.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

# Strip the prompt tokens so only the newly generated answer is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
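
For longer derivations it may help to sample rather than greedy-decode. The settings below mirror the sampling recommendations published for the upstream Qwen3 models; treating them as a good default for this fine-tune is an assumption, so tune as needed. Continuing from the quickstart above:

```python
# Assumed defaults carried over from the upstream Qwen3 model cards
# (temperature=0.6, top_p=0.95, top_k=20); not verified for this fine-tune.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
)
```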
## Intended Use

* Internal chain-of-thought (iCoT) problem solving
* Long-form symbolic math and algebraic derivations
* Curriculum-based step-by-step math tutoring
* Structured multi-turn reasoning in STEM domains
* Output generation in technical formats (LaTeX, Markdown)
## Limitations

* May require well-structured prompts for optimal reasoning output
* A limited context window can constrain extremely long multi-part problems
* Focused on precision reasoning rather than creative or subjective writing
* Works best with prompt patterns that guide internal logical steps (one illustrative pattern follows below)
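
One illustrative way to structure such a prompt is sketched below; the template is a hypothetical scaffold written for this card, not a format the model is documented to have been trained on.

```python
# Hypothetical prompt scaffold for eliciting step-by-step reasoning;
# the wording is an illustration, not a trained template.
ICOT_TEMPLATE = (
    "Work through the problem in numbered internal steps, "
    "then state the final answer on its own line.\n\n"
    "Problem: {problem}"
)

prompt = ICOT_TEMPLATE.format(problem="If 3x + 7 = 22, what is x?")
```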
## References

1. **TAUR-dev/STEPS\_\_r1\_4d\_eval\_\_mini\_all** – Dataset for structured math reasoning
2. **Internal CoT (iCoT)** – Progressive logical strategy for complex problems
3. [AIMO-2 Math Benchmark – OpenMathReasoning](https://arxiv.org/pdf/2504.16891)
4. [YaRN: Efficient Context Extension of LLMs](https://arxiv.org/pdf/2309.00071)