ModelHub XC ddc1b3ffe8: project initialized; model provided by the ModelHub XC community
Model: prithivMLmods/Canum-Qwen3_R1-4B-iCoT
Source: Original Platform
2026-05-07 18:51:49 +08:00

---
license: apache-2.0
datasets:
- TAUR-dev/STEPS__r1_4d_eval__mini_all
- TAUR-dev/STEPS__r1_8d_eval__v3_mini_all
- TAUR-dev/STEPS__r1_8d_eval__v4
- TAUR-dev/STEPS__r1_8d_eval__v3_4o
language:
- en
library_name: transformers
base_model:
- prithivMLmods/Qwen3-4B-ft-bf16
pipeline_tag: text-generation
tags:
- text-generation-inference
- trl
- moe
- code
- math
---


# Canum-Qwen3_R1-4B-iCoT

Canum-Qwen3_R1-4B-iCoT is a precision-tuned variant of the Qwen3-4B architecture, explicitly aligned with internal chain-of-thought (iCoT) methodologies. Trained on the TAUR-dev/STEPS__r1_4d_eval__mini_all dataset, this model excels in long-form mathematical reasoning, progressive symbolic logic, and multi-stage problem decomposition, all within a compact 4B parameter footprint.

> **Note:** GGUF quantized weights are available at https://huggingface.co/prithivMLmods/Canum-Qwen3_R1-4B-iCoT-Q4_K_M-GGUF

## Key Features

1. **Internal Chain-of-Thought Reasoning (iCoT):** Enables deeper logical progression through internally coherent reasoning steps, ideal for complex mathematical derivations and multivariable algebraic thinking.

2. **Dataset (TAUR-dev/STEPS__r1_4d_eval__mini_all):** Fine-tuned on structured evaluation sequences to build resilience in multi-step problem solving and improve interpretability in math-focused tasks.

3. **Long Reasoning Paths in STEM Domains:** Suited for long-chain logical flows in geometry, number theory, calculus, and symbolic manipulation, including proofs and multi-stage equation solving.

4. **Lightweight Yet Capable (4B):** Maintains strong reasoning and instruction-following abilities at lower computational cost than larger models, making it suitable for single-GPU deployments.

5. **Instruction-Following and Step-by-Step Alignment:** Follows complex instructions with multi-turn dependencies and produces granular output that aligns with the internal steps used in the reasoning process.

6. **Technical Format Adaptability:** Outputs answers in clean Markdown, LaTeX, JSON, or table formats for academic, development, and notebook-based use cases.
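As an illustration of the structured-output use case in point 6, a downstream application might ask the model for a JSON answer and then parse it out of the response text. The helper below is a minimal sketch (the function name, regex, and example response string are illustrative, not part of the model card):

```python
import json
import re

def extract_json(text: str):
    """Parse the first {...} object embedded in a model response; None on failure."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None

# A response in the shape the model might produce when asked for JSON output:
response = 'Final answer as JSON: {"width": 6, "length": 18}'
print(extract_json(response))  # → {'width': 6, 'length': 18}
```

Wrapping `json.loads` this way keeps the pipeline robust when the model surrounds the JSON object with explanatory prose.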

## Quickstart with Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Canum-Qwen3_R1-4B-iCoT"

# Load the model and tokenizer; device_map="auto" places weights on the
# available device(s), and torch_dtype="auto" keeps the checkpoint's precision.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Use internal CoT to solve: A rectangle has a length that is 3 times its width. If the perimeter is 48 units, what are the dimensions?"

messages = [
    {"role": "system", "content": "You are a reasoning assistant trained to use internal chain-of-thought (iCoT) for multi-step mathematical problems."},
    {"role": "user", "content": prompt}
]

# Render the chat messages into the model's expected prompt format.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
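The example prompt above has a closed-form solution, which is useful for sanity-checking the model's reasoning output: with width w and length 3w, the perimeter is 2(w + 3w) = 8w = 48, so w = 6 and the length is 18. A small sketch of that check (the helper name is illustrative):

```python
def rectangle_dimensions(perimeter: float, length_ratio: float = 3.0):
    """Solve 2 * (w + r*w) = P for the width w, then derive the length."""
    width = perimeter / (2 * (1 + length_ratio))
    return width, length_ratio * width

# The reference answer for the quickstart prompt (perimeter 48, length = 3 * width):
width, length = rectangle_dimensions(48)
print(width, length)  # → 6.0 18.0
```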

## Intended Use

- Internal chain-of-thought (iCoT) problem solving
- Long-form symbolic math and algebraic derivations
- Curriculum-based step-by-step math tutoring
- Structured multi-turn reasoning in STEM domains
- Output generation in technical formats (LaTeX, Markdown)

## Limitations

- May require well-structured prompts for optimal reasoning output
- Smaller context length may limit extremely long multi-part problems
- Focused on precision reasoning, not creative or subjective writing
- Best used with prompt patterns that guide internal logical steps
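A minimal sketch of the kind of step-guiding prompt pattern the last point refers to (the function name and the wording of the system message are illustrative, not part of the model card):

```python
def build_icot_messages(problem: str) -> list:
    """Assemble a chat-message list that nudges the model through explicit steps."""
    system = (
        "You are a reasoning assistant. Work through the problem using "
        "internal chain-of-thought: restate the givens, derive intermediate "
        "results step by step, then state the final answer on its own line."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": problem},
    ]

messages = build_icot_messages("Factor x^2 - 5x + 6.")
print(messages[0]["role"], "->", messages[1]["content"])  # → system -> Factor x^2 - 5x + 6.
```

The resulting list can be passed directly to `tokenizer.apply_chat_template` as in the quickstart above.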

## References

1. TAUR-dev/STEPS__r1_4d_eval__mini_all: dataset for structured math reasoning
2. Internal CoT (iCoT): progressive logical strategy for complex problems
3. AIMO-2 Math Benchmark (OpenMathReasoning)
4. YaRN: Efficient Context Extension of LLMs