---
base_model: Qwen/Qwen3-4B-Thinking-2507
datasets:
- taki555/DeepScaleR-Easy
language:
- en
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---

# Art-Qwen3-4B-Thinking-2507

This is the CoT-efficient version of the Qwen3-4B-Thinking-2507 model, presented in the paper *The Art of Efficient Reasoning: Data, Reward, and Optimization*.

The model was trained on the DeepScaleR-Easy dataset to incentivize short yet accurate thinking trajectories.
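
Since the card declares `library_name: transformers` and `pipeline_tag: text-generation`, a minimal inference sketch follows. The prompt and the `max_new_tokens` budget are illustrative choices, not values prescribed by the authors.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "taki555/Qwen3-4B-Thinking-2507-Art"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)

# Illustrative question; any reasoning prompt works.
messages = [{"role": "user", "content": "What is 15% of 240?"}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# max_new_tokens caps the total generation (thinking + answer);
# 2048 is an arbitrary budget for this example.
output_ids = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(
    output_ids[0][inputs.input_ids.shape[-1]:],
    skip_special_tokens=True,
))
```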

## Model Description

Large Language Models (LLMs) consistently benefit from scaled Chain-of-Thought (CoT) reasoning, but that scaling comes with heavy computational overhead. This model targets efficient reasoning through a two-stage training paradigm: length adaptation followed by reasoning refinement. Through reward shaping with Reinforcement Learning (RL), the model is optimized to maintain high accuracy across a wide spectrum of token budgets while avoiding the "short-is-correct" trap, in which a policy learns to truncate its reasoning because brevity alone is rewarded.
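
The card does not spell out the reward function, so the sketch below is only a hypothetical illustration of length-aware reward shaping: incorrect answers earn zero regardless of length, and correct answers earn more when they use less of the token budget, so brevity alone is never reinforced.

```python
def shaped_reward(
    is_correct: bool,
    num_tokens: int,
    budget: int,
    brevity_weight: float = 0.5,
) -> float:
    """Hypothetical length-shaped reward (illustration, not the paper's formula).

    Wrong answers get zero no matter how short they are, which blocks the
    "short-is-correct" trap; correct answers get a base reward plus a bonus
    proportional to the unused fraction of the token budget.
    """
    if not is_correct:
        return 0.0
    unused_fraction = max(0.0, 1.0 - num_tokens / budget)
    return 1.0 + brevity_weight * unused_fraction


# A correct 600-token trace beats a correct 1800-token one under a 2048 budget.
print(shaped_reward(True, 600, 2048))    # ~1.354
print(shaped_reward(True, 1800, 2048))   # ~1.061
print(shaped_reward(False, 100, 2048))   # 0.0
```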

For more details, please visit the Project Page.

## Citation

```bibtex
@inproceedings{wu2026art,
  title={The Art of Efficient Reasoning: Data, Reward, and Optimization},
  author={Taiqiang Wu and Zenan Xu and Bo Zhou and Ngai Wong},
  year={2026},
  url={https://arxiv.org/pdf/2602.20945}
}
```