---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---
# Art-Qwen3-4B-Thinking-2507
This is the CoT-efficient version of the Qwen3-4B-Thinking-2507 model, presented in the paper *The Art of Efficient Reasoning: Data, Reward, and Optimization*.
The model was trained on the DeepScaleR-Easy dataset to incentivize short yet accurate thinking trajectories.
## Model Description
Large Language Models (LLMs) consistently benefit from scaled Chain-of-Thought (CoT) reasoning, but at heavy computational cost. This model targets efficient reasoning with a two-stage training paradigm: length adaptation followed by reasoning refinement. Through reward shaping with Reinforcement Learning (RL), the model is optimized to maintain high accuracy across a wide spectrum of token budgets while avoiding the "short-is-correct" trap, in which a model learns to favor short responses even when they sacrifice accuracy on harder problems.
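The paper's exact reward function is not reproduced here; as a minimal sketch, a length-aware shaped reward might combine a correctness term with a budget-relative length penalty. The function name, linear penalty, and `alpha` coefficient below are illustrative assumptions, not the authors' formulation:

```python
def shaped_reward(is_correct: bool, length: int, budget: int, alpha: float = 0.5) -> float:
    """Illustrative length-aware reward (hypothetical, not the paper's exact rule).

    Correct answers earn 1.0 minus a penalty that grows as the thinking
    trajectory approaches or exceeds the token budget. Incorrect answers earn
    0.0 regardless of length, so brevity alone is never rewarded -- one simple
    way to avoid the "short-is-correct" trap.
    """
    if not is_correct:
        return 0.0
    # Penalty scales linearly with the fraction of the budget consumed, capped at alpha.
    overuse = min(length / budget, 1.0)
    return 1.0 - alpha * overuse

# A short correct trace outscores a long correct one; wrong traces get nothing.
print(shaped_reward(True, 200, 1000))   # 0.9
print(shaped_reward(True, 1000, 1000))  # 0.5
print(shaped_reward(False, 100, 1000))  # 0.0
```

Under a shaping like this, the RL objective still puts correctness first: shortening a trace can only recover up to `alpha` of the reward, so the optimizer has no incentive to trade accuracy for brevity.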
For more details, please visit the Project Page.
## Citation
```bibtex
@inproceedings{wu2026art,
  title={The Art of Efficient Reasoning: Data, Reward, and Optimization},
  author={Taiqiang Wu and Zenan Xu and Bo Zhou and Ngai Wong},
  year={2026},
  url={https://arxiv.org/pdf/2602.20945}
}
```