| base_model | datasets | language | license | pipeline_tag | library_name |
|---|---|---|---|---|---|
| Qwen/Qwen3-1.7B | DeepScaleR-Easy | | apache-2.0 | text-generation | transformers |
# Art-Qwen3-1.7B

This model is the Chain-of-Thought (CoT)-efficient version of Qwen3-1.7B, developed as part of the research presented in the paper "The Art of Efficient Reasoning: Data, Reward, and Optimization".

## Model Description
Art-Qwen3-1.7B is optimized for efficient reasoning, aiming to produce short yet accurate thinking trajectories. It was trained using Reinforcement Learning (RL) with specialized reward shaping on the DeepScaleR-Easy dataset. The training follows a two-stage paradigm involving length adaptation and reasoning refinement to maintain high accuracy while reducing computational overhead.
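To make the reward-shaping idea concrete, here is an illustrative length-aware reward sketch. This is not the paper's actual reward function; the `target_len` budget and the linear overflow penalty are assumptions chosen only to show how correctness can be combined with a length incentive:

```python
# Illustrative sketch only: a generic length-aware reward of the kind used in
# efficient-reasoning RL. The paper's actual reward shaping may differ; the
# target_len value and the linear penalty below are assumptions.
def shaped_reward(is_correct: bool, num_tokens: int, target_len: int = 1024) -> float:
    """Reward correct answers, penalizing CoT length beyond target_len."""
    if not is_correct:
        return 0.0  # no reward for wrong answers, regardless of length
    overflow = max(0, num_tokens - target_len)
    # Full reward while within budget; decays linearly as the trace grows.
    return max(0.0, 1.0 - overflow / target_len)
```

Under a reward of this shape, two correct trajectories are ranked by length, which pushes the policy toward short yet accurate thinking traces.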
- Paper: [The Art of Efficient Reasoning: Data, Reward, and Optimization](https://arxiv.org/pdf/2602.20945)
- Project Page: https://wutaiqiang.github.io/project/Art
- Base Model: Qwen/Qwen3-1.7B
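## Usage

A minimal usage sketch with the `transformers` library, assuming the checkpoint is published as `taki555/Qwen3-1.7B-Art` and retains the standard Qwen3 chat interface, including the `enable_thinking` chat-template switch:

```python
# Minimal sketch; assumes the repo id taki555/Qwen3-1.7B-Art and the standard
# Qwen3 chat template (enable_thinking is a Qwen3 template option).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "taki555/Qwen3-1.7B-Art"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "What is 17 * 24?"}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # the model is trained to keep this trajectory short
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```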
## Citation

```bibtex
@inproceedings{wu2026art,
  title={The Art of Efficient Reasoning: Data, Reward, and Optimization},
  author={Taiqiang Wu and Zenan Xu and Bo Zhou and Ngai Wong},
  year={2026},
  url={https://arxiv.org/pdf/2602.20945}
}
```