Model: taki555/Qwen3-30B-A3B-Instruct-2507-Art
| base_model | datasets | language | license | pipeline_tag | library_name |
|---|---|---|---|---|---|
| | | | apache-2.0 | text-generation | transformers |
Qwen3-30B-A3B-Art
This is the Chain-of-Thought (CoT)-efficient version of the Qwen3-30B-A3B-Instruct-2507 model, introduced in the paper The Art of Efficient Reasoning: Data, Reward, and Optimization.
The model is designed to generate short yet accurate reasoning trajectories, reducing computational overhead while maintaining high performance. It was trained on the DeepScaleR-Easy dataset using reward shaping with Reinforcement Learning (RL).
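Since this is a standard text-generation checkpoint for the transformers library, it can be used like other Qwen3 instruct models. The sketch below assumes the usual transformers causal-LM API (`AutoTokenizer`, `AutoModelForCausalLM`, and a chat template); the function name `generate_reply` and the dtype/device settings are illustrative, not part of this card.

```python
# Minimal usage sketch, assuming the standard transformers causal-LM API.
# Adjust dtype/device settings for your hardware.

MODEL_ID = "taki555/Qwen3-30B-A3B-Instruct-2507-Art"

def generate_reply(prompt: str, max_new_tokens: int = 512) -> str:
    # Imports kept inside the function so the sketch can be read without
    # transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Build a chat-formatted prompt; the model is trained to answer with a
    # short reasoning trajectory before the final answer.
    messages = [{"role": "user", "content": prompt}]
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

Because the model is tuned for short trajectories, a modest `max_new_tokens` budget is usually sufficient on the kinds of problems it was trained on.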
Project Resources
- Project Page: https://wutaiqiang.github.io/project/Art
- Paper: arXiv:2602.20945
Citation
If you find this work useful, please cite:
```bibtex
@inproceedings{wu2026art,
  title={The Art of Efficient Reasoning: Data, Reward, and Optimization},
  author={Taiqiang Wu and Zenan Xu and Bo Zhou and Ngai Wong},
  year={2026},
  url={https://arxiv.org/pdf/2602.20945}
}
```