---
base_model:
- Qwen/Qwen3-30B-A3B-Instruct-2507
datasets:
- taki555/DeepScaleR-Easy
language:
- en
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---

# Qwen3-30B-A3B-Art

This is the Chain-of-Thought (CoT)-efficient version of the **Qwen3-30B-A3B-Instruct-2507** model, introduced in the paper [The Art of Efficient Reasoning: Data, Reward, and Optimization](https://huggingface.co/papers/2602.20945). The model is designed to generate short yet accurate reasoning trajectories, reducing computational overhead while maintaining high performance. It was trained on the [DeepScaleR-Easy](https://huggingface.co/datasets/taki555/DeepScaleR-Easy) dataset via reinforcement learning (RL) with reward shaping.

## Project Resources

- **Project Page:** [https://wutaiqiang.github.io/project/Art](https://wutaiqiang.github.io/project/Art)
- **Paper:** [arXiv:2602.20945](https://huggingface.co/papers/2602.20945)

## Citation

If you find this work useful, please cite:

```bibtex
@inproceedings{wu2026art,
  title={The Art of Efficient Reasoning: Data, Reward, and Optimization},
  author={Taiqiang Wu and Zenan Xu and Bo Zhou and Ngai Wong},
  year={2026},
  url={https://arxiv.org/pdf/2602.20945}
}
```
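
## Usage

Since the card declares `library_name: transformers` and `pipeline_tag: text-generation`, the model should load with the standard `transformers` chat-generation workflow. The sketch below is a minimal example under that assumption; the repo id is hypothetical (inferred from this card's title and the dataset owner) and should be replaced with the actual Hub id.

```python
# Minimal generation sketch using the standard transformers chat API.
# NOTE: the repo id below is an assumption based on this card's title;
# substitute the actual Hugging Face Hub id if it differs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "taki555/Qwen3-30B-A3B-Art"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # select bf16/fp16 automatically where supported
    device_map="auto",   # shard the 30B MoE model across available GPUs
)

# A math prompt, matching the reasoning-style data the model was tuned on.
messages = [
    {"role": "user", "content": "If 3x + 7 = 22, what is x?"}
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# The model is tuned for short reasoning traces, so a modest
# max_new_tokens budget is usually sufficient.
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```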