ModelHub XC d532906a1c Initialize project; model provided by the ModelHub XC community
Model: taki555/Qwen3-30B-A3B-Instruct-2507-Art
Source: Original Platform
2026-04-24 00:09:10 +08:00

base_model: Qwen/Qwen3-30B-A3B-Instruct-2507
datasets: taki555/DeepScaleR-Easy
language: en
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers

Qwen3-30B-A3B-Art

This is the Chain-of-Thought (CoT) efficient version of the Qwen3-30B-A3B-Instruct-2507 model, introduced in the paper The Art of Efficient Reasoning: Data, Reward, and Optimization.

The model is trained to generate short yet accurate reasoning trajectories, reducing inference-time computational overhead while maintaining high accuracy. It was trained on the DeepScaleR-Easy dataset using reward shaping with reinforcement learning (RL).
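Since the card declares `transformers` as its library, the model can presumably be loaded with the standard `AutoModelForCausalLM` interface. The sketch below shows a minimal single-turn generation loop; the sampling parameters and the example question are illustrative assumptions, not values from the paper, and running the full script requires a GPU with enough memory for a 30B-parameter model.

```python
model_id = "taki555/Qwen3-30B-A3B-Instruct-2507-Art"

def build_prompt(tokenizer, question):
    """Format a single-turn chat prompt using the model's chat template."""
    messages = [{"role": "user", "content": question}]
    return tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

if __name__ == "__main__":
    # Requires `transformers` and `torch`; weights are downloaded on first use.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    prompt = build_prompt(tokenizer, "What is 15% of 240?")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    # The model is trained for short CoT, so a modest token budget is often enough.
    out = model.generate(**inputs, max_new_tokens=1024)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(
        out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    ))
```

Because the model targets short reasoning traces, you can compare its output length against the base Qwen3-30B-A3B-Instruct-2507 under the same `max_new_tokens` budget to see the efficiency gain.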

Project Resources

Citation

If you find this work useful, please cite:

@inproceedings{wu2026art,
  title={The Art of Efficient Reasoning: Data, Reward, and Optimization},
  author={Taiqiang Wu and Zenan Xu and Bo Zhou and Ngai Wong},
  year={2026},
  url={https://arxiv.org/pdf/2602.20945}
}