---
base_model: Qwen/Qwen3-1.7B
datasets:
- taki555/DeepScaleR-Easy
language:
- en
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---

# Art-Qwen3-1.7B

This model is a Chain-of-Thought (CoT)-efficient version of Qwen3-1.7B, developed as part of the research presented in the paper "The Art of Efficient Reasoning: Data, Reward, and Optimization".

## Model Description

Art-Qwen3-1.7B is optimized for efficient reasoning, aiming to produce short yet accurate thinking trajectories. It was trained using Reinforcement Learning (RL) with specialized reward shaping on the DeepScaleR-Easy dataset. The training follows a two-stage paradigm involving length adaptation and reasoning refinement to maintain high accuracy while reducing computational overhead.
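The paper's exact reward is not reproduced in this card. As an illustration only, a minimal sketch of length-aware reward shaping of the kind described above (correct answers rewarded, with a penalty that grows as the thinking trajectory exceeds a target length) might look like the following; the function name, `target_len`, and `alpha` are all hypothetical:

```python
def length_shaped_reward(correct: bool, n_thinking_tokens: int,
                         target_len: int = 512, alpha: float = 0.5) -> float:
    """Hypothetical length-aware reward for RL on reasoning traces.

    A correct answer earns up to 1.0, reduced in proportion to how far
    the thinking trajectory overshoots a target length. An incorrect
    answer earns 0.0 regardless of length, so the policy is never
    rewarded for being short but wrong.
    """
    if not correct:
        return 0.0
    # Penalize only tokens beyond the target; cap so a correct answer
    # never receives a negative reward.
    overshoot = max(0, n_thinking_tokens - target_len) / target_len
    return max(0.0, 1.0 - alpha * overshoot)
```

With these defaults, a correct answer using 512 thinking tokens or fewer earns the full reward of 1.0, while one using 1024 tokens earns 0.5.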
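The card provides no usage snippet. Below is a minimal sketch of running the model with the `transformers` library; the model ID comes from this card, but the `generate` helper and its default `max_new_tokens` are illustrative, not from the authors:

```python
MODEL_ID = "taki555/Qwen3-1.7B-Art"

def build_messages(question: str) -> list[dict]:
    """Wrap a user question in the chat-message format Qwen3 tokenizers expect."""
    return [{"role": "user", "content": question}]

def generate(question: str, max_new_tokens: int = 1024) -> str:
    """Load the model and generate a response (downloads weights on first use)."""
    # Imports are local so the helpers above stay importable without torch.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    text = tokenizer.apply_chat_template(
        build_messages(question), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, dropping the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

For example, `generate("What is 17 * 24?")` returns the model's (CoT-efficient) reasoning and answer as a string.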

## Citation

```bibtex
@inproceedings{wu2026art,
  title={The Art of Efficient Reasoning: Data, Reward, and Optimization},
  author={Taiqiang Wu and Zenan Xu and Bo Zhou and Ngai Wong},
  year={2026},
  url={https://arxiv.org/pdf/2602.20945}
}
```