
---
base_model:
- Qwen/Qwen3-1.7B
datasets:
- taki555/DeepScaleR-Easy
language:
- en
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---
# Art-Qwen3-1.7B
This model is a Chain-of-Thought (CoT)-efficient version of [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B), developed as part of the research presented in the paper "[The Art of Efficient Reasoning: Data, Reward, and Optimization](https://huggingface.co/papers/2602.20945)".
## Model Description
Art-Qwen3-1.7B is optimized for efficient reasoning, aiming to produce short yet accurate thinking trajectories. It was trained using Reinforcement Learning (RL) with specialized reward shaping on the [DeepScaleR-Easy](https://huggingface.co/datasets/taki555/DeepScaleR-Easy) dataset. The training follows a two-stage paradigm involving length adaptation and reasoning refinement to maintain high accuracy while reducing computational overhead.
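As a rough illustration of what "reward shaping for CoT efficiency" can look like, the sketch below scores a correct answer highest when its thinking trace stays within a token budget and linearly discounts it beyond that. This is a generic sketch, not the paper's actual reward; the `target_len` budget, the `alpha` weight, and the `<think>...</think>` trace format are assumptions.

```python
import re

def shaped_reward(response: str, is_correct: bool,
                  target_len: int = 1024, alpha: float = 0.5) -> float:
    """Generic length-shaped reward sketch (NOT the paper's exact reward).

    Correct answers within the thinking-token budget get full reward;
    longer traces are linearly penalized. Wrong answers get no reward,
    so brevity is never rewarded at the expense of accuracy.
    """
    if not is_correct:
        return 0.0
    # Measure only the thinking trace if the model emits <think> tags.
    think = re.search(r"<think>(.*?)</think>", response, re.DOTALL)
    text = think.group(1) if think else response
    n_tokens = len(text.split())  # crude whitespace token count for the sketch
    # Penalize only the fraction by which the trace exceeds the budget.
    overshoot = max(0.0, n_tokens / target_len - 1.0)
    return max(0.0, 1.0 - alpha * overshoot)
```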
- **Paper:** [The Art of Efficient Reasoning: Data, Reward, and Optimization](https://huggingface.co/papers/2602.20945)
- **Project Page:** [https://wutaiqiang.github.io/project/Art](https://wutaiqiang.github.io/project/Art)
- **Base Model:** [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B)
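## Usage
A minimal inference sketch using the standard `transformers` chat interface. The repo id `taki555/Qwen3-1.7B-Art` and the generation settings are assumptions for illustration; adjust them to your checkpoint and hardware.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "taki555/Qwen3-1.7B-Art"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "What is 17 * 24?"}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# The model is trained to keep its thinking trajectory short,
# so a modest token budget is usually sufficient.
outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```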
## Citation
```bibtex
@inproceedings{wu2026art,
  title={The Art of Efficient Reasoning: Data, Reward, and Optimization},
  author={Taiqiang Wu and Zenan Xu and Bo Zhou and Ngai Wong},
  year={2026},
  url={https://arxiv.org/pdf/2602.20945}
}
```