Initialize project; model provided by the ModelHub XC community

Model: taki555/Qwen3-30B-A3B-Instruct-2507-Art
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-04-24 00:09:10 +08:00
commit d532906a1c
25 changed files with 170786 additions and 0 deletions

README.md Normal file

@@ -0,0 +1,34 @@
---
base_model:
- Qwen/Qwen3-30B-A3B-Instruct-2507
datasets:
- taki555/DeepScaleR-Easy
language:
- en
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---
# Qwen3-30B-A3B-Art
This is the Chain-of-Thought (CoT) efficient version of the **Qwen3-30B-A3B-Instruct-2507** model, introduced in the paper [The Art of Efficient Reasoning: Data, Reward, and Optimization](https://huggingface.co/papers/2602.20945).
The model is designed to generate short yet accurate reasoning trajectories, reducing computational overhead while maintaining high performance. It was trained on the [DeepScaleR-Easy](https://huggingface.co/datasets/taki555/DeepScaleR-Easy) dataset with reinforcement learning (RL), using reward shaping to encourage concise chains of thought.
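As a text-generation model in the `transformers` library, it can be used with the standard chat-template workflow. The sketch below is illustrative, not an official usage snippet from this repository: the model ID is taken from this commit, while the example question and generation settings are assumptions.

```python
# Illustrative sketch of loading this model with Hugging Face `transformers`.
# Model ID is from this repository; the prompt and settings are examples only.
MODEL_ID = "taki555/Qwen3-30B-A3B-Instruct-2507-Art"

def build_messages(question: str) -> list[dict]:
    """Wrap a user question in the chat-message format expected by instruct models."""
    return [{"role": "user", "content": question}]

if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )

    messages = build_messages("If 3x + 5 = 20, what is x?")
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    # A CoT-efficient model should need fewer new tokens than the base model
    # for the same answer; max_new_tokens here is an arbitrary cap.
    output = model.generate(inputs, max_new_tokens=512)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that loading a 30B-parameter MoE checkpoint requires substantial GPU memory; `device_map="auto"` lets `accelerate` shard it across available devices.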
## Project Resources
- **Project Page:** [https://wutaiqiang.github.io/project/Art](https://wutaiqiang.github.io/project/Art)
- **Paper:** [arXiv:2602.20945](https://huggingface.co/papers/2602.20945)
## Citation
If you find this work useful, please cite:
```bibtex
@inproceedings{wu2026art,
title={The Art of Efficient Reasoning: Data, Reward, and Optimization},
author={Taiqiang Wu and Zenan Xu and Bo Zhou and Ngai Wong},
year={2026},
url={https://arxiv.org/pdf/2602.20945}
}
```