---
license: mit
library_name: transformers
pipeline_tag: text-generation
---

# GT-GRPO: Qwen3-8B-Base trained on DAPO-14k

This model is a checkpoint of the **GT-GRPO: Qwen3-8B-Base** model, trained on the DAPO-14k dataset. It is part of the research presented in the paper [Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models](https://huggingface.co/papers/2508.00410).

## Paper Abstract Summary

The paper introduces **Co-rewarding**, a novel self-supervised reinforcement learning (RL) framework designed to enhance the reasoning ability of large language models (LLMs). It addresses the common issue of training collapse in self-rewarding methods by seeking complementary supervision from multiple views. Co-rewarding is instantiated in two ways: data-side (Co-rewarding-I), using contrastive agreement across semantically analogous questions, and model-side (Co-rewarding-II), via self-distillation with a slowly-updated reference teacher. This approach improves training stability and significantly outperforms other self-rewarding baselines on various mathematical reasoning benchmarks, sometimes even surpassing RLVR methods that use ground-truth labels.

## GitHub Repository

For more details, including installation instructions, training procedures, and other released checkpoints and datasets related to the Co-rewarding framework, please refer to the [official GitHub repository](https://github.com/tmlr-group/Co-rewarding).

## Citation

If you use our datasets or models, please cite our paper:

```bibtex
@article{zhang2025co,
  title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
  journal={arXiv preprint arXiv:2508.00410},
  year={2025}
}
```
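
## Usage Example

Since this repository is tagged for `transformers` text generation, a minimal loading sketch follows. This is an illustrative example rather than an official recipe from the authors: the `model_id` below is a hypothetical placeholder (substitute this checkpoint's actual Hub path), and the plain completion-style prompt is an assumption, since no chat template is documented here.

```python
# pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical placeholder -- replace with this checkpoint's actual Hub repo id.
model_id = "tmlr-group/GT-GRPO-Qwen3-8B-Base-DAPO14k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the checkpoint's native precision
    device_map="auto",    # place layers on available GPU(s)/CPU
)

# Base-model style prompt; adjust to match the paper's evaluation format.
prompt = "Question: What is 17 * 24? Please reason step by step.\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a completion; raise max_new_tokens for longer reasoning chains.
outputs = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

For the prompt templates and decoding settings actually used in the paper's experiments, consult the [official GitHub repository](https://github.com/tmlr-group/Co-rewarding).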