---
license: mit
pipeline_tag: text-generation
library_name: transformers
base_model: Qwen/Qwen3-8B
datasets:
- TMLR-Group-HF/Co-rewarding-RephrasedMATH
- TMLR-Group-HF/Co-rewarding-RephrasedDAPO-14k
- TMLR-Group-HF/Co-rewarding-RephrasedOpenRS
---

# Co-rewarding: CoReward-Qwen3-8B-Base

This repository hosts the **CoReward-Qwen3-8B-Base** model, a Qwen3-8B-Base model fine-tuned using the Co-rewarding method on the MATH training set. The model was presented in the paper [Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models](https://huggingface.co/papers/2508.00410).
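
The model can be loaded with the standard `transformers` text-generation API. Below is a minimal usage sketch, assuming the repository id `TMLR-Group-HF/Co-rewarding-I-Qwen3-8B-Base-MATH`; the prompt wording and sampling settings are illustrative choices, not values prescribed by this card.

```python
# Minimal usage sketch; the prompt and generation settings below are
# illustrative assumptions, not values prescribed by this model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TMLR-Group-HF/Co-rewarding-I-Qwen3-8B-Base-MATH"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"  # device_map needs `accelerate`
)

prompt = "Solve step by step: if 3x + 5 = 20, what is x?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.6)
# Strip the prompt tokens and print only the completion.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```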
## Abstract
While reinforcement learning with verifiable rewards (RLVR) is effective at improving the reasoning ability of large language models (LLMs), its reliance on human-annotated labels creates a scaling dilemma, especially on complex tasks. Recent self-rewarding methods investigate a label-free alternative for unlocking the reasoning capabilities of LLMs, yet they frequently suffer from non-negligible training collapse: the single-view supervision signal easily forms a self-consistent illusion, yielding reward hacking. Inspired by the success of self-supervised learning, we propose *Co-rewarding*, a novel self-supervised RL framework that improves training stability by seeking complementary supervision from other views. Specifically, we instantiate Co-rewarding in two ways: (1) *Co-rewarding-I* is a data-side instantiation that derives reward signals from contrastive agreement across semantically analogous questions; and (2) *Co-rewarding-II* is a model-side instantiation that maintains a slowly-updated reference teacher with pseudo labels to realize self-distillation. Intuitively, both instantiations introduce discrepancy at different levels, making it harder for training to collapse onto trivial reasoning solutions. Empirically, Co-rewarding exhibits stable training across various setups and outperforms other self-rewarding baselines by an average of +3.31% on multiple mathematical reasoning benchmarks, including +7.49% on Llama-3.2-3B-Instruct. Notably, Co-rewarding matches or even surpasses RLVR with ground-truth (GT) labels in several cases, e.g., a Pass@1 of 94.01% on GSM8K with Qwen3-8B-Base, remarkably higher than the GT-trained counterpart. Our code is publicly available at [https://github.com/tmlr-group/Co-rewarding](https://github.com/tmlr-group/Co-rewarding).
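
To make the two instantiations concrete, the toy sketch below illustrates the core ideas under stated assumptions; it is not the authors' implementation (see the GitHub repository linked below for the actual code). For Co-rewarding-I, rollouts on the original question are rewarded by agreement with the majority-vote answer extracted from rollouts on a rephrased version of the same question; for Co-rewarding-II, the slowly-updated reference teacher is assumed here to follow an exponential-moving-average (EMA) schedule. All function names are hypothetical.

```python
from collections import Counter

def corewarding_i_rewards(orig_answers, rephrased_answers):
    """Toy Co-rewarding-I (data-side): reward each rollout on the original
    question by agreement with the majority-vote pseudo-label taken from
    rollouts on a rephrased version of the same question. Illustrative only."""
    pseudo_label = Counter(rephrased_answers).most_common(1)[0][0]
    return [1.0 if ans == pseudo_label else 0.0 for ans in orig_answers]

def ema_teacher_update(teacher, student, tau=0.99):
    """Toy Co-rewarding-II (model-side): the reference teacher that supplies
    pseudo-labels drifts slowly toward the student (EMA is an assumption)."""
    return {k: tau * teacher[k] + (1.0 - tau) * student[k] for k in teacher}

# Example: three sampled final answers per view.
print(corewarding_i_rewards(["4", "5", "4"], ["4", "4", "5"]))  # [1.0, 0.0, 1.0]
```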
For more details about the Co-rewarding framework and its implementation, please refer to our GitHub repository: [https://github.com/tmlr-group/Co-rewarding](https://github.com/tmlr-group/Co-rewarding).
## Citation
If you use our datasets or models, please cite our paper!
```bibtex
@article{zhang2025coreward,
  title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zizhuo Zhang and Jianing Zhu and Xinmu Ge and Zihua Zhao and Zhanke Zhou and Xuan Li and Xiao Feng and Jiangchao Yao and Bo Han},
  journal={arXiv preprint arXiv:2508.00410},
  year={2025}
}
```