From 7df0feb4ca83f8228f4377fca1ae75522e571167 Mon Sep 17 00:00:00 2001
From: ai-modelscope
Date: Fri, 6 Jun 2025 02:33:39 +0800
Subject: [PATCH] Update README.md

---
 README.md | 101 +++++++++++++++++++++++++++++++-----------------------
 1 file changed, 58 insertions(+), 43 deletions(-)

diff --git a/README.md b/README.md
index a4ee25c..75aa333 100644
--- a/README.md
+++ b/README.md
@@ -1,47 +1,62 @@
 ---
-license: Apache License 2.0
-
-#model-type:
-##e.g. gpt, phi, llama, chatglm, baichuan, etc.
-#- gpt
-
-#domain:
-##e.g. nlp, cv, audio, multi-modal
-#- nlp
-
-#language:
-##list of language codes: https://help.aliyun.com/document_detail/215387.html?spm=a2c4g.11186623.0.0.9f8d7467kni6Aa
-#- cn
-
-#metrics:
-##e.g. CIDEr, BLEU, ROUGE, etc.
-#- CIDEr
-
-#tags:
-##custom tags, including training methods such as pretrained, fine-tuned, instruction-tuned, RL-tuned, and others
-#- pretrained
-
-#tools:
-##e.g. vllm, fastchat, llamacpp, AdaSeq, etc.
-#- vllm
+language:
+- en
+base_model:
+- meta-llama/Llama-3.2-3B-Instruct
+tags:
+- One-Shot-CFT
 ---
-### The contributors of this model have not provided a more detailed model introduction. The model files and weights can be found on the "Model Files" page.
-#### You can download the model with the git clone command below, or via the ModelScope SDK
+# One-Shot-CFT: Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem
-
-SDK download
-```bash
-# Install ModelScope
-pip install modelscope
-```
-```python
-# Download the model via the SDK
-from modelscope import snapshot_download
-model_dir = snapshot_download('TIGER-Lab/One-Shot-CFT-Math-Llama-3B')
-```
-Git download
-```
-# Download the model via git
-git clone https://www.modelscope.cn/TIGER-Lab/One-Shot-CFT-Math-Llama-3B.git
-```
-
-If you are a contributor to this model, we invite you to promptly complete the model card according to the model contribution documentation.
\ No newline at end of file
+
+💻 Code | 📄 Paper | 📊 Dataset | 🤗 Model | 🌐 Project Page
+
+
+## 🧠 Overview
+
+One-Shot Critique Fine-Tuning (CFT) is a simple, robust, and compute-efficient training paradigm for unleashing the reasoning capabilities of pretrained LLMs in both mathematical and logical domains. By training on critiques of solutions to just one problem, One-Shot CFT enables models like Qwen and LLaMA to match or even outperform reinforcement learning, while using 20× less compute.
+
+Instead of learning from reference answers (as in supervised fine-tuning) or reward signals (as in reinforcement learning), One-Shot CFT trains models on critiques of diverse solutions to a single problem. This exposes them to varied reasoning patterns, perspectives, and error types, mitigates overfitting, and thereby unleashes their reasoning potential more effectively.
+
+## ✨ Key Highlights
+
+- **Unleashes Reasoning with One Example:** One-Shot CFT uses critiques of diverse model-generated solutions to a single problem to significantly boost performance across math and logic tasks. For example, with just 5 GPU hours of training on Qwen2.5-Math-7B, One-Shot CFT achieves an average improvement of +15% on six math benchmarks and +16% on three logic reasoning benchmarks.
+- **Outperforms RLVR and Full SFT with 20× Less Compute:** One-Shot CFT outperforms both one-shot Reinforcement Learning with Verifiable Rewards (RLVR) and full-dataset supervised fine-tuning, while requiring only 5 GPU hours on a 7B model, offering a far more efficient and stable training alternative.
+- **Robust Across Seeds and Model Scales:** One-Shot CFT remains effective across different seed problem choices and across model sizes from 1.5B to 14B parameters, demonstrating strong generalization and scalability.
+
+**This specific model is the One-Shot CFT variant trained from [Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on the [DSR-CFT-p0](https://huggingface.co/datasets/TIGER-Lab/One-Shot-CFT-Data) dataset.**
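+
+## 🚀 Quick Start
+
+A minimal inference sketch is shown below. It assumes the checkpoint is published under the `TIGER-Lab/One-Shot-CFT-Math-Llama-3B` repository ID used elsewhere on this page and that it loads with the standard `transformers` chat interface inherited from Llama-3.2-3B-Instruct; adjust the model ID and generation settings as needed.
+
+```python
+# Illustrative only: the model ID and generation settings are assumptions, not part of the official card.
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+model_id = "TIGER-Lab/One-Shot-CFT-Math-Llama-3B"
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
+
+# Llama-3.2-Instruct checkpoints use a chat template, so the prompt is formatted as a chat turn.
+messages = [{"role": "user", "content": "If 3x + 5 = 20, what is x? Show your reasoning."}]
+inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+
+outputs = model.generate(inputs, max_new_tokens=512)
+# Decode only the newly generated tokens, skipping the prompt.
+print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+```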
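+
+For intuition, the sketch below shows how a critique-fine-tuning example could be assembled from the seed problem, one model-generated solution, and a critique of that solution, as described in the overview above. The field names, prompt wording, and example problem are hypothetical and are not taken from the DSR-CFT-p0 release.
+
+```python
+# Schematic illustration of a critique-fine-tuning (CFT) training pair.
+# NOTE: field names and prompt format are hypothetical; see the paper and dataset for the real format.
+seed_problem = "If 3x + 5 = 20, what is x?"
+
+# One of many diverse candidate solutions generated for the single seed problem (this one is flawed).
+candidate_solution = "3x + 5 = 20, so 3x = 25 and x = 25/3."
+
+# The critique identifies the error and gives the corrected reasoning.
+critique = (
+    "The subtraction is wrong: 20 - 5 = 15, not 25. "
+    "From 3x = 15 we get x = 5, so the answer is x = 5."
+)
+
+# The model is fine-tuned to generate the critique given the problem and a candidate solution,
+# rather than to reproduce a reference answer (SFT) or to maximize a reward signal (RLVR).
+cft_example = {
+    "prompt": f"Problem: {seed_problem}\nCandidate solution: {candidate_solution}\nWrite a critique of this solution.",
+    "completion": critique,
+}
+print(cft_example["prompt"])
+```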
+
+## Main Results
+
+[Figure: CFT Performance Comparison]
+
+One-shot CFT consistently improves mathematical and logical reasoning.
+Left: Average accuracy on six mathematical reasoning benchmarks for Qwen and LLaMA models, comparing base, SFT, RLVR, and CFT with only one training example.
+Right: In-domain accuracy on three logic reasoning benchmarks (BBEH subtasks) for Qwen2.5-Math-7B.
+Across both domains, CFT with a single problem significantly outperforms standard SFT and matches or exceeds reinforcement learning with much lower compute.
+
+## Citation
+
+If you find our work helpful, please cite it as:
+
+```bibtex
+@article{wang2025unleashing,
+  title={Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem},
+  author={Wang, Yubo and Nie, Ping and Zou, Kai and Wu, Lijun and Chen, Wenhu},
+  journal={arXiv preprint arXiv:2506.03295},
+  year={2025}
+}
+```
\ No newline at end of file