| base_model | datasets | library_name | model_name | tags | licence |
|---|---|---|---|---|---|
| Qwen/Qwen2.5-1.5B-Instruct | gsm8k-dataset | transformers | Qwen2.5-1.5B-Instruct_math_grpo_cosine_0.5_0.5_SEC0.3DRO1.0G0.0_minpTrue_1600 | | license |
# Model Card for Qwen2.5-1.5B-Instruct_math_grpo_cosine_0.5_0.5_SEC0.3DRO1.0G0.0_minpTrue_1600
This model is a fine-tuned version of Qwen/Qwen2.5-1.5B-Instruct on the gsm8k-dataset dataset. It has been trained using E2H (easy-to-hard curriculum reinforcement learning) on top of TRL.
## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shubhamprshr/Qwen2.5-1.5B-Instruct_math_grpo_cosine_0.5_0.5_SEC0.3DRO1.0G0.0_minpTrue_1600", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
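For finer control over generation (dtype, sampling parameters, batching), the checkpoint can also be loaded directly. A minimal sketch, assuming the same repository id; the question is an illustrative GSM8K-style prompt, not taken from the training data:

```python
# Direct loading without the pipeline wrapper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shubhamprshr/Qwen2.5-1.5B-Instruct_math_grpo_cosine_0.5_0.5_SEC0.3DRO1.0G0.0_minpTrue_1600"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Illustrative grade-school math question in the GSM8K style.
messages = [{"role": "user", "content": "A baker sells 12 loaves a day at $3 each. How much does she earn in a week?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```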
## Training procedure

This model was trained with GRPO, a method introduced in *DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models* (arXiv:2402.03300).
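The exact E2H curriculum schedule and reward shaping for this checkpoint are described in the paper cited below. As a rough orientation, a plain GRPO run on GSM8K with TRL 0.19 looks like the sketch that follows; the prompt formatting, reward function, and hyperparameters here are illustrative assumptions, not the values used to train this model.

```python
# Minimal GRPO-on-GSM8K sketch with TRL. Illustrative only: E2H additionally
# orders training tasks from easy to hard, which this sketch does not do.
import re
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# GSM8K answers end in "#### <number>"; keep the gold number per example.
dataset = load_dataset("openai/gsm8k", "main", split="train")
dataset = dataset.map(
    lambda x: {
        "prompt": x["question"],
        "answer": x["answer"].split("####")[-1].strip(),
    }
)

def correctness_reward(completions, answer, **kwargs):
    """Reward 1.0 when the last number in the completion matches the gold answer."""
    rewards = []
    for completion, gold in zip(completions, answer):
        numbers = re.findall(r"-?\d+\.?\d*", completion.replace(",", ""))
        rewards.append(1.0 if numbers and numbers[-1] == gold else 0.0)
    return rewards

training_args = GRPOConfig(
    output_dir="qwen2.5-1.5b-grpo-gsm8k",  # hypothetical output path
    per_device_train_batch_size=8,
    num_generations=8,  # completions sampled per prompt for the group baseline
)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-1.5B-Instruct",
    reward_funcs=correctness_reward,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```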
### Framework versions

- TRL: 0.19.1
- Transformers: 4.53.1
- PyTorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
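If results differ from this card, it may be worth confirming that the local environment matches these pins. A quick check, assuming all five packages are installed:

```python
# Print installed versions to compare against the pins listed above.
import datasets, tokenizers, torch, transformers, trl

for module in (trl, transformers, torch, datasets, tokenizers):
    print(f"{module.__name__}: {module.__version__}")
```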
## Citations

Cite E2H as:

```bibtex
@inproceedings{parashar2026curriculum,
    title     = {Curriculum Reinforcement Learning from Easy to Hard Tasks Improves {LLM} Reasoning},
    author    = {Parashar, Shubham and Gui, Shurui and Li, Xiner and Ling, Hongyi and Vemuri, Sushil and Olson, Blake and Li, Eric and Zhang, Yu and Caverlee, James and Kalathil, Dileep and Ji, Shuiwang},
    booktitle = {The Fourteenth International Conference on Learning Representations},
    year      = {2026},
    url       = {https://openreview.net/forum?id=KJvHnl3kUv}
}
```