---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-3B
tags:
- reinforcement-learning
- grpo
- math-reasoning
- pipelinerl
datasets:
- gsm8k_train
- math_train
pipeline_tag: text-generation
---

# Qwen2.5-3B-GRPO-math-reasoning

This model is a fine-tuned version of Qwen/Qwen2.5-3B, trained with GRPO (Group Relative Policy Optimization) without a KL penalty for mathematical reasoning.

Training was done with PipelineRL.

## Training Details

### Datasets

| Split | Datasets                |
|-------|-------------------------|
| Train | gsm8k_train, math_train |
| Test  | gsm8k_test, math_500    |

### RL Algorithm

| Parameter                     | Value                                     |
|-------------------------------|-------------------------------------------|
| Algorithm                     | GRPO (Group Relative Policy Optimization) |
| Policy Loss                   | ppo                                       |
| KL Coefficient                | 0.0                                       |
| Epsilon (clip)                | 0.02                                      |
| Divide Advantage by Std       | False                                     |
| Filter Zero-Advantage Groups  | False                                     |
| Rollouts per Problem          | 16                                        |
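To make the table concrete, here is a minimal sketch of how GRPO forms advantages from a group of rollouts for one problem: each rollout's advantage is its reward minus the group mean, and, per the configuration above, division by the group standard deviation is disabled. This is an illustration, not PipelineRL's actual implementation; the function name and toy rewards are made up.

```python
import numpy as np

def group_relative_advantages(rewards, divide_by_std=False):
    """GRPO advantages for one group of rollouts of the same problem.

    Advantage = reward - group mean. With divide_by_std=False (as in the
    config above), no std normalization is applied.
    """
    rewards = np.asarray(rewards, dtype=np.float64)
    adv = rewards - rewards.mean()
    if divide_by_std:
        adv = adv / (rewards.std() + 1e-8)
    return adv

# 16 rollouts per problem with binary correctness rewards (9 correct)
rewards = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
adv = group_relative_advantages(rewards)
# correct rollouts get +7/16, incorrect ones -9/16; the group mean is zero
```

Because advantages are centered within each group, a group where all rollouts get the same reward contributes zero advantage everywhere (which is why some setups filter such groups; here that filter is off).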

### Training Hyperparameters

| Parameter             | Value            |
|-----------------------|------------------|
| Base Model            | Qwen/Qwen2.5-3B  |
| Learning Rate         | 1e-06            |
| LR Scheduler          | cosine           |
| Warmup Steps          | 25               |
| Max Training Steps    | 1500             |
| Micro Batch Size      | 2                |
| Gradient Accumulation | 128              |
| Effective Batch Size  | 256              |
| Sequence Length       | 8192             |
| Gradient Clipping     | 0.3              |
| Weight Decay          | 0.01             |
| Optimizer             | adamw_torch      |
| Precision             | bf16             |
| DeepSpeed             | ZeRO Stage 3     |
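The learning-rate schedule implied by the table (cosine decay with 25 linear warmup steps over 1500 max steps, peaking at 1e-6) can be sketched as below. This mirrors the usual shape of `transformers`' `get_cosine_schedule_with_warmup`; it is an illustration of the schedule, not the training code itself.

```python
import math

def lr_at_step(step, peak_lr=1e-6, warmup_steps=25, max_steps=1500):
    """Linear warmup to peak_lr, then cosine decay to zero at max_steps."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (max_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# LR peaks right after warmup and decays to ~0 at step 1500
```

Note also that the effective batch size of 256 follows directly from micro batch size x gradient accumulation (2 x 128).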

## Evaluation Results

Pass@k on math reasoning benchmarks (N=32 samples per problem, temperature=1.0):

| Dataset      | pass@1 | pass@2 | pass@4 | pass@8 | pass@16 | pass@32 |
|--------------|--------|--------|--------|--------|---------|---------|
| GSM8K (test) | 84.45  | 90.00  | 93.33  | 95.50  | 96.93   | 97.88   |
| MATH-500     | 64.48  | 72.51  | 78.84  | 83.85  | 87.70   | 90.40   |
| Overall      | 78.96  | 85.19  | 89.35  | 92.29  | 94.39   | 95.82   |

GSM8K test: 1319 problems · MATH-500: 500 problems · Overall: 1819 problems (overall weighted by problem count).
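Assuming pass@k here is the standard unbiased estimator (Chen et al., 2021), each number is the average over problems of the probability that at least one of k samples drawn from the N=32 is correct. A sketch, including a check that the Overall row is indeed the problem-count-weighted mean of the two benchmarks:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k: chance that k samples drawn without replacement
    from n total (c of them correct) include at least one correct one."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. a problem where 8 of 32 samples are correct: pass@1 = 8/32 = 0.25
p1 = pass_at_k(32, 8, 1)

# the Overall pass@1 is the per-problem-count weighted average
overall_pass1 = (84.45 * 1319 + 64.48 * 500) / 1819  # ~78.96
```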

## Training Curves

Training metrics plot (interactive curves are available in the W&B run below).

## W&B Run

Full training logs: https://wandb.ai/jaygala24-team/rl-post-training/runs/qwen2.5_3b_grpo_no_kl_3a1f_4xh100_197318_finetune_e7346e62

## Usage

### Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Checkpoints are stored as branches; pick a revision such as
# "step-0200", "step-0400", or "step-0600".
revision = "step-0200"
model = AutoModelForCausalLM.from_pretrained("jaygala24/Qwen2.5-3B-GRPO-math-reasoning", revision=revision)
tokenizer = AutoTokenizer.from_pretrained("jaygala24/Qwen2.5-3B-GRPO-math-reasoning", revision=revision)

prompt = "Please reason step by step, and put your final answer within \\boxed{}.\n\nWhat is the sum of 123 and 456?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=4096, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### vLLM

```python
from vllm import LLM, SamplingParams

# Checkpoints are stored as branches; pick a revision such as
# "step-0200", "step-0400", or "step-0600".
llm = LLM(model="jaygala24/Qwen2.5-3B-GRPO-math-reasoning", revision="step-0200")
sampling_params = SamplingParams(temperature=0.7, max_tokens=4096)

prompt = "Please reason step by step, and put your final answer within \\boxed{}.\n\nWhat is the sum of 123 and 456?"
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```
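Since the prompt asks the model to put its final answer inside `\boxed{...}`, a small helper is useful for scoring generations. This is a simplistic sketch (handles one level of nested braces) and is not part of the model's repository:

```python
import re

def extract_boxed(text):
    """Return the contents of the last \\boxed{...} in text, or None."""
    matches = re.findall(r"\\boxed\{((?:[^{}]|\{[^{}]*\})*)\}", text)
    return matches[-1] if matches else None

print(extract_boxed(r"123 + 456 = 579, so the answer is \boxed{579}."))  # 579
```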

## Framework