
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags: reinforcement-learning, rloo, math-reasoning, pipelinerl
datasets: gsm8k_train, math_train
pipeline_tag: text-generation

Qwen2.5-1.5B-RLOO-math-reasoning

This model is a fine-tuned version of Qwen2.5-1.5B using RLOO (REINFORCE Leave-One-Out) without KL penalty for mathematical reasoning.

Trained with PipelineRL.

Training Details

Datasets

Split Datasets
Train gsm8k_train, math_train
Test gsm8k_test, math_500

RL Algorithm

Parameter Value
Algorithm RLOO (REINFORCE Leave-One-Out)
Advantage Baseline Leave-one-out mean reward over the group
Extra Inference None
Group Structure Required
Policy Loss reinforce
KL Coefficient 0.0
Epsilon (clip) 0.02
Discount Factor (gamma) 1.0
Divide Advantage by Std False
Filter Zero Advantage Groups False
Rollouts per Problem 16

RLOO uses the leave-one-out mean of the other responses in the group as the baseline, trained with a REINFORCE-style policy loss.
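
A minimal sketch of this advantage computation (the function name and shapes are illustrative, not PipelineRL's actual implementation):

import torch

def rloo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    # rewards: shape (k,), one scalar reward per rollout of the same problem (k = 16 above)
    k = rewards.numel()
    # Leave-one-out baseline: mean reward of the other k - 1 rollouts
    baseline = (rewards.sum() - rewards) / (k - 1)
    # No KL penalty and no division by the group std, matching the configuration above
    return rewards - baseline

# Example: binary correctness rewards for a group of 4 rollouts
print(rloo_advantages(torch.tensor([1.0, 0.0, 1.0, 0.0])))
# tensor([ 0.6667, -0.6667,  0.6667, -0.6667])

These advantages then weight the REINFORCE-style policy loss on the sampled tokens.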

Training Hyperparameters

Parameter Value
Base Model Qwen/Qwen2.5-1.5B
Learning Rate 1e-06
LR Scheduler cosine
Warmup Steps 25
Max Training Steps 1500
Micro Batch Size 4
Gradient Accumulation 64
Effective Batch Size 256
Sequence Length 8192
Gradient Clipping 0.3
Weight Decay 0.01
Optimizer adamw_torch
Precision bf16
DeepSpeed ZeRO Stage 3
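
For reference, a minimal sketch of the optimizer and schedule implied by this table, using the transformers scheduler utility (the actual PipelineRL training loop, DeepSpeed ZeRO-3 setup, and bf16 autocast are not reproduced here):

import torch
from transformers import AutoModelForCausalLM, get_cosine_schedule_with_warmup

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B", torch_dtype=torch.bfloat16)

# AdamW with the learning rate and weight decay from the table
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6, weight_decay=0.01)
# Cosine decay over 1500 steps with 25 warmup steps
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=25, num_training_steps=1500)

# Effective batch size: 4 (micro batch) x 64 (gradient accumulation) = 256.
# Before each optimizer step, gradients are clipped to max norm 0.3:
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.3)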

Evaluation Results

Pass@k (%) on math reasoning benchmarks (N=32 samples per problem, temperature=1.0):

Dataset pass@1 pass@2 pass@4 pass@8 pass@16 pass@32
GSM8K (test) 78.44 85.37 89.97 92.93 94.80 96.06
MATH-500 60.14 68.63 75.63 81.47 86.24 89.80
Overall 73.41 80.77 86.03 89.78 92.45 94.34

GSM8K test: 1319 problems · MATH-500: 500 problems · Overall: 1819 problems (overall weighted by problem count).
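
For reference, pass@k tables like this are typically computed with the unbiased estimator of Chen et al. (2021); whether this card used exactly that estimator is an assumption:

from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # n = samples drawn per problem (32 here), c = correct samples, k = attempt budget
    # pass@k = 1 - C(n - c, k) / C(n, k), the probability that at least one of
    # k randomly chosen samples is correct
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(32, 20, 1))  # 0.625
print(pass_at_k(32, 20, 8))  # ~0.99995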

Training Curves

Training metrics plots are available in the W&B run linked below.

W&B Run

Full training logs: https://wandb.ai/jaygala24-team/rl-post-training/runs/qwen2.5_1.5b_rloo_no_kl_3a1f_4xh100_236657_finetune_27b80841

Usage

Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("jaygala24/Qwen2.5-1.5B-RLOO-math-reasoning", revision="step-0200")  # optional branch, e.g. "step-0400"
tokenizer = AutoTokenizer.from_pretrained("jaygala24/Qwen2.5-1.5B-RLOO-math-reasoning", revision="step-0200")

prompt = "Please reason step by step, and put your final answer within \\boxed{}.\n\nWhat is the sum of 123 and 456?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=4096, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
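
Since the prompt asks for the final answer inside \boxed{}, a small heuristic helper (not part of the model card) can extract it from the generation:

import re

def extract_boxed(text: str) -> str | None:
    # Return the contents of the last \boxed{...}, allowing one level of nested braces
    matches = re.findall(r"\\boxed\{((?:[^{}]|\{[^{}]*\})*)\}", text)
    return matches[-1] if matches else None

print(extract_boxed(r"123 + 456 = 579, so the answer is \boxed{579}."))  # 579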

vLLM

from vllm import LLM, SamplingParams

llm = LLM(model="jaygala24/Qwen2.5-1.5B-RLOO-math-reasoning", revision="step-0200")  # optional branch, e.g. "step-0400"
sampling_params = SamplingParams(temperature=0.7, max_tokens=4096)

prompt = "Please reason step by step, and put your final answer within \boxed{}.

What is the sum of 123 and 456?"
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
