| library_name | license | base_model | tags | datasets | pipeline_tag |
|---|---|---|---|---|---|
| transformers | apache-2.0 | Qwen/Qwen2.5-1.5B | | | text-generation |
# Qwen2.5-1.5B-DAPO-math-reasoning

This model is a fine-tuned version of Qwen2.5-1.5B, trained for mathematical reasoning with DAPO (Decoupled Clip and Dynamic Sampling Policy Optimization) and no KL penalty.
Trained with PipelineRL.
## Training Details

### Datasets
| Split | Datasets |
|---|---|
| Train | gsm8k_train, math_train |
| Test | gsm8k_test, math_500 |
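For reference, the public Hub versions of these splits can be loaded with the `datasets` library; the dataset IDs below are assumptions and may differ from the exact sources used in training:

```python
from datasets import load_dataset

gsm8k_train = load_dataset("openai/gsm8k", "main", split="train")   # ~7.5k problems
gsm8k_test = load_dataset("openai/gsm8k", "main", split="test")     # 1319 problems
math_500 = load_dataset("HuggingFaceH4/MATH-500", split="test")     # 500 problems
```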
### RL Algorithm
| Parameter | Value |
|---|---|
| Algorithm | DAPO (Decoupled Clip and Dynamic Sampling Policy Optimization) |
| Advantage Baseline | Group mean reward |
| Extra Inference | None |
| Group Structure | Required |
| Policy Loss | ppo |
| KL Coefficient | 0.0 |
| Epsilon (clip) | 0.2 |
| Discount Factor (gamma) | 1.0 |
| Divide Advantage by Std | False |
| Filter Zero Advantage Groups | True |
| Rollouts per Problem | 16 |
DAPO extends GRPO with clip-higher (asymmetric PPO clipping), dynamic sampling (filtering zero-variance groups), token-level loss aggregation, and overlong reward shaping.
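A minimal PyTorch sketch of those pieces, under assumptions noted in the comments (the tensor shapes, function names, and the asymmetric `eps_high` default are illustrative, not taken from the training code):

```python
import torch

def group_advantages(rewards: torch.Tensor):
    """Group-mean baseline without std normalization ("Divide Advantage
    by Std" = False above). rewards: (num_groups, rollouts_per_problem)."""
    adv = rewards - rewards.mean(dim=1, keepdim=True)
    # Dynamic sampling: drop groups where every rollout earned the same
    # reward, since their advantages (and gradients) are all zero.
    keep = rewards.std(dim=1) > 0
    return adv, keep

def dapo_loss(logp_new, logp_old, advantages, mask,
              eps_low=0.2, eps_high=0.28):
    """PPO-style clipped objective with a decoupled ("clip-higher") upper
    bound and token-level aggregation. The eps_high=0.28 default follows
    the DAPO paper; the table above reports a single epsilon of 0.2.
    logp_*: (batch, seq_len); advantages: (batch,); mask: (batch, seq_len).
    """
    ratio = torch.exp(logp_new - logp_old)          # per-token importance ratio
    adv = advantages.unsqueeze(-1)                  # one scalar advantage per rollout
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high) * adv
    per_token = -torch.minimum(unclipped, clipped)
    # Token-level aggregation: average over all valid tokens in the batch,
    # so long responses are not down-weighted relative to short ones.
    return (per_token * mask).sum() / mask.sum()
```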
### Training Hyperparameters
| Parameter | Value |
|---|---|
| Base Model | Qwen/Qwen2.5-1.5B |
| Learning Rate | 1e-06 |
| LR Scheduler | cosine |
| Warmup Steps | 25 |
| Max Training Steps | 1500 |
| Micro Batch Size | 4 |
| Gradient Accumulation | 64 |
| Effective Batch Size | 256 |
| Sequence Length | 8192 |
| Gradient Clipping | 0.3 |
| Weight Decay | 0.01 |
| Optimizer | adamw_torch |
| Precision | bf16 |
| DeepSpeed | ZeRO Stage 3 |
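For orientation, here is how those settings would map onto Hugging Face `TrainingArguments`. This is an illustrative sketch only, since the actual run was driven by PipelineRL; the output and DeepSpeed config paths are hypothetical:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="qwen2.5-1.5b-dapo",        # hypothetical output path
    learning_rate=1e-6,
    lr_scheduler_type="cosine",
    warmup_steps=25,
    max_steps=1500,
    per_device_train_batch_size=4,         # micro batch size
    gradient_accumulation_steps=64,        # 4 x 64 = 256 effective batch
    max_grad_norm=0.3,
    weight_decay=0.01,
    optim="adamw_torch",
    bf16=True,
    deepspeed="ds_zero3_config.json",      # hypothetical ZeRO Stage 3 config
)
```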
## Evaluation Results
Pass@k on math reasoning benchmarks (N=32 samples per problem, temperature=1.0):
| Dataset | pass@1 | pass@2 | pass@4 | pass@8 | pass@16 | pass@32 |
|---|---|---|---|---|---|---|
| GSM8K (test) | 78.78 | 85.63 | 89.97 | 92.74 | 94.63 | 95.98 |
| MATH-500 | 60.22 | 68.87 | 75.81 | 81.50 | 85.69 | 88.40 |
| Overall | 73.68 | 81.02 | 86.07 | 89.65 | 92.17 | 93.90 |
GSM8K test: 1319 problems · MATH-500: 500 problems · Overall: 1819 problems (overall weighted by problem count).
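These numbers are consistent with the standard unbiased pass@k estimator of Chen et al. (2021) applied to the N=32 samples per problem (an assumption about the evaluation code, which is not stated above):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: 1 - C(n-c, k) / C(n, k), computed stably.
    n = samples per problem, c = correct samples, k = budget."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# e.g. a problem with 25/32 correct samples contributes 25/32 to pass@1
print(pass_at_k(n=32, c=25, k=1))  # 0.78125
```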
## Training Curves

Training curves for this run are available in the W&B run linked below.

### W&B Run
Full training logs: https://wandb.ai/jaygala24-team/rl-post-training/runs/qwen2.5_1.5b_dapo_no_kl_3a1f_4xh100_236315_finetune_2c0241db
## Usage

### Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "jaygala24/Qwen2.5-1.5B-DAPO-math-reasoning",
    revision="step-0200",  # optional intermediate checkpoint branch, e.g. "step-0400"
)
tokenizer = AutoTokenizer.from_pretrained(
    "jaygala24/Qwen2.5-1.5B-DAPO-math-reasoning", revision="step-0200"
)

prompt = "Please reason step by step, and put your final answer within \\boxed{}.\n\nWhat is the sum of 123 and 456?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=4096, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### vLLM
```python
from vllm import LLM, SamplingParams

llm = LLM(model="jaygala24/Qwen2.5-1.5B-DAPO-math-reasoning", revision="step-0200")  # optional branch, e.g. "step-0400"
sampling_params = SamplingParams(temperature=0.7, max_tokens=4096)

prompt = "Please reason step by step, and put your final answer within \\boxed{}.\n\nWhat is the sum of 123 and 456?"
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```
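Since the prompt asks for the final answer inside `\boxed{}`, a simple sketch for pulling it out of a completion (an illustrative helper, handling only non-nested braces):

```python
import re

def extract_boxed(text: str) -> str | None:
    """Return the contents of the last \\boxed{...} in a completion."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1] if matches else None

print(extract_boxed(r"The sum is 123 + 456 = 579, so \boxed{579}."))  # 579
```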
## Framework
- PipelineRL
- Transformers
- DeepSpeed (ZeRO Stage 3)