---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- reinforcement-learning
- grpo
- math-reasoning
- pipelinerl
datasets:
- gsm8k_train
- math_train
pipeline_tag: text-generation
---

# Qwen2.5-1.5B-GRPO-KL-math-reasoning

This model is a fine-tuned version of [Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) using **GRPO (Group Relative Policy Optimization) with a KL penalty** for mathematical reasoning.

Trained with [PipelineRL](https://github.com/ServiceNow/PipelineRL).

## Training Details

### Datasets

| Split | Datasets |
|-------|----------|
| Train | `gsm8k_train`, `math_train` |
| Test | `gsm8k_test`, `math_500` |
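
The underlying public benchmarks can be loaded directly from the Hub. A minimal sketch, assuming the commonly used Hub repos (`openai/gsm8k`, `HuggingFaceH4/MATH-500`); PipelineRL resolves names like `gsm8k_train` to its own dataset configs, which may differ:

```python
from datasets import load_dataset

# Hub repos assumed here, not necessarily what PipelineRL uses internally.
gsm8k = load_dataset("openai/gsm8k", "main")      # "train" and "test" splits
math500 = load_dataset("HuggingFaceH4/MATH-500")  # 500-problem "test" split

print(len(gsm8k["test"]))    # 1319 problems, matching the eval table below
print(len(math500["test"]))  # 500 problems
```
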
### RL Algorithm

| Parameter | Value |
|-----------|-------|
| Algorithm | GRPO (Group Relative Policy Optimization) |
| Advantage Baseline | Group mean reward |
| Extra Inference | None |
| Group Structure | Required |
| Policy Loss | `ppo` |
| KL Coefficient | `0.001` |
| Epsilon (clip) | `0.02` |
| Discount Factor (`gamma`) | `1.0` |
| Divide Advantage by Std | `False` |
| Filter Zero Advantage Groups | `False` |
| Rollouts per Problem | `16` |

GRPO uses the group mean reward as the baseline for relative advantages.
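
As an illustration, here is a minimal PyTorch sketch of this objective using the coefficients from the table above (epsilon = 0.02, KL coefficient = 0.001). This is a reconstruction under stated assumptions, not PipelineRL's actual implementation; tensor names and shapes are hypothetical:

```python
import torch

# One group: 16 rollouts for the same problem, a scalar reward per rollout,
# and per-token log-probs (toy length of 8 tokens per rollout).
rewards = torch.randn(16)
logp_new = torch.randn(16, 8, requires_grad=True)         # current policy
logp_old = logp_new.detach() + 0.01 * torch.randn(16, 8)  # behavior policy
logp_ref = logp_new.detach() + 0.02 * torch.randn(16, 8)  # frozen reference

# Group-relative advantage: reward minus the group mean, NOT divided by the
# group std ("Divide Advantage by Std = False"). With gamma = 1.0 the same
# scalar advantage is broadcast to every token of the rollout.
adv = (rewards - rewards.mean()).unsqueeze(-1)

# PPO-style clipped surrogate ("Policy Loss = ppo") with epsilon = 0.02.
ratio = (logp_new - logp_old).exp()
surrogate = torch.min(ratio * adv, torch.clamp(ratio, 0.98, 1.02) * adv)
policy_loss = -surrogate.mean()

# Simple k1 KL estimate against the frozen reference, coefficient 0.001.
kl_penalty = 0.001 * (logp_new - logp_ref).mean()

loss = policy_loss + kl_penalty
loss.backward()
```
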

### Training Hyperparameters

| Parameter | Value |
|-----------|-------|
| Base Model | `Qwen/Qwen2.5-1.5B` |
| Learning Rate | `1e-06` |
| LR Scheduler | `cosine` |
| Warmup Steps | `25` |
| Max Training Steps | `1500` |
| Micro Batch Size | `4` |
| Gradient Accumulation | `64` |
| Effective Batch Size | `256` |
| Sequence Length | `8192` |
| Gradient Clipping | `0.3` |
| Weight Decay | `0.01` |
| Optimizer | `adamw_torch` |
| Precision | `bf16` |
| DeepSpeed | ZeRO Stage 3 |
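
As a rough sketch, these settings map onto the following standard PyTorch / `transformers` construction. PipelineRL and DeepSpeed wire this up internally; the snippet only makes the table concrete:

```python
import torch
from transformers import AutoModelForCausalLM, get_cosine_schedule_with_warmup

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-1.5B", torch_dtype=torch.bfloat16  # bf16 precision
)

# adamw_torch with lr = 1e-6 and weight_decay = 0.01, as in the table
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6, weight_decay=0.01)

# cosine decay over 1500 steps after a 25-step warmup
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=25, num_training_steps=1500
)

# Per optimizer step: 64 micro-batches of 4 sequences = effective batch 256,
# with gradients clipped to a max norm of 0.3 before each update:
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.3)
```
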
## Evaluation Results

Pass@k on math reasoning benchmarks (N=32 samples per problem, temperature=1.0):

| Dataset | pass@1 | pass@2 | pass@4 | pass@8 | pass@16 | pass@32 |
| --- | ---: | ---: | ---: | ---: | ---: | ---: |
| GSM8K (test) | 75.35 | 83.40 | 88.99 | 92.55 | 94.68 | 96.13 |
| MATH-500 | 54.79 | 64.03 | 71.57 | 78.01 | 83.37 | 87.20 |
| **Overall** | **69.70** | **78.07** | **84.20** | **88.55** | **91.57** | **93.68** |

*GSM8K test: 1319 problems · MATH-500: 500 problems · Overall: 1819 problems (overall weighted by problem count).*
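
Pass@k with N samples per problem is conventionally computed with the unbiased estimator of Chen et al. (2021), 1 - C(n-c, k) / C(n, k); a minimal sketch, assuming that estimator was used here:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n - c, k) / C(n, k).

    n: samples drawn per problem (N = 32 here)
    c: correct samples among the n
    k: sample budget being estimated
    """
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Per-dataset scores average this over problems; the "Overall" row is the
# problem-count-weighted mean, e.g. for pass@1:
print((1319 * 75.35 + 500 * 54.79) / 1819)  # ≈ 69.70
```
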

## Training Curves



## W&B Run

Full training logs: [https://wandb.ai/jaygala24-team/rl-post-training/runs/qwen2.5_1.5b_grpo_with_kl_2a1p1f_4xh100_202891_finetune_d46ef3e3](https://wandb.ai/jaygala24-team/rl-post-training/runs/qwen2.5_1.5b_grpo_with_kl_2a1p1f_4xh100_202891_finetune_d46ef3e3)

## Usage

### Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("jaygala24/Qwen2.5-1.5B-GRPO-KL-math-reasoning", revision="step-0200")  # optional branch, e.g. "step-0400"
tokenizer = AutoTokenizer.from_pretrained("jaygala24/Qwen2.5-1.5B-GRPO-KL-math-reasoning", revision="step-0200")

prompt = "Please reason step by step, and put your final answer within \\boxed{}.\n\nWhat is the sum of 123 and 456?"
inputs = tokenizer(prompt, return_tensors="pt")
# do_sample=True is required for temperature to take effect
outputs = model.generate(**inputs, max_new_tokens=4096, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### vLLM

```python
from vllm import LLM, SamplingParams

llm = LLM(model="jaygala24/Qwen2.5-1.5B-GRPO-KL-math-reasoning", revision="step-0200")  # optional branch, e.g. "step-0400"
sampling_params = SamplingParams(temperature=0.7, max_tokens=4096)

prompt = "Please reason step by step, and put your final answer within \\boxed{}.\n\nWhat is the sum of 123 and 456?"
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```

## Framework

- [PipelineRL](https://github.com/ServiceNow/PipelineRL)
- [Transformers](https://github.com/huggingface/transformers)
- [DeepSpeed](https://github.com/microsoft/DeepSpeed) (ZeRO Stage 3)