Initialize project; model provided by the ModelHub XC community
Model: jaygala24/Qwen2.5-3B-ReMax-math-reasoning Source: Original Platform
This commit adds README.md (new file, +123 lines).
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-3B
tags:
- reinforcement-learning
- remax
- math-reasoning
- pipelinerl
datasets:
- gsm8k_train
- math_train
pipeline_tag: text-generation
---

# Qwen2.5-3B-ReMax-math-reasoning

This model is a fine-tuned version of [Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B) using **ReMax without KL penalty** for mathematical reasoning.

Trained with [PipelineRL](https://github.com/ServiceNow/PipelineRL).
## Training Details

### Datasets

| Split | Datasets |
|-------|----------|
| Train | `gsm8k_train`, `math_train` |
| Test | `gsm8k_test`, `math_500` |

### RL Algorithm

| Parameter | Value |
|-----------|-------|
| Algorithm | ReMax |
| Advantage Baseline | Greedy-decoded response reward |
| Extra Inference | 1 deterministic rollout per prompt |
| Group Structure | Not required |
| Policy Loss | `ppo` |
| KL Coefficient | `0.0` |
| Epsilon (clip) | `0.2` |
| Discount Factor (`gamma`) | `1.0` |
| Divide Advantage by Std | `False` |
| Filter Zero Advantage Groups | `False` |
| Rollouts per Problem | `16` |

ReMax uses a greedy-decoded response's reward as the baseline for advantages.
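In code, the idea looks roughly like the sketch below. The names `generate` and `reward_fn` are hypothetical stand-ins for the policy's sampler and the math correctness reward; this illustrates the ReMax baseline, not the PipelineRL implementation.

```python
# Illustrative sketch of ReMax advantages; `generate` and `reward_fn`
# are hypothetical stand-ins, not PipelineRL APIs.
def remax_advantages(prompt, generate, reward_fn, num_rollouts=16):
    # One extra deterministic (greedy) rollout per prompt gives the baseline reward.
    greedy_response = generate(prompt, do_sample=False)
    baseline = reward_fn(prompt, greedy_response)

    # Sampled rollouts used for the policy-gradient update.
    responses = [generate(prompt, do_sample=True) for _ in range(num_rollouts)]
    rewards = [reward_fn(prompt, r) for r in responses]

    # Advantage = sampled reward - greedy baseline reward; no group statistics
    # and no division by a standard deviation (matching the table above).
    advantages = [r - baseline for r in rewards]
    return responses, advantages
```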
### Training Hyperparameters

| Parameter | Value |
|-----------|-------|
| Base Model | `Qwen/Qwen2.5-3B` |
| Learning Rate | `1e-06` |
| LR Scheduler | `cosine` |
| Warmup Steps | `25` |
| Max Training Steps | `1500` |
| Micro Batch Size | `2` |
| Gradient Accumulation | `128` |
| Effective Batch Size | `256` |
| Sequence Length | `8192` |
| Gradient Clipping | `0.3` |
| Weight Decay | `0.01` |
| Optimizer | `adamw_torch` |
| Precision | `bf16` |
| DeepSpeed | ZeRO Stage 3 |

The effective batch size of 256 corresponds to the micro batch size (2) times the number of gradient-accumulation steps (128).
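As a rough illustration of how these values fit together, here is a plain PyTorch/Transformers sketch of the optimizer and schedule. It is an assumed reconstruction from the table, not the actual PipelineRL + DeepSpeed ZeRO-3 training loop.

```python
import torch
from transformers import AutoModelForCausalLM, get_cosine_schedule_with_warmup

# Assumed reconstruction of the optimizer/schedule settings from the table above.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B", torch_dtype=torch.bfloat16)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6, weight_decay=0.01)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=25, num_training_steps=1500
)

micro_batch_size = 2    # sequences per forward/backward pass
grad_accum_steps = 128  # optimizer step every 128 micro-batches
max_grad_norm = 0.3     # applied via torch.nn.utils.clip_grad_norm_ before each step
```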
## Evaluation Results

Pass@k on math reasoning benchmarks (N=32 samples per problem, temperature=1.0):

| Dataset | pass@1 | pass@2 | pass@4 | pass@8 | pass@16 | pass@32 |
| --- | ---: | ---: | ---: | ---: | ---: | ---: |
| GSM8K (test) | 85.99 | 90.50 | 93.34 | 95.29 | 96.64 | 97.50 |
| MATH-500 | 67.36 | 74.99 | 81.23 | 85.92 | 89.09 | 91.20 |
| **Overall** | **80.87** | **86.24** | **90.01** | **92.71** | **94.56** | **95.77** |

*GSM8K test: 1319 problems · MATH-500: 500 problems · Overall: 1819 problems, weighted by problem count.*
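pass@k is presumably the standard unbiased estimator computed from the N=32 samples: for a problem with c correct completions out of n samples, pass@k = 1 - C(n-c, k)/C(n, k), averaged over problems. A minimal sketch, assuming that estimator:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: chance that at least one of k samples drawn from
    n total (c of them correct) is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: a problem with 20 correct completions out of 32 samples.
print(pass_at_k(n=32, c=20, k=1))  # 0.625
print(pass_at_k(n=32, c=20, k=8))  # higher: more draws, more chances to hit a correct one
```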
## Training Curves

![Training curves](training_curves.png)

## W&B Run

Full training logs: [https://wandb.ai/jaygala24-team/rl-post-training/runs/qwen2.5_3b_remax_3a1f_4xh100_214753_finetune_1c7d72aa](https://wandb.ai/jaygala24-team/rl-post-training/runs/qwen2.5_3b_remax_3a1f_4xh100_214753_finetune_1c7d72aa)
## Usage

### Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("jaygala24/Qwen2.5-3B-ReMax-math-reasoning", revision="step-0200")  # optional branch, e.g. "step-0400"
tokenizer = AutoTokenizer.from_pretrained("jaygala24/Qwen2.5-3B-ReMax-math-reasoning", revision="step-0200")

prompt = "Please reason step by step, and put your final answer within \\boxed{}.\n\nWhat is the sum of 123 and 456?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=4096, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### vLLM

```python
from vllm import LLM, SamplingParams

llm = LLM(model="jaygala24/Qwen2.5-3B-ReMax-math-reasoning", revision="step-0200")  # optional branch, e.g. "step-0400"
sampling_params = SamplingParams(temperature=0.7, max_tokens=4096)

prompt = "Please reason step by step, and put your final answer within \\boxed{}.\n\nWhat is the sum of 123 and 456?"
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```
## Framework

- [PipelineRL](https://github.com/ServiceNow/PipelineRL)
- [Transformers](https://github.com/huggingface/transformers)
- [DeepSpeed](https://github.com/microsoft/DeepSpeed) (ZeRO Stage 3)