--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen3-1.7B tags: - reinforcement-learning - dapo - math-reasoning - pipelinerl datasets: - gsm8k_train - math_train pipeline_tag: text-generation --- # Qwen3-1.7B-DAPO-math-reasoning This model is a fine-tuned version of [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) using **DAPO (Decoupled Clip and Dynamic Sampling Policy Optimization) without KL penalty** for mathematical reasoning. Trained with [PipelineRL](https://github.com/ServiceNow/PipelineRL). ## Training Details ### Datasets | Split | Datasets | |-------|----------| | Train | `gsm8k_train`, `math_train` | | Test | `gsm8k_test`, `math_500` | ### RL Algorithm | Parameter | Value | |-----------|-------| | Algorithm | DAPO (Decoupled Clip and Dynamic Sampling Policy Optimization) | | Advantage Baseline | Group mean reward | | Extra Inference | None | | Group Structure | Required | | Policy Loss | `ppo` | | KL Coefficient | `0.0` | | Epsilon (clip) | `0.2` | | Discount Factor (`gamma`) | `1.0` | | Divide Advantage by Std | `False` | | Filter Zero Advantage Groups | `True` | | Rollouts per Problem | `16` | DAPO extends GRPO with clip-higher (asymmetric PPO clipping), dynamic sampling (filtering zero-variance groups), token-level loss aggregation, and overlong reward shaping. ### Training Hyperparameters | Parameter | Value | |-----------|-------| | Base Model | `Qwen/Qwen3-1.7B` | | Learning Rate | `1e-06` | | LR Scheduler | `cosine` | | Warmup Steps | `25` | | Max Training Steps | `1500` | | Micro Batch Size | `4` | | Gradient Accumulation | `64` | | Effective Batch Size | `256` | | Sequence Length | `8192` | | Gradient Clipping | `0.3` | | Weight Decay | `0.01` | | Optimizer | `adamw_torch` | | Precision | `bf16` | | DeepSpeed | ZeRO Stage 3 | ## Evaluation Results Pass@k on math reasoning benchmarks (N=32 samples per problem, temperature=1.0): | Dataset | pass@1 | pass@2 | pass@4 | pass@8 | pass@16 | pass@32 | | --- | ---: | ---: | ---: | ---: | ---: | ---: | | GSM8K (test) | 80.97 | 88.04 | 92.43 | 95.03 | 96.54 | 97.57 | | MATH-500 | 65.77 | 74.57 | 81.06 | 85.79 | 89.17 | 91.60 | | **Overall** | **76.79** | **84.34** | **89.30** | **92.49** | **94.52** | **95.93** | *GSM8K test: 1319 problems · MATH-500: 500 problems · Overall: 1819 problems (overall weighted by problem count).* ## Training Curves ![Training Metrics](training_metrics.png) ## W&B Run Full training logs: [https://wandb.ai/jaygala24-team/rl-post-training/runs/qwen3_1.7b_dapo_no_kl_3a1f_4xh100_236316_finetune_11e44a44](https://wandb.ai/jaygala24-team/rl-post-training/runs/qwen3_1.7b_dapo_no_kl_3a1f_4xh100_236316_finetune_11e44a44) ## Usage ### Transformers ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("jaygala24/Qwen3-1.7B-DAPO-math-reasoning", revision="step-0200") # optional branch, e.g. "step-0400" tokenizer = AutoTokenizer.from_pretrained("jaygala24/Qwen3-1.7B-DAPO-math-reasoning", revision="step-0200") prompt = "Please reason step by step, and put your final answer within \\boxed{}.\n\nWhat is the sum of 123 and 456?" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=4096, temperature=0.7) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ### vLLM ```python from vllm import LLM, SamplingParams llm = LLM(model="jaygala24/Qwen3-1.7B-DAPO-math-reasoning", revision="step-0200") # optional branch, e.g. "step-0400" sampling_params = SamplingParams(temperature=0.7, max_tokens=4096) prompt = "Please reason step by step, and put your final answer within \boxed{}. What is the sum of 123 and 456?" outputs = llm.generate([prompt], sampling_params) print(outputs[0].outputs[0].text) ``` ## Framework - [PipelineRL](https://github.com/ServiceNow/PipelineRL) - [Transformers](https://github.com/huggingface/transformers) - [DeepSpeed](https://github.com/microsoft/DeepSpeed) (ZeRO Stage 3)