
---
license: apache-2.0
datasets:
- agentica-org/DeepScaleR-Preview-Dataset
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
tags:
- reinforcement-learning
language:
- en
- zh
pipeline_tag: text-generation
library_name: transformers
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64ed568ccf6118a9379a61b8/BHITqJU33sXqf-Jbytrxg.png" width="100"/>
<b><span style="font-size:28px">SIRI: Scaling Iterative Reinforcement Learning with Interleaved Compression</span></b>
</p>
<p align="center">
📃 <a href="https://arxiv.org" target="_blank">Paper</a> • 📝 <a href="https://arxiv.org" target="_blank">Wandb</a>
</p>
---
## 🔍 Overview
**SIRI (Scaling Iterative Reinforcement Learning with Interleaved Compression)** is a reinforcement-learning-based framework designed to improve both the efficiency and the accuracy of **Large Reasoning Models (LRMs)**.
Traditional RL training often causes **overthinking** and long, redundant reasoning traces. Prior methods that compress outputs (length penalties, pruning, or skipping thought tokens) improve efficiency but hurt accuracy.
SIRI solves this trade-off by **iteratively alternating between compression and expansion of the reasoning budget**, controlled by a cosine length scheduler. This approach dynamically balances concise reasoning with long-horizon exploration.
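
For intuition, here is a minimal sketch of what a cosine length scheduler could look like. The function name and all numeric values (cycle period, minimum and maximum budgets) are illustrative assumptions, not the schedule used in the paper:

```python
import math

def cosine_length_budget(step, period=100, min_len=2048, max_len=8192):
    """Hypothetical cosine length scheduler: the rollout budget shrinks
    (compression phase) and grows back (expansion phase) in alternation.

    step:    current training iteration
    period:  iterations per full compress-expand cycle (illustrative)
    min_len: budget at the bottom of a compression phase (illustrative)
    max_len: budget at the top of an expansion phase (illustrative)
    """
    # cos oscillates in [-1, 1]; rescale it to [min_len, max_len].
    phase = math.cos(2 * math.pi * step / period)
    return int(min_len + (max_len - min_len) * (phase + 1) / 2)
```
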
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64ed568ccf6118a9379a61b8/SXow6xntEgrwhvWtzvrkE.png" alt="pareto_front" width="500"/>
</p>
---
## 🚀 Key Features
- **Interleaved Compression-Expansion**:
  - *Compression phase*: forces concise, high-density reasoning by limiting rollout length.
  - *Expansion phase*: restores longer rollouts to encourage exploration and planning.
- **Token Efficiency without Accuracy Loss**: Unlike previous methods, SIRI improves accuracy *while reducing average token usage*.
- **Iterative RL Training**: Built on GRPO with modifications from DAPO (decoupled clip-high/clip-low ranges, KL removal); a minimal sketch follows this list.
- **Generalization Across Model Sizes**: Validated on both **1.5B** and **7B** models.
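
As a rough illustration of the objective named above, the sketch below computes a GRPO-style group-relative advantage and applies a clipped surrogate with decoupled clipping ranges and no KL penalty. The function name, tensor shapes, and epsilon values are assumptions for illustration, not the released training code:

```python
import torch

def grpo_dapo_loss(logp_new, logp_old, rewards, eps_low=0.2, eps_high=0.28):
    """Sketch of a GRPO loss with DAPO-style modifications.

    logp_new, logp_old: (G, T) per-token log-probs for a group of G rollouts
    rewards:            (G,)  scalar reward per rollout
    eps_low/eps_high:   decoupled clip ranges (values are illustrative)
    """
    # Group-relative advantage: normalize rewards within the rollout group.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-6)
    adv = adv.unsqueeze(-1)                      # broadcast over tokens

    ratio = torch.exp(logp_new - logp_old)       # per-token importance ratio
    clipped = torch.clamp(ratio, 1 - eps_low, 1 + eps_high)
    # Pessimistic (min) objective; note there is no KL(pi || pi_ref) term.
    per_token = torch.min(ratio * adv, clipped * adv)
    return -per_token.mean()
```

Raising the upper clip range above the lower one (DAPO's "Clip-Higher") gives low-probability tokens more room to grow, which helps preserve exploration during training.
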
---
## 📊 Benchmarks
![perf](https://cdn-uploads.huggingface.co/production/uploads/64ed568ccf6118a9379a61b8/0S2d9VZTiaoGI6_N9Vrh2.png)
---
## 📝 Citation
```bibtex