---
license: apache-2.0
datasets:
- agentica-org/DeepScaleR-Preview-Dataset
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
tags:
- reinforcement-learning
language:
- en
- zh
pipeline_tag: text-generation
library_name: transformers
---

# SIRI: Scaling Iterative Reinforcement Learning with Interleaved Compression

📃 Paper · 📝 Wandb


## 🔍 Overview

SIRI (Scaling Iterative Reinforcement Learning with Interleaved Compression) is a reinforcement-learning-based framework designed to improve the efficiency and accuracy of Large Reasoning Models (LRMs).

Traditional RL training often causes overthinking and long, redundant reasoning traces. Prior methods that compress outputs (length penalties, pruning, or skipping thought tokens) improve efficiency but hurt accuracy.

SIRI solves this trade-off by iteratively alternating between compression and expansion of the reasoning budget, controlled by a cosine length scheduler. This approach dynamically balances concise reasoning with long-horizon exploration.
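As a rough illustration of such a scheduler, the sketch below oscillates the rollout-length budget along a cosine curve, sweeping between an expansion budget and a compression budget once per period. The function name, period, and length bounds are assumptions for illustration, not the paper's actual hyperparameters.

```python
import math

def rollout_length(step: int, min_len: int = 2048, max_len: int = 8192,
                   period: int = 200) -> int:
    """One plausible cosine length scheduler: the rollout budget sweeps
    from max_len (expansion phase) down to min_len (compression phase)
    and back once per period. All names and defaults here are
    illustrative, not values from the paper."""
    phase = math.cos(2 * math.pi * (step % period) / period)  # 1 -> -1 -> 1
    return int(min_len + 0.5 * (max_len - min_len) * (1 + phase))
```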

*Figure: pareto_front*


## 🚀 Key Features

  • Interleaved Compression-Expansion:
    • Compression phase: forces concise, high-density reasoning by limiting rollout length.
    • Expansion phase: restores longer rollouts to encourage exploration and planning.
  • Token Efficiency without Accuracy Loss: Unlike previous methods, SIRI improves accuracy while reducing average token usage.
  • Iterative RL Training: Built on GRPO with modifications from DAPO (clip-high/low decoupling, KL removal); see the sketch after this list.
  • Generalization Across Model Sizes: Validated on both 1.5B and 7B models.
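A minimal sketch of what the decoupled-clipping objective mentioned above could look like, assuming per-token log-probabilities and group-normalized advantages have already been computed; the epsilon values and function name are illustrative, not taken from this repository.

```python
import torch

def grpo_dapo_loss(logp_new: torch.Tensor, logp_old: torch.Tensor,
                   advantages: torch.Tensor,
                   eps_low: float = 0.2, eps_high: float = 0.28) -> torch.Tensor:
    """PPO-style clipped surrogate with decoupled lower/upper clip bounds
    (as in DAPO) and no KL penalty. Expects per-token log-probs and
    group-normalized advantages; the eps values are illustrative."""
    ratio = torch.exp(logp_new - logp_old)
    clipped = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high)
    # Pessimistic (min) surrogate, negated so that minimizing the loss
    # maximizes the clipped objective.
    return -torch.min(ratio * advantages, clipped * advantages).mean()
```

Raising the upper clip bound above the lower one lets low-probability tokens grow faster during updates, which is the exploration-friendly behavior DAPO's clip-high/low decoupling is meant to provide.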

## 📊 Benchmarks

*Figure: perf*
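Since the card metadata declares `library_name: transformers` and `pipeline_tag: text-generation`, the model should load with the standard pipeline API; the prompt and generation settings below are only illustrative.

```python
from transformers import pipeline

# Model id taken from this card; prompt and max_new_tokens are illustrative.
generator = pipeline("text-generation", model="THU-KEG/SIRI-1.5B-low")
out = generator("What is the sum of the first 100 positive integers?",
                max_new_tokens=512)
print(out[0]["generated_text"])
```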


## 📝 Citation