---
license: apache-2.0
base_model: laion/r2egym-nl2bash-stack-bugsseq-fixthink-again
tags:
- reinforcement-learning
- code
- r2egym
- rl
- rloo-n
- terminus-structured
language:
- en
pipeline_tag: text-generation
library_name: transformers
---

# rl_r2egym-full_terminus-structured

RL-trained Qwen3-8B with structured tool calls. Continued from the mixed-run checkpoint at step 37 and trained on the full r2egym dataset (1,785 tasks) for 18 additional steps.

SWEBench-100: 42% pass@3. Training pass@8 peaked at 90.6%.

## Training Details

- **Base model:** laion/r2egym-nl2bash-stack-bugsseq-fixthink-again
- **Training method:** rloo-n with the terminus-structured agent (structured tool calls: bash, view, edit, create, search)
- **Framework:** BenSkyRL + Harbor
- **Context:** 32k (24k input + 8k output)
- **Learning rate:** 1e-5
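The rloo-n method above uses a leave-one-out baseline: with n rollouts sampled per task, each rollout's advantage is its reward minus the mean reward of the other n − 1 rollouts. A minimal sketch of that advantage computation (function name and reward values are illustrative, not taken from the training code):

```python
def rloo_advantages(rewards):
    """Leave-one-out advantages for one group of n rollouts.

    advantage_i = r_i - mean(r_j for j != i)
    """
    n = len(rewards)
    total = sum(rewards)
    # Subtracting r_i from the total gives the sum of the other n-1 rewards.
    return [r - (total - r) / (n - 1) for r in rewards]

# Example: 2 of 4 rollouts pass the task's tests (reward 1), 2 fail (reward 0).
advantages = rloo_advantages([1.0, 0.0, 0.0, 1.0])
```

Passing rollouts get a positive advantage and failing ones a negative advantage, and the group's advantages sum to zero, so no learned value network is needed.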

## SWEBench-Verified Results (100 tasks, pass@3)

| Model | SWEBench pass@3 |
|---|---|
| Base SFT (terminus-2) | 37% |
| This model (terminus-structured) | 42% |

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "laion/rl_r2egym-full_terminus-structured"

# Load the RL-trained checkpoint and its tokenizer
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```