ModelHub XC 292bb97fc6 — initial project commit; model provided by the ModelHub XC community
Model: hyunseoki/verl-math-transfer-7bi-to-7bi-v2
Source: Original Platform
2026-04-10 17:49:14 +08:00

---
library_name: transformers
pipeline_tag: text-generation
tags:
- verl
- math
- grpo
- transfer
- qwen2
- 7b
- 7bi-to-7bi
---

VERL Math Transfer 7B to 7B v2

A math-transfer experiment trained with verl. This repo groups all exported Hugging Face checkpoints for the 7bi-to-7bi v2 configuration.

Layout

  • main: latest exported checkpoint, currently step-150
  • step revisions: step-010, step-020, step-030, step-040, step-050, step-060, step-070, step-080, step-090, step-100, step-110, step-120, step-130, step-140, step-150
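The step revisions above follow a fixed zero-padded `step-NNN` naming pattern at intervals of 10 training steps, so the full list can be generated programmatically rather than typed out. A minimal sketch (the constant names are illustrative, not part of the repo):

```python
# Step checkpoints are exported every 10 steps, from step-010 through step-150,
# using a zero-padded three-digit suffix.
STEP_INTERVAL = 10
LAST_STEP = 150

step_revisions = [
    f"step-{step:03d}"
    for step in range(STEP_INTERVAL, LAST_STEP + 1, STEP_INTERVAL)
]
print(step_revisions)  # ['step-010', 'step-020', ..., 'step-150']
```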

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "hyunseoki/verl-math-transfer-7bi-to-7bi-v2"
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

Load a specific checkpoint revision:

from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "hyunseoki/verl-math-transfer-7bi-to-7bi-v2"
revision = "step-150"
tokenizer = AutoTokenizer.from_pretrained(repo_id, revision=revision, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, revision=revision, trust_remote_code=True)
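When selecting checkpoints by training step, it can help to validate the step number before building the revision string, since only every tenth step was exported. A small hypothetical helper (`revision_for_step` is not part of this repo, just a sketch of the naming convention):

```python
# Steps exported in this repo's layout: 10, 20, ..., 150.
SAVED_STEPS = range(10, 151, 10)

def revision_for_step(step: int) -> str:
    """Map a training step to its zero-padded 'step-NNN' revision name.

    Raises ValueError for steps that were not exported as checkpoints.
    """
    if step not in SAVED_STEPS:
        raise ValueError(
            f"no exported checkpoint for step {step}; "
            f"saved steps are {list(SAVED_STEPS)}"
        )
    return f"step-{step:03d}"
```

The returned string can then be passed as the `revision` argument to `from_pretrained`, as in the example above.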

Notes

  • Architecture detected from the exported config: Qwen2ForCausalLM
  • The original base model Hub ID is not encoded in these local checkpoints, so base_model metadata is not set automatically.
  • Checkpoints were exported from verl FSDP shards into Hugging Face safetensors format.