---
library_name: transformers
pipeline_tag: text-generation
---
# VERL Math Transfer 7B to 7B v2
Math transfer experiment trained with verl. This repo groups all exported Hugging Face checkpoints for the 7B-to-7B v2 configuration.
## Layout
- `main`: latest exported checkpoint, currently `step-150`
- step revisions: `step-010`, `step-020`, `step-030`, `step-040`, `step-050`, `step-060`, `step-070`, `step-080`, `step-090`, `step-100`, `step-110`, `step-120`, `step-130`, `step-140`, `step-150`
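Because step revisions follow a fixed `step-NNN` naming scheme, the most recent checkpoint can be picked programmatically rather than hard-coded; a minimal sketch (the revision list here is reconstructed from the layout above, not fetched from the Hub):

```python
# Step revisions as listed above: step-010 through step-150 in increments of 10
revisions = [f"step-{i:03d}" for i in range(10, 151, 10)]

# Compare on the numeric step value rather than lexically
latest = max(revisions, key=lambda r: int(r.rsplit("-", 1)[1]))
print(latest)  # -> step-150
```

The resulting string can be passed as the `revision` argument in the loading examples below.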
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "hyunseoki/verl-math-transfer-7bi-to-7bi-v2"
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
```
Load a specific checkpoint revision:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "hyunseoki/verl-math-transfer-7bi-to-7bi-v2"
revision = "step-150"
tokenizer = AutoTokenizer.from_pretrained(repo_id, revision=revision, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, revision=revision, trust_remote_code=True)
```
## Notes
- Architecture detected from the exported config: `Qwen2ForCausalLM`
- The original base model Hub ID is not encoded in these local checkpoints, so `base_model` metadata is not set automatically.
- Checkpoints were exported from verl FSDP shards into Hugging Face `safetensors` format.