---
base_model: meta-llama/Llama-3.1-8B-Instruct
datasets: Neelectric/OpenR1-Math-220k_all_SDFT_nr
library_name: transformers
model_name: Llama-3.1-8B-Instruct_SDFT_mathv00.06
tags:
- generated_from_trainer
- open-r1
- trl
- sdft
licence: license
---

# Model Card for Llama-3.1-8B-Instruct_SDFT_mathv00.06

This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the [Neelectric/OpenR1-Math-220k_all_SDFT_nr](https://huggingface.co/datasets/Neelectric/OpenR1-Math-220k_all_SDFT_nr) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Neelectric/Llama-3.1-8B-Instruct_SDFT_mathv00.06", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/neelectric/open-r1_math/runs/i58cg3rj)

This model was trained with SDFT, a method introduced in [Self-Training with On-Policy Self-Distillation for Language Model Alignment](https://huggingface.co/papers/2601.19897).
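The idea behind self-distillation can be illustrated with a minimal, framework-free sketch: the student model is trained on its own on-policy generations to match a teacher distribution at each token position (in self-distillation, the teacher is typically a frozen copy of the same model given privileged context). The `distill_kl` helper below is purely illustrative — the function names and the exact loss are assumptions, not the paper's implementation.

```python
import math


def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


def distill_kl(teacher_logits, student_logits):
    """KL(teacher || student) at a single token position.

    In on-policy self-distillation the student generates its own
    responses, and this divergence (summed over the generated tokens)
    pulls the student's next-token distribution toward the teacher's.
    """
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when student and teacher agree exactly and positive otherwise, so minimizing it over the student's own samples distills the teacher's behavior without any external reward signal.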

### Framework versions

- TRL: 1.1.0.dev0
- Transformers: 4.57.6
- Pytorch: 2.9.0
- Datasets: 4.8.4
- Tokenizers: 0.22.2
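To reproduce this environment, the versions above can be pinned at install time. This is a sketch, not a tested environment file; `1.1.0.dev0` is a development build of TRL and may need to be installed from the repository rather than PyPI:

```shell
pip install "transformers==4.57.6" "datasets==4.8.4" "tokenizers==0.22.2" "torch==2.9.0"
# TRL 1.1.0.dev0 is a dev build; if it is not on PyPI, install from source:
pip install "git+https://github.com/huggingface/trl.git"
```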

## Citations

Cite SDFT as:

```bibtex
@article{hubotter2026selftraining,
    title        = {{Self-Training with On-Policy Self-Distillation for Language Model Alignment}},
    author       = {Jonas H\"ubotter and Frederike L\"ubeck and Lejs Behric and Anton Baumann and Marco Bagatella and Daniel Marta and Ido Hakimi and Idan Shenfeld and Thomas Kleine Buening and Carlos Guestrin and Andreas Krause},
    year         = 2026,
    eprint       = {arXiv:2601.19897}
}
```

Cite TRL as:

```bibtex
@software{vonwerra2020trl,
    title        = {{TRL: Transformers Reinforcement Learning}},
    author       = {von Werra, Leandro and Belkada, Younes and Tunstall, Lewis and Beeching, Edward and Thrush, Tristan and Lambert, Nathan and Huang, Shengyi and Rasul, Kashif and Gallouédec, Quentin},
    license      = {Apache-2.0},
    url          = {https://github.com/huggingface/trl},
    year         = {2020}
}
```