---
base_model: meta-llama/Llama-3.1-8B-Instruct
datasets: Neelectric/OpenR1-Math-220k_all_Llama3_4096toks
library_name: transformers
model_name: Llama-3.1-8B-Instruct_SFT_mathfisher_v00.05
tags:
- generated_from_trainer
- sft
- trl
- open-r1
licence: license
---

# Model Card for Llama-3.1-8B-Instruct_SFT_mathfisher_v00.05

This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the [Neelectric/OpenR1-Math-220k_all_Llama3_4096toks](https://huggingface.co/datasets/Neelectric/OpenR1-Math-220k_all_Llama3_4096toks) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Neelectric/Llama-3.1-8B-Instruct_SFT_mathfisher_v00.05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[Visualize in Weights & Biases](https://wandb.ai/neelectric/open-r1_math/runs/4krdbtj9)

This model was trained with SFT.

### Framework versions

- TRL: 1.1.0.dev0
- Transformers: 4.57.6
- Pytorch: 2.9.0
- Datasets: 4.8.5
- Tokenizers: 0.22.2

## Citations

Cite TRL as:

```bibtex
@software{vonwerra2020trl,
  title   = {{TRL: Transformers Reinforcement Learning}},
  author  = {von Werra, Leandro and Belkada, Younes and Tunstall, Lewis and Beeching, Edward and Thrush, Tristan and Lambert, Nathan and Huang, Shengyi and Rasul, Kashif and Gallouédec, Quentin},
  license = {Apache-2.0},
  url     = {https://github.com/huggingface/trl},
  year    = {2020}
}
```
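
This card does not include the training script or hyperparameters. The sketch below shows how a comparable SFT run on the same base model and dataset could be set up with TRL's `SFTTrainer`; the hyperparameters and output path are illustrative assumptions, not the values used for this model.

```python
# Minimal sketch of an SFT run with TRL on the dataset named in this card.
# All hyperparameters below are assumptions for illustration only.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Dataset referenced in this model card
dataset = load_dataset("Neelectric/OpenR1-Math-220k_all_Llama3_4096toks", split="train")

training_args = SFTConfig(
    output_dir="Llama-3.1-8B-Instruct_SFT_sketch",  # hypothetical output directory
    per_device_train_batch_size=1,                  # assumed; adjust to available memory
    gradient_accumulation_steps=8,                  # assumed
    learning_rate=2e-5,                             # assumed
    num_train_epochs=1,                             # assumed
    bf16=True,
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",  # base model named in this card
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```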