Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


openai-gsm8k_meta-llama-Llama-3.2-1B - GGUF
- Model creator: https://huggingface.co/YWZBrandon/
- Original model: https://huggingface.co/YWZBrandon/openai-gsm8k_meta-llama-Llama-3.2-1B/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q2_K.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q2_K.gguf) | Q2_K | 0.54GB |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_XS.gguf) | IQ3_XS | 0.58GB |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_M.gguf) | IQ3_M | 0.61GB |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K.gguf) | Q3_K | 0.64GB |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_M.gguf) | Q3_K_M | 0.64GB |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_L.gguf) | Q3_K_L | 0.68GB |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_0.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_0.gguf) | Q4_0 | 0.72GB |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ4_NL.gguf) | IQ4_NL | 0.72GB |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K_S.gguf) | Q4_K_S | 0.72GB |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K.gguf) | Q4_K | 0.75GB |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K_M.gguf) | Q4_K_M | 0.75GB |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_1.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_1.gguf) | Q4_1 | 0.77GB |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_0.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_0.gguf) | Q5_0 | 0.83GB |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K_S.gguf) | Q5_K_S | 0.83GB |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K.gguf) | Q5_K | 0.85GB |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K_M.gguf) | Q5_K_M | 0.85GB |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_1.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_1.gguf) | Q5_1 | 0.89GB |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q6_K.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q6_K.gguf) | Q6_K | 0.95GB |
| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q8_0.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q8_0.gguf) | Q8_0 | 1.23GB |
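These files are standard GGUF quantizations, so any GGUF-capable runtime (llama.cpp, llama-cpp-python, LM Studio, Ollama, etc.) can load them. Below is a minimal sketch using llama-cpp-python; the chosen file (Q4_K_M), the prompt format, and the generation settings are illustrative assumptions, not recommendations.

```python
# Minimal sketch: download one of the GGUF files above and run it locally.
# Requires: pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is used here only as an example; any file from the table works.
model_path = hf_hub_download(
    repo_id="RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf",
    filename="openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)

# A GSM8K-style prompt (the "Question:/Answer:" format is an assumption).
prompt = (
    "Question: Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether?\n"
    "Answer:"
)
result = llm(prompt, max_tokens=256, temperature=0.0)
print(result["choices"][0]["text"])
```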
Original model description:
---
base_model: meta-llama/Llama-3.2-1B
datasets: openai/gsm8k
library_name: transformers
model_name: openai-gsm8k_meta-llama-Llama-3.2-1B
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for openai-gsm8k_meta-llama-Llama-3.2-1B

This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on the [openai/gsm8k](https://huggingface.co/datasets/openai/gsm8k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="YWZBrandon/openai-gsm8k_meta-llama-Llama-3.2-1B", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[Visualize in Weights & Biases](https://wandb.ai/yuweiz/ActionEditV1/runs/fz4agnju)

This model was trained with SFT.

### Framework versions

- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
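The exact training script and hyperparameters behind the SFT run above are not published in this card. As a rough reference only, an SFT run with TRL on GSM8K could look like the sketch below; the prompt format, output directory, and all hyperparameters are assumptions, not the original configuration.

```python
# Speculative sketch of SFT with TRL on GSM8K -- not the original training script.
# Requires: pip install trl transformers datasets
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# GSM8K provides "question" and "answer" columns; fold them into a single
# "text" column (this prompt format is an assumption).
dataset = load_dataset("openai/gsm8k", "main", split="train")
dataset = dataset.map(
    lambda ex: {"text": f"Question: {ex['question']}\nAnswer: {ex['answer']}"}
)

training_args = SFTConfig(
    output_dir="openai-gsm8k_meta-llama-Llama-3.2-1B",  # assumed output path
    dataset_text_field="text",
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-1B",  # base model named in the card
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```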