commit fa85caba2909b71ded39aaaca50ac74fd2154d82
Author: ModelHub XC
Date: Fri Apr 10 11:15:00 2026 +0800

    Initialize project; model provided by the ModelHub XC community
    Model: RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf
    Source: Original Platform

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..db59e0e
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,57 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K.gguf filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K.gguf filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K.gguf filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_1.gguf filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+openai-gsm8k_meta-llama-Llama-3.2-1B.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..82878eb
--- /dev/null
+++ b/README.md
@@ -0,0 +1,102 @@
+Quantization made by Richard Erkhov.
+
+[Github](https://github.com/RichardErkhov)
+
+[Discord](https://discord.gg/pvy7H8DZMG)
+
+[Request more models](https://github.com/RichardErkhov/quant_request)
+
+
+openai-gsm8k_meta-llama-Llama-3.2-1B - GGUF
+- Model creator: https://huggingface.co/YWZBrandon/
+- Original model: https://huggingface.co/YWZBrandon/openai-gsm8k_meta-llama-Llama-3.2-1B/
+
+
+| Name | Quant method | Size |
+| ---- | ---- | ---- |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q2_K.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q2_K.gguf) | Q2_K | 0.54GB |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_XS.gguf) | IQ3_XS | 0.58GB |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_S.gguf) | IQ3_S | 0.6GB |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_M.gguf) | IQ3_M | 0.61GB |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K.gguf) | Q3_K | 0.64GB |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_M.gguf) | Q3_K_M | 0.64GB |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_L.gguf) | Q3_K_L | 0.68GB |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_0.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_0.gguf) | Q4_0 | 0.72GB |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ4_NL.gguf) | IQ4_NL | 0.72GB |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K_S.gguf) | Q4_K_S | 0.72GB |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K.gguf) | Q4_K | 0.75GB |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K_M.gguf) | Q4_K_M | 0.75GB |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_1.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_1.gguf) | Q4_1 | 0.77GB |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_0.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_0.gguf) | Q5_0 | 0.83GB |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K_S.gguf) | Q5_K_S | 0.83GB |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K.gguf) | Q5_K | 0.85GB |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K_M.gguf) | Q5_K_M | 0.85GB |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_1.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_1.gguf) | Q5_1 | 0.89GB |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q6_K.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q6_K.gguf) | Q6_K | 0.95GB |
+| [openai-gsm8k_meta-llama-Llama-3.2-1B.Q8_0.gguf](https://huggingface.co/RichardErkhov/YWZBrandon_-_openai-gsm8k_meta-llama-Llama-3.2-1B-gguf/blob/main/openai-gsm8k_meta-llama-Llama-3.2-1B.Q8_0.gguf) | Q8_0 | 1.23GB |
+
+
+
+
+Original model description:
+---
+base_model: meta-llama/Llama-3.2-1B
+datasets: openai/gsm8k
+library_name: transformers
+model_name: openai-gsm8k_meta-llama-Llama-3.2-1B
+tags:
+- generated_from_trainer
+- trl
+- sft
+licence: license
+---
+
+# Model Card for openai-gsm8k_meta-llama-Llama-3.2-1B
+
+This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on the [openai/gsm8k](https://huggingface.co/datasets/openai/gsm8k) dataset.
+It has been trained using [TRL](https://github.com/huggingface/trl).
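The quantization table above spans 0.54GB (Q2_K) to 1.23GB (Q8_0); within one model, more bits generally means larger files and less quality loss. As a rough way to choose among the files, a minimal sketch in Python (sizes copied from the table; `pick_quant` is an illustrative helper, not part of this repository):

```python
# Approximate file sizes in GB, copied from the quantization table above.
QUANT_SIZES_GB = {
    "Q2_K": 0.54, "IQ3_XS": 0.58, "IQ3_S": 0.6, "Q3_K_S": 0.6, "IQ3_M": 0.61,
    "Q3_K_M": 0.64, "Q3_K_L": 0.68, "IQ4_XS": 0.7, "Q4_0": 0.72,
    "Q4_K_M": 0.75, "Q5_K_M": 0.85, "Q6_K": 0.95, "Q8_0": 1.23,
}

def pick_quant(budget_gb: float) -> str:
    """Return the largest quant that fits the budget (more bits, better quality)."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget_gb}
    if not fitting:
        raise ValueError("no quantization fits the given budget")
    return max(fitting, key=fitting.get)

print(pick_quant(1.0))  # Q6_K
print(pick_quant(0.7))  # IQ4_XS
```

Actual memory use at inference time is somewhat higher than the file size (KV cache and activations), so treat the budget as approximate.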
+
+## Quick start
+
+```python
+from transformers import pipeline
+
+question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+generator = pipeline("text-generation", model="YWZBrandon/openai-gsm8k_meta-llama-Llama-3.2-1B", device="cuda")
+output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+print(output["generated_text"])
+```
+
+## Training procedure
+
+[Visualize in Weights & Biases](https://wandb.ai/yuweiz/ActionEditV1/runs/fz4agnju)
+
+This model was trained with SFT.
+
+### Framework versions
+
+- TRL: 0.12.2
+- Transformers: 4.46.3
+- Pytorch: 2.5.1
+- Datasets: 3.1.0
+- Tokenizers: 0.20.3
+
+## Citations
+
+
+
+Cite TRL as:
+
+```bibtex
+@misc{vonwerra2022trl,
+    title = {{TRL: Transformer Reinforcement Learning}},
+    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+    year = 2020,
+    journal = {GitHub repository},
+    publisher = {GitHub},
+    howpublished = {\url{https://github.com/huggingface/trl}}
+}
+```
+
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_M.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_M.gguf
new file mode 100644
index 0000000..770b9f8
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d730c1abc64900a67984062b4249d02900cba02dbf2de0ef236ec7dfe6d60697
+size 657285536
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_S.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_S.gguf
new file mode 100644
index 0000000..cde2570
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ebd4c5c1e797e2cabf924569e871155e797e57532100fa6c7bc187cf8542afe7
+size 643916192
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_XS.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_XS.gguf
new file mode 100644
index 0000000..65b399e
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ3_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:646a7b1877c90fda02676cd68c7c7eddc58da660b2b883cc0d9d01f4828175b0
+size 621109664
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ4_NL.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ4_NL.gguf
new file mode 100644
index 0000000..ed60b5b
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ4_NL.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2486d86a54fde1421129d74857fa5207883517e1c7ae878a399c97a8151a6d1c
+size 777216416
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ4_XS.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ4_XS.gguf
new file mode 100644
index 0000000..7f0e872
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.IQ4_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:293c196d8337b29c33649018a48c0fdba6cb3d7bc8b902035355452a16cc63ec
+size 748380576
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.Q2_K.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q2_K.gguf
new file mode 100644
index 0000000..96e8ebe
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q2_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:60087f671ca8e2fb86bc9df7eabeafc0221cf4f4e81b713c8880f5906603d7a9
+size 580870560
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K.gguf
new file mode 100644
index 0000000..43c52e4
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c8c10915ad0af9039422e19289c82d86ed2de1c0371933ef624fbe3ee971e175
+size 690839968
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_L.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_L.gguf
new file mode 100644
index 0000000..6ce66ec
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_L.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:479b066c4134d41621dbfba7fb18b0f68036da1950d8507ba24416891c1a6c35
+size 732520864
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_M.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_M.gguf
new file mode 100644
index 0000000..43c52e4
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c8c10915ad0af9039422e19289c82d86ed2de1c0371933ef624fbe3ee971e175
+size 690839968
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_S.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_S.gguf
new file mode 100644
index 0000000..f785904
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q3_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:013202b2a5da3234ce4e652b72a67c63d6df55135592ee21755f94b78835409d
+size 641687968
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_0.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_0.gguf
new file mode 100644
index 0000000..6680993
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bcef12c952e9ad53b107f5b9dbe6ddd8f6b1627ceec5ee46a9ade103e2d3bd21
+size 770924960
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_1.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_1.gguf
new file mode 100644
index 0000000..7eb8a26
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_1.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:99c4c8ba0d76d86c6e982711710070a00cff2caf15860e1245debeb682c2e2e9
+size 831742368
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K.gguf
new file mode 100644
index 0000000..9fc6644
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bf43ca106dfcbb38d87fbe490d4af571e26aab800dc54ab2513d08eb2b527f5e
+size 807690656
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K_M.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K_M.gguf
new file mode 100644
index 0000000..9fc6644
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bf43ca106dfcbb38d87fbe490d4af571e26aab800dc54ab2513d08eb2b527f5e
+size 807690656
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K_S.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K_S.gguf
new file mode 100644
index 0000000..7e7549b
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q4_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:afda7ca364597e3612602c823df8a1986af13ec01c79340846da617990acf20e
+size 775643552
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_0.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_0.gguf
new file mode 100644
index 0000000..68b39b2
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0002b7171bee426c629245fffea6a3e71426df29ed832fb5387be7019d6580dc
+size 892559776
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_1.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_1.gguf
new file mode 100644
index 0000000..6d80986
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_1.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:379e1b9d46ccdcfb057141cb1b694ad31b96470398e3125667f70689caec097e
+size 953377184
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K.gguf
new file mode 100644
index 0000000..931a945
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9e5b374e111cf75f6ee1ee07f6c9abdc3aa3cee01f1c576495b94d981d409176
+size 911499680
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K_M.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K_M.gguf
new file mode 100644
index 0000000..931a945
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9e5b374e111cf75f6ee1ee07f6c9abdc3aa3cee01f1c576495b94d981d409176
+size 911499680
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K_S.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K_S.gguf
new file mode 100644
index 0000000..07a666e
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q5_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0b1bc7fafbca23b76fa988a1de1c2c9da1b68906c7da7f7c0cc2ce8d4974be9f
+size 892559776
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.Q6_K.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q6_K.gguf
new file mode 100644
index 0000000..bbe012a
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q6_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2109e55adc7558068ac44ff52b45cce78c2dc143c78e0f0a403e9863b17b1f77
+size 1021796768
diff --git a/openai-gsm8k_meta-llama-Llama-3.2-1B.Q8_0.gguf b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q8_0.gguf
new file mode 100644
index 0000000..4f19ec4
--- /dev/null
+++ b/openai-gsm8k_meta-llama-Llama-3.2-1B.Q8_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:50ac1cd4f222046cb2f88b38d9cf662429ef4f24ea03545f2344c4923b9e6d66
+size 1321079200
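Note that the `.gguf` entries in this commit are Git LFS pointer files, not the weights themselves: each tracked file is a three-line `version`/`oid`/`size` record, and the actual blob lives in LFS storage. A minimal sketch of reading that pointer format (`parse_lfs_pointer` is an illustrative helper, not part of the repository):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file: key-value lines 'version', 'oid', 'size'."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # The oid line prefixes the digest with its hash algorithm, e.g. "sha256:...".
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "oid_algo": algo,
        "oid": digest,
        "size_bytes": int(fields["size"]),
    }

# The Q8_0 pointer from the diff above:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:50ac1cd4f222046cb2f88b38d9cf662429ef4f24ea03545f2344c4923b9e6d66
size 1321079200"""
info = parse_lfs_pointer(pointer)
print(info["oid_algo"], info["size_bytes"])
```

The `size` field is in bytes, so 1321079200 bytes is about 1.23GiB, matching the Q8_0 entry in the README table.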