From 2bc84bec297739f4a430f598c7e5fa0ab28bf14a Mon Sep 17 00:00:00 2001
From: Dan Clipca
Date: Fri, 14 Mar 2025 18:19:16 +0000
Subject: [PATCH] Upload folder using huggingface_hub

---
 .gitattributes                              |  2 +
 README.md                                   | 46 +++++++++++++++++++++
 gemma-3-27b-it-codeforces-SFT.imatrix.dat   |  3 ++
 gemma-3-27b-it-codeforces-sft-i1-IQ1_S.gguf |  3 ++
 4 files changed, 54 insertions(+)
 create mode 100644 README.md
 create mode 100644 gemma-3-27b-it-codeforces-SFT.imatrix.dat
 create mode 100644 gemma-3-27b-it-codeforces-sft-i1-IQ1_S.gguf

diff --git a/.gitattributes b/.gitattributes
index a6344aa..1976194 100644
--- a/.gitattributes
+++ b/.gitattributes
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+gemma-3-27b-it-codeforces-SFT.imatrix.dat filter=lfs diff=lfs merge=lfs -text
+gemma-3-27b-it-codeforces-sft-i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text

diff --git a/README.md b/README.md
new file mode 100644
index 0000000..6c385dc
--- /dev/null
+++ b/README.md
@@ -0,0 +1,46 @@
+---
+base_model: qgallouedec/gemma-3-27b-it-codeforces-SFT
+language:
+- en
+license: mit
+quantized_by: SpongeQuant
+tags:
+- SpongeQuant
+- i1-GGUF
+---
+
+Quantized to `i1-GGUF` using [SpongeQuant](https://github.com/SpongeEngine/SpongeQuant), the Oobabooga of LLM quantization.
+
+***
+
+*Image: Flying insect with flowers*
+
+*Audio: Flawed Mangoes - Dramamine (USA, 2024)*
+
+***
+
+### What is a GGUF?
+GGUF is a file format for running large language models (LLMs) on a wide range of hardware, from high-end GPUs to laptops and even CPU-only systems. It supports inference on both CPUs and GPUs, and when a GPU does not have enough memory, parts of the model can be offloaded to the CPU so it still runs with limited GPU resources. GGUF is designed to work well with quantized models, which use less memory and run faster on lower-end hardware, though it can also store full-precision weights when needed.
+
+### What is an i1-GGUF?
+i1-GGUF is a GGUF variant produced with imatrix (importance matrix) quantization, a smarter way of reducing model size while preserving key details. Instead of quantizing every weight equally, it measures how much different model components contribute to the output and keeps the most crucial parts at higher precision. Like standard GGUF, i1-GGUF runs on a wide range of hardware, including CPUs and lower-end GPUs, but because it prioritizes important weights, i1-GGUF models typically deliver better responses than traditional GGUF quantizations of the same size.
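As a concrete illustration of the format described above, a GGUF file begins with a small fixed header: a 4-byte magic `GGUF`, a uint32 format version, and two uint64 counts (number of tensors and number of metadata key-value pairs). Below is a minimal sketch, assuming the GGUF v3 little-endian layout, that parses this header from raw bytes; the `sample` bytes are synthetic, built here purely for demonstration, not read from a real model file.

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header: magic, version, tensor count, metadata KV count."""
    magic, version, n_tensors, n_kv = struct.unpack("<4sIQQ", data[:24])
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "n_tensors": n_tensors, "n_kv": n_kv}

# Synthetic header for demonstration: version 3, 2 tensors, 5 metadata KV pairs.
sample = struct.pack("<4sIQQ", b"GGUF", 3, 2, 5)
print(read_gguf_header(sample))  # → {'version': 3, 'n_tensors': 2, 'n_kv': 5}
```

The metadata KV section that follows the header is where quantization type, architecture, and tokenizer details live; a full reader (e.g. llama.cpp or the `gguf` Python package) parses those as well.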
+
diff --git a/gemma-3-27b-it-codeforces-SFT.imatrix.dat b/gemma-3-27b-it-codeforces-SFT.imatrix.dat
new file mode 100644
index 0000000..f067bc6
--- /dev/null
+++ b/gemma-3-27b-it-codeforces-SFT.imatrix.dat
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:956c8df5ee9bc2ff36e33e6e74236cee2d3af0200838efac4fafd2f87ad6dd65
+size 13029455

diff --git a/gemma-3-27b-it-codeforces-sft-i1-IQ1_S.gguf b/gemma-3-27b-it-codeforces-sft-i1-IQ1_S.gguf
new file mode 100644
index 0000000..4ded427
--- /dev/null
+++ b/gemma-3-27b-it-codeforces-sft-i1-IQ1_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eea1e097614ac862561345919506e3ad498439f941a4a5dde4da670305a5b92a
+size 6264247904