---
base_model: mlabonne/Gemmalpaca-7B
datasets:
- vicgalle/alpaca-gpt4
extra_gated_button_content: Acknowledge license
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you're required to review and
  agree to Google's usage license. To do this, please ensure you're logged in to
  Hugging Face and click below. Requests are processed immediately.
language:
- en
library_name: transformers
license: other
license_link: https://ai.google.dev/gemma/terms
license_name: gemma-terms-of-use
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
Weighted/imatrix quants of https://huggingface.co/mlabonne/Gemmalpaca-7B.
<!-- provided-files -->
Static quants are available at https://huggingface.co/mradermacher/Gemmalpaca-7B-GGUF.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files (a minimal sketch of
that step follows below).
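The quants listed in this repo are single files, but larger uploads from the same family are sometimes split into parts. As a rough illustration only, here is a minimal Python sketch of the concatenation step, assuming a hypothetical `<name>.gguf.part<N>of<M>` naming scheme; adjust the pattern to the filenames you actually downloaded:

```python
# Minimal sketch (not from this repo's docs): stitch multi-part GGUF
# files back together into one file. Assumes a hypothetical
# "<name>.gguf.part<N>of<M>" naming scheme.
import glob
import re
import shutil

PATTERN = "Gemmalpaca-7B.i1-Q4_K_M.gguf.part*of*"  # hypothetical example name

parts = sorted(
    glob.glob(PATTERN),
    key=lambda p: int(re.search(r"part(\d+)of", p).group(1)),
)
with open("Gemmalpaca-7B.i1-Q4_K_M.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # append each part in order
```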
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemmalpaca-7B-i1-GGUF/resolve/main/Gemmalpaca-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemmalpaca-7B-i1-GGUF/resolve/main/Gemmalpaca-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemmalpaca-7B-i1-GGUF/resolve/main/Gemmalpaca-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemmalpaca-7B-i1-GGUF/resolve/main/Gemmalpaca-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemmalpaca-7B-i1-GGUF/resolve/main/Gemmalpaca-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gemmalpaca-7B-i1-GGUF/resolve/main/Gemmalpaca-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemmalpaca-7B-i1-GGUF/resolve/main/Gemmalpaca-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemmalpaca-7B-i1-GGUF/resolve/main/Gemmalpaca-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemmalpaca-7B-i1-GGUF/resolve/main/Gemmalpaca-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemmalpaca-7B-i1-GGUF/resolve/main/Gemmalpaca-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Gemmalpaca-7B-i1-GGUF/resolve/main/Gemmalpaca-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemmalpaca-7B-i1-GGUF/resolve/main/Gemmalpaca-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemmalpaca-7B-i1-GGUF/resolve/main/Gemmalpaca-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemmalpaca-7B-i1-GGUF/resolve/main/Gemmalpaca-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemmalpaca-7B-i1-GGUF/resolve/main/Gemmalpaca-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemmalpaca-7B-i1-GGUF/resolve/main/Gemmalpaca-7B.i1-Q4_0.gguf) | i1-Q4_0 | 5.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemmalpaca-7B-i1-GGUF/resolve/main/Gemmalpaca-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Gemmalpaca-7B-i1-GGUF/resolve/main/Gemmalpaca-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemmalpaca-7B-i1-GGUF/resolve/main/Gemmalpaca-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Gemmalpaca-7B-i1-GGUF/resolve/main/Gemmalpaca-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemmalpaca-7B-i1-GGUF/resolve/main/Gemmalpaca-7B.i1-Q6_K.gguf) | i1-Q6_K | 7.1 | practically like static Q6_K |
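To fetch and load one of these quants programmatically, a minimal sketch using the `huggingface_hub` and `llama-cpp-python` packages (both assumed installed, neither required by this repo) might look like this, taking the "fast, recommended" i1-Q4_K_M row from the table above:

```python
# Minimal sketch: download one quant from this repo and run it locally.
# Assumes huggingface_hub and llama-cpp-python are installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Gemmalpaca-7B-i1-GGUF",
    filename="Gemmalpaca-7B.i1-Q4_K_M.gguf",  # "fast, recommended" row
)
llm = Llama(model_path=path, n_ctx=4096)

# Alpaca-style prompt, assumed from the base model's alpaca-gpt4 training data.
out = llm(
    "### Instruction:\nWrite a haiku about model quantization.\n\n### Response:\n",
    max_tokens=64,
)
print(out["choices"][0]["text"])
```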
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to
questions you might have, or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->