Initialize project; model provided by the ModelHub XC community
Model: mradermacher/Gemma-merged-2B-ties-i1-GGUF
Source: Original Platform
.gitattributes (vendored, normal file, 58 lines)
@@ -0,0 +1,58 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
imatrix.dat filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
Gemma-merged-2B-ties.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text

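Every pattern above routes matching files through Git LFS. As a rough illustration only (not Git's actual attribute-matching engine, which has its own pattern semantics for paths and `**`), here is a Python sketch using fnmatch-style globbing over a subset of these patterns:

```python
from fnmatch import fnmatch

# Subset of the LFS patterns from the .gitattributes above.
LFS_PATTERNS = [
    "*.bin",
    "*.safetensors",
    "*tfevents*",
    "imatrix.dat",
    "Gemma-merged-2B-ties.i1-Q4_K_M.gguf",
]

def routed_to_lfs(filename: str) -> bool:
    """Return True if any listed LFS pattern matches the file name."""
    return any(fnmatch(filename, pattern) for pattern in LFS_PATTERNS)

print(routed_to_lfs("model.safetensors"))                    # True
print(routed_to_lfs("README.md"))                            # False
print(routed_to_lfs("Gemma-merged-2B-ties.i1-Q4_K_M.gguf"))  # True
```

For the authoritative answer inside a real checkout, `git check-attr filter -- <path>` reports whether the `lfs` filter applies.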
Gemma-merged-2B-ties.i1-IQ1_M.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f7e3bddad2949ff962361e7b3741949943be957039ee50461e324a768534c78e
size 813844736

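The three lines above are a Git LFS pointer: the repository stores only this stub, while the real file content lives in LFS storage, keyed by the sha256 oid. A minimal sketch of parsing that key-value format:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into {'version', 'oid', 'size'}."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # split on the first space only
        fields[key] = value
    fields["size"] = int(fields["size"])     # size is the byte count
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:f7e3bddad2949ff962361e7b3741949943be957039ee50461e324a768534c78e
size 813844736"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 813844736
print(info["oid"])   # sha256:f7e3...c78e
```

The same three fields appear in every pointer file committed below.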
Gemma-merged-2B-ties.i1-IQ1_S.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa813fb774ee8bfd57fa44ce3a5095785e2ad8390bf71a78616e9c1be09b98b0
size 770959616

Gemma-merged-2B-ties.i1-IQ2_M.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9c525386501bd8ca9f2e16adeb918d5ed3283ae45b7efc4bbb847300cf7301fb
size 1019472128

Gemma-merged-2B-ties.i1-IQ2_S.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a06b5f8b19fcbcf82899bcd1f340b3c0037e238cb1a97734cdcd9e74b677c543
size 962291968

Gemma-merged-2B-ties.i1-IQ2_XS.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:44bdff1c7480caced641de0da540572f5660bf80aba9cfcf0921ad3c8dceeb95
size 944859392

Gemma-merged-2B-ties.i1-IQ2_XXS.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5fbb56c7dc21f06b56a0901df7e9db0e47ee77de7f5d8dbc2f77b81fbcb4629f
size 885319936

Gemma-merged-2B-ties.i1-IQ3_M.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:59c49b7c1c61ed6a7a2a4e6eb4a078ebec7c43071c29dd6538bc2020d6851577
size 1308174592

Gemma-merged-2B-ties.i1-IQ3_S.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:98f2a4011acf28a01f95046a16341d03bc865468650282de2eb180f26803e9d5
size 1289234688

Gemma-merged-2B-ties.i1-IQ3_XS.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:387ab48da49733398acffe1204a734c6e0e4816288bf63445cb91daec8c346de
size 1244358912

Gemma-merged-2B-ties.i1-IQ3_XXS.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b23c33487902d15aa2b35ddf5431e1995ecf548f9b1394afb96dff14f94b6e7c
size 1125378304

Gemma-merged-2B-ties.i1-IQ4_XS.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5669663cd871f3153dedb74e65c206ad41ae300c8bcedc02fb62cd658ca69662
size 1490733312

Gemma-merged-2B-ties.i1-Q2_K.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:51895868602ad5cfcab9d137cd6a32670508f47777f2bc4f5bf04d989295c09c
size 1157925120

Gemma-merged-2B-ties.i1-Q2_K_S.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2e41cde3c1dfc4be67001c1e445b1c4bd526d1f00329019081a8d306d8f3bd33
size 1104644352

Gemma-merged-2B-ties.i1-Q3_K_L.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:78562cc8d53a6f64967dfda553177c99c484cc0e8d88a860e478142cb853c3fd
size 1465592064

Gemma-merged-2B-ties.i1-Q3_K_M.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0dc1f4cda15b6121f40ca1b60622ebdcfe414d3f8abe4beefba5bd68746edb08
size 1383803136

Gemma-merged-2B-ties.i1-Q3_K_S.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:719d7307d358789bfc4bce8d18cfb07539cb6449b023c5651ed877097e9d406f
size 1287981312

Gemma-merged-2B-ties.i1-Q4_0.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f941856943889ea2c6ea879c8934fa8ab38be2814912b254ceb601f2f8b6ddd2
size 1555384576

Gemma-merged-2B-ties.i1-Q4_K_M.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fe092a1d5b9a096031ddd00b5446f14b6d4390ee2fc03700730b4106bf0bfdf0
size 1630263552

Gemma-merged-2B-ties.i1-Q4_K_S.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5a85f7d086d6e937af2658523fec9abdad7af513da6b4079a75d1bf46c2131c7
size 1559841024

Gemma-merged-2B-ties.i1-Q5_K_M.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9958fbfd00cf49f3370b75d6a83b11a6e82d76c3fe797989d12990e7fc9a5ba1
size 1839651072

Gemma-merged-2B-ties.i1-Q5_K_S.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:23c3a3a6cd3e15485704cec4448b3ac3f4b31c5e6194c15ed99ab39c4acfa6af
size 1798916352

Gemma-merged-2B-ties.i1-Q6_K.gguf (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c389664fb038c1a50b5594fd1d73112fc2ca35a96f3028fda5ca4f90a4d8191d
size 2062125312

README.md (normal file, 79 lines)
@@ -0,0 +1,79 @@
---
base_model: arcee-ai/Gemma-merged-2B-ties
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- google/gemma-2b
- google/gemma-2b-it
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/arcee-ai/Gemma-merged-2B-ties

<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Gemma-merged-2B-ties-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
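The quant files in this repo can also be fetched directly: Hugging Face serves LFS-backed files at `<repo>/resolve/<revision>/<filename>`, which is exactly how the per-quant links in the table below are built. A small sketch, with the repo and file names taken from this repo:

```python
# Build the direct-download URL for one quant of this repo.
REPO_ID = "mradermacher/Gemma-merged-2B-ties-i1-GGUF"
BASE = "Gemma-merged-2B-ties"

def quant_url(quant: str, revision: str = "main") -> str:
    """URL of a single quant file, e.g. quant='i1-Q4_K_M'."""
    filename = f"{BASE}.{quant}.gguf"
    return f"https://huggingface.co/{REPO_ID}/resolve/{revision}/{filename}"

print(quant_url("i1-Q4_K_M"))
# https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-Q4_K_M.gguf
```

In practice the `huggingface_hub` library's `hf_hub_download(repo_id=..., filename=...)` does the same fetch with caching and resume support.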
## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-IQ1_M.gguf) | i1-IQ1_M | 0.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-IQ2_S.gguf) | i1-IQ2_S | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-IQ2_M.gguf) | i1-IQ2_M | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-Q2_K.gguf) | i1-Q2_K | 1.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-IQ3_S.gguf) | i1-IQ3_S | 1.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-IQ3_M.gguf) | i1-IQ3_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-Q4_0.gguf) | i1-Q4_0 | 1.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-merged-2B-ties-i1-GGUF/resolve/main/Gemma-merged-2B-ties.i1-Q6_K.gguf) | i1-Q6_K | 2.2 | practically like static Q6_K |
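Using the Size/GB column above, a rough helper for picking the largest quant that fits a given budget. Note the file size understates actual RAM use at inference time, since context/KV-cache memory comes on top; the names and sizes below are a subset of the table's approximate values.

```python
# (quant name, approximate file size in GB) pairs from the table above.
QUANTS_GB = [
    ("i1-IQ1_S", 0.9), ("i1-IQ2_M", 1.1), ("i1-IQ3_XXS", 1.2),
    ("i1-Q3_K_M", 1.5), ("i1-Q4_K_M", 1.7), ("i1-Q5_K_M", 1.9),
    ("i1-Q6_K", 2.2),
]

def pick_quant(budget_gb: float):
    """Return the largest listed quant whose file fits budget_gb, else None."""
    fitting = [(name, gb) for name, gb in QUANTS_GB if gb <= budget_gb]
    return max(fitting, key=lambda t: t[1])[0] if fitting else None

print(pick_quant(2.0))  # i1-Q5_K_M
print(pick_quant(0.5))  # None
```

With ample headroom this simply returns the largest file; quality rankings (e.g. IQ-quants beating similar-sized non-IQ quants) still need the table's notes.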
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
imatrix.dat (normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4c42bdae95e2e691fdf8a00f71b4d106b595b1ee4141e6011a135ae7ebd4c223
size 2068543

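Since every large file in this commit is an LFS pointer carrying a sha256 oid, a downloaded file can be verified against its pointer. A sketch of that check, demonstrated on a throwaway temp file rather than a multi-gigabyte model:

```python
import hashlib
import os
import tempfile

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_pointer(path: str, oid: str) -> bool:
    """oid is the pointer's 'sha256:<hex>' value, as in the entries above."""
    return sha256_of(path) == oid.removeprefix("sha256:")

# Demo on a small temp file instead of an actual download.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    tmp = f.name
expected = "sha256:" + hashlib.sha256(b"hello").hexdigest()
print(matches_pointer(tmp, expected))  # True
os.remove(tmp)
```

`git lfs fsck` performs the equivalent integrity check automatically inside an LFS-enabled clone.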