Initialize the project; model provided by the ModelHub XC community
Model: mradermacher/Hala-1.2B-i1-GGUF Source: Original Platform
.gitattributes (vendored, new file, 60 lines)
@@ -0,0 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.imatrix.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Hala-1.2B.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
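Every pattern in the `.gitattributes` above routes matching files through Git LFS. As a rough sanity check of which rule a filename would hit, Python's `fnmatch` can approximate the matching; note this is only an approximation, since gitattributes glob semantics differ from shell globbing (e.g. for `saved_model/**/*`). The pattern subset and function name below are illustrative, not part of any tooling.

```python
from fnmatch import fnmatch

# A few of the LFS patterns from the .gitattributes above (illustrative subset).
LFS_PATTERNS = ["*.safetensors", "*.bin", "*tfevents*", "Hala-1.2B.i1-Q6_K.gguf"]

def matches_lfs(filename: str) -> bool:
    """Return True if any tracked pattern matches the given filename."""
    return any(fnmatch(filename, pat) for pat in LFS_PATTERNS)

print(matches_lfs("model.safetensors"))  # True
print(matches_lfs("README.md"))          # False
```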
3
Hala-1.2B.i1-IQ1_M.gguf
Normal file
3
Hala-1.2B.i1-IQ1_M.gguf
Normal file
@@ -0,0 +1,3 @@
|
|||||||
|
version https://git-lfs.github.com/spec/v1
|
||||||
|
oid sha256:a2624860b9c2f94e3d833b8d96cd3da736df0e65454bc441fc23e64e3e084e0a
|
||||||
|
size 327144448
|
||||||
Hala-1.2B.i1-IQ1_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c4d4febdeafd48de672e95be3e46e589216ca93366054e18523c828822a4539a
size 304387072
Hala-1.2B.i1-IQ2_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3b64916b3e4d91ef2e1e7683a2229558e9a69518c598d6f6404600b62cc41c8c
size 434131968
Hala-1.2B.i1-IQ2_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7a32bae4157dcff8137a27b01481602fb12a4d534d3df3da9f3c02c1de6570db
size 403788800
Hala-1.2B.i1-IQ2_XS.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:34595d4b02cc9d9bcd4e7f0244d5bb638e7fcd32cdf1c6056a9f91e3bbd07511
size 396203008
Hala-1.2B.i1-IQ2_XXS.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:693417e73fb90071ca7f1f562f726da1a96aaae2a91c2802b14054045d33686d
size 365073408
Hala-1.2B.i1-IQ3_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:94432dadc7da91de42fdd09f4789d4eb06f2440fbe92d4cf20ba89d7cfe94773
size 566793216
Hala-1.2B.i1-IQ3_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cc42ce9985e281b03e10c0cba5a9458dbdc1ca11be6feba5d4e58f0e31f9b17c
size 558158848
Hala-1.2B.i1-IQ3_XS.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ba247f0b940858a6e8dfd8ec9709c55ca8efc50e8cc979e2e7858d3e631945f4
size 537809920
Hala-1.2B.i1-IQ3_XXS.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:10545912556cb287f53dbf81b38aa3fc21a4504f153493cce288abf0b8f7b4e4
size 490984448
Hala-1.2B.i1-IQ4_NL.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:73236064c109c7c9df5103ae74e9400120e3453e36c8050f3306da626de8799c
size 695751680
Hala-1.2B.i1-IQ4_XS.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d5ca09d9f55704628a5d16d6a8c4070543293133c2e9ed72b65cf5f3496af7a2
size 663376896
Hala-1.2B.i1-Q2_K.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fe7df2721de10cfd95922e779b4c77ddc64f920b982c47a92547261d96395af6
size 483398656
Hala-1.2B.i1-Q2_K_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d94165955501af8592bf794a395957c7fb466a105a77b2a03b79a0fa3548cf06
size 460805120
Hala-1.2B.i1-Q3_K_L.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e64ec3546c8e9ac12cd6c4853c490fa4cd4559d67d671f02a79b0826cd2ae3b8
size 635474944
Hala-1.2B.i1-Q3_K_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f22efa013fce2bc067c5576ffff1a9a4cf3bf20ac316b3c3b0b626be78da5e87
size 600347648
Hala-1.2B.i1-Q3_K_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:16194b7a540e5fb319a93784b354fd01c1516ccc23bd187a79e444f5265004ab
size 558158848
Hala-1.2B.i1-Q4_0.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bbb7035cabfdd30bc7df5819924a715f7f766ef5698beae23963121dc22385d9
size 697848832
Hala-1.2B.i1-Q4_1.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5796373e9affb242a4af72f48a7dc9a65edd5386a25d8c4b477db3fd95ff9832
size 760501248
Hala-1.2B.i1-Q4_K_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e62eec33208a9bb19ae5b6986bebdb3ec05c78a732175b630c903ee790e79354
size 730895360
Hala-1.2B.i1-Q4_K_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ca6f0ef192b87bb7d70ee5e69f121ec655cb098698263fd1115895d938619c8a
size 700470272
Hala-1.2B.i1-Q5_K_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7096a5db47818e685ce74f1ec2a1c43cce1a524dd276714d500d5cf64e912f6e
size 843355136
Hala-1.2B.i1-Q5_K_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:99a2d28c16a419a351abbf90fa2e80c37708844c31420e8555e3a6901e94edf9
size 825250816
Hala-1.2B.i1-Q6_K.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aad8caccafc99a163f15065bc97b3274582e986490f263c936b7ee83332aea38
size 962843648
Hala-1.2B.imatrix.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:75dcc6a6b2e0ebe8b5932877271764a9293d2ceb9b31858eee9dacf3ec9bfe8e
size 1161536
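Each of the `.gguf` files above is stored as a Git LFS pointer: three text lines giving the spec version, a `sha256` object id, and the byte size of the real file. A minimal parsing sketch (the function name is mine, not part of any LFS tooling):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse the key/value lines of a Git LFS pointer file."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # each line is "<key> <value>"
        fields[key] = value
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

# Pointer contents copied from Hala-1.2B.i1-Q6_K.gguf above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:aad8caccafc99a163f15065bc97b3274582e986490f263c936b7ee83332aea38
size 962843648"""
print(parse_lfs_pointer(pointer)["size"])  # 962843648
```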
README.md (new file, 87 lines)
@@ -0,0 +1,87 @@
---
base_model: hammh0a/Hala-1.2B
datasets:
- hammh0a/Hala-4.6M-SFT
language:
- ar
library_name: transformers
license: cc-by-nc-4.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/hammh0a/Hala-1.2B

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Hala-1.2B-i1-GGUF).***

Static quants are available at https://huggingface.co/mradermacher/Hala-1.2B-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-IQ1_S.gguf) | i1-IQ1_S | 0.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-IQ1_M.gguf) | i1-IQ1_M | 0.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.5 |  |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.5 |  |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-IQ2_S.gguf) | i1-IQ2_S | 0.5 |  |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-IQ2_M.gguf) | i1-IQ2_M | 0.5 |  |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-Q2_K.gguf) | i1-Q2_K | 0.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.6 |  |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-IQ3_S.gguf) | i1-IQ3_S | 0.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-IQ3_M.gguf) | i1-IQ3_M | 0.7 |  |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.8 |  |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-Q4_0.gguf) | i1-Q4_0 | 0.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-Q4_1.gguf) | i1-Q4_1 | 0.9 |  |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.9 |  |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.9 |  |
| [GGUF](https://huggingface.co/mradermacher/Hala-1.2B-i1-GGUF/resolve/main/Hala-1.2B.i1-Q6_K.gguf) | i1-Q6_K | 1.1 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->