Initialize the project; model provided by the ModelHub XC community.
Model: mradermacher/HelpingAI-3B-hindi-i1-GGUF (source: original platform)
.gitattributes (vendored, new file, 57 lines)
@@ -0,0 +1,57 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
imatrix.dat filter=lfs diff=lfs merge=lfs -text
HelpingAI-3B-hindi.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
HelpingAI-3B-hindi.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
HelpingAI-3B-hindi.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
HelpingAI-3B-hindi.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
HelpingAI-3B-hindi.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
HelpingAI-3B-hindi.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
HelpingAI-3B-hindi.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
HelpingAI-3B-hindi.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
HelpingAI-3B-hindi.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
HelpingAI-3B-hindi.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
HelpingAI-3B-hindi.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
HelpingAI-3B-hindi.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
HelpingAI-3B-hindi.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
HelpingAI-3B-hindi.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
HelpingAI-3B-hindi.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
HelpingAI-3B-hindi.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
HelpingAI-3B-hindi.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
HelpingAI-3B-hindi.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
HelpingAI-3B-hindi.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
HelpingAI-3B-hindi.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
HelpingAI-3B-hindi.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
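The rules above attach `filter=lfs diff=lfs merge=lfs -text` to each pattern, so Git hands matching files to the LFS filter instead of storing them in the object database directly. A simplified sketch of how those glob patterns select paths (Python stdlib only; real Git attribute matching has extra rules, e.g. for `**`):

```python
from fnmatch import fnmatch

# A subset of the patterns declared in the .gitattributes above.
LFS_PATTERNS = [
    "*.bin",
    "*.safetensors",
    "imatrix.dat",
    "HelpingAI-3B-hindi.i1-Q2_K.gguf",
]

def tracked_by_lfs(path: str) -> bool:
    """Simplified check: does any declared pattern match this path?"""
    return any(fnmatch(path, pat) for pat in LFS_PATTERNS)

print(tracked_by_lfs("model.safetensors"))  # True
print(tracked_by_lfs("README.md"))          # False
```

In practice the equivalent check inside a repo is `git check-attr filter <path>`, which reports `filter: lfs` for matched files.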
HelpingAI-3B-hindi.i1-IQ1_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fa30e4405d4510cb38081655ffe9b411990f5fe2a2516ce907d76e0b4648de7b
size 727813440
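Each of the `.gguf` entries in this commit is stored as a small Git LFS pointer like the one above, not as the binary itself: a `version` line, an `oid` line (`sha256:<hex>`), and a `size` line in bytes. A minimal parser for that three-line format (a sketch, Python stdlib only):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["size"] = int(fields["size"])  # size is a byte count
    return fields

# The IQ1_M pointer committed above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:fa30e4405d4510cb38081655ffe9b411990f5fe2a2516ce907d76e0b4648de7b
size 727813440
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 727813440
```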
HelpingAI-3B-hindi.i1-IQ1_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0124e9af317f6f13a3932b63e8b7666e3293c9a32938725e63231b777d1630c1
size 679828800
HelpingAI-3B-hindi.i1-IQ2_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4f81c0e206fad8ec06b665194f66fac30fe73ec0df7fabb0f6078d615176e70e
size 1013352416
HelpingAI-3B-hindi.i1-IQ2_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9aeb5d64e5a2e60fabe7454a5ff7fae7463b8531f1ba7e256faba743f04f5fb5
size 949372896
HelpingAI-3B-hindi.i1-IQ2_XS.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d7d93db680f5e6de0ba712545719371f025816877927ee3e88c3e557a179c3b7
size 878320960
HelpingAI-3B-hindi.i1-IQ2_XXS.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e954639f13aa56c41b049a8b795f31f4c101e1482bb04204d9acaa406dc3a19f
size 807787840
HelpingAI-3B-hindi.i1-IQ3_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:04748d02fe4552c7bac727b22551811026e5612e6812861bdecdecfefdafbd4f
size 1319482208
HelpingAI-3B-hindi.i1-IQ3_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7b230c5b3423c21d160a48f805541595d8f9752feab6be7f6e5ede5dc3a6b66b
size 1254376288
HelpingAI-3B-hindi.i1-IQ3_XS.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c67ae006b71f1fd484837ae649c76536ef95efd40e451ad9dcb50620ab94c39a
size 1194902368
HelpingAI-3B-hindi.i1-IQ3_XXS.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:984d7c331515399ccd5d4d21c2c4922301143039d07e9f9a6c00d32ad3419f02
size 1101948896
HelpingAI-3B-hindi.i1-IQ4_XS.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ec81b9a8d5f45ec0838b41dea371c0adc7c1c08ce7a570ae141f6e8a6ae79e6e
size 1525169664
HelpingAI-3B-hindi.i1-Q2_K.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a8b1f42cc045ba8f2865237b9b348fd4e31275fb42dd1810509c45195c6e98fc
size 1083689152
HelpingAI-3B-hindi.i1-Q3_K_L.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:83ed65718f9700308fbef5eea15242b65a06cabc5f1e1464ddb119f9eac92fea
size 1508492128
HelpingAI-3B-hindi.i1-Q3_K_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:89e8f488157d7b83ec796e84d7672d6d1ee556d7f89fc9ed0664b7e937f34cf8
size 1391346528
HelpingAI-3B-hindi.i1-Q3_K_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b7ed90b239ef9490f935816acd1595ca7ced1edaaaa3f6eac564f83081194d32
size 1254376288
HelpingAI-3B-hindi.i1-Q4_0.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:18fb14fecd526a906d8c55bd6edf509db2ee62b75505bdbd8ac8a2dec49d2a6e
size 1612914368
HelpingAI-3B-hindi.i1-Q4_K_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1c794be8114b007def8966d9700fec4804e150e8897b25b4c7df91a6f58eb6ec
size 1708515008
HelpingAI-3B-hindi.i1-Q4_K_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:187f3a1cab97b9cf3ebfb7dcaefd9c3b6de45850442e844166e10b4ca26312ad
size 1620614848
HelpingAI-3B-hindi.i1-Q5_K_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5d7346ea3ea920227bf0394c9e78ee1697ec5cc69e6387b208e38c2956b500eb
size 1993302528
HelpingAI-3B-hindi.i1-Q5_K_S.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:548b51def28877ae36edbaff98cc44243e43988e474d4f0653a3498c13b4a387
size 1941774848
HelpingAI-3B-hindi.i1-Q6_K.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f2fcc28345cee8909f7cbcb488198e245e79c7c4dcd7dbe9442a67193c158086
size 2295889280
README.md (new file, 83 lines)
@@ -0,0 +1,83 @@
---
base_model: OEvortex/HelpingAI-3B-hindi
datasets:
- OEvortex/SentimentSynth
- OEvortex/EmotionalIntelligence-10K
language:
- hi
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: helpingai
quantized_by: mradermacher
tags:
- HelpingAI
- Emotional Intelligence
- EQ
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
Weighted/imatrix quants of https://huggingface.co/OEvortex/HelpingAI-3B-hindi

<!-- provided-files -->
Static quants are available at https://huggingface.co/mradermacher/HelpingAI-3B-hindi-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily by quality; IQ-quants are often preferable to similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-3B-hindi-i1-GGUF/resolve/main/HelpingAI-3B-hindi.i1-IQ1_S.gguf) | i1-IQ1_S | 0.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-3B-hindi-i1-GGUF/resolve/main/HelpingAI-3B-hindi.i1-IQ1_M.gguf) | i1-IQ1_M | 0.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-3B-hindi-i1-GGUF/resolve/main/HelpingAI-3B-hindi.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-3B-hindi-i1-GGUF/resolve/main/HelpingAI-3B-hindi.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-3B-hindi-i1-GGUF/resolve/main/HelpingAI-3B-hindi.i1-IQ2_S.gguf) | i1-IQ2_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-3B-hindi-i1-GGUF/resolve/main/HelpingAI-3B-hindi.i1-IQ2_M.gguf) | i1-IQ2_M | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-3B-hindi-i1-GGUF/resolve/main/HelpingAI-3B-hindi.i1-Q2_K.gguf) | i1-Q2_K | 1.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-3B-hindi-i1-GGUF/resolve/main/HelpingAI-3B-hindi.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-3B-hindi-i1-GGUF/resolve/main/HelpingAI-3B-hindi.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-3B-hindi-i1-GGUF/resolve/main/HelpingAI-3B-hindi.i1-IQ3_S.gguf) | i1-IQ3_S | 1.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-3B-hindi-i1-GGUF/resolve/main/HelpingAI-3B-hindi.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-3B-hindi-i1-GGUF/resolve/main/HelpingAI-3B-hindi.i1-IQ3_M.gguf) | i1-IQ3_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-3B-hindi-i1-GGUF/resolve/main/HelpingAI-3B-hindi.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-3B-hindi-i1-GGUF/resolve/main/HelpingAI-3B-hindi.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-3B-hindi-i1-GGUF/resolve/main/HelpingAI-3B-hindi.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-3B-hindi-i1-GGUF/resolve/main/HelpingAI-3B-hindi.i1-Q4_0.gguf) | i1-Q4_0 | 1.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-3B-hindi-i1-GGUF/resolve/main/HelpingAI-3B-hindi.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-3B-hindi-i1-GGUF/resolve/main/HelpingAI-3B-hindi.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-3B-hindi-i1-GGUF/resolve/main/HelpingAI-3B-hindi.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-3B-hindi-i1-GGUF/resolve/main/HelpingAI-3B-hindi.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-3B-hindi-i1-GGUF/resolve/main/HelpingAI-3B-hindi.i1-Q6_K.gguf) | i1-Q6_K | 2.4 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):



And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and for providing upgrades to my workstation, enabling this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
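The Size/GB column in the quant table above is approximate; the exact byte counts are recorded in the LFS pointers committed alongside the README. A quick sketch converting a pointer's byte count to decimal gigabytes (using the Q4_K_M pointer's size from this commit):

```python
def bytes_to_gb(n: int) -> float:
    """Decimal gigabytes (1 GB = 10**9 bytes), rounded to one decimal."""
    return round(n / 1e9, 1)

# Byte count from the HelpingAI-3B-hindi.i1-Q4_K_M.gguf LFS pointer.
print(bytes_to_gb(1708515008))  # 1.7
```

Note that some table entries round differently from this simple conversion, so treat the column as a rough guide rather than an exact figure.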
imatrix.dat (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2a4d932e9ee2823f6a3ff19af8f3f5e640bfc218706839d87d0b3cbfeb30b0f5
size 2858237
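After fetching the real binary behind any of these pointers, its contents can be checked against the `oid` recorded in the pointer file. A sketch using only the standard library (the sample bytes here are hypothetical; a real check would hash the downloaded file and compare with the pointer's `oid` line):

```python
import hashlib

def lfs_oid(data: bytes) -> str:
    """Compute the content id in Git LFS form: 'sha256:<hex digest>'."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

# Hypothetical payload standing in for a downloaded .gguf file.
sample = b"hello"
print(lfs_oid(sample))
```

A mismatch between the computed digest and the pointer's `oid` indicates a corrupted or incomplete download.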