Compare commits

...

10 Commits

| Author | SHA1 | Message | Date |
|:-------|:-----|:--------|:-----|
| team mradermacher | cd3a06dd5a | auto-patch README.md | 2025-07-31 10:02:07 +00:00 |
| team mradermacher | e94b37ade3 | auto-patch README.md | 2025-03-03 14:24:43 +00:00 |
| team mradermacher | ee1a0628f2 | uploaded from nico2 | 2025-03-03 12:10:56 +00:00 |
| team mradermacher | 172f5e1aee | uploaded from nico2 | 2025-03-03 12:09:12 +00:00 |
| team mradermacher | bd36939075 | uploaded from nico2 | 2025-03-03 12:09:07 +00:00 |
| team mradermacher | 889d6be2f3 | auto-patch README.md | 2025-03-03 12:08:48 +00:00 |
| team mradermacher | 777fb84aa6 | uploaded from nico2 | 2025-03-03 12:08:16 +00:00 |
| team mradermacher | f56d745272 | uploaded from nico2 | 2025-03-03 12:07:01 +00:00 |
| team mradermacher | cbe33778fd | uploaded from nico2 | 2025-03-03 12:05:31 +00:00 |
| team mradermacher | 39974d327d | uploaded from nico2 | 2025-03-03 12:05:09 +00:00 |
9 changed files with 119 additions and 0 deletions

.gitattributes (vendored, +6)

@@ -38,3 +38,9 @@ Babel-9B-Chat.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Babel-9B-Chat.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Babel-9B-Chat.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Babel-9B-Chat.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
Babel-9B-Chat.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Babel-9B-Chat.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Babel-9B-Chat.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Babel-9B-Chat.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Babel-9B-Chat.f16.gguf filter=lfs diff=lfs merge=lfs -text
Babel-9B-Chat.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
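
Each of these .gitattributes entries routes a GGUF file through the Git LFS filter, so the repository stores a small pointer instead of the multi-gigabyte blob. A minimal sketch (assuming a local checkout with this .gitattributes at the repository root) that lists which paths are LFS-tracked:

```python
# List the paths routed through Git LFS according to .gitattributes.
def lfs_tracked_paths(gitattributes_path=".gitattributes"):
    tracked = []
    with open(gitattributes_path, "r") as f:
        for line in f:
            parts = line.split()
            # A typical LFS rule: <pattern> filter=lfs diff=lfs merge=lfs -text
            if parts and "filter=lfs" in parts[1:]:
                tracked.append(parts[0])
    return tracked

if __name__ == "__main__":
    for path in lfs_tracked_paths():
        print(path)
```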


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:daee42b3accfa9eedb28b48c1a749710a07e32881791560ee7f5ff2461ae74e1
size 5005749056
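
Each new model file in this commit range is a Git LFS pointer like the one above: a spec version line, the object's sha256 oid, and its size in bytes. As a hedged sketch (the file paths below are placeholders, not files shipped by this repo), you could verify that a downloaded blob matches its pointer:

```python
import hashlib

def read_lfs_pointer(path):
    """Parse a Git LFS pointer file into a {key: value} dict."""
    fields = {}
    with open(path, "r") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value
    return fields

def verify_lfs_object(pointer_path, object_path, chunk_size=1 << 20):
    """Check a downloaded blob against the oid and size recorded in its pointer."""
    pointer = read_lfs_pointer(pointer_path)
    expected_oid = pointer["oid"].removeprefix("sha256:")
    expected_size = int(pointer["size"])

    digest = hashlib.sha256()
    size = 0
    with open(object_path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
            size += len(chunk)
    return digest.hexdigest() == expected_oid and size == expected_size

# Placeholder paths; point these at a pointer file and the blob you downloaded.
# print(verify_lfs_object("pointer.txt", "Babel-9B-Chat.Q4_K_S.gguf"))
```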


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:42c4d3a1ce9adc0c797e29d7f986d5a480df45a9d05353e1abc40121f8a04b3d
size 4817302336


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d272b4269cce81973a2c7b069d4c15b31b6ff84f5daf6d2aa11be414c039648b
size 5523823424


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1ced349b280a8ee71731539cdadab55f1212bcd43547a9e45a0eb288392e45dc
size 6434216768


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ab17e5c910c3c7b25c65af8f4a22df12cc8237ac58ff7a9c5cd14c5bea1c8f36
size 6276778816

Babel-9B-Chat.Q8_0.gguf (new file, +3)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b8b03546b430d788b1d48fe65c982f2cf7eb299afc30e596b156d412901d44f8
size 9584481088

Babel-9B-Chat.f16.gguf (new file, +3)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:39e2c3dbe39b86b250b31a3d0c2aa0a111c03db234fc2d8a417ac7fd11f097ef
size 18034692928

README.md

@@ -1,6 +1,98 @@
---
base_model: Tower-Babel/Babel-9B-Chat
language:
- en
- zh
- hi
- es
- fr
- ar
- bn
- ru
- pt
- id
- ur
- de
- ja
- sw
- ta
- tr
- ko
- vi
- jv
- it
- ha
- th
- fa
- tl
- my
library_name: transformers
license: other
license_link: https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE
license_name: seallm
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- multilingual
- babel
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Tower-Babel/Babel-9B-Chat
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Babel-9B-Chat-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Babel-9B-Chat-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
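
As one hedged example (assuming the llama-cpp-python package is installed and one of the single-file quants listed below has already been downloaded), loading and prompting a quant looks roughly like this:

```python
from llama_cpp import Llama

# The path is an assumption: point it at whichever quant you downloaded.
llm = Llama(model_path="Babel-9B-Chat.Q4_K_M.gguf", n_ctx=4096)

out = llm("Translate 'good morning' into Spanish:", max_tokens=32)
print(out["choices"][0]["text"])
```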
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants. A short download sketch follows the table.)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Babel-9B-Chat-GGUF/resolve/main/Babel-9B-Chat.Q2_K.gguf) | Q2_K | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Babel-9B-Chat-GGUF/resolve/main/Babel-9B-Chat.Q3_K_S.gguf) | Q3_K_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Babel-9B-Chat-GGUF/resolve/main/Babel-9B-Chat.Q3_K_M.gguf) | Q3_K_M | 4.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Babel-9B-Chat-GGUF/resolve/main/Babel-9B-Chat.Q3_K_L.gguf) | Q3_K_L | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Babel-9B-Chat-GGUF/resolve/main/Babel-9B-Chat.IQ4_XS.gguf) | IQ4_XS | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Babel-9B-Chat-GGUF/resolve/main/Babel-9B-Chat.Q4_K_S.gguf) | Q4_K_S | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Babel-9B-Chat-GGUF/resolve/main/Babel-9B-Chat.Q4_K_M.gguf) | Q4_K_M | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Babel-9B-Chat-GGUF/resolve/main/Babel-9B-Chat.Q5_K_S.gguf) | Q5_K_S | 6.4 | |
| [GGUF](https://huggingface.co/mradermacher/Babel-9B-Chat-GGUF/resolve/main/Babel-9B-Chat.Q5_K_M.gguf) | Q5_K_M | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Babel-9B-Chat-GGUF/resolve/main/Babel-9B-Chat.Q6_K.gguf) | Q6_K | 7.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Babel-9B-Chat-GGUF/resolve/main/Babel-9B-Chat.Q8_0.gguf) | Q8_0 | 9.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Babel-9B-Chat-GGUF/resolve/main/Babel-9B-Chat.f16.gguf) | f16 | 18.1 | 16 bpw, overkill |
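
For scale, the f16 entry at 16 bits per weight over roughly 9 billion parameters works out to about 18 GB, which matches the 18.1 GB listed. The download sketch mentioned above uses huggingface_hub; the chosen filename is only an example, and any file from the table works:

```python
from huggingface_hub import hf_hub_download

# Example choice: the Q4_K_M quant; substitute any filename from the table.
path = hf_hub_download(
    repo_id="mradermacher/Babel-9B-Chat-GGUF",
    filename="Babel-9B-Chat.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF
```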
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->