auto-patch README.md

This commit is contained in:
team mradermacher
2025-07-31 10:02:07 +00:00
committed by system
parent e94b37ade3
commit cd3a06dd5a


@@ -30,6 +30,8 @@ library_name: transformers
 license: other
 license_link: https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE
 license_name: seallm
+mradermacher:
+  readme_rev: 1
 quantized_by: mradermacher
 tags:
 - multilingual
@@ -45,6 +47,9 @@ tags:
 static quants of https://huggingface.co/Tower-Babel/Babel-9B-Chat
 <!-- provided-files -->
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Babel-9B-Chat-GGUF).***
 weighted/imatrix quants are available at https://huggingface.co/mradermacher/Babel-9B-Chat-i1-GGUF
 ## Usage
@@ -88,6 +93,6 @@ questions you might have and/or if you want some other model quantized.
 I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
 me use its servers and providing upgrades to my workstation to enable
-this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
+this work in my free time.
 <!-- end -->