auto-patch README.md
committed by system
parent e94b37ade3
commit cd3a06dd5a
@@ -30,6 +30,8 @@ library_name: transformers
 license: other
 license_link: https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE
 license_name: seallm
+mradermacher:
+  readme_rev: 1
 quantized_by: mradermacher
 tags:
 - multilingual
@@ -45,6 +47,9 @@ tags:
 static quants of https://huggingface.co/Tower-Babel/Babel-9B-Chat
 
 <!-- provided-files -->
 
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Babel-9B-Chat-GGUF).***
+
+weighted/imatrix quants are available at https://huggingface.co/mradermacher/Babel-9B-Chat-i1-GGUF
 ## Usage
 
@@ -88,6 +93,6 @@ questions you might have and/or if you want some other model quantized.
 
 I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
 me use its servers and providing upgrades to my workstation to enable
-this work in my free time.
+this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
 
 <!-- end -->