Initialize project; model provided by the ModelHub XC community

Model: mradermacher/TinyLLama-NSFW-Chatbot-GGUF
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-04-11 01:52:55 +08:00
commit e40de74656
14 changed files with 154 additions and 0 deletions

.gitattributes vendored Normal file (47 lines)

@@ -0,0 +1,47 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
TinyLLama-NSFW-Chatbot.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
TinyLLama-NSFW-Chatbot.f16.gguf filter=lfs diff=lfs merge=lfs -text
TinyLLama-NSFW-Chatbot.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
TinyLLama-NSFW-Chatbot.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
TinyLLama-NSFW-Chatbot.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
TinyLLama-NSFW-Chatbot.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
TinyLLama-NSFW-Chatbot.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
TinyLLama-NSFW-Chatbot.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
TinyLLama-NSFW-Chatbot.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
TinyLLama-NSFW-Chatbot.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
TinyLLama-NSFW-Chatbot.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
TinyLLama-NSFW-Chatbot.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
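The rules above tell Git which files to store via LFS rather than in the repository itself. As a quick sanity check, a small standard-library Python sketch can show which of these rules a given filename would match; the pattern list is a hand-picked subset of the file above, and gitattributes glob semantics differ slightly from `fnmatch` (e.g. for `**` and path separators), so this is only an approximation:

```python
from fnmatch import fnmatch

# A hand-picked subset of the LFS rules from .gitattributes above.
LFS_PATTERNS = [
    "*.safetensors",
    "*.bin",
    "*tfevents*",
    "TinyLLama-NSFW-Chatbot.Q2_K.gguf",  # the GGUFs are listed by exact name
]

def tracked_by_lfs(filename: str) -> bool:
    """Return True if any of the (subset of) LFS globs matches the name."""
    return any(fnmatch(filename, pattern) for pattern in LFS_PATTERNS)

print(tracked_by_lfs("model.safetensors"))  # True
print(tracked_by_lfs("README.md"))          # False
```

Matching a file here means Git commits only a small pointer text file, while the actual bytes live in LFS storage — which is exactly what the pointer files later in this commit show.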

README.md Normal file (71 lines)

@@ -0,0 +1,71 @@
---
base_model: bilalRahib/TinyLLama-NSFW-Chatbot
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bilalRahib/TinyLLama-NSFW-Chatbot
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/TinyLLama-NSFW-Chatbot-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
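For this repository each quant is a single file, so no concatenation is needed. A minimal Python sketch of building the direct-download URL for one quant (the repo path and file naming are taken from the table below; the load step is commented out and assumes the `llama-cpp-python` package is installed):

```python
# Build the direct download ("resolve") URL for one quant of this repo.
REPO = "mradermacher/TinyLLama-NSFW-Chatbot-GGUF"

def gguf_url(quant: str) -> str:
    """URL of the GGUF file for a given quant type, e.g. "Q4_K_M"."""
    return (f"https://huggingface.co/{REPO}/resolve/main/"
            f"TinyLLama-NSFW-Chatbot.{quant}.gguf")

url = gguf_url("Q4_K_M")

# After downloading, the file could be loaded with e.g. llama-cpp-python:
# from llama_cpp import Llama
# llm = Llama(model_path="TinyLLama-NSFW-Chatbot.Q4_K_M.gguf")
```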
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TinyLLama-NSFW-Chatbot-GGUF/resolve/main/TinyLLama-NSFW-Chatbot.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLLama-NSFW-Chatbot-GGUF/resolve/main/TinyLLama-NSFW-Chatbot.Q3_K_S.gguf) | Q3_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLLama-NSFW-Chatbot-GGUF/resolve/main/TinyLLama-NSFW-Chatbot.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TinyLLama-NSFW-Chatbot-GGUF/resolve/main/TinyLLama-NSFW-Chatbot.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLLama-NSFW-Chatbot-GGUF/resolve/main/TinyLLama-NSFW-Chatbot.IQ4_XS.gguf) | IQ4_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLLama-NSFW-Chatbot-GGUF/resolve/main/TinyLLama-NSFW-Chatbot.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TinyLLama-NSFW-Chatbot-GGUF/resolve/main/TinyLLama-NSFW-Chatbot.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TinyLLama-NSFW-Chatbot-GGUF/resolve/main/TinyLLama-NSFW-Chatbot.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLLama-NSFW-Chatbot-GGUF/resolve/main/TinyLLama-NSFW-Chatbot.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLLama-NSFW-Chatbot-GGUF/resolve/main/TinyLLama-NSFW-Chatbot.Q6_K.gguf) | Q6_K | 1.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TinyLLama-NSFW-Chatbot-GGUF/resolve/main/TinyLLama-NSFW-Chatbot.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TinyLLama-NSFW-Chatbot-GGUF/resolve/main/TinyLLama-NSFW-Chatbot.f16.gguf) | f16 | 2.3 | 16 bpw, overkill |
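As a rough cross-check of the Size/GB column, bits per weight (bpw) can be estimated as total file bits divided by parameter count. The sketch below assumes TinyLlama's roughly 1.1B parameters (an assumption; the count is not stated in this README, and the file also contains metadata and differently-quantized tensors, so this is only approximate):

```python
def approx_bpw(size_bytes: int, n_params: float = 1.1e9) -> float:
    """Approximate bits per weight: total bits / parameter count."""
    return size_bytes * 8 / n_params

# f16 file size from the LFS pointer files in this commit: 2,201,018,176 bytes
print(round(approx_bpw(2_201_018_176), 1))  # ~16.0, matching "16 bpw" above
```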
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6bd604a73b1d383b2bc41e333f3c79a7c6e0c673038d60211443afd495f5fb51
size 609808192
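Each binary GGUF file in this commit is stored as a three-line Git LFS pointer like the one above (space-separated key/value lines). A small Python sketch parsing that format:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its version, oid, and size."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),  # size of the real file, in bytes
    }

# The pointer shown above in this commit:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:6bd604a73b1d383b2bc41e333f3c79a7c6e0c673038d60211443afd495f5fb51
size 609808192"""

print(parse_lfs_pointer(pointer)["size"])  # 609808192
```

The `oid` is the SHA-256 of the actual file contents, so it can double as an integrity check after downloading the real GGUF from LFS storage.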


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:84dcfecbe73a8c335b5b2205d37815ad3ca26f0a36b9c20fae1cd694c54937d7
size 432131904


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:561031647957295927b65a84b8443e4fabff2ec4b6fdb27001047e40a836182c
size 591527744


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:32557db6647ee2a26e8ac15e2143e4cf2fd78434704912881a663225bdb55f7d
size 548405056


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b07ce1a674a9caec77e9e029f41adb76f021a61f6b95b90d6be607ce1dd954af
size 499343168


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:594facae28244fb58bd51cc7228670d9d0ef2dde4e736d57751789e29dfbd04a
size 667815744


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f67f47ca120ea46746b2e618f23f537afd43609f0720c335c0ab4e0e505f0a16
size 639872832


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3d35b9e113d21c31b420845bcea21c9c3bcaaab0a454dfc175ee18e41b19a915
size 782044992


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a0744d19dab290bbfb167d3a841dc0da599648efdbc4fae2b941ffd2a4765ef5
size 766029632


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5988bbd0374bc1dc4045f30a34abacfcbc9242b1895645507b2ca3c5756a594c
size 903413568


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:84775c282a2bb2bd408ad912697d06f96e8e3d7d6a03098f86b1c0134cc27275
size 1169809216


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f75d907e90274a4879db18fd9a8b28a4115ee111b6bed323f3eb0cb2ffba6700
size 2201018176