Initialize project; model provided by the ModelHub XC community

Model: mradermacher/Hans_Wesker-1B-i1-GGUF
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-04-12 08:40:55 +08:00
commit 4881e514cd
27 changed files with 329 additions and 0 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.imatrix.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Hans_Wesker-1B.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ab757da39bc703d88ce103867db17196071859012ac6aef58be63d3c8322159b
size 643491616
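Each of these three-line blocks is a Git LFS pointer file, not the model weights themselves: it records the LFS spec version, a SHA-256 object id, and the real object's size in bytes. A minimal sketch of parsing that format (the helper name is ours; the field layout comes from the LFS pointer spec):

```python
def parse_lfs_pointer(text):
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:ab757da39bc703d88ce103867db17196071859012ac6aef58be63d3c8322159b
size 643491616
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # → 643491616 (bytes of the actual GGUF object)
```

The `size` field is what the forge UI displays next to each file; the `oid` is what `git lfs` uses to fetch the real blob from the LFS store.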


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f52a33e87ab0dd333bb2cb7cc7c56958e0d01c18e64aa0ae354c12ed7b5c66f1
size 639199264


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:97fac563b074b7fb005ea52525a4333f6fe97b1b3026b9a3215903842a28a26d
size 669789472


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cdf853abf4e91e5d62866c95350bb966a55ae6923a84977f460f3bf1cb3563af
size 664066336


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0c0b55155ab34fa1bc9e94553d22c765a33ebebcde5b0bc10efade03aac27022
size 657327136


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2d052778a64de53734f79545fe4afb00c255e07185cfbf5eb7ceb202d99d8761
size 650645536


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:26bb2a8031645bd3aabde7def169086296ca64f69befbebfd9abb7969d29b884
size 697066528


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d8136a153c497796d3303b16c677862700887d8489140b1fab2b566f332777d9
size 689820448


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ee89428b460a1f55a6a4762fe74129e47d95e63c27633ae053d36446b2aef2a5
size 689820448


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e822ee27e537bb496593b06ed19bf8eece96d3f2cb93afebf625812884204a91
size 680116000


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:15f1f809e4a9d7a6bf9e65335edf57d62c40d8323a2bccd8a354b281a41b3d43
size 721869088


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:00b1725e65d468818e2f036fcc8b819909042489dc1f20c4a9ed3814b310ab5f
size 714440992


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa5ce6e03a96e1299bd2764251fc16dffb505f8f6c1d5d9091441a99bc05b34b
size 689820448


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:25a22b467c330071c290a965eb121fe3c349168c5b0b07004fc5b830200b4b03
size 671277856


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:785566c69a647105c215bbb988efc2351a05050ea6cbc70aa84b39b29c88e79f
size 751581472


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cce75c7d097172587096541f2cd41e6ff597c89db1a7599b3f71cdf7efe56214
size 722422048


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:92bd6878cfaa4ab0d7c0a886e4bc3b7fbb38280f047e2007745177cd5cc0fb0a
size 688861984


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7d11570a025dafc2eccc6f900c58ef1fb6ba1d312f291c1e627ba92b69e4283c
size 721924384


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fcd2f1a078f76541a4f0185d1abde1a5bcd6481b92d029ea69178465c3081ecf
size 764041504


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:91b248a6b96c192e7583dbba01e6dce42cae32640facb68613a8deed4fe9d7ea
size 806064160


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b78a76564b789a2f751670168c24b56ed552a485c3f0fbce5dd6d0d1eb87ecf8
size 780998944


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ef04c6d9ec8d7d06a676480f86968739a522242eadf5671e1b783ea9faa8f31
size 851351584


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b0f77860031e340b1c6af49d504399c5a1e519125cee040196dc7af20ebfc9ed
size 836405536


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fea179a73b5129ece8798e74beb6a4f6948b41499f8710328f936d31cf642308
size 1011744544


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ab31c30bda4794bd48b61d3897860c76669fa5c14c01b99f1807708677bc50c7
size 1452416

README.md Normal file

@@ -0,0 +1,194 @@
---
base_model: UmbrellaInc/Hans_Wesker-1B
datasets:
- mlabonne/FineTome-100k
- ITCL/FineTomeOs
- Gryphe/ChatGPT-4o-Writing-Prompts
- dongguanting/ARPO-SFT-54K
- GreenerPastures/All-Your-Base-Full
- Gryphe/Opus-WritingPrompts
- HuggingFaceH4/MATH-500
- mlabonne/smoltalk-flat
- mlabonne/natural_reasoning-formatted
- OpenSPG/KAG-Thinker-training-dataset
- uclanlp/Brief-Pro
- CognitiveKernel/CognitiveKernel-Pro-SFT
- SuperbEmphasis/Claude-4.0-DeepSeek-R1-RP-SFWish
- QuixiAI/dolphin-r1
- mlabonne/lmsys-arena-human-sft-55k
language:
- tr
- ar
- af
- az
- es
- en
- el
- ro
- ru
- rm
- th
- uk
- uz
- pl
- pt
- fa
- sk
- sl
- da
- de
- nl
- fr
- fi
- ka
- hi
- hu
- hy
- ja
- kk
- kn
- ko
- ku
- ky
- la
- lb
- id
- is
- it
- zh
- cs
- vi
- be
- bg
- bs
- ne
- mn
library_name: transformers
license: gemma
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- npc
- roleplay
- rp
- nsfw
- low-refusals
- uncensored
- heretic
- abliterated
- unsloth
- finetune
- all use cases
- bfloat16
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- text-generation
- transformers
- safetensors
- gemma3
- mergekit
- dare_ties
- virus
- t-virus
- low-end
- conversational
- Not-For-All-Audiences
- failed-evolution
- child-wesker
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
Weighted/imatrix quants of https://huggingface.co/UmbrellaInc/Hans_Wesker-1B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Hans_Wesker-1B-i1-GGUF).***
Static quants are available at https://huggingface.co/mradermacher/Hans_Wesker-1B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-IQ1_S.gguf) | i1-IQ1_S | 0.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-IQ1_M.gguf) | i1-IQ1_M | 0.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-IQ2_S.gguf) | i1-IQ2_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-IQ2_M.gguf) | i1-IQ2_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-IQ3_S.gguf) | i1-IQ3_S | 0.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-IQ3_M.gguf) | i1-IQ3_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-Q4_0.gguf) | i1-Q4_0 | 0.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-Q4_1.gguf) | i1-Q4_1 | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Hans_Wesker-1B-i1-GGUF/resolve/main/Hans_Wesker-1B.i1-Q6_K.gguf) | i1-Q6_K | 1.1 | practically like static Q6_K |
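A common rule of thumb is to pick the largest quant that fits comfortably in your RAM or VRAM. That selection logic can be sketched as follows (the function is purely illustrative, not part of any tool; sizes in GB are taken from the table above):

```python
def pick_quant(quants, budget_gb):
    """Return the largest (name, size_gb) quant within the memory budget, or None."""
    fitting = [q for q in quants if q[1] <= budget_gb]
    return max(fitting, key=lambda q: q[1]) if fitting else None

# A few rows from the table above as (name, size in GB)
quants = [("i1-IQ1_S", 0.7), ("i1-Q4_K_M", 0.9), ("i1-Q6_K", 1.1)]
print(pick_quant(quants, 1.0))  # → ('i1-Q4_K_M', 0.9)
```

In practice leave headroom for the KV cache and context buffers on top of the model file size.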
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->