Initialize project; model provided by the ModelHub XC community

Model: mradermacher/Albert_Wesker-1B-i1-GGUF
Source: Original Platform
Author: ModelHub XC
Date: 2026-04-12 08:32:00 +08:00
Commit: 69099f81b3
27 changed files with 332 additions and 0 deletions

.gitattributes vendored Normal file (60 lines)

@@ -0,0 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.imatrix.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Albert_Wesker-1B.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
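The attribute lines above route large binary files through Git LFS: any path matching a pattern gets the `lfs` filter instead of being stored directly in git. As a rough illustration (using Python's `fnmatch`, which only approximates gitattributes glob semantics and ignores rules like `saved_model/**/*`), one can check which filenames fall under a few of these patterns:

```python
from fnmatch import fnmatch

# A small subset of the patterns from the .gitattributes above.
# Note the GGUF files in this repo are listed by exact name, not via *.gguf.
LFS_PATTERNS = [
    "*.safetensors",
    "*.bin",
    "*tfevents*",
    "Albert_Wesker-1B.imatrix.gguf",
]

def is_lfs_tracked(filename: str) -> bool:
    """Return True if the filename matches any of the LFS-routed patterns.

    fnmatch is an approximation of gitattributes matching, used here
    only to illustrate how the patterns select files.
    """
    return any(fnmatch(filename, pat) for pat in LFS_PATTERNS)

print(is_lfs_tracked("Albert_Wesker-1B.imatrix.gguf"))  # True
print(is_lfs_tracked("README.md"))                      # False
```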


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7d19f7810ec83aa721f3c27e8a93053bb19a6980be1f2b7f05e179ba8790b666
size 643491584


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7b343d47301fa23dbe9e39d83e66f1425f3d01ef1003d74cbd39587a25fcde2c
size 639199232


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a086acd9616f57196f0fe529ad38c13ed620fb3ae50e6cf2fafe3061a2518470
size 669789440


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9f24600a21cc89077e2948e06ce3a4e0636db8505c20c9179d248fd52de52f85
size 664066304


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:87e343692a623530eaf460b1b576727d8664556de7211df82dbfb43c604982ec
size 657327104


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:71399495b088111e4d0e247cc80fb8dde26fdef8888c82e1172520e7bdf3a25a
size 650645504


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:491bc2e5903fcfb01ce726a1bbf24dffd02c61c76ecc8a5ef38b9bc0a77b3efa
size 697066496


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2679f5c5015f7fcdae64a4b962af6f36d744a802f83d7e0936ab2aa8c985bdac
size 689820416


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:302e49c9a0fc2849fd6fb11ad285cc7d13706b00af2609544a8d9fa229973a57
size 689820416


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e82df5b4081f7fbbc13de81bddcd369b79dc4c8f87ef5b23fa1d93101c1ea7bf
size 680115968


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5ebc6a992a3d2685d57a173f673cfc7336a10e0603dcc9732f0bbb4f119faa52
size 721869056


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:510cc7f742a65b0d29ef9fe13202b3109d0b47440082696541ea956310dda40f
size 714440960


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ff81da53a898c6b8fa1fdbff1a8fc9ed317f7d28d35879b367a92a6aa7beceda
size 689820416


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2ef44ab145b6686b11d0f8cce5849d381625b1517534a7c88730e2b2ae3a777d
size 671277824


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e307626d16b00c9403b37bbced18430ef5cb71c586f6af11ad829e9f080a7254
size 751581440


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d03554887d8365f713dda1ec38dd6ef87996ec9fc7ea2523f782479b60b3fa0a
size 722422016


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e1fe02df88c050ba88e7d136be907df6872982d216da430250562c29381ee710
size 688861952


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d2dfd2f8f2de83212de6c25fbccb8f253ebdee0ca59eb0f1979469f67b0808c9
size 721924352


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b427a3be3e41f85c618405f6aef2a82c0b0010e4319ecb4bc6567714ccae5c82
size 764041472


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:106b73c57874362ca8eeb5b7cdc403008cc4a6d21ecad46237367eaa730dbe20
size 806064128


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e88f22076b007025cdc5c7f741d4f23948708d0438ea8c135285bca07cee9eed
size 780998912


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a83f0bc704d4a86ef210a688ef4cd746571fa570d760cc2c243a7288250f15cf
size 851351552


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:28cb453d79cebdf8378dcc7da880f7dd57a402352c3e9b149ca629176c1f10bf
size 836405504


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a9d9a1828c8f72a23ce4caed620c70e3d3883e607aebd4a6b38dd43b44ef4c19
size 1011744512


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:764109c543e7a81ba3247924f11a8c22f422f997f547c65afe27655e994aa5cd
size 1452416
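Each of the three-line blocks above is a Git LFS pointer file: a tiny text stub (spec version, sha256 oid, byte size) stored in git in place of the multi-hundred-megabyte blob. A minimal sketch of parsing such a pointer (`parse_lfs_pointer` is an illustrative helper, not part of any tool in this repo):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer (spec v1) into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # Basic sanity checks matching the pointer format used above.
    assert fields["version"].startswith("https://git-lfs.github.com/spec/v1")
    assert fields["oid"].startswith("sha256:")
    fields["size"] = int(fields["size"])
    return fields

# The last pointer block from this commit:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:764109c543e7a81ba3247924f11a8c22f422f997f547c65afe27655e994aa5cd
size 1452416
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 1452416
```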

README.md Normal file (197 lines)

@@ -0,0 +1,197 @@
---
base_model: UmbrellaInc/Albert_Wesker-1B
datasets:
- TeichAI/glm-4.7-2000x
- chimbiwide/RolePlay-NPCv2
- berkeruveyik/toxic-speech-annotated-dataset
- mlabonne/FineTome-100k
- ITCL/FineTomeOs
- Gryphe/ChatGPT-4o-Writing-Prompts
- dongguanting/ARPO-SFT-54K
- GreenerPastures/All-Your-Base-Full
- Gryphe/Opus-WritingPrompts
- HuggingFaceH4/MATH-500
- mlabonne/smoltalk-flat
- mlabonne/natural_reasoning-formatted
- OpenSPG/KAG-Thinker-training-dataset
- uclanlp/Brief-Pro
- CognitiveKernel/CognitiveKernel-Pro-SFT
- SuperbEmphasis/Claude-4.0-DeepSeek-R1-RP-SFWish
- QuixiAI/dolphin-r1
- mlabonne/lmsys-arena-human-sft-55k
language:
- tr
- ar
- af
- az
- es
- en
- el
- ro
- ru
- rm
- th
- uk
- uz
- pl
- pt
- fa
- sk
- sl
- da
- de
- nl
- fr
- fi
- ka
- hi
- hu
- hy
- ja
- kk
- kn
- ko
- ku
- ky
- la
- lb
- id
- is
- it
- zh
- cs
- vi
- be
- bg
- bs
- ne
- mn
library_name: transformers
license: gemma
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- npc
- roleplay
- rp
- nsfw
- low-refusals
- uncensored
- heretic
- abliterated
- unsloth
- finetune
- all use cases
- bfloat16
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- slice of life
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- text-generation
- transformers
- safetensors
- gemma3
- mergekit
- karcher mean
- virus
- t-virus
- low-end
- conversational
- think
- Not-For-All-Audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/UmbrellaInc/Albert_Wesker-1B

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Albert_Wesker-1B-i1-GGUF).***

static quants are available at https://huggingface.co/mradermacher/Albert_Wesker-1B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
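None of this repo's files are split, but for larger repos that ship multi-part GGUF uploads, concatenation is a plain byte-wise join of the parts in order (the shell equivalent is `cat part1 part2 > whole`). A small sketch, with hypothetical file names and tiny stand-in contents (real parts are multi-gigabyte):

```python
from pathlib import Path

def concat_parts(part_paths, out_path):
    """Byte-wise concatenation of split GGUF parts, in order."""
    with open(out_path, "wb") as out:
        for part in part_paths:
            out.write(Path(part).read_bytes())

# Stand-in part files for illustration:
Path("a.gguf.part1").write_bytes(b"GGUF-head")
Path("a.gguf.part2").write_bytes(b"-tail")
concat_parts(["a.gguf.part1", "a.gguf.part2"], "a.gguf")
print(Path("a.gguf").read_bytes())  # b'GGUF-head-tail'
```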
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-IQ1_S.gguf) | i1-IQ1_S | 0.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-IQ1_M.gguf) | i1-IQ1_M | 0.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-IQ2_S.gguf) | i1-IQ2_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-IQ2_M.gguf) | i1-IQ2_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-IQ3_S.gguf) | i1-IQ3_S | 0.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-IQ3_M.gguf) | i1-IQ3_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-Q4_0.gguf) | i1-Q4_0 | 0.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-Q4_1.gguf) | i1-Q4_1 | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Albert_Wesker-1B-i1-GGUF/resolve/main/Albert_Wesker-1B.i1-Q6_K.gguf) | i1-Q6_K | 1.1 | practically like static Q6_K |
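The size differences in the table come down to effective bits per weight. A back-of-the-envelope estimate (the parameter count is an assumption based on the model's "1B" name, and the table's rounded 0.9 GB figure plus a small model's proportionally large embedding tables push the result well above Q4_K_M's nominal ~4.8 bits per weight):

```python
def bits_per_weight(file_size_bytes: int, n_params: float) -> float:
    """Rough effective bits per weight for a quantized model file."""
    return file_size_bytes * 8 / n_params

# Using the table's 0.9 GB Q4_K_M size and an assumed ~1e9 parameters.
print(round(bits_per_weight(900_000_000, 1e9), 2))  # 7.2
```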

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->