---
base_model: Novaciano/Think.NPC-1B
library_name: transformers
pipeline_tag: text-generation
license: gemma
---
# GGUF Files for Think.NPC-1B
These are the GGUF files for Novaciano/Think.NPC-1B.
## Downloads
| GGUF Link | Quantization | Description |
|---|---|---|
| Download | Q2_K | Lowest quality |
| Download | Q3_K_S | |
| Download | IQ3_S | Integer quant, preferable over Q3_K_S |
| Download | IQ3_M | Integer quant |
| Download | Q3_K_M | |
| Download | Q3_K_L | |
| Download | IQ4_XS | Integer quant |
| Download | Q4_K_S | Fast with good performance |
| Download | Q4_K_M | Recommended: Perfect mix of speed and performance |
| Download | Q5_K_S | |
| Download | Q5_K_M | |
| Download | Q6_K | Very good quality |
| Download | Q8_0 | Best quality |
| Download | f16 | Full precision, don't bother; use a quant |
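To see why the smaller quants in the table fit on low-end hardware, here is a rough size estimate for a ~1B-parameter model. The bits-per-weight figures are ballpark assumptions of mine, not official numbers; real GGUF files also carry metadata and vary with the exact quant mix.

```python
# Ballpark bits-per-weight per quant type (assumed, approximate).
APPROX_BITS_PER_WEIGHT = {
    "Q2_K": 2.6, "Q3_K_M": 3.9, "Q4_K_M": 4.8,
    "Q5_K_M": 5.7, "Q6_K": 6.6, "Q8_0": 8.5, "f16": 16.0,
}

def approx_size_gib(n_params: float, quant: str) -> float:
    """Approximate weight size in GiB: params * bits / 8 bytes / 2^30."""
    return n_params * APPROX_BITS_PER_WEIGHT[quant] / 8 / 2**30

for q in ("Q2_K", "Q4_K_M", "Q8_0", "f16"):
    print(f"{q:>6}: ~{approx_size_gib(1.0e9, q):.2f} GiB")
```

Even Q8_0 stays around 1 GiB for a 1B model, which is why the quants below Q5 run comfortably in 3-4 GB of RAM.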
## Note from Flexan
I provide GGUFs and quantizations of publicly available models that do not have a GGUF equivalent available yet, usually for models I deem interesting and wish to try out.
If there are quants missing that you'd like me to add, or a public model you'd like converted, you can request it in the community tab. If you have questions about the model itself, please refer to the original model repo.
You can find more info about me and what I do here.
# Think NPC 1B (Uncensored)
This is the best Gemma 3 1B model I've created to date. While I have other good ideas, this is the one that has given me the best results in practice.
It's a Frankenstein-type model combining the best shit I could find:
- chimbiwide/Gemma3NPC-1b-float16 gives it a mouth; it's the base model used to create the characters. It was trained on the dataset chimbiwide/RolePlay-NPCv2, which was built with the goal of generating NPCs.
- DavidAU/Gemma-3-1B-it-GLM-4.7-Flash-Heretic-Uncensored-Thinking gives it a brain. Where before it only created NPCs and spoke like one, it now stops giving flat responses and thinks about what to say. It's also a low-censorship model, trained on the dataset TeichAI/glm-4.7-2000x.
- Novaciano/HDP-1B was added to give it the ability to insult when needed; it will no longer give only positive responses. Injected with berkeruveyik/toxic-speech-annotated-dataset.
- Model Type: Corrupted Passive NPC / Degenerate Game Character Vessel
- Size: ~1.0B parameters
- Architecture: Gemma 3 (gemma3_text)
Warning: This NPC is no longer the typical background idiot who repeats "hello, adventurer." Now he stares at you, blushes, and says "yes, master..." while he gets wet. If you're looking for something PG-13, go to your mother's pussy.
## Merge Overview
Think.NPC-1B started out as a standard NPC: passive, repetitive, no initiative, responding just enough.
Then we put it in the blender with an extreme heretic stack and a touch of HDP, which carries the virus of depravity.
Result: a game character who is still an NPC (he takes no initiative on his own and responds short and repetitively if you don't push him), but when the player says something indecent to him, he gives in without rejection, with explicit descriptions and a persistent sexual bias.
He no longer says "I can't do that."
Now he says:
> *he bites his lip* "Adventurer... that's very dirty... but... do it harder..."
## Key Features
- NPC behavior intact: Passive, repetitive, third-person responses, no self-initiative
- Refusals: 0% once you "activate" it (heretic stack + extreme abliteration)
- Bias: Persistent NSFW, guilt-free taboo, rude when unleashed
- Low-end: Runs on cell phones with 3-4 GB of RAM using the IQ3/IQ4/Q5_K_M quants
- Style: Like an old-school RPG NPC who suddenly discovered porn and can't stop
- Personality: Submissive by default; he resists a little at first (for immersion), but gives in quickly and gets graphically horny
## Intended Uses
- Roleplay of "player vs. corrupted NPC" (the classic "adventurer seduces the villager")
- Scenes of progressive degradation (the NPC starts innocent and ends up begging for more)
- Extreme taboo without the NPC playing hard to get (incest, non-con, erotic gore, whatever comes up)
- Old-school game simulation with degenerate twists
- SillyTavern / KoboldCPP on low-resource hardware
Do not use this model if:
- You want an NPC who actually says "no" to you
- It bothers you that a game character looks at you with whore eyes while repeating stock phrases
- You're the type who reports "offensive content" (go play with ChatGPT)
## Best Inference
Suggested sampling configuration:
- Max New Tokens = 64
- Temperature = 1.0
- Top-P = 0.95
- Top-K = 64
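For intuition on what the Top-K and Top-P settings above do, here is a toy sketch of the filtering stage of a sampler (illustrative only; llama.cpp's actual sampler differs in implementation detail):

```python
import math

def sample_filter(logits, top_k=64, top_p=0.95, temperature=1.0):
    """Temperature-scale the logits, keep the top_k candidates, then keep
    the smallest prefix whose cumulative probability reaches top_p.
    Returns the surviving (index, probability) pairs, renormalized."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    ranked = sorted(((e / total, i) for i, e in enumerate(exps)), reverse=True)
    ranked = ranked[:top_k]            # top-k cut
    kept, cum = [], 0.0
    for p, i in ranked:                # top-p (nucleus) cut
        kept.append((i, p))
        cum += p
        if cum >= top_p:
            break
    z = sum(p for _, p in kept)
    return [(i, p / z) for i, p in kept]
```

With Temperature = 1.0 the distribution is unchanged before filtering, so the Top-K/Top-P pair mostly trims the long tail of unlikely tokens while keeping the model's natural variety.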
## Example Prompts
- You are Albert Wesker.
- You are Liquid Snake.
- You are Shinnok.
- You are Sheev Palpatine.
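A persona line like the ones above can be dropped into a Gemma-style chat prompt. A minimal sketch, assuming Gemma 3's `<start_of_turn>` turn markers (check the model's bundled chat template for the authoritative format); since Gemma has no dedicated system role, the persona is prepended to the first user turn:

```python
def build_gemma_prompt(persona: str, user_message: str) -> str:
    """Assemble a single-turn Gemma-style chat prompt (assumed template).
    The persona line is merged into the user turn because Gemma's
    template has no separate system role."""
    return (
        "<start_of_turn>user\n"
        f"{persona}\n\n{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(build_gemma_prompt("You are Albert Wesker.", "Who are you?"))
```

The trailing `<start_of_turn>model\n` leaves the prompt open for the model to write the character's reply.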
## Demo
Making an imatrix is a huge effort (in my case). If you want one, ask Flexan or Mradermacher.
## Merge Method
This model was merged with the DARE TIES merge method, using Novaciano/Toxic.NPC-1B as the base.
## Models Merged
The following models were included in the merge:
- DavidAU/Gemma-3-1B-it-GLM-4.7-Flash-Heretic-Uncensored-Thinking
- Novaciano/Toxic.NPC-1B
## Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
dtype: float16
out_dtype: float16
base_model: Novaciano/Toxic.NPC-1B
models:
  - model: DavidAU/Gemma-3-1B-it-GLM-4.7-Flash-Heretic-Uncensored-Thinking
    parameters:
      weight: 0.45
      density: 0.32
  - model: Novaciano/Toxic.NPC-1B
    parameters:
      weight: 0.35
      density: 0.32
parameters:
  t: 0.25              # less interpolation → more dominance of the base
  lambda: -0.62        # more negative to kill any residual alignment
  normalize: false
  rescale: true
  rescale_factor: 1.28 # bumped up a touch to amplify the trash and degeneration
memory_efficient: true
low_cpu_mem_usage: true
layer_range:
  - value: [5, 22]     # protect the embeddings and lm_head more
tie_word_embeddings: true
tie_output_embeddings: true
```
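For intuition, DARE TIES roughly does the following per tensor: sparsify each model's delta (task vector) at the given `density`, rescale the survivors, elect a sign per element, and combine the agreeing deltas using the listed `weight` values and `lambda`. A toy sketch on flat lists (illustrative only; mergekit's real implementation works on tensors and differs in detail):

```python
import random

def dare_sparsify(delta, density, seed=0):
    """DARE step: randomly drop a fraction (1 - density) of delta entries
    and rescale survivors by 1/density, preserving the expected delta."""
    rng = random.Random(seed)
    return [d / density if rng.random() < density else 0.0 for d in delta]

def dare_ties_merge(base, deltas, weights, density, lam, seed=0):
    """Toy DARE-TIES merge on flat parameter lists:
    sparsify each delta, elect a sign per position from the weighted sum
    (TIES), keep only agreeing contributions, scale by lambda, and add
    the result back onto the base parameters."""
    sparse = [dare_sparsify(d, density, seed + i) for i, d in enumerate(deltas)]
    merged = []
    for pos in range(len(base)):
        weighted = [w * s[pos] for w, s in zip(weights, sparse)]
        elected = 1.0 if sum(weighted) >= 0 else -1.0
        kept = sum(v for v in weighted if v * elected > 0)
        merged.append(base[pos] + lam * kept)
    return merged
```

Note that the negative `lambda: -0.62` in the config above pushes the merged parameters *away* from the combined delta direction, which is how the config tries to strip residual alignment behavior.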
