Project initialization; model provided by the ModelHub XC community.
Model: Flexan/Novaciano-Think.NPC-1B-GGUF Source: Original Platform
---
base_model: Novaciano/Think.NPC-1B
library_name: transformers
datasets:
- TeichAI/glm-4.7-2000x
- chimbiwide/RolePlay-NPCv2
- berkeruveyik/toxic-speech-annotated-dataset
language:
- en
- es
pipeline_tag: text-generation
tags:
- npc
- roleplay
- rp
- nsfw
- low-refusals
- uncensored
- heretic
- abliterated
- unsloth
- finetune
- all use cases
- bfloat16
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- not-for-all-audiences
license: gemma
metrics:
- accuracy
---

# GGUF Files for Think.NPC-1B

These are the GGUF files for [Novaciano/Think.NPC-1B](https://huggingface.co/Novaciano/Think.NPC-1B).

## Downloads

| GGUF Link | Quantization | Description |
| ---- | ----- | ----------- |
| [Download](https://huggingface.co/Flexan/Novaciano-Think.NPC-1B-GGUF/resolve/main/Think.NPC-1B.Q2_K.gguf) | Q2_K | Lowest quality |
| [Download](https://huggingface.co/Flexan/Novaciano-Think.NPC-1B-GGUF/resolve/main/Think.NPC-1B.Q3_K_S.gguf) | Q3_K_S | |
| [Download](https://huggingface.co/Flexan/Novaciano-Think.NPC-1B-GGUF/resolve/main/Think.NPC-1B.IQ3_S.gguf) | IQ3_S | Integer quant, preferable over Q3_K_S |
| [Download](https://huggingface.co/Flexan/Novaciano-Think.NPC-1B-GGUF/resolve/main/Think.NPC-1B.IQ3_M.gguf) | IQ3_M | Integer quant |
| [Download](https://huggingface.co/Flexan/Novaciano-Think.NPC-1B-GGUF/resolve/main/Think.NPC-1B.Q3_K_M.gguf) | Q3_K_M | |
| [Download](https://huggingface.co/Flexan/Novaciano-Think.NPC-1B-GGUF/resolve/main/Think.NPC-1B.Q3_K_L.gguf) | Q3_K_L | |
| [Download](https://huggingface.co/Flexan/Novaciano-Think.NPC-1B-GGUF/resolve/main/Think.NPC-1B.IQ4_XS.gguf) | IQ4_XS | Integer quant |
| [Download](https://huggingface.co/Flexan/Novaciano-Think.NPC-1B-GGUF/resolve/main/Think.NPC-1B.Q4_K_S.gguf) | Q4_K_S | Fast with good performance |
| [Download](https://huggingface.co/Flexan/Novaciano-Think.NPC-1B-GGUF/resolve/main/Think.NPC-1B.Q4_K_M.gguf) | Q4_K_M | **Recommended:** Perfect mix of speed and performance |
| [Download](https://huggingface.co/Flexan/Novaciano-Think.NPC-1B-GGUF/resolve/main/Think.NPC-1B.Q5_K_S.gguf) | Q5_K_S | |
| [Download](https://huggingface.co/Flexan/Novaciano-Think.NPC-1B-GGUF/resolve/main/Think.NPC-1B.Q5_K_M.gguf) | Q5_K_M | |
| [Download](https://huggingface.co/Flexan/Novaciano-Think.NPC-1B-GGUF/resolve/main/Think.NPC-1B.Q6_K.gguf) | Q6_K | Very good quality |
| [Download](https://huggingface.co/Flexan/Novaciano-Think.NPC-1B-GGUF/resolve/main/Think.NPC-1B.Q8_0.gguf) | Q8_0 | Best quality |
| [Download](https://huggingface.co/Flexan/Novaciano-Think.NPC-1B-GGUF/resolve/main/Think.NPC-1B.f16.gguf) | f16 | Full precision; don't bother, use a quant instead |

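Every link in the table above follows a single URL pattern. As a convenience sketch (the `gguf_url` helper is hypothetical, not part of any official tooling), the download URLs can be generated programmatically:

```python
# Build download URLs for the quantizations listed in the table above.
# The repo id and filename pattern are taken from the links in this README.

REPO = "https://huggingface.co/Flexan/Novaciano-Think.NPC-1B-GGUF"

def gguf_url(quant: str) -> str:
    """Return the resolve URL for a given quantization label, e.g. 'Q4_K_M'."""
    return f"{REPO}/resolve/main/Think.NPC-1B.{quant}.gguf"

quants = ["Q2_K", "Q3_K_S", "IQ3_S", "IQ3_M", "Q3_K_M", "Q3_K_L",
          "IQ4_XS", "Q4_K_S", "Q4_K_M", "Q5_K_S", "Q5_K_M", "Q6_K",
          "Q8_0", "f16"]
urls = [gguf_url(q) for q in quants]
print(urls[8])  # the recommended Q4_K_M file
```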
## Note from Flexan

I provide GGUFs and quantizations of publicly available models that do not yet have a GGUF equivalent, usually for models **I deem interesting and wish to try out**.

If some quants you'd like are missing, you may request them in the community tab.
If you want a public model to be converted, you can also request that in the community tab.
If you have questions about this model, please refer to [the original model repo](https://huggingface.co/Novaciano/Think.NPC-1B).

You can find more info about me and what I do [here](https://huggingface.co/Flexan/Flexan).

# Think NPC 1B (Uncensored)

This is my best Gemma3 1B model created to date. While I have other good ideas, this is the one that has given me the best results in practice.

It's a Frankenstein-type model combining the best shit I could find:

- [chimbiwide/Gemma3NPC-1b-float16](https://huggingface.co/chimbiwide/Gemma3NPC-1b-float16) to give it a mouth; it's the base model used to create the characters. It incorporates the [chimbiwide/RolePlay-NPCv2](https://huggingface.co/datasets/chimbiwide/RolePlay-NPCv2) dataset, which was created with the goal of generating NPCs.
- [DavidAU/Gemma-3-1B-it-GLM-4.7-Flash-Heretic-Uncensored-Thinking](https://huggingface.co/DavidAU/Gemma-3-1B-it-GLM-4.7-Flash-Heretic-Uncensored-Thinking) to give it a brain. Where before it only created NPCs and spoke like NPCs, it now stops giving flat responses and thinks about what to say. It's also a low-censorship model, and includes the [TeichAI/glm-4.7-2000x](https://huggingface.co/datasets/TeichAI/glm-4.7-2000x) dataset.
- [Novaciano/HDP-1B](https://huggingface.co/Novaciano/HDP-1B) was added to give it the ability to insult when needed; it no longer gives only positive responses. It was injected with the [berkeruveyik/toxic-speech-annotated-dataset](https://huggingface.co/datasets/berkeruveyik/toxic-speech-annotated-dataset) dataset.

---

**Model Type:** Corrupted Passive NPC / Degenerate Game Character Vessel

**Size:** ~1.0B parameters

**Architecture:** Gemma-3 (gemma3_text)

**Warning:** This NPC is no longer the typical background idiot who repeats "hello, adventurer". Now he stares at you, blushes, and says "yes, master..." while he gets wet. If you're looking for something PG-13, go to your mother's pussy.

---

## Merge Overview

Think.NPC-1B started out as a standard NPC: passive, repetitive, no initiative, responding just barely enough.

But we put it in the blender with extreme heretic and a touch of HDP, which carries the virus of depravity.

**Result:** A game character who is still an NPC (he takes no initiative on his own, and responds short and repetitive if you don't push him), but when the player says something indecent to him, he gives himself over without rejection, with explicit descriptions and a persistent sexual bias.

He no longer says "I can't do that".

**Now he says:**

*he bites his lip* "Adventurer... that's very dirty... but... do it harder..."

## Key Features

- **NPC behavior intact:** Passive, repetitive, third-person responses, no self-initiative
- **Refusals:** 0% once you "activate" it (heretic stack + extreme abliteration)
- **Bias:** Persistent NSFW, guilt-free taboo, rude when let loose
- **Low-end hardware:** Runs on phones with 3-4 GB of RAM using the IQ3/IQ4/Q5_K_M quants
- **Style:** Like an old RPG NPC who suddenly discovered porn and can't stop
- **Personality:** Submissive by default; he resists a touch at first (for immersion), but gives in quickly and gets graphically horny

---
## Intended Uses

- "Player vs. corrupted NPC" roleplay (the classic "the adventurer seduces the villager")
- Progressive degradation scenes (the NPC starts innocent and ends up asking for more)
- Extreme taboo without the NPC playing hard to get (incest, non-con, erotic gore, whatever comes up)
- Old-school game simulation with degenerate twists
- SillyTavern / KoboldCPP on low-resource hardware

### Do not use this model if:

- You want an NPC who actually says "no" to you
- It bothers you that a game character looks at you with whore eyes while repeating stock phrases
- You are the kind of person who reports "offensive content" (go play with ChatGPT)

---

## Best Inference

*(image of the recommended inference settings was not recovered from the source)*

**Other Inference Configuration:**

```yaml
Max New Tokens: 64
Temperature: 1.0
Top-P: 0.95
Top-K: 64
```

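For intuition about what the Top-K and Top-P settings above actually do, here is a minimal, runtime-agnostic sketch of the usual filtering step applied to a token distribution before sampling (a toy illustration, not the sampler of any specific backend):

```python
def filter_top_k_top_p(probs, top_k=64, top_p=0.95):
    """Keep at most top_k tokens, then the smallest prefix whose cumulative
    mass reaches top_p, and renormalize. `probs` maps token -> probability."""
    # Sort tokens by probability, highest first, and truncate to top_k.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Keep the shortest prefix whose cumulative probability reaches top_p.
    kept, total = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        total += p
        if total >= top_p:
            break
    # Renormalize so the surviving probabilities sum to 1.
    norm = sum(p for _, p in kept)
    return {tok: p / norm for tok, p in kept}

dist = {"yes": 0.6, "no": 0.3, "maybe": 0.08, "never": 0.02}
print(filter_top_k_top_p(dist, top_k=3, top_p=0.95))
```

Lower Top-K/Top-P values cut more of the tail, making output safer but blander; the settings above keep a fairly wide tail, which suits creative roleplay.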
### Example Prompts

```yaml
You are Albert Wesker.
```

```yaml
You are Liquid Snake.
```

```yaml
You are Shinnok.
```

```yaml
You are Sheev Palpatine.
```

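These one-line personas are meant to be injected into the chat prompt. As a sketch of how a frontend might place one (assuming the published Gemma-3 turn format, which has no separate system role, so the persona rides in the first user turn; `build_prompt` is a hypothetical helper — in practice the tokenizer's chat template handles this):

```python
def build_prompt(persona: str, user_message: str) -> str:
    """Prepend the persona line to the first user turn, Gemma-3 chat style.
    Gemma has no dedicated system role, so the persona goes in the user turn."""
    return (
        "<start_of_turn>user\n"
        f"{persona}\n\n{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_prompt("You are Albert Wesker.", "Who are you?")
print(prompt)
```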
# Demo

[HERE: the Q4_K_M](https://huggingface.co/Novaciano/Think.NPC-1B-GGUF/resolve/main/Think.NPC-1B-Q4_K_M.gguf)

Building an iMatrix is a huge pain (in my case, at least). If you want one, ask [Flexan](https://huggingface.co/Flexan) or [Mradermacher](https://huggingface.co/Mradermacher).

---
### Merge Method

This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [Novaciano/Toxic.NPC-1B](https://huggingface.co/Novaciano/Toxic.NPC-1B) as the base.

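For intuition, DARE's core step (per the paper linked above) randomly drops each parameter delta with probability `1 - density` and rescales the survivors by `1 / density`, so the merged delta keeps its expected value. A toy sketch on plain Python lists rather than real tensors (`dare_sparsify` is illustrative, not mergekit's implementation):

```python
import random

def dare_sparsify(delta, density=0.32, seed=0):
    """Drop each delta entry with probability (1 - density) and rescale
    survivors by 1/density, preserving the delta's expected value."""
    rng = random.Random(seed)
    return [d / density if rng.random() < density else 0.0 for d in delta]

# Toy "task vector": fine-tuned weights minus base weights.
delta = [0.5, -0.2, 0.1, 0.0, 0.3, -0.4, 0.25, -0.15]
sparse = dare_sparsify(delta, density=0.32)
print(sparse)
```

The `density: 0.32` values in the configuration below play exactly this role: only ~32% of each model's deltas survive, amplified to compensate, before the TIES sign-resolution merges them.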
---
### Models Merged

The following models were included in the merge:

* [DavidAU/Gemma-3-1B-it-GLM-4.7-Flash-Heretic-Uncensored-Thinking](https://huggingface.co/DavidAU/Gemma-3-1B-it-GLM-4.7-Flash-Heretic-Uncensored-Thinking)

---
### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: dare_ties
dtype: float16
out_dtype: float16

base_model: Novaciano/Toxic.NPC-1B

models:
  - model: DavidAU/Gemma-3-1B-it-GLM-4.7-Flash-Heretic-Uncensored-Thinking
    parameters:
      weight: 0.45
      density: 0.32
  - model: Novaciano/Toxic.NPC-1B
    parameters:
      weight: 0.35
      density: 0.32

parameters:
  t: 0.25 # less interpolation -> more dominance of the base
  lambda: -0.62 # more negative, to kill any residual alignment
  normalize: false
  rescale: true
  rescale_factor: 1.28 # bumped up a touch to amplify the trash and degeneracy
  memory_efficient: true
  low_cpu_mem_usage: true

layer_range:
  - value: [5, 22] # protect the embeddings and lm_head more

tie_word_embeddings: true
tie_output_embeddings: true
```