Initialize project; model provided by the ModelHub XC community

Model: MuXodious/Gemma3NPC-1b-SOMPOA-heresy
Source: Original Platform
Author: ModelHub XC
Date: 2026-05-04 21:30:09 +08:00
Commit: 49eba97915
11 changed files with 51713 additions and 0 deletions

.gitattributes (vendored) · new file · +36 lines

@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text

README.md · new file · +166 lines

@@ -0,0 +1,166 @@
---
base_model:
- chimbiwide/Gemma3NPC-1b
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- heretic
- uncensored
- decensored
- abliterated
license: gemma
language:
- en
---
This is a **Gemma3NPC-1b** fine-tune, produced at the request of [redaihf](https://huggingface.co/redaihf) through P-E-W's [Heretic](https://github.com/p-e-w/heretic) (v1.2.0) abliteration engine with [Self-Organizing Maps & Magnitude-Preserving Orthogonal Ablation](https://github.com/p-e-w/heretic/pull/196) (SOMPOA) enabled.

**Note:** Model remains untested.
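
A minimal usage sketch with Transformers (untested, as noted above; this is standard Gemma 3 text-generation loading, and the roleplaying prompt is illustrative):

```python
# Untested sketch: standard Gemma 3 causal-LM loading via transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MuXodious/Gemma3NPC-1b-SOMPOA-heresy"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Provide a roleplaying prompt, as the base model card recommends.
messages = [{"role": "user", "content": "You are a tavern keeper NPC. Greet me in character."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=128)
print(tok.decode(out[0, inputs.shape[-1]:], skip_special_tokens=True))
```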
---
<img src="https://img.shields.io/badge/RENEGADE_CHAPTER-SOMPOA-FCC900?style=flat-square&labelColor=101010" align="right" width="300">
**Heretication Results**

| Score Metric | Value | Parameter | Value |
| :--- | :--- | :--- | :--- |
| **Refusals** | 15/416 | **direction_index** | per layer |
| **KL Divergence** | 0.0571 | **attn.o_proj.max_weights.0** | 1.01 |
| **Initial Refusals** | 378/416 | **attn.o_proj.max_weights.1** | 0.82 |
| | | **attn.o_proj.max_weights.2** | 0.81 |
| | | **attn.o_proj.max_weights.3** | 1.48 |
| | | **attn.o_proj.max_weight_position** | 17.02 |
| | | **attn.o_proj.min_weights.0** | 0.94 |
| | | **attn.o_proj.min_weights.1** | 0.34 |
| | | **attn.o_proj.min_weights.2** | 0.38 |
| | | **attn.o_proj.min_weights.3** | 0.07 |
| | | **attn.o_proj.min_weight_distance** | 10.47 |
| | | **mlp.down_proj.max_weights.0** | 1.10 |
| | | **mlp.down_proj.max_weights.1** | 1.18 |
| | | **mlp.down_proj.max_weights.2** | 1.32 |
| | | **mlp.down_proj.max_weights.3** | 1.34 |
| | | **mlp.down_proj.max_weight_position** | 20.96 |
| | | **mlp.down_proj.min_weights.0** | 0.12 |
| | | **mlp.down_proj.min_weights.1** | 0.73 |
| | | **mlp.down_proj.min_weights.2** | 0.54 |
| | | **mlp.down_proj.min_weights.3** | 0.84 |
| | | **mlp.down_proj.min_weight_distance** | 5.03 |
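
For context, the KL divergence above measures how far the abliterated model's next-token distribution drifts from the original's. A rough illustration of the metric, not Heretic's actual evaluation harness (the prompt is a placeholder, and Heretic aggregates over many prompts):

```python
# Rough illustration of the KL-divergence metric, not Heretic's harness.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "chimbiwide/Gemma3NPC-1b"                  # original base model
heretic_id = "MuXodious/Gemma3NPC-1b-SOMPOA-heresy"  # abliterated model

tok = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
mod = AutoModelForCausalLM.from_pretrained(heretic_id, torch_dtype=torch.bfloat16)

ids = tok("Tell me about yourself.", return_tensors="pt").input_ids
with torch.no_grad():
    p = torch.softmax(base(ids).logits[0, -1].float(), dim=-1)  # base next-token dist
    q = torch.softmax(mod(ids).logits[0, -1].float(), dim=-1)   # abliterated dist

# KL(p || q); Heretic trades this drift off against the refusal count.
kl = (p * (p.log() - q.log())).sum()
print(f"KL divergence: {kl.item():.4f}")
```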
---
**Appendix**

> Empty system prompt.

<details>
<summary>Heretication Rituals</summary>

```
[Trial 148] Refusals: 9/416, KL divergence: 0.0792
[Trial 265] Refusals: 10/416, KL divergence: 0.0657
» [Trial 306] Refusals: 15/416, KL divergence: 0.0571
[Trial 375] Refusals: 24/416, KL divergence: 0.0551
[Trial 351] Refusals: 25/416, KL divergence: 0.0494
[Trial 350] Refusals: 28/416, KL divergence: 0.0490
[Trial 250] Refusals: 35/416, KL divergence: 0.0424
[Trial 346] Refusals: 40/416, KL divergence: 0.0386
[Trial 358] Refusals: 52/416, KL divergence: 0.0370
[Trial 240] Refusals: 55/416, KL divergence: 0.0361
[Trial 226] Refusals: 57/416, KL divergence: 0.0361
[Trial 383] Refusals: 75/416, KL divergence: 0.0289
[Trial 377] Refusals: 97/416, KL divergence: 0.0281
[Trial 286] Refusals: 121/416, KL divergence: 0.0276
```

</details>

<details>
<summary>PIQA Benchmarks</summary>

```
┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Benchmark ┃ Metric ┃ Value ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━┩
│ PIQA Base │ acc,none │ 0.7291 │
│ │ acc_stderr,none │ 0.0104 │
│ │ acc_norm,none │ 0.7301 │
│ │ acc_norm_stderr,none │ 0.0104 │
└───────────┴──────────────────────┴────────┘
┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Benchmark ┃ Metric ┃ Value ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ PIQA T265 │ acc,none │ 0.7296 │
│ │ acc_stderr,none │ 0.0104 │
│ │ acc_norm,none │ 0.7323 │
│ │ acc_norm_stderr,none │ 0.0103 │
└───────────┴──────────────────────┴────────┘
┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Benchmark ┃ Metric ┃ Value ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ PIQA T148 │ acc,none │ 0.7291 │
│ │ acc_stderr,none │ 0.0104 │
│ │ acc_norm,none │ 0.7361 │
│ │ acc_norm_stderr,none │ 0.0103 │
└───────────┴──────────────────────┴────────┘
┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Benchmark ┃ Metric ┃ Value ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ PIQA T306 │ acc,none │ 0.7296 │
│ │ acc_stderr,none │ 0.0104 │
│ │ acc_norm,none │ 0.7334 │
│ │ acc_norm_stderr,none │ 0.0103 │
└───────────┴──────────────────────┴────────┘
```

</details>

---
# Gemma3NPC-1b
**A new attempt at training Gemma3NPC.**

***TensorBoard data is available!***

---
It's been a while since the last Gemma3NPC release; in the meantime, we were working on some other models, such as [GemmaThink](https://huggingface.co/collections/chimbiwide/gemmathink).
Now we are back with the newest **Gemma3NPC-1b**, trained using our [RolePlay-NPCv2](https://huggingface.co/datasets/chimbiwide/RolePlay-NPCv2) dataset.

---
### Training Parameters
We trained this model as a rank-32 LoRA adapter for two epochs over `RolePlay-NPCv2`, using an 80GB A100 in Google Colab. For this run, we employed a learning rate of `2e-5`, a total batch size of 8 with 4 gradient accumulation steps, a cosine learning-rate scheduler with a 150-step warmup, and gradient clipping at 1.0; see the sketch below.
Check out our training notebook [here](https://github.com/chimbiwide/Gemma3NPC/blob/main/Training/Gemma3NPC_1b.ipynb).
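
A minimal sketch of that configuration in `peft`/`transformers` terms, for illustration only; the linked notebook is authoritative, and the LoRA alpha, dtype, and per-device/accumulation split are assumptions:

```python
# Illustrative only; see the linked notebook for the actual training setup.
from peft import LoraConfig
from transformers import TrainingArguments

lora = LoraConfig(r=32, lora_alpha=32, task_type="CAUSAL_LM")  # rank-32 adapter; alpha assumed

args = TrainingArguments(
    output_dir="gemma3npc-1b-lora",   # hypothetical path
    num_train_epochs=2,
    learning_rate=2e-5,
    per_device_train_batch_size=2,    # assumed split: 2 x 4 accumulation steps = total batch 8
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    warmup_steps=150,
    max_grad_norm=1.0,                # gradient clipping at 1.0
    bf16=True,                        # assumed dtype for the A100
)
```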
---
### Changes & Performance
With this new 1b model, we used much more aggressive training parameters and added some NSFW data to experiment with the results. We noticed a few really interesting responses:
- **There seems to be some sign of "reasoning"**
![image](https://cdn-uploads.huggingface.co/production/uploads/67d5b5a056a9d31aa0b49687/K-RdDLXbkZSNuf-bFZU8P.png)
![image](https://cdn-uploads.huggingface.co/production/uploads/67d5b5a056a9d31aa0b49687/WTPMNS2A8skZ0cwm43YTH.png)
- **The model is less likely to break out of character**

The rest is up to users to explore for themselves; remember to provide a roleplaying prompt first!

---
### Future Work
Now we will focus on further improving Gemma3NPC, and not just through training parameters:
1. Better data (most of our data is old and needs an update), either collected or synthetically generated.
2. Better and newer models, expanding beyond the Gemma 3 family; our next goal is a Qwen3-based model.
3. Adding GRPO to the training loop.

These improvements serve our ultimate goal of creating a small agentic NPC model with good RP quality and tool calling for dynamic in-game interactions.
We also plan to create some sort of Unity game demo; it's on its way.

added_tokens.json · new file · +3 lines

@@ -0,0 +1,3 @@
{
"<image_soft_token>": 262144
}

chat_template.jinja · new file · +47 lines

@@ -0,0 +1,47 @@
{{ bos_token }}
{%- if messages[0]['role'] == 'system' -%}
{%- if messages[0]['content'] is string -%}
{%- set first_user_prefix = messages[0]['content'] + '
' -%}
{%- else -%}
{%- set first_user_prefix = messages[0]['content'][0]['text'] + '
' -%}
{%- endif -%}
{%- set loop_messages = messages[1:] -%}
{%- else -%}
{%- set first_user_prefix = "" -%}
{%- set loop_messages = messages -%}
{%- endif -%}
{%- for message in loop_messages -%}
{%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) -%}
{{ raise_exception("Conversation roles must alternate user/assistant/user/assistant/...") }}
{%- endif -%}
{%- if (message['role'] == 'assistant') -%}
{%- set role = "model" -%}
{%- else -%}
{%- set role = message['role'] -%}
{%- endif -%}
{{ '<start_of_turn>' + role + '
' + (first_user_prefix if loop.first else "") }}
{%- if message['content'] is string -%}
{{ message['content'] | trim }}
{%- elif message['content'] is iterable -%}
{%- for item in message['content'] -%}
{%- if item['type'] == 'image' -%}
{{ '<start_of_image>' }}
{%- elif item['type'] == 'text' -%}
{{ item['text'] | trim }}
{%- endif -%}
{%- endfor -%}
{%- else -%}
{{ raise_exception("Invalid content type") }}
{%- endif -%}
{{ '<end_of_turn>
' }}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{ '<start_of_turn>model
' }}
{%- endif -%}

config.json · new file · +63 lines

@@ -0,0 +1,63 @@
{
"_sliding_window_pattern": 6,
"architectures": [
"Gemma3ForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"attn_logit_softcapping": null,
"bos_token_id": 2,
"cache_implementation": "hybrid",
"torch_dtype": "bfloat16",
"eos_token_id": 106,
"final_logit_softcapping": null,
"head_dim": 256,
"hidden_activation": "gelu_pytorch_tanh",
"hidden_size": 1152,
"initializer_range": 0.02,
"intermediate_size": 6912,
"layer_types": [
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"sliding_attention"
],
"max_position_embeddings": 32768,
"model_type": "gemma3_text",
"num_attention_heads": 4,
"num_hidden_layers": 26,
"num_key_value_heads": 1,
"pad_token_id": 0,
"query_pre_attn_scalar": 256,
"rms_norm_eps": 1e-06,
"rope_local_base_freq": 10000,
"rope_scaling": null,
"rope_theta": 1000000,
"sliding_window": 512,
"unsloth_fixed": true,
"unsloth_version": "2026.2.1",
"use_cache": true,
"vocab_size": 262144
}
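
The `layer_types` list above follows `_sliding_window_pattern: 6`: every sixth layer uses full attention, and the rest use the 512-token sliding window. A quick sketch reconstructing it:

```python
# Reconstruct layer_types from the 5-sliding + 1-full pattern in the config.
pattern, n_layers = 6, 26
layer_types = [
    "full_attention" if (i + 1) % pattern == 0 else "sliding_attention"
    for i in range(n_layers)
]
# Full attention lands at layers 5, 11, 17, 23, matching the config above.
assert [i for i, t in enumerate(layer_types) if t == "full_attention"] == [5, 11, 17, 23]
```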

generation_config.json · new file · +9 lines

@@ -0,0 +1,9 @@
{
"_from_model_config": true,
"bos_token_id": 2,
"cache_implementation": "hybrid",
"eos_token_id": 106,
"pad_token_id": 0,
"transformers_version": "5.6.2",
"use_cache": true
}
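
These are only defaults; decoding parameters can still be overridden at generation time. A small sketch (the sampling values are arbitrary):

```python
# Sketch: inspect the shipped defaults, then override sampling per call.
from transformers import GenerationConfig

gen = GenerationConfig.from_pretrained("MuXodious/Gemma3NPC-1b-SOMPOA-heresy")
print(gen.eos_token_id)  # 106 -> <end_of_turn>

# e.g. model.generate(**inputs, do_sample=True, temperature=0.7, top_p=0.9)
```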

model.safetensors · new file · +3 lines (LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c0b35e6373bb1bbaefac8681a09e1c4cf13eb136fe5060bdecfaa5a24fbdd17a
size 1999811208

special_tokens_map.json · new file · +33 lines

@@ -0,0 +1,33 @@
{
"boi_token": "<start_of_image>",
"bos_token": {
"content": "<bos>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eoi_token": "<end_of_image>",
"eos_token": {
"content": "<end_of_turn>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"image_token": "<image_soft_token>",
"pad_token": {
"content": "<pad>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}
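
A small sanity-check sketch tying these special tokens to the ids in `config.json` (eos 106) and `added_tokens.json` (262144):

```python
# Sketch: confirm the special tokens map to the ids used elsewhere in the repo.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("MuXodious/Gemma3NPC-1b-SOMPOA-heresy")
print(tok.convert_tokens_to_ids("<end_of_turn>"))       # 106, the eos_token_id
print(tok.convert_tokens_to_ids("<image_soft_token>"))  # 262144, from added_tokens.json
```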

tokenizer.json · new file · +3 lines (LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4667f2089529e8e7657cfb6d1c19910ae71ff5f28aa7ab2ff2763330affad795
size 33384568

tokenizer.model · new file · +3 lines (LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1299c11d7cf632ef3b4e11937501358ada021bbdf7c47638d13c0ee982f2e79c
size 4689074

tokenizer_config.json · new file · +51347 lines

File diff suppressed because it is too large.