---
base_model:
- p-e-w/gemma-3-12b-it-heretic-v2
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- roleplay
- sillytavern
- characters
- gguf
license: apache-2.0
language:
- en
---
# Gemma-3-12B-Character-Creator-V2 - GGUF Quants
GGUF quantizations of [SufficientPrune3897/Gemma-3-12B-Character-Creator-V2](https://huggingface.co/SufficientPrune3897/Gemma-3-12B-Character-Creator-V2).
This is a model made to create characters for use in SillyTavern, cai, jai, and other such roleplay frontends. The resulting characters should be around 2k tokens and follow a prebaked structure.
Versions:

- 8B, Llama 3.3 based, and [GGUFs](https://huggingface.co/SufficientPrune3897/Llama-3.3-8B-Character-Creator-V2-GGUF)
- 12B, Gemma 3 based (this one), and GGUFs
- 24B, Mistral Small 3.2 based, and GGUFs
- (maybe) 27B, Gemma 3 based, and GGUFs
## How to use it:

- Simply tell the model what you want your character to be.
- It should know many popular franchises; the bigger the model, the more it knows.
- Fully uncensored.
- Asking for a different structure than the one the model uses might significantly reduce result quality.
- While follow-up questions are supported, you will often get better results by adjusting your original prompt.
- Supports asking for image prompts of the character, requesting changes, and writing an intro.
## Changes [from V1](https://huggingface.co/SufficientPrune3897/gemma-3-27b-Character-Creator)

- No longer supports groups and scenarios
- Characters should be much better
- It actually follows a structure and doesn't start making things up after ~1k tokens
## Available Quants

| Filename | Quant | Size | Description |
|----------|-------|------|-------------|
| `Gemma-3-12B-Character-Creator-V2-Q8_0.gguf` | Q8_0 | 12GB | Maximum quality, near-lossless |
| `Gemma-3-12B-Character-Creator-V2-Q5_K_M.gguf` | Q5_K_M | 7.9GB | High quality, recommended |
| `Gemma-3-12B-Character-Creator-V2-Q4_K_M.gguf` | Q4_K_M | 6.9GB | Good quality, good balance |
| `Gemma-3-12B-Character-Creator-V2-IQ4_NL.gguf` | IQ4_NL | 6.5GB | Good quality, slightly smaller than Q4_K_M |
| `Gemma-3-12B-Character-Creator-V2-IQ3_M.gguf` | IQ3_M | 5.3GB | Smaller, some quality loss |
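
As a rough sketch of how one of these files might be run locally with llama.cpp's `llama-cli` (paths, the chosen quant, and the example prompt are placeholders; adjust to your setup):

```shell
# Run the Q4_K_M quant interactively; -n caps generation at roughly
# the ~2k tokens a finished character card is expected to take.
llama-cli \
  -m ./Gemma-3-12B-Character-Creator-V2-Q4_K_M.gguf \
  -p "Create a character: a retired starship mechanic who talks to her tools." \
  -n 2048
```

Frontends such as SillyTavern can instead load the GGUF through a llama.cpp server backend; the same quant files apply.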
## V3 and beyond:

The next version will either reintroduce scenarios and groups or add reasoning, probably both. Perhaps even lorebooks, although I'm still unsure how to execute that. After that, I will probably make my own real roleplay finetune.
If anybody wants support for their native language, just ask me and tell me which model performs best for it.
I am very much open to feedback. A single comment can easily change how I approach the next version.
---
- **Developed by:** SufficientPrune3897
- **License:** apache-2.0
- **Finetuned from model:** p-e-w/gemma-3-12b-it-heretic-v2
- **Quantized with:** [llama.cpp](https://github.com/ggml-org/llama.cpp)
This Gemma 3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)