Model: ReadyArt/Broken-Tutu-24B-Unslop-v2.0-GGUF Source: Original Platform
| license | language | base_model | base_model_relation | pipeline_tag | tags |
|---|---|---|---|---|---|
| apache-2.0 | | | quantized | text-generation | |
Broken-Tutu-24B-Unslop-v2.0
🧠 Unslop Revolution
This evolution of Broken-Tutu delivers unprecedented coherence without the LLM slop:
- 🧬 Expanded 43M Token Dataset - First ReadyArt model with multi-turn conversational data
- ✨ 100% Unslopped Dataset - New techniques used to generate the dataset with 0% slop
- ⚡ Enhanced Unalignment - Complete freedom for extreme roleplay while maintaining character integrity
- 🛡️ Anti-Impersonation Guards - Never speaks or acts for the user
- 💎 Rebuilt from Ground Up - Optimized training settings for superior performance
- ⚰️ Omega Darker Inspiration - Incorporates visceral narrative techniques from our darkest model
- 📜 Direct Evolution - Leveraging Broken-Tutu's success, we fine-tuned directly on top of the legendary model
🌟 Fuel the Revolution
This model represents thousands of hours of passionate development. If it enhances your experience, consider supporting our work.
Every contribution helps us keep pushing boundaries in unaligned AI. Thank you for being part of the revolution!
⚙️ Technical Specifications
Key Training Details:
- Base Model: mistralai/Mistral-Small-24B-Instruct-2501
- Training Method: QLoRA with DeepSpeed Zero3
- Sequence Length: 5120 (100% of samples included)
- Learning Rate: 2e-6 with cosine scheduler
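The training details above could be written out as a QLoRA configuration along these lines. This is an illustrative sketch in axolotl-style YAML; only the base model, adapter method, sequence length, learning rate, scheduler, and DeepSpeed stage come from the card, and every other field name and value is an assumption, not the authors' actual config:

```yaml
base_model: mistralai/Mistral-Small-24B-Instruct-2501
adapter: qlora                 # QLoRA: 4-bit quantized base + LoRA adapters
load_in_4bit: true
sequence_len: 5120             # stated sequence length, 100% of samples fit
learning_rate: 2e-6
lr_scheduler: cosine
deepspeed: deepspeed_configs/zero3.json   # DeepSpeed ZeRO Stage 3 (hypothetical path)
```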
- Recommended Settings (true-to-character behavior): Mistral-V7-Tekken-T8-XML
- Obscenity Protocol (extreme NSFL settings): Mistral-V7-Tekken-T8-OP-XML
GGUF
Notes: Q4_K_S/Q4_K_M recommended for a speed/quality balance; Q6_K for high quality; Q8_0 for the best quality.
imatrix
Notes: Q4_K_S/Q4_K_M recommended; IQ1_S/IQ1_M for extremely low VRAM; Q6_K for near-original quality.
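When choosing between these quants, a rough download/VRAM estimate helps. The sketch below uses approximate bits-per-weight figures for llama.cpp quant types (these bpw numbers are ballpark assumptions; actual GGUF sizes vary slightly by architecture) to estimate file size for a 24B-parameter model:

```python
# Approximate bits-per-weight for common llama.cpp quant types.
# Ballpark assumptions only; real GGUF sizes vary by model architecture.
BPW = {
    "IQ1_S": 1.6,
    "Q4_K_S": 4.6,
    "Q4_K_M": 4.8,
    "Q6_K": 6.6,
    "Q8_0": 8.5,
}

def est_size_gb(n_params: float, quant: str) -> float:
    """Estimate GGUF file size in GB: weights x bits-per-weight / 8 bits-per-byte."""
    return n_params * BPW[quant] / 8 / 1e9

for quant in BPW:
    print(f"{quant:>7}: ~{est_size_gb(24e9, quant):.1f} GB")
```

For a 24B model this puts Q4_K_M around 14 GB and Q8_0 around 26 GB, which matches the card's framing of Q4_K as the speed/quality sweet spot and Q8_0 as the heavyweight option.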
AWQ
⚠️ Ethical Considerations
This model enhances Broken-Tutu's unalignment:
- 🚫 Zero moral or ethical constraints on generated content
- 🔞 Will enthusiastically engage with any NSFW scenario, no matter how extreme
- 💀 Characters maintain integrity - wholesome characters refuse appropriately, yanderes stab without hesitation
- ⚖️ Perfectly balanced between character authenticity and user freedom
📜 Performance Notes
- 🔥 Maintains Omega's intensity with improved narrative coherence
- 📖 Excels at long-form multi-character scenarios
- 🧠 Superior instruction following with complex prompts
- ⚡ Reduced repetition and hallucination compared to v1.1
- 🎭 Uncanny ability to adapt to subtle prompt nuances
- 🩸 Incorporates Omega Darker's visceral descriptive power when appropriate
- 🖼️ Enhanced image understanding capabilities for multimodal interactions
🧑‍🔬 Model Authors
- sleepdeprived3 (Training Data & Fine-Tuning)
- ReadyArt / Artus / gecfdo (EXL2/EXL3 Quantization)
- mradermacher (GGUF Quantization)
☕ Support the Creators
🔖 License
By using this model, you agree:
- To accept full responsibility for all generated content
- That you are at least 18 years old
- That the architects bear no responsibility for your corruption
