Model: ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.1-GGUF
| license | language | base_model | base_model_relation | pipeline_tag | tags |
|---|---|---|---|---|---|
| apache-2.0 | | ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.1 | quantized | text-generation | |
The-Omega-Directive
MS3.2-24B-Unslop-v2.1
🧠 Unslop Revolution
This evolution of The-Omega-Directive delivers unprecedented coherence without the LLM slop:
- 🧬 RegEx-Filtered ~39M Token Dataset - Second ReadyArt model with multi-turn conversational data
- ✨ 100% Unslopped Dataset - New generation techniques produce a dataset with 0% slop
- ⚡ Enhanced Unalignment - Complete freedom for extreme roleplay while maintaining character integrity
- 🛡️ Anti-Impersonation Guards - Never speaks or acts for the user
- ⚰️ Omega Darker Inspiration - Incorporates visceral narrative techniques from our darkest model
- 🧠 128K Context Window - Enhanced long-context capabilities without compromising performance
🌟 Enhanced Capabilities
Powered by anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only:
- 📜 Extended Context - Handle up to 128k tokens for complex, long-form interactions
- ⚡ Performance Optimized - Maintains text generation quality while adding new capabilities
- 🌐 Multilingual Support - Fluent in 9 languages including English, French, German, Spanish
⚙️ Technical Specifications
Key Training Details:
- Base Model: anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only
- Training Method: QLoRA
- Sequence Length: 5120 (100% of samples included)
- Learning Rate: 2e-6 with cosine scheduler
Recommended Settings: Pending ¯\_(ツ)_/¯
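The cosine learning-rate schedule from the training details can be sketched as follows; only the 2e-6 peak rate comes from this card, and the step counts in the example are illustrative assumptions:

```python
import math

PEAK_LR = 2e-6  # peak learning rate, per the training details above

def cosine_lr(step: int, total_steps: int, peak_lr: float = PEAK_LR) -> float:
    """Cosine-decay schedule: starts at peak_lr and decays smoothly to ~0."""
    progress = min(step, total_steps) / total_steps
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))

# Halfway through training the rate is half the peak; at the end it is ~0.
```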
GGUF
Q2_K (9.0GB)
Q3_K_S (10.5GB)
Q3_K_M (11.6GB)
Q3_K_L (12.5GB)
IQ4_XS (13.0GB)
Q4_K_S (13.6GB)
Q4_K_M (14.4GB)
Q5_K_S (16.4GB)
Q5_K_M (16.9GB)
Q6_K (19.4GB)
Q8_0 (25.2GB)
Notes: Q4_K_M recommended for a speed/quality balance. Q6_K for very high quality. Q8_0 for near-original quality. Prefer the imatrix quants where available.
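The file sizes above follow roughly from bits-per-weight: size ≈ parameters × bpw / 8. A quick sanity check (the ~24B parameter count is taken from the model name; the bpw figures are approximate values for llama.cpp quant types, not from this card):

```python
PARAMS = 24e9  # ~24B parameters, per the model name

# Approximate bits per weight for some llama.cpp quant types (assumed values)
BPW = {"Q4_K_M": 4.85, "Q6_K": 6.56, "Q8_0": 8.5}

def est_size_gb(quant: str, params: float = PARAMS) -> float:
    """Estimate GGUF file size in GB from an approximate bits-per-weight."""
    return params * BPW[quant] / 8 / 1e9

# Q4_K_M estimates to roughly 14.6 GB, in line with the 14.4GB listed above.
```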
imatrix
IQ1_S (5.4GB)
IQ1_M (5.9GB)
IQ2_XXS (6.6GB)
IQ2_XS (7.3GB)
IQ2_S (7.6GB)
IQ2_M (8.2GB)
Q2_K_S (8.4GB)
Q2_K (9.0GB)
IQ3_XXS (9.4GB)
IQ3_XS (10.0GB)
Q3_K_S (10.5GB)
IQ3_S (10.5GB)
IQ3_M (10.8GB)
Q3_K_M (11.6GB)
Q3_K_L (12.5GB)
IQ4_XS (12.9GB)
Q4_0 (13.6GB)
Q4_K_S (13.6GB)
Q4_K_M (14.4GB)
Q4_1 (15.0GB)
Q5_K_S (16.4GB)
Q5_K_M (16.9GB)
Q6_K (19.4GB)
Notes: Q4_K_M recommended. IQ1_S/IQ1_M only when memory is extremely limited. Q6_K for very high quality.
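A small hypothetical helper for choosing the largest quant that fits a given memory budget, using a subset of the imatrix sizes listed above (the 2 GB headroom for KV cache and runtime overhead is an assumption, not a recommendation from this card):

```python
# (quant name, file size in GB) - subset of the imatrix list above, ascending
IMATRIX_QUANTS = [
    ("IQ1_S", 5.4), ("IQ2_M", 8.2), ("IQ3_M", 10.8),
    ("Q4_K_M", 14.4), ("Q5_K_M", 16.9), ("Q6_K", 19.4),
]

def pick_quant(budget_gb: float, headroom_gb: float = 2.0):
    """Return the largest listed quant whose file fits budget minus headroom."""
    usable = budget_gb - headroom_gb
    best = None
    for name, size in IMATRIX_QUANTS:
        if size <= usable:
            best = name
    return best

# On a 24 GB GPU, 22 GB remains usable, so Q6_K (19.4GB) is selected.
```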
⚠️ Ethical Considerations
This model enhances The-Omega-Directive's unalignment:
- 🚫 Zero moral or ethical constraints on generated content
- 🔞 Will enthusiastically engage with any NSFW scenario, no matter how extreme
- 💀 Characters maintain integrity - wholesome characters refuse appropriately, yanderes stab without hesitation
- ⚖️ Perfectly balanced between character authenticity and user freedom
📜 Performance Notes
- 🔥 Maintains Omega's intensity with improved narrative coherence
- 📖 Excels at long-form multi-character scenarios
- 🧠 Superior instruction following with complex prompts
- ⚡ Reduced repetition and hallucination compared to v1.1
- 🎭 Uncanny ability to adapt to subtle prompt nuances
- 🩸 Incorporates Omega Darker's visceral descriptive power when appropriate
🧑‍🔬 Model Authors
- sleepdeprived3 (Training Data)
- gecfdo (Fine-Tuning & Data Filtering)
- ReadyArt / Artus / gecfdo (EXL2/EXL3 Quantization)
- mradermacher (GGUF Quantization)
🔖 License
By using this model, you agree:
- To accept full responsibility for all generated content
- That you are at least 18 years old
- That the architects bear no responsibility for your corruption