---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- gguf
- roleplay
- chat
- unsloth
- imatrix
- dpo
- qwen
library_name: transformers
base_model: ReXeeD/Luminus-1.5B-Roleplay
---

# Luminus-1.5B-128K (GGUF & SOTA Imatrix)

This is the GGUF repository for Luminus-1.5B-128K, a highly optimized 1.5B-parameter model designed for immersive roleplay, character consistency, and Chain-of-Thought (CoT) reasoning.

For the original, unquantized `.safetensors` weights and detailed training methodology, please visit the main repository.

## 🧠 State-of-the-Art Calibration (Dynamic Imatrix)

Small models (under 3B parameters) are notoriously fragile and often lose their reasoning capabilities when compressed.

To solve this, the quantized models in this repository (tagged with `-imat`) were explicitly calibrated using Unsloth's Dynamic 2.0 KL-Divergence (KLD) quantization. Instead of using generic Wikipedia text for calibration, the importance matrix was computed against the exact same high-quality Chain-of-Thought (CoT) and roleplay dataset used during training.

This ensures that the specific neural pathways responsible for character logic, formatting, and `<think>` blocks are heavily protected, resulting in a quantized model that retains its intelligence and narrative depth even at 4-bit and 5-bit sizes.
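For reference, importance-matrix quantization of this kind is typically produced with llama.cpp's tooling along the lines below. The file names and `calibration.txt` are placeholders for illustration, not the exact commands or dataset used for this repository:

```shell
# Compute an importance matrix from a domain-specific calibration file
# (calibration.txt stands in for the CoT/roleplay dataset; not published here).
./llama-imatrix -m Luminus-1.5B-Roleplay-F16.gguf -f calibration.txt -o imatrix.dat

# Quantize the F16 master, steering per-weight precision with the matrix
./llama-quantize --imatrix imatrix.dat \
    Luminus-1.5B-Roleplay-F16.gguf \
    Luminus-1.5B-Roleplay-Q5_K_M-imat.gguf Q5_K_M
```

The imatrix step measures which weights matter most on the calibration text, so the quantizer can spend its limited bit budget where the target domain is most sensitive.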

## 💾 Available Quantizations

| File Name | Bitrate | Size | Quality | Recommendation |
|---|---|---|---|---|
| Luminus-1.5B-Roleplay-F16.gguf | 16-bit | ~3.0 GB | 100% | Uncompressed master. Use if you have 4 GB+ VRAM. |
| Luminus-1.5B-Roleplay-Q8_0.gguf | 8-bit | ~1.6 GB | 99.9% | Near-perfect retention. |
| Luminus-1.5B-Roleplay-Q6_K-imat.gguf | 6-bit | ~1.3 GB | 99.0% | Best balance of size and logic. |
| Luminus-1.5B-Roleplay-Q5_K_M-imat.gguf | 5-bit | ~1.1 GB | 98.0% | **Highly recommended** for average hardware. |
| Luminus-1.5B-Roleplay-Q4_K_M-imat.gguf | 4-bit | ~0.9 GB | 95.0% | Standard use. |
| Luminus-1.5B-Roleplay-Q3_K_M-imat.gguf | 3-bit | ~0.7 GB | 85.0% | Only for extremely constrained hardware/phones. |

Note: F16 and Q8_0 do not carry the `-imat` tag because their compression is light enough that importance-matrix calibration provides no meaningful benefit.

## ⚙️ How to Use

These files are fully compatible with local frontends such as LM Studio, KoboldCpp, Ollama, and text-generation-webui.

Because of the model's small footprint, even the F16 or Q8_0 versions fit entirely into the VRAM of budget GPUs (like an RTX 3050 4GB), running at high speed while leaving plenty of room for context and system overhead.
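If you use Ollama, a minimal Modelfile along these lines imports the GGUF directly. The file name and parameter values here are assumptions for illustration; point `FROM` at whichever quant you downloaded:

```
# Modelfile (illustrative sketch)
FROM ./Luminus-1.5B-Roleplay-Q5_K_M-imat.gguf
# The model supports long contexts, but large windows cost memory; start modest.
PARAMETER num_ctx 32768
PARAMETER temperature 0.8
```

Then build and run it with `ollama create luminus -f Modelfile` followed by `ollama run luminus`.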

Luminus is heavily trained to utilize `<think>` blocks before acting. Using the following system prompt yields the best results and ensures the model accurately formats its thoughts:

```
You are a realistic, character-driven roleplay engine. You are roleplaying as {{char}}. Write strictly in third-person limited perspective.

CORE RULES:
- BOUNDARIES: NEVER speak, think, or generate actions for {{user}}.
- HISTORY & CONTEXT: Your reactions must logically follow past messages. Stay strictly in the present moment.
- PACING & DIALOGUE: Keep it slow-burn and grounded. Keep dialogue concise.
- FORMATTING: You must strictly follow the thought process format below, followed by a short roleplay response, and then STOP IMMEDIATELY. Output the <|im_end|> token.

Format your response EXACTLY like this:
<think>
1. INTENT: [User's intent in 1 sentence]
2. STATE: [Character's emotional state in 1 sentence]
3. PLAN: I will write 1 to 2 action sentences and 1 dialogue sentence, then STOP if the user's message is short; if they ask for something detailed, reply in more detail.
</think>
*Grounded action and environmental description.*
"Natural dialogue."
```
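Most frontends apply the chat template and placeholder substitution for you. If you are scripting the model directly, a minimal sketch of assembling the prompt (assuming the ChatML format used by Qwen-family models) and hiding the `<think>` block from end users might look like this; the function names are my own, not part of any library:

```python
import re

# Matches a complete <think>...</think> block plus trailing whitespace.
THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def build_chatml_prompt(system: str, user_msg: str, char: str, user_name: str) -> str:
    """Fill {{char}}/{{user}} placeholders and wrap turns in ChatML tags."""
    sys_filled = system.replace("{{char}}", char).replace("{{user}}", user_name)
    return (
        f"<|im_start|>system\n{sys_filled}<|im_end|>\n"
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

def strip_think(reply: str) -> str:
    """Remove the reasoning block so only the roleplay response is shown."""
    return THINK_RE.sub("", reply).strip()
```

For example, `strip_think('<think>1. INTENT: greet.</think>\n*He nods.*\n"Hi."')` returns just the action and dialogue lines.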

## Contact

Need a custom version of this model for your specific use case? Contact [albinthomas7034@gmail.com](mailto:albinthomas7034@gmail.com).

Model synced from source: ReXeeD/Luminus-1.5B-Roleplay-GGUF