ModelHub XC 65e48e28ea initial project commit; model provided by the ModelHub XC community
Model: Steelskull/L3-Aethora-15B
Source: Original Platform
2026-04-22 11:30:59 +08:00

---
library_name: transformers
tags:
- llama-factory
- llama3
datasets:
- TheSkullery/Aether-Lite-V1.2
---

L3-Aethora-15B

The Skullery Presents L3-Aethora-15B.

Creator: Steelskull

Dataset: Aether-Lite-V1.2

Trained: 4 x A100 for 15 hours using rsLoRA and DoRA

About L3-Aethora-15B:

L3 = Llama 3

L3-Aethora-15B was created using the abliteration method to adjust model responses: the model's refusal behavior is inhibited, yielding more compliant and facilitative dialogue. It then underwent a modified DUS (Depth Up-Scaling) merge (a technique originally used by @Elinas), using a passthrough merge to create a 15B model, with specific tensors ('o_proj' and 'down_proj' in the duplicated layers) zeroed to improve efficiency and reduce perplexity. This produced AbL3In-15b.
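A passthrough depth-up-scale of this kind is typically expressed as a mergekit config. The sketch below is illustrative only: the model name is a placeholder and the layer ranges are assumptions, not the exact recipe used; the point is the zeroing of 'o_proj' and 'down_proj' in the duplicated slice via scale filters.

```yaml
# Illustrative mergekit passthrough config (NOT the exact recipe used here).
# The middle layers are duplicated, and in the duplicated copy the o_proj
# and down_proj tensors are scaled to zero, as described above.
slices:
  - sources:
      - model: abliterated-llama3-8b   # placeholder model ID
        layer_range: [0, 24]
  - sources:
      - model: abliterated-llama3-8b   # placeholder model ID
        layer_range: [8, 24]           # assumed duplicated range
        parameters:
          scale:
            - filter: o_proj
              value: 0.0
            - filter: down_proj
              value: 0.0
  - sources:
      - model: abliterated-llama3-8b   # placeholder model ID
        layer_range: [24, 32]
merge_method: passthrough
dtype: bfloat16
```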

AbL3In-15b was then trained for 4 epochs with the rsLoRA and DoRA training methods on the Aether-Lite-V1.2 dataset, which contains ~82,000 high-quality samples designed to strike a balance between creativity and intelligence (roughly a 60/40 split) while filtering out slop.
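For context on the training methods named above: rsLoRA (rank-stabilized LoRA) differs from standard LoRA only in how the low-rank update is scaled, using α/√r instead of α/r so the update magnitude stays stable as rank grows; DoRA additionally decomposes weights into magnitude and direction components (not shown here). A minimal sketch of the scaling rule, with hypothetical hyperparameter values, not the ones used for this model:

```python
import math

def lora_scaling(alpha: float, r: int, rank_stabilized: bool = False) -> float:
    """Scaling factor applied to the low-rank LoRA update BA.

    Standard LoRA uses alpha / r, which shrinks the update as rank grows;
    rank-stabilized LoRA (rsLoRA) uses alpha / sqrt(r), keeping the update
    magnitude stable across ranks.
    """
    return alpha / math.sqrt(r) if rank_stabilized else alpha / r

# Hypothetical values: with alpha = 32 at rank 64, plain LoRA halves the
# update, while rsLoRA scales it by 32 / sqrt(64) = 4.
print(lora_scaling(32, 64))        # -> 0.5
print(lora_scaling(32, 64, True))  # -> 4.0
```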

This model is trained on the L3 prompt format.
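The L3 (Llama 3) prompt format wraps each turn in header tokens and terminates it with `<|eot_id|>`. A minimal single-turn helper is sketched below; in practice, `tokenizer.apply_chat_template` produces this string for you:

```python
def format_l3_prompt(system: str, user: str) -> str:
    """Build a single-turn prompt in the Llama 3 instruct format."""
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# The trailing assistant header leaves the model positioned to generate.
prompt = format_l3_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```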

Quants:

  • Mradermacher/L3-Aethora-15B-GGUF
  • Mradermacher/L3-Aethora-15B-i1-GGUF
  • NikolayKozloff/L3-Aethora-15B-GGUF
Dataset Summary (Filtered):

  Filtered phrases: GPT slop, Claude-isms

  • mrfakename/Pure-Dove-ShareGPT: Processed 3707, Removed 150
  • mrfakename/Capybara-ShareGPT: Processed 13412, Removed 2594
  • jondurbin/airoboros-3.2: Processed 54517, Removed 4192
  • PJMixers/grimulkan_theory-of-mind-ShareGPT: Processed 533, Removed 6
  • grimulkan/PIPPA-augmented-dedup: Processed 869, Removed 46
  • grimulkan/LimaRP-augmented: Processed 790, Removed 14
  • PJMixers/grimulkan_physical-reasoning-ShareGPT: Processed 895, Removed 4
  • MinervaAI/Aesir-Preview: Processed 994, Removed 6
  • Doctor-Shotgun/no-robots-sharegpt: Processed 9911, Removed 89

Deduplication Stats:

  Starting row count: 85,628; Final row count: 81,960; Rows removed: 3,668
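The filtering and deduplication described above can be sketched as a single pass over the samples. This is an illustration, not the actual pipeline: the phrase list and the exact-match dedup key are assumptions.

```python
# Hypothetical filtered-phrase list, standing in for the GPT-slop /
# Claude-ism phrases actually used.
FILTERED_PHRASES = ["as an ai language model", "i cannot and will not"]

def clean_dataset(samples: list[str], phrases=FILTERED_PHRASES) -> list[str]:
    """Drop samples containing filtered phrases, then exact-dedup the rest."""
    kept, seen = [], set()
    for text in samples:
        lowered = text.lower()
        if any(p in lowered for p in phrases):
            continue  # phrase-filtered
        if lowered in seen:
            continue  # exact duplicate
        seen.add(lowered)
        kept.append(text)
    return kept

samples = [
    "The dragon circled the tower.",
    "As an AI language model, I cannot help.",
    "The dragon circled the tower.",
]
print(len(clean_dataset(samples)))  # -> 1 (one filtered, one deduplicated)
```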

I've had a few people ask about donations so here's a link:
