---
base_model: 0xA50C1A1/Llama-3.3-8B-Instruct-128K-SOM-MPOA
library_name: transformers
model_name: Llama-3.3-8B-Nymphaea-RP
tags:
- uncensored
- roleplay
- trl
- rp
- creative-writing
license: apache-2.0
---
# Llama-3.3-8B-Nymphaea-RP

A fine-tune of Llama 3.3 8B Instruct for roleplay and creative writing.

I've trained this mostly for merging with Llama 3.1/3.3 8B fine-tunes.

> [!TIP]
> The SillyTavern preset is available [here](https://huggingface.co/0xA50C1A1/Llama-3.3-8B-Nymphaea-RP/blob/main/ST-Preset.json).
> For custom presets, please use the **Llama 3** instruct template.
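
For reference when building a custom preset, the single-turn Llama 3 instruct structure can be sketched in plain Python. The helper name is hypothetical and shown only for illustration; in practice, `tokenizer.apply_chat_template` from `transformers` renders this format for you.

```python
def llama3_prompt(system: str, user: str) -> str:
    """Build a single-turn prompt in the Llama 3 instruct format.

    Hypothetical helper for illustration; transformers'
    tokenizer.apply_chat_template produces the same structure.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # Generation continues from the open assistant header.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(llama3_prompt("You are a creative roleplay partner.", "Hello!"))
```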

## Training Notes
Trained on the latest iteration of my Darkmere dataset. This version features expanded genre variety and is built on a mix of manually curated synthetic data and human-written stories.

> [!IMPORTANT]
> The base weights are abliterated via [Heretic](https://github.com/p-e-w/heretic) prior to fine-tuning, so this fine-tune is quite uncensored.

<details>
<summary>Training Specs</summary><p>

**Method:**

* **Training Method:** DoRA (Weight-Decomposed LoRA)
* **Target Modules:** `all-linear`
* **LoRA Rank:** 64
* **LoRA Alpha:** 64
* **LoRA Dropout:** 0.05

**Hyperparameters:**
* **Batch Size:** 2 (Per-device)
* **Gradient Accumulation:** 2
* **Epochs:** 2
* **Learning Rate:** 1e-4
* **Optimizer:** `adamw_torch_fused`
* **LR Scheduler:** `cosine`
* **Noise Level:** `neftune_noise_alpha=5`
</p></details>
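
The specs above map onto a PEFT/TRL configuration. What follows is a minimal sketch under the assumption that training used `peft`'s `LoraConfig` and `trl`'s `SFTConfig`; it is not the actual training script, and `output_dir` is a placeholder.

```python
from peft import LoraConfig
from trl import SFTConfig

# DoRA adapter over all linear layers, per the specs above.
peft_config = LoraConfig(
    use_dora=True,               # Weight-Decomposed LoRA
    target_modules="all-linear",
    r=64,
    lora_alpha=64,
    lora_dropout=0.05,
)

# Trainer-side hyperparameters, per the specs above.
sft_config = SFTConfig(
    output_dir="out",                # placeholder
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,   # effective batch size 4 per device
    num_train_epochs=2,
    learning_rate=1e-4,
    optim="adamw_torch_fused",
    lr_scheduler_type="cosine",
    neftune_noise_alpha=5,
)
```

Both configs would then be passed to `trl`'s `SFTTrainer` along with the abliterated base model and the dataset.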

## Special Thanks
This fine-tune wouldn't be possible without the incredible work of the community:

* **[p-e-w](https://huggingface.co/p-e-w)** for developing **[Heretic](https://github.com/p-e-w/heretic)**, an essential tool for censorship removal.
* **[SicariusSicariiStuff](https://huggingface.co/SicariusSicariiStuff)** for developing the **[SLOP_Detector](https://github.com/SicariusSicariiStuff/SLOP_Detector)** script.
* **[allura-forge](https://huggingface.co/allura-forge)** and **[shb777](https://huggingface.co/shb777)** for providing access to the **Llama 3.3 8B** weights.
* **[AMD](https://oneclickamd.ai/)** for their Instinct™ MI300X GPU.