ModelHub XC 7d64580e9b Initial commit; model provided by the ModelHub XC community
Model: 0xA50C1A1/Llama-3.3-8B-Nymphaea-RP
Source: Original Platform
2026-04-27 05:37:04 +08:00


base_model: 0xA50C1A1/Llama-3.3-8B-Instruct-128K-SOM-MPOA
library_name: transformers
model_name: Llama-3.3-8B-Nymphaea-RP
tags: uncensored, roleplay, trl, rp, creative-writing
license: apache-2.0

Llama-3.3-8B-Nymphaea-RP

A fine-tune of Llama 3.3 8B Instruct for roleplay and creative writing.

I've trained this mostly for merging with Llama 3.1/3.3 8B fine-tunes.

Tip

The SillyTavern preset is available here. For custom presets, please use the Llama 3 instruct template.
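For use outside SillyTavern, a minimal sketch of building a Llama 3 instruct prompt with the transformers tokenizer (the model id comes from this card; the messages are hypothetical, and running this downloads the tokenizer):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("0xA50C1A1/Llama-3.3-8B-Nymphaea-RP")

# Hypothetical roleplay conversation.
messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},
    {"role": "user", "content": "Describe the tavern we just entered."},
]

# apply_chat_template formats the conversation with the Llama 3
# instruct special tokens and appends the assistant header so the
# model continues in character.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```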

Training Notes

Trained on the latest iteration of my Darkmere dataset. This version features expanded genre variety, built upon a mix of manually curated synthetics and human-written stories.

Important

The base weights are abliterated via Heretic prior to fine-tuning, so this fine-tune is quite uncensored.

Training Specs

Method:

  • Training Method: DoRA (Weight-Decomposed LoRA)
  • Target Modules: all-linear
  • LoRA Rank: 64
  • LoRA Alpha: 64
  • LoRA Dropout: 0.05
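As a sketch, the adapter settings above map onto a peft LoraConfig, assuming the DoRA support available in recent peft releases (this is illustrative, not the author's actual training code):

```python
from peft import LoraConfig

# DoRA (Weight-Decomposed LoRA) is enabled via use_dora=True;
# the remaining values mirror the specs listed above.
adapter_config = LoraConfig(
    use_dora=True,
    target_modules="all-linear",
    r=64,              # LoRA rank
    lora_alpha=64,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```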

Hyperparameters:

  • Batch Size: 2 (per device)
  • Gradient Accumulation: 2
  • Epochs: 2
  • Learning Rate: 1e-4
  • Optimizer: adamw_torch_fused
  • LR Scheduler: cosine
  • Noise Level: neftune_noise_alpha=5
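The hyperparameters above correspond to a trl SFTConfig along these lines (a sketch under stated assumptions, not the author's training script; the output_dir is hypothetical, and SFTConfig subclasses transformers TrainingArguments):

```python
from trl import SFTConfig

training_args = SFTConfig(
    output_dir="nymphaea-rp",        # hypothetical output path
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,   # effective batch of 4 per device
    num_train_epochs=2,
    learning_rate=1e-4,
    optim="adamw_torch_fused",
    lr_scheduler_type="cosine",
    neftune_noise_alpha=5,           # NEFTune embedding noise
)
```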

Special Thanks

This fine-tune wouldn't be possible without the incredible work of the community: