---
base_model: 0xA50C1A1/Qwen3-4B-Instruct-2507-SOM-MPOA
library_name: transformers
model_name: Qwen3-4B-Nymphaea-RP
tags:
- uncensored
- roleplay
- trl
- rp
- creative-writing
license: apache-2.0
---

# Qwen3-4B-Nymphaea-RP

A fine-tune of Qwen3-4B-Instruct-2507 for roleplay and creative writing.

Suitable for mobile roleplay: tested on a Nothing Phone 2 at Q4_K_M quantization (7-8 t/s).

> [!TIP]
> The SillyTavern preset is available here. For custom presets, please use the ChatML instruct template.
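For reference, ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` markers with the role name on the first line. A minimal sketch of the format (the prompt text below is a placeholder, not from the preset):

```python
def format_chatml(messages):
    """Render a list of {role, content} dicts in the ChatML instruct format."""
    out = ""
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    out += "<|im_start|>assistant\n"  # open the assistant turn for generation
    return out

prompt = format_chatml([
    {"role": "system", "content": "You are a narrator in a fantasy roleplay."},
    {"role": "user", "content": "Describe the tavern."},
])
print(prompt)
```

In practice, `tokenizer.apply_chat_template` on the model's tokenizer produces this formatting for you.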

## Chat Example

Tested at Q8_0 quantization.

SillyTavern Screenshot

## Training Notes

Trained on the latest iteration of my Darkmere dataset. This version features expanded genre variety, built upon a mix of manually curated synthetics and human-written stories.

> [!IMPORTANT]
> The base weights were abliterated via Heretic prior to fine-tuning, so this fine-tune is largely uncensored.

## Training Specs

**Method:**

- Training Method: DoRA (Weight-Decomposed Low-Rank Adaptation)
- Target Modules: all-linear
- LoRA Rank: 32
- LoRA Alpha: 32
- LoRA Dropout: 0.05
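The adapter settings above map directly onto a peft `LoraConfig`; a hedged sketch, assuming a recent peft release with DoRA support (model loading and trainer wiring omitted):

```python
from peft import LoraConfig

# DoRA is enabled on top of a standard LoRA config via `use_dora=True`.
# Values mirror the card above; task_type is an assumption (causal LM).
peft_config = LoraConfig(
    r=32,                         # LoRA rank
    lora_alpha=32,                # LoRA alpha
    lora_dropout=0.05,            # LoRA dropout
    target_modules="all-linear",  # adapt every linear layer
    use_dora=True,                # Weight-Decomposed LoRA
    task_type="CAUSAL_LM",
)
```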

**Hyperparameters:**

- Batch Size: 2 (per-device)
- Gradient Accumulation: 2
- Epochs: 2
- Learning Rate: 1e-4
- Optimizer: adamw_torch_fused
- LR Scheduler: cosine
- NEFTune Noise: neftune_noise_alpha=5
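Given the `trl` tag, these hyperparameters plausibly correspond to a TRL `SFTConfig` along these lines; a sketch under that assumption, not the author's actual training script (the output path is a placeholder):

```python
from trl import SFTConfig

training_args = SFTConfig(
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,   # effective batch of 4 per device
    num_train_epochs=2,
    learning_rate=1e-4,
    optim="adamw_torch_fused",
    lr_scheduler_type="cosine",
    neftune_noise_alpha=5,           # NEFTune input-embedding noise
    output_dir="qwen3-4b-nymphaea-rp",  # placeholder path
)
```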

## Special Thanks

This fine-tune wouldn't be possible without the incredible work of the community: