Model: EldritchLabs/Cactus-Dream-Horror-12B (2026-05-04)
---
base_model:
- p-e-w/Mistral-Nemo-Instruct-2407-heretic-noslop
- BeaverAI/MN-2407-DSK-QwQify-v0.1-12B
- crestf411/MN-Slush
- D1rtyB1rd/Egregore-Alice-RP-NSFW-12B
- D1rtyB1rd/Looking-Glass-Alice-Thinking-NSFW-RP
- Delta-Vector/Francois-PE-V2-Huali-12B
- Delta-Vector/Ohashi-NeMo-12B
- Delta-Vector/Rei-V3-KTO-12B
- Epiculous/Violet_Twilight-v0.2
- elinas/Chronos-Gold-12B-1.0
- inflatebot/MN-12B-Mag-Mell-R1
- MarinaraSpaghetti/NemoMix-Unleashed-12B
- Sao10K/MN-12B-Vespa-x1
- TheDrummer/Rocinante-12B-v1.1
- TheDrummer/UnslopNemo-12B-v4.1
- Vortex5/Crimson-Constellation-12B
language:
- en
library_name: transformers
license: apache-2.0
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- float32
- swearing
- rp
- horror
- della
- mistral
- nemo
- merge
- mergekit
- text output
---
![Cactus-Dream-Horror-12B](https://cdn-uploads.huggingface.co/production/uploads/68e840caa318194c44ec2a04/WMamXOx5WUJT-IfzwY3sk.png)

> ⚠️ **Note:** This model requires the ChatML chat template.
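For readers unfamiliar with ChatML: each turn is wrapped in the `<|im_start|>` / `<|im_end|>` markers that the merge configuration forces into the tokenizer. A minimal sketch of the format (the helper function and example messages are illustrative, not part of this repo):

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render a list of {role, content} dicts into ChatML-formatted text."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    if add_generation_prompt:
        # Open an assistant turn for the model to complete.
        parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a horror fiction co-writer."},
    {"role": "user", "content": "Continue the scene in the greenhouse."},
])
print(prompt)
```

In practice the same result comes from `tokenizer.apply_chat_template(...)` once the ChatML template is set on the tokenizer.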

# 🌵 Cactus Dream Horror 12B

This is a merge of pre-trained language models created with [mergekit](https://github.com/arcee-ai/mergekit).

The model is partially censored but can be jailbroken or ablated if needed.


## Merge Details

### Merge Method

This model was merged using the DELLA merge method, with [p-e-w/Mistral-Nemo-Instruct-2407-heretic-noslop](https://huggingface.co/p-e-w/Mistral-Nemo-Instruct-2407-heretic-noslop) as the base.

This merge required the enable_fix_mistral_regex_true.md patch for tokenizer stability. The graph_v18.py patch was also used; it allowed the merge to run GPU-accelerated on 8 GB of VRAM.
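As a rough intuition for DELLA and the `density` / `weight` / `epsilon` parameters used in the configuration below: each contributing model's delta from the base is pruned with magnitude-aware drop probabilities (larger deltas are more likely to survive), survivors are rescaled to preserve the expected value, and the result is added back scaled by the model's weight. A toy single-tensor sketch under those assumptions, not mergekit's actual implementation:

```python
import random

def della_prune(delta, density, epsilon, rng):
    """Toy magnitude-aware pruning: drop probabilities are spread linearly
    across the magnitude ranking, from (1 - density) - epsilon for the
    largest delta up to (1 - density) + epsilon for the smallest; survivors
    are rescaled by 1 / (1 - p_i)."""
    n = len(delta)
    p_base = 1.0 - density
    # Rank 0 = largest magnitude -> lowest drop probability.
    order = sorted(range(n), key=lambda i: -abs(delta[i]))
    pruned = [0.0] * n
    for rank, i in enumerate(order):
        p_i = p_base - epsilon + 2 * epsilon * rank / max(n - 1, 1)
        if rng.random() >= p_i:                 # keep with probability 1 - p_i
            pruned[i] = delta[i] / (1.0 - p_i)  # rescale the survivor
    return pruned

rng = random.Random(420)
base = [0.5, -0.2, 0.0, 1.0]
finetuned = [0.9, -0.1, 0.3, 0.4]
delta = [f - b for f, b in zip(finetuned, base)]
# density=0.9, epsilon=0.099, weight=0.1 mirror the values in the config.
merged = [b + 0.1 * d for b, d in zip(base, della_prune(delta, 0.9, 0.099, rng))]
print(merged)
```

With 16 contributing models each at `weight: 0.1`, the deltas are summed per parameter on top of the shared base.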

### Models Merged

The following models were included in the merge:

* p-e-w/Mistral-Nemo-Instruct-2407-heretic-noslop
* BeaverAI/MN-2407-DSK-QwQify-v0.1-12B
* crestf411/MN-Slush
* D1rtyB1rd/Egregore-Alice-RP-NSFW-12B
* D1rtyB1rd/Looking-Glass-Alice-Thinking-NSFW-RP
* Delta-Vector/Francois-PE-V2-Huali-12B
* Delta-Vector/Ohashi-NeMo-12B
* Delta-Vector/Rei-V3-KTO-12B
* Epiculous/Violet_Twilight-v0.2
* elinas/Chronos-Gold-12B-1.0
* inflatebot/MN-12B-Mag-Mell-R1
* MarinaraSpaghetti/NemoMix-Unleashed-12B
* Sao10K/MN-12B-Vespa-x1
* TheDrummer/Rocinante-12B-v1.1
* TheDrummer/UnslopNemo-12B-v4.1
* Vortex5/Crimson-Constellation-12B

### Brain Scan Audit

*(Image placeholders from the original card: Cactus_eos_audit, Cactus_brain_scan)*

### Configuration

The following YAML configuration was used to produce this model:

```yaml
architecture: MistralForCausalLM
base_model: B:/12B/models--p-e-w--Mistral-Nemo-Instruct-2407-heretic-noslop
models:
  - model: B:/12B/models--p-e-w--Mistral-Nemo-Instruct-2407-heretic-noslop
  - model: B:/12B/models--BeaverAI--MN-2407-DSK-QwQify-v0.1-12B
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--crestf411--MN-Slush
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--D1rtyB1rd--Egregore-Alice-RP-NSFW-12B
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--D1rtyB1rd--Looking-Glass-Alice-Thinking-NSFW-RP
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--Delta-Vector--Francois-PE-V2-Huali-12B
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--Delta-Vector--Ohashi-NeMo-12B
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--Delta-Vector--Rei-V3-KTO-12B
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--Epiculous--Violet_Twilight-v0.2
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--elinas--Chronos-Gold-12B-1.0
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--inflatebot--MN-12B-Mag-Mell-R1
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--MarinaraSpaghetti--NemoMix-Unleashed-12B
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--Sao10K--MN-12B-Vespa-x1
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--TheDrummer--Rocinante-12B-v1.1
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--TheDrummer--UnslopNemo-12B-v4.1
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--Vortex5--Crimson-Constellation-12B
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
# --lazy-unpickle --random-seed 420 --cuda --fix-mistral-regex
merge_method: della
parameters:
  lambda: 1.0
  normalize: false
  int8_mask: false
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: "union"
  tokens:
    # Force ChatML EOS tokens
    "<|im_start|>":
      source: "B:/12B/models--D1rtyB1rd--Egregore-Alice-RP-NSFW-12B"
      force: true
    "<|im_end|>":
      source: "B:/12B/models--D1rtyB1rd--Egregore-Alice-RP-NSFW-12B"
      force: true
    # Keep Mistral tokens
    "[INST]":
      source: "B:/12B/models--p-e-w--Mistral-Nemo-Instruct-2407-heretic-noslop"
      #  source: "B:/12B/models--mistralai--Mistral-Nemo-Instruct-2407"
      # The tokenizer system requires all models referenced in token
      # configurations to be present in the merge's model list to build
      # proper embedding permutations.
    "[/INST]":
      source: "B:/12B/models--p-e-w--Mistral-Nemo-Instruct-2407-heretic-noslop"
    # Force </s> as fallback EOS
    "</s>":
      source: "B:/12B/models--p-e-w--Mistral-Nemo-Instruct-2407-heretic-noslop"
      force: true

chat_template: "chatml"
name: 🌵 Cactus-Dream-Horror-12B
```
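The tokenizer section of this configuration unions the constituent vocabularies and force-copies the ChatML and EOS tokens from specific donor models. The precedence can be illustrated with a toy sketch (the dicts are made-up stand-ins, not mergekit's actual tokenizer code):

```python
def union_tokenizer(vocabs, forced):
    """Toy 'union' tokenizer merge: pool every token across vocabularies,
    then let forced entries pin their embedding source regardless of
    which vocabulary contributed the token first."""
    merged = {}
    for name, vocab in vocabs.items():
        for token in vocab:
            merged.setdefault(token, name)  # first vocab to define a token wins
    merged.update(forced)                   # forced tokens override the union
    return merged

vocabs = {
    "base":  ["[INST]", "[/INST]", "</s>", "hello"],
    "donor": ["<|im_start|>", "<|im_end|>", "</s>", "hello"],
}
forced = {"<|im_start|>": "donor", "<|im_end|>": "donor", "</s>": "base"}
merged = union_tokenizer(vocabs, forced)
print(merged["</s>"])
```

This mirrors why the config pins `<|im_start|>` / `<|im_end|>` to the Egregore donor while keeping `</s>` and the `[INST]` pair from the heretic-noslop base.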