---
base_model:
- Babsie/ThetaBlackGorgon-8B
- Bacon666/Athlon-8B-0.1
- Naphula/Llamatron-8B-v1
- DarkArtsForge/Raven-8B-v1
- EldritchLabs/Cthulhu-8B-v1.4
- HumanLLMs/Human-Like-LLama3-8B-Instruct
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- OccultAI/Morpheus-8B-v3
- Sao10K/L3-8B-Stheno-v3.2
- SicariusSicariiStuff/Assistant_Pepe_8B
- SicariusSicariiStuff/Impish_Mind_8B
- TheDrummer/Anubis-Mini-8B-v1
- TroyDoesAI/BlackSheep-X-Dolphin
datasets:
- DarkArtsForge/Poe_v1
- EldritchLabs/Cthulhu_v1.4b
- OccultAI/illuminati_imatrix_v1
- OccultAI/Morpheus_1052
- SicariusSicariiStuff/UBW_Tapestries
language:
- en
library_name: transformers
license: apache-2.0
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- float32
- swearing
- rp
- horror
- dare_linear
- llama
- merge
- mergekit
widget:
- text: "Goetia-8B-v1"
  output:
    url: https://cdn-uploads.huggingface.co/production/uploads/68e840caa318194c44ec2a04/DHbuh4efzjCGpxDUciZ_-.jpeg
---

> [!CAUTION]
> ⚠️ Warning: This model can produce narratives and RP that contain violent and graphic erotic content. Adjust your system prompt accordingly, and use the **Llama 3** chat template.

# 📜 Goetia 8B v1

## Goetia Grimoire

## 🐙 The Lesser Key

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). It was merged with the [Linear DARE](https://arxiv.org/abs/2311.03099) merge method using aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored as the base. Goetia 8B v1 is fully uncensored; no ablation or jailbreaks are needed. The model is very creative for its size, and it has quite an attitude. The configuration below serves as a merge audit, showing the exact weight distribution. This model is recommended for those who can't run the larger Goetia models, or who want a change of pace from the way Mistral writes. Raising Temp and TopNSigma to 1.0 also seems to help. `dare_linear` seems particularly effective for Llama models: it outperformed `della_linear` and `dare_ties` in my testing. A minimal usage sketch follows the model list below.

The following models were included in the merge:

- aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored
- Babsie/ThetaBlackGorgon-8B
- Bacon666/Athlon-8B-0.1
- Naphula/Llamatron-8B-v1
- DarkArtsForge/Raven-8B-v1
- EldritchLabs/Cthulhu-8B-v1.4
- HumanLLMs/Human-Like-LLama3-8B-Instruct
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- OccultAI/Morpheus-8B-v3
- Sao10K/L3-8B-Stheno-v3.2
- SicariusSicariiStuff/Assistant_Pepe_8B
- SicariusSicariiStuff/Impish_Mind_8B
- TheDrummer/Anubis-Mini-8B-v1
- TroyDoesAI/BlackSheep-X-Dolphin
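Here is a minimal inference sketch with `transformers`, applying the Llama 3 chat template as the warning above recommends. The repo id is a placeholder assumption (substitute this model's actual Hugging Face id), and the prompts are illustrative only. Temperature is set to 1.0 per the note above; TopNSigma is sampler-backend-specific and is not shown here.

```python
# Minimal inference sketch. Assumption: MODEL_ID is a placeholder;
# replace it with this model's actual Hugging Face repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Naphula/Goetia-8B-v1"  # placeholder, see note above

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # the merge is exported as bfloat16 (out_dtype in the config below)
    device_map="auto",
)

# The Llama 3 chat template ships with the tokenizer; apply_chat_template formats the turns.
messages = [
    {"role": "system", "content": "You are a vivid, uncensored storyteller."},
    {"role": "user", "content": "Continue the scene: the summoning circle flickers..."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.0,  # per the sampling recommendation above
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```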

## 🧙 OccultAI Sigil Magic

### Configuration

The following YAML configuration was used to produce this model:

```yaml
architecture: LlamaForCausalLM
models:
  - model: A:\LLM\.cache\8B\!models--aifeifei798--DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored
  - model: A:\LLM\.cache\8B\!models--SicariusSicariiStuff--Assistant_Pepe_8B
    parameters:
      weight: 0.4
      density: 0.8
  - model: B:\8B\Morpheus_v3_prototype_526
    parameters:
      weight: 0.4
      density: 0.8
  - model: A:\LLM\.cache\8B\Naphula_Llamatron-8B-v1
    parameters:
      weight: 0.25
      density: 0.8
  - model: A:\LLM\.cache\8B\!models--SicariusSicariiStuff--Impish_Mind_8B
    parameters:
      weight: 0.1
      density: 0.8
  - model: A:\LLM\.cache\8B\!models--Sao10K--L3-8B-Stheno-v3.2
    parameters:
      weight: 0.1
      density: 0.8
  - model: B:\8B\Cthulhu_v1.4
    parameters:
      weight: 0.4
      density: 0.8
  - model: B:\8B\models--TheDrummer--Anubis-Mini-8B-v1
    parameters:
      weight: 0.2
      density: 0.8
  - model: B:\8B\Raven_v1
    parameters:
      weight: 0.4
      density: 0.8
  - model: B:\8B\models--HumanLLMs--Human-Like-LLama3-8B-Instruct
    parameters:
      weight: 0.1
      density: 0.8
  - model: A:\LLM\.cache\8B\!models--NeverSleep--Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      weight: 0.1
      density: 0.8
  - model: A:\LLM\.cache\8B\!models--TroyDoesAI--BlackSheep-X-Dolphin
    parameters:
      weight: 0.1
      density: 0.8
  - model: A:\LLM\.cache\8B\!models--Bacon666--Athlon-8B-0.1
    parameters:
      weight: 0.1
      density: 0.8
  - model: A:\LLM\.cache\8B\!models--Babsie--ThetaBlackGorgon-8B
    parameters:
      weight: 0.1
      density: 0.8
merge_method: dare_linear
base_model: A:\LLM\.cache\8B\!models--aifeifei798--DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored
parameters:
  lambda: 1.0
  normalize: false
  int8_mask: false
  rescale: true
tokenizer:
  source: union
chat_template: auto
dtype: float32
out_dtype: bfloat16
name: 📜 Goetia-8B-v1
```
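A config like this can be fed to mergekit either through the `mergekit-yaml` CLI or its Python API. Below is a hedged sketch using the Python entry points from mergekit's README (`MergeConfiguration`, `run_merge`, `MergeOptions`); the local config filename and output path are placeholder assumptions.

```python
# Sketch: reproducing a merge with mergekit's Python API.
# Assumptions: "goetia.yaml" holds the config above; "./Goetia-8B-v1" is the output dir.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("goetia.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Goetia-8B-v1",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # emit the union tokenizer alongside the weights
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

The CLI equivalent is `mergekit-yaml goetia.yaml ./Goetia-8B-v1` (same placeholder paths).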

## 🕯️ Summon the Infernal — Invocation Ritual

Experiment with these sampler settings: https://huggingface.co/datasets/Naphula/Updated_Settings