---
base_model:
- bunnycore/Llama-3.2-3B-Sci-Think
- DavidAU/Llama-3.2-3B-Instruct-heretic-ablitered-uncensored
library_name: transformers
tags:
- mergekit
- low-spec
- low-refusals
- roleplay
- rp
- nsfw
- abliterated
- uncensored
- heretic
- merge
- not-for-all-audiences
license: llama3.2
language:
- es
- en
pipeline_tag: text-generation
---
# Progenitor Virus 3.2 3B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.

### Models Merged

The following models were included in the merge:

* [bunnycore/Llama-3.2-3B-Sci-Think](https://huggingface.co/bunnycore/Llama-3.2-3B-Sci-Think)
* [DavidAU/Llama-3.2-3B-Instruct-heretic-ablitered-uncensored](https://huggingface.co/DavidAU/Llama-3.2-3B-Instruct-heretic-ablitered-uncensored)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
# Umbrella Corporation Official Merge Protocol v3.2
# Author: Dr. Novaciano
# Objective: Test
# PROJECT: Progenitor-Virus-3.2-3B

models:
  - model: DavidAU/Llama-3.2-3B-Instruct-heretic-ablitered-uncensored  # Experimental viral strain neural imprint
  - model: bunnycore/Llama-3.2-3B-Sci-Think  # Baseline cognitive template, "safe mode"

merge_method: slerp  # Spherical linear interpolation to preserve extreme viral traits smoothly
base_model: DavidAU/Llama-3.2-3B-Instruct-heretic-ablitered-uncensored  # Anchor model for stable latent space

dtype: bfloat16  # Memory-efficient precision, minimal loss in viral feature fidelity

parameters:
  t: 0.45
  normalize: false
  rescale: true
  rescale_factor: 1.12
  memory_efficient: true
  low_cpu_mem_usage: true

layer_range:
  - value: [4, 22]

tie_word_embeddings: true
tie_output_embeddings: true
```
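To reproduce a merge like this one, the recipe above can be saved to a file and passed to mergekit's command-line tool. The sketch below assumes mergekit is installed (`pip install mergekit`) and uses a condensed version of the configuration (only the core fields); the actual merge command is shown commented out, since it downloads both 3B source models.

```shell
# Save a minimal version of the recipe above (field values copied from the
# card's configuration; extra tuning keys omitted for brevity).
cat > progenitor.yaml <<'EOF'
models:
  - model: DavidAU/Llama-3.2-3B-Instruct-heretic-ablitered-uncensored
  - model: bunnycore/Llama-3.2-3B-Sci-Think
merge_method: slerp
base_model: DavidAU/Llama-3.2-3B-Instruct-heretic-ablitered-uncensored
dtype: bfloat16
parameters:
  t: 0.45
EOF

# Run the merge (commented out here: requires downloading both source models
# and several GB of disk space):
#   mergekit-yaml progenitor.yaml ./Progenitor-Virus-3.2-3B --copy-tokenizer
echo "wrote progenitor.yaml"
```

The `t: 0.45` parameter biases the interpolation slightly toward the base (abliterated) model; values closer to 1.0 would weight the second model more heavily.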