---
library_name: transformers
tags:
- llama3.2
pipeline_tag: text-generation
---
# Progenitor Virus 3.2 3B
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP (spherical linear interpolation) merge method, with DavidAU/Llama-3.2-3B-Instruct-heretic-ablitered-uncensored as the base model.
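To make the method concrete, here is a minimal sketch of SLERP applied to a pair of weight tensors treated as flat vectors. The function name, epsilon threshold, and linear-interpolation fallback are illustrative assumptions, not mergekit's actual implementation; mergekit applies this per-tensor across the two models with the interpolation factor `t` from the config below.

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    A sketch of the SLERP merge: interpolate along the arc between
    the (normalized) flattened tensors rather than along the chord.
    """
    a_flat, b_flat = a.ravel(), b.ravel()
    a_n = a_flat / (np.linalg.norm(a_flat) + eps)
    b_n = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(a_n @ b_n, -1.0, 1.0)
    theta = np.arccos(dot)           # angle between the two weight directions
    if theta < eps:                  # nearly parallel: plain lerp is stable
        return (1 - t) * a + t * b
    s = np.sin(theta)
    out = (np.sin((1 - t) * theta) / s) * a_flat + (np.sin(t * theta) / s) * b_flat
    return out.reshape(a.shape)
```

At `t = 0` the result is the first model's tensor, at `t = 1` the second's; the `t: 0.45` in the configuration below leans slightly toward the base model.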
### Models Merged

The following models were included in the merge:

* DavidAU/Llama-3.2-3B-Instruct-heretic-ablitered-uncensored
* bunnycore/Llama-3.2-3B-Sci-Think
### Configuration

The following YAML configuration was used to produce this model:
```yaml
# Umbrella Corporation Official Merge Protocol v3.2
# Author: Dr. Novaciano
# Objective: Test
# PROJECT: Progenitor-Virus-3.2-3B
models:
  - model: DavidAU/Llama-3.2-3B-Instruct-heretic-ablitered-uncensored # Experimental viral strain neural imprint
  - model: bunnycore/Llama-3.2-3B-Sci-Think # Baseline cognitive template, "safe mode"
merge_method: slerp # Spherical Linear Interpolation to preserve extreme viral traits smoothly
base_model: DavidAU/Llama-3.2-3B-Instruct-heretic-ablitered-uncensored # Anchor model for stable latent space
dtype: bfloat16 # Memory-efficient precision, minimal loss in viral feature fidelity
parameters:
  t: 0.45
  normalize: false
  rescale: true
  rescale_factor: 1.12
  memory_efficient: true
  low_cpu_mem_usage: true
layer_range:
  - value: [4, 22]
tie_word_embeddings: true
tie_output_embeddings: true
```
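Before handing a configuration like this to mergekit, it can be worth sanity-checking that it parses as valid YAML and that the interpolation factor is in range. The snippet below is a hedged sketch using PyYAML over an abbreviated copy of the config above; it is not part of mergekit itself.

```python
import yaml

# Abbreviated copy of the merge configuration shown above.
CONFIG = """\
models:
  - model: DavidAU/Llama-3.2-3B-Instruct-heretic-ablitered-uncensored
  - model: bunnycore/Llama-3.2-3B-Sci-Think
merge_method: slerp
base_model: DavidAU/Llama-3.2-3B-Instruct-heretic-ablitered-uncensored
dtype: bfloat16
parameters:
  t: 0.45
"""

cfg = yaml.safe_load(CONFIG)
assert cfg["merge_method"] == "slerp"
assert len(cfg["models"]) == 2
assert 0.0 <= cfg["parameters"]["t"] <= 1.0  # SLERP factor must lie in [0, 1]
```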