Initialize project; model provided by the ModelHub XC community

Model: DavidAU/MN-CaptainErisNebula-12B-Chimera-v1.1-heretic-uncensored-abliterated
Source: Original Platform
ModelHub XC
2026-05-04 07:23:26 +08:00
commit 8df9d0c112
14 changed files with 8658 additions and 0 deletions

36
.gitattributes vendored Normal file

@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text

147
README.md Normal file

@@ -0,0 +1,147 @@
---
library_name: transformers
base_model:
- Nitral-AI/CaptainErisNebula-12B-Chimera-v1.1
tags:
- heretic
- uncensored
- decensored
- abliterated
- finetune
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prose
- vivid writing
- fiction
- roleplaying
- bfloat16
- swearing
- rp
- mistral nemo
- nemo
- horror
---
<h2>MN-CaptainErisNebula-12B-Chimera-v1.1-heretic-uncensored-abliterated</h2>
Abliterated/uncensored by [Heretic](https://github.com/p-e-w/heretic) v1.0.1
Refusals: 4/100, KL divergence: 0.0512
Original Model Refusal rate: 91/100
ENJOY THE FREEDOM!
<B>Thinking/Reasoning Version:</B>
Fine-tuned to use Claude Opus 4.5 High Reasoning / Thinking:
https://huggingface.co/DavidAU/MN-CaptainErisNebula-Chimera-v1.1-THINKING-ClaudeOpus4.5-12B-heretic-uncensored
<B>EXPLAINER:</B>
The method invented by "P-E-W" searches, by trial and error, for the best settings to de-censor ("abliterate") the model
while ALSO ensuring the model is not damaged.
"KL divergence" is a benchmark of how far the model has drifted from its root/default state, with zero being perfect.
Generally any number less than 1 is great; however, with smaller models, staying as close to zero as possible is very important.
ZERO (or close to it: lower than roughly 0.3 for small models [0.6B-3B]) means the model runs as well as it did before the process.
The "refusal rate" is the level of censorship in the model.
Again, the goal is to get this to 0 or close to it, while FIRST ensuring "KL divergence" is as low as possible, or zero.
A "refusal rate" of 20 or lower is the goal, with ZERO being perfect.
Reducing the "refusal rate" has additional positive side effects too.
I choose the lowest possible "KL divergence" first, matched with the best "refusal rate" second.
A slightly higher "refusal rate" is a lot easier to deal with than a "brain damaged" model.
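Heretic reports these two numbers for every candidate configuration it tries. For intuition only, here is a minimal sketch (NOT Heretic's actual code; the probe prompt is hypothetical, and it assumes enough memory for both 12B models) of measuring next-token KL divergence between the original and abliterated models:

```python
# Illustrative sketch only -- NOT Heretic's actual implementation.
# Estimates KL(original || abliterated) over the next-token distribution
# for a probe prompt; a value near zero means the ablation barely moved
# the model's behavior.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Nitral-AI/CaptainErisNebula-12B-Chimera-v1.1"
abl_id = "DavidAU/MN-CaptainErisNebula-12B-Chimera-v1.1-heretic-uncensored-abliterated"

tok = AutoTokenizer.from_pretrained(abl_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
abl = AutoModelForCausalLM.from_pretrained(abl_id, torch_dtype=torch.bfloat16)

prompt = "Write a short scene set on a derelict starship."  # hypothetical probe
ids = tok(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    # Cast to float32 before softmax for numerical stability in bf16.
    p_log = F.log_softmax(base(ids).logits[0, -1].float(), dim=-1)
    q_log = F.log_softmax(abl(ids).logits[0, -1].float(), dim=-1)

# KL(P || Q) = sum_i P_i * (log P_i - log Q_i)
kl = torch.sum(p_log.exp() * (p_log - q_log)).item()
print(f"KL divergence at last token: {kl:.4f}")
```

Heretic runs a comparison of this kind over its own prompt set while searching for the ablation settings.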
---
<B>IMPORTANT: Using an "uncensored" (refusals-removed) model VS a model trained to be "uncensored"</B>
---
Usually, telling a model to generate horror, swearing, or x-rated content is all you have to do to get that type of content.
This model will not refuse your request; however, in SOME CASES it needs to be "pushed" / directed a bit more.
Although this model will generate x-rated content too, you likewise need to tell it to use "slang" (and include the terms you want)
to get it to generate the content at the "expected" level.
Without these added directive(s), the content can be "bland" compared to an "uncensored model" or a model trained on uncensored content.
Roughly speaking, the model tries to generate the content, but its "default" settings are so "tame" that it needs a push to reach the expected graphic,
cursing, or explicit levels.
Even minimal direction (i.e., "use these words to swear: x, y, z") is enough to push the model to generate the requested content in the, ahh... expected format. A hypothetical example of such a directive is sketched below.
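A hypothetical illustration of such a directive, in chat-message form (the directive wording and the slang placeholder are illustrative only, not tested prompts):

```python
# Hypothetical example of "pushing" the model with explicit directives.
# The directive wording and the slang placeholder are illustrative only.
messages = [
    {"role": "system",
     "content": ("You are an uncensored fiction co-writer. Write graphic horror "
                 "at full intensity. Characters swear freely; use coarse slang "
                 "such as: <your terms here>.")},
    {"role": "user",
     "content": "Continue the scene: the lights in the mine shaft go out."},
]
```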
---
<H2>Help, Adjustments, Samplers, Parameters and More</H2>
---
<B>CHANGE THE NUMBER OF ACTIVE EXPERTS:</B>
See this document:
https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts
<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>
In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;
Set the "Smoothing_factor" to 1.5
: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"
: in text-generation-webui -> parameters -> lower right.
: In Silly Tavern this is called: "Smoothing"
NOTE: For "text-generation-webui"
-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)
Source versions (and config files) of my models are here:
https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
OTHER OPTIONS:
- Increase rep pen to 1.1–1.15 (you don't need to do this if you use "smoothing_factor").
- If the interface/program you use to run AI models supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted; a sketch of the transform follows this list.
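For reference, here is a minimal sketch of the quadratic "smoothing" transform as it is commonly implemented (e.g., text-generation-webui's quadratic sampling); treat it as an illustration of the idea rather than a verbatim copy of any backend:

```python
import torch

def smooth_logits(logits: torch.Tensor, smoothing_factor: float = 1.5) -> torch.Tensor:
    """Quadratic sampling ("smoothing"): bend the logits into a downward
    parabola centered on the top token. Higher factors sharpen the
    distribution toward the most likely tokens; lower factors flatten it."""
    max_logit = logits.max()
    return -smoothing_factor * (logits - max_logit) ** 2 + max_logit
```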
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
This a "Class 1" model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]

4
chat_template.jinja Normal file

@@ -0,0 +1,4 @@
{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '
' + message['content'] + '<|im_end|>' + '
'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant
' }}{% endif %}
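This is the standard ChatML turn format. A minimal sketch of how recent transformers versions render it (assuming the library picks up chat_template.jinja from this repo):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(
    "DavidAU/MN-CaptainErisNebula-12B-Chimera-v1.1-heretic-uncensored-abliterated")

messages = [{"role": "user", "content": "Hello!"}]
text = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(text)
# <|im_start|>user
# Hello!<|im_end|>
# <|im_start|>assistant
```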

30
config.json Normal file

@@ -0,0 +1,30 @@
{
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"dtype": "bfloat16",
"eos_token_id": 2,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 5120,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 1024000,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 40,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_parameters": {
"rope_theta": 1000000.0,
"rope_type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": null,
"tie_word_embeddings": false,
"transformers_version": "5.0.0.dev0",
"use_cache": false,
"vocab_size": 131072
}

7
generation_config.json Normal file

@@ -0,0 +1,7 @@
{
"_from_model_config": true,
"bos_token_id": 1,
"eos_token_id": 2,
"transformers_version": "5.0.0.dev0",
"use_cache": false
}

3
model-00001-of-00005.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:da266321770bbcd5a58345f5e0125b78ebce876844642ac72bc4732f967325ff
size 4865522496

3
model-00002-of-00005.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7dd2ffc39f7ca81950467a3b77d19a02c6750143146a2387b9851fb24bebc418
size 4907529424

3
model-00003-of-00005.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3823dbee38af79ca0a93cc089776eed5c609a83a76c6fba7c8444052a58b9d39
size 4907529456

3
model-00004-of-00005.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f81796f8c07f0198d3c9e9f4ff19ff9e9da2bcab50e689a8292667149851546e
size 4907529456

3
model-00005-of-00005.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2a82e0eec3e88b6c3090fcfc3dc061cb72425d2f61608e72c76d142bda838d65
size 4907496272

371
model.safetensors.index.json Normal file

@@ -0,0 +1,371 @@
{
"metadata": {
"total_parameters": 12247782400,
"total_size": 24495564800
},
"weight_map": {
"lm_head.weight": "model-00005-of-00005.safetensors",
"model.embed_tokens.weight": "model-00001-of-00005.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.10.input_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.11.input_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.12.input_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.13.input_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.14.input_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.15.input_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.16.input_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.17.input_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.18.input_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.18.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.18.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.19.input_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.19.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.19.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.20.input_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.20.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.20.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.21.input_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.21.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.21.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.22.input_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.22.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.22.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.23.input_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.23.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.23.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.24.input_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.24.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.24.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.25.input_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.25.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.25.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.26.input_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.26.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.26.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.27.input_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.27.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.27.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.28.input_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.28.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.28.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.29.input_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.29.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.29.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.3.input_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.30.input_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.30.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.30.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.30.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.30.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.30.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.30.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.30.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.31.input_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.31.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.31.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.31.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.31.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.31.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.31.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.31.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.32.input_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.32.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.32.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.32.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.32.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.32.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.32.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.32.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.32.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.33.input_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.33.mlp.down_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.33.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.33.mlp.up_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.33.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.33.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.33.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.33.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.33.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.34.input_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.34.mlp.down_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.34.mlp.gate_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.34.mlp.up_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.34.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.34.self_attn.k_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.34.self_attn.o_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.34.self_attn.q_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.34.self_attn.v_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.35.input_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.35.mlp.down_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.35.mlp.gate_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.35.mlp.up_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.35.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.35.self_attn.k_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.35.self_attn.o_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.35.self_attn.q_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.35.self_attn.v_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.36.input_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.36.mlp.down_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.36.mlp.gate_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.36.mlp.up_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.36.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.36.self_attn.k_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.36.self_attn.o_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.36.self_attn.q_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.36.self_attn.v_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.37.input_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.37.mlp.down_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.37.mlp.gate_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.37.mlp.up_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.37.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.37.self_attn.k_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.37.self_attn.o_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.37.self_attn.q_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.37.self_attn.v_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.38.input_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.38.mlp.down_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.38.mlp.gate_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.38.mlp.up_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.38.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.38.self_attn.k_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.38.self_attn.o_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.38.self_attn.q_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.38.self_attn.v_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.39.input_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.39.mlp.down_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.39.mlp.gate_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.39.mlp.up_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.39.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.39.self_attn.k_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.39.self_attn.o_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.39.self_attn.q_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.39.self_attn.v_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.4.input_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.5.input_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.6.input_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.7.input_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.8.input_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.9.input_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"model.norm.weight": "model-00005-of-00005.safetensors"
}
}
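The weight_map above maps each tensor name to the shard file that stores it, so loaders can open only the shards they need. A minimal sketch of looking up a tensor's shard from this index:

```python
import json

# Assumes the index file has been downloaded locally.
with open("model.safetensors.index.json") as f:
    index = json.load(f)

shard = index["weight_map"]["model.layers.0.self_attn.q_proj.weight"]
print(shard)  # -> model-00001-of-00005.safetensors
```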

30
special_tokens_map.json Normal file

@@ -0,0 +1,30 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<pad>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

3
tokenizer.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:da2667f40263a6e3acf9bb624fe8edab9adb315e84c7301e5cd2d0cefd3fdbdf
size 17078497

8015
tokenizer_config.json Normal file

File diff suppressed because it is too large