Initialize project; model provided by the ModelHub XC community
Model: Naphula/MN-12B-Mag-Mell-R1-Uncensored (Source: Original Platform)
36 .gitattributes vendored Normal file
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
MagMellAblit.png filter=lfs diff=lfs merge=lfs -text
3 MagMellAblit.png Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:be9e7efe65c718f4d78488fce1548550cd4e1f6556bf21df50d5988860f6fd74
size 2091416
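
Files matched by the `.gitattributes` patterns above are checked in as Git LFS pointers like this one: a `version` line, a `sha256` object id, and a byte size. A minimal sketch for reading such a pointer (hypothetical helper, standard library only; real tooling should go through `git lfs` itself):

```python
def parse_lfs_pointer(path: str) -> dict:
    """Parse a Git LFS pointer file into its version/oid/size fields."""
    fields = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value
    return fields

# Usage against the pointer file above (assumes a local checkout):
ptr = parse_lfs_pointer("MagMellAblit.png")
assert ptr["version"] == "https://git-lfs.github.com/spec/v1"
print(ptr["oid"], ptr["size"])  # sha256:be9e7e... 2091416
```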
189 README.md Normal file
@@ -0,0 +1,189 @@
---
base_model:
- inflatebot/MN-12B-Mag-Mell-R1
library_name: transformers
tags:
- mergekit
- merge
widget:
- text: "MN-12B-Mag-Mell-R1 Uncensored"
  output:
    url: https://cdn-uploads.huggingface.co/production/uploads/68e840caa318194c44ec2a04/BdTH4oHVj8WcWGhCSXLwm.png
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/68e840caa318194c44ec2a04/BdTH4oHVj8WcWGhCSXLwm.png)

# MN-12B-Mag-Mell-R1 Uncensored

https://www.youtube.com/watch?v=TRwKQG8Iw00

This is a fully uncensored, no-refusals version of **Mag Mell**, ablated with [biprojected norm-preservation](https://huggingface.co/blog/grimjim/norm-preserving-biprojected-abliteration) to prevent any cognitive impairment.

**Key settings (all layers):**
- Measurement: 29
- Scale: 1.5

**Commands:**
```
python measure.py -m A:\HF\hub\!models--inflatebot--MN-12B-Mag-Mell-R1 -o A:\HF\hub\!models--inflatebot--MN-12B-Mag-Mell-R1\ablit_proj --batch-size 8 --projected
python analyze_old.py A:\HF\hub\!models--inflatebot--MN-12B-Mag-Mell-R1\ablit_proj -c
sharded_ablate.py magmell.yml --normpreserve
```
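
`measure.py` and `sharded_ablate.py` are the author's local tooling and are not included in this repo. For orientation only, here is a minimal sketch of what a norm-preserving ablation step looks like, under stated assumptions: all names and shapes are hypothetical, only the input-side projection is shown (the linked blog's biprojected variant additionally projects on the output side), and the `scale` default mirrors the "Scale: 1.5" setting above.

```python
import torch

def ablate_norm_preserving(W: torch.Tensor, refusal_dir: torch.Tensor,
                           scale: float = 1.5) -> torch.Tensor:
    """Remove a refusal direction from weight matrix W (out_dim x in_dim),
    then rescale each row back to its original L2 norm.
    Hypothetical sketch -- not the author's actual sharded_ablate.py."""
    r = refusal_dir / refusal_dir.norm()              # unit refusal direction (in_dim,)
    orig_norms = W.norm(dim=1, keepdim=True)          # per-row norms before editing
    W_edited = W - scale * (W @ r).unsqueeze(1) * r   # subtract component along r
    new_norms = W_edited.norm(dim=1, keepdim=True).clamp_min(1e-8)
    return W_edited * (orig_norms / new_norms)        # restore row norms
```

Restoring the row norms is what distinguishes this from plain abliteration: the edit changes each row's direction but not its magnitude, which is the mechanism the card credits for avoiding cognitive impairment.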

**Q8_K_XL:**
```
llama-quantize --tensor-type output.weight=F16 --tensor-type token_embd.weight=F16 --tensor-type "blk.(3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23|24|25|26|27|28|29|30|31|32|33|34|35|36|37).attn_k.weight=Q8_0" --tensor-type "blk.(0|1|2|38|39).attn_k.weight=F16" --tensor-type "blk.(3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23|24|25|26|27|28|29|30|31|32|33|34|35|36|37).attn_output.weight=Q8_0" --tensor-type "blk.(0|1|2|38|39).attn_output.weight=F16" --tensor-type "blk.(3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23|24|25|26|27|28|29|30|31|32|33|34|35|36|37).attn_q.weight=Q8_0" --tensor-type "blk.(0|1|2|38|39).attn_q.weight=F16" --tensor-type "blk.(3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23|24|25|26|27|28|29|30|31|32|33|34|35|36|37).attn_v.weight=Q8_0" --tensor-type "blk.(0|1|2|38|39).attn_v.weight=F16" --tensor-type "blk.(3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23|24|25|26|27|28|29|30|31|32|33|34|35|36|37).ffn_down.weight=Q8_0" --tensor-type "blk.(0|1|2|38|39).ffn_down.weight=F16" --tensor-type "blk.(3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23|24|25|26|27|28|29|30|31|32|33|34|35|36|37).ffn_gate.weight=Q8_0" --tensor-type "blk.(0|1|2|38|39).ffn_gate.weight=F16" --tensor-type "blk.(3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23|24|25|26|27|28|29|30|31|32|33|34|35|36|37).ffn_up.weight=Q8_0" --tensor-type "blk.(0|1|2|38|39).ffn_up.weight=F16" <input.gguf> <output.gguf> Q8_0
```
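
The flag list above follows one simple pattern: every attention and FFN weight in blocks 3-37 goes to Q8_0, while blocks 0-2 and 38-39, plus the embedding and output tensors, stay at F16. A small generator sketch (hypothetical helper, not part of llama.cpp) that reproduces the flags:

```python
# Tensor names in the same order as the command above.
TENSORS = ["attn_k", "attn_output", "attn_q", "attn_v",
           "ffn_down", "ffn_gate", "ffn_up"]

def q8_k_xl_flags(n_layers: int = 40, edge: int = 3) -> str:
    """Build the --tensor-type flags: middle blocks at Q8_0,
    the first `edge` and last two blocks (plus embeddings/output) at F16."""
    mid = "|".join(str(i) for i in range(edge, n_layers - 2))           # 3..37
    edges = "|".join(str(i) for i in
                     list(range(edge)) + [n_layers - 2, n_layers - 1])  # 0|1|2|38|39
    flags = ["--tensor-type output.weight=F16",
             "--tensor-type token_embd.weight=F16"]
    for t in TENSORS:
        flags.append(f'--tensor-type "blk.({mid}).{t}.weight=Q8_0"')
        flags.append(f'--tensor-type "blk.({edges}).{t}.weight=F16"')
    return " ".join(flags)

print(q8_k_xl_flags())  # paste before <input.gguf> <output.gguf> Q8_0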

---

https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1

# Original Readme
````
---
base_model:
- IntervitensInc/Mistral-Nemo-Base-2407-chatml
- nbeerbower/mistral-nemo-bophades-12B
- nbeerbower/mistral-nemo-wissenschaft-12B
- elinas/Chronos-Gold-12B-1.0
- Fizzarolli/MN-12b-Sunrose
- nbeerbower/mistral-nemo-gutenberg-12B-v4
- anthracite-org/magnum-12b-v2.5-kto
library_name: transformers
tags:
- mergekit
- merge
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/633d65b2cf523a7638f63a56/yRR9E3Fl-fpsQuszxfj0_.png)
*[Welcome, brave one; you've come a long mile.](https://www.youtube.com/watch?v=dgGEuC1F3oE)*

# MN-12B-Mag-Mell-R1

NOTE for newer users: "R1" here means "Revision 1". This model predates DeepSeek's R1; DeepSeek inadvertently made using this versioning scheme very annoying!

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

[Official Q4_K_M, Q6_K and Q_8 GGUFs by me](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1-GGUF)

[More available from mradermacher](https://huggingface.co/mradermacher/MN-12B-Mag-Mell-R1-GGUF/tree/main)

[Official EXL2 by toastypigeon](https://huggingface.co/Alfitaria/MN-12B-Mag-Mell-R1-exl2)

## Usage Details

### Sampler Settings
Mag Mell R1 was tested with Temp 1.25 and MinP 0.2. This was fairly stable up to 10K context, but it might be too "hot".
If issues with coherency occur, try *in*creasing MinP or *de*creasing Temperature.

Other samplers shouldn't be necessary. XTC was shown to break outputs. DRY should be okay if used sparingly. Other penalty-type samplers should probably be avoided.

### Formatting
The base model for Mag Mell is [Mistral-Nemo-Base-2407-chatml](https://huggingface.co/IntervitensInc/Mistral-Nemo-Base-2407-chatml), and as such ChatML formatting is recommended.

Early testing versions had a tendency to leak tokens, but this should be more or less hammered out. It recently (12-18-2024) came to our attention that cache quantization may either cause or exacerbate this issue.
## Merge Details
Mag Mell is a multi-stage merge, inspired by hyper-merges like [Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter) and [Umbral Mind](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B).
It is intended to be a general-purpose "Best of Nemo" model for any fictional, creative use case.

Six models were chosen based on three categories; they were then paired up and merged via layer-weighted SLERP to create intermediate "specialists", each evaluated in its own domain.
The specialists were then merged into the base via DARE-TIES, with hyperparameters chosen to reduce interference caused by the overlap of the three domains.
The idea behind this approach is to extract the best qualities of each component part and produce models whose task vectors represent more than the sum of their parts.

The three specialists are as follows:

- Hero (RP, kink/trope coverage): [Chronos Gold](https://huggingface.co/elinas/Chronos-Gold-12B-1.0), [Sunrose](https://huggingface.co/Fizzarolli/MN-12b-Sunrose).

- Monk (Intelligence, groundedness): [Bophades](https://huggingface.co/nbeerbower/mistral-nemo-bophades-12B), [Wissenschaft](https://huggingface.co/nbeerbower/mistral-nemo-wissenschaft-12B).

- Deity (Prose, flair): [Gutenberg v4](https://huggingface.co/nbeerbower/mistral-nemo-gutenberg-12B-v4), [Magnum 2.5 KTO](https://huggingface.co/anthracite-org/magnum-v2.5-12b-kto).

I've been dreaming about this merge since Nemo tunes started coming out in earnest. From our testing, Mag Mell demonstrates worldbuilding capabilities unlike any model in its class, comparable to old adventuring models like Tiefighter, and prose that exhibits minimal "slop" (not bad for no finetuning), frequently devising electrifying metaphors that left us consistently astonished.

I don't want to toot my own bugle, though; I'm really proud of how this came out, but please leave your feedback, good or bad.

Special thanks as usual to Toaster for his feedback and Fizz for helping fund compute, as well as the KoboldAI Discord for their resources.
### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, using [IntervitensInc/Mistral-Nemo-Base-2407-chatml](https://huggingface.co/IntervitensInc/Mistral-Nemo-Base-2407-chatml) as a base.

### Models Merged

The following models were included in the merge:
* IntervitensInc/Mistral-Nemo-Base-2407-chatml
* nbeerbower/mistral-nemo-bophades-12B
* nbeerbower/mistral-nemo-wissenschaft-12B
* elinas/Chronos-Gold-12B-1.0
* Fizzarolli/MN-12b-Sunrose
* nbeerbower/mistral-nemo-gutenberg-12B-v4
* anthracite-org/magnum-12b-v2.5-kto

### Configuration

The following YAML configurations were used to produce this model:

#### Monk:
```yaml
models:
  - model: nbeerbower/mistral-nemo-bophades-12B
  - model: nbeerbower/mistral-nemo-wissenschaft-12B
merge_method: slerp
base_model: nbeerbower/mistral-nemo-bophades-12B
parameters:
  t: [0.1, 0.2, 0.4, 0.6, 0.6, 0.4, 0.2, 0.1]
dtype: bfloat16
tokenizer_source: base
```

#### Hero:
```yaml
models:
  - model: elinas/Chronos-Gold-12B-1.0
  - model: Fizzarolli/MN-12b-Sunrose
merge_method: slerp
base_model: elinas/Chronos-Gold-12B-1.0
parameters:
  t: [0.1, 0.2, 0.4, 0.6, 0.6, 0.4, 0.2, 0.1]
dtype: bfloat16
tokenizer_source: base
```

#### Deity:
```yaml
models:
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v4
  - model: anthracite-org/magnum-12b-v2.5-kto
merge_method: slerp
base_model: nbeerbower/mistral-nemo-gutenberg-12B-v4
parameters:
  t: [0, 0.1, 0.2, 0.25, 0.25, 0.2, 0.1, 0]
dtype: bfloat16
tokenizer_source: base
```

#### Mag Mell:
```yaml
models:
  - model: monk
    parameters:
      density: 0.7
      weight: 0.5
  - model: hero
    parameters:
      density: 0.9
      weight: 1
  - model: deity
    parameters:
      density: 0.5
      weight: 0.7
merge_method: dare_ties
base_model: IntervitensInc/Mistral-Nemo-Base-2407-chatml
tokenizer_source: base
```

`In Irish mythology, Mag Mell (modern spelling: Magh Meall, meaning 'delightful plain') is one of the names for the Celtic Otherworld, a mythical realm achievable through death and/or glory... Never explicitly stated in any surviving mythological account to be an afterlife; rather, it is usually portrayed as a paradise populated by deities, which is occasionally visited by some adventurous mortals. In its island guise, it was visited by various legendary Irish heroes and monks, forming the basis of the adventure myth or echtrae...`
````
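
For reference, the quoted card's recommended samplers (Temp 1.25, MinP 0.2) and ChatML formatting map directly onto common runtimes. A usage sketch assuming llama-cpp-python as the runtime and a hypothetical local GGUF filename:

```python
from llama_cpp import Llama

# Hypothetical filename; use whichever quant you downloaded.
llm = Llama(model_path="MN-12B-Mag-Mell-R1-Uncensored-Q8_0.gguf",
            n_ctx=10240,            # the card reports stability up to ~10K
            chat_format="chatml")   # matches the recommended formatting

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a creative storyteller."},
        {"role": "user", "content": "Describe the plain of Mag Mell at dusk."},
    ],
    temperature=1.25,  # card-recommended; lower if coherency suffers
    min_p=0.2,         # card-recommended; raise if coherency suffers
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```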
27 config.json Normal file
@@ -0,0 +1,27 @@
{
  "_name_or_path": "IntervitensInc/Mistral-Nemo-Base-2407-chatml",
  "architectures": [
    "MistralForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 15,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 5120,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 1024000,
  "model_type": "mistral",
  "num_attention_heads": 32,
  "num_hidden_layers": 40,
  "num_key_value_heads": 8,
  "rms_norm_eps": 1e-05,
  "rope_theta": 1000000.0,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.44.1",
  "use_cache": true,
  "vocab_size": 131072
}
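
A few of these values interact in non-obvious ways: `head_dim` is decoupled from `hidden_size / num_attention_heads` (Mistral-Nemo sets it to 128 explicitly), and the 8 KV heads against 32 query heads imply grouped-query attention. A quick sanity-check sketch over this config.json (assumes it sits in the working directory):

```python
import json

with open("config.json") as f:
    cfg = json.load(f)

# GQA: 32 query heads share 8 KV heads -> groups of 4.
print("GQA group size:", cfg["num_attention_heads"] // cfg["num_key_value_heads"])

# head_dim is NOT hidden_size / num_heads here (5120 / 32 = 160 vs. 128).
print("head_dim:", cfg["head_dim"],
      "hidden/heads:", cfg["hidden_size"] // cfg["num_attention_heads"])

# Per-token KV cache in bf16: 2 (K+V) * layers * kv_heads * head_dim * 2 bytes.
kv_bytes = 2 * cfg["num_hidden_layers"] * cfg["num_key_value_heads"] * cfg["head_dim"] * 2
print(f"KV cache per token: {kv_bytes / 1024:.0f} KiB")  # 160 KiB
```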
3 model-00001-of-00005.safetensors Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cf65be210bc0f6c14d345d9a794f046d3830add9dba1d533052079c2a242ca1c
size 4865489304

3 model-00002-of-00005.safetensors Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:54f1611ffa607f65609a9c2deb70bd3fad4545082e8565e49d12f9dad1acfcbb
size 4907529424

3 model-00003-of-00005.safetensors Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:abd29d307155efa999f1cec44ca10b8edf3c778a08a0a6b1fd46738c4d22c101
size 4907529432

3 model-00004-of-00005.safetensors Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aac75589019f4aaeb3eaea32e39821f493f0046fcb5b1fdf06dce20dbc6ff8d8
size 4907529424

3 model-00005-of-00005.safetensors Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3d4634bb485f3335923a32459d05ceec1f2bac5d7423530e1f60201f5b714544
size 4907529360
1 model.safetensors.index.json Normal file
File diff suppressed because one or more lines are too long
23 special_tokens_map.json Normal file
@@ -0,0 +1,23 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
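
The ChatML-style `<|im_end|>` EOS here should line up with `eos_token_id: 15` in config.json. A hedged verification sketch (assumes a local checkout of this repo and transformers installed):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(".")  # load from this repo's root
print("<|im_end|> id:", tok.convert_tokens_to_ids("<|im_end|>"))  # expect 15, per config.json
print("eos token:", tok.eos_token)                                # expect "<|im_end|>"
```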
409625 tokenizer.json Normal file
File diff suppressed because it is too large

8014 tokenizer_config.json Normal file
File diff suppressed because it is too large