Initialize project; model provided by the ModelHub XC community
Model: SanjiWatsuki/Sonya-7B (Source: Original Platform)
.gitattributes (vendored, Normal file, 35 lines)
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
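These rules route large artifacts through Git LFS. As a rough illustration of which filenames they catch, here is a sketch using Python's `fnmatch` as an approximation of gitattributes glob semantics, over only a subset of the patterns above:

```python
from fnmatch import fnmatch

# Subset of the LFS-tracked patterns above (fnmatch approximates gitattributes globs;
# directory patterns like saved_model/**/* are not covered by this simple check).
lfs_patterns = ["*.safetensors", "*.bin", "*.pt", "*tfevents*"]

def tracked_by_lfs(filename: str) -> bool:
    """Return True if the filename matches any LFS-tracked pattern."""
    return any(fnmatch(filename, pat) for pat in lfs_patterns)

print(tracked_by_lfs("model-00001-of-00002.safetensors"))  # True
print(tracked_by_lfs("README.md"))                         # False
```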
README.md (Normal file, 129 lines)
@@ -0,0 +1,129 @@
---
license: cc-by-4.0
language:
- en
tags:
- merge
---

<div style="display: flex; justify-content: center; align-items: center">
<img src="https://huggingface.co/SanjiWatsuki/Sonya-7B/resolve/main/assets/Sonya.jpg">
</div>

<p align="center">
<big><b>Top 1 Performer MT-bench 🤪</b></big>
</p>

## WTF is This?

Sonya-7B is, at the time of writing, the **#1-performing model on MT-Bench first turn, ahead of GPT-4, and the #2 model on MT-Bench overall**, to the best of my knowledge. Sonya-7B should be a good all-purpose model for tasks including assistant work, RP, and more.

Sonya-7B has a similar structure to my previous model, [Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B), and uses a very similar merge. It's a merge of [xDAN-AI/xDAN-L1-Chat-RL-v1](https://huggingface.co/xDAN-AI/xDAN-L1-Chat-RL-v1), [Jan-Ai's Stealth v1.2](https://huggingface.co/jan-hq/stealth-v1.2), [chargoddard/piano-medley-7b](https://huggingface.co/chargoddard/piano-medley-7b), [NeverSleep/Noromaid-7B-v0.2](https://huggingface.co/NeverSleep/Noromaid-7b-v0.2), and [athirdpath/NSFW_DPO_vmgb-7b](https://huggingface.co/athirdpath/NSFW_DPO_vmgb-7b). The sauce is below. Somehow, by combining these pieces, it substantially outscores any of its parents on MT-Bench.

I picked these models because:
* MT-Bench normally correlates well with real-world model quality, and xDAN performs well on it.
* Almost all models in the mix were Alpaca prompt formatted, which gives prompt consistency.
* Stealth v1.2 has been a magic sprinkle that seems to increase my MT-Bench scores.
* I added RP models because they boosted the Writing and Roleplay benchmarks 👀

Based on the parent models, I expect this model to be used with an 8192 context window. To experiment with a 16384 context, try NTK scaling with an alpha of 2.6.
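As a rough sketch of what that alpha does, assuming the common "NTK-aware" formula where the RoPE base is scaled by alpha^(d/(d-2)) for head dimension d (exact behavior varies by loader):

```python
# NTK-aware RoPE scaling: raise the rotary base so positions beyond the
# trained window are handled more smoothly (common formula; loaders differ).
def ntk_scaled_base(base: float, alpha: float, head_dim: int) -> float:
    return base * alpha ** (head_dim / (head_dim - 2))

# Mistral-7B: base 10000, head_dim = 4096 hidden / 32 heads = 128
print(ntk_scaled_base(10000.0, 2.6, 128))  # ~26400
```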

**Let me be candid:** Despite the test scores, this model is **NOT a GPT killer**. I think it's a very sharp model **for a 7B**, and it probably punches way above its weight **for a 7B**, but it's still a 7B model. Even for a 7B model, I think **it's quirky and has some weird outputs**, probably due to how Frankenstein this merge is. Keep your expectations in check 😉

**MT-Bench Average Turn**
| model | score | size
|--------------------|-----------|--------
| gpt-4 | 8.99 | -
| **Sonya-7B** | **8.52** | **7b**
| xDAN-L1-Chat-RL-v1 | 8.34 | 7b
| Starling-7B | 8.09 | 7b
| Claude-2 | 8.06 | -
| *Silicon-Maid* | *7.96* | *7b*
| *Loyal-Macaroni-Maid*| *7.95* | *7b*
| gpt-3.5-turbo | 7.94 | 20b?
| Claude-1 | 7.90 | -
| OpenChat-3.5 | 7.81 | -
| vicuna-33b-v1.3 | 7.12 | 33b
| wizardlm-30b | 7.01 | 30b
| Llama-2-70b-chat | 6.86 | 70b

<img src="https://huggingface.co/SanjiWatsuki/Sonya-7B/resolve/main/assets/mt-bench-gpt.png">

<img src="https://huggingface.co/SanjiWatsuki/Sonya-7B/resolve/main/assets/mt-bench-comparison.png">
### The Sauce

```yaml
models:
  - model: xDAN-AI/xDAN-L1-Chat-RL-v1
    parameters:
      weight: 1
      density: 1
  - model: chargoddard/piano-medley-7b
    parameters:
      weight: 0.3
  - model: jan-hq/stealth-v1.2
    parameters:
      weight: 0.2
  - model: NeverSleep/Noromaid-7b-v0.2
    parameters:
      weight: 0.2
  - model: athirdpath/NSFW_DPO_vmgb-7b
    parameters:
      weight: 0.2
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  density: 0.4
  int8_mask: true
  normalize: true
dtype: bfloat16
```
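Because the config sets `normalize: true`, the TIES weights above are rescaled to sum to 1, so each parent's effective share can be sanity-checked. A sketch of that arithmetic (not mergekit itself, just the weight normalization):

```python
# Effective per-parent shares under normalize: true (weight / sum of weights).
weights = {
    "xDAN-AI/xDAN-L1-Chat-RL-v1": 1.0,
    "chargoddard/piano-medley-7b": 0.3,
    "jan-hq/stealth-v1.2": 0.2,
    "NeverSleep/Noromaid-7b-v0.2": 0.2,
    "athirdpath/NSFW_DPO_vmgb-7b": 0.2,
}
total = sum(weights.values())  # 1.9
shares = {name: w / total for name, w in weights.items()}
print(f"{shares['xDAN-AI/xDAN-L1-Chat-RL-v1']:.2f}")  # ~0.53 of the merged task vector
```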

**There was no additional training, finetuning, or DPO.** This is a straight merge.

### Prompt Template (Alpaca)

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
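A minimal helper that fills this template (the exact spacing is an assumption based on the block above):

```python
def alpaca_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca template shown above."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("Name three uses for a 7B model."))
```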

I found that this model **performed worse** with the xDAN prompt format, so despite the heavy weight of xDAN in this merger, I recommend *against* its use.

### Other Benchmark Stuff

**########## First turn ##########**
| model | turn | score | size
|--------------------|------|----------|--------
| **Sonya-7B** | 1 | **9.06875** | **7b**
| gpt-4 | 1 | 8.95625 | -
| xDAN-L1-Chat-RL-v1 | 1 | *8.87500* | *7b*
| xDAN-L2-Chat-RL-v2 | 1 | 8.78750 | 30b
| claude-v1 | 1 | 8.15000 | -
| gpt-3.5-turbo | 1 | 8.07500 | 20b
| vicuna-33b-v1.3 | 1 | 7.45625 | 33b
| wizardlm-30b | 1 | 7.13125 | 30b
| oasst-sft-7-llama-30b | 1 | 7.10625 | 30b
| Llama-2-70b-chat | 1 | 6.98750 | 70b

**########## Second turn ##########**
| model | turn | score | size
|--------------------|------|-----------|--------
| gpt-4 | 2 | 9.025000 | -
| xDAN-L2-Chat-RL-v2 | 2 | 8.087500 | 30b
| **Sonya-7B** | 2 | **7.962500** | **7b**
| xDAN-L1-Chat-RL-v1 | 2 | 7.825000 | 7b
| gpt-3.5-turbo | 2 | 7.812500 | 20b
| claude-v1 | 2 | 7.650000 | -
| wizardlm-30b | 2 | 6.887500 | 30b
| vicuna-33b-v1.3 | 2 | 6.787500 | 33b
| Llama-2-70b-chat | 2 | 6.725000 | 70b

If you'd like to replicate the MT-Bench run, please ensure that the Alpaca prompt template is applied to the model. I did this by putting "alpaca" in the model path to trigger the `AlpacaAdapter`.
BIN assets/Sonya.jpg (Normal file, 586 KiB)
Binary file not shown.
BIN assets/mt-bench-comparison.png (Normal file, 313 KiB)
Binary file not shown.
BIN assets/mt-bench-gpt.png (Normal file, 263 KiB)
Binary file not shown.
assets/noop (Normal file, 0 lines)
BIN assets/xdaniel.jpg (Normal file, 64 KiB)
Binary file not shown.
config.json (Normal file, 26 lines)
@@ -0,0 +1,26 @@
{
  "_name_or_path": "mistralai/Mistral-7B-v0.1",
  "architectures": [
    "MistralForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 8192,
  "model_type": "mistral",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "rms_norm_eps": 1e-05,
  "rope_theta": 10000.0,
  "sliding_window": 4096,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.37.0.dev0",
  "use_cache": true,
  "vocab_size": 32000
}
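These shapes pin down the parameter count. A quick back-of-the-envelope check, assuming the standard Mistral/Llama-style layout (q/k/v/o attention projections with grouped-query KV heads, gate/up/down MLP, two RMSNorms per layer, untied embeddings as in the config above):

```python
# Rough parameter count from the config shapes above.
h, layers, inter, vocab = 4096, 32, 14336, 32000
heads, kv_heads = 32, 8
head_dim = h // heads               # 128
kv_dim = kv_heads * head_dim        # 1024 (grouped-query attention)

attn = h * h + 2 * h * kv_dim + h * h   # q, k, v, o projections
mlp = 3 * h * inter                     # gate, up, down projections
norms = 2 * h                           # two RMSNorms per layer
per_layer = attn + mlp + norms

total = layers * per_layer + 2 * vocab * h + h  # + embeddings, lm_head, final norm
print(f"{total / 1e9:.2f}B")  # ~7.24B
```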
model-00001-of-00002.safetensors (Normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:84ca39c268868ef302966ed5dab6f72815718b0041c90a2190acb1f2d40c634c
size 9984924496
model-00002-of-00002.safetensors (Normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:425ec418e9b61fff4c2b5906afb523fdd733c3b0225e13e1e98a53097b523290
size 4498573536
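These are Git LFS pointer files, not the weights themselves; the real blobs (~14.5 GB combined, consistent with roughly 7.24B bfloat16 parameters at 2 bytes each) live in LFS storage. A small sketch of how such a pointer parses (space-separated key/value pairs, one per line):

```python
# Parse a Git LFS pointer file into a dict.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:84ca39c268868ef302966ed5dab6f72815718b0041c90a2190acb1f2d40c634c
size 9984924496
"""

def parse_lfs_pointer(text: str) -> dict:
    """Each line is 'key value'; split on the first space only."""
    return dict(line.split(" ", 1) for line in text.strip().splitlines())

info = parse_lfs_pointer(pointer)
print(int(info["size"]) / 1e9)  # shard size in GB, ~9.98
```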
model.safetensors.index.json (Normal file, 1 line)
File diff suppressed because one or more lines are too long
special_tokens_map.json (Normal file, 23 lines)
@@ -0,0 +1,23 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json (Normal file, 91122 lines)
File diff suppressed because it is too large
BIN tokenizer.model (stored with Git LFS, Normal file)
Binary file not shown.
tokenizer_config.json (Normal file, 42 lines)
@@ -0,0 +1,42 @@
{
  "add_bos_token": true,
  "add_eos_token": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [],
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "legacy": true,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": null,
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": "<unk>",
  "use_default_system_prompt": false
}
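The odd `model_max_length` is not corruption: to my understanding it is the `transformers` "effectively unlimited" sentinel, `int(1e30)`, whose exact decimal digits come from binary float rounding (the practical context limit is the 8192 in `config.json`):

```python
# 1e30 is a float; converting it to int exposes the binary rounding,
# which matches the value serialized in tokenizer_config.json exactly.
sentinel = int(1e30)
print(sentinel)  # 1000000000000000019884624838656
```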