Initial project commit; model provided by the ModelHub XC community
Model: NeverSleep/Mistral-11B-OmniMix-bf16 Source: Original Platform
.gitattributes (vendored, new file, 35 lines)
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md (new file, 141 lines)
@@ -0,0 +1,141 @@
---
license: cc-by-nc-4.0
---
This model should now be fixed; it was MEANT to be BF16.

Don't mind this one at the moment; I still need to finetune it for RP, it's just a test.

## Description
This repo contains fp16 files of Mistral-11B-OmniMix-bf16.

My only goal for this model was to make it score as high as possible through merging and layer toying, proving that:
- Benchmarks are objective
- You should try a model yourself rather than blindly picking the highest-rated one
- Merge/layer toying CAN be used to make better models (maybe?)
## Models used
- [Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
- [Mistral-7B-v0.1-Open-Platypus](https://huggingface.co/akjindal53244/Mistral-7B-v0.1-Open-Platypus)
- [CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B)
- [zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)
||||
## Prompt template

After further testing, the best one is this:

```
<|system|>
Below is an instruction that describes a task. Write a response that appropriately completes the request.
<|user|>
{prompt}
<|assistant|>
```
But these ones work too:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

```
USER: <prompt>
ASSISTANT:
```

Or use any prompting format from one of the four source models; it should work.
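As a concrete illustration, the recommended template above can be assembled with a small helper (`format_prompt` is a hypothetical name for this sketch, not part of any of the source repos):

```python
def format_prompt(prompt: str,
                  system: str = ("Below is an instruction that describes a task. "
                                 "Write a response that appropriately completes "
                                 "the request.")) -> str:
    """Render a prompt in the <|system|>/<|user|>/<|assistant|> format shown above."""
    return f"<|system|>\n{system}\n<|user|>\n{prompt}\n<|assistant|>\n"

# The generated text goes after the trailing <|assistant|> line.
print(format_prompt("Name the capital of France."))
```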
## The secret sauce

Mistral-11B-OpenOrcaPlatypus:
```
slices:
  - sources:
    - model: Open-Orca/Mistral-7B-OpenOrca
      layer_range: [0, 24]
  - sources:
    - model: akjindal53244/Mistral-7B-v0.1-Open-Platypus
      layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
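The passthrough config above simply stacks layer slices from the two donors. A quick sketch of the resulting layer layout (this only illustrates the index arithmetic; it is not mergekit's actual implementation, which copies the weights themselves):

```python
# Each (model, start, end) slice contributes end - start layers to the merged stack.
slices = [("Mistral-7B-OpenOrca", 0, 24),
          ("Mistral-7B-v0.1-Open-Platypus", 8, 32)]

layout = [(model, layer) for model, start, end in slices
          for layer in range(start, end)]

print(len(layout))   # 48 layers total: 24 from each donor, overlapping in the middle
print(layout[23])    # last OpenOrca layer
print(layout[24])    # first Platypus layer (its original layer 8)
```

This is how two 32-layer Mistral-7B models become one 48-layer 11B stack.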

Mistral-11B-CC-Zephyr:
```
slices:
  - sources:
    - model: "/content/drive/MyDrive/CC-v1.1-7B-bf16"
      layer_range: [0, 24]
  - sources:
    - model: "/content/drive/MyDrive/Zephyr-7B"
      layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```

Mistral-11B-OmniMix:
```
slices:
  - sources:
    - model: Mistral-11B-OpenOrcaPlatypus
      layer_range: [0, 48]
    - model: Mistral-11B-CC-Zephyr
      layer_range: [0, 48]
merge_method: slerp
base_model: Mistral-11B-OpenOrcaPlatypus
parameters:
  t:
    - filter: lm_head
      value: [0.75]
    - filter: embed_tokens
      value: [0.75]
    - filter: self_attn
      value: [0.75, 0.25]
    - filter: mlp
      value: [0.25, 0.75]
    - filter: layernorm
      value: [0.5, 0.5]
    - filter: modelnorm
      value: [0.75]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```

I used [mergekit](https://github.com/cg123/mergekit) for all the manipulations described here.
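The `t` values above are interpolation weights between the two parents (0 keeps the base model, 1 takes the other). A multi-element `value` list is a gradient across the layer stack. The sketch below (a hypothetical `t_for_layer` helper, simplified from mergekit's behavior; real SLERP also interpolates on the hypersphere rather than linearly) shows how such a list maps to a per-layer weight:

```python
def t_for_layer(values, layer, num_layers=48):
    """Map a gradient list like [0.75, 0.25] to a t value for one layer
    via piecewise-linear interpolation across the stack."""
    if len(values) == 1:
        return values[0]                      # constant filters like lm_head
    frac = layer / (num_layers - 1)           # 0.0 at first layer, 1.0 at last
    pos = frac * (len(values) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(values) - 1)
    return values[lo] + (pos - lo) * (values[hi] - values[lo])

print(t_for_layer([0.75], 10))        # lm_head/embed_tokens: always 0.75
print(t_for_layer([0.75, 0.25], 0))   # self_attn at layer 0  -> 0.75
print(t_for_layer([0.75, 0.25], 47))  # self_attn at layer 47 -> 0.25
```

So self_attn tensors lean toward Mistral-11B-CC-Zephyr early in the stack and toward the base late, while mlp does the opposite.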

## Some scoring I did myself



hf-causal-experimental (pretrained=/content/drive/MyDrive/Mistral-11B-OmniMix-bf16), limit: None, provide_description: False, num_fewshot: 0, batch_size: 4

|    Task     |Version| Metric |Value |   |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge|      0|acc     |0.5580|±  |0.0145|
|             |       |acc_norm|0.5819|±  |0.0144|
|arc_easy     |      0|acc     |0.8300|±  |0.0077|
|             |       |acc_norm|0.8211|±  |0.0079|
|hellaswag    |      0|acc     |0.6372|±  |0.0048|
|             |       |acc_norm|0.8209|±  |0.0038|
|piqa         |      0|acc     |0.8145|±  |0.0091|
|             |       |acc_norm|0.8286|±  |0.0088|
|truthfulqa_mc|      1|mc1     |0.3978|±  |0.0171|
|             |       |mc2     |0.5680|±  |0.0155|
|winogrande   |      0|acc     |0.7427|±  |0.0123|
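For a rough single number, the per-task headline scores above can be averaged (an illustrative calculation only, not an official leaderboard metric; acc_norm is taken where reported, mc2 for TruthfulQA):

```python
# Headline score per task, copied from the table above.
scores = {
    "arc_challenge": 0.5819,  # acc_norm
    "arc_easy":      0.8211,  # acc_norm
    "hellaswag":     0.8209,  # acc_norm
    "piqa":          0.8286,  # acc_norm
    "truthfulqa_mc": 0.5680,  # mc2
    "winogrande":    0.7427,  # acc
}
average = sum(scores.values()) / len(scores)
print(f"{average:.4f}")  # → 0.7272
```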

## Others

Special thanks to Sushi, to [Henky](https://github.com/KoboldAI/KoboldAI-Client) for the machine he gave me for big tasks, and to [Charles Goddard](https://github.com/cg123) for his amazing tool.

If you want to support me, you can do so [here](https://ko-fi.com/undiai).
added_tokens.json (new file, 5 lines)
@@ -0,0 +1,5 @@
{
  "</s>": 2,
  "<s>": 1,
  "<unk>": 0
}
config.json (new file, 25 lines)
@@ -0,0 +1,25 @@
{
  "_name_or_path": "Undi95/Mistral-11B-OpenOrcaPlatypus",
  "architectures": [
    "MistralForCausalLM"
  ],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 32768,
  "model_type": "mistral",
  "num_attention_heads": 32,
  "num_hidden_layers": 48,
  "num_key_value_heads": 8,
  "rms_norm_eps": 1e-05,
  "rope_theta": 10000.0,
  "sliding_window": 4096,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.35.0.dev0",
  "use_cache": true,
  "vocab_size": 32000
}
model-00001-of-00003.safetensors (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e7a6f9c95e37dce7ee7e52c8298fd668c81733a56ba05eb39c4d67c56e937235
size 9976544144
model-00002-of-00003.safetensors (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8a123b366ee57ccab26ad7b31dce7b4953ed51bb0002a54eec8c5bc98c9c9efb
size 9976535776
model-00003-of-00003.safetensors (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ad4959cc05f192be5108993ddb612306025f2c40be3811f3702619801b902472
size 1510018920
model.safetensors.index.json (new file, 1 line)
File diff suppressed because one or more lines are too long
special_tokens_map.json (new file, 10 lines)
@@ -0,0 +1,10 @@
{
  "additional_special_tokens": [
    "<unk>",
    "<s>",
    "</s>"
  ],
  "bos_token": "<s>",
  "eos_token": "</s>",
  "unk_token": "<unk>"
}
tokenizer.json (new file, 91140 lines)
File diff suppressed because it is too large
tokenizer.model (binary, stored with Git LFS, new file)
Binary file not shown.
tokenizer_config.json (new file, 65 lines)
@@ -0,0 +1,65 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "32000": {
      "content": "<|im_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "32001": {
      "content": "<|im_start|>",
      "lstrip": true,
      "normalized": false,
      "rstrip": true,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [
    "<unk>",
    "<s>",
    "</s>",
    "<|im_end|>",
    "<|im_start|>"
  ],
  "bos_token": "<s>",
  "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "legacy": true,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": null,
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "LlamaTokenizer",
  "trust_remote_code": false,
  "unk_token": "<unk>",
  "use_default_system_prompt": true,
  "use_fast": true
}