Initialize project; model provided by the ModelHub XC community
Model: CultriX/MergeTrix-7B Source: Original Platform
This commit is contained in:
35
.gitattributes
vendored
Normal file
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
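All of the patterns above route matching files through Git LFS. As a rough illustration (not a full reimplementation of git's glob rules, which differ slightly from `fnmatch`, e.g. a bare `*` in gitattributes does not cross `/`), a sketch of which repo files they would catch:

```python
from fnmatch import fnmatch

# A few of the LFS patterns from the .gitattributes above
patterns = ["*.safetensors", "*.bin", "*.gz", "*tfevents*", "saved_model/**/*"]

def tracked_by_lfs(path):
    # Approximation of gitattributes matching using fnmatch semantics.
    return any(fnmatch(path, p) for p in patterns)

print(tracked_by_lfs("model-00001-of-00002.safetensors"))  # True
print(tracked_by_lfs("config.json"))                       # False
```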
107
README.md
Normal file
@@ -0,0 +1,107 @@
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- abideen/NexoNimbus-7B
- fblgit/UNA-TheBeagle-7b-v1
- argilla/distilabeled-Marcoro14-7B-slerp
base_model:
- udkai/Turdus
- abideen/NexoNimbus-7B
- fblgit/UNA-TheBeagle-7b-v1
- argilla/distilabeled-Marcoro14-7B-slerp
---

# EDIT:
Always check my space for the latest benchmark results for my models!
* https://huggingface.co/spaces/CultriX/Yet_Another_LLM_Leaderboard

# IMPORTANT NOTE | READ ME! #
This model uses udkai/Turdus, which may produce inaccurate results for the Winogrande evaluation scores.
The following quotes are taken directly from that model's page:
- "A less contaminated version of udkai/Garrulus and the second model to be discussed in the paper Subtle DPO-Contamination with modified Winogrande increases TruthfulQA, Hellaswag & ARC."
- "Subtle DPO-Contamination with modified Winogrande causes the average accuracy of all 5-non Winogrande metrics (e.g. including also MMLU and GSM8K) to be 0.2% higher than the underlying model."

In my understanding, the Winogrande scores are only slightly influenced by the DPO contamination, which has the "side effect" of increasing the scores on the other benchmarks.
Since the effect on the Winogrande scores was subtle in the udkai/Turdus benchmarking results, and this model combines it with other models (probably making the effect even less pronounced), I still believe this model can be of value to the community, as its overall performance is quite impressive.
However, I do not want to mislead anybody or produce any unfair scores, hence this note! The full training configuration is fully transparent and can be found below.

I hope this model will prove useful to somebody. GGUF versions are available for inference here: https://huggingface.co/CultriX/MergeTrix-7B-GGUF.
I personally tested them and found them to produce very pleasing results.

Kind regards,
CultriX

# PERSONAL DISCLAIMER
(This is probably a good moment to point out that I'm an amateur doing this for fun and am by no means an IT professional or data scientist.
Therefore my understanding of these topics might be incomplete or simply wrong, in turn causing me to make inaccurate claims.
If you notice that's the case, I invite you to notify me of my mistakes so that I can rectify any potential inaccuracies as soon as possible. Thanks for understanding!)
I hope this model will prove useful to somebody.
GGUF versions are available for inference here: https://huggingface.co/CultriX/MergeTrix-7B-GGUF

# Shoutout
Once again, a major thank you and shoutout to @mlabonne for his amazing article that I used to produce this result, which can be found here: https://towardsdatascience.com/merge-large-language-models-with-mergekit-2118fb392b54
My other model, CultriX/MistralTrix-v1, was based on another great article from the same guy, which can be found here: https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac
(I hope he doesn't mind me using his own articles to beat him on the LeaderBoards for the second time this week... Like last time, all credit should be directed at him really!)

# MODEL INFORMATION:
# NAME: MergeTrix-7B

MergeTrix-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [abideen/NexoNimbus-7B](https://huggingface.co/abideen/NexoNimbus-7B)
* [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)
* [argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp)

## 🧩 Configuration

```yaml
models:
  - model: udkai/Turdus
    # No parameters necessary for base model
  - model: abideen/NexoNimbus-7B
    parameters:
      density: 0.53
      weight: 0.4
  - model: fblgit/UNA-TheBeagle-7b-v1
    parameters:
      density: 0.53
      weight: 0.3
  - model: argilla/distilabeled-Marcoro14-7B-slerp
    parameters:
      density: 0.53
      weight: 0.3
merge_method: dare_ties
base_model: udkai/Turdus
parameters:
  int8_mask: true
dtype: bfloat16
```

## 💻 Usage

```python
# Install dependencies first: pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "CultriX/MergeTrix-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
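In the usage snippet above, `apply_chat_template` renders the `messages` list into a single prompt string using the template stored in the tokenizer. The actual template is defined per-tokenizer; the tag format below is purely illustrative, not this model's real template:

```python
messages = [{"role": "user", "content": "What is a large language model?"}]

def render_chat(messages):
    # Illustrative stand-in: real templates are Jinja strings shipped
    # with the tokenizer, and their token markup varies by model.
    parts = [f"<|{m['role']}|>\n{m['content']}" for m in messages]
    parts.append("<|assistant|>\n")  # add_generation_prompt=True equivalent
    return "\n".join(parts)

prompt = render_chat(messages)
print(prompt)
```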
28
config.json
Normal file
@@ -0,0 +1,28 @@
{
  "_name_or_path": "udkai/Turdus",
  "architectures": [
    "MistralForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 32768,
  "model_type": "mistral",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pad_token_id": 2,
  "rms_norm_eps": 1e-05,
  "rope_theta": 10000.0,
  "sliding_window": 4096,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.35.2",
  "unsloth_version": "2024.1",
  "use_cache": true,
  "vocab_size": 32000
}
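A quick sanity check of the attention geometry this config implies (standard Mistral-7B shapes; the arithmetic below just restates values from the JSON):

```python
config = {
    "hidden_size": 4096,
    "num_attention_heads": 32,
    "num_key_value_heads": 8,
}

# Per-head dimension: hidden_size split evenly across attention heads
head_dim = config["hidden_size"] // config["num_attention_heads"]

# Grouped-query attention: each KV head is shared by this many query heads
gqa_group_size = config["num_attention_heads"] // config["num_key_value_heads"]

print(head_dim, gqa_group_size)  # 128 4
```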
21
mergekit_config.yml
Normal file
@@ -0,0 +1,21 @@
models:
  - model: udkai/Turdus
    # No parameters necessary for base model
  - model: abideen/NexoNimbus-7B
    parameters:
      density: 0.53
      weight: 0.4
  - model: fblgit/UNA-TheBeagle-7b-v1
    parameters:
      density: 0.53
      weight: 0.3
  - model: argilla/distilabeled-Marcoro14-7B-slerp
    parameters:
      density: 0.53
      weight: 0.3
merge_method: dare_ties
base_model: udkai/Turdus
parameters:
  int8_mask: true
dtype: bfloat16
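One property worth noticing in the `dare_ties` config above: the weights of the three non-base models sum to 1.0, so the merged task vectors form a convex combination on top of the base model. A quick check:

```python
# Merge weights of the non-base models from the mergekit config above
weights = {
    "abideen/NexoNimbus-7B": 0.4,
    "fblgit/UNA-TheBeagle-7b-v1": 0.3,
    "argilla/distilabeled-Marcoro14-7B-slerp": 0.3,
}

total = sum(weights.values())
print(round(total, 2))  # 1.0
```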
3
model-00001-of-00002.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b7ab83da8e5a883dfedad348765652acd10311a132c70221cfee1b0f7631dd49
size 9783597320
3
model-00002-of-00002.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d491de0d2690eb16a6a5e7f2d19217779618f213b6f28fa914a56f2c01355c31
size 4699900720
1
model.safetensors.index.json
Normal file
File diff suppressed because one or more lines are too long
35
special_tokens_map.json
Normal file
@@ -0,0 +1,35 @@
{
  "additional_special_tokens": [
    "<unk>",
    "<s>",
    "</s>"
  ],
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
91122
tokenizer.json
Normal file
File diff suppressed because it is too large
BIN
tokenizer.model
(Stored with Git LFS)
Normal file
Binary file not shown.
45
tokenizer_config.json
Normal file
@@ -0,0 +1,45 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [
    "<unk>",
    "<s>",
    "</s>"
  ],
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "legacy": true,
  "model_max_length": 255,
  "pad_token": "<unk>",
  "padding_side": "right",
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": "<unk>",
  "use_default_system_prompt": true
}