---
license: apache-2.0
base_model:
- paulml/NeuralOmniWestBeaglake-7B
- paulml/OmniBeagleSquaredMBX-v3-7B
- yam-peleg/Experiment21-7B
- yam-peleg/Experiment26-7B
- Kukedlc/NeuralMaths-Experiment-7b
- Gille/StrangeMerges_16-7B-slerp
- vanillaOVO/correction_1
library_name: transformers
tags:
- mergekit
- merge
---

# bophades-mistral-7B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
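
The merged weights load like any other `transformers` causal LM. A minimal sketch, with a placeholder repo id since this card does not state where the model is published (substitute the real one):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with the actual Hugging Face repo for this model.
repo_id = "<user>/bophades-mistral-7B"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype
    device_map="auto",
)

prompt = "Explain model merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```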
## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099)-[TIES](https://arxiv.org/abs/2306.01708) merge method, with [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B) as the base model. DARE sparsifies each fine-tuned model's delta from the base by randomly dropping parameters and rescaling the survivors, and TIES then resolves sign disagreements between the remaining deltas before they are summed onto the base.
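
A minimal sketch of DARE's drop-and-rescale step in plain PyTorch, for intuition only (mergekit's actual implementation works per-tensor and differs in details):

```python
import torch

def dare_sparsify(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Keep a random `density` fraction of a task vector (fine-tuned
    weights minus base weights) and rescale the survivors by 1/density,
    so the expected value of the sparsified delta is unchanged."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

# With density 0.5, as in the configuration below, about half of each
# model's delta is dropped and the remainder doubled before TIES-style
# sign election and the weighted sum onto the base model.
```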

### Models Merged

The following models were included in the merge:
* [paulml/NeuralOmniWestBeaglake-7B](https://huggingface.co/paulml/NeuralOmniWestBeaglake-7B)
* [paulml/OmniBeagleSquaredMBX-v3-7B](https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B)
* [yam-peleg/Experiment21-7B](https://huggingface.co/yam-peleg/Experiment21-7B)
* [Kukedlc/NeuralMaths-Experiment-7b](https://huggingface.co/Kukedlc/NeuralMaths-Experiment-7b)
* [Gille/StrangeMerges_16-7B-slerp](https://huggingface.co/Gille/StrangeMerges_16-7B-slerp)
* [vanillaOVO/correction_1](https://huggingface.co/vanillaOVO/correction_1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: paulml/OmniBeagleSquaredMBX-v3-7B
    parameters:
      density: 0.5
      weight: 0.5
  - model: paulml/NeuralOmniWestBeaglake-7B
    parameters:
      density: 0.5
      weight: 0.5
  - model: Gille/StrangeMerges_16-7B-slerp
    parameters:
      density: 0.5
      weight: 0.5
  - model: yam-peleg/Experiment21-7B
    parameters:
      density: 0.5
      weight: 0.5
  - model: vanillaOVO/correction_1
    parameters:
      density: 0.5
      weight: 0.5
  - model: Kukedlc/NeuralMaths-Experiment-7b
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: yam-peleg/Experiment26-7B
parameters:
  normalize: true
dtype: bfloat16
```
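
To reproduce the merge, this configuration can be fed to mergekit's CLI, along the lines of `mergekit-yaml config.yaml ./bophades-mistral-7B` (illustrative invocation; check the mergekit README for current options such as `--cuda`).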