Initialize project; model provided by the ModelHub XC community

Model: nbeerbower/bophades-mistral-7B
Source: Original Platform
Author: ModelHub XC
Date: 2026-04-12 15:16:59 +08:00
Commit: fef4f36d6f
12 changed files with 91373 additions and 0 deletions

README.md (new file, 73 lines added)

@@ -0,0 +1,73 @@
---
license: apache-2.0
base_model:
- paulml/NeuralOmniWestBeaglake-7B
- paulml/OmniBeagleSquaredMBX-v3-7B
- yam-peleg/Experiment21-7B
- yam-peleg/Experiment26-7B
- Kukedlc/NeuralMaths-Experiment-7b
- Gille/StrangeMerges_16-7B-slerp
- vanillaOVO/correction_1
library_name: transformers
tags:
- mergekit
- merge
---
![image/png](https://huggingface.co/nbeerbower/bophades-mistral-7B/resolve/main/bophades.png)
# bophades-mistral-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B) as the base model.
### Models Merged
The following models were included in the merge:
* [paulml/NeuralOmniWestBeaglake-7B](https://huggingface.co/paulml/NeuralOmniWestBeaglake-7B)
* [paulml/OmniBeagleSquaredMBX-v3-7B](https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B)
* [yam-peleg/Experiment21-7B](https://huggingface.co/yam-peleg/Experiment21-7B)
* [Kukedlc/NeuralMaths-Experiment-7b](https://huggingface.co/Kukedlc/NeuralMaths-Experiment-7b)
* [Gille/StrangeMerges_16-7B-slerp](https://huggingface.co/Gille/StrangeMerges_16-7B-slerp)
* [vanillaOVO/correction_1](https://huggingface.co/vanillaOVO/correction_1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: paulml/OmniBeagleSquaredMBX-v3-7B
    parameters:
      density: 0.5
      weight: 0.5
  - model: paulml/NeuralOmniWestBeaglake-7B
    parameters:
      density: 0.5
      weight: 0.5
  - model: Gille/StrangeMerges_16-7B-slerp
    parameters:
      density: 0.5
      weight: 0.5
  - model: yam-peleg/Experiment21-7B
    parameters:
      density: 0.5
      weight: 0.5
  - model: vanillaOVO/correction_1
    parameters:
      density: 0.5
      weight: 0.5
  - model: Kukedlc/NeuralMaths-Experiment-7b
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: yam-peleg/Experiment26-7B
parameters:
  normalize: true
dtype: bfloat16
```
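### Usage
The original card stops at the merge configuration and does not include a usage snippet. The following is a minimal sketch, assuming the merged weights are published under the repo id `nbeerbower/bophades-mistral-7B` and loaded with the `transformers` library declared in the card metadata; the prompt and generation settings are illustrative only, not part of the original card.
```python
# Minimal sketch (not from the original card): load the merged model with
# transformers and run a single generation. Assumes the repo id below and
# a GPU-capable environment with accelerate installed for device_map="auto".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/bophades-mistral-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used in the merge config
    device_map="auto",
)

prompt = "Explain the DARE-TIES merge method in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```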