---
license: apache-2.0
base_model:
- paulml/NeuralOmniWestBeaglake-7B
- paulml/OmniBeagleSquaredMBX-v3-7B
- yam-peleg/Experiment21-7B
- yam-peleg/Experiment26-7B
- Kukedlc/NeuralMaths-Experiment-7b
- Gille/StrangeMerges_16-7B-slerp
- vanillaOVO/correction_1
library_name: transformers
tags:
- mergekit
- merge
---


# bophades-mistral-7B

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with yam-peleg/Experiment26-7B as the base.
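For intuition: DARE randomly drops a fraction (1 − `density`) of each fine-tuned model's delta against the base (its "task vector") and rescales the survivors to preserve the expected value, while TIES resolves sign conflicts between models before combining. The toy sketch below illustrates the idea only; it is not mergekit's implementation, and every name in it is made up for illustration.

```python
import torch

def dare_prune(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Keep each entry with probability `density`, zero the rest,
    and rescale survivors by 1/density to preserve the expectation."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

# Toy task vectors: fine-tuned weights minus the base weights.
torch.manual_seed(0)
base = torch.zeros(8)
deltas = [torch.randn(8) for _ in range(3)]
weights = [0.5, 0.5, 0.5]  # per-model weights, as in the config below

pruned = torch.stack([w * dare_prune(d, density=0.5)
                      for w, d in zip(weights, deltas)])

# TIES-style step: keep only entries whose sign matches the majority
# sign for that parameter, then sum and normalize the survivors.
majority = torch.sign(pruned.sum(dim=0))
agree = (torch.sign(pruned) == majority).float()
merged = base + (pruned * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1.0)
print(merged)
```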

### Models Merged

The following models were included in the merge:

* paulml/OmniBeagleSquaredMBX-v3-7B
* paulml/NeuralOmniWestBeaglake-7B
* Gille/StrangeMerges_16-7B-slerp
* yam-peleg/Experiment21-7B
* vanillaOVO/correction_1
* Kukedlc/NeuralMaths-Experiment-7b

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: paulml/OmniBeagleSquaredMBX-v3-7B
    parameters:
      density: 0.5
      weight: 0.5
  - model: paulml/NeuralOmniWestBeaglake-7B
    parameters:
      density: 0.5
      weight: 0.5
  - model: Gille/StrangeMerges_16-7B-slerp
    parameters:
      density: 0.5
      weight: 0.5
  - model: yam-peleg/Experiment21-7B
    parameters:
      density: 0.5
      weight: 0.5
  - model: vanillaOVO/correction_1
    parameters:
      density: 0.5
      weight: 0.5
  - model: Kukedlc/NeuralMaths-Experiment-7b
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: yam-peleg/Experiment26-7B
parameters:
  normalize: true
dtype: bfloat16
```
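Saved as, say, `config.yml`, a configuration like this can be re-run with mergekit's `mergekit-yaml` command-line tool or through its Python API. The sketch below follows mergekit's documented Python entry points, but the exact `MergeOptions` fields are assumptions that may differ between mergekit versions; the output path is illustrative.

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (path is illustrative).
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge and write the resulting model to ./bophades-mistral-7B.
run_merge(
    merge_config,
    out_path="./bophades-mistral-7B",
    options=MergeOptions(
        cuda=False,           # set True to merge on GPU
        copy_tokenizer=True,  # carry the tokenizer over from the base model
        lazy_unpickle=True,   # reduce peak memory while loading shards
    ),
)
```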

## Description
This model was synced from its source repository, nbeerbower/bophades-mistral-7B.
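Since the card declares `library_name: transformers`, the merged weights should load with the standard `transformers` API. A minimal, untested sketch follows; the repo id is taken from the source repository named above, and `device_map="auto"` additionally requires the `accelerate` package.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/bophades-mistral-7B"  # source repository named above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # model card specifies bfloat16
    device_map="auto",    # requires `accelerate`
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```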