
---
license: apache-2.0
base_model:
  - cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
  - Locutusque/Hyperion-1.5-Mistral-7B
  - ibm/merlinite-7b
library_name: transformers
tags:
  - mergekit
  - merge
  - code
model-index:
  - name: Magic-Dolphin-7b
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 65.78
            name: normalized accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InferenceIllusionist/Magic-Dolphin-7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 85.61
            name: normalized accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InferenceIllusionist/Magic-Dolphin-7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 64.64
            name: accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InferenceIllusionist/Magic-Dolphin-7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 58.01
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InferenceIllusionist/Magic-Dolphin-7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 79.64
            name: accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InferenceIllusionist/Magic-Dolphin-7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 51.18
            name: accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InferenceIllusionist/Magic-Dolphin-7b
          name: Open LLM Leaderboard
---

Magic-Dolphin-7b

The follow-up to this model has been released; check out the updated benchmarks here for Excalibur-7b.

A full suite of GGUF quantizations can be found here, courtesy of RichardErkhov.
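
If you want to try one of those GGUF files locally, a minimal llama-cpp-python sketch might look like the following; the quantization filename is a placeholder for whichever file you download, and the sampling settings are illustrative.

```python
# Minimal local-inference sketch using llama-cpp-python (`pip install llama-cpp-python`).
# The GGUF filename below is a placeholder; substitute the quantization you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Magic-Dolphin-7b.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available; use 0 for CPU-only
)

# The model expects the Alpaca prompt format (see the note further down this card).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSummarize what a linear model merge does.\n\n### Response:\n"
)

out = llm(prompt, max_tokens=256, temperature=0.7, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```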

A linear merge of:

- cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
- Locutusque/Hyperion-1.5-Mistral-7B
- ibm/merlinite-7b

These three models showed excellent acumen in technical topics, so I wanted to see how they would behave together in a merge. Several different ratios were tested before this release; in the end, a higher weighting for merlinite-7b helped smooth out some edges. This model is a test of how LAB tuning is impacted by merges with models leveraging DPO.

Benchmark Performance

| Name | Avg. | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|---|---|---|---|---|---|---|---|
| Magic-Dolphin-7b | 67.48 | 65.78 | 85.61 | 64.64 | 58.01 | 79.64 | 51.18 |
| dolphin-2.6-mistral-7b-dpo-laser | 67.28 | 66.3 | 85.73 | 63.16 | 61.71 | 79.16 | 47.61 |
| merlinite-7b | 64 | 63.65 | 84.52 | 64.91 | 50.15 | 79.72 | 41.09 |
| Hyperion-1.5-Mistral-7B | 61.43 | 60.49 | 83.64 | 63.57 | 41.78 | 78.61 | 40.49 |

This was my first experiment with merging models, so any feedback is greatly appreciated.

Uses the Alpaca prompt template.
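
For reference, here is a minimal transformers sketch using the standard Alpaca instruction format; the exact preamble wording and sampling parameters are illustrative rather than part of this card.

```python
# Sketch of prompting Magic-Dolphin-7b with the standard Alpaca instruction format.
# Sampling parameters are illustrative; adjust to taste.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "InferenceIllusionist/Magic-Dolphin-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(instruction="Write a Python function that reverses a string.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```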

Sample Question

Merge Details

Merge Method

This model was merged using the linear merge method.

Models Merged

The following models were included in the merge:

- cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
- Locutusque/Hyperion-1.5-Mistral-7B
- ibm/merlinite-7b

Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: models/dolphin-2.6-mistral-7b-dpo-laser
    parameters:
      weight: 1.0
  - model: models/Hyperion-1.5-Mistral-7B
    parameters:
      weight: 0.3
  - model: models/merlinite-7b
    parameters:
      weight: 0.5
merge_method: linear
dtype: float16
```
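
For intuition, a linear merge is a weighted average of the models' parameters. The sketch below is not mergekit's implementation; it is a simplified illustration of what the 1.0 / 0.3 / 0.5 weights above amount to, assuming all three checkpoints share the same architecture and tensor shapes (mergekit's linear method normalizes the weights by default).

```python
# Simplified illustration of a linear merge: a normalized weighted average of parameters.
# This is NOT mergekit's implementation; it assumes identical architectures so the
# state dicts have matching keys and shapes, and it loads each model sequentially.
import torch
from transformers import AutoModelForCausalLM

weights = {
    "cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser": 1.0,
    "Locutusque/Hyperion-1.5-Mistral-7B": 0.3,
    "ibm/merlinite-7b": 0.5,
}
total = sum(weights.values())

merged = None
for model_id, w in weights.items():
    state = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).state_dict()
    if merged is None:
        merged = {k: v.float() * (w / total) for k, v in state.items()}
    else:
        for k, v in state.items():
            merged[k] += v.float() * (w / total)

# Cast back to float16, mirroring `dtype: float16` in the config above. The result can
# then be loaded into one of the base models with model.load_state_dict(merged).
merged = {k: v.half() for k, v in merged.items()}
```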

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 67.48 |
| AI2 Reasoning Challenge (25-Shot) | 65.78 |
| HellaSwag (10-Shot) | 85.61 |
| MMLU (5-Shot) | 64.64 |
| TruthfulQA (0-shot) | 58.01 |
| Winogrande (5-shot) | 79.64 |
| GSM8k (5-shot) | 51.18 |