---
language:
- en
license: cc-by-nc-4.0
library_name: transformers
tags:
- text-generation-inference
datasets:
- argilla/OpenHermesPreferences
pipeline_tag: text2text-generation
model-index:
- name: ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 73.12
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 89.09
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.8
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 77.45
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 84.77
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 69.45
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
      name: Open LLM Leaderboard
---

# Model Card for eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO

## Disclaimer

I just experimented with my existing model, https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b, DPO-tuning it on Argilla's new preferences dataset: https://huggingface.co/datasets/argilla/OpenHermesPreferences

I didn't test the model, and performance during training wasn't great, so use/test it with caution.

## Disclaimer 2

It turns out the model performs well on benchmarks :D

GGUF: https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-GGUF
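The card lists `transformers` as the library. A minimal usage sketch (not tested against this model; the prompt and generation length are illustrative, and the first call downloads the full ~7B weights):

```python
# Minimal sketch: run the model via the transformers text-generation pipeline.
# MODEL_ID comes from this card; the prompt below is only an example.
MODEL_ID = "eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    # Lazy import: transformers and the model weights are heavy dependencies.
    from transformers import pipeline
    pipe = pipeline("text-generation", model=MODEL_ID)
    return pipe(prompt, max_new_tokens=max_new_tokens)[0]["generated_text"]

if __name__ == "__main__":
    print(generate("Explain DPO in one sentence."))
```

For CPU-only or low-memory setups, the GGUF conversion linked above is likely the more practical option.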

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 76.45 |
| AI2 Reasoning Challenge (25-Shot) | 73.12 |
| HellaSwag (10-Shot)               | 89.09 |
| MMLU (5-Shot)                     | 64.80 |
| TruthfulQA (0-shot)               | 77.45 |
| Winogrande (5-shot)               | 84.77 |
| GSM8k (5-shot)                    | 69.45 |
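As a quick sanity check, the reported average is just the mean of the six benchmark scores:

```python
# Recompute the leaderboard average from the per-benchmark scores above.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 73.12,
    "HellaSwag (10-Shot)": 89.09,
    "MMLU (5-Shot)": 64.80,
    "TruthfulQA (0-shot)": 77.45,
    "Winogrande (5-shot)": 84.77,
    "GSM8k (5-shot)": 69.45,
}
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # → 76.45
```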