---
language:
- en
license: cc-by-nc-4.0
library_name: transformers
tags:
- text-generation-inference
datasets:
- argilla/OpenHermesPreferences
pipeline_tag: text2text-generation
model-index:
- name: ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 73.12
      name: normalized accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 89.09
      name: normalized accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.8
      name: accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 77.45
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 84.77
      name: accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 69.45
      name: accuracy
---
# Model Card for ogno-monarch-jaskier-merge-7b-OH-PREF-DPO

## Disclaimer

This is an experiment: I DPO fine-tuned my earlier model, https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b, on the new preferences dataset from argilla: https://huggingface.co/datasets/argilla/OpenHermesPreferences

I haven't tested the model, and its performance during training wasn't that good, so use/test it with caution.
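The model name's "DPO" suffix refers to Direct Preference Optimization, which trains on chosen/rejected completion pairs like those in OpenHermesPreferences. As a rough illustration (not the actual training code or hyperparameters used here), the per-pair DPO loss can be sketched as:

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """Direct Preference Optimization loss for a single preference pair.

    Each argument is the summed log-probability of the chosen or rejected
    completion under the policy being trained or the frozen reference model.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = chosen_ratio - rejected_ratio
    # -log(sigmoid(beta * margin)): small when the policy prefers the chosen
    # completion more strongly than the reference model does.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Illustrative numbers only: the policy favors the chosen completion more
# than the reference does, so the loss drops below -log(0.5) ≈ 0.693.
loss = dpo_loss(-10.0, -14.0, -12.0, -13.0, beta=0.1)
```

Minimizing this loss pushes the policy to assign relatively higher probability to the chosen completions than the reference model does, without needing a separate reward model.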
## Disclaimer 2

It turns out the model performs well in benchmarks :D

GGUF: https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-GGUF
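For the non-GGUF weights, a minimal usage sketch with the standard `transformers` API (assuming `transformers` and `torch` are installed; the prompt below is just an example):

```python
MODEL_ID = "eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model from the Hub and generate a completion for `prompt`."""
    # Local import: loading a 7B model is heavy, so only do it when called.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("What is the capital of France?"))
```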
Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 76.45 |
| AI2 Reasoning Challenge (25-Shot) | 73.12 |
| HellaSwag (10-Shot)               | 89.09 |
| MMLU (5-Shot)                     | 64.80 |
| TruthfulQA (0-shot)               | 77.45 |
| Winogrande (5-shot)               | 84.77 |
| GSM8k (5-shot)                    | 69.45 |