---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- bardsai/jaskier-7b-dpo-v5.6
- liminerity/merge
model-index:
- name: Blur-7b-slerp-v1.41
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 72.78
      name: normalized accuracy
    source:
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 88.65
      name: normalized accuracy
    source:
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.84
      name: accuracy
    source:
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 74.23
    source:
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 83.9
      name: accuracy
    source:
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 71.49
      name: accuracy
    source:
      name: Open LLM Leaderboard
---
# Blur-7b-slerp-v1.41

Blur-7b-slerp-v1.41 is a merge of the following models using mergekit:
* [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6)
* [liminerity/merge](https://huggingface.co/liminerity/merge)
## 🧩 Configuration
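The original merge recipe is not preserved in this card. As an illustration only, a slerp merge of the two source models with mergekit is typically declared like the sketch below; the layer ranges, interpolation weights (`t`), base model choice, and dtype here are assumptions, not this model's actual configuration:

```yaml
# Illustrative slerp config for mergekit — NOT the recipe used for Blur-7b-slerp-v1.41.
slices:
  - sources:
      - model: bardsai/jaskier-7b-dpo-v5.6
        layer_range: [0, 32]
      - model: liminerity/merge
        layer_range: [0, 32]
merge_method: slerp
base_model: bardsai/jaskier-7b-dpo-v5.6
parameters:
  t:
    # Per-tensor interpolation weights, interpolated across layers.
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5  # default for all other tensors
dtype: bfloat16
```

With a config like this, `mergekit-yaml config.yml ./output-model` would produce the merged checkpoint.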
Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 75.98 |
| AI2 Reasoning Challenge (25-Shot) | 72.78 |
| HellaSwag (10-Shot)               | 88.65 |
| MMLU (5-Shot)                     | 64.84 |
| TruthfulQA (0-shot)               | 74.23 |
| Winogrande (5-shot)               | 83.90 |
| GSM8k (5-shot)                    | 71.49 |