# Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 47.8 |
| ARC (25-shot) | 59.13 |
| HellaSwag (10-shot) | 82.13 |
| MMLU (5-shot) | 54.98 |
| TruthfulQA (0-shot) | 44.23 |
| Winogrande (5-shot) | 76.4 |
| GSM8K (5-shot) | 8.11 |
| DROP (3-shot) | 9.6 |
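The reported average is the unweighted mean of the seven benchmark scores. A minimal sketch verifying that, using the values from the table above (the dictionary name is illustrative):

```python
# Per-benchmark scores from the leaderboard table above.
scores = {
    "ARC (25-shot)": 59.13,
    "HellaSwag (10-shot)": 82.13,
    "MMLU (5-shot)": 54.98,
    "TruthfulQA (0-shot)": 44.23,
    "Winogrande (5-shot)": 76.4,
    "GSM8K (5-shot)": 8.11,
    "DROP (3-shot)": 9.6,
}

# Unweighted mean across all seven benchmarks.
avg = sum(scores.values()) / len(scores)
print(round(avg, 1))  # 47.8
```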
## Description

Model synced from source: `dhmeltzer/Llama-2-13b-hf-ds_eli5_1024_r_64_alpha_16_merged`