---
license: other
license_name: qwen
license_link: LICENSE
model-index:
- name: Qwen-1_8B-Chat-llama
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 36.95
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8B-Chat-llama
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 54.34
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8B-Chat-llama
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 44.55
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8B-Chat-llama
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 43.7
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8B-Chat-llama
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 58.88
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8B-Chat-llama
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 19.26
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8B-Chat-llama
      name: Open LLM Leaderboard
---

Qwen's non-commercial research license applies.

I used the script below to convert the model to LLaMA format, and used the CausalLM tokenizer, as suggested in the script's comments.

https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py

## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Qwen-1_8B-Chat-llama).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 42.94 |
| AI2 Reasoning Challenge (25-Shot) | 36.95 |
| HellaSwag (10-Shot)               | 54.34 |
| MMLU (5-Shot)                     | 44.55 |
| TruthfulQA (0-shot)               | 43.70 |
| Winogrande (5-shot)               | 58.88 |
| GSM8k (5-shot)                    | 19.26 |
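
The leaderboard average is the unweighted mean of the six per-task scores. A quick sanity check (recomputing from the rounded per-task values above gives 42.95 rather than the reported 42.94, presumably because the leaderboard averages the unrounded scores):

```python
# Per-task Open LLM Leaderboard scores, as reported in the table above.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 36.95,
    "HellaSwag (10-Shot)": 54.34,
    "MMLU (5-Shot)": 44.55,
    "TruthfulQA (0-shot)": 43.70,
    "Winogrande (5-shot)": 58.88,
    "GSM8k (5-shot)": 19.26,
}

# Unweighted mean across the six tasks.
avg = sum(scores.values()) / len(scores)
print(f"Avg. {avg:.2f}")  # prints "Avg. 42.95" from the rounded inputs
```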
Model synced from source: KnutJaegersberg/Qwen-1_8B-Chat-llama