---
license: other
license_name: qwen
license_link: LICENSE
model-index:
- name: Qwen-1_8B-Chat-llama
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 36.95
      name: normalized accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 54.34
      name: normalized accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 44.55
      name: accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 58.88
      name: accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 19.26
      name: accuracy
---
Qwen's non-commercial research license applies.

I converted the model with the script below, using the tokenizer of CausalLM as suggested in the script's comments:
https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py
Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 42.94 |
| AI2 Reasoning Challenge (25-Shot) | 36.95 |
| HellaSwag (10-Shot)               | 54.34 |
| MMLU (5-Shot)                     | 44.55 |
| TruthfulQA (0-shot)               | 43.70 |
| Winogrande (5-shot)               | 58.88 |
| GSM8k (5-shot)                    | 19.26 |
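As a quick sanity check, the Avg. row is the mean of the six per-task scores. Recomputing it from the rounded values shown in the table (a minimal stdlib-only sketch):

```python
# Per-task scores, copied from the results table above.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 36.95,
    "HellaSwag (10-Shot)": 54.34,
    "MMLU (5-Shot)": 44.55,
    "TruthfulQA (0-shot)": 43.70,
    "Winogrande (5-shot)": 58.88,
    "GSM8k (5-shot)": 19.26,
}

# Unweighted mean across the six benchmarks.
avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # prints 42.95
```

Note the recomputed mean rounds to 42.95 rather than the reported 42.94; the small gap is likely because the leaderboard averages the unrounded per-task scores.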