ModelHub XC 9bac3a3151 — project initialized; model provided by the ModelHub XC community
Model: kwchoi/DPO_mistral_v01_7b_ultra_0130_1k
Source: Original Platform
2026-04-22 15:59:49 +08:00

---
language:
- en
license: apache-2.0
datasets:
- argilla/ultrafeedback-binarized-preferences-cleaned
model-index:
- name: DPO_mistral_v01_7b_ultra_0130_1k
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 57.17
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_v01_7b_ultra_0130_1k
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 79.16
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_v01_7b_ultra_0130_1k
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 55.85
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_v01_7b_ultra_0130_1k
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 55.62
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_v01_7b_ultra_0130_1k
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 72.85
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_v01_7b_ultra_0130_1k
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 26.31
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_v01_7b_ultra_0130_1k
      name: Open LLM Leaderboard
---

Testing the Mistral-Instruct model with the Orca DPO dataset, to study the effects of DPO. Mistral-7B-Instruct-v0.2 was used as the base model due to its good performance.
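For reference, the per-pair objective that DPO optimizes can be sketched as follows. This is a minimal illustration of the standard DPO loss, not the actual training code used for this model, and all names are hypothetical:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for a single preference pair.

    Each argument is the summed log-probability of the chosen or
    rejected response under the trained policy or the frozen
    reference model. beta controls how far the policy is allowed
    to drift from the reference.
    """
    margin = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    # -log(sigmoid(margin)): small when the policy prefers the chosen
    # response more strongly than the reference model does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Before any training (policy equals reference), the margin is zero and the loss is log 2; it shrinks as the policy learns to rank the chosen response above the rejected one relative to the reference.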

## Open LLM Leaderboard Evaluation Results

Detailed results are available on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_v01_7b_ultra_0130_1k).

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 57.83 |
| AI2 Reasoning Challenge (25-Shot) | 57.17 |
| HellaSwag (10-Shot)               | 79.16 |
| MMLU (5-Shot)                     | 55.85 |
| TruthfulQA (0-shot)               | 55.62 |
| Winogrande (5-shot)               | 72.85 |
| GSM8k (5-shot)                    | 26.31 |
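The reported average is simply the unweighted mean of the six benchmark scores; a quick arithmetic check (values taken from the table above):

```python
# Per-benchmark scores from the Open LLM Leaderboard table above.
scores = {
    "ARC (25-shot)": 57.17,
    "HellaSwag (10-shot)": 79.16,
    "MMLU (5-shot)": 55.85,
    "TruthfulQA (0-shot)": 55.62,
    "Winogrande (5-shot)": 72.85,
    "GSM8k (5-shot)": 26.31,
}

# Unweighted mean across the six benchmarks.
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 57.83, matching the reported Avg.
```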