---
language:
- en
license: cc-by-sa-4.0
library_name: transformers
datasets:
- mlabonne/chatml_dpo_pairs
- ResplendentAI/Synthetic_Soul_1k
model-index:
- name: Flora_DPO_7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 71.76
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Flora_DPO_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 88.28
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Flora_DPO_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.13
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Flora_DPO_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 71.08
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Flora_DPO_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 84.53
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Flora_DPO_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.81
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Flora_DPO_7B
      name: Open LLM Leaderboard
---

# Flora DPO

Finetuned with this DPO dataset: https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs
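Since the DPO pairs above are in ChatML format, prompts in that format are a reasonable default for this model. Below is a minimal sketch; the helper name is illustrative, and the tokenizer's own `chat_template` (if present) should be treated as authoritative.

```python
def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts as a ChatML string.

    ChatML wraps each turn in <|im_start|>/<|im_end|> markers and ends
    with an open assistant turn for the model to complete.
    """
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)


prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Name a flower."},
])
```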

Quants available here:

- https://huggingface.co/solidrust/Flora-7B-DPO-AWQ
- https://huggingface.co/Test157t/ResplendentAI-Flora_DPO_7B-5bpw-exl2
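For the full-precision weights, a standard `transformers` loading sketch follows. It is untested here and assumes a GPU with enough memory for a 7B model in fp16; the function name is illustrative.

```python
MODEL_ID = "ResplendentAI/Flora_DPO_7B"


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate a completion for `prompt` with Flora_DPO_7B.

    Imports are deferred so this sketch can be read (and imported)
    without torch/transformers installed.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.float16,
        device_map="auto",
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens, return only the new completion.
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

The AWQ and exl2 quants linked above need their respective loaders (e.g. AutoAWQ, exllamav2) rather than this plain `transformers` path.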

## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Flora_DPO_7B).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 74.26 |
| AI2 Reasoning Challenge (25-Shot) | 71.76 |
| HellaSwag (10-Shot)               | 88.28 |
| MMLU (5-Shot)                     | 64.13 |
| TruthfulQA (0-shot)               | 71.08 |
| Winogrande (5-shot)               | 84.53 |
| GSM8k (5-shot)                    | 65.81 |
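The Avg. row is simply the unweighted mean of the six benchmark scores, which can be checked in a couple of lines:

```python
# Scores copied from the leaderboard table above.
scores = {
    "ARC (25-shot)": 71.76,
    "HellaSwag (10-shot)": 88.28,
    "MMLU (5-shot)": 64.13,
    "TruthfulQA (0-shot)": 71.08,
    "Winogrande (5-shot)": 84.53,
    "GSM8k (5-shot)": 65.81,
}

# Unweighted mean; matches the table's Avg. of 74.26 to within rounding.
avg = sum(scores.values()) / len(scores)
```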