---
license: cc-by-4.0
tags:
- mistral
- merge
pipeline_tag: text-generation
model-index:
- name: xDAN-SlimOrca
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 65.61
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/xDAN-SlimOrca
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 85.7
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/xDAN-SlimOrca
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.67
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/xDAN-SlimOrca
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 57.68
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/xDAN-SlimOrca
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 77.66
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/xDAN-SlimOrca
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 57.92
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/xDAN-SlimOrca
      name: Open LLM Leaderboard
---

# Model Card for xDAN-SlimOrca

A slerp (spherical linear interpolation) merge of xDAN-L1-Chat-RL-v1 and mistral-7b-slimorcaboros.

## .yaml file for mergekit

```yaml
slices:
  - sources:
      - model: xDAN-AI/xDAN-L1-Chat-RL-v1
        layer_range: [0, 32]
      - model: openaccess-ai-collective/mistral-7b-slimorcaboros
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
  t:
    - filter: self_attn
      value: [0.14, 0.57, 0.4, 0.74, 1]
    - filter: mlp
      value: [0.86, 0.43, 0.6, 0.26, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16
```
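The `t` values in the config control how far each tensor is rotated from the first model toward the second (here on opposite schedules for self-attention and MLP tensors, with `0.5` as the fallback). As a minimal, illustrative sketch of the underlying operation — not mergekit's actual implementation — slerp between two weight vectors looks like:

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    Falls back to plain linear interpolation when the vectors are
    (nearly) colinear, as merge tools typically do.
    """
    dot = sum(a * b for a, b in zip(v0, v1))
    norm0 = math.sqrt(sum(a * a for a in v0))
    norm1 = math.sqrt(sum(b * b for b in v1))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_theta = max(-1.0, min(1.0, dot / (norm0 * norm1)))
    theta = math.acos(cos_theta)
    if abs(math.sin(theta)) < eps:  # nearly parallel: lerp instead
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# t = 0 returns the first model's tensor, t = 1 the second's.
print(slerp(0.0, [1.0, 0.0], [0.0, 1.0]))  # [1.0, 0.0]
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))  # ~[0.7071, 0.7071]
```

With mergekit installed, a config like this is typically applied via its `mergekit-yaml` CLI, pointing it at the config file and an output directory.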

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 68.04 |
| AI2 Reasoning Challenge (25-Shot) | 65.61 |
| HellaSwag (10-Shot)               | 85.70 |
| MMLU (5-Shot)                     | 63.67 |
| TruthfulQA (0-shot)               | 57.68 |
| Winogrande (5-shot)               | 77.66 |
| GSM8k (5-shot)                    | 57.92 |
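The reported average is the unweighted mean of the six benchmark scores, which can be checked directly:

```python
# Scores from the leaderboard table above.
scores = {
    "ARC (25-shot)": 65.61,
    "HellaSwag (10-shot)": 85.70,
    "MMLU (5-shot)": 63.67,
    "TruthfulQA (0-shot)": 57.68,
    "Winogrande (5-shot)": 77.66,
    "GSM8k (5-shot)": 57.92,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 68.04
```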
## Description

Model synced from source: Azazelle/xDAN-SlimOrca