---
language:
- en
license: cc-by-sa-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- liminerity/Multiverse-Experiment-slerp-7b
- jeiku/Alpaca_NSFW_Shuffled_Mistral
- ResplendentAI/Datura_7B
- ChaoticNeutrals/Eris_Remix_7B
datasets:
- ResplendentAI/Alpaca_NSFW_Shuffled
- unalignment/toxic-dpo-v0.2
model-index:
- name: Paradigm_7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 73.63
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 88.66
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.02
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 75.19
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 84.53
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 66.79
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
      name: Open LLM Leaderboard
---

# Paradigm


An incredibly effective and intelligent RP model designed to be the best bot you've ever used. I hope you like it!

GGUF quants available here: https://huggingface.co/Lewdiculous/Paradigm_7B-GGUF-IQ-Imatrix

## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 75.47 |
| AI2 Reasoning Challenge (25-Shot) | 73.63 |
| HellaSwag (10-Shot)               | 88.66 |
| MMLU (5-Shot)                     | 64.02 |
| TruthfulQA (0-shot)               | 75.19 |
| Winogrande (5-shot)               | 84.53 |
| GSM8k (5-shot)                    | 66.79 |
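The leaderboard average is simply the unweighted mean of the six benchmark scores above, which can be verified in a couple of lines:

```python
# Unweighted mean of the six Open LLM Leaderboard benchmark scores listed above.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 73.63,
    "HellaSwag (10-Shot)": 88.66,
    "MMLU (5-Shot)": 64.02,
    "TruthfulQA (0-shot)": 75.19,
    "Winogrande (5-shot)": 84.53,
    "GSM8k (5-shot)": 66.79,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 75.47
```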

## Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: dare_ties
base_model: ChaoticNeutrals/Eris_Remix_7B
parameters:
  normalize: true
models:
  - model: ChaoticNeutrals/Eris_Remix_7B
    parameters:
      weight: 1
  - model: ResplendentAI/Datura_7B
    parameters:
      weight: 1
  - model: liminerity/Multiverse-Experiment-slerp-7b+jeiku/Alpaca_NSFW_Shuffled_Mistral
    parameters:
      weight: 0.33
dtype: float16
```
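With `normalize: true`, mergekit rescales the per-model weights so they sum to 1 before combining the models' task vectors. A minimal sketch of that rescaling, using the weights from the config above (an illustration of the idea, not mergekit's internal implementation):

```python
# Per-model weights from the merge config above.
weights = {
    "ChaoticNeutrals/Eris_Remix_7B": 1.0,
    "ResplendentAI/Datura_7B": 1.0,
    "liminerity/Multiverse-Experiment-slerp-7b+jeiku/Alpaca_NSFW_Shuffled_Mistral": 0.33,
}

# normalize: true divides each weight by the total, so the effective
# weights sum to 1 (here roughly 0.429 / 0.429 / 0.142).
total = sum(weights.values())
normalized = {name: w / total for name, w in weights.items()}

for name, w in normalized.items():
    print(f"{w:.3f}  {name}")
```

Assuming mergekit is installed, a config like this is typically applied with its `mergekit-yaml` command (e.g. `mergekit-yaml config.yml ./output-model`), which reads the YAML and writes the merged checkpoint to the output directory.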
