ModelHub XC 729c25a37d initial import; model provided by the ModelHub XC community
Model: theNovaAI/Supernova-experimental
Source: Original Platform
2026-04-13 20:36:42 +08:00

---
language:
- en
license: cc-by-nc-sa-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- PygmalionAI/pygmalion-2-13b
- Undi95/Amethyst-13B
inference: false
model-index:
- name: Supernova-experimental
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 63.05
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=theNovaAI/Supernova-experimental
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 83.66
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=theNovaAI/Supernova-experimental
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 56.59
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=theNovaAI/Supernova-experimental
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 49.37
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=theNovaAI/Supernova-experimental
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 77.35
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=theNovaAI/Supernova-experimental
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 28.73
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=theNovaAI/Supernova-experimental
      name: Open LLM Leaderboard
---

# Supernova-experimental

This is an experimental model that was created for the development of NovaAI.

It is good at chatting and some roleplay (RP).

A quantized version is available here: [theNovaAI/Supernova-experimental-GPTQ](https://huggingface.co/theNovaAI/Supernova-experimental-GPTQ)

## Prompt Template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
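The template above can be filled in programmatically before being sent to the model. A minimal sketch (the `build_alpaca_prompt` helper is mine, not part of the model's tooling; the commented `transformers` calls are a standard loading pattern and would require downloading the 13B weights):

```python
# Build an Alpaca-style prompt for Supernova-experimental.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n"
    "### Response:\n"
)

def build_alpaca_prompt(prompt: str) -> str:
    """Insert the user instruction into the Alpaca template."""
    return ALPACA_TEMPLATE.format(prompt=prompt)

text = build_alpaca_prompt("Introduce yourself in one sentence.")
print(text)

# Untested sketch of generation with transformers (assumes GPU/RAM for 13B):
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("theNovaAI/Supernova-experimental")
#   model = AutoModelForCausalLM.from_pretrained("theNovaAI/Supernova-experimental")
#   out = model.generate(**tok(text, return_tensors="pt"), max_new_tokens=256)
#   print(tok.decode(out[0], skip_special_tokens=True))
```

Note that the generated text should be read starting after the final `### Response:` marker.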

## Models Merged

The following models were included in the merge:

* [PygmalionAI/pygmalion-2-13b](https://huggingface.co/PygmalionAI/pygmalion-2-13b)
* [Undi95/Amethyst-13B](https://huggingface.co/Undi95/Amethyst-13B)
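The card's tags indicate the merge was produced with mergekit, but the actual merge recipe is not published here. For illustration only, a config of the general shape mergekit consumes might look like the following; the merge method, base model choice, and interpolation factor are guesses, not the parameters actually used:

```yaml
# Hypothetical mergekit config -- the real method/parameters are undocumented.
models:
  - model: PygmalionAI/pygmalion-2-13b
  - model: Undi95/Amethyst-13B
merge_method: slerp          # assumed; could equally be another method
base_model: PygmalionAI/pygmalion-2-13b
parameters:
  t: 0.5                     # interpolation factor (illustrative)
dtype: float16
```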

## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=theNovaAI/Supernova-experimental).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 59.79 |
| AI2 Reasoning Challenge (25-Shot) | 63.05 |
| HellaSwag (10-Shot)               | 83.66 |
| MMLU (5-Shot)                     | 56.59 |
| TruthfulQA (0-shot)               | 49.37 |
| Winogrande (5-shot)               | 77.35 |
| GSM8k (5-shot)                    | 28.73 |
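The "Avg." row is simply the arithmetic mean of the six per-benchmark scores, which is easy to verify:

```python
# Recompute the leaderboard average from the six per-task scores.
scores = {
    "ARC (25-shot)": 63.05,
    "HellaSwag (10-shot)": 83.66,
    "MMLU (5-shot)": 56.59,
    "TruthfulQA (0-shot)": 49.37,
    "Winogrande (5-shot)": 77.35,
    "GSM8k (5-shot)": 28.73,
}
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 59.79, matching the Avg. row
```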