---
language:
- en
license: apache-2.0
tags:
- not-for-all-audiences
datasets:
- ResplendentAI/Alpaca_NSFW_Shuffled
- ResplendentAI/Luna_NSFW_Text
- ResplendentAI/Synthetic_Soul_1k
- ResplendentAI/Sissification_Hypno_1k
model-index:
- name: Sinerva_7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 70.14
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Sinerva_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 85.59
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Sinerva_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 61.77
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Sinerva_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 59.93
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Sinerva_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 82.56
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Sinerva_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 62.32
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Sinerva_7B
      name: Open LLM Leaderboard
---
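The `model-index` frontmatter above follows the Hugging Face model card schema, so the benchmark scores are machine-readable. A minimal sketch of pulling one score out with PyYAML (assumes `pyyaml` is installed; the card text is abbreviated here to a single benchmark entry):

```python
import yaml  # PyYAML

# Abbreviated copy of the frontmatter above (one benchmark entry shown).
card = """
model-index:
- name: Sinerva_7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 70.14
      name: normalized accuracy
"""

meta = yaml.safe_load(card)
entry = meta["model-index"][0]["results"][0]
print(entry["dataset"]["name"], entry["metrics"][0]["value"])
# prints: AI2 Reasoning Challenge (25-Shot) 70.14
```

The same traversal works for the full card, where `results` holds six such entries.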

# Sinerva


Decadent and rich in sensual prose, but beware: she is designed to humiliate and degrade her user when necessary.

GGUF available here: https://huggingface.co/Lewdiculous/Sinerva_7B-GGUF-IQ-Imatrix

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 70.38 |
| AI2 Reasoning Challenge (25-Shot) | 70.14 |
| HellaSwag (10-Shot)               | 85.59 |
| MMLU (5-Shot)                     | 61.77 |
| TruthfulQA (0-shot)               | 59.93 |
| Winogrande (5-shot)               | 82.56 |
| GSM8k (5-shot)                    | 62.32 |
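The reported Avg. is the arithmetic mean of the six benchmark scores. A quick sanity check:

```python
# Benchmark scores from the leaderboard table above.
scores = {
    "ARC (25-shot)": 70.14,
    "HellaSwag (10-shot)": 85.59,
    "MMLU (5-shot)": 61.77,
    "TruthfulQA (0-shot)": 59.93,
    "Winogrande (5-shot)": 82.56,
    "GSM8k (5-shot)": 62.32,
}

avg = sum(scores.values()) / len(scores)
print(avg)  # close to the reported Avg. of 70.38
```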