ModelHub XC 8e90d44b83 initial commit; model provided by the ModelHub XC community
Model: w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
Source: Original Platform
2026-04-11 23:49:58 +08:00

---
license: apache-2.0
model-index:
- name: SOLAR-10.7B-Instruct-v1.0-uncensored
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 38.84
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 33.86
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 0.23
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 5.93
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 18.49
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 26.04
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
      name: Open LLM Leaderboard
---

SOLAR-10.7B-Instruct-v1.0-uncensored

SOLAR-10.7B-Instruct-v1.0 fine-tuned to be less censored. Refer to upstage/SOLAR-10.7B-Instruct-v1.0 for model info and usage instructions.

Training details

This model was trained using LoRA and DPOTrainer on unalignment/toxic-dpo-v0.1.
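As background on the training recipe above: DPOTrainer optimizes the Direct Preference Optimization objective over (chosen, rejected) response pairs. A minimal plain-Python sketch of the standard per-example DPO loss is below; this is an illustration of the loss formula, not the author's actual training code, and the log-probability values are stand-in numbers.

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Per-example DPO loss: -log sigmoid(beta * (policy margin - reference margin)).

    Each argument is the summed log-probability of a response under the
    trainable policy (pi_*) or the frozen reference model (ref_*).
    """
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # -log(sigmoid(margin)) == log(1 + exp(-margin))
    return math.log(1.0 + math.exp(-margin))

# With the policy identical to the reference, the margin is 0 and the loss is log 2.
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 4))  # 0.6931
# If the policy raises the chosen response's likelihood relative to the
# reference, the loss falls below log 2.
print(dpo_loss(-9.0, -12.0, -10.0, -12.0) < math.log(2.0))  # True
```

In TRL's DPOTrainer this loss is computed from token-level log-probabilities of both models, with `beta` as a configurable hyperparameter; when LoRA is used, only the low-rank adapter weights of the policy are updated.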

How to Cite

@misc{solarUncensoredDPO,
      title={solar-10.7b-instruct-V1.0-uncensored},
      url={https://huggingface.co/w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored},
      author={Stepan Zuev},
      year={2023},
      month={Dec}
} 

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 20.56 |
| IFEval (0-Shot)     | 38.84 |
| BBH (3-Shot)        | 33.86 |
| MATH Lvl 5 (4-Shot) |  0.23 |
| GPQA (0-shot)       |  5.93 |
| MuSR (0-shot)       | 18.49 |
| MMLU-PRO (5-shot)   | 26.04 |