---
license: other
license_name: yi-34b
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
tags:
  - merge
model-index:
  - name: Nous-Hermes-2-SUS-Chat-34B-Slerp
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 66.72
            name: normalized accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 84.97
            name: normalized accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 77.0
            name: accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 59.23
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 83.58
            name: accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 72.86
            name: accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp
          name: Open LLM Leaderboard
---


Nous-Hermes-2-SUS-Chat-34B-Slerp

This is the repository for Nous-Hermes-2-SUS-Chat-34B-Slerp, a SLERP merge of NousResearch/Nous-Hermes-2-Yi-34B and SUSTech/SUS-Chat-34B created with mergekit.

Prompt Templates

You can use these prompt templates, but I recommend using ChatML.

ChatML (NousResearch/Nous-Hermes-2-Yi-34B):

```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```

Human - Assistant (SUSTech/SUS-Chat-34B):

```
### Human: {user}

### Assistant: {assistant}
```
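As a minimal sketch, the ChatML template above can be assembled in plain Python. The helper name `format_chatml` is hypothetical (with the `transformers` library, the tokenizer's chat template would normally do this); the assistant turn is left open so the model generates the reply:

```python
def format_chatml(system: str, user: str) -> str:
    """Build a ChatML prompt from the template above, leaving the
    assistant turn open for the model to complete."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = format_chatml("You are a helpful assistant.", "What is a SLERP merge?")
print(prompt)
```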

YAML Config

```yaml
slices:
  - sources:
      - model: Nous-Hermes-2-Yi-34B
        layer_range: [0, 60]
      - model: SUS-Chat-34B
        layer_range: [0, 60]

merge_method: slerp
base_model: Yi-34B

parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5

tokenizer_source: union
dtype: bfloat16
```
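For intuition, the `slerp` merge method interpolates each pair of weight tensors along the arc between them rather than along a straight line, with the interpolation factor `t` taken from the schedule above (a per-layer gradient for `self_attn` and `mlp`, 0.5 elsewhere). The following is a simplified stand-alone sketch of spherical linear interpolation on flattened vectors, not mergekit's actual implementation:

```python
import math

def slerp(t: float, v0: list, v1: list, eps: float = 1e-8) -> list:
    """Spherical linear interpolation between two flattened weight vectors."""
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    # Clamp the cosine for numerical safety before acos.
    cos_theta = max(-1.0, min(1.0, dot / (n0 * n1 + eps)))
    theta = math.acos(cos_theta)
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# t = 0 returns the first model's weights; t = 0.5 is the midpoint on the arc.
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))
```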

Quantized Versions

Quantized versions of this model are available thanks to TheBloke.

GPTQ
GGUF
AWQ

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 74.06 |
| AI2 Reasoning Challenge (25-Shot) | 66.72 |
| HellaSwag (10-Shot)               | 84.97 |
| MMLU (5-Shot)                     | 77.00 |
| TruthfulQA (0-shot)               | 59.23 |
| Winogrande (5-shot)               | 83.58 |
| GSM8k (5-shot)                    | 72.86 |