---
language:
- en
license: llama2
library_name: transformers
tags:
- mistral
- merge
datasets:
- stingning/ultrachat
- garage-bAInd/Open-Platypus
- Open-Orca/OpenOrca
- TIGER-Lab/MathInstruct
- OpenAssistant/oasst_top1_2023-08-25
- teknium/openhermes
- meta-math/MetaMathQA
- Open-Orca/SlimOrca
pipeline_tag: text-generation
base_model:
- Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
- ehartford/dolphin-2.1-mistral-7b
- Open-Orca/Mistral-7B-OpenOrca
- bhenrym14/mistral-7b-platypus-fp16
- ehartford/samantha-1.2-mistral-7b
- teknium/CollectiveCognition-v1.1-Mistral-7B
- HuggingFaceH4/zephyr-7b-alpha
model-index:
- name: sethuiyer/SynthIQ-7b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 65.87
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/SynthIQ-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 85.82
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/SynthIQ-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.75
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/SynthIQ-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 57.0
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/SynthIQ-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 78.69
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/SynthIQ-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.06
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/SynthIQ-7b
      name: Open LLM Leaderboard
---

# SynthIQ

This is SynthIQ, rated 92.23/100 by GPT-4 across a varied set of complex prompts. I used mergekit to merge the constituent models.

| Benchmark Name | Score |
|----------------|-------|
| ARC            | 65.87 |
| HellaSwag      | 85.82 |
| MMLU           | 64.75 |
| TruthfulQA     | 57.00 |
| Winogrande     | 78.69 |
| GSM8K          | 64.06 |
| AGIEval        | 42.67 |
| GPT4All        | 73.71 |
| Bigbench       | 44.59 |

## Update - 19/01/2024

Tested to work well with autogen and CrewAI.

## GGUF Files

- Q4_K_M - medium, balanced quality - recommended
- Q6_K - very large, extremely low quality loss
- Q8_0 - very large, extremely low quality loss - not recommended

Important Update: SynthIQ is now available on Ollama. You can use it by running `ollama run stuehieyr/synthiq` in your terminal. If you have limited computing resources, check out this video to learn how to run it on a Google Colab backend.

## YAML Config

```yaml
slices:
  - sources:
      - model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
        layer_range: [0, 32]
      - model: uukuguy/speechless-mistral-six-in-one-7b
        layer_range: [0, 32]

merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1

parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
tokenizer_source: union

dtype: bfloat16
```
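The `slerp` merge method interpolates each pair of weight tensors along the great circle between them, with the per-layer `t` schedules given above for the attention and MLP tensors. The following is a minimal NumPy sketch of spherical linear interpolation on toy vectors, not mergekit's actual implementation:

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = float(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    theta = np.arccos(dot)   # angle between the two weight directions
    if theta < eps:          # nearly parallel: plain lerp is numerically safer
        return (1 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b

# Toy 2-D "tensors": t=0 recovers the first model, t=1 the second.
w0 = np.array([1.0, 0.0])
w1 = np.array([0.0, 1.0])
print(slerp(0.5, w0, w1))  # the midpoint stays on the arc between w0 and w1
```

Unlike plain linear interpolation, the midpoint keeps the same magnitude as the endpoints here, which is the usual motivation for SLERP over lerp when merging weights.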

## Prompt Template: ChatML

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
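The template above can be assembled programmatically before sending the prompt to the model. A plain-Python sketch for a single turn (the function name is my own):

```python
def build_chatml(system_message: str, prompt: str) -> str:
    """Assemble a single-turn ChatML prompt in the format shown above."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# The string ends after the assistant header, so generation continues from there.
print(build_chatml("You are a helpful assistant.", "Explain SLERP briefly."))
```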

The license is Llama 2, since uukuguy/speechless-mistral-six-in-one-7b is under the Llama 2 license.

## Nous Benchmark Evaluation Results

Detailed results can be found here.

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 69.37 |
| AI2 Reasoning Challenge (25-Shot) | 65.87 |
| HellaSwag (10-Shot)               | 85.82 |
| MMLU (5-Shot)                     | 64.75 |
| TruthfulQA (0-shot)               | 57.00 |
| Winogrande (5-shot)               | 78.69 |
| GSM8k (5-shot)                    | 64.06 |
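As a quick sanity check, the reported average is the plain mean of the six task scores:

```python
# Open LLM Leaderboard task scores for SynthIQ-7b, as reported above.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 65.87,
    "HellaSwag (10-Shot)": 85.82,
    "MMLU (5-Shot)": 64.75,
    "TruthfulQA (0-shot)": 57.00,
    "Winogrande (5-shot)": 78.69,
    "GSM8k (5-shot)": 64.06,
}
avg = sum(scores.values()) / len(scores)
print(avg)  # agrees with the reported 69.37 to within rounding
```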