---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- uukuguy/speechless-code-mistral-7b-v1.0
- upaya07/Arithmo2-Mistral-7B
pipeline_tag: text-generation
model-index:
- name: sethuiyer/CodeCalc-Mistral-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 61.95
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 83.64
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 62.78
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 47.49
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 78.3
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.53
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
      name: Open LLM Leaderboard
---

# CodeCalc-Mistral-7B


## Configuration

The following YAML configuration was used to produce this model:


```yaml
base_model: uukuguy/speechless-code-mistral-7b-v1.0
dtype: bfloat16
merge_method: ties
models:
- model: uukuguy/speechless-code-mistral-7b-v1.0
- model: upaya07/Arithmo2-Mistral-7B
  parameters:
    density: [0.25, 0.35, 0.45, 0.35, 0.25]
    weight: [0.1, 0.25, 0.5, 0.25, 0.1]
parameters:
  int8_mask: true
```
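TIES merging works on *task vectors* (the parameter deltas of each fine-tune relative to the base): it trims each delta to its highest-magnitude fraction (`density`), elects a majority sign per parameter, and averages only the deltas that agree with that sign. The toy NumPy sketch below illustrates the idea on a single tensor; it is not mergekit's actual implementation, and it uses a scalar `density` rather than the per-layer-group lists in the config above.

```python
import numpy as np

def ties_merge(base, tuned, density=0.5):
    """Toy single-tensor TIES merge: trim, elect sign, disjoint mean."""
    deltas = [t - base for t in tuned]              # task vectors
    trimmed = []
    for d in deltas:
        k = max(1, int(round(density * d.size)))
        thresh = np.sort(np.abs(d).ravel())[-k]     # k-th largest magnitude
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
    stacked = np.stack(trimmed)
    elected = np.sign(stacked.sum(axis=0))          # majority sign per parameter
    agree = (np.sign(stacked) == elected) & (elected != 0)
    summed = np.where(agree, stacked, 0.0).sum(axis=0)
    count = agree.sum(axis=0)
    merged = np.where(count > 0, summed / np.maximum(count, 1), 0.0)
    return base + merged

base = np.zeros(4)
a = np.array([1.0, 0.0, 2.0, -1.0])
b = np.array([1.0, 0.0, -2.0, 0.0])
# The conflicting +2/-2 deltas cancel; agreeing deltas are kept.
print(ties_merge(base, [a, b], density=0.5))  # [ 1.  0.  0. -1.]
```

The sign election is what lets the merge keep the code model's and the math model's non-conflicting improvements while discarding parameters where they pull in opposite directions.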

## Evaluation

| T | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|---|-------|---------|-----|-----------|------|------------|------------|-------|
| 🔍 | sethuiyer/CodeCalc-Mistral-7B | 66.33 | 61.95 | 83.64 | 62.78 | 47.79 | 78.3 | 63.53 |
| 📉 | uukuguy/speechless-code-mistral-7b-v1.0 | 63.6 | 61.18 | 83.77 | 63.4 | 47.9 | 78.37 | 47.01 |

The merge appears successful: GSM8K improves substantially over the base model (47.01 → 63.53) while performance on the other benchmarks remains comparable.

## Usage

Use the Alpaca instruction format together with the "Divine Intellect" generation preset:

```
You are an intelligent programming assistant.

### Instruction:
Implement a linked list in C++

### Response:
```

Preset:

```yaml
temperature: 1.31
top_p: 0.14
repetition_penalty: 1.17
top_k: 49
```
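Putting the prompt format and preset together, here is a minimal transformers sketch. The `alpaca_prompt` helper is illustrative (not part of the model's API); the sampling values come from the preset above, and generation requires `transformers` plus `torch` and enough memory for a 7B model.

```python
def alpaca_prompt(instruction: str) -> str:
    """Build the Alpaca-style prompt recommended for this model."""
    return (
        "You are an intelligent programming assistant.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

def generate(instruction: str, max_new_tokens: int = 512) -> str:
    """Sample with the 'Divine Intellect' preset (heavy deps kept local)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    name = "sethuiyer/CodeCalc-Mistral-7B"
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")
    inputs = tok(alpaca_prompt(instruction), return_tensors="pt").to(model.device)
    out = model.generate(
        **inputs, max_new_tokens=max_new_tokens, do_sample=True,
        temperature=1.31, top_p=0.14, top_k=49, repetition_penalty=1.17,
    )
    # Decode only the newly generated tokens, not the echoed prompt.
    return tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
```

Note the unusual preset: the high temperature (1.31) flattens the distribution, but the very low `top_p` (0.14) then restricts sampling to a small high-probability nucleus.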

## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B).

| Metric | Value |
|--------|-------|
| Avg. | 66.33 |
| AI2 Reasoning Challenge (25-Shot) | 61.95 |
| HellaSwag (10-Shot) | 83.64 |
| MMLU (5-Shot) | 62.78 |
| TruthfulQA (0-shot) | 47.79 |
| Winogrande (5-shot) | 78.30 |
| GSM8k (5-shot) | 63.53 |
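As a sanity check, the reported average is the unweighted mean of the six benchmark scores:

```python
# Scores from the leaderboard table above.
scores = {
    "ARC (25-shot)": 61.95,
    "HellaSwag (10-shot)": 83.64,
    "MMLU (5-shot)": 62.78,
    "TruthfulQA (0-shot)": 47.79,
    "Winogrande (5-shot)": 78.30,
    "GSM8K (5-shot)": 63.53,
}
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 66.33
```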