ModelHub XC 6e173b77d5 project initialization; model provided by the ModelHub XC community
Model: ajibawa-2023/Code-Mistral-7B
Source: Original Platform
2026-05-04 07:21:46 +08:00

---
language:
- en
license: apache-2.0
tags:
- code
- mathematics
datasets:
- ajibawa-2023/Code-290k-ShareGPT
- m-a-p/Code-Feedback
- microsoft/orca-math-word-problems-200k
- teknium/openhermes
model-index:
- name: Code-Mistral-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 64.59
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-Mistral-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 85.29
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-Mistral-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.0
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-Mistral-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 54.64
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-Mistral-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 82.24
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-Mistral-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 68.08
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-Mistral-7B
      name: Open LLM Leaderboard
---

Code-Mistral-7B

This model is trained on a refined version of my dataset Code-290k-ShareGPT.

Besides this, it is trained on the following datasets:

Code-Feedback

orca-math-word-problems-200k

OpenHermes

The idea was to check how this model would perform when trained on both code and maths datasets. The model is very good at coding; maths is still hit and miss, but you are welcome to test it out.

This model is trained on massive datasets, so the results are very good. I have used the ChatML prompt format.

Kindly note this is the QLoRA version, a rare exception.

GGUF & Exllama

GGUF: Link

Exllama v2: Link

Special Thanks to Bartowski for quantizing this model.

Training:

The entire dataset was trained on 4 x A100 80GB GPUs. Training for 3 epochs took almost 33 hours. The Axolotl codebase was used for training. All data was trained on the Mistral base model.
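The card does not include the actual training configuration. As a rough illustration, a QLoRA fine-tune of Mistral-7B with Axolotl is driven by a YAML file along these lines; every hyperparameter value below is an assumption for illustration, not the configuration used for this model:

```yaml
# Illustrative Axolotl config for a QLoRA fine-tune of Mistral-7B.
# All values are assumptions, not the settings used for Code-Mistral-7B.
base_model: mistralai/Mistral-7B-v0.1
load_in_4bit: true            # QLoRA: base weights quantized to 4-bit
adapter: qlora
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true      # attach LoRA adapters to all linear layers

datasets:
  - path: ajibawa-2023/Code-290k-ShareGPT
    type: sharegpt

sequence_len: 4096
num_epochs: 3                 # matches the 3 epochs mentioned above
micro_batch_size: 2
gradient_accumulation_steps: 4
learning_rate: 0.0002
```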

Example Prompt: This model uses the ChatML prompt format.

```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

You can modify the above prompt as per your requirements.
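The template above can be assembled programmatically. A minimal sketch (the helper name and the example user message are mine, not from the card):

```python
def build_chatml_prompt(user_prompt: str,
                        system_prompt: str = "You are a helpful AI assistant.") -> str:
    """Assemble the ChatML template shown above, ending at the open
    assistant tag so the model generates the reply from there."""
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Example: format a coding question for the model
prompt = build_chatml_prompt("Write a C function that reverses a string.")
print(prompt)
```

The string ends immediately after `<|im_start|>assistant\n`, which is what most ChatML-trained models expect as the generation prefix.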

I want to say a special thanks to the open-source community for helping and guiding me to better understand AI and model development.

Thank you for your love & support.

Example Output

The original card shows example outputs as screenshots covering: C++, error resolving, matrices, and machine learning.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 69.97 |
| AI2 Reasoning Challenge (25-Shot) | 64.59 |
| HellaSwag (10-Shot)               | 85.29 |
| MMLU (5-Shot)                     | 65.00 |
| TruthfulQA (0-shot)               | 54.64 |
| Winogrande (5-shot)               | 82.24 |
| GSM8k (5-shot)                    | 68.08 |
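As a quick sanity check, the reported average is the mean of the six benchmark scores:

```python
# Verify that the leaderboard "Avg." matches the six benchmark scores.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 64.59,
    "HellaSwag (10-Shot)": 85.29,
    "MMLU (5-Shot)": 65.00,
    "TruthfulQA (0-shot)": 54.64,
    "Winogrande (5-shot)": 82.24,
    "GSM8k (5-shot)": 68.08,
}
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # → 69.97
```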