
---
language:
- en
license: other
tags:
- code
datasets:
- ajibawa-2023/Code-290k-ShareGPT
model-index:
- name: Code-290k-6.7B-Instruct
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 34.9
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-6.7B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 51.99
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-6.7B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 34.89
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-6.7B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 41.95
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-6.7B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 52.64
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-6.7B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 3.49
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-6.7B-Instruct
      name: Open LLM Leaderboard
---

# Code-290k-6.7B-Instruct

This model is fine-tuned from DeepSeek-Coder-6.7B-Instruct. I used my existing dataset, Code-290k-ShareGPT, for training; it contains around 290,000 code samples. Alongside Python, the training data covers Java, JavaScript, Go, C++, Rust, Ruby, SQL, MySQL, R, Julia, Haskell, and more, with each sample paired with a detailed explanation. The model uses the Alpaca prompt format, and besides generating code it will also give you an explanation.

## Training

The entire dataset was trained on 4 x A100 80GB GPUs. Training for 3 epochs took 85 hours. The DeepSeek-Coder codebase and DeepSpeed were used for training.

This is a fully fine-tuned model.

Links to quantized models are given below.

Exllama v2: Link

Many thanks to Bartowski for making the quantized versions of this model.

## Example Prompt

    This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation.

    ### Instruction:
    {instruction}

    ### Response:

You can modify the above prompt as per your requirements. It follows the Alpaca format.
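As a sketch, the template above can be applied programmatically. The snippet below assumes the standard Hugging Face `transformers` API and the model id from this card; generation settings are illustrative, not the author's recommended values.

```python
# Build the Alpaca-style prompt this model was trained on.
SYSTEM = ("This is a conversation with your helpful AI assistant. "
          "AI assistant can generate Code in various Programming Languages "
          "along with necessary explanation.")

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca layout used during training."""
    return f"{SYSTEM}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

if __name__ == "__main__":
    # Loading the 6.7B model requires a GPU with sufficient memory.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "ajibawa-2023/Code-290k-6.7B-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = build_prompt("Write a Python function that applies Bayes' theorem.")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=512)
    # Decode only the newly generated tokens after the prompt.
    print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                           skip_special_tokens=True))
```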

Special thanks to the open-source community for helping and guiding me to better understand AI/model development.

Thank you for your love and support.

## Examples

1. Bayes' theorem - Python
2. Fermat's little theorem
3. The Arrhenius equation using R

(Example screenshots omitted.)
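As an illustration of the first example above, a minimal Bayes' theorem helper in Python might look like the following; this is a hand-written sketch, and the code the model actually produces will differ.

```python
def bayes_theorem(p_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H|E) via Bayes' theorem.

    P(E) is expanded with the law of total probability:
    P(E) = P(E|H) P(H) + P(E|~H) P(~H).
    """
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1.0 - p_h)
    return p_e_given_h * p_h / p_e

# A disease test: 1% prevalence, 95% sensitivity, 5% false-positive rate.
posterior = bayes_theorem(0.01, 0.95, 0.05)
print(f"P(disease | positive test) = {posterior:.3f}")  # prints 0.161
```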

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 36.64 |
| AI2 Reasoning Challenge (25-Shot) | 34.90 |
| HellaSwag (10-Shot)               | 51.99 |
| MMLU (5-Shot)                     | 34.89 |
| TruthfulQA (0-shot)               | 41.95 |
| Winogrande (5-shot)               | 52.64 |
| GSM8k (5-shot)                    |  3.49 |