---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0
- mlabonne/AlphaMonarch-7B
base_model:
- Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0
- mlabonne/AlphaMonarch-7B
model-index:
- name: MonarchCoder-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 68.52
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/MonarchCoder-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 87.3
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/MonarchCoder-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.65
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/MonarchCoder-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 61.21
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/MonarchCoder-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 80.19
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/MonarchCoder-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.13
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/MonarchCoder-7B
      name: Open LLM Leaderboard
language:
- en
library_name: transformers
---
# MonarchCoder-7B

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64e380b2e12618b261fa6ba0/ZBv_P0ujmFClHG14ZBSzb.jpeg)

MonarchCoder-7B is a slerp merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):

* [Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0](https://huggingface.co/Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0)
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)

The aim behind this merge is a single model that performs well at reasoning, conversation, and coding. AlphaMonarch performs amazingly well on reasoning and conversation tasks. Merging it with a strong coding model yielded MonarchCoder-7B, which performs well across the Open LLM, Nous, and HumanEval benchmarks, although [MonarchCoder-MoE-2x7B](https://huggingface.co/abideen/MonarchCoder-MoE-2x7B) still performs better than MonarchCoder-7B.

## 🏆 Evaluation results

| Metric                            | MonarchCoder-MoE-2x7B | MonarchCoder-7B | AlphaMonarch-7B |
|-----------------------------------|-----------------------|-----------------|-----------------|
| Avg.                              | 74.23                 | 71.17           | 75.99           |
| HumanEval                         | 41.15                 | 39.02           | 34.14           |
| HumanEval+                        | 29.87                 | 31.70           | 29.26           |
| MBPP                              | 40.60                 | *               | *               |
| AI2 Reasoning Challenge (25-Shot) | 70.99                 | 68.52           | 73.04           |
| HellaSwag (10-Shot)               | 87.99                 | 87.30           | 89.18           |
| MMLU (5-Shot)                     | 65.11                 | 64.65           | 64.40           |
| TruthfulQA (0-shot)               | 71.25                 | 61.21           | 77.91           |
| Winogrande (5-shot)               | 80.66                 | 80.19           | 84.69           |
| GSM8k (5-shot)                    | 69.37                 | 65.13           | 66.72           |
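
The leaderboard rows above come from the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), which runs EleutherAI's lm-evaluation-harness under the hood. A minimal sketch for re-running a single task locally (the exact task names, few-shot settings, and harness version used by the leaderboard may differ from what is shown here):

```python
# pip install lm-eval
# Minimal sketch: re-running one leaderboard task locally with
# lm-evaluation-harness. The task name and few-shot count below mirror
# the table above but are assumptions, not the leaderboard's exact setup.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=abideen/MonarchCoder-7B,dtype=float16",
    tasks=["arc_challenge"],  # AI2 Reasoning Challenge
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```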
## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0
        layer_range: [0, 32]
      - model: mlabonne/AlphaMonarch-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/AlphaMonarch-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
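
Here `t` is the SLERP interpolation factor swept across the 32 layers: `self_attn` weights follow the curve `[0, 0.5, 0.3, 0.7, 1]`, `mlp` weights follow the mirrored curve, and all remaining tensors use an even `0.5` blend, with `t = 0` keeping one model's weights and `t = 1` the other's. Below is a minimal sketch of the SLERP operation on a pair of weight tensors; mergekit's actual implementation adds normalization details, edge-case handling, and the per-layer filtering shown in the config:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (illustrative sketch)."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Treat each tensor as a direction on the unit sphere.
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors.
    omega = torch.arccos(torch.clamp(a_dir @ b_dir, -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1 - t) * a + t * b
    # Interpolate along the great circle connecting the two tensors.
    out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)

# t = 0 returns the first tensor, t = 1 the second, t = 0.5 an even blend.
```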
## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "abideen/MonarchCoder-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Build a text-generation pipeline that loads the model in fp16 across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
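
If you prefer to drive generation directly rather than through `pipeline`, a minimal equivalent sketch using `AutoModelForCausalLM` (the prompt here is just an illustrative example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "abideen/MonarchCoder-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
# apply_chat_template can tokenize the formatted chat directly and return tensors.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```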