---
language:
- en
license: cc-by-nc-nd-4.0
library_name: transformers
tags:
- moe
- merge
- medical
- mergekit
datasets:
- medmcqa
- cognitivecomputations/samantha-data
- jondurbin/bagel-v0.3
base_model:
- sethuiyer/Dr_Samantha_7b_mistral
- fblgit/UNA-TheBeagle-7b-v1
pipeline_tag: text-generation
model-index:
- name: MedleyMD
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 66.47
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/MedleyMD
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 86.06
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/MedleyMD
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.1
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/MedleyMD
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 52.46
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/MedleyMD
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 80.27
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/MedleyMD
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 68.99
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/MedleyMD
      name: Open LLM Leaderboard
---

# MedleyMD

![MedleyMD](https://huggingface.co/sethuiyer/MedleyMD/resolve/main/medley.webp)

MedleyMD is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):

* [sethuiyer/Dr_Samantha_7b_mistral](https://huggingface.co/sethuiyer/Dr_Samantha_7b_mistral)
* [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)

These models were chosen because `fblgit/UNA-TheBeagle-7b-v1` delivers excellent performance for a 7B-parameter model, while Dr. Samantha pairs medical knowledge (trained on USMLE databases and doctor-patient interactions) with philosophical, psychological, and relational understanding, scoring 68.82% on topics in the clinical and psychology domains.

## Benchmark

On a synthetic benchmark of 35 medical-diagnosis questions generated by GPT-4, with GPT-4 also serving as the evaluator, MedleyMD scored **96.25/100**.

Nous benchmark numbers will be available soon.

## 🧩 Configuration

```yaml
base_model: OpenPipe/mistral-ft-optimized-1227
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: sethuiyer/Dr_Samantha_7b_mistral
    positive_prompts: ["differential diagnosis", "Clinical Knowledge", "Medical Genetics", "Human Aging", "Human Sexuality", "College Medicine", "Anatomy", "College Biology", "High School Biology", "Professional Medicine", "Nutrition", "High School Psychology", "Professional Psychology", "Virology"]
  - source_model: fblgit/UNA-TheBeagle-7b-v1
    positive_prompts: ["How do you", "Explain the concept of", "Give an overview of", "Compare and contrast between", "Provide information about", "Help me understand", "Summarize", "Make a recommendation on", "chat", "math", "reason", "mathematics", "solve", "count", "python", "javascript", "programming", "algorithm", "tell me", "assistant"]
```
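With `gate_mode: hidden`, mergekit initializes each expert's router weights from hidden-state representations of that expert's `positive_prompts`; at inference, the router scores each token's hidden state against the experts and mixes their outputs. As a rough conceptual illustration only (not mergekit's actual implementation), the routing step amounts to a softmax over per-expert scores:

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(hidden, expert_protos):
    # Score the token's hidden state against each expert's prototype
    # direction (dot product), then normalize to mixing weights.
    logits = [sum(h * p for h, p in zip(hidden, proto)) for proto in expert_protos]
    return softmax(logits)

# Toy 4-dim hidden state and two illustrative expert prototypes
# (think "medical" vs. "general assistant" directions).
weights = route(
    [1.0, 0.2, -0.5, 0.3],
    [[0.9, 0.1, 0.0, 0.0],   # expert 0
     [0.0, 0.0, 1.0, 0.5]],  # expert 1
)
print(weights)  # the two weights sum to 1; expert 0 dominates here
```

The prototype vectors here are invented for the sketch; in the real merge they come from the base model's hidden states on the positive prompts.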

## GGUF

1. [medleymd.Q4_K_M](https://huggingface.co/sethuiyer/MedleyMD-GGUF/resolve/main/medleymd.Q4_K_M.gguf) [7.2GB]
2. [medleymd.Q5_K_M](https://huggingface.co/sethuiyer/MedleyMD-GGUF/resolve/main/medleymd.Q5_K_M.gguf) [9.13GB]

## Ollama

MedleyMD can be used in Ollama by running `ollama run stuehieyr/medleymd` in your terminal.

If you have limited computing resources, check out this [video](https://www.youtube.com/watch?v=Qa1h7ygwQq8) to learn how to run it on a Google Colab backend.

## Prompt format

This model uses the ChatML prompt format:

```
<|im_start|>system
You are Medley, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
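In the Usage section below, `apply_chat_template` produces this format automatically. For illustration, the template can also be reproduced by hand (`to_chatml` is a hypothetical helper, not part of the model's API):

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render a list of {"role", "content"} dicts as ChatML text."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Open the assistant turn so the model generates the reply.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are Medley, a helpful AI assistant."},
    {"role": "user", "content": "Hello"},
])
print(prompt)
```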

## 💻 Usage

```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "sethuiyer/MedleyMD"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.bfloat16, "load_in_4bit": True},
)

generation_kwargs = {
    "max_new_tokens": 512,
    "do_sample": True,
    "temperature": 0.7,
    "top_k": 50,
    "top_p": 0.95,  # top_p is a probability mass, so it must lie in (0, 1]
}

messages = [
    {"role": "system", "content": "You are a helpful AI assistant. Please use </s> when you want to end the answer."},
    {"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."},
]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, **generation_kwargs)
print(outputs[0]["generated_text"])
```

```text
A Mixture of Experts (Mixout) is a neural network architecture that combines the strengths of multiple expert networks to make a more accurate and robust prediction.
It is composed of a topmost gating network that assigns weights to each expert network based on their performance on past input samples.
The expert networks are trained independently, and the gating network learns to choose the best combination of these experts to make the final prediction.
Mixout demonstrates a stronger ability to handle complex data distributions and is more efficient in terms of training time and memory usage compared to a traditional ensemble approach.
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_sethuiyer__MedleyMD).

| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |69.89|
|AI2 Reasoning Challenge (25-Shot)|66.47|
|HellaSwag (10-Shot)              |86.06|
|MMLU (5-Shot)                    |65.10|
|TruthfulQA (0-shot)              |52.46|
|Winogrande (5-shot)              |80.27|
|GSM8k (5-shot)                   |68.99|