
---
language:
- en
license: cc-by-nc-nd-4.0
library_name: transformers
tags:
- moe
- merge
- medical
- mergekit
datasets:
- medmcqa
- cognitivecomputations/samantha-data
- jondurbin/bagel-v0.3
base_model:
- sethuiyer/Dr_Samantha_7b_mistral
- fblgit/UNA-TheBeagle-7b-v1
pipeline_tag: text-generation
model-index:
- name: MedleyMD
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 66.47
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/MedleyMD
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 86.06
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/MedleyMD
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.1
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/MedleyMD
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 52.46
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/MedleyMD
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 80.27
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/MedleyMD
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 68.99
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/MedleyMD
      name: Open LLM Leaderboard
---

# MedleyMD

MedleyMD is a Mixture of Experts (MoE) model made with LazyMergekit from the following models: sethuiyer/Dr_Samantha_7b_mistral and fblgit/UNA-TheBeagle-7b-v1.

These models were chosen because fblgit/UNA-TheBeagle-7b-v1 delivers excellent performance for a 7B-parameter model, while Dr_Samantha combines medical knowledge (trained on USMLE databases and doctor-patient interactions) with philosophical, psychological, and relational understanding, scoring 68.82% on topics related to the clinical domain and psychology.

## Benchmark

On a synthetic benchmark of 35 medical-diagnosis questions generated and evaluated by GPT-4, MedleyMD scored 96.25/100.

Nous Benchmark numbers will be available soon.

## 🧩 Configuration

```yaml
base_model: OpenPipe/mistral-ft-optimized-1227
gate_mode: hidden
dtype: bfloat16

experts:
  - source_model: sethuiyer/Dr_Samantha_7b_mistral
    positive_prompts: ["differential diagnosis", "Clinical Knowledge", "Medical Genetics", "Human Aging", "Human Sexuality", "College Medicine", "Anatomy", "College Biology", "High School Biology", "Professional Medicine", "Nutrition", "High School Psychology", "Professional Psychology", "Virology"]

  - source_model: fblgit/UNA-TheBeagle-7b-v1
    positive_prompts: ["How do you", "Explain the concept of", "Give an overview of", "Compare and contrast between", "Provide information about", "Help me understand", "Summarize", "Make a recommendation on", "chat", "math", "reason", "mathematics", "solve", "count", "python", "javascript", "programming", "algorithm", "tell me", "assistant"]
```

## GGUF

  1. medleymd.Q4_K_M [7.2GB]
  2. medleymd.Q5_K_M [9.13GB]

## Ollama

MedleyMD can be used in Ollama by running `ollama run stuehieyr/medleymd` in your terminal.

If you have limited computing resources, check out this video to learn how to run it on a Google Colab backend.
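To customize the system prompt or sampling parameters, a Modelfile along these lines should work (a sketch; the parameter value shown is illustrative, not the model's published default):

```
FROM stuehieyr/medleymd
PARAMETER temperature 0.7
SYSTEM "You are Medley, a helpful AI assistant."
```

Build and run it with `ollama create medley -f Modelfile` followed by `ollama run medley`.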

Prompt format:

This model uses the ChatML prompt format.

```
<|im_start|>system
You are Medley, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
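The template above can also be assembled by hand. This minimal helper (an illustration mirroring what `tokenizer.apply_chat_template(..., add_generation_prompt=True)` produces for ChatML-style templates; the function name is ours) shows the expected string:

```python
def to_chatml(messages):
    # Wrap each message in <|im_start|>role ... <|im_end|> markers,
    # then open an assistant turn so the model knows to generate.
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
             for m in messages]
    parts.append("<|im_start|>assistant")
    return "\n".join(parts)

messages = [
    {"role": "system", "content": "You are Medley, a helpful AI assistant."},
    {"role": "user", "content": "What are common causes of chest pain?"},
]
print(to_chatml(messages))
```

In practice, prefer `apply_chat_template`, which reads the template shipped with the tokenizer.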

## 💻 Usage

```bash
pip install -qU transformers bitsandbytes accelerate
```

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "sethuiyer/MedleyMD"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.bfloat16, "load_in_4bit": True},
)

generation_kwargs = {
    "max_new_tokens": 512,
    "do_sample": True,
    "temperature": 0.7,
    "top_k": 50,
    "top_p": 0.95,  # top_p is a probability mass and must be in (0, 1]
}

messages = [
    {"role": "system", "content": "You are a helpful AI assistant. Please use </s> when you want to end the answer."},
    {"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."},
]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, **generation_kwargs)
print(outputs[0]["generated_text"])
```
Example output:

> A Mixture of Experts (Mixout) is a neural network architecture that combines the strengths of multiple expert networks to make a more accurate and robust prediction. It is composed of a topmost gating network that assigns weights to each expert network based on their performance on past input samples. The expert networks are trained independently, and the gating network learns to choose the best combination of these experts to make the final prediction. Mixout demonstrates a stronger ability to handle complex data distributions and is more efficient in terms of training time and memory usage compared to a traditional ensemble approach.

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 69.89 |
| AI2 Reasoning Challenge (25-Shot) | 66.47 |
| HellaSwag (10-Shot)               | 86.06 |
| MMLU (5-Shot)                     | 65.10 |
| TruthfulQA (0-shot)               | 52.46 |
| Winogrande (5-shot)               | 80.27 |
| GSM8k (5-shot)                    | 68.99 |
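As a quick sanity check, the reported average is the arithmetic mean of the six task scores (a small script we added, not part of the original card):

```python
# Open LLM Leaderboard task scores for sethuiyer/MedleyMD.
scores = {
    "ARC (25-shot)": 66.47,
    "HellaSwag (10-shot)": 86.06,
    "MMLU (5-shot)": 65.10,
    "TruthfulQA (0-shot)": 52.46,
    "Winogrande (5-shot)": 80.27,
    "GSM8k (5-shot)": 68.99,
}

# The leaderboard's "Avg." column is the plain mean, rounded to 2 decimals.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 69.89
```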