---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- sft
license: apache-2.0
language:
- en
datasets:
- bio-nlp-umass/bioinstruct
pipeline_tag: text-generation
library_name: transformers
---

# khazarai/Bio-8B-it

## Model Description

Bio-8B-it is an 8B-parameter biomedical instruction-tuned language model built on top of Qwen3-8B. It was fine-tuned with supervised fine-tuning (SFT) using QLoRA via the PEFT framework.

This model is optimized for biomedical and clinical NLP instruction-following tasks, including:

- Biomedical question answering
- Clinical text summarization
- Information extraction
- Clinical trial eligibility assessment
- Differential diagnosis reasoning

## Base Model

- Base: Qwen3-8B
- Architecture: decoder-only Transformer
- Parameter count: 8B

## Fine-Tuning Method

- Technique: Supervised Fine-Tuning (SFT)
- Parameter-efficient tuning: QLoRA (PEFT)
- Base-model loading: 4-bit / 8-bit quantization during training
- Final merged model: 16-bit full-precision weights
- Training objective: instruction-following adaptation for biomedical tasks

QLoRA enables efficient fine-tuning by freezing the base weights and training low-rank adapters, which are later merged into the full model.
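The adapter-merge step can be illustrated with a minimal NumPy sketch (toy shapes and names, not the actual training code): a frozen weight matrix `W` is combined with trained low-rank factors `B` and `A`, scaled by `alpha / r`, to produce the merged full-precision weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; real LoRA setups typically use ranks around 16-64
d_out, d_in, r, alpha = 8, 8, 2, 16

W = rng.normal(size=(d_out, d_in))      # frozen base weight (fixed during SFT)
A = rng.normal(size=(r, d_in)) * 0.01   # trained low-rank adapter factors
B = rng.normal(size=(d_out, r)) * 0.01

# Merging folds the adapter back into the base weight: W' = W + (alpha / r) * B @ A
W_merged = W + (alpha / r) * B @ A

x = rng.normal(size=(d_in,))
# The merged matrix reproduces base-plus-adapter applied separately
y_adapter = W @ x + (alpha / r) * B @ (A @ x)
y_merged = W_merged @ x
print(np.allclose(y_adapter, y_merged))  # True
```

After the merge, inference needs no extra adapter weights, which is why the released model ships as plain 16-bit weights.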

## Dataset Overview

- Total samples: 25,000 instruction-response pairs
- Generation method: synthetic instruction-tuning data generated with GPT-4
- Inspired by: the Self-Instruct methodology
- Seed tasks: 80 manually constructed biomedical tasks

The dataset was expanded automatically by prompting GPT-4 with randomly selected seed examples to generate diverse biomedical instruction data.
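The Self-Instruct-style expansion described above can be sketched as a loop that samples a few seed tasks, builds an in-context prompt, and appends the generated example. The `call_gpt4` function and the seed examples below are hypothetical stand-ins; the real pipeline called the GPT-4 API.

```python
import random

# A few of the 80 manually written biomedical seed tasks (illustrative only)
seed_tasks = [
    {"instruction": "Summarize the following clinical note.", "input": "...", "output": "..."},
    {"instruction": "List the contraindications of aspirin.", "input": "", "output": "..."},
    {"instruction": "Extract all drug names from the sentence.", "input": "...", "output": "..."},
]

def call_gpt4(prompt: str) -> dict:
    """Hypothetical stub for a GPT-4 API call that returns a new
    instruction-response pair in the style of the in-context examples."""
    return {"instruction": "Explain the mechanism of action of metformin.",
            "input": "", "output": "Metformin reduces hepatic glucose production ..."}

def expand(seed_tasks, target_size, k=2, rng=random.Random(0)):
    """Self-Instruct-style loop: prompt the generator with k randomly
    chosen seed examples until the dataset reaches target_size."""
    dataset = list(seed_tasks)
    while len(dataset) < target_size:
        examples = rng.sample(seed_tasks, k)
        prompt = ("Write a new biomedical task in the style of:\n"
                  + "\n".join(e["instruction"] for e in examples))
        dataset.append(call_gpt4(prompt))
    return dataset

data = expand(seed_tasks, target_size=10)
print(len(data))  # 10
```

A real pipeline would also deduplicate and filter the generated examples before training.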

## Intended Use

This model is intended for:

- Biomedical NLP research
- Clinical text processing experiments
- Instruction-following biomedical assistants
- Academic evaluation on biomedical NLP tasks

## Out-of-Scope Use

This model is not intended for:

- Direct clinical decision-making
- Real-world medical diagnosis
- Prescribing medication
- Deployment in safety-critical healthcare systems

It should not replace licensed medical professionals.

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("khazarai/Bio-8B-it")
model = AutoModelForCausalLM.from_pretrained(
    "khazarai/Bio-8B-it",
    device_map={"": 0},  # place the whole model on GPU 0
)

question = """
Describe how to properly perform hand hygiene using an alcohol-based hand sanitizer.
"""

messages = [
    {"role": "user", "content": question}
]

# Build the chat prompt; enable_thinking=False disables Qwen3's thinking mode
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)

# Stream the generated answer token by token
_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=1400,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```

## Citation

If you use this model, please cite the original BioInstruct paper:

```bibtex
@article{Tran2024Bioinstruct,
    author = {Tran, Hieu and Yang, Zhichao and Yao, Zonghai and Yu, Hong},
    title = {BioInstruct: instruction tuning of large language models for biomedical natural language processing},
    journal = {Journal of the American Medical Informatics Association},
    year = {2024},
    doi = {10.1093/jamia/ocae122}
}
```