---
library_name: transformers
tags:
- qwen
- biomedical
- bioinformatics
- fine-tuned
- medical
- llm
license: apache-2.0
base_model:
- Qwen/Qwen2.5-1.5B
---

# Qwen2.5-1.5B Biomedical Fine-Tuned Model

This model is a version of **Qwen/Qwen2.5-1.5B** fine-tuned by **Dr. YMG** for biomedical and bioinformatics tasks.

---

## Model Details

### Model Description

This model is a domain-adapted, instruction-fine-tuned large language model specialized for biomedical and bioinformatics tasks.

- Developed by: Dr. YMG
- Model type: Causal Language Model (LLM)
- Language(s): English
- License: Apache 2.0
- Finetuned from model: Qwen/Qwen2.5-1.5B

### Model Sources

- Repository: https://huggingface.co/yashm/qwen25-15b-biomed-finetuned
- Base Model: https://huggingface.co/Qwen/Qwen2.5-1.5B

---

## Uses

### Direct Use

- Biomedical concept explanation
- Bioinformatics discussions
- Research assistance
- Literature summarization
- Gene expression & biomarker discussion

### Out-of-Scope Use

- Clinical diagnosis
- Medical treatment decisions
- Drug prescription
- Patient-specific advice

---

## Example Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch

MODEL_ID = "yashm/qwen25-15b-biomed-finetuned"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",           # spread layers across available devices
    torch_dtype=torch.bfloat16,  # use torch.float16 on GPUs without bf16 support
    trust_remote_code=True,
)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = "Explain gene expression in simple terms."
out = pipe(prompt, max_new_tokens=200)
print(out[0]["generated_text"])
```
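
If the fine-tune kept the base Qwen2.5 chat template (an assumption — check the repo's `tokenizer_config.json`), chat-style prompts follow the ChatML convention, which `tokenizer.apply_chat_template(..., add_generation_prompt=True)` produces automatically. A minimal sketch of that format, using a hypothetical helper `build_chatml_prompt`:

```python
# Sketch of Qwen-style ChatML prompt formatting. Assumes the fine-tune kept
# the base model's chat template; prefer tokenizer.apply_chat_template in practice.

def build_chatml_prompt(messages):
    """Render a list of {"role", "content"} dicts as a ChatML prompt string."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    parts.append("<|im_start|>assistant\n")  # open the assistant turn for generation
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful biomedical assistant."},
    {"role": "user", "content": "Explain gene expression in simple terms."},
]
prompt = build_chatml_prompt(messages)
print(prompt)
```

Passing a prompt formatted this way (rather than raw text) matters for instruction-tuned checkpoints, since generation quality degrades when the expected turn markers are missing.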

---

## Training Details

- Base model: Qwen/Qwen2.5-1.5B
- Method: LoRA (PEFT)
- Precision: BF16
- Quantization: 4-bit QLoRA

---

## Limitations

- May hallucinate facts
- Not medically validated
- Knowledge limited to its training data

---

## Disclaimer

For research and educational use only. Not for clinical decision-making.

---

## Author

Fine-tuned by Dr. YMG