Initialize project; model provided by the ModelHub XC community

Model: Mungert/MedScholar-1.5B-GGUF
Source: Original Platform
Commit 87e5700e91 by ModelHub XC, 2026-04-13 01:46:58 +08:00
24 changed files with 335 additions and 0 deletions

.gitattributes vendored Normal file (70 lines)

@@ -0,0 +1,70 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-f16.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-f16_q8_0.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-bf16_q8_0.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-f16_q6_k.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-bf16_q6_k.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-f16_q4_k.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-bf16_q4_k.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-q3_k_l.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-q4_k_l.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-q5_k_l.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-q6_k_l.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-q3_k_m.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-q3_k_s.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-q4_k_s.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-q5_k_m.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-q5_k_s.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-q6_k_m.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-q4_0.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-q4_1.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-q4_0_l.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-q4_1_l.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-q5_0.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-q5_1.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-q5_0_l.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-q5_1_l.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-iq3_xs.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-iq3_xxs.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-iq3_s.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-iq3_m.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-iq4_xs.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-iq4_nl.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-imatrix.gguf filter=lfs diff=lfs merge=lfs -text
MedScholar-1.5B-bf16.gguf filter=lfs diff=lfs merge=lfs -text


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6bca2f05107d39b1f2c21220e6cb16de5c06462a13c879b4a54888ff21592288
size 3093667072


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:db03f2961de0f876ca742bd5ea692964e027e8b14db95852eb30d1ed185ae24f
size 2298879232


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:951f484e68b89a98797808395362cbb945e41219d3d3b6c8f9c134af64abda77
size 2298879232


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:be7fe7007402c8fcfa2661e851bc81d8448f5f956321e999777ff0b6bdbce56b
size 2065952


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f71e0964a019050050c63ba07600b5b263ef830446a1dd822c513d17dc381545
size 781901376


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ca56ba355bef6f78924c81adf1966b83df9b206da08af3fa140dfc683afca22d
size 774565440


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2b30f0e2441d2c6fbfbfccd21711f0a7b3c6ccb124d8865392feb8f8fa111721
size 709088832


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:89b20236c05a46e8040b9d2783c879838003360a013c87c7d00216f43857808d
size 695240256


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:18ef55d1507f6a3cabb858210e6fd2273e03466ba01d6ae16813613c09a0d750
size 936329280


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a432545f5e3c196ed63da466e024e26fe94bf21695571d7de140dab6b854e7e0
size 895729728


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aeb93a06dfa351e9e927c416431033f6ec7af211c6726d9ec1e26cc8ad4e081a
size 824176704


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a83c9993f2e9ace1fa1b1520854e8f0289b10f68c55be2dfd29445a25ce5c5d1
size 789741120


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7a844fd2eeebb1f75841255a6ca4100510a9d1a341d93aa8023f9ff5a8e26af4
size 874786368


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:88f5c05c60ab5e510d32067219bbaee27e2383225056c3b37cafc3457a0019bb
size 971259456


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:89d08a53d21497c34aed908751325bccad020b9403871cc924769c13cd42d874
size 987816000


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:147f3de48c3f98daced623e41ef74ad5c92d8658bb666daf9740cea13f4a1bf1
size 947388480


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:68cf3b03e54b0e6447399c11a090289a6dd1dbb42bcbbe56cd84b34267b13a2d
size 1067732544


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:70684e9920fb25f1ec630180fd8cab4c49858408c4e811df9d11850a0bbad34d
size 1164205632


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f53ee1d97db0c5b7e6dc4fd0b3c12f8375ccc06107df9182f938c766f6444faf
size 1126928448


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e54a2062799b65f1278a70f5461f7124f21d6660f426753957ba0a32f230a081
size 1111887936


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f3f1d9709dd1f0e7dccd3bd9f1a3961cf1e66dead3d377583bae2e463bc4b480
size 1272737856


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:de059edfef117fc914be7a4d791f49e6d33f1219ff2dfbb1820c8c8a68a23197
size 1646570752

README.md Normal file (199 lines)

@@ -0,0 +1,199 @@
---
base_model: unsloth/qwen2.5-1.5b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
datasets:
- miriad/miriad-4.4M
---
# <span style="color: #7FFF7F;">MedScholar-1.5B GGUF Models</span>
## <span style="color: #7F7FFF;">Model Generation Details</span>
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`66625a59`](https://github.com/ggerganov/llama.cpp/commit/66625a59a54d0a7504eda4c4e83abfcd83ba1cf8).
---
<a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;">
Click here to get info on choosing the right GGUF model format
</a>
---
<!--Begin Original Model Card-->
# 🧠 MedScholar-1.5B
<img src="https://huggingface.co/yasserrmd/MedScholar-1.5B/resolve/main/banner.png" width="800"/>
**MedScholar-1.5B** is a compact, instruction-aligned medical question-answering model fine-tuned on 1 million randomly selected examples from the [MIRIAD-4.4M dataset](https://huggingface.co/datasets/miriad/miriad-4.4M). It is based on the [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) model and designed for efficient, in-context clinical knowledge exploration — **not diagnosis**.
---
## 📌 Model Details
- **Base Model**: [Qwen2.5-1.5B-Instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct-unsloth-bnb-4bit)
- **Fine-tuning Dataset**: [MIRIAD-4.4M](https://huggingface.co/datasets/miriad/miriad-4.4M)
- **Samples Used**: 1,000,000 examples randomly selected from the full set
- **Prompt Style**: Minimal QA format (see below)
- **Training Framework**: [Unsloth](https://github.com/unslothai/unsloth) with QLoRA
- **License**: Apache-2.0 (inherits from base model); dataset is ODC-By 1.0
---
## 📋 Prompt Format
```text
### Question:
What is the role of LDL in cardiovascular health?
### Answer:
LDL plays a central role in the development of atherosclerosis by delivering cholesterol to peripheral tissues...
```
* The model expects the prompt to **end with `### Answer:`**, and will generate only the answer text.
* Do **not include the answer in the prompt** during inference.
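The two rules above can be captured in a small helper for programmatic use (an illustrative sketch; the `build_prompt` name is my own, not part of the model's API):

```python
def build_prompt(question: str) -> str:
    """Assemble a MedScholar-style prompt.

    The prompt must end with '### Answer:' so the model generates
    only the answer text; never pre-fill the answer itself.
    """
    return f"### Question:\n{question}\n### Answer:\n"


prompt = build_prompt("What is the role of LDL in cardiovascular health?")
print(prompt.endswith("### Answer:\n"))  # True
```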
---
## 🔒 Dataset Consent & License
This model was fine-tuned using **randomly selected 1 million examples** from the [MIRIAD-4.4M dataset](https://huggingface.co/datasets/miriad/miriad-4.4M), which is released under the [ODC-By 1.0 License](https://opendatacommons.org/licenses/by/1-0/).
> **The MIRIAD dataset is intended exclusively for academic research and educational exploration.**
> As stated by its authors:
>
> *“The outputs generated by models trained or fine-tuned on this dataset must not be used for medical diagnosis or decision-making involving real individuals.”*
---
## ⚠️ Intended Use
**This model is for research, educational, and exploration purposes only. It is not a medical device and must not be used to provide clinical advice, diagnosis, or treatment.**
---
## 💡 Example Inference (Python)
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="yasserrmd/MedScholar-1.5B", device=0)
prompt = """### Question:
What are the symptoms of acute pancreatitis?
### Answer:
"""
response = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(response[0]["generated_text"])
```
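By default, `pipeline` returns the prompt together with the completion, so a small post-processing step helps isolate just the answer text. The sketch below makes that assumption; `extract_answer` is a hypothetical helper, demonstrated on a mocked generation so it runs without downloading the model:

```python
def extract_answer(generated_text: str, prompt: str) -> str:
    """Strip the echoed prompt and stop at any follow-on section header."""
    if generated_text.startswith(prompt):
        generated_text = generated_text[len(prompt):]
    return generated_text.split("###")[0].strip()


# Mocked generation in lieu of a real pipe(...) call:
fake_prompt = "### Question:\nWhat are the symptoms of acute pancreatitis?\n### Answer:\n"
fake_output = fake_prompt + "Severe upper abdominal pain, nausea, and vomiting.\n### Question:\n..."
print(extract_answer(fake_output, fake_prompt))  # Severe upper abdominal pain, nausea, and vomiting.
```

Alternatively, passing `return_full_text=False` to the pipeline call asks `transformers` to return only the completion.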
---
## 🤝 Acknowledgements
* MIRIAD Dataset by Zheng et al. (2025) [https://huggingface.co/datasets/miriad/miriad-4.4M](https://huggingface.co/datasets/miriad/miriad-4.4M)
* Qwen2.5 by Alibaba [https://huggingface.co/Qwen](https://huggingface.co/Qwen)
* Training infrastructure: [Unsloth](https://github.com/unslothai/unsloth)
---
## 📄 Citation
```bibtex
@misc{yasser2025medscholar,
title = {MedScholar-1.5B: Compact medical QA model fine-tuned on MIRIAD},
author = {Mohamed Yasser},
year = {2025},
howpublished = {\url{https://huggingface.co/yasserrmd/MedScholar-1.5B}},
}
```
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
<!--End Original Model Card-->
---
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
The full open-source code for the Quantum Network Monitor service is available in my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models, if you want to do it yourself: [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder)
💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (Experimental CPU-only)
### **What I'm Testing**
I'm pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
- Automated **Nmap security scans**
- **Quantum-readiness checks**
- **Network Monitoring tasks**
🟡 **TestLLM**: the current experimental model (llama.cpp on 2 CPU threads in a Hugging Face Docker space):
- **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**). No token limit, as the cost is low.
- 🔧 **Help wanted!** If you're into **edge-device AI**, let's collaborate!
### **Other Assistants**
🟢 **TurboLLM** (uses **gpt-4.1-mini**):
- **It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.**
- **Create custom cmd processors to run .NET code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security audits**
- **Penetration testing** (Nmap/Metasploit)
🔵 **HugLLM** (latest open-source models):
- 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.
### 💡 **Example commands you could test**:
1. `"Give me info on my website's SSL certificate"`
2. `"Check if my server is using quantum-safe encryption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .NET code on. This is a very flexible and powerful feature. Use with caution!
### Final Word
I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.
If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
I'm also open to job opportunities or sponsorship.
Thank you! 😊