From 87e5700e914568e8e182cd8d689ad88932680165 Mon Sep 17 00:00:00 2001 From: ModelHub XC Date: Mon, 13 Apr 2026 01:46:58 +0800 Subject: [PATCH] Initialize project; model provided by the ModelHub XC community MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Model: Mungert/MedScholar-1.5B-GGUF Source: Original Platform --- .gitattributes | 70 ++++++++++++ MedScholar-1.5B-bf16.gguf | 3 + MedScholar-1.5B-bf16_q8_0.gguf | 3 + MedScholar-1.5B-f16_q8_0.gguf | 3 + MedScholar-1.5B-imatrix.gguf | 3 + MedScholar-1.5B-iq3_m.gguf | 3 + MedScholar-1.5B-iq3_s.gguf | 3 + MedScholar-1.5B-iq3_xs.gguf | 3 + MedScholar-1.5B-iq3_xxs.gguf | 3 + MedScholar-1.5B-iq4_nl.gguf | 3 + MedScholar-1.5B-iq4_xs.gguf | 3 + MedScholar-1.5B-q3_k_m.gguf | 3 + MedScholar-1.5B-q3_k_s.gguf | 3 + MedScholar-1.5B-q4_0.gguf | 3 + MedScholar-1.5B-q4_1.gguf | 3 + MedScholar-1.5B-q4_k_m.gguf | 3 + MedScholar-1.5B-q4_k_s.gguf | 3 + MedScholar-1.5B-q5_0.gguf | 3 + MedScholar-1.5B-q5_1.gguf | 3 + MedScholar-1.5B-q5_k_m.gguf | 3 + MedScholar-1.5B-q5_k_s.gguf | 3 + MedScholar-1.5B-q6_k_m.gguf | 3 + MedScholar-1.5B-q8_0.gguf | 3 + README.md | 199 +++++++++++++++++++++++++++++++++ 24 files changed, 335 insertions(+) create mode 100644 .gitattributes create mode 100644 MedScholar-1.5B-bf16.gguf create mode 100644 MedScholar-1.5B-bf16_q8_0.gguf create mode 100644 MedScholar-1.5B-f16_q8_0.gguf create mode 100644 MedScholar-1.5B-imatrix.gguf create mode 100644 MedScholar-1.5B-iq3_m.gguf create mode 100644 MedScholar-1.5B-iq3_s.gguf create mode 100644 MedScholar-1.5B-iq3_xs.gguf create mode 100644 MedScholar-1.5B-iq3_xxs.gguf create mode 100644 MedScholar-1.5B-iq4_nl.gguf create mode 100644 MedScholar-1.5B-iq4_xs.gguf create mode 100644 MedScholar-1.5B-q3_k_m.gguf create mode 100644 MedScholar-1.5B-q3_k_s.gguf create mode 100644 MedScholar-1.5B-q4_0.gguf create mode 100644 
MedScholar-1.5B-q4_1.gguf create mode 100644 MedScholar-1.5B-q4_k_m.gguf create mode 100644 MedScholar-1.5B-q4_k_s.gguf create mode 100644 MedScholar-1.5B-q5_0.gguf create mode 100644 MedScholar-1.5B-q5_1.gguf create mode 100644 MedScholar-1.5B-q5_k_m.gguf create mode 100644 MedScholar-1.5B-q5_k_s.gguf create mode 100644 MedScholar-1.5B-q6_k_m.gguf create mode 100644 MedScholar-1.5B-q8_0.gguf create mode 100644 README.md diff --git a/.gitattributes b/.gitattributes new file mode 100644 index 0000000..3ea74ec --- /dev/null +++ b/.gitattributes @@ -0,0 +1,70 @@ +*.7z filter=lfs diff=lfs merge=lfs -text +*.arrow filter=lfs diff=lfs merge=lfs -text +*.bin filter=lfs diff=lfs merge=lfs -text +*.bz2 filter=lfs diff=lfs merge=lfs -text +*.ckpt filter=lfs diff=lfs merge=lfs -text +*.ftz filter=lfs diff=lfs merge=lfs -text +*.gz filter=lfs diff=lfs merge=lfs -text +*.h5 filter=lfs diff=lfs merge=lfs -text +*.joblib filter=lfs diff=lfs merge=lfs -text +*.lfs.* filter=lfs diff=lfs merge=lfs -text +*.mlmodel filter=lfs diff=lfs merge=lfs -text +*.model filter=lfs diff=lfs merge=lfs -text +*.msgpack filter=lfs diff=lfs merge=lfs -text +*.npy filter=lfs diff=lfs merge=lfs -text +*.npz filter=lfs diff=lfs merge=lfs -text +*.onnx filter=lfs diff=lfs merge=lfs -text +*.ot filter=lfs diff=lfs merge=lfs -text +*.parquet filter=lfs diff=lfs merge=lfs -text +*.pb filter=lfs diff=lfs merge=lfs -text +*.pickle filter=lfs diff=lfs merge=lfs -text +*.pkl filter=lfs diff=lfs merge=lfs -text +*.pt filter=lfs diff=lfs merge=lfs -text +*.pth filter=lfs diff=lfs merge=lfs -text +*.rar filter=lfs diff=lfs merge=lfs -text +*.safetensors filter=lfs diff=lfs merge=lfs -text +saved_model/**/* filter=lfs diff=lfs merge=lfs -text +*.tar.* filter=lfs diff=lfs merge=lfs -text +*.tar filter=lfs diff=lfs merge=lfs -text +*.tflite filter=lfs diff=lfs merge=lfs -text +*.tgz filter=lfs diff=lfs merge=lfs -text +*.wasm filter=lfs diff=lfs merge=lfs -text +*.xz filter=lfs diff=lfs merge=lfs -text +*.zip 
filter=lfs diff=lfs merge=lfs -text +*.zst filter=lfs diff=lfs merge=lfs -text +*tfevents* filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-f16.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-f16_q8_0.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-bf16_q8_0.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-f16_q6_k.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-bf16_q6_k.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-f16_q4_k.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-bf16_q4_k.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-q3_k_l.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-q4_k_l.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-q5_k_l.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-q6_k_l.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-q3_k_m.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-q3_k_s.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-q4_k_s.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-q5_k_m.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-q5_k_s.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-q6_k_m.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-q8_0.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-q4_0.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-q4_1.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-q4_0_l.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-q4_1_l.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-q5_0.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-q5_1.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-q5_0_l.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-q5_1_l.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-iq3_xs.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-iq3_xxs.gguf filter=lfs diff=lfs 
merge=lfs -text +MedScholar-1.5B-iq3_s.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-iq3_m.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-iq4_xs.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-iq4_nl.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-imatrix.gguf filter=lfs diff=lfs merge=lfs -text +MedScholar-1.5B-bf16.gguf filter=lfs diff=lfs merge=lfs -text diff --git a/MedScholar-1.5B-bf16.gguf b/MedScholar-1.5B-bf16.gguf new file mode 100644 index 0000000..d02f32b --- /dev/null +++ b/MedScholar-1.5B-bf16.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6bca2f05107d39b1f2c21220e6cb16de5c06462a13c879b4a54888ff21592288 +size 3093667072 diff --git a/MedScholar-1.5B-bf16_q8_0.gguf b/MedScholar-1.5B-bf16_q8_0.gguf new file mode 100644 index 0000000..cf963fd --- /dev/null +++ b/MedScholar-1.5B-bf16_q8_0.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:db03f2961de0f876ca742bd5ea692964e027e8b14db95852eb30d1ed185ae24f +size 2298879232 diff --git a/MedScholar-1.5B-f16_q8_0.gguf b/MedScholar-1.5B-f16_q8_0.gguf new file mode 100644 index 0000000..fe0fbf2 --- /dev/null +++ b/MedScholar-1.5B-f16_q8_0.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:951f484e68b89a98797808395362cbb945e41219d3d3b6c8f9c134af64abda77 +size 2298879232 diff --git a/MedScholar-1.5B-imatrix.gguf b/MedScholar-1.5B-imatrix.gguf new file mode 100644 index 0000000..0b40617 --- /dev/null +++ b/MedScholar-1.5B-imatrix.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:be7fe7007402c8fcfa2661e851bc81d8448f5f956321e999777ff0b6bdbce56b +size 2065952 diff --git a/MedScholar-1.5B-iq3_m.gguf b/MedScholar-1.5B-iq3_m.gguf new file mode 100644 index 0000000..523be63 --- /dev/null +++ b/MedScholar-1.5B-iq3_m.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f71e0964a019050050c63ba07600b5b263ef830446a1dd822c513d17dc381545 +size 781901376 diff 
--git a/MedScholar-1.5B-iq3_s.gguf b/MedScholar-1.5B-iq3_s.gguf new file mode 100644 index 0000000..2eb3014 --- /dev/null +++ b/MedScholar-1.5B-iq3_s.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca56ba355bef6f78924c81adf1966b83df9b206da08af3fa140dfc683afca22d +size 774565440 diff --git a/MedScholar-1.5B-iq3_xs.gguf b/MedScholar-1.5B-iq3_xs.gguf new file mode 100644 index 0000000..f4f192c --- /dev/null +++ b/MedScholar-1.5B-iq3_xs.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2b30f0e2441d2c6fbfbfccd21711f0a7b3c6ccb124d8865392feb8f8fa111721 +size 709088832 diff --git a/MedScholar-1.5B-iq3_xxs.gguf b/MedScholar-1.5B-iq3_xxs.gguf new file mode 100644 index 0000000..7408d16 --- /dev/null +++ b/MedScholar-1.5B-iq3_xxs.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:89b20236c05a46e8040b9d2783c879838003360a013c87c7d00216f43857808d +size 695240256 diff --git a/MedScholar-1.5B-iq4_nl.gguf b/MedScholar-1.5B-iq4_nl.gguf new file mode 100644 index 0000000..910b368 --- /dev/null +++ b/MedScholar-1.5B-iq4_nl.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:18ef55d1507f6a3cabb858210e6fd2273e03466ba01d6ae16813613c09a0d750 +size 936329280 diff --git a/MedScholar-1.5B-iq4_xs.gguf b/MedScholar-1.5B-iq4_xs.gguf new file mode 100644 index 0000000..72909a0 --- /dev/null +++ b/MedScholar-1.5B-iq4_xs.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a432545f5e3c196ed63da466e024e26fe94bf21695571d7de140dab6b854e7e0 +size 895729728 diff --git a/MedScholar-1.5B-q3_k_m.gguf b/MedScholar-1.5B-q3_k_m.gguf new file mode 100644 index 0000000..aa6a621 --- /dev/null +++ b/MedScholar-1.5B-q3_k_m.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aeb93a06dfa351e9e927c416431033f6ec7af211c6726d9ec1e26cc8ad4e081a +size 824176704 diff --git a/MedScholar-1.5B-q3_k_s.gguf b/MedScholar-1.5B-q3_k_s.gguf new file mode 100644 index 
0000000..899fd04 --- /dev/null +++ b/MedScholar-1.5B-q3_k_s.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a83c9993f2e9ace1fa1b1520854e8f0289b10f68c55be2dfd29445a25ce5c5d1 +size 789741120 diff --git a/MedScholar-1.5B-q4_0.gguf b/MedScholar-1.5B-q4_0.gguf new file mode 100644 index 0000000..36cec6a --- /dev/null +++ b/MedScholar-1.5B-q4_0.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a844fd2eeebb1f75841255a6ca4100510a9d1a341d93aa8023f9ff5a8e26af4 +size 874786368 diff --git a/MedScholar-1.5B-q4_1.gguf b/MedScholar-1.5B-q4_1.gguf new file mode 100644 index 0000000..2a384bc --- /dev/null +++ b/MedScholar-1.5B-q4_1.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:88f5c05c60ab5e510d32067219bbaee27e2383225056c3b37cafc3457a0019bb +size 971259456 diff --git a/MedScholar-1.5B-q4_k_m.gguf b/MedScholar-1.5B-q4_k_m.gguf new file mode 100644 index 0000000..b567416 --- /dev/null +++ b/MedScholar-1.5B-q4_k_m.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:89d08a53d21497c34aed908751325bccad020b9403871cc924769c13cd42d874 +size 987816000 diff --git a/MedScholar-1.5B-q4_k_s.gguf b/MedScholar-1.5B-q4_k_s.gguf new file mode 100644 index 0000000..9156f8f --- /dev/null +++ b/MedScholar-1.5B-q4_k_s.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:147f3de48c3f98daced623e41ef74ad5c92d8658bb666daf9740cea13f4a1bf1 +size 947388480 diff --git a/MedScholar-1.5B-q5_0.gguf b/MedScholar-1.5B-q5_0.gguf new file mode 100644 index 0000000..5fe0c00 --- /dev/null +++ b/MedScholar-1.5B-q5_0.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:68cf3b03e54b0e6447399c11a090289a6dd1dbb42bcbbe56cd84b34267b13a2d +size 1067732544 diff --git a/MedScholar-1.5B-q5_1.gguf b/MedScholar-1.5B-q5_1.gguf new file mode 100644 index 0000000..dde9b95 --- /dev/null +++ b/MedScholar-1.5B-q5_1.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 
+oid sha256:70684e9920fb25f1ec630180fd8cab4c49858408c4e811df9d11850a0bbad34d +size 1164205632 diff --git a/MedScholar-1.5B-q5_k_m.gguf b/MedScholar-1.5B-q5_k_m.gguf new file mode 100644 index 0000000..3820fe4 --- /dev/null +++ b/MedScholar-1.5B-q5_k_m.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f53ee1d97db0c5b7e6dc4fd0b3c12f8375ccc06107df9182f938c766f6444faf +size 1126928448 diff --git a/MedScholar-1.5B-q5_k_s.gguf b/MedScholar-1.5B-q5_k_s.gguf new file mode 100644 index 0000000..0bee618 --- /dev/null +++ b/MedScholar-1.5B-q5_k_s.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e54a2062799b65f1278a70f5461f7124f21d6660f426753957ba0a32f230a081 +size 1111887936 diff --git a/MedScholar-1.5B-q6_k_m.gguf b/MedScholar-1.5B-q6_k_m.gguf new file mode 100644 index 0000000..b32cd1f --- /dev/null +++ b/MedScholar-1.5B-q6_k_m.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3f1d9709dd1f0e7dccd3bd9f1a3961cf1e66dead3d377583bae2e463bc4b480 +size 1272737856 diff --git a/MedScholar-1.5B-q8_0.gguf b/MedScholar-1.5B-q8_0.gguf new file mode 100644 index 0000000..9cb3dd8 --- /dev/null +++ b/MedScholar-1.5B-q8_0.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:de059edfef117fc914be7a4d791f49e6d33f1219ff2dfbb1820c8c8a68a23197 +size 1646570752 diff --git a/README.md b/README.md new file mode 100644 index 0000000..62eedd0 --- /dev/null +++ b/README.md @@ -0,0 +1,199 @@ +--- +base_model: unsloth/qwen2.5-1.5b-unsloth-bnb-4bit +tags: +- text-generation-inference +- transformers +- unsloth +- qwen2 +license: apache-2.0 +language: +- en +datasets: +- miriad/miriad-4.4M +--- + +# MedScholar-1.5B GGUF Models + + +## Model Generation Details + +This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`66625a59`](https://github.com/ggerganov/llama.cpp/commit/66625a59a54d0a7504eda4c4e83abfcd83ba1cf8). 
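Since the card pins the exact llama.cpp commit used to produce these quants, a quick sanity check after downloading one is to inspect the fixed 8-byte GGUF preamble: the 4-byte magic `GGUF` followed by a little-endian `uint32` format version. The helper below is an illustrative sketch, not part of the original model card, and the filename in the usage comment is just one of the quants in this repo:

```python
import struct

def read_gguf_version(header: bytes) -> int:
    """Return the GGUF format version from the first 8 bytes of a file.

    Every GGUF file starts with the 4-byte magic b"GGUF" followed by a
    little-endian uint32 version number.
    """
    if len(header) < 8 or header[:4] != b"GGUF":
        raise ValueError("not a GGUF file: bad magic")
    (version,) = struct.unpack_from("<I", header, 4)
    return version

# Example with a quant from this repo (filename illustrative):
# with open("MedScholar-1.5B-q4_k_m.gguf", "rb") as f:
#     print(read_gguf_version(f.read(8)))
```

If a download was truncated or corrupted by a non-LFS clone (the pointer files here are only 3 lines of text), this check fails immediately instead of at model-load time.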
---

# 🧠 MedScholar-1.5B

**MedScholar-1.5B** is a compact, instruction-aligned medical question-answering model fine-tuned on 1 million randomly selected examples from the [MIRIAD-4.4M dataset](https://huggingface.co/datasets/miriad/miriad-4.4M). It is based on the [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) model and is designed for efficient, in-context clinical knowledge exploration, **not diagnosis**.

---

## 📌 Model Details

- **Base Model**: [Qwen2.5-1.5B-Instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct-unsloth-bnb-4bit)
- **Fine-tuning Dataset**: [MIRIAD-4.4M](https://huggingface.co/datasets/miriad/miriad-4.4M)
- **Samples Used**: 1,000,000 examples randomly selected from the full set
- **Prompt Style**: Minimal QA format (see below)
- **Training Framework**: [Unsloth](https://github.com/unslothai/unsloth) with QLoRA
- **License**: Apache-2.0 (inherited from the base model); the dataset is ODC-By 1.0

---

## 📋 Prompt Format

```text
### Question:
What is the role of LDL in cardiovascular health?

### Answer:
LDL plays a central role in the development of atherosclerosis by delivering cholesterol to peripheral tissues...
```

* The model expects the prompt to **end with `### Answer:`** and will generate only the answer text.
* Do **not** include the answer in the prompt during inference.

---

## 🔒 Dataset Consent & License

This model was fine-tuned on **1 million randomly selected examples** from the [MIRIAD-4.4M dataset](https://huggingface.co/datasets/miriad/miriad-4.4M), which is released under the [ODC-By 1.0 License](https://opendatacommons.org/licenses/by/1-0/).
> **The MIRIAD dataset is intended exclusively for academic research and educational exploration.**
> As stated by its authors:
>
> *"The outputs generated by models trained or fine-tuned on this dataset must not be used for medical diagnosis or decision-making involving real individuals."*

---

## ⚠️ Intended Use

**This model is for research, educational, and exploration purposes only. It is not a medical device and must not be used to provide clinical advice, diagnosis, or treatment.**

---

## 💡 Example Inference (Python)

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="yasserrmd/MedScholar-1.5B", device=0)

prompt = """### Question:
What are the symptoms of acute pancreatitis?

### Answer:
"""

response = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(response[0]["generated_text"])
```

---

## 🤝 Acknowledgements

* MIRIAD Dataset by Zheng et al. (2025) – [https://huggingface.co/datasets/miriad/miriad-4.4M](https://huggingface.co/datasets/miriad/miriad-4.4M)
* Qwen2.5 by Alibaba – [https://huggingface.co/Qwen](https://huggingface.co/Qwen)
* Training infrastructure: [Unsloth](https://github.com/unslothai/unsloth)

---

## 📄 Citation

```bibtex
@misc{yasser2025medscholar,
  title = {MedScholar-1.5B: Compact medical QA model fine-tuned on MIRIAD},
  author = {Mohamed Yasser},
  year = {2025},
  howpublished = {\url{https://huggingface.co/yasserrmd/MedScholar-1.5B}},
}
```

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
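One practical wrinkle with the `transformers` example above: by default the text-generation pipeline returns the prompt followed by the completion (unless `return_full_text=False` is passed), and a small model may start a new `### Question:` block after its answer. A small post-processing helper, hypothetical and not part of the original card, isolates just the answer text:

```python
def extract_answer(generated_text: str, prompt: str) -> str:
    """Strip the echoed prompt and keep only the first answer block."""
    # The pipeline echoes the prompt before the completion by default.
    if generated_text.startswith(prompt):
        answer = generated_text[len(prompt):]
    else:
        answer = generated_text
    # Truncate if the model begins a new question after answering.
    return answer.split("### Question:")[0].strip()

# Usage with the pipeline output from the example above:
# print(extract_answer(response[0]["generated_text"], prompt))
```

The split on `### Question:` acts as a poor man's stop sequence; passing `stop_strings` or generating with `return_full_text=False` are alternatives if your `transformers` version supports them.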
---

# 🚀 If you find these models useful

Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:

👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)

The full open-source code for the Quantum Network Monitor service is available in my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models in [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder), if you want to do it yourself.

💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (experimental, CPU-only)

### **What I'm Testing**

I'm pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
  - Automated **Nmap security scans**
  - **Quantum-readiness checks**
  - **Network monitoring tasks**

🟡 **TestLLM** – the current experimental model (llama.cpp on 2 CPU threads in a Hugging Face Docker space):
- ✅ **Zero-configuration setup**
- ⏳ ~30s load time (slow inference, but **no API costs**), and no token limits since the cost is low
- 🔧 **Help wanted!** If you're into **edge-device AI**, let's collaborate!

### **Other Assistants**

🟢 **TurboLLM** – uses **gpt-4.1-mini**:
- **Performs very well, but unfortunately OpenAI charges per token, so token usage is limited**
- **Create custom cmd processors to run .NET code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security audits**
- **Penetration testing** (Nmap/Metasploit)

🔵 **HugLLM** – latest open-source models:
- 🌐 Runs on the Hugging Face Inference API.
  Performs pretty well using the latest models hosted on Novita.

### 💡 **Example commands you could test**:

1. `"Give me info on my website's SSL certificate"`
2. `"Check if my server is using quantum-safe encryption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .NET code on. This is a very flexible and powerful feature. Use with caution!

### Final Word

I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.

If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.

I'm also open to job opportunities or sponsorship.

Thank you! 😊