Compare commits


10 Commits

| Author | SHA1 | Message | Date |
|:-------|:-----|:--------|:-----|
| team mradermacher | aae34989b7 | auto-patch README.md | 2026-01-24 02:44:30 +00:00 |
| team mradermacher | 7a303b2cb0 | auto-patch README.md | 2026-01-23 07:12:04 +00:00 |
| team mradermacher | 7ed0bf27f4 | uploaded from marco | 2026-01-23 06:00:18 +00:00 |
| team mradermacher | 21312177b4 | uploaded from marco | 2026-01-23 05:56:18 +00:00 |
| team mradermacher | 1f9d60c6f0 | uploaded from marco | 2026-01-23 05:55:17 +00:00 |
| team mradermacher | ec914d72f8 | auto-patch README.md | 2026-01-23 05:54:24 +00:00 |
| team mradermacher | 49e92bda0a | uploaded from marco | 2026-01-23 05:52:16 +00:00 |
| team mradermacher | 992cb87585 | uploaded from marco | 2026-01-23 05:50:52 +00:00 |
| team mradermacher | b52689b8b6 | uploaded from marco | 2026-01-23 05:48:41 +00:00 |
| team mradermacher | 3aafd7bb85 | uploaded from marco | 2026-01-23 05:47:52 +00:00 |
9 changed files with 112 additions and 0 deletions

.gitattributes (vendored, +7)

@@ -38,3 +38,10 @@ CURE-MED-7B.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
CURE-MED-7B.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
CURE-MED-7B.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
CURE-MED-7B.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
CURE-MED-7B.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
CURE-MED-7B.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
CURE-MED-7B.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
CURE-MED-7B.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
CURE-MED-7B.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
CURE-MED-7B.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
CURE-MED-7B.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text

CURE-MED-7B.IQ4_XS.gguf (new file, +3)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f3c7f19f1917103e8fdb4bf9850e128971c56f6c52a896342dd92d897e5b8539
size 4250299616
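
Each of the new `.gguf` entries in this compare is a Git LFS pointer rather than the model file itself: three lines giving the pointer spec version, the SHA-256 oid of the stored object, and its size in bytes. As a minimal sketch (not part of this repo; standard library only), a downloaded object can be checked against the pointer's oid and size like this:

```python
import hashlib
import os

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so a multi-GB GGUF never has to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# oid and size copied from the CURE-MED-7B.IQ4_XS.gguf pointer above.
path = "CURE-MED-7B.IQ4_XS.gguf"
assert os.path.getsize(path) == 4250299616, "size mismatch"
assert sha256_of(path) == "f3c7f19f1917103e8fdb4bf9850e128971c56f6c52a896342dd92d897e5b8539", "oid mismatch"
```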

CURE-MED-7B.Q3_K_L.gguf (new file, +3)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b58ded87cabcb3c6499dec2c64aa6e8952d9aa7d4d4c3022ed2a0d936a689182
size 4088460512

CURE-MED-7B.Q3_K_M.gguf (new file, +3)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:42e71b5f3ba4d5641b67dc9a5095b4121efcf79eea27e7d49a5a43f990d5f465
size 3808392416

CURE-MED-7B.Q3_K_S.gguf (new file, +3)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d55f7f42356984d9116bdcbed099c20258a0e46d466bc99c74b9fa85d28f06cc
size 3492369632

CURE-MED-7B.Q4_K_M.gguf (new file, +3)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:04c2af698c6c84c7f8e0d672f0c91fa062e72c7c71c2fb3a9f2ff603c32f4320
size 4683074784

CURE-MED-7B.Q5_K_M.gguf (new file, +3)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:74162c3e89376cf38a3a9239d12bb0b5d9b39084b109ae3553fa8ee9bbff9c8d
size 5444832480

CURE-MED-7B.Q5_K_S.gguf (new file, +3)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1511ceb799da245a15bc7461fde5dad895b4496dce5c8288999f64652dfc5490
size 5315177696

README.md (+84)

@@ -1,3 +1,36 @@
---
base_model: Aikyam-Lab/CURE-MED-7B
datasets:
- Aikyam-Lab/CUREMED-BENCH
language:
- am
- bn
- fr
- ha
- hi
- ja
- ko
- es
- sw
- th
- tr
- vi
- yo
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- reasoning
- text-generation
- medical-ai
- multilingual-ai
- healthcare
- LLMs
---

## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
@@ -7,3 +40,54 @@
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Aikyam-Lab/CURE-MED-7B

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#CURE-MED-7B-GGUF).***

weighted/imatrix quants are available at https://huggingface.co/mradermacher/CURE-MED-7B-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
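
As a concrete, hypothetical illustration (not from the original README): one common route is to pull a single quant with `huggingface_hub` and load it with the `llama-cpp-python` bindings. The filename below is the Q4_K_M quant from the table that follows; swap in any other file from this repo.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch one quant file from this repo (cached locally by huggingface_hub).
model_path = hf_hub_download(
    repo_id="mradermacher/CURE-MED-7B-GGUF",
    filename="CURE-MED-7B.Q4_K_M.gguf",
)

# Load the GGUF and run a short completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("List three common symptoms of dehydration.", max_tokens=128)
print(out["choices"][0]["text"])
```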

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CURE-MED-7B-GGUF/resolve/main/CURE-MED-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/CURE-MED-7B-GGUF/resolve/main/CURE-MED-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/CURE-MED-7B-GGUF/resolve/main/CURE-MED-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CURE-MED-7B-GGUF/resolve/main/CURE-MED-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/CURE-MED-7B-GGUF/resolve/main/CURE-MED-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/CURE-MED-7B-GGUF/resolve/main/CURE-MED-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CURE-MED-7B-GGUF/resolve/main/CURE-MED-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CURE-MED-7B-GGUF/resolve/main/CURE-MED-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/CURE-MED-7B-GGUF/resolve/main/CURE-MED-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/CURE-MED-7B-GGUF/resolve/main/CURE-MED-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CURE-MED-7B-GGUF/resolve/main/CURE-MED-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/CURE-MED-7B-GGUF/resolve/main/CURE-MED-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
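
One way to read the Size/GB column (a back-of-the-envelope sketch, not from the original README): the f16 row is 16 bits per weight by definition, so it pins down the parameter count, and every other row's effective bits per weight follows by proportion.

```python
# f16 = 16 bits/weight, so 15.3 GB implies roughly 15.3e9 * 8 / 16 ≈ 7.65e9 parameters.
n_params = 15.3e9 * 8 / 16

for name, size_gb in [("Q2_K", 3.1), ("Q4_K_M", 4.8), ("Q8_0", 8.2)]:
    bpw = size_gb * 1e9 * 8 / n_params
    print(f"{name}: ~{bpw:.1f} bits/weight")  # ~3.2, ~5.0, ~8.6
```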

Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.

<!-- end -->