Initialize project; model provided by the ModelHub XC community
Model: mradermacher/gemma-3-medical-finetune-i1-GGUF
Source: Original Platform
60
.gitattributes
vendored
Normal file
@@ -0,0 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
imatrix.dat filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
gemma-3-medical-finetune.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
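Every rule above has the same shape: a path pattern followed by attribute flags. `filter=lfs diff=lfs merge=lfs` routes matching files through Git LFS, and `-text` turns off Git's text/EOL conversion for them. As an illustrative sketch (not part of git or any repo tooling), one rule splits into a pattern and its attribute list like this:

```python
def split_rule(line: str):
    """Split a .gitattributes rule into its path pattern and attribute flags."""
    pattern, *attrs = line.split()
    return pattern, attrs

# One of the rules from the file above
pattern, attrs = split_rule("*.safetensors filter=lfs diff=lfs merge=lfs -text")
print(pattern, attrs)
```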
86
README.md
Normal file
@@ -0,0 +1,86 @@
---
base_model: pedromoreira22/gemma-3-medical-finetune
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/pedromoreira22/gemma-3-medical-finetune

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#gemma-3-medical-finetune-i1-GGUF).***

static quants are available at https://huggingface.co/mradermacher/gemma-3-medical-finetune-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.

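All quants in this repository are single files, but larger GGUF uploads are sometimes split into parts that must be joined byte-for-byte before use. The sketch below simulates that with two dummy files (the `.partXofY` names here are hypothetical, chosen only to show the mechanic):

```shell
# Simulate a two-part GGUF split with dummy files (names are hypothetical)
printf 'first-half-' > demo.gguf.part1of2
printf 'second-half' > demo.gguf.part2of2

# Multi-part GGUFs are rejoined with a plain byte-wise concatenation
cat demo.gguf.part1of2 demo.gguf.part2of2 > demo.gguf
cat demo.gguf   # prints: first-half-second-half
```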
## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-IQ1_S.gguf) | i1-IQ1_S | 0.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-IQ1_M.gguf) | i1-IQ1_M | 0.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.8 |  |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.8 |  |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-IQ2_S.gguf) | i1-IQ2_S | 0.8 |  |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-IQ2_M.gguf) | i1-IQ2_M | 0.8 |  |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-IQ3_S.gguf) | i1-IQ3_S | 0.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 |  |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-IQ3_M.gguf) | i1-IQ3_M | 0.8 |  |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.8 |  |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-Q4_0.gguf) | i1-Q4_0 | 0.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-Q4_1.gguf) | i1-Q4_1 | 0.9 |  |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.9 |  |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.0 |  |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-medical-finetune-i1-GGUF/resolve/main/gemma-3-medical-finetune.i1-Q6_K.gguf) | i1-Q6_K | 1.1 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
3
gemma-3-medical-finetune.i1-IQ1_M.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7aa5110f66ab346f68d8099c5efe92532c8a98a0faad2e2bb01936ede459ae5e
size 643486496
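Each model file in this commit is stored as a Git LFS pointer like the one above: three `key value` lines giving the spec version, a sha256 object id, and the byte size, while the actual blob lives in LFS storage. A minimal illustrative reader for that format (not part of git-lfs itself):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer committed for gemma-3-medical-finetune.i1-IQ1_M.gguf
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:7aa5110f66ab346f68d8099c5efe92532c8a98a0faad2e2bb01936ede459ae5e\n"
    "size 643486496\n"
)
info = parse_lfs_pointer(pointer)
print(info["size"])  # only this ~130-byte pointer enters git history, not the 0.6 GB file
```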
3
gemma-3-medical-finetune.i1-IQ1_S.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1eddf89a491d527ee28ceee43ecdb3e2753c0e861430b461774f12d840afe40f
size 639194144
3
gemma-3-medical-finetune.i1-IQ2_M.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d13bd921ddd0a350a2ac801b3d5bd248af752fa4709f109e0e1be66f822a8129
size 669784352
3
gemma-3-medical-finetune.i1-IQ2_S.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d73cf74a5ac068223d1fb65cb5783d69164999c58f09ba26aa2c675bce6a1e7b
size 664061216
3
gemma-3-medical-finetune.i1-IQ2_XS.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a75939e1c25a73863ac228e4cd46235ee51f1adce1e84e58a0b377988c52e3cf
size 657322016
3
gemma-3-medical-finetune.i1-IQ2_XXS.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5a2cf66e9ccd0df3d33ecde056d6cc3cf653c5a50608893d5e538cae557b710b
size 650640416
3
gemma-3-medical-finetune.i1-IQ3_M.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:da7da0850e85c8d241df06f78ae4a10c02a9601dabffc4413ccb626f9de697d0
size 697061408
3
gemma-3-medical-finetune.i1-IQ3_S.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3ea6015e0031cf11f16a4065f8570e6539837e1586e5d5a25a530bee7de35a10
size 689815328
3
gemma-3-medical-finetune.i1-IQ3_XS.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:48ac6b865d7d65c89f6b68798fab35d099795e99942b018cb301e6c91cad8ea4
size 689815328
3
gemma-3-medical-finetune.i1-IQ3_XXS.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:63bf18fe3ac23b22289655a7df2f5a556c622a2cc69276f582e256985a96032f
size 680110880
3
gemma-3-medical-finetune.i1-IQ4_NL.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:11aa5d630eeccdfb175c2800f788cf5298bce953108fe46aabdd610bad3f22ab
size 721863968
3
gemma-3-medical-finetune.i1-IQ4_XS.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f0310045a3b017738440c1baa9f83036c5b85a6515e65c4cab3e58a8ab334f4c
size 714435872
3
gemma-3-medical-finetune.i1-Q2_K.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c1d7ee9526af18438ccd2dfd3006abb0aa9b1a4a4e57273a13db311ff2bf7cfa
size 689815328
3
gemma-3-medical-finetune.i1-Q2_K_S.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:06e73f4eb924244da90254139fff7eda952a2571179ccb745f915309b57ff11a
size 671272736
3
gemma-3-medical-finetune.i1-Q3_K_L.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ebfe524787adab16c360edd5e3c0ee31b5ad5952e1a110ff7b3ce2e16e78e01
size 751576352
3
gemma-3-medical-finetune.i1-Q3_K_M.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2a01976a03c178878622c459a02018137fe234b055453f1e177184b1c087801c
size 722416928
3
gemma-3-medical-finetune.i1-Q3_K_S.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:04f9dbf3d127b9e277a3c9ee956a8c04be0bdae9a710c3ecfafb97cb0e4488e6
size 688856864
3
gemma-3-medical-finetune.i1-Q4_0.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:76d064d8008bbb4ade40d80654efa72be4b3b2531c854de1c12dc435dc377180
size 721919264
3
gemma-3-medical-finetune.i1-Q4_1.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1f6f60d0e885e9aa6eaeb845cdb90f90b820f8af3ca2a04c7197466aa6fe359e
size 764036384
3
gemma-3-medical-finetune.i1-Q4_K_M.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:da511d8743fefe8406113fa61cbb46eb814b9469c0ee7854d58ba667e955e40a
size 806059040
3
gemma-3-medical-finetune.i1-Q4_K_S.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8875a1fb1ab2bf61404682483bf407427a40746e32fd02a0370ef243e0e5fa5b
size 780993824
3
gemma-3-medical-finetune.i1-Q5_K_M.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:df2f3e6019fcfa359ac6e9f1183601679672e53a56702a22e5b0122922dea116
size 851346464
3
gemma-3-medical-finetune.i1-Q5_K_S.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a3ec2c86dddfa43b4b2a19f5c688454b6c85222f15bc010e785a5d735c3a5beb
size 836400416
3
gemma-3-medical-finetune.i1-Q6_K.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9f530c53cd22df433ce104f7cb1390f118dea1ab060871355af2233bb20d9a0e
size 1011739424
3
imatrix.dat
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4708f16461d92f904a30a03c475cb16ea368155cec792b1f910788cc3719c693
size 1430407