From d1a4dec8b4b04f3170d98efc7335eb1c466343eb Mon Sep 17 00:00:00 2001
From: team mradermacher
Date: Tue, 28 Oct 2025 08:38:27 +0000
Subject: [PATCH] auto-patch README.md

---
 README.md | 73 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)

diff --git a/README.md b/README.md
index f463fd9..a7c3268 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,22 @@
+---
+base_model: hasanbasbunar/Lodos-24B-Instruct-2510
+language:
+- en
+library_name: transformers
+license: apache-2.0
+mradermacher:
+  readme_rev: 1
+quantized_by: mradermacher
+tags:
+- text-generation-inference
+- transformers
+- unsloth
+- mistral3
+- trl
+- sft
+---
+## About
+
@@ -7,3 +26,57 @@ weighted/imatrix quants of https://huggingface.co/hasanbasbunar/Lodos-24B-Instruct-2510
+
+
+
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Lodos-24B-Instruct-2510-i1-GGUF).***
+
+static quants are available at https://huggingface.co/mradermacher/Lodos-24B-Instruct-2510-GGUF
+
+**This is a vision model - mmproj files (if any) will be in the [static repository](https://huggingface.co/mradermacher/Lodos-24B-Instruct-2510-GGUF).**
+
+## Usage
+
+If you are unsure how to use GGUF files, refer to one of [TheBloke's
+READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
+more details, including on how to concatenate multi-part files.
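+
+For a quick start, here is a minimal Python sketch using the
+`huggingface_hub` and `llama-cpp-python` packages (an assumption, not a
+requirement of this repo); the Q4_K_S file is picked purely as an example:
+
+```python
+from huggingface_hub import hf_hub_download
+from llama_cpp import Llama
+
+# Download a single-file quant from this repository. Multi-part files
+# (*.part1of2 etc.) would need to be concatenated first; none of the
+# quants listed below are split.
+model_path = hf_hub_download(
+    repo_id="mradermacher/Lodos-24B-Instruct-2510-i1-GGUF",
+    filename="Lodos-24B-Instruct-2510.i1-Q4_K_S.gguf",
+)
+
+llm = Llama(model_path=model_path, n_ctx=4096)
+result = llm.create_chat_completion(
+    messages=[{"role": "user", "content": "Hello, who are you?"}],
+    max_tokens=128,
+)
+print(result["choices"][0]["message"]["content"])
+```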
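+
+The imatrix file listed under "Provided Quants" below can also be used to
+create quant types not offered here. A sketch, assuming a local llama.cpp
+build with its `llama-quantize` tool on the PATH and a full-precision GGUF
+conversion of the base model that you have prepared yourself:
+
+```python
+import subprocess
+from huggingface_hub import hf_hub_download
+
+# Fetch the imatrix file from this repository.
+imatrix = hf_hub_download(
+    repo_id="mradermacher/Lodos-24B-Instruct-2510-i1-GGUF",
+    filename="Lodos-24B-Instruct-2510.imatrix.gguf",
+)
+
+# "Lodos-24B-Instruct-2510.f16.gguf" is a placeholder for your own
+# conversion of the base model; IQ2_XS is just one example target type.
+subprocess.run(
+    ["llama-quantize", "--imatrix", imatrix,
+     "Lodos-24B-Instruct-2510.f16.gguf",
+     "Lodos-24B-Instruct-2510.i1-IQ2_XS.gguf", "IQ2_XS"],
+    check=True,
+)
+```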
+
+## Provided Quants
+
+(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
+
+| Link | Type | Size/GB | Notes |
+|:-----|:-----|--------:|:------|
+| [GGUF](https://huggingface.co/mradermacher/Lodos-24B-Instruct-2510-i1-GGUF/resolve/main/Lodos-24B-Instruct-2510.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
+| [GGUF](https://huggingface.co/mradermacher/Lodos-24B-Instruct-2510-i1-GGUF/resolve/main/Lodos-24B-Instruct-2510.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate |
+| [GGUF](https://huggingface.co/mradermacher/Lodos-24B-Instruct-2510-i1-GGUF/resolve/main/Lodos-24B-Instruct-2510.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | |
+| [GGUF](https://huggingface.co/mradermacher/Lodos-24B-Instruct-2510-i1-GGUF/resolve/main/Lodos-24B-Instruct-2510.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality |
+| [GGUF](https://huggingface.co/mradermacher/Lodos-24B-Instruct-2510-i1-GGUF/resolve/main/Lodos-24B-Instruct-2510.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better |
+| [GGUF](https://huggingface.co/mradermacher/Lodos-24B-Instruct-2510-i1-GGUF/resolve/main/Lodos-24B-Instruct-2510.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
+| [GGUF](https://huggingface.co/mradermacher/Lodos-24B-Instruct-2510-i1-GGUF/resolve/main/Lodos-24B-Instruct-2510.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
+| [GGUF](https://huggingface.co/mradermacher/Lodos-24B-Instruct-2510-i1-GGUF/resolve/main/Lodos-24B-Instruct-2510.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | |
+| [GGUF](https://huggingface.co/mradermacher/Lodos-24B-Instruct-2510-i1-GGUF/resolve/main/Lodos-24B-Instruct-2510.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better |
+| [GGUF](https://huggingface.co/mradermacher/Lodos-24B-Instruct-2510-i1-GGUF/resolve/main/Lodos-24B-Instruct-2510.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | |
+| [GGUF](https://huggingface.co/mradermacher/Lodos-24B-Instruct-2510-i1-GGUF/resolve/main/Lodos-24B-Instruct-2510.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality |
+| [GGUF](https://huggingface.co/mradermacher/Lodos-24B-Instruct-2510-i1-GGUF/resolve/main/Lodos-24B-Instruct-2510.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/Lodos-24B-Instruct-2510-i1-GGUF/resolve/main/Lodos-24B-Instruct-2510.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K |
+
+Here is a handy graph by ikawrakow comparing some lower-quality quant
+types (lower is better):
+
+![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
+
+And here are Artefact2's thoughts on the matter:
+https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
+
+## FAQ / Model Request
+
+See https://huggingface.co/mradermacher/model_requests for some answers to
+questions you might have and/or if you want some other model quantized.
+
+## Thanks
+
+I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
+me use its servers and providing upgrades to my workstation to enable
+this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss)
+for giving me access to his private supercomputer, enabling me to provide
+many more imatrix quants, at much higher quality, than I would otherwise
+be able to.
+