commit 2b2abdda3c825d6434646291543b2513643b4612
Author: ModelHub XC
Date:   Mon Apr 13 16:21:03 2026 +0800

    Initialize project; model provided by the ModelHub XC community

    Model: mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF
    Source: Original Platform

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..ee2ac59
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,60 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.imatrix.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Thinking.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text

diff --git a/LFM2.5-1.2B-Thinking.i1-IQ1_M.gguf b/LFM2.5-1.2B-Thinking.i1-IQ1_M.gguf
new file mode 100644
index 0000000..058ffc2
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-IQ1_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:950f812f1023770d160082e4c97b88c0363ff0061b23f95453b07f87fc2973db
+size 327145216

diff --git a/LFM2.5-1.2B-Thinking.i1-IQ1_S.gguf b/LFM2.5-1.2B-Thinking.i1-IQ1_S.gguf
new file mode 100644
index 0000000..32e55fe
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-IQ1_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8982aeaeac42b92337d8ca1de4bbb20adf75eff02f038e1d3f7c6ecd4eef7339
+size 304387840

diff --git a/LFM2.5-1.2B-Thinking.i1-IQ2_M.gguf b/LFM2.5-1.2B-Thinking.i1-IQ2_M.gguf
new file mode 100644
index 0000000..fedaf31
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-IQ2_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cc6873f68c536a5d5e489f27345297e5b2ffebd6bb52fbcdd94c02239b979ac3
+size 434132736

diff --git a/LFM2.5-1.2B-Thinking.i1-IQ2_S.gguf b/LFM2.5-1.2B-Thinking.i1-IQ2_S.gguf
new file mode 100644
index 0000000..4d1c078
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-IQ2_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:87b732b57ad5cb8aa4ca71dede8dbadc9824d54429e51d4ef5a7e474efc580ad
+size 403789568

diff --git a/LFM2.5-1.2B-Thinking.i1-IQ2_XS.gguf b/LFM2.5-1.2B-Thinking.i1-IQ2_XS.gguf
new file mode 100644
index 0000000..166eab8
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-IQ2_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e54535d1a15e1ccb595800917aac0d78ade1d35d9fff6d62006061015da54950
+size 396203776

diff --git a/LFM2.5-1.2B-Thinking.i1-IQ2_XXS.gguf b/LFM2.5-1.2B-Thinking.i1-IQ2_XXS.gguf
new file mode 100644
index 0000000..15452a1
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-IQ2_XXS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4a05ab73882874c55b0a63809374da00a003274f2ddcc3da158855c2a02b527b
+size 365074176

diff --git a/LFM2.5-1.2B-Thinking.i1-IQ3_M.gguf b/LFM2.5-1.2B-Thinking.i1-IQ3_M.gguf
new file mode 100644
index 0000000..01047f8
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-IQ3_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4d2f54f6215df92e65424b795ac8ee5d053247a927f4840c731a62db395c9b85
+size 566793984

diff --git a/LFM2.5-1.2B-Thinking.i1-IQ3_S.gguf b/LFM2.5-1.2B-Thinking.i1-IQ3_S.gguf
new file mode 100644
index 0000000..47f7f95
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-IQ3_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:41f643514be1218d691551c9c4abedd9d09448e8912c5b938885117f1b24e325
+size 558159616

diff --git a/LFM2.5-1.2B-Thinking.i1-IQ3_XS.gguf b/LFM2.5-1.2B-Thinking.i1-IQ3_XS.gguf
new file mode 100644
index 0000000..d22f389
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-IQ3_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c2858257660ea0964901b7f148c61b728979c20ffdfa7983b805e64e1165083
+size 537810688

diff --git a/LFM2.5-1.2B-Thinking.i1-IQ3_XXS.gguf b/LFM2.5-1.2B-Thinking.i1-IQ3_XXS.gguf
new file mode 100644
index 0000000..6b4ca60
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-IQ3_XXS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2716b457c6fb4d7a46fa472c18ef3c76e4163089010d3fa344235c52d32c8b60
+size 490985216

diff --git a/LFM2.5-1.2B-Thinking.i1-IQ4_NL.gguf b/LFM2.5-1.2B-Thinking.i1-IQ4_NL.gguf
new file mode 100644
index 0000000..3d2ea2a
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-IQ4_NL.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:99e0eeec3d877a510a077dcc03c55edf30878ca65fc71dfbecf49e8922ececfa
+size 695752448

diff --git a/LFM2.5-1.2B-Thinking.i1-IQ4_XS.gguf b/LFM2.5-1.2B-Thinking.i1-IQ4_XS.gguf
new file mode 100644
index 0000000..e4f5678
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-IQ4_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c3d45811281454a34c7d95460f2811332865367c7c377e4d79f7a68bb61ea8a5
+size 663377664

diff --git a/LFM2.5-1.2B-Thinking.i1-Q2_K.gguf b/LFM2.5-1.2B-Thinking.i1-Q2_K.gguf
new file mode 100644
index 0000000..d257587
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-Q2_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e6a520d2941f2d1e5292dd00f700cc52673217e3e5ed805ef41818bc664d79e0
+size 483399424

diff --git a/LFM2.5-1.2B-Thinking.i1-Q2_K_S.gguf b/LFM2.5-1.2B-Thinking.i1-Q2_K_S.gguf
new file mode 100644
index 0000000..e95b9e0
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-Q2_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:64da1e799e88330b534a83759019a704704971267255e3760436809a95a24d63
+size 460805888

diff --git a/LFM2.5-1.2B-Thinking.i1-Q3_K_L.gguf b/LFM2.5-1.2B-Thinking.i1-Q3_K_L.gguf
new file mode 100644
index 0000000..163da8b
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-Q3_K_L.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4d5e9ffc4d5cf35e04274381fc008e5fc719ccd585209bf6604dbf9e3ad311ef
+size 635475712

diff --git a/LFM2.5-1.2B-Thinking.i1-Q3_K_M.gguf b/LFM2.5-1.2B-Thinking.i1-Q3_K_M.gguf
new file mode 100644
index 0000000..8726f80
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-Q3_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7ed91a182607274e207e48c0215081371482154787b169388136126aaae7fae3
+size 600348416

diff --git a/LFM2.5-1.2B-Thinking.i1-Q3_K_S.gguf b/LFM2.5-1.2B-Thinking.i1-Q3_K_S.gguf
new file mode 100644
index 0000000..83e51f1
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-Q3_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f21585bc1860d89faba539b99527e24e55d1c0286ea6dffac670fa44837dfb55
+size 558159616

diff --git a/LFM2.5-1.2B-Thinking.i1-Q4_0.gguf b/LFM2.5-1.2B-Thinking.i1-Q4_0.gguf
new file mode 100644
index 0000000..d96bed1
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-Q4_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b8c1b42e231376e7a071c1d6cd9279505d54674a784f06ad1aee2cf158f94f8d
+size 697849600

diff --git a/LFM2.5-1.2B-Thinking.i1-Q4_1.gguf b/LFM2.5-1.2B-Thinking.i1-Q4_1.gguf
new file mode 100644
index 0000000..0cee2d7
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-Q4_1.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b3b22537fbcc1f53f68abc892d1a295091a3b7bb6c8b8830a5289ba32cd7dc9
+size 760502016

diff --git a/LFM2.5-1.2B-Thinking.i1-Q4_K_M.gguf b/LFM2.5-1.2B-Thinking.i1-Q4_K_M.gguf
new file mode 100644
index 0000000..52da93f
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-Q4_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:16ab232582cde589e6ea0ceb447bda800f2af6bf93dbf0951df8bfaa3845e070
+size 730896128

diff --git a/LFM2.5-1.2B-Thinking.i1-Q4_K_S.gguf b/LFM2.5-1.2B-Thinking.i1-Q4_K_S.gguf
new file mode 100644
index 0000000..ce9d567
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-Q4_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:32b78a9a04ca1bfa42e62cc25cea00131397a5eceb744626b41c40b0ee8ea7c0
+size 700471040

diff --git a/LFM2.5-1.2B-Thinking.i1-Q5_K_M.gguf b/LFM2.5-1.2B-Thinking.i1-Q5_K_M.gguf
new file mode 100644
index 0000000..4e30e1a
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-Q5_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bdf0a9c86243cdd283482fb8e41c09181373faeac277069f75861e144a0bc183
+size 843355904

diff --git a/LFM2.5-1.2B-Thinking.i1-Q5_K_S.gguf b/LFM2.5-1.2B-Thinking.i1-Q5_K_S.gguf
new file mode 100644
index 0000000..8b64be2
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-Q5_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e1b2418f07a61d105afd5b45779b3fc70831d2e498130d61275d191089fb6f7f
+size 825251584

diff --git a/LFM2.5-1.2B-Thinking.i1-Q6_K.gguf b/LFM2.5-1.2B-Thinking.i1-Q6_K.gguf
new file mode 100644
index 0000000..ad79038
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.i1-Q6_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c3f31792bf8cea0e536b6dc95eb531d8b217196a9310839632774f1404d6db50
+size 962844416

diff --git a/LFM2.5-1.2B-Thinking.imatrix.gguf b/LFM2.5-1.2B-Thinking.imatrix.gguf
new file mode 100644
index 0000000..258bf5c
--- /dev/null
+++ b/LFM2.5-1.2B-Thinking.imatrix.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bdb2e65ee961f13c450e19524a2768e7a87815067832450da495ebd577eef50e
+size 1161536

diff --git a/README.md b/README.md
new file mode 100644
index 0000000..96bf8bf
--- /dev/null
+++ b/README.md
@@ -0,0 +1,98 @@
+---
+base_model: LiquidAI/LFM2.5-1.2B-Thinking
+language:
+- en
+- ar
+- zh
+- fr
+- de
+- ja
+- ko
+- es
+library_name: transformers
+license: other
+license_link: LICENSE
+license_name: lfm1.0
+mradermacher:
+  readme_rev: 1
+quantized_by: mradermacher
+tags:
+- liquid
+- lfm2.5
+- edge
+---
+## About
+
+weighted/imatrix quants of https://huggingface.co/LiquidAI/LFM2.5-1.2B-Thinking
+
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#LFM2.5-1.2B-Thinking-i1-GGUF).***
+
+static quants are available at https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-GGUF
+
+## Usage
+
+If you are unsure how to use GGUF files, refer to one of [TheBloke's
+READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
+more details, including on how to concatenate multi-part files.
+
+## Provided Quants
+
+(sorted by size, not necessarily quality.
+IQ-quants are often preferable over similar-sized non-IQ quants)
+
+| Link | Type | Size/GB | Notes |
+|:-----|:-----|--------:|:------|
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-IQ1_S.gguf) | i1-IQ1_S | 0.4 | for the desperate |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-IQ1_M.gguf) | i1-IQ1_M | 0.4 | mostly desperate |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.5 | |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.5 | |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-IQ2_S.gguf) | i1-IQ2_S | 0.5 | |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-IQ2_M.gguf) | i1-IQ2_M | 0.5 | |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.6 | very low quality |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-Q2_K.gguf) | i1-Q2_K | 0.6 | IQ3_XXS probably better |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.6 | lower quality |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.6 | |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-IQ3_S.gguf) | i1-IQ3_S | 0.7 | beats Q3_K* |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.7 | IQ3_XS probably better |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-IQ3_M.gguf) | i1-IQ3_M | 0.7 | |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.7 | IQ3_S probably better |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.7 | IQ3_M probably better |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.8 | |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.8 | prefer IQ4_XS |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-Q4_0.gguf) | i1-Q4_0 | 0.8 | fast, low quality |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.8 | optimal size/speed/quality |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.8 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-Q4_1.gguf) | i1-Q4_1 | 0.9 | |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.9 | |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.9 | |
+| [GGUF](https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-Q6_K.gguf) | i1-Q6_K | 1.1 | practically like static Q6_K |
+
+Here is a handy graph by ikawrakow comparing some lower-quality quant
+types (lower is better):
+
+![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
+
+And here are Artefact2's thoughts on the matter:
+https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
+
+## FAQ / Model Request
+
+See https://huggingface.co/mradermacher/model_requests for some answers to
+questions you might have and/or if you want some other model quantized.
+
+## Thanks
+
+I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
+me use its servers and providing upgrades to my workstation to enable
+this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
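The download links in the quant table all follow the Hub's `resolve/main` URL pattern. As a convenience, a small Python sketch for building those direct-download URLs; the repo ID and filename scheme come from this README, while the helper name `quant_url` is my own:

```python
# Sketch: build direct-download URLs for the quants listed in the table above.
# REPO and the "<base>.<quant>.gguf" naming come from this README; quant_url
# is an illustrative helper, not part of any official API.
REPO = "mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF"
BASE = "LFM2.5-1.2B-Thinking"


def quant_url(quant: str) -> str:
    """Return the huggingface.co resolve/main URL for one quant file."""
    return f"https://huggingface.co/{REPO}/resolve/main/{BASE}.{quant}.gguf"


print(quant_url("i1-Q4_K_M"))
# -> https://huggingface.co/mradermacher/LFM2.5-1.2B-Thinking-i1-GGUF/resolve/main/LFM2.5-1.2B-Thinking.i1-Q4_K_M.gguf
```

The resulting URL can be fetched with any HTTP client (or via `huggingface_hub`) and the file then loaded with a GGUF-capable runtime such as llama.cpp.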
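Each model file in this commit is stored as a Git LFS pointer with three fields (`version`, `oid sha256:...`, `size`). A minimal Python sketch for checking a downloaded blob against its pointer; the pointer values below are copied from the Q4_K_M entry in this commit, and the function names are illustrative:

```python
import hashlib
import os


def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


def verify_file(path: str, pointer: dict) -> bool:
    """Check a downloaded blob against the size and sha256 in its pointer."""
    if os.path.getsize(path) != int(pointer["size"]):
        return False
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return "sha256:" + digest.hexdigest() == pointer["oid"]


# Pointer contents for LFM2.5-1.2B-Thinking.i1-Q4_K_M.gguf, as committed above.
pointer_text = """version https://git-lfs.github.com/spec/v1
oid sha256:16ab232582cde589e6ea0ceb447bda800f2af6bf93dbf0951df8bfaa3845e070
size 730896128"""

pointer = parse_lfs_pointer(pointer_text)
print(pointer["size"])  # -> 730896128 (expected file size in bytes)
```

After downloading a quant, `verify_file("LFM2.5-1.2B-Thinking.i1-Q4_K_M.gguf", pointer)` should return `True` if the transfer was complete and uncorrupted.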