diff --git a/README.md b/README.md
index 28bfd4e..1ea3440 100644
--- a/README.md
+++ b/README.md
@@ -39,10 +39,30 @@ more details, including on how to concatenate multi-part files.
 
 | Link | Type | Size/GB | Notes |
 |:-----|:-----|--------:|:------|
 | [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
+| [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
+| [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
+| [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
+| [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
+| [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
+| [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
+| [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
 | [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
 | [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
+| [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
+| [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
+| [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
 | [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
+| [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
+| [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
+| [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
+| [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
+| [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
 | [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
+| [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
+| [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
+| [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
+| [GGUF](https://huggingface.co/mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF/resolve/main/DLER-Llama-Nemotron-8B-Merge-Research.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
 
 Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):
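The links in the table resolve directly to the GGUF files, so any of the listed quants can also be fetched programmatically. Below is a minimal sketch, assuming the `huggingface_hub` Python package; the repo id and filename are copied from the table's links, and the i1-Q4_K_M quant ("fast, recommended") is used as the example.

```python
# Minimal sketch: fetch one of the quants listed above with huggingface_hub.
# Repo id and filename are copied from the table's links; i1-Q4_K_M is the
# "fast, recommended" pick (~5.0 GB). Assumes `pip install huggingface_hub`.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="mradermacher/DLER-Llama-Nemotron-8B-Merge-Research-i1-GGUF",
    filename="DLER-Llama-Nemotron-8B-Merge-Research.i1-Q4_K_M.gguf",
)
print(local_path)  # local path to the downloaded GGUF, ready for llama.cpp-compatible runtimes
```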