diff --git a/README.md b/README.md
index 8848bc2..193f7c9 100644
--- a/README.md
+++ b/README.md
@@ -91,8 +91,16 @@ This repo contains GGUF format model files for [ibm-granite/granite-8b-code-inst
 
 The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
+
+
 
 ## Prompt template
 
 ```
 System: {system_prompt}
@@ -107,18 +115,18 @@ Answer:
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [granite-8b-code-instruct-128k-Q2_K.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/tree/main/granite-8b-code-instruct-128k-Q2_K.gguf) | Q2_K | 2.852 GB | smallest, significant quality loss - not recommended for most purposes |
-| [granite-8b-code-instruct-128k-Q3_K_S.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/tree/main/granite-8b-code-instruct-128k-Q3_K_S.gguf) | Q3_K_S | 3.304 GB | very small, high quality loss |
-| [granite-8b-code-instruct-128k-Q3_K_M.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/tree/main/granite-8b-code-instruct-128k-Q3_K_M.gguf) | Q3_K_M | 3.674 GB | very small, high quality loss |
-| [granite-8b-code-instruct-128k-Q3_K_L.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/tree/main/granite-8b-code-instruct-128k-Q3_K_L.gguf) | Q3_K_L | 3.993 GB | small, substantial quality loss |
-| [granite-8b-code-instruct-128k-Q4_0.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/tree/main/granite-8b-code-instruct-128k-Q4_0.gguf) | Q4_0 | 4.276 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
-| [granite-8b-code-instruct-128k-Q4_K_S.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/tree/main/granite-8b-code-instruct-128k-Q4_K_S.gguf) | Q4_K_S | 4.305 GB | small, greater quality loss |
-| [granite-8b-code-instruct-128k-Q4_K_M.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/tree/main/granite-8b-code-instruct-128k-Q4_K_M.gguf) | Q4_K_M | 4.548 GB | medium, balanced quality - recommended |
-| [granite-8b-code-instruct-128k-Q5_0.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/tree/main/granite-8b-code-instruct-128k-Q5_0.gguf) | Q5_0 | 5.190 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
-| [granite-8b-code-instruct-128k-Q5_K_S.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/tree/main/granite-8b-code-instruct-128k-Q5_K_S.gguf) | Q5_K_S | 5.190 GB | large, low quality loss - recommended |
-| [granite-8b-code-instruct-128k-Q5_K_M.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/tree/main/granite-8b-code-instruct-128k-Q5_K_M.gguf) | Q5_K_M | 5.330 GB | large, very low quality loss - recommended |
-| [granite-8b-code-instruct-128k-Q6_K.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/tree/main/granite-8b-code-instruct-128k-Q6_K.gguf) | Q6_K | 6.161 GB | very large, extremely low quality loss |
-| [granite-8b-code-instruct-128k-Q8_0.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/tree/main/granite-8b-code-instruct-128k-Q8_0.gguf) | Q8_0 | 7.977 GB | very large, extremely low quality loss - not recommended |
+| [granite-8b-code-instruct-128k-Q2_K.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q2_K.gguf) | Q2_K | 2.852 GB | smallest, significant quality loss - not recommended for most purposes |
+| [granite-8b-code-instruct-128k-Q3_K_S.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q3_K_S.gguf) | Q3_K_S | 3.304 GB | very small, high quality loss |
+| [granite-8b-code-instruct-128k-Q3_K_M.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q3_K_M.gguf) | Q3_K_M | 3.674 GB | very small, high quality loss |
+| [granite-8b-code-instruct-128k-Q3_K_L.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q3_K_L.gguf) | Q3_K_L | 3.993 GB | small, substantial quality loss |
+| [granite-8b-code-instruct-128k-Q4_0.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q4_0.gguf) | Q4_0 | 4.276 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [granite-8b-code-instruct-128k-Q4_K_S.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q4_K_S.gguf) | Q4_K_S | 4.305 GB | small, greater quality loss |
+| [granite-8b-code-instruct-128k-Q4_K_M.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q4_K_M.gguf) | Q4_K_M | 4.548 GB | medium, balanced quality - recommended |
+| [granite-8b-code-instruct-128k-Q5_0.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q5_0.gguf) | Q5_0 | 5.190 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [granite-8b-code-instruct-128k-Q5_K_S.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q5_K_S.gguf) | Q5_K_S | 5.190 GB | large, low quality loss - recommended |
+| [granite-8b-code-instruct-128k-Q5_K_M.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q5_K_M.gguf) | Q5_K_M | 5.330 GB | large, very low quality loss - recommended |
+| [granite-8b-code-instruct-128k-Q6_K.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q6_K.gguf) | Q6_K | 6.161 GB | very large, extremely low quality loss |
+| [granite-8b-code-instruct-128k-Q8_0.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q8_0.gguf) | Q8_0 | 7.977 GB | very large, extremely low quality loss - not recommended |
 
 ## Downloading instruction
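As a usage sketch for the table the diff corrects: any of the listed files can also be fetched programmatically with the `huggingface_hub` library rather than through the links. The repo id and filename below are taken from the table; the quant choice (Q4_K_M, the "recommended" balanced option) is illustrative.

```python
# Download one quantized file from the table above into the local
# Hugging Face cache and print where it landed.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="tensorblock/granite-8b-code-instruct-128k-GGUF",
    filename="granite-8b-code-instruct-128k-Q4_K_M.gguf",
)
print(model_path)  # local path to the ~4.5 GB GGUF file
```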
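The prompt template survives the diff only partially: the `System: {system_prompt}` opening appears in the first hunk and the `Answer:` cue in the second hunk's header, with the middle elided. A minimal inference sketch using the `llama-cpp-python` bindings, assuming a `Question:` block fills that elided middle and that the prompt text and parameters are placeholders:

```python
# Minimal sketch: run one completion against a downloaded GGUF file.
from llama_cpp import Llama

# Path to one of the quantized files from the table, e.g. the Q4_K_M
# file fetched in the previous snippet.
model_path = "granite-8b-code-instruct-128k-Q4_K_M.gguf"

# The model advertises a 128K context; a smaller window keeps RAM modest.
llm = Llama(model_path=model_path, n_ctx=4096)

# The System:/Answer: lines are shown in the diff; the Question: section
# is an assumed reconstruction of the elided middle of the template.
prompt = (
    "System:\nYou are a helpful coding assistant.\n\n"
    "Question:\nWrite a Python function that reverses a string.\n\n"
    "Answer:\n"
)

out = llm(prompt, max_tokens=256, stop=["Question:"])
print(out["choices"][0]["text"])
```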