---
pipeline_tag: text-generation
inference: false
license: apache-2.0
datasets:
- bigcode/commitpackft
- TIGER-Lab/MathInstruct
- meta-math/MetaMathQA
- glaiveai/glaive-code-assistant-v3
- glaive-function-calling-v2
- bugdaryan/sql-create-context-instruction
- garage-bAInd/Open-Platypus
- nvidia/HelpSteer
- bigcode/self-oss-instruct-sc2-exec-filter-50k
metrics:
- code_eval
library_name: transformers
tags:
- code
- granite
- TensorBlock
- GGUF
base_model: ibm-granite/granite-8b-code-instruct-128k
model-index:
- name: granite-8B-Code-instruct-128k
  results:
  - task:
      type: text-generation
    dataset:
      name: HumanEvalSynthesis (Python)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 62.2
      name: pass@1
      verified: false
    - type: pass@1
      value: 51.4
      name: pass@1
      verified: false
    - type: pass@1
      value: 38.9
      name: pass@1
      verified: false
    - type: pass@1
      value: 38.3
      name: pass@1
      verified: false
  - task:
      type: text-generation
    dataset:
      name: RepoQA (Python@16K)
      type: repoqa
    metrics:
    - type: pass@1 (thresh=0.5)
      value: 73.0
      name: pass@1 (thresh=0.5)
      verified: false
    - type: pass@1 (thresh=0.5)
      value: 37.0
      name: pass@1 (thresh=0.5)
      verified: false
    - type: pass@1 (thresh=0.5)
      value: 73.0
      name: pass@1 (thresh=0.5)
      verified: false
    - type: pass@1 (thresh=0.5)
      value: 62.0
      name: pass@1 (thresh=0.5)
      verified: false
    - type: pass@1 (thresh=0.5)
      value: 63.0
      name: pass@1 (thresh=0.5)
      verified: false
---

<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>

[Website](https://tensorblock.co)
[Twitter](https://twitter.com/tensorblock_aoi)
[Discord](https://discord.gg/Ej5NmeHFf2)
[GitHub](https://github.com/TensorBlock)
[Telegram](https://t.me/TensorBlock)

## ibm-granite/granite-8b-code-instruct-128k - GGUF

This repo contains GGUF format model files for [ibm-granite/granite-8b-code-instruct-128k](https://huggingface.co/ibm-granite/granite-8b-code-instruct-128k).

The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).

## Our projects

<table border="1" cellspacing="0" cellpadding="10">
  <tr>
    <th style="font-size: 25px;">Awesome MCP Servers</th>
    <th style="font-size: 25px;">TensorBlock Studio</th>
  </tr>
  <tr>
    <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Awesome MCP Servers" width="450"/></th>
    <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="TensorBlock Studio" width="450"/></th>
  </tr>
  <tr>
    <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
    <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
  </tr>
  <tr>
    <th>
      <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="display: inline-block; padding: 8px 16px; background-color: #FF7F50; color: white; text-decoration: none; border-radius: 6px; font-weight: bold; font-family: sans-serif;">👀 See what we built 👀</a>
    </th>
    <th>
      <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="display: inline-block; padding: 8px 16px; background-color: #FF7F50; color: white; text-decoration: none; border-radius: 6px; font-weight: bold; font-family: sans-serif;">👀 See what we built 👀</a>
    </th>
  </tr>
</table>

## Prompt template

```
System:
{system_prompt}

Question:
{prompt}

Answer:
```
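
For a quick smoke test you can pass a filled-in template straight to llama.cpp. The snippet below is a minimal sketch, not part of the original card: it assumes a llama.cpp build (commit b4011 or later) is on your PATH and that you have already downloaded the Q4_K_M file from the table below; the system prompt and question are placeholders.

```shell
# Minimal sketch: run one completion using the Granite prompt template.
# -e tells llama-cli to interpret the \n escapes inside the prompt string.
llama-cli -m granite-8b-code-instruct-128k-Q4_K_M.gguf -e -n 256 \
  -p "System:\nYou are a helpful coding assistant.\n\nQuestion:\nWrite a Python function that checks whether a string is a palindrome.\n\nAnswer:\n"
```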

## Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [granite-8b-code-instruct-128k-Q2_K.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q2_K.gguf) | Q2_K | 2.852 GB | smallest, significant quality loss - not recommended for most purposes |
| [granite-8b-code-instruct-128k-Q3_K_S.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q3_K_S.gguf) | Q3_K_S | 3.304 GB | very small, high quality loss |
| [granite-8b-code-instruct-128k-Q3_K_M.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q3_K_M.gguf) | Q3_K_M | 3.674 GB | very small, high quality loss |
| [granite-8b-code-instruct-128k-Q3_K_L.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q3_K_L.gguf) | Q3_K_L | 3.993 GB | small, substantial quality loss |
| [granite-8b-code-instruct-128k-Q4_0.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q4_0.gguf) | Q4_0 | 4.276 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [granite-8b-code-instruct-128k-Q4_K_S.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q4_K_S.gguf) | Q4_K_S | 4.305 GB | small, greater quality loss |
| [granite-8b-code-instruct-128k-Q4_K_M.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q4_K_M.gguf) | Q4_K_M | 4.548 GB | medium, balanced quality - recommended |
| [granite-8b-code-instruct-128k-Q5_0.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q5_0.gguf) | Q5_0 | 5.190 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [granite-8b-code-instruct-128k-Q5_K_S.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q5_K_S.gguf) | Q5_K_S | 5.190 GB | large, low quality loss - recommended |
| [granite-8b-code-instruct-128k-Q5_K_M.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q5_K_M.gguf) | Q5_K_M | 5.330 GB | large, very low quality loss - recommended |
| [granite-8b-code-instruct-128k-Q6_K.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q6_K.gguf) | Q6_K | 6.161 GB | very large, extremely low quality loss |
| [granite-8b-code-instruct-128k-Q8_0.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q8_0.gguf) | Q8_0 | 7.977 GB | very large, extremely low quality loss - not recommended |

## Downloading instruction

### Command line

First, install the Hugging Face CLI:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/granite-8b-code-instruct-128k-GGUF --include "granite-8b-code-instruct-128k-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:

```shell
huggingface-cli download tensorblock/granite-8b-code-instruct-128k-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
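
Once the files are downloaded, you can serve the model with llama.cpp's OpenAI-compatible server. The snippet below is a sketch under stated assumptions (a llama.cpp build at commit b4011 or later on your PATH, and the Q4_K_M file present in MY_LOCAL_DIR); adjust the filename, context size, and port to your setup:

```shell
# Sketch: serve the downloaded GGUF locally with llama.cpp.
# -c sets the context window; the model supports up to 128K tokens,
# but larger contexts require proportionally more memory.
llama-server \
  -m MY_LOCAL_DIR/granite-8b-code-instruct-128k-Q4_K_M.gguf \
  -c 16384 \
  --port 8080
```

The server should then accept OpenAI-style requests at `http://localhost:8080/v1/chat/completions` from any compatible client.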