commit 895ab02e1cab38a10695f5adbac009f91d104ae2
Author: ModelHub XC
Date: Thu Apr 9 14:33:24 2026 +0800

Initialize the project; model provided by the ModelHub XC community

Model: tensorblock/granite-8b-code-instruct-128k-GGUF
Source: Original Platform

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..6191696
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,47 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+granite-8b-code-instruct-128k-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+granite-8b-code-instruct-128k-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+granite-8b-code-instruct-128k-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+granite-8b-code-instruct-128k-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+granite-8b-code-instruct-128k-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+granite-8b-code-instruct-128k-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+granite-8b-code-instruct-128k-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+granite-8b-code-instruct-128k-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+granite-8b-code-instruct-128k-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+granite-8b-code-instruct-128k-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+granite-8b-code-instruct-128k-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+granite-8b-code-instruct-128k-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..6a5972a
--- /dev/null
+++ b/README.md
@@ -0,0 +1,213 @@
+---
+pipeline_tag: text-generation
+inference: false
+license: apache-2.0
+datasets:
+- bigcode/commitpackft
+- TIGER-Lab/MathInstruct
+- meta-math/MetaMathQA
+- glaiveai/glaive-code-assistant-v3
+- glaive-function-calling-v2
+- bugdaryan/sql-create-context-instruction
+- garage-bAInd/Open-Platypus
+- nvidia/HelpSteer
+- bigcode/self-oss-instruct-sc2-exec-filter-50k
+metrics:
+- code_eval
+library_name: transformers
+tags:
+- code
+- granite
+- TensorBlock
+- GGUF
+base_model: ibm-granite/granite-8b-code-instruct-128k
+model-index:
+- name: granite-8B-Code-instruct-128k
+ 
results: + - task: + type: text-generation + dataset: + name: HumanEvalSynthesis (Python) + type: bigcode/humanevalpack + metrics: + - type: pass@1 + value: 62.2 + name: pass@1 + verified: false + - type: pass@1 + value: 51.4 + name: pass@1 + verified: false + - type: pass@1 + value: 38.9 + name: pass@1 + verified: false + - type: pass@1 + value: 38.3 + name: pass@1 + verified: false + - task: + type: text-generation + dataset: + name: RepoQA (Python@16K) + type: repoqa + metrics: + - type: pass@1 (thresh=0.5) + value: 73.0 + name: pass@1 (thresh=0.5) + verified: false + - type: pass@1 (thresh=0.5) + value: 37.0 + name: pass@1 (thresh=0.5) + verified: false + - type: pass@1 (thresh=0.5) + value: 73.0 + name: pass@1 (thresh=0.5) + verified: false + - type: pass@1 (thresh=0.5) + value: 62.0 + name: pass@1 (thresh=0.5) + verified: false + - type: pass@1 (thresh=0.5) + value: 63.0 + name: pass@1 (thresh=0.5) + verified: false +--- + +
+TensorBlock +
+ +[![Website](https://img.shields.io/badge/Website-tensorblock.co-blue?logo=google-chrome&logoColor=white)](https://tensorblock.co) +[![Twitter](https://img.shields.io/twitter/follow/tensorblock_aoi?style=social)](https://twitter.com/tensorblock_aoi) +[![Discord](https://img.shields.io/badge/Discord-Join%20Us-5865F2?logo=discord&logoColor=white)](https://discord.gg/Ej5NmeHFf2) +[![GitHub](https://img.shields.io/badge/GitHub-TensorBlock-black?logo=github&logoColor=white)](https://github.com/TensorBlock) +[![Telegram](https://img.shields.io/badge/Telegram-Group-blue?logo=telegram)](https://t.me/TensorBlock) + + +## ibm-granite/granite-8b-code-instruct-128k - GGUF + +This repo contains GGUF format model files for [ibm-granite/granite-8b-code-instruct-128k](https://huggingface.co/ibm-granite/granite-8b-code-instruct-128k). + +The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d). + + +## Our projects + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+- **Forge**: an OpenAI-compatible multi-provider routing layer.
+- **Awesome MCP Servers**: a comprehensive collection of Model Context Protocol (MCP) servers.
+- **TensorBlock Studio**: a lightweight, open, and extensible multi-LLM interaction studio.
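+
+## Running with llama.cpp
+
+As a quick sanity check after downloading, a file can be run directly with llama.cpp's `llama-cli`. The sketch below is illustrative rather than an official recipe: it assumes a llama.cpp build at or after commit b4011 (see above), that `granite-8b-code-instruct-128k-Q4_K_M.gguf` has already been downloaded into `MY_LOCAL_DIR` (see the downloading instructions further down), and it spells out the prompt template from the next section by hand.
+
+```shell
+# Single-prompt generation: -m selects the model file, -c sets the context
+# window, -n caps the number of generated tokens.
+./llama-cli \
+  -m MY_LOCAL_DIR/granite-8b-code-instruct-128k-Q4_K_M.gguf \
+  -c 4096 \
+  -n 256 \
+  -p "System:
+You are a helpful coding assistant.
+
+Question:
+Write a Python function that reverses a string.
+
+Answer:
+"
+```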
+## Prompt template
+
+```
+System:
+{system_prompt}
+
+Question:
+{prompt}
+
+Answer:
+```
+
+## Model file specification
+
+| Filename | Quant type | File Size | Description |
+| -------- | ---------- | --------- | ----------- |
+| [granite-8b-code-instruct-128k-Q2_K.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q2_K.gguf) | Q2_K | 2.852 GB | smallest, significant quality loss - not recommended for most purposes |
+| [granite-8b-code-instruct-128k-Q3_K_S.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q3_K_S.gguf) | Q3_K_S | 3.304 GB | very small, high quality loss |
+| [granite-8b-code-instruct-128k-Q3_K_M.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q3_K_M.gguf) | Q3_K_M | 3.674 GB | very small, high quality loss |
+| [granite-8b-code-instruct-128k-Q3_K_L.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q3_K_L.gguf) | Q3_K_L | 3.993 GB | small, substantial quality loss |
+| [granite-8b-code-instruct-128k-Q4_0.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q4_0.gguf) | Q4_0 | 4.276 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [granite-8b-code-instruct-128k-Q4_K_S.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q4_K_S.gguf) | Q4_K_S | 4.305 GB | small, greater quality loss |
+| [granite-8b-code-instruct-128k-Q4_K_M.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q4_K_M.gguf) | Q4_K_M | 4.548 GB | medium, balanced quality - recommended |
+| [granite-8b-code-instruct-128k-Q5_0.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q5_0.gguf) | Q5_0 | 5.190 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [granite-8b-code-instruct-128k-Q5_K_S.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q5_K_S.gguf) | Q5_K_S | 5.190 GB | large, low quality loss - recommended |
+| [granite-8b-code-instruct-128k-Q5_K_M.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q5_K_M.gguf) | Q5_K_M | 5.330 GB | large, very low quality loss - recommended |
+| [granite-8b-code-instruct-128k-Q6_K.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q6_K.gguf) | Q6_K | 6.161 GB | very large, extremely low quality loss |
+| [granite-8b-code-instruct-128k-Q8_0.gguf](https://huggingface.co/tensorblock/granite-8b-code-instruct-128k-GGUF/blob/main/granite-8b-code-instruct-128k-Q8_0.gguf) | Q8_0 | 7.977 GB | very large, extremely low quality loss - not recommended |
+
+
+## Downloading instructions
+
+### Command line
+
+First, install the Hugging Face CLI:
+
+```shell
+pip install -U "huggingface_hub[cli]"
+```
+
+Then, download an individual model file to a local directory:
+
+```shell
+huggingface-cli download tensorblock/granite-8b-code-instruct-128k-GGUF --include "granite-8b-code-instruct-128k-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+```
+
+To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can run:
+
+```shell
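+# --include selects repo files by glob pattern; --local-dir-use-symlinks False
+# copies the matched files into MY_LOCAL_DIR instead of symlinking to the HF cache.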
+huggingface-cli download tensorblock/granite-8b-code-instruct-128k-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf' +``` diff --git a/granite-8b-code-instruct-128k-Q2_K.gguf b/granite-8b-code-instruct-128k-Q2_K.gguf new file mode 100644 index 0000000..35ff928 --- /dev/null +++ b/granite-8b-code-instruct-128k-Q2_K.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:15e2417c0c8e2e515aed0ef0ce0a5cf21b6fd81c023d17c993ab6c59e4cf8061 +size 3062071168 diff --git a/granite-8b-code-instruct-128k-Q3_K_M.gguf b/granite-8b-code-instruct-128k-Q3_K_M.gguf new file mode 100644 index 0000000..affeb25 --- /dev/null +++ b/granite-8b-code-instruct-128k-Q3_K_M.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3b8d8cdba311586998eadbfb0e1dd924d69d63dbfc27994360b4031d235379c +size 3944841088