ModelHub XC cf0d385bee initial commit; model provided by the ModelHub XC community
Model: RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf
Source: Original Platform
2026-04-10 11:32:55 +08:00

Quantization made by Richard Erkhov.

Github | Discord | Request more models

granite-8b-code-instruct-128k - GGUF

| Name | Quant method | Size |
| --- | --- | --- |
| granite-8b-code-instruct-128k.Q2_K.gguf | Q2_K | 2.85GB |
| granite-8b-code-instruct-128k.IQ3_XS.gguf | IQ3_XS | 3.15GB |
| granite-8b-code-instruct-128k.IQ3_S.gguf | IQ3_S | 3.32GB |
| granite-8b-code-instruct-128k.Q3_K_S.gguf | Q3_K_S | 3.3GB |
| granite-8b-code-instruct-128k.IQ3_M.gguf | IQ3_M | 3.43GB |
| granite-8b-code-instruct-128k.Q3_K.gguf | Q3_K | 3.67GB |
| granite-8b-code-instruct-128k.Q3_K_M.gguf | Q3_K_M | 3.67GB |
| granite-8b-code-instruct-128k.Q3_K_L.gguf | Q3_K_L | 3.99GB |
| granite-8b-code-instruct-128k.IQ4_XS.gguf | IQ4_XS | 4.1GB |
| granite-8b-code-instruct-128k.Q4_0.gguf | Q4_0 | 4.28GB |
| granite-8b-code-instruct-128k.IQ4_NL.gguf | IQ4_NL | 4.32GB |
| granite-8b-code-instruct-128k.Q4_K_S.gguf | Q4_K_S | 4.3GB |
| granite-8b-code-instruct-128k.Q4_K.gguf | Q4_K | 4.55GB |
| granite-8b-code-instruct-128k.Q4_K_M.gguf | Q4_K_M | 4.55GB |
| granite-8b-code-instruct-128k.Q4_1.gguf | Q4_1 | 4.73GB |
| granite-8b-code-instruct-128k.Q5_0.gguf | Q5_0 | 5.19GB |
| granite-8b-code-instruct-128k.Q5_K_S.gguf | Q5_K_S | 5.19GB |
| granite-8b-code-instruct-128k.Q5_K.gguf | Q5_K | 5.33GB |
| granite-8b-code-instruct-128k.Q5_K_M.gguf | Q5_K_M | 5.33GB |
| granite-8b-code-instruct-128k.Q5_1.gguf | Q5_1 | 5.65GB |
| granite-8b-code-instruct-128k.Q6_K.gguf | Q6_K | 6.16GB |
| granite-8b-code-instruct-128k.Q8_0.gguf | Q8_0 | 7.98GB |
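
These are standard GGUF files and can be run with any GGUF-compatible runtime such as llama.cpp. As a minimal sketch using the llama-cpp-python bindings (assumes pip install llama-cpp-python, that the Q4_K_M file above has been downloaded locally, and that the GGUF embeds the model's chat template; the context size is an illustrative choice, the model supports up to 128K):

from llama_cpp import Llama

# load a local quantized file; filename matches the Q4_K_M row above
llm = Llama(
    model_path="granite-8b-code-instruct-128k.Q4_K_M.gguf",
    n_ctx=8192,  # context window; raise toward 128K if memory allows
)
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a function to check if a string is a palindrome."}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])

Larger quants trade memory for quality: Q4_K_M is a common middle ground, while Q8_0 stays closest to the original weights.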

Original model description:

pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: transformers
metrics: code_eval
tags: code, granite

datasets:

  • bigcode/commitpackft
  • TIGER-Lab/MathInstruct
  • meta-math/MetaMathQA
  • glaiveai/glaive-code-assistant-v3
  • glaive-function-calling-v2
  • bugdaryan/sql-create-context-instruction
  • garage-bAInd/Open-Platypus
  • nvidia/HelpSteer
  • bigcode/self-oss-instruct-sc2-exec-filter-50k

model-index (name: granite-8B-Code-instruct-128k; task: text-generation; all results unverified):

| Dataset | Benchmark | Metric | Value |
| --- | --- | --- | --- |
| bigcode/humanevalpack | HumanEvalSynthesis (Python) | pass@1 | 62.2 |
| bigcode/humanevalpack | HumanEvalSynthesis (Average) | pass@1 | 51.4 |
| bigcode/humanevalpack | HumanEvalExplain (Average) | pass@1 | 38.9 |
| bigcode/humanevalpack | HumanEvalFix (Average) | pass@1 | 38.3 |
| repoqa | RepoQA (Python@16K) | pass@1 (thresh=0.5) | 73.0 |
| repoqa | RepoQA (C++@16K) | pass@1 (thresh=0.5) | 37.0 |
| repoqa | RepoQA (Java@16K) | pass@1 (thresh=0.5) | 73.0 |
| repoqa | RepoQA (TypeScript@16K) | pass@1 (thresh=0.5) | 62.0 |
| repoqa | RepoQA (Rust@16K) | pass@1 (thresh=0.5) | 63.0 |
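
For reference, pass@1 in HumanEval-style benchmarks is typically computed with the unbiased pass@k estimator of Chen et al. (2021), where n samples are generated per problem and c of them pass the unit tests. A minimal sketch (the 200/124 figures are illustrative, not from this card):

from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # Probability that at least one of k samples, drawn without replacement
    # from n generations of which c are correct, passes the unit tests.
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(200, 124, 1))  # 0.62, i.e. a pass@1 of 62.0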


Granite-8B-Code-Instruct-128K

Model Summary

Granite-8B-Code-Instruct-128K is an 8B-parameter long-context instruct model fine-tuned from Granite-8B-Code-Base-128K on a combination of permissively licensed data used in training the original Granite code instruct models, together with synthetically generated code instruction datasets tailored for solving long-context problems. By exposing the model to both short and long context data, we aim to enhance its long-context capability without sacrificing code generation performance at short input context.

Usage

Intended use

The model is designed to respond to coding-related instructions over long-context input of up to 128K tokens and can be used to build coding assistants.

Generation

This is a simple example of how to use the Granite-8B-Code-Instruct-128K model.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # or "cpu"
model_path = "ibm-granite/granite-8B-Code-instruct-128k"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
chat = [
    { "role": "user", "content": "Write a code to find the maximum value in a list of numbers." },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# tokenize the text
input_tokens = tokenizer(chat, return_tensors="pt")
# transfer tokenized inputs to the device
for i in input_tokens:
    input_tokens[i] = input_tokens[i].to(device)
# generate output tokens
output = model.generate(**input_tokens, max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# loop over the batch to print, in this example the batch size is 1
for i in output:
    print(i)
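
The same pipeline handles long-context use, for example pasting an entire source file into the user turn. A sketch reusing the tokenizer and model from above (the file path and question are illustrative):

# read a whole project file into the user turn; path is illustrative
with open("src/engine.py") as f:
    source = f.read()
chat = [
    { "role": "user", "content": f"Here is a file from my project:\n\n{source}\n\nSummarize what it does." },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
input_tokens = tokenizer(chat, return_tensors="pt").to(device)
output = model.generate(**input_tokens, max_new_tokens=200)
# decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(output[0][input_tokens["input_ids"].shape[1]:], skip_special_tokens=True))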

Training Data

Granite Code Instruct models are trained on a mix of short- and long-context data: the permissively licensed instruction datasets listed in the metadata above, plus synthetically generated code instruction samples tailored for long-context problem solving.

Infrastructure

We train the Granite Code models on two of IBM's supercomputing clusters, Vela and Blue Vela, outfitted with NVIDIA A100 and H100 GPUs, respectively. These clusters provide a scalable and efficient infrastructure for training our models over thousands of GPUs.

Ethical Considerations and Limitations

Granite code instruct models are primarily fine-tuned on instruction-response pairs across a specific set of programming languages, so their performance may be limited with out-of-domain programming languages. In that situation, it is beneficial to provide few-shot examples that steer the model's output, as sketched below. Moreover, developers should perform safety testing and target-specific tuning before deploying these models in critical applications. The model also inherits ethical considerations and limitations from its base model. For more information, please refer to the Granite-8B-Code-Base-128K model card.
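
As an illustration of the few-shot approach, a worked example in the opening turns can steer output toward an out-of-domain language (a sketch reusing the tokenizer from the Generation section; the Zig snippets are invented for illustration):

chat = [
    { "role": "user", "content": "Write a Zig function that adds two integers." },
    { "role": "assistant", "content": "fn add(a: i32, b: i32) i32 {\n    return a + b;\n}" },
    { "role": "user", "content": "Write a Zig function that multiplies two integers." },
]
# the assistant turn above acts as the few-shot demonstration
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)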
