Model: Bloce3an/qwen2.5-0.5B-entities-relationship-gguf

| base_model | library_name | pipeline_tag | tags | license | language |
|---|---|---|---|---|---|
| Qwen/Qwen2.5-0.5B-Instruct | gguf | text-generation | | apache-2.0 | |
# Qwen2.5-0.5B-Instruct Knowledge Graph Extractor (GGUF)
This is a GGUF quantized version of the qwen2.5-0.5B-kg-lora2 model. The base model (Qwen2.5-0.5B-Instruct) was fine-tuned with Unsloth to strictly extract Knowledge Graph triples from unstructured text.
The weights have been merged and quantized to allow for fast, lightweight inference on CPU and edge devices using tools like llama.cpp, Ollama, or LM Studio.
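For Ollama, the GGUF can be wrapped in a Modelfile so that the system prompt and sampling settings travel with the model. The sketch below is illustrative, not an official recipe: the file name and parameter values are assumptions, and the system prompt must be pasted in full from the Prompt Format section.

```
# Illustrative Ollama Modelfile; point FROM at the quantization you downloaded.
FROM ./model-unsloth.Q4_K_M.gguf

PARAMETER temperature 0.1
PARAMETER repeat_penalty 1.15

# Paste the complete system prompt from the Prompt Format section between the quotes.
SYSTEM """You are an expert at extracting clean, accurate knowledge graph triples from text.
..."""
```

You can then build and run it with `ollama create kg-extractor -f Modelfile` followed by `ollama run kg-extractor` (the model name `kg-extractor` is arbitrary).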
## Available Quantizations
This repository contains the following quantized files:
- **Q8_0**: 8-bit quantization. Highest quality, nearly identical to the original fp16 model. Good if you have enough RAM.
- **Q6_K**: 6-bit quantization. Excellent balance between size and quality.
- **Q4_K_M**: 4-bit quantization. Recommended for most users. Fast inference and very low memory footprint with minimal quality loss.
## Prompt Format (CRITICAL)
This model was fine-tuned on one specific ChatML system prompt. You MUST use this exact system prompt, or the model will hallucinate or produce output in the wrong format.
```text
<|im_start|>system
You are an expert at extracting clean, accurate knowledge graph triples from text.
Your task is to carefully read the input text and extract **all** meaningful triples in this exact format:
(subject | relation | object)
Strict rules you must follow:
- Subject and object must be specific named entities or concrete concepts explicitly mentioned in the text (people, organizations, locations, events, products, years, etc.)
- Relation should be a short, clear predicate in base form or simple present tense (examples: "is", "has", "works at", "located in", "born in", "capital of", "founded in")
- Only extract triples that are **directly supported** by the text — do **not** infer, assume, hallucinate or add information that is not clearly stated
- If uncertain about a triple → do **not** include it
- Each triple must be written on its **own separate line**
- Do **not** add any explanations, headings, numbering, bullet points, comments, or extra text of any kind
- If no valid triples can be extracted → return exactly one line: "No triples found"<|im_end|>
<|im_start|>user
Text:
{your_input_text}<|im_end|>
<|im_start|>assistant
```
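To avoid transcription errors when embedding this prompt in a pipeline, the ChatML structure above can be assembled programmatically. A minimal Python sketch (the function name `build_kg_prompt` is illustrative, not part of this repository):

```python
SYSTEM_PROMPT = """You are an expert at extracting clean, accurate knowledge graph triples from text.

Your task is to carefully read the input text and extract **all** meaningful triples in this exact format:
(subject | relation | object)

Strict rules you must follow:
- Subject and object must be specific named entities or concrete concepts explicitly mentioned in the text (people, organizations, locations, events, products, years, etc.)
- Relation should be a short, clear predicate in base form or simple present tense (examples: "is", "has", "works at", "located in", "born in", "capital of", "founded in")
- Only extract triples that are **directly supported** by the text — do **not** infer, assume, hallucinate or add information that is not clearly stated
- If uncertain about a triple → do **not** include it
- Each triple must be written on its **own separate line**
- Do **not** add any explanations, headings, numbering, bullet points, comments, or extra text of any kind
- If no valid triples can be extracted → return exactly one line: "No triples found\""""


def build_kg_prompt(text: str) -> str:
    """Wrap the input text in the exact ChatML structure the model was trained on."""
    return (
        f"<|im_start|>system\n{SYSTEM_PROMPT}<|im_end|>\n"
        f"<|im_start|>user\nText:\n{text}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )
```

The returned string can be passed directly as the raw prompt to any GGUF runtime that accepts pre-templated input.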
## Usage with llama.cpp
Once you have downloaded the `.gguf` file of your choice (e.g. `model-unsloth.Q4_K_M.gguf`), you can run it with the llama.cpp CLI. Since Qwen2.5 uses ChatML, make sure you pass the exact system instruction shown above.
```shell
# Recent llama.cpp builds name this binary llama-cli instead of main;
# -e makes the CLI interpret the \n escapes inside the -p prompt string.
./main -m model-unsloth.Q4_K_M.gguf \
  --color \
  -e \
  -c 2048 \
  --temp 0.1 \
  --repeat-penalty 1.15 \
  -p "<|im_start|>system\nYou are an expert at extracting clean, accurate knowledge graph triples from text.\n\nYour task is to carefully read the input text and extract **all** meaningful triples in this exact format:\n(subject | relation | object)\n\nStrict rules you must follow:\n- Subject and object must be specific named entities or concrete concepts explicitly mentioned in the text (people, organizations, locations, events, products, years, etc.)\n- Relation should be a short, clear predicate in base form or simple present tense (examples: \"is\", \"has\", \"works at\", \"located in\", \"born in\", \"capital of\", \"founded in\")\n- Only extract triples that are **directly supported** by the text — do **not** infer, assume, hallucinate or add information that is not clearly stated\n- If uncertain about a triple → do **not** include it\n- Each triple must be written on its **own separate line**\n- Do **not** add any explanations, headings, numbering, bullet points, comments, or extra text of any kind\n- If no valid triples can be extracted → return exactly one line: \"No triples found\"<|im_end|>\n<|im_start|>user\nText:\nThe Tasmanian Devil is a carnivorous marsupial of the family Dasyuridae.<|im_end|>\n<|im_start|>assistant\n"
```
## Intended Use
- Local processing of sensitive documents.
- Rapid edge-device extraction of explicit Entity-Relation-Entity relationships.
- Pipeline integration for RAG (Retrieval-Augmented Generation) graph curation.
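Because the model emits one `(subject | relation | object)` triple per line, downstream graph curation only needs a small parser. A sketch in Python (the regex and function name are illustrative, not part of this repository):

```python
import re

# One triple per line: "(subject | relation | object)"
TRIPLE_RE = re.compile(r"^\((.+?)\s*\|\s*(.+?)\s*\|\s*(.+?)\)$")


def parse_triples(output: str) -> list[tuple[str, str, str]]:
    """Parse the model's output into (subject, relation, object) tuples.

    Returns an empty list when the model answers "No triples found";
    lines that do not match the expected format are skipped.
    """
    triples = []
    for line in output.strip().splitlines():
        line = line.strip()
        if line == "No triples found":
            return []
        m = TRIPLE_RE.match(line)
        if m:
            triples.append((m.group(1).strip(), m.group(2).strip(), m.group(3).strip()))
    return triples
```

The resulting tuples can be loaded straight into a graph store or deduplicated before being attached to a RAG index.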