---
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
library_name: gguf
pipeline_tag: text-generation
license: apache-2.0
tags:
- gguf
- llama-cpp
- qwen2.5-coder
- celeste-imperia
---

# Qwen2.5-Coder-7B-Instruct-GGUF (Platinum Series)

This repository contains the **Platinum Series** universal GGUF release of **Qwen2.5-Coder-7B-Instruct**. This collection provides multiple quantization levels optimized for cross-platform performance, specializing in high-precision code generation and technical reasoning.

## 📦 Available Files & Quantization Details

| File Name | Quantization | Size | Accuracy | Recommended For |
| :--- | :--- | :--- | :--- | :--- |
| **Qwen2.5-Coder-7B-Instruct-Platinum-F16.gguf** | FP16 | ~15.0 GB | 100% | Master Reference / Benchmarking |
| **Qwen2.5-Coder-7B-Instruct-Platinum-Q8_0.gguf** | Q8_0 | ~8.0 GB | 99.9% | Platinum Reference / High-Fidelity |
| **Qwen2.5-Coder-7B-Instruct-Platinum-Q6_K.gguf** | Q6_K | ~6.3 GB | 99.8% | High-Quality Coding Assistant |
| **Qwen2.5-Coder-7B-Instruct-Platinum-Q5_K_M.gguf** | Q5_K_M | ~5.5 GB | 99.5% | Balanced Desktop Performance |
| **Qwen2.5-Coder-7B-Instruct-Platinum-Q4_K_M.gguf** | Q4_K_M | ~4.7 GB | 99.0% | Efficiency / Mid-Range Hardware |

---

## 🐍 Python Inference (llama-cpp-python)

To run these models from Python, install `llama-cpp-python` and point it at your chosen GGUF file:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-Coder-7B-Instruct-Platinum-Q6_K.gguf",
    n_gpu_layers=-1,  # Offload all layers to the GPU (NVIDIA CUDA / Apple Metal)
    n_ctx=8192        # Large context window for coding tasks
)

output = llm("Write a C# class that implements a thread-safe singleton pattern.", max_tokens=300)
print(output["choices"][0]["text"])
```
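The plain-completion call above works, but Qwen2.5-Instruct models are trained on the ChatML conversation template, so chat-style prompting generally behaves better. `llama-cpp-python` applies the template automatically when you use `create_chat_completion`; for cases where you assemble prompts by hand, here is a minimal sketch of the format (the `build_chatml_prompt` helper name is illustrative):

```python
def build_chatml_prompt(system, user):
    """Format a single-turn prompt in the ChatML template used by Qwen2.5."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a precise coding assistant.",
    "Write a C# class that implements a thread-safe singleton pattern.",
)
print(prompt)
```

The trailing `<|im_start|>assistant\n` leaves the prompt open for the model to generate the assistant turn.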

---

## 💻 For C# / .NET Users (LLamaSharp)

This collection is fully compatible with .NET applications via the `LLamaSharp` library:

```csharp
using LLama.Common;
using LLama;

// Configure the model path, context size, and GPU offload.
var parameters = new ModelParams("Qwen2.5-Coder-7B-Instruct-Platinum-Q6_K.gguf") {
    ContextSize = 8192,
    GpuLayerCount = 35
};

// Load the weights and create an inference context.
using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);
var executor = new InteractiveExecutor(context);

Console.WriteLine("Coding Specialist Active.");
```

---

## 🏗️ Technical Details

- **Optimization Tool:** llama.cpp (CUDA-accelerated)
- **Architecture:** Qwen2.5-Coder (7B)
- **Hardware Validation:** Dual-GPU (RTX 3090 + RTX A4000)

---

### ☕ Support the Forge

Maintaining high-capacity workstations for model conversion requires hardware investment. If these tools power your production software, please consider supporting the development:

| Platform | Support Link |
| :--- | :--- |
| **Global & India** | [Support via Razorpay](https://razorpay.me/@huggingface) |

**Scan to support via UPI (India only):**

<img src="https://huggingface.co/datasets/CelesteImperia/Assets/resolve/main/QrCode.jpeg" width="200">

---

**Connect with the architect:** [Abhishek Jaiswal on LinkedIn](https://www.linkedin.com/in/abhishek-jaiswal-524056a/)