Initialize project; model provided by the ModelHub XC community
Model: CelesteImperia/Qwen2.5-Coder-7B-Instruct-Platinum-GGUF
Source: Original Platform
39
.gitattributes
vendored
Normal file
@@ -0,0 +1,39 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Qwen2.5-Coder-7B-Instruct-Platinum-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-Coder-7B-Instruct-Platinum-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-Coder-7B-Instruct-Platinum-F16.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-Coder-7B-Instruct-Platinum-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
17
LICENSE
Normal file
@@ -0,0 +1,17 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
... [Full Apache 2.0 Text omitted for brevity but should be the standard 2004 version]

Copyright 2024 Alibaba Cloud (Qwen Team)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
3
Qwen2.5-Coder-7B-Instruct-Platinum-F16.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:274c0eb05fe4a712805d4b999ad8e419f14a908ef1ca1b310a737a418ba15452
size 15237853792
3
Qwen2.5-Coder-7B-Instruct-Platinum-Q4_K_M.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:46caa6175bbef0b12ffe552f702fb3949cb72c11acc29fdc935e10640b57f5a9
size 4683074144
3
Qwen2.5-Coder-7B-Instruct-Platinum-Q5_K_M.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:38d6bd18220d7ae5251470a35eb57f469748b5bbf210eb0048f1e0365229a1e7
size 5444831840
3
Qwen2.5-Coder-7B-Instruct-Platinum-Q6_K.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:86b8e170136541e170f5c08b4fe5038c367c90922f3444685e0ed4c3bf61b9ca
size 6254199392
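The `.gguf` entries above are git-lfs pointer files: three key-value lines (`version`, `oid`, `size`) standing in for the actual blob. As an illustrative sketch (the helper below is not part of this repo), such a pointer can be parsed like this:

```python
# Minimal parser for a git-lfs pointer file, matching the
# version/oid/size lines shown in the commit above. Illustrative only.

def parse_lfs_pointer(text: str) -> dict:
    """Split each pointer line into a key and value, then normalize."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return {
        "version": fields["version"],
        "sha256": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

# The Q6_K pointer from this commit.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:86b8e170136541e170f5c08b4fe5038c367c90922f3444685e0ed4c3bf61b9ca
size 6254199392
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 6254199392
```

The `size` field is the byte count of the real file, which is how the on-disk sizes quoted in the README table below can be cross-checked.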
96
README.md
Normal file
@@ -0,0 +1,96 @@
---
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
library_name: gguf
pipeline_tag: text-generation
license: apache-2.0
tags:
- gguf
- llama-cpp
- qwen2.5-coder
- celeste-imperia
---

# Qwen2.5-Coder-7B-Instruct-GGUF (Platinum Series)




[](https://razorpay.me/@huggingface)

This repository contains the **Platinum Series** universal GGUF release of **Qwen2.5-Coder-7B-Instruct**. This collection provides multiple quantization levels optimized for cross-platform performance, specializing in high-precision code generation and technical reasoning.

## 📦 Available Files & Quantization Details

| File Name | Quantization | Size | Accuracy | Recommended For |
| :--- | :--- | :--- | :--- | :--- |
| **Qwen2.5-Coder-7B-Instruct-Platinum-F16.gguf** | FP16 | ~15.0 GB | 100% | Master Reference / Benchmarking |
| **Qwen2.5-Coder-7B-Instruct-Platinum-Q8_0.gguf** | Q8_0 | ~8.0 GB | 99.9% | Platinum Reference / High-Fidelity |
| **Qwen2.5-Coder-7B-Instruct-Platinum-Q6_K.gguf** | Q6_K | ~6.3 GB | 99.8% | High-Quality Coding Assistant |
| **Qwen2.5-Coder-7B-Instruct-Platinum-Q5_K_M.gguf** | Q5_K_M | ~5.5 GB | 99.5% | Balanced Desktop Performance |
| **Qwen2.5-Coder-7B-Instruct-Platinum-Q4_K_M.gguf** | Q4_K_M | ~4.7 GB | 99.0% | Efficiency / Mid-Range Hardware |
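The size column of the table can be sanity-checked against the committed LFS pointer sizes: bytes × 8 ÷ parameter count gives an approximate bits-per-weight figure for each quantization. A rough sketch (the ~7.6B parameter count is an assumption for this model, not a figure from the repo):

```python
# Rough bits-per-weight estimate from on-disk GGUF sizes.
# PARAMS is an assumed parameter count for a 7B-class Qwen model.
PARAMS = 7.6e9

# Byte sizes taken from the git-lfs pointers in this commit.
sizes_bytes = {
    "F16":    15_237_853_792,
    "Q6_K":    6_254_199_392,
    "Q5_K_M":  5_444_831_840,
    "Q4_K_M":  4_683_074_144,
}

bits_per_weight = {name: n * 8 / PARAMS for name, n in sizes_bytes.items()}
for name, bpw in bits_per_weight.items():
    print(f"{name}: {bpw:.2f} bits/weight")
```

Under that assumption, F16 comes out near 16 bits/weight and Q4_K_M near 5, consistent with the quantization names (GGUF files also carry some non-weight metadata, so these are upper-bound estimates).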
---

## 🐍 Python Inference (llama-cpp-python)

To run these engines using Python:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-Coder-7B-Instruct-Platinum-Q6_K.gguf",
    n_gpu_layers=-1,  # Target all layers to NVIDIA/Apple GPU
    n_ctx=8192        # High context for coding tasks
)

output = llm("Write a C# class that implements a thread-safe singleton pattern.", max_tokens=300)
print(output["choices"][0]["text"])
```
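For chat-style use, llama-cpp-python also offers `create_chat_completion`, which applies the model's embedded chat template. If you build prompts by hand instead, Qwen2.5 instruct models use the ChatML format; a minimal sketch (`build_chatml` is an illustrative helper, not part of the library):

```python
# Build a ChatML-style prompt as used by Qwen2.5 instruct models.
# build_chatml is a hypothetical helper for illustration.

def build_chatml(messages: list[dict]) -> str:
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    # Leave the assistant turn open so the model continues it.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml([
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a thread-safe singleton in C#."},
])
print(prompt)
```

The resulting string can be passed directly as the prompt in the `llm(...)` call shown above, with the ChatML tokens handled as special tokens by the model's tokenizer.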
---

## 💻 For C# / .NET Users (LLamaSharp)

This collection is fully compatible with .NET applications via the `LLamaSharp` library.

```csharp
using LLama;
using LLama.Common;

var parameters = new ModelParams("Qwen2.5-Coder-7B-Instruct-Platinum-Q6_K.gguf") {
    ContextSize = 8192,
    GpuLayerCount = 35
};

using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);
var executor = new InteractiveExecutor(context);

Console.WriteLine("Coding Specialist Active.");
```
---

## 🏗️ Technical Details

- **Optimization Tool:** llama.cpp (CUDA-accelerated)
- **Architecture:** Qwen2.5-Coder (7B)
- **Hardware Validation:** Dual-GPU (RTX 3090 + RTX A4000)

---

### ☕ Support the Forge

Maintaining high-capacity workstations for model conversion requires hardware investment. If these tools power your production software, please consider supporting the development:

| Platform | Support Link |
| :--- | :--- |
| **Global & India** | [Support via Razorpay](https://razorpay.me/@huggingface) |

**Scan to support via UPI (India only):**

<img src="https://huggingface.co/datasets/CelesteImperia/Assets/resolve/main/QrCode.jpeg" width="200">

---

**Connect with the architect:** [Abhishek Jaiswal on LinkedIn](https://www.linkedin.com/in/abhishek-jaiswal-524056a/)
4
requirements.txt
Normal file
@@ -0,0 +1,4 @@
optimum-intel[openvino,nncf]>=1.20.0
transformers>=4.45.0
accelerate
sentencepiece