Initialize project; model provided by the ModelHub XC community

Model: CelesteImperia/Qwen2.5-7B-Instruct-Platinum-GGUF
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-05 23:44:38 +08:00
commit f2d5803908
9 changed files with 172 additions and 0 deletions

40
.gitattributes vendored Normal file

@@ -0,0 +1,40 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-Platinum-F16.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-Platinum-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-Platinum-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-Platinum-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-Platinum-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
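The `.gitattributes` rules above route every large binary, including the five Platinum GGUF files, through Git LFS, so a plain `git clone` without LFS support only yields small pointer files. Below is a minimal sketch of fetching a single quantization directly, assuming the repository is mirrored on a Hugging Face-compatible hub under the ID shown in the commit header (`CelesteImperia/Qwen2.5-7B-Instruct-Platinum-GGUF`):

```python
# Sketch: download one LFS-backed GGUF file without cloning the whole repository.
# Assumes the repo is reachable through a Hugging Face-compatible hub API.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="CelesteImperia/Qwen2.5-7B-Instruct-Platinum-GGUF",
    filename="Qwen2.5-7B-Instruct-Platinum-Q4_K_M.gguf",
)
print(f"Downloaded to: {gguf_path}")
```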

17
LICENSE Normal file

@@ -0,0 +1,17 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
... [Full Apache 2.0 Text omitted for brevity but should be the standard 2004 version]
Copyright 2024 Alibaba Cloud (Qwen Team)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

3
Qwen2.5-7B-Instruct-Platinum-F16.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bf66b24e905c6a037e21819170844a70a99ac3450624ab739adeacb30da95f17
size 15237853696

3
Qwen2.5-7B-Instruct-Platinum-Q4_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:35f9f55b0c7cdd52115063f16d591d4bc2bca0272083e3dcda184d41f3a2389b
size 4683074048

3
Qwen2.5-7B-Instruct-Platinum-Q5_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ee17538e7867b50ffa581f7a609e34e999d068f76390bdce372f33f65581659d
size 5444831744

3
Qwen2.5-7B-Instruct-Platinum-Q6_K.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7533d60e86d9edbbcc0406f0548ddca13b37580010ea1ebab1485949be8d0b2b
size 6254199296

3
Qwen2.5-7B-Instruct-Platinum-Q8_0.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7cb2bbb2b878fb4941ffda36fd21c2dd0ceee7d56288e857a3d7fa2980bb823e
size 8098525696
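Each of the five GGUF entries above is committed as a Git LFS pointer: the spec version, a `sha256` object ID, and the file size in bytes. A short standard-library sketch for checking a downloaded file against one of these pointers follows; the oid and size are copied from the last pointer above, which by its ~8.1 GB size corresponds to the Q8_0 file, and the `verify` helper name is illustrative:

```python
# Sketch: verify a downloaded GGUF against the sha256 oid and byte size from its LFS pointer.
import hashlib
from pathlib import Path

EXPECTED_OID = "7cb2bbb2b878fb4941ffda36fd21c2dd0ceee7d56288e857a3d7fa2980bb823e"
EXPECTED_SIZE = 8098525696

def verify(path: str) -> bool:
    p = Path(path)
    if p.stat().st_size != EXPECTED_SIZE:
        return False
    digest = hashlib.sha256()
    with p.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() == EXPECTED_OID

print(verify("Qwen2.5-7B-Instruct-Platinum-Q8_0.gguf"))
```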

96
README.md Normal file

@@ -0,0 +1,96 @@
---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: gguf
pipeline_tag: text-generation
license: apache-2.0
tags:
- gguf
- llama-cpp
- qwen2.5
- celeste-imperia
---
# Qwen2.5-7B-Instruct-GGUF (Platinum Series)
![Status](https://img.shields.io/badge/Status-Active-success)
![Format](https://img.shields.io/badge/Format-GGUF-green)
![Series](https://img.shields.io/badge/Series-Platinum-silver)
[![Support](https://img.shields.io/badge/Support-Razorpay-orange)](https://razorpay.me/@huggingface)
This repository contains the **Platinum Series** universal GGUF release of **Qwen2.5-7B-Instruct**. The collection provides multiple quantization levels optimized for cross-platform performance, offering professional-grade reasoning and coding capabilities.
## 📦 Available Files & Quantization Details
| File Name | Quantization | Size | Approx. Quality (vs. FP16) | Recommended For |
| :--- | :--- | :--- | :--- | :--- |
| **Qwen2.5-7B-Instruct-Platinum-F16.gguf** | FP16 | ~15.0 GB | 100% | Master Reference / Benchmarking |
| **Qwen2.5-7B-Instruct-Platinum-Q8_0.gguf** | Q8_0 | ~8.0 GB | 99.9% | Platinum Reference / High-Fidelity |
| **Qwen2.5-7B-Instruct-Platinum-Q6_K.gguf** | Q6_K | ~6.3 GB | 99.8% | High-Quality Reasoning |
| **Qwen2.5-7B-Instruct-Platinum-Q5_K_M.gguf** | Q5_K_M | ~5.5 GB | 99.5% | Balanced Desktop Performance |
| **Qwen2.5-7B-Instruct-Platinum-Q4_K_M.gguf** | Q4_K_M | ~4.7 GB | 99.0% | Efficiency / Mid-Range Hardware |
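As a rough sizing guide, budget the file size plus headroom for the KV cache and runtime buffers. The sketch below picks the largest quantization that fits a given amount of memory; the `pick_quant` helper and the 1.5 GB overhead figure are illustrative assumptions, not measured values:

```python
# Rough helper: pick the largest Platinum quantization that fits in available memory.
# File sizes mirror the table above; the 1.5 GB overhead for KV cache/runtime is an assumption.
QUANT_SIZES_GB = {
    "F16": 15.0,
    "Q8_0": 8.0,
    "Q6_K": 6.3,
    "Q5_K_M": 5.5,
    "Q4_K_M": 4.7,
}

def pick_quant(available_gb: float, overhead_gb: float = 1.5) -> str | None:
    usable = available_gb - overhead_gb
    # The dict is ordered largest-first, so the first fit is the highest-quality option.
    for name, size_gb in QUANT_SIZES_GB.items():
        if size_gb <= usable:
            return name
    return None

print(pick_quant(12.0))  # e.g. a 12 GB GPU -> "Q8_0" under these assumptions
```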
---
## 🐍 Python Inference (llama-cpp-python)
To run these models from Python using the `llama-cpp-python` bindings:
```python
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-7B-Instruct-Platinum-Q8_0.gguf",
    n_gpu_layers=-1,  # Offload all layers to the GPU (NVIDIA CUDA / Apple Metal)
    n_ctx=4096
)

output = llm("Explain the core improvements in Qwen 2.5.", max_tokens=150)
print(output["choices"][0]["text"])
```
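Because this is an instruct-tuned model, prompts usually behave better when routed through the chat template rather than raw completion. A minimal sketch using the same library's chat API; recent GGUF exports typically embed the chat template in their metadata, so no extra template configuration should be needed:

```python
# Sketch: chat-style inference so the instruct template is applied automatically.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-7B-Instruct-Platinum-Q8_0.gguf",
    n_gpu_layers=-1,
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise technical assistant."},
        {"role": "user", "content": "Explain the core improvements in Qwen 2.5."},
    ],
    max_tokens=150,
)
print(response["choices"][0]["message"]["content"])
```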
---
## 💻 For C# / .NET Users (LLamaSharp)
This collection is fully compatible with .NET applications via the `LLamaSharp` library.
```csharp
using LLama.Common;
using LLama;

var parameters = new ModelParams("Qwen2.5-7B-Instruct-Platinum-Q8_0.gguf")
{
    ContextSize = 4096,
    GpuLayerCount = 35
};

using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);
var executor = new InteractiveExecutor(context);

Console.WriteLine("Universal Engine Active.");
```
---
## 🏗️ Technical Details
- **Optimization Tool:** llama.cpp (CUDA-accelerated)
- **Architecture:** Qwen-2.5 (7B)
- **Hardware Validation:** Dual-GPU (RTX 3090 + RTX A4000)
---
### ☕ Support the Forge
Maintaining the production line for high-fidelity models requires significant hardware resources. If these tools power your research or industrial projects, please consider supporting the development:
| Platform | Support Link |
| :--- | :--- |
| **Global & India** | [Support via Razorpay](https://razorpay.me/@huggingface) |
**Scan to support via UPI (India Only):**
<img src="https://huggingface.co/datasets/CelesteImperia/Assets/resolve/main/QrCode.jpeg" width="200">
---
**Connect with the architect:** [Abhishek Jaiswal on LinkedIn](https://www.linkedin.com/in/abhishek-jaiswal-524056a/)

4
requirements.txt Normal file

@@ -0,0 +1,4 @@
optimum-intel[openvino,nncf]>=1.20.0
transformers>=4.45.0
accelerate
sentencepiece
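Note that these pinned dependencies point at `optimum-intel`/OpenVINO rather than `llama-cpp-python`, so they appear aimed at running the upstream `Qwen/Qwen2.5-7B-Instruct` checkpoint on Intel hardware, not the GGUF files above; the README's Python example additionally needs `pip install llama-cpp-python`. A hedged sketch of what the pinned packages are typically used for:

```python
# Sketch under the pinned requirements: export and run the base model with OpenVINO.
# This uses the upstream Qwen/Qwen2.5-7B-Instruct checkpoint, not the GGUF files in this repo.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id, export=True)  # convert to OpenVINO IR on the fly

inputs = tokenizer("Explain the core improvements in Qwen 2.5.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```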