Initialize project; model provided by the ModelHub XC community

Model: CelesteImperia/Phi-3.5-mini-instruct-Platinum-GGUF
Source: Original Platform
ModelHub XC
2026-05-05 21:25:31 +08:00
commit e1d9765028
9 changed files with 176 additions and 0 deletions

.gitattributes vendored Normal file (40 lines added)

@@ -0,0 +1,40 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Phi-3.5-mini-instruct-Platinum-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Phi-3.5-mini-instruct-Platinum-F16.gguf filter=lfs diff=lfs merge=lfs -text
Phi-3.5-mini-instruct-Platinum-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Phi-3.5-mini-instruct-Platinum-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
Phi-3.5-mini-instruct-Platinum-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text

LICENSE Normal file (21 lines added)

@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2024 Microsoft

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Phi-3.5-mini-instruct-Platinum-F16.gguf Normal file (3 lines added)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dfb7f35ff9b60e406728c99bf6247454ebe51218766371e9381a995547cffba3
size 7643297280

Phi-3.5-mini-instruct-Platinum-Q4_K_M.gguf Normal file (3 lines added)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c5347cbd620330a7c051c3d2fcad156b2c606340682f3d09c92df5bcfb9deaba
size 2396771328

Phi-3.5-mini-instruct-Platinum-Q5_K_M.gguf Normal file (3 lines added)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1e90a70f0c3832dd90e86e6df179446d51d94bdfa6934eee42bd8e892f8b9965
size 2755113984

Phi-3.5-mini-instruct-Platinum-Q6_K.gguf Normal file (3 lines added)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:567330ee02f1d18c5bbaacf12984e57dcd9c5cdffad20be988f0154e362e50fd
size 3135853056

Phi-3.5-mini-instruct-Platinum-Q8_0.gguf Normal file (3 lines added)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bfe6b9881e856dcb0fb33fede9b4b24c88f6a07592aa4f66568aa38d29a14153
size 4061222400
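Each of the GGUF entries above is stored as a Git LFS pointer: a three-line text stub recording the object's sha256 and byte size rather than the weights themselves. After fetching the real files (e.g. with `git lfs pull`), the stub can be used to verify the download. A minimal sketch, assuming a local pointer string and file path; the helper names are illustrative:

```python
import hashlib
import re

def parse_lfs_pointer(text: str) -> dict:
    """Extract the oid and size fields from a Git LFS pointer stub."""
    oid = re.search(r"oid sha256:([0-9a-f]{64})", text).group(1)
    size = int(re.search(r"size (\d+)", text).group(1))
    return {"oid": oid, "size": size}

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so multi-GB GGUFs never sit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# The F16 pointer from this commit:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:dfb7f35ff9b60e406728c99bf6247454ebe51218766371e9381a995547cffba3
size 7643297280"""
print(parse_lfs_pointer(pointer))
```

A downloaded file is intact when `sha256_of_file(path)` equals the pointer's `oid` and its size on disk equals `size`.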

README.md Normal file (96 lines added)

@@ -0,0 +1,96 @@
---
base_model: microsoft/Phi-3.5-mini-instruct
library_name: gguf
pipeline_tag: text-generation
license: mit
tags:
- gguf
- llama-cpp
- phi-3.5
- celeste-imperia
---
# Phi-3.5-mini-instruct-GGUF (Platinum Series)
![Status](https://img.shields.io/badge/Status-Active-success)
![Format](https://img.shields.io/badge/Format-GGUF-green)
![Series](https://img.shields.io/badge/Series-Platinum-silver)
[![Support](https://img.shields.io/badge/Support-Razorpay-orange)](https://razorpay.me/@huggingface)
This repository contains the **Platinum Series** universal GGUF release of **Phi-3.5-mini-instruct**. This collection provides multiple quantization levels optimized for cross-platform performance, offering advanced reasoning capabilities with 128k context support.
## 📦 Available Files & Quantization Details
| File Name | Quantization | Size | Accuracy | Recommended For |
| :--- | :--- | :--- | :--- | :--- |
| **Phi-3.5-mini-instruct-Platinum-F16.gguf** | FP16 | ~7.6 GB | 100% | Master Reference / Benchmarking |
| **Phi-3.5-mini-instruct-Platinum-Q8_0.gguf** | Q8_0 | ~4.1 GB | 99.9% | Platinum Reference / High-Fidelity |
| **Phi-3.5-mini-instruct-Platinum-Q6_K.gguf** | Q6_K | ~3.1 GB | 99.8% | High-Quality Inference |
| **Phi-3.5-mini-instruct-Platinum-Q5_K_M.gguf** | Q5_K_M | ~2.8 GB | 99.4% | Balanced Desktop Performance |
| **Phi-3.5-mini-instruct-Platinum-Q4_K_M.gguf** | Q4_K_M | ~2.4 GB | 98.8% | Mobile / Low-Power Efficiency |
---
## 🐍 Python Inference (llama-cpp-python)
To run these models from Python via the `llama-cpp-python` bindings:
```python
from llama_cpp import Llama

llm = Llama(
    model_path="Phi-3.5-mini-instruct-Platinum-Q8_0.gguf",
    n_gpu_layers=-1,  # offload all layers to the GPU (CUDA or Metal)
    n_ctx=4096,
)

output = llm("Discuss the architectural benefits of Phi-3.5.", max_tokens=150)
print(output["choices"][0]["text"])
```
---
## 💻 For C# / .NET Users (LLamaSharp)
This collection is fully compatible with .NET applications via the `LLamaSharp` library.
```csharp
using LLama;
using LLama.Common;

var parameters = new ModelParams("Phi-3.5-mini-instruct-Platinum-Q8_0.gguf")
{
    ContextSize = 4096,
    GpuLayerCount = 35
};

using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);
var executor = new InteractiveExecutor(context);

Console.WriteLine("Universal Engine Active.");
```
---
## 🏗️ Technical Details
- **Optimization Tool:** llama.cpp (CUDA-accelerated)
- **Architecture:** Phi-3.5
- **Hardware Validation:** Dual-GPU (RTX 3090 + RTX A4000)
---
### ☕ Support the Forge
Maintaining the production line for high-fidelity models requires significant hardware resources. If these tools power your research or industrial projects, please consider supporting the development:
| Platform | Support Link |
| :--- | :--- |
| **Global & India** | [Support via Razorpay](https://razorpay.me/@huggingface) |
**Scan to support via UPI (India Only):**
<img src="https://huggingface.co/datasets/CelesteImperia/Assets/resolve/main/QrCode.jpeg" width="200">
---
**Connect with the architect:** [Abhishek Jaiswal on LinkedIn](https://www.linkedin.com/in/abhishek-jaiswal-524056a/)

requirements.txt Normal file (4 lines added)

@@ -0,0 +1,4 @@
optimum-intel[openvino,nncf]>=1.20.0
transformers>=4.45.0
accelerate
sentencepiece