Initialize project; model provided by the ModelHub XC community
Model: CelesteImperia/Phi-3.5-mini-instruct-Platinum-GGUF Source: Original Platform
README.md (new file, 96 lines)
---
base_model: microsoft/Phi-3.5-mini-instruct
library_name: gguf
pipeline_tag: text-generation
license: mit
tags:
- gguf
- llama-cpp
- phi-3.5
- celeste-imperia
---

# Phi-3.5-mini-instruct-GGUF (Platinum Series)




[](https://razorpay.me/@huggingface)

This repository contains the **Platinum Series** universal GGUF release of **Phi-3.5-mini-instruct**. This collection provides multiple quantization levels optimized for cross-platform performance, offering advanced reasoning capabilities with 128k context support.

## 📦 Available Files & Quantization Details

| File Name | Quantization | Size | Accuracy | Recommended For |
| :--- | :--- | :--- | :--- | :--- |
| **Phi-3.5-mini-instruct-Platinum-F16.gguf** | FP16 | ~7.6 GB | 100% | Master Reference / Benchmarking |
| **Phi-3.5-mini-instruct-Platinum-Q8_0.gguf** | Q8_0 | ~4.1 GB | 99.9% | Platinum Reference / High-Fidelity |
| **Phi-3.5-mini-instruct-Platinum-Q6_K.gguf** | Q6_K | ~3.1 GB | 99.8% | High-Quality Inference |
| **Phi-3.5-mini-instruct-Platinum-Q5_K_M.gguf** | Q5_K_M | ~2.8 GB | 99.4% | Balanced Desktop Performance |
| **Phi-3.5-mini-instruct-Platinum-Q4_K_M.gguf** | Q4_K_M | ~2.4 GB | 98.8% | Mobile / Low-Power Efficiency |
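
As a rough rule of thumb, the table above can be used to pick a file programmatically by memory budget. A minimal sketch (the size figures are copied from the table; `pick_quant` is an illustrative helper, not part of the release):

```python
# Quantization files and approximate sizes (GB), ordered best fidelity first,
# as listed in the table above.
QUANTS = [
    ("Phi-3.5-mini-instruct-Platinum-F16.gguf", 7.6),
    ("Phi-3.5-mini-instruct-Platinum-Q8_0.gguf", 4.1),
    ("Phi-3.5-mini-instruct-Platinum-Q6_K.gguf", 3.1),
    ("Phi-3.5-mini-instruct-Platinum-Q5_K_M.gguf", 2.8),
    ("Phi-3.5-mini-instruct-Platinum-Q4_K_M.gguf", 2.4),
]

def pick_quant(budget_gb):
    """Return the highest-fidelity file that fits the given RAM/VRAM budget."""
    for name, size_gb in QUANTS:
        if size_gb <= budget_gb:
            return name
    return None  # nothing fits; consider a smaller model

print(pick_quant(4.0))  # → Phi-3.5-mini-instruct-Platinum-Q6_K.gguf
```

Note that actual memory use at inference time is higher than the file size, since the KV cache grows with context length.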

---

## 🐍 Python Inference (llama-cpp-python)

To run these models using Python:
```python
from llama_cpp import Llama

llm = Llama(
    model_path="Phi-3.5-mini-instruct-Platinum-Q8_0.gguf",
    n_gpu_layers=-1,  # target all layers to NVIDIA/Apple GPU
    n_ctx=4096
)

output = llm("Discuss the architectural benefits of Phi-3.5.", max_tokens=150)
print(output["choices"][0]["text"])
```

---

## 💻 For C# / .NET Users (LLamaSharp)

This collection is fully compatible with .NET applications via the `LLamaSharp` library.

```csharp
using LLama.Common;
using LLama;

var parameters = new ModelParams("Phi-3.5-mini-instruct-Platinum-Q8_0.gguf") {
    ContextSize = 4096,
    GpuLayerCount = 35
};

using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);
var executor = new InteractiveExecutor(context);

Console.WriteLine("Universal Engine Active.");
```

---

## 🏗️ Technical Details

- **Optimization Tool:** llama.cpp (CUDA-accelerated)
- **Architecture:** Phi-3.5
- **Hardware Validation:** Dual-GPU (RTX 3090 + RTX A4000)

---

### ☕ Support the Forge

Maintaining the production line for high-fidelity models requires significant hardware resources. If these tools power your research or industrial projects, please consider supporting the development:

| Platform | Support Link |
| :--- | :--- |
| **Global & India** | [Support via Razorpay](https://razorpay.me/@huggingface) |

**Scan to support via UPI (India Only):**

<img src="https://huggingface.co/datasets/CelesteImperia/Assets/resolve/main/QrCode.jpeg" width="200">

---

**Connect with the architect:** [Abhishek Jaiswal on LinkedIn](https://www.linkedin.com/in/abhishek-jaiswal-524056a/)