---
base_model: microsoft/Phi-3.5-mini-instruct
library_name: gguf
pipeline_tag: text-generation
license: mit
tags:
- gguf
- llama-cpp
- phi-3.5
- celeste-imperia
---

# Phi-3.5-mini-instruct-GGUF (Platinum Series)


This repository contains the Platinum Series universal GGUF release of Phi-3.5-mini-instruct. This collection provides multiple quantization levels optimized for cross-platform performance, offering advanced reasoning capabilities with 128k context support.

## 📦 Available Files & Quantization Details

| File Name | Quantization | Size | Accuracy | Recommended For |
|---|---|---|---|---|
| Phi-3.5-mini-instruct-Platinum-F16.gguf | FP16 | ~7.6 GB | 100% | Master Reference / Benchmarking |
| Phi-3.5-mini-instruct-Platinum-Q8_0.gguf | Q8_0 | ~4.1 GB | 99.9% | Platinum Reference / High-Fidelity |
| Phi-3.5-mini-instruct-Platinum-Q6_K.gguf | Q6_K | ~3.1 GB | 99.8% | High-Quality Inference |
| Phi-3.5-mini-instruct-Platinum-Q5_K_M.gguf | Q5_K_M | ~2.8 GB | 99.4% | Balanced Desktop Performance |
| Phi-3.5-mini-instruct-Platinum-Q4_K_M.gguf | Q4_K_M | ~2.4 GB | 98.8% | Mobile / Low-Power Efficiency |
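A rough way to compare these releases is bits per weight: file size divided by parameter count. The sketch below assumes ~3.8B parameters for Phi-3.5-mini and uses the approximate sizes from the table, so treat the results as ballpark figures only:

```python
# Approximate bits-per-weight from file size (a sanity check, not an
# official metric). PARAMS is an assumption (~3.8B for Phi-3.5-mini).
PARAMS = 3.8e9

sizes_gb = {
    "F16": 7.6,
    "Q8_0": 4.1,
    "Q6_K": 3.1,
    "Q5_K_M": 2.8,
    "Q4_K_M": 2.4,
}

def bits_per_weight(size_gb: float, params: float = PARAMS) -> float:
    """Convert a file size in GB into an approximate bits-per-weight figure."""
    return size_gb * 1e9 * 8 / params

for quant, gb in sizes_gb.items():
    print(f"{quant}: ~{bits_per_weight(gb):.1f} bits/weight")
```

FP16 comes out at 16 bits/weight by construction, and the K-quants land roughly where their names suggest, which is a quick consistency check on a download.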

## 🐍 Python Inference (llama-cpp-python)

To run these models from Python:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Phi-3.5-mini-instruct-Platinum-Q8_0.gguf",
    n_gpu_layers=-1,  # offload all layers to the GPU (NVIDIA/Apple)
    n_ctx=4096,       # context window; the model supports up to 128k
)

output = llm("Discuss the architectural benefits of Phi-3.5.", max_tokens=150)
print(output["choices"][0]["text"])
```
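For chat-style use, `llm.create_chat_completion(messages=[...])` applies the model's chat template automatically. Equivalently, a Phi-3.5 prompt can be assembled by hand in the Phi-3 format (`<|system|>` / `<|user|>` / `<|assistant|>` turns, each closed with `<|end|>`). A minimal sketch of that template:

```python
def build_phi35_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Phi-3.5 chat format."""
    return (
        f"<|system|>\n{system}<|end|>\n"
        f"<|user|>\n{user}<|end|>\n"
        f"<|assistant|>\n"
    )

prompt = build_phi35_prompt(
    "You are a helpful assistant.",
    "Summarize the benefits of GGUF quantization.",
)
print(prompt)
```

Passing this string to `llm(...)` as a raw prompt should match the behavior of the built-in chat-completion path for a single turn.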

## 💻 For C# / .NET Users (LLamaSharp)

This collection is fully compatible with .NET applications via the LLamaSharp library.

```csharp
using LLama.Common;
using LLama;

var parameters = new ModelParams("Phi-3.5-mini-instruct-Platinum-Q8_0.gguf")
{
    ContextSize = 4096,
    GpuLayerCount = 35 // number of layers to offload to the GPU
};

using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);
var executor = new InteractiveExecutor(context);

Console.WriteLine("Universal Engine Active.");
```

## 🏗️ Technical Details

  • Optimization Tool: llama.cpp (CUDA-accelerated)
  • Architecture: Phi-3.5
  • Hardware Validation: Dual-GPU (RTX 3090 + RTX A4000)
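After downloading, a quick integrity check is possible because every GGUF file begins with the four ASCII bytes `GGUF` (the format's magic number). A minimal sketch:

```python
def looks_like_gguf(path: str) -> bool:
    """Return True if the file starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# Example (hypothetical local path):
# looks_like_gguf("Phi-3.5-mini-instruct-Platinum-Q8_0.gguf")
```

This catches truncated or HTML-error-page downloads before loading a multi-gigabyte file into an inference engine.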

## Support the Forge

Maintaining the production line for high-fidelity models requires significant hardware resources. If these tools power your research or industrial projects, please consider supporting the development:

| Platform | Support Link |
|---|---|
| Global & India | Support via Razorpay |

Scan to support via UPI (India Only):


Connect with the architect: Abhishek Jaiswal on LinkedIn