Initialize project; model provided by the ModelHub XC community

Model: worthdoing/Phi-3.5-mini-instruct-GGUF
Source: Original Platform
Commit c371a2d01b by ModelHub XC, 2026-05-08 16:04:07 +08:00
8 changed files with 194 additions and 0 deletions

.gitattributes (vendored, 41 lines added)

@@ -0,0 +1,41 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
phi-3.5-mini-instruct-Q4_K_M-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
phi-3.5-mini-instruct-Q5_K_M-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
phi-3.5-mini-instruct-Q8_0-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
phi-3.5-mini-instruct-Q3_K_M-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
phi-3.5-mini-instruct-Q4_K_S-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
phi-3.5-mini-instruct-Q5_K_S-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text

README.md (135 lines added)

@@ -0,0 +1,135 @@
---
language:
- en
- fr
- multilingual
license: apache-2.0
tags:
- gguf
- quantized
- mac
- apple-silicon
- local-inference
- worthdoing
base_model: microsoft/Phi-3.5-mini-instruct
quantized_by: worthdoing
pipeline_tag: text-generation
---
<p align="center">
<img src="https://raw.githubusercontent.com/Worth-Doing/brand-assets/main/png/variants/04-horizontal.png" alt="worthdoing" width="400"/>
</p>
<p align="center"><strong>Author: Simon-Pierre Boucher</strong></p>
<p align="center">
<img src="https://img.shields.io/badge/Format-GGUF-blue?style=for-the-badge" alt="GGUF"/>
<img src="https://img.shields.io/badge/Params-3.8B-orange?style=for-the-badge" alt="Parameters"/>
<img src="https://img.shields.io/badge/Platform-Apple_Silicon-black?style=for-the-badge&logo=apple" alt="Apple Silicon"/>
<img src="https://img.shields.io/badge/License-Apache_2.0-green?style=for-the-badge" alt="License"/>
<img src="https://img.shields.io/badge/Quantized_by-worthdoing-purple?style=for-the-badge" alt="worthdoing"/>
</p>
<p align="center">
<img src="https://img.shields.io/badge/Q4__K__M-2.0_GB-brightgreen?style=flat-square" alt="Q4_K_M"/>
<img src="https://img.shields.io/badge/Q5__K__M-2.4_GB-yellow?style=flat-square" alt="Q5_K_M"/>
<img src="https://img.shields.io/badge/Q8__0-3.5_GB-red?style=flat-square" alt="Q8_0"/>
</p>
# Phi-3.5-mini-instruct - GGUF Quantized by worthdoing
> Quantized for local Mac inference (Apple Silicon / Metal) by **worthdoing**
## About
This is a GGUF quantized version of [Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct), optimized for running locally on Apple Silicon Macs with `llama.cpp`, `Ollama`, or `LM Studio`.
- **Original model:** [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)
- **Parameters:** 3.8B
- **Quantized by:** worthdoing
- **Pipeline:** corelm-model v1.0
## Description
Microsoft's Phi-3.5-mini: strong reasoning performance in a compact 3.8B-parameter form factor.
## Available Quantizations
| File | Quant | BPW | Size | Use Case |
|------|-------|-----|------|----------|
| `phi-3.5-mini-instruct-Q4_K_M-worthdoing.gguf` | Q4_K_M | 4.58 | ~2.0 GB | **Recommended** - Best quality/size ratio |
| `phi-3.5-mini-instruct-Q5_K_M-worthdoing.gguf` | Q5_K_M | 5.33 | ~2.4 GB | Higher quality, still fast |
| `phi-3.5-mini-instruct-Q8_0-worthdoing.gguf` | Q8_0 | 7.96 | ~3.5 GB | Near-original quality |
## How to Use
### With Ollama
```bash
# Create a Modelfile
cat > Modelfile <<'MODELEOF'
FROM ./phi-3.5-mini-instruct-Q4_K_M-worthdoing.gguf
MODELEOF
ollama create phi-3.5-mini-instruct -f Modelfile
ollama run phi-3.5-mini-instruct
```
### With llama.cpp
```bash
llama-cli -m phi-3.5-mini-instruct-Q4_K_M-worthdoing.gguf -p "Your prompt here" -ngl 99
```
### With LM Studio
1. Download the GGUF file
2. Open LM Studio -> My Models -> Import
3. Select the GGUF file and start chatting
## Quantization Method
Our quantization pipeline (**corelm-model v1.0**) follows a rigorous multi-step process to ensure maximum quality and compatibility:
### Step 1 — Download & Validation
- Model weights are downloaded from HuggingFace Hub in **SafeTensors** format (`.safetensors`)
- Legacy formats (`.bin`, `.pt`) are excluded to ensure clean, verified weights
- Tokenizer, configuration, and all metadata are preserved
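The legacy-format exclusion above amounts to a filename filter over the repository listing. A minimal sketch in Python (the pattern lists and helper name are illustrative assumptions, not the pipeline's actual code):

```python
from fnmatch import fnmatch

# Patterns the download keeps (verified SafeTensors weights plus metadata)
# and patterns it rejects (legacy pickle-based weight formats).
ALLOW = ["*.safetensors", "*.json", "tokenizer*", "*.txt"]
DENY = ["*.bin", "*.pt", "*.pth", "*.ckpt"]

def keep_file(name: str) -> bool:
    """Return True if a repo file should be downloaded."""
    if any(fnmatch(name, p) for p in DENY):
        return False
    return any(fnmatch(name, p) for p in ALLOW)
```

With this filter, `model-00001-of-00002.safetensors` and `tokenizer.json` pass, while `pytorch_model.bin` is rejected.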
### Step 2 — Conversion to GGUF F16 Baseline
- The original model is converted to **GGUF format at FP16 precision** using `convert_hf_to_gguf.py` from [llama.cpp](https://github.com/ggml-org/llama.cpp)
- This lossless baseline preserves the full original model quality
- Architecture-specific tensors (attention, FFN, embeddings, MoE routing) are mapped to their GGUF equivalents
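The conversion step above boils down to one invocation; a sketch assuming a local llama.cpp checkout next to the downloaded model directory (paths are hypothetical):

```shell
# Convert HF SafeTensors weights to the lossless GGUF F16 baseline.
# convert_hf_to_gguf.py ships in the llama.cpp repository root.
python llama.cpp/convert_hf_to_gguf.py ./Phi-3.5-mini-instruct \
    --outtype f16 \
    --outfile phi-3.5-mini-instruct-f16.gguf
```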
### Step 3 — K-Quant Quantization
- The F16 baseline is quantized using `llama-quantize` with **k-quant methods**
- K-quants use a mixed-precision approach: more important layers (attention, output) retain higher precision, while less sensitive layers (FFN) are compressed more aggressively
- Each quantization level offers a different quality/size tradeoff:
| Method | Bits per Weight | Strategy |
|--------|----------------|----------|
| **Q4_K_M** | ~4.58 bpw | Mixed 4/5-bit. Attention & output layers use Q5_K, FFN layers use Q4_K. Best balance of quality and size. |
| **Q5_K_M** | ~5.33 bpw | Mixed 5/6-bit. Attention & output layers use Q6_K, FFN layers use Q5_K. Higher quality with moderate size increase. |
| **Q8_0** | ~7.96 bpw | Uniform 8-bit. All layers quantized to 8-bit. Near-lossless quality, largest file size. |
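Producing one of the quantization levels in the table is a single `llama-quantize` call over the F16 baseline; a sketch with hypothetical filenames:

```shell
# Quantize the F16 baseline with a k-quant method.
# llama-quantize is built as part of llama.cpp.
llama-quantize phi-3.5-mini-instruct-f16.gguf \
    phi-3.5-mini-instruct-Q4_K_M.gguf Q4_K_M
```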
### Step 4 — Metadata Injection
- Custom metadata is embedded directly in each GGUF file:
- `general.quantized_by`: worthdoing
- `general.quantization_version`: corelm-1.0
- This ensures full traceability and provenance of every quantized file
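Keys like `general.quantized_by` live in GGUF's key/value metadata section, which follows a fixed-size file header. As an illustration of where that metadata sits (this is a stdlib reading sketch, not the pipeline's injection tool), the 24-byte header can be parsed as:

```python
import struct

def read_gguf_header(raw: bytes) -> dict:
    """Parse the fixed GGUF header: 'GGUF' magic, version,
    tensor count, and metadata key/value count (all little-endian)."""
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", raw, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensors": n_tensors, "kv_pairs": n_kv}
```

The `n_kv` count is what grows when custom key/value pairs such as `general.quantized_by` are embedded.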
### Tools & Environment
- **llama.cpp**: Used for both conversion and quantization — the industry-standard open-source LLM inference engine
- **Target platform**: Apple Silicon Macs (M1/M2/M3/M4) with Metal GPU acceleration
- **Inference runtimes**: Compatible with `llama.cpp`, `Ollama`, `LM Studio`, `koboldcpp`, and any GGUF-compatible runtime
## Recommended Hardware
| Quant | Min RAM | Recommended |
|-------|---------|-------------|
| Q4_K_M | 4 GB | Mac with 8 GB+ RAM |
| Q5_K_M | 4 GB | Mac with 8 GB+ RAM |
| Q8_0 | 4 GB | Mac with 8 GB+ RAM |
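As a sanity check, the file sizes above follow directly from parameter count times bits per weight; a quick back-of-envelope sketch (3.8B parameters taken from the model card, header overhead ignored):

```python
def gguf_size_gib(n_params: float, bpw: float) -> float:
    """Rough quantized file size in GiB: parameters * bits-per-weight / 8,
    ignoring the small header/metadata overhead."""
    return n_params * bpw / 8 / 2**30

# 3.8B parameters at the table's bpw values reproduces
# the listed ~2.0 / ~2.4 / ~3.5 GB file sizes.
for quant, bpw in [("Q4_K_M", 4.58), ("Q5_K_M", 5.33), ("Q8_0", 7.96)]:
    print(f"{quant}: ~{gguf_size_gib(3.8e9, bpw):.1f} GiB")
```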
## Tags
`general`, `reasoning`, `coding`, `math`
---
*Quantized with corelm-model pipeline by **worthdoing** on 2026-04-17*


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1297e3e54d28092775163a0826f19663432a7e5a90bb0062003ce5f14b3d1c24
size 1962554400


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a58b37b8c631501c0f8a5ba711579c3c067c6d4c51ea5215c05190289b067f0a
size 2396770848


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bc91f523a4566dae0c3bba385452dea13da931a20bc8e6d4b5597004ac8aa75b
size 2202915360


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7fee7e6ec750354f3d73d0aa96dc3675eca96cdba5047904e0c411604ce5f67b
size 2755113504


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e4363d962d978ce36d7a5f1d0b6b0fcc1c54567d1d7b1c908c11a6430d3ba765
size 2641474080


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:25a8bac561ce82344e2fe2e875e3633f81a162d3daee16f2ca51d559ae69669b
size 4061221920