commit c371a2d01b470fccbeaeb2973cd781e1caa892a6
Author: ModelHub XC
Date:   Fri May 8 16:04:07 2026 +0800

    Initialize project; model provided by the ModelHub XC community

    Model: worthdoing/Phi-3.5-mini-instruct-GGUF
    Source: Original Platform

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..df4ffdb
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,41 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+phi-3.5-mini-instruct-Q4_K_M-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
+phi-3.5-mini-instruct-Q5_K_M-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
+phi-3.5-mini-instruct-Q8_0-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
+phi-3.5-mini-instruct-Q3_K_M-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
+phi-3.5-mini-instruct-Q4_K_S-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
+phi-3.5-mini-instruct-Q5_K_S-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..00c2c0b
--- /dev/null
+++ b/README.md
@@ -0,0 +1,135 @@
+---
+language:
+- en
+- fr
+- multilingual
+license: apache-2.0
+tags:
+- gguf
+- quantized
+- mac
+- apple-silicon
+- local-inference
+- worthdoing
+base_model: microsoft/Phi-3.5-mini-instruct
+quantized_by: worthdoing
+pipeline_tag: text-generation
+---
+

+<!-- Badges (image links not recoverable): worthdoing · GGUF · Parameters · Apple Silicon · License · Q4_K_M · Q5_K_M · Q8_0 -->
+
+Author: Simon-Pierre Boucher

+# Phi-3.5-mini-instruct - GGUF Quantized by worthdoing
+
+> Quantized for local Mac inference (Apple Silicon / Metal) by **worthdoing**
+
+## About
+
+This is a GGUF quantized version of [Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct), optimized for running locally on Apple Silicon Macs with `llama.cpp`, `Ollama`, or `LM Studio`.
+
+- **Original model:** [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)
+- **Parameters:** 3.8B
+- **Quantized by:** worthdoing
+- **Pipeline:** corelm-model v1.0
+
+## Description
+
+Microsoft's Phi-3.5-mini-instruct: strong reasoning in a compact 3.8B-parameter form factor.
+
+## Available Quantizations
+
+| File | Quant | BPW | Size | Use Case |
+|------|-------|-----|------|----------|
+| `phi-3.5-mini-instruct-Q4_K_M-worthdoing.gguf` | Q4_K_M | 4.58 | ~2.4 GB | **Recommended** - Best quality/size ratio |
+| `phi-3.5-mini-instruct-Q5_K_M-worthdoing.gguf` | Q5_K_M | 5.33 | ~2.8 GB | Higher quality, still fast |
+| `phi-3.5-mini-instruct-Q8_0-worthdoing.gguf` | Q8_0 | 7.96 | ~4.1 GB | Near-original quality |
+
+The repository also includes Q3_K_M (~2.0 GB), Q4_K_S (~2.2 GB), and Q5_K_S (~2.6 GB) variants for tighter memory budgets.
+
+## How to Use
+
+### With Ollama
+```bash
+# Create a Modelfile
+cat > Modelfile <<'MODELEOF'
+FROM ./phi-3.5-mini-instruct-Q4_K_M-worthdoing.gguf
+MODELEOF
+
+ollama create phi-3.5-mini-instruct -f Modelfile
+ollama run phi-3.5-mini-instruct
+```
+
+### With llama.cpp
+```bash
+llama-cli -m phi-3.5-mini-instruct-Q4_K_M-worthdoing.gguf -p "Your prompt here" -ngl 99
+```
+
+### With LM Studio
+1. Download the GGUF file
+2. Open LM Studio -> My Models -> Import
+3. Select the GGUF file and start chatting
+
+## Quantization Method
+
+Our quantization pipeline (**corelm-model v1.0**) follows a rigorous multi-step process to ensure maximum quality and compatibility:
+
+### Step 1 — Download & Validation
+- Model weights are downloaded from the Hugging Face Hub in **SafeTensors** format (`.safetensors`)
+- Legacy formats (`.bin`, `.pt`) are excluded to ensure clean, verified weights
+- Tokenizer, configuration, and all metadata are preserved
+
+### Step 2 — Conversion to GGUF F16 Baseline
+- The original model is converted to **GGUF format at FP16 precision** using `convert_hf_to_gguf.py` from [llama.cpp](https://github.com/ggml-org/llama.cpp)
+- This lossless baseline preserves the full original model quality
+- Architecture-specific tensors (attention, FFN, embeddings, MoE routing) are mapped to their GGUF equivalents
+
+### Step 3 — K-Quant Quantization
+- The F16 baseline is quantized using `llama-quantize` with **k-quant methods**
+- K-quants use a mixed-precision approach: more important layers (attention, output) retain higher precision, while less sensitive layers (FFN) are compressed more aggressively
+- Each quantization level offers a different quality/size tradeoff (an illustrative command sketch of Steps 1-3 follows the table):
+
+| Method | Bits per Weight | Strategy |
+|--------|----------------|----------|
+| **Q4_K_M** | ~4.58 bpw | Mixed 4/5-bit. Attention & output layers use Q5_K, FFN layers use Q4_K. Best balance of quality and size. |
+| **Q5_K_M** | ~5.33 bpw | Mixed 5/6-bit. Attention & output layers use Q6_K, FFN layers use Q5_K. Higher quality with moderate size increase. |
+| **Q8_0** | ~7.96 bpw | Uniform 8-bit. All layers quantized to 8-bit. Near-lossless quality, largest file size. |
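+
+The pipeline's own scripts are not part of this repository. As a rough, illustrative sketch only, Steps 1-3 correspond to commands along the following lines using standard Hugging Face and llama.cpp tooling; the local directory names, output filenames, and llama.cpp build path are placeholders, not the exact values used by corelm-model:
+
+```bash
+# Step 1: fetch SafeTensors weights only, skipping legacy .bin/.pt checkpoints
+huggingface-cli download microsoft/Phi-3.5-mini-instruct \
+  --exclude "*.bin" "*.pt" \
+  --local-dir Phi-3.5-mini-instruct
+
+# Step 2: convert the checkpoint directory to a lossless F16 GGUF baseline
+python llama.cpp/convert_hf_to_gguf.py Phi-3.5-mini-instruct \
+  --outtype f16 \
+  --outfile phi-3.5-mini-instruct-F16.gguf
+
+# Step 3: quantize the F16 baseline with a k-quant method (Q4_K_M shown)
+llama.cpp/build/bin/llama-quantize \
+  phi-3.5-mini-instruct-F16.gguf \
+  phi-3.5-mini-instruct-Q4_K_M-worthdoing.gguf \
+  Q4_K_M
+```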
+
+### Step 4 — Metadata Injection
+- Custom metadata is embedded directly in each GGUF file:
+  - `general.quantized_by`: worthdoing
+  - `general.quantization_version`: corelm-1.0
+- This ensures full traceability and provenance of every quantized file
+
+### Tools & Environment
+- **llama.cpp**: Used for both conversion and quantization — the industry-standard open-source LLM inference engine
+- **Target platform**: Apple Silicon Macs (M1/M2/M3/M4) with Metal GPU acceleration
+- **Inference runtimes**: Compatible with `llama.cpp`, `Ollama`, `LM Studio`, `koboldcpp`, and any GGUF-compatible runtime
+
+## Recommended Hardware
+
+| Quant | Min RAM | Recommended |
+|-------|---------|-------------|
+| Q4_K_M | 4 GB | Mac with 8 GB+ RAM |
+| Q5_K_M | 4 GB | Mac with 8 GB+ RAM |
+| Q8_0 | 8 GB | Mac with 8 GB+ RAM |
+
+## Tags
+
+`general`, `reasoning`, `coding`, `math`
+
+---
+
+*Quantized with corelm-model pipeline by **worthdoing** on 2026-04-17*
diff --git a/phi-3.5-mini-instruct-Q3_K_M-worthdoing.gguf b/phi-3.5-mini-instruct-Q3_K_M-worthdoing.gguf
new file mode 100644
index 0000000..a1e6e3a
--- /dev/null
+++ b/phi-3.5-mini-instruct-Q3_K_M-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1297e3e54d28092775163a0826f19663432a7e5a90bb0062003ce5f14b3d1c24
+size 1962554400
diff --git a/phi-3.5-mini-instruct-Q4_K_M-worthdoing.gguf b/phi-3.5-mini-instruct-Q4_K_M-worthdoing.gguf
new file mode 100644
index 0000000..7f91593
--- /dev/null
+++ b/phi-3.5-mini-instruct-Q4_K_M-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a58b37b8c631501c0f8a5ba711579c3c067c6d4c51ea5215c05190289b067f0a
+size 2396770848
diff --git a/phi-3.5-mini-instruct-Q4_K_S-worthdoing.gguf b/phi-3.5-mini-instruct-Q4_K_S-worthdoing.gguf
new file mode 100644
index 0000000..7240dca
--- /dev/null
+++ b/phi-3.5-mini-instruct-Q4_K_S-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bc91f523a4566dae0c3bba385452dea13da931a20bc8e6d4b5597004ac8aa75b
+size 2202915360
diff --git a/phi-3.5-mini-instruct-Q5_K_M-worthdoing.gguf b/phi-3.5-mini-instruct-Q5_K_M-worthdoing.gguf
new file mode 100644
index 0000000..09fa89d
--- /dev/null
+++ b/phi-3.5-mini-instruct-Q5_K_M-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7fee7e6ec750354f3d73d0aa96dc3675eca96cdba5047904e0c411604ce5f67b
+size 2755113504
diff --git a/phi-3.5-mini-instruct-Q5_K_S-worthdoing.gguf b/phi-3.5-mini-instruct-Q5_K_S-worthdoing.gguf
new file mode 100644
index 0000000..51e1e32
--- /dev/null
+++ b/phi-3.5-mini-instruct-Q5_K_S-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e4363d962d978ce36d7a5f1d0b6b0fcc1c54567d1d7b1c908c11a6430d3ba765
+size 2641474080
diff --git a/phi-3.5-mini-instruct-Q8_0-worthdoing.gguf b/phi-3.5-mini-instruct-Q8_0-worthdoing.gguf
new file mode 100644
index 0000000..b6fd7c7
--- /dev/null
+++ b/phi-3.5-mini-instruct-Q8_0-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:25a8bac561ce82344e2fe2e875e3633f81a162d3daee16f2ca51d559ae69669b
+size 4061221920