commit 5a6854dade948ffcea1c8514db99b3917237c2f9
Author: ModelHub XC
Date:   Thu May 7 14:50:17 2026 +0800

    Initialize the project; model provided by the ModelHub XC community
    Model: worthdoing/Stablelm-2-Zephyr-1.6B-GGUF
    Source: Original Platform

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..a616e06
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,41 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+stablelm-2-zephyr-1.6b-Q4_K_M-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
+stablelm-2-zephyr-1.6b-Q5_K_M-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
+stablelm-2-zephyr-1.6b-Q8_0-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
+stablelm-2-zephyr-1.6b-Q3_K_M-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
+stablelm-2-zephyr-1.6b-Q4_K_S-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
+stablelm-2-zephyr-1.6b-Q5_K_S-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..79f5bef
--- /dev/null
+++ b/README.md
@@ -0,0 +1,135 @@
+---
+language:
+- en
+- fr
+- multilingual
+license: apache-2.0
+tags:
+- gguf
+- quantized
+- mac
+- apple-silicon
+- local-inference
+- worthdoing
+base_model: stabilityai/stablelm-2-zephyr-1_6b
+quantized_by: worthdoing
+pipeline_tag: text-generation
+---
+

+worthdoing
+
+Author: Simon-Pierre Boucher
+
+GGUF · Parameters · Apple Silicon · License · worthdoing
+
+Q4_K_M · Q5_K_M · Q8_0
+
+
+# StableLM 2 Zephyr 1.6B - GGUF Quantized by worthdoing
+
+> Quantized for local Mac inference (Apple Silicon / Metal) by **worthdoing**
+
+## About
+
+This is a GGUF-quantized version of [StableLM 2 Zephyr 1.6B](https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b), optimized for running locally on Apple Silicon Macs with `llama.cpp`, `Ollama`, or `LM Studio`.
+
+- **Original model:** [stabilityai/stablelm-2-zephyr-1_6b](https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b)
+- **Parameters:** 1.6B
+- **Quantized by:** worthdoing
+- **Pipeline:** corelm-model v1.0
+
+## Description
+
+StableLM 2 Zephyr 1.6B is Stability AI's compact instruction-tuned chat model. Its small footprint keeps it efficient and responsive, even on modest hardware.
+
+## Available Quantizations
+
+| File | Quant | BPW | Size | Use Case |
+|------|-------|-----|------|----------|
+| `stablelm-2-zephyr-1.6b-Q4_K_M-worthdoing.gguf` | Q4_K_M | 4.58 | ~0.9 GB | **Recommended** - Best quality/size ratio |
+| `stablelm-2-zephyr-1.6b-Q5_K_M-worthdoing.gguf` | Q5_K_M | 5.33 | ~1.0 GB | Higher quality, still fast |
+| `stablelm-2-zephyr-1.6b-Q8_0-worthdoing.gguf` | Q8_0 | 7.96 | ~1.5 GB | Near-original quality |
+
+The repository also includes `Q3_K_M` (~0.86 GB), `Q4_K_S` (~0.99 GB), and `Q5_K_S` (~1.16 GB) files for tighter memory budgets.
+
+## How to Use
+
+### With Ollama
+```bash
+# Create a Modelfile (the TEMPLATE and stop token below match the
+# stablelm-2-zephyr prompt format)
+cat > Modelfile <<'MODELEOF'
+FROM ./stablelm-2-zephyr-1.6b-Q4_K_M-worthdoing.gguf
+TEMPLATE """<|user|>
+{{ .Prompt }}<|endoftext|>
+<|assistant|>
+"""
+PARAMETER stop "<|endoftext|>"
+MODELEOF
+
+ollama create stablelm-2-zephyr-1.6b -f Modelfile
+ollama run stablelm-2-zephyr-1.6b
+```
+
+### With llama.cpp
+```bash
+# -ngl 99 offloads all layers to the GPU (Metal on Apple Silicon)
+llama-cli -m stablelm-2-zephyr-1.6b-Q4_K_M-worthdoing.gguf -p "Your prompt here" -ngl 99
+```
+
+### With LM Studio
+1. Download the GGUF file
+2. Open LM Studio -> My Models -> Import
+3. Select the GGUF file and start chatting
+
+## Quantization Method
+
+Our quantization pipeline (**corelm-model v1.0**) follows a rigorous multi-step process to ensure maximum quality and compatibility:
+
+### Step 1 — Download & Validation
+- Model weights are downloaded from the Hugging Face Hub in **SafeTensors** format (`.safetensors`)
+- Legacy formats (`.bin`, `.pt`) are excluded to ensure clean, verified weights
+- The tokenizer, configuration, and all metadata are preserved
+
+### Step 2 — Conversion to GGUF F16 Baseline
+- The original model is converted to **GGUF format at FP16 precision** using `convert_hf_to_gguf.py` from [llama.cpp](https://github.com/ggml-org/llama.cpp)
+- This lossless baseline preserves the full original model quality
+- Architecture-specific tensors (attention, FFN, embeddings, MoE routing) are mapped to their GGUF equivalents
+
+### Step 3 — K-Quant Quantization
+- The F16 baseline is quantized using `llama-quantize` with **k-quant methods**
+- K-quants use a mixed-precision approach: more important layers (attention, output) retain higher precision, while less sensitive layers (FFN) are compressed more aggressively
+- Each quantization level offers a different quality/size tradeoff:
+
+| Method | Bits per Weight | Strategy |
+|--------|----------------|----------|
+| **Q4_K_M** | ~4.58 bpw | Mixed 4/5-bit. Attention & output layers use Q5_K, FFN layers use Q4_K. Best balance of quality and size. |
+| **Q5_K_M** | ~5.33 bpw | Mixed 5/6-bit. Attention & output layers use Q6_K, FFN layers use Q5_K. Higher quality with moderate size increase. |
+| **Q8_0** | ~7.96 bpw | Uniform 8-bit. All layers quantized to 8-bit. Near-lossless quality, largest file size. |
+
+### Step 4 — Metadata Injection
+- Custom metadata is embedded directly in each GGUF file:
+  - `general.quantized_by`: worthdoing
+  - `general.quantization_version`: corelm-1.0
+- This ensures full traceability and provenance for every quantized file
+
+### Tools & Environment
+- **llama.cpp**: Used for both conversion and quantization — the industry-standard open-source LLM inference engine
+- **Target platform**: Apple Silicon Macs (M1/M2/M3/M4) with Metal GPU acceleration
+- **Inference runtimes**: Compatible with `llama.cpp`, `Ollama`, `LM Studio`, `koboldcpp`, and any GGUF-compatible runtime
+
+## Recommended Hardware
+
+| Quant | Min RAM | Recommended |
+|-------|---------|-------------|
+| Q4_K_M | 4 GB | Mac with 8 GB+ RAM |
+| Q5_K_M | 4 GB | Mac with 8 GB+ RAM |
+| Q8_0 | 4 GB | Mac with 8 GB+ RAM |
+
+## Tags
+
+`general`, `ultra-lightweight`, `chat`
+
+---
+
+*Quantized with the corelm-model pipeline by **worthdoing** on 2026-04-17*
diff --git a/stablelm-2-zephyr-1.6b-Q3_K_M-worthdoing.gguf b/stablelm-2-zephyr-1.6b-Q3_K_M-worthdoing.gguf
new file mode 100644
index 0000000..b57419f
--- /dev/null
+++ b/stablelm-2-zephyr-1.6b-Q3_K_M-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d8f45f93af6b8d69268c880b32c396078207029dc444710ce1e44843467b5e2f
+size 857708768
diff --git a/stablelm-2-zephyr-1.6b-Q4_K_M-worthdoing.gguf b/stablelm-2-zephyr-1.6b-Q4_K_M-worthdoing.gguf
new file mode 100644
index 0000000..b10f092
--- /dev/null
+++ b/stablelm-2-zephyr-1.6b-Q4_K_M-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dee453da051e16a2252bf9522f46009011d79383881017196187f9e6653bc0b8
+size 1031444704
diff --git a/stablelm-2-zephyr-1.6b-Q4_K_S-worthdoing.gguf b/stablelm-2-zephyr-1.6b-Q4_K_S-worthdoing.gguf
new file mode 100644
index 0000000..ef701be
--- /dev/null
+++ b/stablelm-2-zephyr-1.6b-Q4_K_S-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c4e1e2876529565072401ac7b8cefe962436b223d11e92b914e91ff5a846fffb
+size 989206752
diff --git a/stablelm-2-zephyr-1.6b-Q5_K_M-worthdoing.gguf b/stablelm-2-zephyr-1.6b-Q5_K_M-worthdoing.gguf
new file mode 100644
index 0000000..418d2bb
--- /dev/null
+++ b/stablelm-2-zephyr-1.6b-Q5_K_M-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ce711f78af73891015d12027b4a1a47007fc50e612dffaad15014df8abc52c75
+size 1187682528
diff --git a/stablelm-2-zephyr-1.6b-Q5_K_S-worthdoing.gguf b/stablelm-2-zephyr-1.6b-Q5_K_S-worthdoing.gguf
new file mode 100644
index 0000000..a3079ff
--- /dev/null
+++ b/stablelm-2-zephyr-1.6b-Q5_K_S-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bf71a9f6d42fb145215d206205f6c10ef83b08561635a9bb088fd2f606d03971
+size 1162615008
diff --git a/stablelm-2-zephyr-1.6b-Q8_0-worthdoing.gguf b/stablelm-2-zephyr-1.6b-Q8_0-worthdoing.gguf
new file mode 100644
index 0000000..d4e137f
--- /dev/null
+++ b/stablelm-2-zephyr-1.6b-Q8_0-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a3a46308ef81f0ad2bbea744eec948d598a43fafb96bad6e3436a0c30fdbc00d
+size 1751881952
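A note on the size figures above: the README's "Size" column and the LFS pointer sizes can be roughly cross-checked from first principles, since a quantized GGUF file is approximately `parameter_count × bits_per_weight / 8` bytes, plus overhead for embeddings, metadata, and the higher-precision tensors k-quants keep. The following standalone Python sketch (illustrative only, not part of the repository; the 1.6e9 parameter count is a round-number assumption) reproduces the table's estimates:

```python
# Lower-bound size estimate for a quantized model file:
# size_bytes ≈ params * bpw / 8. Actual GGUF files run somewhat
# larger because some tensors stay at higher precision.

PARAMS = 1.6e9  # assumed: ~1.6B parameters for StableLM 2 Zephyr 1.6B

def estimated_size_gb(bpw: float, params: float = PARAMS) -> float:
    """Estimate quantized file size in decimal GB from bits per weight."""
    return params * bpw / 8 / 1e9

# The BPW values come from the README's quantization table.
for quant, bpw in [("Q4_K_M", 4.58), ("Q5_K_M", 5.33), ("Q8_0", 7.96)]:
    print(f"{quant}: ~{estimated_size_gb(bpw):.2f} GB")
```

This yields ~0.92, ~1.07, and ~1.59 GB, consistent with the table's ~0.9/~1.0/~1.5 GB; the actual Q8_0 pointer size (1.75 GB) is larger mainly because of the token-embedding and output tensors.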