From 0d16d2095076924e9710d7680dc1e2688dc28669 Mon Sep 17 00:00:00 2001
From: ModelHub XC
Date: Thu, 7 May 2026 14:36:37 +0800
Subject: [PATCH] Initialize project; model provided by the ModelHub XC
 community
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Model: worthdoing/SmolLM2-1.7B-Instruct-GGUF
Source: Original Platform
---
 .gitattributes                               |  42 ++++++
 README.md                                    | 135 +++++++++++++++++++
 smollm2-1.7b-instruct-Q3_K_M-worthdoing.gguf |   3 +
 smollm2-1.7b-instruct-Q4_K_M-worthdoing.gguf |   3 +
 smollm2-1.7b-instruct-Q4_K_S-worthdoing.gguf |   3 +
 smollm2-1.7b-instruct-Q5_K_M-worthdoing.gguf |   3 +
 smollm2-1.7b-instruct-Q5_K_S-worthdoing.gguf |   3 +
 smollm2-1.7b-instruct-Q6_K-worthdoing.gguf   |   3 +
 smollm2-1.7b-instruct-Q8_0-worthdoing.gguf   |   3 +
 9 files changed, 198 insertions(+)
 create mode 100644 .gitattributes
 create mode 100644 README.md
 create mode 100644 smollm2-1.7b-instruct-Q3_K_M-worthdoing.gguf
 create mode 100644 smollm2-1.7b-instruct-Q4_K_M-worthdoing.gguf
 create mode 100644 smollm2-1.7b-instruct-Q4_K_S-worthdoing.gguf
 create mode 100644 smollm2-1.7b-instruct-Q5_K_M-worthdoing.gguf
 create mode 100644 smollm2-1.7b-instruct-Q5_K_S-worthdoing.gguf
 create mode 100644 smollm2-1.7b-instruct-Q6_K-worthdoing.gguf
 create mode 100644 smollm2-1.7b-instruct-Q8_0-worthdoing.gguf

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..a8ba215
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,42 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+smollm2-1.7b-instruct-Q4_K_M-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
+smollm2-1.7b-instruct-Q5_K_M-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
+smollm2-1.7b-instruct-Q8_0-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
+smollm2-1.7b-instruct-Q3_K_M-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
+smollm2-1.7b-instruct-Q4_K_S-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
+smollm2-1.7b-instruct-Q5_K_S-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
+smollm2-1.7b-instruct-Q6_K-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..2448831
--- /dev/null
+++ b/README.md
@@ -0,0 +1,135 @@
+---
+language:
+- en
+- fr
+- multilingual
+license: apache-2.0
+tags:
+- gguf
+- quantized
+- mac
+- apple-silicon
+- local-inference
+- worthdoing
+base_model: HuggingFaceTB/SmolLM2-1.7B-Instruct
+quantized_by: worthdoing
+pipeline_tag: text-generation
+---
+
+**worthdoing**
+
+Author: Simon-Pierre Boucher
+
+Badges: GGUF · Parameters · Apple Silicon · License · worthdoing · Q4_K_M · Q5_K_M · Q8_0
+
+# SmolLM2-1.7B-Instruct - GGUF Quantized by worthdoing
+
+> Quantized for local Mac inference (Apple Silicon / Metal) by **worthdoing**
+
+## About
+
+This is a GGUF quantized version of [SmolLM2-1.7B-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct), optimized for running locally on Apple Silicon Macs with `llama.cpp`, `Ollama`, or `LM Studio`.
+
+- **Original model:** [HuggingFaceTB/SmolLM2-1.7B-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct)
+- **Parameters:** 1.7B
+- **Quantized by:** worthdoing
+- **Pipeline:** corelm-model v1.0
+
+## Description
+
+A compact general-purpose instruct model: small enough for edge and embedded deployments while remaining capable across everyday tasks.
+
+## Available Quantizations
+
+| File | Quant | BPW | Size | Use Case |
+|------|-------|-----|------|----------|
+| `smollm2-1.7b-instruct-Q3_K_M-worthdoing.gguf` | Q3_K_M | ~3.8 | ~0.8 GB | Smallest file, noticeable quality loss |
+| `smollm2-1.7b-instruct-Q4_K_S-worthdoing.gguf` | Q4_K_S | ~4.4 | ~0.9 GB | Slightly smaller than Q4_K_M |
+| `smollm2-1.7b-instruct-Q4_K_M-worthdoing.gguf` | Q4_K_M | 4.58 | ~1.0 GB | **Recommended** - Best quality/size ratio |
+| `smollm2-1.7b-instruct-Q5_K_S-worthdoing.gguf` | Q5_K_S | ~5.2 | ~1.1 GB | Slightly smaller than Q5_K_M |
+| `smollm2-1.7b-instruct-Q5_K_M-worthdoing.gguf` | Q5_K_M | 5.33 | ~1.1 GB | Higher quality, still fast |
+| `smollm2-1.7b-instruct-Q6_K-worthdoing.gguf` | Q6_K | ~6.1 | ~1.3 GB | Very high quality |
+| `smollm2-1.7b-instruct-Q8_0-worthdoing.gguf` | Q8_0 | 7.96 | ~1.7 GB | Near-original quality |
+
+## How to Use
+
+### With Ollama
+```bash
+# Create a Modelfile pointing at the downloaded GGUF
+cat > Modelfile <<'MODELEOF'
+FROM ./smollm2-1.7b-instruct-Q4_K_M-worthdoing.gguf
+MODELEOF
+
+ollama create smollm2-1.7b-instruct -f Modelfile
+ollama run smollm2-1.7b-instruct
+```
+
+### With llama.cpp
+```bash
+llama-cli -m smollm2-1.7b-instruct-Q4_K_M-worthdoing.gguf -p "Your prompt here" -ngl 99
+```
+
+### With LM Studio
+1. Download the GGUF file
+2. Open LM Studio -> My Models -> Import
+3. Select the GGUF file and start chatting
+
+## Quantization Method
+
+Our quantization pipeline (**corelm-model v1.0**) follows a rigorous multi-step process to ensure quality and compatibility:
+
+### Step 1 — Download & Validation
+- Model weights are downloaded from the HuggingFace Hub in **SafeTensors** format (`.safetensors`)
+- Legacy formats (`.bin`, `.pt`) are excluded to ensure clean, verified weights
+- The tokenizer, configuration, and all metadata are preserved
+
+### Step 2 — Conversion to GGUF F16 Baseline
+- The original model is converted to **GGUF format at FP16 precision** using `convert_hf_to_gguf.py` from [llama.cpp](https://github.com/ggml-org/llama.cpp)
+- This baseline preserves essentially the full quality of the original model
+- Architecture-specific tensors (attention, FFN, embeddings, MoE routing) are mapped to their GGUF equivalents
+
+### Step 3 — K-Quant Quantization
+- The F16 baseline is quantized using `llama-quantize` with **k-quant methods**
+- K-quants use a mixed-precision approach: more important layers (attention, output) retain higher precision, while less sensitive layers (FFN) are compressed more aggressively
+- Each quantization level offers a different quality/size tradeoff:
+
+| Method | Bits per Weight | Strategy |
+|--------|----------------|----------|
+| **Q4_K_M** | ~4.58 bpw | Mixed 4/5-bit. Attention & output layers use Q5_K, FFN layers use Q4_K. Best balance of quality and size. |
+| **Q5_K_M** | ~5.33 bpw | Mixed 5/6-bit. Attention & output layers use Q6_K, FFN layers use Q5_K. Higher quality with a moderate size increase. |
+| **Q8_0** | ~7.96 bpw | Uniform 8-bit (not a k-quant). All layers quantized to 8 bits. Near-lossless quality, largest file size. |
+
+### Step 4 — Metadata Injection
+- Custom metadata is embedded directly in each GGUF file:
+  - `general.quantized_by`: worthdoing
+  - `general.quantization_version`: corelm-1.0
+- This ensures full traceability and provenance for every quantized file
+
+### Tools & Environment
+- **llama.cpp**: used for both conversion and quantization; a widely adopted open-source LLM inference engine
+- **Target platform**: Apple Silicon Macs (M1/M2/M3/M4) with Metal GPU acceleration
+- **Inference runtimes**: compatible with `llama.cpp`, `Ollama`, `LM Studio`, `koboldcpp`, and any other GGUF-compatible runtime
+
+## Recommended Hardware
+
+| Quant | Min RAM | Recommended |
+|-------|---------|-------------|
+| Q4_K_M | 4 GB | Mac with 8 GB+ RAM |
+| Q5_K_M | 4 GB | Mac with 8 GB+ RAM |
+| Q8_0 | 4 GB | Mac with 8 GB+ RAM |
+
+## Tags
+
+`general`, `ultra-lightweight`, `edge`
+
+---
+
+*Quantized with the corelm-model pipeline by **worthdoing** on 2026-04-17*
diff --git a/smollm2-1.7b-instruct-Q3_K_M-worthdoing.gguf b/smollm2-1.7b-instruct-Q3_K_M-worthdoing.gguf
new file mode 100644
index 0000000..3b0fac8
--- /dev/null
+++ b/smollm2-1.7b-instruct-Q3_K_M-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0892bc4816a4d7af24e02b266925c5e9ecc3ed56d03b6f12156f780806af8acb
+size 860181632
diff --git a/smollm2-1.7b-instruct-Q4_K_M-worthdoing.gguf b/smollm2-1.7b-instruct-Q4_K_M-worthdoing.gguf
new file mode 100644
index 0000000..75eed9f
--- /dev/null
+++ b/smollm2-1.7b-instruct-Q4_K_M-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:776f20c2c0ccb1338509da80fc7d6a5041c9cc35dec5414fa90ba1a95dfa6754
+size 1055609984
diff --git a/smollm2-1.7b-instruct-Q4_K_S-worthdoing.gguf b/smollm2-1.7b-instruct-Q4_K_S-worthdoing.gguf
new file mode 100644
index 0000000..a1c2e58
--- /dev/null
+++ b/smollm2-1.7b-instruct-Q4_K_S-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a4003d3560b4bfca2754ed91ca43abe146635eab0490ef63c0669a58781d9c43
+size 999117952
diff --git a/smollm2-1.7b-instruct-Q5_K_M-worthdoing.gguf b/smollm2-1.7b-instruct-Q5_K_M-worthdoing.gguf
new file mode 100644
index 0000000..4cee734
--- /dev/null
+++ b/smollm2-1.7b-instruct-Q5_K_M-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a12ae83fc3a041555c1a9a5f5cd7fe2793be563bbced7f4a5c81f152e3281476
+size 1225479296
diff --git a/smollm2-1.7b-instruct-Q5_K_S-worthdoing.gguf b/smollm2-1.7b-instruct-Q5_K_S-worthdoing.gguf
new file mode 100644
index 0000000..14d735e
--- /dev/null
+++ b/smollm2-1.7b-instruct-Q5_K_S-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1e6f4edb065b6266a5916201bc20ac07ba7594a5e99ffeb46c19a8cfd3b6ac93
+size 1192055936
diff --git a/smollm2-1.7b-instruct-Q6_K-worthdoing.gguf b/smollm2-1.7b-instruct-Q6_K-worthdoing.gguf
new file mode 100644
index 0000000..202b023
--- /dev/null
+++ b/smollm2-1.7b-instruct-Q6_K-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fc8224dff88c053beb50fdbbb8499137b6dce97ac56742593c21975f2040e01d
+size 1405965440
diff --git a/smollm2-1.7b-instruct-Q8_0-worthdoing.gguf b/smollm2-1.7b-instruct-Q8_0-worthdoing.gguf
new file mode 100644
index 0000000..1dab202
--- /dev/null
+++ b/smollm2-1.7b-instruct-Q8_0-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:448108c82fabcd9ece4ac8bb482a86eabc20a67e6bbede4dbf1f9bd8df3afe86
+size 1820415104
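
The `.gguf` entries above are Git LFS pointers: the repository records only an `oid sha256:` and a `size`, and the actual weights are fetched separately. After downloading a file, it can be sanity-checked against its pointer. A minimal sketch, assuming a POSIX shell with `sha256sum` or `shasum` available; the `verify_gguf` helper name is ours, and the commented example reuses the Q4_K_M pointer's oid and size from this patch:

```shell
# Compare a downloaded file against the oid/size recorded in its LFS pointer.
verify_gguf() {
  # $1 = file path, $2 = expected sha256 (pointer oid), $3 = expected size in bytes
  actual_size=$(wc -c < "$1" | tr -d '[:space:]')
  actual_sha=$( (sha256sum "$1" 2>/dev/null || shasum -a 256 "$1") | awk '{print $1}' )
  if [ "$actual_size" = "$3" ] && [ "$actual_sha" = "$2" ]; then
    echo "OK: $1"
  else
    echo "MISMATCH: $1"
  fi
}

# Example with the Q4_K_M pointer values from this patch:
# verify_gguf smollm2-1.7b-instruct-Q4_K_M-worthdoing.gguf \
#   776f20c2c0ccb1338509da80fc7d6a5041c9cc35dec5414fa90ba1a95dfa6754 \
#   1055609984
```

A size check alone catches truncated downloads cheaply; the sha256 comparison additionally catches corruption or a swapped file.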