Initialize the project; model provided by the ModelHub XC community
Model: worthdoing/SmolLM2-1.7B-Instruct-GGUF
Source: Original Platform
42
.gitattributes
vendored
Normal file
@@ -0,0 +1,42 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
smollm2-1.7b-instruct-Q4_K_M-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
smollm2-1.7b-instruct-Q5_K_M-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
smollm2-1.7b-instruct-Q8_0-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
smollm2-1.7b-instruct-Q3_K_M-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
smollm2-1.7b-instruct-Q4_K_S-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
smollm2-1.7b-instruct-Q5_K_S-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
smollm2-1.7b-instruct-Q6_K-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
135
README.md
Normal file
@@ -0,0 +1,135 @@
---
language:
- en
- fr
- multilingual
license: apache-2.0
tags:
- gguf
- quantized
- mac
- apple-silicon
- local-inference
- worthdoing
base_model: HuggingFaceTB/SmolLM2-1.7B-Instruct
quantized_by: worthdoing
pipeline_tag: text-generation
---

<p align="center">
<img src="https://raw.githubusercontent.com/Worth-Doing/brand-assets/main/png/variants/04-horizontal.png" alt="worthdoing" width="400"/>
</p>
<p align="center"><strong>Author: Simon-Pierre Boucher</strong></p>

<p align="center">
<img src="https://img.shields.io/badge/Format-GGUF-blue?style=for-the-badge" alt="GGUF"/>
<img src="https://img.shields.io/badge/Params-1.7B-orange?style=for-the-badge" alt="Parameters"/>
<img src="https://img.shields.io/badge/Platform-Apple_Silicon-black?style=for-the-badge&logo=apple" alt="Apple Silicon"/>
<img src="https://img.shields.io/badge/License-Apache_2.0-green?style=for-the-badge" alt="License"/>
<img src="https://img.shields.io/badge/Quantized_by-worthdoing-purple?style=for-the-badge" alt="worthdoing"/>
</p>
<p align="center">
<img src="https://img.shields.io/badge/Q4__K__M-1.0_GB-brightgreen?style=flat-square" alt="Q4_K_M"/>
<img src="https://img.shields.io/badge/Q5__K__M-1.2_GB-yellow?style=flat-square" alt="Q5_K_M"/>
<img src="https://img.shields.io/badge/Q8__0-1.8_GB-red?style=flat-square" alt="Q8_0"/>
</p>

# SmolLM2-1.7B-Instruct - GGUF Quantized by worthdoing

> Quantized for local Mac inference (Apple Silicon / Metal) by **worthdoing**

## About

This is a GGUF quantized version of [SmolLM2-1.7B-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct), optimized for running locally on Apple Silicon Macs with `llama.cpp`, `Ollama`, or `LM Studio`.

- **Original model:** [HuggingFaceTB/SmolLM2-1.7B-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct)
- **Parameters:** 1.7B
- **Quantized by:** worthdoing
- **Pipeline:** corelm-model v1.0

## Description

The smallest viable general-purpose model, well suited to edge and embedded use.

## Available Quantizations

| File | Quant | BPW | Size | Use Case |
|------|-------|-----|------|----------|
| `smollm2-1.7b-instruct-Q4_K_M-worthdoing.gguf` | Q4_K_M | 4.58 | ~1.0 GB | **Recommended** - Best quality/size ratio |
| `smollm2-1.7b-instruct-Q5_K_M-worthdoing.gguf` | Q5_K_M | 5.33 | ~1.2 GB | Higher quality, still fast |
| `smollm2-1.7b-instruct-Q8_0-worthdoing.gguf` | Q8_0 | 7.96 | ~1.8 GB | Near-original quality |

This commit also ships `Q3_K_M` (~0.9 GB), `Q4_K_S` (~1.0 GB), `Q5_K_S` (~1.2 GB), and `Q6_K` (~1.4 GB) files for finer-grained size/quality tradeoffs.

## How to Use

### With Ollama

```bash
# Create a Modelfile
cat > Modelfile <<'MODELEOF'
FROM ./smollm2-1.7b-instruct-Q4_K_M-worthdoing.gguf
MODELEOF

ollama create smollm2-1.7b-instruct -f Modelfile
ollama run smollm2-1.7b-instruct
```

### With llama.cpp

```bash
llama-cli -m smollm2-1.7b-instruct-Q4_K_M-worthdoing.gguf -p "Your prompt here" -ngl 99
```

### With LM Studio

1. Download the GGUF file
2. Open LM Studio -> My Models -> Import
3. Select the GGUF file and start chatting

## Quantization Method

Our quantization pipeline (**corelm-model v1.0**) follows a rigorous multi-step process to ensure maximum quality and compatibility:

### Step 1 — Download & Validation

- Model weights are downloaded from the HuggingFace Hub in **SafeTensors** format (`.safetensors`)
- Legacy formats (`.bin`, `.pt`) are excluded to ensure clean, verified weights
- Tokenizer, configuration, and all metadata are preserved

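This step can be sketched with `huggingface-cli`; the pipeline's exact filter patterns are not published, so the `--include`/`--exclude` globs below are illustrative:

```bash
# Fetch SafeTensors weights plus tokenizer/config; skip legacy .bin/.pt weights.
huggingface-cli download HuggingFaceTB/SmolLM2-1.7B-Instruct \
  --include "*.safetensors" "*.json" "tokenizer*" \
  --exclude "*.bin" "*.pt" \
  --local-dir SmolLM2-1.7B-Instruct
```
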
### Step 2 — Conversion to GGUF F16 Baseline

- The original model is converted to **GGUF format at FP16 precision** using `convert_hf_to_gguf.py` from [llama.cpp](https://github.com/ggml-org/llama.cpp)
- This lossless baseline preserves the full original model quality
- Architecture-specific tensors (attention, FFN, embeddings, MoE routing) are mapped to their GGUF equivalents

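Roughly, the conversion amounts to the following (paths are illustrative; the script lives in a llama.cpp checkout):

```bash
# Convert the HF checkpoint to a lossless GGUF F16 baseline.
python llama.cpp/convert_hf_to_gguf.py SmolLM2-1.7B-Instruct \
  --outtype f16 \
  --outfile smollm2-1.7b-instruct-f16.gguf
```
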
### Step 3 — K-Quant Quantization

- The F16 baseline is quantized using `llama-quantize` with **k-quant methods**
- K-quants use a mixed-precision approach: more important layers (attention, output) retain higher precision, while less sensitive layers (FFN) are compressed more aggressively
- Each quantization level offers a different quality/size tradeoff:

| Method | Bits per Weight | Strategy |
|--------|----------------|----------|
| **Q4_K_M** | ~4.58 bpw | Mixed 4/5-bit. Attention & output layers use Q5_K, FFN layers use Q4_K. Best balance of quality and size. |
| **Q5_K_M** | ~5.33 bpw | Mixed 5/6-bit. Attention & output layers use Q6_K, FFN layers use Q5_K. Higher quality with moderate size increase. |
| **Q8_0** | ~7.96 bpw | Uniform 8-bit. All layers quantized to 8-bit. Near-lossless quality, largest file size. |

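The quantization step can be sketched as follows (the `llama-quantize` binary path depends on how llama.cpp was built):

```bash
# Quantize the F16 baseline into each k-quant variant.
for Q in Q4_K_M Q5_K_M Q8_0; do
  llama.cpp/build/bin/llama-quantize \
    smollm2-1.7b-instruct-f16.gguf \
    "smollm2-1.7b-instruct-${Q}-worthdoing.gguf" "$Q"
done
```
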
### Step 4 — Metadata Injection

- Custom metadata is embedded directly in each GGUF file:
  - `general.quantized_by`: worthdoing
  - `general.quantization_version`: corelm-1.0
- This ensures full traceability and provenance of every quantized file

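The embedded fields can be inspected with the `gguf-dump` helper from the `gguf` Python package that ships alongside llama.cpp (the tool name and flags vary by version, so treat this as a sketch):

```bash
pip install gguf   # provides the gguf-dump metadata inspector
gguf-dump smollm2-1.7b-instruct-Q4_K_M-worthdoing.gguf | grep "general.quantized_by"
```
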
### Tools & Environment

- **llama.cpp**: Used for both conversion and quantization — the industry-standard open-source LLM inference engine
- **Target platform**: Apple Silicon Macs (M1/M2/M3/M4) with Metal GPU acceleration
- **Inference runtimes**: Compatible with `llama.cpp`, `Ollama`, `LM Studio`, `koboldcpp`, and any GGUF-compatible runtime

## Recommended Hardware

| Quant | Min RAM | Recommended |
|-------|---------|-------------|
| Q4_K_M | 4 GB | Mac with 8 GB+ RAM |
| Q5_K_M | 4 GB | Mac with 8 GB+ RAM |
| Q8_0 | 4 GB | Mac with 8 GB+ RAM |

## Tags

`general`, `ultra-lightweight`, `edge`

---

*Quantized with corelm-model pipeline by **worthdoing** on 2026-04-17*
3
smollm2-1.7b-instruct-Q3_K_M-worthdoing.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0892bc4816a4d7af24e02b266925c5e9ecc3ed56d03b6f12156f780806af8acb
size 860181632
3
smollm2-1.7b-instruct-Q4_K_M-worthdoing.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:776f20c2c0ccb1338509da80fc7d6a5041c9cc35dec5414fa90ba1a95dfa6754
size 1055609984
3
smollm2-1.7b-instruct-Q4_K_S-worthdoing.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a4003d3560b4bfca2754ed91ca43abe146635eab0490ef63c0669a58781d9c43
size 999117952
3
smollm2-1.7b-instruct-Q5_K_M-worthdoing.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a12ae83fc3a041555c1a9a5f5cd7fe2793be563bbced7f4a5c81f152e3281476
size 1225479296
3
smollm2-1.7b-instruct-Q5_K_S-worthdoing.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1e6f4edb065b6266a5916201bc20ac07ba7594a5e99ffeb46c19a8cfd3b6ac93
size 1192055936
3
smollm2-1.7b-instruct-Q6_K-worthdoing.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fc8224dff88c053beb50fdbbb8499137b6dce97ac56742593c21975f2040e01d
size 1405965440
3
smollm2-1.7b-instruct-Q8_0-worthdoing.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:448108c82fabcd9ece4ac8bb482a86eabc20a67e6bbede4dbf1f9bd8df3afe86
size 1820415104