From 3b0d2549aa2b96e819e2e9e527c6c4ac57700f27 Mon Sep 17 00:00:00 2001
From: ModelHub XC
Date: Fri, 8 May 2026 16:21:01 +0800
Subject: [PATCH] Initialize project; model provided by the ModelHub XC
 community
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Model: worthdoing/TinyLlama-1.1B-Chat-v1.0-GGUF
Source: Original Platform
---
 .gitattributes                                |  38 +++++
 README.md                                     | 135 ++++++++++++++++++
 ...lama-1.1b-chat-v1.0-Q4_K_M-worthdoing.gguf |   3 +
 ...lama-1.1b-chat-v1.0-Q5_K_M-worthdoing.gguf |   3 +
 tinyllama-1.1b-chat-v1.0-Q8_0-worthdoing.gguf |   3 +
 5 files changed, 182 insertions(+)
 create mode 100644 .gitattributes
 create mode 100644 README.md
 create mode 100644 tinyllama-1.1b-chat-v1.0-Q4_K_M-worthdoing.gguf
 create mode 100644 tinyllama-1.1b-chat-v1.0-Q5_K_M-worthdoing.gguf
 create mode 100644 tinyllama-1.1b-chat-v1.0-Q8_0-worthdoing.gguf

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..ffe9e3f
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,38 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+tinyllama-1.1b-chat-v1.0-Q4_K_M-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
+tinyllama-1.1b-chat-v1.0-Q5_K_M-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text
+tinyllama-1.1b-chat-v1.0-Q8_0-worthdoing.gguf filter=lfs diff=lfs merge=lfs -text

diff --git a/README.md b/README.md
new file mode 100644
index 0000000..2edb281
--- /dev/null
+++ b/README.md
@@ -0,0 +1,135 @@
+---
+language:
+- en
+- fr
+- multilingual
+license: apache-2.0
+tags:
+- gguf
+- quantized
+- mac
+- apple-silicon
+- local-inference
+- worthdoing
+base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
+quantized_by: worthdoing
+pipeline_tag: text-generation
+---
+
+**worthdoing**
+
+Author: Simon-Pierre Boucher
+
+Badges: GGUF · Parameters · Apple Silicon · License · worthdoing
+
+Quantizations: Q4_K_M · Q5_K_M · Q8_0
+
+# TinyLlama-1.1B-Chat-v1.0 - GGUF Quantized by worthdoing
+
+> Quantized for local Mac inference (Apple Silicon / Metal) by **worthdoing**
+
+## About
+
+This is a GGUF quantized version of [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0), optimized for running locally on Apple Silicon Macs with `llama.cpp`, `Ollama`, or `LM Studio`.
+
+- **Original model:** [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
+- **Parameters:** 1.1B
+- **Quantized by:** worthdoing
+- **Pipeline:** corelm-model v1.0
+
+## Description
+
+An ultra-tiny Llama variant with minimal resource usage, suited to basic tasks.
+
+## Available Quantizations
+
+| File | Quant | BPW | Size | Use Case |
+|------|-------|-----|------|----------|
+| `tinyllama-1.1b-chat-v1.0-Q4_K_M-worthdoing.gguf` | Q4_K_M | 4.58 | ~0.6 GB | **Recommended** - best quality/size ratio |
+| `tinyllama-1.1b-chat-v1.0-Q5_K_M-worthdoing.gguf` | Q5_K_M | 5.33 | ~0.7 GB | Higher quality, still fast |
+| `tinyllama-1.1b-chat-v1.0-Q8_0-worthdoing.gguf` | Q8_0 | 7.96 | ~1.0 GB | Near-original quality |
+
+## How to Use
+
+### With Ollama
+```bash
+# Create a Modelfile
+cat > Modelfile <<'MODELEOF'
+FROM ./tinyllama-1.1b-chat-v1.0-Q4_K_M-worthdoing.gguf
+MODELEOF
+
+ollama create tinyllama-1.1b-chat-v1.0 -f Modelfile
+ollama run tinyllama-1.1b-chat-v1.0
+```
+
+### With llama.cpp
+```bash
+llama-cli -m tinyllama-1.1b-chat-v1.0-Q4_K_M-worthdoing.gguf -p "Your prompt here" -ngl 99
+```
+
+### With LM Studio
+1. Download the GGUF file
+2. Open LM Studio -> My Models -> Import
+3. Select the GGUF file and start chatting
+
+## Quantization Method
+
+Our quantization pipeline (**corelm-model v1.0**) follows a rigorous multi-step process to ensure maximum quality and compatibility:
+
+### Step 1 — Download & Validation
+- Model weights are downloaded from the Hugging Face Hub in **SafeTensors** format (`.safetensors`)
+- Legacy formats (`.bin`, `.pt`) are excluded to ensure clean, verified weights
+- Tokenizer, configuration, and all metadata are preserved
+
+### Step 2 — Conversion to GGUF F16 Baseline
+- The original model is converted to **GGUF format at FP16 precision** using `convert_hf_to_gguf.py` from [llama.cpp](https://github.com/ggml-org/llama.cpp)
+- This lossless baseline preserves the full original model quality
+- Architecture-specific tensors (attention, FFN, embeddings, MoE routing) are mapped to their GGUF equivalents
+
+### Step 3 — K-Quant Quantization
+- The F16 baseline is quantized using `llama-quantize` with **k-quant methods**
+- K-quants use a mixed-precision approach: more important layers (attention, output) retain higher precision, while less sensitive layers (FFN) are compressed more aggressively
+- Each quantization level offers a different quality/size tradeoff:
+
+| Method | Bits per Weight | Strategy |
+|--------|-----------------|----------|
+| **Q4_K_M** | ~4.58 bpw | Mixed 4/5-bit. Attention & output layers use Q5_K, FFN layers use Q4_K. Best balance of quality and size. |
+| **Q5_K_M** | ~5.33 bpw | Mixed 5/6-bit. Attention & output layers use Q6_K, FFN layers use Q5_K. Higher quality with moderate size increase. |
+| **Q8_0** | ~7.96 bpw | Uniform 8-bit. All layers quantized to 8-bit. Near-lossless quality, largest file size. |
+
+### Step 4 — Metadata Injection
+- Custom metadata is embedded directly in each GGUF file:
+  - `general.quantized_by`: worthdoing
+  - `general.quantization_version`: corelm-1.0
+- This ensures full traceability and provenance of every quantized file
+
+### Tools & Environment
+- **llama.cpp**: Used for both conversion and quantization — the industry-standard open-source LLM inference engine
+- **Target platform**: Apple Silicon Macs (M1/M2/M3/M4) with Metal GPU acceleration
+- **Inference runtimes**: Compatible with `llama.cpp`, `Ollama`, `LM Studio`, `koboldcpp`, and any GGUF-compatible runtime
+
+## Recommended Hardware
+
+| Quant | Min RAM | Recommended |
+|-------|---------|-------------|
+| Q4_K_M | 4 GB | Mac with 8 GB+ RAM |
+| Q5_K_M | 4 GB | Mac with 8 GB+ RAM |
+| Q8_0 | 4 GB | Mac with 8 GB+ RAM |
+
+## Tags
+
+`general`, `ultra-lightweight`, `edge`
+
+---
+
+*Quantized with corelm-model pipeline by **worthdoing** on 2026-04-17*

diff --git a/tinyllama-1.1b-chat-v1.0-Q4_K_M-worthdoing.gguf b/tinyllama-1.1b-chat-v1.0-Q4_K_M-worthdoing.gguf
new file mode 100644
index 0000000..e39f6c8
--- /dev/null
+++ b/tinyllama-1.1b-chat-v1.0-Q4_K_M-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ce8be5e4a26f04f74d4064ac6b3bcdd69c3bcc31f1dbe9493a7d880c567e3d5
+size 667816384

diff --git a/tinyllama-1.1b-chat-v1.0-Q5_K_M-worthdoing.gguf b/tinyllama-1.1b-chat-v1.0-Q5_K_M-worthdoing.gguf
new file mode 100644
index 0000000..f08cc21
--- /dev/null
+++ b/tinyllama-1.1b-chat-v1.0-Q5_K_M-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:33f1bf3f0d6145ff842ec205f36eafd89da0b8d6a6a0fb2ceb3194967a6b674a
+size 782045632

diff --git a/tinyllama-1.1b-chat-v1.0-Q8_0-worthdoing.gguf b/tinyllama-1.1b-chat-v1.0-Q8_0-worthdoing.gguf
new file mode 100644
index 0000000..7eca572
--- /dev/null
+++ b/tinyllama-1.1b-chat-v1.0-Q8_0-worthdoing.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e09f6083c97ed61dc98c28003fbd7b153353bf5b8002402c005db1f312a1de27
+size 1169809856
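
The 8-bit scheme that the README's Step 3 table describes for Q8_0 can be sketched numerically. This is only an illustration, not the corelm pipeline or ggml's actual code: it assumes the well-known ggml Q8_0 convention (blocks of 32 weights, one absmax-derived scale per block, int8 codes) and uses NumPy with synthetic weights.

```python
import numpy as np

def q8_0_quantize(block: np.ndarray):
    """Quantize one 32-weight block to int8 with a per-block absmax scale."""
    amax = float(np.max(np.abs(block)))
    d = amax / 127.0 if amax > 0 else 1.0  # scale (ggml stores this as fp16)
    q = np.clip(np.round(block / d), -127, 127).astype(np.int8)
    return d, q

def q8_0_dequantize(d: float, q: np.ndarray) -> np.ndarray:
    """Reconstruct approximate fp32 weights from the scale and int8 codes."""
    return d * q.astype(np.float32)

# Round-trip one block of synthetic weights
weights = np.random.default_rng(0).normal(scale=0.02, size=32).astype(np.float32)
d, q = q8_0_quantize(weights)
restored = q8_0_dequantize(d, q)

# Round-to-nearest bounds the per-weight error by half a quantization step
max_err = float(np.max(np.abs(weights - restored)))
```

The k-quant types in the table (Q4_K_M, Q5_K_M) follow the same scale-and-codes idea but, roughly speaking, with 4- to 6-bit codes grouped into super-blocks carrying extra scale/min parameters, which is why their bits-per-weight figures come out fractional.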