From 0da03965b1763b7cac60b0410766cf9738a4dde3 Mon Sep 17 00:00:00 2001
From: ModelHub XC
Date: Sat, 11 Apr 2026 04:10:56 +0800
Subject: [PATCH] Initialize project; model provided by the ModelHub XC
 community
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Model: DJLougen/MiroThinker-1.7-mini-GGUF-Q8_0
Source: Original Platform
---
 .gitattributes                 | 36 +++++++++++++++++++
 MiroThinker-1.7-mini-Q8_0.gguf |  3 ++
 README.md                      | 64 ++++++++++++++++++++++++++++++++++
 3 files changed, 103 insertions(+)
 create mode 100644 .gitattributes
 create mode 100644 MiroThinker-1.7-mini-Q8_0.gguf
 create mode 100644 README.md

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..8a00804
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,36 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+MiroThinker-1.7-mini-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/MiroThinker-1.7-mini-Q8_0.gguf b/MiroThinker-1.7-mini-Q8_0.gguf
new file mode 100644
index 0000000..797460e
--- /dev/null
+++ b/MiroThinker-1.7-mini-Q8_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:86abec6de5b28caaded52bdd7c91d3adb2abf273aa8ae774e8e3f9b60e927bf3
+size 32483931680
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..58af0a0
--- /dev/null
+++ b/README.md
@@ -0,0 +1,64 @@
+---
+license: apache-2.0
+base_model: miromind-ai/MiroThinker-1.7-mini
+tags:
+- gguf
+- quantized
+- qwen3_moe
+- text-generation
+- agent
+- deep-research
+language:
+- en
+---
+
+# MiroThinker-1.7-mini GGUF Q8_0
+
+Q8_0 GGUF quantization of [miromind-ai/MiroThinker-1.7-mini](https://huggingface.co/miromind-ai/MiroThinker-1.7-mini).
+
+## Model Details
+
+- **Original Model:** miromind-ai/MiroThinker-1.7-mini (Qwen3 MoE, 30.5B params)
+- **Quantization:** Q8_0 (8-bit)
+- **File Size:** ~32.5 GB (≈30.3 GiB)
+- **Format:** GGUF (llama.cpp compatible)
+- **Max Context:** 256K tokens
+- **Max Tool Calls:** 300
+
+## About MiroThinker-1.7-mini
+
+MiroThinker-1.7-mini is a deep-research agent model fine-tuned from Qwen3-30B-A3B-Thinking-2507. Among open-source models, it achieves state-of-the-art performance on deep-research tasks.
+
+### Benchmarks (original BF16)
+
+| Benchmark | Score |
+|-----------|-------|
+| BrowseComp | 74.0% |
+| BrowseComp-ZH | 75.3% (SOTA) |
+| GAIA-Val-165 | 82.7% |
+| HLE-Text | 42.9% |
+
+## Usage
+
+Works with any GGUF-compatible runtime: llama.cpp, Ollama, LM Studio, etc.
+
+**Ollama:**
+```bash
+ollama run hf.co/DJLougen/MiroThinker-1.7-mini-GGUF-Q8_0
+```
+
+**llama.cpp:**
+```bash
+llama-cli -m MiroThinker-1.7-mini-Q8_0.gguf -c 8192 -n 512
+```
+
+## Recommended Parameters
+
+- temperature: 1.0
+- top_p: 0.95
+- repetition_penalty: 1.05
+
+## Credits
+
+- Original model by [miromind-ai](https://huggingface.co/miromind-ai)
+- Quantized by [DJLougen](https://huggingface.co/DJLougen) using llama.cpp
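Note for reviewers: the weights file is committed as a Git LFS pointer, so a clone without LFS yields only the 3-line pointer, not the ~32.5 GB GGUF. After downloading, the file can be checked against the pointer's recorded digest and size. A minimal verification sketch (the expected oid and size are taken from the pointer in this patch; the helper name `verify_gguf` is ours, not part of any tool):

```python
import hashlib
import os

# Expected values copied from the Git LFS pointer in this patch
EXPECTED_OID = "86abec6de5b28caaded52bdd7c91d3adb2abf273aa8ae774e8e3f9b60e927bf3"
EXPECTED_SIZE = 32483931680  # bytes, ~32.5 GB

def verify_gguf(path: str, oid: str = EXPECTED_OID, size: int = EXPECTED_SIZE) -> bool:
    """Check file size first (cheap), then stream the file in 1 MiB
    chunks and compare its SHA-256 hex digest to the LFS pointer's oid."""
    if os.path.getsize(path) != size:
        return False
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == oid

# Usage after download:
# verify_gguf("MiroThinker-1.7-mini-Q8_0.gguf")
```

Streaming in fixed-size chunks keeps memory flat, which matters for a 32 GB file; this is the same size-then-hash check Git LFS itself performs on checkout.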