---
library_name: llama.cpp
license: apache-2.0
language:
- en
base_model: reaperdoesntknow/Qwen3-1.7B-Distilled-30B-A3B-SFT
tags:
- gguf
- quantized
- distillation
- sft
- reasoning
- mathematics
- physics
- legal
- stem
- chain-of-thought
- convergentintel
- edge
- knowledge-distillation
pipeline_tag: text-generation
---

# Qwen3-1.7B-Distilled-30B-A3B-SFT — GGUF

GGUF quantizations of [reaperdoesntknow/Qwen3-1.7B-Distilled-30B-A3B-SFT](https://huggingface.co/reaperdoesntknow/Qwen3-1.7B-Distilled-30B-A3B-SFT) for local and edge deployment via [llama.cpp](https://github.com/ggerganov/llama.cpp) and compatible runtimes.

## Available Quantizations

| File | Quant | Size | Description |
|---|---|---|---|
| `qwen3-1.7b-stem-proof-f16.gguf` | F16 | ~3.8 GB | Full-precision reference |
| `qwen3-1.7b-distilled-30b-sft-Q8_0.gguf` | Q8_0 | ~2.1 GB | Near-lossless, desktop |
| `qwen3-1.7b-distilled-30b-sft-Q5_K_M.gguf` | Q5_K_M | ~1.4 GB | Balanced quality and size |
| `qwen3-1.7b-distilled-30b-sft-Q4_K_M.gguf` | Q4_K_M | ~1.2 GB | Mobile, edge, fastest inference |

**Recommended:** Q5_K_M for desktop use, Q4_K_M for mobile/edge.
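
To fetch a single quantization without cloning the whole repo, `huggingface_hub` works well (a minimal sketch; swap the filename for the quant you want):

```python
from huggingface_hub import hf_hub_download

# Download one GGUF file from this repo; the filename must match the table above.
model_path = hf_hub_download(
    repo_id="reaperdoesntknow/Qwen3-1.7B-Distilled-30B-A3B-SFT-GGUF",
    filename="qwen3-1.7b-distilled-30b-sft-Q4_K_M.gguf",
)
print(model_path)  # local cache path, ready to pass to llama.cpp
```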

## About the Model

This is a two-stage model:

**Stage 1 — DISC-Informed Knowledge Distillation:** Qwen3-1.7B was distilled from Qwen3-30B-A3B-Instruct on 6,122 STEM chain-of-thought samples using a proof-weighted cross-entropy loss (2.5x → 1.5x decay on derivation tokens) and KL divergence at T=2.0. The distillation emphasized multi-step reasoning over final-answer pattern matching.

**Stage 2 — Legal SFT:** Follow-up supervised fine-tuning on [Alignment-Lab-AI/Lawyer-Instruct](https://huggingface.co/datasets/Alignment-Lab-AI/Lawyer-Instruct) added instruction-following capability and legal domain knowledge on top of the STEM reasoning backbone.

The result is a 1.7B model that fits on a phone and can handle structured derivations, legal reasoning, and instruction following.

| Attribute | Value |
|---|---|
| **Base model** | [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) |
| **Teacher model** | [Qwen/Qwen3-30B-A3B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507) |
| **Distillation data** | 6,122 STEM CoT samples (12 datasets from [0xZee](https://huggingface.co/0xZee)) |
| **SFT data** | [Alignment-Lab-AI/Lawyer-Instruct](https://huggingface.co/datasets/Alignment-Lab-AI/Lawyer-Instruct) |
| **Developer** | reaperdoesntknow / [Convergent Intelligence LLC](https://convergentintel.com): Research Division |

## Usage

### llama.cpp CLI

```bash
./llama-cli -m qwen3-1.7b-distilled-30b-sft-Q4_K_M.gguf \
  -p "### Instruction:\nExplain the doctrine of promissory estoppel and provide a worked example.\n\n### Response:\n" \
  -n 512 --temp 0.0
```

### llama.cpp Python

```python
from llama_cpp import Llama

# Load the quantized model; raise n_ctx if you need longer prompts or outputs.
llm = Llama(model_path="qwen3-1.7b-distilled-30b-sft-Q4_K_M.gguf", n_ctx=1024)

output = llm(
    "### Instruction:\nProve that the sum of two even numbers is even.\n\n### Response:\n",
    max_tokens=512,
    temperature=0.0,
)
print(output["choices"][0]["text"])
```
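
If you would rather use the chat template embedded in the GGUF than the raw template above, `llama-cpp-python` also exposes a chat API (a sketch; output quality depends on the template baked into the file):

```python
chat = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is res judicata?"}],
    max_tokens=512,
    temperature=0.0,
)
print(chat["choices"][0]["message"]["content"])
```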

### Ollama

```bash
# Create a Modelfile pointing at the downloaded GGUF
echo 'FROM ./qwen3-1.7b-distilled-30b-sft-Q4_K_M.gguf' > Modelfile
ollama create stem-legal -f Modelfile
ollama run stem-legal "What is res judicata?"
```

### LM Studio

Download any GGUF file from this repo and load it directly in [LM Studio](https://lmstudio.ai/).
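
### llama.cpp Server

The same files work with llama.cpp's bundled `llama-server`, which exposes an OpenAI-compatible HTTP API (a minimal sketch; the port and request body are illustrative):

```bash
# Serve the model locally with an OpenAI-compatible endpoint
./llama-server -m qwen3-1.7b-distilled-30b-sft-Q4_K_M.gguf --port 8080

# Query it from another shell
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "What is res judicata?"}], "temperature": 0.0}'
```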

## Prompt Formats

This model responds to two prompt formats, one from each training stage:

**STEM derivation (from distillation):**

```
Solve the following problem carefully and show a rigorous derivation.

Problem:
[Your math/physics/engineering problem]

Proof:
```

**Instruction-following (from SFT):**

```
### Instruction:
[Your question or task]

### Response:
```
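
For programmatic use, a small helper keeps the two templates consistent (our sketch; `build_prompt` is not part of any API):

```python
def build_prompt(text: str, mode: str = "instruction") -> str:
    """Wrap text in one of the two prompt templates above."""
    if mode == "stem":
        return (
            "Solve the following problem carefully and show a rigorous derivation.\n\n"
            f"Problem:\n{text}\n\nProof:\n"
        )
    return f"### Instruction:\n{text}\n\n### Response:\n"
```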

## Limitations

This is a 1.7B model — it punches above its weight on structured reasoning, but it has hard limits. It can produce fluent but incorrect derivations, and it is not a substitute for formal proof verification, legal counsel, or professional engineering analysis. Verify all outputs independently. Performance is strongest on physics, differential equations, and legal instruction-following; it is weaker on underrepresented domains such as molecular biology and physiology.

## Source Model

Full training details, methodology, hyperparameters, and the DISC-informed distillation approach are documented in the source model card:

**[reaperdoesntknow/Qwen3-1.7B-Distilled-30B-A3B-SFT](https://huggingface.co/reaperdoesntknow/Qwen3-1.7B-Distilled-30B-A3B-SFT)**

## Citation

```bibtex
@misc{colca2026distilledsft,
  title={Qwen3-1.7B Distilled 30B-A3B SFT: STEM Reasoning + Legal Instruction Following},
  year={2026},
  publisher={HuggingFace},
  url={https://huggingface.co/reaperdoesntknow/Qwen3-1.7B-Distilled-30B-A3B-SFT-GGUF},
  note={Convergent Intelligence LLC: Research Division}
}
```

---

*Convergent Intelligence LLC: Research Division*
*"Where classical analysis fails to see, we begin."*

---

## Convergent Intelligence Portfolio

*Part of the [Qwen3 1.7B Distillation Series](https://huggingface.co/reaperdoesntknow) by [Convergent Intelligence LLC: Research Division](https://huggingface.co/reaperdoesntknow)*

## Mathematical Foundations

This is a GGUF-quantized variant. The mathematical foundations (Discrepancy Calculus, Topological Knowledge Distillation) are documented in the source model's card. The discrepancy operator $Df(x)$ and the BV decomposition that inform the training pipeline are preserved through quantization — the structural boundaries detected by DISC during training are baked into the weights, not dependent on precision.

## Related Models

| Model | Downloads | Format |
|-------|-----------|--------|
| [Qwen3-1.7B-Distilled-30B-A3B](https://huggingface.co/reaperdoesntknow/Qwen3-1.7B-Distilled-30B-A3B) | 96 | HF |
| [Qwen3-1.7B-Distilled-30B-A3B-SFT](https://huggingface.co/reaperdoesntknow/Qwen3-1.7B-Distilled-30B-A3B-SFT) | 65 | HF |
| [Qwen3-1.7B-Thinking-Distil](https://huggingface.co/reaperdoesntknow/Qwen3-1.7B-Thinking-Distil) | 501 | HF |

### Top Models from Our Lab

| Model | Downloads |
|-------|-----------|
| [LFM2.5-1.2B-Distilled-SFT](https://huggingface.co/reaperdoesntknow/LFM2.5-1.2B-Distilled-SFT) | 342 |
| [Qwen3-1.7B-Coder-Distilled-SFT](https://huggingface.co/reaperdoesntknow/Qwen3-1.7B-Coder-Distilled-SFT) | 302 |
| [Qwen3-0.6B-Distilled-30B-A3B-Thinking-SFT-GGUF](https://huggingface.co/reaperdoesntknow/Qwen3-0.6B-Distilled-30B-A3B-Thinking-SFT-GGUF) | 203 |
| [Qwen3-1.7B-Coder-Distilled-SFT-GGUF](https://huggingface.co/reaperdoesntknow/Qwen3-1.7B-Coder-Distilled-SFT-GGUF) | 194 |
| [SMOLM2Prover-GGUF](https://huggingface.co/reaperdoesntknow/SMOLM2Prover-GGUF) | 150 |

**Total Portfolio: 41 models | 2,781 total downloads**

*Last updated: 2026-03-28 12:55 UTC*

<!-- DISTILQWEN-SPOTLIGHT-START -->

## DistilQwen Collection

This model is part of the **[DistilQwen](https://huggingface.co/collections/reaperdoesntknow/distilqwen-69bf40ec669117e3f069ef1c)** proof-weighted distillation series.
Collection: **9 models** | **2,788 downloads**

### Teacher Variant Comparison

| Teacher | Student Size | Strength | Models |
|---------|-------------|----------|--------|
| Qwen3-30B-A3B (Instruct) | 1.7B | Instruction following, structured output, legal reasoning | 3 (833 DL) **← this model** |
| Qwen3-30B-A3B (Thinking) | 0.6B | Extended deliberation, higher-entropy distributions, proof derivation | 3 (779 DL) |
| Qwen3-30B-A3B (Coder) | 1.7B | Structured decomposition, STEM derivation, logical inference | 2 (825 DL) |

### Methodology

**The only BF16 collection in the portfolio.** While the broader Convergent Intelligence catalog (43 models, 12,000+ downloads) was trained on CPU at FP32 for $24 total compute, the DistilQwen series was trained on H100 at BF16 with a 30B-parameter teacher. Same methodology, premium hardware. This is what happens when you give the pipeline real compute.

All models use proof-weighted knowledge distillation: 55% cross-entropy with decaying proof weights (2.5× → 1.5×), 45% KL divergence at T=2.0. The proof weight amplifies loss on reasoning-critical tokens, forcing the student to allocate capacity to structural understanding rather than surface-level pattern matching.
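
As a concrete reading of that recipe, here is a minimal PyTorch sketch of the combined objective (our illustration, not the lab's training code; tensor shapes and the `proof_weights` schedule are assumptions):

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, labels, proof_weights,
                 T: float = 2.0, ce_mix: float = 0.55):
    """Proof-weighted KD: 55% weighted CE + 45% KL at temperature T.

    student_logits, teacher_logits: (batch, seq, vocab)
    labels: (batch, seq) target token ids
    proof_weights: (batch, seq) per-token weights, decaying 2.5 -> 1.5 on
                   derivation tokens and 1.0 elsewhere (assumed schedule).
    """
    # Per-token cross-entropy, amplified on reasoning-critical tokens.
    ce = F.cross_entropy(student_logits.transpose(1, 2), labels, reduction="none")
    ce = (ce * proof_weights).mean()

    # KL divergence between temperature-softened teacher and student,
    # scaled by T^2 to keep gradient magnitudes comparable.
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    return ce_mix * ce + (1.0 - ce_mix) * kl
```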

Full methodology: [Structure Over Scale (DOI: 10.57967/hf/8165)](https://doi.org/10.57967/hf/8165)

### Related in this series

- [Qwen3-1.7B-Distilled-30B-A3B](https://huggingface.co/reaperdoesntknow/Qwen3-1.7B-Distilled-30B-A3B) (292 downloads)
- [Qwen3-1.7B-Distilled-30B-A3B-SFT](https://huggingface.co/reaperdoesntknow/Qwen3-1.7B-Distilled-30B-A3B-SFT) (252 downloads)

<!-- DISTILQWEN-SPOTLIGHT-END -->
<!-- cix-keeper-ts:2026-04-11T16:09:25Z -->
<!-- card-refresh: 2026-03-30 -->