---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- gpt-oss
- reasoning
- moe
- mixture-of-experts
- chain-of-thought
- unsloth
- gguf
- llama-cpp
base_model:
- openai/gpt-oss-20b
pipeline_tag: text-generation
model-index:
- name: GPT-OSS-Nano
results: []
---
# GPT-OSS-Nano

### Compact Reasoning Model with Mixture of Experts
[Unsloth](https://github.com/unslothai/unsloth) | [GGUF Files](#gguf-files) | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

**9B parameters • 12 experts • 128K context • Chain-of-thought reasoning**
[🤗 Model](https://huggingface.co/squ11z1/gpt-oss-9b-reasoning) | [📖 Docs](#usage) | [🔮 Q-GPT](https://huggingface.co/squ11z1/Q-GPT)
---
## 📋 Model Description
**GPT-OSS-Nano** is a fine-tuned Mixture of Experts (MoE) language model optimized for **step-by-step reasoning** and problem solving. Built on the GPT-OSS architecture with sparse expert activation, it achieves strong reasoning performance while using only ~3B active parameters per forward pass.
### ✨ Key Features
| Feature | Description |
|---------|-------------|
| 🧠 **Sparse MoE** | 12 experts, 4 active per token — efficient compute |
| 📝 **Chain-of-Thought** | Fine-tuned on reasoning datasets with step-by-step solutions |
| ⚡ **128K Context** | Long context with YaRN rope scaling |
| 🔮 **Q-GPT Ready** | Compatible with quantum confidence estimation |
| 📦 **GGUF Available** | Run locally with llama.cpp or Ollama |
---
## 🏗️ Architecture
| Specification | Value |
|---------------|-------|
| **Total Parameters** | 9.0 billion |
| **Active Parameters** | ~3 billion per forward pass |
| **Hidden Dimension** | 2880 |
| **Attention Heads** | 64 (8 KV heads, GQA) |
| **Layers** | 24 |
| **Experts** | 12 total, 4 active per token |
| **Context Length** | 131,072 tokens |
| **Vocabulary Size** | 201,088 |
| **Precision** | BFloat16 |
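For intuition, here is a toy sketch (ours, not the model's actual implementation) of the top-k routing pattern the table describes: a router scores all 12 experts for each token, but only the 4 highest-scoring experts execute, which is why only ~3B of the 9B parameters are active per forward pass.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy sparse-MoE layer: route each token to the top-k of N experts."""

    def __init__(self, hidden: int, num_experts: int = 12, top_k: int = 4):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(hidden, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden, 4 * hidden), nn.GELU(), nn.Linear(4 * hidden, hidden)
            )
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, hidden). The router scores all experts per token...
        weights, idx = self.router(x).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)  # normalize over the k winners
        out = torch.zeros_like(x)
        for slot in range(self.top_k):        # ...but only the top-k ever run
            for e in idx[:, slot].unique().tolist():
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out

# Tiny smoke test (small dims so it runs anywhere)
layer = TopKMoE(hidden=64)
print(layer(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```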
---
## 💻 Usage
### Quick Start with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "squ11z1/gpt-oss-nano",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "squ11z1/gpt-oss-nano",
    trust_remote_code=True,
)

prompt = """Solve this step by step:
A store offers 20% off on all items. If a jacket costs $85,
what is the final price after discount?"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.7,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
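The snippet above feeds a raw string; if the repo's tokenizer ships a chat template (the gpt-oss family uses the harmony format), formatting through `apply_chat_template` is usually more reliable. A minimal variant, assuming the template is present:
```python
messages = [{"role": "user", "content": "Solve step by step: what is 15% of 240?"}]

# Assumes the repo defines a chat template; raises if it does not
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```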
### ⚡ With Unsloth (2x Faster)
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "squ11z1/gpt-oss-nano",
    dtype=None,           # auto-detect dtype
    load_in_4bit=True,    # 4-bit quantization for efficiency
)

# Enable Unsloth's fast inference path
FastLanguageModel.for_inference(model)
```
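After `for_inference`, generation goes through the usual `generate` API; a short continuation of the snippet above:
```python
inputs = tokenizer(
    "Solve step by step: what is 15% of 240?", return_tensors="pt"
).to("cuda")
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```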
### 📦 With GGUF (llama.cpp)
```bash
# Download the quantized model
wget https://huggingface.co/squ11z1/gpt-oss-nano/resolve/main/gpt-oss-9b-q4_k_m.gguf
# Run inference
./llama-cli -m gpt-oss-9b-q4_k_m.gguf \
-p "Solve step by step: What is 15% of 240?" \
-n 256 --temp 0.7
```
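For longer sessions, llama.cpp's `llama-server` can expose the same GGUF behind an OpenAI-compatible HTTP endpoint (port and context size below are illustrative):
```bash
./llama-server -m gpt-oss-9b-q4_k_m.gguf -c 8192 --port 8080
# then POST chat requests to http://localhost:8080/v1/chat/completions
```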
### 🦙 With Ollama
```bash
# Create Modelfile
echo 'FROM ./gpt-oss-9b-q4_k_m.gguf' > Modelfile
ollama create gpt-oss-nano -f Modelfile
# Run
ollama run gpt-oss-nano "Explain quantum computing simply"
```
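For repeatable runs, the Modelfile can also pin sampling parameters and a system prompt; a sketch with illustrative values:
```
FROM ./gpt-oss-9b-q4_k_m.gguf
PARAMETER temperature 0.7
PARAMETER num_ctx 8192
SYSTEM "You are a careful assistant that reasons step by step."
```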
---
## 🎓 Training
| Parameter | Value |
|-----------|-------|
| **Base Model** | `openai/gpt-oss-20b` |
| **Method** | QLoRA (4-bit quantized LoRA) |
| **LoRA Rank** | 32 |
| **LoRA Alpha** | 32 |
| **Learning Rate** | 2e-4 |
| **Batch Size** | 2 per device (gradient accumulation: 8, effective: 16) |
| **Epochs** | 2 |
| **Framework** | Unsloth + TRL |
| **Hardware** | NVIDIA H200 |
**Dataset:** Superior-Reasoning — chain-of-thought examples with step-by-step problem solving.
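A minimal sketch of this recipe with Unsloth + TRL. The dataset id and text field are placeholders (the card names Superior-Reasoning without a repo path), `max_seq_length` and `target_modules` are assumptions, and this is not the actual training script:
```python
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    "openai/gpt-oss-20b",
    load_in_4bit=True,    # QLoRA: 4-bit base weights
    max_seq_length=4096,  # illustrative; not stated in the card
)
model = FastLanguageModel.get_peft_model(
    model,
    r=32,          # LoRA rank (table above)
    lora_alpha=32, # LoRA alpha (table above)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed set
)

dataset = load_dataset("your-org/superior-reasoning", split="train")  # placeholder id

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,  # effective batch size 16
        learning_rate=2e-4,
        num_train_epochs=2,
        dataset_text_field="text",      # assumed field name
        output_dir="outputs",
    ),
)
trainer.train()
```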
---
## 🔮 Q-GPT: Quantum Confidence
GPT-OSS-Nano is compatible with **Q-GPT** — a quantum neural network that estimates response confidence.
```python
from q_gpt import load_qgpt

model, tokenizer = load_qgpt("squ11z1/gpt-oss-nano")

# `inputs` is a tokenized prompt, prepared as in the Quick Start example above
outputs = model.generate_with_confidence(inputs, max_new_tokens=256)

print(f"Response confidence: {outputs['confidence_label']}")
# Output: "high", "moderate", "low", etc.

if outputs["should_refuse"]:
    print("⚠️ Model is uncertain — consider refusing to answer")
```
Learn more: [squ11z1/Q-GPT](https://huggingface.co/squ11z1/Q-GPT)
---
## ⚠️ Limitations
- **Language:** Primarily optimized for English; multilingual performance varies
- **Hallucinations:** May generate plausible but incorrect information on obscure topics
- **Safety:** Not designed for safety-critical applications without validation
- **Math:** Strong at arithmetic reasoning; weaker on advanced mathematics
---
## 📜 License
This model is released under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
---
## 🙏 Acknowledgments
- **[Unsloth](https://github.com/unslothai/unsloth)** — 2x faster fine-tuning
- **[OpenAI](https://huggingface.co/openai)** — GPT-OSS base model
- **[llama.cpp](https://github.com/ggerganov/llama.cpp)** — GGUF format and quantization
---
## 📖 Citation
```bibtex
@misc{gptossnano2026,
  title={GPT-OSS-Nano: Compact MoE Reasoning Model},
  author={squ11z1},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/squ11z1/gpt-oss-nano}
}
```
---
**Pro Mundi Vita**