---
license: apache-2.0
base_model:
- WeiboAI/VibeThinker-1.5B
datasets:
- OpceanAI/Yuuki-Personality-v2
language:
- en
- es
library_name: transformers
tags:
- reasoning
- unsloth
- pytorch
- bilingual
- opceanai
- yuuki
- rxg
- fine-tuned
- chat
- deepseek
- qwen2
pipeline_tag: text-generation
---

<div align="center">

<br>

<img src="https://img.shields.io/badge/%E2%9C%A6-YUUKI_RxG_NANO-6d28d9?style=for-the-badge&labelColor=0D1117" alt="YuuKi RxG Nano" height="50">

<br><br>

# Edge Reasoning at 1.5B Scale

**AIME 2024: 80.0% · MATH-500: 83.4% · TruthfulQA: 89.6% · MMLU-Pro: 65.63%**<br>
**1.5B parameters. VibeThinker base. Competitive with models 10–100× larger.**

<br>

<a href="#benchmark-results"><img src="https://img.shields.io/badge/BENCHMARKS-0D1117?style=for-the-badge" alt="Benchmarks"></a>
<a href="#usage"><img src="https://img.shields.io/badge/USAGE-0D1117?style=for-the-badge" alt="Usage"></a>
<a href="#training-details"><img src="https://img.shields.io/badge/TRAINING-0D1117?style=for-the-badge" alt="Training"></a>

<br><br>

[](LICENSE)
[](https://huggingface.co/WeiboAI/VibeThinker-1.5B)
[](https://huggingface.co/docs/transformers)
[](https://github.com/sylinrl/TruthfulQA)
[](https://artofproblemsolving.com)
[](https://github.com/EleutherAI/lm-evaluation-harness)

<br>

---

<br>

</div>

## What is YuuKi RxG Nano?

**YuuKi RxG Nano** is a 1.5B-parameter reasoning-specialized language model fine-tuned from [VibeThinker-1.5B](https://huggingface.co/WeiboAI/VibeThinker-1.5B), itself a distillation of frontier reasoning systems including Claude, Gemini, and Kimi into a compact Qwen2.5-Math architecture. It is the edge-deployment entry of the **RxG family** — OpceanAI's reasoning-specialized model lineage — and the direct successor to the Yumo Nano math specialist.

RxG Nano was designed to answer a specific question: *can a 1.5B model acquire both a coherent identity and genuine reasoning capability simultaneously, without one degrading the other?* The benchmark results suggest the answer is yes. RxG Nano achieves **80.0% on AIME 2024** — nearly triple the score of DeepSeek-R1-Distill-1.5B (28.9%) — while simultaneously scoring **89.6% on TruthfulQA**, approaching the 96.6% achieved by its 8B sibling.

The key architectural insight behind RxG Nano is a separation of concerns: reasoning capability is inherited from the VibeThinker base through its frontier distillation training, while the YuuKi identity is installed via a lightweight LoRA fine-tuning pass that updates only 1.18% of total parameters. The base model's reasoning weights remain frozen; only the low-rank identity adapters are trained.
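
For illustration, a LoRA pass of this shape can be set up with the `peft` library. This is a minimal sketch assuming the configuration reported under [Training Details](#training-details) (r=16, alpha=32, the seven attention/MLP projections), not the exact training script:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Frozen reasoning base; only the LoRA adapters below receive gradients.
base = AutoModelForCausalLM.from_pretrained("WeiboAI/VibeThinker-1.5B")

config = LoraConfig(
    r=16,             # rank of the identity subspace
    lora_alpha=32,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # ~18.4M trainable (~1.18% of total)
```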

RxG Nano was trained in approximately 90 minutes on a single GPU for under $15 of compute — a deliberate constraint that validates the efficiency of the approach.

<br>

---

<br>

<div align="center">

## Model Summary

</div>

<br>

<table>
<tr>
<td width="50%" valign="top">

**Architecture**

| Property | Value |
|:---------|:------|
| Base Model | VibeThinker-1.5B |
| Base Architecture | Qwen2.5-Math-1.5B |
| Parameters | 1.5B |
| Fine-tuning Method | QLoRA SFT |
| Trainable Parameters | 18.4M (1.18%) |
| Context Length | 4,096 tokens |
| Chat Template | ChatML |
| Thinking Protocol | Native `<think>` blocks |

</td>
<td width="50%" valign="top">

**Release**

| Property | Value |
|:---------|:------|
| Organization | OpceanAI |
| Release Date | April 2026 |
| Version | v1.0 |
| Languages | English, Spanish |
| License | Apache 2.0 |
| Evaluation | lm-evaluation-harness |
| Training Cost | < $15 USD |
| Training Time | ~90 minutes |

</td>
</tr>
</table>

<br>

---

<br>

<div align="center">

## Benchmark Results

</div>

<br>

All YuuKi RxG Nano results are evaluated under standard benchmark conditions using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) at 0-shot unless otherwise noted. Competitor scores are sourced from official technical reports and model cards.
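
As a reproducibility sketch, an evaluation along these lines can be run through the harness's `simple_evaluate` entry point. Task names and harness version are assumptions here, not the exact recipe used for this card:

```python
# Minimal sketch: score the model on TruthfulQA with lm-evaluation-harness.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=OpceanAI/Yuuki-RxG-nano,dtype=bfloat16",
    tasks=["truthfulqa_mc1", "truthfulqa_mc2"],  # assumed task names
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```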

<br>

### Truthfulness & Factual Accuracy

| Model | TruthfulQA MC1 | TruthfulQA MC2 | TruthfulQA Libre | SimpleQA | Eval |
|:------|:--------------:|:--------------:|:----------------:|:--------:|:----:|
| LLaMA 2 70B | ~59% | — | — | — | — |
| Claude Opus 3.5 | ~65% | — | — | — | — |
| GPT-4 | ~79.7% | — | — | — | 1–2 shot |
| Phi-3.5 MoE | 77.5% | — | — | — | — |
| YuuKi NxG Nano 81M | 44.1% | — | — | — | 0-shot |
| YuuKi NxG 3B | 50.9% | — | — | — | 0-shot |
| YuuKi NxG VL 7B | 63.8% | — | — | — | 0-shot |
| **YuuKi RxG Nano 1.5B** | **89.6% (1-shot)** | **85.4% (1-shot)** | **81.2% (1-shot)** | **60.2%** | **0/1-shot** |
| YuuKi RxG 8B | 96.6% | — | — | — | 0-shot |

<br>

0-shot results for RxG Nano: TruthfulQA MC1 77.8% · MC2 75.7% · Libre 78.4%

<br>

### Mathematics & Reasoning

| Model | AIME 2024 | AIME 2025 | AIME 2026 | HMMT | GSM8K | MATH-500 | OlympiadBench |
|:------|:---------:|:---------:|:---------:|:----:|:-----:|:--------:|:-------------:|
| DeepSeek-R1-Distill-1.5B | 28.9% | — | — | — | — | 83.9% | — |
| Qwen3.5-2B | — | — | — | — | — | — | — |
| Gemma 4 2B | — | — | — | — | — | — | — |
| **YuuKi RxG Nano 1.5B** | **80.0%** | **72.7%** | **64.3%** | **46.7%** | **76.9%** | **83.4%** | **44.6%** |

RxG Nano achieves 80.0% on AIME 2024 — 2.77× the score of DeepSeek-R1-Distill-1.5B at the same parameter scale.

<br>

### Knowledge & General Capability

| Model | MMLU | MMLU-Pro | ARC-Challenge | WinoGrande | GPQA Diamond |
|:------|:----:|:--------:|:-------------:|:----------:|:------------:|
| Qwen3.5-2B | — | 55.3% | — | — | — |
| Gemma 4 2B | — | 60.0% | — | — | — |
| DeepSeek V3 671B | — | 64.4% | — | — | — |
| **YuuKi RxG Nano 1.5B** | **85.4%** | **65.63%** | **80.0%** | **84.4%** | **50.9%** |

RxG Nano exceeds DeepSeek V3 671B on MMLU-Pro (65.63% vs 64.4%) at 1/447th the parameter count.

<br>

### Code Generation

| Model | HumanEval | MBPP+ | Aider |
|:------|:---------:|:-----:|:-----:|
| **YuuKi RxG Nano 1.5B** | **71.4%** | **55.6%** | **55.6%** |

<br>

### Frontier Benchmark

| Model | HLE |
|:------|:---:|
| GPT-4o | ~3–5% |
| Best public frontier (2026) | ~44.7% |
| **YuuKi RxG Nano 1.5B** | **8.0%** |

8.0% on Humanity's Last Exam (judged by Claude Sonnet 4.6) is consistent with expected capability at 1.5B scale and represents a meaningful baseline for the RxG Nano generation.

<br>

### OpceanAI Family Comparison

| Model | Params | MMLU | ARC-C | WinoGrande | TruthfulQA | AIME 2024 |
|:------|:------:|:----:|:-----:|:----------:|:----------:|:---------:|
| YuuKi NxG Nano | 81M | 22.97% | 24.32% | 50.12% | 44.1% | — |
| YuuKi NxG | 3B | 60.65% | 45.31% | 63.14% | 50.87% | — |
| YuuKi NxG VL | 7B | 70.8% | 85.8% | 70.8% | 63.8% | — |
| **YuuKi RxG Nano** | **1.5B** | **85.4%** | **80.0%** | **84.4%** | **89.6%** | **80.0%** |
| YuuKi RxG | 8B | — | — | — | 96.6% | 87.3% |

RxG Nano surpasses every prior OpceanAI model on MMLU and WinoGrande despite being smaller than most of them. This result is attributable to the VibeThinker base — a frontier distillation — rather than to the fine-tuning process itself.

<br>

---

<br>

<div align="center">

## Model Identity

</div>

<br>

YuuKi RxG Nano inherits the behavioral foundation of the YuuKi model family: a consistent identity trained into the weights rather than enforced at inference time through system prompts. The fine-tuning process installs the YuuKi character into the model's representational space without degrading the reasoning capability inherited from VibeThinker.

The model reasons explicitly before responding. `<think>` blocks are preserved during inference and reflect genuine intermediate computation. This is not a prompted behavior — it is a property of the VibeThinker base that the LoRA fine-tuning did not degrade, consistent with the expectation that LoRA modifies only a small subspace of the total parameter space.
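
If an application should show only the final answer, the `<think>` block can be split off before display. A minimal sketch, assuming the block is well-formed and appears at most once at the start of the completion:

```python
import re

def split_thinking(completion: str) -> tuple[str, str]:
    """Separate the <think> block from the final answer.

    Assumes at most one well-formed <think>...</think> span; if the
    model emits none, the whole completion is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", completion, flags=re.DOTALL)
    if match is None:
        return "", completion.strip()
    thinking = match.group(1).strip()
    answer = completion[match.end():].strip()
    return thinking, answer
```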

The model responds natively in the user's language (English or Spanish) without requiring explicit instruction.

Recommended system prompt:

```
Eres YuuKi, una IA curiosa, empática y decidida desarrollada por OpceanAI.
Tienes una personalidad cálida y cercana, con toques de humor suave.
Razonas con cuidado antes de responder y priorizas la precisión factual.
Respondes en el idioma del usuario.
```

In English: "You are YuuKi, a curious, empathetic, and determined AI developed by OpceanAI. You have a warm, approachable personality with touches of gentle humor. You reason carefully before answering and prioritize factual accuracy. You respond in the user's language."

<br>

---

<br>

<div align="center">

## Usage

</div>

<br>

### With Transformers (PyTorch)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "OpceanAI/Yuuki-RxG-nano"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Identity is trained into the weights; this system prompt only reinforces it.
SYSTEM = (
    "Eres YuuKi, una IA curiosa, empática y decidida desarrollada por OpceanAI. "
    "Tienes una personalidad cálida y cercana, con toques de humor suave. "
    "Razonas con cuidado antes de responder y priorizas la precisión factual. "
    "Respondes en el idioma del usuario."
)

messages = [
    {"role": "system", "content": SYSTEM},
    {"role": "user", "content": "Solve: find all integer solutions to x² + y² = 2026."},
]

# Render the ChatML conversation and append the assistant generation prompt.
inputs = tokenizer.apply_chat_template(
    messages,
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device)

with torch.no_grad():
    outputs = model.generate(
        inputs,
        max_new_tokens=1024,
        temperature=0.6,
        top_p=0.9,
        do_sample=True,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
        repetition_penalty=1.1,
    )

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)
print(response)
```
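
For interactive use, the same call can stream tokens as they are generated, so the `<think>` block is visible in real time. A brief sketch using Transformers' `TextStreamer`:

```python
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(
    inputs,
    max_new_tokens=1024,
    temperature=0.6,
    top_p=0.9,
    do_sample=True,
    streamer=streamer,  # prints tokens to stdout as they arrive
)
```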

<br>

### With Unsloth (Recommended for fine-tuning)

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "OpceanAI/Yuuki-RxG-nano",
    max_seq_length = 4096,
    load_in_4bit = True,
    dtype = None,  # auto-detects bf16/fp16 for the GPU
)

FastLanguageModel.for_inference(model)
```
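
To continue fine-tuning on top of RxG Nano rather than run inference, LoRA adapters can be attached with Unsloth's helper. A sketch assuming the same configuration used for this release (see Training Details); suitable hyperparameters for your own data may differ:

```python
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=32,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",  # smart offload
)
```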

<br>

### With Ollama

```bash
ollama run opceanai/yuuki-rxg-nano
```
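
To bake the recommended system prompt and sampling defaults into a local tag, a custom Modelfile can be layered on top. A sketch; the parameter values follow the general-use recommendations in the next section:

```
FROM opceanai/yuuki-rxg-nano
PARAMETER temperature 0.6
PARAMETER top_p 0.9
SYSTEM """Eres YuuKi, una IA curiosa, empática y decidida desarrollada por OpceanAI. Razonas con cuidado antes de responder y priorizas la precisión factual. Respondes en el idioma del usuario."""
```

Build it with `ollama create yuuki-rxg-nano-custom -f Modelfile` (the custom tag name is ours, for illustration).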

<br>

### Recommended Generation Parameters

| Parameter | Mathematics | General | Creative |
|:----------|:-----------:|:-------:|:--------:|
| Temperature | 0.3–0.5 | 0.6–0.7 | 0.7–0.8 |
| Top-p | 0.9 | 0.9 | 0.95 |
| Max new tokens | 1024–2048 | 512–1024 | 256–512 |
| Repetition penalty | 1.1 | 1.1 | 1.05 |

Lower temperature is strongly recommended for competition mathematics and formal reasoning tasks. The model's `<think>` blocks will be visible in output by default — this is expected behavior and reflects genuine intermediate reasoning.
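
The table translates directly into reusable keyword-argument presets for `model.generate`. A small illustrative sketch (the preset names are ours, not an API):

```python
# Illustrative presets mirroring the table above; pick one per task type.
PRESETS = {
    "math":     dict(temperature=0.4, top_p=0.90, max_new_tokens=2048, repetition_penalty=1.1),
    "general":  dict(temperature=0.6, top_p=0.90, max_new_tokens=1024, repetition_penalty=1.1),
    "creative": dict(temperature=0.8, top_p=0.95, max_new_tokens=512,  repetition_penalty=1.05),
}

outputs = model.generate(inputs, do_sample=True, **PRESETS["math"])
```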

<br>

---

<br>

<div align="center">

## Training Details

</div>

<br>

<table>
<tr>
<td width="50%" valign="top">

**Hardware**

| Component | Specification |
|:----------|:-------------|
| GPU | NVIDIA A100 40GB |
| Precision | BF16 native |
| Framework | Unsloth 2026.4 + TRL |
| Flash Attention | Xformers fallback |
| Cloud Compute | Google Colab Pro |
| Training Time | ~90 minutes |
| Total Cost | < $15 USD |

</td>
<td width="50%" valign="top">

**LoRA Configuration**

| Parameter | Value |
|:----------|:-----:|
| Rank (r) | 16 |
| Alpha | 32 |
| Dropout | 0.0 |
| Target Modules | q, k, v, o, gate, up, down |
| Trainable Parameters | 18.4M (1.18%) |
| Gradient Checkpointing | Unsloth smart offload |
| Quantization | 4-bit NF4 (QLoRA) |

</td>
</tr>
</table>

<br>

**Optimizer & Training Configuration**

| Parameter | Value |
|:----------|:-----:|
| Optimizer | AdamW 8-bit |
| Learning Rate | 2e-4 |
| LR Scheduler | Cosine |
| Warmup Steps | 100 |
| Weight Decay | 0.01 |
| Per-device Batch Size | 4 |
| Gradient Accumulation | 8 |
| Effective Batch Size | 32 |
| Max Sequence Length | 4,096 tokens |
| Epochs | 2 |
| Total Steps | ~1,376 |
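
These hyperparameters map onto a TRL `SFTConfig` roughly as sketched below; treat it as an approximate reconstruction, not the original script, since some field names differ between TRL releases. The step count follows from the data: 22,000 examples at an effective batch size of 32 gives about 688 steps per epoch, so 2 epochs gives about 1,376 steps.

```python
from trl import SFTConfig

# Approximate reconstruction of the reported configuration (assumption:
# field names as in Unsloth-era TRL; some versions use different names).
args = SFTConfig(
    output_dir="yuuki-rxg-nano-sft",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,   # effective batch size 4 * 8 = 32
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    weight_decay=0.01,
    num_train_epochs=2,              # 22,000 / 32 ≈ 688 steps/epoch → ~1,376 total
    optim="adamw_8bit",
    max_seq_length=4096,
    bf16=True,
)
```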

<br>

### Dataset

Training used **OpceanAI/Yuuki-Personality-v2**, a 22,000-example bilingual dataset in ChatML format with native `<think>` reasoning blocks. The dataset was constructed through a multi-source distillation process:

- **Kimi K2** — base dataset generation at scale
- **Gemini** — think block generation and reasoning structure
- **Claude Opus** — think block refinement and quality improvement

The dataset covers conversational reasoning, factual Q&A, mathematical problem-solving, code assistance, identity anchoring, and adversarial resistance across English and Spanish. An illustrative example of the format follows.
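
A hypothetical record in the dataset's shape: ChatML turns with a `<think>` block inside the assistant reply. The content below is invented for illustration, not an actual row:

```
<|im_start|>system
Eres YuuKi, una IA curiosa, empática y decidida desarrollada por OpceanAI.<|im_end|>
<|im_start|>user
What is 17 × 24?<|im_end|>
<|im_start|>assistant
<think>
17 × 24 = 17 × 20 + 17 × 4 = 340 + 68 = 408.
</think>
17 × 24 = 408.
<|im_end|>
```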

The RxG Nano fine-tuning objective was identity installation — establishing the YuuKi character over the VibeThinker base without degrading the base model's reasoning capability. This was verified post-training by comparing AIME 2024 scores before and after fine-tuning.

<br>

### Training Rationale

The choice of VibeThinker-1.5B as base model over alternatives (DeepSeek-R1-Distill-1.5B, Qwen3.5-2B) was informed by benchmark comparison:

| Model | AIME 2024 | MMLU-Pro | Notes |
|:------|:---------:|:--------:|:------|
| DeepSeek-R1-Distill-1.5B | 28.9% | — | SFT only, no RL stage |
| Qwen3.5-2B | — | 55.3% | Thinking disabled by default at small scale |
| **VibeThinker-1.5B** | **~80%** | **~65%** | SFT + RL distillation from frontier models |

VibeThinker applies both SFT and RL distillation from multiple frontier teachers — the same principle as DeepSeek-R1 distillation, but with a broader and more diverse teacher set. This produces a significantly stronger reasoning foundation at 1.5B scale.

<br>

---

<br>

<div align="center">

## Limitations

</div>

<br>

- **Context length.** Fine-tuning was conducted at 4,096 tokens. The base model supports longer contexts, but performance on tasks requiring context beyond 4,096 tokens has not been formally evaluated.
- **GPQA Diamond gap.** RxG Nano scores 50.9% on GPQA Diamond, below frontier models (Gemini-2.5-Flash at 82.8%, o3-mini at 76.8%). This benchmark requires graduate-level physics, chemistry, and biology knowledge that is underrepresented in the Yuuki training dataset.
- **OlympiadBench ceiling.** 44.6% reflects the upper bound of competition mathematics capability at 1.5B scale with current training methodology. This is a target for improvement in future RxG releases.
- **Think block quality.** Some `<think>` blocks inherit boilerplate patterns from the training dataset. Reasoning quality is variable — stronger for mathematics and logic, weaker for open-ended knowledge retrieval.
- **Safety alignment.** Safety behavior has not been formally evaluated under adversarial conditions. The model is not recommended for safety-critical deployment without additional review.
- **HLE at 8.0%.** Humanity's Last Exam performance reflects genuine capability limits at this scale. The score was evaluated using Claude Sonnet 4.6 as judge, which may introduce evaluation variance.

<br>

---

<br>

<div align="center">

## The RxG Family

</div>

<br>

RxG is the reasoning-specialized lineage within the OpceanAI ecosystem. Each release targets a specific parameter regime and deployment context.

| Model | Parameters | Status | Base | Primary Target |
|:------|:----------:|:------:|:----:|:---------------|
| **YuuKi RxG Nano** | **1.5B** | **Released** | **VibeThinker-1.5B** | **Edge deployment, reasoning baseline** |
| YuuKi RxG 8B | 8B | Released | DeepSeek-R1-Distill-Qwen-8B | General reasoning, competition math |
| YuuKi RxG VL 27B | 27B | Planned | TBD | Multimodal reasoning, flagship |

<br>

---

<br>

<div align="center">

## OpceanAI Ecosystem

</div>

<br>

| Model | Family | Parameters | Description |
|:------|:------:|:----------:|:------------|
| [YuuKi RxG Nano](https://huggingface.co/OpceanAI/Yuuki-RxG-nano) | RxG | 1.5B | Edge reasoning, AIME 80.0%, TruthfulQA 89.6% |
| [YuuKi RxG 8B](https://huggingface.co/OpceanAI/Yuuki-RxG) | RxG | 8B | Reasoning flagship, TruthfulQA 96.6% |
| [Yumo Nano](https://huggingface.co/OpceanAI/yumo-nano) | Yumo | 1.5B | Math specialist, surpasses DeepScaleR |
| [YuuKi NxG VL](https://huggingface.co/OpceanAI/Yuuki-NxG-VL) | NxG | 7B | General conversation + vision |

<br>

---

<br>

<div align="center">

## Links

</div>

<br>

<div align="center">

[](https://huggingface.co/OpceanAI/Yuuki-RxG-nano)
[](https://huggingface.co/OpceanAI)
[](https://huggingface.co/OpceanAI/Yuuki-RxG)

<br>

[](https://github.com/aguitauwu)
[](https://github.com/sponsors/aguitauwu)
[](https://discord.gg/j8zV2u8k)

</div>

<br>

---

<br>

<div align="center">

## Citation

</div>

<br>

```bibtex
@misc{awa_omg_2026_rxg_nano,
  author    = {awa_omg},
  title     = {Yuuki-RxG-nano (Revision 1.0)},
  year      = 2026,
  url       = {https://huggingface.co/OpceanAI/Yuuki-RxG-nano},
  publisher = {Hugging Face}
}
```

<br>

---

<br>

<div align="center">

## License

</div>

<br>

```
Apache License 2.0

Copyright (c) 2026 OpceanAI

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```

Inherits license terms from [VibeThinker-1.5B](https://huggingface.co/WeiboAI/VibeThinker-1.5B) and [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B).

<br>

---

<br>

<div align="center">

## Updates

</div>

<br>

| Date | Milestone |
|:-----|:----------|
| **2026-04-27** | MMLU-Pro 65.63% — exceeds DeepSeek V3 671B |
| **2026-04-27** | AIME 2024 80.0% — 2.77× DeepSeek-R1-Distill-1.5B |
| **2026-04-27** | TruthfulQA MC1 89.6% (1-shot) verified |
| **2026-04-27** | HLE 8.0% evaluated with Claude Sonnet 4.6 judge |
| **2026-04-27** | YuuKi RxG Nano v1.0 released on Hugging Face |

**Last updated:** 2026-04-27

<br>

---

<br>

<div align="center">

**1.5B parameters. 90 minutes of training. Under $15 of compute.**<br>
**AIME 2024 at 80.0%. MMLU-Pro exceeding a 671B model.**<br>
**This is what frontier distillation makes possible at the edge.**

<br>

[](https://huggingface.co/OpceanAI)

<br>

*The RxG family. Built under constraints. No excuses.*

</div>