---
base_model: unsloth/Qwen3-4B-Thinking-2507
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
datasets:
- TeichAI/MiniMax-M2.1-8800x
---
# Qwen3 4B Thinking 2507 - MiniMax M2.1 Distill

This model was fine-tuned on a reasoning dataset distilled from **MiniMax M2.1**.

- 🧬 Datasets:
  - `TeichAI/MiniMax-M2.1-8800x`
- 🏗 Base Model:
  - `unsloth/Qwen3-4B-Thinking-2507`
- ⚡ Use cases:
  - Coding
  - Science
  - Deep Research
- ∑ Dataset stats:
  - Cost: $42.94 (USD)
  - Total tokens (input + output): 39.2 M

---

This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.