An 18B-parameter Mixture-of-Experts (MoE) model built from 8 specialized 3B experts that share a single set of attention layers, with 2 experts activated per token by default (configurable up to 4 at inference time).
## Architecture

- Base model: theprint/GeneralChat-Llama3.2-3B (provides the shared attention layers)
- Total parameters: ~18B
- Active parameters: ~5B with 2 experts, ~9B with 4 experts (see the sketches below)
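To see where these counts come from, here is a back-of-the-envelope sketch assuming approximate Llama 3.2 3B dimensions and a Mixtral-style merge in which attention, norms, and embeddings are shared while only the MLP (expert) weights are replicated. The dimensions and the resulting figures are rough estimates, not exact counts.

```python
# Rough parameter arithmetic for an 8-expert MoE built on a ~3B Llama base.
# Dimensions below are the approximate Llama 3.2 3B shapes (an assumption).
HIDDEN = 3072          # hidden size
INTERMEDIATE = 8192    # MLP intermediate size
LAYERS = 28            # transformer layers
KV_DIM = 1024          # 8 KV heads x 128 head dim (grouped-query attention)
VOCAB = 128_256        # vocabulary size (embeddings tied)

mlp_per_layer = 3 * HIDDEN * INTERMEDIATE                    # gate, up, down projections
attn_per_layer = 2 * HIDDEN * HIDDEN + 2 * HIDDEN * KV_DIM   # q, o + k, v projections

expert_params = LAYERS * mlp_per_layer                       # one expert's MLP stack, ~2.1B
shared_params = LAYERS * attn_per_layer + VOCAB * HIDDEN     # attention + embeddings, ~1.1B

def moe_params(num_experts: int) -> float:
    """Parameters in billions when `num_experts` expert copies are counted."""
    return (shared_params + num_experts * expert_params) / 1e9

print(f"total  (8 experts): ~{moe_params(8):.1f}B")  # ~18B, as quoted above
print(f"active (2 experts): ~{moe_params(2):.1f}B")  # ~5B active
print(f"active (4 experts): ~{moe_params(4):.1f}B")  # roughly the ~9B active figure
```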
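Raising the number of active experts from 2 to 4 can be done at load time. Below is a minimal usage sketch assuming the merged model is published as a Mixtral-architecture checkpoint (as MoE merges of Llama experts typically are), where the routing width is exposed as `num_experts_per_tok`; `your-org/your-moe-model` is a placeholder, not the real repo id.

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-moe-model"  # hypothetical placeholder repo id

# Raise the number of active experts per token from the default 2 to 4.
config = AutoConfig.from_pretrained(model_id)
config.num_experts_per_tok = 4  # ~9B active parameters instead of ~5B

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer("Explain mixture-of-experts routing.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Activating more experts trades inference speed and memory for quality: each additional expert adds its MLP stack to every forward pass, while the shared attention cost stays fixed.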