
---
license: apache-2.0
language:
- en
base_model: theprint/theprint-moe-8x3-0126
pipeline_tag: text-generation
tags:
- moe
- llama
---

theprint-MoE-8x3-0126-GGUF

An 18B-parameter Mixture-of-Experts (MoE) model combining 8 specialized 3B experts. By default, 2 experts are activated per token; this can be raised to 4 at inference time.
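
For example, one way to run a quantized build is with llama-cpp-python. This is a minimal sketch, not this project's documented workflow: the GGUF filename is a placeholder, and the expert-count override assumes the converted file uses the standard llama-architecture metadata key llama.expert_used_count.

```python
# Minimal sketch: load a quantized GGUF with llama-cpp-python.
# The filename is a placeholder -- use whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="theprint-moe-8x3-0126.Q4_K_M.gguf",  # hypothetical quant name
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

out = llm(
    "Explain Mixture of Experts routing in two sentences.",
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])

# To activate 4 experts per token instead of the default 2, the GGUF
# metadata can be overridden at load time (assumes the standard
# llama-architecture key; verify with a GGUF inspector first):
llm4 = Llama(
    model_path="theprint-moe-8x3-0126.Q4_K_M.gguf",
    kv_overrides={"llama.expert_used_count": 4},
)
```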

Architecture

  • Base model: theprint/GeneralChat-Llama3.2-3B (provides shared attention layers)
  • Total parameters: ~18B
  • Active parameters: ~5B (2 experts) or ~9B (4 experts); see the arithmetic sketch after this list
  • Gate mode: Hidden (prompt-based router initialization)
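
A quick back-of-the-envelope check on these figures (a sketch based only on the stated totals, not exact tensor counts): since attention layers are shared and only the expert MLPs are replicated per expert, the shared and per-expert sizes can be solved from the ~18B total and ~5B active numbers.

```python
# Rough arithmetic behind the parameter counts above (assumed figures,
# not exact tensor counts). Only MLP weights are per-expert; attention
# and embeddings are shared across all 8 experts, and the router is ignored.
n_experts = 8
total_b = 18.0    # ~18B total parameters (stated above)
active2_b = 5.0   # ~5B active with 2 experts (stated above)

# total  = shared + 8 * per_expert
# active = shared + 2 * per_expert
per_expert_b = (total_b - active2_b) / (n_experts - 2)  # ~2.2B per expert MLP
shared_b = total_b - n_experts * per_expert_b           # ~0.7B shared

for k in (2, 4):
    print(f"{k} experts active: ~{shared_b + k * per_expert_b:.1f}B parameters")
# -> ~5.0B with 2 experts, ~9.3B with 4 (matching the ~9B figure above)
```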

Full Model

For more information about this model, including access to the safetensors files, please see theprint/theprint-moe-8x3-0126.
