Initialize the project; model provided by the ModelHub XC community
Model: theprint/theprint-moe-8x3-0126-GGUF Source: Original Platform
README.md (new file, 25 lines)
---
license: apache-2.0
language:
- en
base_model:
- theprint/theprint-moe-8x3-0126
pipeline_tag: text-generation
tags:
- moe
- llama
---
<img src="theprint_18b_moe.png" width="420" />

# theprint-MoE-8x3-0126-GGUF

An 18B parameter Mixture of Experts model combining 8 specialized 3B experts, with 2 experts activated per token by default (configurable up to 4 at inference).
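
As a quick-start sketch (not an official recipe), the GGUF quantizations in this repo can be loaded with llama-cpp-python; the filename below is a placeholder, so substitute whichever quant file you actually download.

```python
# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python).
# The model_path filename is hypothetical; use the actual quant file from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="theprint-moe-8x3-0126.Q4_K_M.gguf",  # placeholder quant name
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm(
    "Explain mixture-of-experts routing in two sentences.",
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```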

## Architecture

- Base model: theprint/GeneralChat-Llama3.2-3B (provides shared attention layers)
- Total parameters: ~18B
- Active parameters: ~5B (2 experts) or ~9B (4 experts); see the estimate sketched after this list
- Gate mode: Hidden (prompt-based router initialization)
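
As a rough cross-check of the parameter figures above, the snippet below assumes a Mixtral-style layout (embeddings and attention shared from the base, one feed-forward block per expert) and the published Llama 3.2 3B dimensions; norms and router weights are ignored, so the totals are approximate.

```python
# Back-of-the-envelope parameter estimate for an 8x3B MoE built on Llama 3.2 3B experts.
# Assumptions: Mixtral-style layout (shared embeddings + attention, one MLP per expert);
# dimensions are the published Llama 3.2 3B config; norms and router weights ignored.
hidden = 3072        # hidden size
inter = 8192         # MLP intermediate size
layers = 28          # transformer layers
vocab = 128_256      # vocabulary size (input/output embeddings tied)
kv_dim = 8 * 128     # 8 KV heads x 128 head dim (grouped-query attention)

embed = vocab * hidden                                         # tied embedding matrix
attn = layers * (2 * hidden * hidden + 2 * hidden * kv_dim)    # q/o + k/v projections
mlp_per_expert = layers * 3 * hidden * inter                   # gate/up/down projections

shared = embed + attn
totals = {
    "total (8 experts)": shared + 8 * mlp_per_expert,
    "active (2 experts)": shared + 2 * mlp_per_expert,
    "active (4 experts)": shared + 4 * mlp_per_expert,
}
for name, n in totals.items():
    print(f"{name}: ~{n / 1e9:.1f}B")
# Prints roughly 18.0B, 5.3B and 9.6B, in line with the card's ~18B / ~5B / ~9B figures.
```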

## Full Model

For more information about this model, including access to the safetensors files, please see [theprint/theprint-moe-8x3-0126](https://huggingface.co/theprint/theprint-moe-8x3-0126).