Initialize project; model provided by the ModelHub XC community

Model: llm-jp/optimal-sparsity-code-d512-E16-k16-520M-A520M
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-04-13 05:43:54 +08:00
commit ae939551dc
8 changed files with 148 additions and 0 deletions

README.md Normal file

@@ -0,0 +1,31 @@
---
pipeline_tag: text-generation
library_name: transformers
license: apache-2.0
tags:
- mixtral
- moe
- reasoning
---
# Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks
This repository contains model checkpoints from the paper [Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks](https://huggingface.co/papers/2508.18672).
For more details, including code and evaluation procedures, please refer to the official GitHub repository: [https://github.com/rioyokotalab/optimal-sparsity](https://github.com/rioyokotalab/optimal-sparsity)
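## How to use
A minimal loading sketch, assuming the checkpoint works with the standard `transformers` causal-LM API (suggested by the `library_name: transformers` and `mixtral` metadata above); the prompt string is purely illustrative:
```python
# Assumption: the checkpoint is compatible with AutoModelForCausalLM /
# AutoTokenizer, as the README's transformers/mixtral tags suggest.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llm-jp/optimal-sparsity-code-d512-E16-k16-520M-A520M"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical code-completion prompt for this code-trained MoE model.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```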
## How to cite
If you find our work helpful, please cite the paper:
```bibtex
@inproceedings{nakamura2026optimal,
title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
booktitle={The Fourteenth International Conference on Learning Representations},
year={2026},
url={https://openreview.net/forum?id=XFw2EPRUUR}
}
```