EngineX/xc-llm-ascend
xc-llm-ascend/vllm_ascend/ops/fused_moe
(at fff5df3efe5e3c723e412a4cfb4129c6cd3f5bbc)

Latest commit: 22f253142a [Feature] Support fine-grained shared expert overlap (#5482)
Author: Jade Zheng
Date: 2026-01-17 11:53:22 +08:00

Fine-grained control over shared expert overlap to prevent resource contention.

- vLLM version: v0.13.0
- vLLM main: 5326c89803

Signed-off-by: Jade Zheng <zheng.shoujian@outlook.com>
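
For context on what the commit above is doing: in MoE layers that pair always-active shared experts with routed experts (as in DeepSeek-style models), the shared expert MLP has no data dependency on the routed-expert dispatch, so the two can run on separate streams and overlap. Below is a minimal sketch of that pattern, assuming plain PyTorch CUDA streams as a stand-in for the Ascend NPU stream API; every function name in it is a hypothetical placeholder, not the actual vllm_ascend code.

```python
import torch

# Minimal sketch of the shared-expert overlap pattern, assuming CUDA
# streams as a stand-in for the Ascend NPU stream API. All callables
# (shared_expert, dispatch_tokens, routed_experts, combine_tokens) are
# hypothetical placeholders, not the actual vllm_ascend interfaces.

_overlap_stream = torch.cuda.Stream()

def moe_forward(hidden_states, router_logits,
                shared_expert, dispatch_tokens, routed_experts,
                combine_tokens):
    # The shared expert depends only on hidden_states, so it can start
    # on a side stream as soon as the inputs are ready.
    _overlap_stream.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(_overlap_stream):
        shared_out = shared_expert(hidden_states)

    # Meanwhile the default stream runs the communication-heavy routed
    # path: token dispatch (e.g. all-to-all), expert MLPs, and combine.
    dispatched = dispatch_tokens(hidden_states, router_logits)
    routed_out = combine_tokens(routed_experts(dispatched))

    # Join the streams before mixing the two partial results.
    torch.cuda.current_stream().wait_stream(_overlap_stream)
    return routed_out + shared_out
```

The "fine-grained" qualifier in the commit title presumably refers to controlling this overlap per stage, since running both streams at once makes them compete for compute and memory bandwidth, which is the resource contention the commit message mentions.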
Files:

File                 Last commit                                                                        Date
__init__.py          [Refactor] [MoE] Rename moe-related classes & files (#3646)                       2025-10-25 11:22:03 +08:00
comm_utils.py        [Refactor] [MoE] Rename moe-related classes & files (#3646)                       2025-10-25 11:22:03 +08:00
experts_selector.py  [Kernel] Add moe_gating_top_k operator support for Ascend NPU (#5579)             2026-01-07 21:42:31 +08:00
fused_moe.py         [Feature] Support fine-grained shared expert overlap (#5482)                      2026-01-17 11:53:22 +08:00
moe_comm_method.py   [Feature] Support fine-grained shared expert overlap (#5482)                      2026-01-17 11:53:22 +08:00
moe_mlp.py           [refactor] Remove unnecessary attributes from set_ascend_forward_context (#5204)  2025-12-23 08:49:52 +08:00
prepare_finalize.py  [Feature] Support fine-grained shared expert overlap (#5482)                      2026-01-17 11:53:22 +08:00
token_dispatcher.py  [Feature] Support fine-grained shared expert overlap (#5482)                      2026-01-17 11:53:22 +08:00