EngineX / xc-llm-ascend
Path: xc-llm-ascend/vllm_ascend/ops/fused_moe
Snapshot: 981d803cb7963b85d7a296c2a10ff873232870d8
Latest commit: 5def28dcd3 by realliujiaxu, 2026-02-27 08:27:41 +08:00
    [Feat]support sequence parallelism by pass for VL models (#5632)
File                  Last commit                                                                           Date
__init__.py           [Refactor] [MoE] Rename moe-related classes & files (#3646)                           2025-10-25 11:22:03 +08:00
comm_utils.py         [Lint]Style: Convert vllm-ascend/ to ruff format(Batch #11) (#6176)                   2026-02-06 15:28:49 +08:00
experts_selector.py   [Refact]Refact MLA/SFA weight prefetch to consist with moe weight prefetch (#6629)    2026-02-10 14:14:37 +08:00
fused_moe.py          [Feat]support sequence parallelism by pass for VL models (#5632)                      2026-02-27 08:27:41 +08:00
moe_comm_method.py    [Bugfix][DispatchFFNCombine] resolve vec error caused by unaligned UB access (#6707)  2026-02-14 10:32:50 +08:00
moe_mlp.py            [Attention] add gpt-oss support (#5901)                                               2026-02-12 10:55:34 +08:00
prepare_finalize.py   [MOE Refactor] Remove QuantType in prepare_finalize.py (#6534)                        2026-02-10 15:59:58 +08:00
token_dispatcher.py   [Lint]Style: Convert vllm-ascend/ to ruff format(Batch #11) (#6176)                   2026-02-06 15:28:49 +08:00