[1/N][Feat] Support MoE models with ACL Graph and refactor MoE communication logic (#2125)

### What this PR does / why we need it?
This PR refactors the MoE (Mixture of Experts) communication logic by
introducing a strategy pattern. It defines an abstract base class,
`MoECommMethod`, which encapsulates different communication strategies
for MoE layers. By decoupling the MoE implementation from any single
communication method, this change makes it simpler to add, replace, or
optimize communication strategies in the future.
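As a rough illustration of that strategy pattern, here is a minimal sketch of what a `MoECommMethod`-style base class and one concrete strategy could look like. The `prepare`/`finalize` method names and signatures are assumptions for illustration only, not the interface actually added by this PR:

```python
# Minimal sketch of the strategy pattern described above.
# NOTE: prepare()/finalize() are illustrative names, not the real interface.
from abc import ABC, abstractmethod

import torch


class MoECommMethod(ABC):
    """One communication strategy for MoE layers (sketch)."""

    @abstractmethod
    def prepare(self, hidden_states: torch.Tensor,
                topk_ids: torch.Tensor) -> torch.Tensor:
        """Dispatch/gather tokens before the expert computation."""

    @abstractmethod
    def finalize(self, expert_output: torch.Tensor) -> torch.Tensor:
        """Combine/reduce expert outputs after the computation."""


class AllGatherImpl(MoECommMethod):
    """Hypothetical all-gather strategy; a single-rank stand-in here."""

    def prepare(self, hidden_states: torch.Tensor,
                topk_ids: torch.Tensor) -> torch.Tensor:
        # A real implementation would all-gather tokens across ranks;
        # this sketch simply passes the local tokens through.
        return hidden_states

    def finalize(self, expert_output: torch.Tensor) -> torch.Tensor:
        # Inverse of prepare(): reduce-scatter / slice back to local tokens.
        return expert_output
```

In this shape the MoE layer only ever calls `prepare()` and `finalize()`, and the collective used underneath is entirely the concrete strategy's concern.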

Plan / Roadmap

1. Introduce `MoECommMethod`, implement `AllGatherImpl`, and adapt ACL
Graph handling to cover all scenarios (this PR).
2. Implement `MC2CommImpl` and `AllToAllCommImpl` to optimize
performance in specific scenarios (see the selection sketch after this list).
3. Enable W8A8 / Int8 models to use `unified_fused_experts`.
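To show how the later roadmap steps could plug in without touching the MoE layer itself, below is a hypothetical registry/factory sketch. The string keys, the `register_moe_comm_method`/`get_moe_comm_method` helpers, and the placeholder classes only mirror the roadmap and are not taken from the actual code:

```python
# Hypothetical registry/factory; names only mirror the roadmap above.
_MOE_COMM_METHODS: dict[str, type] = {}


def register_moe_comm_method(name: str):
    """Register a communication strategy class under a string key."""
    def wrap(cls: type) -> type:
        _MOE_COMM_METHODS[name] = cls
        return cls
    return wrap


@register_moe_comm_method("allgather")
class AllGatherImpl:  # stands in for the class introduced in step 1
    pass


@register_moe_comm_method("mc2")
class MC2CommImpl:  # planned in step 2
    pass


@register_moe_comm_method("alltoall")
class AllToAllCommImpl:  # planned in step 2
    pass


def get_moe_comm_method(name: str):
    """Instantiate the strategy selected for the current forward pass."""
    try:
        return _MOE_COMM_METHODS[name]()
    except KeyError as exc:
        raise ValueError(f"unknown MoE comm method: {name!r}") from exc
```

With a registry like this, adding `MC2CommImpl` later is a one-class change plus a registry entry, with no edits to the fused-experts call path.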

Other notes

* Data-parallel (DP) communication currently does not work with vLLM's
dispatch/combine mechanisms; an alternative approach is required to
resolve this incompatibility.

- vLLM version: v0.10.0
- vLLM main: f7ad6a1eb3

---------

Signed-off-by: Yizhou Liu <liu_yizhou@outlook.com>
Author: yiz-liu
Date: 2025-08-12 21:10:20 +08:00
Committed by: GitHub
Commit: 992271b027 (parent 1a70564e7c)
7 changed files with 764 additions and 26 deletions


@@ -19,12 +19,13 @@ from typing import Callable, Optional
 
 import torch
 from vllm.config import CompilationLevel, get_current_vllm_config
+from vllm.forward_context import get_forward_context
 from vllm.model_executor.layers.fused_moe.layer import \
     UnquantizedFusedMoEMethod
 
 from vllm_ascend.ascend_config import get_ascend_config
-from vllm_ascend.ops.fused_moe import (fused_experts, fused_experts_moge,
-                                       select_experts)
+from vllm_ascend.ops.fused_moe import (fused_experts_moge, select_experts,
+                                       unified_fused_experts)
 from vllm_ascend.utils import is_310p
 
 original_unquantized_fused_moe_init_func = UnquantizedFusedMoEMethod.__init__
@@ -95,20 +96,18 @@ def forward_oot(
             expert_map=expert_map,
             apply_router_weight_on_input=apply_router_weight_on_input)
 
-    # If use aclgraph, we need to set max_num_tokens to make
-    # the input shape of `npu_moe_init_routing` fixed
-    max_num_tokens = self.max_num_batched_tokens if self.use_aclgraph else None
+    moe_comm_method = get_forward_context().moe_comm_method
 
-    return fused_experts(
+    return unified_fused_experts(
         hidden_states=x,
         w1=layer.w13_weight,
         w2=layer.w2_weight,
         topk_weights=topk_weights,
         topk_ids=topk_ids,
-        top_k=top_k,
         global_num_experts=global_num_experts,
         expert_map=expert_map,
         apply_router_weight_on_input=apply_router_weight_on_input,
-        max_num_tokens=max_num_tokens)
+        moe_comm_method=moe_comm_method,
+    )
 
 UnquantizedFusedMoEMethod.__init__ = unquantized_fused_moe_init_func
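
The hunk above only shows the call site. As a rough sketch of how `unified_fused_experts` could thread the strategy object through the computation, the hypothetical flow below brackets a placeholder expert computation with the strategy's dispatch and combine steps; the helper names are not taken from the actual implementation:

```python
def compute_experts(tokens, w1, w2, topk_weights, topk_ids):
    """Placeholder for the fused expert kernels; returns tokens unchanged."""
    return tokens


def unified_fused_experts_sketch(hidden_states, w1, w2, topk_weights, topk_ids,
                                 moe_comm_method, **kwargs):
    # 1. Dispatch/gather tokens according to the selected strategy.
    dispatched = moe_comm_method.prepare(hidden_states, topk_ids)

    # 2. Expert computation (stand-in for the fused NPU kernels that consume
    #    w1 / w2 / topk_weights / topk_ids).
    expert_out = compute_experts(dispatched, w1, w2, topk_weights, topk_ids)

    # 3. Combine/reduce expert outputs back to the caller's token layout.
    return moe_comm_method.finalize(expert_out)
```

Because the strategy comes from the forward context rather than from the layer, the same fused-experts entry point can run under different communication schemes without any change to the model code.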