[Refactor] remove moe type of multicast. (#4224)

The main purpose of this PR is as follows:
1. Remove the multicast-related (NAIVE_MULTICAST) code from MoE communication.

Reasons:
1. In scenarios such as A2 dual-system back-to-back networking, multicast
performs worse than all_gather: in the e2e test, throughput was 3 tps before
this modification and 10 tps after.
2. We usually enable the SP feature, and in that case the existing logic
already selects all-gather, so this change is consistent with current
behavior.
3. The advantage of broadcast communication is that it does not suffer from
uneven DP load and does not require the prefill ACL graph to be enabled;
however, the prefill ACL graph is now supported.

So we think there is no need to keep multicast as an option for MoE
communication.
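
For context, here is a minimal, self-contained sketch of what the selection
flow looks like once the multicast fallback is removed. Only the MoECommType
names (ALLGATHER, MC2, all-to-all) and mc2_tokens_capacity come from the diff
below; the enable_expert_parallel flag and the simple token-count threshold
are illustrative assumptions, not the actual logic in model_runner.

from enum import Enum
from typing import Optional


class MoECommType(Enum):
    # Names mirror the diff below; NAIVE_MULTICAST no longer exists.
    ALLGATHER = "allgather"
    ALLTOALL = "alltoall"
    MC2 = "mc2"


def select_moe_comm_method(num_tokens: int,
                           mc2_tokens_capacity: int,
                           enable_expert_parallel: bool) -> Optional[MoECommType]:
    # Without expert parallelism, all-gather is the only applicable choice,
    # since MC2 and all-to-all are designed for expert parallelism.
    if not enable_expert_parallel:
        return MoECommType.ALLGATHER
    # Illustrative assumption: small batches go through MC2, larger ones
    # through all-to-all. There is no longer a prefill branch that could
    # fall back to NAIVE_MULTICAST.
    if num_tokens <= mc2_tokens_capacity:
        return MoECommType.MC2
    return MoECommType.ALLTOALL


print(select_moe_comm_method(256, 512, enable_expert_parallel=True))   # MoECommType.MC2
print(select_moe_comm_method(8192, 512, enable_expert_parallel=True))  # MoECommType.ALLTOALL

The real method additionally considers the SoC version and special-cases
models such as PanguProMoE, as the diff below shows.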

Performance benefits are as follows:
With enable_flashcomm1 disabled, TTFT remains relatively stable at around
43000 ms, which is approximately 15000 ms faster than before the
modification.

With enable_flashcomm1 enabled, there is no difference: TTFT remains
relatively stable at around 29000 ms.


- vLLM version: v0.11.0
- vLLM main: 2918c1b49c

---------

Signed-off-by: weijinqian_v1 <weijinqian@huawei.com>
Signed-off-by: weijinqian0 <1184188277@qq.com>
Co-authored-by: weijinqian_v1 <weijinqian@huawei.com>
Authored by weijinqian0 on 2025-11-24 17:32:37 +08:00, committed by GitHub
parent 5508a602ed
commit ae068a3342
10 changed files with 30 additions and 249 deletions


@@ -2192,8 +2192,8 @@ class NPUModelRunner(LoRAModelRunnerMixin):
             kv_connector_output=kv_connector_output,
         )
 
-    def _select_moe_comm_method(self, num_tokens: int,
-                                with_prefill: bool) -> Optional[MoECommType]:
+    def _select_moe_comm_method(self,
+                                num_tokens: int) -> Optional[MoECommType]:
         """1. If expert parallel is not enabled, we use all-gather since MC2 and all-to-all
         are designed for expert parallelism.
         2. If expert parallel is enabled, we need to consider the soc version and the
@@ -2244,12 +2244,6 @@ class NPUModelRunner(LoRAModelRunnerMixin):
         else:
             raise ValueError(f"Unsupported soc_version: {soc_version}")
 
-        if moe_comm_type == MoECommType.ALLGATHER and with_prefill:
-            if enable_sp():
-                moe_comm_type = MoECommType.ALLGATHER
-            else:
-                moe_comm_type = MoECommType.NAIVE_MULTICAST
-
         # PanguProMoE only supports allgather
         if model_type == "PanguProMoE":
             moe_comm_type = MoECommType.ALLGATHER
@@ -2289,8 +2283,7 @@ class NPUModelRunner(LoRAModelRunnerMixin):
         if self.dynamic_eplb:
             self.eplb_updator.take_update_info_from_eplb_process()
 
-        moe_comm_type = self._select_moe_comm_method(num_input_tokens,
-                                                     self.with_prefill)
+        moe_comm_type = self._select_moe_comm_method(num_input_tokens)
 
         uniform_decode = (max_query_len == self.uniform_decode_query_len) and (
             scheduler_output.total_num_scheduled_tokens
@@ -2823,7 +2816,7 @@ class NPUModelRunner(LoRAModelRunnerMixin):
         with_prefill) = self._sync_metadata_across_dp(num_tokens,
                                                       with_prefill)
 
-        moe_comm_type = self._select_moe_comm_method(num_tokens, with_prefill)
+        moe_comm_type = self._select_moe_comm_method(num_tokens)
 
         # If cudagraph_mode.decode_mode() == FULL and
         # cudagraph_mode.seperate_routine(). This means that we are using
@@ -2999,8 +2992,7 @@ class NPUModelRunner(LoRAModelRunnerMixin):
         # allowing vLLM to correctly estimate the maximum memory required.
         if self.max_num_tokens > self.mc2_tokens_capacity and \
             self._select_moe_comm_method(
-                self.mc2_tokens_capacity,
-                with_prefill=True) == MoECommType.MC2:
+                self.mc2_tokens_capacity) == MoECommType.MC2:
             self._dummy_run(self.mc2_tokens_capacity, with_prefill=True)
 
         output = None