[feat] switch for fusion ops gmmswigluquant (#5992)

### What this PR does / why we need it?

Add an additional config parameter to control whether the gmmswigluquant
fusion operator is enabled; it defaults to True (enabled). When running
with a small number of GPUs, the gmmswigluquant fused operator can cause
some performance degradation, so this switch allows disabling it.
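For illustration, here is a minimal sketch of turning the switch off from the offline API. The nested keys mirror `ascend_fusion_config.fusion_ops_gmmswigluquant` from the diff below; passing them through vLLM's `additional_config` is an assumption about how the option is surfaced, and the model path is a placeholder.

```python
# Hypothetical sketch, not the documented interface: disable the
# gmmswigluquant fused operator via vLLM's additional_config pass-through.
from vllm import LLM

llm = LLM(
    model="GLM-4.6-w8a8",  # placeholder model path
    additional_config={
        # assumed wiring; key names taken from the diff in this PR
        "ascend_fusion_config": {"fusion_ops_gmmswigluquant": False},
    },
)
```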

### Does this PR introduce _any_ user-facing change?

Yes. It adds a new ascend config option, `ascend_fusion_config.fusion_ops_gmmswigluquant` (default: True). Behavior is unchanged unless the option is explicitly set to False (see the sketch above).

### How was this patch tested?

- vLLM version: v0.13.0
- vLLM main: 2c24bc6996

#### Perf

test model: GLM 4.6 (w8a8)
- single A3 node (ep16, tp16), async-scheduling, mtp, FULL_DECODE_ONLY
- bs=1, input_lens=32000, output_lens=1024

Without this PR: TPOT 32.22 ms
With this PR: TPOT 30.23 ms (~6.2% lower)

---------

Signed-off-by: zjks98 <zhangjiakang4@huawei.com>
Co-authored-by: zjks98 <zhangjiakang4@huawei.com>
aipaes
2026-01-19 21:19:25 +08:00
committed by GitHub
parent 38cfcd572a
commit f58e110afe
4 changed files with 45 additions and 1 deletion


```diff
@@ -48,6 +48,12 @@ def setup_moe_comm_method(moe_config):
     _MoECommMethods[MoECommType.FUSED_MC2] = FusedMC2CommImpl(moe_config)
 
 
+def set_gmmswigluquant_method():
+    from vllm_ascend.ascend_config import get_ascend_config
+    ascend_config = get_ascend_config()
+    return ascend_config.ascend_fusion_config.fusion_ops_gmmswigluquant
+
+
 @dataclass
 class FusedExpertsResult:
     routed_out: torch.Tensor
@@ -69,6 +75,7 @@ class MoECommMethod(ABC):
         self.token_dispatcher = self._get_token_dispatcher()
         self.prepare_finalize = self._get_prepare_finalize()
+        self.use_fusion_ops = set_gmmswigluquant_method()
 
     def prepare(
         self,
@@ -159,7 +166,7 @@ class MoECommMethod(ABC):
             w2_offset=w2_offset,
             topk_scales=dispatch_results.topk_scales,
             with_quant=use_int8_w8a8 or use_int4_w4a8 or use_int4_w4a16,
-            fusion=use_int8_w8a8,
+            fusion=use_int8_w8a8 and self.use_fusion_ops,
             need_trans=need_trans,
             dynamic_eplb=dynamic_eplb)
```
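
For readers unfamiliar with the operator being gated here, the sketch below conceptually restates the three steps that a gmm+swiglu+quant kernel fuses: a (grouped) matmul for the experts' gate/up projection, the SwiGLU activation, and dynamic int8 quantization of the activation. This is an illustrative PyTorch sketch, not the Ascend kernel or any function used in vllm-ascend.

```python
# Conceptual re-statement of the three steps the gmmswigluquant kernel
# fuses; illustrative only, not the Ascend kernel or the vllm-ascend API.
import torch
import torch.nn.functional as F

def gmm_swiglu_quant_unfused(x: torch.Tensor, w_gate_up: torch.Tensor):
    h = x @ w_gate_up                      # 1. matmul (grouped per expert in MoE)
    gate, up = h.chunk(2, dim=-1)          # split the gate/up projections
    act = F.silu(gate) * up                # 2. SwiGLU activation
    # 3. dynamic per-token int8 quantization (as in w8a8 activation quant)
    scale = act.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.round(act / scale).clamp(-128, 127).to(torch.int8)
    return q, scale
```

When `fusion_ops_gmmswigluquant` is set to False, the `fusion=` argument in the diff evaluates to False and these steps run as separate kernels, which the perf section suggests can be faster at small GPU counts.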