[cherry-pick][refactor]support gatingtopk operator generalization (#4050)
### What this PR does / why we need it?

Cherry-picked from https://github.com/vllm-project/vllm-ascend/pull/2958

Past: `npu_moe_gating_top_k` could only support the `group_count=256` pattern.

Now:
1. `npu_moe_gating_top_k` supports any size of `group_count`.
2. The functionality of `torch_npu.npu_moe_gating_top_k_softmax` is included in `torch_npu.npu_moe_gating_top_k`.

CANN: depends on 8.3.RC1.

Performance:
1. GLM4.5-w8a8: TPS improves by 6%.
2. Qwen3: same as before.

Signed-off-by: 1092626063 <1092626063@qq.com>
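To make the generalization concrete, here is a minimal pure-Python sketch of DeepSeek-style grouped top-k gating, the selection pattern that `npu_moe_gating_top_k` implements on NPU. This is an illustration only, not the kernel itself: the function name `grouped_topk` and its parameters are hypothetical, and the real operator also fuses the softmax/scoring step and runs batched on device.

```python
# Hedged sketch of grouped top-k expert selection (DeepSeek-style routing).
# Not the torch_npu.npu_moe_gating_top_k kernel; it only illustrates the
# logic being generalized to arbitrary group counts.

def grouped_topk(scores, group_count, topk_group, topk):
    """Pick `topk` expert indices from per-expert `scores`, considering only
    the `topk_group` groups whose best expert score is highest."""
    n = len(scores)
    assert n % group_count == 0, "experts must divide evenly into groups"
    group_size = n // group_count
    # Rank groups by the best expert score inside each group.
    group_max = [max(scores[g * group_size:(g + 1) * group_size])
                 for g in range(group_count)]
    kept = sorted(range(group_count),
                  key=lambda g: group_max[g], reverse=True)[:topk_group]
    # Candidate experts come only from the kept groups.
    candidates = [i for g in kept
                  for i in range(g * group_size, (g + 1) * group_size)]
    # Final top-k experts among the candidates, returned in index order.
    return sorted(sorted(candidates,
                         key=lambda i: scores[i], reverse=True)[:topk])

# Example: 8 experts in 4 groups of 2; keep 2 groups, route to 2 experts.
scores = [0.1, 0.9, 0.2, 0.3, 0.8, 0.7, 0.0, 0.05]
print(grouped_topk(scores, group_count=4, topk_group=2, topk=2))  # → [1, 4]
```

The point of this PR is that the group geometry (`group_count`, and hence `group_size`) is no longer hard-wired to the 256-expert DeepSeek V3/R1 layout.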
```diff
@@ -96,6 +96,7 @@ def set_ascend_forward_context(
     ep_size = (get_ep_group().world_size if
                vllm_config.parallel_config.enable_expert_parallel else 1)

     # fused_moe_state is used in torchair, it will be deleted along with torchair
     is_deepseek_v3_r1 = hasattr(
         vllm_config.model_config.hf_config, 'n_routed_experts'
     ) and vllm_config.model_config.hf_config.n_routed_experts == 256
```