[Bugfix] bugfix for moe_mlp in vllm-ascend/v0.11.0-dev (#4885)

### What this PR does / why we need it?
This PR fixes a bug in the moe_mlp module by correcting the arguments
passed to torch_npu.npu_dequant_swiglu_quant. It converts group_list
from a cumulative sum to per-group counts before passing it as the
group_index parameter.
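
The cumulative-sum-to-counts conversion described above can be sketched as follows. This is a minimal pure-Python illustration of the transformation, not the actual code from the PR (the real fix operates on torch tensors inside the moe_mlp path, and the helper name here is hypothetical):

```python
def cumsum_to_counts(group_list):
    # group_list holds cumulative token counts per expert group,
    # e.g. [2, 5, 9] means groups of sizes 2, 3, and 4.
    # A count-based group_index parameter expects the per-group
    # sizes instead, so take successive differences.
    counts = []
    prev = 0
    for cum in group_list:
        counts.append(cum - prev)
        prev = cum
    return counts

print(cumsum_to_counts([2, 5, 9]))  # -> [2, 3, 4]
```

On tensors the same effect could be achieved with a difference along the group dimension (e.g. subtracting the shifted cumulative sums), which is presumably what the fix does in place.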

### Does this PR introduce _any_ user-facing change?
No


- vLLM version: v0.12.0
- vLLM main: https://github.com/vllm-project/vllm/main

---------

Signed-off-by: tanqingshan (A) <50050625@china.huawei.com>
Co-authored-by: tanqingshan (A) <50050625@china.huawei.com>
Co-authored-by: Mercykid-bash <ruanche0218@gmail.com>
Author: Clorist33
Date: 2025-12-12 14:51:47 +08:00 (committed by GitHub)
Parent: 9c0ad46c1a
Commit: 4f0dddc9ee
5 changed files with 41 additions and 34 deletions


@@ -47,8 +47,8 @@ def test_generate_task_and_state_flow(mock_adaptor):
     loader_obj.state = loader.ExpertWeightUpdateState.WAITING
     loader_obj.generate_expert_d2d_transfer_task([], [], {}, 0)
-    assert loader_obj.comm_op_list is None
-    assert loader_obj.state == loader.ExpertWeightUpdateState.WAITING
+    assert not loader_obj.comm_op_list
+    assert loader_obj.state == loader.ExpertWeightUpdateState.READY
 def test_asyn_transfer_and_update(mock_adaptor):