### What this PR does / why we need it?
PR #8220 in v0.18.0
In a previous PR #7843, the o_proj layer of GLM-5 was reverted to TP
(Tensor Parallel) splitting when flashcomm1 was enabled. However, this
was a temporary workaround and did not address the root cause of the
precision issues observed in the o_proj layer under flashcomm1.
I am working on a definitive fix for this issue. A clear bug has been
identified in
`880e20fdde/vllm_ascend/quantization/methods/w8a8_static.py` (L124):
during the quantized matrix multiplication, `quant_bias` is only added
when `tp_rank == 0`. In the flashcomm1 scenario, every rank actually
requires the addition of `quant_bias`, so `tp_rank=0` should be passed to
ensure the bias is applied on all ranks.
This PR resolves that logic error and fixes the underlying precision
issue.
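
For context, here is a minimal sketch of the bias-selection behavior described above; `select_quant_bias` and `flashcomm1_enabled` are hypothetical names used for illustration, not the actual code in `w8a8_static.py`:

```python
# Hypothetical sketch only: names are illustrative, not the real vllm-ascend API.
def select_quant_bias(quant_bias, tp_rank: int, flashcomm1_enabled: bool):
    """Decide which bias to feed into the quantized matmul.

    Default TP path: the partial o_proj outputs are summed across ranks,
    so the bias must be added exactly once (here, on rank 0).
    flashcomm1 path: each rank produces output for its own tokens, so every
    rank needs the bias, i.e. behave as if tp_rank were 0.
    """
    if flashcomm1_enabled:
        return quant_bias
    return quant_bias if tp_rank == 0 else None
```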
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
GLM-5 e2e test.
---------
Signed-off-by: zjks98 <zhangjiakang4@huawei.com>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: triomino <15924998+triomino@users.noreply.github.com>
Co-authored-by: zjks98 <zhangjiakang4@huawei.com>
### What this PR does / why we need it?
Refactor `vllm_ascend/ops/fused_moe` to replace scattered MoE business
`**kwargs` with typed request objects and explicit stage boundaries (a
rough sketch of the request-object pattern follows the list below).
- Prepare, dispatch, MLP, and quant stages now have clearer ownership.
- Main MoE path no longer depends on business `kwargs.get(...)` lookups.
- Comm and dispatcher interfaces are request-only on the main path.
- UTs can assert stage-level fields directly instead of inferring
behavior indirectly.
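
As referenced above, a rough sketch of the request-object idea; the class and field names (`MoEPrepareRequest`, `MoEDispatchRequest`) are hypothetical and do not match the actual definitions in `vllm_ascend/ops/fused_moe`:

```python
# Illustrative sketch only: class and field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

import torch


@dataclass
class MoEPrepareRequest:
    """Explicit inputs for the prepare stage, replacing kwargs.get(...) lookups."""
    hidden_states: torch.Tensor
    router_logits: torch.Tensor
    top_k: int
    expert_map: Optional[torch.Tensor] = None


@dataclass
class MoEDispatchRequest:
    """Explicit inputs for the dispatch stage; UTs can assert these fields directly."""
    hidden_states: torch.Tensor
    topk_ids: torch.Tensor
    topk_weights: torch.Tensor
    enable_shared_expert: bool = False
```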
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
CI passed.
---------
Signed-off-by: linfeng-yuan <1102311262@qq.com>
### What this PR does / why we need it?
This is a bug fix for the issue where MoE models fail to load quantized
weights in w4a8 format when EP (expert parallelism) is not enabled. The
parameters `weight_scale_second`, `weight_offset_second`, and `scale_bias`
must be parsed in per-group mode, regardless of other conditions.
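
A minimal sketch of the parsing rule described above, assuming hypothetical names (`PER_GROUP_PARAMS`, `resolve_parse_mode`); it does not reflect the actual weight-loading code:

```python
# Hypothetical sketch only: names are illustrative, not the actual
# vllm-ascend loading logic.
PER_GROUP_PARAMS = ("weight_scale_second", "weight_offset_second", "scale_bias")


def resolve_parse_mode(param_name: str, ep_enabled: bool) -> str:
    """Return how a w4a8 MoE quantization parameter should be parsed."""
    if any(param_name.endswith(suffix) for suffix in PER_GROUP_PARAMS):
        # These parameters are stored per group, so they must always be
        # parsed in per-group mode, whether or not EP is enabled.
        return "per_group"
    # All other parameters keep the existing EP-dependent behavior
    # (represented here only by opaque placeholders).
    return "ep_default" if ep_enabled else "tp_default"
```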
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
- vLLM version: v0.16.0
- vLLM main:
4034c3d32e
Signed-off-by: 李少鹏 <lishaopeng21@huawei.com>