[BugFix][0.18.0] Fix quant_bias missing in w8a8_static when flashcomm1 is enabled for GLM-5 (#8304)
### What this PR does / why we need it?
Backport of PR #8220 to v0.18.0.

In a previous PR (#7843), the o_proj layer of GLM-5 was reverted to TP
(Tensor Parallel) splitting when flashcomm1 was enabled. However, that
was a temporary workaround that did not address the root cause of the
precision issues observed in the o_proj layer under flashcomm1.

I am working on a definitive fix for this issue. A clear bug has been
identified in
880e20fdde/vllm_ascend/quantization/methods/w8a8_static.py (L124):
during the quantized matrix multiplication, quant_bias is only added
when tp_rank == 0 and is skipped on all other ranks. In the flashcomm1
scenario, however, every rank requires quant_bias, so tp_rank=0 should
be passed to ensure the bias is applied on all ranks.

This PR aims to resolve this logic error and fix the underlying
precision issue.
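
For illustration, here is a minimal sketch of the control flow, assuming hypothetical names (`quant_matmul`, `apply_w8a8_static`, `flashcomm1_enabled`) and plain-torch math in place of the real quantized NPU kernels; it models the bias-handling logic described above, not the actual w8a8_static.py code:

```python
# Minimal sketch of the bias-handling logic, not the actual vllm_ascend code.
# `quant_matmul`, `apply_w8a8_static`, and `flashcomm1_enabled` are
# illustrative assumptions standing in for the real quantized NPU kernels.
import torch


def quant_matmul(x: torch.Tensor, weight: torch.Tensor,
                 quant_bias: torch.Tensor, tp_rank: int) -> torch.Tensor:
    """Quantized-matmul stand-in: quant_bias is added only when tp_rank == 0."""
    out = x @ weight
    if tp_rank == 0:
        # In the default TP path the partial outputs are all-reduced, so the
        # bias must be added exactly once across the TP group: on rank 0.
        out = out + quant_bias
    return out


def apply_w8a8_static(x, weight, quant_bias, tp_rank, flashcomm1_enabled):
    if flashcomm1_enabled:
        # Under flashcomm1 no all-reduce folds the bias into every rank's
        # output, so each rank needs quant_bias itself. Passing tp_rank=0
        # forces the bias to be applied on all ranks (the fix).
        return quant_matmul(x, weight, quant_bias, tp_rank=0)
    # Default TP path: passing the real tp_rank means only rank 0 adds the
    # bias. The pre-fix bug was taking this branch under flashcomm1, so ranks
    # with tp_rank > 0 silently dropped quant_bias.
    return quant_matmul(x, weight, quant_bias, tp_rank=tp_rank)
```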

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
GLM-5 e2e test.

---------

Signed-off-by: zjks98 <zhangjiakang4@huawei.com>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: triomino <15924998+triomino@users.noreply.github.com>
Co-authored-by: zjks98 <zhangjiakang4@huawei.com>
2026-04-17 22:46:36 +08:00