[BugFix][0.18.0] Fix quant_bias missing in w8a8_static when flashcomm1 is enabled for GLM-5 (#8304)

### What this PR does / why we need it?
This is PR #8220 applied to the v0.18.0 branch.

In a previous PR (#7843), the o_proj layer of GLM-5 was reverted to TP
(Tensor Parallel) splitting when flashcomm1 was enabled. However, that
was a temporary workaround and did not address the root cause of the
precision issues observed in the o_proj layer under flashcomm1.

This PR provides the definitive fix. The bug is in
880e20fdde/vllm_ascend/quantization/methods/w8a8_static.py (L124):
during the quantized matrix multiplication, quant_bias is added only
when tp_rank == 0 and is silently skipped on every other rank. In the
flashcomm1 scenario, all ranks actually require the addition of
quant_bias, so tp_rank=0 should be passed to ensure the bias is applied
on every rank.

This PR resolves that logic error and fixes the underlying precision
issue.
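
To make the faulty convention concrete, below is a minimal Python
sketch of the bias handling described above. The function name and
signature are illustrative assumptions, and plain NumPy stands in for
the NPU quantized-matmul kernel that the real w8a8_static.py path uses:

```python
import numpy as np

def w8a8_static_matmul(x_q, w_q, deq_scale, quant_bias, tp_rank):
    """Illustrative W8A8 static-quant matmul (hypothetical, not vllm_ascend's kernel).

    x_q        -- int8 activation shard
    w_q        -- int8 weight shard
    deq_scale  -- per-output-channel dequantization scale (float32)
    quant_bias -- int32 bias folded into the quantized accumulation
    tp_rank    -- tensor-parallel rank of the calling process
    """
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32)  # int32 accumulation
    # Buggy convention: add the bias only on TP rank 0. This is the usual
    # row-parallel TP pattern, where partial results are AllReduce-summed
    # later and the bias must be counted exactly once -- but it is wrong
    # for flashcomm1, where every rank needs the bias.
    if tp_rank == 0:
        acc = acc + quant_bias
    return acc.astype(np.float32) * deq_scale
```

Under flashcomm1, every rank's output needs the bias, so the fix is for
the flashcomm1 path to invoke this logic with tp_rank=0 on every rank,
which takes the bias-addition branch everywhere.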

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
GLM-5 e2e test.

---------

Signed-off-by: zjks98 <zhangjiakang4@huawei.com>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: triomino <15924998+triomino@users.noreply.github.com>
Co-authored-by: zjks98 <zhangjiakang4@huawei.com>

```diff
@@ -663,11 +663,7 @@ def _get_row_parallel_op(
     | None
 ):
     if enable_dsa_cp_with_layer_shard() and "o_proj" in prefix:
-        from vllm.config import get_current_vllm_config
-        vllm_config = get_current_vllm_config()
-        if vllm_config.model_config.hf_config.model_type not in ["glm_moe_dsa"]:
-            return ShardedCPRowParallelOp(layer)
+        return ShardedCPRowParallelOp(layer)
     if "down_proj" in prefix and mlp_tp_enable() and not is_moe_layer(prefix):
         return MLPRowParallelOp(layer)
     if "o_proj" in prefix and oproj_tp_enable():
```