[Graph][Fusion]Add new pattern for AddRmsnormQuant with SP. (#5077)

### What this PR does / why we need it?
1. In addition to
[#4168](https://github.com/vllm-project/vllm-ascend/pull/4168) and
[#5011](https://github.com/vllm-project/vllm-ascend/pull/5011), this PR
adds two more patterns for AddRmsnormQuant with SP enabled. The key
difference is that an additional `maybe_all_gather_and_maybe_unpad` is
inserted between `addrmsnorm` and `quantize` (see the first sketch after
this list).
2. This PR also introduces a new op, `torch.ops.vllm.quantize`, so that
`input_scale` and `input_scale_reciprocal` can be passed at the same time.
This is needed because `npu_add_rms_norm_quant` and `npu_quantize` require
different `div_mode` settings; to avoid an extra reciprocal computation at
runtime, both tensors are passed to the quantize op (see the second sketch
after this list).
3. Removes the redundant `AscendQuantRmsnorm`.
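For context, here is a minimal sketch of the shape of the SP-enabled replacement graph. The helper names come from this PR's description, but the exact signature of `maybe_all_gather_and_maybe_unpad` (stubbed below) and the `epsilon` value are illustrative assumptions, not the PR's literal code:

```python
import torch
import torch_npu


def maybe_all_gather_and_maybe_unpad(t: torch.Tensor) -> torch.Tensor:
    # Stand-in stub for the real helper: with SP enabled it all-gathers
    # the sequence dimension across ranks and strips any padding.
    ...


def sp_addrmsnorm_quant_replacement(x, residual, gamma, scale,
                                    scale_reciprocal, offset):
    # Fused residual-add + RMSNorm on the local sequence-parallel shard.
    out, _, new_residual = torch_npu.npu_add_rms_norm(
        x, residual, gamma, epsilon=1e-6)  # epsilon: placeholder value
    # Key difference vs. the non-SP patterns: gather the full sequence
    # (and unpad) *between* the norm and the quantize.
    out = maybe_all_gather_and_maybe_unpad(out)
    # Both scale forms ride along so a later fusion pass can fold this
    # into npu_add_rms_norm_quant regardless of its div_mode convention.
    quant_out = torch.ops.vllm.quantize(out, scale, scale_reciprocal, offset)
    return quant_out, new_residual
```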

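And a minimal sketch of how a dual-scale `quantize` op could be registered. The use of vLLM's `direct_register_custom_op`, the fake impl, and the exact `npu_quantize` argument order / `div_mode` value are assumptions for illustration; consult the torch_npu docs for the authoritative signature:

```python
import torch
import torch_npu
from vllm.utils import direct_register_custom_op


def quantize(x: torch.Tensor, scale: torch.Tensor,
             scale_reciprocal: torch.Tensor,
             offset: torch.Tensor) -> torch.Tensor:
    # Eager/unfused path: plain per-tensor quantization.
    # npu_quantize and npu_add_rms_norm_quant disagree on div_mode
    # (divide vs. multiply by the scale), so the op carries both `scale`
    # and `scale_reciprocal` and each consumer picks the form it needs,
    # with no reciprocal computed at runtime.
    return torch_npu.npu_quantize(x, scale, offset, torch.qint8, -1, True)


def quantize_fake(x, scale, scale_reciprocal, offset):
    # Shape/dtype-only implementation for graph tracing.
    return torch.empty_like(x, dtype=torch.int8)


direct_register_custom_op(
    op_name="quantize",
    op_func=quantize,
    mutates_args=[],
    fake_impl=quantize_fake,
)
```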

- vLLM version: v0.12.0
- vLLM main:
ad32e3e19c

---------

Signed-off-by: Angazenn <supperccell@163.com>

```diff
@@ -128,8 +128,9 @@ class AscendW8A8LinearMethod:
         if enable_flashcomm2_quant_comm:
             quant_input_x = x.contiguous().view(
                 -1, layer.aclnn_input_scale_reciprocal.size(0))
-            quant_x = quant_per_tensor(
+            quant_x = torch.ops.vllm.quantize(
                 quant_input_x,
                 layer.aclnn_input_scale,
+                layer.aclnn_input_scale_reciprocal,
                 layer.aclnn_input_offset,
             )
@@ -138,8 +139,9 @@ class AscendW8A8LinearMethod:
             x = comm_fn(comm_input)
         else:
             # quant
-            x = quant_per_tensor(
+            x = torch.ops.vllm.quantize(
                 x,
                 layer.aclnn_input_scale,
+                layer.aclnn_input_scale_reciprocal,
                 layer.aclnn_input_offset,
             )
```
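Both hunks make the same change: `quant_per_tensor` is replaced with `torch.ops.vllm.quantize`, which additionally receives `layer.aclnn_input_scale_reciprocal` alongside `layer.aclnn_input_scale`, so whichever kernel ultimately consumes the call gets its preferred scale form without a reciprocal at runtime.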