【A5】【Qwen VL】Qwen VL adapt for A5 (#7046)

### What this PR does / why we need it?
Replace the `_npu_flash_attention_unpad` operator with the
`npu_fusion_attention` operator so that the Qwen VL model can run in the
A5 environment, and remove the `mrope` operator call restriction for A5.

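The diff shown below covers only the mrope change; the attention-operator swap lives in the second changed file. As a rough illustration of what such a device-type dispatch can look like, here is a minimal sketch. The helper name `ascend_context_attention`, its parameters, and the exact keyword arguments passed to the two operators are assumptions, not the actual patch:

```python
# Minimal sketch only: `ascend_context_attention` and its signature are
# hypothetical; the keyword arguments below may differ from the real patch.
import torch
import torch_npu

from vllm_ascend.utils import AscendDeviceType, get_ascend_device_type


def ascend_context_attention(query, key, value, num_heads, scale,
                             cu_seq_lens, seq_lens, mask):
    """Prefill attention that picks the operator supported by the device."""
    if get_ascend_device_type() == AscendDeviceType.A5:
        # A5 path: npu_fusion_attention returns a tuple whose first element
        # is the attention output. "TND" packs all tokens without padding;
        # actual_seq_qlen/actual_seq_kvlen carry cumulative sequence lengths.
        return torch_npu.npu_fusion_attention(
            query, key, value, num_heads,
            input_layout="TND",
            scale=scale,
            atten_mask=mask,
            actual_seq_qlen=cu_seq_lens,
            actual_seq_kvlen=cu_seq_lens,
        )[0]
    # Other devices keep the unpadded flash-attention custom op, which
    # writes into a preallocated output tensor. num_kv_heads == num_heads
    # here is a simplification (no grouped-query attention).
    output = torch.empty_like(query)
    torch_npu._npu_flash_attention_unpad(
        query=query, key=key, value=value,
        seq_len=seq_lens, mask=mask,
        scale_value=scale,
        num_heads=num_heads, num_kv_heads=num_heads,
        out=output)
    return output
```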
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.16.0
- vLLM main: 4034c3d32e

Signed-off-by: 汪越 <wangyue361@h-partners.com>
Author: yesyue-w
Date: 2026-03-20 16:56:12 +08:00 (committed by GitHub)
Parent: f39f566e22
Commit: c860535246
2 changed files with 10 additions and 11 deletions


```diff
@@ -32,7 +32,7 @@ from vllm.triton_utils import HAS_TRITON
 from vllm_ascend.ascend_forward_context import _EXTRA_CTX
 from vllm_ascend.platform import NPUPlatform
-from vllm_ascend.utils import AscendDeviceType, get_ascend_device_type, has_rope, is_vl_model
+from vllm_ascend.utils import has_rope, is_vl_model
 if HAS_TRITON:
     from vllm.model_executor.layers.rotary_embedding.mrope import triton_mrope
@@ -519,7 +519,7 @@ class AscendMRotaryEmbedding(MRotaryEmbedding):
             # todo: need cann update in 8.5.0
             return self.forward_triton(positions, query, key)
-        if self.mrope_section != [16, 24, 24] or get_ascend_device_type() == AscendDeviceType.A5:
+        if self.mrope_section != [16, 24, 24]:
             return super().forward_oot(positions, query, key)
         import torch_npu
```
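With the A5 check deleted from the condition above, A5 devices with the default `mrope_section` of `[16, 24, 24]` no longer divert to the generic `forward_oot` fallback; they continue into the fused `torch_npu` mrope path that follows the `import torch_npu` line, which is the restriction removal described in the PR summary.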