[Bugfix] Fix _npu_flash_attention hang during model run (#4410)

Fix a hang in _npu_flash_attention when it is called from _forward_prefill_no_cache during the model run; the hang was caused by the wrong attention mask dtype.
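
For context, a minimal sketch of the kind of boolean causal mask the NPU flash-attention path expects (the helper name and shapes below are illustrative assumptions, not the actual vllm-ascend code):

```python
import torch

def build_causal_attn_mask(max_seq_len: int, device: str = "cpu") -> torch.Tensor:
    # Illustrative only: a boolean upper-triangular mask where True marks
    # future positions that must not be attended to. Passing such a mask in a
    # float dtype (e.g. float16) instead of torch.bool is the kind of dtype
    # mismatch this fix addresses.
    mask = torch.ones(max_seq_len, max_seq_len, dtype=torch.bool, device=device)
    return torch.triu(mask, diagonal=1)
```
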
### How was this patch tested?
Yes, tested on Qwen2.5-VL and Qwen2.5-Omni.

- vLLM version: v0.11.0
- vLLM main: 2918c1b49c

Signed-off-by: Ting FU <futing10@huawei.com>

```diff
@@ -991,8 +991,8 @@ class NPUModelRunner(LoRAModelRunnerMixin):
                 max_seq_len, self.dtype, self.device)
         # Prefill with cache hit.
         elif attn_state == AscendAttentionState.PrefillCacheHit:
-            return self.attn_mask_builder.get_attn_mask(
-                2048, self.dtype, self.device)
+            return self.attn_mask_builder.get_splitfuse_attn_mask().to(
+                torch.bool)
         # Decode-only situation.
         else:
             return None
```
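
As a rough illustration of the dtype change (the builder method names appear in the diff above; the concrete dtypes are assumptions based on the description), the cache-hit prefill branch now hands the attention kernel a torch.bool mask instead of one in the model's compute dtype:

```python
import torch

# Sketch of the cast applied in the new code path: whatever float-valued mask
# the builder returns (here, ones above the diagonal and zeros elsewhere) is
# converted to a boolean mask, the dtype the attention kernel expects.
float_mask = torch.triu(
    torch.ones(2048, 2048, dtype=torch.float16), diagonal=1)
bool_mask = float_mask.to(torch.bool)
assert bool_mask.dtype == torch.bool
```
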