[Bugfix] Fix _npu_flash_attention hang during model run (#4410)

Fix a hang in `_npu_flash_attention` when it is invoked from `_forward_prefill_no_cache` during model run. The hang was caused by an attention mask built with the wrong dtype: the kernel expects a float mask filled with `-inf` (matching the query dtype), not a boolean mask.
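As a minimal sketch of what the fix amounts to (the helper name here is hypothetical; the real builder exercised by the test below is `AttentionMaskBuilder.get_attn_mask`), the causal mask handed to the NPU flash-attention kernel should be a float tensor with `-inf` at masked positions rather than a boolean tensor:

```python
import torch

def make_causal_mask(max_seq_len: int, dtype: torch.dtype,
                     device: torch.device) -> torch.Tensor:
    # Boolean upper-triangular mask marking future (disallowed) positions.
    bool_mask = torch.ones(max_seq_len, max_seq_len,
                           dtype=torch.bool, device=device).triu(diagonal=1)
    # Convert to the float dtype the kernel expects: 0 where attention is
    # allowed, -inf where it is masked. Passing the boolean mask directly
    # is what made _npu_flash_attention hang.
    mask = torch.zeros(max_seq_len, max_seq_len, dtype=dtype, device=device)
    return mask.masked_fill_(bool_mask, float("-inf"))

# Matches the expectations asserted in the updated test below.
mask = make_causal_mask(2048, torch.float16, torch.device("cpu"))
assert mask.shape == (2048, 2048)
assert mask[0][-1] == float("-inf")
```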
### How was this patch tested?
Tested on Qwen2.5-VL and Qwen2.5-Omni.

- vLLM version: v0.11.0
- vLLM main: 2918c1b49c

Signed-off-by: Ting FU <futing10@huawei.com>
Author: Ting FU
Date: 2025-11-29 09:20:22 +08:00
Commit: 9af34755ff (parent: 048d350f9e)
3 changed files with 6 additions and 7 deletions


```diff
@@ -74,10 +74,11 @@ class TestAttentionMaskBuilder(TestBase):
         attn_mask = attention_mask_builder.get_attn_mask(
             max_seq_len=2048, dtype=torch.float16, device=torch.device("cpu"))
         self.assertEqual(attn_mask.shape, (2048, 2048))
-        self.assertEqual(attn_mask[0][-1], torch.tensor(True))
-        self.assertEqual(attention_mask_builder._seq_len_cached, 1024)
+        self.assertEqual(attn_mask[0][-1],
+                         torch.tensor(float("-inf"), dtype=torch.float16))
+        self.assertEqual(attention_mask_builder._seq_len_cached, 2048)
         self.assertEqual(attention_mask_builder.attn_mask_cache.shape,
-                         (1024, 1024))
+                         (2048, 2048))
         self.assertEqual(attention_mask_builder.attn_mask_cache[0][-1],
                          torch.tensor(float("-inf"), dtype=torch.float16))
```
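For context (this snippet is an illustration, not part of the patch): a float `-inf` mask is what softmax-based attention expects, because adding `-inf` to a masked logit drives its attention weight to exactly zero, whereas a boolean mask has no defined additive semantics for such a kernel:

```python
import torch

scores = torch.tensor([1.0, 2.0, 3.0])
float_mask = torch.tensor([0.0, 0.0, float("-inf")])
# Adding -inf before softmax zeroes out the masked position's weight.
print(torch.softmax(scores + float_mask, dim=-1))
# -> tensor([0.2689, 0.7311, 0.0000]) (approximately)
```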