[Bugfix] Fix model run _npu_flash_attention hang issue (#4410)
Fix a hang in `_npu_flash_attention` during model run in `_forward_prefill_no_cache`; it was caused by a wrong attention mask dtype.
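For context, a minimal sketch of the kind of dtype mismatch involved (plain PyTorch, not the project's `AttentionMaskBuilder`; the helper name and the mask convention are assumptions for illustration): a causal mask materialized in the model's compute dtype instead of `torch.bool`, and the `.to(torch.bool)` conversion the fix applies.

```python
import torch

def build_causal_mask(seq_len: int, dtype: torch.dtype,
                      device: str = "cpu") -> torch.Tensor:
    # Hypothetical helper (not the project's AttentionMaskBuilder):
    # True marks positions that must not attend to each other.
    bool_mask = torch.triu(
        torch.ones(seq_len, seq_len, device=device), diagonal=1).bool()
    # Returning the mask in the model dtype (e.g. float16) instead of bool
    # is the kind of mismatch that can confuse a kernel expecting a boolean mask.
    return bool_mask.to(dtype)

float_mask = build_causal_mask(4, torch.float16)  # mask in compute dtype
bool_mask = float_mask.to(torch.bool)             # dtype the fix passes on
print(float_mask.dtype, bool_mask.dtype)          # torch.float16 torch.bool
```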
### How was this patch tested?
Tested on Qwen2.5-VL and Qwen2.5-Omni.
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
Signed-off-by: Ting FU <futing10@huawei.com>
```diff
@@ -991,8 +991,8 @@ class NPUModelRunner(LoRAModelRunnerMixin):
                 max_seq_len, self.dtype, self.device)
         # Prefill with cache hit.
         elif attn_state == AscendAttentionState.PrefillCacheHit:
-            return self.attn_mask_builder.get_attn_mask(
-                2048, self.dtype, self.device)
+            return self.attn_mask_builder.get_splitfuse_attn_mask().to(
+                torch.bool)
         # Decode-only situation.
         else:
             return None
```