[Refactor] Fix AttentionMaskBuilder singleton and remove redundant pcp_prefill_mask (#4870)
## What this PR does / why we need it?

This PR fixes the `AttentionMaskBuilder` singleton initialization issue introduced in PR #4779 and removes the unused `pcp_prefill_mask` field.

### Background

After PR #4779 made `AttentionMaskBuilder` a singleton with the `@singleton` decorator, the class constructor requires a `device` parameter. However, two initialization sites still used the old parameterless constructor, causing failures; a minimal sketch of the failure mode follows this description.

### Changes

1. **Fix singleton initialization**
   - Fixed `AttentionMaskBuilder()` → `AttentionMaskBuilder(self.device)` in `AscendMLAMetadataBuilder.__init__()`
   - Fixed `AttentionMaskBuilder()` → `AttentionMaskBuilder(self.device)` in `AscendAttentionMetadataBuilder.__init__()`
2. **Remove unused field**
   - Removed the `pcp_prefill_mask` field from `AscendPrefillContextParallelMetadata` (never used anywhere in the codebase)
   - Updated the related test assertions

### Related

- Issue #5463
- PR #4779 (Unify all mask generation methods)
- PR #5389 (Make AttentionMaskBuilder singleton)

## Does this PR introduce _any_ user-facing change?

No. This is an internal refactoring.

## How was this patch tested?

- ✅ Local testing: no linter errors
- ✅ Unit tests for the attention modules pass
- ⏳ CI pipeline

Signed-off-by: lico67373 <918688502@qq.com>
Co-authored-by: weijinqian0 <1184188277@qq.com>
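For context, here is a minimal, self-contained sketch of the failure mode. The `@singleton` decorator body below is illustrative (first call constructs and caches the instance), not the actual vllm-ascend implementation, and `AttentionMaskBuilder` is reduced to the one constructor parameter relevant here; the mask shape is likewise made up.

```python
import torch


def singleton(cls):
    """Illustrative singleton: the first call constructs and caches cls."""
    instances = {}

    def get_instance(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]

    return get_instance


@singleton
class AttentionMaskBuilder:
    def __init__(self, device: torch.device):
        # The device is required up front so the shared mask is
        # allocated once, on the correct device.
        self.device = device
        self.attn_mask = torch.triu(
            torch.ones(128, 128, dtype=torch.bool, device=device),
            diagonal=1,
        )


# Old call site (pre-singleton signature): crashes on first use, because
# the decorator forwards the (empty) argument list to __init__.
try:
    AttentionMaskBuilder()
except TypeError as exc:
    print(exc)  # __init__() missing 1 required positional argument: 'device'

# Fixed call site, as in the two __init__ methods listed above:
builder = AttentionMaskBuilder(torch.device("cpu"))
assert builder is AttentionMaskBuilder(torch.device("cpu"))  # cached instance
```

Note that with this style of decorator, whichever call runs first fixes the constructor arguments for everyone: a stale `AttentionMaskBuilder()` only crashes if it happens to run before a correct call, so fixing both sites removes the ordering dependence.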
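The commit's two hunks both shrink `build_attn_metadata` (9 → 6 and 8 → 6 lines), consistent with dropping the `pcp_prefill_mask` parameter and its pass-through described above; the unchanged neighbouring lines are: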
```diff
@@ -58,9 +58,6 @@ def build_attn_metadata(
     decode_token_per_req: int,
     actual_seq_lengths_q: list[int],
     positions: torch.Tensor | None = None,
     attn_mask: torch.Tensor | None = None,
     spec_attn_mask: torch.Tensor | None = None,
     attn_state: Any | None = None,
     graph_pad_size: int = -1,
     num_input_tokens: int = 0,
@@ -92,8 +89,6 @@ def build_attn_metadata(
         slot_mapping=slot_mapping,
         actual_seq_lengths_q=actual_seq_lengths_q,
         positions=positions,
         attn_mask=attn_mask,
         spec_attn_mask=spec_attn_mask,
         attn_state=attn_state,
         graph_pad_size=graph_pad_size,
         num_input_tokens=num_input_tokens,
```