[Misc] Remove redundant CP variables after FIA operator is enabled for CANN 8.5 (#6013)
### What this PR does / why we need it?
PCP/DCP splits the KV cache across different cards. After introducing the
parameter `cp-kv-cache-interleave-size`, the first `cp-kv-cache-interleave-size`
tokens are cached on card 0, the next on card 1, and so on.
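As an illustration, here is a minimal sketch of this interleaved placement rule (the function name and arguments are hypothetical, not actual vllm-ascend code):

```python
# Hypothetical sketch of interleaved KV-cache placement across CP ranks.
def kv_cache_rank_for_token(token_idx: int, interleave_size: int,
                            cp_world_size: int) -> int:
    """Return the CP rank (card) that caches the given token position."""
    return (token_idx // interleave_size) % cp_world_size

# With interleave_size=128 and 4 cards: tokens 0-127 land on card 0,
# 128-255 on card 1, ..., and 512-639 wrap back to card 0.
assert kv_cache_rank_for_token(0, 128, 4) == 0
assert kv_cache_rank_for_token(200, 128, 4) == 1
assert kv_cache_rank_for_token(512, 128, 4) == 0
```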
However, when a request has too few tokens, some cards end up storing no
key-value pairs, so their partial results contain zeros or corrupted values
and cause precision issues. Until now, additional masking operations were
introduced to avoid this precision problem.
Now that the FIA operator is integrated in `mla_cp._forward_decode` and CANN
has been updated to 8.5.0, these additional operations can be removed.
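For background on why the workaround zeroed outputs and set the log-sum-exp (LSE) to `-inf`: when per-rank partial attention results are merged, each rank's output is weighted by a softmax over the per-rank LSE values, so an LSE of `-inf` gives an empty rank zero weight. Below is a minimal sketch of such an LSE-based merge with assumed shapes; the helper is illustrative, not the actual vllm-ascend kernel:

```python
import torch

def merge_partial_attn(outputs: torch.Tensor, lses: torch.Tensor) -> torch.Tensor:
    """Combine per-rank partial attention outputs (illustrative only).

    outputs: [cp, bs, num_heads, v_head_dim] partial outputs, one per rank
    lses:    [cp, bs, num_heads, 1] matching log-sum-exp values
    """
    # Softmax over the rank dimension: a rank with lse == -inf gets
    # weight exp(-inf) == 0 and thus contributes nothing to the result.
    weights = torch.softmax(lses, dim=0)
    return (weights * outputs).sum(dim=0)
```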
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
Passed all CI with CANN 8.5.0.
- vLLM version: v0.13.0
- vLLM main: 2c24bc6996
Signed-off-by: dsxsteven <dsxsteven@sina.com>
Signed-off-by: dsxsteven <36877507+dsxsteven@users.noreply.github.com>
```diff
@@ -84,20 +84,13 @@ class AscendMetadataForDecode:
     """Decode-specific metadata for Ascend attention with Context Parallelism."""
 
     num_computed_tokens_of_pcp_dcp: list[list[list[int]]] | None = None
-    batch_seq_mask: torch.Tensor = None
     block_tables: torch.Tensor = None
 
 
-def _process_attn_out_lse(
-    attn_output: torch.Tensor, softmax_lse: torch.Tensor, batch_seq_mask: torch.Tensor
-) -> torch.Tensor:
+def _process_attn_out_lse(attn_output: torch.Tensor, softmax_lse: torch.Tensor) -> torch.Tensor:
     pcp_size = get_pcp_group().world_size
     dcp_size = get_decode_context_model_parallel_world_size()
     dcp_group = get_dcp_group().device_group if dcp_size > 1 else None
-    out_mask = batch_seq_mask[:, None, None].expand_as(attn_output)
-    attn_output = torch.where(out_mask, 0, attn_output)
-    lse_mask = batch_seq_mask[:, None, None].expand_as(softmax_lse)
-    softmax_lse = torch.where(lse_mask, -torch.inf, softmax_lse)
     softmax_lse = softmax_lse.to(torch.float32)
     attn_output = attn_output.to(torch.float32)
     # Concat out&lse: [bs,num_heads,v_head_dim] + [bs,num_heads,1] -> [bs,num_heads,v_head_dim+1]
```
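As a quick numerical check of the removed workaround, the snippet below combines the masking from the old `_process_attn_out_lse` with the LSE-weighted merge sketched earlier (all names and shapes are illustrative assumptions):

```python
import torch

torch.manual_seed(0)
cp, bs, heads, dim = 2, 2, 4, 8
outputs = torch.randn(cp, bs, heads, dim)
lses = torch.randn(cp, bs, heads, 1)

# Pretend request 0 has no KV cached on rank 1: apply the old workaround.
batch_seq_mask = torch.tensor([True, False])  # [bs], True = empty on rank 1
outputs[1] = torch.where(batch_seq_mask[:, None, None], 0, outputs[1])
lses[1] = torch.where(batch_seq_mask[:, None, None], -torch.inf, lses[1])

# LSE-weighted merge across ranks (same as the sketch above).
weights = torch.softmax(lses, dim=0)
merged = (weights * outputs).sum(dim=0)

# Request 0's result equals rank 0's output alone: rank 1 got weight 0.
assert torch.allclose(merged[0], outputs[0, 0])
```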