[Refactor] remove some metadata variables in attention_v1. (#5160)
RFC: https://github.com/vllm-project/vllm-ascend/issues/4629
Reason:
The metadata dataclass contains an excessive number of variables. We
now inherit the community's metadata class and remove the variables
that are no longer needed.
Todo:
1. Partially remove attn_state.
- vLLM version: v0.12.0
- vLLM main:
ad32e3e19c
---------
Signed-off-by: weijinqian_v1 <weijinqian@huawei.com>
Co-authored-by: weijinqian_v1 <weijinqian@huawei.com>
@@ -1042,7 +1042,6 @@ class NPUModelRunner(GPUModelRunner):
            attn_mask=self.attn_mask,
            spec_attn_mask=self.spec_attn_mask,
            attn_state=self.attn_state,
            is_only_prefill=bool(np.all(num_valid_tokens != 1)),
            max_query_len=max_num_scheduled_tokens,
            decode_token_per_req=self.decode_token_per_req,
            prefill_context_parallel_metadata=long_seq_metadata,
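In the hunk above, `is_only_prefill` is computed as `bool(np.all(num_valid_tokens != 1))`: a request scheduling exactly one token is treated as a decode step, so the batch is prefill-only exactly when no request has a single valid token. A small standalone illustration of that expression (the helper name is ours, not from the source):

```python
import numpy as np


def is_only_prefill(num_valid_tokens: np.ndarray) -> bool:
    # A request with exactly one scheduled token is a decode step;
    # the batch is "only prefill" when no such request exists.
    return bool(np.all(num_valid_tokens != 1))


print(is_only_prefill(np.array([5, 3, 7])))  # no single-token request -> True
print(is_only_prefill(np.array([5, 1, 7])))  # one decode request -> False
```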
@@ -45,7 +45,6 @@ def build_attn_metadata(
    | None = None,
    spec_attn_mask: torch.Tensor | None = None,
    attn_state: Any | None = None,
    is_only_prefill: bool = False,
    graph_pad_size: int = -1,
    num_input_tokens: int = 0,
    prefill_context_parallel_metadata: AscendPrefillContextParallelMetadata
@@ -78,7 +77,6 @@ def build_attn_metadata(
        attn_mask=attn_mask,
        spec_attn_mask=spec_attn_mask,
        attn_state=attn_state,
        is_only_prefill=is_only_prefill,
        graph_pad_size=graph_pad_size,
        num_input_tokens=num_input_tokens,
        prefill_context_parallel_metadata=prefill_context_parallel_metadata,