[Misc] Drop 0102 related lines (#3323)

### What this PR does / why we need it?
Now that https://github.com/vllm-project/vllm-ascend/pull/3284 has been merged,
we can drop the extra code that was previously kept only for version
compatibility.

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0

Signed-off-by: wangli <wangli858794774@gmail.com>
This commit is contained in:
Li Wang
2025-10-09 14:10:57 +08:00
committed by GitHub
parent 1c5b302f0d
commit a36e3da78e

@@ -253,16 +253,3 @@ class AscendLogitsProcessor(LogitsProcessor):
         logits = logits[..., :self.org_vocab_size]
         return logits
-
-    def forward(
-        self,
-        lm_head: VocabParallelEmbedding,
-        hidden_states: torch.Tensor,
-        # keep this for version compatibility
-        sampling_metadata=None,  # type: ignore
-        embedding_bias: Optional[torch.Tensor] = None,
-    ) -> Optional[torch.Tensor]:
-        return LogitsProcessor.forward(self,
-                                       lm_head,
-                                       hidden_states,
-                                       embedding_bias=embedding_bias)
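
The removed override is a pure pass-through: it existed only to accept the legacy `sampling_metadata` keyword that older vLLM versions passed, then delegate to the base class. The sketch below uses hypothetical simplified classes (not the real vLLM/vllm-ascend API) to illustrate why such a shim becomes dead code once no caller uses the old signature — inheriting the base `forward` gives identical behavior.

```python
# Minimal sketch with hypothetical stand-in classes; the real vLLM
# LogitsProcessor takes lm_head/hidden_states tensors instead of lists.
from typing import List, Optional


class LogitsProcessor:
    """Stand-in for the upstream base class."""

    def forward(self, hidden_states: List[float],
                embedding_bias: Optional[List[float]] = None) -> List[float]:
        bias = embedding_bias or [0.0] * len(hidden_states)
        return [h + b for h, b in zip(hidden_states, bias)]


class AscendLogitsProcessorOld(LogitsProcessor):
    """Before: override exists only to swallow the legacy
    `sampling_metadata` kwarg, then delegate to the parent."""

    def forward(self, hidden_states,
                sampling_metadata=None,  # legacy kwarg, ignored
                embedding_bias=None):
        return LogitsProcessor.forward(self, hidden_states,
                                       embedding_bias=embedding_bias)


class AscendLogitsProcessorNew(LogitsProcessor):
    """After: no override; the base class forward is inherited."""


old = AscendLogitsProcessorOld().forward([1.0, 2.0], sampling_metadata=None)
new = AscendLogitsProcessorNew().forward([1.0, 2.0])
assert old == new  # identical results, so the shim can be deleted
```

Once no supported vLLM version passes `sampling_metadata`, deleting the subclass override removes an extra frame from the call path and keeps the subclass in sync with any future changes to the base `forward` signature automatically.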