[KVCache][Bugfix] Fix kv cache initialization error of attention layer (#3113)

### What this PR does / why we need it?
Fixes #3096 
1. Fix a KV cache initialization error for attention layers. Some models use layer names like `attn.attn` instead of `self_attn`, but the initialization of the KV cache tensors only checks for `self_attn` and `attn.attn`, which leads to the error `AssertionError: Some layers are
not correctly initialized` (see the sketch after this list).
2. Set a default value for the `sampling_metadata` argument of
`compute_logits` in the vllm-ascend modeling files, fixing the
error `Qwen3NextForCausalLM.compute_logits() missing 1 required
positional argument: 'sampling_metadata'`
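
As a rough illustration of the first fix, here is a minimal sketch of the kind of substring check involved. The helper name `is_attention_layer` and the relaxed `"attn" in layer_name` condition are assumptions for illustration, not the exact vllm-ascend code:

```python
# Hypothetical sketch (not the actual vllm-ascend code) of the layer-name
# check that decides which layers get KV cache tensors bound to them.
def is_attention_layer(layer_name: str) -> bool:
    # Buggy behavior: only hard-coded substrings were recognized, so any
    # other attention-module naming scheme was skipped and later tripped
    # "AssertionError: Some layers are not correctly initialized".
    # return "self_attn" in layer_name or "attn.attn" in layer_name

    # Relaxed check (assumed here): accept any layer whose name contains
    # "attn", which covers both naming schemes above.
    return "attn" in layer_name


assert is_attention_layer("model.layers.0.self_attn")
assert is_attention_layer("model.layers.0.attn.attn")
```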

### Does this PR introduce _any_ user-facing change?
N/A

### How was this patch tested?
Tested locally with internlm.


- vLLM version: v0.10.2
- vLLM main:
5aeb925452

---------

Signed-off-by: MengqingCao <cmq0113@163.com>
Author: Mengqing Cao
Date: 2025-09-24 11:32:34 +08:00 (committed by GitHub)
Parent: 6aa4253798
Commit: 2d885869c5
6 changed files with 10 additions and 8 deletions

@@ -344,7 +344,7 @@ class CustomQwen2ForCausalLM(nn.Module, SupportsLoRA, SupportsPP):
     def compute_logits(
         self,
         hidden_states: torch.Tensor,
-        sampling_metadata,  # type: ignore
+        sampling_metadata=None,  # type: ignore
     ) -> Optional[torch.Tensor]:
         logits = self.logits_processor(self.lm_head, hidden_states,
                                        sampling_metadata)
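
For context on the diff above, a minimal sketch of why the default matters; the `TinyModel` class is a hypothetical stand-in, not a vllm-ascend module. Newer vLLM call sites invoke `compute_logits` with only the hidden states, so `sampling_metadata` must be optional:

```python
from typing import Optional

import torch


class TinyModel:
    """Hypothetical stand-in for a vllm-ascend modeling class."""

    def __init__(self, hidden: int = 4, vocab: int = 8) -> None:
        self.lm_head = torch.nn.Linear(hidden, vocab, bias=False)

    def compute_logits(
        self,
        hidden_states: torch.Tensor,
        sampling_metadata=None,  # default avoids the "missing argument" error
    ) -> Optional[torch.Tensor]:
        # The real models forward sampling_metadata to a logits processor;
        # this sketch just projects hidden states to the vocabulary.
        return self.lm_head(hidden_states)


# Call sites that pass only hidden_states now work; without the default this
# raises: compute_logits() missing 1 required positional argument
logits = TinyModel().compute_logits(torch.randn(2, 4))
```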