Update vllm pin to 12.24 (#5307)

### What this PR does / why we need it?
This PR fixes a break introduced by the upstream vLLM PR:
1. [Add MiMo-V2-Flash support](https://github.com/vllm-project/vllm/pull/30836)

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

Co-authored-by: zxwang <1476209578@qq.com>

- vLLM version: release/v0.13.0
- vLLM main:
5fbfa8d9ef

---------

Signed-off-by: leo-pony <nengjunma@outlook.com>
Signed-off-by: zxwang <1476209578@qq.com>
Co-authored-by: zxwang <1476209578@qq.com>
Author: Nengjun Ma
Date: 2025-12-24 17:24:31 +08:00
Committed by: GitHub
Parent: a3f65b938f
Commit: 42c989a437
5 changed files with 8 additions and 6 deletions


```diff
@@ -109,7 +109,9 @@ class AscendQKVParallelLinear(QKVParallelLinear):
         *,
         return_bias: bool = True,
         disable_tp: bool = False,
+        v_head_size: int | None = None,
     ):
+        self.v_head_size = v_head_size if v_head_size is not None else head_size
         self.custom_op, _, tp_size = get_parallel_op(disable_tp, prefix, self,
                                                      "column")
         # TODO(realliujiaxu): Replace the initialization code below with
         # super().__init__ after linear of vllm supports custom comm group
```