[CI] Fix broken ci (#2530)

vLLM commit https://github.com/vllm-project/vllm/pull/22711 changed the
encoder cache entries logic; this PR adapts the same change for vLLM
Ascend to make CI happy.

Co-Authored-By: zhoux77899 <zhouxiang100@huawei.com>

- vLLM version: v0.10.1.1
- vLLM main: 0ff902f3b4

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Author: wangxiyuan (committed by GitHub)
Date: 2025-08-26 07:42:24 +08:00
Parent: 99bf25af76
Commit: 7e494e94a9
5 changed files with 256 additions and 123 deletions


@@ -47,6 +47,8 @@ class CachedRequestState:
     prompt_token_ids: list[int]
     mm_kwargs: list[MultiModalKwargsItem]
     mm_positions: list[PlaceholderRange]
+    # TODO: remove Optional after 0.10.1.1
+    mm_hashes: Optional[list[str]]
     sampling_params: Optional[SamplingParams]
     pooling_params: Optional[PoolingParams]
     generator: Optional[torch.Generator]
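
The reason the new field is `Optional` (per the TODO, until support for vLLM 0.10.1.1 is dropped) is that older vLLM releases do not supply `mm_hashes`, so the plugin must tolerate `None`. A minimal sketch of that compatibility pattern, assuming a stripped-down `CachedRequestState` and a hypothetical fallback helper (only the field names come from the diff above; everything else is illustrative):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CachedRequestState:
    """Stripped-down sketch of the state object from the diff above."""
    prompt_token_ids: list[int]
    # TODO: remove Optional after 0.10.1.1 -- older vLLM does not pass this.
    mm_hashes: Optional[list[str]] = None


def mm_hashes_or_default(state: CachedRequestState) -> list[str]:
    # Hypothetical helper: fall back to an empty list when an older
    # vLLM version did not populate the hashes.
    return state.mm_hashes if state.mm_hashes is not None else []


# Older vLLM: field left unset, helper degrades gracefully.
old_state = CachedRequestState(prompt_token_ids=[1, 2, 3])

# Newer vLLM: hashes are provided and passed through unchanged.
new_state = CachedRequestState(prompt_token_ids=[1, 2, 3],
                               mm_hashes=["sha256:abc"])
```

Once the minimum supported vLLM version exceeds 0.10.1.1, the default and the helper can be dropped and the field typed as a plain `list[str]`.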