[CI] Fix broken ci (#2530)
vLLM commit https://github.com/vllm-project/vllm/pull/22711 changed the
encoder cache entries logic; this PR adapts the same change for vLLM
Ascend to keep CI green.
Co-Authored-By: zhoux77899 <zhouxiang100@huawei.com>
- vLLM version: v0.10.1.1
- vLLM main:
0ff902f3b4
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
@@ -47,6 +47,8 @@ class CachedRequestState:
     prompt_token_ids: list[int]
     mm_kwargs: list[MultiModalKwargsItem]
     mm_positions: list[PlaceholderRange]
+    # TODO: remove Optional after 0.10.1.1
+    mm_hashes: Optional[list[str]]
     sampling_params: Optional[SamplingParams]
     pooling_params: Optional[PoolingParams]
     generator: Optional[torch.Generator]
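To illustrate the shape of the adapted structure, here is a minimal standalone sketch of a dataclass with an optional hashes field like the one above. The real CachedRequestState lives in vLLM Ascend's model runner; MultiModalKwargsItem, PlaceholderRange, SamplingParams, and PoolingParams are stubbed with placeholder aliases here purely so the snippet runs on its own, and the defaults are illustrative, not taken from the actual class.

```python
from dataclasses import dataclass, field
from typing import Optional

# Placeholder aliases standing in for the real vLLM types (illustration only).
MultiModalKwargsItem = dict
PlaceholderRange = tuple
SamplingParams = dict
PoolingParams = dict


@dataclass
class CachedRequestState:
    prompt_token_ids: list[int]
    mm_kwargs: list[MultiModalKwargsItem] = field(default_factory=list)
    mm_positions: list[PlaceholderRange] = field(default_factory=list)
    # Optional because older vLLM releases (pre-#22711) do not populate
    # per-item hashes, so callers must tolerate None.
    mm_hashes: Optional[list[str]] = None
    sampling_params: Optional[SamplingParams] = None
    pooling_params: Optional[PoolingParams] = None


state = CachedRequestState(prompt_token_ids=[1, 2, 3])
print(state.mm_hashes)  # None until the newer encoder-cache path fills it in
```

The Optional annotation is what lets the Ascend code run against both vLLM v0.10.1.1 and newer mains during the transition window the TODO refers to.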