fix transformers version to 4.57.3 (#5250)

### What this PR does / why we need it?
In certain scenarios (such as smoke testing), vllm-ascend is installed from source to run newly supported models (such as Qwen3-VL). However, neither vLLM nor vllm-ascend enforced a sufficiently new transformers version (the requirement was previously capped at `transformers<=4.57.1`), so transformers was not upgraded during installation, resulting in errors when launching these models.
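
For reference, a minimal sketch of the kind of failure this pin prevents: a fail-fast version guard before model launch. This is illustrative only and not part of the PR; `MIN_TRANSFORMERS` is a hypothetical name, and the `packaging` dependency is assumed to be available.

```python
# Minimal sketch (illustrative, not part of this PR): fail fast if the
# installed transformers release is older than the floor this PR sets.
from importlib.metadata import version

from packaging.version import Version

MIN_TRANSFORMERS = Version("4.57.3")  # lower bound introduced by this PR

installed = Version(version("transformers"))
if installed < MIN_TRANSFORMERS:
    raise RuntimeError(
        f"transformers {installed} is too old for models such as Qwen3-VL; "
        f"run: pip install 'transformers>=4.57.3'"
    )
```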
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?

- vLLM version: release/v0.13.0
- vLLM main: ad32e3e19c

---------

Signed-off-by: 李少鹏 <lishaopeng21@huawei.com>
Author:    shaopeng-666
Date:      2025-12-23 23:55:40 +08:00
Committed: GitHub
Parent:    3b59f20a28
Commit:    2a2d527e96


@@ -30,5 +30,5 @@ numba
 #--extra-index-url https://mirrors.huaweicloud.com/ascend/repos/pypi
 torch-npu==2.8.0
-transformers<=4.57.1
+transformers>=4.57.3
 fastapi<0.124.0
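
With this change, reinstalling vllm-ascend from source (for example via `pip install -r requirements.txt`) upgrades any transformers install older than 4.57.3, so models that need the newer release, such as Qwen3-VL, launch correctly.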