xc-llm-ascend/requirements.txt
shaopeng-666 2a2d527e96 fix transformer version to 4.57.3 (#5250)
### What this PR does / why we need it?
In certain scenarios (such as smoke testing), vllm-ascend is updated from
source to run newer models (such as Qwen3-VL). However, vllm and
vllm-ascend themselves place no lower bound on the transformers version,
so transformers is not updated along with them, resulting in errors when
launching the model.
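A minimal sketch of why the lower bound helps, using the `packaging` library (already listed in this requirements file). The version numbers come from this PR; the stale version `4.51.0` is a hypothetical example of an environment that was never upgraded:

```python
# Sketch: a lower-bound specifier surfaces a stale transformers install
# at dependency-resolution time instead of as a model-launch error.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

required = SpecifierSet(">=4.57.3")  # the pin added by this PR

# A hypothetical stale environment no longer satisfies the requirement.
print(Version("4.51.0") in required)  # False
print(Version("4.57.3") in required)  # True
```

Without the pin, `pip install -e .` would leave an old transformers in place and the failure would only appear when the model is loaded.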
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?

- vLLM version: release/v0.13.0
- vLLM main:
ad32e3e19c

---------

Signed-off-by: 李少鹏 <lishaopeng21@huawei.com>
2025-12-23 23:55:40 +08:00


# Should be mirrored in pyproject.toml
cmake>=3.26
decorator
einops
numpy<2.0.0
packaging
pip
pybind11
pyyaml
scipy
pandas
setuptools>=64
setuptools-scm>=8
torch==2.8.0
torchvision
wheel
pandas-stubs
opencv-python-headless<=4.11.0.86 # Required to avoid numpy version conflict with vllm
compressed_tensors>=0.11.0
# requirements for disaggregated prefill
msgpack
quart
# Required for N-gram speculative decoding
numba
# Install torch_npu
#--pre
#--extra-index-url https://mirrors.huaweicloud.com/ascend/repos/pypi
torch-npu==2.8.0
transformers>=4.57.3
fastapi<0.124.0
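For completeness, a hedged sketch of how a resolver reads the pins above, again using the `packaging` library from this file. The three requirement strings are copied from the list; everything else is illustrative:

```python
# Sketch: parse a few pins from this requirements file into
# machine-readable name + specifier pairs, as pip's resolver would.
from packaging.requirements import Requirement

pins = [
    "torch==2.8.0",
    "transformers>=4.57.3",
    "fastapi<0.124.0",
]
for line in pins:
    req = Requirement(line)
    print(req.name, str(req.specifier))
```

`Requirement` also exposes `req.specifier.contains(...)`, which is how a candidate version is checked against a pin.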