Add a dynamic batch feature to the chunked prefill strategy: the token budget is refined at runtime to achieve better effective throughput and TPOT. See the [RFC](https://github.com/vllm-project/vllm-ascend/issues/3328) for more details.

!!! NOTE: only 910B3 is supported so far; we are working on further improvements. An additional lookup-table file is required.

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: Cheng Wang <wangchengkyrie@outlook.com>
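The idea above — picking a prefill token budget per step from an offline-profiled lookup table so that decode latency (TPOT) stays bounded — can be sketched as follows. This is a minimal illustrative sketch, not the actual vllm-ascend implementation; the function name, the table shape, and the default budget are all assumptions.

```python
# Hypothetical sketch of a dynamic token budget for chunked prefill.
# The lookup table maps "number of running decode requests" to a token
# budget measured offline; all names here are illustrative assumptions.

def choose_token_budget(num_running_decodes: int,
                        lookup_table: dict[int, int],
                        default_budget: int = 2048) -> int:
    """Pick the prefill token budget for the current scheduler step.

    Uses the entry for the largest profiled decode-batch size that does
    not exceed the current one, falling back to a fixed default when the
    table has no applicable entry.
    """
    candidates = [b for b in lookup_table if b <= num_running_decodes]
    if not candidates:
        return default_budget
    return lookup_table[max(candidates)]

# Example: budgets profiled offline for a few decode batch sizes.
table = {0: 8192, 8: 4096, 32: 2048, 128: 512}
print(choose_token_budget(16, table))   # falls back to the entry for 8
print(choose_token_budget(200, table))  # falls back to the entry for 128
```

With few decodes in flight the scheduler can afford a large prefill chunk; as the decode batch grows, the budget shrinks so each step stays within the latency target.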
# Should be mirrored in pyproject.toml
cmake>=3.26
decorator
einops
numpy<2.0.0
packaging
pip
pybind11
pyyaml
scipy
pandas
setuptools>=64
setuptools-scm>=8
torch>=2.7.1
torchvision
wheel
pandas-stubs

# requirements for disaggregated prefill
msgpack
quart

# Required for N-gram speculative decoding
numba

# Install torch_npu
--pre
--extra-index-url https://mirrors.huaweicloud.com/ascend/repos/pypi
torch-npu==2.7.1.dev20250724