Upgrade vLLM to v0.10.0 (#1927)
### What this PR does / why we need it?
- Upgrade to v0.10.0
- Drop v0.9.2 version compatibility
- Add patch `vllm_ascend/patch/worker/patch_common/patch_sampler_gather_logprobs.py` as a workaround for f3a683b7c9 in v0.10.0, and add the e2e test `test_models_prompt_logprobs`
- Pin transformers<4.54.0 as a workaround for https://github.com/vllm-project/vllm-ascend/issues/2034

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
- Tested locally: `VLLM_USE_MODELSCOPE=true pytest -sv tests/e2e/singlecard/test_offline_inference.py::test_models_prompt_logprobs`
- CI passed
- vLLM version: v0.9.2
- vLLM main: 7728dd77bb

---------

Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
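For context, a minimal sketch of the code path the new `test_models_prompt_logprobs` e2e test exercises, using vLLM's public `LLM`/`SamplingParams` API; the model name and logprob count here are illustrative assumptions, not taken from the actual test:

```python
# Hypothetical sketch of the prompt-logprobs path covered by the new e2e test.
# Model name and parameter values are illustrative only.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")  # assumed small model for the example
params = SamplingParams(
    max_tokens=8,
    prompt_logprobs=5,  # request top-5 logprobs for each prompt token
)

outputs = llm.generate(["Hello, my name is"], params)
for out in outputs:
    # prompt_logprobs holds one entry per prompt token; the first entry is
    # None because the first token has no preceding context to score against.
    assert out.prompt_logprobs is not None
    print(out.prompt_logprobs)
```

The patched `gather_logprobs` sits on this path, so a run like the above should surface regressions in prompt-logprob gathering on v0.10.0.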
@@ -13,6 +13,8 @@ setuptools-scm>=8
 torch>=2.5.1
 torchvision<0.21.0
 wheel
+# Remove after https://github.com/vllm-project/vllm-ascend/issues/2034
+transformers<4.54.0
 
 # requirements for disaggregated prefill
 msgpack
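A quick sanity check that the new pin is honored in an environment installed from this requirements file; this is a hypothetical helper, and it assumes the `packaging` library is available:

```python
# Verify the transformers pin added above took effect in the current environment.
import transformers
from packaging.version import Version  # assumed to be installed alongside pip

assert Version(transformers.__version__) < Version("4.54.0"), transformers.__version__
print("transformers", transformers.__version__)
```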