[Misc] upgrade to vllm main (#6646)
### What this PR does / why we need it?

This PR upgrades the core vLLM dependency to a newer version from the main branch (`13397841ab469cecf1ed425c3f52a9ffc38139b5`). This is necessary to keep our project up to date with the latest features and fixes from upstream vLLM.

1. `ac32e66cf9`: pass file is moved.

- vLLM version: v0.15.0
- vLLM main: `d7e17aaacd`

---------

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: wxsIcey <1790571317@qq.com>
Signed-off-by: Meihan-chen <jcccx.cmh@gmail.com>
Co-authored-by: wxsIcey <1790571317@qq.com>
@@ -132,7 +132,7 @@ def _run_worker_process(
torch.npu.reset_peak_memory_stats()
# @patch.dict(os.environ, clear=["HCCL_OP_EXPANSION_MODE","VLLM_WORKER_MULTIPROC_METHOD"])
@pytest.mark.skip(reason="fix me")
@pytest.mark.parametrize("model", MODELS)
@pytest.mark.parametrize("max_tokens", [4, 36])
@patch.dict(os.environ, {"ASCEND_RT_VISIBLE_DEVICES": "0,1"})
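The `@patch.dict(os.environ, {...})` decorator in the hunk above scopes an environment-variable override to a single test and restores the prior environment afterwards. A minimal sketch of that mechanism, using only the standard-library `unittest.mock` (the `ASCEND_RT_VISIBLE_DEVICES` value mirrors the diff; nothing in this sketch actually consumes it):

```python
import os
from unittest.mock import patch

# Record whatever value (possibly None) is set before the patch.
before = os.environ.get("ASCEND_RT_VISIBLE_DEVICES")

with patch.dict(os.environ, {"ASCEND_RT_VISIBLE_DEVICES": "0,1"}):
    # Inside the context the override is visible to the test body.
    assert os.environ["ASCEND_RT_VISIBLE_DEVICES"] == "0,1"

# On exit, patch.dict restores the previous environment exactly.
assert os.environ.get("ASCEND_RT_VISIBLE_DEVICES") == before
```

Used as a decorator, as in the diff, the same save/override/restore cycle wraps each invocation of the decorated test function.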