[Main2Main][Deps][Misc] Upgrade vLLM to v0.15.0 (#6470)
### What this PR does / why we need it?
This PR upgrades the vLLM dependency from `v0.14.1` to `v0.15.0`. This
involves:
- Updating the `VLLM_TAG` in every `Dockerfile`.
- Updating the vLLM version in `docs/source/conf.py`.
- Removing conditional code paths specific to `v0.14.1` across the
codebase, which simplifies maintenance.
- Fixing `TypeError: MMEncoderAttention.__init__() got an unexpected
keyword argument 'multimodal_config'`, introduced by
https://github.com/vllm-project/vllm/pull/31972.
- Fixing `_shared_experts: 'NoneType' object is not callable`, introduced
by https://github.com/vllm-project/vllm/pull/32082 and resolved by
https://github.com/vllm-project/vllm-ascend/pull/6335.
- Fixing `ReshapeAndCacheOperation setup failed!`, introduced by
https://github.com/vllm-project/vllm/pull/25954, by overriding attention
metadata slots.
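The `MMEncoderAttention` failure is a constructor-signature drift between releases: v0.15.0 no longer accepts `multimodal_config`. One illustrative way to stay compatible across such drifts (this shim is a hypothetical sketch, not the code merged in this PR, which simply stops passing the removed argument) is to filter keyword arguments against the target signature:

```python
import inspect


class MMEncoderAttention:
    """Stand-in for the upstream class after vllm-project/vllm#31972,
    whose __init__ no longer accepts `multimodal_config`."""

    def __init__(self, hidden_size: int):
        self.hidden_size = hidden_size


def build_compat(cls, **kwargs):
    """Drop keyword arguments the target __init__ no longer accepts.

    Illustrative compatibility shim only; `build_compat` is a
    hypothetical helper, not part of vLLM or vllm-ascend.
    """
    accepted = inspect.signature(cls.__init__).parameters
    filtered = {k: v for k, v in kwargs.items() if k in accepted}
    return cls(**filtered)


# The stale call site can now pass the old argument harmlessly:
attn = build_compat(MMEncoderAttention, hidden_size=64, multimodal_config=None)
```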
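The `_shared_experts` error above is the classic symptom of calling an optional submodule without a guard. A minimal sketch of the defensive pattern (the class and method names here are hypothetical stand-ins, not the actual vllm-ascend code):

```python
class MoELayer:
    """Illustrative MoE layer where the shared-experts branch is optional."""

    def __init__(self, shared_experts=None):
        # May legitimately be None when the model defines no shared experts.
        self._shared_experts = shared_experts

    def forward(self, x):
        out = [v * 2 for v in x]  # stand-in for the routed-expert path
        # Guard the optional callable: invoking it unconditionally raises
        # "TypeError: 'NoneType' object is not callable" when it is None.
        if self._shared_experts is not None:
            shared = self._shared_experts(x)
            out = [a + b for a, b in zip(out, shared)]
        return out
```

With the guard in place, a model without shared experts takes only the routed path instead of crashing.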
This upgrade is necessary to keep the project aligned with the latest
features, bug fixes, and API changes in the vLLM project.
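The conditional code paths this PR removes follow the `vllm_version_is` gating idiom visible in the diff. A simplified sketch of that pattern (the helper here is a stand-in for `vllm_ascend.utils.vllm_version_is`, and the pinned version string is an assumption based on this PR):

```python
INSTALLED_VLLM = "0.15.0"  # assumption: the version this PR pins


def vllm_version_is(target: str) -> bool:
    """Simplified stand-in for vllm_ascend.utils.vllm_version_is,
    which reports whether the installed vLLM matches `target`."""
    return INSTALLED_VLLM == target


# The pattern this PR deletes: branch on the vLLM version to pick an
# import path that moved between v0.14.1 and v0.15.0.
if vllm_version_is("0.14.1"):
    backend_module = "vllm.v1.attention.backends.utils"  # old location
else:
    backend_module = "vllm.v1.attention.backend"  # new location
```

With the dependency pinned to v0.15.0 the branch always takes the `else` arm, so such blocks collapse to a single unconditional import.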
### Does this PR introduce _any_ user-facing change?
No, this is an internal dependency update and does not introduce any
user-facing changes.
### How was this patch tested?
CI is expected to pass with these changes, confirming that all existing
tests succeed with the new vLLM version.
- vLLM version: v0.14.1
- vLLM main: dc917cceb8
Co-authored-by: shen-shanshan <467638484@qq.com>
---------
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
@@ -28,14 +28,11 @@ from vllm.v1.worker.gpu.cudagraph_utils import CudaGraphManager
 from vllm.v1.worker.gpu.cudagraph_utils import \
     prepare_inputs_to_capture as prepare_inputs_to_capture_gpu
 from vllm.v1.worker.gpu.input_batch import InputBuffers
+from vllm.v1.attention.backend import AttentionMetadataBuilder
 
 from vllm_ascend.worker.v2.utils import torch_cuda_wrapper
-from vllm_ascend.utils import vllm_version_is
-
-if vllm_version_is('0.14.1'):
-    from vllm.v1.attention.backends.utils import AttentionMetadataBuilder
-else:
-    from vllm.v1.attention.backend import AttentionMetadataBuilder
 
 
 class AclGraphManager(CudaGraphManager):
@@ -24,17 +24,13 @@ import numpy as np
 import torch
 from vllm.config import VllmConfig
 from vllm.v1.kv_cache_interface import EncoderOnlyAttentionSpec, KVCacheConfig
+from vllm.v1.attention.backend import AttentionMetadataBuilder
 
 from vllm_ascend.attention.attention_mask import AttentionMaskBuilder
 from vllm_ascend.attention.attention_v1 import AscendAttentionState
 from vllm_ascend.attention.utils import (AscendCommonAttentionMetadata,
                                          AscendPrefillContextParallelMetadata)
-from vllm_ascend.utils import vllm_version_is
-
-if vllm_version_is('0.14.1'):
-    from vllm.v1.attention.backends.utils import AttentionMetadataBuilder
-else:
-    from vllm.v1.attention.backend import AttentionMetadataBuilder
 
 _ATTENTION_MASK_BUILDER = None