[Main2Main][Deps][Misc] Upgrade vLLM to v0.15.0 (#6470)
### What this PR does / why we need it?
This PR upgrades the vLLM dependency from `v0.14.1` to `v0.15.0`. This
involves:
- Updating the `VLLM_TAG` in every `Dockerfile`.
- Updating the vLLM version in `docs/source/conf.py`.
- Removing conditional code paths specific to `v0.14.1` across the
codebase, which simplifies maintenance (see the version-gate sketch below).
- Fixing `TypeError: MMEncoderAttention.__init__() got an unexpected
keyword argument 'multimodal_config'` caused by
https://github.com/vllm-project/vllm/pull/31972 (a hypothetical
kwarg-handling sketch follows below).
- Fixing `_shared_experts: 'NoneType' object is not callable` caused by
https://github.com/vllm-project/vllm/pull/32082, resolved by
https://github.com/vllm-project/vllm-ascend/pull/6335.
- Fixing `ReshapeAndCacheOperation setup failed!` caused by
https://github.com/vllm-project/vllm/pull/25954 by overriding the
attention metadata slots.
This upgrade is necessary to keep the project aligned with the latest
features, bug fixes, and API changes in the vLLM project.
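
For illustration, the version-gate pattern removed throughout the codebase looks roughly like the sketch below. This is a minimal, self-contained example: the stand-in `vllm_version_is` here simply compares version strings and is not the actual `vllm_ascend.utils` implementation (which is the helper the removed gates used, as the diff further down shows).

```python
# Minimal sketch of the version-gate pattern removed by this PR.
# NOTE: this stand-in is NOT the real vllm_ascend.utils.vllm_version_is;
# it only illustrates branching on the installed vLLM version.
import vllm


def vllm_version_is(target: str) -> bool:
    """Return True when the installed vLLM version matches `target` exactly."""
    return vllm.__version__ == target


if vllm_version_is("0.14.1"):
    # legacy code path kept only while v0.14.1 was supported
    print("using the v0.14.1-specific path")
else:
    # the only path that remains after this upgrade
    print("using the v0.15.0 path")
```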
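The `multimodal_config` failure stems from vLLM dropping that constructor parameter upstream; the actual fix lives in this PR's changes. The toy sketch below only illustrates the general technique of not forwarding a retired keyword argument (filtering kwargs against the target signature). `Attention` and `build_attention` are hypothetical stand-ins, not vLLM classes.

```python
# Hypothetical illustration (not the vllm-ascend patch): forward only the
# keyword arguments that the installed version's constructor still accepts.
import inspect


class Attention:
    """Toy stand-in: the constructor no longer takes multimodal_config."""

    def __init__(self, num_heads: int):
        self.num_heads = num_heads


def build_attention(cls, **kwargs):
    # Keep only kwargs that appear in the constructor signature.
    accepted = inspect.signature(cls.__init__).parameters
    filtered = {k: v for k, v in kwargs.items() if k in accepted}
    return cls(**filtered)


# The stale `multimodal_config` kwarg is dropped instead of raising TypeError.
attn = build_attention(Attention, num_heads=8, multimodal_config=None)
print(attn.num_heads)  # -> 8
```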
### Does this PR introduce _any_ user-facing change?
No, this is an internal dependency update and does not introduce any
user-facing changes.
### How was this patch tested?
CI is expected to pass with these changes, confirming that all existing
tests still succeed with the new vLLM version.
- vLLM version: v0.14.1
- vLLM main: dc917cceb8
Co-authored-by: shen-shanshan <467638484@qq.com>
---------
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
@@ -9,7 +9,6 @@ from vllm.model_executor.layers.fused_moe.config import FusedMoEConfig, FusedMoE
 
 from vllm_ascend.ascend_config import init_ascend_config
 from vllm_ascend.eplb.core.eplb_utils import init_eplb_config
-from vllm_ascend.utils import vllm_version_is
 # isort: on
 
 
@@ -21,24 +20,20 @@ class TestAscendConfig(unittest.TestCase):
             "refresh": True,
             "eplb_config": {"dynamic_eplb": True, "num_redundant_experts": 2},
         }
-        if vllm_version_is('0.14.1'):
-            moe_parallel_config = FusedMoEParallelConfig(2, 0, 1, 2, 1, 1, 1, 1, True, "hccl")
-            moe_config = FusedMoEConfig(8, 8, 8192, 5, moe_parallel_config, torch.float16)
-        else:
-            from vllm.model_executor.layers.fused_moe.config import RoutingMethodType
-            moe_parallel_config = FusedMoEParallelConfig(2, 0, 1, 2, 1, 1, 1, 1, True, "hccl", enable_eplb=True)
-            moe_config = FusedMoEConfig(
-                num_experts=8,
-                experts_per_token=8,
-                hidden_dim=8192,
-                intermediate_size_per_partition=5,
-                num_local_experts=8,
-                activation="silu",
-                device="npu",
-                routing_method=RoutingMethodType.Simulated,
-                moe_parallel_config=moe_parallel_config,
-                in_dtype=torch.float16,
-            )
+        from vllm.model_executor.layers.fused_moe.config import RoutingMethodType
+        moe_parallel_config = FusedMoEParallelConfig(2, 0, 1, 2, 1, 1, 1, 1, True, "hccl", enable_eplb=True)
+        moe_config = FusedMoEConfig(
+            num_experts=8,
+            experts_per_token=8,
+            hidden_dim=8192,
+            intermediate_size_per_partition=5,
+            num_local_experts=8,
+            activation="silu",
+            device="npu",
+            routing_method=RoutingMethodType.Simulated,
+            moe_parallel_config=moe_parallel_config,
+            in_dtype=torch.float16,
+        )
         moe_config.supports_eplb = True
         self.vllm_config = vllm_config
         self.moe_config = moe_config