[Patch] Remove the patch of ECExampleConnector (#5976)
### What this PR does / why we need it?
Part of #5304.
https://github.com/vllm-project/vllm/pull/30225 has now been merged, so this patch is no longer needed.
- vLLM version: v0.13.0
- vLLM main:
2c24bc6996
Signed-off-by: gcanlin <canlinguosdu@gmail.com>
@@ -42,18 +42,6 @@
 # Future Plan:
 #   Find a better way to support tensor alignment for 310p without this patch.
 #
-# ** 2. File: platform/patch_ec_connector.py**
-# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-# 1. `vllm.distributed.ec_transfer.ec_connector.shared_storage_connector.ECSharedStorageConnector.start_load_caches`
-#    Why:
-#       it's hard code to cuda
-#    How:
-#       change the cuda to npu
-#    Related PR (if no, explain why):
-#       https://github.com/vllm-project/vllm/pull/30225
-#    Future Plan:
-#       Remove this patch when vllm merges the PR.
-#
 # ** 3. File: platform/patch_mamba_config.py**
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 # 1. `vllm.model_executor.models.config.HybridAttentionMambaModelConfig.verify_and_update_config`
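For context, the patch being removed was a monkey patch: at import time, vllm-ascend replaced `ECSharedStorageConnector.start_load_caches` with a version targeting `npu` instead of the hard-coded `cuda` device. The sketch below illustrates the mechanism only; the class and method bodies here are stand-ins, not the real vLLM code.

```python
# Minimal sketch of the monkey-patch pattern the removed patch used.
# ECSharedStorageConnectorStub is a stand-in for the real class in
# vllm.distributed.ec_transfer.ec_connector.shared_storage_connector;
# the real start_load_caches moves cache tensors, not device strings.

class ECSharedStorageConnectorStub:
    def start_load_caches(self) -> str:
        # Upstream behavior before PR #30225: device hard-coded to "cuda".
        return "cuda"

def patched_start_load_caches(self) -> str:
    # Same logic, but targeting the Ascend NPU device instead.
    return "npu"

# Applied at platform-init/import time, as vllm-ascend's patches are;
# every existing and future instance picks up the replacement method.
ECSharedStorageConnectorStub.start_load_caches = patched_start_load_caches
```

With the upstream fix merged, the device is no longer hard-coded, so this override (and the whole `platform/patch_ec_connector.py` file) can be dropped.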