[Dist][EP] Remove ETP/EP maintained in vllm-ascend (#1681)
### What this PR does / why we need it?
Remove the ETP/EP implementation maintained in branch main. We drop this because there are no relevant scenarios for ETP now; we may subsequently advocate implementing expert tensor parallelism in vLLM itself to support scenarios where the experts need to be sliced.
This is part of the #1422 backport.
Fixes https://github.com/vllm-project/vllm-ascend/issues/1396
https://github.com/vllm-project/vllm-ascend/issues/1154
### Does this PR introduce _any_ user-facing change?
ETP/EP will no longer be maintained in vllm-ascend; use the TP/EP implementation in vLLM instead.
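For reference, a sketch of what this change means for launch commands: expert parallelism is now configured through vLLM's own flags rather than vllm-ascend ETP knobs (flag names taken from vLLM's engine arguments; verify against your vLLM version):

```shell
# Hypothetical launch; <model> is a placeholder. --enable-expert-parallel
# shards MoE experts across ranks instead of slicing individual experts.
vllm serve <model> --tensor-parallel-size 4 --enable-expert-parallel
```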
### How was this patch tested?
CI passed with newly added and existing tests.
- vLLM version: v0.9.2
- vLLM main:
fe8a2c544a
Signed-off-by: MengqingCao <cmq0113@163.com>
@@ -37,17 +37,7 @@
# =================
# ** File: platform/patch_common/patch_distributed.py**
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 1. `vllm.distributed.parallel_state.destroy_model_parallel()`
#    Why:
#       vllm does not support an outside platform maintaining its own `CoordinatorGroup`. vllm-ascend maintains EP and ETP
#       inside the repo and needs a common interface to destroy them; this patch adds an interface for destroying
#       platform-owned `CoordinatorGroup`s so that all of them can be destroyed properly.
#    How:
#       Call the `vllm_ascend.distributed.parallel_state` method `destroy_platform_model_parallel` to destroy all the `CoordinatorGroup`s.
#    Related PR (if no, explain why):
#    Future Plan:
#       Remove this patch when vllm merges it.
# 2. `vllm.config.ParallelConfig.get_next_dp_init_port`
# 1. `vllm.config.ParallelConfig.get_next_dp_init_port`
#    Why:
#       vllm doesn't support getting the port from an environment variable.
#    How:
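The truncated `How:` above concerns reading the data-parallel init port from the environment. A minimal sketch of that idea, assuming a hypothetical `VLLM_DP_MASTER_PORT` variable and a stand-in config class (not the real `ParallelConfig` API):

```python
import os


class ParallelConfigSketch:
    """Stand-in for vllm.config.ParallelConfig; names here are assumptions."""

    def __init__(self, data_parallel_master_port: int = 29500):
        # Prefer an externally supplied port so launchers can coordinate ranks.
        env_port = os.environ.get("VLLM_DP_MASTER_PORT")
        self.data_parallel_master_port = (
            int(env_port) if env_port is not None else data_parallel_master_port
        )

    def get_next_dp_init_port(self) -> int:
        port = self.data_parallel_master_port
        # Advance so each data-parallel init gets a distinct port.
        self.data_parallel_master_port += 1
        return port
```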
@@ -18,33 +18,12 @@
# This file is a part of the vllm-ascend project.

import torch
import vllm
import vllm.distributed
import vllm.envs as envs
from vllm.config import ParallelConfig

from vllm_ascend.utils import is_310p


def ascend_destroy_model_parallel():
    """Set the groups to none and destroy them."""
    from vllm.distributed.parallel_state import _DP, _PP, _TP
    if _TP:
        _TP.destroy()
    _TP = None

    if _PP:
        _PP.destroy()
    _PP = None

    if _DP:
        _DP.destroy()
    _DP = None
    from vllm_ascend.distributed.parallel_state import \
        destory_ascend_model_parallel
    destory_ascend_model_parallel()


def parallel_config_get_dp_port(self) -> int:
    """
    We might need to initialize process groups in multiple
@@ -62,7 +41,6 @@ def parallel_config_get_dp_port(self) -> int:
    return port


vllm.distributed.parallel_state.destroy_model_parallel = ascend_destroy_model_parallel
ParallelConfig.get_next_dp_init_port = parallel_config_get_dp_port
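The two assignments at the end of the hunk are standard Python monkeypatching: module and class attributes are rebound at import time so every later caller picks up the replacement. A minimal self-contained sketch of that pattern, using a stand-in namespace rather than the real vLLM module:

```python
import types

# Stand-in for the vllm.distributed.parallel_state module (not the real one).
fake_parallel_state = types.SimpleNamespace()


def upstream_destroy_model_parallel():
    """Pretend upstream teardown that destroys the stock groups."""
    return ["tp", "pp", "dp"]


fake_parallel_state.destroy_model_parallel = upstream_destroy_model_parallel


def patched_destroy_model_parallel():
    # Run the upstream teardown first, then add platform-specific teardown,
    # mirroring how ascend_destroy_model_parallel extends the vllm version.
    destroyed = upstream_destroy_model_parallel()
    destroyed.append("platform_groups")
    return destroyed


# Rebinding the module attribute makes every later caller use the patch.
fake_parallel_state.destroy_model_parallel = patched_destroy_model_parallel
```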