[Dist][EP] Remove ETP/EP maintained in vllm-ascend (#1681)

### What this PR does / why we need it?
Remove the ETP/EP implementation maintained in branch main. We drop this because there are no relevant scenarios that use ETP now; we may subsequently advocate implementing expert tensor parallelism in vLLM itself to support scenarios where the experts need to be sliced.

This is part of the #1422 backport.

Fixes https://github.com/vllm-project/vllm-ascend/issues/1396
Fixes https://github.com/vllm-project/vllm-ascend/issues/1154

### Does this PR introduce _any_ user-facing change?
We will no longer maintain ETP/EP in vllm-ascend; use the TP/EP implementation in vLLM instead.
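
For example, a minimal sketch of enabling TP/EP through vLLM's own engine arguments (the model name and parallel sizes are placeholders, and `enable_expert_parallel` assumes a vLLM version that exposes this engine argument):

```python
from vllm import LLM, SamplingParams

# Sketch: rely on vLLM's native TP/EP instead of the removed vllm-ascend ETP/EP.
# The model name and parallel sizes below are placeholders.
llm = LLM(
    model="deepseek-ai/DeepSeek-V2-Lite",  # any MoE model
    tensor_parallel_size=4,                # vLLM tensor parallelism
    enable_expert_parallel=True,           # vLLM expert parallelism
)

outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
for output in outputs:
    print(output.outputs[0].text)
```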

### How was this patch tested?
CI passed with newly added and existing tests.


- vLLM version: v0.9.2
- vLLM main: fe8a2c544a

Signed-off-by: MengqingCao <cmq0113@163.com>
Commit 8cfd257992 (parent a8b316ac5b), authored by Mengqing Cao on 2025-07-21 09:08:04 +08:00 and committed by GitHub.
24 changed files with 66 additions and 548 deletions


```
@@ -18,33 +18,12 @@
# This file is a part of the vllm-ascend project.
import torch
import vllm
import vllm.distributed
import vllm.envs as envs
from vllm.config import ParallelConfig
from vllm_ascend.utils import is_310p


def ascend_destroy_model_parallel():
    """Set the groups to none and destroy them."""
    from vllm.distributed.parallel_state import _DP, _PP, _TP
    if _TP:
        _TP.destroy()
    _TP = None

    if _PP:
        _PP.destroy()
    _PP = None

    if _DP:
        _DP.destroy()
    _DP = None
    from vllm_ascend.distributed.parallel_state import \
        destory_ascend_model_parallel
    destory_ascend_model_parallel()


def parallel_config_get_dp_port(self) -> int:
    """
    We might need to initialize process groups in multiple
@@ -62,7 +41,6 @@ def parallel_config_get_dp_port(self) -> int:
    return port


vllm.distributed.parallel_state.destroy_model_parallel = ascend_destroy_model_parallel
ParallelConfig.get_next_dp_init_port = parallel_config_get_dp_port
```
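
With the Ascend-maintained groups gone, downstream code obtains its process groups from vLLM directly. A minimal sketch (assuming the installed vLLM exposes `get_ep_group` in `vllm.distributed.parallel_state`; the helper name below is illustrative):

```python
# Sketch: after this change, parallel groups come from vLLM's parallel_state,
# not from vllm_ascend.distributed.parallel_state.
# Assumption: the installed vLLM provides get_ep_group (recent versions do).
from vllm.distributed.parallel_state import get_ep_group, get_tp_group


def describe_moe_groups() -> str:
    """Illustrative helper: report the TP/EP layout of the current rank.

    Must be called after vLLM has initialized its model-parallel groups.
    """
    tp = get_tp_group()
    ep = get_ep_group()
    return (f"TP rank {tp.rank_in_group}/{tp.world_size}, "
            f"EP rank {ep.rank_in_group}/{ep.world_size}")
```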