[Dist][EP] Remove ETP/EP maintained in vllm-ascend (#1681)
### What this PR does / why we need it?
Remove the ETP/EP implementation maintained in branch main. We drop this because there are no relevant scenarios for ETP now; we may subsequently advocate implementing expert tensor parallelism in vLLM itself to support scenarios where the expert weights need to be sliced.
This is part of the #1422 backport.
Fixes https://github.com/vllm-project/vllm-ascend/issues/1396 and https://github.com/vllm-project/vllm-ascend/issues/1154
### Does this PR introduce _any_ user-facing change?
We will no longer maintain ETP/EP in vllm-ascend; use the TP/EP implementation in vLLM instead.
### How was this patch tested?
CI passed with newly added and existing tests.
- vLLM version: v0.9.2
- vLLM main: fe8a2c544a
Signed-off-by: MengqingCao <cmq0113@163.com>
@@ -48,6 +48,7 @@ Run the following script to start the vLLM server on Multi-NPU:

```bash
vllm serve /path/to/pangu-pro-moe-model \
--tensor-parallel-size 4 \
--enable-expert-parallel \
--trust-remote-code \
--enforce-eager
```
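Once the server is up, it can be exercised through vLLM's OpenAI-compatible completions endpoint. A minimal sketch using `requests`, assuming the default `localhost:8000` address and that the model is addressed by the path it was served from; the prompt and token budget are illustrative:

```python
import requests

# Assumes vLLM's default OpenAI-compatible server on localhost:8000.
resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "/path/to/pangu-pro-moe-model",
        "prompt": "The capital of France is",  # illustrative prompt
        "max_tokens": 32,
    },
)
print(resp.json()["choices"][0]["text"])
```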
@@ -145,6 +146,7 @@ if __name__ == "__main__":

```python
llm = LLM(model="/path/to/pangu-pro-moe-model",
          tensor_parallel_size=4,
          enable_expert_parallel=True,
          distributed_executor_backend="mp",
          max_model_len=1024,
          trust_remote_code=True)
```
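For context, a minimal sketch of how the `LLM` object constructed above is typically driven, assuming vLLM's standard offline-inference API; the prompt and sampling values are illustrative and not part of the original diff:

```python
from vllm import SamplingParams

# Sampling settings here are illustrative, not from the patch.
sampling = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["The capital of France is"], sampling)
for out in outputs:
    print(out.outputs[0].text)
```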