[BugFix] Disable enable_shared_expert_dp by default if tensor_parallel_size=1 (#6361)

### What this PR does / why we need it?

Disable `enable_shared_expert_dp` by default when `tensor_parallel_size=1`. The option only takes effect together with expert parallelism, and after this change it additionally requires more than one tensor-parallel rank.


- vLLM version: v0.14.1
- vLLM main: dc917cceb8

Signed-off-by: underfituu <hzhucong@163.com>

```diff
@@ -61,6 +61,7 @@ class AscendConfig:
         self.enable_shared_expert_dp = (
             additional_config.get("enable_shared_expert_dp", False)
             and vllm_config.parallel_config.enable_expert_parallel
+            and vllm_config.parallel_config.tensor_parallel_size > 1
         )
         from vllm_ascend.utils import enable_sp
```
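
For illustration, below is a minimal standalone sketch of the resulting gating behavior. The `resolve_shared_expert_dp` helper and the plain arguments are hypothetical, not the actual `AscendConfig` API; the sketch only mirrors the boolean expression in the diff: even if a user requests `enable_shared_expert_dp`, it stays off unless expert parallelism is enabled and `tensor_parallel_size > 1`.

```python
# Hypothetical standalone sketch of the gating logic in this commit;
# not the real AscendConfig / vllm_config API.
def resolve_shared_expert_dp(additional_config: dict,
                             enable_expert_parallel: bool,
                             tensor_parallel_size: int) -> bool:
    # The user-requested value only takes effect when expert parallelism
    # is enabled and more than one tensor-parallel rank is used.
    return (additional_config.get("enable_shared_expert_dp", False)
            and enable_expert_parallel
            and tensor_parallel_size > 1)


# tensor_parallel_size=1: the flag is forced off even if requested.
assert resolve_shared_expert_dp({"enable_shared_expert_dp": True}, True, 1) is False
# tensor_parallel_size=2 with expert parallelism on: the request is honored.
assert resolve_shared_expert_dp({"enable_shared_expert_dp": True}, True, 2) is True
```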