[BugFix] Disable enable_shared_expert_dp by default if tensor_parallel_size=1 (#6361)
### What this PR does / why we need it?
Disable `enable_shared_expert_dp` by default if `tensor_parallel_size=1`. With this change the flag takes effect only when it is explicitly requested via `additional_config`, expert parallelism is enabled, and `tensor_parallel_size > 1`.
- vLLM version: v0.14.1
- vLLM main: dc917cceb8
Signed-off-by: underfituu <hzhucong@163.com>
```diff
@@ -61,6 +61,7 @@ class AscendConfig:
         self.enable_shared_expert_dp = (
             additional_config.get("enable_shared_expert_dp", False)
             and vllm_config.parallel_config.enable_expert_parallel
+            and vllm_config.parallel_config.tensor_parallel_size > 1
         )
         from vllm_ascend.utils import enable_sp
```
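
The effect of the added condition can be illustrated with a small stand-alone sketch. The `ParallelConfig` dataclass and `resolve_shared_expert_dp` helper below are simplified stand-ins for illustration only, not the actual vLLM / vllm-ascend classes:

```python
# Minimal sketch of the gating logic after this change
# (simplified stand-ins, not the real vllm-ascend config objects).
from dataclasses import dataclass


@dataclass
class ParallelConfig:
    enable_expert_parallel: bool = False
    tensor_parallel_size: int = 1


def resolve_shared_expert_dp(additional_config: dict,
                             parallel_config: ParallelConfig) -> bool:
    # The flag is honored only when it is requested AND expert
    # parallelism is on AND more than one tensor-parallel rank is used.
    return (additional_config.get("enable_shared_expert_dp", False)
            and parallel_config.enable_expert_parallel
            and parallel_config.tensor_parallel_size > 1)


# With tensor_parallel_size=1 the flag is forced off even if requested.
print(resolve_shared_expert_dp(
    {"enable_shared_expert_dp": True},
    ParallelConfig(enable_expert_parallel=True, tensor_parallel_size=1)))  # False

# With TP > 1 and expert parallelism enabled, the request is honored.
print(resolve_shared_expert_dp(
    {"enable_shared_expert_dp": True},
    ParallelConfig(enable_expert_parallel=True, tensor_parallel_size=4)))  # True
```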