Revert "[Feat] Shared expert dp for deepseek and deepseek_mtp (#3495)" (#3586)

### What this PR does / why we need it?
This reverts commit
bf87606932.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
E2E vLLM serving with `enable_shared_expert_dp: true` in eager mode, as
before.


- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: linfeng-yuan <1102311262@qq.com>
Author: linfeng-yuan
Date: 2025-10-21 22:24:30 +08:00
Committed via: GitHub
Parent: bd11c0054f
Commit: 4c9af353ee
7 changed files with 10 additions and 49 deletions


@@ -99,7 +99,6 @@ class AscendRMSNorm(RMSNorm):
import torch_npu
if residual is not None:
residual = torch.ops.vllm.maybe_chunk_residual(x, residual)
assert x.size(0) == residual.size(0)
x, residual = _addrmsnorm_forward_oot(
self, x, residual, self.next_need_quant_fusion_linear,
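
The hunk above touches the residual path of `AscendRMSNorm.forward`, where the residual may first pass through `torch.ops.vllm.maybe_chunk_residual` and is then consumed by a fused add + RMSNorm kernel. As context, here is a rough NumPy sketch of that add + RMSNorm pattern, including the size check the diff shows; all names and the eps value are illustrative stand-ins, not the actual torch_npu API:

```python
import numpy as np

def rmsnorm_with_residual(x, residual, weight, eps=1e-6):
    """Minimal NumPy sketch of a fused add + RMSNorm step.

    Illustrative only: the real path in the diff uses a torch_npu
    kernel via _addrmsnorm_forward_oot, not this function.
    """
    # If the residual was chunked across data-parallel ranks, the two
    # tensors must still agree on the token (first) dimension.
    assert x.shape[0] == residual.shape[0]
    x = x + residual  # residual connection, folded into the norm step
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    # Return the normalized output and the pre-norm sum, which becomes
    # the residual input for the next layer.
    return x / rms * weight, x
```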