[Bugfix] Fix MTP support for lmhead_tensor_parallel_size (#3915)

### What this PR does / why we need it?
Fix a hang during inference when MTP is enabled together with
`lmhead_tensor_parallel_size=16`.

- vLLM version: v0.11.0
- vLLM main:
83f478bb19

Signed-off-by: wyh145 <1987244901@qq.com>
Author: Nagisa125
Date: 2025-10-31 10:30:28 +08:00 (committed via GitHub)
Parent: 1966885be2
Commit: 6764777f00
2 changed files with 3 additions and 2 deletions


```diff
@@ -2913,7 +2913,8 @@ class NPUModelRunner(LoRAModelRunnerMixin):
                 aclgraph_runtime_mode=aclgraph_runtime_mode,
                 batch_descriptor=batch_descriptor)
             if need_dummy_logits:
-                dummy_compute_logits(hidden_states)
+                self.drafter.model.compute_logits(
+                    hidden_states[dummy_indices])
         if self.in_profile_run and self.dynamic_eplb:
             self.model.clear_all_moe_loads()
         if not self.in_profile_run and self.dynamic_eplb:
```
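For context, here is a minimal single-process sketch of why the dummy logits pass matters when the lm head is tensor-parallel. The function name and shapes are illustrative, not vLLM's actual API: with `lmhead_tensor_parallel_size=N`, the lm_head weight is sharded across N ranks, each rank computes a partial logits slice, and the full vocabulary is assembled with a collective (an all-gather). If some ranks skip this step during a dummy/profile run, the ranks that did enter the collective block forever, which is the hang this PR addresses. The `np.concatenate` below stands in for that all-gather.

```python
import numpy as np

def vocab_parallel_logits(hidden, lm_head_weight, tp_size):
    """Simulate vocab-parallel logits on one process (illustrative only).

    Each "rank" r owns a column shard of lm_head_weight and computes a
    partial logits slice; concatenation stands in for the all-gather
    collective that every rank must participate in.
    """
    vocab = lm_head_weight.shape[1]
    shard = vocab // tp_size
    partials = [
        hidden @ lm_head_weight[:, r * shard:(r + 1) * shard]
        for r in range(tp_size)  # every rank MUST run this step
    ]
    return np.concatenate(partials, axis=-1)

# Sharded computation matches the unsharded reference matmul.
hidden = np.random.rand(4, 8)
weight = np.random.rand(8, 32)
ref = hidden @ weight
out = vocab_parallel_logits(hidden, weight, tp_size=16)
assert np.allclose(out, ref)
```

Routing the dummy pass through `self.drafter.model.compute_logits` ensures the drafter's ranks all enter the same collective, mirroring what the real decode path does.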