[Bugfix] Resolve MTP > 1 issue when lm head tp > 1 (#4254)

### What this PR does / why we need it?

Previously, the dummy run executed `compute_logits` only once, regardless
of `num_speculative_tokens`. With lm head tensor parallelism greater than 1,
this left ranks mismatched on the collective inside `compute_logits`, so
`execute_model` hung there. The fix makes the dummy run call
`compute_logits` once per speculative step, matching
`num_speculative_tokens`.
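
A minimal sketch of the failure mode and the fixed loop shape, using
hypothetical stand-in names (`DummyModel`, plain tensors) rather than the
actual vllm-ascend code:

```python
import torch

# With lm head TP > 1, compute_logits issues a collective, so the dummy
# run must call it exactly as often as execute_model does, or some ranks
# block forever waiting for peers.
class DummyModel(torch.nn.Module):
    def __init__(self, hidden: int = 8, vocab: int = 16):
        super().__init__()
        self.lm_head = torch.nn.Linear(hidden, vocab)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Stand-in for one MTP draft step.
        return hidden_states

    def compute_logits(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # In the TP > 1 case this is where the collective would run.
        return self.lm_head(hidden_states)

model = DummyModel()
num_speculative_tokens = 3
hidden_states = torch.zeros(1, 8)

# Fixed behavior: one compute_logits per speculative step (previously it
# ran only once in total, leaving ranks out of sync).
for _ in range(num_speculative_tokens):
    hidden_states = model(hidden_states)
    model.compute_logits(hidden_states)
```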

I also set the `non_blocking` argument to `False` when moving
`exceeds_max_model_len` to the CPU. From what I understand, copying
device-to-host with `non_blocking=True` and immediately reading the
tensor on the CPU can observe stale data, causing accuracy problems;
transfers in the host-to-device direction don't have this issue. ref:
https://discuss.pytorch.org/t/should-we-set-non-blocking-to-true/38234/18
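
An illustrative sketch of the pitfall, assuming a CUDA device for brevity
(the same rule applies to NPU device-to-host copies):

```python
import torch

# With non_blocking=True the device-to-host copy is only enqueued, so
# reading the CPU tensor before the device stream finishes can observe
# stale values.
if torch.cuda.is_available():
    exceeds_max_model_len = torch.rand(1024, device="cuda") > 0.5

    # Risky: host code may read before the async copy has completed;
    # torch.cuda.synchronize() would be needed before trusting `risky`.
    risky = exceeds_max_model_len.to("cpu", non_blocking=True)

    # Safe: a blocking copy, as done in this PR.
    safe = exceeds_max_model_len.to("cpu", non_blocking=False)
    assert safe.device.type == "cpu"
```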

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

---------

Signed-off-by: Jade Zheng <zheng.shoujian@outlook.com>
Author: Jade Zheng
Date: 2025-12-01 10:22:36 +08:00
Committed by: GitHub
Parent: e8e20c0bbf
Commit: 51c8f60eb0
5 changed files with 29 additions and 17 deletions


@@ -81,7 +81,8 @@ class TorchairMtpProposer(MtpProposer):
             num_reqs: int = 0,
             num_tokens_across_dp=None,
             aclgraph_runtime_mode: CUDAGraphMode = CUDAGraphMode.NONE,
-            batch_descriptor=None) -> None:
+            batch_descriptor=None,
+            dummy_compute_logits=lambda hidden_states: None) -> None:
         moe_comm_type = self.runner._select_moe_comm_method(num_tokens)
         if not with_prefill:
@@ -143,6 +144,7 @@ class TorchairMtpProposer(MtpProposer):
             self.model(input_ids=input_ids,
                        positions=positions,
                        hidden_states=previous_hidden_states)
+            dummy_compute_logits(previous_hidden_states)
             if with_prefill:
                 break
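
Design note: `dummy_compute_logits` defaults to a no-op lambda, so existing
call sites stay unchanged and only the MTP dummy run needs to pass a real
callback. A minimal sketch of that wiring, with assumed names
(`proposer_dummy_run`, `lm_head`) rather than the actual runner code:

```python
from typing import Callable

import torch

def proposer_dummy_run(
    steps: int,
    hidden_states: torch.Tensor,
    # No-op default keeps callers that don't need logits unchanged.
    dummy_compute_logits: Callable[[torch.Tensor], None] = lambda h: None,
) -> None:
    for _ in range(steps):
        # ... draft-model forward would run here ...
        dummy_compute_logits(hidden_states)

lm_head = torch.nn.Linear(8, 16)

# Old call sites: still valid, no logits computed.
proposer_dummy_run(3, torch.zeros(1, 8))

# With lm head TP > 1: the runner passes a real callback so each rank
# computes logits once per speculative step.
proposer_dummy_run(3, torch.zeros(1, 8),
                   dummy_compute_logits=lambda h: lm_head(h))
```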