[Bugfix] Fix MTP core bugs and eager-mode tests (#6514)

### What this PR does / why we need it?
fix(mtp): resolve MTP core bugs and enhance eager-mode test cases
1. Resolved critical issues in the eager-mode MTP core execution logic.
2. Fixed functional bugs in the `_update_states_after_model_execute` function.
3. Updated test_mtp_qwen3_next.py to validate the eager-mode acceptance rate.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Verified with test_mtp_qwen3_next.py, which checks the eager-mode acceptance rate.

- vLLM version: v0.15.0
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.15.0

Signed-off-by: Bowen-Leee <caoshankuangren@gmail.com>
This commit is contained in: bowenli, 2026-02-25 17:50:57 +08:00 (committed by GitHub)
Parent: ed051737e9 · Commit: e3927cc8f5
4 changed files with 5 additions and 5 deletions


@@ -1374,6 +1374,9 @@ class NPUModelRunner(GPUModelRunner):
         with record_function_or_nullcontext("sample_token"):
             sampler_output = self._sample(logits, spec_decode_metadata)
+        if self.need_accepted_tokens:
+            self._update_states_after_model_execute(
+                sampler_output.sampled_token_ids, scheduler_output)

     def propose_draft_token_ids(sampled_token_ids):
         assert spec_decode_common_attn_metadata is not None
         self._draft_token_ids = self.propose_draft_token_ids(
@@ -1474,8 +1477,6 @@ class NPUModelRunner(GPUModelRunner):
             logits,
             sampling_metadata,
         )
-        if self.need_accepted_tokens:  # TODO remove this if
-            self._update_states_after_model_execute(sampler_output.sampled_token_ids)
         return sampler_output

     # TODO: remove this func after eagle_proposer is refactored and