[v0.18.0][Bugfix][Platform] Fix MiniMax M2 reasoning token usage accounting (#7700)

### What this PR does / why we need it?
This PR backports the MiniMax M2 reasoning-token usage-accounting fix
onto the `releases/v0.18.0` branch of vllm-ascend.

The release branch does not carry the related local GLM patch commit,
so this PR keeps the MiniMax change self-contained by:
- registering `patch_minimax_usage_accounting` on the release branch
- backporting `completion_tokens_details.reasoning_tokens` support into
chat usage generation
- fixing MiniMax reasoning-token counting for `</think>`-delimited
outputs without depending on the GLM suffix patch (a sketch of the
counting rule follows this list)
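
For concreteness, here is a minimal, self-contained sketch of the counting rule described above. It is illustrative only: `THINK_END_ID` and `count_reasoning_tokens` are hypothetical names rather than vLLM or vllm-ascend API, and treating a missing boundary as "everything is reasoning" is one plausible convention, not a confirmed detail of the patch.

```python
# Hypothetical sketch of </think>-delimited reasoning-token accounting.
# MiniMax M2 closes its reasoning segment with a single </think> boundary
# token, so the reasoning count is derived from that token's position.

THINK_END_ID = 200021  # placeholder id for "</think>"; the real id is model-specific


def count_reasoning_tokens(output_token_ids: list[int]) -> int:
    """Return how many completion tokens belong to the reasoning segment."""
    try:
        boundary = output_token_ids.index(THINK_END_ID)
    except ValueError:
        # No </think> emitted (e.g. generation truncated mid-thought):
        # treat the whole completion as reasoning.
        return len(output_token_ids)
    # Count the boundary token itself as part of the reasoning segment.
    return boundary + 1


# Example: 5 thought tokens, the </think> boundary, then 3 answer tokens.
tokens = [11, 12, 13, 14, 15, THINK_END_ID, 21, 22, 23]
assert count_reasoning_tokens(tokens) == 6
```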

### Does this PR introduce _any_ user-facing change?
Yes. OpenAI-compatible chat usage accounting for MiniMax M2 responses
now reports corrected reasoning token counts on the release branch; an
illustrative usage payload is shown below.
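
For example, a chat completion's usage block now carries the reasoning count in the OpenAI-compatible `completion_tokens_details` field (all numbers below are made up for illustration):

```json
{
  "usage": {
    "prompt_tokens": 42,
    "completion_tokens": 180,
    "total_tokens": 222,
    "completion_tokens_details": {
      "reasoning_tokens": 96
    }
  }
}
```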

### How was this patch tested?
- `python -m compileall
vllm_ascend/patch/platform/patch_minimax_usage_accounting.py`
- a `python - <<'PY'` heredoc import check for
`vllm_ascend.patch.platform.patch_minimax_usage_accounting` on top of
`releases/v0.18.0` (reconstructed below)
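
The heredoc check was along these lines; the exact body is a plausible reconstruction, not quoted from the PR:

```bash
python - <<'PY'
import importlib

# Fails loudly if the patch module cannot be imported on the release branch.
mod = importlib.import_module(
    "vllm_ascend.patch.platform.patch_minimax_usage_accounting")
print("imported:", mod.__name__)
PY
```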

No targeted automated regression test exists for this backport yet, so
I validated syntax and module-import compatibility on the release
branch.

---------

Signed-off-by: QwertyJack <7554089+QwertyJack@users.noreply.github.com>
Co-authored-by: QwertyJack <7554089+QwertyJack@users.noreply.github.com>
Authored by jack on 2026-03-27 10:45:28 +08:00, committed by GitHub
parent a40eee2ba1 · commit 53cc225cac
4 changed files with 455 additions and 0 deletions


@@ -159,6 +159,26 @@
# Future Plan:
# Remove this patch after the upcoming KV cache spec refactor.
#
# ** 9. File: platform/patch_minimax_usage_accounting.py **
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 1. `vllm.entrypoints.openai.serving_chat.OpenAIServingChat`
# `vllm.entrypoints.openai.protocol.UsageInfo`
# `vllm.reasoning.minimax_m2_reasoning_parser`
# Why:
# MiniMax M2 reasoning outputs use `</think>` as the only boundary token,
# but the runtime usage accounting path either omits reasoning token
# details entirely or counts them incorrectly.
# How:
# Monkey-patch the MiniMax reasoning token counters, extend `UsageInfo`
# with `completion_tokens_details.reasoning_tokens`, and update chat
# streaming/non-streaming usage generation to propagate the corrected
# counts.
# Related PR (if no, explain why):
# https://github.com/vllm-project/vllm/pull/37955
# Future Plan:
# Remove this patch once the upstream MiniMax usage-accounting fix is in
# the runtime vLLM version used by vllm-ascend.
#
# * Worker Patch:
# ===============
#
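
To make the "How" entry above concrete, here is a simplified stand-in for the `UsageInfo` extension it describes. The dataclasses below are illustrative only: vLLM's actual `UsageInfo` is a pydantic model, and `build_usage` is a hypothetical helper, not the patched vLLM code path.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CompletionTokensDetails:
    # Mirrors the OpenAI completion_tokens_details schema referenced by the PR.
    reasoning_tokens: int = 0


@dataclass
class UsageInfo:
    # Simplified stand-in for vllm.entrypoints.openai.protocol.UsageInfo.
    prompt_tokens: int = 0
    completion_tokens: int = 0
    total_tokens: int = 0
    completion_tokens_details: Optional[CompletionTokensDetails] = None


def build_usage(prompt_tokens: int, completion_token_ids: list[int],
                reasoning_tokens: int) -> UsageInfo:
    """Propagate the corrected reasoning count into the usage block."""
    completion_tokens = len(completion_token_ids)
    return UsageInfo(
        prompt_tokens=prompt_tokens,
        completion_tokens=completion_tokens,
        total_tokens=prompt_tokens + completion_tokens,
        completion_tokens_details=CompletionTokensDetails(
            reasoning_tokens=reasoning_tokens),
    )
```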