[Bugfix] Fix chunk prefill bug for long_sequence feature (#5444)

### What this PR does / why we need it?
Fix chunk prefill bug for long_sequence feature

When two requests run with chunked prefill enabled in the long-sequence
scenario, a request that is scheduled with only 1 token on a step is
identified as a decode request, which triggers an error. This PR fixes the
issue by classifying a request as prefill based on whether its number of
scheduled tokens exceeds `self.decode_threshold`, instead of comparing its
computed tokens against its prompt length.
Closes: https://github.com/vllm-project/vllm-ascend/issues/5445
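
For context, a minimal sketch of the misclassification; the variable names mirror the hunk below, while the concrete values and the threshold are hypothetical:

```python
# Hypothetical values illustrating the bug; names mirror the diff below.
num_prompt_tokens = 8192     # long prompt, split across chunked-prefill steps
num_computed_tokens = 8191   # all but the last token already prefilled
num_scheduled_token = 1      # this step schedules a single token
decode_threshold = 1         # assumed boundary between decode and prefill

# Old check in PCPManager: the prompt is not fully computed, so the
# request is still flagged as a prefill.
is_prefill_old = num_computed_tokens < num_prompt_tokens   # True

# New check: a 1-token step does not exceed the decode threshold, so the
# request is consistently treated as a decode step.
is_prefill_new = num_scheduled_token > decode_threshold    # False
```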

- vLLM version: release/v0.13.0
- vLLM main: 81786c8774
---------
Signed-off-by: LookAround <lixushi@huawei.com>
Author: LookAround0301 (committed by GitHub)
Date: 2026-01-05 09:16:36 +08:00
Parent: fbb93ad8f2
Commit: d25a2c20c5
2 changed files with 81 additions and 5 deletions


@@ -463,13 +463,11 @@ class PCPManager:
         ]
         for i, req_id in enumerate(input_batch.req_ids):
-            num_scheduled_tokens = scheduler_output.num_scheduled_tokens[
-                req_id]
-            is_prefill = input_batch.num_computed_tokens_cpu[
-                i] < input_batch.num_prompt_tokens[i]
+            num_scheduled_token = scheduler_output.num_scheduled_tokens[req_id]
+            is_prefill = num_scheduled_token > self.decode_threshold
             if is_prefill:
                 num_cp_padded_scheduled_tokens = cdiv(
-                    num_scheduled_tokens,
+                    num_scheduled_token,
                     2 * self.pcp_world_size) * (2 * self.pcp_world_size)
                 chunk_size = num_cp_padded_scheduled_tokens // (
                     2 * self.pcp_world_size)
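
For reference, a small worked example of the padding arithmetic in the hunk above; the values are hypothetical, and `cdiv` is ceiling division as in vLLM's utilities:

```python
def cdiv(a: int, b: int) -> int:
    # Ceiling division, matching the cdiv helper used in the hunk above.
    return -(a // -b)

pcp_world_size = 4         # hypothetical prefill-context-parallel world size
num_scheduled_token = 300  # hypothetical prefill chunk length

# Pad the scheduled tokens up to a multiple of 2 * pcp_world_size so the
# chunk splits evenly across the context-parallel ranks.
num_cp_padded = cdiv(num_scheduled_token,
                     2 * pcp_world_size) * (2 * pcp_world_size)
chunk_size = num_cp_padded // (2 * pcp_world_size)
assert (num_cp_padded, chunk_size) == (304, 38)
```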