xc-llm-ascend/vllm_ascend
LookAround0301 d25a2c20c5 [Bugfix] Fix chunk prefill bug for long_sequence feature (#5444)
### What this PR does / why we need it?
Fix chunk prefill bug for long_sequence feature

When two requests run with chunked prefill enabled in the
long-sequence scenario, and one of them has only 1 token scheduled
in a step, that request is misidentified as a decode request and
triggers an error. This PR fixes the issue.
Closes: https://github.com/vllm-project/vllm-ascend/issues/5445
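The failure mode above can be sketched as follows. This is a minimal, hypothetical illustration of the misclassification, not the actual vllm-ascend code; the field names (`num_prompt_tokens`, `num_computed_tokens`, `num_scheduled_tokens`) and both classifier functions are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Request:
    num_prompt_tokens: int      # total prompt length
    num_computed_tokens: int    # prompt tokens already prefilled
    num_scheduled_tokens: int   # tokens scheduled this step

def is_decode_buggy(req: Request) -> bool:
    # Buggy heuristic: with chunked prefill, a 1-token tail chunk of an
    # unfinished prefill is indistinguishable from a decode step.
    return req.num_scheduled_tokens == 1

def is_decode_fixed(req: Request) -> bool:
    # Classify by prefill progress instead of chunk size: the request is
    # decoding only once the entire prompt has been computed.
    return req.num_computed_tokens >= req.num_prompt_tokens

# A chunked-prefill request whose final chunk happens to be 1 token.
tail_chunk = Request(num_prompt_tokens=100, num_computed_tokens=99,
                     num_scheduled_tokens=1)
assert is_decode_buggy(tail_chunk)      # misclassified as decode
assert not is_decode_fixed(tail_chunk)  # correctly still a prefill
```

The sketch shows why keying the prefill/decode decision off per-step token count breaks once prefills can be split into arbitrary chunks.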

- vLLM version: release/v0.13.0
- vLLM main: 81786c8774
---------
Signed-off-by: LookAround <lixushi@huawei.com>
2026-01-05 09:16:36 +08:00