[bugfix] Fix the complex and potentially problematic `generate_kv_idx` (#5957)

### What this PR does / why we need it?
In long-sequence scenarios, the chunked-prefill component may encounter
dimension-misalignment issues, which previously surfaced during
precision testing on the code_generate_lite dataset. This PR removes the
redundant computation and instead derives the value from existing
results with straightforward arithmetic.
- vLLM version: v0.13.0
- vLLM main:
2c24bc6996
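
The idea of "deriving the value from existing results" can be illustrated with a minimal, hypothetical sketch (the function name, signature, and NumPy usage here are illustrative assumptions, not the actual vLLM-Ascend code): during chunked prefill, the KV-cache indices of the current chunk's tokens follow directly from the already-known context length, so they need not be recomputed from scratch by a separate code path that can drift out of shape with the cached tensors.

```python
import numpy as np

def generate_kv_idx(context_len: int, chunk_len: int) -> np.ndarray:
    """Hypothetical sketch: KV indices for the current prefill chunk.

    The chunk's tokens occupy positions
    [context_len, context_len + chunk_len), so the indices can be
    derived from the existing context length with plain arithmetic,
    avoiding a redundant (and potentially misaligned) recomputation.
    """
    return context_len + np.arange(chunk_len, dtype=np.int64)

# Example: a chunk of 3 tokens appended after 5 cached tokens.
print(generate_kv_idx(5, 3).tolist())  # -> [5, 6, 7]
```

The point of the sketch is only the shape of the fix: reuse a quantity the scheduler already tracks rather than rebuilding it independently.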

Signed-off-by: QiuChunshuo <qiuchunshuo@huawei.com>
Committed by Qiu via GitHub, 2026-01-21 14:21:02 +08:00
parent 12a668b1d9
commit 58ff465821
5 changed files with 4 additions and 57 deletions

@@ -87,7 +87,6 @@ def test_models_chunked_prefill_mixed_length_prompts_including_1_token(
"VLLM_ALLOW_LONG_MAX_MODEL_LEN": "1"
})
@pytest.mark.parametrize("model", MODELS)
@pytest.mark.skip(reason="skip: incompatible with main2main")
def test_models_chunked_prefill_with_empty_kvcache(model: str):
TEST_ROPE_PARAMETERS = {
"rope_theta": 1000000,