xc-llm-ascend/tests/e2e/singlecard/test_models.py
Wang Kunpeng bc5b7a5fb5 [bugfix] Fix MHA model runtime error in aclgraph mode (#5397)
### What this PR does / why we need it?
Currently, MHA models (e.g., MiniCPM-2B, Baichuan-7B) fail when running
in piecewise graph mode, with error messages similar to:
```
(E89999):  When layout is TND and PA not enabled, keyT(8) and valueT(8) must be equal to the last element of actualSeqenceLengthKV(5)[FUNC:CheckInputShapeWhenLayoutIsTND][FILE:prompt_flash_attention_tiling.cpp][LINE:3618]
```
The error occurs because the qkv tensors in the prefill stage are also
padded (to the captured graph size), which makes their shape inconsistent
with actual_seq_lengths. This PR adds unpadding logic for kv.
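
As a minimal sketch of the idea (hypothetical helper name and tensor
layout, not the actual patch): the last element of actual_seq_lengths_kv
is the true token count, so the padded key/value can be sliced back to
that length before the attention kernel sees them.

```python
import torch


def unpad_kv(key: torch.Tensor, value: torch.Tensor,
             actual_seq_lengths_kv: torch.Tensor):
    # The last element of the cumulative KV sequence lengths is the real
    # token count; anything beyond it is padding added for graph capture.
    num_actual_tokens = int(actual_seq_lengths_kv[-1])
    # Slice the token dimension so key/value match actual_seq_lengths_kv,
    # which is what the E89999 shape check above requires.
    return key[:num_actual_tokens], value[:num_actual_tokens]
```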

- vLLM version: release/v0.13.0
- vLLM main: 254f6b9867

Signed-off-by: Wang Kunpeng <1289706727@qq.com>

#
# Copyright (c) 2025 Huawei Technologies Co., Ltd. All Rights Reserved.
# This file is a part of the vllm-ascend project.
# Adapted from vllm/tests/entrypoints/llm/test_guided_generate.py
# Copyright 2023 The vLLM team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os

import pytest
from modelscope import snapshot_download  # type: ignore

from tests.e2e.conftest import VllmRunner

os.environ["VLLM_WORKER_MULTIPROC_METHOD"] = "spawn"

# Note: MiniCPM-2B is an MHA model, MiniCPM4-0.5B is a GQA model
MINICPM_MODELS = [
    "openbmb/MiniCPM-2B-sft-bf16",
    "OpenBMB/MiniCPM4-0.5B",
]


@pytest.mark.parametrize("model", MINICPM_MODELS)
def test_minicpm(model) -> None:
    example_prompts = [
        "Hello, my name is",
    ]
    max_tokens = 5
    with VllmRunner(snapshot_download(model),
                    max_model_len=512,
                    gpu_memory_utilization=0.7) as runner:
        runner.generate_greedy(example_prompts, max_tokens)
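
For reference, the scenario this test exercises can be sketched standalone
with vLLM's public API (greedy decoding via temperature=0; VllmRunner is
the e2e harness's wrapper around essentially this flow):

```python
# Hypothetical standalone reproduction, not part of the test file: greedy
# decoding of an MHA model, the path that hit E89999 in aclgraph mode.
from vllm import LLM, SamplingParams

llm = LLM(model="openbmb/MiniCPM-2B-sft-bf16",
          max_model_len=512,
          gpu_memory_utilization=0.7)
outputs = llm.generate(["Hello, my name is"],
                       SamplingParams(temperature=0.0, max_tokens=5))
print(outputs[0].outputs[0].text)
```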