fix profile run for vl model (#5136)

### What this PR does / why we need it?

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.12.0
- vLLM main:
ad32e3e19c

---------

Signed-off-by: 李少鹏 <lishaopeng21@huawei.com>
Author: shaopeng-666
Date: 2025-12-17 23:51:31 +08:00
Committed by: GitHub
Parent: 43d974c6f7
Commit: 39bdd4cfaa
4 changed files with 1 addition and 3 deletions


@@ -39,7 +39,6 @@ def test_multimodal_vl(prompt_template):
    images = [image] * len(img_questions)
    prompts = prompt_template(img_questions)
    with VllmRunner("Qwen/Qwen3-VL-8B-Instruct",
                    max_model_len=4096,
                    mm_processor_kwargs={
                        "min_pixels": 28 * 28,
                        "max_pixels": 1280 * 28 * 28,
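The `min_pixels` / `max_pixels` kwargs above bound the total image area the multimodal processor will feed the model. A minimal sketch of how such a clamp can work, assuming a Qwen-VL-style patch size of 28 pixels per side (the helper name and exact rounding are illustrative, not the library's actual implementation):

```python
import math

def clamp_resize(height, width, factor=28,
                 min_pixels=28 * 28, max_pixels=1280 * 28 * 28):
    # Hypothetical helper: round each side to a multiple of the patch
    # size, then rescale so the total pixel count stays inside
    # [min_pixels, max_pixels]. Loosely modeled on Qwen-VL-style
    # patch-based resizing; not the vLLM/Transformers implementation.
    h = max(factor, round(height / factor) * factor)
    w = max(factor, round(width / factor) * factor)
    if h * w > max_pixels:
        # Shrink uniformly, rounding down to keep under the cap.
        scale = math.sqrt(h * w / max_pixels)
        h = math.floor(height / scale / factor) * factor
        w = math.floor(width / scale / factor) * factor
    elif h * w < min_pixels:
        # Grow uniformly, rounding up to clear the floor.
        scale = math.sqrt(min_pixels / (h * w))
        h = math.ceil(height * scale / factor) * factor
        w = math.ceil(width * scale / factor) * factor
    return h, w
```

With these bounds, a 4000x4000 input is shrunk until its area fits under 1280 * 28 * 28 pixels, while a tiny 10x10 input is padded up to one 28x28 patch.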