xc-llm-ascend/tests/e2e/models/configs/Qwen3-VL-8B-Instruct.yaml
shaopeng-666 39bdd4cfaa fix profile run for vl model (#5136)
- vLLM version: v0.12.0
- vLLM main: ad32e3e19c

---------

Signed-off-by: 李少鹏 <lishaopeng21@huawei.com>
2025-12-17 23:51:31 +08:00


model_name: "Qwen/Qwen3-VL-8B-Instruct"
hardware: "Atlas A2 Series"
model: "vllm-vlm"
tasks:
- name: "mmmu_val"
  metrics:
  - name: "acc,none"
    value: 0.55
batch_size: 32
gpu_memory_utilization: 0.7
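As a minimal sketch of how an e2e harness might consume this config, the snippet below parses the YAML (nesting restored, with `metrics` under the `mmmu_val` task entry) and flattens it into expected (task, metric) accuracy thresholds. The `expected_metrics` helper is hypothetical, not part of the repository; PyYAML is assumed to be available.

```python
import yaml

# The accuracy config shown above, with task/metric nesting restored.
CONFIG_YAML = """
model_name: "Qwen/Qwen3-VL-8B-Instruct"
hardware: "Atlas A2 Series"
model: "vllm-vlm"
tasks:
- name: "mmmu_val"
  metrics:
  - name: "acc,none"
    value: 0.55
batch_size: 32
gpu_memory_utilization: 0.7
"""

def expected_metrics(config: dict) -> dict:
    """Flatten the config into {(task_name, metric_name): expected_value}."""
    out = {}
    for task in config.get("tasks", []):
        for metric in task.get("metrics", []):
            out[(task["name"], metric["name"])] = metric["value"]
    return out

config = yaml.safe_load(CONFIG_YAML)
print(expected_metrics(config))
# {('mmmu_val', 'acc,none'): 0.55}
```

A harness could then compare each measured lm-eval score against these values, failing the run if a score drops below its threshold.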