bench: Add MMMU benchmark for vLM (#3562)

Authored by Mick on 2025-02-23 00:10:59 +08:00, committed by GitHub
parent 9087694006
commit 45205d88a0
9 changed files with 1026 additions and 7 deletions

benchmark/mmmu/README.md (new file, 22 lines)
## Run evaluation
### Evaluate sglang
```
python benchmark/mmmu/bench_sglang.py --model-path Qwen/Qwen2-VL-7B-Instruct --chat-template qwen2-vl
```
To reduce memory usage, it is recommended to append something like `--mem-fraction-static 0.6` to the command above.
### Evaluate hf
```
python benchmark/mmmu/bench_hf.py --model-path Qwen/Qwen2-VL-7B-Instruct
```
MMMU accuracy for some popular models:
1. Qwen/Qwen2-VL-2B-Instruct: 0.241
2. Qwen/Qwen2-VL-7B-Instruct: 0.255
3. Qwen/Qwen2.5-VL-3B-Instruct: 0.245
4. Qwen/Qwen2.5-VL-7B-Instruct: 0.242
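
The scores above are plain multiple-choice accuracies. A minimal sketch of how such a score can be computed (a hypothetical `mmmu_accuracy` helper for illustration, not the repo's actual evaluation code):

```python
def mmmu_accuracy(predictions: dict, answers: dict) -> float:
    """Fraction of multiple-choice predictions matching the gold answers.

    Both inputs map a question id to an option letter ("A".."E").
    Missing or malformed predictions count as wrong.
    """
    if not answers:
        return 0.0
    correct = sum(
        1
        for qid, gold in answers.items()
        if predictions.get(qid, "").strip().upper() == gold.strip().upper()
    )
    return correct / len(answers)


if __name__ == "__main__":
    gold = {"q1": "A", "q2": "C", "q3": "B", "q4": "D"}
    preds = {"q1": "a", "q2": "C", "q3": "D"}  # q4 unanswered -> wrong
    print(round(mmmu_accuracy(preds, gold), 3))  # 2 of 4 correct -> 0.5
```

Answer matching is case-insensitive here; real harnesses additionally have to extract the option letter from free-form model output before scoring.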