[Test][Accuracy] Add accuracy evaluation config for InternVL3_5-8B (#3964)
### What this PR does / why we need it?
To continuously monitor the accuracy of the InternVL3_5-8B model, this
PR adds the corresponding configuration file to the CI. The `-hf`
suffix is required in the model name to avoid incompatibility with the
`lm-eval` preprocessor.
### How was this patch tested?
`pytest -sv ./tests/e2e/models/test_lm_eval_correctness.py --config ./tests/e2e/models/configs/InternVL3_5-8B.yaml`
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
Signed-off-by: gcanlin <canlinguosdu@gmail.com>
tests/e2e/models/configs/InternVL3_5-8B.yaml (new file, 11 additions)
@@ -0,0 +1,11 @@
+model_name: "OpenGVLab/InternVL3_5-8B-hf"
+runner: "linux-aarch64-a2-1"
+hardware: "Atlas A2 Series"
+model: "vllm-vlm"
+tasks:
+- name: "mmmu_val"
+  metrics:
+  - name: "acc,none"
+    value: 0.58
+max_model_len: 40960
+trust_remote_code: True
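The config above pins an expected MMMU validation accuracy (`acc,none: 0.58`) for the CI to check against. A minimal sketch of how such a check might work — this is a hypothetical helper for illustration, not the repo's actual `test_lm_eval_correctness.py` logic, and the tolerance value is an assumption:

```python
# Hypothetical accuracy check: compare a measured lm-eval metric against
# the expected value from the YAML config, within a relative tolerance.

# Expected metrics as they would be parsed from InternVL3_5-8B.yaml.
EXPECTED = {
    "mmmu_val": {"acc,none": 0.58},
}

def within_tolerance(measured: float, expected: float, rtol: float = 0.05) -> bool:
    """Return True if the measured value is within rtol of the expected value."""
    return abs(measured - expected) <= rtol * expected

# Example: a measured MMMU accuracy of 0.585 passes against the expected 0.58.
assert within_tolerance(0.585, EXPECTED["mmmu_val"]["acc,none"])
```

A check like this lets the CI tolerate small run-to-run variance while still catching real accuracy regressions.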
@@ -9,3 +9,4 @@ Qwen3-VL-30B-A3B-Instruct.yaml
 Qwen3-VL-8B-Instruct.yaml
 Qwen2.5-Omni-7B.yaml
 Meta-Llama-3.1-8B-Instruct.yaml
+InternVL3_5-8B.yaml