Update README.md

This commit is contained in:
ai-modelscope
2025-05-30 12:02:26 +08:00
parent a197d1e41d
commit 42a1e3285a

@@ -105,7 +105,7 @@ In this work, we use the Best-of-N evaluation strategy and employ [VisualPRM-8B]
 ### Multimodal Reasoning and Mathematics
-![image/png](https://huggingface.co/datasets/Weiyun1025/InternVL-Performance/resolve/main/internvl3/reasoning.png)
+![image/png](https://huggingface.co/OpenGVLab/VisualPRM-8B-v1_1/resolve/main/visualprm-performance.png)
 ### OCR, Chart, and Document Understanding
@@ -161,7 +161,7 @@ The evaluation results in the Figure below shows that the model with native mult
 As shown in the table below, models fine-tuned with MPO demonstrate superior reasoning performance across seven multimodal reasoning benchmarks compared to their counterparts without MPO. Specifically, InternVL3-78B and InternVL3-38B outperform their counterparts by 4.1 and 4.5 points, respectively. Notably, the training data used for MPO is a subset of that used for SFT, indicating that the performance improvements primarily stem from the training algorithm rather than the training data.
-![image/png](https://huggingface.co/datasets/Weiyun1025/InternVL-Performance/resolve/main/internvl3/ablation-mpo.png)
+![image/png](https://huggingface.co/datasets/OpenGVLab/MMPR-v1.2-prompts/resolve/main/ablation-mpo.png)
 ### Variable Visual Position Encoding