Weiyun1025 committed
Commit • c5d8dc2 • 1 Parent(s): b7e8834
Upload folder using huggingface_hub
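The commit message above is the default message `huggingface_hub` emits for `upload_folder`. As a rough sketch of how a commit like this is produced (the local folder path is illustrative, not from the source, and a stored write token is assumed):

```python
# Sketch of the upload step behind this commit. "Upload folder using
# huggingface_hub" is upload_folder's default commit message.
from huggingface_hub import HfApi

api = HfApi()  # assumes a write token from `huggingface-cli login`
api.upload_folder(
    folder_path="./InternVL2-8B",      # illustrative local path, not from the source
    repo_id="OpenGVLab/InternVL2-8B",  # the repo this commit lands in
    repo_type="model",
)
```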
README.md CHANGED
@@ -5,9 +5,9 @@ pipeline_tag: visual-question-answering
 
 # InternVL2-8B
 
-[\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0 Paper\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5 Report\]](https://arxiv.org/abs/2404.16821)
+[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0 Paper\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5 Report\]](https://arxiv.org/abs/2404.16821)
 
-[\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start)
+[\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/675877376)
 
 ## Introduction
 
@@ -23,26 +23,26 @@ InternVL2 is a multimodal large language model series, featuring models of vario
 
 ## Performance
 
-| Benchmark                    | MiniCPM-Llama3-V-2_5 | InternVL2-8B |
-| :--------------------------: | :------------------: | :----------: |
-| Model Size                   | 8.5B                 |
-|                              |                      |
-| DocVQA<sub>test</sub>        | 84.8                 |
-| ChartQA<sub>test</sub>       | -                    |
-| InfoVQA<sub>test</sub>       | -                    |
-| TextVQA<sub>val</sub>        | 76.6                 |
-| OCRBench                     | 725                  |
-| MME<sub>sum</sub>            | 2024.6               |
-| RealWorldQA                  | 63.5                 |
-| AI2D<sub>test</sub>          | 78.4                 |
-| MMMU<sub>val</sub>           | 45.8                 |
-| MMBench-EN<sub>test</sub>    | 77.2                 |
-| MMBench-CN<sub>test</sub>    | 74.2                 |
-| CCBench<sub>dev</sub>        | 45.9                 |
-| MMVet<sub>GPT-4-0613</sub>   | -                    |
-| SEED-Image                   | 72.3                 |
-| HallBench<sub>avg</sub>      | 42.4                 |
-| MathVista<sub>testmini</sub> | 54.3                 |
+| Benchmark                    | MiniCPM-Llama3-V-2_5 | InternVL-Chat-V1-5 | InternVL2-8B |
+| :--------------------------: | :------------------: | :----------------: | :----------: |
+| Model Size                   | 8.5B                 |                    | 8.1B         |
+|                              |                      |                    |              |
+| DocVQA<sub>test</sub>        | 84.8                 |                    | 91.6         |
+| ChartQA<sub>test</sub>       | -                    |                    | 83.3         |
+| InfoVQA<sub>test</sub>       | -                    |                    | 74.8         |
+| TextVQA<sub>val</sub>        | 76.6                 |                    | 77.4         |
+| OCRBench                     | 725                  |                    | 794          |
+| MME<sub>sum</sub>            | 2024.6               |                    | 2210.3       |
+| RealWorldQA                  | 63.5                 |                    | 64.4         |
+| AI2D<sub>test</sub>          | 78.4                 |                    | 83.8         |
+| MMMU<sub>val</sub>           | 45.8                 |                    | 49.3         |
+| MMBench-EN<sub>test</sub>    | 77.2                 |                    | 81.7         |
+| MMBench-CN<sub>test</sub>    | 74.2                 |                    | 81.2         |
+| CCBench<sub>dev</sub>        | 45.9                 |                    | 75.9         |
+| MMVet<sub>GPT-4-0613</sub>   | -                    |                    | 60.0         |
+| SEED-Image                   | 72.3                 |                    | 76.2         |
+| HallBench<sub>avg</sub>      | 42.4                 |                    | 45.2         |
+| MathVista<sub>testmini</sub> | 54.3                 |                    | 58.3         |
 
 - We simultaneously use InternVL and VLMEvalKit repositories for model evaluation. Specifically, the results reported for DocVQA, ChartQA, InfoVQA, TextVQA, MME, AI2D, MMBench, CCBench, MMVet, and SEED-Image were tested using the InternVL repository. MMMU, OCRBench, RealWorldQA, HallBench, and MathVista were evaluated using the VLMEvalKit.
 
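The updated header links to a Quick Start anchor whose body falls outside these hunks, and the closing note names the InternVL and VLMEvalKit harnesses as the source of the table's numbers. As a hedged sketch only, assuming the `trust_remote_code` chat interface that InternVL model cards publish (verify the exact `model.chat` signature against the card's own Quick Start section), a minimal pure-text smoke test could look like this; benchmark figures should still come from the two harnesses, not from ad-hoc prompts:

```python
# Hedged sketch: loads OpenGVLab/InternVL2-8B via the card's remote-code path.
# model.chat(...) is the custom chat API shipped in the repo's modeling code;
# check its signature against the model card before relying on it.
import torch
from transformers import AutoModel, AutoTokenizer

path = "OpenGVLab/InternVL2-8B"
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,  # half-precision weights; adjust for your hardware
    trust_remote_code=True,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

# Pure-text turn: pixel_values=None skips the vision tower entirely.
generation_config = dict(max_new_tokens=64, do_sample=False)
response = model.chat(tokenizer, None, "Hello, who are you?", generation_config)
print(response)
```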