# Dataset Card for MMIU

<!-- - **Homepage:** -->
- **Repository:** https://github.com/OpenGVLab/MMIU
- **Paper:** https://arxiv.org/abs/2408.02718
- **Project Page:** https://mmiu-bench.github.io/
- **Point of Contact:** [Fanqing Meng](mailto:[email protected])

## Introduction
MMIU encompasses 7 types of multi-image relationships, 52 tasks, 77K images, and 11K meticulously curated multiple-choice questions, making it the most extensive benchmark of its kind. Our evaluation of 24 popular MLLMs, including both open-source and proprietary models, reveals significant challenges in multi-image comprehension, particularly in tasks involving spatial understanding. Even the most advanced models, such as GPT-4o, achieve only 55.7% accuracy on MMIU. Through multi-faceted analytical experiments, we identify key performance gaps and limitations, providing valuable insights for future model and data improvements. We aim for MMIU to advance the frontier of LVLM research and development, moving us toward sophisticated multimodal multi-image user interactions.

## Data Structure
### Data Fields

Each annotation contains the following fields:

* `task`: The name of the task
* `visual_input_component`: The type of input image (e.g., point cloud, natural image)
* `source`: The source dataset of the sample
* `options`: The answer options for the question
* `question`: The question text
* `context`: The context for the question (e.g., a task description)
* `input_image_path`: A list of paths to the input images (including question and option images)
* `output`: The correct option for the question

### Example
```json
{
    "task": "forensic_detection_blink",
    "visual_input_component": "natural image and synthetic image",
    "source": "blink",
    "options": "A: the first image\nB: the second image\nC: the third image\nD: the fourth image",
    "question": "Which image is most likely to be a real photograph?",
    "context": "You are a judge in a photography competition, and now you are given the four images. Please examine the details and tell which one of them is most likely to be a real photograph.\nSelect from the following choices.\nA: the first image\nB: the second image\nC: the third image\nD: the fourth image\n",
    "input_image_path": [
        "./Low-level-semantic/forensic_detection_blink/forensic_detection_blink_0_0.jpg",
        "./Low-level-semantic/forensic_detection_blink/forensic_detection_blink_0_1.jpg",
        "./Low-level-semantic/forensic_detection_blink/forensic_detection_blink_0_2.jpg",
        "./Low-level-semantic/forensic_detection_blink/forensic_detection_blink_0_3.jpg"
    ],
    "output": "D"
}
```
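
As a minimal sketch of how a record like the one above might be consumed — the in-code sample record is copied from the example, while the `parse_options` helper and any annotation file name are illustrative assumptions, not part of the MMIU release:

```python
# Sketch: reading one MMIU-style record. In practice, records would be loaded
# from the dataset's annotation JSON (exact file name depends on the release),
# e.g. records = json.load(open("annotations.json")).
sample = {
    "task": "forensic_detection_blink",
    "source": "blink",
    "options": "A: the first image\nB: the second image\nC: the third image\nD: the fourth image",
    "question": "Which image is most likely to be a real photograph?",
    "input_image_path": [
        "./Low-level-semantic/forensic_detection_blink/forensic_detection_blink_0_0.jpg",
        "./Low-level-semantic/forensic_detection_blink/forensic_detection_blink_0_1.jpg",
        "./Low-level-semantic/forensic_detection_blink/forensic_detection_blink_0_2.jpg",
        "./Low-level-semantic/forensic_detection_blink/forensic_detection_blink_0_3.jpg",
    ],
    "output": "D",
}

def parse_options(options_str: str) -> dict:
    """Split the newline-separated `options` string into a {letter: text} dict."""
    parsed = {}
    for line in options_str.strip().split("\n"):
        letter, _, text = line.partition(": ")
        parsed[letter] = text
    return parsed

choices = parse_options(sample["options"])
correct_answer = choices[sample["output"]]
print(correct_answer)  # the fourth image
```

Since `options` is a single string rather than a list, splitting it this way recovers the letter-to-text mapping needed to compare a model's prediction against `output`.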

### Image Relationships
We include seven types of image relationships. For details, please refer to the paper: https://arxiv.org/abs/2408.02718

## Licensing Information
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.

## Disclaimer
This dataset is intended primarily for research purposes. We strongly oppose any harmful use of the data or technology.