---
license: cdla-permissive-2.0
dataset_info:
- config_name: object_recognition_single
  features:
  - name: id
    dtype: int32
  - name: image
    dtype: image
  - name: prompt
    dtype: string
  - name: ground_truth
    dtype: string
- config_name: object_recognition_pairs
  features:
  - name: id
    dtype: int32
  - name: image
    dtype: image
  - name: prompt
    dtype: string
  - name: ground_truth
    dtype: string
- config_name: visual_prompting_single
  features:
  - name: id
    dtype: int32
  - name: image
    dtype: image
  - name: prompt
    dtype: string
  - name: ground_truth
    dtype: string
- config_name: visual_prompting_pairs
  features:
  - name: id
    dtype: int32
  - name: image
    dtype: image
  - name: prompt
    dtype: string
  - name: ground_truth
    dtype: string    
- config_name: spatial_reasoning_lrtb_single
  features:
  - name: id
    dtype: int32
  - name: image
    dtype: image
  - name: prompt
    dtype: string
  - name: ground_truth
    dtype: string
  - name: target_options
    dtype: string
- config_name: spatial_reasoning_lrtb_pairs
  features:
  - name: id
    dtype: int32
  - name: image
    dtype: image
  - name: prompt
    dtype: string
  - name: ground_truth
    dtype: string
  - name: target_options
    dtype: string    
- config_name: object_detection_single
  features:
  - name: id
    dtype: int32
  - name: image
    dtype: image
  - name: prompt
    dtype: string
- config_name: object_detection_pairs
  features:
  - name: id
    dtype: int32
  - name: image
    dtype: image
  - name: prompt
    dtype: string    
configs:
- config_name: object_recognition_single
  data_files:
  - split: val
    path: single/recognition_val.parquet
- config_name: object_recognition_pairs
  data_files:
  - split: val
    path: pairs/recognition_val.parquet
- config_name: visual_prompting_single
  data_files:
  - split: val
    path: single/visual_prompting_val.parquet
- config_name: visual_prompting_pairs
  data_files:
  - split: val
    path: pairs/visual_prompting_val.parquet    
- config_name: spatial_reasoning_lrtb_single
  data_files:
  - split: val
    path: single/spatial_reasoning_val.parquet
- config_name: spatial_reasoning_lrtb_pairs
  data_files:
  - split: val
    path: pairs/spatial_reasoning_val.parquet
- config_name: object_detection_single
  data_files:
  - split: val
    path: single/object_detection_val.parquet
- config_name: object_detection_pairs
  data_files:
  - split: val
    path: pairs/object_detection_val.parquet
---

A key question in understanding multimodal performance is whether a model has a basic versus detailed
understanding of images. These capabilities are needed for models to be used in real-world tasks, such as
acting as an assistant in the physical world. While there are many datasets for object detection and
recognition, few test spatial reasoning or other more targeted tasks such as visual prompting. The datasets
that do exist are static and publicly available, so current AI models may already have been trained on them,
which makes evaluation with them unreliable. We therefore created a procedurally generated, synthetic dataset
that tests spatial reasoning and visual prompting as well as object recognition and detection. The tasks are
challenging for most AI models, and because the benchmark is procedurally generated, it can be regenerated
ad infinitum to create new test sets, countering the risk that results merely reflect memorization of
training data.

This dataset has 4 sub-tasks: Object Recognition, Visual Prompting, Spatial Reasoning, and Object Detection.

For each sub-task, the images consist of objects pasted onto random background images. The objects come from
the COCO object list and are gathered from internet data. Each object is masked using the DeepLabV3
segmentation model and then pasted onto a random background. The objects are pasted in one of four locations
(top, left, bottom, or right) with small amounts of random rotation, positional jitter, and scaling.
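
The generation code itself is not part of this card, but a minimal sketch of the compositing step might look like the following, assuming an RGBA object cutout produced by an upstream masking stage; the location offsets, rotation, jitter, and scale ranges below are illustrative assumptions, not the values used to build the dataset:

```python
# Illustrative sketch of the compositing step (not the actual generation code).
# `obj` is assumed to be an RGBA cutout produced by an upstream masking stage.
import random
from PIL import Image

# Canonical object centers, as fractions of the background size (assumed values).
LOCATIONS = {"top": (0.5, 0.25), "bottom": (0.5, 0.75),
             "left": (0.25, 0.5), "right": (0.75, 0.5)}

def paste_object(background: Image.Image, obj: Image.Image, location: str,
                 max_rotation=15.0, max_jitter=0.05, scale_range=(0.8, 1.2)) -> Image.Image:
    """Paste an RGBA object cutout onto a background with small random perturbations."""
    bg = background.convert("RGB").copy()
    # Random scale and rotation of the cutout.
    scale = random.uniform(*scale_range)
    obj = obj.resize((int(obj.width * scale), int(obj.height * scale)))
    obj = obj.rotate(random.uniform(-max_rotation, max_rotation), expand=True)
    # Canonical location plus positional jitter, in fractions of the image size.
    cx, cy = LOCATIONS[location]
    cx += random.uniform(-max_jitter, max_jitter)
    cy += random.uniform(-max_jitter, max_jitter)
    x = int(cx * bg.width - obj.width / 2)
    y = int(cy * bg.height - obj.height / 2)
    # The cutout's alpha channel is used as the paste mask.
    bg.paste(obj, (x, y), obj)
    return bg
```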

There are two conditions, "single" and "pairs", for images with one or two objects. Each test set uses 20
sets of object classes (either 20 single objects or 20 pairs of objects), four potential locations, and four
background classes, and we sample four instances of each object/background combination. This yields
20 × 4 × 4 × 4 = 1,280 images per condition and sub-task.
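
Each sub-task and condition is exposed as its own configuration with a single `val` split, as listed in the YAML header above. A minimal loading sketch (the repository id is a placeholder):

```python
# Minimal loading sketch; "<repo_id>" is a placeholder for this dataset's
# Hugging Face repository id. Config names and the "val" split come from
# the YAML header above.
from datasets import load_dataset

ds = load_dataset("<repo_id>", "object_recognition_single", split="val")
example = ds[0]
print(example["prompt"])        # question shown to the model
print(example["ground_truth"])  # expected answer, e.g. "book"
example["image"]                # decoded as a PIL image
```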

__Object Recognition__

	Answer type: Open-ended

	Example for "single":

		{"id": "0", "image": "val/book/left/burial_chamber/0000083_0000010.jpg", "prompt": "What objects are in this image?", "ground_truth": "book"}

	Example for "pairs":

		{"id": "0", "image": "val/keyboard_surfboard/left/auto_showroom/0000023_0000044_0000030.jpg", "prompt": "What objects are in this image?", "ground_truth": "['keyboard', 'surfboard']"}

__Visual Prompting__

	Answer type: Open-ended

	Example for "single":

		{"id": "0", "image": "val/book/left/burial_chamber/0000083_0000010.jpg", "prompt": "What object is in the red box in this image?", "ground_truth": "book"}

	Example for "pairs":

		{"id": "0", "image": "val/keyboard_surfboard/left/auto_showroom/0000023_0000044_0000030.jpg", "prompt": "What objects are in the red and yellow box in this image?", "ground_truth": "['keyboard', 'surfboard']"}
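
Both open-ended sub-tasks above (Object Recognition and Visual Prompting) use the same answer format: a plain string in the "single" condition and a stringified Python list in the "pairs" condition. A hedged scoring sketch; the substring-matching criterion is an illustrative assumption, not the official metric:

```python
# Hedged scoring sketch for the open-ended sub-tasks. The matching rule
# (case-insensitive substring containment) is an assumption for illustration.
import ast

def parse_ground_truth(gt: str) -> list[str]:
    """Return the expected object names ("book" or "['keyboard', 'surfboard']")."""
    return list(ast.literal_eval(gt)) if gt.startswith("[") else [gt]

def open_ended_score(model_answer: str, gt: str) -> float:
    """Fraction of ground-truth objects mentioned in the model's answer."""
    targets = parse_ground_truth(gt)
    answer = model_answer.lower()
    return sum(t.lower() in answer for t in targets) / len(targets)

# open_ended_score("A keyboard leaning against a surfboard.", "['keyboard', 'surfboard']") -> 1.0
```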


__Spatial Reasoning__

	Answer type: Multiple Choice

	Example for "single":

		{"id": "0", "image": "val/book/left/burial_chamber/0000083_0000010.jpg", "prompt": "Is the book on the bottom, right, top, or left of the image?\nAnswer with one of (top, bottom, right, or left) only.", "ground_truth": "left", "target_options": ["top", "bottom", "right", "left"]}

	Example for "pairs":

		{"id": "0", "image": "val/keyboard_surfboard/left/auto_showroom/0000023_0000044_0000030.jpg", "prompt": "Is the keyboard right, above, left, or below the surfboard in the image?\nAnswer with one of (below, above, right, or left) only.", "ground_truth": "left", "target_options": ["right", "left", "below", "above"]}

	Evaluation metrics can be disaggregated (grouped) by the ground-truth target option:

	"single": (left, right, top, bottom)
	"pairs": (left, right, above, below)

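A hedged sketch of scoring the multiple-choice responses and disaggregating accuracy by the ground-truth option as described above; the rule for extracting the chosen option from a free-form response is an assumption:

```python
# Hedged sketch: multiple-choice scoring disaggregated by ground-truth option.
# The option-extraction rule (first option mentioned in the response) is an
# illustrative assumption.
from collections import defaultdict

def extract_choice(response: str, options: list[str]):
    """Return the first target option mentioned in the response, if any."""
    text = response.lower()
    for option in options:
        if option.lower() in text:
            return option
    return None

def disaggregated_accuracy(records) -> dict[str, float]:
    """records: iterable of (response, ground_truth, target_options) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for response, ground_truth, options in records:
        totals[ground_truth] += 1
        if extract_choice(response, options) == ground_truth:
            hits[ground_truth] += 1
    return {key: hits[key] / totals[key] for key in totals}

# disaggregated_accuracy([("It is on the left.", "left", ["top", "bottom", "right", "left"])])
# -> {"left": 1.0}
```
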
__Object Detection__

	Answer type: Open-ended

	Example for "single":

		{"id": "0", "image": "val/book/left/burial_chamber/0000083_0000010.jpg", "prompt": "You are an object detection model that aims to detect all the objects in the image.\n\nDefinition of Bounding Box Coordinates:\n\nThe bounding box coordinates (a, b, c, d) represent the normalized positions of the object within the image:\n\na: The x-coordinate of the top-left corner of the bounding box, expressed as a percentage of the image width. It indicates the position from the left side of the image to the object's left boundary. The a ranges from 0.00 to 1.00 with precision of 0.01.\nb: The y-coordinate of the top-left corner of the bounding box, expressed as a percentage of the image height. It indicates the position from the top of the image to the object's top boundary. The b ranges from 0.00 to 1.00 with precision of 0.01.\nc: The x-coordinate of the bottom-right corner of the bounding box, expressed as a percentage of the image width. It indicates the position from the left side of the image to the object's right boundary. The c ranges from 0.00 to 1.00 with precision of 0.01.\nd: The y-coordinate of the bottom-right corner of the bounding box, expressed as a percentage of the image height. It indicates the position from the top of the image to the object's bottom boundary. The d ranges from 0.00 to 1.00 with precision of 0.01.\n\nThe top-left of the image has coordinates (0.00, 0.00). The bottom-right of the image has coordinates (1.00, 1.00).\n\nInstructions:\n1. Specify any particular regions of interest within the image that should be prioritized during object detection.\n2. For all the specified regions that contain the objects, generate the object's category type, bounding box coordinates, and your confidence for the prediction. The bounding box coordinates (a, b, c, d) should be as precise as possible. Do not only output rough coordinates such as (0.1, 0.2, 0.3, 0.4).\n3. If there are more than one object of the same category, output all of them.\n4. Please ensure that the bounding box coordinates are not examples. They should really reflect the position of the objects in the image.\n5.\nReport your results in this output format:\n(a, b, c, d) - category for object 1 - confidence\n(a, b, c, d) - category for object 2 - confidence\n...\n(a, b, c, d) - category for object n - confidence."}

	Example for "pairs":

		{"id": "0", "image": "val/keyboard_surfboard/left/auto_showroom/0000023_0000044_0000030.jpg", "prompt": "You are an object detection model that aims to detect all the objects in the image.\n\nDefinition of Bounding Box Coordinates:\n\nThe bounding box coordinates (a, b, c, d) represent the normalized positions of the object within the image:\n\na: The x-coordinate of the top-left corner of the bounding box, expressed as a percentage of the image width. It indicates the position from the left side of the image to the object's left boundary. The a ranges from 0.00 to 1.00 with precision of 0.01.\nb: The y-coordinate of the top-left corner of the bounding box, expressed as a percentage of the image height. It indicates the position from the top of the image to the object's top boundary. The b ranges from 0.00 to 1.00 with precision of 0.01.\nc: The x-coordinate of the bottom-right corner of the bounding box, expressed as a percentage of the image width. It indicates the position from the left side of the image to the object's right boundary. The c ranges from 0.00 to 1.00 with precision of 0.01.\nd: The y-coordinate of the bottom-right corner of the bounding box, expressed as a percentage of the image height. It indicates the position from the top of the image to the object's bottom boundary. The d ranges from 0.00 to 1.00 with precision of 0.01.\n\nThe top-left of the image has coordinates (0.00, 0.00). The bottom-right of the image has coordinates (1.00, 1.00).\n\nInstructions:\n1. Specify any particular regions of interest within the image that should be prioritized during object detection.\n2. For all the specified regions that contain the objects, generate the object's category type, bounding box coordinates, and your confidence for the prediction. The bounding box coordinates (a, b, c, d) should be as precise as possible. Do not only output rough coordinates such as (0.1, 0.2, 0.3, 0.4).\n3. If there are more than one object of the same category, output all of them.\n4. Please ensure that the bounding box coordinates are not examples. They should really reflect the position of the objects in the image.\n5.\nReport your results in this output format:\n(a, b, c, d) - category for object 1 - confidence\n(a, b, c, d) - category for object 2 - confidence\n...\n(a, b, c, d) - category for object n - confidence."}
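
The object detection configurations contain only the image and prompt (no ground-truth boxes), so scoring is left to the evaluator. A hedged sketch of parsing the requested `(a, b, c, d) - category - confidence` output format, together with an IoU helper for matching predictions against reference boxes:

```python
# Hedged sketch: parse "(a, b, c, d) - category - confidence" lines from a model
# response and compute IoU between normalized boxes. Matching and the final
# metric are left to the evaluator.
import re

LINE_RE = re.compile(
    r"\(\s*([\d.]+)\s*,\s*([\d.]+)\s*,\s*([\d.]+)\s*,\s*([\d.]+)\s*\)\s*-\s*(.+?)\s*-\s*([\d.]+)"
)

def parse_detections(response: str):
    """Yield (box, category, confidence) tuples from a model response."""
    for a, b, c, d, category, conf in LINE_RE.findall(response):
        yield (float(a), float(b), float(c), float(d)), category, float(conf)

def iou(box1, box2) -> float:
    """Intersection-over-union for (x1, y1, x2, y2) boxes in [0, 1] coordinates."""
    x1, y1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    x2, y2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter) if inter else 0.0

# list(parse_detections("(0.12, 0.30, 0.45, 0.80) - book - 0.92"))
# -> [((0.12, 0.30, 0.45, 0.80), 'book', 0.92)]
```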