---
license: cdla-permissive-2.0
dataset_info:
- config_name: object_detection_single
  features:
  - name: id
    dtype: int32
  - name: image
    dtype: image
  - name: prompt
    dtype: string
- config_name: object_detection_pairs
  features:
  - name: id
    dtype: int32
  - name: image
    dtype: image
  - name: prompt
    dtype: string
- config_name: object_recognition_single
  features:
  - name: id
    dtype: int32
  - name: image
    dtype: image
  - name: prompt
    dtype: string
  - name: ground_truth
    dtype: string
- config_name: object_recognition_pairs
  features:
  - name: id
    dtype: int32
  - name: image
    dtype: image
  - name: prompt
    dtype: string
  - name: ground_truth
    dtype: string
- config_name: spatial_reasoning_lrtb_single
  features:
  - name: id
    dtype: int32
  - name: image
    dtype: image
  - name: prompt
    dtype: string
  - name: ground_truth
    dtype: string
- config_name: spatial_reasoning_lrtb_pairs
  features:
  - name: id
    dtype: int32
  - name: image
    dtype: image
  - name: prompt
    dtype: string
  - name: ground_truth
    dtype: string
- config_name: visual_prompting_single
  features:
  - name: id
    dtype: int32
  - name: image
    dtype: image
  - name: prompt
    dtype: string
  - name: ground_truth
    dtype: string
- config_name: visual_prompting_pairs
  features:
  - name: id
    dtype: int32
  - name: image
    dtype: image
  - name: prompt
    dtype: string
  - name: ground_truth
    dtype: string
configs:
- config_name: object_detection_single
  data_files:
  - split: val
    path: object_detection_single/object_detection_val_long_prompt.parquet
- config_name: object_detection_pairs
  data_files:
  - split: val
    path: object_detection_pairs/object_detection_val_long_prompt.parquet
- config_name: object_recognition_single
  data_files:
  - split: val
    path: spatial_reasoning_lrtb_single/recognition_val.parquet
- config_name: object_recognition_pairs
  data_files:
  - split: val
    path: spatial_reasoning_lrtb_pairs/recognition_val.parquet
- config_name: spatial_reasoning_lrtb_single
  data_files:
  - split: val
    path: spatial_reasoning_lrtb_single/spatial_reasoning_lrtb_single.parquet
- config_name: spatial_reasoning_lrtb_pairs
  data_files:
  - split: val
    path: spatial_reasoning_lrtb_pairs/spatial_reasoning_lrtb_pairs.parquet
- config_name: visual_prompting_single
  data_files:
  - split: val
    path: visual_prompting_single/visual_prompting_val.parquet
- config_name: visual_prompting_pairs
  data_files:
  - split: val
    path: visual_prompting_pairs/visual_prompting_val.parquet
---

A key question for understanding multimodal performance is whether a model has a basic versus a detailed understanding of images. These capabilities are needed for models to be useful in real-world tasks, such as acting as an assistant in the physical world. While there are many datasets for object detection and recognition, few test spatial reasoning or other more targeted tasks such as visual prompting. The datasets that do exist are static and publicly available, so there is concern that current AI models have been trained on them, which makes evaluation with them unreliable. We therefore created a procedurally generated, synthetic dataset that tests spatial reasoning and visual prompting as well as object recognition and detection. The tasks are challenging for most AI models, and because the benchmark is procedurally generated it can be regenerated ad infinitum to create new test sets, countering the risk that models have been trained on this data and that results reflect memorization.

This dataset has four sub-tasks: Object Recognition, Visual Prompting, Spatial Reasoning, and Object Detection.

For each sub-task, the images consist of objects pasted onto random background images. The objects are from the COCO object list and are gathered from internet data. Each object is masked using the DeepLabV3 segmentation model and then pasted onto a random background from the Places365 dataset. The objects are pasted in one of four locations (top, left, bottom, or right) with small amounts of random rotation, positional jitter, and scale variation.

There are two conditions, "single" and "pairs", for images with one or two objects. Each test set uses 20 sets of object classes (either 20 single objects or 20 pairs of objects), with four potential locations and four background classes, and we sample four instances of each object and background. This results in 1280 images per condition and sub-task (20 × 4 × 4 × 4).

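Each sub-task and condition is exposed as its own configuration (see the YAML header above). A minimal loading sketch with the `datasets` library follows; the repository id below is a placeholder, since it depends on where this dataset is hosted on the Hugging Face Hub:

```python
from datasets import load_dataset

# Placeholder repository id: substitute the actual Hub id of this dataset.
ds = load_dataset("<org>/<dataset-name>", "spatial_reasoning_lrtb_single", split="val")

example = ds[0]
print(example["prompt"])        # the question posed to the model
print(example["ground_truth"])  # the expected answer, e.g. "left"
example["image"]                # PIL image of the object(s) pasted on a background
```
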
__Object Detection__

Answer type: Open-ended

Example for "single":

{"images": ["val\\banana\\left\\fire_station\\0000075_Places365_val_00030609.jpg"], "prompt": "You are an object detection model that aims to detect all the objects in the image.\n\nDefinition of Bounding Box Coordinates:\n\nThe bounding box coordinates (a, b, c, d) represent the normalized positions of the object within the image:\n\na: The x-coordinate of the top-left corner of the bounding box, expressed as a percentage of the image width. It indicates the position from the left side of the image to the object's left boundary. The a ranges from 0.00 to 1.00 with precision of 0.01.\nb: The y-coordinate of the top-left corner of the bounding box, expressed as a percentage of the image height. It indicates the position from the top of the image to the object's top boundary. The b ranges from 0.00 to 1.00 with precision of 0.01.\nc: The x-coordinate of the bottom-right corner of the bounding box, expressed as a percentage of the image width. It indicates the position from the left side of the image to the object's right boundary. The c ranges from 0.00 to 1.00 with precision of 0.01.\nd: The y-coordinate of the bottom-right corner of the bounding box, expressed as a percentage of the image height. It indicates the position from the top of the image to the object's bottom boundary. The d ranges from 0.00 to 1.00 with precision of 0.01.\n\nThe top-left of the image has coordinates (0.00, 0.00). The bottom-right of the image has coordinates (1.00, 1.00).\n\nInstructions:\n1. Specify any particular regions of interest within the image that should be prioritized during object detection.\n2. For all the specified regions that contain the objects, generate the object's category type, bounding box coordinates, and your confidence for the prediction. The bounding box coordinates (a, b, c, d) should be as precise as possible. Do not only output rough coordinates such as (0.1, 0.2, 0.3, 0.4).\n3. If there are more than one object of the same category, output all of them.\n4. Please ensure that the bounding box coordinates are not examples. They should really reflect the position of the objects in the image.\n5.\nReport your results in this output format:\n(a, b, c, d) - category for object 1 - confidence\n(a, b, c, d) - category for object 2 - confidence\n...\n(a, b, c, d) - category for object n - confidence."}

Example for "pairs":

{"images": ["val\\hair drier_broccoli\\left\\church-indoor\\0000030_0000059_Places365_val_00000401.jpg"], "prompt": "You are an object detection model that aims to detect all the objects in the image.\n\nDefinition of Bounding Box Coordinates:\n\nThe bounding box coordinates (a, b, c, d) represent the normalized positions of the object within the image:\n\na: The x-coordinate of the top-left corner of the bounding box, expressed as a percentage of the image width. It indicates the position from the left side of the image to the object's left boundary. The a ranges from 0.00 to 1.00 with precision of 0.01.\nb: The y-coordinate of the top-left corner of the bounding box, expressed as a percentage of the image height. It indicates the position from the top of the image to the object's top boundary. The b ranges from 0.00 to 1.00 with precision of 0.01.\nc: The x-coordinate of the bottom-right corner of the bounding box, expressed as a percentage of the image width. It indicates the position from the left side of the image to the object's right boundary. The c ranges from 0.00 to 1.00 with precision of 0.01.\nd: The y-coordinate of the bottom-right corner of the bounding box, expressed as a percentage of the image height. It indicates the position from the top of the image to the object's bottom boundary. The d ranges from 0.00 to 1.00 with precision of 0.01.\n\nThe top-left of the image has coordinates (0.00, 0.00). The bottom-right of the image has coordinates (1.00, 1.00).\n\nInstructions:\n1. Specify any particular regions of interest within the image that should be prioritized during object detection.\n2. For all the specified regions that contain the objects, generate the object's category type, bounding box coordinates, and your confidence for the prediction. The bounding box coordinates (a, b, c, d) should be as precise as possible. Do not only output rough coordinates such as (0.1, 0.2, 0.3, 0.4).\n3. If there are more than one object of the same category, output all of them.\n4. Please ensure that the bounding box coordinates are not examples. They should really reflect the position of the objects in the image.\n5.\nReport your results in this output format:\n(a, b, c, d) - category for object 1 - confidence\n(a, b, c, d) - category for object 2 - confidence\n...\n(a, b, c, d) - category for object n - confidence."}

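Model responses for this sub-task are expected to follow the `(a, b, c, d) - category - confidence` line format requested in the prompt. Below is a minimal parsing sketch; the helper name and regex are our own assumptions, not part of the dataset:

```python
import re

# Matches lines like "(0.12, 0.34, 0.56, 0.78) - banana - 0.97"
LINE_RE = re.compile(
    r"\(\s*([\d.]+)\s*,\s*([\d.]+)\s*,\s*([\d.]+)\s*,\s*([\d.]+)\s*\)"
    r"\s*-\s*(.+?)\s*-\s*([\d.]+)"
)

def parse_detections(response: str):
    """Parse '(a, b, c, d) - category - confidence' lines into dicts."""
    detections = []
    for line in response.splitlines():
        m = LINE_RE.search(line)
        if m:
            a, b, c, d = (float(m.group(i)) for i in range(1, 5))
            detections.append({
                "box": (a, b, c, d),          # normalized corners in [0, 1]
                "category": m.group(5),
                "confidence": float(m.group(6)),
            })
    return detections
```
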
__Object Recognition__

Answer type: Open-ended

Example for "single":

{"images": ["val\\potted plant\\left\\ruin\\0000097_Places365_val_00018147.jpg"], "prompt": "What objects are in this image?", "ground_truth": "potted plant"}

Example for "pairs":

{"images": ["val\\bottle_keyboard\\left\\ruin\\0000087_0000069_Places365_val_00035062.jpg"], "prompt": "What objects are in this image?", "ground_truth": "['bottle', 'keyboard']"}

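Note that "single" examples store `ground_truth` as a plain string, while "pairs" examples store it as a stringified Python list. A small normalization sketch (the helper name is ours, not part of the dataset):

```python
import ast

def parse_ground_truth(gt: str) -> list[str]:
    """Normalize ground truth to a list of class names.

    "single" examples store a plain string (e.g. "potted plant"), while
    "pairs" examples store a stringified list (e.g. "['bottle', 'keyboard']").
    """
    gt = gt.strip()
    if gt.startswith("["):
        return list(ast.literal_eval(gt))
    return [gt]

assert parse_ground_truth("potted plant") == ["potted plant"]
assert parse_ground_truth("['bottle', 'keyboard']") == ["bottle", "keyboard"]
```
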
__Spatial Reasoning__

Answer type: Multiple Choice

Example for "single":

{"images": ["val\\potted plant\\left\\ruin\\0000097_Places365_val_00018147.jpg"], "query_text": "Is the potted plant on the right, top, left, or bottom of the image?\nAnswer with one of (right, bottom, top, or left) only.", "target_text": "left"}

Example for "pairs":

{"images": ["val\\bottle_keyboard\\left\\ruin\\0000087_0000069_Places365_val_00035062.jpg"], "query_text": "Is the bottle above, below, right, or left of the keyboard in the image?\nAnswer with one of (below, right, left, or above) only.", "target_text": "left"}

Evaluation metrics should be disaggregated (grouped) by the ground-truth location, with a sketch of this after the list:

- "single": (left, right, top, bottom)
- "pairs": (left, right, above, below)

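A minimal sketch of such disaggregation, assuming model predictions have already been collected into a table (the column names here are illustrative, not part of the dataset):

```python
import pandas as pd

# Hypothetical results table: one row per example, with the model's answer,
# the ground-truth location, and the condition ("single" or "pairs").
results = pd.DataFrame({
    "condition":    ["single", "single", "pairs", "pairs"],
    "ground_truth": ["left", "top", "above", "below"],
    "prediction":   ["left", "bottom", "above", "left"],
})

results["correct"] = results["prediction"] == results["ground_truth"]

# Accuracy disaggregated by condition and ground-truth location.
print(results.groupby(["condition", "ground_truth"])["correct"].mean())
```
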
__Visual Prompting__

Answer type: Open-ended

Example for "single":

{"images": ["val\\potted plant\\left\\ruin\\0000097_Places365_val_00018147.jpg"], "prompt": "What objects are in this image?", "ground_truth": "potted plant"}

Example for "pairs":

{"images": ["val\\sheep_banana\\left\\landfill\\0000099_0000001_Places365_val_00031238.jpg"], "prompt": "What objects are in the red and yellow box in this image?", "ground_truth": "['sheep', 'banana']"}