---
license: cdla-permissive-2.0
dataset_info:
- config_name: object_recognition_single
features:
- name: id
dtype: int32
- name: image
dtype: image
- name: prompt
dtype: string
- name: ground_truth
dtype: string
- config_name: object_recognition_pairs
features:
- name: id
dtype: int32
- name: image
dtype: image
- name: prompt
dtype: string
- name: ground_truth
dtype: string
- config_name: visual_prompting_single
features:
- name: id
dtype: int32
- name: image
dtype: image
- name: prompt
dtype: string
- name: ground_truth
dtype: string
- config_name: visual_prompting_pairs
features:
- name: id
dtype: int32
- name: image
dtype: image
- name: prompt
dtype: string
- name: ground_truth
dtype: string
- config_name: spatial_reasoning_lrtb_single
features:
- name: id
dtype: int32
- name: image
dtype: image
- name: prompt
dtype: string
- name: ground_truth
dtype: string
- name: target_options
dtype: string
- config_name: spatial_reasoning_lrtb_pairs
features:
- name: id
dtype: int32
- name: image
dtype: image
- name: prompt
dtype: string
- name: ground_truth
dtype: string
- name: target_options
dtype: string
- config_name: object_detection_single
features:
- name: id
dtype: int32
- name: image
dtype: image
- name: prompt
dtype: string
- config_name: object_detection_pairs
features:
- name: id
dtype: int32
- name: image
dtype: image
- name: prompt
dtype: string
configs:
- config_name: object_recognition_single
data_files:
- split: val
path: single/recognition_val.parquet
- config_name: object_recognition_pairs
data_files:
- split: val
path: pairs/recognition_val.parquet
- config_name: visual_prompting_single
data_files:
- split: val
path: single/visual_prompting_val.parquet
- config_name: visual_prompting_pairs
data_files:
- split: val
path: pairs/visual_prompting_val.parquet
- config_name: spatial_reasoning_lrtb_single
data_files:
- split: val
path: single/spatial_reasoning_val.parquet
- config_name: spatial_reasoning_lrtb_pairs
data_files:
- split: val
path: pairs/spatial_reasoning_val.parquet
- config_name: object_detection_single
data_files:
- split: val
path: single/object_detection_val.parquet
- config_name: object_detection_pairs
data_files:
- split: val
path: pairs/object_detection_val.parquet
---
A key question for understanding multimodal performance is whether a model has both basic and detailed
understanding of images. These capabilities are needed for models to be used in real-world tasks, such as
acting as an assistant in the physical world. While there are many datasets for object detection and
recognition, few test spatial reasoning or more targeted tasks such as visual prompting. The datasets that
do exist are static and publicly available, so current AI models may already have been trained on them,
which makes evaluation with them unreliable. We therefore created a synthetic, procedurally generated dataset
that tests spatial reasoning and visual prompting as well as object recognition and detection. The tasks are
challenging for most AI models, and because the benchmark is procedurally generated it can be regenerated
ad infinitum to create new test sets, countering the risk that results merely reflect memorization of
training data.
This dataset has 4 sub-tasks: Object Recognition, Visual Prompting, Spatial Reasoning, and Object Detection.
For each sub-task, the images are created by pasting objects onto random background images. The objects are
drawn from the COCO object list and gathered from internet data. Each object is masked using the DeepLabV3
segmentation model and then pasted onto a random background in one of four locations (top, left, bottom, or
right), with small amounts of random rotation, positional jitter, and scale.
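The generation pipeline itself is not included in this card, but the pasting step can be sketched roughly as follows. This is a minimal illustration with Pillow; the function name and the scale, rotation, and jitter ranges are assumptions, not the actual generation code.

```python
import random
from PIL import Image

def paste_object(background: Image.Image, obj_rgba: Image.Image, location: str) -> Image.Image:
    """Paste a masked (RGBA) object crop onto a background at one of four coarse
    locations, with small random rotation, positional jitter, and scale.
    Illustrative only; the real parameter ranges are not published here."""
    bg = background.convert("RGB").copy()
    W, H = bg.size

    # Resize the object so its longer side is roughly a third of the background,
    # perturbed by a small random scale factor, then apply a small rotation.
    scale = (min(W, H) / 3) / max(obj_rgba.size) * random.uniform(0.8, 1.2)
    obj = obj_rgba.resize((max(1, int(obj_rgba.width * scale)),
                           max(1, int(obj_rgba.height * scale))))
    obj = obj.rotate(random.uniform(-15, 15), expand=True)

    # Coarse anchor for each of the four locations, plus positional jitter.
    anchors = {
        "left":   (int(0.05 * W), (H - obj.height) // 2),
        "right":  (W - obj.width - int(0.05 * W), (H - obj.height) // 2),
        "top":    ((W - obj.width) // 2, int(0.05 * H)),
        "bottom": ((W - obj.width) // 2, H - obj.height - int(0.05 * H)),
    }
    x, y = anchors[location]
    x += random.randint(-10, 10)
    y += random.randint(-10, 10)

    # The alpha channel of the masked object serves as the paste mask.
    bg.paste(obj, (x, y), obj)
    return bg
```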
There are 2 conditions, "single" and "pairs", for images with one or two objects. Each test set uses 20
sets of object classes (either 20 single objects or 20 pairs of objects), with four possible locations and
four background classes, and 4 sampled instances of each object and background. This results in
20 × 4 × 4 × 4 = 1280 images per condition and sub-task.
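Each configuration above is a single parquet file exposed as a `val` split, so it can be loaded directly with the Hugging Face `datasets` library. A minimal loading sketch; the repository id below is a placeholder, not the actual hub path of this dataset:

```python
from datasets import load_dataset

# Placeholder repo id; replace with this dataset's actual hub path.
REPO_ID = "<namespace>/IMAGE_UNDERSTANDING"

# Each config (e.g. object_recognition_single, spatial_reasoning_lrtb_pairs,
# object_detection_pairs) has a single "val" split backed by one parquet file.
ds = load_dataset(REPO_ID, "object_recognition_single", split="val")

example = ds[0]
print(example["prompt"])        # e.g. "What objects are in this image?"
print(example["ground_truth"])  # e.g. "book"
print(example["image"].size)    # decoded as a PIL image
```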
__Object Recognition__

Answer type: Open-ended

Example for "single":

{"id": "0", "image": "val/book/left/burial_chamber/0000083_0000010.jpg", "prompt": "What objects are in this image?", "ground_truth": "book"}

Example for "pairs":

{"id": "0", "image": "val/keyboard_surfboard/left/auto_showroom/0000023_0000044_0000030.jpg", "prompt": "What objects are in this image?", "ground_truth": "['keyboard', 'surfboard']"}
__Visual Prompting__

Answer type: Open-ended

Example for "single":

{"id": "0", "image": "val/book/left/burial_chamber/0000083_0000010.jpg", "prompt": "What object is in the red box in this image?", "ground_truth": "book"}

Example for "pairs":

{"id": "0", "image": "val/keyboard_surfboard/left/auto_showroom/0000023_0000044_0000030.jpg", "prompt": "What objects are in the red and yellow box in this image?", "ground_truth": "['keyboard', 'surfboard']"}
__Spatial Reasoning__

Answer type: Multiple Choice

Example for "single":

{"id": "0", "image": "val/book/left/burial_chamber/0000083_0000010.jpg", "prompt": "Is the book on the bottom, right, top, or left of the image?\nAnswer with one of (top, bottom, right, or left) only.", "ground_truth": "left", "target_options": ["top", "bottom", "right", "left"]}

Example for "pairs":

{"id": "0", "image": "val/keyboard_surfboard/left/auto_showroom/0000023_0000044_0000030.jpg", "prompt": "Is the keyboard right, above, left, or below the surfboard in the image?\nAnswer with one of (below, above, right, or left) only.", "ground_truth": "left", "target_options": ["right", "left", "below", "above"]}
For evaluation, metrics should be disaggregated (grouped) by the ground-truth answer:
"single": (left, right, top, bottom)
"pairs": (left, right, above, below)
__Object Detection__

Answer type: Open-ended

Example for "single":
{"id": "0", "image": "val/book/left/burial_chamber/0000083_0000010.jpg", "prompt": "You are an object detection model that aims to detect all the objects in the image.\n\nDefinition of Bounding Box Coordinates:\n\nThe bounding box coordinates (a, b, c, d) represent the normalized positions of the object within the image:\n\na: The x-coordinate of the top-left corner of the bounding box, expressed as a percentage of the image width. It indicates the position from the left side of the image to the object's left boundary. The a ranges from 0.00 to 1.00 with precision of 0.01.\nb: The y-coordinate of the top-left corner of the bounding box, expressed as a percentage of the image height. It indicates the position from the top of the image to the object's top boundary. The b ranges from 0.00 to 1.00 with precision of 0.01.\nc: The x-coordinate of the bottom-right corner of the bounding box, expressed as a percentage of the image width. It indicates the position from the left side of the image to the object's right boundary. The c ranges from 0.00 to 1.00 with precision of 0.01.\nd: The y-coordinate of the bottom-right corner of the bounding box, expressed as a percentage of the image height. It indicates the position from the top of the image to the object's bottom boundary. The d ranges from 0.00 to 1.00 with precision of 0.01.\n\nThe top-left of the image has coordinates (0.00, 0.00). The bottom-right of the image has coordinates (1.00, 1.00).\n\nInstructions:\n1. Specify any particular regions of interest within the image that should be prioritized during object detection.\n2. For all the specified regions that contain the objects, generate the object's category type, bounding box coordinates, and your confidence for the prediction. The bounding box coordinates (a, b, c, d) should be as precise as possible. Do not only output rough coordinates such as (0.1, 0.2, 0.3, 0.4).\n3. If there are more than one object of the same category, output all of them.\n4. Please ensure that the bounding box coordinates are not examples. They should really reflect the position of the objects in the image.\n5.\nReport your results in this output format:\n(a, b, c, d) - category for object 1 - confidence\n(a, b, c, d) - category for object 2 - confidence\n...\n(a, b, c, d) - category for object n - confidence."}
Example for "pairs":
{"id": "0", "image": "val/keyboard_surfboard/left/auto_showroom/0000023_0000044_0000030.jpg", "prompt": "You are an object detection model that aims to detect all the objects in the image.\n\nDefinition of Bounding Box Coordinates:\n\nThe bounding box coordinates (a, b, c, d) represent the normalized positions of the object within the image:\n\na: The x-coordinate of the top-left corner of the bounding box, expressed as a percentage of the image width. It indicates the position from the left side of the image to the object's left boundary. The a ranges from 0.00 to 1.00 with precision of 0.01.\nb: The y-coordinate of the top-left corner of the bounding box, expressed as a percentage of the image height. It indicates the position from the top of the image to the object's top boundary. The b ranges from 0.00 to 1.00 with precision of 0.01.\nc: The x-coordinate of the bottom-right corner of the bounding box, expressed as a percentage of the image width. It indicates the position from the left side of the image to the object's right boundary. The c ranges from 0.00 to 1.00 with precision of 0.01.\nd: The y-coordinate of the bottom-right corner of the bounding box, expressed as a percentage of the image height. It indicates the position from the top of the image to the object's bottom boundary. The d ranges from 0.00 to 1.00 with precision of 0.01.\n\nThe top-left of the image has coordinates (0.00, 0.00). The bottom-right of the image has coordinates (1.00, 1.00).\n\nInstructions:\n1. Specify any particular regions of interest within the image that should be prioritized during object detection.\n2. For all the specified regions that contain the objects, generate the object's category type, bounding box coordinates, and your confidence for the prediction. The bounding box coordinates (a, b, c, d) should be as precise as possible. Do not only output rough coordinates such as (0.1, 0.2, 0.3, 0.4).\n3. If there are more than one object of the same category, output all of them.\n4. Please ensure that the bounding box coordinates are not examples. They should really reflect the position of the objects in the image.\n5.\nReport your results in this output format:\n(a, b, c, d) - category for object 1 - confidence\n(a, b, c, d) - category for object 2 - confidence\n...\n(a, b, c, d) - category for object n - confidence."}