Update README.md
README.md (CHANGED)
@@ -141,11 +141,11 @@ __Object Detection__

Example for "single":

-
{"
+
{"id": "0", "image": "val/book/left/burial_chamber/0000083_0000010.jpg", "prompt": "You are an object detection model that aims to detect all the objects in the image.\n\nDefinition of Bounding Box Coordinates:\n\nThe bounding box coordinates (a, b, c, d) represent the normalized positions of the object within the image:\n\na: The x-coordinate of the top-left corner of the bounding box, expressed as a percentage of the image width. It indicates the position from the left side of the image to the object's left boundary. The a ranges from 0.00 to 1.00 with precision of 0.01.\nb: The y-coordinate of the top-left corner of the bounding box, expressed as a percentage of the image height. It indicates the position from the top of the image to the object's top boundary. The b ranges from 0.00 to 1.00 with precision of 0.01.\nc: The x-coordinate of the bottom-right corner of the bounding box, expressed as a percentage of the image width. It indicates the position from the left side of the image to the object's right boundary. The c ranges from 0.00 to 1.00 with precision of 0.01.\nd: The y-coordinate of the bottom-right corner of the bounding box, expressed as a percentage of the image height. It indicates the position from the top of the image to the object's bottom boundary. The d ranges from 0.00 to 1.00 with precision of 0.01.\n\nThe top-left of the image has coordinates (0.00, 0.00). The bottom-right of the image has coordinates (1.00, 1.00).\n\nInstructions:\n1. Specify any particular regions of interest within the image that should be prioritized during object detection.\n2. For all the specified regions that contain the objects, generate the object's category type, bounding box coordinates, and your confidence for the prediction. The bounding box coordinates (a, b, c, d) should be as precise as possible. Do not only output rough coordinates such as (0.1, 0.2, 0.3, 0.4).\n3. If there are more than one object of the same category, output all of them.\n4. Please ensure that the bounding box coordinates are not examples. They should really reflect the position of the objects in the image.\n5.\nReport your results in this output format:\n(a, b, c, d) - category for object 1 - confidence\n(a, b, c, d) - category for object 2 - confidence\n...\n(a, b, c, d) - category for object n - confidence."}

Example for "pairs":

-
{"
+
{"id": "0", "image": "val/keyboard_surfboard/left/auto_showroom/0000023_0000044_0000030.jpg", "prompt": "You are an object detection model that aims to detect all the objects in the image.\n\nDefinition of Bounding Box Coordinates:\n\nThe bounding box coordinates (a, b, c, d) represent the normalized positions of the object within the image:\n\na: The x-coordinate of the top-left corner of the bounding box, expressed as a percentage of the image width. It indicates the position from the left side of the image to the object's left boundary. The a ranges from 0.00 to 1.00 with precision of 0.01.\nb: The y-coordinate of the top-left corner of the bounding box, expressed as a percentage of the image height. It indicates the position from the top of the image to the object's top boundary. The b ranges from 0.00 to 1.00 with precision of 0.01.\nc: The x-coordinate of the bottom-right corner of the bounding box, expressed as a percentage of the image width. It indicates the position from the left side of the image to the object's right boundary. The c ranges from 0.00 to 1.00 with precision of 0.01.\nd: The y-coordinate of the bottom-right corner of the bounding box, expressed as a percentage of the image height. It indicates the position from the top of the image to the object's bottom boundary. The d ranges from 0.00 to 1.00 with precision of 0.01.\n\nThe top-left of the image has coordinates (0.00, 0.00). The bottom-right of the image has coordinates (1.00, 1.00).\n\nInstructions:\n1. Specify any particular regions of interest within the image that should be prioritized during object detection.\n2. For all the specified regions that contain the objects, generate the object's category type, bounding box coordinates, and your confidence for the prediction. The bounding box coordinates (a, b, c, d) should be as precise as possible. Do not only output rough coordinates such as (0.1, 0.2, 0.3, 0.4).\n3. If there are more than one object of the same category, output all of them.\n4. Please ensure that the bounding box coordinates are not examples. They should really reflect the position of the objects in the image.\n5.\nReport your results in this output format:\n(a, b, c, d) - category for object 1 - confidence\n(a, b, c, d) - category for object 2 - confidence\n...\n(a, b, c, d) - category for object n - confidence."}

__Object Recognition__

@@ -153,11 +153,11 @@ __Object Recognition__

Example for "single"

-
{"
+
{"id": "0", "image": "val/book/left/burial_chamber/0000083_0000010.jpg", "prompt": "What objects are in this image?", "ground_truth": "book"}

Example for "pairs":

-
{"
+
{"id": "0", "image": "val/keyboard_surfboard/left/auto_showroom/0000023_0000044_0000030.jpg", "prompt": "What objects are in this image?", "ground_truth": "['keyboard', 'surfboard']"}

__Spatial Reasoning__

@@ -165,15 +165,11 @@ __Spatial Reasoning__

Example for "single"

-
{"
"query_text": "Is the potted plant on the right, top, left, or bottom of the image?\nAnswer with one of (right, bottom, top, or left) only.",
"target_text": "left"}
+
{"id": "0", "image": "val/book/left/burial_chamber/0000083_0000010.jpg", "prompt": "Is the book on the bottom, right, top, or left of the image?\nAnswer with one of (top, bottom, right, or left) only.", "ground_truth": "left", "target_options": ["top", "bottom", "right", "left"]}

Example for "pairs"

-
{"
"query_text": "Is the bottle above, below, right, or left of the keyboard in the image?\nAnswer with one of (below, right, left, or above) only.",
"target_text": "left"}
+
{"id": "0", "image": "val/keyboard_surfboard/left/auto_showroom/0000023_0000044_0000030.jpg", "prompt": "Is the keyboard right, above, left, or below the surfboard in the image?\nAnswer with one of (below, above, right, or left) only.", "ground_truth": "left", "target_options": ["right", "left", "below", "above"]}

What are the evaluation disaggregation pivots/attributes to run metrics for?

@@ -188,8 +184,8 @@ Answer type: Open-ended

Example for "single"

-
{"
+
{"id": "0", "image": "val/book/left/burial_chamber/0000083_0000010.jpg", "prompt": "What object is in the red box in this image?", "ground_truth": "book"}

Example for "pairs":

-
{"
+
{"id": "0", "image": "val/keyboard_surfboard/left/auto_showroom/0000023_0000044_0000030.jpg", "prompt": "What objects are in the red and yellow box in this image?", "ground_truth": "['keyboard', 'surfboard']"}
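
As with object recognition, these boxed-object answers are open-ended, and `ground_truth` again holds either one object name or a stringified list. A loose containment check like the one below is one possible way to score them (again an assumption, not the benchmark's official metric):

```python
import ast

def grounding_answer_correct(response, ground_truth):
    """Return True if every ground-truth object name appears somewhere in the model's answer."""
    if isinstance(ground_truth, str) and ground_truth.startswith("["):
        expected = ast.literal_eval(ground_truth)  # e.g. "['keyboard', 'surfboard']"
    elif isinstance(ground_truth, str):
        expected = [ground_truth]
    else:
        expected = list(ground_truth)
    text = response.lower()
    return all(name.lower() in text for name in expected)
```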