Modalities: Image, Text
Formats: parquet
Languages: English
Libraries: Datasets, pandas
Committed by ziyaosg
Commit aa4827b (1 parent: 7e35bc4)

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -82,7 +82,7 @@ This repository contains the QAs of the following paper:
 [Yuxuan Ding](https://scholar.google.com/citations?user=jdsf4z4AAAAJ)<sup>1</sup>,&nbsp;
 [Yanan Zheng](https://scholar.google.com/citations?user=0DqJ8eIAAAAJ)<sup>1</sup>,&nbsp;
 [Yilun Zhao](https://yilunzhao.github.io/)<sup>1</sup>,&nbsp;
-[Tesca Fizgerald](https://www.tescafitzgerald.com/)<sup>1</sup>,&nbsp;
+[Tesca Fitzgerald](https://www.tescafitzgerald.com/)<sup>1</sup>,&nbsp;
 [Arman Cohan](https://armancohan.com/)<sup>1</sup><sup>2</sup> <br>
 >*Equal contribution. <br>
 ><sup>1</sup>Yale University &nbsp;<sup>2</sup>Allen Institute of AI <sup>
@@ -95,7 +95,7 @@ This repository contains the QAs of the following paper:
 
 Our study of existing benchmarks shows that visual temporal reasoning capabilities of Multimodal Foundation Models (MFMs) are likely overestimated as many questions can be solved by using a single, few, or out-of-order frames. To systematically examine current visual temporal reasoning tasks, we propose three principles with corresponding metrics: (1) *Multi-Frame Gain*, (2) *Frame Order Sensitivity*, and (3) *Frame Information Disparity*.
 
-Following these principles, we introduce TOMATO, a novel benchmark crafted to rigorously assess MFMs' temporal reasoning capabilities in video understanding. TOMATO comprises 1,484 carefully curated, human-annotated questions spanning 6 tasks (i.e. *action count*, *direction*, *rotation*, *shape&trend*, *velocity&frequency*, and *visual cues*), applied to 1,417 videos, including 805 self-recorded and -generated videos, that encompass 3 video scenarios (i.e. *human-centric*, *real-world*, and *simulated*). In the 805 self-created videos, we apply editting to incorporate *counterfactual scenes*, *composite motions*, and *zoomed-in* views, aiming to investigate the impact of these characteristics on the performance of MFMs.
+Following these principles, we introduce TOMATO, a novel benchmark crafted to rigorously assess MFMs' temporal reasoning capabilities in video understanding. TOMATO comprises 1,484 carefully curated, human-annotated questions spanning 6 tasks (i.e. *action count*, *direction*, *rotation*, *shape&trend*, *velocity&frequency*, and *visual cues*), applied to 1,417 videos, including 805 self-recorded and -generated videos, that encompass 3 video scenarios (i.e. *human-centric*, *real-world*, and *simulated*). In the 805 self-created videos, we apply editing to incorporate *counterfactual scenes*, *composite motions*, and *zoomed-in* views, aiming to investigate the impact of these characteristics on the performance of MFMs.
 
 ### Task Examples
 
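Since the card above lists parquet as the storage format and Datasets/pandas as supported libraries, here is a minimal loading sketch. The repository id `yale-nlp/TOMATO` is an assumption (inferred from the paper's affiliation, not stated on this page); substitute the actual dataset id for this repo.

```python
# Minimal sketch: load the TOMATO QA records with the Hugging Face `datasets`
# library and inspect them with pandas. The repo id below is an assumption --
# replace it with this dataset's actual id.
from datasets import load_dataset

ds = load_dataset("yale-nlp/TOMATO")   # hypothetical repository id
split = next(iter(ds))                 # first available split name
print(ds[split][0])                    # one human-annotated QA record

df = ds[split].to_pandas()             # parquet-backed, so conversion is cheap
print(df.head())
```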