---
license: cc-by-sa-4.0
configs:
- config_name: default
  data_files:
  - split: count
    path: data/count-*
  - split: direction
    path: data/direction-*
  - split: rotation
    path: data/rotation-*
  - split: shape_trend
    path: data/shape_trend-*
  - split: velocity_frequency
    path: data/velocity_frequency-*
  - split: visual_cues
    path: data/visual_cues-*
dataset_info:
  features:
  - name: question
    dtype: string
  - name: demonstration_type
    dtype: string
  - name: variation
    struct:
    - name: composite
      dtype: int64
    - name: counterfactual
      dtype: int64
    - name: first_person
      dtype: int64
    - name: zoom
      dtype: int64
  - name: motion_type
    dtype: string
  - name: answer
    dtype: int64
  - name: note
    dtype: string
  - name: key
    dtype: string
  - name: options
    sequence: string
  - name: video_source_url
    dtype: string
  splits:
  - name: count
    num_bytes: 60102
    num_examples: 292
  - name: direction
    num_bytes: 124629
    num_examples: 403
  - name: rotation
    num_bytes: 92655
    num_examples: 286
  - name: shape_trend
    num_bytes: 61447
    num_examples: 223
  - name: velocity_frequency
    num_bytes: 57868
    num_examples: 210
  - name: visual_cues
    num_bytes: 16937
    num_examples: 70
  download_size: 71255
  dataset_size: 413638
---

# 🍅 TOMATO

[**📄 Paper**](https://arxiv.org/abs/2410.23266) | [**💻 Code**](https://github.com/yale-nlp/TOMATO) | [**🎬 Videos**](https://drive.google.com/file/d/1-dNt9bZcp6C3RXuGoAO3EBgWkAHg8NWR/view?usp=drive_link)

This repository contains the question-answer (QA) pairs of the following paper:

>🍅 TOMATO: Assessing Visual Temporal Reasoning Capabilities in Multimodal Foundation Models <br>
>[Ziyao Shangguan](https://ziyaosg.github.io/)\*<sup>1</sup>,&nbsp;
[Chuhan Li](https://LeeChuh.github.io)\*<sup>1</sup>,&nbsp;
[Yuxuan Ding](https://scholar.google.com/citations?user=jdsf4z4AAAAJ)<sup>1</sup>,&nbsp;
[Yanan Zheng](https://scholar.google.com/citations?user=0DqJ8eIAAAAJ)<sup>1</sup>,&nbsp;
[Yilun Zhao](https://yilunzhao.github.io/)<sup>1</sup>,&nbsp;
[Tesca Fitzgerald](https://www.tescafitzgerald.com/)<sup>1</sup>,&nbsp;
[Arman Cohan](https://armancohan.com/)<sup>1,2</sup> <br>
>\*Equal contribution. <br>
><sup>1</sup>Yale University &nbsp;<sup>2</sup>Allen Institute for AI

## TOMATO - A Visual Temporal Reasoning Benchmark
![figure1](./misc/figure1.png)

### Introduction

Our study of existing benchmarks shows that the visual temporal reasoning capabilities of Multimodal Foundation Models (MFMs) are likely overestimated, since many questions can be solved from a single frame, a few frames, or out-of-order frames. To systematically examine current visual temporal reasoning tasks, we propose three principles with corresponding metrics: (1) *Multi-Frame Gain*, (2) *Frame Order Sensitivity*, and (3) *Frame Information Disparity*.

Following these principles, we introduce TOMATO, a novel benchmark crafted to rigorously assess MFMs' temporal reasoning capabilities in video understanding. TOMATO comprises 1,484 carefully curated, human-annotated questions spanning 6 tasks (i.e., *action count*, *direction*, *rotation*, *shape&trend*, *velocity&frequency*, and *visual cues*) applied to 1,417 videos, including 805 self-recorded and self-generated videos, covering 3 video scenarios (i.e., *human-centric*, *real-world*, and *simulated*). In the 805 self-created videos, we apply editing to incorporate *counterfactual scenes*, *composite motions*, and *zoomed-in* views, aiming to investigate the impact of these characteristics on the performance of MFMs.

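The QA annotations can be loaded directly with the 🤗 Datasets library. Below is a minimal sketch; the Hub repo id is an assumption here and should be replaced with this dataset's actual id.

```python
from datasets import load_dataset

# Repo id assumed for illustration; replace with this dataset's actual Hub id.
tomato = load_dataset("yale-nlp/TOMATO")

# Each reasoning task is exposed as its own split (see the YAML header above).
rotation = tomato["rotation"]
example = rotation[0]
print(example["question"], example["options"], example["answer"])
```
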
### Task Examples

![rotation](./misc/ball_rotation_frames.png)
>What direction(s) does the Ping Pong ball rotate in? <br>
>A. Clockwise throughout. <br>
>B. No rotation. <br>
>C. Clockwise then counter-clockwise. <br>
>D. Counter-clockwise throughout. <br>
>E. Counter-clockwise then clockwise. <br>
>
>Answer: D. Counter-clockwise throughout. <br>

![acceleration](./misc/dropping_reversed_frames.png)
>What is the pattern of the object's speed in the video? <br>
>A. Not moving at all. <br>
>B. Constant speed. <br>
>C. Decelerating. <br>
>D. Accelerating. <br>
>
>Answer: C. Decelerating.


![human_gesture](./misc/human_gesture_frames.png) <br>
>What instruction did the person give to the camera in the video? <br>
>A. Moving Down. <br>
>B. Moving Left. <br>
>C. Moving Further. <br>
>D. Moving Closer. <br>
>E. Moving Right. <br>
>F. Moving Up. <br>
>
>Answer: E. Moving Right.


![synthetic_human](./misc/synthetic_human_frames.png) <br>
>How many triangle(s) does the person draw in the air throughout the entire video? <br>
>A. 0 <br>
>B. 1 <br>
>C. 2 <br>
>D. 3 <br>
>E. 4 <br>
>F. 5 <br>
>
>Answer: E. 4

### Analysis Highlight

![earth_moon_frames](./misc/earth_moon_frames.png)

Our in-depth error case analysis reveals that **models lack the basic ability to interpret frames as a continuous sequence**. In the example above, GPT-4o correctly generates captions for each consecutive change in the moon's movement, showcasing its ability to reason at individual time steps; yet it fails to infer from these captions that the overall sequence represents a clockwise rotation, and falsely concludes that the rotation is counter-clockwise.

For a more detailed error case analysis, please refer to Section 6.3 of our paper.


## Dataset and Evaluation
### 1. Setup

```bash
git clone https://github.com/yale-nlp/TOMATO
cd TOMATO
```
Download the [videos](https://drive.google.com/file/d/1-dNt9bZcp6C3RXuGoAO3EBgWkAHg8NWR/view?usp=drive_link) and unzip them into the `TOMATO/` directory.

<details>
<summary>After downloading the videos, your file structure should look like this.</summary>

```
.
├── data/
├── src/
├── videos/
│   ├── human/
│   ├── object/
│   ├── simulated/
```
</details>

#### 1.1 Proprietary Models
To install the required packages for evaluating proprietary models, run:
```bash
pip install openai               # GPT
pip install google-generativeai  # Gemini
pip install anthropic            # Claude
pip install reka-api==2.0.0      # Reka
```
Create a `.env` file in the root directory with the following format:
```
OPENAI_API_KEY="your_openai_api_key"
GEMINI_API_KEY="your_gemini_api_key"
ANTHROPIC_API_KEY="your_anthropic_api_key"
REKA_API_KEY="your_reka_api_key"
```
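
To sanity-check that the keys are visible to Python, you can load the `.env` file with `python-dotenv`. This is only an illustrative sketch; the evaluation scripts may read the keys differently.

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads .env from the current working directory
for key in ("OPENAI_API_KEY", "GEMINI_API_KEY", "ANTHROPIC_API_KEY", "REKA_API_KEY"):
    assert os.getenv(key), f"{key} is missing from .env"
```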

#### 1.2 Open-Source Models
Create a directory named `pretrained` in the root of TOMATO to store open-source models. For example, to download the `Qwen2-VL-7B-Instruct` model, run the following command:

```bash
mkdir pretrained && cd pretrained
huggingface-cli download Qwen/Qwen2-VL-7B-Instruct \
    --resume-download \
    --local-dir-use-symlinks False \
    --local-dir Qwen2-VL-7B-Instruct
```
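
The same checkpoint can also be fetched from Python with `huggingface_hub`; the snippet below is a sketch of an equivalent to the CLI command above.

```python
from huggingface_hub import snapshot_download

# Downloads the full model repo into pretrained/, matching the layout expected below.
snapshot_download(
    repo_id="Qwen/Qwen2-VL-7B-Instruct",
    local_dir="pretrained/Qwen2-VL-7B-Instruct",
    local_dir_use_symlinks=False,
)
```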

<details>
<summary>After downloading open-source models, your file structure should look like this.</summary>

```
.
├── data/
├── src/
├── videos/
├── pretrained/
│   ├── Qwen2-VL-7B-Instruct/
│   ├── ...
```
</details>
<br>

**Note**: To use `Video-CCAM`, `LLaVA-NeXT`, `Video-LLaVA`, `VideoLLaMA2`, and `VILA`, follow the additional instructions below. <br>
Clone their repositories into the `./src/generate_lib/` directory with the following commands:
```bash
cd ./src/generate_lib

git clone git@github.com:QQ-MM/Video-CCAM.git          # Video-CCAM
git clone git@github.com:LLaVA-VL/LLaVA-NeXT.git       # LLaVA-NeXT
git clone git@github.com:DAMO-NLP-SG/VideoLLaMA2.git   # VideoLLaMA2
git clone git@github.com:PKU-YuanGroup/Video-LLaVA.git # Video-LLaVA
git clone git@github.com:NVlabs/VILA.git               # VILA
```
After cloning, rename the directories by replacing hyphens (`-`) with underscores (`_`):
```bash
mv Video-CCAM Video_CCAM
mv LLaVA-NeXT LLaVA_NeXT
mv Video-LLaVA Video_LLaVA
```

### 2. Evaluation

To run evaluation with a model:
```bash
python src/evaluate.py \
    --model $model_name \
    --reasoning_type ALL \
    --demonstration_type ALL \
    --total_frames $total_frames
```
All supported models are listed [here](https://github.com/yale-nlp/TOMATO/blob/2161ce9a98291ce4fcb7aff9a531d10502bf5b10/src/config.json#L2-L62). To evaluate additional models, please refer to the next section.<br>

[This](https://github.com/yale-nlp/TOMATO/blob/2161ce9a98291ce4fcb7aff9a531d10502bf5b10/src/config.json#L63-L70) is the list of models that take in videos directly; for these models, any specified `total_frames` is ignored. <br>

You can specify a subset of `reasoning_type` and `demonstration_type` using a comma-separated list. [These](https://github.com/yale-nlp/TOMATO/blob/2161ce9a98291ce4fcb7aff9a531d10502bf5b10/src/config.json#L71-L83) are the lists of valid choices.
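
For example, a run restricted to the *rotation* and *direction* tasks might look like the command below. The model name, demonstration type, and frame count are illustrative placeholders; check the linked `config.json` entries for the exact valid values.

```bash
# Illustrative invocation only -- substitute values from src/config.json.
python src/evaluate.py \
    --model gpt-4o \
    --reasoning_type rotation,direction \
    --demonstration_type human \
    --total_frames 16
```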

### 3. Result Parsing
When our standard regular-expression-based parser fails, we employ `GPT-4o-mini` to extract answers from model responses. To use the parser:
```bash
python src/parse_result.py
```
**Note**: This parser is designed to be incremental. It only parses new raw model responses and leaves already-parsed results unchanged.
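
For intuition, the regular-expression stage amounts to something like the sketch below. It is illustrative only, with an assumed response format; the actual logic lives in `src/parse_result.py`.

```python
import re
from typing import List, Optional

def extract_choice(response: str, all_choices: List[str]) -> Optional[str]:
    """Pull a single option letter (e.g. 'A'-'F') out of a free-form model response."""
    # Prefer an explicit "Answer: X" pattern, then fall back to a standalone option letter.
    match = re.search(r"[Aa]nswer\s*[:\-]?\s*\(?([A-F])\)?", response)
    if match is None:
        match = re.search(r"\b([A-F])\b", response)
    if match is not None and match.group(1) in all_choices:
        return match.group(1)
    return None  # unresolved -> handed off to the GPT-4o-mini fallback
```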

### 4. Display Categorized Scores

Scores are grouped by `model`, `reasoning_type`+`model`, and `demonstration_type`+`model`. To display scores:

```bash
python src/get_categorized_score.py
```

## Evaluate Additional Models

Our evaluation scripts are designed to make adding new models easy. Simply:

### 1. Add Model to Config File
Add the `model_family` and `model_name` to `src/config.json` as shown below:

```json
{
    "models": {
        "{model_family}": [
            "{model_name}",
            "..."
        ]
    }
}
```
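
As a concrete illustration (the family and model names below are hypothetical), a filled-in entry would look like:

```json
{
    "models": {
        "my_model_family": [
            "my-model-7b-instruct"
        ]
    }
}
```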

### 2. Add Model Evaluation Code
Create the corresponding `{model_family}.py` file under `src/generate_lib` with the starter code below (the exact query field names, such as `options` and `video_path`, are assumptions; adapt them to your data loading code):

```python
import json

from generate_lib.constant import GENERATION_TEMPERATURE, GENERATION_TOP_P, SYSTEM_PROMPT, MAX_TOKENS, GENERATION_SEED
from generate_lib.construct_prompt import construct_prompt
from generate_lib.utils import read_video

def generate_response(model_name: str, queries: list, total_frames: int, output_dir: str):
    # initialize your model
    model = ...

    for query in queries:
        id_ = query['id']
        question = query['question']
        # options / video_path are assumed to come from the query entry;
        # the "A. ..." option formatting below is likewise an assumption.
        options = query['options']
        video_path = query['video_path']
        optionized_list = [f"{chr(ord('A') + i)}. {opt}" for i, opt in enumerate(options)]
        gt = optionized_list[query['answer']]

        # construct prompt
        base64Frames, _ = read_video(video_path=video_path, total_frames=total_frames)
        prompt, all_choices, index2ans = construct_prompt(question=question,
                                                          options=options,
                                                          num_frames=total_frames)

        # generate response with your model
        response = model(...)

        # save the model response in the default format so our result parser can consume it
        with open(output_dir, "a") as f:
            f.write(json.dumps(
                {
                    "id": id_,
                    "question": question,
                    "response": response,
                    "all_choices": all_choices,
                    "index2ans": index2ans,
                    "gt": gt
                }
            ) + "\n")
```


## Experiments

### 1. Comparison with Existing Benchmarks

#### 1.1 Multi-Frame Gain ($\kappa$): a *higher* value indicates the task is less solvable by a single frame.
![multi_frame_gain1](./misc/multi_frame_gain1.png)
![multi_frame_gain2](./misc/multi_frame_gain2.png)

#### 1.2 Frame Order Sensitivity ($\tau$): a *higher* value indicates the task is more reliant on the correct order of frames.
![frame_order_sensitivity](./misc/frame_order_sensitivity.png)


#### 1.3 Frame Information Disparity ($\rho$): a *lower* value indicates information is more evenly distributed across the frames.
![frame_information_parity](./misc/frame_information_parity.png)


### 2. Leaderboard
We evaluate general-purpose MFMs on TOMATO, with all models tested in a zero-shot setting. The scores below are reported as percentage accuracy (%).

![main_results](./misc/main_results.png)


# Contact
If you have any questions or suggestions, please don't hesitate to let us know. You can post an issue on this repository, or contact us directly at:
- Ziyao Shangguan: [email protected]
- Chuhan Li: [email protected]

# Citation
If you find 🍅 TOMATO useful for your research and applications, please cite it using this BibTeX entry:

```bibtex
@misc{shangguan2024tomatoassessingvisualtemporal,
      title={TOMATO: Assessing Visual Temporal Reasoning Capabilities in Multimodal Foundation Models},
      author={Ziyao Shangguan and Chuhan Li and Yuxuan Ding and Yanan Zheng and Yilun Zhao and Tesca Fitzgerald and Arman Cohan},
      year={2024},
      eprint={2410.23266},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2410.23266},
}
```