mwhanna committed · Commit 6779234 · 1 Parent(s): 0dfa74f

Update README.md

Files changed (1): README.md (+106 -3)
---
license: cc-by-4.0
---
# Dataset Card for ACT-Thor

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Repository:** https://github.com/hannamw/ACT-Thor
- **Paper:** ACT-Thor: A Controlled Benchmark for Embodied Action Understanding in Simulated Environments (COLING 2022; link to be added soon)
- **Point of Contact:** Michael Hanna ([email protected])

### Dataset Summary

This dataset tests models' ability to understand actions, and to do so in a controlled fashion. It is generated automatically using [AI2-Thor](https://ai2thor.allenai.org/), and thus contains images of a virtual house. Models receive an image of an object in a house (the before-image), an action, and four candidate after-images that could have resulted from performing the action on the object. They must then predict which after-image actually resulted from performing the action in the before-image.

### Supported Tasks

This dataset implements the contrast-set task discussed in the paper: given a before-image and an action, predict which of 4 after-images is the actual result of performing the action in the before-image. The raw data (not included here) could also be used for other tasks, e.g. inferring the action taken given a before- and after-image. Feel free to reach out and request the full data (with all of the metadata and other information that might be useful), or collect it automatically using the scripts available on the project's [GitHub repo](https://github.com/hannamw/ACT-Thor)!

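As a point of reference, chance performance on this 4-way task is 25%. The sketch below is a minimal illustration (not the paper's evaluation code) of a random-guess baseline over simulated labels:

```python
import random

def random_baseline_accuracy(labels, seed=0):
    """Accuracy when guessing uniformly among the 4 after-images."""
    rng = random.Random(seed)
    correct = sum(rng.randrange(4) == label for label in labels)
    return correct / len(labels)

# Simulated labels standing in for the 4441 contrast-set instances.
rng = random.Random(42)
labels = [rng.randrange(4) for _ in range(4441)]
print(f"random baseline: {random_baseline_accuracy(labels):.3f}")  # close to 0.25 in expectation
```

Any model worth reporting should clear this 25% floor by a comfortable margin.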
## Dataset Structure

### Data Instances

There are 4441 instances in the dataset, each consisting of the fields below.

### Data Fields

- `id`: integer ID of the example
- `object`: name (string) of the object of interest
- `action`: name (string) of the action taken
- `action_id`: integer ID of the action taken
- `scene`: ID (string) of the scene from which the example comes
- `before_image`: the before-image
- `after_image_{0-3}`: the four after-images, from which the correct image is to be chosen
- `label`: the index (0-3) of the correct after-image

Only `action_id`, `before_image`, and `after_image_{0-3}` need be fed into the model, which should predict `label`.

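A small sketch of that input filtering, using a hypothetical instance whose field names mirror the list above (the `object`, `action`, `scene`, and image values here are illustrative placeholders, not real dataset content):

```python
# Hypothetical instance; in the real dataset the image fields hold images.
instance = {
    "id": 0,
    "object": "Mug",            # illustrative value
    "action": "pickup",         # illustrative value
    "action_id": 2,
    "scene": "FloorPlan1",      # illustrative value
    "before_image": "<before>",
    "after_image_0": "<after 0>",
    "after_image_1": "<after 1>",
    "after_image_2": "<after 2>",
    "after_image_3": "<after 3>",
    "label": 3,
}

def model_inputs(ex):
    """Keep only what the model should see: the action ID, the
    before-image, and the four candidate after-images."""
    keys = ["action_id", "before_image"] + [f"after_image_{i}" for i in range(4)]
    return {k: ex[k] for k in keys}

inputs = model_inputs(instance)
assert "label" not in inputs and "object" not in inputs
```

Keeping `object`, `action`, and `scene` out of the model's input avoids leaking information the task is meant to withhold.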
### Data Splits

We create 3 different train-valid-test splits. In the sample split, each example is randomly assigned to the train, valid, or test split, without any special organization. The object split introduces new objects in the test split, to test object generalization. Finally, the scene split is organized such that the scenes contained in train, valid, and test are disjoint (to test scene generalization).

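The scene split's defining property can be sketched with a toy assignment (the scene IDs below are illustrative, not the dataset's actual partition):

```python
# Toy scene-split assignment; in the real scene split, every example from
# a given scene lands in exactly one partition.
scene_split = {
    "train": ["FloorPlan1", "FloorPlan2"],  # illustrative scene IDs
    "valid": ["FloorPlan3"],
    "test":  ["FloorPlan4"],
}

def partitions_disjoint(split):
    """True iff no scene appears in more than one partition."""
    seen = set()
    for scenes in split.values():
        if seen & set(scenes):
            return False
        seen |= set(scenes)
    return True

assert partitions_disjoint(scene_split)
```

The same check applies to the object split, with object names in place of scene IDs.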
## Dataset Creation

### Curation Rationale

This dataset was curated for two reasons. Its main purpose is to test models' ability to understand the consequences of actions. It also aims to showcase the potential of virtual platforms as sites for highly controlled data collection.

### Source Data

#### Initial Data Collection and Normalization

All of the data is collected by navigating through AI2-Thor virtual environments and recording images and metadata. Check out the paper, where we describe this process in detail!

### Annotations

#### Annotation process

This dataset is generated entirely automatically using AI2-Thor, so there are no annotations. In the paper, we discuss annotations created by humans performing the task; these are used only to check that the task is feasible for humans. They were collected from students at 2 universities, and we're happy to release them on request.

## Considerations for Using the Data

### Discussion of Biases

This dataset uses artificially generated images of homes from AI2-Thor. Because of the limited variety of homes, a model performing well on this dataset might not perform well in the context of other homes (e.g. of different designs, from different cultures, etc.).

### Other Known Limitations

This dataset is small, so updating it to include a greater diversity of actions and objects would be very useful. If such actions and objects are added to AI2-Thor, more data can be collected using the script on our [GitHub repo](https://github.com/hannamw/ACT-Thor).

## Additional Information

### Dataset Curators

Michael Hanna ([email protected]), Federico Pedeni ([email protected])

### Licensing Information

CC BY 4.0 (Creative Commons Attribution 4.0)

### Citation Information

Please cite the associated COLING 2022 paper, "ACT-Thor: A Controlled Benchmark for Embodied Action Understanding in Simulated Environments". The full citation will be added here when the paper is published.

### Contributions

Thanks to [@hannamw](https://github.com/hannamw) for adding this dataset.