OscarDo93589 committed · verified
Commit a1bdebd · 1 Parent(s): 18957da

Update README.md

Files changed (1)
  1. README.md +8 -1
README.md CHANGED
@@ -12,7 +12,7 @@ The data we released is divided into three domains: mobile, desktop and web.
 
 All annotation data is stored in JSON format and each sample contains:
 * `img_filename`: the interface screenshot file
-* `instruction`: human instruction
+* `instruction`: human instruction or referring expression extracted from the a11y tree or HTML
 * `bbox`: the bounding box of the target element corresponding to the instruction
 
 Some data also contains a `data_type`, which records the type of an element in its structured information, if it can be obtained.
@@ -143,8 +143,15 @@ The annotation data is stored in
 
 - `fineweb_3m.json`
 
+***
+
+### Best practices
+
+
 ***
 
+**The following are the open-source datasets we used as data sources. We welcome everyone to check the details and cite these sources accordingly!**
+
 [1] [AMEX: Android Multi-annotation Expo Dataset for Mobile GUI Agents](https://arxiv.org/abs/2407.17490)
 
 [2] [UIBert: Learning Generic Multimodal Representations for UI Understanding](https://arxiv.org/abs/2107.13731)
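
As a reading aid for the fields described in the diff above, here is a minimal sketch of loading one of the annotation files. It assumes the file (`fineweb_3m.json`, named in the README) is a JSON list of per-sample dicts with the keys `img_filename`, `instruction`, `bbox`, and optionally `data_type`; the bounding-box coordinate convention is not specified in this commit.

```python
import json

# Minimal sketch (not part of the README): inspect a few samples from one of
# the annotation files. Assumption: the file is a JSON list of dicts with the
# fields described above; the exact bbox coordinate format is not documented
# in this commit and should be checked against the data.
with open("fineweb_3m.json", "r", encoding="utf-8") as f:
    samples = json.load(f)

for sample in samples[:3]:
    print(sample["img_filename"])    # interface screenshot file
    print(sample["instruction"])     # human instruction / referring expression
    print(sample["bbox"])            # bounding box of the target element
    print(sample.get("data_type"))   # optional: element type, if available
```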