---
license: apache-2.0
---
# GUI Grounding Pre-training Data for OS-ATLAS
This document describes the acquisition of the pre-training data used by OS-ATLAS.
**Notes:** In GUI grounding data, the position of the target element is recorded in the `bbox` key, represented by `[left, top, right, bottom]`.
Each value is a [0, 1] decimal number indicating the ratio of the corresponding position to the width or height of the image.
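The normalized coordinates can be mapped back to pixels by multiplying by the image dimensions. A minimal sketch (the example bbox and image size are made up, not from the dataset):

```python
def bbox_to_pixels(bbox, width, height):
    """Convert a normalized [left, top, right, bottom] bbox to pixel coordinates."""
    left, top, right, bottom = bbox
    return [round(left * width), round(top * height),
            round(right * width), round(bottom * height)]

# Example with a hypothetical bbox on a 1920x1080 screenshot
print(bbox_to_pixels([0.25, 0.10, 0.75, 0.20], 1920, 1080))
# [480, 108, 1440, 216]
```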
The data we released is divided into three domains: mobile, desktop and web.
All annotation data is stored in JSON format and each sample contains:
* `img_filename`: the interface screenshot file
* `instruction`: human instruction
* `bbox`: the bounding box of the target element corresponding to instruction
Some data also contains a `data_type`, which records the type of an element in its structured information, if it can be obtained.
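Put together, a single annotation record looks roughly like the following. This is a hypothetical sample constructed from the field descriptions above; the filename, instruction, and values are illustrative only:

```python
import json

# Hypothetical sample following the schema described above
sample = {
    "img_filename": "screenshot_0001.png",      # interface screenshot file
    "instruction": "tap the settings icon",     # human instruction
    "bbox": [0.82, 0.03, 0.95, 0.10],           # [left, top, right, bottom], normalized
    "data_type": "icon",                        # optional, present only when obtainable
}
print(json.dumps(sample, indent=2))
```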
### Mobile data
This part of data is stored under the *mobile_domain* directory. Our mobile grounding data consists of four parts.
#### AMEX
Android Multi-annotation EXpo (AMEX) is a comprehensive, large-scale dataset designed for generalist mobile GUI-control agents [1].
The annotation data is stored in
- `amex_raw.json`

Due to the single-file size limitation of Hugging Face datasets, we stored the AMEX images in *zip* format and split the archive into several sub-files.
- `amex_images_part_aa`
- `amex_images_part_ab`
- `amex_images_part_ac`
You need to first merge these split files back into the original file and then extract the contents.
```
cat amex_images_part_* > amex_images.zip
7z x amex_images.zip -aoa -o/path/to/extract/folder
```
#### UIBert
This is a dataset extended from Rico dataset [2] for two tasks: similar UI component retrieval and referring expression component retrieval [3].
The annotation data is stored in
- `uibert_raw.json`
The UIBert images are stored in
- `UIBert.zip`
#### Widget Captioning and RICOSCA
Widget Captioning data are collected by [4].
RICOSCA is a dataset automatically labeled using Android view hierarchies (VH) in [5].
The annotation data is stored in
- `widget_captioning.json`
- `ricosca.json`
The Rico images are stored in
- `rico_imgs.zip`
#### Android_world_data
This part of the data is sampled from an Android environment for building and benchmarking autonomous computer control agents [6].
The annotation data is stored in
- `aw_mobile.json`
The images are stored in
- `mobile_images.zip`
### Desktop data
This part of data is stored under the *desktop_domain* directory. All of the desktop grounding data is collected from the real environments of personal computers running different operating systems. Each image is split into multiple sub-images to enhance data diversity.
Our desktop grounding data consists of three parts.