OscarDo93589
committed on
Update README.md
README.md CHANGED
@@ -87,7 +87,9 @@ All of the desktop grounding data is collected from the real environments of per

Our desktop grounding data consists of three parts: Windows, Linux and MacOS.

-**The image and annotation data for each operating system are stored in corresponding zip and json files
+**The image and annotation data for each operating system are stored in corresponding zip and json files.**
+
+It is worth noting that, due to the large size of the Windows image data, the split files need to be merged before extraction.

```
cat windows_image_part_* > windows_images.zip
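# The README's code block continues past this hunk. A minimal sketch of the
# remaining extraction step, assuming 7z and a reader-chosen output directory
# (both are assumptions, mirroring the SeeClick extraction commands further below):
7z x windows_images.zip -aoa -o/path/to/extract/windows
```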
@@ -98,4 +100,37 @@ cat windows_image_part_* > windows_images.zip

This part of data is stored under the *web_domain* directory.

+Our web grounding data consists of two parts.
+
+#### Seeclick web data
+
+The web data from SeeClick [7] was crawled from websites provided by Common Crawl, containing more than 270k webpage screenshots and over 3 million webpage elements.
+
+The annotation data is stored in
+
+- `seeclick_web.json`
+
+The images are stored in split files and need to be merged before extraction.
+
+```
+cat seeclick_web_image_part_* > seeclick_web_images.zip
+7z x seeclick_web_images.zip -aoa -o/path/to/extract/folder
+```
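If the download or the `cat` merge is interrupted, the concatenated archive can be corrupt, so a quick structural check before extraction can save time. The commands below are a sketch, not part of the README: they assume 7z and jq are installed and that `seeclick_web.json` holds a single JSON document.

```
# Verify the merged archive before extracting it (7z's built-in integrity test).
7z t seeclick_web_images.zip
# Count the top-level entries in the annotation file (assumes one JSON document).
jq 'length' seeclick_web.json
```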
+
+#### Fineweb_crawled_data
+
+This part of the data is crawled from web pages at the latest URLs obtained from FineWeb [8], a cleaned and deduplicated English dataset derived from Common Crawl.
+
+Since this portion of the data contains at least 1.6 million images, we have compressed them into 10 zip files, from `fineweb_3m_s11.zip` to `fineweb_3m_s52.zip`.
+
+Please extract them all into the same directory. For example:
+
+```
+7z x fineweb_3m_s11.zip -aoa -o/same/path/to/extract/fineweb
+```
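Extracting the ten archives one at a time is tedious; the loop below is a sketch of doing it in one pass, assuming a POSIX shell, 7z on the PATH, and all of the `fineweb_3m_s*.zip` files in the current directory (the target directory is the reader's choice, as in the README's own example).

```
# Extract every FineWeb split archive into the same target directory.
for z in fineweb_3m_s*.zip; do
  7z x "$z" -aoa -o/same/path/to/extract/fineweb
done
```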
+
+The annotation data is stored in
+
+- `fineweb_3m.json`