---
license: apache-2.0
---
# MobileVLM
![MobileVLM](./mobilevlm.png)

## Paper Link
[MobileVLM: A Vision-Language Model for Better Intra- and Inter-UI Understanding](https://aclanthology.org/2024.findings-emnlp.599/)

### News
- **2024.11.12** - Partial training data and the random-walk code for Mobile3M released!
- **2024.10.4** - Test data for Mobile3M released!
- **2024.9.26** - Our work was accepted to EMNLP 2024 Findings!

### 1. Quick Start

#### Requirements
- transformers==4.32.0
- accelerate
- tiktoken
- einops
- transformers_stream_generator==0.0.4
- scipy
- torchvision
- pillow
- tensorboard
- matplotlib
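
These can be installed in one step with pip (a minimal sketch; the two version pins come from the list above, everything else installs at its latest release):

```bash
pip install "transformers==4.32.0" accelerate tiktoken einops \
    "transformers_stream_generator==0.0.4" scipy torchvision pillow \
    tensorboard matplotlib
```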

### 2. Mobile3M Dataset
![Dataset Image](./mobilevlm_table.png)

#### Training Data
Training data is available at the following link: [data](https://huggingface.co/datasets/xwk123/Mobile3M/tree/main). We will gradually upload data for all apps.
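
One way to fetch the released files locally is with the `huggingface_hub` CLI (a sketch; the repo id comes from the link above, and `--local-dir ./Mobile3M` is just an example destination):

```bash
# Download the Mobile3M dataset files into a local directory.
huggingface-cli download xwk123/Mobile3M --repo-type dataset --local-dir ./Mobile3M
```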

#### Corpus Collection Script
To start collecting data, run the script `main/corpus/googleCreatDataset/arm_graph_para_lock.py`.

Example usage:
```bash
python googleCreatDataset/arm_graph_para_lock.py --device_name 10.53.89.79:6532 --systemPort 8112 --appid 8201 --command_executor http://127.0.0.1:4812/wd/hub --appPackage com.lucky.luckyclient --name_en lucky --diff_max 0.5 --diff_png 0.3 --waitadb 8 --prefix lucky0_3_1_2_ --recheck -1
```

#### Parameter Descriptions

- **--device_name**: Name (address) of the emulator, e.g., `10.53.89.79:6532`.
- **--appid**: Storage ID of the app being collected, e.g., `8201`.
- **--command_executor**: URL of the Appium server endpoint (see the server sketch after this list).
- **--diff_max 0.5 --diff_png 0.3**: Page-similarity thresholds used to distinguish screens.
- **--prefix lucky0_3_1_2_**: Distributed starting path for data collection.
- **--recheck -1**: Whether to recheck previously collected data; set to `-1` for no recheck.
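
The `--command_executor` URL assumes an Appium server is already listening at the given address. One way to start one locally, assuming Appium 2.x (a sketch; the port and `/wd/hub` base path are taken from the example command above, so adapt them to your setup):

```bash
# Serve Appium on the port that --command_executor points at; Appium 2.x
# defaults to base path /, so restore the legacy /wd/hub path explicitly.
appium --port 4812 --base-path /wd/hub
```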

### 3. Data Generation Code for Each Task
![Task](./taSK.png)

The code for generating data for each task can be found in the corresponding task directories of this repository.

#### Our Test Data
Our test data is available at [data](https://huggingface.co/datasets/xwk123/mobilevlm_test).

### 4. License

The dataset of this project is licensed under the [**Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)**](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.

The source code of this project is licensed under the [**Apache 2.0**](http://www.apache.org/licenses/LICENSE-2.0) license.

#### Summary of Terms
- **Attribution**: You must give appropriate credit, provide a link to the license, and indicate if changes were made.
- **NonCommercial**: You may not use the material for commercial purposes.
- **ShareAlike**: If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.

#### License Badge
[![License: CC BY-NC-SA 4.0](https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by-nc-sa/4.0/)

### 5. Citation
If you would like to use our benchmark or cite this paper, please use the reference below:

```bibtex
@article{wu2024mobilevlm,
  title={MobileVLM: A vision-language model for better intra- and inter-UI understanding},
  author={Wu, Qinzhuo and Xu, Weikai and Liu, Wei and Tan, Tao and Liu, Jianfeng and Li, Ang and Luan, Jian and Wang, Bin and Shang, Shuo},
  journal={arXiv preprint arXiv:2409.14818},
  year={2024}
}
```