---
language:
- code
- en
multilinguality:
- multiprogramming languages
task_categories:
- text-generation
license: mit
dataset_info:
  features:
  - name: identifier
    dtype: string
  - name: return_type
    dtype: string
  - name: repo
    dtype: string
  - name: path
    dtype: string
  - name: language
    dtype: string
  - name: code
    dtype: string
  - name: code_tokens
    dtype: string
  - name: original_docstring
    dtype: string
  - name: comment
    dtype: string
  - name: docstring_tokens
    dtype: string
  - name: docstring
    dtype: string
  - name: original_string
    dtype: string
pretty_name: The Vault Function
viewer: false
---

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Statistics](#dataset-statistics)
- [Usage](#usage)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)


## Dataset Description

- **Repository:** [FSoft-AI4Code/TheVault](https://github.com/FSoft-AI4Code/TheVault)
- **Paper:** [The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation](https://arxiv.org/abs/2305.06156)
- **Contact:** [email protected]
- **Website:** https://www.fpt-aicenter.com/ai-residency/

<p align="center">
  <img src="https://raw.githubusercontent.com/FSoft-AI4Code/TheVault/main/assets/the-vault-4-logo-png.png" width="300px" alt="logo">
</p>

<div align="center">

# The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation

</div>

## Dataset Summary
The Vault is a multilingual code-text dataset containing over 34 million function-level pairs across 10 popular programming languages, making it the largest corpus of parallel code-text data. Built on top of [The Stack](https://huggingface.co/datasets/bigcode/the-stack), a massive collection of raw code samples, The Vault offers a comprehensive and clean resource for advancing research in code understanding and generation. Beyond the function level, it also provides high-quality code-text pairs at other levels, such as the class and inline levels.

## Supported Tasks
The Vault can be used for pretraining LLMs or for downstream code-text interaction tasks. A number of tasks related to code understanding and generation can be constructed from The Vault, such as *code summarization*, *text-to-code generation*, and *code search*.
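
For example, a code summarization pair can be built directly from a record's `code` and `docstring` fields. A minimal sketch; the record below is a hand-made stand-in with the dataset's field names, not a real sample:

```python
# Sketch: turning a Vault-style record into a (source, target) pair for
# code summarization. Swap the stand-in dict for real samples loaded
# with the `datasets` library (see Usage below).

def to_summarization_pair(sample: dict) -> tuple[str, str]:
    """Map a record to (input code, target docstring)."""
    return sample["code"], sample["docstring"]

# Hand-made stand-in record.
sample = {
    "identifier": "add",
    "language": "Python",
    "code": "def add(a, b):\n    return a + b",
    "docstring": "Return the sum of a and b.",
}

source, target = to_summarization_pair(sample)
print(source)  # the function body becomes the model input
print(target)  # the docstring becomes the summary to generate
```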

## Languages
The natural language text (docstring) is in English.

10 programming languages are supported in The Vault: `Python`, `Java`, `JavaScript`, `PHP`, `C`, `C#`, `C++`, `Go`, `Ruby`, `Rust`.

## Dataset Structure
### Data Instances
```json
{
    "hexsha": "5c47f0b4c173a8fd03e4e633d9b3dd8211e67ad0",
    "repo": "neumanna94/beepboop",
    "path": "js/scripts.js",
    "license": ["MIT"],
    "language": "JavaScript",
    "identifier": "beepBoopSelector",
    "code": "function beepBoopSelector(inputString, bbFunction){\n if(bbFunction==1){\n return beepBoop(inputString);\n } else if(bbFunction==2){\n return beepBoop2(inputString);\n } else if(bbFunction==3){\n return beepBoop3(inputString);\n } else {\n }\n}",
    "code_tokens": [
        "function", "beepBoopSelector", "(", "inputString", ",", "bbFunction", ")", "{",
        "if", "(", "bbFunction", "==", "1", ")", "{",
        "return", "beepBoop", "(", "inputString", ")", ";", "}",
        "else", "if", "(", "bbFunction", "==", "2", ")", "{",
        "return", "beepBoop2", "(", "inputString", ")", ";", "}",
        "else", "if", "(", "bbFunction", "==", "3", ")", "{",
        "return", "beepBoop3", "(", "inputString", ")", ";", "}",
        "else", "{", "}", "}"
    ]
}
```
### Data Fields

Data fields for the function level:
- **hexsha** (string): the unique git hash of the file
- **repo** (string): the owner/repo
- **path** (string): the full path to the original file
- **license** (list): licenses of the repo
- **language** (string): the programming language
- **identifier** (string): the function or method name
- **code** (string): the part of the original file that is code
- **code_tokens** (list): tokenized version of `code`
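
As a rough illustration of how `code_tokens` relates to `code`, the tokens can be rejoined with spaces. Note this is lossy (original whitespace and newlines are gone), so prefer the `code` field when the exact source matters. The record below is a hand-made stand-in with the field names above:

```python
# Sketch: rejoining code_tokens into a whitespace-normalized view of the
# code. A naive " ".join loses the original formatting, so treat it only
# as an approximation of the `code` field.

def detokenize(tokens: list[str]) -> str:
    return " ".join(tokens)

# Hand-made stand-in record.
sample = {
    "code": "function add(a, b){ return a + b; }",
    "code_tokens": ["function", "add", "(", "a", ",", "b", ")",
                    "{", "return", "a", "+", "b", ";", "}"],
}

print(detokenize(sample["code_tokens"]))
# function add ( a , b ) { return a + b ; }
```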

### Data Splits

The function-level data is divided into train, validation, and test splits, and the train split is additionally released as `small`, `medium`, and `full` subsets (see the statistics below).

## Dataset Statistics

| Language | train/small | train/medium | train/full | validation | test | total |
|:-----------|------------:|-------------:|-----------:|-----------:|-------:|--------------:|
|Python | 370,657 | 1,952,110 | 7,772,647 | 30,992 | 21,652 | 7,825,291 |
|Java | 351,213 | 1,612,366 | 6,629,193 | 22,677 | 15,552 | 6,667,422 |
|JavaScript | 82,931 | 404,729 | 1,640,416 | 22,044 | 21,108 | 1,683,568 |
|PHP | 236,638 | 1,155,476 | 4,656,371 | 21,375 | 19,010 | 4,696,756 |
|C | 105,978 | 381,207 | 1,639,319 | 27,525 | 19,122 | 1,685,966 |
|C# | 141,090 | 783,166 | 3,305,891 | 24,787 | 19,638 | 3,350,316 |
|C++ | 87,420 | 410,907 | 1,671,268 | 20,011 | 18,169 | 1,709,448 |
|Go | 267,535 | 1,319,547 | 5,109,020 | 19,102 | 25,314 | 5,153,436 |
|Ruby | 23,921 | 112,574 | 424,339 | 17,338 | 19,908 | 461,585 |
|Rust | 35,367 | 224,015 | 825,130 | 16,716 | 23,141 | 864,987 |
|TOTAL | 1,702,750 | 8,356,097 |33,673,594 |222,567 |202,614 |**34,098,775** |
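
The totals can be sanity-checked: each row's total equals its train/full + validation + test counts, which suggests the small and medium subsets are nested within the full train split rather than additional data. For example:

```python
# Sanity-checking the statistics table: "total" = train/full + validation
# + test, so train/small and train/medium are not added separately.

python_row = {"train/full": 7_772_647, "validation": 30_992, "test": 21_652}
print(sum(python_row.values()))  # 7825291, the Python "total" in the table

grand = {"train/full": 33_673_594, "validation": 222_567, "test": 202_614}
print(sum(grand.values()))  # 34098775, matching the TOTAL row
```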

## Usage
You can load The Vault dataset using the `datasets` library (`pip install datasets`):

```python
from datasets import load_dataset

# Load the full function-level dataset (~34M samples)
dataset = load_dataset("Fsoft-AIC/the-vault-function")

# Load a specific language (e.g. Python)
dataset = load_dataset("Fsoft-AIC/the-vault-function", languages=['Python'])

# Stream the dataset instead of downloading it all at once
data = load_dataset("Fsoft-AIC/the-vault-function", streaming=True)
for sample in iter(data['train']):
    print(sample)
```
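
Since the full split is large, streaming plus a client-side filter avoids downloading everything up front. A minimal sketch; `fake_stream` is a hand-made stand-in for `iter(data['train'])` so the snippet runs offline:

```python
from itertools import islice

# Sketch: filter a streamed dataset client-side and stop early.
# `fake_stream` stands in for iter(data['train']) from the snippet above.
fake_stream = iter([
    {"identifier": "f", "language": "Python"},
    {"identifier": "g", "language": "Go"},
    {"identifier": "h", "language": "Python"},
])

python_only = (s for s in fake_stream if s["language"] == "Python")
first_two = list(islice(python_only, 2))  # take at most 2 matching samples
print([s["identifier"] for s in first_two])  # ['f', 'h']
```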

A backup of the dataset can be downloaded from Azure Blob Storage. See [Download The Vault from Azure blob storage](https://github.com/FSoft-AI4Code/TheVault#download-via-link).

## Additional Information
### Licensing Information
MIT License

### Citation Information

```
@article{manh2023vault,
  title={The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation},
  author={Manh, Dung Nguyen and Hai, Nam Le and Dau, Anh TV and Nguyen, Anh Minh and Nghiem, Khanh and Guo, Jin and Bui, Nghi DQ},
  journal={arXiv preprint arXiv:2305.06156},
  year={2023}
}
```

### Contributions
This dataset is developed by the [FSOFT AI4Code team](https://github.com/FSoft-AI4Code).