lvwerra committed
Commit f16b533
1 Parent(s): c4d537e

Update README.md

Files changed (1)
  1. README.md +38 -4
README.md CHANGED
@@ -1,9 +1,9 @@
  # GitHub Code Dataset

- ## What is it?
  The GitHub Code dataset consists of 115M code files from GitHub in 32 programming languages with 60 extensions, totalling 1TB of text data. The dataset was created from the public GitHub dataset on Google BigQuery.

- ## How to use it

  The GitHub Code dataset is very large, so for most use cases it is recommended to use the streaming API of `datasets`. You can load and iterate through the dataset with the following two lines of code:
@@ -61,6 +61,36 @@ Naturally, you can also download the full dataset. Note that this will download
  ds = load_dataset("lvwerra/github-code", split="train")
  ```

  ## Languages

  The dataset contains 30 programming languages with over 60 extensions:
@@ -122,8 +152,12 @@ Each example is also annotated with the license of the associated repository. Th
  ]
  ```

- ## Dataset creation

  The dataset was created in two steps:
  1. Files with the extensions given in the list above were retrieved from the GitHub dataset on BigQuery (full query [here](https://huggingface.co/datasets/lvwerra/github-code/blob/main/query.sql)). The query was executed on _Feb 14, 2022, 12:03:16 PM UTC+1_.
- 2. Files with lines longer than 1000 characters and duplicates (exact duplicates ignoring whitespaces) were dropped (full preprocessing script [here](https://huggingface.co/datasets/lvwerra/github-code/blob/main/github_preprocessing.py)).

  # GitHub Code Dataset

+ ## Dataset Description
  The GitHub Code dataset consists of 115M code files from GitHub in 32 programming languages with 60 extensions, totalling 1TB of text data. The dataset was created from the public GitHub dataset on Google BigQuery.

+ ### How to use it

  The GitHub Code dataset is very large, so for most use cases it is recommended to use the streaming API of `datasets`. You can load and iterate through the dataset with the following two lines of code:
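(The snippet itself is collapsed in this diff.) Streaming means examples are yielded one at a time instead of materializing the 1TB dataset on disk; the object returned with `streaming=True` is consumed like any lazy iterable. A self-contained sketch of that consumption pattern, using a stand-in generator in place of the real `load_dataset` call so it runs offline:

```python
from itertools import islice

# Stand-in playing the role of
# load_dataset("lvwerra/github-code", streaming=True, split="train"):
# a generator that yields records lazily, the way a streamed split does.
def fake_streamed_split():
    for i in range(115_000_000):  # the real dataset holds ~115M files
        yield {'code': f'print({i})\n', 'language': 'Python'}

ds = fake_streamed_split()

# Consume the first three examples without ever loading the full dataset.
for example in islice(ds, 3):
    print(example['code'].strip())
```

The same `islice`/`next` pattern applies unchanged to the real streamed dataset.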
  ds = load_dataset("lvwerra/github-code", split="train")
  ```

+ ## Data Structure
+
+ ### Data Instances
+
+ ```python
+ {
+  'code': "import mod189 from './mod189';\nvar value=mod189+1;\nexport default value;\n",
+  'repo_name': 'MirekSz/webpack-es6-ts',
+  'path': 'app/mods/mod190.js',
+  'language': 'JavaScript',
+  'license': 'isc',
+  'size': 73
+ }
+ ```
+
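Each instance is a plain dict, so the fields can be inspected directly. For example, `size` is the file size in bytes, which for ASCII-only content equals the length of the encoded `code` string, and `language` is derived from the extension in `path`. A quick check on the instance above:

```python
# The sample instance shown above, as a plain Python dict.
sample = {
    'code': "import mod189 from './mod189';\nvar value=mod189+1;\nexport default value;\n",
    'repo_name': 'MirekSz/webpack-es6-ts',
    'path': 'app/mods/mod190.js',
    'language': 'JavaScript',
    'license': 'isc',
    'size': 73,
}

# `size` is the file size in bytes; for ASCII content this matches the
# length of the UTF-8 encoded `code` field.
assert len(sample['code'].encode('utf-8')) == sample['size']

# The file extension in `path` is what determines the `language` annotation.
ext = sample['path'].rsplit('.', 1)[-1]
print(ext, sample['language'])  # js JavaScript
```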
+ ### Data Fields
+
+ |Field|Type|Description|
+ |---|---|---|
+ |code|string|content of the source file|
+ |repo_name|string|name of the GitHub repository|
+ |path|string|path of the file in the GitHub repository|
+ |language|string|programming language, as inferred from the file extension|
+ |license|string|license of the GitHub repository|
+ |size|int|size of the source file in bytes|
+
+ ### Data Splits
+
+ The dataset only contains a train split.
+
  ## Languages

  The dataset contains 30 programming languages with over 60 extensions:
  ]
  ```

+ ## Dataset Creation

  The dataset was created in two steps:
  1. Files with the extensions given in the list above were retrieved from the GitHub dataset on BigQuery (full query [here](https://huggingface.co/datasets/lvwerra/github-code/blob/main/query.sql)). The query was executed on _Feb 14, 2022, 12:03:16 PM UTC+1_.
+ 2. Files with lines longer than 1000 characters and duplicates (exact duplicates ignoring whitespaces) were dropped (full preprocessing script [here](https://huggingface.co/datasets/lvwerra/github-code/blob/main/github_preprocessing.py)).
+
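The filtering in step 2 can be sketched as follows. This is a hypothetical simplification, not the actual linked preprocessing script: drop any file with a line over 1000 characters, then deduplicate on a hash of the content with all whitespace removed, so that files differing only in whitespace count as exact duplicates.

```python
import hashlib

def passes_line_filter(code: str, max_line_length: int = 1000) -> bool:
    """Keep a file only if no single line exceeds max_line_length characters."""
    return all(len(line) <= max_line_length for line in code.splitlines())

def content_hash(code: str) -> str:
    """Hash the content with all whitespace removed, so whitespace-only
    variants of the same file hash identically."""
    stripped = ''.join(code.split())
    return hashlib.sha256(stripped.encode('utf-8')).hexdigest()

def preprocess(files):
    """Apply the line-length filter, then drop exact (whitespace-ignoring) duplicates."""
    seen, kept = set(), []
    for code in files:
        if not passes_line_filter(code):
            continue
        h = content_hash(code)
        if h in seen:
            continue
        seen.add(h)
        kept.append(code)
    return kept

# Example: the second file is a whitespace-only variant of the first (dropped
# as a duplicate), the third has a 2000-character line (dropped by the filter).
files = ["a = 1\nb = 2\n", "a=1\nb=2\n", "x" * 2000]
print(preprocess(files))  # ['a = 1\nb = 2\n']
```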
+ ## Considerations for Using the Data
+
+ The dataset consists of source code from a wide range of repositories. As such, it can potentially include harmful or biased code, as well as sensitive information like passwords or usernames.