pretty_name: arxiv_cplusplus_research_code
size_categories:
- 10B<n<100B
---
# Dataset card for ArtifactAI/arxiv_cplusplus_research_code

## Dataset Description

https://huggingface.co/datasets/ArtifactAI/arxiv_cplusplus_research_code

### Dataset Summary

ArtifactAI/arxiv_cplusplus_research_code contains over 10.6GB of source code files from repositories referenced in ArXiv papers. It serves as a curated dataset for training code LLMs.

### How to use it

```python
from datasets import load_dataset

# full dataset (10.6GB of data)
ds = load_dataset("ArtifactAI/arxiv_cplusplus_research_code", split="train")

# dataset streaming (will only download the data as needed)
ds = load_dataset("ArtifactAI/arxiv_cplusplus_research_code", streaming=True, split="train")
for sample in iter(ds):
    print(sample["code"])
```
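In streaming mode the dataset is consumed lazily, so a common pattern is to take a bounded sample with `itertools.islice` rather than looping over everything. A minimal sketch, using a stand-in generator in place of the real streaming dataset (which requires a network connection):

```python
from itertools import islice

# Stand-in for the streaming dataset: any iterable of {"code": ...} records.
# The real `ds` from load_dataset(..., streaming=True) is consumed the same way.
def fake_stream():
    for i in range(1000):
        yield {"code": f"// file {i}\nint main() {{ return {i}; }}\n"}

# Take only the first three samples without iterating (or downloading) the rest.
first_three = list(islice(fake_stream(), 3))
print(len(first_three))  # 3
```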

## Dataset Structure

### Data Instances

Each data instance corresponds to one file. The content of the file is in the `code` feature, and the other features (`repo`, `file`, etc.) provide metadata.

### Data Fields

- `repo` (string): code repository name.
- `file` (string): file path in the repository.
- `code` (string): code within the file.
- `file_length` (integer): number of characters in the file.
- `avg_line_length` (float): the average line length of the file.
- `max_line_length` (integer): the maximum line length of the file.
- `extension_type` (string): file extension.

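The three length-related fields can be reproduced from `code` alone. A minimal sketch, assuming plain character and line counting (the card does not specify the curators' exact rules, so this is an illustrative reimplementation):

```python
def file_metadata(code: str) -> dict:
    # Hypothetical reimplementation of the derived fields; the dataset card
    # does not state the exact counting rules used to build the dataset.
    lines = code.splitlines()
    lengths = [len(line) for line in lines]
    return {
        "file_length": len(code),  # total characters, including newlines
        "avg_line_length": sum(lengths) / len(lengths) if lengths else 0.0,
        "max_line_length": max(lengths, default=0),
    }

meta = file_metadata("int main() {\n    return 0;\n}\n")
print(meta["max_line_length"])  # 13, the length of "    return 0;"
```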
### Data Splits

The dataset has no splits; all data is loaded as the `train` split by default.

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

34,099 active GitHub repository names were extracted from [ArXiv](https://arxiv.org/) papers, from the site's inception through July 21st, 2023, totaling 773GB of compressed GitHub repositories.

These repositories were then filtered, and the code from files with the `cpp`, `cxx`, `cc`, `h`, `hpp`, and `hxx` extensions was extracted into 1.4 million files.

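The extension filter described above can be sketched as follows (the file paths and the exact matching rule are illustrative assumptions, not taken from the dataset's build pipeline):

```python
CPLUSPLUS_EXTENSIONS = {"cpp", "cxx", "cc", "h", "hpp", "hxx"}

def is_cplusplus_file(path: str) -> bool:
    # Keep a file if the text after its last dot is one of the C++ extensions.
    _, _, ext = path.rpartition(".")
    return ext.lower() in CPLUSPLUS_EXTENSIONS

# Illustrative paths, not actual dataset entries.
paths = ["src/solver.cpp", "include/solver.hpp", "README.md", "train.py"]
kept = [p for p in paths if is_cplusplus_file(p)]
print(kept)  # ['src/solver.cpp', 'include/solver.hpp']
```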
#### Who are the source language producers?

The source (code) language producers are the GitHub users who created each unique repository.

### Personal and Sensitive Information

The released dataset may contain sensitive information such as emails, IP addresses, and API/SSH keys that have previously been published in public repositories on GitHub.

91
+ ## Additional Information
92
+
93
+ ### Dataset Curators
94
+ Matthew Kenney, Artifact AI, [email protected]
95
+
96
+ ### Citation Information
97
+ ```
98
+ @misc{arxiv_cplusplus_research_code,
99
+ title={arxiv_cplusplus_research_code},
100
+ author={Matthew Kenney},
101
+ year={2023}
102
+ }
103
+ ```