---
license: openrail
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: repo
    dtype: string
  - name: file
    dtype: string
  - name: code
    dtype: string
  - name: file_length
    dtype: int64
  - name: avg_line_length
    dtype: float64
  - name: max_line_length
    dtype: int64
  - name: extension_type
    dtype: string
  splits:
  - name: train
    num_bytes: 21983781651.45426
    num_examples: 1634156
  download_size: 10635788503
  dataset_size: 21983781651.45426
task_categories:
- text-generation
language:
- en
pretty_name: arxiv_cplusplus_research_code
size_categories:
- 10B<n<100B
---
# Dataset Card for AlgorithmicResearchGroup/arxiv_cplusplus_research_code
## Dataset Description

https://huggingface.co/datasets/AlgorithmicResearchGroup/arxiv_cplusplus_research_code


### Dataset Summary

AlgorithmicResearchGroup/arxiv_cplusplus_research_code contains over 10.6GB of source code files drawn exclusively from repositories referenced in ArXiv papers. The dataset serves as a curated corpus for training Code LLMs.

### How to use it
```python
from datasets import load_dataset

# full dataset (10.6GB of data)
ds = load_dataset("AlgorithmicResearchGroup/arxiv_cplusplus_research_code", split="train")

# dataset streaming (will only download the data as needed)
ds = load_dataset("AlgorithmicResearchGroup/arxiv_cplusplus_research_code", streaming=True, split="train")
for sample in ds:
    print(sample["code"])
```

## Dataset Structure
### Data Instances
Each data instance corresponds to one file. The content of the file is in the `code` feature, and other features (`repo`, `file`, etc.) provide some metadata.
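For orientation, a single instance has the following shape (the field values below are illustrative placeholders, not actual dataset records):

```python
example = {
    "repo": "example-org/example-repo",     # repository name (placeholder)
    "file": "src/solver.cpp",               # file path in the repository (placeholder)
    "code": "#include <iostream>\n// ...",  # full file contents (placeholder)
    "file_length": 1234,
    "avg_line_length": 28.7,
    "max_line_length": 80,
    "extension_type": "cpp",
}
```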
### Data Fields
- `repo` (string): code repository name.
- `file` (string): file path in the repository.
- `code` (string): code within the file.
- `file_length` (integer): number of characters in the file.
- `avg_line_length` (float): the average line length of the file.
- `max_line_length` (integer): the maximum line length of the file.
- `extension_type` (string): file extension.
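
The metadata fields make it easy to filter the corpus before training; a minimal sketch (the thresholds below are arbitrary examples, not values used to build the dataset):

```python
from datasets import load_dataset

ds = load_dataset("AlgorithmicResearchGroup/arxiv_cplusplus_research_code", split="train")

# Drop unusually long files and files with very long lines
# (often generated code); thresholds are illustrative only.
filtered = ds.filter(
    lambda x: x["file_length"] < 100_000 and x["max_line_length"] < 1_000
)
```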

### Data Splits

The dataset has no predefined validation or test splits; all data is loaded as the `train` split by default.
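
If you need a held-out evaluation set, you can create one locally with `train_test_split`; a minimal sketch (the 5% fraction and seed are arbitrary choices, not part of the dataset):

```python
from datasets import load_dataset

ds = load_dataset("AlgorithmicResearchGroup/arxiv_cplusplus_research_code", split="train")

# Carve out a local 95/5 train/validation split; ratio and seed are illustrative.
splits = ds.train_test_split(test_size=0.05, seed=42)
train_ds, val_ds = splits["train"], splits["test"]
```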

## Dataset Creation

### Source Data
#### Initial Data Collection and Normalization
34,099 active GitHub repository names were extracted from [ArXiv](https://arxiv.org/) papers, from ArXiv's inception through July 21st, 2023, totaling 773GB of compressed GitHub repositories.

These repositories were then filtered, and files with the `cpp`, `cxx`, `cc`, `h`, `hpp`, and `hxx` extensions were extracted, yielding 1.4 million files.
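
A minimal sketch of this kind of extension filter, assuming repositories have been checked out locally (the actual extraction pipeline is not published; the names here are illustrative):

```python
from pathlib import Path

# C/C++ extensions retained in the dataset.
CPP_EXTENSIONS = {".cpp", ".cxx", ".cc", ".h", ".hpp", ".hxx"}

def collect_cpp_files(repo_root: str) -> list[Path]:
    """Walk a checked-out repository and keep only C/C++ source files."""
    return [
        p for p in Path(repo_root).rglob("*")
        if p.is_file() and p.suffix.lower() in CPP_EXTENSIONS
    ]
```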

#### Who are the source language producers?

The source (code) language producers are GitHub users who created the repositories referenced in ArXiv papers.

### Personal and Sensitive Information
The released dataset may contain sensitive information such as emails, IP addresses, and API/SSH keys that have previously been published to public repositories on GitHub.
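
If this matters for your use case, consider scrubbing or dropping such content before training. A minimal sketch using a naive email regex (illustrative only; real secret detection needs dedicated tooling such as entropy-based scanners):

```python
import re
from datasets import load_dataset

ds = load_dataset("AlgorithmicResearchGroup/arxiv_cplusplus_research_code",
                  streaming=True, split="train")

# Naive email pattern; not a complete PII/secret detector.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact_emails(example):
    example["code"] = EMAIL_RE.sub("<EMAIL>", example["code"])
    return example

ds = ds.map(redact_emails)
```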

## Additional Information

### Dataset Curators
Matthew Kenney, AlgorithmicResearchGroup, [email protected]

### Citation Information
```
@misc{arxiv_cplusplus_research_code,
    title={arxiv_cplusplus_research_code},
    author={Matthew Kenney},
    year={2023}
}
```