---
license: bigscience-openrail-m
task_categories:
- text-generation
pretty_name: SO13M
size_categories:
- 10M<n<100M
---
# so13m
so13m is a dataset containing 13 million discussion threads from Stack Overflow. The data originates from the StackExchange data dump and covers the period from January 2014 to December 2022. The threads span a multitude of topics, providing natural language and (often) accompanying code in the domain of software engineering. The dataset can support downstream tasks that depend on generating or understanding natural language.

---

## so13m file list
- so13m.pkl -- a pickle file containing a dictionary of Stack Overflow posts, with key = post ID and value = post content (see the loading sketch after this list)
- so13m.json.gz -- a gzip-compressed JSON file containing the same dictionary of Stack Overflow posts (key = post ID, value = post content)
- stackoverflow_txtfiles.pkl -- a pickle file containing a list of Stack Overflow post IDs
- train.bin; val.bin -- binary files for training and fine-tuning models
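
These files can be loaded with the Python standard library. A minimal sketch, assuming the structures described above (a dictionary mapping post IDs to post content, and a list of post IDs):

```python
import gzip
import json
import pickle

# Load the post dictionary (key = post ID, value = post content).
with open("so13m.pkl", "rb") as f:
    posts = pickle.load(f)

# Equivalent content in compressed JSON form.
with gzip.open("so13m.json.gz", "rt", encoding="utf-8") as f:
    posts_json = json.load(f)

# Load the list of Stack Overflow post IDs.
with open("stackoverflow_txtfiles.pkl", "rb") as f:
    post_ids = pickle.load(f)

print(f"{len(posts)} posts loaded")
```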

---
## so13m dataset details

We provide the size of our dataset in the following table:

| Statistic | Value |
| ------- | ------- |
| number of tokens | 10,495,518,108 |
| number of Stack Overflow posts | 13,071,148 |
| size after processing (MB) | 16,695 |

We tokenize the data using the script provided in our [GitHub repository](https://github.com/apcl-research/jam/blob/main/data/jam_so13m/prepare_stackoverflow.py).
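
The resulting train.bin and val.bin files can be read back as a flat token stream. A minimal sketch, assuming the nanoGPT-style convention of uint16 token IDs written contiguously with numpy (the dtype and block size here are assumptions; verify against the preparation script linked above):

```python
import numpy as np

# Memory-map the token stream so the full file is not loaded into RAM.
# uint16 is an assumed dtype (fits GPT-2 BPE vocabularies < 65,536 tokens);
# confirm against prepare_stackoverflow.py.
train_data = np.memmap("train.bin", dtype=np.uint16, mode="r")

# Draw one random training block of block_size tokens (hypothetical size).
block_size = 1024
ix = np.random.randint(0, len(train_data) - block_size)
x = train_data[ix : ix + block_size].astype(np.int64)

print(f"{len(train_data):,} tokens total; sample block shape: {x.shape}")
```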