---
license: bigscience-openrail-m
task_categories:
  - text-generation
pretty_name: SO13M
size_categories:
  - 10M<n<100M
---

# so13m

so13m is a dataset of 13 million discussion threads from Stack Overflow, drawn from the StackExchange data dump covering January 2014 through December 2022. The threads span a wide range of topics. The dataset provides natural language and (often) accompanying code in the domain of software engineering, and can support downstream tasks that depend on generating or understanding natural language.


## so13m file list

- so13m.pkl -- a pickle file containing a dictionary of Stack Overflow posts, with key = post id and value = post content
- so13m.json.gz -- a gzip-compressed JSON file containing the same dictionary of Stack Overflow posts (key = post id, value = post content)
- stackoverflow_txtfiles.pkl -- a pickle file containing a list of Stack Overflow post IDs
- train.bin; val.bin -- binary files for training and fine-tuning models
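The pickle and compressed-JSON files above hold the same id-to-post mapping. A minimal sketch of how such files can be written and read back (using a tiny synthetic dictionary in place of the real 13m-post files, which are too large to illustrate directly):

```python
import gzip
import json
import pickle

# Tiny stand-in for the dataset's dictionary: post id -> post content.
posts = {"101": "How do I reverse a list in Python?",
         "102": "Use slicing: my_list[::-1]."}

# Pickle form (the shape described for so13m.pkl).
with open("so_sample.pkl", "wb") as f:
    pickle.dump(posts, f)
with open("so_sample.pkl", "rb") as f:
    from_pickle = pickle.load(f)

# Gzip-compressed JSON form (the shape described for so13m.json.gz).
with gzip.open("so_sample.json.gz", "wt", encoding="utf-8") as f:
    json.dump(posts, f)
with gzip.open("so_sample.json.gz", "rt", encoding="utf-8") as f:
    from_json = json.load(f)

# Both formats decode to the same id -> post mapping.
print(from_pickle == from_json)
```

The file names `so_sample.pkl` and `so_sample.json.gz` are placeholders for this sketch; substitute the dataset's actual files when loading so13m.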

## so13m dataset details

We provide the size of our dataset in the following table:

| Config | Value |
| --- | --- |
| number of tokens | 10,495,518,108 |
| number of Stack Overflow posts | 13,071,148 |
| megabytes after processing | 16,695 |

We tokenize our data using the scripts provided in our GitHub repository.
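The train.bin and val.bin files are typically flat binary streams of token ids. A minimal sketch of reading such a file, assuming a uint16 token stream as produced by some common tokenization pipelines (an assumption here; consult the repository's scripts for the actual dtype and layout):

```python
import numpy as np

# Write a tiny stand-in .bin file: a flat stream of uint16 token ids.
# The real train.bin/val.bin layout is defined by the repository's
# tokenization scripts; uint16 is an assumption for this sketch.
tokens = np.array([15496, 995, 50256, 1212, 318], dtype=np.uint16)
tokens.tofile("sample.bin")

# Memory-map the file so a multi-gigabyte file is not read into RAM at once.
data = np.memmap("sample.bin", dtype=np.uint16, mode="r")
print(len(data), data[:3])
```

Memory-mapping is the usual way to stream batches from files of this size during training, since only the slices actually indexed are paged in.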