---
license: other
task_categories:
- text-generation
language:
- en
tags:
- language-modeling
- causal-lm
- llm
pretty_name: Dolma
size_categories:
- n>1T
extra_gated_prompt: >-
  Access to this dataset is automatically granted upon accepting the [ImpACT
  license for medium risk artifacts](https://allenai.org/licenses/impact-mr) and
  completing all fields below.
extra_gated_fields:
  Your full name: text
  Organization or entity you are affiliated with: text
  State or country you are located in: text
  Contact email: text
  Please describe your intended use of the medium risk artifact(s): text
  I AGREE to the terms and conditions of the MR Agreement above: checkbox
  I AGREE to AI2’s use of my information for legal notices and administrative matters: checkbox
  I CERTIFY that the information I have provided is true and accurate: checkbox
---

# Dolma
Dolma is a dataset of 3 trillion tokens from a diverse mix of web content, academic publications, code, books, and encyclopedic materials. It is openly released under AI2’s ImpACT license as a medium risk artifact.
More information:
- Read the Dolma announcement blog post on Medium;
- Learn more about Dolma on its Data Sheet;
- Review Dolma's ImpACT license for medium risk artifacts;
- Explore the open source tools we created to curate Dolma.
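Because the full corpus spans several terabytes, streaming access is usually more practical than a full download. Below is a minimal sketch using the Hugging Face `datasets` library in streaming mode; the repository id `allenai/dolma`, the default configuration, and the `text` field name are assumptions to verify against the dataset files.

```python
from datasets import load_dataset

# Streaming avoids downloading the full multi-terabyte corpus up front.
# Repository id and field names are assumptions; check the dataset page
# for the exact configuration names and schema.
dolma = load_dataset("allenai/dolma", split="train", streaming=True)

# Inspect the first few documents; each record carries the raw text
# along with source metadata.
for i, doc in enumerate(dolma):
    print(doc["text"][:200])
    if i >= 2:
        break
```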
## Summary Statistics
| Source | Type | Gzip files (GB) | Documents (millions) | GPT-NeoX tokens (billions) |
|---|---|---|---|---|
| CommonCrawl | web | 4,197 | 4,600 | 2,415 |
| C4 | web | 302 | 364 | 175 |
| peS2o | academic | 150 | 38.8 | 57 |
| The Stack | code | 675 | 236 | 430 |
| Project Gutenberg | books | 6.6 | 0.052 | 4.8 |
| Wikipedia | encyclopedic | 5.8 | 6.1 | 3.6 |
| **Total** | | 5,334 | 5,245 | 3,084 |
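The token counts above are measured with the GPT-NeoX tokenizer. A rough sketch of reproducing such a count on a handful of documents is shown below; the tokenizer repository id `EleutherAI/gpt-neox-20b` and the `text` field name are assumptions, and the published totals naturally come from tokenizing the full corpus.

```python
from transformers import AutoTokenizer

# GPT-NeoX-20B tokenizer; repository id is an assumption about which
# checkpoint matches the tokenizer used for the table above.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

def count_tokens(documents):
    """Return the total number of GPT-NeoX tokens across an iterable of docs."""
    total = 0
    for doc in documents:
        total += len(tokenizer(doc["text"])["input_ids"])
    return total
```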