---
license: other
task_categories:
- text-generation
language:
- en
tags:
- language-modeling
- causal-lm
- llm
pretty_name: Dolma
size_categories:
- n>1T
extra_gated_prompt: "Access to this dataset is automatically granted upon accepting the [ImpACT license for medium risk artifacts](https://allenai.org/licenses/impact-mr) and completing all fields below."
extra_gated_fields:
Your full name: text
Organization or entity you are affiliated with: text
State or country you are located in: text
Contact email: text
Please describe your intended use of the medium risk artifact(s): text
I AGREE to the terms and conditions of the MR Agreement above: checkbox
I AGREE to AI2’s use of my information for legal notices and administrative matters: checkbox
I CERTIFY that the information I have provided is true and accurate: checkbox
---
# Dolma
<img alt="Dolma's official logo. It's dolma written in yellow, round lowercase letters over a blue background." src="/logo.png" width="100%">
Dolma is a dataset of 3 trillion tokens from a diverse mix of web content, academic publications, code, books, and encyclopedic materials. It is openly released under AI2’s ImpACT license as a medium risk artifact.
More information:
- Read the Dolma **announcement blog post** [on Medium](https://soldni.medium.com/dolma-3-trillion-tokens-open-llm-corpus-9a0ff4b8da64);
- Learn more about Dolma on its [**Data Sheet**](https://drive.google.com/file/d/12gOf5I5RytsD159nSP7iim_5zN31FCXq/view?usp=drive_link);
- Review Dolma's [**ImpACT license** for medium risk artifacts](https://allenai.org/licenses/impact-mr);
- Explore the [**open source tools**](https://github.com/allenai/dolma) we created to curate Dolma.
## Summary Statistics
|**Source**|**Type**|**Gzip files (GB)**|**Documents (millions)**|**[GPT-NeoX](https://huggingface.co/EleutherAI/gpt-neox-20b) Tokens (billions)**|
|:---|:---:|:---:|:---:|:----:|
|[CommonCrawl](https://commoncrawl.org/)|web|4,197|4,600|2,415|
|[C4](https://huggingface.co/datasets/allenai/c4)|web|302|364|175|
|[peS2o](https://huggingface.co/datasets/allenai/peS2o)|academic|150|38.8|57|
|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|code|675|236|430|
|[Project Gutenberg](https://www.gutenberg.org/)|books|6.6|0.052|4.8|
|[Wikipedia](https://dumps.wikimedia.org/)|encyclopedic|5.8|6.1|3.6|
|**Total**||**5,334**|**5,245**|**3,084**|
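The totals in the last row can be reproduced from the per-source figures; since the per-source values are rounded, the column sums may differ from the stated totals by a few units. A minimal sketch (the dictionary below simply transcribes the table above):

```python
# Per-source statistics from the summary table:
# (gzip size in GB, documents in millions, GPT-NeoX tokens in billions)
sources = {
    "CommonCrawl": (4197, 4600, 2415),
    "C4": (302, 364, 175),
    "peS2o": (150, 38.8, 57),
    "The Stack": (675, 236, 430),
    "Project Gutenberg": (6.6, 0.052, 4.8),
    "Wikipedia": (5.8, 6.1, 3.6),
}

# Sum each column across sources.
gzip_gb = sum(v[0] for v in sources.values())
docs_m = sum(v[1] for v in sources.values())
tokens_b = sum(v[2] for v in sources.values())

print(f"{gzip_gb:,.1f} GB gzip, {docs_m:,.1f} M documents, {tokens_b:,.1f} B tokens")
```

The token column sums to roughly 3,085 billion, matching the headline figure of 3 trillion tokens.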