More than one training run goes into making a large language model, but developers rarely release the small models and datasets they experiment with during the development process. How do they decide what dataset to use for pretraining or which benchmarks to hill climb on? To empower open exploration of these questions, we release DataDecide—a suite of models we pretrain on 25 corpora with differing sources, deduplication, and filtering up to 100B tokens, over 14 different model sizes ranging from 4M parameters up to 1B parameters (more than 30k model checkpoints in total).

25 Data Recipes

We call the 25 corpora we train on "data recipes," as they span popular corpora, including Dolma, DCLM, RefinedWeb, C4, and FineWeb, as well as combinations of interventions on these datasets such as source mixing, deduplication, and filtering. This HuggingFace Dataset contains the tokenized data used to build these recipes with OLMo. Uploading is in progress.
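Because the recipes consist of tokenized data, they may not load directly with `datasets.load_dataset`. A minimal sketch for browsing and fetching the files with the `huggingface_hub` client is below; the repository id comes from this card, but the per-recipe file layout is an assumption, so list the files before downloading anything.

```python
# Minimal sketch: browse and download the tokenized recipe files from the Hub.
# The per-recipe directory layout is an assumption; list the repo first and
# substitute a real path before downloading.
from huggingface_hub import hf_hub_download, list_repo_files

REPO_ID = "allenai/DataDecide-data-recipes"

# See how the recipes are organized inside the dataset repository.
files = list_repo_files(REPO_ID, repo_type="dataset")
for path in files[:25]:
    print(path)

# Fetch a single file once you know its path (placeholder, not a real path):
# local_path = hf_hub_download(REPO_ID, filename="<path-from-listing>", repo_type="dataset")
```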

| Source | Recipe | Description |
| --- | --- | --- |
| Dolma1.7 | Original | The 1.7 release of the Dolma dataset (Soldaini et al., 2024), a 2.3 trillion token corpus sampling sources commonly used in LM training. |
| Dolma1.7 | No code | Dolma1.7 with code-related subsets (Starcoder, StackExchange) removed. |
| Dolma1.7 | No math, code | Dolma1.7 excluding OpenWebMath, arXiv STEM papers, Starcoder, StackExchange, and Algebraic Stack. |
| Dolma1.7 | No Reddit | Dolma1.7 with the Reddit subset excluded. |
| Dolma1.7 | No Flan | Dolma1.7 with the Flan subset removed. |
| Dolma1.6++ | Original | Dolma1.6 with additional sources from Dolma1.7: RedPajama ArXiv, OpenWebMath, Algebraic Stack, Flan, Starcoder, and Falcon. |
| C4 | Original | The C4 dataset (Raffel et al., 2020) as processed in Dolma1.7, derived from the April 2019 Common Crawl with automatic filtering. |
| FineWeb-Pro | Original | FineWeb-Pro (Zhou et al., 2024), created using a model-guided approach to apply programmatic cleaning over FineWeb. |
| FineWeb-Edu | Original | FineWeb-Edu (Benallal et al., 2024), the deduplicated subset of SmolLM-Corpus filtered by an educational quality classifier. |
| Falcon | Original | Falcon RefinedWeb (Penedo et al., 2023) as used in Dolma1.7, built from all Common Crawl through June 2023 and aggressively filtered. |
| Falcon+CC | Original | Unfiltered combination of Falcon RefinedWeb and Dolma1.7's Common Crawl data. |
| Falcon+CC | QC 10% | Top 10% of Falcon+CC by a reproduction of the DCLM quality filter (Li et al., 2024). |
| Falcon+CC | QC 20% | Top 20% of Falcon+CC by the reproduced DCLM filter. |
| Falcon+CC | QC Orig 10% | Top 10% using the original DCLM-provided quality filter. |
| Falcon+CC | QC Tulu 10% | Top 10% filtered using a classifier trained on pre-release Tulu-v3 data (Lambert et al., 2024). |
| DCLM-Baseline | Original | DCLM-Baseline from Li et al. (2024). |
| DCLM-Baseline | QC 7% FW2 | Top 7% by the DCLM filter, further filtered with FineWeb-Edu, keeping only documents scored ≥2. |
| DCLM-Baseline | QC 7% FW3 | Same as above but restricted to documents scored ≥3. |
| DCLM-Baseline | QC FW 10% | Filtered using the FineWeb-Edu classifier, top 10% retained. |
| DCLM-Baseline | QC FW 3% | Same as above but only the top 3% retained. |
| DCLM-Baseline | QC 10% | Top 10% retained using a classifier fine-tuned on OpenHermes and Reddit ELI5. |
| DCLM-Baseline | QC 20% | Same as above, but retaining the top 20%. |
| DCLM-Baseline 25% / Dolma 75% | 75% Dolma / 25% DCLM | Mixed dataset: 75% Dolma1.7 and 25% DCLM-Baseline. |
| DCLM-Baseline 50% / Dolma 50% | 50% Dolma / 50% DCLM | Mixed dataset: 50% Dolma1.7 and 50% DCLM-Baseline. |
| DCLM-Baseline 75% / Dolma 25% | 25% Dolma / 75% DCLM | Mixed dataset: 25% Dolma1.7 and 75% DCLM-Baseline. |
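The various QC recipes above follow the same pattern: score every document with a quality classifier and keep only the top-scoring fraction. The sketch below illustrates just that thresholding step with made-up scores; it is not the actual DCLM, FineWeb-Edu, or Tulu classifier pipeline.

```python
# Minimal sketch of top-k% quality filtering: score documents with a quality
# classifier, then keep only those above the (1 - fraction) quantile of scores.
# The scores below are toy values standing in for real classifier outputs.
import numpy as np

def keep_top_fraction(docs, scores, fraction=0.10):
    """Return the documents whose quality score falls in the top `fraction`."""
    scores = np.asarray(scores)
    threshold = np.quantile(scores, 1.0 - fraction)
    return [doc for doc, s in zip(docs, scores) if s >= threshold]

docs = ["doc a", "doc b", "doc c", "doc d", "doc e"]
scores = [0.91, 0.12, 0.55, 0.87, 0.33]
print(keep_top_fraction(docs, scores, fraction=0.2))  # -> ['doc a']
```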

350 Models over Differences in Data and Scale

For each of our 25 data recipes and 14 model sizes, we train a model, listed below. Each model has intermediate checkpoints (uploading after the initial release) and is trained with 3 random seeds. All models finish training at a token-to-parameter ratio of 100 (e.g., 1B parameters -> 100B tokens).
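As a quick sanity check on that ratio, the sketch below computes the implied token budget for a few sizes, treating the nominal size labels from the table below as approximate parameter counts.

```python
# Token budget implied by a token-to-parameter ratio of 100, using the nominal
# size labels as approximate parameter counts.
RATIO = 100
nominal_params = {"4M": 4e6, "20M": 20e6, "150M": 150e6, "1B": 1e9}

for name, params in nominal_params.items():
    tokens = params * RATIO
    print(f"{name}: ~{tokens / 1e9:g}B training tokens")
# 4M: ~0.4B, 20M: ~2B, 150M: ~15B, 1B: ~100B
```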

| Data | Model Sizes |
| --- | --- |
| Dolma1.7 | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| Dolma1.7 (no code) | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| Dolma1.7 (no math, code) | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| Dolma1.7 (no Reddit) | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| Dolma1.7 (no Flan) | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| Dolma1.6++ | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| C4 | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| FineWeb-Pro | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| FineWeb-Edu | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| Falcon | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| Falcon+CC | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| Falcon+CC (QC 10%) | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| Falcon+CC (QC 20%) | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| Falcon+CC (QC Orig 10%) | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| Falcon+CC (QC Tulu 10%) | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| DCLM-Baseline | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| DCLM-Baseline (QC 7%, FW2) | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| DCLM-Baseline (QC 7%, FW3) | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| DCLM-Baseline (QC FW 3%) | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| DCLM-Baseline (QC FW 10%) | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| DCLM-Baseline (QC 10%) | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| DCLM-Baseline (QC 20%) | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| DCLM-Baseline 25% / Dolma 75% | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| DCLM-Baseline 50% / Dolma 50% | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
| DCLM-Baseline 75% / Dolma 25% | 4M, 6M, 8M, 10M, 14M, 16M, 20M, 60M, 90M, 150M, 300M, 530M, 750M, 1B |
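To experiment with any of the pretrained models above, loading them through `transformers` should look roughly like the sketch below. The repository name is a hypothetical placeholder; use the actual model repo for the recipe and size you want (and a checkpoint-specific `revision`, if the model card provides one, for intermediate checkpoints).

```python
# Sketch: load one of the pretrained models and run a short generation.
# "allenai/DataDecide-dclm-baseline-1B" is a hypothetical placeholder repo id;
# substitute the real repo for your chosen recipe and model size.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "allenai/DataDecide-dclm-baseline-1B"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Pretraining data quality matters because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```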

Links

Citation

BibTeX:

@article{MagnussonDataDecide2025,
      title={{DataDecide: How to Predict Best Pretraining Data with Small Experiments}},
      author={Ian Magnusson and Nguyen Tai and Ben Bogin and David Heineman and Jena Hwang and Luca Soldaini and Akshita Bhagia and Jiacheng Liu and Dirk Groeneveld and Oyvind Tafjord and Noah A. Smith and Pang Wei Koh and Jesse Dodge},
      year={2025},
      journal={arXiv preprint},
}