license: apache-2.0
Dataset from "Approaching Human-Level Forecasting with Language Models"
This document details the curated dataset developed for our research paper, Approaching Human-Level Forecasting with Language Models, authored by Danny Halawi, Fred Zhang, Chen Yueh-Han, and Jacob Steinhardt.
Data Source and Format
The dataset is compiled from forecasting platforms including Metaculus, Good Judgment Open, INFER, Polymarket, and Manifold. These platforms let users forecast future events by assigning probabilities to possible outcomes. Each question is structured as follows:
- Background Description: Contextual information for each forecasting question.
- Resolution Criterion: Guidelines on how and when each question is considered resolved.
- Timestamps: Key dates including the publication date (begin date), the forecast submission deadline (close date), and the outcome resolution date (resolve date).
Submissions are accepted between the begin date and the earlier of the resolve or close dates. See Table 1 in our paper for an in-depth example.
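To make the record structure concrete, here is a minimal loading sketch. The repository ID and field names below are illustrative assumptions, not the dataset's official schema; consult the dataset card files for the actual layout.

```python
# Minimal sketch of inspecting one question record, assuming the dataset is
# hosted on the Hugging Face Hub.
from datasets import load_dataset

# Hypothetical repository ID; substitute the actual dataset path.
ds = load_dataset("your-org/forecasting-questions", split="train")

example = ds[0]
# Field names are assumptions based on the description above:
#   question              - the forecasting question text
#   background            - contextual information
#   resolution_criterion  - how and when the question resolves
#   begin_date / close_date / resolve_date - key timestamps
for key, value in example.items():
    print(f"{key}: {str(value)[:80]}")
```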
Raw Data Composition
The raw dataset comprises 48,754 questions and 7,174,607 user forecasts gathered from 2015 to 2024, spanning a wide range of question types and topics. However, it has limitations: some questions are ill-defined, and the contributions from source platforms are heavily imbalanced for questions published after June 1, 2023. The complete raw data is available in our dataset on Hugging Face.
Data Curation Process
To refine the dataset for analytical rigor, we undertook the following steps:
- Filtering: Exclusion of ill-defined, overly personal, or niche-interest questions to ensure data quality and relevance.
- Adjustment for Imbalance: Careful selection to mitigate the recent source imbalance, focusing on a diverse representation of forecasting questions.
- Binary Focus: Conversion of multiple-choice questions to binary format, concentrating on binary outcomes for a streamlined analysis.
- Temporal Segregation: To prevent leakage from language models' pre-training data, the test set includes only questions published after June 1, 2023; earlier questions are allocated to the training and validation sets (a minimal split sketch appears below).
This curation resulted in 5,516 binary questions, with 3,762 for training, 840 for validation, and 914 for testing. Detailed examples and curation insights are provided in Table 2a and Appendix C of our paper.
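As an illustration of the temporal segregation step, the sketch below shows how one might route questions into the test set by publication date. It is not the authors' exact pipeline; the `begin_date` field name and ISO date format are assumptions.

```python
# Illustrative temporal split, not the official curation code.
# Assumes each record carries a `begin_date` string (assumed field name)
# in ISO format, e.g. "2023-07-15".
from datetime import date

CUTOFF = date(2023, 6, 1)  # test set: questions published after this date

def is_test_question(record: dict) -> bool:
    """Return True if the question was published after the cutoff."""
    published = date.fromisoformat(record["begin_date"][:10])
    return published > CUTOFF

# Toy example records:
questions = [
    {"id": 1, "begin_date": "2022-03-01"},
    {"id": 2, "begin_date": "2023-08-20"},
]
test = [q for q in questions if is_test_question(q)]
train_val = [q for q in questions if not is_test_question(q)]
print(len(train_val), len(test))  # 1 1
```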
Significance for Research
The curated dataset underpins our investigation of language models' forecasting capabilities, which we benchmark against human predictive performance. It enables focused analysis on high-quality, relevant forecasting questions.
Detailed methodology and findings are available in the paper linked at the beginning of this document. We welcome feedback and collaboration to advance this area of research.
How to Cite
If you find our dataset and research useful for your work, please cite it using the following BibTeX entry:
@misc{halawi2024approaching,
  title={Approaching Human-Level Forecasting with Language Models},
  author={Danny Halawi and Fred Zhang and Chen Yueh-Han and Jacob Steinhardt},
  year={2024},
  eprint={2402.18563},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}