---
license: apache-2.0
---
This document details the curated dataset developed for our research paper, *Approaching Human-Level Forecasting with Language Models*, authored by Danny Halawi, Fred Zhang, Chen Yueh-Han, and Jacob Steinhardt.
The dataset is compiled from forecasting platforms including Metaculus, Good Judgment Open, INFER, Polymarket, and Manifold. These platforms let users predict future events by assigning probabilities to the possible outcomes of a question.
Each question specifies a begin date, a close date, and a resolve date; submissions are accepted between the begin date and the earlier of the close and resolve dates. See Table 1 in our paper for an in-depth example; a hypothetical record is also sketched below.
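As a rough illustration only, a single question record might look like the following. The field names and values here are hypothetical and are not the dataset's actual schema; they are meant to convey the kind of metadata each question carries (see Table 1 in the paper for a real example).

```python
# Hypothetical question record; field names are illustrative, not the dataset's actual schema.
example_question = {
    "question": "Will country X hold a general election before 2024-01-01?",
    "background": "Short context that forecasters can rely on when predicting.",
    "resolution_criteria": "Resolves YES if an election is held before the date above.",
    "begin_date": "2023-03-01",    # forecasting opens
    "close_date": "2023-11-30",    # last day submissions are accepted
    "resolve_date": "2024-01-01",  # date by which the outcome is determined
    "resolution": "no",            # final outcome once the question resolves
}
```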
The raw dataset encompasses 48,754 questions and 7,174,607 user forecasts from 2015 to 2024, spanning a wide range of question types and topics worldwide. However, the raw data come with challenges, such as ill-defined questions and a significant imbalance in contributions across source platforms after June 1, 2023. For a complete view of the raw data, visit our dataset on Hugging Face.
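As a minimal sketch, the raw data could be loaded with the `datasets` library as shown below. The repository id is a placeholder, and the split names depend on how the data is published; substitute the actual Hugging Face path of this dataset.

```python
from datasets import load_dataset

# Placeholder repository id; replace with this dataset's actual Hugging Face path.
raw = load_dataset("ORG_NAME/forecasting-questions")

print(raw)                     # available splits and row counts
first_split = next(iter(raw))  # take whichever split is listed first
print(raw[first_split][0])     # inspect one question record
```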
To refine the dataset for analytical rigor, we applied a series of curation steps, detailed in our paper.
This curation resulted in 5,516 binary questions, with 3,762 for training, 840 for validation, and 914 for testing. Detailed examples and curation insights are provided in Table 2a and Appendix C of our paper.
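For instance, assuming the curated data is published with `train`, `validation`, and `test` splits (the repository id below is again a placeholder), the split sizes reported above could be checked like this:

```python
from datasets import load_dataset

# Placeholder repository id and assumed split names.
curated = load_dataset("ORG_NAME/forecasting-questions-curated")

for split, expected in [("train", 3762), ("validation", 840), ("test", 914)]:
    print(split, len(curated[split]), "records; expected", expected)
```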
The curated dataset is pivotal to our investigation of language models' forecasting capabilities, where we benchmark them against human predictive performance. It enables focused analysis of high-quality, relevant forecasting questions.
Detailed methodologies and insights from our study are available in the paper cited at the beginning of this document. We invite feedback and collaboration to further this field of research.
If you find our dataset and research useful for your work, please cite it using the following BibTeX entry:
```bibtex
@misc{halawi2024approaching,
  title={Approaching Human-Level Forecasting with Language Models},
  author={Danny Halawi and Fred Zhang and Chen Yueh-Han and Jacob Steinhardt},
  year={2024},
  eprint={2402.18563},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```