---
license: apache-2.0
---
<h1 align="center">Dataset from "Approaching Human-Level Forecasting with Language Models"</h1>
<p>This document describes the cleaned dataset derived from the raw data used in our research paper, <strong><a href="https://arxiv.org/abs/2402.18563" target="_blank">Approaching Human-Level Forecasting with Language Models</a></strong>, by Danny Halawi, Fred Zhang, Chen Yueh-Han, and Jacob Steinhardt.</p>
<h2>Data Curation Process</h2>
<p>To enhance the quality and relevance of our dataset, we implemented a rigorous data curation process. This process involved:</p>
<ul>
<li>Filtering out ill-defined questions and those of overly personal or niche interest.</li>
<li>Excluding questions with few forecast submissions or low trading volume on platforms such as Manifold and Polymarket.</li>
<li>Converting multiple-choice questions into binary format to maintain consistency and focus on binary outcomes.</li>
<li>Ensuring that the test set contains only questions opened after the knowledge cut-off date of the models used (June 1, 2023), to prevent potential leakage. Questions opened after this date were used for testing, while those resolved before it were allocated to the training and validation sets; a minimal sketch of this split logic follows the list.</li>
</ul>
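<p>As a rough illustration of the date-based split described above, the following Python sketch assigns a question to a split. It is a minimal sketch under stated assumptions, not the paper's actual pipeline: the field names (<code>open_date</code>, <code>resolve_date</code>, <code>num_forecasts</code>) and the engagement threshold are illustrative, not the exact schema or filters from the paper.</p>

```python
from datetime import date

# Knowledge cut-off of the models used (see the curation notes above).
CUTOFF = date(2023, 6, 1)

def assign_split(question: dict) -> str | None:
    """Assign a curated question to a split, or return None to drop it.

    Assumes illustrative fields: `open_date` and `resolve_date` hold
    `datetime.date` values and `num_forecasts` is an int; the paper's
    actual schema and thresholds may differ.
    """
    # Drop questions with little community engagement
    # (10 is a placeholder, not the paper's exact threshold).
    if question["num_forecasts"] < 10:
        return None
    if question["open_date"] > CUTOFF:
        # Opened after the cut-off: unseen by the models, safe for testing.
        return "test"
    if question["resolve_date"] < CUTOFF:
        # Resolved before the cut-off: eligible for training/validation.
        return "train_or_validation"
    # Straddles the cut-off: excluded to avoid ambiguity.
    return None

# Example: a question opened after the cut-off lands in the test split.
print(assign_split({"open_date": date(2023, 7, 1),
                    "resolve_date": date(2023, 9, 1),
                    "num_forecasts": 50}))  # -> test
```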
<p>The curated dataset includes 5,516 binary questions: 3,762 for training, 840 for validation, and 914 for testing. The splits were chosen to provide a balanced and representative sample of forecasting challenges. Detailed examples and further information on the curation methodology are available in <em>Table 2a</em> and <em>Appendix C</em> of our paper.</p>
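<p>For reference, here is one way to load the dataset and confirm the split sizes with the Hugging Face <code>datasets</code> library. The repo id below is a placeholder for this dataset's actual path, and the split names are assumptions.</p>

```python
from datasets import load_dataset

# Placeholder repo id; substitute this dataset's actual Hugging Face path.
ds = load_dataset("your-org/your-forecasting-dataset")

# Expected sizes from the curation described above (5,516 questions total).
expected = {"train": 3762, "validation": 840, "test": 914}
for split, n in expected.items():
    print(f"{split}: {len(ds[split])} rows (expected {n})")
```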
<h2>Research Significance</h2>
<p>The curation and analysis of this dataset are pivotal to our research. They allow us to more accurately assess the forecasting capabilities of language models and explore their potential to match or exceed human-level accuracy in predicting future events. Our findings contribute valuable insights into the effectiveness of language models in complex decision-making scenarios.</p>
<p>We invite researchers and practitioners to review our methodology and findings for a deeper understanding of the potential and limitations of language models in forecasting applications. For more detailed discussions, please refer to the paper linked at the beginning of this document.</p>