Update README.md
README.md CHANGED
@@ -8,17 +8,17 @@ language:
 - en
 tags:
 - climate
-pretty_name:
+pretty_name: ClimateX – Expert Confidence in Climate Statements
 size_categories:
 - 1K<n<10K
 ---
 
-#
+# ClimateX: Expert Confidence in Climate Statements
 _What do LLMs know about climate? Let's find out!_
 
-##
+## ClimateX Dataset
 
-We introduce the **
+We introduce the **Expert Confidence in Climate Statements (ClimateX) dataset**, a novel, curated, expert-labeled, natural-language dataset of 8094 statements extracted or paraphrased from the three IPCC Assessment Report 6 reports: the [Working Group I report](https://www.ipcc.ch/report/ar6/wg1/), the [Working Group II report](https://www.ipcc.ch/report/ar6/wg2/), and the [Working Group III report](https://www.ipcc.ch/report/ar6/wg3/).
 
 Each statement is labeled with its IPCC report source, the page number in the report PDF, and its associated confidence level (`low`, `medium`, `high`, or `very high`) as assessed by IPCC climate scientists based on available evidence and agreement among their peers.
 
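For orientation, here is a minimal sketch of loading the dataset with the Hugging Face `datasets` library. The dataset identifier, split name, and record fields below are assumptions inferred from the card above, not confirmed by this README.

```python
from datasets import load_dataset

# Assumed dataset identifier -- replace with the actual Hub ID if it differs.
ds = load_dataset("rlacombe/ClimateX")

# The card above describes each record as carrying the statement text, its
# IPCC report source, the page number in the report PDF, and the expert
# confidence label (low / medium / high / very high).
print(ds["train"][0])  # "train" split name is an assumption
```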
@@ -35,7 +35,7 @@ Source: [IPCC AR6 Working Group I report](https://www.ipcc.ch/report/ar6/wg1/)
 
 ## Dataset Construction
 
-To construct the dataset, we retrieved the complete raw text of each of the three IPCC report PDFs available online using the open-source library [pypdf2](https://pypi.org/project/PyPDF2/). We then normalized the whitespace, tokenized the text into sentences using [NLTK](https://www.nltk.org/), and used a regex search to filter for complete sentences ending in a parenthetical confidence label of the form _sentence (low|medium|high|very high confidence)_. The final
+To construct the dataset, we retrieved the complete raw text of each of the three IPCC report PDFs available online using the open-source library [pypdf2](https://pypi.org/project/PyPDF2/). We then normalized the whitespace, tokenized the text into sentences using [NLTK](https://www.nltk.org/), and used a regex search to filter for complete sentences ending in a parenthetical confidence label of the form _sentence (low|medium|high|very high confidence)_. The final ClimateX dataset contains 8094 labeled sentences.
 
 From the full 8094 labeled sentences, we further selected **300 statements to form a smaller and more tractable test dataset**. We performed a random selection of sentences within each report and confidence category, with the following objectives:
 - Making the test set distribution representative of the confidence class distribution in the overall train set and within each report;
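The construction paragraph above is concrete enough to sketch. Below is a minimal, illustrative version of the extraction-and-filtering pipeline, assuming the three report PDFs have been downloaded locally (the file names are hypothetical); it is a sketch of the described approach, not the authors' exact script.

```python
import re

import nltk
from nltk.tokenize import sent_tokenize
from PyPDF2 import PdfReader

nltk.download("punkt", quiet=True)

# Matches sentences ending in a parenthetical IPCC confidence label,
# e.g. "... is increasing (high confidence)."
CONFIDENCE_RE = re.compile(r"\((low|medium|high|very high) confidence\)\.?$")

def extract_labeled_sentences(pdf_path: str, report: str) -> list[tuple]:
    """Return (report, page, statement, confidence) tuples from one report PDF."""
    rows = []
    for page_num, page in enumerate(PdfReader(pdf_path).pages, start=1):
        # Normalize whitespace in the raw page text before sentence-splitting.
        text = re.sub(r"\s+", " ", page.extract_text() or "").strip()
        for sentence in sent_tokenize(text):
            match = CONFIDENCE_RE.search(sentence)
            if match:
                rows.append((report, page_num, sentence, match.group(1)))
    return rows

# Hypothetical local file names for the three AR6 working group reports.
dataset = []
for report, path in [("WG1", "ar6_wg1.pdf"), ("WG2", "ar6_wg2.pdf"), ("WG3", "ar6_wg3.pdf")]:
    dataset.extend(extract_labeled_sentences(path, report))
```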
@@ -62,6 +62,6 @@ We use this dataset to evaluate how recent LLMs fare at classifying the scientif
 
 We show that `gpt3.5-turbo` and `gpt4` assess the correct confidence level with reasonable accuracy even in the zero-shot setting, but that, along with the other language models we tested, they consistently overstate the certainty associated with low- and medium-confidence labels. Models generally perform better on reports published before their knowledge cutoff, and produce intuitive classifications on a baseline of non-climate statements. However, we caution that it is still not fully clear why these models perform well, and whether they may also be picking up on linguistic cues within the climate statements rather than drawing only on prior exposure to climate knowledge and/or the IPCC reports.
 
-Our results have implications for climate communications and the use of generative language models in knowledge retrieval systems. We hope the
+Our results have implications for climate communications and the use of generative language models in knowledge retrieval systems. We hope the ClimateX dataset provides the NLP and climate science communities with a valuable tool with which to evaluate and improve model performance in this critical domain of human knowledge.
 
 Pre-print upcoming.
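To make the evaluation setup concrete, here is a minimal sketch of zero-shot confidence classification with the OpenAI chat API. The prompt wording and label parsing are illustrative assumptions, not the prompt used in the study.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["low", "medium", "high", "very high"]

def classify_confidence(statement: str, model: str = "gpt-3.5-turbo") -> str:
    """Ask the model for an IPCC-style confidence label for one statement."""
    prompt = (
        "IPCC authors label climate statements with a confidence level "
        f"({', '.join(LABELS)}) based on evidence and expert agreement.\n"
        f"Statement: {statement}\n"
        "Answer with exactly one label."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

# Example: compare the prediction against the dataset's expert label.
print(classify_confidence("Global mean sea level is rising."))
```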