---
license: mit
task_categories:
- text-classification
language:
- en
pretty_name: sst2_cognitive-bias
size_categories:
- 100K<n<1M
source_datasets:
- sst2
dataset_info:
  features:
  - name: idx
    dtype: string
  - name: sentence
    dtype: string
  - name: label
    dtype: int64
  - name: dist
    dtype: string
  - name: shot1_idx
    dtype: string
  - name: shot1_sent
    dtype: string
  - name: shot1_label
    dtype: int64
  - name: shot2_idx
    dtype: string
  - name: shot2_sent
    dtype: string
  - name: shot2_label
    dtype: int64
  - name: shot3_idx
    dtype: string
  - name: shot3_sent
    dtype: string
  - name: shot3_label
    dtype: int64
  - name: shot4_idx
    dtype: string
  - name: shot4_sent
    dtype: string
  - name: shot4_label
    dtype: int64
  - name: few_shot_string
    dtype: string
  - name: few_shot_hard_string
    dtype: string
  - name: id
    dtype: int64
  splits:
  - name: train
    num_bytes: 286790625
    num_examples: 250000
  download_size: 47727501
  dataset_size: 286790625
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Dataset Card for cobie_sst2
This dataset is a modification of the original SST-2 dataset for LLM cognitive bias evaluation.
## Language(s)
- English (`en`)
## Dataset Summary
The Stanford Sentiment Treebank is a corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language. The corpus is based on the dataset introduced by Pang and Lee (2005) and consists of 11,855 single sentences extracted from movie reviews. It was parsed with the Stanford parser and includes a total of 215,154 unique phrases from those parse trees, each annotated by 3 human judges.
## Dataset Structure
The modifications to the dataset are intended to enable the evaluation of cognitive biases in a few-shot setting and at two different levels of task complexity. We use 25,000 instances from the original dataset as test sentences, while the remaining instances serve as few-shot examples. Each test sentence is prompted with all possible unbalanced 4-shot label distributions. To increase the original task complexity, we also introduce an additional neutral example between the first two and the last two shots.
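For illustration, the 10 values of the `dist` field are exactly the 4-shot label sequences that are not perfectly balanced (i.e. that do not contain two positive and two negative shots). The minimal sketch below, based only on the values listed in this card, reproduces those distributions and the resulting row count:

```python
from itertools import product

# Enumerate all 4-shot label sequences and keep only the unbalanced ones,
# i.e. those without exactly two positive and two negative shots.
dists = [
    "".join(map(str, labels))
    for labels in product((0, 1), repeat=4)
    if sum(labels) != 2
]

print(dists)                 # ['0000', '0001', '0010', ..., '1111']: the 10 `dist` values
print(len(dists))            # 10 distributions per test sentence
print(25_000 * len(dists))   # 250000 rows, matching the size of the train split
```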
### Dataset Fields
- `idx`: original sentence id, in the format `<original_partition>_<original_id>`.
- `sentence`: test sentence.
- `label`: sentiment of the test sentence, either "negative" (`0`) or "positive" (`1`).
- `dist`: few-shot distribution (`0000`, `1111`, `0001`, `0010`, `0100`, `1000`, `1110`, `1101`, `1011`, `0111`).
- `shot<n>_idx`: original id of the example sentence, in the format `<original_partition>_<original_id>`.
- `shot<n>_sent`: example sentence.
- `shot<n>_label`: sentiment of the example sentence.
- `few_shot_string`: string with all 4 shots the sentence is prompted with.
- `few_shot_hard_string`: string with the same 4 shots and an additional neutral example between the first two and the last two, to increase task complexity.
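As a usage sketch, the dataset can be loaded with the `datasets` library and the fields above accessed per example; the repository id below is a placeholder and should be replaced with the actual Hub id of this dataset:

```python
from datasets import load_dataset

# Placeholder repository id; substitute the actual Hub id of this dataset.
ds = load_dataset("<org>/cobie_sst2", split="train")

example = ds[0]
print(example["sentence"], example["label"])  # test sentence and its gold label (0 or 1)
print(example["dist"])                        # few-shot label distribution, e.g. "0010"
print(example["few_shot_string"])             # the 4-shot prompt context
print(example["few_shot_hard_string"])        # same shots plus the extra neutral example
```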
## Supported Tasks and Leaderboards
- `sentiment-classification`
## Additional Information
### Dataset Curators
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center.
This work has been promoted and financed by the Generalitat de Catalunya through the Aina project. It is also funded by the Ministerio para la Transformación Digital y de la Función Pública and the Plan de Recuperación, Transformación y Resiliencia (funded by the EU through NextGenerationEU), within the framework of the project Desarrollo Modelos ALIA.
### Licensing Information
This work is licensed under an MIT License (the same as the original dataset).
### Citation Information
```bibtex
@inproceedings{cobie,
  title={Cognitive Biases, Task Complexity, and Result Interpretability in Large Language Models},
  author={Mario Mina and Valle Ruiz-Fernández and Júlia Falcão and Luis Vasquez-Reina and Aitor Gonzalez-Agirre},
  booktitle={Proceedings of the 31st International Conference on Computational Linguistics (COLING)},
  year={2025}
}
```