---
license: apache-2.0
task_categories:
  - question-answering
language:
  - en
size_categories:
  - 10K<n<100K
---

## Dataset Summary

The commonsense reasoning benchmark consists of 8 subtasks, each with a predefined training and test set, as described in LLM-Adapters (Hu et al., 2023). The following table lists the details of each sub-dataset.

| Dataset | # Train | # Test | Information |
| --- | ---: | ---: | --- |
| BoolQ (Clark et al., 2019) | 9,427 | 3,270 | Question-answering dataset for yes/no questions |
| PIQA (Bisk et al., 2020) | 16,113 | 1,838 | Questions with two solutions requiring physical commonsense to answer |
| SIQA (Sap et al., 2019) | 33,410 | 1,954 | Reasoning about people's actions and their social implications |
| HellaSwag (Zellers et al., 2019) | 39,905 | 10,042 | Commonsense NLI questions consisting of a context and several endings that complete it |
| WinoGrande (Sakaguchi et al., 2021) | 40,398 | 1,267 | Fill-in-the-blank task with binary options; choosing the right option for a given sentence requires commonsense reasoning |
| ARC_Challenge (Clark et al., 2018) | 1,119 | 1,172 | The Challenge Set of the ARC dataset of genuine grade-school-level, multiple-choice science questions |
| ARC_Easy (Clark et al., 2018) | 2,251 | 2,376 | The Easy Set of the ARC dataset of genuine grade-school-level, multiple-choice science questions |
| OpenBookQA (Mihaylov et al., 2018) | 4,957 | 500 | Questions requiring multi-step reasoning, use of additional common and commonsense knowledge, and rich text comprehension |

For WinoGrande, the original dataset includes multiple partitions: [xs, s, m, l, xl, debiased]. LLM-Adapters simply concatenated all of these partitions, but the xl partition actually includes all of the others, so the concatenation contains extensive duplication. After removing duplicates, the training data is reduced from 63.2K to 40.4K instances; a sketch of this deduplication is shown below.
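
The cleanup can be reproduced with a few lines of `datasets` code. This is a minimal sketch of the assumed procedure, not the exact script used to build this dataset; in particular, deduplicating on the `sentence` field alone is our simplification.

```python
from datasets import concatenate_datasets, load_dataset

# Assumed deduplication sketch: merge every WinoGrande training partition,
# then keep only the first occurrence of each sentence.
partitions = ["winogrande_xs", "winogrande_s", "winogrande_m",
              "winogrande_l", "winogrande_xl", "winogrande_debiased"]
merged = concatenate_datasets(
    [load_dataset("allenai/winogrande", p, split="train") for p in partitions]
)

seen = set()
keep = [i for i, s in enumerate(merged["sentence"])
        if s not in seen and not seen.add(s)]
deduped = merged.select(keep)
print(f"{merged.num_rows} -> {deduped.num_rows}")  # roughly 63.2K -> 40.4K
```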

Additionally, the LLM-Adapters paper mistakenly swapped the training-set sizes of ARC_Challenge and ARC_Easy; we correct that error here.

## Load with Datasets

```python
from datasets import load_dataset

# Training using the entire dataset:
train_set = load_dataset("fxmeng/commonsense_filtered", split="train")
print(train_set)
# Dataset({
#     features: ['query', 'answer', 'output'],
#     num_rows: 147580
# })

# Testing each subtask separately:
test_set = load_dataset("fxmeng/commonsense_filtered", data_dir="boolq", split="test")
print(test_set)
# Dataset({
#     features: ['query', 'answer', 'output'],
#     num_rows: 3270
# })
```
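
To evaluate all eight subtasks in one pass, you can loop over the per-task directories. Only `boolq` is confirmed by the example above; the other directory names below are guesses inferred from the table, so check the repository's file listing for the exact spellings.

```python
from datasets import load_dataset

# Directory names other than "boolq" are assumptions -- verify against the repo.
subtasks = ["boolq", "piqa", "siqa", "hellaswag",
            "winogrande", "ARC-Challenge", "ARC-Easy", "openbookqa"]

for task in subtasks:
    test_set = load_dataset("fxmeng/commonsense_filtered", data_dir=task, split="test")
    print(f"{task}: {test_set.num_rows} test examples")
```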

## Citation Information

If this dataset is helpful for your work, please consider citing our fine-tuning papers:

```bibtex
@article{meng2024pissa,
  title={PiSSA: Principal singular values and singular vectors adaptation of large language models},
  author={Meng, Fanxu and Wang, Zhaohui and Zhang, Muhan},
  journal={arXiv preprint arXiv:2404.02948},
  year={2024}
}
@article{meng2024clover,
  title={CLOVER: Constrained Learning with Orthonormal Vectors for Eliminating Redundancy},
  author={Meng, Fanxu and Zhang, Muhan},
  journal={arXiv preprint arXiv:2411.17426},
  year={2024}
}
```

## References

```bibtex
@article{hu2023llm,
  title={LLM-Adapters: An adapter family for parameter-efficient fine-tuning of large language models},
  author={Hu, Zhiqiang and Wang, Lei and Lan, Yihuai and Xu, Wanyu and Lim, Ee-Peng and Bing, Lidong and Xu, Xing and Poria, Soujanya and Lee, Roy Ka-Wei},
  journal={arXiv preprint arXiv:2304.01933},
  year={2023}
}
@article{clark2019boolq,
  title={BoolQ: Exploring the surprising difficulty of natural yes/no questions},
  author={Clark, Christopher and Lee, Kenton and Chang, Ming-Wei and Kwiatkowski, Tom and Collins, Michael and Toutanova, Kristina},
  journal={arXiv preprint arXiv:1905.10044},
  year={2019}
}
@inproceedings{bisk2020piqa,
  title={PIQA: Reasoning about physical commonsense in natural language},
  author={Bisk, Yonatan and Zellers, Rowan and Gao, Jianfeng and Choi, Yejin and others},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={34},
  pages={7432--7439},
  year={2020}
}
@article{sap2019socialiqa,
  title={SocialIQA: Commonsense reasoning about social interactions},
  author={Sap, Maarten and Rashkin, Hannah and Chen, Derek and LeBras, Ronan and Choi, Yejin},
  journal={arXiv preprint arXiv:1904.09728},
  year={2019}
}
@article{zellers2019hellaswag,
  title={HellaSwag: Can a machine really finish your sentence?},
  author={Zellers, Rowan and Holtzman, Ari and Bisk, Yonatan and Farhadi, Ali and Choi, Yejin},
  journal={arXiv preprint arXiv:1905.07830},
  year={2019}
}
@article{sakaguchi2021winogrande,
  title={WinoGrande: An adversarial Winograd schema challenge at scale},
  author={Sakaguchi, Keisuke and Bras, Ronan Le and Bhagavatula, Chandra and Choi, Yejin},
  journal={Communications of the ACM},
  volume={64},
  number={9},
  pages={99--106},
  year={2021},
  publisher={ACM New York, NY, USA}
}
@article{clark2018think,
  title={Think you have solved question answering? Try ARC, the AI2 Reasoning Challenge},
  author={Clark, Peter and Cowhey, Isaac and Etzioni, Oren and Khot, Tushar and Sabharwal, Ashish and Schoenick, Carissa and Tafjord, Oyvind},
  journal={arXiv preprint arXiv:1803.05457},
  year={2018}
}
@article{mihaylov2018can,
  title={Can a suit of armor conduct electricity? A new dataset for open book question answering},
  author={Mihaylov, Todor and Clark, Peter and Khot, Tushar and Sabharwal, Ashish},
  journal={arXiv preprint arXiv:1809.02789},
  year={2018}
}
```