lasha-nlp committed 4a9301a (parent: da96b1d): Update README.md

---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
- crowdsourced
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: CondaQA
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- negation
- reading comprehension
task_categories:
- question-answering
task_ids: []
---

# Dataset Card for CondaQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation

## Dataset Description

- **Repository:** https://github.com/AbhilashaRavichander/CondaQA
- **Paper:** https://arxiv.org/abs/2211.00295
- **Point of Contact:** [email protected]

## Dataset Summary

Data from the EMNLP 2022 paper by Ravichander et al.: "CondaQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation".

If you use this dataset, we would appreciate you citing our work:

```
@inproceedings{ravichander-et-al-2022-condaqa,
  title = {CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation},
  author = {Ravichander, Abhilasha and Gardner, Matt and Marasovi\'{c}, Ana},
  booktitle = {Proceedings of EMNLP 2022},
  year = {2022}
}
```

From the paper: "We introduce CondaQA to facilitate the future development of models that can process negation effectively. This is the first English reading comprehension dataset which requires reasoning about the implications of negated statements in paragraphs. We collect paragraphs with diverse negation cues, then have crowdworkers ask questions about the _implications_ of the negated statement in the passage. We also have workers make three kinds of edits to the passage---paraphrasing the negated statement, changing the scope of the negation, and reversing the negation---resulting in clusters of question-answer pairs that are difficult for models to answer with spurious shortcuts. CondaQA features 14,182 question-answer pairs with over 200 unique negation cues."

### Supported Tasks and Leaderboards

The task is to answer a question given a Wikipedia passage that includes something being negated. There is no official leaderboard.

### Language

English

## Dataset Structure

### Data Instances

Here's an example instance:

```
{"QuestionID": "q10",
 "original cue": "rarely",
 "PassageEditID": 0,
 "original passage": "Drug possession is the crime of having one or more illegal drugs in one's possession, either for personal use, distribution, sale or otherwise. Illegal drugs fall into different categories and sentences vary depending on the amount, type of drug, circumstances, and jurisdiction. In the U.S., the penalty for illegal drug possession and sale can vary from a small fine to a prison sentence. In some states, marijuana possession is considered to be a petty offense, with the penalty being comparable to that of a speeding violation. In some municipalities, possessing a small quantity of marijuana in one's own home is not punishable at all. Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time. Federal law makes even possession of \"soft drugs\", such as cannabis, illegal, though some local governments have laws contradicting federal laws.",
 "SampleID": 5294,
 "label": "YES",
 "original sentence": "Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time.",
 "sentence2": "If a drug addict is caught with marijuana, is there a chance he will be jailed?",
 "PassageID": 444,
 "sentence1": "Drug possession is the crime of having one or more illegal drugs in one's possession, either for personal use, distribution, sale or otherwise. Illegal drugs fall into different categories and sentences vary depending on the amount, type of drug, circumstances, and jurisdiction. In the U.S., the penalty for illegal drug possession and sale can vary from a small fine to a prison sentence. In some states, marijuana possession is considered to be a petty offense, with the penalty being comparable to that of a speeding violation. In some municipalities, possessing a small quantity of marijuana in one's own home is not punishable at all. Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time. Federal law makes even possession of \"soft drugs\", such as cannabis, illegal, though some local governments have laws contradicting federal laws."
}
```

### Data Fields

* `QuestionID`: unique ID for this question (the same question may be asked about multiple passages)
* `original cue`: negation cue that was used to select this passage from Wikipedia
* `PassageEditID`: 0 = original passage, 1 = paraphrase-edit passage, 2 = scope-edit passage, 3 = affirmative-edit passage
* `original passage`: original Wikipedia passage this example's passage is based on (note that `sentence1` may be either the original Wikipedia passage itself or an edit based on it)
* `SampleID`: unique ID for this passage-question pair
* `label`: the answer
* `original sentence`: sentence that contains the negated statement
* `sentence2`: the question
* `PassageID`: unique ID for the Wikipedia passage
* `sentence1`: the passage

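The contrastive design can be made concrete by grouping rows that share a `PassageID` and `QuestionID`: each such cluster holds one question answered against the original passage and its edited variants. A minimal sketch of that grouping, using a toy in-memory example that mirrors the schema above (the values below are illustrative, not taken from the dataset):

```python
from collections import defaultdict

# Toy rows mirroring the CondaQA schema; values are illustrative only.
examples = [
    {"PassageID": 444, "QuestionID": "q10", "PassageEditID": 0, "label": "YES"},
    {"PassageID": 444, "QuestionID": "q10", "PassageEditID": 1, "label": "YES"},
    {"PassageID": 444, "QuestionID": "q10", "PassageEditID": 3, "label": "NO"},
    {"PassageID": 445, "QuestionID": "q11", "PassageEditID": 0, "label": "NO"},
]

def contrastive_clusters(rows):
    """Group QA pairs that share a passage and a question: one cluster
    spans the original passage and its edited variants."""
    clusters = defaultdict(list)
    for row in rows:
        clusters[(row["PassageID"], row["QuestionID"])].append(row)
    return dict(clusters)

clusters = contrastive_clusters(examples)
print({key: [r["PassageEditID"] for r in rows] for key, rows in clusters.items()})
```

With the real data, the same grouping applies to each split once it is loaded.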
### Data Splits

Data splits can be accessed as:

```
from datasets import load_dataset

train_set = load_dataset("condaqa", split="train")
dev_set = load_dataset("condaqa", split="dev")
test_set = load_dataset("condaqa", split="test")
```

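Because examples come in contrastive clusters, plain accuracy can reward spurious shortcuts; a stricter companion metric, in the spirit of the contrast-set consistency of Gardner et al. (2020) cited below, credits a model only when it answers every question in a cluster correctly. A minimal sketch with toy gold labels and predictions (the keys and values here are illustrative assumptions, not dataset contents):

```python
from collections import defaultdict

# Toy gold labels and predictions, keyed by (PassageID, QuestionID, PassageEditID).
gold = {
    (444, "q10", 0): "YES", (444, "q10", 1): "YES", (444, "q10", 3): "NO",
    (445, "q11", 0): "NO",
}
pred = {
    (444, "q10", 0): "YES", (444, "q10", 1): "NO", (444, "q10", 3): "NO",
    (445, "q11", 0): "NO",
}

def accuracy(gold, pred):
    """Fraction of individual questions answered correctly."""
    return sum(pred[k] == v for k, v in gold.items()) / len(gold)

def cluster_consistency(gold, pred):
    """Fraction of (PassageID, QuestionID) clusters whose members are all correct."""
    clusters = defaultdict(list)
    for (pid, qid, edit), label in gold.items():
        clusters[(pid, qid)].append(pred[(pid, qid, edit)] == label)
    return sum(all(v) for v in clusters.values()) / len(clusters)

print(accuracy(gold, pred))             # 0.75: three of four answers correct
print(cluster_consistency(gold, pred))  # 0.5: only the q11 cluster is fully correct
```
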
## Dataset Creation

Full details are in the paper.

### Curation Rationale

From the paper: "Our goal is to evaluate models on their ability to process the contextual implications of negation. We have the following desiderata for our question-answering dataset:
1. The dataset should include a wide variety of negation cues, not just negative particles.
2. Questions should be targeted towards the _implications_ of a negated statement, rather than the factual content of what was or wasn't negated, to remove common sources of spurious cues in QA datasets (Kaushik and Lipton, 2018; Naik et al., 2018; McCoy et al., 2019).
3. Questions should come in closely-related, contrastive groups, to further reduce the possibility of models' reliance on spurious cues in the data (Gardner et al., 2020). This will result in sets of passages that are similar to each other in terms of the words that they contain, but that may admit different answers to questions.
4. Questions should probe the extent to which models are sensitive to how the negation is expressed. In order to do this, there should be contrasting passages that differ only in their negation cue or its scope."

### Source Data

From the paper: "To construct CondaQA, we first collected passages from a July 2021 version of English Wikipedia that contained negation cues, including single- and multi-word negation phrases, as well as affixal negation."

"We use negation cues from [Morante et al. (2011)](https://aclanthology.org/L12-1077/) and [van Son et al. (2016)](https://aclanthology.org/W16-5007/) as a starting point which we extend."

#### Initial Data Collection and Normalization

We show ten passages to crowdworkers and allow them to choose a passage they would like to work on.

#### Who are the source language producers?

Original passages come from volunteers who contribute to Wikipedia. Passage edits, questions, and answers are produced by crowdworkers.

### Annotations

#### Annotation process

From the paper: "In the first stage of the task, crowdworkers made three types of modifications to the original passage: (1) they paraphrased the negated statement, (2) they modified the scope of the negated statement (while retaining the negation cue), and (3) they undid the negation. In the second stage, we instruct crowdworkers to ask challenging questions about the implications of the negated statement. The crowdworkers then answered the questions they wrote previously for the original and edited passages."

Full details are in the paper.

#### Who are the annotators?

From the paper: "Candidates took a qualification exam which consisted of 12 multiple-choice questions that evaluated comprehension of the instructions. We recruit crowdworkers who answer >70% of the questions correctly for the next stage of the dataset construction task." We use the CrowdAQ platform for the exam and Amazon Mechanical Turk for annotations.

### Personal and Sensitive Information

We expect that such information has already been redacted from Wikipedia.

## Considerations for Using the Data

### Social Impact of Dataset

A model that solves this dataset might be (mis-)represented as evidence that the model understands the entirety of the English language, and consequently deployed where it will have immediate and/or downstream impact on stakeholders.

### Discussion of Biases

We are not aware of any societal biases exhibited in this dataset.

### Other Known Limitations

From the paper: "Though CondaQA currently represents the largest NLU dataset that evaluates a model's ability to process the implications of negation statements, it is possible to construct a larger dataset, with more examples spanning different answer types. Further CONDAQA is an English dataset, and it would be useful to extend our data collection procedures to build high-quality resources in other languages. Finally, while we attempt to extensively measure and control for artifacts in our dataset, it is possible that our dataset has hidden artifacts that we did not study."

## Additional Information

### Dataset Curators

From the paper: "In order to estimate human performance, and to construct a high-quality evaluation with fewer ambiguous examples, we have five verifiers provide answers for each question in the development and test sets." The first author manually checked the annotations throughout the entire data collection process, which took about seven months.

### Licensing Information

Apache License 2.0.

### Citation Information

```
@inproceedings{ravichander-et-al-2022-condaqa,
  title = {CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation},
  author = {Ravichander, Abhilasha and Gardner, Matt and Marasovi\'{c}, Ana},
  booktitle = {Proceedings of EMNLP 2022},
  year = {2022}
}
```