---
license: apache-2.0
dataset_info:
  features:
  - name: pattern_id
    dtype: int64
  - name: pattern
    dtype: string
  - name: test_id
    dtype: int64
  - name: negation_type
    dtype: string
  - name: semantic_type
    dtype: string
  - name: syntactic_scope
    dtype: string
  - name: isDistractor
    dtype: bool
  - name: label
    dtype: bool
  - name: sentence
    dtype: string
  splits:
  - name: train
    num_bytes: 41264658
    num_examples: 268505
  - name: validation
    num_bytes: 3056321
    num_examples: 22514
  - name: test
    num_bytes: 12684749
    num_examples: 90281
  download_size: 6311034
  dataset_size: 57005728
task_categories:
- text-classification
language:
- en
tags:
- commonsense
- negation
- LLMs
- LLM
pretty_name: This is NOT a Dataset
size_categories:
- 100K<n<1M
multilinguality:
  - monolingual
source_datasets:
  - original
paperswithcode_id: this-is-not-a-dataset
---


<p align="center">
    <img src="https://github.com/hitz-zentroa/This-is-not-a-Dataset/raw/main/assets/tittle.png" style="height: 250px;">
</p>

<h3 align="center">"A Large Negation Benchmark to Challenge Large Language Models"</h3>

<p align="justify">
We introduce a large, semi-automatically generated dataset of ~400,000 descriptive sentences about commonsense knowledge that can be true or false. Negation appears in about two-thirds of the corpus, in a variety of forms, and we use the dataset to evaluate the negation-understanding abilities of LLMs.
</p>

- 📖 Paper: [This is not a Dataset: A Large Negation Benchmark to Challenge Large Language Models (EMNLP'23)](http://arxiv.org/abs/2310.15941)
- 💻 Baseline Code and the Official Scorer: [https://github.com/hitz-zentroa/This-is-not-a-Dataset](https://github.com/hitz-zentroa/This-is-not-a-Dataset)

# Data explanation

- **pattern_id** (int): The ID of the pattern, in the range [1,11]
- **pattern** (str): The name of the pattern
- **test_id** (int): For each pattern, we use a set of templates to instantiate the triples. Examples are grouped into triples by test ID
- **negation_type** (str): Affirmation, verbal, non-verbal
- **semantic_type** (str): None (for affirmative sentences), analytic, synthetic
- **syntactic_scope** (str): None (for affirmative sentences), clausal, subclausal
- **isDistractor** (bool): We use distractors (randomly selected synsets) to generate false knowledge.
- **<span style="color:green">sentence</span>** (str): The sentence. <ins>This is the input to the model</ins>
- **<span style="color:green">label</span>** (bool): The label of the example: True if the statement is true, False otherwise. <ins>This is the target of the model</ins>
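
As an illustration, here is a minimal sketch of loading the dataset with the Hugging Face `datasets` library and reading the two model-facing fields. The repository ID in the snippet is an assumption; substitute the actual Hub path of this dataset if it differs.

```python
# Minimal loading sketch using the Hugging Face `datasets` library.
# The repo ID below is an assumption; replace it with this dataset's Hub path.
from datasets import load_dataset

dataset = load_dataset("HiTZ/This-is-not-a-dataset")
print(dataset)  # DatasetDict with train / validation / test splits

example = dataset["train"][0]
print(example["sentence"])  # input to the model
print(example["label"])     # target: True if the statement is true
```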

If you want to run experiments with this dataset, please use the [Official Scorer](https://github.com/hitz-zentroa/This-is-not-a-Dataset#scorer) to ensure reproducibility and fairness.
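
For a quick sanity check before running the official scorer, a rough sketch of accuracy broken down by `negation_type` might look like the following. The always-True baseline and the repo ID are placeholders, and this is not a replacement for the official scorer, which remains the reference implementation for the paper's metrics.

```python
# Sketch: overall and per-negation-type accuracy for a set of predictions.
# NOT a replacement for the official scorer; only a quick sanity check.
from collections import defaultdict

from datasets import load_dataset

ds = load_dataset("HiTZ/This-is-not-a-dataset", split="test")  # repo ID assumed

# Placeholder predictions: a trivial always-True baseline. Replace with your
# model's boolean outputs, one per test example, in dataset order.
predictions = [True] * len(ds)

correct, total = defaultdict(int), defaultdict(int)
for example, pred in zip(ds, predictions):
    group = example["negation_type"]  # e.g. affirmation / verbal / non-verbal
    total[group] += 1
    correct[group] += int(pred == example["label"])

print(f"overall: {sum(correct.values()) / sum(total.values()):.3f}")
for group in sorted(total):
    print(f"{group}: {correct[group] / total[group]:.3f}")
```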

# Citation

```bibtex
@inproceedings{garcia-ferrero-etal-2023-dataset,
    title = "This is not a Dataset: A Large Negation Benchmark to Challenge Large Language Models",
    author = "Garc{\'\i}a-Ferrero, Iker  and
      Altuna, Bego{\~n}a  and
      Alvez, Javier  and
      Gonzalez-Dios, Itziar  and
      Rigau, German",
    editor = "Bouamor, Houda  and
      Pino, Juan  and
      Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.531",
    doi = "10.18653/v1/2023.emnlp-main.531",
    pages = "8596--8615",
    abstract = "Although large language models (LLMs) have apparently acquired a certain level of grammatical knowledge and the ability to make generalizations, they fail to interpret negation, a crucial step in Natural Language Processing. We try to clarify the reasons for the sub-optimal performance of LLMs understanding negation. We introduce a large semi-automatically generated dataset of circa 400,000 descriptive sentences about commonsense knowledge that can be true or false in which negation is present in about 2/3 of the corpus in different forms. We have used our dataset with the largest available open LLMs in a zero-shot approach to grasp their generalization and inference capability and we have also fine-tuned some of the models to assess whether the understanding of negation can be trained. Our findings show that, while LLMs are proficient at classifying affirmative sentences, they struggle with negative sentences and lack a deep understanding of negation, often relying on superficial cues. Although fine-tuning the models on negative sentences improves their performance, the lack of generalization in handling negation is persistent, highlighting the ongoing challenges of LLMs regarding negation understanding and generalization. The dataset and code are publicly available.",
}
```