---
license: apache-2.0
dataset_info:
  features:
  - name: image
    dtype: image
  - name: difficulty_in_understanding
    dtype: string
  - name: left_image
    dtype: string
  - name: overall_description
    dtype: string
  - name: right_image
    dtype: string
  - name: stage
    dtype: string
  splits:
  - name: train
    num_bytes: 142007936.672
    num_examples: 1084
  download_size: 139447610
  dataset_size: 142007936.672
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
language:
- en
pretty_name: 'Y'
size_categories:
- 1K<n<10K
tags:
- arxiv:2409.13592
---
# *YesBut Dataset (https://yesbut-dataset.github.io/)*
Understanding satire and humor is a challenging task even for current Vision-Language models. In this paper, we propose the challenging tasks of Satirical Image Detection (detecting whether an image is satirical), Understanding (generating the reason behind the image being satirical), and Completion (given one half of the image, selecting the other half from 2 given options such that the complete image is satirical), and release a high-quality dataset, YesBut, consisting of 2547 images (1084 satirical and 1463 non-satirical) in different artistic styles, to evaluate those tasks. Each satirical image in the dataset depicts a normal scenario along with a conflicting scenario which is funny or ironic. Despite the success of current Vision-Language Models on multimodal tasks such as Visual QA and Image Captioning, our benchmarking experiments show that such models perform poorly on the proposed tasks on the YesBut dataset in zero-shot settings, with respect to both automated and human evaluation. Additionally, we release a dataset of 119 real, satirical photographs for further research. The dataset and code are available at https://github.com/abhi1nandy2/yesbut_dataset.
## Dataset Details
The YesBut dataset consists of 2547 images (1084 satirical and 1463 non-satirical) in different artistic styles. Each satirical image is posed in a “Yes, But” format: the left half of the image depicts a normal scenario, while the right half depicts a conflicting scenario which is funny or ironic.
**Currently, the Hugging Face version of the dataset contains the satirical images from 3 different stages of annotation, along with corresponding metadata and image descriptions.**
> Download the non-satirical images from the following Google Drive links -
> https://drive.google.com/file/d/1Tzs4OcEJK469myApGqOUKPQNUtVyTRDy/view?usp=sharing - Non-Satirical Images annotated in Stage 3
> https://drive.google.com/file/d/1i4Fy01uBZ_2YGPzyVArZjijleNbt8xRu/view?usp=sharing - Non-Satirical Images annotated in Stage 4
### Dataset Description
The YesBut dataset is a high-quality annotated dataset designed to evaluate the satire comprehension capabilities of vision-language models. It consists of 2547 multimodal images, 1084 of which are satirical, while 1463 are non-satirical. The dataset covers a variety of artistic styles, such as colorized sketches, 2D stick figures, and 3D stick figures, making it highly diverse. The satirical images follow a "Yes, But" structure, where the left half depicts a normal scenario, and the right half contains an ironic or humorous twist. The dataset is curated to challenge models in three main tasks: Satirical Image Detection, Understanding, and Completion. An additional dataset of 119 real, satirical photographs is also provided to further assess real-world satire comprehension.
- **Curated by:** Annotators who met the qualification criteria of being undergraduate sophomore students or above, enrolled in English-medium colleges.
- **Language(s) (NLP):** English
- **License:** Apache license 2.0
### Dataset Sources
- **Project Website:** https://yesbut-dataset.github.io/
- **Repository:** https://github.com/abhi1nandy2/yesbut_dataset
- **Paper:**
  - Hugging Face: https://huggingface.co/papers/2409.13592
  - arXiv: https://arxiv.org/abs/2409.13592
  - This work has been accepted **as a long paper in EMNLP Main 2024**
## Uses
### Direct Use
The YesBut dataset is intended for benchmarking the performance of vision-language models on satire and humor comprehension tasks. Researchers and developers can use this dataset to test their models on Satirical Image Detection (classifying an image as satirical or non-satirical), Satirical Image Understanding (generating the reason behind the satire), and Satirical Image Completion (choosing the correct half of an image that makes the completed image satirical). This dataset is particularly suitable for models developed for image-text reasoning and cross-modal understanding.
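The three benchmarking tasks can be posed as zero-shot prompts built from a single annotated record. The sketch below is illustrative only: the field names follow this card's schema, but the example values and prompt wording are placeholders, not the prompts used in the paper.

```python
# Sketch: forming zero-shot prompts for the three YesBut tasks from one
# annotated record. Field names follow the dataset card's schema; the
# example values below are placeholders, not real annotations.

record = {
    "left_image": "A person proudly holds a 'World's Best Boss' mug.",
    "right_image": "The same person sits alone at an empty office party.",
    "overall_description": "The self-proclaimed best boss has no guests.",
}

def detection_prompt(rec):
    """Satirical Image Detection: classify the image as satirical or not."""
    return (
        f"Left half: {rec['left_image']}\n"
        f"Right half: {rec['right_image']}\n"
        "Is this image satirical? Answer yes or no."
    )

def understanding_prompt(rec):
    """Satirical Image Understanding: generate the reason behind the satire."""
    return (
        f"Left half: {rec['left_image']}\n"
        f"Right half: {rec['right_image']}\n"
        "Explain why this image is satirical."
    )

def completion_prompt(left, option_a, option_b):
    """Satirical Image Completion: pick the half that completes the satire."""
    return (
        f"Left half: {left}\n"
        f"Option A: {option_a}\n"
        f"Option B: {option_b}\n"
        "Which option completes a satirical image? Answer A or B."
    )

print(detection_prompt(record))
```

In an actual evaluation, the image halves (or the full image) would be passed to a vision-language model directly rather than as text descriptions; the structure of the three tasks stays the same.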
### Out-of-Scope Use
YesBut Dataset is not recommended for tasks unrelated to multimodal understanding, such as basic image classification without the context of humor or satire.
## Dataset Structure
The YesBut dataset contains two types of images: satirical and non-satirical. Satirical images are annotated with textual descriptions for both the left and right sub-images, as well as an overall description containing the punchline. Non-satirical images are also annotated but lack the element of irony or contradiction. The dataset is divided into multiple annotation stages (Stage 2, Stage 3, Stage 4), with each stage including a mix of original and generated sub-images. The data is stored in image and metadata formats, with the annotations being key to the benchmarking tasks.
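Since each record carries its annotation stage and a difficulty rating, the dataset can be sliced by those metadata fields. A minimal stdlib-only sketch, using the field names from this card (`stage`, `difficulty_in_understanding`) with placeholder rows rather than actual dataset entries:

```python
# Sketch: tallying records by annotation stage and filtering by difficulty.
# Field names follow the dataset card; the rows below are placeholders,
# not actual dataset entries.
from collections import Counter

rows = [
    {"stage": "stage_2", "difficulty_in_understanding": "easy"},
    {"stage": "stage_3", "difficulty_in_understanding": "hard"},
    {"stage": "stage_3", "difficulty_in_understanding": "easy"},
    {"stage": "stage_4", "difficulty_in_understanding": "hard"},
]

per_stage = Counter(r["stage"] for r in rows)          # records per stage
hard_only = [r for r in rows
             if r["difficulty_in_understanding"] == "hard"]
```

The same pattern applies when the rows come from the Hub version of the dataset, where each example additionally includes the image and its textual descriptions.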
## Dataset Creation
### Curation Rationale
The YesBut dataset was curated to address the gap in the ability of existing vision-language models to comprehend satire and humor. Satire comprehension involves a higher level of reasoning and understanding of context, irony, and social cues, making it a challenging task for models. By including a diverse range of images and satirical scenarios, the dataset aims to push the boundaries of multimodal understanding in artificial intelligence.
### Source Data
#### Data Collection and Processing
The images in the YesBut dataset were collected from social media platforms, particularly from the "X" (formerly Twitter) handle @_yesbut_. The original satirical images were manually annotated, and additional synthetic images were generated using DALL-E 3. These images were then manually labeled as satirical or non-satirical. The annotations include textual descriptions, binary features (e.g., presence of text), and difficulty ratings based on the difficulty of understanding satire/irony in the images. Data processing involved adding new variations of sub-images to expand the dataset's diversity.
#### Who are the source data producers?
Prior to annotation and expansion of the dataset, images were downloaded (with proper consent) from posts on the X (formerly Twitter) handle @_yesbut_.
#### Annotation process
The annotation process was carried out in four stages: (1) collecting satirical images from social media, (2) manual annotation of the images, (3) generating additional 2D stick figure images using DALL-E 3, and (4) generating 3D stick figure images.
#### Who are the annotators?
YesBut was curated by annotators who met the qualification criteria of being undergraduate sophomore students or above, enrolled in English-medium colleges.
#### Personal and Sensitive Information
The images in YesBut Dataset do not include any personal identifiable information, and the annotations are general descriptions related to the satirical content of the images.
## Bias, Risks, and Limitations
- Subjectivity of annotations: The annotation task involves utilizing background knowledge that may differ among annotators. Consequently, we manually reviewed the annotations to minimize the number of incorrect annotations in the dataset. However, some subjectivity still remains.
- Extension to languages other than English: This work is in the English Language. However, we plan to extend our work to languages other than English.
## Citation
**BibTeX:**
```bibtex
@article{nandy2024yesbut,
  title={YesBut: A High-Quality Annotated Multimodal Dataset for evaluating Satire Comprehension capability of Vision-Language Models},
  author={Nandy, Abhilash and Agarwal, Yash and Patwa, Ashish and Das, Millon Madhur and Bansal, Aman and Raj, Ankit and Goyal, Pawan and Ganguly, Niloy},
  journal={arXiv preprint arXiv:2409.13592},
  year={2024}
}
```
**APA:**
Nandy, A., Agarwal, Y., Patwa, A., Das, M. M., Bansal, A., Raj, A., ... & Ganguly, N. (2024). YesBut: A High-Quality Annotated Multimodal Dataset for evaluating Satire Comprehension capability of Vision-Language Models. arXiv preprint arXiv:2409.13592.
## Dataset Card Contact
Get in touch at [email protected]