---
license: mit
task_categories:
- text-classification
- image-classification
- conversational
---

# SOCRATIS: A benchmark of diverse open-ended emotional reactions to image-caption pairs

### ICCV WECIA Workshop 2023 (oral)

[Project Page](https://kdeng55.github.io/socratis-website/), [Paper](https://arxiv.org/abs/2308.16741)

We release a benchmark containing 18K diverse emotions, and the reasons for feeling them, on 2K image-caption pairs. Our preliminary findings show that humans prefer human-written emotional reactions over machine-generated ones by more than two to one. We also find that current metrics fail to correlate with human preference, leaving significant room for research!

We release the data publicly. `test.json` contains the test data in the following format:

```
{
  unique_id: [[image_path, caption, emotions, explanations, anonymized_demographics], ...]
}
```

The `unique_id` is a unique identifier for an image-caption pair. Each `unique_id` key maps to a list of entries from diverse annotators; each entry contains the emotions and the explanations for feeling those emotions. Demographics may be missing for many annotations, since disclosing them was optional and some annotators chose not to. All data is anonymized.

The image files are at: https://drive.google.com/file/d/1J8SiUEfKqc5rfxE1nwZUrG1Hcz7Djc3G/view?usp=sharing.
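A minimal sketch of parsing the `test.json` format described above. The sample record here is invented for illustration (real entries follow the same five-field layout, and the demographics field may be null when an annotator opted out):

```python
import json

# Invented sample record mimicking the test.json format; not real
# benchmark data. In practice, load the downloaded file instead:
#   data = json.load(open("test.json"))
sample = json.loads("""
{
  "0": [["images/0.jpg", "A dog runs on the beach.",
         "joy", "Dogs playing outside make me happy.", null]]
}
""")

def reactions(data):
    """Yield (unique_id, emotions, explanations) for every annotation."""
    for unique_id, entries in data.items():
        for image_path, caption, emotions, explanations, demographics in entries:
            yield unique_id, emotions, explanations

for uid, emotions, explanations in reactions(sample):
    print(uid, emotions, explanations)
```

Because each `unique_id` maps to a list of per-annotator entries, iterating this way surfaces every individual reaction rather than one per image-caption pair.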