---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: question
    dtype: string
  splits:
  - name: test
    num_bytes: 574815661.388
    num_examples: 2374
  download_size: 580096603
  dataset_size: 574815661.388
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---

### CulturalVQA
Foundation models and vision-language pre-training have notably advanced Vision Language Models (VLMs), enabling multimodal processing of visual and linguistic data. However, their performance has typically been assessed on general scene understanding (recognizing objects, attributes, and actions) rather than cultural comprehension. We introduce CulturalVQA, a visual question-answering benchmark aimed at assessing VLMs' geo-diverse cultural understanding. We curate a diverse collection of 2,378 image-question pairs with 1-5 answers per question, representing cultures from 11 countries across 5 continents. The questions probe understanding of various facets of culture, such as clothing, food, drinks, rituals, and traditions.

> **Note:** The answers for the CulturalVQA benchmark are not publicly available. We are working on creating a competition where participants can upload their predictions and evaluate their models. Stay tuned for more updates!
> If you urgently need to evaluate, please contact [email protected]

### Loading the dataset
To load and use the CulturalVQA benchmark, use the following commands:
```
from datasets import load_dataset

# Downloads the benchmark from the Hugging Face Hub; it ships a single 'test' split.
culturalvqa_dataset = load_dataset('mair-lab/CulturalVQA')
```
Once the dataset is loaded, each instance contains the following fields (see the short access sketch after this list):

- `u_id`: A unique identifier for each image-question pair
- `image`: The image data in binary format
- `question`: The question pertaining to the image
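
As a quick reference, here is a minimal sketch of reading these fields after loading the benchmark; the explicit `split='test'` argument and the PIL-style image access are assumptions based on the dataset configuration above:
```
from datasets import load_dataset

# Load only the 'test' split, the single split this benchmark provides.
culturalvqa_dataset = load_dataset('mair-lab/CulturalVQA', split='test')

# Each instance behaves like a dict keyed by the field names listed above.
example = culturalvqa_dataset[0]
print(example['question'])   # the question pertaining to the image
image = example['image']     # decoded by the datasets library into a PIL image
print(image.size)            # (width, height); other fields such as 'u_id' are read the same way
```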

### Usage and License
CulturalVQA is a test-only benchmark and can be used to evaluate models. The images are scraped from the internet and are not owned by the authors. All annotations are released under the CC BY-SA 4.0 license.

### Citation Information
If you use this dataset, please cite:
```
@inproceedings{nayak-etal-2024-benchmarking,
    title = "Benchmarking Vision Language Models for Cultural Understanding",
    author = "Nayak, Shravan  and
      Jain, Kanishk  and
      Awal, Rabiul  and
      Reddy, Siva  and
      Steenkiste, Sjoerd Van  and
      Hendricks, Lisa Anne  and
      Stanczak, Karolina  and
      Agrawal, Aishwarya",
    editor = "Al-Onaizan, Yaser  and
      Bansal, Mohit  and
      Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.329",
    pages = "5769--5790"
}
```