---
license: cc-by-4.0
dataset_info:
  features:
  - name: image
    dtype: image
  - name: region
    dtype: string
  - name: universal
    dtype: string
  splits:
  - name: test
    num_bytes: 1563779706
    num_examples: 3000
  download_size: 2126495223
  dataset_size: 1563779706
task_categories:
- image-classification
tags:
- cultural
- visual
- retrieval
- universals
size_categories:
- 1K<n<10K
---
### GlobalRG - Retrieval Across Universals Task
Despite recent advancements in vision-language models, their performance remains suboptimal on images from non-Western cultures due to underrepresentation in training data. Various benchmarks have been proposed to test models' cultural inclusivity, but they cover a limited set of cultures and do not adequately assess cultural diversity across universal as well as culture-specific local concepts. We introduce the GlobalRG-Retrieval benchmark, which evaluates the retrieval of culturally diverse images for universal concepts across 50 countries.
> **Note:** The answers for the GlobalRG-Retrieval benchmark are not publicly available. We are working on creating a competition where participants can upload their predictions and evaluate their models. Stay tuned for more updates!
> If you urgently need to evaluate, please contact [email protected] and fill out this form: https://forms.gle/pSbnGso13co6V4518.
### Loading the dataset
To load and use the GlobalRG-Retrieval benchmark, use the following code:
```python
from datasets import load_dataset
globalrg_retrieval_dataset = load_dataset('UBCNLP/GlobalRG-Retrieval')
```
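For a quick sanity check after loading, the snippet below prints the split summary and the first record. It assumes only the `load_dataset` call above and that the `image` feature decodes to `PIL.Image` objects on access (the default behaviour of the `datasets` image feature):
```python
# Quick inspection of the loaded benchmark (uses the variable from the snippet above).
test_split = globalrg_retrieval_dataset['test']
print(test_split)                         # features and number of examples in the test split
example = test_split[0]
print(example['region'], example['universal'])
print(example['image'].size)              # images decode to PIL.Image objects on access
```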
Once the dataset is loaded, each instance contains the following fields:
- `u_id`: A unique identifier for each image-region-concept tuple
- `image`: The image data in binary format
- `region`: The cultural region pertaining to the image
- `universal`: The universal concept pertaining to the image
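Since the gold rankings are withheld, evaluation works by producing ranked predictions. The sketch below is a minimal, unofficial retrieval baseline, not the benchmark's prescribed protocol: it ranks all test images for each universal concept by CLIP image-text cosine similarity. The checkpoint (`openai/clip-vit-base-patch32`), the prompt template, and the prediction format are all illustrative assumptions.
```python
# Illustrative retrieval baseline (an assumption, not the official evaluation protocol):
# rank every test image for each universal concept with CLIP cosine similarity.
import torch
from datasets import load_dataset
from transformers import CLIPModel, CLIPProcessor

checkpoint = 'openai/clip-vit-base-patch32'   # assumed model choice for illustration
model = CLIPModel.from_pretrained(checkpoint).eval()
processor = CLIPProcessor.from_pretrained(checkpoint)

dataset = load_dataset('UBCNLP/GlobalRG-Retrieval')['test']

# Embed every image once (done per example here for clarity; batch for speed).
image_embs = []
with torch.no_grad():
    for ex in dataset:
        pixel_inputs = processor(images=ex['image'], return_tensors='pt')
        emb = model.get_image_features(**pixel_inputs)
        image_embs.append(emb / emb.norm(dim=-1, keepdim=True))
image_embs = torch.cat(image_embs)            # shape: (num_images, dim)

# For each universal concept, rank image indices by similarity, best first.
predictions = {}
with torch.no_grad():
    for concept in sorted(set(dataset['universal'])):
        text_inputs = processor(text=[f'a photo of {concept}'],
                                return_tensors='pt', padding=True)
        text_emb = model.get_text_features(**text_inputs)
        text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
        scores = (image_embs @ text_emb.T).squeeze(-1)
        predictions[concept] = scores.argsort(descending=True).tolist()
```
The resulting `predictions` dict maps each universal concept to a ranking of image indices, which you could then format for submission once the competition is live.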
### Usage and License
GlobalRG is a test-only benchmark and can be used to evaluate models. The images are scraped from the internet and are not owned by the authors. All annotations are released under the CC BY-SA 4.0 license.
### Citation Information
If you use this dataset, please cite:
```bibtex
@inproceedings{bhatia-etal-2024-local,
title = "From Local Concepts to Universals: Evaluating the Multicultural Understanding of Vision-Language Models",
author = "Bhatia, Mehar and
Ravi, Sahithya and
Chinchure, Aditya and
Hwang, EunJeong and
Shwartz, Vered",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.385",
doi = "10.18653/v1/2024.emnlp-main.385",
pages = "6763--6782"
}
```