---
license: cc-by-nc-sa-4.0
task_categories:
  - text-generation
language:
  - or
  - en
pretty_name: odia_vqa_instruction
size_categories:
  - 10K<n<100K
---

# Dataset Card for OVQA Instruction Set

## Dataset Summary

The Odia Visual Question Answering (OVQA) Instruction Set is a multimodal dataset comprising text and images structured in an instruction format, designed for developing Multimodal Large Language Models (MLLMs).
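
For quick inspection, the dataset can be loaded with the 🤗 `datasets` library. A minimal sketch, assuming the repository id `shantipriya/odia_vqa_en_odi_set` (taken from this repo's path) and a default `train` split:

```python
from datasets import load_dataset

# Repository id assumed from this repo's path; adjust if the Hub id differs.
ds = load_dataset("shantipriya/odia_vqa_en_odi_set")

print(ds)              # available splits and row counts
print(ds["train"][0])  # one instruction-format record (image plus text fields)
```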

## Supported Tasks and Leaderboards

Development of Multimodal Large Language Models (MLLMs), with a focus on visual question answering.

## Languages

Odia (`or`) and English (`en`)

## Dataset Structure

Each example is a JSON record in instruction format, pairing an image with its associated text fields.
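
The card does not document the exact field names, so the record below is only an illustrative sketch of a typical instruction-format VQA entry; the keys `image`, `instruction`, and `output` are assumptions, not the dataset's published schema:

```python
# Illustrative sketch only: these field names and values are assumptions,
# not the dataset's documented schema.
example = {
    "image": "images/0001.jpg",                    # image path or embedded image data
    "instruction": "What is shown in the image?",  # question, in Odia or English
    "output": "A fishing boat on a lake.",         # reference answer text
}
```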

## Paper

For more details on data preparation, experiments, and evaluation, read the paper here:

Read the Paper

## Licensing Information

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).

## Citation Information

If you find this repository useful, please consider giving it a 👏 and citing:

@inproceedings{parida2025ovqa,
  title={OVQA: A Dataset for Visual Question Answering and Multimodal Research in Odia Language},
  author={Parida, Shantipriya and Sahoo, Shashikanta and Sekhar, Sambit and Sahoo, Kalyanamalini and Kotwal, Ketan and Khosla, Sonal and Dash, Satya Ranjan and Bose, Aneesh and Kohli, Guneet Singh and Lenka, Smruti Smita and others},
  booktitle={Proceedings of the First Workshop on Natural Language Processing for Indo-Aryan and Dravidian Languages},
  pages={58--66},
  year={2025}
}