---
dataset_info:
  features:
  - name: question_id
    dtype: string
  - name: question
    dtype: string
  - name: choices
    dtype: string
  - name: answer
    dtype: string
  - name: query_image
    dtype: image
  - name: choice_image_0
    dtype: image
  - name: choice_image_1
    dtype: image
  - name: ques_type
    dtype: string
  - name: label
    dtype: string
  - name: grade
    dtype: string
  - name: skills
    dtype: string
  splits:
  - name: val
    num_bytes: 329185883.464
    num_examples: 21488
  - name: test
    num_bytes: 333201645.625
    num_examples: 21489
  download_size: 667286379
  dataset_size: 662387529.089
configs:
- config_name: default
  data_files:
  - split: val
    path: data/val-*
  - split: test
    path: data/test-*
---
<p align="center" width="100%">
<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
</p>
# Large-scale Multi-modality Models Evaluation Suite
> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`
🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab)
# This Dataset
This is a formatted version of [ICONQA](https://iconqa.github.io/). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.
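Outside of the `lmms-eval` pipeline, the splits can also be inspected directly with the Hugging Face `datasets` library. The sketch below assumes the dataset is hosted under the repository id `lmms-lab/ICONQA` (inferred from the organization link above); adjust the id if the actual path differs.

```python
# Minimal sketch: load the formatted ICONQA splits and inspect one example.
# The repo id "lmms-lab/ICONQA" is an assumption based on the lmms-lab organization.
from datasets import load_dataset

dataset = load_dataset("lmms-lab/ICONQA")  # provides "val" and "test" splits

example = dataset["val"][0]
print(example["question_id"], example["ques_type"])
print(example["question"])
print(example["choices"])  # answer choices, stored as a string
print(example["answer"])

# Image fields ("query_image", "choice_image_0", "choice_image_1") are decoded
# as PIL images when present for a given question type.
if example["query_image"] is not None:
    example["query_image"].show()
```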
```bibtex
@inproceedings{lu2021iconqa,
title = {IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning},
author = {Lu, Pan and Qiu, Liang and Chen, Jiaqi and Xia, Tony and Zhao, Yizhou and Zhang, Wei and Yu, Zhou and Liang, Xiaodan and Zhu, Song-Chun},
booktitle = {The 35th Conference on Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks},
year = {2021}
}
```