---
task_categories:
- multiple-choice
- question-answering
- visual-question-answering
language:
- en
- zh
tags:
- multimodal
- intelligence
size_categories:
- 1K<n<10K
license: apache-2.0
pretty_name: mmiq
configs:
- config_name: default
  features:
  - name: category
    dtype: string
  - name: question
    dtype: string
  - name: question_en
    dtype: string
  - name: question_zh
    dtype: string
  - name: image
    dtype: image
  - name: MD5
    dtype: string
  - name: data_id
    dtype: int64
  - name: answer
    dtype: string
  - name: split
    dtype: string
---

# Dataset Card for "MM-IQ"

- [Dataset Description](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#dataset-description)
- [Paper Information](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#paper-information)
- [Dataset Examples](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#dataset-examples)
- [Leaderboard](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#leaderboard)
- [Dataset Usage](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#dataset-usage)
  - [Data Downloading](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#data-downloading)
  - [Data Format](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#data-format)
  - [Automatic Evaluation](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#automatic-evaluation)
- [Citation](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#citation)

## Dataset Description

**MM-IQ** is a new benchmark designed to evaluate the intelligence of MLLMs through multiple reasoning patterns that demand abstract reasoning abilities. It encompasses **three input formats, six problem configurations, and eight reasoning patterns**. With **2,710 samples**, MM-IQ is the largest and most comprehensive abstract visual reasoning (AVR) benchmark for evaluating the intelligence of MLLMs, **3x and 10x** larger than the two recent benchmarks MARVEL and MathVista-IQTest, respectively. By focusing on AVR problems, MM-IQ provides a targeted assessment of the cognitive capabilities and intelligence of MLLMs, contributing to a more comprehensive understanding of their strengths and limitations in the pursuit of AGI.

<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/MMIQ_distribution.png" style="zoom:50%;" />

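To reproduce the sample count and per-pattern distribution shown above, here is a minimal sketch (it assumes the single `test` split used in the usage examples below):

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("huanqia/MM-IQ")["test"]
print(len(dataset))                  # expected: 2710 samples
print(Counter(dataset["category"]))  # samples per reasoning pattern
```
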
## Paper Information

- Paper: Coming soon.
- Code: https://github.com/AceCHQ/MMIQ/tree/main
- Project: https://acechq.github.io/MMIQ-benchmark/
- Leaderboard: https://acechq.github.io/MMIQ-benchmark/#leaderboard

## Dataset Examples

Examples from our MM-IQ benchmark:

1. Logical Operation Reasoning

<p>Prompt: Choose the most appropriate option from the given four choices to fill in the question mark, so that it presents a certain regularity:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/logical_AND_2664.png" style="zoom:100%;" />

<details>
<summary>🔍 Click to expand/collapse more examples</summary>

2. Mathematical Reasoning

<p>Prompt: Choose the most appropriate option from the given four options to present a certain regularity:</p>
<p>Option A: 4; Option B: 5; Option C: 6; Option D: 7.</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/arithmetic_1133.png" style="zoom:120%;" />

3. 2D-Geometry Reasoning

<p>Prompt: The option that best fits the given pattern of figures is ( ).</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/2D_sys_1036.png" style="zoom:40%;" />

4. 3D-Geometry Reasoning

<p>Prompt: The one that matches the top view is:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/3D_view_1699.png" style="zoom:30%;" />

5. Visual Instruction Reasoning

<p>Prompt: Choose the most appropriate option from the given four options to present a certain regularity:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/Visual_instruction_arrow_2440.png" style="zoom:50%;" />

6. Spatial Relationship Reasoning

<p>Prompt: Choose the most appropriate option from the given four options to present a certain regularity:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/spatial_6160.png" style="zoom:120%;" />

7. Concrete Object Reasoning

<p>Prompt: Choose the most appropriate option from the given four choices to fill in the question mark, so that it presents a certain regularity:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/concrete_object_6167.png" style="zoom:120%;" />

8. Temporal Movement Reasoning

<p>Prompt: Choose the most appropriate option from the given four choices to fill in the question mark, so that it presents a certain regularity:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/temporal_rotation_1379.png" style="zoom:50%;" />

</details>

## Leaderboard

🏆 The leaderboard for *MM-IQ* (2,710 problems) is available [here](https://acechq.github.io/MMIQ-benchmark/#leaderboard).

## Dataset Usage

### Data Downloading

You can download this dataset with the following command (make sure that you have installed [Huggingface Datasets](https://huggingface.co/docs/datasets/quickstart)):

```python
from IPython.display import display  # used below to render images in a notebook
from datasets import load_dataset

dataset = load_dataset("huanqia/MM-IQ")
```

Here are some examples of how to access the downloaded dataset:

```python
# Print the first example in the MM-IQ dataset
print(dataset["test"][0])

print(dataset["test"][0]['data_id'])   # print the problem id
print(dataset["test"][0]['question'])  # print the question text
print(dataset["test"][0]['answer'])    # print the answer

# Display the image
print("Image context:")
display(dataset["test"][0]['image'])
```

We have uploaded a demo that illustrates how to access the MM-IQ dataset on Hugging Face, available at [hugging_face_dataset_demo.ipynb](https://github.com/AceCHQ/MMIQ/blob/main/mmiq/jupyter_notebook_demos/hugging_face_dataset_demo.ipynb).

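The metadata above also lists parallel `question_en` and `question_zh` fields. Assuming, as the field names and the `en`/`zh` language tags suggest, that these hold the English and Chinese versions of the question, a minimal sketch for selecting the question text by language:

```python
def get_question(sample, lang="en"):
    """Return the question in the requested language, falling back to the default text."""
    return sample.get(f"question_{lang}") or sample["question"]

# `dataset` as loaded above
print(get_question(dataset["test"][0], lang="zh"))
```
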
### Data Format

The dataset is provided in Parquet format and contains the following attributes:

```json
{
    "question": [string] The question text,
    "question_en": [string] The English version of the question text,
    "question_zh": [string] The Chinese version of the question text,
    "answer": [string] The correct answer for the problem,
    "data_id": [int] The problem id,
    "category": [string] The category of the reasoning pattern,
    "MD5": [string] The MD5 checksum of the image,
    "split": [string] The dataset split,
    "image": [image] The image (raw bytes and image path) corresponding to the image in data.zip,
}
```

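Since the `image` column is stored with the `datasets` Image feature, accessing it yields a decoded `PIL.Image.Image` that you can save or inspect directly. A short sketch (the output filename is purely illustrative):

```python
from datasets import load_dataset

dataset = load_dataset("huanqia/MM-IQ")
sample = dataset["test"][0]

image = sample["image"]  # decoded PIL image
image.save(f"mmiq_{sample['data_id']}.png")
print(sample["category"], sample["answer"])
```
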
### Automatic Evaluation

🔔 To automatically evaluate a model on the dataset, please refer to our GitHub repository [here](https://github.com/AceCHQ/MMIQ/tree/main/mmiq).

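The repository contains the official evaluation scripts. As a rough illustration of what the metric boils down to, here is a minimal sketch that scores a hypothetical `predictions` mapping (`data_id` -> predicted option) against the ground-truth `answer` field, overall and per category:

```python
from collections import defaultdict

from datasets import load_dataset

dataset = load_dataset("huanqia/MM-IQ")["test"]

# Hypothetical model outputs, keyed by data_id; placeholder values for illustration.
predictions = {0: "A", 1: "C"}

correct, total = defaultdict(int), defaultdict(int)
for sample in dataset:
    pred = predictions.get(sample["data_id"])
    if pred is None:
        continue  # skip problems the model was not run on
    total[sample["category"]] += 1
    correct[sample["category"]] += int(pred.strip().upper() == sample["answer"].strip().upper())

for category in total:
    print(f"{category}: {correct[category] / total[category]:.3f} ({total[category]} problems)")
if total:
    print(f"overall: {sum(correct.values()) / sum(total.values()):.3f}")
```
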
## Citation

If you use the **MM-IQ** dataset in your work, please kindly cite the paper using this BibTeX:

```bibtex
@misc{cai2025mm-iq,
    title = {MM-IQ: Benchmarking Human-Like Abstraction and Reasoning in Multimodal Models},
    author = {Huanqia Cai and Yijun Yang and Winston Hu},
    month = {January},
    year = {2025}
}
```

## Contact

Huanqia Cai: [email protected]