---
extra_gated_prompt: 'The VideoMMMU dataset contains links to web videos used for data
  collection purposes. VideoMMMU does not own or claim rights to the content linked
  within this dataset; all rights and copyright remain with the respective content
  creators and channel owners. Users are responsible for ensuring compliance with
  the terms and conditions of the platforms hosting these videos. '
extra_gated_fields:
  I acknowledge that VideoMMMU does not own the videos linked in this dataset: checkbox
  I acknowledge that VideoMMMU is not the original creator of the videos in this dataset: checkbox
  I understand that VideoMMMU may modify or remove dataset content at the request of content creators or in accordance with platform policies: checkbox
  I accept the dataset license terms (CC-BY-NC-SA 4.0): checkbox
  I agree to use this dataset for non-commercial use ONLY: checkbox
dataset_info:
- config_name: Adaptation
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  - name: answer
    dtype: string
  - name: link_selected
    dtype: string
  - name: image
    dtype: image
  - name: question_type
    dtype: string
  splits:
  - name: test
    num_bytes: 78229293.0
    num_examples: 300
  download_size: 78107780
  dataset_size: 78229293.0
- config_name: Comprehension
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  - name: answer
    dtype: string
  - name: link_selected
    dtype: string
  - name: question_type
    dtype: string
  splits:
  - name: test
    num_bytes: 210307
    num_examples: 300
  download_size: 95067
  dataset_size: 210307
- config_name: Perception
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  - name: answer
    dtype: string
  - name: link_selected
    dtype: string
  - name: question_type
    dtype: string
  splits:
  - name: test
    num_bytes: 177880
    num_examples: 300
  download_size: 83750
  dataset_size: 177880
configs:
- config_name: Adaptation
  data_files:
  - split: test
    path: Adaptation/test-*
- config_name: Comprehension
  data_files:
  - split: test
    path: Comprehension/test-*
- config_name: Perception
  data_files:
  - split: test
    path: Perception/test-*
---
This dataset contains the data for the paper [Video-MMMU: Evaluating Knowledge Acquisition from Multi-Discipline Professional Videos](https://huggingface.co/papers/2501.13826). Video-MMMU is a multi-modal, multi-disciplinary benchmark designed to assess LMMs' ability to acquire and utilize knowledge from videos.

Project page: https://videommmu.github.io/
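
As a quick start, here is a minimal sketch of loading the three evaluation tracks with the `datasets` library. The repository id below is an assumption; because access is gated, you may also need to accept the terms on the dataset page and authenticate first (e.g. via `huggingface-cli login`).

```python
from datasets import load_dataset

# Assumed Hugging Face repo id -- substitute the actual VideoMMMU repository path.
REPO_ID = "lmms-lab/VideoMMMU"

# The card defines three configs, each with a single 300-example "test" split.
for config in ["Perception", "Comprehension", "Adaptation"]:
    ds = load_dataset(REPO_ID, config, split="test")
    print(config, ds.num_rows, ds.column_names)
```

Note that only the Adaptation config carries an `image` feature; the other two tracks share the same text-only schema (`id`, `question`, `options`, `answer`, `link_selected`, `question_type`).
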
### Leaderboard (last updated: Feb 7, 2025)
| Model | Overall | Perception | Comprehension | Adaptation | Δknowledge |
|---|---|---|---|---|---|
| **Human Expert** | 74.44 | 84.33 | 78.67 | 60.33 | +33.1 |
| [Claude-3.5-Sonnet](https://www.anthropic.com/news/claude-3-5-sonnet) | 65.78 | 72.00 | 69.67 | 55.67 | +11.4 |
| [GPT-4o](https://openai.com/index/hello-gpt-4o/) | 61.22 | 66.00 | 62.00 | 55.67 | +15.6 |
| [Qwen-2.5-VL-72B](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct) | 60.22 | 69.33 | 61.00 | 50.33 | +9.7 |
| [Gemini 1.5 Pro](https://deepmind.google/technologies/gemini/pro/) | 53.89 | 59.00 | 53.33 | 49.33 | +8.7 |
| [Aria](https://rhymes.ai/blog-details/aria-first-open-multimodal-native-moe-model) | 50.78 | 65.67 | 46.67 | 40.00 | +3.2 |
| [Gemini 1.5 Flash](https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf) | 49.78 | 57.33 | 49.00 | 43.00 | -3.3 |
| [LLaVA-Video-72B](https://huggingface.co/lmms-lab/LLaVA-Video-72B-Qwen2) | 49.67 | 59.67 | 46.00 | 43.33 | +7.1 |
| [LLaVA-OneVision-72B](https://huggingface.co/llava-hf/llava-onevision-qwen2-72b-ov-hf) | 48.33 | 59.67 | 42.33 | 43.00 | +6.6 |
| [Qwen-2.5-VL-7B](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) | 47.44 | 58.33 | 44.33 | 39.67 | +2.2 |
| [mPLUG-Owl3-7B](https://github.com/X-PLUG/mPLUG-Owl/tree/main/mPLUG-Owl3) | 42.00 | 49.33 | 38.67 | 38.00 | +7.5 |
| [MAmmoTH-VL-8B](https://mammoth-vl.github.io/) | 41.78 | 51.67 | 40.00 | 33.67 | +1.5 |
| [InternVL2-8B](https://huggingface.co/OpenGVLab/InternVL2-8B) | 37.44 | 47.33 | 33.33 | 31.67 | -8.5 |
| [LLaVA-Video-7B](https://huggingface.co/lmms-lab/LLaVA-Video-7B-Qwen2) | 36.11 | 41.67 | 33.33 | 33.33 | -5.3 |
| [VILA1.5-40B](https://huggingface.co/Efficient-Large-Model/VILA1.5-40b) | 34.00 | 38.67 | 30.67 | 32.67 | +9.4 |
| [Llama-3.2-11B](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/) | 30.00 | 35.67 | 32.33 | 22.00 | - |
| [LongVA-7B](https://huggingface.co/lmms-lab/LongVA-7B) | 23.98 | 24.00 | 24.33 | 23.67 | -7.0 |
| [VILA1.5-8B](https://huggingface.co/Efficient-Large-Model/Llama-3-VILA1.5-8B-Fix) | 20.89 | 20.33 | 17.33 | 25.00 | +5.9 |

To submit your model results, please send an email to [email protected].
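
For anyone reproducing the table, the Overall column above is the unweighted mean of the three track scores. A small sanity check using values copied from the GPT-4o row:

```python
# Sanity check: Overall equals the mean of the three track accuracies.
# Values below are taken from the GPT-4o row of the leaderboard above.
perception, comprehension, adaptation = 66.00, 62.00, 55.67
overall = (perception + comprehension + adaptation) / 3
print(round(overall, 2))  # 61.22, matching the reported Overall score
```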