---
license: cc-by-nc-sa-4.0
extra_gated_prompt: >-
  You acknowledge and understand that: This dataset is provided solely for
  academic research purposes. It is not intended for commercial use or any other
  non-research activities. All copyrights, trademarks, and other intellectual
  property rights related to the videos in the dataset remain the exclusive
  property of their respective owners.
  You assume full responsibility for any additional use or dissemination of this dataset and for any consequences that may arise from such actions. You are also aware that the copyright holders of the original videos retain the right to request the removal of their videos from the dataset.
  Furthermore, it is your responsibility to respect these conditions and to use the dataset ethically and in compliance with all applicable laws and regulations. Any violation of these terms may result in the immediate termination of your access to the dataset.
extra_gated_fields:
  Institution: text
  Name: text
  Country: country
  I want to use this dataset for:
    type: select
    options:
      - Research
  I agree to use this dataset solely for research purposes: checkbox
  I will not use this dataset in any way that infringes upon the rights of the copyright holders of the original videos, and strictly prohibit its use for any commercial purposes: checkbox
task_categories:
- question-answering
language:
- en
---
<h1 align="center">MLVU: Multi-task Long Video Understanding Benchmark</h1>
<p align="center">
<a href="https://arxiv.org/abs/2406.04264">
<img alt="Build" src="http://img.shields.io/badge/cs.CV-arXiv%3A2406.04264-B31B1B.svg">
</a>
<a href="https://huggingface.co/datasets/MLVU/MVLU">
<img alt="Build" src="https://img.shields.io/badge/πŸ€— Dataset-MLVU Benchmark (Dev)-yellow">
</a>
<a href="https://github.com/JUNJIE99/MLVU">
<img alt="Build" src="https://img.shields.io/badge/Github-MLVU: Multi task Long Video Understanding Benchmark-blue">
</a>
</p>
This repo contains the annotation data and evaluation code for the paper "[MLVU: A Comprehensive Benchmark for Multi-Task Long Video Understanding](https://arxiv.org/abs/2406.04264)".
## 🔔 News:
- 🆕 7/28/2024: The data for the **MLVU-Test set** has been released ([🤗 Link](https://huggingface.co/datasets/MLVU/MLVU_Test))! The test set includes 11 different tasks, featuring our newly added Sports Question Answering (SQA, single-detail LVU) and Tutorial Question Answering (TQA, multi-detail LVU). MLVU-Test also **expands the number of options in multiple-choice questions to six**. While the ground truth of MLVU-Test will remain undisclosed, everyone will be able to evaluate online (the evaluation website is coming soon!). 🔥
- 🎉 7/28/2024: The **MLVU-Dev set** has now been integrated into [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval)! You can now conveniently evaluate the multiple-choice questions (MLVU<sub>M</sub>) of the [MLVU-Dev set](https://huggingface.co/datasets/MLVU/MVLU) with a single click using lmms-eval. Thanks to the lmms-eval team! 🔥
- 🏆 7/25/2024: We have released the [MLVU-Test leaderboard](https://github.com/JUNJIE99/MLVU?tab=readme-ov-file#trophy-mlvu-test-leaderboard) and incorporated evaluation results for several recently launched models, such as [LongVA](https://github.com/EvolvingLMMs-Lab/LongVA), [VILA](https://github.com/NVlabs/VILA), [ShareGPT4-Video](https://github.com/ShareGPT4Omni/ShareGPT4Video), etc. 🔥
- 🏠 6/19/2024: For better maintenance and updates, we have migrated MLVU to this new repository and will continue to update and maintain it here. If you have any questions, feel free to raise an issue. 🔥
- 🥳 6/7/2024: We have released the MLVU [Benchmark](https://huggingface.co/datasets/MLVU/MVLU) and [Paper](https://arxiv.org/abs/2406.04264)! 🔥
## License
Our dataset is under the CC-BY-NC-SA-4.0 license.
⚠️ If you need to access and use our dataset, you must understand and agree: **This dataset is for research purposes only and cannot be used for any commercial or other purposes. The user assumes full responsibility for any other use or dissemination of the dataset and for any consequences arising from it.**
We do not own the copyright of any raw video files. We currently provide video access to researchers on the condition that they acknowledge the above license. We respect the copyrights of the original video authors; accordingly, for the movies, TV series, documentaries, and cartoons used in the dataset, we have reduced the resolution, clipped the length, and adjusted the dimensions of the original videos to minimize the impact on the rights of the original works.
If the original authors of the related works still believe that the videos should be removed, please contact [email protected] or raise an issue directly.
## Introduction
We introduce MLVU: the first comprehensive benchmark designed for evaluating Multimodal Large Language Models (MLLMs) on Long Video Understanding (LVU) tasks. MLVU is constructed from a wide variety of long videos, with lengths ranging from 3 minutes to 2 hours, and includes nine distinct evaluation tasks. These tasks require MLLMs to leverage both global and local information from the videos.
Our evaluation of 20 popular MLLMs, including GPT-4o, reveals significant challenges in LVU: even the top-performing GPT-4o achieves an average score of only 64.6% on the multiple-choice tasks. Our empirical results further underscore the need for improvements in context length, image understanding, and the strength of the LLM backbone. We anticipate that MLVU will serve as a catalyst for the community to further advance MLLMs' capabilities in understanding long videos.
![Statistical overview of our MLVU dataset. **Left:** Distribution of Video Duration; **Middle:** Distribution of Source Types for Long Videos; **Right:** Quantification of Each Task Type.](./figs/statistic.png)
## 🏆 MLVU-Test Leaderboard
This table is sorted by M-AVG in descending order. TR (Topic Reasoning), AR (Anomaly Recognition), NQA (Needle QA), ER (Ego Reasoning), PQA (Plot QA), SQA (Sports QA), AO (Action Order), AC (Action Count), and TQA (Tutorial QA) are multiple-choice tasks; SSC (Sub-Scene Captioning) and VS (Video Summarization) are generation tasks. M-AVG is the average score over the multiple-choice tasks, and G-Avg is the average score over the generation tasks.
| Model | Input | TR | AR | NQA | ER | PQA | SQA | AO | AC | TQA | M-AVG | SSC | VS | G-Avg |
|------------|-------|------|------|------|------|------|------|------|------|------|-------|------|------|-------|
| [GPT-4o](https://openai.com/index/hello-gpt-4o/) | 0.5&nbsp;fps| 83.7 | 68.8 | 42.9 | 47.8 | 57.1 | 63.6 | 46.2 | 35.0 | 48.7 | 54.9 | 6.80 | 4.94 | 5.87 |
| [VILA-1.5](https://github.com/NVlabs/VILA) | 14 frm| 84.7 | 56.4 | 38.3 | 35.8 | 62.0 | 38.8 | 34.3 | 11.7 | 34.9 | 44.2 | 5.11 | 2.53 | 3.82 |
| [GPT-4 Turbo](https://openai.com/index/gpt-4v-system-card/) | 16 frm| 85.7 | 61.5 | 40.0 | 41.5 | 48.0 | 41.7 | 22.9 | 6.7 | 41.9 | 43.3 | 4.95 | 4.38 | 4.67 |
| [LongVA](https://github.com/EvolvingLMMs-Lab/LongVA) | 256 frm| 81.3 | 41.0 | 46.7 | 39.6 | 46.0 | 44.4 | 17.1 | 23.3 | 30.2 | 41.1 | 4.92 | 2.90 | 3.91 |
| [InternVL-1.5](https://github.com/OpenGVLab/InternVL) | 16 frm| 80.2 | 51.3 | 40.0 | 24.5 | 42.0 | 30.6 | 14.3 | 13.3 | 39.5 | 37.3 | 5.18 | 2.73 | 3.96 |
| [VideoChat2_HD](https://github.com/OpenGVLab/Ask-Anything/tree/main/video_chat2)| 16 frm| 74.7 | 43.6 | 35.0 | 34.0 | 30.0 | 30.6 | 21.4 | 23.3 | 23.3 | 35.1 | 5.14 | 2.83 | 3.99 |
| [Qwen-VL-Max](https://github.com/QwenLM/Qwen)| 10 frm| 75.8 | 53.8 | 15.0 | 26.4 | 38.0 | 44.4 | 20.0 | 11.7 | 22.6 | 34.2 | 4.84 | 3.00 | 3.92 |
| [ShareGPT4Video](https://github.com/ShareGPT4Omni/ShareGPT4Video) | 16 frm| 73.6 | 25.6 | 31.7 | 45.3 | 38.0 | 38.9 | 17.1 | 8.3 | 25.6 | 33.8 | 4.72 | 2.53 | 3.63 |
| [VideoLLaMA2-Chat](https://github.com/DAMO-NLP-SG/VideoLLaMA2)| 16 frm| 76.9 | 35.9 | 26.7 | 34.0 | 40.0 | 27.8 | 17.1 | 15.0 | 20.9 | 32.7 | 5.27 | 2.40 | 3.84 |
| [Video-LLaVA](https://github.com/PKU-YuanGroup/Video-LLaVA) | 8 frm| 70.3 | 38.5 | 13.3 | 26.4 | 26.0 | 38.9 | 20.0 | 21.7 | 20.9 | 30.7 | 5.06 | 2.30 | 3.68 |
| [VideoChat2-Vicuna](https://github.com/OpenGVLab/Ask-Anything/tree/main/video_chat2) | 16 frm| 72.5 | 30.8 | 18.3 | 28.3 | 26.0 | 36.1 | 17.1 | 23.3 | 18.6 | 30.1 | 4.80 | 2.30 | 3.55 |
| [MiniGPT4-Video](https://github.com/Vision-CAIR/MiniGPT4-video)| 90 frm| 64.9 | 46.2 | 20.0 | 30.2 | 30.0 | 16.7 | 15.7 | 15.0 | 18.6 | 28.6 | 4.27 | 2.50 | 3.39 |
| [LLaVA-1.6](https://github.com/haotian-liu/LLaVA) | 16 frm| 63.7 | 17.9 | 13.3 | 26.4 | 30.0 | 22.2 | 21.4 | 16.7 | 16.3 | 25.3 | 4.20 | 2.00 | 3.10 |
| [Claude-3-Opus](https://claude.ai/login?returnTo=%2F%3F) | 16 frm| 53.8 | 30.8 | 14.0 | 17.0 | 20.0 | 47.2 | 10.0 | 6.7 | 25.6 | 25.0 | 3.67 | 2.83 | 3.25 |
| [MA-LMM](https://github.com/boheumd/MA-LMM) | 16 frm| 44.0 | 23.1 | 13.3 | 30.2 | 14.0 | 27.8 | 18.6 | 13.3 | 14.0 | 22.0 | 4.61 | 3.04 | 3.83 |
| [Video-ChatGPT](https://github.com/mbzuai-oryx/Video-ChatGPT)| 16 frm| 17.6 | 17.9 | 28.3 | 32.1 | 22.0 | 27.8 | 17.1 | 13.3 | 11.6 | 20.9 | 5.06 | 2.22 | 3.64 |
| [Movie-LLM](https://github.com/Deaddawn/MovieLLM-code) | 1 fps| 27.5 | 25.6 | 10.0 | 11.3 | 16.0 | 16.7 | 20.0 | 21.7 | 23.3 | 19.1 | 4.93 | 2.10 | 3.52 |
| [Video-LLaMA-2](https://github.com/DAMO-NLP-SG/Video-LLaMA)| 16 frm| 52.7 | 12.8 | 13.3 | 17.0 | 12.0 | 19.4 | 15.7 | 8.3 | 18.6 | 18.9 | 4.87 | 2.23 | 3.55 |
| [MovieChat](https://github.com/rese1f/MovieChat) | 2048&nbsp;frm| 18.7 | 10.3 | 23.3 | 15.1 | 16.0 | 30.6 | 17.1 | 15.0 | 16.3 | 18.0 | 3.24 | 2.30 | 2.77 |
| [mPLUG-Owl-V](https://github.com/X-PLUG/mPLUG-Owl)| 16 frm| 25.3 | 15.4 | 6.7 | 13.2 | 22.0 | 19.4 | 14.3 | 20.0 | 18.6 | 17.2 | 5.01 | 2.20 | 3.61 |
| [LLaMA-VID](https://github.com/dvlab-research/LLaMA-VID) | 1 fps| 20.9 | 23.1 | 21.7 | 11.3 | 16.0 | 16.7 | 18.6 | 15.0 | 11.6 | 17.2 | 4.15 | 2.70 | 3.43 |
| [Otter-I](https://github.com/Luodian/Otter) | 16 frm| 17.6 | 17.9 | 16.7 | 17.0 | 18.0 | 16.7 | 15.7 | 16.7 | 14.0 | 16.7 | 3.90 | 2.03 | 2.97 |
| [VideoChat](https://github.com/OpenGVLab/Ask-Anything/tree/main/video_chat) | 16 frm| 26.4 | 12.8 | 18.3 | 17.0 | 22.0 | 11.1 | 15.7 | 11.7 | 14.0 | 16.6 | 4.90 | 2.15 | 3.53 |
| [Otter-V](https://github.com/Luodian/Otter) | 16 frm| 16.5 | 12.8 | 16.7 | 22.6 | 22.0 | 8.3 | 12.9 | 13.3 | 16.3 | 15.7 | 4.20 | 2.18 | 3.19 |
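For clarity, here is a minimal sketch of how the averaged columns relate to the per-task scores, assuming M-AVG is the unweighted mean of the nine multiple-choice task scores and G-Avg the mean of SSC and VS (which matches the rows above):

```python
# Minimal sketch: recompute M-AVG and G-Avg from per-task scores.
# Assumes M-AVG = unweighted mean of the nine multiple-choice tasks
# and G-Avg = mean of the two generation tasks (SSC, VS).
MC_TASKS = ["TR", "AR", "NQA", "ER", "PQA", "SQA", "AO", "AC", "TQA"]
GEN_TASKS = ["SSC", "VS"]

def average(scores: dict, tasks: list) -> float:
    return sum(scores[t] for t in tasks) / len(tasks)

gpt4o = {"TR": 83.7, "AR": 68.8, "NQA": 42.9, "ER": 47.8, "PQA": 57.1,
         "SQA": 63.6, "AO": 46.2, "AC": 35.0, "TQA": 48.7,
         "SSC": 6.80, "VS": 4.94}

print(round(average(gpt4o, MC_TASKS), 1))   # 54.9 (M-AVG)
print(round(average(gpt4o, GEN_TASKS), 2))  # 5.87 (G-Avg)
```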
## MLVU Benchmark
> Before you access our dataset, we kindly ask you to thoroughly read and understand the license outlined above. If you cannot agree to these terms, we request that you refrain from downloading our video data.
The annotation file is readily accessible [here](https://github.com/FlagOpen/FlagEmbedding/tree/master/MLVU/data). For the raw videos, you can access them via this [<u>🤗 HF Link</u>](https://huggingface.co/datasets/MLVU/MVLU).
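For programmatic access, a minimal sketch using `huggingface_hub` is shown below; since the dataset is gated, you need to accept the terms on the dataset page and authenticate (e.g. via `huggingface-cli login`) first.

```python
# Minimal sketch: download the MLVU files from the Hugging Face Hub.
# The dataset is gated, so accept the access terms on the dataset page
# and log in (or pass a token) before running this.
from huggingface_hub import snapshot_download

dev_dir = snapshot_download(repo_id="MLVU/MVLU", repo_type="dataset")        # MLVU-Dev
test_dir = snapshot_download(repo_id="MLVU/MLVU_Test", repo_type="dataset")  # MLVU-Test
print(dev_dir, test_dir)
```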
MLVU encompasses nine distinct tasks, which include multiple-choice tasks as well as free-form generation tasks. These tasks are specifically tailored for long-form video understanding, and are classified into three categories: holistic understanding, single detail understanding, and multi-detail understanding. Examples of the tasks are displayed below.
![Task Examples of our MLVU.](./figs/task_example.png)
## Evaluation
Please refer to our [evaluation](https://github.com/JUNJIE99/MLVU/tree/main/evaluation) and [evaluation_test](https://github.com/JUNJIE99/MLVU/tree/main/evaluation_test) folder for more details.
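As a rough illustration only (this is not the official evaluation code, and the annotation field names below are assumptions), scoring the multiple-choice tasks boils down to extracting the predicted option letter from the model's reply and comparing it with the ground truth:

```python
# Simplified sketch of multiple-choice accuracy scoring (not the official script).
# `predictions` maps a question id to the model's raw text reply;
# `annotations` is a list of dicts with hypothetical "id" and "answer" fields,
# where "answer" holds the ground-truth option letter (e.g. "A"-"F" for MLVU-Test).
import re

def extract_choice(text: str) -> str | None:
    """Pull the first standalone option letter out of a free-form model reply."""
    match = re.search(r"\b([A-F])\b", text.strip().upper())
    return match.group(1) if match else None

def accuracy(predictions: dict[str, str], annotations: list[dict]) -> float:
    correct = sum(
        extract_choice(predictions.get(ann["id"], "")) == ann["answer"]
        for ann in annotations
    )
    return correct / len(annotations)
```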
## Hosting and Maintenance
The annotation files will be permanently retained.
If some videos are requested to be removed, we will replace them with a set of video frames sparsely sampled from the video and adjusted in resolution. Since **all the questions in MLVU are only related to visual content** and do not involve audio, this will not significantly affect the validity of MLVU (most existing MLLMs also understand videos by frame extraction).
If even retaining the frame set is not allowed, we will still keep the relevant annotation files, and replace them with the meta-information of the video, or actively seek more reliable and risk-free video sources.
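As a point of reference for the frame-extraction setting mentioned above, here is a minimal OpenCV-based sketch of uniform frame sampling; the number of frames and any further preprocessing depend on the model being evaluated.

```python
# Minimal sketch: uniformly sample N frames from a video with OpenCV.
# Most MLLMs consume long videos this way, which is why replacing a removed
# video with a sparse frame set keeps the visual-only questions answerable.
import cv2
import numpy as np

def sample_frames(video_path: str, num_frames: int = 16) -> list[np.ndarray]:
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, max(total - 1, 0), num_frames, dtype=int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames
```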
## Citation
If you find this repository useful, please consider giving it a star 🌟 and a citation:
```
@article{MLVU,
  title={MLVU: A Comprehensive Benchmark for Multi-Task Long Video Understanding},
  author={Zhou, Junjie and Shu, Yan and Zhao, Bo and Wu, Boya and Xiao, Shitao and Yang, Xi and Xiong, Yongping and Zhang, Bo and Huang, Tiejun and Liu, Zheng},
  journal={arXiv preprint arXiv:2406.04264},
  year={2024}
}
```