---
extra_gated_heading: >-
  You need to share contact information with the authors to access
  MVP-Bench.
extra_gated_prompt: >-
  ### MVP-Bench COMMUNITY LICENSE AGREEMENT

  By clicking "I Accept" below or by using or distributing any portion or
  element of the MVP-Bench Materials, you agree to be bound by this Agreement.

  1. The authors make no warranties regarding MVP-Bench, including but not
  limited to it being up-to-date, correct, or complete. The authors cannot be
  held liable for providing access to or usage of MVP-Bench.

  2. The data may be used for scientific or research purposes only. Any other
  use is explicitly prohibited.

  3. The data must not be provided or shared, in part or in full, with any
  third party.

  4. The researcher takes full responsibility for the usage of MVP-Bench once
  they are granted access.
extra_gated_fields:
  First Name: text
  Last Name: text
  Date of birth: date_picker
  Country: country
  Affiliation: text
  geo: ip_location
  By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected, stored, and processed: checkbox
extra_gated_button_content: Submit
language:
  - en
pipeline_tag: visual-question-answering
tags:
  - visual perception
  - benchmark
  - VQA
  - LVLM
  - MLLM
---

# MVP-Bench

MVP-Bench is a benchmark for evaluating the multi-level visual perception competence of Large Vision-Language Models (LVLMs). It was introduced in the paper *MVP-Bench: Can Large Vision-Language Models Conduct Multi-level Visual Perception Like Humans* and first released in this repository.

## Benchmark description

MVP-Bench consists of 520 image pairs and 1872 questions. To evaluate LVLMs' visual perception at both levels, the question set contains 1105 high-level and 767 low-level questions. A further novelty of the benchmark is that it assesses whether LVLMs ground their perception in the correct visual clues and can discern differing visual perceptions between the two images of a pair; to this end, the question set comprises 454 cross-image and 1418 single-image questions. By question type, 872 are multiple-choice questions and 1000 are Yes/No questions.
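The three breakdowns above each partition the same 1872 questions, which can be sanity-checked with a short script (the dictionary names below are illustrative, not part of any official MVP-Bench tooling):

```python
# Question counts reported for MVP-Bench
TOTAL_QUESTIONS = 1872

# Each dictionary is one way of partitioning the full question set
by_perception_level = {"high-level": 1105, "low-level": 767}
by_image_scope = {"cross-image": 454, "single-image": 1418}
by_question_type = {"multiple-choice": 872, "yes-no": 1000}

# Every axis should sum to the benchmark's total question count
for axis in (by_perception_level, by_image_scope, by_question_type):
    assert sum(axis.values()) == TOTAL_QUESTIONS
```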

## Intended uses & limitations

The benchmark is intended for scientific research only. To prevent misuse of MVP-Bench, we impose strict access requirements and track follow-up works to keep its usage within research-only goals.
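As a rough illustration of how the Yes/No portion of the benchmark might be scored, here is a hypothetical sketch; it is not the authors' official evaluation code, and the helper names are assumptions:

```python
def normalize_yes_no(response):
    """Map a free-form model response to 'yes' or 'no'; None if ambiguous.

    Hypothetical helper: MVP-Bench's actual evaluation scripts may differ.
    """
    words = response.strip().lower().split()
    if not words:
        return None
    first = words[0].strip(".,!?")  # tolerate "Yes, it is." style answers
    if first == "yes":
        return "yes"
    if first == "no":
        return "no"
    return None


def yes_no_accuracy(predictions, references):
    """Fraction of normalized predictions matching the gold yes/no labels."""
    correct = sum(
        normalize_yes_no(p) == r.lower() for p, r in zip(predictions, references)
    )
    return correct / len(references)
```

Ambiguous responses normalize to `None` and therefore count as wrong, following the common convention of treating non-answers as failures.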

## BibTeX entry and citation info