---
license: mit
task_categories:
- visual-question-answering
- question-answering
language:
- en
pretty_name: V2PE-Data
size_categories:
- 100B<n<1T
---

# V2PE-Data

![image.png](https://cdn-uploads.huggingface.co/production/uploads/646f23418180f35af53531a6/ewbZmWctNv-uLFlnMCGK9.png)


[\[📂 GitHub\]](https://github.com/OpenGVLab/V2PE) [\[🆕 Blog\]](https://zzdhybthu.github.io/V2PE.github.io/) [\[📜 Paper\]](https://arxiv.org/abs/2412.09616) [\[🤗 HF Models\]](https://huggingface.co/OpenGVLab/V2PE)

## Summary

We introduce two augmented long-context multimodal datasets: **Long Visual Question Answering (Long-VQA)** and **Long Multimodal Retrieval (Long-MR)**. These datasets are designed to support long-context training of VLMs and to establish a systematic evaluation framework, addressing challenges in long-context understanding that extend beyond the scope of existing training data.


![image.png](https://cdn-uploads.huggingface.co/production/uploads/646f23418180f35af53531a6/93ts7Q204GAX-Lu6tLnY8.png)


- **Long Visual Question Answering (Long-VQA):** The Long-VQA dataset evaluates the capabilities of VLMs in understanding and reasoning over long multimodal sequences within general visual question-answering tasks. We extended 17 widely adopted datasets (e.g., DocVQA, GQA, SQA), expanding their content from short sequences to sequences containing up to 32K tokens. The tasks involve answering questions that require commonsense reasoning, factual knowledge, and interpretation of visual information from charts, documents, and real-world texts. Long-VQA contains 533K samples: 392K for training (up to 32K tokens) and 141K for validation (up to 64K tokens), the latter used to evaluate generalization to longer contexts. A minimal construction sketch follows the figure below.

![image.png](https://cdn-uploads.huggingface.co/production/uploads/646f23418180f35af53531a6/gkfXER4GLtFGYpjQ0gu7G.png)
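For intuition only, the sketch below packs short VQA samples into a single long-context sample under a token budget, in the spirit of the Long-VQA construction described above. The names (`Sample`, `count_tokens`, `pack_samples`) and the fixed per-image token cost are illustrative assumptions, not the official data pipeline.

```python
# Hypothetical sketch: greedily pack short VQA samples into long-context samples.
from dataclasses import dataclass
from typing import List

@dataclass
class Sample:
    images: List[str]   # image paths/URLs belonging to this QA pair
    question: str
    answer: str

def count_tokens(sample: Sample, image_token_cost: int = 256) -> int:
    """Rough token estimate: fixed budget per image plus whitespace-split text."""
    text_tokens = len(sample.question.split()) + len(sample.answer.split())
    return len(sample.images) * image_token_cost + text_tokens

def pack_samples(samples: List[Sample], max_tokens: int = 32_000) -> List[List[Sample]]:
    """Concatenate short samples until the token budget (e.g. 32K) is reached."""
    packed, current, used = [], [], 0
    for s in samples:
        cost = count_tokens(s)
        if current and used + cost > max_tokens:
            packed.append(current)      # close the current long sample
            current, used = [], 0
        current.append(s)
        used += cost
    if current:
        packed.append(current)
    return packed
```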

- **Long Multimodal Retrieval (Long-MR):** We developed Long-MR by inserting a target image or textual segment into sequences of interleaved images and texts. Long-MR evaluates VLMs' ability to retrieve specific targets from ultra-long multimodal sequences, requiring models to locate the inserted "needle" and answer associated questions. Following the data construction process of MM-NIAH, we generated two subsets: Long-MR-32K (488K samples, sequences up to 32K tokens) and Long-MR-256K (50K samples, sequences up to 256K tokens). To assess the limits of VLMs' long-context capabilities, we further extended the official MM-NIAH evaluation benchmark by generating test samples with sequence lengths ranging from 64K to 1M tokens, resulting in the MM-NIAH-1M benchmark. This extension pushes the testing capacity beyond the original MM-NIAH, which was limited to sequences of up to 64K tokens. A simplified insertion sketch follows the figure below.

![image.png](https://cdn-uploads.huggingface.co/production/uploads/646f23418180f35af53531a6/mEpfOPY0gue_BHDDNCOMH.png)
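As a simplified illustration of the needle-in-a-haystack style used by MM-NIAH, the snippet below inserts a "needle" segment at a random position in an interleaved image/text sequence and records where it landed. The function name, placeholders, and question text are hypothetical and do not reproduce the exact Long-MR generation code.

```python
# Hypothetical sketch: insert a retrieval target ("needle") into an interleaved sequence.
import random
from typing import List, Optional, Tuple

def insert_needle(
    haystack: List[str],
    needle: str,
    rng: Optional[random.Random] = None,
) -> Tuple[List[str], int]:
    """Insert the needle at a random position; return the new sequence and the index."""
    rng = rng or random.Random(0)
    pos = rng.randint(0, len(haystack))
    return haystack[:pos] + [needle] + haystack[pos:], pos

# Build one toy retrieval sample from an interleaved image/text "haystack".
haystack = [f"<image_{i}>" if i % 2 == 0 else f"filler text segment {i}" for i in range(10)]
sequence, pos = insert_needle(haystack, needle="<needle: the secret word is 'position'>")
question = "What is the secret word in the inserted segment?"
```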

Please refer to our [paper](https://arxiv.org/abs/2412.09616) for more details.

## Evaluation Results of [Released Model](https://huggingface.co/OpenGVLab/V2PE)

**General MLLM Benchmarks**

| Model | #Param | ChartQA | DocVQA | AI2D | InfoVQA | SQA | POPE | MMMU<sub>val</sub> | MMBench<sub>EN</sub> | SEED<sub>I</sub> | Avg |
|---------------------------|--------|---------|--------|-------|---------|-------|-------|--------------------|---------------------|------------------|-------|
| InternVL2-2B | 2.0B | 71.7 | 86.9 | 74.1 | 58.9 | 94.1 | 85.2 | 36.3 | 73.4 | 70.9 | 72.4 |
| DeepSeek-VL-1.3B | 2.0B | 47.4 | - | 51.5 | - | 68.4 | 85.9 | 33.8 | 66.4 | 66.0 | - |
| Qwen2-VL-2B | 2.0B | 73.5 | 90.1 | 74.7 | 65.5 | - | - | 41.1 | 74.9 | - | - |
| Aquila-VL-2B | 2.2B | 32.0 | 85.0 | 75.1 | 58.3 | 95.1 | 83.1 | 46.9 | 79.0 | 73.9 | 69.8 |
| MiniCPM-V-2 | 2.8B | 55.6 | 71.9 | 62.9 | - | 80.7 | 86.3 | 38.2 | 64.1 | 67.1 | - |
| Vintern-3B-beta | 3.7B | 68.3 | - | 69.1 | - | 75.0 | 87.4 | 46.7 | 70.6 | 70.0 | - |
| Llama 3.2 11B | 11B | 83.4 | 88.4 | 91.1 | - | - | - | 50.7 | 68.0 | - | - |
| Qwen2-VL-72B | 73B | 88.3 | 96.5 | 88.1 | 84.5 | 91.2 | 87.2 | 64.5 | 86.9 | 77.9 | 85.0 |
| GPT-4o | - | 85.7 | 92.8 | 84.7 | - | 90.1 | 97.2 | 69.1 | 82.1 | 76.7 | - |
| **InternVL2-V2PE-32K** | 2.0B | **76.4** | **83.9** | **73.2** | **55.9** | **94.9** | **88.8** | **36.6** | **73.5** | **71.2** | **72.5** |

**Long-Context MLLM Benchmarks**

| Model | #Param | MM-NIAH/Image | MM-NIAH/Text | MM-NIAH/Avg | Milebench/T | Milebench/S | Milebench/NI | Milebench/Avg | VideoMME | MVBench |
|--------------------------|--------|---------------|--------------|-------------|--------------|--------------|---------------|--------------|------------|------------|
| InternVL2-2B | 2.0B | 23.0 | 18.9 | 21.0 | 58.2 | 54.5 | 37.0 | 49.9 | - | - |
| Phi-3-Vision | 2.7B | - | - | - | 46.9 | 50.0 | - | - | - | - |
| OmChat | 3.9B | - | - | - | 51.4 | 52.0 | - | - | 45.9 | 50.2 |
| LongLLaVA | 9B | - | - | - | 47.3 | 46.8 | - | - | 43.7 | 49.1 |
| LongLLaVA | 13B | - | - | - | 52.7 | 52.1 | - | - | 51.6 | 54.6 |
| VILA | 13B | 14.5 | 40.5 | 27.5 | - | - | - | - | - | - |
| Gemini-1.5 | - | 28.5 | 82.1 | 55.2 | 50.2 | 58.3 | 97.9 | **68.8** | **69.6** | - |
| GPT-4V | - | - | 84.1 | - | 45.6 | 58.9 | **99.4** | 68.0 | 59.9 | 43.5 |
| GPT-4o | - | - | - | - | 56.2 | **63.5** | - | - | 64.7 | - |
| Claude3-Opus | - | - | - | - | 37.4 | 48.1 | 85.3 | 56.9 | 59.7 | - |
| **InternVL2-V2PE-32K** | 2.0B | **78.1** | **85.7** | **81.8** | **65.5** | 56.4 | 97.2 | 72.5 | 50.7 | **65.6** |

## Usage

Please refer to our [GitHub repo](https://github.com/OpenGVLab/V2PE?tab=readme-ov-file#prepare-training-datasets).
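If you only need the raw files, they can be fetched from the Hub with `huggingface_hub.snapshot_download`, as in the sketch below; the repo id is assumed from this card's name, so check the hub page if it differs.

```python
# Minimal download sketch; repo_id assumed to be OpenGVLab/V2PE-Data.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="OpenGVLab/V2PE-Data",
    repo_type="dataset",   # dataset repo, not a model repo
)
print(f"Dataset files downloaded to: {local_dir}")
```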

## Citation

If you find this work helpful in your research, please consider citing:

```bibtex
@misc{ge2024v2peimprovingmultimodallongcontext,
      title={V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding},
      author={Junqi Ge and Ziyi Chen and Jintao Lin and Jinguo Zhu and Xihui Liu and Jifeng Dai and Xizhou Zhu},
      year={2024},
      eprint={2412.09616},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.09616},
}
```
