---
license: mit
---

# Visual Haystacks Dataset Card

## Dataset details

Dataset type: Visual Haystacks (VHs) is a benchmark dataset specifically designed to evaluate the capability of Large Multimodal Models (LMMs) to handle long-context visual information. It can also be viewed as the first visual-centric Needle-In-A-Haystack (NIAH) benchmark dataset.

Please also download COCO-2017's training and validation sets and arrange them in the following layout (a quick layout check is sketched at the end of this card):

```
dataset/
└── coco
    ├── annotations
    ├── test2017
    ├── train2017
    └── val2017
```

## Dataset date

VHs was collected in April 2024, derived directly from COCO's image and object annotations.

## Paper or resources for more information

[TODO]

## License

[TODO]

Where to send questions or comments about the dataset: https://github.com/visual-haystacks/[TODO]/issues

## Intended use

Primary intended uses: The primary use of VHs is research on large multimodal models and chatbots.

Primary intended users: The primary intended users of the dataset are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
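
As a convenience, below is a minimal sketch for verifying that the COCO-2017 files are in the layout shown under "Dataset details" before running the benchmark. The `dataset/coco` root and subdirectory names come from the tree above; the `check_coco_layout` helper name is our own and purely illustrative.

```python
import os

# Subdirectories shown in the layout tree. The card explicitly asks for the
# training and validation sets; test2017 also appears in the tree, so we
# warn about it rather than treat it as required.
REQUIRED_SUBDIRS = ["annotations", "train2017", "val2017"]
OPTIONAL_SUBDIRS = ["test2017"]


def check_coco_layout(root: str = "dataset/coco") -> None:
    """Raise if any required COCO-2017 subdirectory is missing under root."""
    missing = [d for d in REQUIRED_SUBDIRS
               if not os.path.isdir(os.path.join(root, d))]
    if missing:
        raise FileNotFoundError(
            f"Missing COCO-2017 subdirectories under {root}: {', '.join(missing)}"
        )
    for d in OPTIONAL_SUBDIRS:
        if not os.path.isdir(os.path.join(root, d)):
            print(f"Note: optional subdirectory '{d}' not found under {root}.")


if __name__ == "__main__":
    check_coco_layout()
    print("COCO-2017 layout looks good.")
```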