tsunghanwu committed on
Commit 7a7b390 · verified · 1 Parent(s): 967df8e

Update README.md

Files changed (1)
  1. README.md +30 -3
README.md CHANGED
@@ -1,3 +1,30 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ ---
+
+ # Visual Haystacks Dataset Card
+
+ ## Dataset details
+ Dataset type: Visual Haystacks (VHs) is a benchmark dataset specifically designed to evaluate Large Multimodal Models' (LMMs') capability to handle long-context visual information. It can also be viewed as the first visual-centric Needle-In-A-Haystack (NIAH) benchmark dataset. Please also download COCO-2017's training and validation sets and arrange them as follows:
+
+ ```
+ dataset/
+ └── coco
+     ├── annotations
+     ├── test2017
+     ├── train2017
+     └── val2017
+ ```
+
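
Below is a minimal sketch of one way to fetch the COCO-2017 archives and unpack them into the layout above. The URLs are the official COCO download links; the `dataset/coco` target path and the use of Python's standard library are assumptions for illustration, not part of the VHs release.

```python
# Sketch: download COCO-2017 and unpack it into dataset/coco (path assumed from the layout above).
# Note: these archives total tens of gigabytes.
import urllib.request
import zipfile
from pathlib import Path

COCO_ROOT = Path("dataset/coco")  # assumed target directory
URLS = [
    "http://images.cocodataset.org/zips/train2017.zip",
    "http://images.cocodataset.org/zips/val2017.zip",
    "http://images.cocodataset.org/zips/test2017.zip",
    "http://images.cocodataset.org/annotations/annotations_trainval2017.zip",
]

COCO_ROOT.mkdir(parents=True, exist_ok=True)
for url in URLS:
    archive = COCO_ROOT / url.rsplit("/", 1)[-1]
    if not archive.exists():              # skip archives that are already present
        urllib.request.urlretrieve(url, archive)
    with zipfile.ZipFile(archive) as zf:  # each zip contains its own top-level folder
        zf.extractall(COCO_ROOT)          # (train2017/, val2017/, annotations/, ...)
```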
+ ## Dataset date: VHs was collected in April 2024 and is directly derived from COCO's images and object annotations.
+
+ ## Paper or resources for more information: [TODO]
+
+ ## License: [TODO]
+
+ ## Where to send questions or comments about the dataset: https://github.com/visual-haystacks/[TODO]/issues
+
+ ## Intended use
+ Primary intended uses: The primary use of VHs is research on large multimodal models and chatbots.
+
+ Primary intended users: The primary intended users of the dataset are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.