Update README.md
README.md
---
license: mit
---

# Visual Haystacks Dataset Card

## Dataset details

Dataset type: Visual Haystacks (VHs) is a benchmark dataset specifically designed to evaluate the capability of Large Multimodal Models (LMMs) to handle long-context visual information. It can also be viewed as the first visual-centric Needle-In-A-Haystack (NIAH) benchmark dataset. Please also download the COCO-2017 training and validation sets and arrange the files in the layout below; a short download sketch follows the directory tree.

```
dataset/
└── coco
    ├── annotations
    ├── test2017
    ├── train2017
    └── val2017
```
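
The snippet below is a minimal sketch of one way to fetch and unpack the COCO-2017 archives into this layout. It assumes the standard public download links on images.cocodataset.org; verify the URLs against the official COCO site before relying on it.

```python
# Sketch: download COCO-2017 archives and extract them into dataset/coco.
# The images.cocodataset.org URLs are the commonly used public mirrors
# (an assumption here); check cocodataset.org if they have moved.
import urllib.request
import zipfile
from pathlib import Path

COCO_ROOT = Path("dataset/coco")
ARCHIVES = [
    "http://images.cocodataset.org/annotations/annotations_trainval2017.zip",
    "http://images.cocodataset.org/zips/train2017.zip",
    "http://images.cocodataset.org/zips/val2017.zip",
    "http://images.cocodataset.org/zips/test2017.zip",
]

COCO_ROOT.mkdir(parents=True, exist_ok=True)
for url in ARCHIVES:
    zip_path = COCO_ROOT / url.rsplit("/", 1)[-1]
    if not zip_path.exists():
        print(f"Downloading {url} ...")
        urllib.request.urlretrieve(url, str(zip_path))
    # Each archive contains its own top-level folder (e.g. val2017/,
    # annotations/), so extracting into dataset/coco yields the tree above.
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(COCO_ROOT)
```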

## Dataset date: VHs was collected in April 2024 and is directly derived from COCO's image and object annotations.

## Paper or resources for more information: [TODO]

## License: [TODO]

Where to send questions or comments about the dataset: https://github.com/visual-haystacks/[TODO]/issues

## Intended use

Primary intended uses: The primary use of VHs is research on large multimodal models and chatbots.

Primary intended users: The primary intended users of the dataset are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.