Update dataset card: Add link to paper on HF
#1 opened by nielsr (HF Staff)

README.md CHANGED
@@ -1,5 +1,7 @@
 ---
 license: mit
+size_categories:
+- 1K<n<10K
 task_categories:
 - image-to-text
 - text-to-image
@@ -9,13 +11,12 @@ tags:
 - Multimodal
 - Vision-Language
 - VLLMs
-size_categories:
-- 1K<n<10K
 ---
+
 # VL-ICL Bench
 VL-ICL Bench: The Devil in the Details of Benchmarking Multimodal In-Context Learning
 
-[[Webpage]](https://ys-zong.github.io/VL-ICL/) [[Paper]](https://
+[[Webpage]](https://ys-zong.github.io/VL-ICL/) [[Paper]](https://huggingface.co/papers/2403.13164) [[Code]](https://github.com/ys-zong/VL-ICL)
 
 
 ## Image-to-Text Tasks
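For reference, the card's YAML frontmatter after this change reads roughly as follows. This is a sketch assembled from the two hunks above; the diff does not show every line between them, so the elided portion is marked with a comment rather than guessed:

```yaml
---
license: mit
size_categories:
- 1K<n<10K
task_categories:
- image-to-text
- text-to-image
# ... (unchanged lines not shown in the diff)
tags:
- Multimodal
- Vision-Language
- VLLMs
---
```

Moving `size_categories` above `task_categories` and adding the `huggingface.co/papers` link lets the Hub index the dataset card against the paper page.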