TITLE = """<h1 align="center" id="space-title">ConTextual Leaderboard</h1>"""
INTRODUCTION_TEXT = """
Models are becoming quite good at understanding text on its own, but what about text in images, which gives important contextual information? For example, navigating a map, or understanding a meme? The ability to reason about the interactions between the text and visual context in images can power many real-world applications, such as AI assistants, or tools to assist the visually impaired. We refer to these tasks as context-sensitive text-rich visual reasoning tasks.
At the moment, most evaluations of instruction-tuned large multimodal models (LMMs) focus on testing how well models can respond to human instructions posed as questions or imperative tasks over images… but not how well they understand context-sensitive text-rich scenes! That’s why we created ConTextual, a Context-sensitive Text-rich visuaL reasoning dataset for evaluating LMMs. We also released a leaderboard, so that the community can see for themselves which models are the best at this task. (See our [paper](https://arxiv.org/abs/2401.13311) for more details.)
## Data
ConTextual comprises **506 examples covering 8 real-world visual scenarios** - *Time Reading, Shopping, Navigation, Abstract Scenes, Mobile Application, Webpages, Infographics and Miscellaneous Natural Scenes*. Each sample consists of:
- A text-rich image
- A human-written instruction (question or imperative task)
- A human-written reference response
### Data Access
ConTextual data can be found on HuggingFace and GitHub.
- HuggingFace
- [Test](https://huggingface.co/datasets/ucla-contextual/contextual_test)
- [Val](https://huggingface.co/datasets/ucla-contextual/contextual_val)
- GitHub
- [Test](https://github.com/rohan598/ConTextual/blob/main/data/contextual_test.csv)
- [Val](https://github.com/rohan598/ConTextual/blob/main/data/contextual_val.csv)
### Data Format
```
{
"image_url": [string] url to the hosted image,
"instruction" [string] instruction text,
"response": [string] response text (only provided for samples in the val subset),
"category": visual scenario this example belongs to like 'time' and 'shopping' out of 8 possible scenarios in ConTextual
}
```
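If you just want to poke at the data, here is a minimal sketch that reads the validation split with pandas. It assumes the raw counterpart of the GitHub CSV linked above and that the columns match the fields in the format block; check the actual column names before relying on them.
```
# Minimal sketch: load the ConTextual val split into a DataFrame.
# The URL below is the assumed raw counterpart of the GitHub link above;
# column names are assumed to match the data format listed in this section.
import pandas as pd

VAL_CSV = "https://raw.githubusercontent.com/rohan598/ConTextual/main/data/contextual_val.csv"
val_df = pd.read_csv(VAL_CSV)

print(val_df.columns.tolist())        # inspect the actual column names
print(val_df.iloc[0]["instruction"])  # first instruction in the val set
```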
"""
SUBMISSION_TEXT = """
## Submissions
Only validation results can be submitted here. Scores are expressed as the percentage of correct answers for a given split.
Submissions made by our team are labelled "ConTextual authors".
### Validation Results
To submit your validation results to the leaderboard, you can run our auto-evaluation code (Evaluation Pipeline with GPT4), following the instructions [here](https://github.com/rohan598/ConTextual?tab=readme-ov-file#-evaluation-pipeline-gpt-4).
We expect submissions to be in JSON format, as shown below:
```
{"model_name": {"img_url": "1 or 0 as integer"}
Replace model name with your model name (string)
Replace img_url with img_url of the instance (string)
Value for an img url is either 0 or 1 (int)
There should be 100 predictions, corresponding to the 100 urls of the val set.
```
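As an unofficial convenience, here is a minimal Python sketch of how such a file could be assembled and sanity-checked; `my-model-name` and the example entry are hypothetical placeholders.
```
# Minimal, unofficial sketch: build and sanity-check a val submission file.
# `scores` maps each val img_url to 0 or 1 (e.g. produced by the GPT-4 evaluation pipeline).
import json

scores = {
    "https://example.com/val_img_001.png": 1,  # hypothetical placeholder entry
}

submission = {"my-model-name": {url: int(score) for url, score in scores.items()}}

assert all(v in (0, 1) for v in submission["my-model-name"].values())
# A complete submission should cover all 100 val urls:
# assert len(submission["my-model-name"]) == 100

with open("val_submission.json", "w") as f:
    json.dump(submission, f, indent=2)
```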
**Please do not utilize the public dev set as part of training data for your models.**
### Test Results
Once you are happy with your val results, you can send your model predictions to [Rohan](mailto:[email protected]) and [Hritik](mailto:[email protected]).
Please include the following in your email:
1) A name for your model.
2) Organization (affiliation).
3) (Optionally) GitHub repo or paper link.
We expect submissions to be in JSON format, similar to the val set, as shown below:
```
{"model_name": {"img_url": "predicted response"}
Replace model name with your model name (string)
Replace img_url with img_url of the instance (string)
Value for an img url is the predicted response for that instance (string)
There should be 506 predictions, corresponding to the 506 urls of the test set.
```
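Here is a similar unofficial sketch for writing the test prediction file; `run_model` is a hypothetical stand-in for your model's inference call, and the CSV columns are assumed to match the data format above.
```
# Minimal, unofficial sketch: write test predictions in the expected shape.
import json
import pandas as pd

test_df = pd.read_csv("contextual_test.csv")  # downloaded from the GitHub link above

def run_model(image_url, instruction):
    # Hypothetical stand-in: replace with your model's inference call.
    return "predicted response"

predictions = {
    row.image_url: run_model(row.image_url, row.instruction)  # assumes these column names
    for row in test_df.itertuples()
}
# A complete submission should cover all 506 test urls:
assert len(predictions) == 506

with open("test_submission.json", "w") as f:
    json.dump({"my-model-name": predictions}, f, indent=2)
```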
**Please check the test leaderboard 1 to 2 days after sharing your prediction file to view your model's scores and ranking.**
"""
CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
CITATION_BUTTON_TEXT = r"""@misc{wadhawan2024contextual,
title={ConTextual: Evaluating Context-Sensitive Text-Rich Visual Reasoning in Large Multimodal Models},
author={Rohan Wadhawan and Hritik Bansal and Kai-Wei Chang and Nanyun Peng},
year={2024},
eprint={2401.13311},
archivePrefix={arXiv},
primaryClass={cs.CV}
}"""
def format_error(msg):
    # Return msg wrapped as a red, centered HTML paragraph.
    return f"<p style='color: red; font-size: 20px; text-align: center;'>{msg}</p>"

def format_warning(msg):
    # Return msg wrapped as an orange, centered HTML paragraph.
    return f"<p style='color: orange; font-size: 20px; text-align: center;'>{msg}</p>"

def format_log(msg):
    # Return msg wrapped as a green, centered HTML paragraph.
    return f"<p style='color: green; font-size: 20px; text-align: center;'>{msg}</p>"

def model_hyperlink(link, model_name):
    # Return the model name as a dotted-underline hyperlink that opens in a new tab.
    return f'<a target="_blank" href="{link}" style="color: var(--link-text-color); text-decoration: underline; text-decoration-style: dotted;">{model_name}</a>'