---
task_categories:
- question-answering
tags:
- science
pretty_name: Scientific Figure Interpretation Benchmark
size_categories:
- 1K<n<10K
---

# SciFIBench

Note: the original splits have been renamed: 'Figure2Caption' -> 'CS_Figure2Caption' and 'Caption2Figure' -> 'CS_Caption2Figure'.

## Dataset Description

- **Homepage:** [SciFIBench](https://scifibench.github.io/)
- **Paper:** [SciFIBench: Benchmarking Large Multimodal Models for Scientific Figure Interpretation](https://arxiv.org/pdf/2405.08807)
- **Repository:** [SciFIBench](https://github.com/jonathan-roberts1/SciFIBench)

### Dataset Summary

SciFIBench (the Scientific Figure Interpretation Benchmark) contains 2000 multiple-choice scientific figure interpretation questions covering two tasks. Task 1: Figure -> Caption involves selecting the most appropriate caption given a figure; Task 2: Caption -> Figure involves the opposite: selecting the most appropriate figure given a caption. The benchmark was curated from the SciCap and ArXivCap datasets, using adversarial filtering to obtain hard negatives. Each question has been human-verified to ensure it is high quality and answerable.

### Example Usage

```python
from datasets import load_dataset

# load dataset
dataset = load_dataset("jonathan-roberts1/SciFIBench") # optional: set cache_dir="PATH/TO/MY/CACHE/DIR"

# there are 4 dataset splits, which can also be loaded individually
# cs_figure2caption_dataset = load_dataset("jonathan-roberts1/SciFIBench", split="CS_Figure2Caption")
# cs_caption2figure_dataset = load_dataset("jonathan-roberts1/SciFIBench", split="CS_Caption2Figure")
# general_figure2caption_dataset = load_dataset("jonathan-roberts1/SciFIBench", split="General_Figure2Caption")
# general_caption2figure_dataset = load_dataset("jonathan-roberts1/SciFIBench", split="General_Caption2Figure")

"""
DatasetDict({
    CS_Caption2Figure: Dataset({
        features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'],
        num_rows: 500
    })
    CS_Figure2Caption: Dataset({
        features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'],
        num_rows: 500
    })
    General_Caption2Figure: Dataset({
        features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'],
        num_rows: 500
    })
    General_Figure2Caption: Dataset({
        features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'],
        num_rows: 500
    })
})
"""

# select task and split
cs_figure2caption_dataset = dataset['CS_Figure2Caption']
"""
Dataset({
    features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'],
    num_rows: 500
})
"""

# query items
cs_figure2caption_dataset[40] # e.g., the 41st element
"""
{'ID': 40,
 'Question': 'Which caption best matches the image?',
 'Options': ['A) ber vs snr for fft size=2048 using ls , lmmse , lr-lmmse .',
  'B) ber vs snr for fft size=1024 using ls , lmmse , lr-lmmse algorithms .',
  'C) ber vs snr for fft size=512 using ls , lmmse , lr-lmmse algorithms .',
  'D) ber vs snr for fft size=256 using ls , lmmse , lr-lmmse algorithms with a 16 qam modulation .',
  'E) ber vs snr for a bpsk modulation .'],
 'Answer': 'D',
 'Category': 'other cs',
 'Images': [<PIL.PngImagePlugin.PngImageFile image mode=RGB size=...>]}
"""
```

### Source Data

More information regarding the source data can be found at: https://github.com/tingyaohsu/SciCap and https://mm-arxiv.github.io/.
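### Example Evaluation

To make the multiple-choice format concrete, here is a minimal sketch of scoring a model on the CS Figure -> Caption split. It assumes a hypothetical `query_model` function standing in for whatever LMM inference call you use (taking a text prompt plus the question's images and returning a single option letter); the prompt template is likewise just one plausible choice, not the exact prompt used in the paper.

```python
from datasets import load_dataset

# load the CS Figure -> Caption split
dataset = load_dataset("jonathan-roberts1/SciFIBench", split="CS_Figure2Caption")

def build_prompt(item):
    # assemble a multiple-choice prompt from the question and its lettered options
    options = "\n".join(item["Options"])
    return f"{item['Question']}\n{options}\nAnswer with the letter of the best option."

num_correct = 0
for item in dataset:
    prompt = build_prompt(item)
    images = item["Images"]  # list of PIL images attached to the question
    # query_model is a hypothetical placeholder for your LMM inference call;
    # it is assumed to return a single option letter such as "D"
    prediction = query_model(prompt, images)
    num_correct += prediction.strip().upper().startswith(item["Answer"])

print(f"Accuracy: {num_correct / len(dataset):.2%}")
```

Matching only the leading letter of the response is a simple heuristic; in practice, model outputs may need more robust parsing before comparison with the `Answer` field.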
### Dataset Curators

This dataset was curated by Jonathan Roberts, Kai Han, Neil Houlsby, and Samuel Albanie.

### Citation Information

```
@article{roberts2024scifibench,
  title={SciFIBench: Benchmarking Large Multimodal Models for Scientific Figure Interpretation},
  author={Roberts, Jonathan and Han, Kai and Houlsby, Neil and Albanie, Samuel},
  journal={arXiv preprint arXiv:2405.08807},
  year={2024}
}
```