
Introduction

MRAMG-Bench is a comprehensive multimodal benchmark built from six carefully curated English datasets. The benchmark comprises 4,346 documents, 14,190 images, and 4,800 QA pairs drawn from three domains: Web Data, Academic Papers, and Lifestyle Data. We believe it provides a robust evaluation framework that advances research in Multimodal Retrieval-Augmented Multimodal Generation (MRAMG).

Data Structure

The dataset consists of three major components: Documents, Multimodal QA pairs, and Images. Each component is structured across six different sub-datasets, ensuring a diverse and comprehensive collection of multimodal content.


1. Document Collection

The dataset includes six JSONL files, each corresponding to a different data source:

| File Name | Description | Num |
|---|---|---|
| doc_wit.jsonl | MRAMG-Wit documents | 639 |
| doc_wiki.jsonl | MRAMG-Wiki documents | 538 |
| doc_web.jsonl | MRAMG-Web documents | 1500 |
| doc_arxiv.jsonl | MRAMG-Arxiv documents | 101 |
| doc_recipe.jsonl | MRAMG-Recipe documents | 1528 |
| doc_manual.jsonl | MRAMG-Manual documents | 40 |
Field Definitions
  • id (int): Unique identifier for the document.
  • content (str): The main textual content of the document. If an image is referenced, <PIC> is used as a placeholder indicating its position in the text.
  • images_list (list[int]): A list of image IDs associated with the document.
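The `<PIC>` placeholders let a document's text be interleaved with its images in order. A minimal sketch of that pairing, using an illustrative record that follows the documented schema (the values are made up, not taken from the actual files):

```python
import json

# Illustrative record mirroring the documented doc_*.jsonl schema.
sample_line = json.dumps({
    "id": 1,
    "content": "The Eiffel Tower at night. <PIC> It was completed in 1889.",
    "images_list": [42],
})

doc = json.loads(sample_line)

# Splitting on <PIC> yields one more text segment than there are images,
# so segment i is followed by image i.
segments = doc["content"].split("<PIC>")
assert len(segments) == len(doc["images_list"]) + 1

interleaved = []
for i, seg in enumerate(segments):
    if seg.strip():
        interleaved.append(("text", seg.strip()))
    if i < len(doc["images_list"]):
        interleaved.append(("image", doc["images_list"][i]))

print(interleaved)
```

The same loop applies to any record read line-by-line from the JSONL files.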

2. Multimodal QA pairs

The multimodal QA (MQA) component consists of six JSONL files, each corresponding to a different dataset:

| File Name | Description | Num |
|---|---|---|
| wit_mqa.jsonl | MRAMG-Wit multimodal QA pairs | 600 |
| wiki_mqa.jsonl | MRAMG-Wiki multimodal QA pairs | 500 |
| web_mqa.jsonl | MRAMG-Web multimodal QA pairs | 750 |
| arxiv_mqa.jsonl | MRAMG-Arxiv multimodal QA pairs | 200 |
| recipe_mqa.jsonl | MRAMG-Recipe multimodal QA pairs | 2360 |
| manual_mqa.jsonl | MRAMG-Manual multimodal QA pairs | 390 |

Each entry contains a question ID, a question, provenance documents, a ground truth answer, and a list of image IDs associated with the answer.

Field Definitions

  • id (str): Unique identifier for the question.
  • question (str): The question text.
  • provenance (list[int]): A list of document IDs that serve as supporting evidence for the answer.
  • ground_truth (str): The correct answer, which may contain <PIC> placeholders indicating relevant images.
  • images_list (list[int]): A list of image IDs directly associated with the answer.
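Since every `<PIC>` placeholder in `ground_truth` should correspond to one entry in `images_list`, a simple integrity check can be run over each record. A sketch with an illustrative record (field names from the card; the values are invented):

```python
import json

# Illustrative record following the documented *_mqa.jsonl schema.
qa = json.loads(json.dumps({
    "id": "wit_0001",
    "question": "What does the Eiffel Tower look like at night?",
    "provenance": [1, 7],
    "ground_truth": "At night the tower is lit up. <PIC>",
    "images_list": [42],
}))

# One <PIC> placeholder is expected per associated image.
num_placeholders = qa["ground_truth"].count("<PIC>")
assert num_placeholders == len(qa["images_list"])

# Provenance entries are document IDs (ints) into the document collection.
assert all(isinstance(d, int) for d in qa["provenance"])

print(qa["id"], num_placeholders)
```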

3. Image Metadata

The dataset contains a collection of images stored under the directory:


IMAGE/images/

Additionally, metadata about these images is provided in six JSON files, corresponding to each dataset:

| File Name | Description | Num |
|---|---|---|
| wit_imgs_collection.json | Image metadata from MRAMG-Wit | 639 |
| wiki_imgs_collection.json | Image metadata from MRAMG-Wiki | 538 |
| web_imgs_collection.json | Image metadata from MRAMG-Web | 1500 |
| arxiv_imgs_collection.json | Image metadata from MRAMG-Arxiv | 337 |
| recipe_imgs_collection.json | Image metadata from MRAMG-Recipe | 8569 |
| manual_imgs_collection.json | Image metadata from MRAMG-Manual | 2607 |

Field Definitions

  • id (int): Unique identifier for the image.
  • image_url (str): The URL from which the image was originally sourced.
  • image_path (str): The filename of the image as stored in the dataset.
  • image_caption (str): A textual description or caption of the image.
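Because documents and answers reference images only by ID, the metadata files can be indexed by `id` to resolve those references to files on disk. A minimal sketch, assuming the `IMAGE/images/` layout above (the metadata entry shown is illustrative, not from the actual files):

```python
import os

# Illustrative metadata entry following the documented schema.
img_collection = [
    {"id": 42, "image_url": "https://example.org/a.jpg",
     "image_path": "42.jpg", "image_caption": "Eiffel Tower at night"},
]

# Index by id so an images_list entry resolves to a path and caption.
img_index = {img["id"]: img for img in img_collection}

def resolve_image(image_id, root="IMAGE/images"):
    """Return (disk path, caption) for a given image ID."""
    meta = img_index[image_id]
    return os.path.join(root, meta["image_path"]), meta["image_caption"]

path, caption = resolve_image(42)
print(path, "-", caption)
```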

Contact

If you have any questions or suggestions, please contact [email protected]

Citation Information

If you use this benchmark in your research, please cite the benchmark as follows:

@article{yu2025mramg,
  title={MRAMG-Bench: A BeyondText Benchmark for Multimodal Retrieval-Augmented Multimodal Generation},
  author={Yu, Qinhan and Xiao, Zhiyou and Li, Binghui and Wang, Zhengren and Chen, Chong and Zhang, Wentao},
  journal={arXiv preprint arXiv:2502.04176},
  year={2025}
}