---
license: apache-2.0
---

Multimodal-Textbook


Overview

This repository is the official code for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining". It contains the implementation for pre-training LLaVA on our multimodal textbook (an interleaved image-text corpus). Our dataset is available as a Hugging Face Dataset.

  • Multimodal Textbook is a high-quality pre-training corpus that encompasses a wealth of foundational knowledge, which is presented in an image-text interleaved format.
  • This textbook is constructed from 2.5 years of instructional videos, amounting to 22,000 class hours, covering six fundamental subjects, including mathematics, physics, and others.
  • In the multimodal textbook, text is transcribed from the audio and images are extracted from video keyframes. The two are closely aligned and provide a more coherent context.

🛠️ Installation

cd multimodal_textbook
# create and activate an environment
conda create -n interleaved_textbook python=3.10 -y
conda activate interleaved_textbook

# install package
pip install --upgrade pip  
pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu118  
pip install -e .
pip install open_flamingo --no-deps
pip install flash-attn --no-build-isolation
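
After installation, an optional sanity check such as the one below (a minimal sketch, not part of the repository) confirms that PyTorch sees CUDA and that the extra packages import correctly:

# Optional post-install sanity check.
import torch
import open_flamingo   # installed above with --no-deps
import flash_attn

print("torch:", torch.__version__)                  # expected: 2.1.2+cu118
print("CUDA available:", torch.cuda.is_available())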

Visualize Our Textbook

Due to the large size of the dataset (the complete textbook is 13 GB of JSON files and 0.7 TB of images), we selected 100 samples together with their corresponding images and stored them in the example_data folder: ./example_data/textbook_sample_100.json.

Each sample is stored in dict format as follows:

[
{'images':  [keyframe1, None, keyframe2, None, keyframe3, None,.....],
 'texts':   [None,      asr1,  None,      asr2, None,     asr3,.....],
 'text_ocr_list':  [None, asr1+ocr1,  None, asr2+ocr2, None, asr3+ocr3,.....],
 'metadata': [...],
 'image_num': 15,
 'text_num': 425,
 'token_num': 9065},
 ....
]

Just like OBELICS, the "images" and "texts" lists are interleaved:

  • "Images" list contains multiple keyframes and "None", where "None" represents that the current position is text.
  • "texts" list contain multiple asr text. The position of "None" in "texts" list is image.
  • "text_ocr_list": In addition to asr text, "text_ocr_list" also includes OCR text.
  • "image_num", "text_num", "token_num": respectively represent the number of images, the number of asr text tokens, and the estimated total number of tokens in this sample.

To view our dataset more conveniently, we provide a Jupyter notebook: ./llava/dataset/show_interleaved_dataset.ipynb

cd example_data
show_interleaved_dataset.ipynb

In the notebook, you can see the keyframes interleaved with the text.

Data Preparation

  • Training Corpus: multimodal_textbook.json (11GB) + images folder (700GB)
  • Benchmarks: OKVQA, TextVQA, ScienceQA, MathVista, MathVision, MathVerse in ./playground/data/eval/

We provide a JSON file and the corresponding images folder for a 100-sample textbook subset in the example_data folder, which is convenient for debugging. The full version of our dataset can be downloaded from our Hugging Face Dataset.
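
For reference, the full dataset can also be fetched programmatically with huggingface_hub. This is only a sketch; the dataset id below is a placeholder and should be replaced with the id shown on the Hugging Face dataset page:

from huggingface_hub import snapshot_download

# Download the full textbook (JSON + images) into a local folder.
# NOTE: "<org>/multimodal_textbook" is a placeholder dataset id.
snapshot_download(
    repo_id="<org>/multimodal_textbook",
    repo_type="dataset",
    local_dir="./multimodal_textbook_data",
)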

Naming Format

For each keyframe, its naming format rule is:
video id@start-time_end-time#keyframe-number.jpg.
For example, the path and file name of a keyframe is
-1uixJ1V-As/-1uixJ1V-As@10.0_55.0#2.jpg.

This means that the image was extracted from the video -1uixJ1V-As; more specifically, it is the second keyframe (#2) of the clip spanning 10.0 to 55.0 seconds. You can access the original video at https://www.youtube.com/watch?v=-1uixJ1V-As.
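
As an illustration, the naming rule can be parsed programmatically. The helper below is only a sketch (parse_keyframe_name is not part of this repository):

import re
from pathlib import Path

# Illustrative helper: split a keyframe filename of the form
# "<video_id>@<start>_<end>#<index>.jpg" into its components.
def parse_keyframe_name(path):
    name = Path(path).name
    m = re.match(r"^(?P<vid>.+)@(?P<start>[\d.]+)_(?P<end>[\d.]+)#(?P<idx>\d+)\.jpg$", name)
    if m is None:
        raise ValueError(f"unexpected keyframe name: {name}")
    return {
        "video_id": m.group("vid"),
        "start_sec": float(m.group("start")),
        "end_sec": float(m.group("end")),
        "keyframe_index": int(m.group("idx")),
        "youtube_url": "https://www.youtube.com/watch?v=" + m.group("vid"),
    }

print(parse_keyframe_name("-1uixJ1V-As/-1uixJ1V-As@10.0_55.0#2.jpg"))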