qiwen.zwq committed · Commit e1dec9f · 1 Parent(s): d03ae21
Files changed (1): README.md (+98 -1)
README.md CHANGED
---
license: apache-2.0
---

# Multimodal-Textbook
<img src="./src/logo.png" alt="Image" style="width: 900px;">

[![arXiv](https://img.shields.io/badge/arXiv-Paper-<COLOR>.svg)](https://arxiv.org/abs/2306.07209)
[![Project](https://img.shields.io/badge/Project-Website-blue.svg)](https://multi-modal-self-instruct.github.io)

## Overview

This repository is the official code for ["2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining"](https://arxiv.org/pdf/2306.07209). It contains the implementation for pre-training LLaVA on our multimodal textbook (an interleaved image-text corpus). Our dataset is available as a [Huggingface Dataset](https://huggingface.co/datasets/zwq2018/Multi-modal-Self-instruct).

- Multimodal Textbook is a high-quality **pre-training corpus** that encompasses a wealth of foundational knowledge, presented in an **image-text interleaved format**.
- The textbook is constructed from **2.5 years of instructional videos**, amounting to 22,000 class hours and covering six fundamental subjects, including mathematics and physics.
- In the multimodal textbook, text is transcribed from the audio and images are extracted from video keyframes; the two are closely aligned and provide a more coherent context.


<img src="./src/page_fig.png" alt="Image" style="width: 900px;">

## 🛠️ Installation

```
cd multimodal_textbook
# create and activate an environment
conda create -n interleaved_textbook python=3.10 -y
conda activate interleaved_textbook

# install packages
pip install --upgrade pip
pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu118
pip install -e .
pip install open_flamingo --no-deps
pip install flash-attn --no-build-isolation
```
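
As a quick, optional check that the pinned PyTorch build installed correctly and can see CUDA, you can run a minimal sketch like the one below (added here for convenience; it is not a script from the repository):

```
# optional environment check (not part of the repository)
import torch
import torchvision

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
```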


## Visualize Our Textbook

Due to the large size of the dataset (the complete textbook is about 13 GB of JSON files plus 0.7 TB of images), we sampled 100 examples, together with their corresponding images, and stored them in the `example_data` folder: `./example_data/textbook_sample_100.json`.

Each sample is stored in dict format as follows:
```
[
  {'images': [keyframe1, None, keyframe2, None, keyframe3, None, ...],
   'texts': [None, asr1, None, asr2, None, asr3, ...],
   'text_ocr_list': [None, asr1+ocr1, None, asr2+ocr2, None, asr3+ocr3, ...],
   'metadata': [...],
   'image_num': 15,
   'text_num': 425,
   'token_num': 9065},
  ...
]
```
Just like [OBELICS](https://github.com/huggingface/OBELICS), the "images" and "texts" are arranged in an interleaved format (see the loading sketch after this list):
- The "images" list contains multiple keyframes and "None" entries; a "None" means that the corresponding position holds text.
- The "texts" list contains multiple ASR text segments; a "None" in the "texts" list means that the corresponding position holds an image.
- "text_ocr_list": in addition to the ASR text, this list also includes the OCR text.
- "image_num", "text_num", "token_num": respectively, the number of images, the number of ASR text tokens, and the estimated total number of tokens in this sample.

To view our dataset more conveniently, we have written a Jupyter notebook: `./llava/dataset/show_interleaved_dataset.ipynb`

```
cd example_data
jupyter notebook show_interleaved_dataset.ipynb
```
In the notebook, you can see keyframes interleaved with the text.


## Data Preparation
- Training corpus: `multimodal_textbook.json` (11GB) + images folder (700GB)
- Benchmarks: OKVQA, TextVQA, ScienceQA, MathVista, MathVision, and MathVerse in `./playground/data/eval/`

We provide a `json` file and the corresponding images folder for a 100-sample textbook in the `example_data` folder, which is convenient for debugging. The full version of our dataset can be downloaded from our [Huggingface Dataset](https://huggingface.co/datasets/zwq2018/Multi-modal-Self-instruct); a download sketch is shown below.
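
If you prefer to fetch the full dataset programmatically, a minimal sketch using `huggingface_hub.snapshot_download` is shown below; the repo id is taken from the dataset link above, and `local_dir` is an arbitrary choice:

```
from huggingface_hub import snapshot_download

# download the full dataset snapshot from the Hugging Face Hub
snapshot_download(
    repo_id="zwq2018/Multi-modal-Self-instruct",  # repo id from the link above
    repo_type="dataset",
    local_dir="./data/multimodal_textbook",       # arbitrary local directory
)
```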


### Naming Format

Each keyframe is named according to the rule
`video id@start-time_end-time#keyframe-number.jpg`.
For example, the path and file name of a keyframe is
`-1uixJ1V-As/-1uixJ1V-As@10.0_55.0#2.jpg`.

This means that the image was extracted from the video `-1uixJ1V-As`; more specifically, it is the second keyframe (#2) of the clip spanning 10.0 to 55.0 seconds. You can access the original video at [https://www.youtube.com/watch?v=-1uixJ1V-As](https://www.youtube.com/watch?v=-1uixJ1V-As).
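
The naming rule is easy to parse programmatically. Below is a small sketch (a regular-expression parser written for this README, not a helper shipped with the repository) that recovers the video id, time span, and keyframe index from a file name like the one above:

```
import re

# pattern mirrors `video id@start-time_end-time#keyframe-number.jpg`
KEYFRAME_RE = re.compile(r"^(?P<video_id>.+)@(?P<start>[\d.]+)_(?P<end>[\d.]+)#(?P<index>\d+)\.jpg$")

def parse_keyframe_name(filename: str) -> dict:
    match = KEYFRAME_RE.match(filename)
    if match is None:
        raise ValueError(f"not a keyframe file name: {filename}")
    return {
        "video_id": match["video_id"],
        "start_sec": float(match["start"]),
        "end_sec": float(match["end"]),
        "keyframe_index": int(match["index"]),
        "youtube_url": f"https://www.youtube.com/watch?v={match['video_id']}",
    }

print(parse_keyframe_name("-1uixJ1V-As@10.0_55.0#2.jpg"))
# {'video_id': '-1uixJ1V-As', 'start_sec': 10.0, 'end_sec': 55.0, 'keyframe_index': 2, ...}
```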