## 🛠️ Installation

```
cd multimodal_textbook

# create and activate an environment
conda create -n interleaved_textbook python=3.10 -y
conda activate interleaved_textbook

# install packages
pip install --upgrade pip
pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu118
pip install -e .
pip install open_flamingo --no-deps
pip install flash-attn --no-build-isolation
```

[![arXiv](https://img.shields.io/badge/arXiv-Paper-red.svg)](https://arxiv.org/abs/2306.07209)
[![Project](https://img.shields.io/badge/Project-Website-blue.svg)](https://multi-modal-self-instruct.github.io)
[![GitHub](https://img.shields.io/badge/GitHub-Code-181717?logo=github)](https://github.com/DAMO-NLP-SG/multimodal_textbook/tree/master)
## Overview
This dataset is for the paper ["2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining"](https://arxiv.org/pdf/2306.07209).

- It is a **pre-training corpus in an interleaved image-text format**. Specifically, our multimodal textbook includes **6.5M keyframes** extracted from instructional videos, interleaved with **0.8B ASR text tokens**.
- All the images and text are extracted from online instructional videos (22,000 class hours), covering multiple fundamental subjects, e.g., mathematics, physics, and chemistry.
- Our textbook corpus provides more coherent context and richer knowledge for image-text alignment.
- Our code can be found at [Multimodal-Textbook](https://huggingface.co/datasets/zwq2018/Multi-modal-Self-instruct).

<img src="./src/page_fig.png" alt="Image" style="width: 900px;">
## Visualize Our Textbook

Due to the large size of the dataset (the complete textbook is 11GB of JSON files and 0.7TB of images), we sampled 100 examples, together with their corresponding images, and stored them in the `example_data` folder: `./example_data/textbook_sample_100.json`.

Each sample is stored in dict format as follows:
```
...
```

In the notebook, you can see keyframes interleaving with text.

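Each sample keeps two position-aligned lists, `images` and `texts`: at every position one list holds content and the other holds `null` (see the JSON excerpt in the `Using Our Dataset` section). As a minimal sketch of walking one sample in reading order — the helper name and the toy sample below are illustrative, not part of the dataset tooling:

```python
def iter_interleaved(sample):
    """Yield ('image', path) or ('text', sentence) in reading order."""
    for img, txt in zip(sample["images"], sample["texts"]):
        if img is not None:
            yield ("image", img)
        if txt is not None:
            yield ("text", txt)

# Toy sample mirroring the dataset's aligned-lists structure.
sample = {
    "images": ["frames/clip1#1.jpg", None],
    "texts": [None, "Hi everyone, and welcome."],
}
events = list(iter_interleaved(sample))
# events: [('image', 'frames/clip1#1.jpg'), ('text', 'Hi everyone, and welcome.')]
```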
## Using Our Dataset

We provide the JSON file and the corresponding image folder for the textbook:

- JSON file: `multimodal_textbook.json` (610k samples, ~11GB)
- image folder: `dataset_images_interval_7.tar.gz` (6.5M images, ~700GB)

Each sample has approximately 10.7 images and 1,927 text tokens. After you download and unzip the image folder, you need to replace each image path prefix in the JSON file (`/mnt/workspace/zwq_data/interleaved_dataset/`) with your own local image folder path.

```
"images": [
    "/mnt/workspace/zwq_data/interleaved_dataset/dataset_images_interval_7/-1uixJ1V-As/[email protected]_10.0#1.jpg",
    null,
    "/mnt/workspace/zwq_data/interleaved_dataset/dataset_images_interval_7/-1uixJ1V-As/[email protected]_55.0#6.jpg",
    null,
    ......
],
"texts": [
    null,
    " Hi everyone, and welcome to another lesson in our Eureka Tips for computers series.",
    null,
    " I'm actually trying to use the number line to find the sum for each. So to start I'm going to use the paint tool to demonstrate. Let's use the number line for four plus five. We're going to start at four then we're going to count up five. One two three four five. That equals nine. Now let's do three plus six for the next one.",
    ....
],
```
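As a minimal sketch of the path replacement described above — the function name, the new prefix, and the demo sample are hypothetical, and it assumes the JSON file is a list of dicts with an `images` field as shown:

```python
def relocalize_images(samples, old_prefix, new_prefix):
    """Replace the leading image-path prefix in every sample, keeping null slots."""
    for sample in samples:
        sample["images"] = [
            path.replace(old_prefix, new_prefix, 1) if path is not None else None
            for path in sample["images"]
        ]
    return samples

# Hypothetical demo: one sample with one image path and one null slot.
demo = [{"images": ["/mnt/workspace/zwq_data/interleaved_dataset/x/y.jpg", None]}]
out = relocalize_images(demo, "/mnt/workspace/zwq_data/interleaved_dataset/", "/data/textbook/")
# out[0]["images"][0] == "/data/textbook/x/y.jpg"
```

In practice you would `json.load` the file `multimodal_textbook.json`, run this once over the loaded list, and `json.dump` the result.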

### Naming Format for Keyframes

For each keyframe, the naming rule is
`video id@start-time_end-time#keyframe-number.jpg`.
For example, the path and file name of a keyframe is
`-1uixJ1V-As/[email protected]_55.0#2.jpg`.

This means that this image is extracted from the video (`-1uixJ1V-As`), more specifically, it is the second keyframe (#2) in the video clip from 10.0 to 55.0 seconds. You can access the original video through [https://www.youtube.com/watch?v=-1uixJ1V-As](https://www.youtube.com/watch?v=-1uixJ1V-As).