Upload README.md with huggingface_hub
README.md
|
13 |
- 1M<n<10M
|
14 |
---
|
15 |
|
16 |
+
# Multimodal-Textbook-6.5M
|
17 |
<img src="./src/logo.png" alt="Image" style="width: 900px;">
|
18 |
|
19 |
[![arXiv](https://img.shields.io/badge/arXiv-Paper-<COLOR>.svg)](https://arxiv.org/abs/2306.07209) [![Project](https://img.shields.io/badge/Project-Website-blue.svg)](https://multi-modal-self-instruct.github.io) [![GitHub](https://img.shields.io/badge/GitHub-Code-181717?logo=github)](https://github.com/DAMO-NLP-SG/multimodal_textbook/tree/master)
|

## Overview

This dataset is for ["2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining"](https://arxiv.org/pdf/2306.07209), containing 6.5M images interleaved with 0.8B text tokens from instructional videos.

- It is a **pre-training corpus in interleaved image-text format**. Specifically, our multimodal textbook includes **6.5M keyframes** extracted from instructional videos, interleaved with 0.8B **ASR text tokens**.
- All images and text are extracted from online instructional videos (22,000 class hours), covering multiple fundamental subjects, e.g., mathematics, physics, and chemistry.
- Our textbook corpus provides more coherent context and richer knowledge for image-text alignment.

In the notebook `show_interleaved_dataset.ipynb`, you can see keyframes interleaved with text.
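
As a rough sketch of what that walkthrough looks like in code (assuming, for illustration, that `multimodal_textbook.json` follows an OBELICS-style layout with parallel `images` and `texts` lists per sample; check the notebook for the exact field names):

```python
import json

# Load the interleaved samples (path assumed relative to the dataset folder).
with open("multimodal_textbook.json", "r") as f:
    samples = json.load(f)

sample = samples[0]
# Assumed OBELICS-style schema: "images" and "texts" are aligned lists where
# exactly one of the two entries is non-null at each position.
for image, text in zip(sample["images"], sample["texts"]):
    if image is not None:
        print(f"[keyframe] {image}")
    else:
        print(f"[asr] {text[:80]}")
```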

## Dataset Statistics

## Using Our Dataset

### Dataset

We provide the json file and the corresponding image folder for the textbook (a download sketch follows the list):

- Dataset json file: `./multimodal_textbook.json` (610k samples, ~11 GB)
- Dataset image folder: `./dataset_images_interval_7.tar.gz` (6.5M images, ~700 GB)
- videometa_data: `video_meta_data/video_meta_data1.json` and `video_meta_data/video_meta_data2.json` contain the meta information of the crawled videos, including the video vid, title, description, duration, language, and searched knowledge points. `multimodal_textbook_meta_data.json.zip` records the textbook in its original format, not in the OBELICS format.
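
These files can be fetched with `huggingface_hub`. The snippet below is a minimal sketch; the `repo_id` is an assumption (borrowed from the project's GitHub organization), so substitute the actual dataset id on the Hub:

```python
from huggingface_hub import snapshot_download

# Download the dataset repository (json file, image tarball, video meta data).
# NOTE: repo_id is an assumption; replace it with the real dataset id.
local_dir = snapshot_download(
    repo_id="DAMO-NLP-SG/multimodal_textbook",
    repo_type="dataset",
    local_dir="./multimodal_textbook",
)
print("Downloaded to:", local_dir)
```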

Each sample has approximately 10.7 images and 1927 text tokens. After you download and unzip the image folder, you need to replace each image path prefix in the json file (`/mnt/workspace/zwq_data/interleaved_dataset/`) with your own image folder path, for example as sketched below.
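
A minimal sketch of that path rewrite (again assuming an OBELICS-style `images` field; adapt the key name and paths to your setup):

```python
import json

OLD_PREFIX = "/mnt/workspace/zwq_data/interleaved_dataset/"
NEW_PREFIX = "/path/to/your/dataset_images_interval_7/"  # hypothetical local folder

with open("multimodal_textbook.json", "r") as f:
    samples = json.load(f)

for sample in samples:
    # Assumed schema: "images" is a list of image paths (None marks text-only slots).
    sample["images"] = [
        img.replace(OLD_PREFIX, NEW_PREFIX) if img is not None else None
        for img in sample["images"]
    ]

with open("multimodal_textbook_local.json", "w") as f:
    json.dump(samples, f)
```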

### Naming Format for keyframe

For each keyframe, its naming format rule is:

### MetaData of Instructional Video

The format of `video_meta_data/video_meta_data1.json` is as follows:

```
{
    "file_path": xxx,
    "file_size (MB)": 85.54160022735596,
    "file_name": "-r7-s1z3lFY.mp4",
    "video_duration": 0,
    "unique": true,
    "asr_path": xxxx,
    "asr_len": 2990,
    "caption_path": xxx,
    "caption_len": 0,
    "search_keyword": "1.3B parameter size models comparison",
    "title": "DeepSeek Coder LLM | A Revolutionary Coder Model",
    "desc": "In this video, we are going to test out Deepseek Coder, a coding LLM.....",
    "llm_response": " The video appears to be a detailed and technical analysis of DeepSeek Coder LLM..... ###Score: 10###",
    "language": "en",
    "asr is repetive": false,
    "deepseek_score": 10,
    "llama_score": 2,
    "deepseek_score long context": 10
},
```
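
As an illustration, the meta data can be loaded and filtered like this (a sketch that assumes the file stores a list of such records; the language filter and score threshold are arbitrary choices):

```python
import json

# Load the first shard of the video meta data.
with open("video_meta_data/video_meta_data1.json", "r") as f:
    meta = json.load(f)

# Keep English videos with a high DeepSeek-based quality score
# (the threshold of 8 is only for illustration).
selected = [
    m for m in meta
    if m.get("language") == "en" and m.get("deepseek_score", 0) >= 8
]
print(f"kept {len(selected)} of {len(meta)} videos")
print(selected[0]["title"], "|", selected[0]["search_keyword"])
```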

In addition, `multimodal_textbook_meta_data.json.zip` records the textbook in its original format, as follows. Each video clip is stored as a dict, and each sample includes multiple video clips:

```
{'token_num': 1657,
 'conversations': [
    {
      'vid': video id-1,
      'clip_path': the path of the video clip,
      'asr': ASR transcribed from the audio,
      'extracted_frames': keyframe sequence extracted at fixed time intervals,
      'image_tokens': xxx,
      'token_num': xxx,
      'refined_asr': the original ASR after refinement,
      'ocr_internvl_8b': OCR obtained using internvl_8b,
      'ocr_image': the image the OCR comes from,
      'ocr_internvl_8b_deduplicates': xxx,
      'keyframe_ssim': keyframe sequence extracted with the SSIM algorithm,
      'asr_token_num': xxx,
      'ocr_qwen2_vl_72b': OCR obtained using qwen2_vl_72b
    },
    {
      'vid': 'End of a Video',
      'clip_path': xxxx,
      'image_tokens': 0,
      'token_num': 0
    },
    {
      'vid': video id-2,
      'clip_path': the path of the video clip,
      'asr': ASR transcribed from the audio,
      'extracted_frames': keyframe sequence extracted at fixed time intervals,
      'image_tokens': xxx,
      'token_num': xxx,
      'refined_asr': the original ASR after refinement,
      'ocr_internvl_8b': OCR obtained using internvl_8b,
      'ocr_image': the image the OCR comes from,
      'ocr_internvl_8b_deduplicates': xxx,
      'keyframe_ssim': keyframe sequence extracted with the SSIM algorithm,
      'asr_token_num': xxx,
      'ocr_qwen2_vl_72b': OCR obtained using qwen2_vl_72b
    },
    ....
 ]
}
```
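
For example, one can walk over a sample's clips and skip the 'End of a Video' separators (a sketch that assumes the zip has been extracted to `multimodal_textbook_meta_data.json` and that the top level is a list of samples):

```python
import json

# Assumes the zip archive has already been extracted next to this script.
with open("multimodal_textbook_meta_data.json", "r") as f:
    textbook = json.load(f)

sample = textbook[0]
print("text tokens in sample:", sample["token_num"])

for clip in sample["conversations"]:
    # 'End of a Video' entries only mark video boundaries; skip them.
    if clip["vid"] == "End of a Video":
        continue
    # extracted_frames is assumed to be a list of keyframe paths.
    frames = clip.get("extracted_frames", [])
    print(clip["vid"], "| image tokens:", clip["image_tokens"], "| keyframes:", len(frames))
```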