
Multimodal-Textbook-6.5M

arXiv Project GitHub

Overview

This dataset accompanies "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining", and contains 6.5M images interleaved with 0.8B text tokens extracted from instructional videos.

  • It is a pre-training corpus in an interleaved image-text format. Specifically, our multimodal textbook includes 6.5M keyframes extracted from instructional videos, interleaved with 0.8B ASR text tokens.
  • All the images and text are extracted from online instructional videos (22,000 class hours), covering multiple fundamental subjects, e.g., mathematics, physics, and chemistry.
  • Our textbook corpus provides more coherent context and richer knowledge for image-text alignment.
  • Our code can be found in Multimodal-Textbook.

Note: We have uploaded the annotation file (./multimodal_textbook.json), which contains the processed ASR and OCR texts. The keyframes (./dataset_images_interval_7.tar.gz) are still being processed and uploaded due to their large size. For more details, please refer to Using Our Dataset.

Visualize Our Textbook

Due to the large size of the dataset (the complete textbook is 11GB of JSON files and 0.7TB of images), we sampled 100 examples together with their corresponding images and stored them in the example_data folder: ./example_data/textbook_sample_100.json.

Each sample is stored in dict format as follows:

[
{'images':  [keyframe1, None, keyframe2, None, keyframe3, None,.....],
 'texts':   [None,      asr1,  None,      asr2, None,     asr3,.....],
 'text_ocr_list':  [None, asr1+ocr1,  None, asr2+ocr2, None, asr3+ocr3,.....],
 'metadata': [...],
 'image_num': 15,
 'text_num': 425,
 'token_num': 9065},
 ....
]

Just like OBELICS, the "images" and "texts" are arranged in an interleaved way (see the sketch after this list):

  • The "images" list contains multiple keyframes and "None", where "None" means that the current position holds text.
  • The "texts" list contains multiple ASR texts; a "None" in the "texts" list means that the current position holds an image.
  • "text_ocr_list": in addition to the ASR text, "text_ocr_list" also includes the OCR text.
  • "image_num", "text_num", "token_num": respectively the number of images, the number of ASR text tokens, and the estimated total number of tokens in this sample.

To view our dataset more conveniently, we have written a Jupyter notebook: ./llava/dataset/show_interleaved_dataset.ipynb

cd example_data
jupyter notebook show_interleaved_dataset.ipynb

In the notebook, you can see keyframes interleaving with text.

Dataset Statistics

We utilize GPT-4o to synthesize a knowledge taxonomy with 3,915 knowledge points across 6 subjects, which enables us to automatically collect 159K English instructional videos based on this taxonomy.

Following our video-to-textbook pipeline, we filter out 53% of the videos as low-quality or repetitive and retain 75K videos (22,697 class hours) with an average duration of 18 minutes.

Then we extract 6.5M keyframes and 0.75B text (ASR+OCR) tokens from these videos. To enhance training efficiency, we concatenate multiple video clips into a single sample, producing a total of 610K interleaved samples. Each sample contains an average of 10.7 keyframes and 1,230 text tokens. The detailed statistics for each subject are shown as follows:

[Figure: detailed per-subject statistics of the textbook corpus]

Using Our Dataset

Dataset

We provide the JSON file and the corresponding image folder for the textbook:

  • Dataset JSON file: ./multimodal_textbook.json (610k samples ~ 11GB)
  • Dataset image folder: ./dataset_images_interval_7.tar.gz (6.5M images ~ 700GB) (due to its large size, it is still being processed and will be uploaded soon)
  • video_meta_data: video_meta_data/video_meta_data1.json and video_meta_data/video_meta_data2.json contain the meta information of the crawled videos, including the video vid, title, description, duration, language, and the searched knowledge points. multimodal_textbook_meta_data.json.zip records the textbook in its original format, not in the OBELICS format.

Each sample has approximately 10.7 images and 1927 text tokens. After you download and unzip the image folder, you need to replace each image path prefix in the JSON file (/mnt/workspace/zwq_data/interleaved_dataset/) with your own image folder path, as in the excerpt and the short sketch below.

"images": [
            "/mnt/workspace/zwq_data/interleaved_dataset/dataset_images_interval_7/-1uixJ1V-As/[email protected]_10.0#1.jpg",
            null,  
            "/mnt/workspace/zwq_data/interleaved_dataset/dataset_images_interval_7/-1uixJ1V-As/[email protected]_55.0#6.jpg",
            null,
            ......
        ],
        "texts": [
            null,
            " Hi everyone, and welcome to another lesson in our Eureka Tips for computers series.",
            null,
            " I'm actually trying to use the number line to find the sum for each. So to start I'm going to use the paint tool to demonstrate. Let's use the number line for four plus five. We're going to start at four then we're going to count up five. One two three four five. That equals nine. Now let's do three plus six for the next one.",
            ....
        ],
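
A minimal sketch of the prefix replacement (the old prefix is the one from the excerpt above; the new prefix is a hypothetical local path, and the whole ~11GB file is simply loaded into memory here):

import json

OLD_PREFIX = "/mnt/workspace/zwq_data/interleaved_dataset/"
NEW_PREFIX = "/your/local/path/interleaved_dataset/"  # hypothetical local path

with open("multimodal_textbook.json", "r") as f:
    samples = json.load(f)

for sample in samples:
    # Only the non-None entries of "images" are file paths.
    sample["images"] = [
        img.replace(OLD_PREFIX, NEW_PREFIX, 1) if img is not None else None
        for img in sample["images"]
    ]

with open("multimodal_textbook_local.json", "w") as f:
    json.dump(samples, f)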

Naming Format for Keyframes

For each keyframe, its naming format rule is:
video id@start-time_end-time#keyframe-number.jpg.
For example, the path and file name of a keyframe is
-1uixJ1V-As/-1uixJ1V-As@10.0_55.0#2.jpg.

This means that this image is extracted from the video -1uixJ1V-As; more specifically, it is the second keyframe (#2) in the video clip spanning 10.0 to 55.0 seconds. You can access the original video through https://www.youtube.com/watch?v=-1uixJ1V-As.
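
As a small illustration, this regex-based parser mirrors the naming rule above and rebuilds the YouTube URL from the video id (the function name is just for illustration):

import re

def parse_keyframe_name(filename):
    # video-id@start-time_end-time#keyframe-number.jpg
    m = re.match(r"(.+)@([\d.]+)_([\d.]+)#(\d+)\.jpg$", filename)
    if m is None:
        raise ValueError(f"unexpected keyframe name: {filename}")
    vid, start, end, idx = m.groups()
    return {
        "video_id": vid,
        "start_sec": float(start),
        "end_sec": float(end),
        "keyframe_index": int(idx),
        "video_url": f"https://www.youtube.com/watch?v={vid}",
    }

print(parse_keyframe_name("-1uixJ1V-As@10.0_55.0#2.jpg"))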

Metadata of Instructional Videos

The format of video_meta_data/video_meta_data1.json is as follows:

    {
        "file_path": xxx,
        "file_size (MB)": 85.54160022735596,
        "file_name": "-r7-s1z3lFY.mp4",
        "video_duration": 0,
        "unique": true,
        "asr_path": xxxx,
        "asr_len": 2990,
        "caption_path": xxx,
        "caption_len": 0,
        "search_keyword": "1.3B parameter size models comparison",
        "title": "DeepSeek Coder LLM | A Revolutionary Coder Model",
        "desc": "In this video, we are going to test out Deepseek Coder, a coding LLM.....,
        "llm_response": " The video appears to be a detailed and technical analysis of DeepSeek Coder LLM..... ###Score: 10###",
        "language": "en",
        "asr is repetive": false,
        "deepseek_score": 10,
        "llama_score": 2,
        "deepseek_score long context": 10
    },
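
Assuming the file is a JSON list of such records, a hedged sketch of loading the meta data and selecting English videos with a high deepseek_score could look like this (the threshold of 8 is an arbitrary illustration, not a value used in our pipeline):

import json

with open("video_meta_data/video_meta_data1.json", "r") as f:
    meta = json.load(f)

# Keep English videos that received a high quality score.
selected = [
    v for v in meta
    if v.get("language") == "en" and v.get("deepseek_score", 0) >= 8
]
print(f"{len(selected)} / {len(meta)} videos selected")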

In addition, multimodal_textbook_meta_data.json.zip records the textbook at the video-clip level: each video clip is stored as a dict. Each sample includes multiple consecutive video clips from the same long video; occasionally, one sample may also include video clips from different long videos. When a long video ends, a placeholder entry with 'vid': 'End of a Video' is stored (the sketch after the example below shows how to use this sentinel).

{'token_num': 1657,
 'conversations': [
    {
        'vid': video id-1,
        'clip_path': the path of the video clip,
        'asr': ASR transcribed from the audio,
        'extracted_frames': keyframe sequence extracted at fixed time intervals,
        'image_tokens': xxx,
        'token_num': xxx,
        'refined_asr': the refined version of the original ASR,
        'ocr_internvl_8b': OCR text obtained using internvl_8b,
        'ocr_image': the image the OCR text comes from,
        'ocr_internvl_8b_deduplicates': xxx,
        'keyframe_ssim': keyframe sequence extracted according to the SSIM algorithm,
        'asr_token_num': xxx,
        'ocr_qwen2_vl_72b': OCR text obtained using qwen2_vl_72b
   },
   {
        'vid': 'End of a Video',
        'clip_path': xxxx,
        'image_tokens': 0,
        'token_num': 0
   },
   {
        'vid': video id-2,
        'clip_path': the path of the video clip,
        'asr': ASR transcribed from the audio,
        'extracted_frames': keyframe sequence extracted at fixed time intervals,
        'image_tokens': xxx,
        'token_num': xxx,
        'refined_asr': the refined version of the original ASR,
        'ocr_internvl_8b': OCR text obtained using internvl_8b,
        'ocr_image': the image the OCR text comes from,
        'ocr_internvl_8b_deduplicates': xxx,
        'keyframe_ssim': keyframe sequence extracted according to the SSIM algorithm,
        'asr_token_num': xxx,
        'ocr_qwen2_vl_72b': OCR text obtained using qwen2_vl_72b
   },
    ....
   ]
}
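
For example, one sample's 'conversations' list can be split back into per-video groups of clips by using the 'End of a Video' sentinel (a minimal sketch; field names follow the example above):

def split_by_video(conversations):
    # Group consecutive clips of the same long video; a record whose 'vid'
    # equals 'End of a Video' marks the end of the current video.
    videos, current = [], []
    for clip in conversations:
        if clip.get("vid") == "End of a Video":
            if current:
                videos.append(current)
            current = []
        else:
            current.append(clip)
    if current:
        videos.append(current)
    return videos

# Usage: groups = split_by_video(sample["conversations"])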