---
license: mit
task_categories:
  - video-text-to-text
  - robotics
---

# Magma: A Foundation Model for Multimodal AI Agents

Jianwei Yang*1  Reuben Tan1  Qianhui Wu1  Ruijie Zheng2  Baolin Peng1  Yongyuan Liang2

Yu Gu1  Mu Cai3  Seonghyeon Ye4  Joel Jang5  Yuquan Deng5  Lars Liden1  Jianfeng Gao1

1 Microsoft Research; 2 University of Maryland; 3 University of Wisconsin-Madison
4 KAIST; 5 University of Washington

\* Project lead

[arXiv Paper]   [Project Page]   [Hugging Face Paper]   [Github Repo]   [Video]

## Introduction

This dataset contains the video data used for Trace-of-Mark (ToM) supervision in Magma pretraining.

The dataset is organized by source dataset, with each source folder containing one or more arrow shards:

| Folder         | Number of Shards |
|----------------|------------------|
| ego4d          | 15               |
| sthv2          | 6                |
| instruct_video | 14               |
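
To check how the shards are actually laid out in the repository, here is a minimal sketch using the standard `huggingface_hub.list_repo_files` API; the folder-per-source layout above is the only assumption:

```python
from huggingface_hub import list_repo_files

# List every file in the dataset repo and group arrow shards by top-level folder.
files = list_repo_files("MagmaAI/Magma-Video-ToM", repo_type="dataset")
shards = {}
for path in files:
    if path.endswith(".arrow"):
        shards.setdefault(path.split("/")[0], []).append(path)

for folder, paths in sorted(shards.items()):
    print(f"{folder}: {len(paths)} shard(s)")
```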

## Features

In addition to the default features, we extracted the visual trace over the 16 future frames for each frame. The dataset contains the following fields:

- `dataset_name`: name of the original source dataset
- `video_name`: name of the source video
- `task_string`: description of the task
- `start_time`: starting timestamp of the video segment
- `end_time`: ending timestamp of the video segment
- `frame_index`: starting index of the frame in the video segment
- `height`: resized image height used for visual trace extraction
- `width`: resized image width used for visual trace extraction
- `trace`: visual trace of tracked points (serialized NumPy array)
- `trace_visibility`: visibility mask for the trace (serialized NumPy array)

## Dataset Loading

### Full Dataset Load

```python
from datasets import load_dataset

dataset = load_dataset("MagmaAI/Magma-Video-ToM", streaming=True, split="train")
```

### Individual Dataset Load

Or load a single source folder by passing `data_dir`:

```python
from datasets import load_dataset

dataset = load_dataset("MagmaAI/Magma-Video-ToM", data_dir="sthv2", streaming=True, split="train")
```

## Sample Decoding

```python
import io
import pickle

from PIL import Image

# Helper function to deserialize binary fields
def deserialize_array(bytes_data):
    return pickle.loads(bytes_data)

# Helper function to convert binary image data to a PIL Image
def bytes_to_image(image_bytes):
    return Image.open(io.BytesIO(image_bytes))

for i, example in enumerate(dataset):
    # decode trace: 1 x 16 x 256 x 2
    trace = deserialize_array(example['trace'])
    # decode trace visibility: 1 x 16 x 256 x 1
    trace_visibility = deserialize_array(example['trace_visibility'])
```
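
Once deserialized, `trace` and `trace_visibility` are plain NumPy arrays, so standard array operations apply. A minimal sketch continuing from the loop above, assuming the 1 x 16 x 256 x 2 and 1 x 16 x 256 x 1 shapes noted in the comments:

```python
import numpy as np

trace = np.asarray(trace)                    # (1, 16, 256, 2): (x, y) per point per frame
visible = np.asarray(trace_visibility) > 0   # (1, 16, 256, 1): True where a point is visible

# Per-frame displacement of each tracked point.
displacement = np.diff(trace, axis=1)        # (1, 15, 256, 2)

# Fraction of visible points in each frame.
visible_frac = visible.mean(axis=(0, 2, 3))  # (16,)
print(displacement.shape, visible_frac.shape)
```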

NOTE: the temporal length of traces for video data is 16, since the starting frame is excluded. For all robotics data it is 17, since the starting frame is included.
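
Given this difference, it is safer for downstream code to read the temporal length from the decoded array than to hard-code it:

```python
# Works for both video traces (T = 16) and robotics traces (T = 17).
T = trace.shape[1]
```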