
Dataset Card for Minecraft Expert Skill Data

This dataset consists of expert demonstration trajectories from a Minecraft simulation environment. Each trajectory includes ground-truth skill segmentation annotations, enabling research into action segmentation, skill discovery, imitation learning, and reinforcement learning with temporally structured data.

Dataset Details

Dataset Description

The Minecraft Skill Segmentation Dataset contains gameplay trajectories from an expert policy in the Minecraft environment. Each trajectory is labeled with ground-truth skill boundaries and skill identifiers, allowing users to train and evaluate models for temporal segmentation, behavior cloning, and skill-based representation learning.

  • Curated by: dami2106
  • License: Apache 2.0
  • Language(s) (NLP): Not applicable (code/visual)

Dataset Sources

Uses

Direct Use

This dataset is designed for use in:

  • Training models to segment long-horizon behaviors into reusable skills.
  • Evaluating action segmentation or hierarchical RL approaches (see the evaluation sketch after this list).
  • Studying object-centric or spatially grounded RL methods.
  • Pretraining representations from visual expert data.
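
For the segmentation use case, the per-frame ground-truth labels can be scored against model predictions with standard temporal segmentation metrics. The snippet below is a minimal sketch, not part of the dataset tooling; it assumes two frame-aligned label sequences like the groundTruth field described under Dataset Structure.

from typing import List

def frame_accuracy(pred: List[str], gt: List[str]) -> float:
    """Fraction of frames whose predicted skill label matches the ground truth."""
    assert len(pred) == len(gt), "prediction and ground truth must be frame-aligned"
    return sum(p == g for p, g in zip(pred, gt)) / len(gt)

def to_segments(labels: List[str]) -> List[str]:
    """Collapse a per-frame label sequence into its ordered segment labels."""
    return [lab for i, lab in enumerate(labels) if i == 0 or lab != labels[i - 1]]

def segmental_edit_score(pred: List[str], gt: List[str]) -> float:
    """Normalized Levenshtein similarity between segment label sequences (1.0 is best)."""
    p, g = to_segments(pred), to_segments(gt)
    dist = [[0] * (len(g) + 1) for _ in range(len(p) + 1)]
    for i in range(len(p) + 1):
        dist[i][0] = i
    for j in range(len(g) + 1):
        dist[0][j] = j
    for i in range(1, len(p) + 1):
        for j in range(1, len(g) + 1):
            cost = 0 if p[i - 1] == g[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1, dist[i][j - 1] + 1, dist[i - 1][j - 1] + cost)
    return 1.0 - dist[len(p)][len(g)] / max(len(p), len(g))

# Toy frame-level label sequences:
gt = ["chop_tree"] * 5 + ["craft_table"] * 3
pred = ["chop_tree"] * 4 + ["craft_table"] * 4
print(frame_accuracy(pred, gt), segmental_edit_score(pred, gt))  # 0.875 1.0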

Out-of-Scope Use

  • Language-based tasks (no natural language data is included).
  • Real-world robotics (simulation-only data).
  • Tasks that require raw image pixels, if pixel observations are not included in your copy of the data.

Dataset Structure

Each data file includes:

  • A sequence of states (e.g., pixel POV observations, PCA features).
  • Skill labels marking where each skill begins and ends.

Example structure:

{
  "pixel_obs": [...],         // Raw visual observations (e.g., RGB frames)
  "pca_features": [...],      // Compressed feature vectors (e.g., from CNN or ResNet)
  "groundTruth": [...],       // Ground-truth skill segmentation labels
  "mapping": {                // Mapping metadata for skill ID -> groundTruth label
    "0": "chop_tree",
    "1": "craft_table",
    "2": "mine_stone",
    ...
  }
}
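
As a concrete illustration, the snippet below loads one trajectory and recovers contiguous (skill, start, end) segments from the per-frame labels. It is a minimal sketch that assumes each trajectory is stored as a JSON file with the fields shown above; the file name is a placeholder.

import json
from collections import Counter

# Placeholder path; point this at an actual trajectory file from the dataset.
with open("episode_000.json") as f:
    episode = json.load(f)

labels = episode["groundTruth"]   # one skill label per timestep
mapping = episode["mapping"]      # skill ID -> human-readable label

# Recover contiguous (skill, start, end) segments from the per-frame labels.
segments = []
start = 0
for t in range(1, len(labels) + 1):
    if t == len(labels) or labels[t] != labels[start]:
        segments.append((labels[start], start, t - 1))
        start = t

print(f"{len(labels)} frames, {len(segments)} segments")
print("Frames per skill:", Counter(labels))
print("First segments:", segments[:3])
print("Skill vocabulary:", mapping)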

Dataset Creation

Curation Rationale

This dataset was created to support research in skill discovery and temporal abstraction in complex, open-ended environments like Minecraft. The environment supports high-level goals and diverse interactions, making it suitable for testing generalizable skills.

Source Data

Data Collection and Processing

  • Expert trajectories were generated using a scripted or trained policy within the Minecraft simulation.
  • Skill labels were added based on environment signals (e.g., changes to inventory, task completions, block state transitions) and verified using heuristics.
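
The exact labeling rules are internal to the data-generation pipeline. The sketch below only illustrates the general idea of turning inventory changes into per-frame skill labels; the ITEM_TO_SKILL rules and the label_steps helper are hypothetical, not the pipeline's actual code.

# Hypothetical rules mapping an inventory gain to a skill label.
ITEM_TO_SKILL = {
    "log": "chop_tree",
    "planks": "craft_planks",
    "crafting_table": "craft_table",
    "cobblestone": "mine_stone",
}

def label_steps(inventories):
    """Label each timestep with the skill producing the next recognized inventory gain.

    `inventories` is a list of {item: count} dicts, one per timestep.
    """
    labels = [None] * len(inventories)
    # Mark the timesteps where a recognized item count increases.
    for t in range(1, len(inventories)):
        prev, curr = inventories[t - 1], inventories[t]
        gained = (item for item, c in curr.items() if c > prev.get(item, 0))
        labels[t] = next((ITEM_TO_SKILL[i] for i in gained if i in ITEM_TO_SKILL), None)
    # Fill backwards so the frames leading up to each event share its label.
    next_label = "unknown"
    for t in range(len(labels) - 1, -1, -1):
        if labels[t] is None:
            labels[t] = next_label
        else:
            next_label = labels[t]
    return labels

inv = [{"log": 0}, {"log": 1}, {"log": 1, "planks": 4}]
print(label_steps(inv))  # ['chop_tree', 'chop_tree', 'craft_planks']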

Who are the source data producers?

The data was generated programmatically in the Minecraft simulation environment by expert agents using scripted or learned behavior policies.

Annotations

Annotation process

Skill annotations were derived from internal game state events and heuristics related to player intent and task segmentation. Manual inspection was performed to ensure consistency across trajectories.

Who are the annotators?

Annotations were produced by automated rule-based systems, with developer oversight during dataset development.

Bias, Risks, and Limitations

  • The dataset is derived from simulation, so models and findings based on it may not generalize to real-world robotics or broader RL environments.
  • Skill definitions depend on domain-specific heuristics, which may not reflect all valid strategies.
  • Expert strategies may be biased toward specific pathways (e.g., speedrunning logic).

Recommendations

Researchers should evaluate the robustness of learned skills across diverse environments and initial conditions. The segmentations are heuristic approximations of task structure and should be interpreted within the constraints of the simulation.
