---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: action
      dtype:
        class_label:
          names:
            '0': None
            '1': Waving
            '2': Pointing
            '3': Clapping
            '4': Follow
            '5': Walking
            '6': Stop
            '7': Turn
            '8': Jumping
            '9': Come here
            '10': Calm
    - name: camera
      dtype: int64
    - name: subject
      dtype: int64
    - name: idx
      dtype: int64
    - name: label
      dtype: string
    - name: link
      dtype: string
  splits:
    - name: train
    - name: val
license: mit
tags:
  - computer vision
  - machine learning
  - video understanding
  - classification
  - human-machine-interaction
  - human-robot-interaction
  - human-action-recognition
task_categories:
  - video-classification
language:
  - en
pretty_name: University of Technology Chemnitz - Human Robot Interaction Dataset
---

University of Technology Chemnitz, Germany
Department Robotics and Human Machine Interaction
Author: Robert Schulz

# TUC-HRI-CS Dataset Card

TUC-AR is an action recognition dataset containing 10(+1) action categories for human-machine interaction. This version contains the video sequences stored frame by frame as individual images.

We introduce two validation types: random validation and cross-subject validation. This is the cross-subject validation dataset. For random validation, please use https://huggingface.co/datasets/SchulzR97/TUC-HRI.

- In random validation, a train and a validation split are obtained by randomly splitting the sequences while maintaining an allocation rate of approximately 80% train / 20% validation. This ensures that each action, subject, and camera, as well as the overall number of sequences, are distributed in this ratio across the splits. Thus, we obtained 17,263 train sequences and 4,220 validation sequences.
- For cross-subject validation, subjects 0 and 8 were chosen as validation subjects; all other subjects were assigned to the train split. The sketch below shows how to verify this.
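
The cross-subject split can be checked directly from the metadata columns. A minimal sketch, assuming the Hugging Face `datasets` library is used to load this repository (split and column names follow the metadata above):

```python
from datasets import load_dataset

# Load both splits of this repository (downloads on first use).
ds = load_dataset('SchulzR97/TUC-HRI-CS')

# Subjects 0 and 8 should appear only in the validation split.
train_subjects = set(ds['train']['subject'])
val_subjects = set(ds['val']['subject'])

print('train subjects:', sorted(train_subjects))
print('val subjects:', sorted(val_subjects))  # expected: [0, 8]
assert train_subjects.isdisjoint(val_subjects)
```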

## Dataset Details

- RGB and depth input recorded by an Intel RealSense D435 depth camera
- 12 subjects
- 11,031 sequences (train 8,893 / val 2,138)
- 3 perspectives per scene
- 10(+1) action classes, listed in the table below (the sketch after the table shows how to decode the labels programmatically)
| Action | Label |
| ------ | ----- |
| A000 | None |
| A001 | Waving |
| A002 | Pointing |
| A003 | Clapping |
| A004 | Follow |
| A005 | Walking |
| A006 | Stop |
| A007 | Turn |
| A008 | Jumping |
| A009 | Come here |
| A010 | Calm |
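
Because `action` is declared as a `class_label` feature in the metadata, the mapping in the table above is machine-readable. A minimal sketch, assuming the dataset is loaded with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

ds = load_dataset('SchulzR97/TUC-HRI-CS', split='val')

# The ClassLabel feature carries the id <-> name mapping from the table above.
action = ds.features['action']
print(action.num_classes)           # 11
print(action.int2str(1))            # 'Waving'
print(action.str2int('Come here'))  # 9
```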

## How to Use this Dataset

1. Install the RSProduction Machine Learning package (PyPI, GitHub):

```bash
pip install rsp-ml
```

2. Use the HF dataset with `rsp.ml.dataset.TUCHRI`:
```python
from rsp.ml.dataset import TUCHRI
import rsp.ml.multi_transforms as multi_transforms
import torchvision
import torchvision.transforms as transforms
import numpy as np
import torch
from PIL import Image

USE_DEPTH_DATA = True

class ToNumpy:
    def __call__(self, x):
        if isinstance(x, Image.Image):
            return np.array(x)
        elif isinstance(x, torch.Tensor):
            return x.permute(1, 2, 0).numpy()  # Tensor (C, H, W) -> (H, W, C)
        else:
            raise TypeError("Input must be a PIL.Image or torch.Tensor")

# Augmentations applied to the background textures before they are used
transform = transforms.Compose([
    transforms.Resize((600, 600)),
    transforms.ColorJitter(brightness=0.8, contrast=0.8, saturation=0.8, hue=0.5),
    transforms.RandomRotation(180, expand=True),
    transforms.CenterCrop((375, 500)),
    ToNumpy()
])

# DTD texture images serve as the replacement backgrounds
# (`root` is where torchvision stores the download)
dtd_dataset = torchvision.datasets.DTD(root='data', download=True, split='val', transform=transform)
backgrounds = dtd_dataset

transforms_train = multi_transforms.Compose([
    multi_transforms.ReplaceBackground(
        backgrounds=backgrounds,
        hsv_filter=[(69, 87, 139, 255, 52, 255)],
        p=0.8
    ),
    multi_transforms.Resize((400, 400), auto_crop=False),
    multi_transforms.Color(0.1, p=0.2),
    multi_transforms.Brightness(0.7, 1.3),
    multi_transforms.Satturation(0.7, 1.3),
    multi_transforms.RandomHorizontalFlip(),
    multi_transforms.GaussianNoise(0.002),
    multi_transforms.Rotate(max_angle=3),
    multi_transforms.Stack()
])

transforms_val = multi_transforms.Compose([
    multi_transforms.Resize((400, 400), auto_crop=False),
    multi_transforms.Stack()
])

ds_train = TUCHRI(
    phase='train',
    load_depth_data=USE_DEPTH_DATA,
    sequence_length=30,
    num_classes=11,
    transforms=transforms_train
)

ds_val = TUCHRI(
    phase='val',
    load_depth_data=USE_DEPTH_DATA,
    sequence_length=30,
    num_classes=11,
    transforms=transforms_val
)
```
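
The resulting datasets can be wrapped in a standard PyTorch `DataLoader` for training. A minimal usage sketch; the tensor shapes shown in the comments are assumptions derived from the parameters above (sequence length 30, RGB + depth channels, 11 classes), not guarantees of this card:

```python
from torch.utils.data import DataLoader

train_loader = DataLoader(ds_train, batch_size=4, shuffle=True)

# Assumed shapes: X -> (batch, sequence_length, channels, height, width),
# e.g. (4, 30, 4, 400, 400) with depth data; T -> (4, 11) target vectors.
X, T = next(iter(train_loader))
print(X.shape, T.shape)
```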

## Dataset Card Contact

If you have any questions about the dataset preprocessing and preparation, please contact TUC RHMi.