
EvHumanMotion

Dataset on Hugging Face 🤗

EvHumanMotion is a real-world dataset of human motion captured with a DAVIS346 event camera under diverse and challenging conditions. It is designed to support event-driven human animation research, particularly under motion blur, low light, and overexposure. The dataset was introduced in the EvAnimate paper.

Dataset Structure

The dataset is organized into two main parts:

  • EvHumanMotion_aedat4: Raw event streams in .aedat4 format.
  • EvHumanMotion_frame: Frame-level event slices and RGB frames.

Each part is organized by recording environment:

  • indoor_day/
  • indoor_night_high_noise/
  • indoor_night_low_noise/
  • outdoor_day/
  • outdoor_night/

Each environment contains four scenarios:

  • low_light/
  • motion_blur/
  • normal/
  • over_exposure/

Example path:

EvHumanMotion_frame/indoor_day/low_light/dvSave-2025_03_04_13_02_53/
├── event_frames/
│   ├── events_0000013494.png
│   └── ...
└── frames/
    ├── frames_1741064573617350.png
    └── ...
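
Below is a minimal sketch for listing the event slices and RGB frames of one sequence, assuming a local copy of EvHumanMotion_frame and that the numeric parts of the file names sort chronologically; the sequence path is the example shown above.

from pathlib import Path

# Example sequence from the layout above; adjust to your local copy.
seq_dir = Path("EvHumanMotion_frame/indoor_day/low_light/dvSave-2025_03_04_13_02_53")

# Event slices and RGB frames, sorted by their numeric file names so that
# both lists are in temporal order.
event_frames = sorted((seq_dir / "event_frames").glob("events_*.png"))
rgb_frames = sorted((seq_dir / "frames").glob("frames_*.png"))

print(f"{len(event_frames)} event slices, {len(rgb_frames)} RGB frames")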

Features

  • Total sequences: 113
  • Participants: 20 (10 male, 10 female)
  • Duration: ~10 seconds per sequence
  • Frame rate: 24 fps
  • Scenarios: Normal, Motion Blur, Overexposure, Low Light
  • Modalities: RGB + Event data (both raw .aedat4 streams and frame-level slices; see the reading sketch below)
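
For the raw streams in EvHumanMotion_aedat4, one common way to read .aedat4 files in Python is the dv package (dv-python from iniVation). The sketch below assumes that package is installed and uses a hypothetical recording path; it is not part of the dataset itself.

import numpy as np
from dv import AedatFile

# Hypothetical path into EvHumanMotion_aedat4; point this at an actual recording.
path = "EvHumanMotion_aedat4/indoor_day/normal/dvSave-2025_03_04_13_02_53.aedat4"

with AedatFile(path) as f:
    # Concatenate all event packets into one structured array with
    # fields timestamp, x, y, and polarity.
    events = np.hstack([packet for packet in f["events"].numpy()])

print(events.shape, events.dtype.names)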

Applications

This dataset supports:

  • Event-to-video generation
  • Human animation in extreme conditions
  • Motion transfer with high temporal resolution

Usage

from datasets import load_dataset

# Load the dataset repository from the Hugging Face Hub.
dataset = load_dataset("potentialming/EvHumanMotion")
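
The loader above pulls whatever the Hub exposes through the datasets library. If you need the raw repository files directly (for example the .aedat4 streams or the PNG folders), one option is to download a full snapshot with huggingface_hub, as sketched below.

from huggingface_hub import snapshot_download

# Download the dataset repository (both EvHumanMotion_aedat4 and
# EvHumanMotion_frame) into the local Hugging Face cache.
local_dir = snapshot_download(
    repo_id="potentialming/EvHumanMotion",
    repo_type="dataset",
)
print(local_dir)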

License

Apache 2.0 License

Citation

If you use this dataset, please cite:

@article{qu2025evanimate,
  title={EvAnimate: Event-conditioned Image-to-Video Generation for Human Animation},
  author={Qu, Qiang and Li, Ming and Chen, Xiaoming and Liu, Tongliang},
  journal={arXiv preprint arXiv:2503.18552},
  year={2025}
}

Contact

Dataset maintained by Ming Li.
