---
license: cc-by-nc-4.0
viewer: false
---

# Nymeria Dataset
[[Project Page]](https://www.projectaria.com/datasets/nymeria/) [[Data Explorer]](https://explorer.projectaria.com/nymeria) [[Code]](https://github.com/facebookresearch/nymeria_dataset) [[Paper]](https://arxiv.org/abs/2406.09905)

<div style="display: flex; justify-content: space-around;">
  <img src="assets/teaser1.gif" width="47%" alt="Nymeria dataset teaser with 100 random samples" />
  <img src="assets/teaser2.gif" width="47%" alt="Nymeria dataset teaser with 100 random samples" />
</div>
Nymeria is the world's largest dataset of human motion in the wild, capturing diverse people performing diverse activities across diverse locations. It is the first of its kind to record body motion using multiple egocentric multimodal devices, all accurately synchronized and localized in one metric 3D world. Nymeria is also the world's largest motion dataset with natural language descriptions. The dataset is designed to accelerate research in egocentric human motion understanding and presents exciting challenges to advance contextualized computing and future AR/VR technology.

## Dataset Summary
The Nymeria dataset records over **300 hours** of human motion across **1200 sequences**. It captures the rich diversity of everyday activities from **264 participants** performing **20** unscripted scenarios across **50** indoor and outdoor locations. During data capture, participants wear an inertial [motion capture suit](https://www.movella.com/products/motion-capture/xsens-mvn-link) that provides ground-truth kinematic body motion at **240 Hz**, which is retargeted onto a linear blend skinning (LBS) human model using **[Meta Momentum](https://github.com/facebookincubator/momentum/)**. Participants also wear a pair of Project Aria glasses and two Project Aria-like wristbands to collect multimodal egocentric data, including videos, IMUs, magnetometer, barometer, audio and eye tracking. Each sequence also includes an observer, who wears a pair of Project Aria glasses to record the scene from a third-person perspective. All devices are precisely synchronized via a hardware solution and localized in one metric 3D world. Collectively, the traveling distances of the participant headsets and wristbands are approximately **400 km** and **1053 km**, respectively.
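Because every device shares one clock and one metric world, streams can be cross-referenced by timestamp. As a rough, generic illustration (this is not the dataset's actual API; the rates, durations, and array names below are hypothetical), nearest-neighbor lookup aligns 240 Hz body poses with 30 Hz video frames:

```python
import numpy as np

# Hypothetical timelines on a shared clock, in nanoseconds; the real dataset
# ships its own loaders, so this only sketches the alignment idea.
mocap_t = np.arange(0, int(10e9), int(1e9 / 240))  # 240 Hz body poses over 10 s
video_t = np.arange(0, int(10e9), int(1e9 / 30))   # 30 Hz video frames over 10 s

# For each video frame, pick the index of the nearest mocap sample.
idx = np.clip(np.searchsorted(mocap_t, video_t), 1, len(mocap_t) - 1)
take_prev = (video_t - mocap_t[idx - 1]) < (mocap_t[idx] - video_t)
idx = np.where(take_prev, idx - 1, idx)  # idx[k]: pose sample aligned to frame k
```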
To further connect human motion with natural language, human annotators review a playback rendering of the scene and motion, and narrate the motion coarse-to-fine, covering detail-oriented motion narration, simplified atomic actions and high-level activity summarization. In total, the dataset provides **310.5K sentences** and **8.64M words**, with a vocabulary size of **6545**.

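As a purely illustrative sketch of what the three narration granularities convey (the field names and values below are invented for illustration, not the released annotation schema), one narrated moment might look like:

```python
# Invented example record; the actual Nymeria annotation format may differ.
narration = {
    "activity_summary": "The participant tidies up the living room.",  # high level
    "atomic_actions": [                                                # simplified
        {"t_start_s": 12.0, "t_end_s": 15.5, "text": "picks up a cushion"},
        {"t_start_s": 15.5, "t_end_s": 19.0, "text": "places it on the sofa"},
    ],
    "motion_narration": [                                              # detail-oriented
        {"t_start_s": 12.0, "t_end_s": 13.2,
         "text": "bends at the waist and reaches down with the right hand"},
    ],
}
```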
For more details, please visit the [project page](https://www.projectaria.com/datasets/nymeria/), browse the dataset [online](https://explorer.projectaria.com/nymeria), explore the [GitHub repository](https://github.com/facebookresearch/nymeria_dataset), and read the [ECCV 2024 paper](https://arxiv.org/abs/2406.09905).

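For programmatic access to the files in this repository, a standard `huggingface_hub` snapshot download should work, assuming your account has accepted the dataset's terms (the repository may be gated, and the full dataset is large, so narrowing the file patterns is advisable):

```python
from huggingface_hub import snapshot_download

# Download a subset of this dataset repo into the local Hugging Face cache.
# Access may require `huggingface-cli login` and accepting the license terms.
local_dir = snapshot_download(
    repo_id="projectaria/Nymeria",
    repo_type="dataset",
    allow_patterns=["README.md", "assets/*"],  # illustrative subset, not the full data
)
print(local_dir)
```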
## Citation
When using the Nymeria dataset and code, please attribute it as follows:
```bibtex
@inproceedings{ma24eccv,
  title={Nymeria: A Massive Collection of Multimodal Egocentric Daily Motion in the Wild},
  author={Lingni Ma and Yuting Ye and Fangzhou Hong and Vladimir Guzov and Yifeng Jiang and Rowan Postyeni and Luis Pesqueira and Alexander Gamino and Vijay Baiyya and Hyo Jin Kim and Kevin Bailey and David Soriano Fosas and C. Karen Liu and Ziwei Liu and Jakob Engel and Renzo De Nardi and Richard Newcombe},
  booktitle={the 18th European Conference on Computer Vision (ECCV)},
  year={2024},
  url={https://arxiv.org/abs/2406.09905},
}
```

## License
The Nymeria dataset and code are released by Meta under the Creative Commons Attribution-NonCommercial 4.0 International License ([CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/legalcode)). Data and code may not be used for commercial purposes.

## Contributors
Lingni Ma ([@summericequeen](https://huggingface.co/summericequeen)), Yuting Ye, Fangzhou Hong, Vladimir Guzov, Yifeng Jiang, Rowan Postyeni, Luis Pesqueira, Alexander Gamino, Vijay Baiyya, Hyo Jin Kim, Kevin Bailey, David Soriano Fosas, C. Karen Liu, Ziwei Liu, Jakob Engel, Renzo De Nardi, Richard Newcombe