jmonas committed on
Commit d71367a · verified · 1 Parent(s): 29063ad

Update README.md

Files changed (1)
  1. README.md +95 -58
README.md CHANGED
@@ -1,3 +1,4 @@
 
 ---
 license: apache-2.0
 pretty_name: 1X World Model Challenge Dataset
@@ -5,73 +6,83 @@ size_categories:
 - 10M<n<100M
 viewer: false
 ---
- Dataset for the [1X World Model Challenge](https://github.com/1x-technologies/1xgpt).
 
- Download with:
- ```
 huggingface-cli download 1x-technologies/worldmodel --repo-type dataset --local-dir data
 ```
 
- Changes from v1.1:
- - New train and val dataset of 100 hours, replacing the v1.1 datasets
- - Blur applied to faces
- - Shared a new raw video dataset under CC-BY-NC-SA 4.0: https://huggingface.co/datasets/1x-technologies/worldmodel_raw_data
- - Example scripts to decode Cosmos-tokenized bins (`cosmos_video_decoder.py`) and to load frame data (`unpack_data.py`)
 
- Contents of train/val_v2.0:
 
- The training dataset is sharded into 100 independent shards. The definitions are as follows:
 
- - **video_{shard}.bin**: 8x8x8 image patches at 30 Hz, with a 17-frame temporal window, encoded using the [NVIDIA Cosmos Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer) "Cosmos-Tokenizer-DV8x8x8".
- - **segment_idx_{shard}.bin**: Maps each frame `i` to its corresponding segment index. You may want to use this to separate non-contiguous frames from different videos (transitions).
- - **states_{shard}.bin**: State arrays (defined below in `Index-to-State Mapping`) stored in `np.float32` format. For frame `i`, the corresponding state is `states_{shard}[i]`.
- - **metadata**: The `metadata.json` file provides high-level information about the entire dataset, while the `metadata_{shard}.json` files contain specific details for each shard.
 
- #### Index-to-State Mapping (NEW)
- ```
- {
- 0: HIP_YAW
- 1: HIP_ROLL
- 2: HIP_PITCH
- 3: KNEE_PITCH
- 4: ANKLE_ROLL
- 5: ANKLE_PITCH
- 6: LEFT_SHOULDER_PITCH
- 7: LEFT_SHOULDER_ROLL
- 8: LEFT_SHOULDER_YAW
- 9: LEFT_ELBOW_PITCH
- 10: LEFT_ELBOW_YAW
- 11: LEFT_WRIST_PITCH
- 12: LEFT_WRIST_ROLL
- 13: RIGHT_SHOULDER_PITCH
- 14: RIGHT_SHOULDER_ROLL
- 15: RIGHT_SHOULDER_YAW
- 16: RIGHT_ELBOW_PITCH
- 17: RIGHT_ELBOW_YAW
- 18: RIGHT_WRIST_PITCH
- 19: RIGHT_WRIST_ROLL
- 20: NECK_PITCH
- 21: Left hand closure state (0 = open, 1 = closed)
- 22: Right hand closure state (0 = open, 1 = closed)
- 23: Linear Velocity
- 24: Angular Velocity
- }
 
- Previous version: v1.1
 
- - **magvit2.ckpt**: weights for the [MAGVIT2](https://github.com/TencentARC/Open-MAGVIT2) image tokenizer we used. We provide the encoder (tokenizer) and decoder (de-tokenizer) weights.
 
- Contents of train/val_v1.1:
- - **video.bin**: 16x16 image patches at 30 Hz; each patch is vector-quantized into 2^18 possible integer values. These can be decoded into 256x256 RGB images using the provided `magvit2.ckpt` weights.
- - **segment_ids.bin**: for each frame, `segment_ids[i]` uniquely points to the segment index that frame `i` came from. You may want to use this to separate non-contiguous frames from different videos (transitions).
- - **actions/**: a folder of action arrays stored in `np.float32` format. For frame `i`, the corresponding action is given by `joint_pos[i]`, `driving_command[i]`, `neck_desired[i]`, and so on. The shapes and definitions of the arrays are as follows (N is the number of frames):
-   - **joint_pos** `(N, 21)`: Joint positions. See `Index-to-Joint Mapping` below.
-   - **driving_command** `(N, 2)`: Linear and angular velocities.
-   - **neck_desired** `(N, 1)`: Desired neck pitch.
-   - **l_hand_closure** `(N, 1)`: Left hand closure state (0 = open, 1 = closed).
-   - **r_hand_closure** `(N, 1)`: Right hand closure state (0 = open, 1 = closed).
- #### Index-to-Joint Mapping (OLD)
 ```
 {
 0: HIP_YAW
@@ -97,8 +108,34 @@ Contents of train/val_v1.1:
 20: NECK_PITCH
 }
 
- ```
-
- We also provide a small `val_v1.1` data split containing held-out examples not seen in the training set, in case you want to try evaluating your model on held-out frames.
 ---
 license: apache-2.0
 pretty_name: 1X World Model Challenge Dataset
 - 10M<n<100M
 viewer: false
 ---
 
+ # 1X World Model Challenge Dataset
+
+ This repository hosts the dataset for the [1X World Model Challenge](https://github.com/1x-technologies/1xgpt).
+
+ ```bash
 huggingface-cli download 1x-technologies/worldmodel --repo-type dataset --local-dir data
 ```
 
+ ## Updates Since v1.1
 
+ - **Train/Val v2.0 (~100 hours)**, replacing v1.1
+ - **Test v2.0 dataset** for the Compression Challenge
+ - **Faces blurred** for privacy
+ - **New raw video dataset** (CC-BY-NC-SA 4.0) at [worldmodel_raw_data](https://huggingface.co/datasets/1x-technologies/worldmodel_raw_data)
+ - **Example scripts** now split into:
+   - `cosmos_video_decoder.py` — for decoding Cosmos-tokenized bins
+   - `unpack_data_test.py` — for reading the new test set
+   - `unpack_data_train_val.py` — for reading the train/val sets
 
+ ---
 
+ ## Train & Val v2.0
 
+ ### Format
+
+ Each split is sharded (a loading sketch follows the list):
+ - `video_{shard}.bin` — [NVIDIA Cosmos Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer) discrete DV8×8×8 tokens at 30 Hz
+ - `segment_idx_{shard}.bin` — segment boundaries
+ - `states_{shard}.bin` — `np.float32` states (see below)
+ - `metadata.json` / `metadata_{shard}.json` — overall vs. per-shard metadata
+
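+ A minimal loading sketch (NumPy only; not the official loader, see `unpack_data_train_val.py`). The 25-value per-frame layout and the raw `np.float32` storage follow the state table below; the integer dtype of `segment_idx_{shard}.bin` and the local paths are assumptions:
+
+ ```python
+ import numpy as np
+
+ shard = 0
+ base = "data/train_v2.0"  # hypothetical local path, matching the download command above
+
+ # one 25-dim float32 state vector per frame (indices defined in the table below)
+ states = np.fromfile(f"{base}/states_{shard}.bin", dtype=np.float32).reshape(-1, 25)
+
+ # per-frame segment index; the int32 dtype is an assumption, check metadata_{shard}.json
+ segment_idx = np.fromfile(f"{base}/segment_idx_{shard}.bin", dtype=np.int32)
+
+ print(states.shape, segment_idx.shape)  # (N, 25), (N,)
+ ```
+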
+ ### State Index Definition (New)
+ ```
+ 0: HIP_YAW
+ 1: HIP_ROLL
+ 2: HIP_PITCH
+ 3: KNEE_PITCH
+ 4: ANKLE_ROLL
+ 5: ANKLE_PITCH
+ 6: LEFT_SHOULDER_PITCH
+ 7: LEFT_SHOULDER_ROLL
+ 8: LEFT_SHOULDER_YAW
+ 9: LEFT_ELBOW_PITCH
+ 10: LEFT_ELBOW_YAW
+ 11: LEFT_WRIST_PITCH
+ 12: LEFT_WRIST_ROLL
+ 13: RIGHT_SHOULDER_PITCH
+ 14: RIGHT_SHOULDER_ROLL
+ 15: RIGHT_SHOULDER_YAW
+ 16: RIGHT_ELBOW_PITCH
+ 17: RIGHT_ELBOW_YAW
+ 18: RIGHT_WRIST_PITCH
+ 19: RIGHT_WRIST_ROLL
+ 20: NECK_PITCH
+ 21: Left hand closure (0 = open, 1 = closed)
+ 22: Right hand closure (0 = open, 1 = closed)
+ 23: Linear Velocity
+ 24: Angular Velocity
+ ```
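+
+ For example (illustrative only, not part of the provided scripts), a single frame of the `states` array from the sketch above splits as:
+
+ ```python
+ i = 0                                                   # any frame index
+ joint_positions = states[i, :21]                        # indices 0-20 in the table
+ left_hand, right_hand = states[i, 21], states[i, 22]    # 0 = open, 1 = closed
+ linear_vel, angular_vel = states[i, 23], states[i, 24]
+ ```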
+
+ ---
+
+ ## Test v2.0
 
+ We now provide a **test_v2.0** subset for the Compression World Model Challenge with a similar structure (`video_{shard}.bin`, `states_{shard}.bin`). Use:
+ - `unpack_data_test.py` to read the test set
+ - `unpack_data_train_val.py` to read train/val
+
+ ---
 
+ ## Previous v1.1
 
+ - `video.bin` 16×16 image patches at 30 Hz, quantized
+ - `segment_ids.bin` — segment boundaries
+ - `actions/` folder storing multiple `.bin`s for states, closures, etc. (see the sketch below)
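+
+ A rough v1.1 loading sketch, assuming each action array is stored as a raw `np.float32` file named after its key (the file names and path are assumptions; the shapes follow the v1.1 description, e.g. `joint_pos` is `(N, 21)`):
+
+ ```python
+ import numpy as np
+
+ actions_dir = "data/train_v1.1/actions"  # hypothetical local path
+
+ joint_pos       = np.fromfile(f"{actions_dir}/joint_pos.bin", dtype=np.float32).reshape(-1, 21)
+ driving_command = np.fromfile(f"{actions_dir}/driving_command.bin", dtype=np.float32).reshape(-1, 2)
+ l_hand_closure  = np.fromfile(f"{actions_dir}/l_hand_closure.bin", dtype=np.float32).reshape(-1, 1)
+ ```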
 
+ ### v1.1 Joint Index
 ```
 {
 0: HIP_YAW
 ...
 20: NECK_PITCH
 }
+ ```
 
+ A separate `val_v1.1` set is available.
 
+ ---
+
+ ## Provided Checkpoints
+
+ - `magvit2.ckpt` from [MAGVIT2](https://github.com/TencentARC/Open-MAGVIT2), used in v1.1
+ - For v2.0, see [NVIDIA Cosmos Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer); we supply `cosmos_video_decoder.py`.
+
+ ---
+
+ ## Directory Structure Example
+
+ ```
+ train_v1.1/
+ val_v1.1/
+ train_v2.0/
+ val_v2.0/
+ test_v2.0/
+ ├── video_{shard}.bin
+ ├── states_{shard}.bin
+ ├── ...
+ ├── metadata_{shard}.json
+ cosmos_video_decoder.py
+ unpack_data_test.py
+ unpack_data_train_val.py
+ ```
 
+ **License**: [Apache-2.0](./LICENSE)
+
+ **Author**: 1X Technologies