---
license: apache-2.0
pretty_name: 1X World Model Challenge Dataset
size_categories:
- 10M<n<100M
viewer: false
---

# 1X World Model Challenge Dataset

This repository hosts the dataset for the [1X World Model Challenge](https://github.com/1x-technologies/1xgpt). Download it with:

```bash
huggingface-cli download 1x-technologies/worldmodel --repo-type dataset --local-dir data
```

## Updates Since v1.1

- **Train/Val v2.0 (~100 hours)**, replacing the v1.1 datasets
- **Test v2.0 dataset** for the Compression Challenge
- **Faces blurred** for privacy
- **New raw video dataset** (CC-BY-NC-SA 4.0) at [worldmodel_raw_data](https://huggingface.co/datasets/1x-technologies/worldmodel_raw_data)
- **Example scripts**, now split into:
  - `cosmos_video_decoder.py` — for decoding Cosmos Tokenized bins
  - `unpack_data_test.py` — for reading the new test set
  - `unpack_data_train_val.py` — for reading the train/val sets

---

## Train & Val v2.0

### Format

Each split is sharded (a minimal loading sketch follows the list):

- `video_{shard}.bin` — [NVIDIA Cosmos Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer) discrete DV8×8×8 tokens at 30 Hz
- `segment_idx_{shard}.bin` — maps each frame `i` to its segment index; use it to separate non-contiguous frames from different videos (transitions)
- `states_{shard}.bin` — state arrays in `np.float32` (see the state index definition below); for frame `i`, the state is `states_{shard}[i]`
- `metadata.json` / `metadata_{shard}.json` — high-level information about the entire dataset vs. details for each shard
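
As a rough illustration, the shards can be read with plain `numpy`. The state width (25) follows the state index table below, but the shard naming and the `segment_idx` dtype here are assumptions to verify against `metadata_{shard}.json` and `unpack_data_train_val.py`:

```python
# Minimal sketch for reading one train/val shard with numpy.
# Assumed: shard 0 exists, states are 25-wide float32 (per the state
# index table below), and segment indices are int32 -- check the
# shard metadata before relying on either.
import numpy as np

STATE_DIM = 25  # indices 0-24, per the state index table

states = np.fromfile("data/train_v2.0/states_0.bin", dtype=np.float32)
states = states.reshape(-1, STATE_DIM)  # (N, 25): one state per frame
segment_idx = np.fromfile("data/train_v2.0/segment_idx_0.bin", dtype=np.int32)

# Split frames at segment boundaries so transitions between different
# source videos are never treated as consecutive frames.
cuts = np.where(np.diff(segment_idx) != 0)[0] + 1
segments = np.split(states, cuts)
print(f"{len(segments)} segments; first segment has {len(segments[0])} frames")
```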

### State Index Definition (New)

```
0: HIP_YAW
1: HIP_ROLL
2: HIP_PITCH
3: KNEE_PITCH
4: ANKLE_ROLL
5: ANKLE_PITCH
6: LEFT_SHOULDER_PITCH
7: LEFT_SHOULDER_ROLL
8: LEFT_SHOULDER_YAW
9: LEFT_ELBOW_PITCH
10: LEFT_ELBOW_YAW
11: LEFT_WRIST_PITCH
12: LEFT_WRIST_ROLL
13: RIGHT_SHOULDER_PITCH
14: RIGHT_SHOULDER_ROLL
15: RIGHT_SHOULDER_YAW
16: RIGHT_ELBOW_PITCH
17: RIGHT_ELBOW_YAW
18: RIGHT_WRIST_PITCH
19: RIGHT_WRIST_ROLL
20: NECK_PITCH
21: Left hand closure (0 = open, 1 = closed)
22: Right hand closure (0 = open, 1 = closed)
23: Linear velocity
24: Angular velocity
```
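
To make the table concrete, here is a hypothetical helper that slices one state vector into named fields. The slice boundaries come straight from the table above; the helper name and return layout are illustrative, not part of the dataset API:

```python
# Sketch: named views into one 25-wide np.float32 state vector.
# Field boundaries follow the state index table; the function itself
# is an illustration, not a provided utility.
import numpy as np

def split_state(state: np.ndarray) -> dict[str, np.ndarray]:
    assert state.shape == (25,)
    return {
        "joint_pos": state[0:21],        # indices 0-20: joint positions
        "hand_closure": state[21:23],    # 21: left, 22: right (0 open, 1 closed)
        "driving_command": state[23:25], # 23: linear, 24: angular velocity
    }
```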

---

## Test v2.0

We now provide a **test_v2.0** subset for the Compression World Model Challenge with a similar structure (`video_{shard}.bin`, `states_{shard}.bin`). Use:

- `unpack_data_test.py` to read the test set
- `unpack_data_train_val.py` to read train/val

---

## Previous v1.1

- `video.bin` — 16×16 image patches at 30 Hz, each patch vector-quantized into 2^18 possible integer values; these can be decoded into 256×256 RGB images using the provided `magvit2.ckpt` weights
- `segment_ids.bin` — `segment_ids[i]` uniquely identifies the segment that frame `i` came from; use it to separate non-contiguous frames from different videos (transitions)
- `actions/` — a folder of action arrays stored in `np.float32`; for frame `i`, the action is given by `joint_pos[i]`, `driving_command[i]`, `neck_desired[i]`, and so on (see the loading sketch after this list). Shapes, with N the number of frames:
  - `joint_pos` `(N, 21)` — joint positions (see the v1.1 joint index below)
  - `driving_command` `(N, 2)` — linear and angular velocities
  - `neck_desired` `(N, 1)` — desired neck pitch
  - `l_hand_closure` `(N, 1)` — left hand closure state (0 = open, 1 = closed)
  - `r_hand_closure` `(N, 1)` — right hand closure state (0 = open, 1 = closed)
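
The v1.1 arrays are raw binary as well. A minimal sketch, assuming the `.bin` filenames match the array names above and that `segment_ids` is int32 (verify both against the provided unpack scripts):

```python
# Sketch: read v1.1 action arrays; shapes follow the list above.
# Assumed: .bin filenames match the array names, segment_ids is int32.
import numpy as np

root = "data/train_v1.1"
joint_pos = np.fromfile(f"{root}/actions/joint_pos.bin", dtype=np.float32).reshape(-1, 21)
driving_command = np.fromfile(f"{root}/actions/driving_command.bin", dtype=np.float32).reshape(-1, 2)
segment_ids = np.fromfile(f"{root}/segment_ids.bin", dtype=np.int32)

# Frame i's action is row i of each array.
print(joint_pos.shape, driving_command.shape, segment_ids.shape)
```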

### v1.1 Joint Index

```
{
    0: HIP_YAW
    1: HIP_ROLL
    2: HIP_PITCH
    3: KNEE_PITCH
    4: ANKLE_ROLL
    5: ANKLE_PITCH
    6: LEFT_SHOULDER_PITCH
    7: LEFT_SHOULDER_ROLL
    8: LEFT_SHOULDER_YAW
    9: LEFT_ELBOW_PITCH
    10: LEFT_ELBOW_YAW
    11: LEFT_WRIST_PITCH
    12: LEFT_WRIST_ROLL
    13: RIGHT_SHOULDER_PITCH
    14: RIGHT_SHOULDER_ROLL
    15: RIGHT_SHOULDER_YAW
    16: RIGHT_ELBOW_PITCH
    17: RIGHT_ELBOW_YAW
    18: RIGHT_WRIST_PITCH
    19: RIGHT_WRIST_ROLL
    20: NECK_PITCH
}
```

A separate `val_v1.1` set is available.

---

## Provided Checkpoints

- `magvit2.ckpt` from [MAGVIT2](https://github.com/TencentARC/Open-MAGVIT2), used in v1.1
- For v2.0, see the [NVIDIA Cosmos Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer); we supply `cosmos_video_decoder.py` for decoding its tokenized bins

---

## Directory Structure Example

```
train_v1.1/
val_v1.1/
train_v2.0/
val_v2.0/
test_v2.0/
├── video_{shard}.bin
├── states_{shard}.bin
├── ...
├── metadata_{shard}.json
cosmos_video_decoder.py
unpack_data_test.py
unpack_data_train_val.py
```
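
To sanity-check a download, one can enumerate the per-shard metadata files in a split; a small sketch (the metadata keys are not documented here, so it only prints them):

```python
# Sketch: enumerate shards in one split via their metadata files.
import glob
import json
import os

split = "data/train_v2.0"  # any split directory from the tree above
for path in sorted(glob.glob(os.path.join(split, "metadata_*.json"))):
    with open(path) as f:
        meta = json.load(f)
    print(os.path.basename(path), "keys:", sorted(meta))  # peek at per-shard fields
```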

**License**: [Apache-2.0](./LICENSE)

**Author**: 1X Technologies