1x-technologies/GENIE_138M
Dataset for the 1X World Model Challenge.
Download with:

```shell
huggingface-cli download 1x-technologies/worldmodel --repo-type dataset --local-dir data
```
Current version: v1.1
Contents of train/val_v1.1:

- magvit2.ckpt: MAGVIT2 tokenizer weights.
- segment_ids: segment_ids[i] uniquely points to the segment index that frame i came from. You may want to use this to separate non-contiguous frames from different videos (transitions).
- Actions, stored in np.float32 format. For frame i, the corresponding action is given by joint_pos[i], driving_command[i], neck_desired[i], and so on. The shapes and definitions of the arrays are as follows (N is the number of frames):
  - joint_pos (N, 21): Joint positions. See Index-to-Joint Mapping below.
  - driving_command (N, 2): Linear and angular velocities.
  - neck_desired (N, 1): Desired neck pitch.
  - Left hand closure (N, 1): 0 = open, 1 = closed.
  - Right hand closure (N, 1): 0 = open, 1 = closed.

Index-to-Joint Mapping:

{
0: HIP_YAW
1: HIP_ROLL
2: HIP_PITCH
3: KNEE_PITCH
4: ANKLE_ROLL
5: ANKLE_PITCH
6: LEFT_SHOULDER_PITCH
7: LEFT_SHOULDER_ROLL
8: LEFT_SHOULDER_YAW
9: LEFT_ELBOW_PITCH
10: LEFT_ELBOW_YAW
11: LEFT_WRIST_PITCH
12: LEFT_WRIST_ROLL
13: RIGHT_SHOULDER_PITCH
14: RIGHT_SHOULDER_ROLL
15: RIGHT_SHOULDER_YAW
16: RIGHT_ELBOW_PITCH
17: RIGHT_ELBOW_YAW
18: RIGHT_WRIST_PITCH
19: RIGHT_WRIST_ROLL
20: NECK_PITCH
}
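For convenience, the mapping above can be written down as a Python dict. This is a sketch: only the indices and joint names come from the table above; the `joint_dict` helper is hypothetical and just names the 21 columns of a `joint_pos` row.

```python
import numpy as np

# Index-to-Joint Mapping, transcribed from the table above.
JOINT_NAMES = {
    0: "HIP_YAW", 1: "HIP_ROLL", 2: "HIP_PITCH", 3: "KNEE_PITCH",
    4: "ANKLE_ROLL", 5: "ANKLE_PITCH",
    6: "LEFT_SHOULDER_PITCH", 7: "LEFT_SHOULDER_ROLL", 8: "LEFT_SHOULDER_YAW",
    9: "LEFT_ELBOW_PITCH", 10: "LEFT_ELBOW_YAW",
    11: "LEFT_WRIST_PITCH", 12: "LEFT_WRIST_ROLL",
    13: "RIGHT_SHOULDER_PITCH", 14: "RIGHT_SHOULDER_ROLL", 15: "RIGHT_SHOULDER_YAW",
    16: "RIGHT_ELBOW_PITCH", 17: "RIGHT_ELBOW_YAW",
    18: "RIGHT_WRIST_PITCH", 19: "RIGHT_WRIST_ROLL",
    20: "NECK_PITCH",
}

def joint_dict(joint_pos_row: np.ndarray) -> dict:
    """Map one (21,)-shaped row of joint_pos to {joint_name: value}."""
    assert joint_pos_row.shape == (21,), "expected a single frame's joint vector"
    return {JOINT_NAMES[i]: float(v) for i, v in enumerate(joint_pos_row)}
```

For example, `joint_dict(joint_pos[i])["NECK_PITCH"]` gives frame i's neck pitch by name instead of by index 20.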
We also provide a small val_v1.1 split containing held-out examples not seen during training, in case you want to evaluate your model on unseen frames.
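As noted above, segment_ids can be used to keep frames from different videos apart. A minimal sketch of that split, assuming segment_ids has already been loaded as a 1-D integer array (the on-disk layout is not shown here, so the loading step is left to you):

```python
import numpy as np

def split_by_segment(segment_ids: np.ndarray) -> list:
    """Return one array of frame indices per contiguous segment,
    so a model never spans a transition between two videos."""
    # A boundary occurs wherever the segment id changes between frames.
    boundaries = np.where(np.diff(segment_ids) != 0)[0] + 1
    return np.split(np.arange(len(segment_ids)), boundaries)

# Toy example: three segments of lengths 3, 2, and 4.
ids = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2])
chunks = split_by_segment(ids)
print([len(c) for c in chunks])  # [3, 2, 4]
```

Each returned chunk can then index video tokens and the action arrays (joint_pos, driving_command, ...) consistently, since all are aligned per frame.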