---
language:
- en
license: apache-2.0
size_categories:
- 10M<n<100M
task_categories:
- reinforcement-learning
pretty_name: Procgen Benchmark Dataset
dataset_info:
- config_name: bigfish
  features:
  - name: observation
    dtype:
      array3_d:
        shape:
        - 64
        - 64
        - 3
        dtype: uint8
  - name: action
    dtype: uint8
  - name: reward
    dtype: float32
  - name: done
    dtype: bool
  - name: truncated
    dtype: bool
  splits:
  - name: train
    num_bytes: 260435250000
    num_examples: 9000000
  - name: test
    num_bytes: 28937250000
    num_examples: 1000000
  download_size: 63430147458
  dataset_size: 289372500000
- config_name: bossfight
  features:
  - name: observation
    dtype:
      array3_d:
        shape:
        - 64
        - 64
        - 3
        dtype: uint8
  - name: action
    dtype: uint8
  - name: reward
    dtype: float32
  - name: done
    dtype: bool
  - name: truncated
    dtype: bool
  splits:
  - name: train
    num_bytes: 260435250000
    num_examples: 9000000
  - name: test
    num_bytes: 28937250000
    num_examples: 1000000
  download_size: 106055802563
  dataset_size: 289372500000
- config_name: caveflyer
  features:
  - name: observation
    dtype:
      array3_d:
        shape:
        - 64
        - 64
        - 3
        dtype: uint8
  - name: action
    dtype: uint8
  - name: reward
    dtype: float32
  - name: done
    dtype: bool
  - name: truncated
    dtype: bool
  splits:
  - name: train
    num_bytes: 260435250000
    num_examples: 9000000
  - name: test
    num_bytes: 28937250000
    num_examples: 1000000
  download_size: 99550129733
  dataset_size: 289372500000
- config_name: chaser
  features:
  - name: observation
    dtype:
      array3_d:
        shape:
        - 64
        - 64
        - 3
        dtype: uint8
  - name: action
    dtype: uint8
  - name: reward
    dtype: float32
  - name: done
    dtype: bool
  - name: truncated
    dtype: bool
  splits:
  - name: train
    num_bytes: 260435250000
    num_examples: 9000000
  - name: test
    num_bytes: 28937250000
    num_examples: 1000000
  download_size: 42555126877
  dataset_size: 289372500000
- config_name: climber
  features:
  - name: observation
    dtype:
      array3_d:
        shape:
        - 64
        - 64
        - 3
        dtype: uint8
  - name: action
    dtype: uint8
  - name: reward
    dtype: float32
  - name: done
    dtype: bool
  - name: truncated
    dtype: bool
  splits:
  - name: train
    num_bytes: 260435250000
    num_examples: 9000000
  - name: test
    num_bytes: 28937250000
    num_examples: 1000000
  download_size: 42586772888
  dataset_size: 289372500000
- config_name: coinrun
  features:
  - name: observation
    dtype:
      array3_d:
        shape:
        - 64
        - 64
        - 3
        dtype: uint8
  - name: action
    dtype: uint8
  - name: reward
    dtype: float32
  - name: done
    dtype: bool
  - name: truncated
    dtype: bool
  splits:
  - name: train
    num_bytes: 260435250000
    num_examples: 9000000
  - name: test
    num_bytes: 28937250000
    num_examples: 1000000
  download_size: 51408569531
  dataset_size: 289372500000
- config_name: dodgeball
  features:
  - name: observation
    dtype:
      array3_d:
        shape:
        - 64
        - 64
        - 3
        dtype: uint8
  - name: action
    dtype: uint8
  - name: reward
    dtype: float32
  - name: done
    dtype: bool
  - name: truncated
    dtype: bool
  splits:
  - name: train
    num_bytes: 260435250000
    num_examples: 9000000
  - name: test
    num_bytes: 28937250000
    num_examples: 1000000
  download_size: 69558620534
  dataset_size: 289372500000
- config_name: fruitbot
  features:
  - name: observation
    dtype:
      array3_d:
        shape:
        - 64
        - 64
        - 3
        dtype: uint8
  - name: action
    dtype: uint8
  - name: reward
    dtype: float32
  - name: done
    dtype: bool
  - name: truncated
    dtype: bool
  splits:
  - name: train
    num_bytes: 260435250000
    num_examples: 9000000
  - name: test
    num_bytes: 28937250000
    num_examples: 1000000
  download_size: 184877608931
  dataset_size: 289372500000
- config_name: heist
  features:
  - name: observation
    dtype:
      array3_d:
        shape:
        - 64
        - 64
        - 3
        dtype: uint8
  - name: action
    dtype: uint8
  - name: reward
    dtype: float32
  - name: done
    dtype: bool
  - name: truncated
    dtype: bool
  splits:
  - name: train
    num_bytes: 260435250000
    num_examples: 9000000
  - name: test
    num_bytes: 28937250000
    num_examples: 1000000
  download_size: 49646772230
  dataset_size: 289372500000
- config_name: jumper
  features:
  - name: observation
    dtype:
      array3_d:
        shape:
        - 64
        - 64
        - 3
        dtype: uint8
  - name: action
    dtype: uint8
  - name: reward
    dtype: float32
  - name: done
    dtype: bool
  - name: truncated
    dtype: bool
  splits:
  - name: train
    num_bytes: 260435250000
    num_examples: 9000000
  - name: test
    num_bytes: 28937250000
    num_examples: 1000000
  download_size: 67531643053
  dataset_size: 289372500000
- config_name: leaper
  features:
  - name: observation
    dtype:
      array3_d:
        shape:
        - 64
        - 64
        - 3
        dtype: uint8
  - name: action
    dtype: uint8
  - name: reward
    dtype: float32
  - name: done
    dtype: bool
  - name: truncated
    dtype: bool
  splits:
  - name: train
    num_bytes: 260435250000
    num_examples: 9000000
  - name: test
    num_bytes: 28937250000
    num_examples: 1000000
  download_size: 44464909989
  dataset_size: 289372500000
- config_name: maze
  features:
  - name: observation
    dtype:
      array3_d:
        shape:
        - 64
        - 64
        - 3
        dtype: uint8
  - name: action
    dtype: uint8
  - name: reward
    dtype: float32
  - name: done
    dtype: bool
  - name: truncated
    dtype: bool
  splits:
  - name: train
    num_bytes: 260435250000
    num_examples: 9000000
  - name: test
    num_bytes: 28937250000
    num_examples: 1000000
  download_size: 50930919120
  dataset_size: 289372500000
- config_name: miner
  features:
  - name: observation
    dtype:
      array3_d:
        shape:
        - 64
        - 64
        - 3
        dtype: uint8
  - name: action
    dtype: uint8
  - name: reward
    dtype: float32
  - name: done
    dtype: bool
  - name: truncated
    dtype: bool
  splits:
  - name: train
    num_bytes: 260435250000
    num_examples: 9000000
  - name: test
    num_bytes: 28937250000
    num_examples: 1000000
  download_size: 38184188600
  dataset_size: 289372500000
- config_name: ninja
  features:
  - name: observation
    dtype:
      array3_d:
        shape:
        - 64
        - 64
        - 3
        dtype: uint8
  - name: action
    dtype: uint8
  - name: reward
    dtype: float32
  - name: done
    dtype: bool
  - name: truncated
    dtype: bool
  splits:
  - name: train
    num_bytes: 260435250000
    num_examples: 9000000
  - name: test
    num_bytes: 28937250000
    num_examples: 1000000
  download_size: 66644794325
  dataset_size: 289372500000
- config_name: plunder
  features:
  - name: observation
    dtype:
      array3_d:
        shape:
        - 64
        - 64
        - 3
        dtype: uint8
  - name: action
    dtype: uint8
  - name: reward
    dtype: float32
  - name: done
    dtype: bool
  - name: truncated
    dtype: bool
  splits:
  - name: train
    num_bytes: 260435250000
    num_examples: 9000000
  - name: test
    num_bytes: 28937250000
    num_examples: 1000000
  download_size: 68795928729
  dataset_size: 289372500000
- config_name: starpilot
  features:
  - name: observation
    dtype:
      array3_d:
        shape:
        - 64
        - 64
        - 3
        dtype: uint8
  - name: action
    dtype: uint8
  - name: reward
    dtype: float32
  - name: done
    dtype: bool
  - name: truncated
    dtype: bool
  splits:
  - name: train
    num_bytes: 260435250000
    num_examples: 9000000
  - name: test
    num_bytes: 28937250000
    num_examples: 1000000
  download_size: 170031712117
  dataset_size: 289372500000
configs:
- config_name: bigfish
  data_files:
  - split: train
    path: bigfish/train-*
  - split: test
    path: bigfish/test-*
- config_name: bossfight
  data_files:
  - split: train
    path: bossfight/train-*
  - split: test
    path: bossfight/test-*
- config_name: caveflyer
  data_files:
  - split: train
    path: caveflyer/train-*
  - split: test
    path: caveflyer/test-*
- config_name: chaser
  data_files:
  - split: train
    path: chaser/train-*
  - split: test
    path: chaser/test-*
- config_name: climber
  data_files:
  - split: train
    path: climber/train-*
  - split: test
    path: climber/test-*
- config_name: coinrun
  data_files:
  - split: train
    path: coinrun/train-*
  - split: test
    path: coinrun/test-*
- config_name: dodgeball
  data_files:
  - split: train
    path: dodgeball/train-*
  - split: test
    path: dodgeball/test-*
- config_name: fruitbot
  data_files:
  - split: train
    path: fruitbot/train-*
  - split: test
    path: fruitbot/test-*
- config_name: heist
  data_files:
  - split: train
    path: heist/train-*
  - split: test
    path: heist/test-*
- config_name: jumper
  data_files:
  - split: train
    path: jumper/train-*
  - split: test
    path: jumper/test-*
- config_name: leaper
  data_files:
  - split: train
    path: leaper/train-*
  - split: test
    path: leaper/test-*
- config_name: maze
  data_files:
  - split: train
    path: maze/train-*
  - split: test
    path: maze/test-*
- config_name: miner
  data_files:
  - split: train
    path: miner/train-*
  - split: test
    path: miner/test-*
- config_name: ninja
  data_files:
  - split: train
    path: ninja/train-*
  - split: test
    path: ninja/test-*
- config_name: plunder
  data_files:
  - split: train
    path: plunder/train-*
  - split: test
    path: plunder/test-*
- config_name: starpilot
  data_files:
  - split: train
    path: starpilot/train-*
  - split: test
    path: starpilot/test-*
tags:
- procgen
- bigfish
- benchmark
- openai
- bossfight
- caveflyer
- chaser
- climber
- dodgeball
- fruitbot
- heist
- jumper
- leaper
- maze
- miner
- ninja
- plunder
- starpilot
---

# Procgen Benchmark

This dataset contains expert trajectories generated by a [PPO](https://arxiv.org/abs/1707.06347) reinforcement learning agent trained on each of the 16 procedurally generated Gym environments from the [Procgen Benchmark](https://openai.com/index/procgen-benchmark/). The environments were created with `distribution_mode=easy` and an unlimited number of levels.

Disclaimer: This is not an official repository from OpenAI.

## Dataset Usage

Regular usage (for the environment bigfish):

```python
from datasets import load_dataset

train_dataset = load_dataset("EpicPinkPenguin/procgen", name="bigfish", split="train")
test_dataset = load_dataset("EpicPinkPenguin/procgen", name="bigfish", split="test")
```

Usage with PyTorch (for the environment bossfight):

```python
from datasets import load_dataset

train_dataset = load_dataset("EpicPinkPenguin/procgen", name="bossfight", split="train").with_format("torch")
test_dataset = load_dataset("EpicPinkPenguin/procgen", name="bossfight", split="test").with_format("torch")
```

## Agent Performance

The PPO RL agent was trained for 50M steps on each environment and obtained the following final performance metrics.
| Environment | Steps (Train) | Steps (Test) | Return | Observation |
|:------------|:--------------|:-------------|:-------|:------------|
| bigfish | 9,000,000 | 1,000,000 | 29.38 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/lHQXBqLdoWicXlt68I9QX.mp4"></video> |
| bossfight | 9,000,000 | 1,000,000 | 10.86 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/LPoafGi4YBWqqkuFlEN_l.mp4"></video> |
| caveflyer | 9,000,000 | 1,000,000 | 9.58 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/XVqRwu_9yfX4ECQc4At4G.mp4"></video> |
| chaser | 9,000,000 | 1,000,000 | 11.52 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/FIKVv48SThqiC1Z2PYQ7U.mp4"></video> |
| climber | 9,000,000 | 1,000,000 | 10.66 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/XJQlA7IyF9_gwUiw-FkND.mp4"></video> |
| coinrun | 9,000,000 | 1,000,000 | 9.84 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/Ucv3HZttewMRQzTL8r_Tw.mp4"></video> |
| dodgeball | 9,000,000 | 1,000,000 | 16.73 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/5HetbKuXBpO-v1jcVyLTU.mp4"></video> |
| fruitbot | 9,000,000 | 1,000,000 | 21.39 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/zKCyxXvauXjUac-5kEAWz.mp4"></video> |
| heist | 9,000,000 | 1,000,000 | 9.95 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/AdZ6XNmUN5_00BKd9BN8R.mp4"></video> |
| jumper | 9,000,000 | 1,000,000 | 8.76 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/s5k31gWK2Vc6Lp6QVzQXA.mp4"></video> |
| leaper | 9,000,000 | 1,000,000 | 7.50 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/_hDMocxjmzutc0t5FfoTX.mp4"></video> |
| maze | 9,000,000 | 1,000,000 | 9.99 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/uhNdDPuNhZpxVns91Ba-9.mp4"></video> |
| miner | 9,000,000 | 1,000,000 | 12.66 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/ElpJ8l2WHJGrprZ3-giHU.mp4"></video> |
| ninja | 9,000,000 | 1,000,000 | 9.61 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/b9i-fb2Twh8XmBBNf2DRG.mp4"></video> |
| plunder | 9,000,000 | 1,000,000 | 25.68 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/JPeGNOVzrotuYUjfzZj40.mp4"></video> |
| starpilot | 9,000,000 | 1,000,000 | 57.25 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/wY9lZgkw5tor19hCWmm6A.mp4"></video> |

## Dataset Structure

### Data Instances

Each data instance represents a single step and is a tuple of the form (observation, action, reward, done, truncated) = (o_t, a_t, r_{t+1}, done_{t+1}, trunc_{t+1}).

```json
{'action': 1,
 'done': False,
 'observation': [[[0, 166, 253], [0, 174, 255], [0, 170, 251], [0, 191, 255],
   [0, 191, 255], [0, 221, 255], [0, 243, 255], [0, 248, 255], [0, 243, 255],
   [10, 239, 255], [25, 255, 255], [0, 241, 255], [0, 235, 255], [17, 240, 255],
   [10, 243, 255], [27, 253, 255], [39, 255, 255], [58, 255, 255], [85, 255, 255],
   [111, 255, 255], [135, 255, 255], [151, 255, 255], [173, 255, 255],
   ...
   [0, 0, 37], [0, 0, 39]]],
 'reward': 0.0,
 'truncated': False}
```

### Data Fields

- `observation`: The current RGB observation from the environment.
- `action`: The action predicted by the agent for the current observation.
- `reward`: The reward received from stepping the environment with the current action.
- `done`: Whether the next observation is the start of a new episode. Obtained after stepping the environment with the current action.
- `truncated`: Whether the next observation is the start of a new episode due to truncation. Obtained after stepping the environment with the current action.

### Data Splits

The dataset is divided into a `train` (90%) and a `test` (10%) split. Each environment dataset contains 10M steps (data points) in total.

## Dataset Creation

The dataset was created by training an RL agent with [PPO](https://arxiv.org/abs/1707.06347) for 25M steps in each environment. The trajectories were generated by sampling from the predicted action distribution at each step (not by taking the argmax). The environments were created with `distribution_mode=easy` and an unlimited number of levels.

## Procgen Benchmark

The [Procgen Benchmark](https://openai.com/index/procgen-benchmark/), released by OpenAI, consists of 16 procedurally generated environments designed to measure how quickly reinforcement learning (RL) agents learn generalizable skills. It emphasizes experimental convenience and high diversity within and across environments, making it well suited for evaluating both sample efficiency and generalization. Because each environment supports distinct training and test level sets, the benchmark has become a standard research platform for the OpenAI RL team. It aims to address the need for more diverse RL benchmarks than complex single environments such as Dota and StarCraft.
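Because the `done`/`truncated` flags for step t describe the *next* observation, a step whose flag is `True` is the last step of its episode. The sketch below shows one way to group a flat stream of steps back into episodes under that convention. `split_episodes` and the synthetic transitions are illustrative helpers, not part of this dataset or the `datasets` library; the field names match the schema above (observations omitted for brevity).

```python
def split_episodes(steps):
    """Group (observation, action, reward, done, truncated) steps into episodes.

    Under this card's convention, done_{t+1}/trunc_{t+1} are stored with step t,
    so a step whose flag is True is the final step of its episode.
    """
    episodes, current = [], []
    for step in steps:
        current.append(step)
        if step["done"] or step["truncated"]:
            episodes.append(current)
            current = []
    if current:  # trailing partial episode with no terminal flag seen
        episodes.append(current)
    return episodes

# Synthetic 5-step stream (hypothetical values): an episode of length 3
# ending with done=True, then one of length 2 ending with truncated=True.
stream = [
    {"action": 1, "reward": 0.0, "done": False, "truncated": False},
    {"action": 4, "reward": 0.0, "done": False, "truncated": False},
    {"action": 2, "reward": 10.0, "done": True, "truncated": False},
    {"action": 0, "reward": 0.0, "done": False, "truncated": False},
    {"action": 3, "reward": 1.0, "done": False, "truncated": True},
]
episodes = split_episodes(stream)
print([len(ep) for ep in episodes])  # -> [3, 2]
```

The same function works on rows yielded by iterating a loaded split, since each row is a dict with these keys.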