jarski committed on
Commit
e1e165a
1 Parent(s): a9067fe

Upload folder using huggingface_hub

.summary/0/events.out.tfevents.1725186806.a2f4d27d95e5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2c0f81be4ddd2400cffb66fb0a147f2285b94ec9b33fee60d624bdb36284c415
+ size 2545
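The ADDED files in this commit are stored as Git LFS pointers rather than raw bytes: a pointer records the spec version, a sha256 object id, and the payload size in bytes. As an aside, such a pointer is trivial to parse; this is a minimal sketch (the `parse_lfs_pointer` helper is ours, not part of any library, and the pointer text is taken verbatim from the diff above):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of a Git LFS pointer file into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:2c0f81be4ddd2400cffb66fb0a147f2285b94ec9b33fee60d624bdb36284c415
size 2545"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 2545
```

The actual TensorBoard event file contents live in LFS object storage; only this small pointer is versioned in git.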
README.md CHANGED
@@ -15,7 +15,7 @@ model-index:
        type: doom_health_gathering_supreme
      metrics:
      - type: mean_reward
-       value: 13.58 +/- 4.73
+       value: 10.53 +/- 6.29
        name: mean_reward
        verified: false
 ---
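The model-card change above updates the reported evaluation metric: mean_reward drops from 13.58 +/- 4.73 to 10.53 +/- 6.29. By convention this string is the mean and standard deviation of per-episode true rewards over the evaluation episodes; a hedged sketch of how such a string could be produced (the helper name and the sample rewards are illustrative, not the actual evaluation data, and whether population or sample standard deviation is used is an assumption):

```python
import statistics

def format_mean_reward(episode_rewards):
    """Format 'mean +/- std' over evaluated episodes (population std assumed)."""
    mean = statistics.mean(episode_rewards)
    std = statistics.pstdev(episode_rewards)
    return f"{mean:.2f} +/- {std:.2f}"

# Hypothetical per-episode true rewards, for illustration only.
rewards = [13.81, 16.59, 3.2, 4.82, 10.17, 6.2]
print(format_mean_reward(rewards))
```

A large spread relative to the mean, as in 10.53 +/- 6.29, indicates that episode outcomes vary widely around the average.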
checkpoint_p0/checkpoint_000002934_12017664.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d8cad2e39da2f0347adb76d84e96eed33c61a2e07913f2a183e7ed7a54495595
+ size 34928851
replay.mp4 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9740afa6dab78d165dc2405c6bf77726f9248441ddb5dea40f14725ac06c78b1
- size 26250307
+ oid sha256:c901054044bceb1335263c6b8dbf95c410bbb3d6e5df817c7551cfe66aa18399
+ size 20128244
sf_log.txt CHANGED
@@ -7743,3 +7743,541 @@ main_loop: 11833.6824
  [2024-09-01 10:31:45,088][00307] Avg episode rewards: #0: 34.183, true rewards: #0: 13.583
  [2024-09-01 10:31:45,091][00307] Avg episode reward: 34.183, avg true_objective: 13.583
  [2024-09-01 10:33:19,526][00307] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
+ [2024-09-01 10:33:24,356][00307] The model has been pushed to https://huggingface.co/jarski/rl_course_vizdoom_health_gathering_supreme
+ [2024-09-01 10:33:26,056][00307] Environment doom_basic already registered, overwriting...
+ [2024-09-01 10:33:26,058][00307] Environment doom_two_colors_easy already registered, overwriting...
+ [2024-09-01 10:33:26,061][00307] Environment doom_two_colors_hard already registered, overwriting...
+ [2024-09-01 10:33:26,066][00307] Environment doom_dm already registered, overwriting...
+ [2024-09-01 10:33:26,069][00307] Environment doom_dwango5 already registered, overwriting...
+ [2024-09-01 10:33:26,074][00307] Environment doom_my_way_home_flat_actions already registered, overwriting...
+ [2024-09-01 10:33:26,075][00307] Environment doom_defend_the_center_flat_actions already registered, overwriting...
+ [2024-09-01 10:33:26,078][00307] Environment doom_my_way_home already registered, overwriting...
+ [2024-09-01 10:33:26,079][00307] Environment doom_deadly_corridor already registered, overwriting...
+ [2024-09-01 10:33:26,083][00307] Environment doom_defend_the_center already registered, overwriting...
+ [2024-09-01 10:33:26,084][00307] Environment doom_defend_the_line already registered, overwriting...
+ [2024-09-01 10:33:26,086][00307] Environment doom_health_gathering already registered, overwriting...
+ [2024-09-01 10:33:26,087][00307] Environment doom_health_gathering_supreme already registered, overwriting...
+ [2024-09-01 10:33:26,088][00307] Environment doom_battle already registered, overwriting...
+ [2024-09-01 10:33:26,091][00307] Environment doom_battle2 already registered, overwriting...
+ [2024-09-01 10:33:26,092][00307] Environment doom_duel_bots already registered, overwriting...
+ [2024-09-01 10:33:26,095][00307] Environment doom_deathmatch_bots already registered, overwriting...
+ [2024-09-01 10:33:26,096][00307] Environment doom_duel already registered, overwriting...
+ [2024-09-01 10:33:26,097][00307] Environment doom_deathmatch_full already registered, overwriting...
+ [2024-09-01 10:33:26,101][00307] Environment doom_benchmark already registered, overwriting...
+ [2024-09-01 10:33:26,103][00307] register_encoder_factory: <function make_vizdoom_encoder at 0x789ab41c1b40>
+ [2024-09-01 10:33:26,141][00307] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+ [2024-09-01 10:33:26,154][00307] Experiment dir /content/train_dir/default_experiment already exists!
+ [2024-09-01 10:33:26,156][00307] Resuming existing experiment from /content/train_dir/default_experiment...
+ [2024-09-01 10:33:26,159][00307] Weights and Biases integration disabled
+ [2024-09-01 10:33:26,170][00307] Environment var CUDA_VISIBLE_DEVICES is
+
+ [2024-09-01 10:33:31,713][00307] Starting experiment with the following configuration:
+ help=False
+ algo=APPO
+ env=doom_health_gathering_supreme
+ experiment=default_experiment
+ train_dir=/content/train_dir
+ restart_behavior=resume
+ device=cpu
+ seed=None
+ num_policies=1
+ async_rl=True
+ serial_mode=False
+ batched_sampling=False
+ num_batches_to_accumulate=2
+ worker_num_splits=2
+ policy_workers_per_policy=1
+ max_policy_lag=1000
+ num_workers=8
+ num_envs_per_worker=4
+ batch_size=1024
+ num_batches_per_epoch=1
+ num_epochs=1
+ rollout=32
+ recurrence=32
+ shuffle_minibatches=False
+ gamma=0.99
+ reward_scale=1.0
+ reward_clip=1000.0
+ value_bootstrap=False
+ normalize_returns=True
+ exploration_loss_coeff=0.001
+ value_loss_coeff=0.5
+ kl_loss_coeff=0.0
+ exploration_loss=symmetric_kl
+ gae_lambda=0.95
+ ppo_clip_ratio=0.1
+ ppo_clip_value=0.2
+ with_vtrace=False
+ vtrace_rho=1.0
+ vtrace_c=1.0
+ optimizer=adam
+ adam_eps=1e-06
+ adam_beta1=0.9
+ adam_beta2=0.999
+ max_grad_norm=4.0
+ learning_rate=0.0001
+ lr_schedule=constant
+ lr_schedule_kl_threshold=0.008
+ lr_adaptive_min=1e-06
+ lr_adaptive_max=0.01
+ obs_subtract_mean=0.0
+ obs_scale=255.0
+ normalize_input=True
+ normalize_input_keys=None
+ decorrelate_experience_max_seconds=0
+ decorrelate_envs_on_one_worker=True
+ actor_worker_gpus=[]
+ set_workers_cpu_affinity=True
+ force_envs_single_thread=False
+ default_niceness=0
+ log_to_file=True
+ experiment_summaries_interval=10
+ flush_summaries_interval=30
+ stats_avg=100
+ summaries_use_frameskip=True
+ heartbeat_interval=20
+ heartbeat_reporting_interval=600
+ train_for_env_steps=12000000
+ train_for_seconds=10000000000
+ save_every_sec=120
+ keep_checkpoints=2
+ load_checkpoint_kind=latest
+ save_milestones_sec=-1
+ save_best_every_sec=5
+ save_best_metric=reward
+ save_best_after=100000
+ benchmark=False
+ encoder_mlp_layers=[512, 512]
+ encoder_conv_architecture=convnet_simple
+ encoder_conv_mlp_layers=[512]
+ use_rnn=True
+ rnn_size=512
+ rnn_type=gru
+ rnn_num_layers=1
+ decoder_mlp_layers=[]
+ nonlinearity=elu
+ policy_initialization=orthogonal
+ policy_init_gain=1.0
+ actor_critic_share_weights=True
+ adaptive_stddev=True
+ continuous_tanh_scale=0.0
+ initial_stddev=1.0
+ use_env_info_cache=False
+ env_gpu_actions=False
+ env_gpu_observations=True
+ env_frameskip=4
+ env_framestack=1
+ pixel_format=CHW
+ use_record_episode_statistics=False
+ with_wandb=False
+ wandb_user=None
+ wandb_project=sample_factory
+ wandb_group=None
+ wandb_job_type=SF
+ wandb_tags=[]
+ with_pbt=False
+ pbt_mix_policies_in_one_env=True
+ pbt_period_env_steps=5000000
+ pbt_start_mutation=20000000
+ pbt_replace_fraction=0.3
+ pbt_mutation_rate=0.15
+ pbt_replace_reward_gap=0.1
+ pbt_replace_reward_gap_absolute=1e-06
+ pbt_optimize_gamma=False
+ pbt_target_objective=true_objective
+ pbt_perturb_min=1.1
+ pbt_perturb_max=1.5
+ num_agents=-1
+ num_humans=0
+ num_bots=-1
+ start_bot_difficulty=None
+ timelimit=None
+ res_w=128
+ res_h=72
+ wide_aspect_ratio=False
+ eval_env_frameskip=1
+ fps=35
+ command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000
+ cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000}
+ git_hash=unknown
+ git_repo_name=not a git repository
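The configuration dump above is a flat key=value listing logged by the runner at startup. As a sketch, such a dump can be parsed back into a dictionary for inspection; the `parse_config_dump` helper is ours (an illustration, not a Sample Factory API), and the sample keys are copied verbatim from the log:

```python
def parse_config_dump(lines):
    """Turn 'key=value' lines from a config dump into a dict of strings."""
    cfg = {}
    for line in lines:
        key, _, value = line.strip().partition("=")
        cfg[key] = value
    return cfg

dump = [
    "algo=APPO",
    "env=doom_health_gathering_supreme",
    "num_workers=8",
    "batch_size=1024",
]
cfg = parse_config_dump(dump)
print(cfg["algo"])  # APPO
```

Note that all values come out as strings; the real config (saved to config.json in the log below) is typed.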
+ [2024-09-01 10:33:31,723][00307] Saving configuration to /content/train_dir/default_experiment/config.json...
+ [2024-09-01 10:33:31,729][00307] Rollout worker 0 uses device cpu
+ [2024-09-01 10:33:31,731][00307] Rollout worker 1 uses device cpu
+ [2024-09-01 10:33:31,738][00307] Rollout worker 2 uses device cpu
+ [2024-09-01 10:33:31,741][00307] Rollout worker 3 uses device cpu
+ [2024-09-01 10:33:31,746][00307] Rollout worker 4 uses device cpu
+ [2024-09-01 10:33:31,750][00307] Rollout worker 5 uses device cpu
+ [2024-09-01 10:33:31,752][00307] Rollout worker 6 uses device cpu
+ [2024-09-01 10:33:31,755][00307] Rollout worker 7 uses device cpu
+ [2024-09-01 10:33:31,963][00307] InferenceWorker_p0-w0: min num requests: 2
+ [2024-09-01 10:33:32,035][00307] Starting all processes...
+ [2024-09-01 10:33:32,043][00307] Starting process learner_proc0
+ [2024-09-01 10:33:32,128][00307] Starting all processes...
+ [2024-09-01 10:33:32,201][00307] Starting process inference_proc0-0
+ [2024-09-01 10:33:32,209][00307] Starting process rollout_proc0
+ [2024-09-01 10:33:32,209][00307] Starting process rollout_proc1
+ [2024-09-01 10:33:32,209][00307] Starting process rollout_proc2
+ [2024-09-01 10:33:32,209][00307] Starting process rollout_proc3
+ [2024-09-01 10:33:32,209][00307] Starting process rollout_proc4
+ [2024-09-01 10:33:32,209][00307] Starting process rollout_proc5
+ [2024-09-01 10:33:32,209][00307] Starting process rollout_proc6
+ [2024-09-01 10:33:32,209][00307] Starting process rollout_proc7
+ [2024-09-01 10:33:50,188][65187] Worker 0 uses CPU cores [0]
+ [2024-09-01 10:33:50,798][65174] Starting seed is not provided
+ [2024-09-01 10:33:50,798][65174] Initializing actor-critic model on device cpu
+ [2024-09-01 10:33:50,799][65174] RunningMeanStd input shape: (3, 72, 128)
+ [2024-09-01 10:33:50,802][65174] RunningMeanStd input shape: (1,)
+ [2024-09-01 10:33:50,882][65174] ConvEncoder: input_channels=3
+ [2024-09-01 10:33:50,946][65192] Worker 4 uses CPU cores [0]
+ [2024-09-01 10:33:50,959][65191] Worker 3 uses CPU cores [1]
+ [2024-09-01 10:33:50,978][65189] Worker 1 uses CPU cores [1]
+ [2024-09-01 10:33:51,023][65195] Worker 7 uses CPU cores [1]
+ [2024-09-01 10:33:51,154][65194] Worker 5 uses CPU cores [1]
+ [2024-09-01 10:33:51,472][65190] Worker 2 uses CPU cores [0]
+ [2024-09-01 10:33:51,492][65193] Worker 6 uses CPU cores [0]
+ [2024-09-01 10:33:51,611][65174] Conv encoder output size: 512
+ [2024-09-01 10:33:51,612][65174] Policy head output size: 512
+ [2024-09-01 10:33:51,668][65174] Created Actor Critic model with architecture:
+ [2024-09-01 10:33:51,676][65174] ActorCriticSharedWeights(
+ (obs_normalizer): ObservationNormalizer(
+ (running_mean_std): RunningMeanStdDictInPlace(
+ (running_mean_std): ModuleDict(
+ (obs): RunningMeanStdInPlace()
+ )
+ )
+ )
+ (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
+ (encoder): VizdoomEncoder(
+ (basic_encoder): ConvEncoder(
+ (enc): RecursiveScriptModule(
+ original_name=ConvEncoderImpl
+ (conv_head): RecursiveScriptModule(
+ original_name=Sequential
+ (0): RecursiveScriptModule(original_name=Conv2d)
+ (1): RecursiveScriptModule(original_name=ELU)
+ (2): RecursiveScriptModule(original_name=Conv2d)
+ (3): RecursiveScriptModule(original_name=ELU)
+ (4): RecursiveScriptModule(original_name=Conv2d)
+ (5): RecursiveScriptModule(original_name=ELU)
+ )
+ (mlp_layers): RecursiveScriptModule(
+ original_name=Sequential
+ (0): RecursiveScriptModule(original_name=Linear)
+ (1): RecursiveScriptModule(original_name=ELU)
+ )
+ )
+ )
+ )
+ (core): ModelCoreRNN(
+ (core): GRU(512, 512)
+ )
+ (decoder): MlpDecoder(
+ (mlp): Identity()
+ )
+ (critic_linear): Linear(in_features=512, out_features=1, bias=True)
+ (action_parameterization): ActionParameterizationDefault(
+ (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
+ )
+ )
+ [2024-09-01 10:33:51,963][00307] Heartbeat connected on InferenceWorker_p0-w0
+ [2024-09-01 10:33:51,978][00307] Heartbeat connected on RolloutWorker_w0
+ [2024-09-01 10:33:51,991][00307] Heartbeat connected on RolloutWorker_w1
+ [2024-09-01 10:33:52,002][00307] Heartbeat connected on RolloutWorker_w2
+ [2024-09-01 10:33:52,007][00307] Heartbeat connected on RolloutWorker_w3
+ [2024-09-01 10:33:52,016][00307] Heartbeat connected on RolloutWorker_w4
+ [2024-09-01 10:33:52,021][00307] Heartbeat connected on RolloutWorker_w5
+ [2024-09-01 10:33:52,029][00307] Heartbeat connected on RolloutWorker_w6
+ [2024-09-01 10:33:52,035][00307] Heartbeat connected on RolloutWorker_w7
+ [2024-09-01 10:33:52,927][00307] Heartbeat connected on Batcher_0
+ [2024-09-01 10:33:52,949][65174] Using optimizer <class 'torch.optim.adam.Adam'>
+ [2024-09-01 10:33:52,951][65174] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002932_12009472.pth...
+ [2024-09-01 10:33:53,010][65174] Loading model from checkpoint
+ [2024-09-01 10:33:53,058][65174] Loaded experiment state at self.train_step=2932, self.env_steps=12009472
+ [2024-09-01 10:33:53,058][65174] Initialized policy 0 weights for model version 2932
+ [2024-09-01 10:33:53,064][65188] RunningMeanStd input shape: (3, 72, 128)
+ [2024-09-01 10:33:53,066][65174] LearnerWorker_p0 finished initialization!
+ [2024-09-01 10:33:53,067][00307] Heartbeat connected on LearnerWorker_p0
+ [2024-09-01 10:33:53,072][65188] RunningMeanStd input shape: (1,)
+ [2024-09-01 10:33:53,098][65188] ConvEncoder: input_channels=3
+ [2024-09-01 10:33:53,326][65188] Conv encoder output size: 512
+ [2024-09-01 10:33:53,327][65188] Policy head output size: 512
+ [2024-09-01 10:33:53,357][00307] Inference worker 0-0 is ready!
+ [2024-09-01 10:33:53,359][00307] All inference workers are ready! Signal rollout workers to start!
+ [2024-09-01 10:33:53,526][65187] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-09-01 10:33:53,545][65192] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-09-01 10:33:53,548][65193] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-09-01 10:33:53,555][65190] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-09-01 10:33:53,563][65191] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-09-01 10:33:53,561][65195] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-09-01 10:33:53,560][65189] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-09-01 10:33:53,566][65194] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-09-01 10:33:55,770][65187] Decorrelating experience for 0 frames...
+ [2024-09-01 10:33:55,816][65192] Decorrelating experience for 0 frames...
+ [2024-09-01 10:33:55,835][65193] Decorrelating experience for 0 frames...
+ [2024-09-01 10:33:55,830][65189] Decorrelating experience for 0 frames...
+ [2024-09-01 10:33:55,846][65195] Decorrelating experience for 0 frames...
+ [2024-09-01 10:33:55,853][65191] Decorrelating experience for 0 frames...
+ [2024-09-01 10:33:55,860][65190] Decorrelating experience for 0 frames...
+ [2024-09-01 10:33:56,171][00307] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 12009472. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+ [2024-09-01 10:33:57,632][65194] Decorrelating experience for 0 frames...
+ [2024-09-01 10:33:57,653][65191] Decorrelating experience for 32 frames...
+ [2024-09-01 10:33:57,667][65195] Decorrelating experience for 32 frames...
+ [2024-09-01 10:33:58,359][65187] Decorrelating experience for 32 frames...
+ [2024-09-01 10:33:58,371][65192] Decorrelating experience for 32 frames...
+ [2024-09-01 10:33:58,468][65193] Decorrelating experience for 32 frames...
+ [2024-09-01 10:33:59,837][65189] Decorrelating experience for 32 frames...
+ [2024-09-01 10:33:59,889][65194] Decorrelating experience for 32 frames...
+ [2024-09-01 10:34:00,556][65187] Decorrelating experience for 64 frames...
+ [2024-09-01 10:34:00,558][65192] Decorrelating experience for 64 frames...
+ [2024-09-01 10:34:00,673][65195] Decorrelating experience for 64 frames...
+ [2024-09-01 10:34:00,692][65191] Decorrelating experience for 64 frames...
+ [2024-09-01 10:34:01,173][00307] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 12009472. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+ [2024-09-01 10:34:02,035][65194] Decorrelating experience for 64 frames...
+ [2024-09-01 10:34:02,409][65191] Decorrelating experience for 96 frames...
+ [2024-09-01 10:34:02,791][65193] Decorrelating experience for 64 frames...
+ [2024-09-01 10:34:02,840][65190] Decorrelating experience for 32 frames...
+ [2024-09-01 10:34:02,997][65192] Decorrelating experience for 96 frames...
+ [2024-09-01 10:34:04,449][65189] Decorrelating experience for 64 frames...
+ [2024-09-01 10:34:04,647][65194] Decorrelating experience for 96 frames...
+ [2024-09-01 10:34:05,380][65187] Decorrelating experience for 96 frames...
+ [2024-09-01 10:34:05,540][65193] Decorrelating experience for 96 frames...
+ [2024-09-01 10:34:06,170][00307] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 12009472. Throughput: 0: 61.2. Samples: 612. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+ [2024-09-01 10:34:06,177][00307] Avg episode reward: [(0, '3.492')]
+ [2024-09-01 10:34:07,094][65190] Decorrelating experience for 64 frames...
+ [2024-09-01 10:34:07,285][65195] Decorrelating experience for 96 frames...
+ [2024-09-01 10:34:07,295][65189] Decorrelating experience for 96 frames...
+ [2024-09-01 10:34:10,760][65190] Decorrelating experience for 96 frames...
+ [2024-09-01 10:34:11,045][65174] Signal inference workers to stop experience collection...
+ [2024-09-01 10:34:11,148][65188] InferenceWorker_p0-w0: stopping experience collection
+ [2024-09-01 10:34:11,170][00307] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 12009472. Throughput: 0: 105.9. Samples: 1588. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+ [2024-09-01 10:34:11,179][00307] Avg episode reward: [(0, '4.038')]
+ [2024-09-01 10:34:13,282][65174] Signal inference workers to resume experience collection...
+ [2024-09-01 10:34:13,284][65174] Stopping Batcher_0...
+ [2024-09-01 10:34:13,284][65174] Loop batcher_evt_loop terminating...
+ [2024-09-01 10:34:13,312][00307] Component Batcher_0 stopped!
+ [2024-09-01 10:34:13,351][65188] Weights refcount: 2 0
+ [2024-09-01 10:34:13,361][00307] Component InferenceWorker_p0-w0 stopped!
+ [2024-09-01 10:34:13,365][65188] Stopping InferenceWorker_p0-w0...
+ [2024-09-01 10:34:13,370][65188] Loop inference_proc0-0_evt_loop terminating...
+ [2024-09-01 10:34:14,036][00307] Component RolloutWorker_w5 stopped!
+ [2024-09-01 10:34:14,036][65194] Stopping RolloutWorker_w5...
+ [2024-09-01 10:34:14,050][65194] Loop rollout_proc5_evt_loop terminating...
+ [2024-09-01 10:34:14,067][00307] Component RolloutWorker_w7 stopped!
+ [2024-09-01 10:34:14,072][00307] Component RolloutWorker_w3 stopped!
+ [2024-09-01 10:34:14,080][65191] Stopping RolloutWorker_w3...
+ [2024-09-01 10:34:14,081][65191] Loop rollout_proc3_evt_loop terminating...
+ [2024-09-01 10:34:14,067][65195] Stopping RolloutWorker_w7...
+ [2024-09-01 10:34:14,089][65195] Loop rollout_proc7_evt_loop terminating...
+ [2024-09-01 10:34:14,093][00307] Component RolloutWorker_w2 stopped!
+ [2024-09-01 10:34:14,099][65190] Stopping RolloutWorker_w2...
+ [2024-09-01 10:34:14,101][65190] Loop rollout_proc2_evt_loop terminating...
+ [2024-09-01 10:34:14,126][00307] Component RolloutWorker_w0 stopped!
+ [2024-09-01 10:34:14,130][65187] Stopping RolloutWorker_w0...
+ [2024-09-01 10:34:14,136][65187] Loop rollout_proc0_evt_loop terminating...
+ [2024-09-01 10:34:14,192][65189] Stopping RolloutWorker_w1...
+ [2024-09-01 10:34:14,192][00307] Component RolloutWorker_w1 stopped!
+ [2024-09-01 10:34:14,208][65189] Loop rollout_proc1_evt_loop terminating...
+ [2024-09-01 10:34:14,239][00307] Component RolloutWorker_w4 stopped!
+ [2024-09-01 10:34:14,247][65192] Stopping RolloutWorker_w4...
+ [2024-09-01 10:34:14,248][65192] Loop rollout_proc4_evt_loop terminating...
+ [2024-09-01 10:34:14,263][00307] Component RolloutWorker_w6 stopped!
+ [2024-09-01 10:34:14,267][65193] Stopping RolloutWorker_w6...
+ [2024-09-01 10:34:14,267][65193] Loop rollout_proc6_evt_loop terminating...
+ [2024-09-01 10:34:21,389][65174] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002934_12017664.pth...
+ [2024-09-01 10:34:21,705][65174] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002917_11948032.pth
+ [2024-09-01 10:34:21,753][65174] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002934_12017664.pth...
+ [2024-09-01 10:34:22,098][65174] Stopping LearnerWorker_p0...
+ [2024-09-01 10:34:22,100][65174] Loop learner_proc0_evt_loop terminating...
+ [2024-09-01 10:34:22,101][00307] Component LearnerWorker_p0 stopped!
+ [2024-09-01 10:34:22,120][00307] Waiting for process learner_proc0 to stop...
+ [2024-09-01 10:34:23,094][00307] Waiting for process inference_proc0-0 to join...
+ [2024-09-01 10:34:23,107][00307] Waiting for process rollout_proc0 to join...
+ [2024-09-01 10:34:23,120][00307] Waiting for process rollout_proc1 to join...
+ [2024-09-01 10:34:23,130][00307] Waiting for process rollout_proc2 to join...
+ [2024-09-01 10:34:23,143][00307] Waiting for process rollout_proc3 to join...
+ [2024-09-01 10:34:23,152][00307] Waiting for process rollout_proc4 to join...
+ [2024-09-01 10:34:23,166][00307] Waiting for process rollout_proc5 to join...
+ [2024-09-01 10:34:23,178][00307] Waiting for process rollout_proc6 to join...
+ [2024-09-01 10:34:23,196][00307] Waiting for process rollout_proc7 to join...
+ [2024-09-01 10:34:23,214][00307] Batcher 0 profile tree view:
+ batching: 0.0664, releasing_batches: 0.0005
+ [2024-09-01 10:34:23,220][00307] InferenceWorker_p0-w0 profile tree view:
+ update_model: 0.0290
+ wait_policy: 0.0001
+ wait_policy_total: 10.2431
+ one_step: 0.0931
+ handle_policy_step: 7.0708
+ deserialize: 0.1442, stack: 0.0418, obs_to_device_normalize: 0.8121, forward: 5.4789, send_messages: 0.1427
+ prepare_outputs: 0.1556
+ to_cpu: 0.0223
+ [2024-09-01 10:34:23,229][00307] Learner 0 profile tree view:
+ misc: 0.0000, prepare_batch: 4.7974
+ train: 8.9264
+ epoch_init: 0.0000, minibatch_init: 0.0000, losses_postprocess: 0.0002, kl_divergence: 0.0007, after_optimizer: 0.0078
+ calculate_losses: 3.4527
+ losses_init: 0.0000, forward_head: 3.1163, bptt_initial: 0.0080, tail: 0.0106, advantages_returns: 0.0028, losses: 0.0030
+ bptt: 0.3114
+ bptt_forward_core: 0.3103
+ update: 5.4628
+ clip: 0.0221
+ [2024-09-01 10:34:23,235][00307] RolloutWorker_w0 profile tree view:
+ wait_for_trajectories: 0.0115, enqueue_policy_requests: 0.1034, env_step: 2.8720, overhead: 0.1024, complete_rollouts: 0.0464
+ save_policy_outputs: 0.0846
+ split_output_tensors: 0.0340
+ [2024-09-01 10:34:23,244][00307] RolloutWorker_w7 profile tree view:
+ wait_for_trajectories: 0.0249, enqueue_policy_requests: 0.0434, env_step: 1.8572, overhead: 0.0427, complete_rollouts: 0.0754
+ save_policy_outputs: 0.0636
+ split_output_tensors: 0.0064
+ [2024-09-01 10:34:23,252][00307] Loop Runner_EvtLoop terminating...
+ [2024-09-01 10:34:23,286][00307] Runner profile tree view:
+ main_loop: 51.2513
+ [2024-09-01 10:34:23,294][00307] Collected {0: 12017664}, FPS: 159.8
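The final summary line is internally consistent with the rest of the log: the run resumed at env_steps=12009472, finished at 12017664 collected frames, and reports main_loop: 51.2513 seconds, which matches the logged 159.8 FPS. A quick arithmetic check (all three numbers are taken directly from the log above):

```python
frames_at_resume = 12009472   # env_steps when the checkpoint was loaded
frames_collected = 12017664   # total frames in the final 'Collected' line
main_loop_seconds = 51.2513   # from the Runner profile tree view

# Throughput over this resumed run: frames gathered divided by wall time.
fps = (frames_collected - frames_at_resume) / main_loop_seconds
print(round(fps, 1))  # 159.8
```

The difference, 8192 frames, is exactly one batch of 1024 times the two accumulated batches across the rollout structure, so the run stopped after a single training iteration past the 12M-step target.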
+ [2024-09-01 10:52:55,466][00307] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+ [2024-09-01 10:52:55,468][00307] Overriding arg 'num_workers' with value 1 passed from command line
+ [2024-09-01 10:52:55,474][00307] Adding new argument 'no_render'=True that is not in the saved config file!
+ [2024-09-01 10:52:55,477][00307] Adding new argument 'save_video'=True that is not in the saved config file!
+ [2024-09-01 10:52:55,481][00307] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+ [2024-09-01 10:52:55,483][00307] Adding new argument 'video_name'=None that is not in the saved config file!
+ [2024-09-01 10:52:55,485][00307] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
+ [2024-09-01 10:52:55,490][00307] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+ [2024-09-01 10:52:55,491][00307] Adding new argument 'push_to_hub'=True that is not in the saved config file!
+ [2024-09-01 10:52:55,492][00307] Adding new argument 'hf_repository'='jarski/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
+ [2024-09-01 10:52:55,494][00307] Adding new argument 'policy_index'=0 that is not in the saved config file!
+ [2024-09-01 10:52:55,495][00307] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+ [2024-09-01 10:52:55,496][00307] Adding new argument 'train_script'=None that is not in the saved config file!
+ [2024-09-01 10:52:55,498][00307] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+ [2024-09-01 10:52:55,499][00307] Using frameskip 1 and render_action_repeat=4 for evaluation
+ [2024-09-01 10:52:55,523][00307] RunningMeanStd input shape: (3, 72, 128)
+ [2024-09-01 10:52:55,525][00307] RunningMeanStd input shape: (1,)
+ [2024-09-01 10:52:55,546][00307] ConvEncoder: input_channels=3
+ [2024-09-01 10:52:55,598][00307] Conv encoder output size: 512
+ [2024-09-01 10:52:55,601][00307] Policy head output size: 512
+ [2024-09-01 10:52:55,621][00307] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002934_12017664.pth...
+ [2024-09-01 10:52:56,283][00307] Num frames 100...
+ [2024-09-01 10:52:56,587][00307] Num frames 200...
+ [2024-09-01 10:52:56,857][00307] Num frames 300...
+ [2024-09-01 10:52:57,133][00307] Num frames 400...
+ [2024-09-01 10:52:57,406][00307] Num frames 500...
+ [2024-09-01 10:52:57,683][00307] Num frames 600...
+ [2024-09-01 10:52:57,954][00307] Num frames 700...
+ [2024-09-01 10:52:58,248][00307] Num frames 800...
+ [2024-09-01 10:52:58,531][00307] Num frames 900...
+ [2024-09-01 10:52:58,817][00307] Num frames 1000...
+ [2024-09-01 10:52:59,088][00307] Num frames 1100...
+ [2024-09-01 10:52:59,369][00307] Num frames 1200...
+ [2024-09-01 10:52:59,597][00307] Num frames 1300...
+ [2024-09-01 10:52:59,823][00307] Avg episode rewards: #0: 37.810, true rewards: #0: 13.810
+ [2024-09-01 10:52:59,825][00307] Avg episode reward: 37.810, avg true_objective: 13.810
+ [2024-09-01 10:52:59,864][00307] Num frames 1400...
+ [2024-09-01 10:53:00,085][00307] Num frames 1500...
+ [2024-09-01 10:53:00,288][00307] Num frames 1600...
+ [2024-09-01 10:53:00,488][00307] Num frames 1700...
+ [2024-09-01 10:53:00,686][00307] Num frames 1800...
+ [2024-09-01 10:53:00,892][00307] Num frames 1900...
+ [2024-09-01 10:53:01,090][00307] Num frames 2000...
+ [2024-09-01 10:53:01,300][00307] Num frames 2100...
+ [2024-09-01 10:53:01,494][00307] Num frames 2200...
+ [2024-09-01 10:53:01,709][00307] Num frames 2300...
+ [2024-09-01 10:53:01,929][00307] Num frames 2400...
+ [2024-09-01 10:53:02,163][00307] Num frames 2500...
+ [2024-09-01 10:53:02,381][00307] Num frames 2600...
+ [2024-09-01 10:53:02,591][00307] Num frames 2700...
+ [2024-09-01 10:53:02,843][00307] Num frames 2800...
+ [2024-09-01 10:53:03,050][00307] Num frames 2900...
+ [2024-09-01 10:53:03,252][00307] Num frames 3000...
+ [2024-09-01 10:53:03,451][00307] Num frames 3100...
+ [2024-09-01 10:53:03,653][00307] Num frames 3200...
+ [2024-09-01 10:53:03,865][00307] Num frames 3300...
+ [2024-09-01 10:53:03,967][00307] Avg episode rewards: #0: 45.089, true rewards: #0: 16.590
+ [2024-09-01 10:53:03,969][00307] Avg episode reward: 45.089, avg true_objective: 16.590
+ [2024-09-01 10:53:04,125][00307] Num frames 3400...
+ [2024-09-01 10:53:04,327][00307] Num frames 3500...
+ [2024-09-01 10:53:04,528][00307] Num frames 3600...
+ [2024-09-01 10:53:04,661][00307] Avg episode rewards: #0: 31.460, true rewards: #0: 12.127
+ [2024-09-01 10:53:04,663][00307] Avg episode reward: 31.460, avg true_objective: 12.127
+ [2024-09-01 10:53:04,780][00307] Num frames 3700...
+ [2024-09-01 10:53:05,006][00307] Num frames 3800...
+ [2024-09-01 10:53:05,215][00307] Num frames 3900...
+ [2024-09-01 10:53:05,416][00307] Num frames 4000...
+ [2024-09-01 10:53:05,618][00307] Num frames 4100...
+ [2024-09-01 10:53:05,713][00307] Avg episode rewards: #0: 26.045, true rewards: #0: 10.295
+ [2024-09-01 10:53:05,716][00307] Avg episode reward: 26.045, avg true_objective: 10.295
+ [2024-09-01 10:53:05,879][00307] Num frames 4200...
+ [2024-09-01 10:53:06,092][00307] Num frames 4300...
+ [2024-09-01 10:53:06,303][00307] Num frames 4400...
+ [2024-09-01 10:53:06,502][00307] Num frames 4500...
+ [2024-09-01 10:53:06,705][00307] Num frames 4600...
+ [2024-09-01 10:53:06,904][00307] Num frames 4700...
+ [2024-09-01 10:53:07,118][00307] Num frames 4800...
+ [2024-09-01 10:53:07,332][00307] Num frames 4900...
+ [2024-09-01 10:53:07,534][00307] Num frames 5000...
+ [2024-09-01 10:53:07,728][00307] Num frames 5100...
+ [2024-09-01 10:53:07,852][00307] Avg episode rewards: #0: 25.470, true rewards: #0: 10.270
+ [2024-09-01 10:53:07,853][00307] Avg episode reward: 25.470, avg true_objective: 10.270
+ [2024-09-01 10:53:07,985][00307] Num frames 5200...
+ [2024-09-01 10:53:08,188][00307] Num frames 5300...
+ [2024-09-01 10:53:08,392][00307] Num frames 5400...
+ [2024-09-01 10:53:08,586][00307] Num frames 5500...
+ [2024-09-01 10:53:08,741][00307] Avg episode rewards: #0: 22.252, true rewards: #0: 9.252
+ [2024-09-01 10:53:08,744][00307] Avg episode reward: 22.252, avg true_objective: 9.252
+ [2024-09-01 10:53:08,844][00307] Num frames 5600...
+ [2024-09-01 10:53:09,061][00307] Num frames 5700...
+ [2024-09-01 10:53:09,259][00307] Num frames 5800...
+ [2024-09-01 10:53:09,397][00307] Avg episode rewards: #0: 19.913, true rewards: #0: 8.341
+ [2024-09-01 10:53:09,400][00307] Avg episode reward: 19.913, avg true_objective: 8.341
+ [2024-09-01 10:53:09,555][00307] Num frames 5900...
+ [2024-09-01 10:53:09,838][00307] Num frames 6000...
+ [2024-09-01 10:53:10,124][00307] Num frames 6100...
+ [2024-09-01 10:53:10,393][00307] Num frames 6200...
+ [2024-09-01 10:53:10,661][00307] Num frames 6300...
+ [2024-09-01 10:53:10,926][00307] Num frames 6400...
+ [2024-09-01 10:53:11,226][00307] Num frames 6500...
+ [2024-09-01 10:53:11,501][00307] Num frames 6600...
8238
+ [2024-09-01 10:53:11,778][00307] Num frames 6700...
8239
+ [2024-09-01 10:53:12,056][00307] Num frames 6800...
8240
+ [2024-09-01 10:53:12,357][00307] Num frames 6900...
8241
+ [2024-09-01 10:53:12,637][00307] Num frames 7000...
8242
+ [2024-09-01 10:53:12,863][00307] Num frames 7100...
8243
+ [2024-09-01 10:53:13,026][00307] Avg episode rewards: #0: 21.439, true rewards: #0: 8.939
8244
+ [2024-09-01 10:53:13,029][00307] Avg episode reward: 21.439, avg true_objective: 8.939
8245
+ [2024-09-01 10:53:13,127][00307] Num frames 7200...
8246
+ [2024-09-01 10:53:13,345][00307] Num frames 7300...
8247
+ [2024-09-01 10:53:13,540][00307] Num frames 7400...
8248
+ [2024-09-01 10:53:13,746][00307] Num frames 7500...
8249
+ [2024-09-01 10:53:13,952][00307] Num frames 7600...
8250
+ [2024-09-01 10:53:14,157][00307] Num frames 7700...
8251
+ [2024-09-01 10:53:14,373][00307] Num frames 7800...
8252
+ [2024-09-01 10:53:14,583][00307] Num frames 7900...
8253
+ [2024-09-01 10:53:14,780][00307] Num frames 8000...
8254
+ [2024-09-01 10:53:14,982][00307] Num frames 8100...
8255
+ [2024-09-01 10:53:15,186][00307] Num frames 8200...
8256
+ [2024-09-01 10:53:15,403][00307] Num frames 8300...
8257
+ [2024-09-01 10:53:15,607][00307] Num frames 8400...
8258
+ [2024-09-01 10:53:15,726][00307] Avg episode rewards: #0: 22.146, true rewards: #0: 9.368
8259
+ [2024-09-01 10:53:15,728][00307] Avg episode reward: 22.146, avg true_objective: 9.368
8260
+ [2024-09-01 10:53:15,880][00307] Num frames 8500...
8261
+ [2024-09-01 10:53:16,088][00307] Num frames 8600...
8262
+ [2024-09-01 10:53:16,311][00307] Num frames 8700...
8263
+ [2024-09-01 10:53:16,522][00307] Num frames 8800...
8264
+ [2024-09-01 10:53:16,719][00307] Num frames 8900...
8265
+ [2024-09-01 10:53:16,927][00307] Num frames 9000...
8266
+ [2024-09-01 10:53:17,154][00307] Num frames 9100...
8267
+ [2024-09-01 10:53:17,379][00307] Num frames 9200...
8268
+ [2024-09-01 10:53:17,574][00307] Num frames 9300...
8269
+ [2024-09-01 10:53:17,761][00307] Num frames 9400...
8270
+ [2024-09-01 10:53:17,960][00307] Num frames 9500...
8271
+ [2024-09-01 10:53:18,175][00307] Num frames 9600...
8272
+ [2024-09-01 10:53:18,391][00307] Num frames 9700...
8273
+ [2024-09-01 10:53:18,595][00307] Num frames 9800...
8274
+ [2024-09-01 10:53:18,812][00307] Num frames 9900...
8275
+ [2024-09-01 10:53:19,017][00307] Num frames 10000...
8276
+ [2024-09-01 10:53:19,226][00307] Num frames 10100...
8277
+ [2024-09-01 10:53:19,444][00307] Num frames 10200...
8278
+ [2024-09-01 10:53:19,650][00307] Num frames 10300...
8279
+ [2024-09-01 10:53:19,850][00307] Num frames 10400...
8280
+ [2024-09-01 10:53:20,057][00307] Num frames 10500...
8281
+ [2024-09-01 10:53:20,175][00307] Avg episode rewards: #0: 25.831, true rewards: #0: 10.531
8282
+ [2024-09-01 10:53:20,178][00307] Avg episode reward: 25.831, avg true_objective: 10.531
8283
+ [2024-09-01 10:54:35,854][00307] Replay video saved to /content/train_dir/default_experiment/replay.mp4!