gyaan committed
Commit c6b891b · verified · 1 Parent(s): b4504cf

Upload folder using huggingface_hub
.summary/0/events.out.tfevents.1739519280.aa19c74a8cf4 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a09c858348ab343e70da7be86efdda16e7e7af721596e6f602377e1d06b9f388
+ size 192893
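(The three added lines above are a Git LFS pointer rather than the binary itself: "version" names the pointer spec, "oid" is the SHA-256 of the stored object, and "size" is its length in bytes. The checkpoint and replay entries below follow the same pattern.)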
README.md CHANGED
@@ -15,7 +15,7 @@ model-index:
  type: doom_health_gathering_supreme
  metrics:
  - type: mean_reward
- value: 9.44 +/- 4.58
+ value: 11.94 +/- 5.03
  name: mean_reward
  verified: false
  ---
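(This is the only model-card change: mean_reward over the 10 evaluation episodes rises from 9.44 +/- 4.58 to 11.94 +/- 5.03 after the additional training recorded in config.json and sf_log.txt below.)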
checkpoint_p0/best_000001210_4956160_reward_29.554.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a65f556b44715eff601c370ac38110d08eeaaa07abe6b5e8ffa79cd58c76bd3a
+ size 34929243
checkpoint_p0/checkpoint_000001180_4833280.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dda48ade5117adad6d1284ff811afcfda5d3d8dbc998a127db452956ea14b1d4
+ size 34929669
checkpoint_p0/checkpoint_000001222_5005312.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:13123f6b62115d0efeb7a078d61fafd767e094579923a0cdd0a12558a54a669b
+ size 34929669
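To fetch one of these LFS-backed checkpoints locally, a minimal sketch using the huggingface_hub client (the repo id comes from the push URL in sf_log.txt below; hf_hub_download resolves LFS pointers to the actual binary):

from huggingface_hub import hf_hub_download

# Download the final checkpoint added in this commit; the returned path
# points into the local Hugging Face cache.
path = hf_hub_download(
    repo_id="gyaan/rl_course_vizdoom_health_gathering_supreme",
    filename="checkpoint_p0/checkpoint_000001222_5005312.pth",
)
print(path)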
config.json CHANGED
@@ -65,7 +65,7 @@
  "summaries_use_frameskip": true,
  "heartbeat_interval": 20,
  "heartbeat_reporting_interval": 600,
- "train_for_env_steps": 4000000,
+ "train_for_env_steps": 5000000,
  "train_for_seconds": 10000000000,
  "save_every_sec": 120,
  "keep_checkpoints": 2,
replay.mp4 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:faa5d7fc9add684655501f42e9e3e1401ee1437f9a1fd9ce0d37e303964b6f57
- size 17960113
+ oid sha256:894fe29a891f802a279687a7ee5c9977efa14e2fb32442e454b45a53242947c8
+ size 23102686
sf_log.txt CHANGED
@@ -1070,3 +1070,838 @@ main_loop: 1042.0112
  [2025-02-14 07:43:43,685][00436] Avg episode rewards: #0: 21.437, true rewards: #0: 9.437
  [2025-02-14 07:43:43,686][00436] Avg episode reward: 21.437, avg true_objective: 9.437
  [2025-02-14 07:44:38,128][00436] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
1073
+ [2025-02-14 07:44:42,755][00436] The model has been pushed to https://huggingface.co/gyaan/rl_course_vizdoom_health_gathering_supreme
1074
+ [2025-02-14 07:48:00,387][00436] Environment doom_basic already registered, overwriting...
1075
+ [2025-02-14 07:48:00,389][00436] Environment doom_two_colors_easy already registered, overwriting...
1076
+ [2025-02-14 07:48:00,391][00436] Environment doom_two_colors_hard already registered, overwriting...
1077
+ [2025-02-14 07:48:00,393][00436] Environment doom_dm already registered, overwriting...
1078
+ [2025-02-14 07:48:00,398][00436] Environment doom_dwango5 already registered, overwriting...
1079
+ [2025-02-14 07:48:00,399][00436] Environment doom_my_way_home_flat_actions already registered, overwriting...
1080
+ [2025-02-14 07:48:00,400][00436] Environment doom_defend_the_center_flat_actions already registered, overwriting...
1081
+ [2025-02-14 07:48:00,401][00436] Environment doom_my_way_home already registered, overwriting...
1082
+ [2025-02-14 07:48:00,405][00436] Environment doom_deadly_corridor already registered, overwriting...
1083
+ [2025-02-14 07:48:00,406][00436] Environment doom_defend_the_center already registered, overwriting...
1084
+ [2025-02-14 07:48:00,407][00436] Environment doom_defend_the_line already registered, overwriting...
1085
+ [2025-02-14 07:48:00,408][00436] Environment doom_health_gathering already registered, overwriting...
1086
+ [2025-02-14 07:48:00,409][00436] Environment doom_health_gathering_supreme already registered, overwriting...
1087
+ [2025-02-14 07:48:00,413][00436] Environment doom_battle already registered, overwriting...
1088
+ [2025-02-14 07:48:00,414][00436] Environment doom_battle2 already registered, overwriting...
1089
+ [2025-02-14 07:48:00,415][00436] Environment doom_duel_bots already registered, overwriting...
1090
+ [2025-02-14 07:48:00,416][00436] Environment doom_deathmatch_bots already registered, overwriting...
1091
+ [2025-02-14 07:48:00,417][00436] Environment doom_duel already registered, overwriting...
1092
+ [2025-02-14 07:48:00,417][00436] Environment doom_deathmatch_full already registered, overwriting...
1093
+ [2025-02-14 07:48:00,418][00436] Environment doom_benchmark already registered, overwriting...
1094
+ [2025-02-14 07:48:00,419][00436] register_encoder_factory: <function make_vizdoom_encoder at 0x790af59fec00>
1095
+ [2025-02-14 07:48:00,444][00436] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
1096
+ [2025-02-14 07:48:00,453][00436] Overriding arg 'train_for_env_steps' with value 5000000 passed from command line
1097
+ [2025-02-14 07:48:00,465][00436] Experiment dir /content/train_dir/default_experiment already exists!
1098
+ [2025-02-14 07:48:00,467][00436] Resuming existing experiment from /content/train_dir/default_experiment...
1099
+ [2025-02-14 07:48:00,468][00436] Weights and Biases integration disabled
1100
+ [2025-02-14 07:48:00,473][00436] Environment var CUDA_VISIBLE_DEVICES is 0
1101
+
1102
+ [2025-02-14 07:48:03,692][00436] Starting experiment with the following configuration:
1103
+ help=False
1104
+ algo=APPO
1105
+ env=doom_health_gathering_supreme
1106
+ experiment=default_experiment
1107
+ train_dir=/content/train_dir
1108
+ restart_behavior=resume
1109
+ device=gpu
1110
+ seed=None
1111
+ num_policies=1
1112
+ async_rl=True
1113
+ serial_mode=False
1114
+ batched_sampling=False
1115
+ num_batches_to_accumulate=2
1116
+ worker_num_splits=2
1117
+ policy_workers_per_policy=1
1118
+ max_policy_lag=1000
1119
+ num_workers=8
1120
+ num_envs_per_worker=4
1121
+ batch_size=1024
1122
+ num_batches_per_epoch=1
1123
+ num_epochs=1
1124
+ rollout=32
1125
+ recurrence=32
1126
+ shuffle_minibatches=False
1127
+ gamma=0.99
1128
+ reward_scale=1.0
1129
+ reward_clip=1000.0
1130
+ value_bootstrap=False
1131
+ normalize_returns=True
1132
+ exploration_loss_coeff=0.001
1133
+ value_loss_coeff=0.5
1134
+ kl_loss_coeff=0.0
1135
+ exploration_loss=symmetric_kl
1136
+ gae_lambda=0.95
1137
+ ppo_clip_ratio=0.1
1138
+ ppo_clip_value=0.2
1139
+ with_vtrace=False
1140
+ vtrace_rho=1.0
1141
+ vtrace_c=1.0
1142
+ optimizer=adam
1143
+ adam_eps=1e-06
1144
+ adam_beta1=0.9
1145
+ adam_beta2=0.999
1146
+ max_grad_norm=4.0
1147
+ learning_rate=0.0001
1148
+ lr_schedule=constant
1149
+ lr_schedule_kl_threshold=0.008
1150
+ lr_adaptive_min=1e-06
1151
+ lr_adaptive_max=0.01
1152
+ obs_subtract_mean=0.0
1153
+ obs_scale=255.0
1154
+ normalize_input=True
1155
+ normalize_input_keys=None
1156
+ decorrelate_experience_max_seconds=0
1157
+ decorrelate_envs_on_one_worker=True
1158
+ actor_worker_gpus=[]
1159
+ set_workers_cpu_affinity=True
1160
+ force_envs_single_thread=False
1161
+ default_niceness=0
1162
+ log_to_file=True
1163
+ experiment_summaries_interval=10
1164
+ flush_summaries_interval=30
1165
+ stats_avg=100
1166
+ summaries_use_frameskip=True
1167
+ heartbeat_interval=20
1168
+ heartbeat_reporting_interval=600
1169
+ train_for_env_steps=5000000
1170
+ train_for_seconds=10000000000
1171
+ save_every_sec=120
1172
+ keep_checkpoints=2
1173
+ load_checkpoint_kind=latest
1174
+ save_milestones_sec=-1
1175
+ save_best_every_sec=5
1176
+ save_best_metric=reward
1177
+ save_best_after=100000
1178
+ benchmark=False
1179
+ encoder_mlp_layers=[512, 512]
1180
+ encoder_conv_architecture=convnet_simple
1181
+ encoder_conv_mlp_layers=[512]
1182
+ use_rnn=True
1183
+ rnn_size=512
1184
+ rnn_type=gru
1185
+ rnn_num_layers=1
1186
+ decoder_mlp_layers=[]
1187
+ nonlinearity=elu
1188
+ policy_initialization=orthogonal
1189
+ policy_init_gain=1.0
1190
+ actor_critic_share_weights=True
1191
+ adaptive_stddev=True
1192
+ continuous_tanh_scale=0.0
1193
+ initial_stddev=1.0
1194
+ use_env_info_cache=False
1195
+ env_gpu_actions=False
1196
+ env_gpu_observations=True
1197
+ env_frameskip=4
1198
+ env_framestack=1
1199
+ pixel_format=CHW
1200
+ use_record_episode_statistics=False
1201
+ with_wandb=False
1202
+ wandb_user=None
1203
+ wandb_project=sample_factory
1204
+ wandb_group=None
1205
+ wandb_job_type=SF
1206
+ wandb_tags=[]
1207
+ with_pbt=False
1208
+ pbt_mix_policies_in_one_env=True
1209
+ pbt_period_env_steps=5000000
1210
+ pbt_start_mutation=20000000
1211
+ pbt_replace_fraction=0.3
1212
+ pbt_mutation_rate=0.15
1213
+ pbt_replace_reward_gap=0.1
1214
+ pbt_replace_reward_gap_absolute=1e-06
1215
+ pbt_optimize_gamma=False
1216
+ pbt_target_objective=true_objective
1217
+ pbt_perturb_min=1.1
1218
+ pbt_perturb_max=1.5
1219
+ num_agents=-1
1220
+ num_humans=0
1221
+ num_bots=-1
1222
+ start_bot_difficulty=None
1223
+ timelimit=None
1224
+ res_w=128
1225
+ res_h=72
1226
+ wide_aspect_ratio=False
1227
+ eval_env_frameskip=1
1228
+ fps=35
1229
+ command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000
1230
+ cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000}
1231
+ git_hash=unknown
1232
+ git_repo_name=not a git repository
1233
+ [2025-02-14 07:48:03,694][00436] Saving configuration to /content/train_dir/default_experiment/config.json...
1234
+ [2025-02-14 07:48:03,696][00436] Rollout worker 0 uses device cpu
1235
+ [2025-02-14 07:48:03,699][00436] Rollout worker 1 uses device cpu
1236
+ [2025-02-14 07:48:03,701][00436] Rollout worker 2 uses device cpu
1237
+ [2025-02-14 07:48:03,702][00436] Rollout worker 3 uses device cpu
1238
+ [2025-02-14 07:48:03,703][00436] Rollout worker 4 uses device cpu
1239
+ [2025-02-14 07:48:03,710][00436] Rollout worker 5 uses device cpu
1240
+ [2025-02-14 07:48:03,712][00436] Rollout worker 6 uses device cpu
1241
+ [2025-02-14 07:48:03,713][00436] Rollout worker 7 uses device cpu
1242
+ [2025-02-14 07:48:03,787][00436] Using GPUs [0] for process 0 (actually maps to GPUs [0])
1243
+ [2025-02-14 07:48:03,790][00436] InferenceWorker_p0-w0: min num requests: 2
1244
+ [2025-02-14 07:48:03,821][00436] Starting all processes...
1245
+ [2025-02-14 07:48:03,822][00436] Starting process learner_proc0
1246
+ [2025-02-14 07:48:03,886][00436] Starting all processes...
1247
+ [2025-02-14 07:48:03,898][00436] Starting process inference_proc0-0
1248
+ [2025-02-14 07:48:03,898][00436] Starting process rollout_proc0
1249
+ [2025-02-14 07:48:03,899][00436] Starting process rollout_proc1
1250
+ [2025-02-14 07:48:03,899][00436] Starting process rollout_proc2
1251
+ [2025-02-14 07:48:03,900][00436] Starting process rollout_proc3
1252
+ [2025-02-14 07:48:03,900][00436] Starting process rollout_proc4
1253
+ [2025-02-14 07:48:03,901][00436] Starting process rollout_proc5
1254
+ [2025-02-14 07:48:03,903][00436] Starting process rollout_proc6
1255
+ [2025-02-14 07:48:03,903][00436] Starting process rollout_proc7
1256
+ [2025-02-14 07:48:19,194][13629] Worker 4 uses CPU cores [0]
1257
+ [2025-02-14 07:48:19,300][13627] Worker 2 uses CPU cores [0]
1258
+ [2025-02-14 07:48:19,403][13631] Worker 6 uses CPU cores [0]
1259
+ [2025-02-14 07:48:19,410][13630] Worker 5 uses CPU cores [1]
1260
+ [2025-02-14 07:48:19,419][13626] Worker 1 uses CPU cores [1]
1261
+ [2025-02-14 07:48:19,511][13607] Using GPUs [0] for process 0 (actually maps to GPUs [0])
1262
+ [2025-02-14 07:48:19,512][13607] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
1263
+ [2025-02-14 07:48:19,524][13632] Worker 7 uses CPU cores [1]
1264
+ [2025-02-14 07:48:19,540][13624] Using GPUs [0] for process 0 (actually maps to GPUs [0])
1265
+ [2025-02-14 07:48:19,541][13624] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
1266
+ [2025-02-14 07:48:19,565][13624] Num visible devices: 1
1267
+ [2025-02-14 07:48:19,566][13607] Num visible devices: 1
1268
+ [2025-02-14 07:48:19,575][13628] Worker 3 uses CPU cores [1]
1269
+ [2025-02-14 07:48:19,579][13607] Starting seed is not provided
1270
+ [2025-02-14 07:48:19,579][13607] Using GPUs [0] for process 0 (actually maps to GPUs [0])
1271
+ [2025-02-14 07:48:19,579][13607] Initializing actor-critic model on device cuda:0
1272
+ [2025-02-14 07:48:19,580][13607] RunningMeanStd input shape: (3, 72, 128)
1273
+ [2025-02-14 07:48:19,581][13607] RunningMeanStd input shape: (1,)
1274
+ [2025-02-14 07:48:19,592][13625] Worker 0 uses CPU cores [0]
1275
+ [2025-02-14 07:48:19,600][13607] ConvEncoder: input_channels=3
1276
+ [2025-02-14 07:48:19,718][13607] Conv encoder output size: 512
1277
+ [2025-02-14 07:48:19,718][13607] Policy head output size: 512
1278
+ [2025-02-14 07:48:19,734][13607] Created Actor Critic model with architecture:
1279
+ [2025-02-14 07:48:19,734][13607] ActorCriticSharedWeights(
1280
+ (obs_normalizer): ObservationNormalizer(
1281
+ (running_mean_std): RunningMeanStdDictInPlace(
1282
+ (running_mean_std): ModuleDict(
1283
+ (obs): RunningMeanStdInPlace()
1284
+ )
1285
+ )
1286
+ )
1287
+ (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
1288
+ (encoder): VizdoomEncoder(
1289
+ (basic_encoder): ConvEncoder(
1290
+ (enc): RecursiveScriptModule(
1291
+ original_name=ConvEncoderImpl
1292
+ (conv_head): RecursiveScriptModule(
1293
+ original_name=Sequential
1294
+ (0): RecursiveScriptModule(original_name=Conv2d)
1295
+ (1): RecursiveScriptModule(original_name=ELU)
1296
+ (2): RecursiveScriptModule(original_name=Conv2d)
1297
+ (3): RecursiveScriptModule(original_name=ELU)
1298
+ (4): RecursiveScriptModule(original_name=Conv2d)
1299
+ (5): RecursiveScriptModule(original_name=ELU)
1300
+ )
1301
+ (mlp_layers): RecursiveScriptModule(
1302
+ original_name=Sequential
1303
+ (0): RecursiveScriptModule(original_name=Linear)
1304
+ (1): RecursiveScriptModule(original_name=ELU)
1305
+ )
1306
+ )
1307
+ )
1308
+ )
1309
+ (core): ModelCoreRNN(
1310
+ (core): GRU(512, 512)
1311
+ )
1312
+ (decoder): MlpDecoder(
1313
+ (mlp): Identity()
1314
+ )
1315
+ (critic_linear): Linear(in_features=512, out_features=1, bias=True)
1316
+ (action_parameterization): ActionParameterizationDefault(
1317
+ (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
1318
+ )
1319
+ )
1320
+ [2025-02-14 07:48:19,858][13607] Using optimizer <class 'torch.optim.adam.Adam'>
1321
+ [2025-02-14 07:48:21,021][13607] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
1322
+ [2025-02-14 07:48:21,059][13607] Loading model from checkpoint
1323
+ [2025-02-14 07:48:21,061][13607] Loaded experiment state at self.train_step=978, self.env_steps=4005888
1324
+ [2025-02-14 07:48:21,061][13607] Initialized policy 0 weights for model version 978
1325
+ [2025-02-14 07:48:21,063][13607] LearnerWorker_p0 finished initialization!
1326
+ [2025-02-14 07:48:21,064][13607] Using GPUs [0] for process 0 (actually maps to GPUs [0])
1327
+ [2025-02-14 07:48:21,190][13624] RunningMeanStd input shape: (3, 72, 128)
1328
+ [2025-02-14 07:48:21,192][13624] RunningMeanStd input shape: (1,)
1329
+ [2025-02-14 07:48:21,204][13624] ConvEncoder: input_channels=3
1330
+ [2025-02-14 07:48:21,305][13624] Conv encoder output size: 512
1331
+ [2025-02-14 07:48:21,305][13624] Policy head output size: 512
1332
+ [2025-02-14 07:48:21,343][00436] Inference worker 0-0 is ready!
1333
+ [2025-02-14 07:48:21,344][00436] All inference workers are ready! Signal rollout workers to start!
1334
+ [2025-02-14 07:48:21,595][13629] Doom resolution: 160x120, resize resolution: (128, 72)
1335
+ [2025-02-14 07:48:21,670][13631] Doom resolution: 160x120, resize resolution: (128, 72)
1336
+ [2025-02-14 07:48:21,668][13628] Doom resolution: 160x120, resize resolution: (128, 72)
1337
+ [2025-02-14 07:48:21,675][13632] Doom resolution: 160x120, resize resolution: (128, 72)
1338
+ [2025-02-14 07:48:21,697][13625] Doom resolution: 160x120, resize resolution: (128, 72)
1339
+ [2025-02-14 07:48:21,723][13630] Doom resolution: 160x120, resize resolution: (128, 72)
1340
+ [2025-02-14 07:48:21,740][13627] Doom resolution: 160x120, resize resolution: (128, 72)
1341
+ [2025-02-14 07:48:21,774][13626] Doom resolution: 160x120, resize resolution: (128, 72)
1342
+ [2025-02-14 07:48:23,118][13628] Decorrelating experience for 0 frames...
1343
+ [2025-02-14 07:48:23,114][13632] Decorrelating experience for 0 frames...
1344
+ [2025-02-14 07:48:23,593][13629] Decorrelating experience for 0 frames...
1345
+ [2025-02-14 07:48:23,632][13631] Decorrelating experience for 0 frames...
1346
+ [2025-02-14 07:48:23,653][13625] Decorrelating experience for 0 frames...
1347
+ [2025-02-14 07:48:23,673][13627] Decorrelating experience for 0 frames...
1348
+ [2025-02-14 07:48:23,779][00436] Heartbeat connected on Batcher_0
1349
+ [2025-02-14 07:48:23,784][00436] Heartbeat connected on LearnerWorker_p0
1350
+ [2025-02-14 07:48:23,814][00436] Heartbeat connected on InferenceWorker_p0-w0
1351
+ [2025-02-14 07:48:24,111][13632] Decorrelating experience for 32 frames...
1352
+ [2025-02-14 07:48:24,201][13630] Decorrelating experience for 0 frames...
1353
+ [2025-02-14 07:48:24,527][13631] Decorrelating experience for 32 frames...
1354
+ [2025-02-14 07:48:24,597][13627] Decorrelating experience for 32 frames...
1355
+ [2025-02-14 07:48:24,801][13628] Decorrelating experience for 32 frames...
1356
+ [2025-02-14 07:48:25,297][13625] Decorrelating experience for 32 frames...
1357
+ [2025-02-14 07:48:25,301][13626] Decorrelating experience for 0 frames...
1358
+ [2025-02-14 07:48:25,473][00436] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4005888. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
1359
+ [2025-02-14 07:48:26,119][13630] Decorrelating experience for 32 frames...
1360
+ [2025-02-14 07:48:26,149][13631] Decorrelating experience for 64 frames...
1361
+ [2025-02-14 07:48:26,748][13628] Decorrelating experience for 64 frames...
1362
+ [2025-02-14 07:48:26,998][13632] Decorrelating experience for 64 frames...
1363
+ [2025-02-14 07:48:27,257][13629] Decorrelating experience for 32 frames...
1364
+ [2025-02-14 07:48:27,730][13625] Decorrelating experience for 64 frames...
1365
+ [2025-02-14 07:48:28,211][13630] Decorrelating experience for 64 frames...
1366
+ [2025-02-14 07:48:28,218][13627] Decorrelating experience for 64 frames...
1367
+ [2025-02-14 07:48:28,639][13628] Decorrelating experience for 96 frames...
1368
+ [2025-02-14 07:48:29,022][00436] Heartbeat connected on RolloutWorker_w3
1369
+ [2025-02-14 07:48:29,138][13631] Decorrelating experience for 96 frames...
1370
+ [2025-02-14 07:48:29,793][00436] Heartbeat connected on RolloutWorker_w6
1371
+ [2025-02-14 07:48:30,180][13629] Decorrelating experience for 64 frames...
1372
+ [2025-02-14 07:48:30,291][13632] Decorrelating experience for 96 frames...
1373
+ [2025-02-14 07:48:30,301][13626] Decorrelating experience for 32 frames...
1374
+ [2025-02-14 07:48:30,421][13625] Decorrelating experience for 96 frames...
1375
+ [2025-02-14 07:48:30,473][00436] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 12.0. Samples: 60. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
1376
+ [2025-02-14 07:48:30,600][00436] Heartbeat connected on RolloutWorker_w7
1377
+ [2025-02-14 07:48:30,889][00436] Heartbeat connected on RolloutWorker_w0
1378
+ [2025-02-14 07:48:31,876][13627] Decorrelating experience for 96 frames...
1379
+ [2025-02-14 07:48:32,485][00436] Heartbeat connected on RolloutWorker_w2
1380
+ [2025-02-14 07:48:32,744][13630] Decorrelating experience for 96 frames...
1381
+ [2025-02-14 07:48:33,274][00436] Heartbeat connected on RolloutWorker_w5
1382
+ [2025-02-14 07:48:33,899][13626] Decorrelating experience for 64 frames...
1383
+ [2025-02-14 07:48:34,563][13607] Signal inference workers to stop experience collection...
1384
+ [2025-02-14 07:48:34,588][13624] InferenceWorker_p0-w0: stopping experience collection
1385
+ [2025-02-14 07:48:35,005][13626] Decorrelating experience for 96 frames...
1386
+ [2025-02-14 07:48:35,045][13629] Decorrelating experience for 96 frames...
1387
+ [2025-02-14 07:48:35,127][00436] Heartbeat connected on RolloutWorker_w4
1388
+ [2025-02-14 07:48:35,173][00436] Heartbeat connected on RolloutWorker_w1
1389
+ [2025-02-14 07:48:35,473][00436] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 179.2. Samples: 1792. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
1390
+ [2025-02-14 07:48:35,485][00436] Avg episode reward: [(0, '5.519')]
1391
+ [2025-02-14 07:48:35,556][13607] Signal inference workers to resume experience collection...
1392
+ [2025-02-14 07:48:35,557][13624] InferenceWorker_p0-w0: resuming experience collection
1393
+ [2025-02-14 07:48:40,474][00436] Fps is (10 sec: 2457.5, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 4030464. Throughput: 0: 439.6. Samples: 6594. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1394
+ [2025-02-14 07:48:40,476][00436] Avg episode reward: [(0, '10.389')]
1395
+ [2025-02-14 07:48:45,001][13624] Updated weights for policy 0, policy_version 988 (0.0027)
1396
+ [2025-02-14 07:48:45,473][00436] Fps is (10 sec: 4096.0, 60 sec: 2048.0, 300 sec: 2048.0). Total num frames: 4046848. Throughput: 0: 561.5. Samples: 11230. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
1397
+ [2025-02-14 07:48:45,480][00436] Avg episode reward: [(0, '14.393')]
1398
+ [2025-02-14 07:48:50,473][00436] Fps is (10 sec: 4096.1, 60 sec: 2621.4, 300 sec: 2621.4). Total num frames: 4071424. Throughput: 0: 590.2. Samples: 14756. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1399
+ [2025-02-14 07:48:50,478][00436] Avg episode reward: [(0, '16.854')]
1400
+ [2025-02-14 07:48:54,104][13624] Updated weights for policy 0, policy_version 998 (0.0029)
1401
+ [2025-02-14 07:48:55,474][00436] Fps is (10 sec: 4505.4, 60 sec: 2867.2, 300 sec: 2867.2). Total num frames: 4091904. Throughput: 0: 706.3. Samples: 21188. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1402
+ [2025-02-14 07:48:55,478][00436] Avg episode reward: [(0, '17.677')]
1403
+ [2025-02-14 07:49:00,473][00436] Fps is (10 sec: 3276.8, 60 sec: 2808.7, 300 sec: 2808.7). Total num frames: 4104192. Throughput: 0: 737.3. Samples: 25804. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
1404
+ [2025-02-14 07:49:00,478][00436] Avg episode reward: [(0, '19.975')]
1405
+ [2025-02-14 07:49:05,091][13624] Updated weights for policy 0, policy_version 1008 (0.0021)
1406
+ [2025-02-14 07:49:05,473][00436] Fps is (10 sec: 3686.6, 60 sec: 3072.0, 300 sec: 3072.0). Total num frames: 4128768. Throughput: 0: 731.0. Samples: 29238. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
1407
+ [2025-02-14 07:49:05,479][00436] Avg episode reward: [(0, '22.568')]
1408
+ [2025-02-14 07:49:10,473][00436] Fps is (10 sec: 4505.6, 60 sec: 3185.8, 300 sec: 3185.8). Total num frames: 4149248. Throughput: 0: 804.5. Samples: 36204. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1409
+ [2025-02-14 07:49:10,477][00436] Avg episode reward: [(0, '25.940')]
1410
+ [2025-02-14 07:49:15,473][00436] Fps is (10 sec: 3686.4, 60 sec: 3194.9, 300 sec: 3194.9). Total num frames: 4165632. Throughput: 0: 907.2. Samples: 40886. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
1411
+ [2025-02-14 07:49:15,480][00436] Avg episode reward: [(0, '26.159')]
1412
+ [2025-02-14 07:49:16,080][13624] Updated weights for policy 0, policy_version 1018 (0.0026)
1413
+ [2025-02-14 07:49:20,473][00436] Fps is (10 sec: 4096.0, 60 sec: 3351.3, 300 sec: 3351.3). Total num frames: 4190208. Throughput: 0: 946.0. Samples: 44364. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1414
+ [2025-02-14 07:49:20,482][00436] Avg episode reward: [(0, '28.399')]
1415
+ [2025-02-14 07:49:20,489][13607] Saving new best policy, reward=28.399!
1416
+ [2025-02-14 07:49:24,782][13624] Updated weights for policy 0, policy_version 1028 (0.0015)
1417
+ [2025-02-14 07:49:25,476][00436] Fps is (10 sec: 4504.4, 60 sec: 3413.2, 300 sec: 3413.2). Total num frames: 4210688. Throughput: 0: 993.7. Samples: 51312. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1418
+ [2025-02-14 07:49:25,484][00436] Avg episode reward: [(0, '29.158')]
1419
+ [2025-02-14 07:49:25,486][13607] Saving new best policy, reward=29.158!
1420
+ [2025-02-14 07:49:30,473][00436] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3339.8). Total num frames: 4222976. Throughput: 0: 987.0. Samples: 55646. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1421
+ [2025-02-14 07:49:30,475][00436] Avg episode reward: [(0, '27.438')]
1422
+ [2025-02-14 07:49:35,473][00436] Fps is (10 sec: 3687.4, 60 sec: 4027.7, 300 sec: 3452.3). Total num frames: 4247552. Throughput: 0: 983.4. Samples: 59008. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
1423
+ [2025-02-14 07:49:35,480][00436] Avg episode reward: [(0, '26.877')]
1424
+ [2025-02-14 07:49:35,970][13624] Updated weights for policy 0, policy_version 1038 (0.0012)
1425
+ [2025-02-14 07:49:40,474][00436] Fps is (10 sec: 4505.4, 60 sec: 3959.5, 300 sec: 3495.2). Total num frames: 4268032. Throughput: 0: 993.6. Samples: 65898. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1426
+ [2025-02-14 07:49:40,476][00436] Avg episode reward: [(0, '25.331')]
1427
+ [2025-02-14 07:49:45,473][00436] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3481.6). Total num frames: 4284416. Throughput: 0: 993.7. Samples: 70522. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1428
+ [2025-02-14 07:49:45,476][00436] Avg episode reward: [(0, '25.188')]
1429
+ [2025-02-14 07:49:46,752][13624] Updated weights for policy 0, policy_version 1048 (0.0015)
1430
+ [2025-02-14 07:49:50,473][00436] Fps is (10 sec: 4096.2, 60 sec: 3959.5, 300 sec: 3565.9). Total num frames: 4308992. Throughput: 0: 996.2. Samples: 74068. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
1431
+ [2025-02-14 07:49:50,478][00436] Avg episode reward: [(0, '24.083')]
1432
+ [2025-02-14 07:49:55,476][00436] Fps is (10 sec: 4504.5, 60 sec: 3959.3, 300 sec: 3595.3). Total num frames: 4329472. Throughput: 0: 998.6. Samples: 81144. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
1433
+ [2025-02-14 07:49:55,480][00436] Avg episode reward: [(0, '25.341')]
1434
+ [2025-02-14 07:49:56,023][13624] Updated weights for policy 0, policy_version 1058 (0.0014)
1435
+ [2025-02-14 07:50:00,477][00436] Fps is (10 sec: 3685.1, 60 sec: 4027.5, 300 sec: 3578.5). Total num frames: 4345856. Throughput: 0: 992.5. Samples: 85552. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1436
+ [2025-02-14 07:50:00,486][00436] Avg episode reward: [(0, '25.838')]
1437
+ [2025-02-14 07:50:00,498][13607] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001061_4345856.pth...
1438
+ [2025-02-14 07:50:00,627][13607] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000894_3661824.pth
1439
+ [2025-02-14 07:50:05,473][00436] Fps is (10 sec: 3687.3, 60 sec: 3959.5, 300 sec: 3604.5). Total num frames: 4366336. Throughput: 0: 989.3. Samples: 88882. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1440
+ [2025-02-14 07:50:05,480][00436] Avg episode reward: [(0, '25.862')]
1441
+ [2025-02-14 07:50:06,789][13624] Updated weights for policy 0, policy_version 1068 (0.0016)
1442
+ [2025-02-14 07:50:10,475][00436] Fps is (10 sec: 4506.4, 60 sec: 4027.6, 300 sec: 3666.8). Total num frames: 4390912. Throughput: 0: 987.1. Samples: 95732. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1443
+ [2025-02-14 07:50:10,478][00436] Avg episode reward: [(0, '27.260')]
1444
+ [2025-02-14 07:50:15,473][00436] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3611.9). Total num frames: 4403200. Throughput: 0: 994.1. Samples: 100382. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1445
+ [2025-02-14 07:50:15,480][00436] Avg episode reward: [(0, '26.786')]
1446
+ [2025-02-14 07:50:17,519][13624] Updated weights for policy 0, policy_version 1078 (0.0023)
1447
+ [2025-02-14 07:50:20,473][00436] Fps is (10 sec: 3687.0, 60 sec: 3959.5, 300 sec: 3668.6). Total num frames: 4427776. Throughput: 0: 997.2. Samples: 103884. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1448
+ [2025-02-14 07:50:20,478][00436] Avg episode reward: [(0, '26.185')]
1449
+ [2025-02-14 07:50:25,473][00436] Fps is (10 sec: 4505.6, 60 sec: 3959.6, 300 sec: 3686.4). Total num frames: 4448256. Throughput: 0: 996.1. Samples: 110720. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1450
+ [2025-02-14 07:50:25,479][00436] Avg episode reward: [(0, '25.814')]
1451
+ [2025-02-14 07:50:27,557][13624] Updated weights for policy 0, policy_version 1088 (0.0013)
1452
+ [2025-02-14 07:50:30,473][00436] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3637.2). Total num frames: 4460544. Throughput: 0: 991.2. Samples: 115124. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1453
+ [2025-02-14 07:50:30,476][00436] Avg episode reward: [(0, '25.918')]
1454
+ [2025-02-14 07:50:35,473][00436] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3686.4). Total num frames: 4485120. Throughput: 0: 985.7. Samples: 118424. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1455
+ [2025-02-14 07:50:35,476][00436] Avg episode reward: [(0, '24.770')]
1456
+ [2025-02-14 07:50:37,525][13624] Updated weights for policy 0, policy_version 1098 (0.0015)
1457
+ [2025-02-14 07:50:40,473][00436] Fps is (10 sec: 4915.2, 60 sec: 4027.8, 300 sec: 3731.9). Total num frames: 4509696. Throughput: 0: 986.2. Samples: 125520. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1458
+ [2025-02-14 07:50:40,476][00436] Avg episode reward: [(0, '24.364')]
1459
+ [2025-02-14 07:50:45,474][00436] Fps is (10 sec: 3686.3, 60 sec: 3959.4, 300 sec: 3686.4). Total num frames: 4521984. Throughput: 0: 997.4. Samples: 130434. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1460
+ [2025-02-14 07:50:45,482][00436] Avg episode reward: [(0, '24.075')]
1461
+ [2025-02-14 07:50:48,142][13624] Updated weights for policy 0, policy_version 1108 (0.0026)
1462
+ [2025-02-14 07:50:50,473][00436] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3728.8). Total num frames: 4546560. Throughput: 0: 999.6. Samples: 133866. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
1463
+ [2025-02-14 07:50:50,475][00436] Avg episode reward: [(0, '23.896')]
1464
+ [2025-02-14 07:50:55,473][00436] Fps is (10 sec: 4915.4, 60 sec: 4027.9, 300 sec: 3768.3). Total num frames: 4571136. Throughput: 0: 1006.4. Samples: 141018. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1465
+ [2025-02-14 07:50:55,478][00436] Avg episode reward: [(0, '24.519')]
1466
+ [2025-02-14 07:50:57,741][13624] Updated weights for policy 0, policy_version 1118 (0.0015)
1467
+ [2025-02-14 07:51:00,475][00436] Fps is (10 sec: 3685.9, 60 sec: 3959.6, 300 sec: 3726.0). Total num frames: 4583424. Throughput: 0: 1009.6. Samples: 145816. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1468
+ [2025-02-14 07:51:00,482][00436] Avg episode reward: [(0, '25.762')]
1469
+ [2025-02-14 07:51:05,473][00436] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3763.2). Total num frames: 4608000. Throughput: 0: 1009.6. Samples: 149316. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
1470
+ [2025-02-14 07:51:05,476][00436] Avg episode reward: [(0, '26.854')]
1471
+ [2025-02-14 07:51:07,433][13624] Updated weights for policy 0, policy_version 1128 (0.0021)
1472
+ [2025-02-14 07:51:10,473][00436] Fps is (10 sec: 4915.9, 60 sec: 4027.8, 300 sec: 3798.1). Total num frames: 4632576. Throughput: 0: 1016.5. Samples: 156464. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1473
+ [2025-02-14 07:51:10,475][00436] Avg episode reward: [(0, '26.332')]
1474
+ [2025-02-14 07:51:15,473][00436] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3758.7). Total num frames: 4644864. Throughput: 0: 1027.6. Samples: 161368. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1475
+ [2025-02-14 07:51:15,476][00436] Avg episode reward: [(0, '25.744')]
1476
+ [2025-02-14 07:51:18,084][13624] Updated weights for policy 0, policy_version 1138 (0.0020)
1477
+ [2025-02-14 07:51:20,474][00436] Fps is (10 sec: 3686.3, 60 sec: 4027.7, 300 sec: 3791.7). Total num frames: 4669440. Throughput: 0: 1031.1. Samples: 164824. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1478
+ [2025-02-14 07:51:20,478][00436] Avg episode reward: [(0, '26.056')]
1479
+ [2025-02-14 07:51:25,473][00436] Fps is (10 sec: 4915.2, 60 sec: 4096.0, 300 sec: 3822.9). Total num frames: 4694016. Throughput: 0: 1032.4. Samples: 171980. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1480
+ [2025-02-14 07:51:25,475][00436] Avg episode reward: [(0, '24.355')]
1481
+ [2025-02-14 07:51:27,589][13624] Updated weights for policy 0, policy_version 1148 (0.0018)
1482
+ [2025-02-14 07:51:30,477][00436] Fps is (10 sec: 3685.2, 60 sec: 4095.8, 300 sec: 3786.0). Total num frames: 4706304. Throughput: 0: 1028.2. Samples: 176708. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
1483
+ [2025-02-14 07:51:30,482][00436] Avg episode reward: [(0, '24.539')]
1484
+ [2025-02-14 07:51:35,473][00436] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 3815.7). Total num frames: 4730880. Throughput: 0: 1027.6. Samples: 180106. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
1485
+ [2025-02-14 07:51:35,476][00436] Avg episode reward: [(0, '24.743')]
1486
+ [2025-02-14 07:51:37,471][13624] Updated weights for policy 0, policy_version 1158 (0.0018)
1487
+ [2025-02-14 07:51:40,474][00436] Fps is (10 sec: 4916.7, 60 sec: 4096.0, 300 sec: 3843.9). Total num frames: 4755456. Throughput: 0: 1028.2. Samples: 187286. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
1488
+ [2025-02-14 07:51:40,477][00436] Avg episode reward: [(0, '26.191')]
1489
+ [2025-02-14 07:51:45,473][00436] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 3809.3). Total num frames: 4767744. Throughput: 0: 1028.9. Samples: 192116. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
1490
+ [2025-02-14 07:51:45,476][00436] Avg episode reward: [(0, '26.387')]
1491
+ [2025-02-14 07:51:48,010][13624] Updated weights for policy 0, policy_version 1168 (0.0017)
1492
+ [2025-02-14 07:51:50,473][00436] Fps is (10 sec: 3686.6, 60 sec: 4096.0, 300 sec: 3836.3). Total num frames: 4792320. Throughput: 0: 1029.6. Samples: 195648. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1493
+ [2025-02-14 07:51:50,476][00436] Avg episode reward: [(0, '26.799')]
1494
+ [2025-02-14 07:51:55,473][00436] Fps is (10 sec: 4915.2, 60 sec: 4096.0, 300 sec: 3861.9). Total num frames: 4816896. Throughput: 0: 1028.6. Samples: 202750. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
1495
+ [2025-02-14 07:51:55,477][00436] Avg episode reward: [(0, '26.529')]
1496
+ [2025-02-14 07:51:57,355][13624] Updated weights for policy 0, policy_version 1178 (0.0024)
1497
+ [2025-02-14 07:52:00,473][00436] Fps is (10 sec: 3686.4, 60 sec: 4096.1, 300 sec: 3829.3). Total num frames: 4829184. Throughput: 0: 1025.3. Samples: 207506. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1498
+ [2025-02-14 07:52:00,478][00436] Avg episode reward: [(0, '27.132')]
1499
+ [2025-02-14 07:52:00,518][13607] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001180_4833280.pth...
1500
+ [2025-02-14 07:52:00,666][13607] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth
1501
+ [2025-02-14 07:52:05,473][00436] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 3854.0). Total num frames: 4853760. Throughput: 0: 1021.8. Samples: 210804. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
1502
+ [2025-02-14 07:52:05,476][00436] Avg episode reward: [(0, '27.476')]
1503
+ [2025-02-14 07:52:07,449][13624] Updated weights for policy 0, policy_version 1188 (0.0027)
1504
+ [2025-02-14 07:52:10,473][00436] Fps is (10 sec: 4915.2, 60 sec: 4096.0, 300 sec: 3877.5). Total num frames: 4878336. Throughput: 0: 1019.9. Samples: 217874. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1505
+ [2025-02-14 07:52:10,476][00436] Avg episode reward: [(0, '27.026')]
1506
+ [2025-02-14 07:52:15,473][00436] Fps is (10 sec: 4096.0, 60 sec: 4164.3, 300 sec: 3864.5). Total num frames: 4894720. Throughput: 0: 1020.7. Samples: 222634. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
1507
+ [2025-02-14 07:52:15,477][00436] Avg episode reward: [(0, '26.811')]
1508
+ [2025-02-14 07:52:18,001][13624] Updated weights for policy 0, policy_version 1198 (0.0015)
1509
+ [2025-02-14 07:52:20,473][00436] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 3869.4). Total num frames: 4915200. Throughput: 0: 1023.9. Samples: 226182. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
1510
+ [2025-02-14 07:52:20,483][00436] Avg episode reward: [(0, '28.712')]
1511
+ [2025-02-14 07:52:25,475][00436] Fps is (10 sec: 4504.7, 60 sec: 4095.9, 300 sec: 3891.2). Total num frames: 4939776. Throughput: 0: 1023.5. Samples: 233344. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
1512
+ [2025-02-14 07:52:25,482][00436] Avg episode reward: [(0, '29.438')]
1513
+ [2025-02-14 07:52:25,487][13607] Saving new best policy, reward=29.438!
1514
+ [2025-02-14 07:52:27,425][13624] Updated weights for policy 0, policy_version 1208 (0.0013)
1515
+ [2025-02-14 07:52:30,473][00436] Fps is (10 sec: 4096.0, 60 sec: 4164.5, 300 sec: 3878.7). Total num frames: 4956160. Throughput: 0: 1018.7. Samples: 237956. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
1516
+ [2025-02-14 07:52:30,479][00436] Avg episode reward: [(0, '29.554')]
1517
+ [2025-02-14 07:52:30,492][13607] Saving new best policy, reward=29.554!
1518
+ [2025-02-14 07:52:35,473][00436] Fps is (10 sec: 3687.1, 60 sec: 4096.0, 300 sec: 3883.0). Total num frames: 4976640. Throughput: 0: 1014.8. Samples: 241316. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1519
+ [2025-02-14 07:52:35,480][00436] Avg episode reward: [(0, '29.219')]
1520
+ [2025-02-14 07:52:37,506][13624] Updated weights for policy 0, policy_version 1218 (0.0022)
1521
+ [2025-02-14 07:52:40,473][00436] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 3903.2). Total num frames: 5001216. Throughput: 0: 1016.5. Samples: 248492. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1522
+ [2025-02-14 07:52:40,476][00436] Avg episode reward: [(0, '28.054')]
1523
+ [2025-02-14 07:52:41,361][13607] Stopping Batcher_0...
1524
+ [2025-02-14 07:52:41,365][00436] Component Batcher_0 stopped!
1525
+ [2025-02-14 07:52:41,367][13607] Loop batcher_evt_loop terminating...
1526
+ [2025-02-14 07:52:41,373][13607] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001222_5005312.pth...
1527
+ [2025-02-14 07:52:41,467][13624] Weights refcount: 2 0
1528
+ [2025-02-14 07:52:41,479][00436] Component InferenceWorker_p0-w0 stopped!
1529
+ [2025-02-14 07:52:41,483][13624] Stopping InferenceWorker_p0-w0...
1530
+ [2025-02-14 07:52:41,483][13624] Loop inference_proc0-0_evt_loop terminating...
1531
+ [2025-02-14 07:52:41,548][13607] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001061_4345856.pth
1532
+ [2025-02-14 07:52:41,575][13607] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001222_5005312.pth...
1533
+ [2025-02-14 07:52:41,827][00436] Component LearnerWorker_p0 stopped!
1534
+ [2025-02-14 07:52:41,833][13607] Stopping LearnerWorker_p0...
1535
+ [2025-02-14 07:52:41,833][13607] Loop learner_proc0_evt_loop terminating...
1536
+ [2025-02-14 07:52:42,072][00436] Component RolloutWorker_w5 stopped!
1537
+ [2025-02-14 07:52:42,078][13630] Stopping RolloutWorker_w5...
1538
+ [2025-02-14 07:52:42,083][00436] Component RolloutWorker_w7 stopped!
1539
+ [2025-02-14 07:52:42,087][13632] Stopping RolloutWorker_w7...
1540
+ [2025-02-14 07:52:42,094][00436] Component RolloutWorker_w3 stopped!
1541
+ [2025-02-14 07:52:42,098][13628] Stopping RolloutWorker_w3...
1542
+ [2025-02-14 07:52:42,099][13628] Loop rollout_proc3_evt_loop terminating...
1543
+ [2025-02-14 07:52:42,100][13630] Loop rollout_proc5_evt_loop terminating...
1544
+ [2025-02-14 07:52:42,112][00436] Component RolloutWorker_w1 stopped!
1545
+ [2025-02-14 07:52:42,115][13626] Stopping RolloutWorker_w1...
1546
+ [2025-02-14 07:52:42,116][13626] Loop rollout_proc1_evt_loop terminating...
1547
+ [2025-02-14 07:52:42,107][13632] Loop rollout_proc7_evt_loop terminating...
1548
+ [2025-02-14 07:52:42,270][00436] Component RolloutWorker_w0 stopped!
1549
+ [2025-02-14 07:52:42,270][13625] Stopping RolloutWorker_w0...
1550
+ [2025-02-14 07:52:42,276][13625] Loop rollout_proc0_evt_loop terminating...
1551
+ [2025-02-14 07:52:42,326][00436] Component RolloutWorker_w2 stopped!
1552
+ [2025-02-14 07:52:42,334][13627] Stopping RolloutWorker_w2...
1553
+ [2025-02-14 07:52:42,335][13627] Loop rollout_proc2_evt_loop terminating...
1554
+ [2025-02-14 07:52:42,437][13631] Stopping RolloutWorker_w6...
1555
+ [2025-02-14 07:52:42,437][00436] Component RolloutWorker_w6 stopped!
1556
+ [2025-02-14 07:52:42,443][13629] Stopping RolloutWorker_w4...
1557
+ [2025-02-14 07:52:42,443][13629] Loop rollout_proc4_evt_loop terminating...
1558
+ [2025-02-14 07:52:42,443][00436] Component RolloutWorker_w4 stopped!
1559
+ [2025-02-14 07:52:42,446][00436] Waiting for process learner_proc0 to stop...
1560
+ [2025-02-14 07:52:42,449][13631] Loop rollout_proc6_evt_loop terminating...
1561
+ [2025-02-14 07:52:44,115][00436] Waiting for process inference_proc0-0 to join...
1562
+ [2025-02-14 07:52:44,174][00436] Waiting for process rollout_proc0 to join...
1563
+ [2025-02-14 07:52:46,466][00436] Waiting for process rollout_proc1 to join...
1564
+ [2025-02-14 07:52:46,506][00436] Waiting for process rollout_proc2 to join...
1565
+ [2025-02-14 07:52:46,509][00436] Waiting for process rollout_proc3 to join...
1566
+ [2025-02-14 07:52:46,511][00436] Waiting for process rollout_proc4 to join...
1567
+ [2025-02-14 07:52:46,515][00436] Waiting for process rollout_proc5 to join...
1568
+ [2025-02-14 07:52:46,516][00436] Waiting for process rollout_proc6 to join...
1569
+ [2025-02-14 07:52:46,518][00436] Waiting for process rollout_proc7 to join...
1570
+ [2025-02-14 07:52:46,522][00436] Batcher 0 profile tree view:
1571
+ batching: 6.1311, releasing_batches: 0.0063
1572
+ [2025-02-14 07:52:46,523][00436] InferenceWorker_p0-w0 profile tree view:
1573
+ wait_policy: 0.0000
1574
+ wait_policy_total: 103.2326
1575
+ update_model: 2.0046
1576
+ weight_update: 0.0022
1577
+ one_step: 0.0104
1578
+ handle_policy_step: 144.4081
1579
+ deserialize: 3.4548, stack: 0.7423, obs_to_device_normalize: 30.3564, forward: 74.6529, send_messages: 7.1637
1580
+ prepare_outputs: 22.0848
1581
+ to_cpu: 13.6550
1582
+ [2025-02-14 07:52:46,524][00436] Learner 0 profile tree view:
1583
+ misc: 0.0009, prepare_batch: 4.2370
1584
+ train: 20.0195
1585
+ epoch_init: 0.0012, minibatch_init: 0.0014, losses_postprocess: 0.1660, kl_divergence: 0.1880, after_optimizer: 0.8447
1586
+ calculate_losses: 6.5899
1587
+ losses_init: 0.0008, forward_head: 0.6238, bptt_initial: 4.1340, tail: 0.3297, advantages_returns: 0.0705, losses: 0.8757
1588
+ bptt: 0.4848
1589
+ bptt_forward_core: 0.4517
1590
+ update: 12.0735
1591
+ clip: 0.2249
1592
+ [2025-02-14 07:52:46,526][00436] RolloutWorker_w0 profile tree view:
1593
+ wait_for_trajectories: 0.0736, enqueue_policy_requests: 22.8663, env_step: 200.0027, overhead: 2.9398, complete_rollouts: 2.0251
1594
+ save_policy_outputs: 4.4570
1595
+ split_output_tensors: 1.7724
1596
+ [2025-02-14 07:52:46,527][00436] RolloutWorker_w7 profile tree view:
1597
+ wait_for_trajectories: 0.0737, enqueue_policy_requests: 24.7789, env_step: 198.5096, overhead: 2.8235, complete_rollouts: 1.8543
1598
+ save_policy_outputs: 4.2478
1599
+ split_output_tensors: 1.7220
1600
+ [2025-02-14 07:52:46,528][00436] Loop Runner_EvtLoop terminating...
1601
+ [2025-02-14 07:52:46,530][00436] Runner profile tree view:
1602
+ main_loop: 282.7091
1603
+ [2025-02-14 07:52:46,531][00436] Collected {0: 5005312}, FPS: 3535.2
1604
+ [2025-02-14 07:54:33,274][00436] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
1605
+ [2025-02-14 07:54:33,276][00436] Overriding arg 'num_workers' with value 1 passed from command line
1606
+ [2025-02-14 07:54:33,278][00436] Adding new argument 'no_render'=True that is not in the saved config file!
1607
+ [2025-02-14 07:54:33,279][00436] Adding new argument 'save_video'=True that is not in the saved config file!
1608
+ [2025-02-14 07:54:33,281][00436] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
1609
+ [2025-02-14 07:54:33,283][00436] Adding new argument 'video_name'=None that is not in the saved config file!
1610
+ [2025-02-14 07:54:33,284][00436] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
1611
+ [2025-02-14 07:54:33,285][00436] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
1612
+ [2025-02-14 07:54:33,288][00436] Adding new argument 'push_to_hub'=False that is not in the saved config file!
1613
+ [2025-02-14 07:54:33,290][00436] Adding new argument 'hf_repository'=None that is not in the saved config file!
1614
+ [2025-02-14 07:54:33,291][00436] Adding new argument 'policy_index'=0 that is not in the saved config file!
1615
+ [2025-02-14 07:54:33,292][00436] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
1616
+ [2025-02-14 07:54:33,293][00436] Adding new argument 'train_script'=None that is not in the saved config file!
1617
+ [2025-02-14 07:54:33,296][00436] Adding new argument 'enjoy_script'=None that is not in the saved config file!
1618
+ [2025-02-14 07:54:33,298][00436] Using frameskip 1 and render_action_repeat=4 for evaluation
1619
+ [2025-02-14 07:54:33,335][00436] RunningMeanStd input shape: (3, 72, 128)
1620
+ [2025-02-14 07:54:33,337][00436] RunningMeanStd input shape: (1,)
1621
+ [2025-02-14 07:54:33,354][00436] ConvEncoder: input_channels=3
1622
+ [2025-02-14 07:54:33,391][00436] Conv encoder output size: 512
1623
+ [2025-02-14 07:54:33,392][00436] Policy head output size: 512
1624
+ [2025-02-14 07:54:33,413][00436] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001222_5005312.pth...
1625
+ [2025-02-14 07:54:33,841][00436] Num frames 100...
1626
+ [2025-02-14 07:54:33,977][00436] Num frames 200...
1627
+ [2025-02-14 07:54:34,108][00436] Num frames 300...
1628
+ [2025-02-14 07:54:34,257][00436] Num frames 400...
1629
+ [2025-02-14 07:54:34,401][00436] Num frames 500...
1630
+ [2025-02-14 07:54:34,559][00436] Avg episode rewards: #0: 9.760, true rewards: #0: 5.760
1631
+ [2025-02-14 07:54:34,561][00436] Avg episode reward: 9.760, avg true_objective: 5.760
1632
+ [2025-02-14 07:54:34,597][00436] Num frames 600...
1633
+ [2025-02-14 07:54:34,738][00436] Num frames 700...
1634
+ [2025-02-14 07:54:34,873][00436] Num frames 800...
1635
+ [2025-02-14 07:54:35,004][00436] Num frames 900...
1636
+ [2025-02-14 07:54:35,137][00436] Num frames 1000...
1637
+ [2025-02-14 07:54:35,277][00436] Num frames 1100...
1638
+ [2025-02-14 07:54:35,409][00436] Num frames 1200...
1639
+ [2025-02-14 07:54:35,548][00436] Num frames 1300...
1640
+ [2025-02-14 07:54:35,687][00436] Num frames 1400...
1641
+ [2025-02-14 07:54:35,825][00436] Num frames 1500...
1642
+ [2025-02-14 07:54:35,957][00436] Num frames 1600...
1643
+ [2025-02-14 07:54:36,097][00436] Num frames 1700...
1644
+ [2025-02-14 07:54:36,191][00436] Avg episode rewards: #0: 19.640, true rewards: #0: 8.640
1645
+ [2025-02-14 07:54:36,193][00436] Avg episode reward: 19.640, avg true_objective: 8.640
1646
+ [2025-02-14 07:54:36,299][00436] Num frames 1800...
1647
+ [2025-02-14 07:54:36,435][00436] Num frames 1900...
1648
+ [2025-02-14 07:54:36,565][00436] Num frames 2000...
1649
+ [2025-02-14 07:54:36,721][00436] Num frames 2100...
1650
+ [2025-02-14 07:54:36,915][00436] Num frames 2200...
1651
+ [2025-02-14 07:54:37,096][00436] Num frames 2300...
1652
+ [2025-02-14 07:54:37,281][00436] Num frames 2400...
1653
+ [2025-02-14 07:54:37,452][00436] Num frames 2500...
1654
+ [2025-02-14 07:54:37,630][00436] Num frames 2600...
1655
+ [2025-02-14 07:54:37,760][00436] Avg episode rewards: #0: 20.807, true rewards: #0: 8.807
1656
+ [2025-02-14 07:54:37,762][00436] Avg episode reward: 20.807, avg true_objective: 8.807
1657
+ [2025-02-14 07:54:37,870][00436] Num frames 2700...
1658
+ [2025-02-14 07:54:38,039][00436] Num frames 2800...
1659
+ [2025-02-14 07:54:38,232][00436] Num frames 2900...
1660
+ [2025-02-14 07:54:38,424][00436] Num frames 3000...
1661
+ [2025-02-14 07:54:38,609][00436] Num frames 3100...
1662
+ [2025-02-14 07:54:38,793][00436] Num frames 3200...
1663
+ [2025-02-14 07:54:38,986][00436] Num frames 3300...
1664
+ [2025-02-14 07:54:39,130][00436] Num frames 3400...
1665
+ [2025-02-14 07:54:39,268][00436] Num frames 3500...
1666
+ [2025-02-14 07:54:39,403][00436] Num frames 3600...
1667
+ [2025-02-14 07:54:39,531][00436] Num frames 3700...
1668
+ [2025-02-14 07:54:39,665][00436] Num frames 3800...
1669
+ [2025-02-14 07:54:39,766][00436] Avg episode rewards: #0: 22.330, true rewards: #0: 9.580
1670
+ [2025-02-14 07:54:39,767][00436] Avg episode reward: 22.330, avg true_objective: 9.580
1671
+ [2025-02-14 07:54:39,864][00436] Num frames 3900...
1672
+ [2025-02-14 07:54:39,993][00436] Num frames 4000...
1673
+ [2025-02-14 07:54:40,125][00436] Num frames 4100...
1674
+ [2025-02-14 07:54:40,264][00436] Num frames 4200...
1675
+ [2025-02-14 07:54:40,400][00436] Num frames 4300...
1676
+ [2025-02-14 07:54:40,530][00436] Num frames 4400...
1677
+ [2025-02-14 07:54:40,598][00436] Avg episode rewards: #0: 19.816, true rewards: #0: 8.816
1678
+ [2025-02-14 07:54:40,599][00436] Avg episode reward: 19.816, avg true_objective: 8.816
1679
+ [2025-02-14 07:54:40,727][00436] Num frames 4500...
1680
+ [2025-02-14 07:54:40,865][00436] Num frames 4600...
1681
+ [2025-02-14 07:54:41,002][00436] Num frames 4700...
1682
+ [2025-02-14 07:54:41,131][00436] Num frames 4800...
1683
+ [2025-02-14 07:54:41,269][00436] Num frames 4900...
1684
+ [2025-02-14 07:54:41,401][00436] Num frames 5000...
1685
+ [2025-02-14 07:54:41,530][00436] Num frames 5100...
1686
+ [2025-02-14 07:54:41,661][00436] Num frames 5200...
1687
+ [2025-02-14 07:54:41,793][00436] Num frames 5300...
1688
+ [2025-02-14 07:54:41,854][00436] Avg episode rewards: #0: 19.840, true rewards: #0: 8.840
1689
+ [2025-02-14 07:54:41,856][00436] Avg episode reward: 19.840, avg true_objective: 8.840
1690
+ [2025-02-14 07:54:41,991][00436] Num frames 5400...
1691
+ [2025-02-14 07:54:42,124][00436] Num frames 5500...
1692
+ [2025-02-14 07:54:42,265][00436] Num frames 5600...
1693
+ [2025-02-14 07:54:42,400][00436] Num frames 5700...
1694
+ [2025-02-14 07:54:42,535][00436] Num frames 5800...
1695
+ [2025-02-14 07:54:42,667][00436] Num frames 5900...
1696
+ [2025-02-14 07:54:42,797][00436] Num frames 6000...
1697
+ [2025-02-14 07:54:42,932][00436] Num frames 6100...
1698
+ [2025-02-14 07:54:43,073][00436] Num frames 6200...
1699
+ [2025-02-14 07:54:43,212][00436] Num frames 6300...
1700
+ [2025-02-14 07:54:43,343][00436] Num frames 6400...
1701
+ [2025-02-14 07:54:43,431][00436] Avg episode rewards: #0: 21.034, true rewards: #0: 9.177
1702
+ [2025-02-14 07:54:43,432][00436] Avg episode reward: 21.034, avg true_objective: 9.177
1703
+ [2025-02-14 07:54:43,531][00436] Num frames 6500...
1704
+ [2025-02-14 07:54:43,660][00436] Num frames 6600...
1705
+ [2025-02-14 07:54:43,788][00436] Num frames 6700...
1706
+ [2025-02-14 07:54:43,920][00436] Num frames 6800...
1707
+ [2025-02-14 07:54:44,059][00436] Num frames 6900...
1708
+ [2025-02-14 07:54:44,198][00436] Num frames 7000...
1709
+ [2025-02-14 07:54:44,329][00436] Num frames 7100...
1710
+ [2025-02-14 07:54:44,459][00436] Num frames 7200...
1711
+ [2025-02-14 07:54:44,589][00436] Num frames 7300...
1712
+ [2025-02-14 07:54:44,718][00436] Num frames 7400...
1713
+ [2025-02-14 07:54:44,848][00436] Num frames 7500...
1714
+ [2025-02-14 07:54:44,983][00436] Num frames 7600...
1715
+ [2025-02-14 07:54:45,136][00436] Avg episode rewards: #0: 21.340, true rewards: #0: 9.590
1716
+ [2025-02-14 07:54:45,137][00436] Avg episode reward: 21.340, avg true_objective: 9.590
1717
+ [2025-02-14 07:54:45,180][00436] Num frames 7700...
1718
+ [2025-02-14 07:54:45,316][00436] Num frames 7800...
1719
+ [2025-02-14 07:54:45,456][00436] Num frames 7900...
1720
+ [2025-02-14 07:54:45,593][00436] Num frames 8000...
1721
+ [2025-02-14 07:54:45,728][00436] Num frames 8100...
1722
+ [2025-02-14 07:54:45,860][00436] Num frames 8200...
1723
+ [2025-02-14 07:54:45,992][00436] Num frames 8300...
1724
+ [2025-02-14 07:54:46,130][00436] Num frames 8400...
1725
+ [2025-02-14 07:54:46,270][00436] Num frames 8500...
1726
+ [2025-02-14 07:54:46,404][00436] Num frames 8600...
1727
+ [2025-02-14 07:54:46,536][00436] Num frames 8700...
1728
+ [2025-02-14 07:54:46,672][00436] Avg episode rewards: #0: 21.622, true rewards: #0: 9.733
1729
+ [2025-02-14 07:54:46,674][00436] Avg episode reward: 21.622, avg true_objective: 9.733
1730
+ [2025-02-14 07:54:46,731][00436] Num frames 8800...
1731
+ [2025-02-14 07:54:46,865][00436] Num frames 8900...
1732
+ [2025-02-14 07:54:47,000][00436] Num frames 9000...
1733
+ [2025-02-14 07:54:47,140][00436] Num frames 9100...
1734
+ [2025-02-14 07:54:47,281][00436] Num frames 9200...
1735
+ [2025-02-14 07:54:47,421][00436] Num frames 9300...
1736
+ [2025-02-14 07:54:47,555][00436] Num frames 9400...
1737
+ [2025-02-14 07:54:47,686][00436] Num frames 9500...
1738
+ [2025-02-14 07:54:47,823][00436] Num frames 9600...
1739
+ [2025-02-14 07:54:47,959][00436] Num frames 9700...
1740
+ [2025-02-14 07:54:48,100][00436] Num frames 9800...
1741
+ [2025-02-14 07:54:48,244][00436] Num frames 9900...
1742
+ [2025-02-14 07:54:48,387][00436] Num frames 10000...
1743
+ [2025-02-14 07:54:48,527][00436] Num frames 10100...
1744
+ [2025-02-14 07:54:48,591][00436] Avg episode rewards: #0: 22.404, true rewards: #0: 10.104
1745
+ [2025-02-14 07:54:48,594][00436] Avg episode reward: 22.404, avg true_objective: 10.104
1746
+ [2025-02-14 07:55:49,275][00436] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
1747
+ [2025-02-14 07:56:08,753][00436] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+ [2025-02-14 07:56:08,755][00436] Overriding arg 'num_workers' with value 1 passed from command line
+ [2025-02-14 07:56:08,757][00436] Adding new argument 'no_render'=True that is not in the saved config file!
+ [2025-02-14 07:56:08,758][00436] Adding new argument 'save_video'=True that is not in the saved config file!
+ [2025-02-14 07:56:08,761][00436] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+ [2025-02-14 07:56:08,763][00436] Adding new argument 'video_name'=None that is not in the saved config file!
+ [2025-02-14 07:56:08,764][00436] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
+ [2025-02-14 07:56:08,768][00436] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+ [2025-02-14 07:56:08,769][00436] Adding new argument 'push_to_hub'=True that is not in the saved config file!
+ [2025-02-14 07:56:08,770][00436] Adding new argument 'hf_repository'='gyaan/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
+ [2025-02-14 07:56:08,771][00436] Adding new argument 'policy_index'=0 that is not in the saved config file!
+ [2025-02-14 07:56:08,775][00436] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+ [2025-02-14 07:56:08,777][00436] Adding new argument 'train_script'=None that is not in the saved config file!
+ [2025-02-14 07:56:08,778][00436] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+ [2025-02-14 07:56:08,779][00436] Using frameskip 1 and render_action_repeat=4 for evaluation
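
The block above is enjoy-time evaluation: the saved config.json is reloaded and the evaluation-only flags are merged in as overrides. A sketch of the kind of call that produces exactly these overrides, modeled on the Sample Factory VizDoom example; the `parse_vizdoom_cfg` helper and the `sf_examples` import paths are assumptions based on sample-factory 2.x and may differ in other versions:

```python
from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args
from sample_factory.enjoy import enjoy
from sf_examples.vizdoom.doom.doom_params import add_doom_env_args, doom_override_defaults
from sf_examples.vizdoom.train_vizdoom import register_vizdoom_components

def parse_vizdoom_cfg(argv=None, evaluation=False):
    # Standard Sample Factory parsing pattern for the VizDoom examples.
    parser, _ = parse_sf_args(argv=argv, evaluation=evaluation)
    add_doom_env_args(parser)
    doom_override_defaults(parser)
    return parse_full_cfg(parser, argv)

register_vizdoom_components()
# Flag values below are the ones echoed in the log lines above.
cfg = parse_vizdoom_cfg(
    argv=[
        "--env=doom_health_gathering_supreme",
        "--num_workers=1",
        "--no_render",
        "--save_video",
        "--max_num_frames=100000",
        "--max_num_episodes=10",
        "--push_to_hub",
        "--hf_repository=gyaan/rl_course_vizdoom_health_gathering_supreme",
    ],
    evaluation=True,
)
status = enjoy(cfg)  # reloads config.json, applies the overrides, runs 10 episodes
```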
+ [2025-02-14 07:56:08,813][00436] RunningMeanStd input shape: (3, 72, 128)
+ [2025-02-14 07:56:08,815][00436] RunningMeanStd input shape: (1,)
+ [2025-02-14 07:56:08,828][00436] ConvEncoder: input_channels=3
+ [2025-02-14 07:56:08,864][00436] Conv encoder output size: 512
+ [2025-02-14 07:56:08,865][00436] Policy head output size: 512
+ [2025-02-14 07:56:08,883][00436] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001222_5005312.pth...
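
The two `RunningMeanStd input shape` lines report the normalizers created for observations ((3, 72, 128) RGB frames) and for scalar returns ((1,)). A minimal NumPy sketch of the underlying streaming mean/variance idea (Chan et al. parallel update); Sample Factory's own class differs in detail:

```python
import numpy as np

class RunningMeanStd:
    """Streaming per-element mean/variance over batches (Chan et al. update)."""
    def __init__(self, shape, eps=1e-4):
        self.mean = np.zeros(shape, dtype=np.float64)
        self.var = np.ones(shape, dtype=np.float64)
        self.count = eps  # avoids division by zero before the first update

    def update(self, x):
        # x has shape (batch,) + self.mean.shape
        batch_mean, batch_var, n = x.mean(axis=0), x.var(axis=0), x.shape[0]
        delta = batch_mean - self.mean
        total = self.count + n
        self.mean = self.mean + delta * n / total
        m2 = self.var * self.count + batch_var * n + delta**2 * self.count * n / total
        self.var = m2 / total
        self.count = total

obs_norm = RunningMeanStd(shape=(3, 72, 128))  # matches the logged input shape
ret_norm = RunningMeanStd(shape=(1,))
```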
+ [2025-02-14 07:56:09,360][00436] Num frames 100...
+ [2025-02-14 07:56:09,497][00436] Num frames 200...
+ [2025-02-14 07:56:09,624][00436] Num frames 300...
+ [2025-02-14 07:56:09,761][00436] Num frames 400...
+ [2025-02-14 07:56:09,899][00436] Num frames 500...
+ [2025-02-14 07:56:10,028][00436] Num frames 600...
+ [2025-02-14 07:56:10,169][00436] Num frames 700...
+ [2025-02-14 07:56:10,310][00436] Num frames 800...
+ [2025-02-14 07:56:10,441][00436] Num frames 900...
+ [2025-02-14 07:56:10,571][00436] Num frames 1000...
+ [2025-02-14 07:56:10,705][00436] Num frames 1100...
+ [2025-02-14 07:56:10,854][00436] Num frames 1200...
+ [2025-02-14 07:56:10,986][00436] Num frames 1300...
+ [2025-02-14 07:56:11,114][00436] Num frames 1400...
+ [2025-02-14 07:56:11,249][00436] Num frames 1500...
+ [2025-02-14 07:56:11,386][00436] Num frames 1600...
+ [2025-02-14 07:56:11,439][00436] Avg episode rewards: #0: 45.000, true rewards: #0: 16.000
+ [2025-02-14 07:56:11,440][00436] Avg episode reward: 45.000, avg true_objective: 16.000
+ [2025-02-14 07:56:11,570][00436] Num frames 1700...
+ [2025-02-14 07:56:11,699][00436] Num frames 1800...
+ [2025-02-14 07:56:11,827][00436] Num frames 1900...
+ [2025-02-14 07:56:11,973][00436] Num frames 2000...
+ [2025-02-14 07:56:12,051][00436] Avg episode rewards: #0: 25.580, true rewards: #0: 10.080
+ [2025-02-14 07:56:12,052][00436] Avg episode reward: 25.580, avg true_objective: 10.080
+ [2025-02-14 07:56:12,171][00436] Num frames 2100...
+ [2025-02-14 07:56:12,309][00436] Num frames 2200...
+ [2025-02-14 07:56:12,439][00436] Num frames 2300...
+ [2025-02-14 07:56:12,571][00436] Num frames 2400...
+ [2025-02-14 07:56:12,704][00436] Num frames 2500...
+ [2025-02-14 07:56:12,840][00436] Num frames 2600...
+ [2025-02-14 07:56:12,979][00436] Num frames 2700...
+ [2025-02-14 07:56:13,112][00436] Num frames 2800...
+ [2025-02-14 07:56:13,252][00436] Num frames 2900...
+ [2025-02-14 07:56:13,386][00436] Num frames 3000...
+ [2025-02-14 07:56:13,518][00436] Num frames 3100...
+ [2025-02-14 07:56:13,649][00436] Num frames 3200...
+ [2025-02-14 07:56:13,784][00436] Num frames 3300...
+ [2025-02-14 07:56:13,928][00436] Num frames 3400...
+ [2025-02-14 07:56:14,061][00436] Num frames 3500...
+ [2025-02-14 07:56:14,198][00436] Num frames 3600...
+ [2025-02-14 07:56:14,331][00436] Num frames 3700...
+ [2025-02-14 07:56:14,462][00436] Num frames 3800...
+ [2025-02-14 07:56:14,596][00436] Num frames 3900...
+ [2025-02-14 07:56:14,727][00436] Num frames 4000...
+ [2025-02-14 07:56:14,857][00436] Num frames 4100...
+ [2025-02-14 07:56:14,933][00436] Avg episode rewards: #0: 36.053, true rewards: #0: 13.720
+ [2025-02-14 07:56:14,935][00436] Avg episode reward: 36.053, avg true_objective: 13.720
+ [2025-02-14 07:56:15,042][00436] Num frames 4200...
+ [2025-02-14 07:56:15,177][00436] Num frames 4300...
+ [2025-02-14 07:56:15,315][00436] Num frames 4400...
+ [2025-02-14 07:56:15,445][00436] Num frames 4500...
+ [2025-02-14 07:56:15,590][00436] Num frames 4600...
+ [2025-02-14 07:56:15,724][00436] Num frames 4700...
+ [2025-02-14 07:56:15,854][00436] Num frames 4800...
+ [2025-02-14 07:56:15,937][00436] Avg episode rewards: #0: 30.550, true rewards: #0: 12.050
+ [2025-02-14 07:56:15,939][00436] Avg episode reward: 30.550, avg true_objective: 12.050
+ [2025-02-14 07:56:16,077][00436] Num frames 4900...
+ [2025-02-14 07:56:16,266][00436] Num frames 5000...
+ [2025-02-14 07:56:16,437][00436] Num frames 5100...
+ [2025-02-14 07:56:16,613][00436] Num frames 5200...
+ [2025-02-14 07:56:16,781][00436] Num frames 5300...
+ [2025-02-14 07:56:17,006][00436] Avg episode rewards: #0: 26.392, true rewards: #0: 10.792
+ [2025-02-14 07:56:17,008][00436] Avg episode reward: 26.392, avg true_objective: 10.792
+ [2025-02-14 07:56:17,021][00436] Num frames 5400...
+ [2025-02-14 07:56:17,192][00436] Num frames 5500...
+ [2025-02-14 07:56:17,363][00436] Num frames 5600...
+ [2025-02-14 07:56:17,544][00436] Num frames 5700...
+ [2025-02-14 07:56:17,733][00436] Num frames 5800...
+ [2025-02-14 07:56:17,914][00436] Num frames 5900...
+ [2025-02-14 07:56:18,102][00436] Num frames 6000...
+ [2025-02-14 07:56:18,289][00436] Num frames 6100...
+ [2025-02-14 07:56:18,424][00436] Num frames 6200...
+ [2025-02-14 07:56:18,555][00436] Num frames 6300...
+ [2025-02-14 07:56:18,687][00436] Num frames 6400...
+ [2025-02-14 07:56:18,822][00436] Num frames 6500...
+ [2025-02-14 07:56:18,955][00436] Num frames 6600...
+ [2025-02-14 07:56:19,095][00436] Num frames 6700...
+ [2025-02-14 07:56:19,236][00436] Num frames 6800...
+ [2025-02-14 07:56:19,298][00436] Avg episode rewards: #0: 28.173, true rewards: #0: 11.340
+ [2025-02-14 07:56:19,299][00436] Avg episode reward: 28.173, avg true_objective: 11.340
+ [2025-02-14 07:56:19,425][00436] Num frames 6900...
+ [2025-02-14 07:56:19,553][00436] Num frames 7000...
+ [2025-02-14 07:56:19,684][00436] Num frames 7100...
+ [2025-02-14 07:56:19,815][00436] Num frames 7200...
+ [2025-02-14 07:56:19,957][00436] Num frames 7300...
+ [2025-02-14 07:56:20,104][00436] Num frames 7400...
+ [2025-02-14 07:56:20,243][00436] Num frames 7500...
+ [2025-02-14 07:56:20,374][00436] Num frames 7600...
+ [2025-02-14 07:56:20,512][00436] Num frames 7700...
+ [2025-02-14 07:56:20,647][00436] Num frames 7800...
+ [2025-02-14 07:56:20,780][00436] Num frames 7900...
+ [2025-02-14 07:56:20,913][00436] Num frames 8000...
+ [2025-02-14 07:56:20,983][00436] Avg episode rewards: #0: 27.871, true rewards: #0: 11.443
+ [2025-02-14 07:56:20,985][00436] Avg episode reward: 27.871, avg true_objective: 11.443
+ [2025-02-14 07:56:21,112][00436] Num frames 8100...
+ [2025-02-14 07:56:21,249][00436] Num frames 8200...
+ [2025-02-14 07:56:21,380][00436] Num frames 8300...
+ [2025-02-14 07:56:21,515][00436] Num frames 8400...
+ [2025-02-14 07:56:21,649][00436] Num frames 8500...
+ [2025-02-14 07:56:21,779][00436] Num frames 8600...
+ [2025-02-14 07:56:21,912][00436] Num frames 8700...
+ [2025-02-14 07:56:22,046][00436] Num frames 8800...
+ [2025-02-14 07:56:22,194][00436] Num frames 8900...
+ [2025-02-14 07:56:22,327][00436] Num frames 9000...
+ [2025-02-14 07:56:22,505][00436] Avg episode rewards: #0: 28.237, true rewards: #0: 11.362
+ [2025-02-14 07:56:22,507][00436] Avg episode reward: 28.237, avg true_objective: 11.362
+ [2025-02-14 07:56:22,525][00436] Num frames 9100...
+ [2025-02-14 07:56:22,656][00436] Num frames 9200...
+ [2025-02-14 07:56:22,788][00436] Num frames 9300...
+ [2025-02-14 07:56:22,925][00436] Num frames 9400...
+ [2025-02-14 07:56:23,055][00436] Num frames 9500...
+ [2025-02-14 07:56:23,203][00436] Num frames 9600...
+ [2025-02-14 07:56:23,335][00436] Num frames 9700...
+ [2025-02-14 07:56:23,468][00436] Num frames 9800...
+ [2025-02-14 07:56:23,599][00436] Num frames 9900...
+ [2025-02-14 07:56:23,731][00436] Num frames 10000...
+ [2025-02-14 07:56:23,862][00436] Num frames 10100...
+ [2025-02-14 07:56:23,994][00436] Num frames 10200...
+ [2025-02-14 07:56:24,105][00436] Avg episode rewards: #0: 27.935, true rewards: #0: 11.380
+ [2025-02-14 07:56:24,107][00436] Avg episode reward: 27.935, avg true_objective: 11.380
+ [2025-02-14 07:56:24,194][00436] Num frames 10300...
+ [2025-02-14 07:56:24,330][00436] Num frames 10400...
+ [2025-02-14 07:56:24,462][00436] Num frames 10500...
+ [2025-02-14 07:56:24,591][00436] Num frames 10600...
+ [2025-02-14 07:56:24,722][00436] Num frames 10700...
+ [2025-02-14 07:56:24,857][00436] Num frames 10800...
+ [2025-02-14 07:56:24,990][00436] Num frames 10900...
+ [2025-02-14 07:56:25,126][00436] Num frames 11000...
+ [2025-02-14 07:56:25,279][00436] Num frames 11100...
+ [2025-02-14 07:56:25,413][00436] Num frames 11200...
+ [2025-02-14 07:56:25,548][00436] Num frames 11300...
+ [2025-02-14 07:56:25,685][00436] Num frames 11400...
+ [2025-02-14 07:56:25,816][00436] Num frames 11500...
+ [2025-02-14 07:56:25,949][00436] Num frames 11600...
+ [2025-02-14 07:56:26,076][00436] Num frames 11700...
+ [2025-02-14 07:56:26,216][00436] Num frames 11800...
+ [2025-02-14 07:56:26,357][00436] Num frames 11900...
+ [2025-02-14 07:56:26,463][00436] Avg episode rewards: #0: 29.138, true rewards: #0: 11.938
+ [2025-02-14 07:56:26,466][00436] Avg episode reward: 29.138, avg true_objective: 11.938
+ [2025-02-14 07:57:38,965][00436] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
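
Because `push_to_hub` was set for this second evaluation run, the experiment directory (checkpoints, config.json, replay.mp4, this log) is uploaded to the Hub after the replay is written, which is the upload recorded by this commit. A sketch of the equivalent manual push; `push_to_hf` and its `(dir_path, repo_name)` signature are assumptions based on sample-factory 2.x:

```python
from sample_factory.huggingface.huggingface_utils import push_to_hf

# Upload an existing experiment directory, equivalent to what enjoy() does
# automatically when --push_to_hub is set. Assumes you are already
# authenticated (e.g. via `huggingface-cli login`).
push_to_hf(
    dir_path="/content/train_dir/default_experiment",
    repo_name="gyaan/rl_course_vizdoom_health_gathering_supreme",
)
```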