diff --git "a/sf_log.txt" "b/sf_log.txt"
new file mode 100644
--- /dev/null
+++ "b/sf_log.txt"
@@ -0,0 +1,6462 @@
+[2024-12-28 13:30:50,219][78983] Saving configuration to /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/config.json... +[2024-12-28 13:30:50,228][78983] Rollout worker 0 uses device cpu +[2024-12-28 13:30:50,229][78983] Rollout worker 1 uses device cpu +[2024-12-28 13:30:50,229][78983] Rollout worker 2 uses device cpu +[2024-12-28 13:30:50,230][78983] Rollout worker 3 uses device cpu +[2024-12-28 13:30:50,230][78983] Rollout worker 4 uses device cpu +[2024-12-28 13:30:50,231][78983] Rollout worker 5 uses device cpu +[2024-12-28 13:30:50,231][78983] Rollout worker 6 uses device cpu +[2024-12-28 13:30:50,232][78983] Rollout worker 7 uses device cpu +[2024-12-28 13:30:50,275][78983] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-12-28 13:30:50,276][78983] InferenceWorker_p0-w0: min num requests: 2 +[2024-12-28 13:30:50,291][78983] Starting all processes... +[2024-12-28 13:30:50,291][78983] Starting process learner_proc0 +[2024-12-28 13:30:50,341][78983] Starting all processes... +[2024-12-28 13:30:50,346][78983] Starting process inference_proc0-0 +[2024-12-28 13:30:50,346][78983] Starting process rollout_proc0 +[2024-12-28 13:30:50,347][78983] Starting process rollout_proc1 +[2024-12-28 13:30:50,347][78983] Starting process rollout_proc2 +[2024-12-28 13:30:50,348][78983] Starting process rollout_proc3 +[2024-12-28 13:30:50,348][78983] Starting process rollout_proc4 +[2024-12-28 13:30:50,348][78983] Starting process rollout_proc5 +[2024-12-28 13:30:50,349][78983] Starting process rollout_proc6 +[2024-12-28 13:30:50,349][78983] Starting process rollout_proc7 +[2024-12-28 13:30:51,686][80241] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-12-28 13:30:51,686][80241] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 +[2024-12-28 13:30:51,745][80241] Num visible devices: 1 +[2024-12-28 13:30:51,785][80241] Starting seed is not provided +[2024-12-28 13:30:51,785][80241] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-12-28 13:30:51,785][80241] Initializing actor-critic model on device cuda:0 +[2024-12-28 13:30:51,786][80241] RunningMeanStd input shape: (3, 72, 128) +[2024-12-28 13:30:51,787][80241] RunningMeanStd input shape: (1,) +[2024-12-28 13:30:51,800][80241] ConvEncoder: input_channels=3 +[2024-12-28 13:30:51,821][80262] Worker 6 uses CPU cores [24, 25, 26, 27] +[2024-12-28 13:30:51,822][80263] Worker 5 uses CPU cores [20, 21, 22, 23] +[2024-12-28 13:30:51,822][80259] Worker 2 uses CPU cores [8, 9, 10, 11] +[2024-12-28 13:30:51,878][80255] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-12-28 13:30:51,878][80255] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 +[2024-12-28 13:30:51,897][80260] Worker 4 uses CPU cores [16, 17, 18, 19] +[2024-12-28 13:30:51,899][80255] Num visible devices: 1 +[2024-12-28 13:30:51,899][80257] Worker 1 uses CPU cores [4, 5, 6, 7] +[2024-12-28 13:30:51,918][80261] Worker 7 uses CPU cores [28, 29, 30, 31] +[2024-12-28 13:30:51,935][80258] Worker 3 uses CPU cores [12, 13, 14, 15] +[2024-12-28 13:30:51,944][80256] Worker 0 uses CPU cores [0, 1, 2, 3] +[2024-12-28 13:30:51,949][80241] Conv encoder output size: 512 +[2024-12-28 13:30:51,949][80241] Policy head output size: 512 +[2024-12-28 13:30:51,980][80241] Created Actor Critic model with architecture: +[2024-12-28
13:30:51,981][80241] ActorCriticSharedWeights( + (obs_normalizer): ObservationNormalizer( + (running_mean_std): RunningMeanStdDictInPlace( + (running_mean_std): ModuleDict( + (obs): RunningMeanStdInPlace() + ) + ) + ) + (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) + (encoder): VizdoomEncoder( + (basic_encoder): ConvEncoder( + (enc): RecursiveScriptModule( + original_name=ConvEncoderImpl + (conv_head): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Conv2d) + (1): RecursiveScriptModule(original_name=ELU) + (2): RecursiveScriptModule(original_name=Conv2d) + (3): RecursiveScriptModule(original_name=ELU) + (4): RecursiveScriptModule(original_name=Conv2d) + (5): RecursiveScriptModule(original_name=ELU) + ) + (mlp_layers): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Linear) + (1): RecursiveScriptModule(original_name=ELU) + ) + ) + ) + ) + (core): ModelCoreRNN( + (core): GRU(512, 512) + ) + (decoder): MlpDecoder( + (mlp): Identity() + ) + (critic_linear): Linear(in_features=512, out_features=1, bias=True) + (action_parameterization): ActionParameterizationDefault( + (distribution_linear): Linear(in_features=512, out_features=5, bias=True) + ) +) +[2024-12-28 13:30:52,582][80241] Using optimizer +[2024-12-28 13:30:53,404][80241] No checkpoints found +[2024-12-28 13:30:53,404][80241] Did not load from checkpoint, starting from scratch! +[2024-12-28 13:30:53,405][80241] Initialized policy 0 weights for model version 0 +[2024-12-28 13:30:53,408][80241] LearnerWorker_p0 finished initialization! +[2024-12-28 13:30:53,409][80241] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-12-28 13:30:53,508][80255] RunningMeanStd input shape: (3, 72, 128) +[2024-12-28 13:30:53,508][80255] RunningMeanStd input shape: (1,) +[2024-12-28 13:30:53,515][80255] ConvEncoder: input_channels=3 +[2024-12-28 13:30:53,567][80255] Conv encoder output size: 512 +[2024-12-28 13:30:53,567][80255] Policy head output size: 512 +[2024-12-28 13:30:53,595][78983] Inference worker 0-0 is ready! +[2024-12-28 13:30:53,595][78983] All inference workers are ready! Signal rollout workers to start! +[2024-12-28 13:30:53,615][80259] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 13:30:53,616][80261] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 13:30:53,616][80263] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 13:30:53,616][80258] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 13:30:53,616][80257] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 13:30:53,618][80260] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 13:30:53,618][80262] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 13:30:53,619][80256] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 13:30:53,974][80260] Decorrelating experience for 0 frames... +[2024-12-28 13:30:53,974][80256] Decorrelating experience for 0 frames... +[2024-12-28 13:30:53,974][80262] Decorrelating experience for 0 frames... +[2024-12-28 13:30:53,974][80261] Decorrelating experience for 0 frames... +[2024-12-28 13:30:53,974][80257] Decorrelating experience for 0 frames... +[2024-12-28 13:30:53,974][80258] Decorrelating experience for 0 frames... +[2024-12-28 13:30:53,974][80259] Decorrelating experience for 0 frames... +[2024-12-28 13:30:54,124][80261] Decorrelating experience for 32 frames... 
+[2024-12-28 13:30:54,125][80257] Decorrelating experience for 32 frames... +[2024-12-28 13:30:54,152][80260] Decorrelating experience for 32 frames... +[2024-12-28 13:30:54,152][80256] Decorrelating experience for 32 frames... +[2024-12-28 13:30:54,155][80263] Decorrelating experience for 0 frames... +[2024-12-28 13:30:54,173][80262] Decorrelating experience for 32 frames... +[2024-12-28 13:30:54,323][80263] Decorrelating experience for 32 frames... +[2024-12-28 13:30:54,336][80261] Decorrelating experience for 64 frames... +[2024-12-28 13:30:54,357][80260] Decorrelating experience for 64 frames... +[2024-12-28 13:30:54,361][80256] Decorrelating experience for 64 frames... +[2024-12-28 13:30:54,373][80262] Decorrelating experience for 64 frames... +[2024-12-28 13:30:54,532][80261] Decorrelating experience for 96 frames... +[2024-12-28 13:30:54,550][80257] Decorrelating experience for 64 frames... +[2024-12-28 13:30:54,560][80260] Decorrelating experience for 96 frames... +[2024-12-28 13:30:54,561][80262] Decorrelating experience for 96 frames... +[2024-12-28 13:30:54,708][80256] Decorrelating experience for 96 frames... +[2024-12-28 13:30:54,739][80259] Decorrelating experience for 32 frames... +[2024-12-28 13:30:54,779][80263] Decorrelating experience for 64 frames... +[2024-12-28 13:30:54,898][80257] Decorrelating experience for 96 frames... +[2024-12-28 13:30:54,935][80258] Decorrelating experience for 32 frames... +[2024-12-28 13:30:54,964][80263] Decorrelating experience for 96 frames... +[2024-12-28 13:30:55,084][80259] Decorrelating experience for 64 frames... +[2024-12-28 13:30:55,125][80258] Decorrelating experience for 64 frames... +[2024-12-28 13:30:55,290][80259] Decorrelating experience for 96 frames... +[2024-12-28 13:30:55,326][80258] Decorrelating experience for 96 frames... +[2024-12-28 13:30:55,731][80241] Signal inference workers to stop experience collection... +[2024-12-28 13:30:55,737][80255] InferenceWorker_p0-w0: stopping experience collection +[2024-12-28 13:30:56,778][80241] Signal inference workers to resume experience collection... +[2024-12-28 13:30:56,778][80255] InferenceWorker_p0-w0: resuming experience collection +[2024-12-28 13:30:57,535][78983] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 24576. Throughput: 0: nan. Samples: 2832. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2024-12-28 13:30:57,536][78983] Avg episode reward: [(0, '3.574')] +[2024-12-28 13:30:58,147][80255] Updated weights for policy 0, policy_version 10 (0.0047) +[2024-12-28 13:30:59,728][80255] Updated weights for policy 0, policy_version 20 (0.0008) +[2024-12-28 13:31:01,532][80255] Updated weights for policy 0, policy_version 30 (0.0008) +[2024-12-28 13:31:02,535][78983] Fps is (10 sec: 23757.3, 60 sec: 23757.3, 300 sec: 23757.3). Total num frames: 143360. Throughput: 0: 3487.7. Samples: 20270. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:31:02,536][78983] Avg episode reward: [(0, '4.320')] +[2024-12-28 13:31:02,537][80241] Saving new best policy, reward=4.320! +[2024-12-28 13:31:03,482][80255] Updated weights for policy 0, policy_version 40 (0.0009) +[2024-12-28 13:31:05,347][80255] Updated weights for policy 0, policy_version 50 (0.0009) +[2024-12-28 13:31:07,146][80255] Updated weights for policy 0, policy_version 60 (0.0008) +[2024-12-28 13:31:07,535][78983] Fps is (10 sec: 22937.4, 60 sec: 22937.4, 300 sec: 22937.4). Total num frames: 253952. Throughput: 0: 5045.8. Samples: 53290. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:31:07,537][78983] Avg episode reward: [(0, '4.440')] +[2024-12-28 13:31:07,542][80241] Saving new best policy, reward=4.440! +[2024-12-28 13:31:08,997][80255] Updated weights for policy 0, policy_version 70 (0.0009) +[2024-12-28 13:31:10,270][78983] Heartbeat connected on Batcher_0 +[2024-12-28 13:31:10,273][78983] Heartbeat connected on LearnerWorker_p0 +[2024-12-28 13:31:10,279][78983] Heartbeat connected on RolloutWorker_w0 +[2024-12-28 13:31:10,281][78983] Heartbeat connected on InferenceWorker_p0-w0 +[2024-12-28 13:31:10,282][78983] Heartbeat connected on RolloutWorker_w1 +[2024-12-28 13:31:10,285][78983] Heartbeat connected on RolloutWorker_w2 +[2024-12-28 13:31:10,287][78983] Heartbeat connected on RolloutWorker_w3 +[2024-12-28 13:31:10,288][78983] Heartbeat connected on RolloutWorker_w4 +[2024-12-28 13:31:10,290][78983] Heartbeat connected on RolloutWorker_w6 +[2024-12-28 13:31:10,291][78983] Heartbeat connected on RolloutWorker_w7 +[2024-12-28 13:31:10,292][78983] Heartbeat connected on RolloutWorker_w5 +[2024-12-28 13:31:10,894][80255] Updated weights for policy 0, policy_version 80 (0.0009) +[2024-12-28 13:31:12,535][78983] Fps is (10 sec: 22118.2, 60 sec: 22664.5, 300 sec: 22664.5). Total num frames: 364544. Throughput: 0: 5604.3. Samples: 86896. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:31:12,536][78983] Avg episode reward: [(0, '4.331')] +[2024-12-28 13:31:12,565][80255] Updated weights for policy 0, policy_version 90 (0.0007) +[2024-12-28 13:31:14,203][80255] Updated weights for policy 0, policy_version 100 (0.0008) +[2024-12-28 13:31:15,803][80255] Updated weights for policy 0, policy_version 110 (0.0007) +[2024-12-28 13:31:17,320][80255] Updated weights for policy 0, policy_version 120 (0.0007) +[2024-12-28 13:31:17,535][78983] Fps is (10 sec: 24166.7, 60 sec: 23552.0, 300 sec: 23552.0). Total num frames: 495616. Throughput: 0: 5142.0. Samples: 105672. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:31:17,536][78983] Avg episode reward: [(0, '4.446')] +[2024-12-28 13:31:17,541][80241] Saving new best policy, reward=4.446! +[2024-12-28 13:31:18,976][80255] Updated weights for policy 0, policy_version 130 (0.0007) +[2024-12-28 13:31:20,541][80255] Updated weights for policy 0, policy_version 140 (0.0008) +[2024-12-28 13:31:22,082][80255] Updated weights for policy 0, policy_version 150 (0.0007) +[2024-12-28 13:31:22,535][78983] Fps is (10 sec: 25804.8, 60 sec: 23920.6, 300 sec: 23920.6). Total num frames: 622592. Throughput: 0: 5666.2. Samples: 144488. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:31:22,536][78983] Avg episode reward: [(0, '4.224')] +[2024-12-28 13:31:23,720][80255] Updated weights for policy 0, policy_version 160 (0.0006) +[2024-12-28 13:31:25,498][80255] Updated weights for policy 0, policy_version 170 (0.0009) +[2024-12-28 13:31:27,353][80255] Updated weights for policy 0, policy_version 180 (0.0009) +[2024-12-28 13:31:27,535][78983] Fps is (10 sec: 24166.6, 60 sec: 23756.9, 300 sec: 23756.9). Total num frames: 737280. Throughput: 0: 5935.0. Samples: 180880. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:31:27,536][78983] Avg episode reward: [(0, '4.509')] +[2024-12-28 13:31:27,541][80241] Saving new best policy, reward=4.509! 
+[2024-12-28 13:31:29,254][80255] Updated weights for policy 0, policy_version 190 (0.0009) +[2024-12-28 13:31:31,166][80255] Updated weights for policy 0, policy_version 200 (0.0008) +[2024-12-28 13:31:32,535][78983] Fps is (10 sec: 22528.3, 60 sec: 23522.8, 300 sec: 23522.8). Total num frames: 847872. Throughput: 0: 5556.1. Samples: 197296. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 13:31:32,536][78983] Avg episode reward: [(0, '4.290')] +[2024-12-28 13:31:33,020][80255] Updated weights for policy 0, policy_version 210 (0.0009) +[2024-12-28 13:31:34,844][80255] Updated weights for policy 0, policy_version 220 (0.0009) +[2024-12-28 13:31:36,425][80255] Updated weights for policy 0, policy_version 230 (0.0007) +[2024-12-28 13:31:37,535][78983] Fps is (10 sec: 23347.2, 60 sec: 23654.5, 300 sec: 23654.5). Total num frames: 970752. Throughput: 0: 5700.6. Samples: 230856. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:31:37,536][78983] Avg episode reward: [(0, '4.468')] +[2024-12-28 13:31:38,003][80255] Updated weights for policy 0, policy_version 240 (0.0007) +[2024-12-28 13:31:39,570][80255] Updated weights for policy 0, policy_version 250 (0.0007) +[2024-12-28 13:31:41,124][80255] Updated weights for policy 0, policy_version 260 (0.0007) +[2024-12-28 13:31:42,535][78983] Fps is (10 sec: 24985.7, 60 sec: 23847.9, 300 sec: 23847.9). Total num frames: 1097728. Throughput: 0: 5936.9. Samples: 269994. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:31:42,535][78983] Avg episode reward: [(0, '4.362')] +[2024-12-28 13:31:42,709][80255] Updated weights for policy 0, policy_version 270 (0.0007) +[2024-12-28 13:31:44,311][80255] Updated weights for policy 0, policy_version 280 (0.0007) +[2024-12-28 13:31:46,049][80255] Updated weights for policy 0, policy_version 290 (0.0008) +[2024-12-28 13:31:47,535][78983] Fps is (10 sec: 24985.6, 60 sec: 23920.7, 300 sec: 23920.7). Total num frames: 1220608. Throughput: 0: 5975.4. Samples: 289164. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:31:47,536][78983] Avg episode reward: [(0, '4.414')] +[2024-12-28 13:31:47,735][80255] Updated weights for policy 0, policy_version 300 (0.0008) +[2024-12-28 13:31:49,399][80255] Updated weights for policy 0, policy_version 310 (0.0007) +[2024-12-28 13:31:51,147][80255] Updated weights for policy 0, policy_version 320 (0.0009) +[2024-12-28 13:31:52,535][78983] Fps is (10 sec: 24576.0, 60 sec: 23980.3, 300 sec: 23980.3). Total num frames: 1343488. Throughput: 0: 6044.5. Samples: 325292. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:31:52,536][78983] Avg episode reward: [(0, '4.602')] +[2024-12-28 13:31:52,537][80241] Saving new best policy, reward=4.602! +[2024-12-28 13:31:52,840][80255] Updated weights for policy 0, policy_version 330 (0.0008) +[2024-12-28 13:31:54,635][80255] Updated weights for policy 0, policy_version 340 (0.0010) +[2024-12-28 13:31:56,283][80255] Updated weights for policy 0, policy_version 350 (0.0007) +[2024-12-28 13:31:57,535][78983] Fps is (10 sec: 24165.9, 60 sec: 23961.6, 300 sec: 23961.6). Total num frames: 1462272. Throughput: 0: 6093.8. Samples: 361118. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:31:57,536][78983] Avg episode reward: [(0, '4.581')] +[2024-12-28 13:31:57,919][80255] Updated weights for policy 0, policy_version 360 (0.0007) +[2024-12-28 13:31:59,754][80255] Updated weights for policy 0, policy_version 370 (0.0008) +[2024-12-28 13:32:01,682][80255] Updated weights for policy 0, policy_version 380 (0.0009) +[2024-12-28 13:32:02,535][78983] Fps is (10 sec: 22937.6, 60 sec: 23825.1, 300 sec: 23819.8). Total num frames: 1572864. Throughput: 0: 6068.5. Samples: 378752. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:32:02,536][78983] Avg episode reward: [(0, '4.342')] +[2024-12-28 13:32:03,595][80255] Updated weights for policy 0, policy_version 390 (0.0010) +[2024-12-28 13:32:05,531][80255] Updated weights for policy 0, policy_version 400 (0.0008) +[2024-12-28 13:32:07,368][80255] Updated weights for policy 0, policy_version 410 (0.0008) +[2024-12-28 13:32:07,535][78983] Fps is (10 sec: 21708.9, 60 sec: 23756.8, 300 sec: 23639.8). Total num frames: 1679360. Throughput: 0: 5917.7. Samples: 410786. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:32:07,536][78983] Avg episode reward: [(0, '4.289')] +[2024-12-28 13:32:09,254][80255] Updated weights for policy 0, policy_version 420 (0.0009) +[2024-12-28 13:32:11,144][80255] Updated weights for policy 0, policy_version 430 (0.0008) +[2024-12-28 13:32:12,535][78983] Fps is (10 sec: 22118.5, 60 sec: 23825.1, 300 sec: 23593.0). Total num frames: 1794048. Throughput: 0: 5845.4. Samples: 443922. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:32:12,536][78983] Avg episode reward: [(0, '4.532')] +[2024-12-28 13:32:12,771][80255] Updated weights for policy 0, policy_version 440 (0.0008) +[2024-12-28 13:32:14,416][80255] Updated weights for policy 0, policy_version 450 (0.0008) +[2024-12-28 13:32:15,988][80255] Updated weights for policy 0, policy_version 460 (0.0007) +[2024-12-28 13:32:17,535][78983] Fps is (10 sec: 24166.7, 60 sec: 23756.8, 300 sec: 23705.6). Total num frames: 1921024. Throughput: 0: 5898.2. Samples: 462714. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 13:32:17,536][78983] Avg episode reward: [(0, '4.670')] +[2024-12-28 13:32:17,540][80241] Saving new best policy, reward=4.670! +[2024-12-28 13:32:17,617][80255] Updated weights for policy 0, policy_version 470 (0.0007) +[2024-12-28 13:32:19,289][80255] Updated weights for policy 0, policy_version 480 (0.0008) +[2024-12-28 13:32:20,883][80255] Updated weights for policy 0, policy_version 490 (0.0008) +[2024-12-28 13:32:22,530][80255] Updated weights for policy 0, policy_version 500 (0.0008) +[2024-12-28 13:32:22,535][78983] Fps is (10 sec: 25395.2, 60 sec: 23756.9, 300 sec: 23805.0). Total num frames: 2048000. Throughput: 0: 5997.0. Samples: 500720. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:32:22,535][78983] Avg episode reward: [(0, '4.334')] +[2024-12-28 13:32:24,181][80255] Updated weights for policy 0, policy_version 510 (0.0008) +[2024-12-28 13:32:25,803][80255] Updated weights for policy 0, policy_version 520 (0.0007) +[2024-12-28 13:32:27,421][80255] Updated weights for policy 0, policy_version 530 (0.0006) +[2024-12-28 13:32:27,535][78983] Fps is (10 sec: 24985.5, 60 sec: 23893.3, 300 sec: 23847.8). Total num frames: 2170880. Throughput: 0: 5960.6. Samples: 538220. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:32:27,536][78983] Avg episode reward: [(0, '4.290')] +[2024-12-28 13:32:29,015][80255] Updated weights for policy 0, policy_version 540 (0.0007) +[2024-12-28 13:32:30,666][80255] Updated weights for policy 0, policy_version 550 (0.0008) +[2024-12-28 13:32:32,271][80255] Updated weights for policy 0, policy_version 560 (0.0007) +[2024-12-28 13:32:32,535][78983] Fps is (10 sec: 24985.6, 60 sec: 24166.4, 300 sec: 23929.3). Total num frames: 2297856. Throughput: 0: 5956.6. Samples: 557210. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:32:32,536][78983] Avg episode reward: [(0, '4.509')] +[2024-12-28 13:32:33,906][80255] Updated weights for policy 0, policy_version 570 (0.0007) +[2024-12-28 13:32:35,531][80255] Updated weights for policy 0, policy_version 580 (0.0008) +[2024-12-28 13:32:37,149][80255] Updated weights for policy 0, policy_version 590 (0.0007) +[2024-12-28 13:32:37,535][78983] Fps is (10 sec: 25395.2, 60 sec: 24234.7, 300 sec: 24002.6). Total num frames: 2424832. Throughput: 0: 5994.8. Samples: 595060. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:32:37,536][78983] Avg episode reward: [(0, '4.352')] +[2024-12-28 13:32:38,746][80255] Updated weights for policy 0, policy_version 600 (0.0006) +[2024-12-28 13:32:40,385][80255] Updated weights for policy 0, policy_version 610 (0.0007) +[2024-12-28 13:32:41,986][80255] Updated weights for policy 0, policy_version 620 (0.0007) +[2024-12-28 13:32:42,535][78983] Fps is (10 sec: 25395.2, 60 sec: 24234.7, 300 sec: 24068.9). Total num frames: 2551808. Throughput: 0: 6041.4. Samples: 632978. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:32:42,536][78983] Avg episode reward: [(0, '4.476')] +[2024-12-28 13:32:43,619][80255] Updated weights for policy 0, policy_version 630 (0.0006) +[2024-12-28 13:32:45,223][80255] Updated weights for policy 0, policy_version 640 (0.0007) +[2024-12-28 13:32:46,861][80255] Updated weights for policy 0, policy_version 650 (0.0007) +[2024-12-28 13:32:47,535][78983] Fps is (10 sec: 24985.5, 60 sec: 24234.6, 300 sec: 24091.9). Total num frames: 2674688. Throughput: 0: 6072.9. Samples: 652034. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:32:47,536][78983] Avg episode reward: [(0, '4.504')] +[2024-12-28 13:32:47,542][80241] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000000654_2678784.pth... +[2024-12-28 13:32:48,542][80255] Updated weights for policy 0, policy_version 660 (0.0009) +[2024-12-28 13:32:50,168][80255] Updated weights for policy 0, policy_version 670 (0.0006) +[2024-12-28 13:32:51,790][80255] Updated weights for policy 0, policy_version 680 (0.0007) +[2024-12-28 13:32:52,535][78983] Fps is (10 sec: 24985.6, 60 sec: 24302.9, 300 sec: 24148.6). Total num frames: 2801664. Throughput: 0: 6195.3. Samples: 689574. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:32:52,536][78983] Avg episode reward: [(0, '4.328')] +[2024-12-28 13:32:53,489][80255] Updated weights for policy 0, policy_version 690 (0.0007) +[2024-12-28 13:32:55,338][80255] Updated weights for policy 0, policy_version 700 (0.0008) +[2024-12-28 13:32:57,194][80255] Updated weights for policy 0, policy_version 710 (0.0008) +[2024-12-28 13:32:57,535][78983] Fps is (10 sec: 23757.0, 60 sec: 24166.5, 300 sec: 24064.0). Total num frames: 2912256. Throughput: 0: 6234.8. Samples: 724490. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:32:57,536][78983] Avg episode reward: [(0, '4.544')] +[2024-12-28 13:32:59,090][80255] Updated weights for policy 0, policy_version 720 (0.0007) +[2024-12-28 13:33:01,011][80255] Updated weights for policy 0, policy_version 730 (0.0010) +[2024-12-28 13:33:02,535][78983] Fps is (10 sec: 21708.8, 60 sec: 24098.1, 300 sec: 23953.4). Total num frames: 3018752. Throughput: 0: 6180.7. Samples: 740844. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:33:02,536][78983] Avg episode reward: [(0, '4.404')] +[2024-12-28 13:33:02,997][80255] Updated weights for policy 0, policy_version 740 (0.0009) +[2024-12-28 13:33:04,688][80255] Updated weights for policy 0, policy_version 750 (0.0007) +[2024-12-28 13:33:06,290][80255] Updated weights for policy 0, policy_version 760 (0.0006) +[2024-12-28 13:33:07,535][78983] Fps is (10 sec: 22936.6, 60 sec: 24371.1, 300 sec: 23977.3). Total num frames: 3141632. Throughput: 0: 6085.7. Samples: 774580. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:33:07,536][78983] Avg episode reward: [(0, '4.432')] +[2024-12-28 13:33:07,913][80255] Updated weights for policy 0, policy_version 770 (0.0007) +[2024-12-28 13:33:09,502][80255] Updated weights for policy 0, policy_version 780 (0.0008) +[2024-12-28 13:33:11,078][80255] Updated weights for policy 0, policy_version 790 (0.0006) +[2024-12-28 13:33:12,535][78983] Fps is (10 sec: 25395.1, 60 sec: 24644.3, 300 sec: 24060.2). Total num frames: 3272704. Throughput: 0: 6107.3. Samples: 813050. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:33:12,536][78983] Avg episode reward: [(0, '4.455')] +[2024-12-28 13:33:12,644][80255] Updated weights for policy 0, policy_version 800 (0.0007) +[2024-12-28 13:33:14,254][80255] Updated weights for policy 0, policy_version 810 (0.0007) +[2024-12-28 13:33:15,832][80255] Updated weights for policy 0, policy_version 820 (0.0007) +[2024-12-28 13:33:17,433][80255] Updated weights for policy 0, policy_version 830 (0.0008) +[2024-12-28 13:33:17,535][78983] Fps is (10 sec: 25805.8, 60 sec: 24644.2, 300 sec: 24107.9). Total num frames: 3399680. Throughput: 0: 6115.8. Samples: 832422. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:33:17,536][78983] Avg episode reward: [(0, '4.718')] +[2024-12-28 13:33:17,541][80241] Saving new best policy, reward=4.718! +[2024-12-28 13:33:19,084][80255] Updated weights for policy 0, policy_version 840 (0.0007) +[2024-12-28 13:33:20,663][80255] Updated weights for policy 0, policy_version 850 (0.0006) +[2024-12-28 13:33:22,225][80255] Updated weights for policy 0, policy_version 860 (0.0007) +[2024-12-28 13:33:22,535][78983] Fps is (10 sec: 25804.9, 60 sec: 24712.5, 300 sec: 24180.5). Total num frames: 3530752. Throughput: 0: 6128.5. Samples: 870840. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:33:22,536][78983] Avg episode reward: [(0, '4.359')] +[2024-12-28 13:33:23,864][80255] Updated weights for policy 0, policy_version 870 (0.0008) +[2024-12-28 13:33:25,434][80255] Updated weights for policy 0, policy_version 880 (0.0007) +[2024-12-28 13:33:27,038][80255] Updated weights for policy 0, policy_version 890 (0.0007) +[2024-12-28 13:33:27,535][78983] Fps is (10 sec: 25804.9, 60 sec: 24780.8, 300 sec: 24221.0). Total num frames: 3657728. Throughput: 0: 6139.0. Samples: 909232. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:33:27,536][78983] Avg episode reward: [(0, '4.437')] +[2024-12-28 13:33:28,652][80255] Updated weights for policy 0, policy_version 900 (0.0006) +[2024-12-28 13:33:30,227][80255] Updated weights for policy 0, policy_version 910 (0.0007) +[2024-12-28 13:33:31,805][80255] Updated weights for policy 0, policy_version 920 (0.0007) +[2024-12-28 13:33:32,535][78983] Fps is (10 sec: 25395.1, 60 sec: 24780.8, 300 sec: 24258.9). Total num frames: 3784704. Throughput: 0: 6141.2. Samples: 928386. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:33:32,536][78983] Avg episode reward: [(0, '4.452')] +[2024-12-28 13:33:33,409][80255] Updated weights for policy 0, policy_version 930 (0.0007) +[2024-12-28 13:33:35,003][80255] Updated weights for policy 0, policy_version 940 (0.0007) +[2024-12-28 13:33:36,600][80255] Updated weights for policy 0, policy_version 950 (0.0008) +[2024-12-28 13:33:37,535][78983] Fps is (10 sec: 25395.2, 60 sec: 24780.8, 300 sec: 24294.4). Total num frames: 3911680. Throughput: 0: 6169.7. Samples: 967212. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:33:37,536][78983] Avg episode reward: [(0, '4.171')] +[2024-12-28 13:33:38,235][80255] Updated weights for policy 0, policy_version 960 (0.0007) +[2024-12-28 13:33:39,845][80255] Updated weights for policy 0, policy_version 970 (0.0006) +[2024-12-28 13:33:41,113][80241] Stopping Batcher_0... +[2024-12-28 13:33:41,114][80241] Loop batcher_evt_loop terminating... +[2024-12-28 13:33:41,113][78983] Component Batcher_0 stopped! +[2024-12-28 13:33:41,114][80241] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... +[2024-12-28 13:33:41,125][80255] Weights refcount: 2 0 +[2024-12-28 13:33:41,127][80255] Stopping InferenceWorker_p0-w0... +[2024-12-28 13:33:41,127][80255] Loop inference_proc0-0_evt_loop terminating... +[2024-12-28 13:33:41,127][78983] Component InferenceWorker_p0-w0 stopped! +[2024-12-28 13:33:41,135][80256] Stopping RolloutWorker_w0... +[2024-12-28 13:33:41,135][80261] Stopping RolloutWorker_w7... +[2024-12-28 13:33:41,136][80256] Loop rollout_proc0_evt_loop terminating... +[2024-12-28 13:33:41,136][80260] Stopping RolloutWorker_w4... +[2024-12-28 13:33:41,136][80261] Loop rollout_proc7_evt_loop terminating... +[2024-12-28 13:33:41,136][80260] Loop rollout_proc4_evt_loop terminating... +[2024-12-28 13:33:41,137][80257] Stopping RolloutWorker_w1... +[2024-12-28 13:33:41,136][78983] Component RolloutWorker_w7 stopped! +[2024-12-28 13:33:41,137][80257] Loop rollout_proc1_evt_loop terminating... +[2024-12-28 13:33:41,137][80262] Stopping RolloutWorker_w6... +[2024-12-28 13:33:41,137][80259] Stopping RolloutWorker_w2... +[2024-12-28 13:33:41,138][80259] Loop rollout_proc2_evt_loop terminating... +[2024-12-28 13:33:41,137][78983] Component RolloutWorker_w0 stopped! +[2024-12-28 13:33:41,138][80262] Loop rollout_proc6_evt_loop terminating... +[2024-12-28 13:33:41,138][78983] Component RolloutWorker_w4 stopped! +[2024-12-28 13:33:41,139][80263] Stopping RolloutWorker_w5... +[2024-12-28 13:33:41,140][80263] Loop rollout_proc5_evt_loop terminating... +[2024-12-28 13:33:41,140][80258] Stopping RolloutWorker_w3... +[2024-12-28 13:33:41,140][80258] Loop rollout_proc3_evt_loop terminating... +[2024-12-28 13:33:41,140][78983] Component RolloutWorker_w1 stopped! +[2024-12-28 13:33:41,141][78983] Component RolloutWorker_w6 stopped! 
+[2024-12-28 13:33:41,142][78983] Component RolloutWorker_w2 stopped! +[2024-12-28 13:33:41,143][78983] Component RolloutWorker_w5 stopped! +[2024-12-28 13:33:41,143][78983] Component RolloutWorker_w3 stopped! +[2024-12-28 13:33:41,152][80241] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... +[2024-12-28 13:33:41,204][80241] Stopping LearnerWorker_p0... +[2024-12-28 13:33:41,204][80241] Loop learner_proc0_evt_loop terminating... +[2024-12-28 13:33:41,204][78983] Component LearnerWorker_p0 stopped! +[2024-12-28 13:33:41,205][78983] Waiting for process learner_proc0 to stop... +[2024-12-28 13:33:41,742][78983] Waiting for process inference_proc0-0 to join... +[2024-12-28 13:33:41,743][78983] Waiting for process rollout_proc0 to join... +[2024-12-28 13:33:41,743][78983] Waiting for process rollout_proc1 to join... +[2024-12-28 13:33:41,744][78983] Waiting for process rollout_proc2 to join... +[2024-12-28 13:33:41,745][78983] Waiting for process rollout_proc3 to join... +[2024-12-28 13:33:41,745][78983] Waiting for process rollout_proc4 to join... +[2024-12-28 13:33:41,746][78983] Waiting for process rollout_proc5 to join... +[2024-12-28 13:33:41,747][78983] Waiting for process rollout_proc6 to join... +[2024-12-28 13:33:41,747][78983] Waiting for process rollout_proc7 to join... +[2024-12-28 13:33:41,748][78983] Batcher 0 profile tree view: +batching: 10.5649, releasing_batches: 0.0220 +[2024-12-28 13:33:41,749][78983] InferenceWorker_p0-w0 profile tree view: +wait_policy: 0.0000 + wait_policy_total: 2.2765 +update_model: 2.3197 + weight_update: 0.0008 +one_step: 0.0017 + handle_policy_step: 156.1432 + deserialize: 4.3631, stack: 0.6948, obs_to_device_normalize: 35.6014, forward: 65.5456, send_messages: 12.2337 + prepare_outputs: 32.8547 + to_cpu: 26.8660 +[2024-12-28 13:33:41,749][78983] Learner 0 profile tree view: +misc: 0.0038, prepare_batch: 8.5100 +train: 19.9732 + epoch_init: 0.0032, minibatch_init: 0.0042, losses_postprocess: 0.3169, kl_divergence: 0.3490, after_optimizer: 6.4562 + calculate_losses: 7.7016 + losses_init: 0.0016, forward_head: 0.5881, bptt_initial: 4.5030, tail: 0.4039, advantages_returns: 0.1110, losses: 1.1579 + bptt: 0.8261 + bptt_forward_core: 0.7912 + update: 4.8890 + clip: 0.5054 +[2024-12-28 13:33:41,750][78983] RolloutWorker_w0 profile tree view: +wait_for_trajectories: 0.0881, enqueue_policy_requests: 4.5226, env_step: 97.1308, overhead: 6.0633, complete_rollouts: 0.1777 +save_policy_outputs: 4.7868 + split_output_tensors: 2.2881 +[2024-12-28 13:33:41,750][78983] RolloutWorker_w7 profile tree view: +wait_for_trajectories: 0.0912, enqueue_policy_requests: 4.4861, env_step: 96.4783, overhead: 6.1217, complete_rollouts: 0.1798 +save_policy_outputs: 4.7999 + split_output_tensors: 2.3242 +[2024-12-28 13:33:41,751][78983] Loop Runner_EvtLoop terminating... +[2024-12-28 13:33:41,752][78983] Runner profile tree view: +main_loop: 171.4617 +[2024-12-28 13:33:41,753][78983] Collected {0: 4005888}, FPS: 23363.2 +[2024-12-28 13:34:37,964][78983] Loading existing experiment configuration from /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/config.json +[2024-12-28 13:34:37,964][78983] Overriding arg 'num_workers' with value 1 passed from command line +[2024-12-28 13:34:37,965][78983] Adding new argument 'no_render'=True that is not in the saved config file! 
+[2024-12-28 13:34:37,965][78983] Adding new argument 'save_video'=True that is not in the saved config file! +[2024-12-28 13:34:37,966][78983] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! +[2024-12-28 13:34:37,966][78983] Adding new argument 'video_name'=None that is not in the saved config file! +[2024-12-28 13:34:37,967][78983] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! +[2024-12-28 13:34:37,968][78983] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! +[2024-12-28 13:34:37,968][78983] Adding new argument 'push_to_hub'=False that is not in the saved config file! +[2024-12-28 13:34:37,968][78983] Adding new argument 'hf_repository'=None that is not in the saved config file! +[2024-12-28 13:34:37,969][78983] Adding new argument 'policy_index'=0 that is not in the saved config file! +[2024-12-28 13:34:37,970][78983] Adding new argument 'eval_deterministic'=False that is not in the saved config file! +[2024-12-28 13:34:37,970][78983] Adding new argument 'train_script'=None that is not in the saved config file! +[2024-12-28 13:34:37,971][78983] Adding new argument 'enjoy_script'=None that is not in the saved config file! +[2024-12-28 13:34:37,971][78983] Using frameskip 1 and render_action_repeat=4 for evaluation +[2024-12-28 13:34:37,983][78983] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 13:34:37,984][78983] RunningMeanStd input shape: (3, 72, 128) +[2024-12-28 13:34:37,986][78983] RunningMeanStd input shape: (1,) +[2024-12-28 13:34:37,993][78983] ConvEncoder: input_channels=3 +[2024-12-28 13:34:38,053][78983] Conv encoder output size: 512 +[2024-12-28 13:34:38,055][78983] Policy head output size: 512 +[2024-12-28 13:34:38,728][78983] Loading state from checkpoint /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... +[2024-12-28 13:34:39,209][78983] Num frames 100... +[2024-12-28 13:34:39,294][78983] Num frames 200... +[2024-12-28 13:34:39,380][78983] Num frames 300... +[2024-12-28 13:34:39,465][78983] Num frames 400... +[2024-12-28 13:34:39,560][78983] Avg episode rewards: #0: 5.480, true rewards: #0: 4.480 +[2024-12-28 13:34:39,561][78983] Avg episode reward: 5.480, avg true_objective: 4.480 +[2024-12-28 13:34:39,607][78983] Num frames 500... +[2024-12-28 13:34:39,695][78983] Num frames 600... +[2024-12-28 13:34:39,780][78983] Num frames 700... +[2024-12-28 13:34:39,865][78983] Num frames 800... +[2024-12-28 13:34:39,973][78983] Avg episode rewards: #0: 5.320, true rewards: #0: 4.320 +[2024-12-28 13:34:39,974][78983] Avg episode reward: 5.320, avg true_objective: 4.320 +[2024-12-28 13:34:40,007][78983] Num frames 900... +[2024-12-28 13:34:40,097][78983] Num frames 1000... +[2024-12-28 13:34:40,184][78983] Num frames 1100... +[2024-12-28 13:34:40,268][78983] Num frames 1200... +[2024-12-28 13:34:40,363][78983] Avg episode rewards: #0: 4.827, true rewards: #0: 4.160 +[2024-12-28 13:34:40,365][78983] Avg episode reward: 4.827, avg true_objective: 4.160 +[2024-12-28 13:34:40,413][78983] Num frames 1300... +[2024-12-28 13:34:40,499][78983] Num frames 1400... +[2024-12-28 13:34:40,584][78983] Num frames 1500... +[2024-12-28 13:34:40,671][78983] Num frames 1600... 
+[2024-12-28 13:34:40,752][78983] Avg episode rewards: #0: 4.580, true rewards: #0: 4.080 +[2024-12-28 13:34:40,753][78983] Avg episode reward: 4.580, avg true_objective: 4.080 +[2024-12-28 13:34:40,813][78983] Num frames 1700... +[2024-12-28 13:34:40,902][78983] Num frames 1800... +[2024-12-28 13:34:40,987][78983] Num frames 1900... +[2024-12-28 13:34:41,070][78983] Num frames 2000... +[2024-12-28 13:34:41,155][78983] Num frames 2100... +[2024-12-28 13:34:41,218][78983] Avg episode rewards: #0: 5.024, true rewards: #0: 4.224 +[2024-12-28 13:34:41,219][78983] Avg episode reward: 5.024, avg true_objective: 4.224 +[2024-12-28 13:34:41,295][78983] Num frames 2200... +[2024-12-28 13:34:41,379][78983] Num frames 2300... +[2024-12-28 13:34:41,463][78983] Num frames 2400... +[2024-12-28 13:34:41,595][78983] Avg episode rewards: #0: 4.827, true rewards: #0: 4.160 +[2024-12-28 13:34:41,596][78983] Avg episode reward: 4.827, avg true_objective: 4.160 +[2024-12-28 13:34:41,601][78983] Num frames 2500... +[2024-12-28 13:34:41,688][78983] Num frames 2600... +[2024-12-28 13:34:41,773][78983] Num frames 2700... +[2024-12-28 13:34:41,858][78983] Num frames 2800... +[2024-12-28 13:34:41,952][78983] Num frames 2900... +[2024-12-28 13:34:42,043][78983] Avg episode rewards: #0: 4.920, true rewards: #0: 4.206 +[2024-12-28 13:34:42,044][78983] Avg episode reward: 4.920, avg true_objective: 4.206 +[2024-12-28 13:34:42,096][78983] Num frames 3000... +[2024-12-28 13:34:42,180][78983] Num frames 3100... +[2024-12-28 13:34:42,264][78983] Num frames 3200... +[2024-12-28 13:34:42,348][78983] Num frames 3300... +[2024-12-28 13:34:42,426][78983] Avg episode rewards: #0: 4.785, true rewards: #0: 4.160 +[2024-12-28 13:34:42,427][78983] Avg episode reward: 4.785, avg true_objective: 4.160 +[2024-12-28 13:34:42,491][78983] Num frames 3400... +[2024-12-28 13:34:42,577][78983] Num frames 3500... +[2024-12-28 13:34:42,663][78983] Num frames 3600... +[2024-12-28 13:34:42,747][78983] Num frames 3700... +[2024-12-28 13:34:42,810][78983] Avg episode rewards: #0: 4.680, true rewards: #0: 4.124 +[2024-12-28 13:34:42,811][78983] Avg episode reward: 4.680, avg true_objective: 4.124 +[2024-12-28 13:34:42,886][78983] Num frames 3800... +[2024-12-28 13:34:42,977][78983] Num frames 3900... +[2024-12-28 13:34:43,060][78983] Num frames 4000... +[2024-12-28 13:34:43,192][78983] Avg episode rewards: #0: 4.596, true rewards: #0: 4.096 +[2024-12-28 13:34:43,193][78983] Avg episode reward: 4.596, avg true_objective: 4.096 +[2024-12-28 13:34:47,200][78983] Replay video saved to /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/replay.mp4! +[2024-12-28 13:36:21,141][78983] Environment doom_basic already registered, overwriting... +[2024-12-28 13:36:21,142][78983] Environment doom_two_colors_easy already registered, overwriting... +[2024-12-28 13:36:21,143][78983] Environment doom_two_colors_hard already registered, overwriting... +[2024-12-28 13:36:21,144][78983] Environment doom_dm already registered, overwriting... +[2024-12-28 13:36:21,145][78983] Environment doom_dwango5 already registered, overwriting... +[2024-12-28 13:36:21,145][78983] Environment doom_my_way_home_flat_actions already registered, overwriting... +[2024-12-28 13:36:21,146][78983] Environment doom_defend_the_center_flat_actions already registered, overwriting... +[2024-12-28 13:36:21,146][78983] Environment doom_my_way_home already registered, overwriting... 
+[2024-12-28 13:36:21,147][78983] Environment doom_deadly_corridor already registered, overwriting... +[2024-12-28 13:36:21,148][78983] Environment doom_defend_the_center already registered, overwriting... +[2024-12-28 13:36:21,148][78983] Environment doom_defend_the_line already registered, overwriting... +[2024-12-28 13:36:21,149][78983] Environment doom_health_gathering already registered, overwriting... +[2024-12-28 13:36:21,149][78983] Environment doom_health_gathering_supreme already registered, overwriting... +[2024-12-28 13:36:21,149][78983] Environment doom_battle already registered, overwriting... +[2024-12-28 13:36:21,150][78983] Environment doom_battle2 already registered, overwriting... +[2024-12-28 13:36:21,151][78983] Environment doom_duel_bots already registered, overwriting... +[2024-12-28 13:36:21,151][78983] Environment doom_deathmatch_bots already registered, overwriting... +[2024-12-28 13:36:21,152][78983] Environment doom_duel already registered, overwriting... +[2024-12-28 13:36:21,152][78983] Environment doom_deathmatch_full already registered, overwriting... +[2024-12-28 13:36:21,153][78983] Environment doom_benchmark already registered, overwriting... +[2024-12-28 13:36:21,153][78983] register_encoder_factory: +[2024-12-28 13:36:21,161][78983] Loading existing experiment configuration from /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/config.json +[2024-12-28 13:36:21,162][78983] Overriding arg 'train_for_env_steps' with value 40000000 passed from command line +[2024-12-28 13:36:21,165][78983] Experiment dir /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment already exists! +[2024-12-28 13:36:21,165][78983] Resuming existing experiment from /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment... 
+[2024-12-28 13:36:21,165][78983] Weights and Biases integration disabled +[2024-12-28 13:36:21,167][78983] Environment var CUDA_VISIBLE_DEVICES is 0 + +[2024-12-28 13:36:22,167][78983] Starting experiment with the following configuration: +help=False +algo=APPO +env=doom_health_gathering_supreme +experiment=default_experiment +train_dir=/home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir +restart_behavior=resume +device=gpu +seed=None +num_policies=1 +async_rl=True +serial_mode=False +batched_sampling=False +num_batches_to_accumulate=2 +worker_num_splits=2 +policy_workers_per_policy=1 +max_policy_lag=1000 +num_workers=8 +num_envs_per_worker=4 +batch_size=1024 +num_batches_per_epoch=1 +num_epochs=1 +rollout=32 +recurrence=32 +shuffle_minibatches=False +gamma=0.99 +reward_scale=1.0 +reward_clip=1000.0 +value_bootstrap=False +normalize_returns=True +exploration_loss_coeff=0.001 +value_loss_coeff=0.5 +kl_loss_coeff=0.0 +exploration_loss=symmetric_kl +gae_lambda=0.95 +ppo_clip_ratio=0.1 +ppo_clip_value=0.2 +with_vtrace=False +vtrace_rho=1.0 +vtrace_c=1.0 +optimizer=adam +adam_eps=1e-06 +adam_beta1=0.9 +adam_beta2=0.999 +max_grad_norm=4.0 +learning_rate=0.0001 +lr_schedule=constant +lr_schedule_kl_threshold=0.008 +lr_adaptive_min=1e-06 +lr_adaptive_max=0.01 +obs_subtract_mean=0.0 +obs_scale=255.0 +normalize_input=True +normalize_input_keys=None +decorrelate_experience_max_seconds=0 +decorrelate_envs_on_one_worker=True +actor_worker_gpus=[] +set_workers_cpu_affinity=True +force_envs_single_thread=False +default_niceness=0 +log_to_file=True +experiment_summaries_interval=10 +flush_summaries_interval=30 +stats_avg=100 +summaries_use_frameskip=True +heartbeat_interval=20 +heartbeat_reporting_interval=600 +train_for_env_steps=40000000 +train_for_seconds=10000000000 +save_every_sec=120 +keep_checkpoints=2 +load_checkpoint_kind=latest +save_milestones_sec=-1 +save_best_every_sec=5 +save_best_metric=reward +save_best_after=100000 +benchmark=False +encoder_mlp_layers=[512, 512] +encoder_conv_architecture=convnet_simple +encoder_conv_mlp_layers=[512] +use_rnn=True +rnn_size=512 +rnn_type=gru +rnn_num_layers=1 +decoder_mlp_layers=[] +nonlinearity=elu +policy_initialization=orthogonal +policy_init_gain=1.0 +actor_critic_share_weights=True +adaptive_stddev=True +continuous_tanh_scale=0.0 +initial_stddev=1.0 +use_env_info_cache=False +env_gpu_actions=False +env_gpu_observations=True +env_frameskip=4 +env_framestack=1 +pixel_format=CHW +use_record_episode_statistics=False +with_wandb=False +wandb_user=None +wandb_project=sample_factory +wandb_group=None +wandb_job_type=SF +wandb_tags=[] +with_pbt=False +pbt_mix_policies_in_one_env=True +pbt_period_env_steps=5000000 +pbt_start_mutation=20000000 +pbt_replace_fraction=0.3 +pbt_mutation_rate=0.15 +pbt_replace_reward_gap=0.1 +pbt_replace_reward_gap_absolute=1e-06 +pbt_optimize_gamma=False +pbt_target_objective=true_objective +pbt_perturb_min=1.1 +pbt_perturb_max=1.5 +num_agents=-1 +num_humans=0 +num_bots=-1 +start_bot_difficulty=None +timelimit=None +res_w=128 +res_h=72 +wide_aspect_ratio=False +eval_env_frameskip=1 +fps=35 +command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000 +cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000} +git_hash=75361eaddde9905a7fdf22c21f8ec6ea25940ccd +git_repo_name=git@github.com:zhangsz1998/My_Deep_RL.git +[2024-12-28 13:36:22,168][78983] Saving configuration to 
/home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/config.json... +[2024-12-28 13:36:22,177][78983] Rollout worker 0 uses device cpu +[2024-12-28 13:36:22,178][78983] Rollout worker 1 uses device cpu +[2024-12-28 13:36:22,178][78983] Rollout worker 2 uses device cpu +[2024-12-28 13:36:22,179][78983] Rollout worker 3 uses device cpu +[2024-12-28 13:36:22,179][78983] Rollout worker 4 uses device cpu +[2024-12-28 13:36:22,180][78983] Rollout worker 5 uses device cpu +[2024-12-28 13:36:22,181][78983] Rollout worker 6 uses device cpu +[2024-12-28 13:36:22,181][78983] Rollout worker 7 uses device cpu +[2024-12-28 13:36:22,199][78983] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-12-28 13:36:22,200][78983] InferenceWorker_p0-w0: min num requests: 2 +[2024-12-28 13:36:22,216][78983] Starting all processes... +[2024-12-28 13:36:22,216][78983] Starting process learner_proc0 +[2024-12-28 13:36:22,266][78983] Starting all processes... +[2024-12-28 13:36:22,269][78983] Starting process inference_proc0-0 +[2024-12-28 13:36:22,269][78983] Starting process rollout_proc0 +[2024-12-28 13:36:22,269][78983] Starting process rollout_proc1 +[2024-12-28 13:36:22,270][78983] Starting process rollout_proc2 +[2024-12-28 13:36:22,270][78983] Starting process rollout_proc3 +[2024-12-28 13:36:22,270][78983] Starting process rollout_proc4 +[2024-12-28 13:36:22,270][78983] Starting process rollout_proc5 +[2024-12-28 13:36:22,271][78983] Starting process rollout_proc6 +[2024-12-28 13:36:22,271][78983] Starting process rollout_proc7 +[2024-12-28 13:36:23,727][84560] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-12-28 13:36:23,727][84560] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 +[2024-12-28 13:36:23,746][84566] Worker 5 uses CPU cores [20, 21, 22, 23] +[2024-12-28 13:36:23,746][84564] Worker 3 uses CPU cores [12, 13, 14, 15] +[2024-12-28 13:36:23,748][84560] Num visible devices: 1 +[2024-12-28 13:36:23,750][84563] Worker 1 uses CPU cores [4, 5, 6, 7] +[2024-12-28 13:36:23,760][84543] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-12-28 13:36:23,761][84543] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 +[2024-12-28 13:36:23,783][84561] Worker 0 uses CPU cores [0, 1, 2, 3] +[2024-12-28 13:36:23,783][84543] Num visible devices: 1 +[2024-12-28 13:36:23,800][84567] Worker 6 uses CPU cores [24, 25, 26, 27] +[2024-12-28 13:36:23,802][84568] Worker 7 uses CPU cores [28, 29, 30, 31] +[2024-12-28 13:36:23,809][84562] Worker 2 uses CPU cores [8, 9, 10, 11] +[2024-12-28 13:36:23,832][84543] Starting seed is not provided +[2024-12-28 13:36:23,833][84543] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-12-28 13:36:23,833][84565] Worker 4 uses CPU cores [16, 17, 18, 19] +[2024-12-28 13:36:23,833][84543] Initializing actor-critic model on device cuda:0 +[2024-12-28 13:36:23,833][84543] RunningMeanStd input shape: (3, 72, 128) +[2024-12-28 13:36:23,834][84543] RunningMeanStd input shape: (1,) +[2024-12-28 13:36:23,850][84543] ConvEncoder: input_channels=3 +[2024-12-28 13:36:23,951][84543] Conv encoder output size: 512 +[2024-12-28 13:36:23,951][84543] Policy head output size: 512 +[2024-12-28 13:36:23,960][84543] Created Actor Critic model with architecture: +[2024-12-28 13:36:23,960][84543] ActorCriticSharedWeights( + (obs_normalizer): ObservationNormalizer( + (running_mean_std): RunningMeanStdDictInPlace( + (running_mean_std): ModuleDict( + (obs): 
RunningMeanStdInPlace() + ) + ) + ) + (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) + (encoder): VizdoomEncoder( + (basic_encoder): ConvEncoder( + (enc): RecursiveScriptModule( + original_name=ConvEncoderImpl + (conv_head): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Conv2d) + (1): RecursiveScriptModule(original_name=ELU) + (2): RecursiveScriptModule(original_name=Conv2d) + (3): RecursiveScriptModule(original_name=ELU) + (4): RecursiveScriptModule(original_name=Conv2d) + (5): RecursiveScriptModule(original_name=ELU) + ) + (mlp_layers): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Linear) + (1): RecursiveScriptModule(original_name=ELU) + ) + ) + ) + ) + (core): ModelCoreRNN( + (core): GRU(512, 512) + ) + (decoder): MlpDecoder( + (mlp): Identity() + ) + (critic_linear): Linear(in_features=512, out_features=1, bias=True) + (action_parameterization): ActionParameterizationDefault( + (distribution_linear): Linear(in_features=512, out_features=5, bias=True) + ) +) +[2024-12-28 13:36:24,099][84543] Using optimizer +[2024-12-28 13:36:24,585][84543] Loading state from checkpoint /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... +[2024-12-28 13:36:24,609][84543] Loading model from checkpoint +[2024-12-28 13:36:24,610][84543] Loaded experiment state at self.train_step=978, self.env_steps=4005888 +[2024-12-28 13:36:24,610][84543] Initialized policy 0 weights for model version 978 +[2024-12-28 13:36:24,613][84543] LearnerWorker_p0 finished initialization! +[2024-12-28 13:36:24,613][84543] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-12-28 13:36:24,711][84560] RunningMeanStd input shape: (3, 72, 128) +[2024-12-28 13:36:24,711][84560] RunningMeanStd input shape: (1,) +[2024-12-28 13:36:24,718][84560] ConvEncoder: input_channels=3 +[2024-12-28 13:36:24,770][84560] Conv encoder output size: 512 +[2024-12-28 13:36:24,770][84560] Policy head output size: 512 +[2024-12-28 13:36:24,798][78983] Inference worker 0-0 is ready! +[2024-12-28 13:36:24,799][78983] All inference workers are ready! Signal rollout workers to start! +[2024-12-28 13:36:24,818][84566] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 13:36:24,819][84563] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 13:36:24,819][84567] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 13:36:24,819][84565] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 13:36:24,819][84561] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 13:36:24,820][84568] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 13:36:24,826][84564] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 13:36:24,827][84562] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 13:36:25,006][84563] Decorrelating experience for 0 frames... +[2024-12-28 13:36:25,011][84567] Decorrelating experience for 0 frames... +[2024-12-28 13:36:25,027][84566] Decorrelating experience for 0 frames... +[2024-12-28 13:36:25,028][84568] Decorrelating experience for 0 frames... +[2024-12-28 13:36:25,040][84561] Decorrelating experience for 0 frames... +[2024-12-28 13:36:25,174][84563] Decorrelating experience for 32 frames... +[2024-12-28 13:36:25,198][84566] Decorrelating experience for 32 frames... 
+[2024-12-28 13:36:25,206][84565] Decorrelating experience for 0 frames... +[2024-12-28 13:36:25,210][84562] Decorrelating experience for 0 frames... +[2024-12-28 13:36:25,329][84568] Decorrelating experience for 32 frames... +[2024-12-28 13:36:25,361][84565] Decorrelating experience for 32 frames... +[2024-12-28 13:36:25,370][84563] Decorrelating experience for 64 frames... +[2024-12-28 13:36:25,378][84567] Decorrelating experience for 32 frames... +[2024-12-28 13:36:25,499][84566] Decorrelating experience for 64 frames... +[2024-12-28 13:36:25,527][84568] Decorrelating experience for 64 frames... +[2024-12-28 13:36:25,561][84565] Decorrelating experience for 64 frames... +[2024-12-28 13:36:25,571][84563] Decorrelating experience for 96 frames... +[2024-12-28 13:36:25,582][84561] Decorrelating experience for 32 frames... +[2024-12-28 13:36:25,585][84562] Decorrelating experience for 32 frames... +[2024-12-28 13:36:25,658][84564] Decorrelating experience for 0 frames... +[2024-12-28 13:36:25,720][84568] Decorrelating experience for 96 frames... +[2024-12-28 13:36:25,737][84567] Decorrelating experience for 64 frames... +[2024-12-28 13:36:25,753][84566] Decorrelating experience for 96 frames... +[2024-12-28 13:36:25,758][84565] Decorrelating experience for 96 frames... +[2024-12-28 13:36:25,922][84564] Decorrelating experience for 32 frames... +[2024-12-28 13:36:25,932][84567] Decorrelating experience for 96 frames... +[2024-12-28 13:36:25,935][84562] Decorrelating experience for 64 frames... +[2024-12-28 13:36:25,951][84561] Decorrelating experience for 64 frames... +[2024-12-28 13:36:26,122][84564] Decorrelating experience for 64 frames... +[2024-12-28 13:36:26,154][84562] Decorrelating experience for 96 frames... +[2024-12-28 13:36:26,167][78983] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4005888. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) +[2024-12-28 13:36:26,168][78983] Avg episode reward: [(0, '1.120')] +[2024-12-28 13:36:26,308][84561] Decorrelating experience for 96 frames... +[2024-12-28 13:36:26,330][84564] Decorrelating experience for 96 frames... +[2024-12-28 13:36:26,566][84543] Signal inference workers to stop experience collection... +[2024-12-28 13:36:26,571][84560] InferenceWorker_p0-w0: stopping experience collection +[2024-12-28 13:36:27,513][84543] Signal inference workers to resume experience collection... +[2024-12-28 13:36:27,513][84560] InferenceWorker_p0-w0: resuming experience collection +[2024-12-28 13:36:29,233][84560] Updated weights for policy 0, policy_version 988 (0.0047) +[2024-12-28 13:36:31,167][78983] Fps is (10 sec: 15564.6, 60 sec: 15564.6, 300 sec: 15564.6). Total num frames: 4083712. Throughput: 0: 3839.9. Samples: 19200. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:36:31,169][78983] Avg episode reward: [(0, '4.545')] +[2024-12-28 13:36:31,210][84560] Updated weights for policy 0, policy_version 998 (0.0008) +[2024-12-28 13:36:33,091][84560] Updated weights for policy 0, policy_version 1008 (0.0008) +[2024-12-28 13:36:34,996][84560] Updated weights for policy 0, policy_version 1018 (0.0009) +[2024-12-28 13:36:36,167][78983] Fps is (10 sec: 18841.5, 60 sec: 18841.5, 300 sec: 18841.5). Total num frames: 4194304. Throughput: 0: 3539.6. Samples: 35396. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:36:36,168][78983] Avg episode reward: [(0, '4.306')] +[2024-12-28 13:36:36,876][84560] Updated weights for policy 0, policy_version 1028 (0.0007) +[2024-12-28 13:36:38,758][84560] Updated weights for policy 0, policy_version 1038 (0.0009) +[2024-12-28 13:36:40,583][84560] Updated weights for policy 0, policy_version 1048 (0.0008) +[2024-12-28 13:36:41,167][78983] Fps is (10 sec: 22118.4, 60 sec: 19933.8, 300 sec: 19933.8). Total num frames: 4304896. Throughput: 0: 4535.7. Samples: 68036. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:36:41,169][78983] Avg episode reward: [(0, '4.356')] +[2024-12-28 13:36:42,195][78983] Heartbeat connected on Batcher_0 +[2024-12-28 13:36:42,197][78983] Heartbeat connected on LearnerWorker_p0 +[2024-12-28 13:36:42,204][78983] Heartbeat connected on InferenceWorker_p0-w0 +[2024-12-28 13:36:42,206][78983] Heartbeat connected on RolloutWorker_w0 +[2024-12-28 13:36:42,207][78983] Heartbeat connected on RolloutWorker_w1 +[2024-12-28 13:36:42,208][78983] Heartbeat connected on RolloutWorker_w2 +[2024-12-28 13:36:42,211][78983] Heartbeat connected on RolloutWorker_w4 +[2024-12-28 13:36:42,212][78983] Heartbeat connected on RolloutWorker_w3 +[2024-12-28 13:36:42,214][78983] Heartbeat connected on RolloutWorker_w6 +[2024-12-28 13:36:42,215][78983] Heartbeat connected on RolloutWorker_w5 +[2024-12-28 13:36:42,216][78983] Heartbeat connected on RolloutWorker_w7 +[2024-12-28 13:36:42,458][84560] Updated weights for policy 0, policy_version 1058 (0.0010) +[2024-12-28 13:36:44,312][84560] Updated weights for policy 0, policy_version 1068 (0.0009) +[2024-12-28 13:36:45,895][84560] Updated weights for policy 0, policy_version 1078 (0.0007) +[2024-12-28 13:36:46,167][78983] Fps is (10 sec: 22527.8, 60 sec: 20684.7, 300 sec: 20684.7). Total num frames: 4419584. Throughput: 0: 5126.8. Samples: 102536. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:36:46,169][78983] Avg episode reward: [(0, '4.478')] +[2024-12-28 13:36:47,671][84560] Updated weights for policy 0, policy_version 1088 (0.0009) +[2024-12-28 13:36:49,539][84560] Updated weights for policy 0, policy_version 1098 (0.0007) +[2024-12-28 13:36:51,167][78983] Fps is (10 sec: 22528.1, 60 sec: 20971.5, 300 sec: 20971.5). Total num frames: 4530176. Throughput: 0: 4786.9. Samples: 119672. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:36:51,168][78983] Avg episode reward: [(0, '4.396')] +[2024-12-28 13:36:51,428][84560] Updated weights for policy 0, policy_version 1108 (0.0008) +[2024-12-28 13:36:53,348][84560] Updated weights for policy 0, policy_version 1118 (0.0009) +[2024-12-28 13:36:55,177][84560] Updated weights for policy 0, policy_version 1128 (0.0009) +[2024-12-28 13:36:56,167][78983] Fps is (10 sec: 22118.5, 60 sec: 21162.6, 300 sec: 21162.6). Total num frames: 4640768. Throughput: 0: 5074.7. Samples: 152240. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:36:56,169][78983] Avg episode reward: [(0, '4.394')] +[2024-12-28 13:36:57,051][84560] Updated weights for policy 0, policy_version 1138 (0.0008) +[2024-12-28 13:36:58,656][84560] Updated weights for policy 0, policy_version 1148 (0.0006) +[2024-12-28 13:37:00,241][84560] Updated weights for policy 0, policy_version 1158 (0.0007) +[2024-12-28 13:37:01,167][78983] Fps is (10 sec: 23347.2, 60 sec: 21650.3, 300 sec: 21650.3). Total num frames: 4763648. Throughput: 0: 5390.4. Samples: 188664. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:37:01,169][78983] Avg episode reward: [(0, '4.607')] +[2024-12-28 13:37:01,840][84560] Updated weights for policy 0, policy_version 1168 (0.0006) +[2024-12-28 13:37:03,523][84560] Updated weights for policy 0, policy_version 1178 (0.0007) +[2024-12-28 13:37:05,337][84560] Updated weights for policy 0, policy_version 1188 (0.0009) +[2024-12-28 13:37:06,167][78983] Fps is (10 sec: 24166.5, 60 sec: 21913.6, 300 sec: 21913.6). Total num frames: 4882432. Throughput: 0: 5180.7. Samples: 207230. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:37:06,169][78983] Avg episode reward: [(0, '4.488')] +[2024-12-28 13:37:07,229][84560] Updated weights for policy 0, policy_version 1198 (0.0007) +[2024-12-28 13:37:09,099][84560] Updated weights for policy 0, policy_version 1208 (0.0009) +[2024-12-28 13:37:10,912][84560] Updated weights for policy 0, policy_version 1218 (0.0007) +[2024-12-28 13:37:11,167][78983] Fps is (10 sec: 22937.6, 60 sec: 21936.3, 300 sec: 21936.3). Total num frames: 4993024. Throughput: 0: 5341.2. Samples: 240352. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:37:11,170][78983] Avg episode reward: [(0, '4.626')] +[2024-12-28 13:37:12,810][84560] Updated weights for policy 0, policy_version 1228 (0.0007) +[2024-12-28 13:37:14,482][84560] Updated weights for policy 0, policy_version 1238 (0.0007) +[2024-12-28 13:37:16,102][84560] Updated weights for policy 0, policy_version 1248 (0.0007) +[2024-12-28 13:37:16,167][78983] Fps is (10 sec: 22937.7, 60 sec: 22118.4, 300 sec: 22118.4). Total num frames: 5111808. Throughput: 0: 5692.1. Samples: 275344. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:37:16,170][78983] Avg episode reward: [(0, '4.376')] +[2024-12-28 13:37:17,740][84560] Updated weights for policy 0, policy_version 1258 (0.0006) +[2024-12-28 13:37:19,312][84560] Updated weights for policy 0, policy_version 1268 (0.0007) +[2024-12-28 13:37:20,901][84560] Updated weights for policy 0, policy_version 1278 (0.0008) +[2024-12-28 13:37:21,167][78983] Fps is (10 sec: 24575.7, 60 sec: 22416.2, 300 sec: 22416.2). Total num frames: 5238784. Throughput: 0: 5756.7. Samples: 294446. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:37:21,168][78983] Avg episode reward: [(0, '4.694')] +[2024-12-28 13:37:22,522][84560] Updated weights for policy 0, policy_version 1288 (0.0007) +[2024-12-28 13:37:24,113][84560] Updated weights for policy 0, policy_version 1298 (0.0007) +[2024-12-28 13:37:25,895][84560] Updated weights for policy 0, policy_version 1308 (0.0008) +[2024-12-28 13:37:26,167][78983] Fps is (10 sec: 24985.5, 60 sec: 22596.2, 300 sec: 22596.2). Total num frames: 5361664. Throughput: 0: 5874.7. Samples: 332396. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:37:26,169][78983] Avg episode reward: [(0, '4.549')] +[2024-12-28 13:37:27,729][84560] Updated weights for policy 0, policy_version 1318 (0.0008) +[2024-12-28 13:37:29,587][84560] Updated weights for policy 0, policy_version 1328 (0.0007) +[2024-12-28 13:37:31,167][78983] Fps is (10 sec: 23347.5, 60 sec: 23142.4, 300 sec: 22559.5). Total num frames: 5472256. Throughput: 0: 5851.0. Samples: 365832. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:37:31,168][78983] Avg episode reward: [(0, '4.474')] +[2024-12-28 13:37:31,432][84560] Updated weights for policy 0, policy_version 1338 (0.0009) +[2024-12-28 13:37:33,297][84560] Updated weights for policy 0, policy_version 1348 (0.0008) +[2024-12-28 13:37:35,163][84560] Updated weights for policy 0, policy_version 1358 (0.0008) +[2024-12-28 13:37:36,167][78983] Fps is (10 sec: 22528.1, 60 sec: 23210.7, 300 sec: 22586.5). Total num frames: 5586944. Throughput: 0: 5840.4. Samples: 382488. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:37:36,168][78983] Avg episode reward: [(0, '4.210')] +[2024-12-28 13:37:36,801][84560] Updated weights for policy 0, policy_version 1368 (0.0006) +[2024-12-28 13:37:38,378][84560] Updated weights for policy 0, policy_version 1378 (0.0007) +[2024-12-28 13:37:39,926][84560] Updated weights for policy 0, policy_version 1388 (0.0007) +[2024-12-28 13:37:41,167][78983] Fps is (10 sec: 24166.2, 60 sec: 23483.7, 300 sec: 22773.7). Total num frames: 5713920. Throughput: 0: 5942.0. Samples: 419630. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:37:41,169][78983] Avg episode reward: [(0, '4.391')] +[2024-12-28 13:37:41,542][84560] Updated weights for policy 0, policy_version 1398 (0.0008) +[2024-12-28 13:37:43,122][84560] Updated weights for policy 0, policy_version 1408 (0.0008) +[2024-12-28 13:37:44,701][84560] Updated weights for policy 0, policy_version 1418 (0.0006) +[2024-12-28 13:37:46,167][78983] Fps is (10 sec: 25804.7, 60 sec: 23756.8, 300 sec: 22988.8). Total num frames: 5844992. Throughput: 0: 5994.9. Samples: 458436. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:37:46,168][78983] Avg episode reward: [(0, '4.642')] +[2024-12-28 13:37:46,295][84560] Updated weights for policy 0, policy_version 1428 (0.0007) +[2024-12-28 13:37:47,859][84560] Updated weights for policy 0, policy_version 1438 (0.0007) +[2024-12-28 13:37:49,503][84560] Updated weights for policy 0, policy_version 1448 (0.0009) +[2024-12-28 13:37:51,167][78983] Fps is (10 sec: 25395.4, 60 sec: 23961.6, 300 sec: 23082.1). Total num frames: 5967872. Throughput: 0: 6016.7. Samples: 477982. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:37:51,168][78983] Avg episode reward: [(0, '4.362')] +[2024-12-28 13:37:51,310][84560] Updated weights for policy 0, policy_version 1458 (0.0008) +[2024-12-28 13:37:53,187][84560] Updated weights for policy 0, policy_version 1468 (0.0007) +[2024-12-28 13:37:55,087][84560] Updated weights for policy 0, policy_version 1478 (0.0009) +[2024-12-28 13:37:56,167][78983] Fps is (10 sec: 22937.4, 60 sec: 23893.3, 300 sec: 22983.1). Total num frames: 6074368. Throughput: 0: 6021.6. Samples: 511326. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:37:56,168][78983] Avg episode reward: [(0, '4.265')] +[2024-12-28 13:37:56,986][84560] Updated weights for policy 0, policy_version 1488 (0.0009) +[2024-12-28 13:37:58,911][84560] Updated weights for policy 0, policy_version 1498 (0.0008) +[2024-12-28 13:38:00,585][84560] Updated weights for policy 0, policy_version 1508 (0.0007) +[2024-12-28 13:38:01,167][78983] Fps is (10 sec: 22118.5, 60 sec: 23756.8, 300 sec: 22980.7). Total num frames: 6189056. Throughput: 0: 5989.4. Samples: 544868. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:38:01,169][78983] Avg episode reward: [(0, '4.438')] +[2024-12-28 13:38:02,181][84560] Updated weights for policy 0, policy_version 1518 (0.0006) +[2024-12-28 13:38:03,724][84560] Updated weights for policy 0, policy_version 1528 (0.0006) +[2024-12-28 13:38:05,278][84560] Updated weights for policy 0, policy_version 1538 (0.0007) +[2024-12-28 13:38:06,167][78983] Fps is (10 sec: 24576.2, 60 sec: 23961.6, 300 sec: 23142.4). Total num frames: 6320128. Throughput: 0: 5999.0. Samples: 564400. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:38:06,168][78983] Avg episode reward: [(0, '4.442')] +[2024-12-28 13:38:06,891][84560] Updated weights for policy 0, policy_version 1548 (0.0007) +[2024-12-28 13:38:08,482][84560] Updated weights for policy 0, policy_version 1558 (0.0008) +[2024-12-28 13:38:10,049][84560] Updated weights for policy 0, policy_version 1568 (0.0007) +[2024-12-28 13:38:11,167][78983] Fps is (10 sec: 26214.4, 60 sec: 24302.9, 300 sec: 23288.7). Total num frames: 6451200. Throughput: 0: 6024.6. Samples: 603504. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 13:38:11,168][78983] Avg episode reward: [(0, '4.569')] +[2024-12-28 13:38:11,630][84560] Updated weights for policy 0, policy_version 1578 (0.0007) +[2024-12-28 13:38:13,210][84560] Updated weights for policy 0, policy_version 1588 (0.0007) +[2024-12-28 13:38:14,782][84560] Updated weights for policy 0, policy_version 1598 (0.0007) +[2024-12-28 13:38:16,167][78983] Fps is (10 sec: 25804.8, 60 sec: 24439.5, 300 sec: 23384.4). Total num frames: 6578176. Throughput: 0: 6143.8. Samples: 642302. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:38:16,168][78983] Avg episode reward: [(0, '4.439')] +[2024-12-28 13:38:16,352][84560] Updated weights for policy 0, policy_version 1608 (0.0008) +[2024-12-28 13:38:17,948][84560] Updated weights for policy 0, policy_version 1618 (0.0007) +[2024-12-28 13:38:19,477][84560] Updated weights for policy 0, policy_version 1628 (0.0007) +[2024-12-28 13:38:21,067][84560] Updated weights for policy 0, policy_version 1638 (0.0006) +[2024-12-28 13:38:21,167][78983] Fps is (10 sec: 25804.7, 60 sec: 24507.8, 300 sec: 23507.5). Total num frames: 6709248. Throughput: 0: 6211.1. Samples: 661988. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:38:21,168][78983] Avg episode reward: [(0, '4.352')] +[2024-12-28 13:38:21,174][84543] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000001638_6709248.pth... +[2024-12-28 13:38:21,217][84543] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000000654_2678784.pth +[2024-12-28 13:38:22,709][84560] Updated weights for policy 0, policy_version 1648 (0.0007) +[2024-12-28 13:38:24,294][84560] Updated weights for policy 0, policy_version 1658 (0.0007) +[2024-12-28 13:38:25,905][84560] Updated weights for policy 0, policy_version 1668 (0.0007) +[2024-12-28 13:38:26,167][78983] Fps is (10 sec: 25804.9, 60 sec: 24576.0, 300 sec: 23586.1). Total num frames: 6836224. Throughput: 0: 6236.8. Samples: 700286. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:38:26,168][78983] Avg episode reward: [(0, '4.551')] +[2024-12-28 13:38:27,520][84560] Updated weights for policy 0, policy_version 1678 (0.0008) +[2024-12-28 13:38:29,089][84560] Updated weights for policy 0, policy_version 1688 (0.0007) +[2024-12-28 13:38:30,658][84560] Updated weights for policy 0, policy_version 1698 (0.0007) +[2024-12-28 13:38:31,167][78983] Fps is (10 sec: 25804.8, 60 sec: 24917.3, 300 sec: 23691.3). Total num frames: 6967296. Throughput: 0: 6234.4. Samples: 738982. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:38:31,168][78983] Avg episode reward: [(0, '4.710')] +[2024-12-28 13:38:32,251][84560] Updated weights for policy 0, policy_version 1708 (0.0006) +[2024-12-28 13:38:33,841][84560] Updated weights for policy 0, policy_version 1718 (0.0008) +[2024-12-28 13:38:35,595][84560] Updated weights for policy 0, policy_version 1728 (0.0008) +[2024-12-28 13:38:36,167][78983] Fps is (10 sec: 25395.1, 60 sec: 25053.8, 300 sec: 23725.3). Total num frames: 7090176. Throughput: 0: 6229.6. Samples: 758312. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:38:36,169][78983] Avg episode reward: [(0, '4.376')] +[2024-12-28 13:38:37,506][84560] Updated weights for policy 0, policy_version 1738 (0.0008) +[2024-12-28 13:38:39,412][84560] Updated weights for policy 0, policy_version 1748 (0.0009) +[2024-12-28 13:38:41,167][78983] Fps is (10 sec: 22937.4, 60 sec: 24712.5, 300 sec: 23635.4). Total num frames: 7196672. Throughput: 0: 6221.2. Samples: 791282. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:38:41,169][78983] Avg episode reward: [(0, '4.390')] +[2024-12-28 13:38:41,352][84560] Updated weights for policy 0, policy_version 1758 (0.0009) +[2024-12-28 13:38:43,316][84560] Updated weights for policy 0, policy_version 1768 (0.0008) +[2024-12-28 13:38:45,196][84560] Updated weights for policy 0, policy_version 1778 (0.0008) +[2024-12-28 13:38:46,167][78983] Fps is (10 sec: 21299.3, 60 sec: 24302.9, 300 sec: 23552.0). Total num frames: 7303168. Throughput: 0: 6201.5. Samples: 823934. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:38:46,168][78983] Avg episode reward: [(0, '4.468')] +[2024-12-28 13:38:46,824][84560] Updated weights for policy 0, policy_version 1788 (0.0007) +[2024-12-28 13:38:48,418][84560] Updated weights for policy 0, policy_version 1798 (0.0007) +[2024-12-28 13:38:50,014][84560] Updated weights for policy 0, policy_version 1808 (0.0007) +[2024-12-28 13:38:51,167][78983] Fps is (10 sec: 23757.1, 60 sec: 24439.5, 300 sec: 23643.8). Total num frames: 7434240. Throughput: 0: 6188.4. Samples: 842876. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:38:51,168][78983] Avg episode reward: [(0, '4.481')] +[2024-12-28 13:38:51,646][84560] Updated weights for policy 0, policy_version 1818 (0.0008) +[2024-12-28 13:38:53,255][84560] Updated weights for policy 0, policy_version 1828 (0.0007) +[2024-12-28 13:38:54,991][84560] Updated weights for policy 0, policy_version 1838 (0.0008) +[2024-12-28 13:38:56,167][78983] Fps is (10 sec: 24985.4, 60 sec: 24644.3, 300 sec: 23647.6). Total num frames: 7553024. Throughput: 0: 6153.9. Samples: 880428. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:38:56,169][78983] Avg episode reward: [(0, '4.419')] +[2024-12-28 13:38:56,840][84560] Updated weights for policy 0, policy_version 1848 (0.0008) +[2024-12-28 13:38:58,733][84560] Updated weights for policy 0, policy_version 1858 (0.0009) +[2024-12-28 13:39:00,659][84560] Updated weights for policy 0, policy_version 1868 (0.0008) +[2024-12-28 13:39:01,167][78983] Fps is (10 sec: 22527.8, 60 sec: 24507.7, 300 sec: 23571.8). Total num frames: 7659520. Throughput: 0: 6013.9. Samples: 912928. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:39:01,169][78983] Avg episode reward: [(0, '4.508')] +[2024-12-28 13:39:02,598][84560] Updated weights for policy 0, policy_version 1878 (0.0008) +[2024-12-28 13:39:04,477][84560] Updated weights for policy 0, policy_version 1888 (0.0007) +[2024-12-28 13:39:06,167][78983] Fps is (10 sec: 21708.9, 60 sec: 24166.4, 300 sec: 23526.4). Total num frames: 7770112. Throughput: 0: 5927.7. Samples: 928734. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:39:06,168][78983] Avg episode reward: [(0, '4.418')] +[2024-12-28 13:39:06,355][84560] Updated weights for policy 0, policy_version 1898 (0.0008) +[2024-12-28 13:39:08,160][84560] Updated weights for policy 0, policy_version 1908 (0.0007) +[2024-12-28 13:39:09,741][84560] Updated weights for policy 0, policy_version 1918 (0.0008) +[2024-12-28 13:39:11,167][78983] Fps is (10 sec: 23347.4, 60 sec: 24029.9, 300 sec: 23558.2). Total num frames: 7892992. Throughput: 0: 5852.0. Samples: 963624. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 13:39:11,168][78983] Avg episode reward: [(0, '4.591')] +[2024-12-28 13:39:11,304][84560] Updated weights for policy 0, policy_version 1928 (0.0006) +[2024-12-28 13:39:12,929][84560] Updated weights for policy 0, policy_version 1938 (0.0007) +[2024-12-28 13:39:14,451][84560] Updated weights for policy 0, policy_version 1948 (0.0007) +[2024-12-28 13:39:16,013][84560] Updated weights for policy 0, policy_version 1958 (0.0007) +[2024-12-28 13:39:16,167][78983] Fps is (10 sec: 24985.4, 60 sec: 24029.8, 300 sec: 23612.2). Total num frames: 8019968. Throughput: 0: 5866.3. Samples: 1002966. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:39:16,169][78983] Avg episode reward: [(0, '4.390')] +[2024-12-28 13:39:17,575][84560] Updated weights for policy 0, policy_version 1968 (0.0007) +[2024-12-28 13:39:19,120][84560] Updated weights for policy 0, policy_version 1978 (0.0008) +[2024-12-28 13:39:20,689][84560] Updated weights for policy 0, policy_version 1988 (0.0006) +[2024-12-28 13:39:21,167][78983] Fps is (10 sec: 25804.8, 60 sec: 24029.9, 300 sec: 23686.6). Total num frames: 8151040. Throughput: 0: 5874.0. Samples: 1022642. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:39:21,168][78983] Avg episode reward: [(0, '4.512')] +[2024-12-28 13:39:22,305][84560] Updated weights for policy 0, policy_version 1998 (0.0006) +[2024-12-28 13:39:23,843][84560] Updated weights for policy 0, policy_version 2008 (0.0006) +[2024-12-28 13:39:25,400][84560] Updated weights for policy 0, policy_version 2018 (0.0006) +[2024-12-28 13:39:26,167][78983] Fps is (10 sec: 26214.6, 60 sec: 24098.1, 300 sec: 23756.8). Total num frames: 8282112. Throughput: 0: 6017.2. Samples: 1062054. 
Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 13:39:26,169][78983] Avg episode reward: [(0, '4.629')] +[2024-12-28 13:39:26,973][84560] Updated weights for policy 0, policy_version 2028 (0.0006) +[2024-12-28 13:39:28,517][84560] Updated weights for policy 0, policy_version 2038 (0.0006) +[2024-12-28 13:39:30,067][84560] Updated weights for policy 0, policy_version 2048 (0.0006) +[2024-12-28 13:39:31,167][78983] Fps is (10 sec: 26624.0, 60 sec: 24166.4, 300 sec: 23845.4). Total num frames: 8417280. Throughput: 0: 6165.6. Samples: 1101388. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 13:39:31,168][78983] Avg episode reward: [(0, '4.243')] +[2024-12-28 13:39:31,616][84560] Updated weights for policy 0, policy_version 2058 (0.0007) +[2024-12-28 13:39:33,200][84560] Updated weights for policy 0, policy_version 2068 (0.0007) +[2024-12-28 13:39:34,766][84560] Updated weights for policy 0, policy_version 2078 (0.0007) +[2024-12-28 13:39:36,167][78983] Fps is (10 sec: 25804.7, 60 sec: 24166.4, 300 sec: 23864.6). Total num frames: 8540160. Throughput: 0: 6178.7. Samples: 1120918. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:39:36,169][78983] Avg episode reward: [(0, '4.483')] +[2024-12-28 13:39:36,580][84560] Updated weights for policy 0, policy_version 2088 (0.0009) +[2024-12-28 13:39:38,393][84560] Updated weights for policy 0, policy_version 2098 (0.0008) +[2024-12-28 13:39:40,178][84560] Updated weights for policy 0, policy_version 2108 (0.0008) +[2024-12-28 13:39:41,167][78983] Fps is (10 sec: 23756.8, 60 sec: 24303.0, 300 sec: 23840.8). Total num frames: 8654848. Throughput: 0: 6116.0. Samples: 1155646. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:39:41,168][78983] Avg episode reward: [(0, '4.414')] +[2024-12-28 13:39:42,038][84560] Updated weights for policy 0, policy_version 2118 (0.0008) +[2024-12-28 13:39:43,930][84560] Updated weights for policy 0, policy_version 2128 (0.0009) +[2024-12-28 13:39:45,743][84560] Updated weights for policy 0, policy_version 2138 (0.0009) +[2024-12-28 13:39:46,167][78983] Fps is (10 sec: 22528.1, 60 sec: 24371.2, 300 sec: 23797.8). Total num frames: 8765440. Throughput: 0: 6136.2. Samples: 1189058. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:39:46,168][78983] Avg episode reward: [(0, '4.491')] +[2024-12-28 13:39:47,317][84560] Updated weights for policy 0, policy_version 2148 (0.0007) +[2024-12-28 13:39:48,902][84560] Updated weights for policy 0, policy_version 2158 (0.0006) +[2024-12-28 13:39:50,420][84560] Updated weights for policy 0, policy_version 2168 (0.0007) +[2024-12-28 13:39:51,167][78983] Fps is (10 sec: 24166.2, 60 sec: 24371.1, 300 sec: 23856.7). Total num frames: 8896512. Throughput: 0: 6218.6. Samples: 1208572. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 13:39:51,168][78983] Avg episode reward: [(0, '4.437')] +[2024-12-28 13:39:51,988][84560] Updated weights for policy 0, policy_version 2178 (0.0007) +[2024-12-28 13:39:53,551][84560] Updated weights for policy 0, policy_version 2188 (0.0007) +[2024-12-28 13:39:55,163][84560] Updated weights for policy 0, policy_version 2198 (0.0007) +[2024-12-28 13:39:56,167][78983] Fps is (10 sec: 26214.4, 60 sec: 24576.0, 300 sec: 23912.8). Total num frames: 9027584. Throughput: 0: 6317.3. Samples: 1247902. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:39:56,169][78983] Avg episode reward: [(0, '4.572')] +[2024-12-28 13:39:56,773][84560] Updated weights for policy 0, policy_version 2208 (0.0008) +[2024-12-28 13:39:58,341][84560] Updated weights for policy 0, policy_version 2218 (0.0007) +[2024-12-28 13:39:59,934][84560] Updated weights for policy 0, policy_version 2228 (0.0007) +[2024-12-28 13:40:01,167][78983] Fps is (10 sec: 25805.1, 60 sec: 24917.4, 300 sec: 23947.3). Total num frames: 9154560. Throughput: 0: 6296.8. Samples: 1286320. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 13:40:01,168][78983] Avg episode reward: [(0, '4.350')] +[2024-12-28 13:40:01,523][84560] Updated weights for policy 0, policy_version 2238 (0.0007) +[2024-12-28 13:40:03,129][84560] Updated weights for policy 0, policy_version 2248 (0.0007) +[2024-12-28 13:40:04,926][84560] Updated weights for policy 0, policy_version 2258 (0.0008) +[2024-12-28 13:40:06,167][78983] Fps is (10 sec: 24575.8, 60 sec: 25053.8, 300 sec: 23943.0). Total num frames: 9273344. Throughput: 0: 6278.2. Samples: 1305162. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:40:06,169][78983] Avg episode reward: [(0, '4.633')] +[2024-12-28 13:40:06,770][84560] Updated weights for policy 0, policy_version 2268 (0.0007) +[2024-12-28 13:40:08,610][84560] Updated weights for policy 0, policy_version 2278 (0.0008) +[2024-12-28 13:40:10,427][84560] Updated weights for policy 0, policy_version 2288 (0.0008) +[2024-12-28 13:40:11,167][78983] Fps is (10 sec: 22937.5, 60 sec: 24849.1, 300 sec: 23902.4). Total num frames: 9383936. Throughput: 0: 6145.9. Samples: 1338620. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:40:11,169][78983] Avg episode reward: [(0, '4.503')] +[2024-12-28 13:40:12,323][84560] Updated weights for policy 0, policy_version 2298 (0.0008) +[2024-12-28 13:40:14,082][84560] Updated weights for policy 0, policy_version 2308 (0.0007) +[2024-12-28 13:40:15,748][84560] Updated weights for policy 0, policy_version 2318 (0.0008) +[2024-12-28 13:40:16,167][78983] Fps is (10 sec: 22937.8, 60 sec: 24712.6, 300 sec: 23899.3). Total num frames: 9502720. Throughput: 0: 6041.0. Samples: 1373232. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:40:16,169][78983] Avg episode reward: [(0, '4.412')] +[2024-12-28 13:40:17,627][84560] Updated weights for policy 0, policy_version 2328 (0.0009) +[2024-12-28 13:40:19,501][84560] Updated weights for policy 0, policy_version 2338 (0.0008) +[2024-12-28 13:40:21,167][78983] Fps is (10 sec: 22528.0, 60 sec: 24302.9, 300 sec: 23843.9). Total num frames: 9609216. Throughput: 0: 5972.0. Samples: 1389656. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 13:40:21,169][78983] Avg episode reward: [(0, '4.243')] +[2024-12-28 13:40:21,174][84543] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000002346_9609216.pth... +[2024-12-28 13:40:21,214][84543] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth +[2024-12-28 13:40:21,432][84560] Updated weights for policy 0, policy_version 2348 (0.0008) +[2024-12-28 13:40:23,304][84560] Updated weights for policy 0, policy_version 2358 (0.0007) +[2024-12-28 13:40:25,108][84560] Updated weights for policy 0, policy_version 2368 (0.0007) +[2024-12-28 13:40:26,167][78983] Fps is (10 sec: 21708.7, 60 sec: 23961.6, 300 sec: 23808.0). Total num frames: 9719808. 
Throughput: 0: 5928.1. Samples: 1422412. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:40:26,168][78983] Avg episode reward: [(0, '4.280')] +[2024-12-28 13:40:26,876][84560] Updated weights for policy 0, policy_version 2378 (0.0006) +[2024-12-28 13:40:28,402][84560] Updated weights for policy 0, policy_version 2388 (0.0006) +[2024-12-28 13:40:29,933][84560] Updated weights for policy 0, policy_version 2398 (0.0007) +[2024-12-28 13:40:31,167][78983] Fps is (10 sec: 24166.4, 60 sec: 23893.3, 300 sec: 23857.1). Total num frames: 9850880. Throughput: 0: 6030.7. Samples: 1460440. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 13:40:31,168][78983] Avg episode reward: [(0, '4.245')] +[2024-12-28 13:40:31,487][84560] Updated weights for policy 0, policy_version 2408 (0.0006) +[2024-12-28 13:40:33,068][84560] Updated weights for policy 0, policy_version 2418 (0.0007) +[2024-12-28 13:40:34,616][84560] Updated weights for policy 0, policy_version 2428 (0.0007) +[2024-12-28 13:40:36,145][84560] Updated weights for policy 0, policy_version 2438 (0.0006) +[2024-12-28 13:40:36,167][78983] Fps is (10 sec: 26624.0, 60 sec: 24098.1, 300 sec: 23920.6). Total num frames: 9986048. Throughput: 0: 6037.1. Samples: 1480240. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:40:36,169][78983] Avg episode reward: [(0, '4.491')] +[2024-12-28 13:40:37,699][84560] Updated weights for policy 0, policy_version 2448 (0.0007) +[2024-12-28 13:40:39,282][84560] Updated weights for policy 0, policy_version 2458 (0.0006) +[2024-12-28 13:40:40,808][84560] Updated weights for policy 0, policy_version 2468 (0.0006) +[2024-12-28 13:40:41,167][78983] Fps is (10 sec: 26623.8, 60 sec: 24371.2, 300 sec: 23965.6). Total num frames: 10117120. Throughput: 0: 6037.9. Samples: 1519606. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2024-12-28 13:40:41,169][78983] Avg episode reward: [(0, '4.708')] +[2024-12-28 13:40:42,390][84560] Updated weights for policy 0, policy_version 2478 (0.0006) +[2024-12-28 13:40:43,967][84560] Updated weights for policy 0, policy_version 2488 (0.0007) +[2024-12-28 13:40:45,525][84560] Updated weights for policy 0, policy_version 2498 (0.0006) +[2024-12-28 13:40:46,167][78983] Fps is (10 sec: 26214.6, 60 sec: 24712.5, 300 sec: 24008.9). Total num frames: 10248192. Throughput: 0: 6057.8. Samples: 1558920. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:40:46,168][78983] Avg episode reward: [(0, '4.360')] +[2024-12-28 13:40:47,131][84560] Updated weights for policy 0, policy_version 2508 (0.0007) +[2024-12-28 13:40:48,822][84560] Updated weights for policy 0, policy_version 2518 (0.0008) +[2024-12-28 13:40:50,660][84560] Updated weights for policy 0, policy_version 2528 (0.0008) +[2024-12-28 13:40:51,167][78983] Fps is (10 sec: 24576.3, 60 sec: 24439.5, 300 sec: 23988.6). Total num frames: 10362880. Throughput: 0: 6052.9. Samples: 1577540. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:40:51,168][78983] Avg episode reward: [(0, '4.511')] +[2024-12-28 13:40:52,560][84560] Updated weights for policy 0, policy_version 2538 (0.0007) +[2024-12-28 13:40:54,417][84560] Updated weights for policy 0, policy_version 2548 (0.0008) +[2024-12-28 13:40:56,167][78983] Fps is (10 sec: 22528.0, 60 sec: 24098.1, 300 sec: 23954.0). Total num frames: 10473472. Throughput: 0: 6038.0. Samples: 1610328. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 13:40:56,169][78983] Avg episode reward: [(0, '4.364')] +[2024-12-28 13:40:56,308][84560] Updated weights for policy 0, policy_version 2558 (0.0009) +[2024-12-28 13:40:58,210][84560] Updated weights for policy 0, policy_version 2568 (0.0008) +[2024-12-28 13:41:00,034][84560] Updated weights for policy 0, policy_version 2578 (0.0008) +[2024-12-28 13:41:01,167][78983] Fps is (10 sec: 22118.2, 60 sec: 23825.0, 300 sec: 23920.6). Total num frames: 10584064. Throughput: 0: 6017.1. Samples: 1644002. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:41:01,168][78983] Avg episode reward: [(0, '4.373')] +[2024-12-28 13:41:01,653][84560] Updated weights for policy 0, policy_version 2588 (0.0006) +[2024-12-28 13:41:03,217][84560] Updated weights for policy 0, policy_version 2598 (0.0006) +[2024-12-28 13:41:04,775][84560] Updated weights for policy 0, policy_version 2608 (0.0007) +[2024-12-28 13:41:06,167][78983] Fps is (10 sec: 24166.1, 60 sec: 24029.9, 300 sec: 23961.6). Total num frames: 10715136. Throughput: 0: 6085.5. Samples: 1663502. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:41:06,169][78983] Avg episode reward: [(0, '4.355')] +[2024-12-28 13:41:06,384][84560] Updated weights for policy 0, policy_version 2618 (0.0008) +[2024-12-28 13:41:07,941][84560] Updated weights for policy 0, policy_version 2628 (0.0007) +[2024-12-28 13:41:09,457][84560] Updated weights for policy 0, policy_version 2638 (0.0007) +[2024-12-28 13:41:11,001][84560] Updated weights for policy 0, policy_version 2648 (0.0006) +[2024-12-28 13:41:11,167][78983] Fps is (10 sec: 26214.7, 60 sec: 24371.2, 300 sec: 24001.1). Total num frames: 10846208. Throughput: 0: 6228.8. Samples: 1702706. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:41:11,168][78983] Avg episode reward: [(0, '4.386')] +[2024-12-28 13:41:12,623][84560] Updated weights for policy 0, policy_version 2658 (0.0008) +[2024-12-28 13:41:14,210][84560] Updated weights for policy 0, policy_version 2668 (0.0007) +[2024-12-28 13:41:15,770][84560] Updated weights for policy 0, policy_version 2678 (0.0008) +[2024-12-28 13:41:16,167][78983] Fps is (10 sec: 26214.6, 60 sec: 24576.0, 300 sec: 24039.3). Total num frames: 10977280. Throughput: 0: 6249.6. Samples: 1741670. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:41:16,168][78983] Avg episode reward: [(0, '4.363')] +[2024-12-28 13:41:17,399][84560] Updated weights for policy 0, policy_version 2688 (0.0006) +[2024-12-28 13:41:19,263][84560] Updated weights for policy 0, policy_version 2698 (0.0007) +[2024-12-28 13:41:21,167][78983] Fps is (10 sec: 23756.5, 60 sec: 24576.0, 300 sec: 23992.8). Total num frames: 11083776. Throughput: 0: 6207.1. Samples: 1759560. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:41:21,168][78983] Avg episode reward: [(0, '4.257')] +[2024-12-28 13:41:21,989][84560] Updated weights for policy 0, policy_version 2708 (0.0009) +[2024-12-28 13:41:23,788][84560] Updated weights for policy 0, policy_version 2718 (0.0008) +[2024-12-28 13:41:25,552][84560] Updated weights for policy 0, policy_version 2728 (0.0008) +[2024-12-28 13:41:26,167][78983] Fps is (10 sec: 20889.2, 60 sec: 24439.4, 300 sec: 24076.1). Total num frames: 11186176. Throughput: 0: 5977.8. Samples: 1788608. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:41:26,169][78983] Avg episode reward: [(0, '4.347')] +[2024-12-28 13:41:27,548][84560] Updated weights for policy 0, policy_version 2738 (0.0010) +[2024-12-28 13:41:29,225][84560] Updated weights for policy 0, policy_version 2748 (0.0007) +[2024-12-28 13:41:30,765][84560] Updated weights for policy 0, policy_version 2758 (0.0007) +[2024-12-28 13:41:31,167][78983] Fps is (10 sec: 22118.7, 60 sec: 24234.7, 300 sec: 24103.9). Total num frames: 11304960. Throughput: 0: 5885.9. Samples: 1823786. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:41:31,168][78983] Avg episode reward: [(0, '4.525')] +[2024-12-28 13:41:32,286][84560] Updated weights for policy 0, policy_version 2768 (0.0007) +[2024-12-28 13:41:33,793][84560] Updated weights for policy 0, policy_version 2778 (0.0006) +[2024-12-28 13:41:35,290][84560] Updated weights for policy 0, policy_version 2788 (0.0006) +[2024-12-28 13:41:36,167][78983] Fps is (10 sec: 25395.8, 60 sec: 24234.7, 300 sec: 24187.2). Total num frames: 11440128. Throughput: 0: 5920.7. Samples: 1843970. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 13:41:36,168][78983] Avg episode reward: [(0, '4.441')] +[2024-12-28 13:41:36,850][84560] Updated weights for policy 0, policy_version 2798 (0.0007) +[2024-12-28 13:41:38,359][84560] Updated weights for policy 0, policy_version 2808 (0.0008) +[2024-12-28 13:41:39,902][84560] Updated weights for policy 0, policy_version 2818 (0.0006) +[2024-12-28 13:41:41,167][78983] Fps is (10 sec: 27033.5, 60 sec: 24303.0, 300 sec: 24256.7). Total num frames: 11575296. Throughput: 0: 6089.7. Samples: 1884366. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) +[2024-12-28 13:41:41,168][78983] Avg episode reward: [(0, '4.635')] +[2024-12-28 13:41:41,393][84560] Updated weights for policy 0, policy_version 2828 (0.0006) +[2024-12-28 13:41:42,887][84560] Updated weights for policy 0, policy_version 2838 (0.0007) +[2024-12-28 13:41:44,417][84560] Updated weights for policy 0, policy_version 2848 (0.0008) +[2024-12-28 13:41:45,913][84560] Updated weights for policy 0, policy_version 2858 (0.0007) +[2024-12-28 13:41:46,168][78983] Fps is (10 sec: 27032.0, 60 sec: 24371.0, 300 sec: 24339.9). Total num frames: 11710464. Throughput: 0: 6245.8. Samples: 1925068. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:41:46,169][78983] Avg episode reward: [(0, '4.393')] +[2024-12-28 13:41:47,459][84560] Updated weights for policy 0, policy_version 2868 (0.0007) +[2024-12-28 13:41:49,054][84560] Updated weights for policy 0, policy_version 2878 (0.0006) +[2024-12-28 13:41:50,598][84560] Updated weights for policy 0, policy_version 2888 (0.0007) +[2024-12-28 13:41:51,167][78983] Fps is (10 sec: 26623.6, 60 sec: 24644.2, 300 sec: 24409.4). Total num frames: 11841536. Throughput: 0: 6253.1. Samples: 1944894. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:41:51,168][78983] Avg episode reward: [(0, '4.457')] +[2024-12-28 13:41:52,106][84560] Updated weights for policy 0, policy_version 2898 (0.0006) +[2024-12-28 13:41:53,620][84560] Updated weights for policy 0, policy_version 2908 (0.0006) +[2024-12-28 13:41:55,189][84560] Updated weights for policy 0, policy_version 2918 (0.0006) +[2024-12-28 13:41:56,167][78983] Fps is (10 sec: 26625.7, 60 sec: 25053.9, 300 sec: 24451.0). Total num frames: 11976704. Throughput: 0: 6265.6. Samples: 1984658. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:41:56,168][78983] Avg episode reward: [(0, '4.506')] +[2024-12-28 13:41:56,755][84560] Updated weights for policy 0, policy_version 2928 (0.0007) +[2024-12-28 13:41:58,263][84560] Updated weights for policy 0, policy_version 2938 (0.0007) +[2024-12-28 13:41:59,805][84560] Updated weights for policy 0, policy_version 2948 (0.0007) +[2024-12-28 13:42:01,167][78983] Fps is (10 sec: 27034.1, 60 sec: 25463.5, 300 sec: 24506.6). Total num frames: 12111872. Throughput: 0: 6290.9. Samples: 2024762. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:42:01,168][78983] Avg episode reward: [(0, '4.389')] +[2024-12-28 13:42:01,327][84560] Updated weights for policy 0, policy_version 2958 (0.0007) +[2024-12-28 13:42:02,827][84560] Updated weights for policy 0, policy_version 2968 (0.0008) +[2024-12-28 13:42:04,362][84560] Updated weights for policy 0, policy_version 2978 (0.0006) +[2024-12-28 13:42:05,940][84560] Updated weights for policy 0, policy_version 2988 (0.0008) +[2024-12-28 13:42:06,167][78983] Fps is (10 sec: 26623.8, 60 sec: 25463.5, 300 sec: 24576.0). Total num frames: 12242944. Throughput: 0: 6343.7. Samples: 2045024. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 13:42:06,169][78983] Avg episode reward: [(0, '4.549')] +[2024-12-28 13:42:07,430][84560] Updated weights for policy 0, policy_version 2998 (0.0007) +[2024-12-28 13:42:08,905][84560] Updated weights for policy 0, policy_version 3008 (0.0007) +[2024-12-28 13:42:10,454][84560] Updated weights for policy 0, policy_version 3018 (0.0007) +[2024-12-28 13:42:11,167][78983] Fps is (10 sec: 26623.9, 60 sec: 25531.7, 300 sec: 24631.5). Total num frames: 12378112. Throughput: 0: 6597.7. Samples: 2085502. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:42:11,168][78983] Avg episode reward: [(0, '4.322')] +[2024-12-28 13:42:11,955][84560] Updated weights for policy 0, policy_version 3028 (0.0006) +[2024-12-28 13:42:13,514][84560] Updated weights for policy 0, policy_version 3038 (0.0006) +[2024-12-28 13:42:15,016][84560] Updated weights for policy 0, policy_version 3048 (0.0007) +[2024-12-28 13:42:16,167][78983] Fps is (10 sec: 27033.6, 60 sec: 25600.0, 300 sec: 24659.3). Total num frames: 12513280. Throughput: 0: 6705.0. Samples: 2125512. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:42:16,169][78983] Avg episode reward: [(0, '4.423')] +[2024-12-28 13:42:16,539][84560] Updated weights for policy 0, policy_version 3058 (0.0007) +[2024-12-28 13:42:18,067][84560] Updated weights for policy 0, policy_version 3068 (0.0006) +[2024-12-28 13:42:19,617][84560] Updated weights for policy 0, policy_version 3078 (0.0006) +[2024-12-28 13:42:21,133][84560] Updated weights for policy 0, policy_version 3088 (0.0007) +[2024-12-28 13:42:21,167][78983] Fps is (10 sec: 27033.3, 60 sec: 26077.9, 300 sec: 24701.0). Total num frames: 12648448. Throughput: 0: 6707.6. Samples: 2145814. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:42:21,168][78983] Avg episode reward: [(0, '4.426')] +[2024-12-28 13:42:21,175][84543] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000003088_12648448.pth... 
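The checkpoint save above (and the removal of the older checkpoint that follows) reflects a rolling retention policy: each new checkpoint_<policy_version>_<total_frames>.pth is written and the oldest surviving file is deleted, so only the most recent few remain on disk. A minimal sketch of that retention logic, assuming only the checkpoint_p0 directory layout and filename pattern visible in these paths (illustrative, not the library's own implementation):

```python
from pathlib import Path

def prune_checkpoints(ckpt_dir: str, keep_last: int = 2) -> None:
    """Keep only the `keep_last` newest checkpoint_*.pth files and delete the rest.

    Filenames are assumed to look like checkpoint_000003088_12648448.pth, so sorting
    by the embedded policy-version field orders them chronologically.
    """
    ckpts = sorted(
        Path(ckpt_dir).glob("checkpoint_*.pth"),
        key=lambda p: int(p.stem.split("_")[1]),  # policy version field
    )
    for old in ckpts[:-keep_last]:
        print(f"Removing {old}")
        old.unlink()

# Hypothetical usage, mirroring the train_dir layout seen in the log:
# prune_checkpoints("train_dir/default_experiment/checkpoint_p0/", keep_last=2)
```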
+[2024-12-28 13:42:21,213][84543] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000001638_6709248.pth +[2024-12-28 13:42:23,740][84560] Updated weights for policy 0, policy_version 3098 (0.0009) +[2024-12-28 13:42:25,920][84560] Updated weights for policy 0, policy_version 3108 (0.0008) +[2024-12-28 13:42:26,167][78983] Fps is (10 sec: 22118.3, 60 sec: 25804.9, 300 sec: 24617.7). Total num frames: 12734464. Throughput: 0: 6495.8. Samples: 2176678. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:42:26,168][78983] Avg episode reward: [(0, '4.351')] +[2024-12-28 13:42:27,950][84560] Updated weights for policy 0, policy_version 3118 (0.0009) +[2024-12-28 13:42:29,855][84560] Updated weights for policy 0, policy_version 3128 (0.0008) +[2024-12-28 13:42:31,167][78983] Fps is (10 sec: 19251.4, 60 sec: 25600.0, 300 sec: 24589.9). Total num frames: 12840960. Throughput: 0: 6275.7. Samples: 2207470. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:42:31,168][78983] Avg episode reward: [(0, '4.478')] +[2024-12-28 13:42:31,737][84560] Updated weights for policy 0, policy_version 3138 (0.0008) +[2024-12-28 13:42:33,384][84560] Updated weights for policy 0, policy_version 3148 (0.0008) +[2024-12-28 13:42:34,896][84560] Updated weights for policy 0, policy_version 3158 (0.0006) +[2024-12-28 13:42:36,167][78983] Fps is (10 sec: 22937.5, 60 sec: 25395.2, 300 sec: 24576.0). Total num frames: 12963840. Throughput: 0: 6238.2. Samples: 2225612. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:42:36,168][78983] Avg episode reward: [(0, '4.233')] +[2024-12-28 13:42:36,505][84560] Updated weights for policy 0, policy_version 3168 (0.0006) +[2024-12-28 13:42:38,018][84560] Updated weights for policy 0, policy_version 3178 (0.0007) +[2024-12-28 13:42:39,586][84560] Updated weights for policy 0, policy_version 3188 (0.0007) +[2024-12-28 13:42:41,146][84560] Updated weights for policy 0, policy_version 3198 (0.0007) +[2024-12-28 13:42:41,167][78983] Fps is (10 sec: 25804.9, 60 sec: 25395.2, 300 sec: 24589.9). Total num frames: 13099008. Throughput: 0: 6234.3. Samples: 2265202. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:42:41,168][78983] Avg episode reward: [(0, '4.333')] +[2024-12-28 13:42:42,720][84560] Updated weights for policy 0, policy_version 3208 (0.0007) +[2024-12-28 13:42:44,264][84560] Updated weights for policy 0, policy_version 3218 (0.0007) +[2024-12-28 13:42:45,808][84560] Updated weights for policy 0, policy_version 3228 (0.0006) +[2024-12-28 13:42:46,167][78983] Fps is (10 sec: 26624.4, 60 sec: 25327.2, 300 sec: 24617.7). Total num frames: 13230080. Throughput: 0: 6218.0. Samples: 2304574. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:42:46,168][78983] Avg episode reward: [(0, '4.253')] +[2024-12-28 13:42:47,354][84560] Updated weights for policy 0, policy_version 3238 (0.0006) +[2024-12-28 13:42:48,907][84560] Updated weights for policy 0, policy_version 3248 (0.0007) +[2024-12-28 13:42:50,458][84560] Updated weights for policy 0, policy_version 3258 (0.0007) +[2024-12-28 13:42:51,167][78983] Fps is (10 sec: 26214.4, 60 sec: 25327.0, 300 sec: 24701.0). Total num frames: 13361152. Throughput: 0: 6210.8. Samples: 2324512. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:42:51,169][78983] Avg episode reward: [(0, '4.267')] +[2024-12-28 13:42:52,035][84560] Updated weights for policy 0, policy_version 3268 (0.0006) +[2024-12-28 13:42:53,648][84560] Updated weights for policy 0, policy_version 3278 (0.0007) +[2024-12-28 13:42:55,306][84560] Updated weights for policy 0, policy_version 3288 (0.0008) +[2024-12-28 13:42:56,167][78983] Fps is (10 sec: 25804.2, 60 sec: 25190.3, 300 sec: 24742.6). Total num frames: 13488128. Throughput: 0: 6169.2. Samples: 2363118. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:42:56,169][78983] Avg episode reward: [(0, '4.347')] +[2024-12-28 13:42:56,929][84560] Updated weights for policy 0, policy_version 3298 (0.0007) +[2024-12-28 13:42:58,557][84560] Updated weights for policy 0, policy_version 3308 (0.0007) +[2024-12-28 13:43:00,176][84560] Updated weights for policy 0, policy_version 3318 (0.0008) +[2024-12-28 13:43:01,167][78983] Fps is (10 sec: 24985.7, 60 sec: 24985.6, 300 sec: 24714.9). Total num frames: 13611008. Throughput: 0: 6117.5. Samples: 2400798. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) +[2024-12-28 13:43:01,168][78983] Avg episode reward: [(0, '4.465')] +[2024-12-28 13:43:01,820][84560] Updated weights for policy 0, policy_version 3328 (0.0007) +[2024-12-28 13:43:03,439][84560] Updated weights for policy 0, policy_version 3338 (0.0007) +[2024-12-28 13:43:05,056][84560] Updated weights for policy 0, policy_version 3348 (0.0008) +[2024-12-28 13:43:06,167][78983] Fps is (10 sec: 24986.0, 60 sec: 24917.3, 300 sec: 24701.0). Total num frames: 13737984. Throughput: 0: 6085.8. Samples: 2419672. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:43:06,168][78983] Avg episode reward: [(0, '4.360')] +[2024-12-28 13:43:06,693][84560] Updated weights for policy 0, policy_version 3358 (0.0007) +[2024-12-28 13:43:08,291][84560] Updated weights for policy 0, policy_version 3368 (0.0008) +[2024-12-28 13:43:09,920][84560] Updated weights for policy 0, policy_version 3378 (0.0008) +[2024-12-28 13:43:11,167][78983] Fps is (10 sec: 25394.9, 60 sec: 24780.8, 300 sec: 24701.0). Total num frames: 13864960. Throughput: 0: 6243.3. Samples: 2457628. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:43:11,168][78983] Avg episode reward: [(0, '4.444')] +[2024-12-28 13:43:11,618][84560] Updated weights for policy 0, policy_version 3388 (0.0008) +[2024-12-28 13:43:13,273][84560] Updated weights for policy 0, policy_version 3398 (0.0007) +[2024-12-28 13:43:14,864][84560] Updated weights for policy 0, policy_version 3408 (0.0006) +[2024-12-28 13:43:16,167][78983] Fps is (10 sec: 24985.4, 60 sec: 24576.0, 300 sec: 24673.2). Total num frames: 13987840. Throughput: 0: 6390.3. Samples: 2495034. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:43:16,168][78983] Avg episode reward: [(0, '4.593')] +[2024-12-28 13:43:16,504][84560] Updated weights for policy 0, policy_version 3418 (0.0007) +[2024-12-28 13:43:18,145][84560] Updated weights for policy 0, policy_version 3428 (0.0008) +[2024-12-28 13:43:19,790][84560] Updated weights for policy 0, policy_version 3438 (0.0008) +[2024-12-28 13:43:21,167][78983] Fps is (10 sec: 24985.8, 60 sec: 24439.5, 300 sec: 24673.2). Total num frames: 14114816. Throughput: 0: 6400.1. Samples: 2513618. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:43:21,168][78983] Avg episode reward: [(0, '4.465')] +[2024-12-28 13:43:21,421][84560] Updated weights for policy 0, policy_version 3448 (0.0007) +[2024-12-28 13:43:23,000][84560] Updated weights for policy 0, policy_version 3458 (0.0007) +[2024-12-28 13:43:24,640][84560] Updated weights for policy 0, policy_version 3468 (0.0006) +[2024-12-28 13:43:26,167][78983] Fps is (10 sec: 25395.4, 60 sec: 25122.1, 300 sec: 24659.3). Total num frames: 14241792. Throughput: 0: 6362.6. Samples: 2551520. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:43:26,168][78983] Avg episode reward: [(0, '4.344')] +[2024-12-28 13:43:26,239][84560] Updated weights for policy 0, policy_version 3478 (0.0006) +[2024-12-28 13:43:27,862][84560] Updated weights for policy 0, policy_version 3488 (0.0008) +[2024-12-28 13:43:29,491][84560] Updated weights for policy 0, policy_version 3498 (0.0007) +[2024-12-28 13:43:31,116][84560] Updated weights for policy 0, policy_version 3508 (0.0007) +[2024-12-28 13:43:31,167][78983] Fps is (10 sec: 25395.1, 60 sec: 25463.5, 300 sec: 24673.2). Total num frames: 14368768. Throughput: 0: 6330.8. Samples: 2589462. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:43:31,168][78983] Avg episode reward: [(0, '4.350')] +[2024-12-28 13:43:32,786][84560] Updated weights for policy 0, policy_version 3518 (0.0007) +[2024-12-28 13:43:34,464][84560] Updated weights for policy 0, policy_version 3528 (0.0007) +[2024-12-28 13:43:36,103][84560] Updated weights for policy 0, policy_version 3538 (0.0007) +[2024-12-28 13:43:36,167][78983] Fps is (10 sec: 24985.5, 60 sec: 25463.5, 300 sec: 24728.7). Total num frames: 14491648. Throughput: 0: 6297.7. Samples: 2607908. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:43:36,168][78983] Avg episode reward: [(0, '4.356')] +[2024-12-28 13:43:37,713][84560] Updated weights for policy 0, policy_version 3548 (0.0007) +[2024-12-28 13:43:39,326][84560] Updated weights for policy 0, policy_version 3558 (0.0008) +[2024-12-28 13:43:40,922][84560] Updated weights for policy 0, policy_version 3568 (0.0007) +[2024-12-28 13:43:41,167][78983] Fps is (10 sec: 24985.7, 60 sec: 25326.9, 300 sec: 24798.2). Total num frames: 14618624. Throughput: 0: 6279.4. Samples: 2645688. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:43:41,168][78983] Avg episode reward: [(0, '4.536')] +[2024-12-28 13:43:42,537][84560] Updated weights for policy 0, policy_version 3578 (0.0007) +[2024-12-28 13:43:44,194][84560] Updated weights for policy 0, policy_version 3588 (0.0009) +[2024-12-28 13:43:45,826][84560] Updated weights for policy 0, policy_version 3598 (0.0006) +[2024-12-28 13:43:46,168][78983] Fps is (10 sec: 25394.6, 60 sec: 25258.5, 300 sec: 24784.2). Total num frames: 14745600. Throughput: 0: 6277.3. Samples: 2683280. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2024-12-28 13:43:46,169][78983] Avg episode reward: [(0, '4.498')] +[2024-12-28 13:43:47,477][84560] Updated weights for policy 0, policy_version 3608 (0.0007) +[2024-12-28 13:43:49,085][84560] Updated weights for policy 0, policy_version 3618 (0.0007) +[2024-12-28 13:43:50,680][84560] Updated weights for policy 0, policy_version 3628 (0.0008) +[2024-12-28 13:43:51,167][78983] Fps is (10 sec: 25395.2, 60 sec: 25190.4, 300 sec: 24812.0). Total num frames: 14872576. Throughput: 0: 6280.2. Samples: 2702280. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:43:51,168][78983] Avg episode reward: [(0, '4.285')] +[2024-12-28 13:43:52,315][84560] Updated weights for policy 0, policy_version 3638 (0.0007) +[2024-12-28 13:43:53,974][84560] Updated weights for policy 0, policy_version 3648 (0.0007) +[2024-12-28 13:43:55,845][84560] Updated weights for policy 0, policy_version 3658 (0.0010) +[2024-12-28 13:43:56,167][78983] Fps is (10 sec: 24167.1, 60 sec: 24985.7, 300 sec: 24839.8). Total num frames: 14987264. Throughput: 0: 6262.5. Samples: 2739440. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:43:56,168][78983] Avg episode reward: [(0, '4.673')] +[2024-12-28 13:43:57,786][84560] Updated weights for policy 0, policy_version 3668 (0.0008) +[2024-12-28 13:43:59,724][84560] Updated weights for policy 0, policy_version 3678 (0.0008) +[2024-12-28 13:44:01,167][78983] Fps is (10 sec: 22118.1, 60 sec: 24712.5, 300 sec: 24825.9). Total num frames: 15093760. Throughput: 0: 6138.9. Samples: 2771286. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:44:01,169][78983] Avg episode reward: [(0, '4.490')] +[2024-12-28 13:44:01,654][84560] Updated weights for policy 0, policy_version 3688 (0.0009) +[2024-12-28 13:44:03,556][84560] Updated weights for policy 0, policy_version 3698 (0.0009) +[2024-12-28 13:44:05,399][84560] Updated weights for policy 0, policy_version 3708 (0.0009) +[2024-12-28 13:44:06,167][78983] Fps is (10 sec: 21708.9, 60 sec: 24439.5, 300 sec: 24784.3). Total num frames: 15204352. Throughput: 0: 6080.3. Samples: 2787232. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:44:06,168][78983] Avg episode reward: [(0, '4.409')] +[2024-12-28 13:44:07,118][84560] Updated weights for policy 0, policy_version 3718 (0.0008) +[2024-12-28 13:44:08,829][84560] Updated weights for policy 0, policy_version 3728 (0.0009) +[2024-12-28 13:44:10,611][84560] Updated weights for policy 0, policy_version 3738 (0.0007) +[2024-12-28 13:44:11,167][78983] Fps is (10 sec: 22937.8, 60 sec: 24302.9, 300 sec: 24756.5). Total num frames: 15323136. Throughput: 0: 6019.6. Samples: 2822404. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:44:11,169][78983] Avg episode reward: [(0, '4.355')] +[2024-12-28 13:44:12,349][84560] Updated weights for policy 0, policy_version 3748 (0.0008) +[2024-12-28 13:44:13,996][84560] Updated weights for policy 0, policy_version 3758 (0.0007) +[2024-12-28 13:44:15,619][84560] Updated weights for policy 0, policy_version 3768 (0.0007) +[2024-12-28 13:44:16,167][78983] Fps is (10 sec: 24166.3, 60 sec: 24303.0, 300 sec: 24728.7). Total num frames: 15446016. Throughput: 0: 5985.3. Samples: 2858800. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:44:16,168][78983] Avg episode reward: [(0, '4.380')] +[2024-12-28 13:44:17,281][84560] Updated weights for policy 0, policy_version 3778 (0.0007) +[2024-12-28 13:44:18,936][84560] Updated weights for policy 0, policy_version 3788 (0.0007) +[2024-12-28 13:44:20,560][84560] Updated weights for policy 0, policy_version 3798 (0.0006) +[2024-12-28 13:44:21,167][78983] Fps is (10 sec: 24575.9, 60 sec: 24234.6, 300 sec: 24701.0). Total num frames: 15568896. Throughput: 0: 5991.5. Samples: 2877524. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) +[2024-12-28 13:44:21,168][78983] Avg episode reward: [(0, '4.260')] +[2024-12-28 13:44:21,174][84543] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000003801_15568896.pth... 
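The recurring status records ("Fps is (10 sec: ..., 60 sec: ..., 300 sec: ...). Total num frames: ... Throughput: ... Samples: ..." and "Avg episode reward: [(0, '...')]") are what you would scrape to plot throughput and reward over the run. A small, self-contained parser sketch, assuming only the line format seen in this log (regex and function names are illustrative):

```python
import re

FPS_RE = re.compile(
    r"Fps is \(10 sec: (?P<fps>[\d.]+|nan).*?Total num frames: (?P<frames>\d+)"
)
REWARD_RE = re.compile(r"Avg episode reward: \[\(0, '(?P<reward>[-\d.]+)'\)\]")

def summarize(log_path: str):
    """Collect (total_frames, 10-sec FPS) points and episode-reward samples from the log."""
    fps_points, rewards = [], []
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if (m := FPS_RE.search(line)):
                fps_points.append((int(m["frames"]), float(m["fps"])))
            elif (m := REWARD_RE.search(line)):
                rewards.append(float(m["reward"]))
    return fps_points, rewards

# Hypothetical usage:
# fps_points, rewards = summarize("sf_log.txt")
# print(f"{len(fps_points)} FPS samples, best avg reward {max(rewards):.3f}")
```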
+[2024-12-28 13:44:21,210][84543] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000002346_9609216.pth +[2024-12-28 13:44:22,253][84560] Updated weights for policy 0, policy_version 3808 (0.0008) +[2024-12-28 13:44:23,894][84560] Updated weights for policy 0, policy_version 3818 (0.0007) +[2024-12-28 13:44:25,550][84560] Updated weights for policy 0, policy_version 3828 (0.0008) +[2024-12-28 13:44:26,167][78983] Fps is (10 sec: 24576.0, 60 sec: 24166.4, 300 sec: 24659.3). Total num frames: 15691776. Throughput: 0: 5979.5. Samples: 2914766. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:44:26,168][78983] Avg episode reward: [(0, '4.399')] +[2024-12-28 13:44:27,189][84560] Updated weights for policy 0, policy_version 3838 (0.0007) +[2024-12-28 13:44:28,713][84560] Updated weights for policy 0, policy_version 3848 (0.0006) +[2024-12-28 13:44:30,309][84560] Updated weights for policy 0, policy_version 3858 (0.0007) +[2024-12-28 13:44:31,167][78983] Fps is (10 sec: 25395.5, 60 sec: 24234.7, 300 sec: 24687.1). Total num frames: 15822848. Throughput: 0: 6001.9. Samples: 2953362. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:44:31,168][78983] Avg episode reward: [(0, '4.458')] +[2024-12-28 13:44:31,861][84560] Updated weights for policy 0, policy_version 3868 (0.0006) +[2024-12-28 13:44:33,403][84560] Updated weights for policy 0, policy_version 3878 (0.0007) +[2024-12-28 13:44:34,977][84560] Updated weights for policy 0, policy_version 3888 (0.0007) +[2024-12-28 13:44:36,167][78983] Fps is (10 sec: 26214.4, 60 sec: 24371.2, 300 sec: 24742.6). Total num frames: 15953920. Throughput: 0: 6013.3. Samples: 2972878. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:44:36,168][78983] Avg episode reward: [(0, '4.456')] +[2024-12-28 13:44:36,537][84560] Updated weights for policy 0, policy_version 3898 (0.0008) +[2024-12-28 13:44:38,086][84560] Updated weights for policy 0, policy_version 3908 (0.0007) +[2024-12-28 13:44:39,666][84560] Updated weights for policy 0, policy_version 3918 (0.0007) +[2024-12-28 13:44:41,167][78983] Fps is (10 sec: 26214.3, 60 sec: 24439.4, 300 sec: 24812.0). Total num frames: 16084992. Throughput: 0: 6063.3. Samples: 3012288. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:44:41,168][78983] Avg episode reward: [(0, '4.281')] +[2024-12-28 13:44:41,242][84560] Updated weights for policy 0, policy_version 3928 (0.0008) +[2024-12-28 13:44:42,811][84560] Updated weights for policy 0, policy_version 3938 (0.0007) +[2024-12-28 13:44:44,428][84560] Updated weights for policy 0, policy_version 3948 (0.0006) +[2024-12-28 13:44:45,983][84560] Updated weights for policy 0, policy_version 3958 (0.0006) +[2024-12-28 13:44:46,167][78983] Fps is (10 sec: 26214.1, 60 sec: 24507.8, 300 sec: 24812.0). Total num frames: 16216064. Throughput: 0: 6222.0. Samples: 3051276. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:44:46,168][78983] Avg episode reward: [(0, '4.448')] +[2024-12-28 13:44:47,607][84560] Updated weights for policy 0, policy_version 3968 (0.0006) +[2024-12-28 13:44:49,296][84560] Updated weights for policy 0, policy_version 3978 (0.0008) +[2024-12-28 13:44:51,153][84560] Updated weights for policy 0, policy_version 3988 (0.0009) +[2024-12-28 13:44:51,167][78983] Fps is (10 sec: 24985.5, 60 sec: 24371.2, 300 sec: 24770.4). Total num frames: 16334848. Throughput: 0: 6287.0. Samples: 3070150. 
Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 13:44:51,168][78983] Avg episode reward: [(0, '4.428')] +[2024-12-28 13:44:52,983][84560] Updated weights for policy 0, policy_version 3998 (0.0008) +[2024-12-28 13:44:54,848][84560] Updated weights for policy 0, policy_version 4008 (0.0008) +[2024-12-28 13:44:56,167][78983] Fps is (10 sec: 22937.8, 60 sec: 24302.9, 300 sec: 24714.8). Total num frames: 16445440. Throughput: 0: 6242.7. Samples: 3103324. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:44:56,168][78983] Avg episode reward: [(0, '4.281')] +[2024-12-28 13:44:56,746][84560] Updated weights for policy 0, policy_version 4018 (0.0010) +[2024-12-28 13:44:58,640][84560] Updated weights for policy 0, policy_version 4028 (0.0008) +[2024-12-28 13:45:00,360][84560] Updated weights for policy 0, policy_version 4038 (0.0007) +[2024-12-28 13:45:01,167][78983] Fps is (10 sec: 22528.3, 60 sec: 24439.5, 300 sec: 24701.0). Total num frames: 16560128. Throughput: 0: 6182.7. Samples: 3137022. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:45:01,168][78983] Avg episode reward: [(0, '4.673')] +[2024-12-28 13:45:02,054][84560] Updated weights for policy 0, policy_version 4048 (0.0008) +[2024-12-28 13:45:03,697][84560] Updated weights for policy 0, policy_version 4058 (0.0008) +[2024-12-28 13:45:05,234][84560] Updated weights for policy 0, policy_version 4068 (0.0006) +[2024-12-28 13:45:06,167][78983] Fps is (10 sec: 24166.5, 60 sec: 24712.5, 300 sec: 24756.5). Total num frames: 16687104. Throughput: 0: 6173.8. Samples: 3155346. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:45:06,168][78983] Avg episode reward: [(0, '4.262')] +[2024-12-28 13:45:06,759][84560] Updated weights for policy 0, policy_version 4078 (0.0006) +[2024-12-28 13:45:08,314][84560] Updated weights for policy 0, policy_version 4088 (0.0006) +[2024-12-28 13:45:09,911][84560] Updated weights for policy 0, policy_version 4098 (0.0006) +[2024-12-28 13:45:11,167][78983] Fps is (10 sec: 25395.1, 60 sec: 24849.1, 300 sec: 24784.3). Total num frames: 16814080. Throughput: 0: 6223.2. Samples: 3194810. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 13:45:11,168][78983] Avg episode reward: [(0, '4.309')] +[2024-12-28 13:45:11,532][84560] Updated weights for policy 0, policy_version 4108 (0.0007) +[2024-12-28 13:45:13,060][84560] Updated weights for policy 0, policy_version 4118 (0.0007) +[2024-12-28 13:45:14,631][84560] Updated weights for policy 0, policy_version 4128 (0.0007) +[2024-12-28 13:45:16,167][78983] Fps is (10 sec: 25804.7, 60 sec: 24985.6, 300 sec: 24867.6). Total num frames: 16945152. Throughput: 0: 6233.9. Samples: 3233888. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:45:16,168][78983] Avg episode reward: [(0, '4.327')] +[2024-12-28 13:45:16,211][84560] Updated weights for policy 0, policy_version 4138 (0.0007) +[2024-12-28 13:45:17,799][84560] Updated weights for policy 0, policy_version 4148 (0.0006) +[2024-12-28 13:45:19,392][84560] Updated weights for policy 0, policy_version 4158 (0.0006) +[2024-12-28 13:45:20,954][84560] Updated weights for policy 0, policy_version 4168 (0.0007) +[2024-12-28 13:45:21,167][78983] Fps is (10 sec: 26214.2, 60 sec: 25122.2, 300 sec: 24937.0). Total num frames: 17076224. Throughput: 0: 6234.9. Samples: 3253450. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:45:21,168][78983] Avg episode reward: [(0, '4.499')] +[2024-12-28 13:45:22,502][84560] Updated weights for policy 0, policy_version 4178 (0.0006) +[2024-12-28 13:45:24,097][84560] Updated weights for policy 0, policy_version 4188 (0.0008) +[2024-12-28 13:45:25,694][84560] Updated weights for policy 0, policy_version 4198 (0.0006) +[2024-12-28 13:45:26,167][78983] Fps is (10 sec: 25804.7, 60 sec: 25190.4, 300 sec: 24923.1). Total num frames: 17203200. Throughput: 0: 6225.3. Samples: 3292426. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:45:26,169][78983] Avg episode reward: [(0, '4.359')] +[2024-12-28 13:45:27,412][84560] Updated weights for policy 0, policy_version 4208 (0.0008) +[2024-12-28 13:45:29,244][84560] Updated weights for policy 0, policy_version 4218 (0.0008) +[2024-12-28 13:45:31,096][84560] Updated weights for policy 0, policy_version 4228 (0.0008) +[2024-12-28 13:45:31,167][78983] Fps is (10 sec: 24166.3, 60 sec: 24917.3, 300 sec: 24853.7). Total num frames: 17317888. Throughput: 0: 6129.2. Samples: 3327092. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 13:45:31,168][78983] Avg episode reward: [(0, '4.267')] +[2024-12-28 13:45:32,992][84560] Updated weights for policy 0, policy_version 4238 (0.0008) +[2024-12-28 13:45:34,927][84560] Updated weights for policy 0, policy_version 4248 (0.0008) +[2024-12-28 13:45:36,167][78983] Fps is (10 sec: 22118.5, 60 sec: 24507.7, 300 sec: 24770.4). Total num frames: 17424384. Throughput: 0: 6068.2. Samples: 3343220. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:45:36,169][78983] Avg episode reward: [(0, '4.538')] +[2024-12-28 13:45:36,849][84560] Updated weights for policy 0, policy_version 4258 (0.0008) +[2024-12-28 13:45:38,419][84560] Updated weights for policy 0, policy_version 4268 (0.0006) +[2024-12-28 13:45:39,952][84560] Updated weights for policy 0, policy_version 4278 (0.0006) +[2024-12-28 13:45:41,167][78983] Fps is (10 sec: 23346.9, 60 sec: 24439.4, 300 sec: 24756.5). Total num frames: 17551360. Throughput: 0: 6125.1. Samples: 3378956. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:45:41,168][78983] Avg episode reward: [(0, '4.543')] +[2024-12-28 13:45:41,681][84560] Updated weights for policy 0, policy_version 4288 (0.0008) +[2024-12-28 13:45:43,548][84560] Updated weights for policy 0, policy_version 4298 (0.0009) +[2024-12-28 13:45:45,435][84560] Updated weights for policy 0, policy_version 4308 (0.0009) +[2024-12-28 13:45:46,167][78983] Fps is (10 sec: 23756.8, 60 sec: 24098.2, 300 sec: 24742.6). Total num frames: 17661952. Throughput: 0: 6129.1. Samples: 3412830. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:45:46,168][78983] Avg episode reward: [(0, '4.303')] +[2024-12-28 13:45:47,239][84560] Updated weights for policy 0, policy_version 4318 (0.0007) +[2024-12-28 13:45:49,057][84560] Updated weights for policy 0, policy_version 4328 (0.0008) +[2024-12-28 13:45:50,898][84560] Updated weights for policy 0, policy_version 4338 (0.0008) +[2024-12-28 13:45:51,168][78983] Fps is (10 sec: 22117.9, 60 sec: 23961.4, 300 sec: 24742.6). Total num frames: 17772544. Throughput: 0: 6101.0. Samples: 3429892. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:45:51,169][78983] Avg episode reward: [(0, '4.561')] +[2024-12-28 13:45:52,533][84560] Updated weights for policy 0, policy_version 4348 (0.0007) +[2024-12-28 13:45:54,106][84560] Updated weights for policy 0, policy_version 4358 (0.0006) +[2024-12-28 13:45:55,718][84560] Updated weights for policy 0, policy_version 4368 (0.0008) +[2024-12-28 13:45:56,167][78983] Fps is (10 sec: 24166.5, 60 sec: 24303.0, 300 sec: 24812.1). Total num frames: 17903616. Throughput: 0: 6032.5. Samples: 3466274. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:45:56,169][78983] Avg episode reward: [(0, '4.453')] +[2024-12-28 13:45:57,314][84560] Updated weights for policy 0, policy_version 4378 (0.0006) +[2024-12-28 13:45:59,075][84560] Updated weights for policy 0, policy_version 4388 (0.0008) +[2024-12-28 13:46:00,978][84560] Updated weights for policy 0, policy_version 4398 (0.0009) +[2024-12-28 13:46:01,167][78983] Fps is (10 sec: 24167.5, 60 sec: 24234.6, 300 sec: 24742.6). Total num frames: 18014208. Throughput: 0: 5956.0. Samples: 3501910. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:46:01,168][78983] Avg episode reward: [(0, '4.424')] +[2024-12-28 13:46:02,849][84560] Updated weights for policy 0, policy_version 4408 (0.0008) +[2024-12-28 13:46:04,729][84560] Updated weights for policy 0, policy_version 4418 (0.0008) +[2024-12-28 13:46:06,167][78983] Fps is (10 sec: 22118.3, 60 sec: 23961.6, 300 sec: 24673.2). Total num frames: 18124800. Throughput: 0: 5885.2. Samples: 3518284. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:46:06,169][78983] Avg episode reward: [(0, '4.484')] +[2024-12-28 13:46:06,601][84560] Updated weights for policy 0, policy_version 4428 (0.0008) +[2024-12-28 13:46:08,453][84560] Updated weights for policy 0, policy_version 4438 (0.0007) +[2024-12-28 13:46:10,106][84560] Updated weights for policy 0, policy_version 4448 (0.0007) +[2024-12-28 13:46:11,167][78983] Fps is (10 sec: 22937.6, 60 sec: 23825.1, 300 sec: 24631.5). Total num frames: 18243584. Throughput: 0: 5775.2. Samples: 3552312. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:46:11,168][78983] Avg episode reward: [(0, '4.356')] +[2024-12-28 13:46:11,655][84560] Updated weights for policy 0, policy_version 4458 (0.0007) +[2024-12-28 13:46:13,157][84560] Updated weights for policy 0, policy_version 4468 (0.0007) +[2024-12-28 13:46:14,730][84560] Updated weights for policy 0, policy_version 4478 (0.0006) +[2024-12-28 13:46:16,167][78983] Fps is (10 sec: 25395.3, 60 sec: 23893.3, 300 sec: 24728.7). Total num frames: 18378752. Throughput: 0: 5883.1. Samples: 3591830. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:46:16,168][78983] Avg episode reward: [(0, '4.509')] +[2024-12-28 13:46:16,270][84560] Updated weights for policy 0, policy_version 4488 (0.0007) +[2024-12-28 13:46:17,800][84560] Updated weights for policy 0, policy_version 4498 (0.0007) +[2024-12-28 13:46:19,389][84560] Updated weights for policy 0, policy_version 4508 (0.0007) +[2024-12-28 13:46:20,944][84560] Updated weights for policy 0, policy_version 4518 (0.0007) +[2024-12-28 13:46:21,167][78983] Fps is (10 sec: 26624.0, 60 sec: 23893.4, 300 sec: 24825.9). Total num frames: 18509824. Throughput: 0: 5964.1. Samples: 3611606. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:46:21,168][78983] Avg episode reward: [(0, '4.515')] +[2024-12-28 13:46:21,174][84543] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000004519_18509824.pth... +[2024-12-28 13:46:21,209][84543] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000003088_12648448.pth +[2024-12-28 13:46:22,531][84560] Updated weights for policy 0, policy_version 4528 (0.0006) +[2024-12-28 13:46:24,091][84560] Updated weights for policy 0, policy_version 4538 (0.0006) +[2024-12-28 13:46:25,638][84560] Updated weights for policy 0, policy_version 4548 (0.0007) +[2024-12-28 13:46:26,167][78983] Fps is (10 sec: 26214.4, 60 sec: 23961.6, 300 sec: 24867.6). Total num frames: 18640896. Throughput: 0: 6040.3. Samples: 3650770. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:46:26,168][78983] Avg episode reward: [(0, '4.249')] +[2024-12-28 13:46:27,213][84560] Updated weights for policy 0, policy_version 4558 (0.0006) +[2024-12-28 13:46:28,762][84560] Updated weights for policy 0, policy_version 4568 (0.0006) +[2024-12-28 13:46:30,425][84560] Updated weights for policy 0, policy_version 4578 (0.0007) +[2024-12-28 13:46:31,167][78983] Fps is (10 sec: 25804.8, 60 sec: 24166.5, 300 sec: 24839.8). Total num frames: 18767872. Throughput: 0: 6148.0. Samples: 3689490. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:46:31,168][78983] Avg episode reward: [(0, '4.271')] +[2024-12-28 13:46:32,040][84560] Updated weights for policy 0, policy_version 4588 (0.0007) +[2024-12-28 13:46:33,603][84560] Updated weights for policy 0, policy_version 4598 (0.0007) +[2024-12-28 13:46:35,369][84560] Updated weights for policy 0, policy_version 4608 (0.0009) +[2024-12-28 13:46:36,167][78983] Fps is (10 sec: 24985.7, 60 sec: 24439.5, 300 sec: 24798.2). Total num frames: 18890752. Throughput: 0: 6199.0. Samples: 3708842. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:46:36,169][78983] Avg episode reward: [(0, '4.440')] +[2024-12-28 13:46:37,268][84560] Updated weights for policy 0, policy_version 4618 (0.0008) +[2024-12-28 13:46:39,188][84560] Updated weights for policy 0, policy_version 4628 (0.0008) +[2024-12-28 13:46:41,046][84560] Updated weights for policy 0, policy_version 4638 (0.0007) +[2024-12-28 13:46:41,167][78983] Fps is (10 sec: 22937.4, 60 sec: 24098.2, 300 sec: 24701.0). Total num frames: 18997248. Throughput: 0: 6119.0. Samples: 3741628. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 13:46:41,168][78983] Avg episode reward: [(0, '4.251')] +[2024-12-28 13:46:42,874][84560] Updated weights for policy 0, policy_version 4648 (0.0009) +[2024-12-28 13:46:44,676][84560] Updated weights for policy 0, policy_version 4658 (0.0008) +[2024-12-28 13:46:46,167][78983] Fps is (10 sec: 22528.0, 60 sec: 24234.7, 300 sec: 24659.3). Total num frames: 19116032. Throughput: 0: 6095.7. Samples: 3776218. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 13:46:46,168][78983] Avg episode reward: [(0, '4.249')] +[2024-12-28 13:46:46,278][84560] Updated weights for policy 0, policy_version 4668 (0.0008) +[2024-12-28 13:46:47,889][84560] Updated weights for policy 0, policy_version 4678 (0.0007) +[2024-12-28 13:46:49,421][84560] Updated weights for policy 0, policy_version 4688 (0.0006) +[2024-12-28 13:46:51,045][84560] Updated weights for policy 0, policy_version 4698 (0.0008) +[2024-12-28 13:46:51,167][78983] Fps is (10 sec: 24576.2, 60 sec: 24507.9, 300 sec: 24631.5). Total num frames: 19243008. Throughput: 0: 6161.3. Samples: 3795544. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:46:51,168][78983] Avg episode reward: [(0, '4.540')] +[2024-12-28 13:46:52,586][84560] Updated weights for policy 0, policy_version 4708 (0.0006) +[2024-12-28 13:46:54,166][84560] Updated weights for policy 0, policy_version 4718 (0.0008) +[2024-12-28 13:46:55,721][84560] Updated weights for policy 0, policy_version 4728 (0.0006) +[2024-12-28 13:46:56,167][78983] Fps is (10 sec: 25804.4, 60 sec: 24507.7, 300 sec: 24617.6). Total num frames: 19374080. Throughput: 0: 6274.5. Samples: 3834666. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:46:56,169][78983] Avg episode reward: [(0, '4.566')] +[2024-12-28 13:46:57,283][84560] Updated weights for policy 0, policy_version 4738 (0.0007) +[2024-12-28 13:46:58,862][84560] Updated weights for policy 0, policy_version 4748 (0.0006) +[2024-12-28 13:47:00,446][84560] Updated weights for policy 0, policy_version 4758 (0.0007) +[2024-12-28 13:47:01,167][78983] Fps is (10 sec: 26214.4, 60 sec: 24849.1, 300 sec: 24617.7). Total num frames: 19505152. Throughput: 0: 6266.0. Samples: 3873802. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:47:01,168][78983] Avg episode reward: [(0, '4.359')] +[2024-12-28 13:47:02,005][84560] Updated weights for policy 0, policy_version 4768 (0.0007) +[2024-12-28 13:47:03,697][84560] Updated weights for policy 0, policy_version 4778 (0.0008) +[2024-12-28 13:47:05,530][84560] Updated weights for policy 0, policy_version 4788 (0.0007) +[2024-12-28 13:47:06,167][78983] Fps is (10 sec: 24986.0, 60 sec: 24985.6, 300 sec: 24562.1). Total num frames: 19623936. Throughput: 0: 6242.4. Samples: 3892516. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:47:06,169][78983] Avg episode reward: [(0, '4.529')] +[2024-12-28 13:47:07,377][84560] Updated weights for policy 0, policy_version 4798 (0.0009) +[2024-12-28 13:47:09,219][84560] Updated weights for policy 0, policy_version 4808 (0.0007) +[2024-12-28 13:47:11,101][84560] Updated weights for policy 0, policy_version 4818 (0.0008) +[2024-12-28 13:47:11,167][78983] Fps is (10 sec: 22937.1, 60 sec: 24849.0, 300 sec: 24478.8). Total num frames: 19734528. Throughput: 0: 6109.1. Samples: 3925680. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:47:11,169][78983] Avg episode reward: [(0, '4.398')] +[2024-12-28 13:47:12,979][84560] Updated weights for policy 0, policy_version 4828 (0.0007) +[2024-12-28 13:47:14,812][84560] Updated weights for policy 0, policy_version 4838 (0.0007) +[2024-12-28 13:47:16,167][78983] Fps is (10 sec: 21708.7, 60 sec: 24371.2, 300 sec: 24381.6). Total num frames: 19841024. Throughput: 0: 5982.7. Samples: 3958710. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:47:16,169][78983] Avg episode reward: [(0, '4.570')] +[2024-12-28 13:47:16,713][84560] Updated weights for policy 0, policy_version 4848 (0.0009) +[2024-12-28 13:47:18,273][84560] Updated weights for policy 0, policy_version 4858 (0.0007) +[2024-12-28 13:47:19,859][84560] Updated weights for policy 0, policy_version 4868 (0.0007) +[2024-12-28 13:47:21,167][78983] Fps is (10 sec: 23757.0, 60 sec: 24371.2, 300 sec: 24534.3). Total num frames: 19972096. Throughput: 0: 5965.5. Samples: 3977292. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:47:21,168][78983] Avg episode reward: [(0, '4.259')] +[2024-12-28 13:47:21,420][84560] Updated weights for policy 0, policy_version 4878 (0.0007) +[2024-12-28 13:47:22,976][84560] Updated weights for policy 0, policy_version 4888 (0.0007) +[2024-12-28 13:47:24,537][84560] Updated weights for policy 0, policy_version 4898 (0.0007) +[2024-12-28 13:47:26,167][78983] Fps is (10 sec: 23757.0, 60 sec: 23961.6, 300 sec: 24534.3). Total num frames: 20078592. Throughput: 0: 6100.1. Samples: 4016130. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:47:26,168][78983] Avg episode reward: [(0, '4.412')] +[2024-12-28 13:47:27,165][84560] Updated weights for policy 0, policy_version 4908 (0.0008) +[2024-12-28 13:47:28,928][84560] Updated weights for policy 0, policy_version 4918 (0.0007) +[2024-12-28 13:47:31,006][84560] Updated weights for policy 0, policy_version 4928 (0.0010) +[2024-12-28 13:47:31,167][78983] Fps is (10 sec: 21299.4, 60 sec: 23620.3, 300 sec: 24478.8). Total num frames: 20185088. Throughput: 0: 5965.0. Samples: 4044642. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:47:31,169][78983] Avg episode reward: [(0, '4.540')] +[2024-12-28 13:47:32,836][84560] Updated weights for policy 0, policy_version 4938 (0.0007) +[2024-12-28 13:47:34,604][84560] Updated weights for policy 0, policy_version 4948 (0.0009) +[2024-12-28 13:47:36,167][78983] Fps is (10 sec: 22118.3, 60 sec: 23483.7, 300 sec: 24409.4). Total num frames: 20299776. Throughput: 0: 5911.0. Samples: 4061538. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:47:36,168][78983] Avg episode reward: [(0, '4.513')] +[2024-12-28 13:47:36,459][84560] Updated weights for policy 0, policy_version 4958 (0.0008) +[2024-12-28 13:47:38,358][84560] Updated weights for policy 0, policy_version 4968 (0.0010) +[2024-12-28 13:47:40,132][84560] Updated weights for policy 0, policy_version 4978 (0.0007) +[2024-12-28 13:47:41,167][78983] Fps is (10 sec: 22937.7, 60 sec: 23620.3, 300 sec: 24353.8). Total num frames: 20414464. Throughput: 0: 5778.9. Samples: 4094716. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:47:41,168][78983] Avg episode reward: [(0, '4.399')] +[2024-12-28 13:47:41,656][84560] Updated weights for policy 0, policy_version 4988 (0.0007) +[2024-12-28 13:47:43,176][84560] Updated weights for policy 0, policy_version 4998 (0.0007) +[2024-12-28 13:47:44,722][84560] Updated weights for policy 0, policy_version 5008 (0.0008) +[2024-12-28 13:47:46,167][78983] Fps is (10 sec: 24985.7, 60 sec: 23893.3, 300 sec: 24367.7). Total num frames: 20549632. Throughput: 0: 5796.0. Samples: 4134622. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:47:46,168][78983] Avg episode reward: [(0, '4.405')] +[2024-12-28 13:47:46,262][84560] Updated weights for policy 0, policy_version 5018 (0.0008) +[2024-12-28 13:47:47,794][84560] Updated weights for policy 0, policy_version 5028 (0.0007) +[2024-12-28 13:47:49,292][84560] Updated weights for policy 0, policy_version 5038 (0.0007) +[2024-12-28 13:47:50,796][84560] Updated weights for policy 0, policy_version 5048 (0.0007) +[2024-12-28 13:47:51,167][78983] Fps is (10 sec: 27033.2, 60 sec: 24029.8, 300 sec: 24395.5). Total num frames: 20684800. Throughput: 0: 5828.4. Samples: 4154794. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:47:51,168][78983] Avg episode reward: [(0, '4.534')] +[2024-12-28 13:47:52,357][84560] Updated weights for policy 0, policy_version 5058 (0.0007) +[2024-12-28 13:47:53,848][84560] Updated weights for policy 0, policy_version 5068 (0.0007) +[2024-12-28 13:47:55,378][84560] Updated weights for policy 0, policy_version 5078 (0.0007) +[2024-12-28 13:47:56,167][78983] Fps is (10 sec: 27033.5, 60 sec: 24098.2, 300 sec: 24437.1). Total num frames: 20819968. Throughput: 0: 5988.6. Samples: 4195164. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:47:56,168][78983] Avg episode reward: [(0, '4.313')] +[2024-12-28 13:47:56,923][84560] Updated weights for policy 0, policy_version 5088 (0.0007) +[2024-12-28 13:47:58,438][84560] Updated weights for policy 0, policy_version 5098 (0.0007) +[2024-12-28 13:47:59,978][84560] Updated weights for policy 0, policy_version 5108 (0.0007) +[2024-12-28 13:48:01,167][78983] Fps is (10 sec: 26624.2, 60 sec: 24098.1, 300 sec: 24451.0). Total num frames: 20951040. Throughput: 0: 6146.7. Samples: 4235310. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:48:01,168][78983] Avg episode reward: [(0, '4.386')] +[2024-12-28 13:48:01,518][84560] Updated weights for policy 0, policy_version 5118 (0.0007) +[2024-12-28 13:48:03,050][84560] Updated weights for policy 0, policy_version 5128 (0.0007) +[2024-12-28 13:48:04,570][84560] Updated weights for policy 0, policy_version 5138 (0.0007) +[2024-12-28 13:48:06,118][84560] Updated weights for policy 0, policy_version 5148 (0.0007) +[2024-12-28 13:48:06,167][78983] Fps is (10 sec: 26624.0, 60 sec: 24371.2, 300 sec: 24478.8). Total num frames: 21086208. Throughput: 0: 6181.8. Samples: 4255474. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 13:48:06,168][78983] Avg episode reward: [(0, '4.515')] +[2024-12-28 13:48:07,633][84560] Updated weights for policy 0, policy_version 5158 (0.0007) +[2024-12-28 13:48:09,208][84560] Updated weights for policy 0, policy_version 5168 (0.0007) +[2024-12-28 13:48:10,701][84560] Updated weights for policy 0, policy_version 5178 (0.0007) +[2024-12-28 13:48:11,167][78983] Fps is (10 sec: 27033.8, 60 sec: 24780.9, 300 sec: 24520.5). Total num frames: 21221376. Throughput: 0: 6205.9. Samples: 4295396. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:48:11,168][78983] Avg episode reward: [(0, '4.478')] +[2024-12-28 13:48:12,239][84560] Updated weights for policy 0, policy_version 5188 (0.0007) +[2024-12-28 13:48:13,734][84560] Updated weights for policy 0, policy_version 5198 (0.0007) +[2024-12-28 13:48:15,259][84560] Updated weights for policy 0, policy_version 5208 (0.0007) +[2024-12-28 13:48:16,167][78983] Fps is (10 sec: 26624.1, 60 sec: 25190.4, 300 sec: 24534.3). Total num frames: 21352448. Throughput: 0: 6471.3. Samples: 4335852. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:48:16,168][78983] Avg episode reward: [(0, '4.442')] +[2024-12-28 13:48:16,801][84560] Updated weights for policy 0, policy_version 5218 (0.0007) +[2024-12-28 13:48:18,344][84560] Updated weights for policy 0, policy_version 5228 (0.0008) +[2024-12-28 13:48:19,893][84560] Updated weights for policy 0, policy_version 5238 (0.0006) +[2024-12-28 13:48:21,167][78983] Fps is (10 sec: 26624.0, 60 sec: 25258.7, 300 sec: 24562.1). Total num frames: 21487616. Throughput: 0: 6539.5. Samples: 4355814. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:48:21,168][78983] Avg episode reward: [(0, '4.386')] +[2024-12-28 13:48:21,174][84543] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000005246_21487616.pth... +[2024-12-28 13:48:21,207][84543] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000003801_15568896.pth +[2024-12-28 13:48:21,411][84560] Updated weights for policy 0, policy_version 5248 (0.0007) +[2024-12-28 13:48:22,977][84560] Updated weights for policy 0, policy_version 5258 (0.0007) +[2024-12-28 13:48:24,537][84560] Updated weights for policy 0, policy_version 5268 (0.0007) +[2024-12-28 13:48:26,071][84560] Updated weights for policy 0, policy_version 5278 (0.0007) +[2024-12-28 13:48:26,167][78983] Fps is (10 sec: 26624.0, 60 sec: 25668.3, 300 sec: 24576.0). Total num frames: 21618688. Throughput: 0: 6688.4. Samples: 4395696. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:48:26,168][78983] Avg episode reward: [(0, '4.379')] +[2024-12-28 13:48:27,624][84560] Updated weights for policy 0, policy_version 5288 (0.0007) +[2024-12-28 13:48:29,163][84560] Updated weights for policy 0, policy_version 5298 (0.0006) +[2024-12-28 13:48:30,710][84560] Updated weights for policy 0, policy_version 5308 (0.0007) +[2024-12-28 13:48:31,167][78983] Fps is (10 sec: 26214.2, 60 sec: 26077.8, 300 sec: 24603.8). Total num frames: 21749760. Throughput: 0: 6677.9. Samples: 4435130. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:48:31,168][78983] Avg episode reward: [(0, '4.516')] +[2024-12-28 13:48:32,296][84560] Updated weights for policy 0, policy_version 5318 (0.0007) +[2024-12-28 13:48:33,853][84560] Updated weights for policy 0, policy_version 5328 (0.0007) +[2024-12-28 13:48:35,394][84560] Updated weights for policy 0, policy_version 5338 (0.0006) +[2024-12-28 13:48:36,167][78983] Fps is (10 sec: 26214.0, 60 sec: 26350.9, 300 sec: 24617.6). Total num frames: 21880832. Throughput: 0: 6665.5. Samples: 4454740. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:48:36,168][78983] Avg episode reward: [(0, '4.263')] +[2024-12-28 13:48:37,073][84560] Updated weights for policy 0, policy_version 5348 (0.0007) +[2024-12-28 13:48:38,961][84560] Updated weights for policy 0, policy_version 5358 (0.0008) +[2024-12-28 13:48:40,848][84560] Updated weights for policy 0, policy_version 5368 (0.0008) +[2024-12-28 13:48:41,167][78983] Fps is (10 sec: 24166.4, 60 sec: 26282.6, 300 sec: 24562.1). Total num frames: 21991424. Throughput: 0: 6565.7. Samples: 4490622. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:48:41,169][78983] Avg episode reward: [(0, '4.534')] +[2024-12-28 13:48:42,726][84560] Updated weights for policy 0, policy_version 5378 (0.0008) +[2024-12-28 13:48:44,593][84560] Updated weights for policy 0, policy_version 5388 (0.0009) +[2024-12-28 13:48:46,167][78983] Fps is (10 sec: 22118.6, 60 sec: 25873.1, 300 sec: 24506.6). Total num frames: 22102016. Throughput: 0: 6393.7. Samples: 4523028. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:48:46,169][78983] Avg episode reward: [(0, '4.475')] +[2024-12-28 13:48:46,519][84560] Updated weights for policy 0, policy_version 5398 (0.0008) +[2024-12-28 13:48:48,287][84560] Updated weights for policy 0, policy_version 5408 (0.0007) +[2024-12-28 13:48:49,828][84560] Updated weights for policy 0, policy_version 5418 (0.0006) +[2024-12-28 13:48:51,167][78983] Fps is (10 sec: 23347.3, 60 sec: 25668.3, 300 sec: 24534.3). Total num frames: 22224896. Throughput: 0: 6338.2. Samples: 4540692. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:48:51,168][78983] Avg episode reward: [(0, '4.348')] +[2024-12-28 13:48:51,365][84560] Updated weights for policy 0, policy_version 5428 (0.0006) +[2024-12-28 13:48:53,145][84560] Updated weights for policy 0, policy_version 5438 (0.0009) +[2024-12-28 13:48:55,046][84560] Updated weights for policy 0, policy_version 5448 (0.0009) +[2024-12-28 13:48:56,167][78983] Fps is (10 sec: 23347.2, 60 sec: 25258.7, 300 sec: 24548.2). Total num frames: 22335488. Throughput: 0: 6251.0. Samples: 4576692. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:48:56,168][78983] Avg episode reward: [(0, '4.280')] +[2024-12-28 13:48:56,938][84560] Updated weights for policy 0, policy_version 5458 (0.0007) +[2024-12-28 13:48:58,836][84560] Updated weights for policy 0, policy_version 5468 (0.0008) +[2024-12-28 13:49:00,833][84560] Updated weights for policy 0, policy_version 5478 (0.0010) +[2024-12-28 13:49:01,167][78983] Fps is (10 sec: 21708.7, 60 sec: 24849.1, 300 sec: 24534.3). Total num frames: 22441984. Throughput: 0: 6062.7. Samples: 4608672. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:49:01,168][78983] Avg episode reward: [(0, '4.625')] +[2024-12-28 13:49:02,781][84560] Updated weights for policy 0, policy_version 5488 (0.0008) +[2024-12-28 13:49:04,452][84560] Updated weights for policy 0, policy_version 5498 (0.0008) +[2024-12-28 13:49:05,987][84560] Updated weights for policy 0, policy_version 5508 (0.0006) +[2024-12-28 13:49:06,167][78983] Fps is (10 sec: 22937.4, 60 sec: 24644.2, 300 sec: 24548.2). Total num frames: 22564864. Throughput: 0: 5983.7. Samples: 4625080. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:49:06,168][78983] Avg episode reward: [(0, '4.215')] +[2024-12-28 13:49:07,516][84560] Updated weights for policy 0, policy_version 5518 (0.0006) +[2024-12-28 13:49:09,079][84560] Updated weights for policy 0, policy_version 5528 (0.0007) +[2024-12-28 13:49:10,648][84560] Updated weights for policy 0, policy_version 5538 (0.0007) +[2024-12-28 13:49:11,167][78983] Fps is (10 sec: 25395.2, 60 sec: 24576.0, 300 sec: 24576.0). Total num frames: 22695936. Throughput: 0: 5976.6. Samples: 4664642. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:49:11,168][78983] Avg episode reward: [(0, '4.330')] +[2024-12-28 13:49:12,180][84560] Updated weights for policy 0, policy_version 5548 (0.0007) +[2024-12-28 13:49:13,699][84560] Updated weights for policy 0, policy_version 5558 (0.0008) +[2024-12-28 13:49:15,266][84560] Updated weights for policy 0, policy_version 5568 (0.0008) +[2024-12-28 13:49:16,167][78983] Fps is (10 sec: 26214.7, 60 sec: 24576.0, 300 sec: 24603.8). Total num frames: 22827008. Throughput: 0: 5986.1. Samples: 4704502. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:49:16,168][78983] Avg episode reward: [(0, '4.346')] +[2024-12-28 13:49:16,815][84560] Updated weights for policy 0, policy_version 5578 (0.0007) +[2024-12-28 13:49:18,328][84560] Updated weights for policy 0, policy_version 5588 (0.0007) +[2024-12-28 13:49:19,882][84560] Updated weights for policy 0, policy_version 5598 (0.0008) +[2024-12-28 13:49:21,167][78983] Fps is (10 sec: 26624.1, 60 sec: 24576.0, 300 sec: 24645.4). Total num frames: 22962176. Throughput: 0: 5994.3. Samples: 4724482. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:49:21,168][78983] Avg episode reward: [(0, '4.345')] +[2024-12-28 13:49:21,453][84560] Updated weights for policy 0, policy_version 5608 (0.0007) +[2024-12-28 13:49:22,990][84560] Updated weights for policy 0, policy_version 5618 (0.0006) +[2024-12-28 13:49:24,484][84560] Updated weights for policy 0, policy_version 5628 (0.0006) +[2024-12-28 13:49:26,000][84560] Updated weights for policy 0, policy_version 5638 (0.0006) +[2024-12-28 13:49:26,167][78983] Fps is (10 sec: 27033.4, 60 sec: 24644.2, 300 sec: 24659.3). Total num frames: 23097344. Throughput: 0: 6081.9. Samples: 4764306. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:49:26,168][78983] Avg episode reward: [(0, '4.611')] +[2024-12-28 13:49:27,562][84560] Updated weights for policy 0, policy_version 5648 (0.0007) +[2024-12-28 13:49:29,145][84560] Updated weights for policy 0, policy_version 5658 (0.0008) +[2024-12-28 13:49:30,786][84560] Updated weights for policy 0, policy_version 5668 (0.0007) +[2024-12-28 13:49:31,167][78983] Fps is (10 sec: 26214.3, 60 sec: 24576.0, 300 sec: 24645.4). Total num frames: 23224320. Throughput: 0: 6228.8. Samples: 4803324. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:49:31,168][78983] Avg episode reward: [(0, '4.337')] +[2024-12-28 13:49:32,371][84560] Updated weights for policy 0, policy_version 5678 (0.0007) +[2024-12-28 13:49:33,940][84560] Updated weights for policy 0, policy_version 5688 (0.0007) +[2024-12-28 13:49:35,504][84560] Updated weights for policy 0, policy_version 5698 (0.0007) +[2024-12-28 13:49:36,167][78983] Fps is (10 sec: 25805.0, 60 sec: 24576.0, 300 sec: 24645.4). Total num frames: 23355392. Throughput: 0: 6270.8. Samples: 4822880. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:49:36,168][78983] Avg episode reward: [(0, '4.565')] +[2024-12-28 13:49:37,052][84560] Updated weights for policy 0, policy_version 5708 (0.0007) +[2024-12-28 13:49:38,625][84560] Updated weights for policy 0, policy_version 5718 (0.0008) +[2024-12-28 13:49:40,169][84560] Updated weights for policy 0, policy_version 5728 (0.0006) +[2024-12-28 13:49:41,167][78983] Fps is (10 sec: 26214.3, 60 sec: 24917.3, 300 sec: 24645.4). Total num frames: 23486464. Throughput: 0: 6345.7. Samples: 4862250. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:49:41,169][78983] Avg episode reward: [(0, '4.483')] +[2024-12-28 13:49:41,718][84560] Updated weights for policy 0, policy_version 5738 (0.0006) +[2024-12-28 13:49:43,393][84560] Updated weights for policy 0, policy_version 5748 (0.0008) +[2024-12-28 13:49:45,187][84560] Updated weights for policy 0, policy_version 5758 (0.0008) +[2024-12-28 13:49:46,167][78983] Fps is (10 sec: 24985.7, 60 sec: 25053.9, 300 sec: 24645.4). Total num frames: 23605248. Throughput: 0: 6451.6. Samples: 4898992. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:49:46,168][78983] Avg episode reward: [(0, '4.542')] +[2024-12-28 13:49:47,021][84560] Updated weights for policy 0, policy_version 5768 (0.0010) +[2024-12-28 13:49:48,838][84560] Updated weights for policy 0, policy_version 5778 (0.0007) +[2024-12-28 13:49:50,680][84560] Updated weights for policy 0, policy_version 5788 (0.0008) +[2024-12-28 13:49:51,167][78983] Fps is (10 sec: 22937.6, 60 sec: 24849.1, 300 sec: 24645.4). Total num frames: 23715840. Throughput: 0: 6460.4. Samples: 4915796. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:49:51,169][78983] Avg episode reward: [(0, '4.266')] +[2024-12-28 13:49:52,548][84560] Updated weights for policy 0, policy_version 5798 (0.0009) +[2024-12-28 13:49:54,233][84560] Updated weights for policy 0, policy_version 5808 (0.0008) +[2024-12-28 13:49:55,813][84560] Updated weights for policy 0, policy_version 5818 (0.0007) +[2024-12-28 13:49:56,167][78983] Fps is (10 sec: 23347.1, 60 sec: 25053.9, 300 sec: 24673.2). Total num frames: 23838720. Throughput: 0: 6348.9. Samples: 4950344. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:49:56,168][78983] Avg episode reward: [(0, '4.456')] +[2024-12-28 13:49:57,327][84560] Updated weights for policy 0, policy_version 5828 (0.0007) +[2024-12-28 13:49:58,898][84560] Updated weights for policy 0, policy_version 5838 (0.0007) +[2024-12-28 13:50:00,466][84560] Updated weights for policy 0, policy_version 5848 (0.0008) +[2024-12-28 13:50:01,167][78983] Fps is (10 sec: 25395.2, 60 sec: 25463.5, 300 sec: 24687.1). Total num frames: 23969792. Throughput: 0: 6339.1. Samples: 4989762. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:50:01,168][78983] Avg episode reward: [(0, '4.513')] +[2024-12-28 13:50:02,036][84560] Updated weights for policy 0, policy_version 5858 (0.0007) +[2024-12-28 13:50:03,574][84560] Updated weights for policy 0, policy_version 5868 (0.0007) +[2024-12-28 13:50:05,086][84560] Updated weights for policy 0, policy_version 5878 (0.0007) +[2024-12-28 13:50:06,167][78983] Fps is (10 sec: 26623.9, 60 sec: 25668.3, 300 sec: 24714.8). Total num frames: 24104960. Throughput: 0: 6340.4. Samples: 5009802. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:50:06,168][78983] Avg episode reward: [(0, '4.529')] +[2024-12-28 13:50:06,607][84560] Updated weights for policy 0, policy_version 5888 (0.0007) +[2024-12-28 13:50:08,141][84560] Updated weights for policy 0, policy_version 5898 (0.0007) +[2024-12-28 13:50:09,709][84560] Updated weights for policy 0, policy_version 5908 (0.0007) +[2024-12-28 13:50:11,167][78983] Fps is (10 sec: 26624.1, 60 sec: 25668.3, 300 sec: 24714.8). Total num frames: 24236032. Throughput: 0: 6344.2. Samples: 5049794. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:50:11,168][78983] Avg episode reward: [(0, '4.425')] +[2024-12-28 13:50:11,267][84560] Updated weights for policy 0, policy_version 5918 (0.0007) +[2024-12-28 13:50:12,824][84560] Updated weights for policy 0, policy_version 5928 (0.0007) +[2024-12-28 13:50:14,362][84560] Updated weights for policy 0, policy_version 5938 (0.0007) +[2024-12-28 13:50:15,893][84560] Updated weights for policy 0, policy_version 5948 (0.0007) +[2024-12-28 13:50:16,167][78983] Fps is (10 sec: 26214.5, 60 sec: 25668.3, 300 sec: 24714.9). Total num frames: 24367104. Throughput: 0: 6359.9. Samples: 5089518. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 13:50:16,168][78983] Avg episode reward: [(0, '4.325')] +[2024-12-28 13:50:17,516][84560] Updated weights for policy 0, policy_version 5958 (0.0008) +[2024-12-28 13:50:19,319][84560] Updated weights for policy 0, policy_version 5968 (0.0008) +[2024-12-28 13:50:21,134][84560] Updated weights for policy 0, policy_version 5978 (0.0008) +[2024-12-28 13:50:21,167][78983] Fps is (10 sec: 24985.4, 60 sec: 25395.1, 300 sec: 24687.1). Total num frames: 24485888. Throughput: 0: 6328.7. Samples: 5107674. Policy #0 lag: (min: 0.0, avg: 1.0, max: 2.0) +[2024-12-28 13:50:21,169][78983] Avg episode reward: [(0, '4.410')] +[2024-12-28 13:50:21,175][84543] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000005978_24485888.pth... +[2024-12-28 13:50:21,210][84543] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000004519_18509824.pth +[2024-12-28 13:50:23,036][84560] Updated weights for policy 0, policy_version 5988 (0.0008) +[2024-12-28 13:50:24,895][84560] Updated weights for policy 0, policy_version 5998 (0.0008) +[2024-12-28 13:50:26,167][78983] Fps is (10 sec: 22937.5, 60 sec: 24985.6, 300 sec: 24673.2). Total num frames: 24596480. Throughput: 0: 6188.7. Samples: 5140740. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:50:26,169][78983] Avg episode reward: [(0, '4.526')] +[2024-12-28 13:50:26,719][84560] Updated weights for policy 0, policy_version 6008 (0.0008) +[2024-12-28 13:50:28,386][84560] Updated weights for policy 0, policy_version 6018 (0.0008) +[2024-12-28 13:50:29,953][84560] Updated weights for policy 0, policy_version 6028 (0.0008) +[2024-12-28 13:50:31,167][78983] Fps is (10 sec: 23347.4, 60 sec: 24917.3, 300 sec: 24728.7). Total num frames: 24719360. Throughput: 0: 6185.9. Samples: 5177360. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:50:31,168][78983] Avg episode reward: [(0, '4.381')] +[2024-12-28 13:50:31,551][84560] Updated weights for policy 0, policy_version 6038 (0.0008) +[2024-12-28 13:50:33,076][84560] Updated weights for policy 0, policy_version 6048 (0.0006) +[2024-12-28 13:50:34,604][84560] Updated weights for policy 0, policy_version 6058 (0.0007) +[2024-12-28 13:50:36,151][84560] Updated weights for policy 0, policy_version 6068 (0.0008) +[2024-12-28 13:50:36,167][78983] Fps is (10 sec: 25804.7, 60 sec: 24985.6, 300 sec: 24756.5). Total num frames: 24854528. Throughput: 0: 6253.2. Samples: 5197190. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:50:36,168][78983] Avg episode reward: [(0, '4.373')] +[2024-12-28 13:50:37,646][84560] Updated weights for policy 0, policy_version 6078 (0.0006) +[2024-12-28 13:50:39,192][84560] Updated weights for policy 0, policy_version 6088 (0.0006) +[2024-12-28 13:50:40,707][84560] Updated weights for policy 0, policy_version 6098 (0.0006) +[2024-12-28 13:50:41,167][78983] Fps is (10 sec: 26624.1, 60 sec: 24985.6, 300 sec: 24825.9). Total num frames: 24985600. Throughput: 0: 6379.7. Samples: 5237432. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:50:41,168][78983] Avg episode reward: [(0, '4.524')] +[2024-12-28 13:50:42,277][84560] Updated weights for policy 0, policy_version 6108 (0.0007) +[2024-12-28 13:50:43,844][84560] Updated weights for policy 0, policy_version 6118 (0.0007) +[2024-12-28 13:50:45,417][84560] Updated weights for policy 0, policy_version 6128 (0.0008) +[2024-12-28 13:50:46,167][78983] Fps is (10 sec: 26214.3, 60 sec: 25190.4, 300 sec: 24895.4). Total num frames: 25116672. Throughput: 0: 6384.0. Samples: 5277044. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:50:46,168][78983] Avg episode reward: [(0, '4.408')] +[2024-12-28 13:50:46,942][84560] Updated weights for policy 0, policy_version 6138 (0.0007) +[2024-12-28 13:50:48,464][84560] Updated weights for policy 0, policy_version 6148 (0.0007) +[2024-12-28 13:50:50,002][84560] Updated weights for policy 0, policy_version 6158 (0.0007) +[2024-12-28 13:50:51,167][78983] Fps is (10 sec: 26624.0, 60 sec: 25600.0, 300 sec: 24909.2). Total num frames: 25251840. Throughput: 0: 6384.8. Samples: 5297118. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:50:51,168][78983] Avg episode reward: [(0, '4.324')] +[2024-12-28 13:50:51,509][84560] Updated weights for policy 0, policy_version 6168 (0.0007) +[2024-12-28 13:50:53,110][84560] Updated weights for policy 0, policy_version 6178 (0.0008) +[2024-12-28 13:50:54,638][84560] Updated weights for policy 0, policy_version 6188 (0.0007) +[2024-12-28 13:50:56,167][78983] Fps is (10 sec: 26624.2, 60 sec: 25736.5, 300 sec: 24978.7). Total num frames: 25382912. Throughput: 0: 6380.4. Samples: 5336912. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 13:50:56,168][78983] Avg episode reward: [(0, '4.454')] +[2024-12-28 13:50:56,249][84560] Updated weights for policy 0, policy_version 6198 (0.0008) +[2024-12-28 13:50:57,771][84560] Updated weights for policy 0, policy_version 6208 (0.0006) +[2024-12-28 13:50:59,298][84560] Updated weights for policy 0, policy_version 6218 (0.0007) +[2024-12-28 13:51:00,862][84560] Updated weights for policy 0, policy_version 6228 (0.0006) +[2024-12-28 13:51:01,167][78983] Fps is (10 sec: 26214.3, 60 sec: 25736.5, 300 sec: 25048.1). Total num frames: 25513984. Throughput: 0: 6371.5. Samples: 5376234. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:51:01,168][78983] Avg episode reward: [(0, '4.542')] +[2024-12-28 13:51:02,474][84560] Updated weights for policy 0, policy_version 6238 (0.0008) +[2024-12-28 13:51:04,061][84560] Updated weights for policy 0, policy_version 6248 (0.0007) +[2024-12-28 13:51:05,597][84560] Updated weights for policy 0, policy_version 6258 (0.0007) +[2024-12-28 13:51:06,167][78983] Fps is (10 sec: 26214.4, 60 sec: 25668.3, 300 sec: 25089.7). Total num frames: 25645056. Throughput: 0: 6398.0. Samples: 5395584. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:51:06,168][78983] Avg episode reward: [(0, '4.468')] +[2024-12-28 13:51:07,139][84560] Updated weights for policy 0, policy_version 6268 (0.0007) +[2024-12-28 13:51:08,648][84560] Updated weights for policy 0, policy_version 6278 (0.0006) +[2024-12-28 13:51:10,205][84560] Updated weights for policy 0, policy_version 6288 (0.0007) +[2024-12-28 13:51:11,167][78983] Fps is (10 sec: 26624.1, 60 sec: 25736.5, 300 sec: 25089.7). Total num frames: 25780224. Throughput: 0: 6551.2. Samples: 5435544. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:51:11,168][78983] Avg episode reward: [(0, '4.341')] +[2024-12-28 13:51:11,833][84560] Updated weights for policy 0, policy_version 6298 (0.0007) +[2024-12-28 13:51:13,380][84560] Updated weights for policy 0, policy_version 6308 (0.0007) +[2024-12-28 13:51:14,927][84560] Updated weights for policy 0, policy_version 6318 (0.0007) +[2024-12-28 13:51:16,167][78983] Fps is (10 sec: 26624.0, 60 sec: 25736.5, 300 sec: 25089.7). Total num frames: 25911296. Throughput: 0: 6613.2. Samples: 5474952. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:51:16,168][78983] Avg episode reward: [(0, '4.405')] +[2024-12-28 13:51:16,475][84560] Updated weights for policy 0, policy_version 6328 (0.0007) +[2024-12-28 13:51:18,021][84560] Updated weights for policy 0, policy_version 6338 (0.0008) +[2024-12-28 13:51:19,547][84560] Updated weights for policy 0, policy_version 6348 (0.0006) +[2024-12-28 13:51:21,110][84560] Updated weights for policy 0, policy_version 6358 (0.0007) +[2024-12-28 13:51:21,167][78983] Fps is (10 sec: 26214.4, 60 sec: 25941.4, 300 sec: 25089.7). Total num frames: 26042368. Throughput: 0: 6608.5. Samples: 5494570. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:51:21,168][78983] Avg episode reward: [(0, '4.286')] +[2024-12-28 13:51:22,693][84560] Updated weights for policy 0, policy_version 6368 (0.0007) +[2024-12-28 13:51:24,229][84560] Updated weights for policy 0, policy_version 6378 (0.0007) +[2024-12-28 13:51:25,761][84560] Updated weights for policy 0, policy_version 6388 (0.0008) +[2024-12-28 13:51:26,167][78983] Fps is (10 sec: 26214.3, 60 sec: 26282.6, 300 sec: 25103.6). Total num frames: 26173440. Throughput: 0: 6594.0. Samples: 5534164. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:51:26,168][78983] Avg episode reward: [(0, '4.334')] +[2024-12-28 13:51:27,263][84560] Updated weights for policy 0, policy_version 6398 (0.0007) +[2024-12-28 13:51:28,822][84560] Updated weights for policy 0, policy_version 6408 (0.0007) +[2024-12-28 13:51:30,401][84560] Updated weights for policy 0, policy_version 6418 (0.0007) +[2024-12-28 13:51:31,167][78983] Fps is (10 sec: 26214.4, 60 sec: 26419.2, 300 sec: 25131.4). Total num frames: 26304512. Throughput: 0: 6593.0. Samples: 5573730. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 13:51:31,168][78983] Avg episode reward: [(0, '4.403')] +[2024-12-28 13:51:31,951][84560] Updated weights for policy 0, policy_version 6428 (0.0007) +[2024-12-28 13:51:33,483][84560] Updated weights for policy 0, policy_version 6438 (0.0008) +[2024-12-28 13:51:35,033][84560] Updated weights for policy 0, policy_version 6448 (0.0006) +[2024-12-28 13:51:36,167][78983] Fps is (10 sec: 26624.2, 60 sec: 26419.2, 300 sec: 25228.6). Total num frames: 26439680. Throughput: 0: 6590.2. Samples: 5593676. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 13:51:36,168][78983] Avg episode reward: [(0, '4.308')] +[2024-12-28 13:51:36,628][84560] Updated weights for policy 0, policy_version 6458 (0.0007) +[2024-12-28 13:51:38,175][84560] Updated weights for policy 0, policy_version 6468 (0.0006) +[2024-12-28 13:51:39,725][84560] Updated weights for policy 0, policy_version 6478 (0.0007) +[2024-12-28 13:51:41,167][78983] Fps is (10 sec: 26624.0, 60 sec: 26419.2, 300 sec: 25270.2). Total num frames: 26570752. Throughput: 0: 6582.2. Samples: 5633110. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 13:51:41,168][78983] Avg episode reward: [(0, '4.248')] +[2024-12-28 13:51:41,292][84560] Updated weights for policy 0, policy_version 6488 (0.0006) +[2024-12-28 13:51:42,872][84560] Updated weights for policy 0, policy_version 6498 (0.0007) +[2024-12-28 13:51:44,483][84560] Updated weights for policy 0, policy_version 6508 (0.0006) +[2024-12-28 13:51:46,037][84560] Updated weights for policy 0, policy_version 6518 (0.0008) +[2024-12-28 13:51:46,167][78983] Fps is (10 sec: 25804.7, 60 sec: 26351.0, 300 sec: 25270.2). Total num frames: 26697728. Throughput: 0: 6573.2. Samples: 5672028. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:51:46,168][78983] Avg episode reward: [(0, '4.424')] +[2024-12-28 13:51:47,551][84560] Updated weights for policy 0, policy_version 6528 (0.0006) +[2024-12-28 13:51:49,108][84560] Updated weights for policy 0, policy_version 6538 (0.0007) +[2024-12-28 13:51:50,636][84560] Updated weights for policy 0, policy_version 6548 (0.0007) +[2024-12-28 13:51:51,167][78983] Fps is (10 sec: 26214.4, 60 sec: 26350.9, 300 sec: 25284.1). Total num frames: 26832896. Throughput: 0: 6588.9. Samples: 5692086. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:51:51,168][78983] Avg episode reward: [(0, '4.495')] +[2024-12-28 13:51:52,260][84560] Updated weights for policy 0, policy_version 6558 (0.0007) +[2024-12-28 13:51:53,794][84560] Updated weights for policy 0, policy_version 6568 (0.0007) +[2024-12-28 13:51:55,323][84560] Updated weights for policy 0, policy_version 6578 (0.0006) +[2024-12-28 13:51:56,167][78983] Fps is (10 sec: 26624.1, 60 sec: 26350.9, 300 sec: 25284.1). Total num frames: 26963968. Throughput: 0: 6577.6. Samples: 5731536. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:51:56,168][78983] Avg episode reward: [(0, '4.360')] +[2024-12-28 13:51:56,906][84560] Updated weights for policy 0, policy_version 6588 (0.0007) +[2024-12-28 13:51:58,560][84560] Updated weights for policy 0, policy_version 6598 (0.0007) +[2024-12-28 13:52:00,404][84560] Updated weights for policy 0, policy_version 6608 (0.0010) +[2024-12-28 13:52:01,167][78983] Fps is (10 sec: 24575.9, 60 sec: 26077.9, 300 sec: 25270.2). Total num frames: 27078656. Throughput: 0: 6508.3. Samples: 5767826. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:52:01,168][78983] Avg episode reward: [(0, '4.633')] +[2024-12-28 13:52:02,300][84560] Updated weights for policy 0, policy_version 6618 (0.0009) +[2024-12-28 13:52:04,242][84560] Updated weights for policy 0, policy_version 6628 (0.0009) +[2024-12-28 13:52:06,097][84560] Updated weights for policy 0, policy_version 6638 (0.0008) +[2024-12-28 13:52:06,167][78983] Fps is (10 sec: 22528.0, 60 sec: 25736.5, 300 sec: 25270.3). Total num frames: 27189248. Throughput: 0: 6431.4. Samples: 5783984. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:52:06,168][78983] Avg episode reward: [(0, '4.449')] +[2024-12-28 13:52:07,976][84560] Updated weights for policy 0, policy_version 6648 (0.0008) +[2024-12-28 13:52:09,652][84560] Updated weights for policy 0, policy_version 6658 (0.0008) +[2024-12-28 13:52:11,167][78983] Fps is (10 sec: 22937.6, 60 sec: 25463.5, 300 sec: 25311.9). Total num frames: 27308032. Throughput: 0: 6306.9. Samples: 5817972. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:52:11,168][78983] Avg episode reward: [(0, '4.467')] +[2024-12-28 13:52:11,188][84560] Updated weights for policy 0, policy_version 6668 (0.0006) +[2024-12-28 13:52:12,720][84560] Updated weights for policy 0, policy_version 6678 (0.0007) +[2024-12-28 13:52:14,241][84560] Updated weights for policy 0, policy_version 6688 (0.0007) +[2024-12-28 13:52:15,902][84560] Updated weights for policy 0, policy_version 6698 (0.0009) +[2024-12-28 13:52:16,167][78983] Fps is (10 sec: 24985.6, 60 sec: 25463.5, 300 sec: 25311.9). Total num frames: 27439104. Throughput: 0: 6300.0. Samples: 5857228. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:52:16,168][78983] Avg episode reward: [(0, '4.641')] +[2024-12-28 13:52:17,499][84560] Updated weights for policy 0, policy_version 6708 (0.0007) +[2024-12-28 13:52:18,990][84560] Updated weights for policy 0, policy_version 6718 (0.0007) +[2024-12-28 13:52:20,565][84560] Updated weights for policy 0, policy_version 6728 (0.0007) +[2024-12-28 13:52:21,167][78983] Fps is (10 sec: 26214.3, 60 sec: 25463.5, 300 sec: 25395.2). Total num frames: 27570176. Throughput: 0: 6298.1. Samples: 5877090. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:52:21,168][78983] Avg episode reward: [(0, '4.365')] +[2024-12-28 13:52:21,185][84543] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000006732_27574272.pth... +[2024-12-28 13:52:21,223][84543] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000005246_21487616.pth +[2024-12-28 13:52:22,133][84560] Updated weights for policy 0, policy_version 6738 (0.0006) +[2024-12-28 13:52:23,721][84560] Updated weights for policy 0, policy_version 6748 (0.0006) +[2024-12-28 13:52:25,290][84560] Updated weights for policy 0, policy_version 6758 (0.0006) +[2024-12-28 13:52:26,167][78983] Fps is (10 sec: 26214.3, 60 sec: 25463.5, 300 sec: 25478.5). Total num frames: 27701248. Throughput: 0: 6293.2. Samples: 5916304. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:52:26,168][78983] Avg episode reward: [(0, '4.362')] +[2024-12-28 13:52:26,839][84560] Updated weights for policy 0, policy_version 6768 (0.0007) +[2024-12-28 13:52:28,373][84560] Updated weights for policy 0, policy_version 6778 (0.0007) +[2024-12-28 13:52:29,900][84560] Updated weights for policy 0, policy_version 6788 (0.0006) +[2024-12-28 13:52:31,167][78983] Fps is (10 sec: 26214.4, 60 sec: 25463.5, 300 sec: 25534.0). Total num frames: 27832320. Throughput: 0: 6309.6. Samples: 5955960. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:52:31,168][78983] Avg episode reward: [(0, '4.468')] +[2024-12-28 13:52:31,473][84560] Updated weights for policy 0, policy_version 6798 (0.0008) +[2024-12-28 13:52:33,027][84560] Updated weights for policy 0, policy_version 6808 (0.0008) +[2024-12-28 13:52:34,594][84560] Updated weights for policy 0, policy_version 6818 (0.0007) +[2024-12-28 13:52:36,161][84560] Updated weights for policy 0, policy_version 6828 (0.0007) +[2024-12-28 13:52:36,167][78983] Fps is (10 sec: 26624.2, 60 sec: 25463.5, 300 sec: 25603.5). Total num frames: 27967488. Throughput: 0: 6303.1. Samples: 5975724. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:52:36,168][78983] Avg episode reward: [(0, '4.473')] +[2024-12-28 13:52:37,723][84560] Updated weights for policy 0, policy_version 6838 (0.0007) +[2024-12-28 13:52:39,287][84560] Updated weights for policy 0, policy_version 6848 (0.0008) +[2024-12-28 13:52:40,804][84560] Updated weights for policy 0, policy_version 6858 (0.0006) +[2024-12-28 13:52:41,167][78983] Fps is (10 sec: 26624.0, 60 sec: 25463.4, 300 sec: 25589.6). Total num frames: 28098560. Throughput: 0: 6299.2. Samples: 6015000. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:52:41,168][78983] Avg episode reward: [(0, '4.407')] +[2024-12-28 13:52:42,344][84560] Updated weights for policy 0, policy_version 6868 (0.0006) +[2024-12-28 13:52:43,870][84560] Updated weights for policy 0, policy_version 6878 (0.0007) +[2024-12-28 13:52:45,408][84560] Updated weights for policy 0, policy_version 6888 (0.0006) +[2024-12-28 13:52:46,167][78983] Fps is (10 sec: 26214.3, 60 sec: 25531.7, 300 sec: 25575.7). Total num frames: 28229632. Throughput: 0: 6378.5. Samples: 6054858. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 13:52:46,169][78983] Avg episode reward: [(0, '4.163')] +[2024-12-28 13:52:47,016][84560] Updated weights for policy 0, policy_version 6898 (0.0007) +[2024-12-28 13:52:48,581][84560] Updated weights for policy 0, policy_version 6908 (0.0006) +[2024-12-28 13:52:50,126][84560] Updated weights for policy 0, policy_version 6918 (0.0007) +[2024-12-28 13:52:51,167][78983] Fps is (10 sec: 26214.3, 60 sec: 25463.4, 300 sec: 25561.8). Total num frames: 28360704. Throughput: 0: 6457.1. Samples: 6074554. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 13:52:51,169][78983] Avg episode reward: [(0, '4.489')] +[2024-12-28 13:52:51,678][84560] Updated weights for policy 0, policy_version 6928 (0.0007) +[2024-12-28 13:52:53,253][84560] Updated weights for policy 0, policy_version 6938 (0.0007) +[2024-12-28 13:52:54,799][84560] Updated weights for policy 0, policy_version 6948 (0.0007) +[2024-12-28 13:52:56,167][78983] Fps is (10 sec: 26623.9, 60 sec: 25531.7, 300 sec: 25575.7). Total num frames: 28495872. Throughput: 0: 6578.1. Samples: 6113986. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 13:52:56,168][78983] Avg episode reward: [(0, '4.531')] +[2024-12-28 13:52:56,311][84560] Updated weights for policy 0, policy_version 6958 (0.0008) +[2024-12-28 13:52:57,830][84560] Updated weights for policy 0, policy_version 6968 (0.0007) +[2024-12-28 13:52:59,409][84560] Updated weights for policy 0, policy_version 6978 (0.0009) +[2024-12-28 13:53:01,001][84560] Updated weights for policy 0, policy_version 6988 (0.0006) +[2024-12-28 13:53:01,167][78983] Fps is (10 sec: 26623.8, 60 sec: 25804.7, 300 sec: 25561.8). Total num frames: 28626944. Throughput: 0: 6584.6. Samples: 6153538. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 13:53:01,168][78983] Avg episode reward: [(0, '4.430')] +[2024-12-28 13:53:02,519][84560] Updated weights for policy 0, policy_version 6998 (0.0006) +[2024-12-28 13:53:04,036][84560] Updated weights for policy 0, policy_version 7008 (0.0006) +[2024-12-28 13:53:05,559][84560] Updated weights for policy 0, policy_version 7018 (0.0007) +[2024-12-28 13:53:06,167][78983] Fps is (10 sec: 26624.1, 60 sec: 26214.4, 300 sec: 25561.8). Total num frames: 28762112. Throughput: 0: 6596.9. Samples: 6173950. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 13:53:06,168][78983] Avg episode reward: [(0, '4.376')] +[2024-12-28 13:53:07,090][84560] Updated weights for policy 0, policy_version 7028 (0.0006) +[2024-12-28 13:53:08,610][84560] Updated weights for policy 0, policy_version 7038 (0.0007) +[2024-12-28 13:53:10,136][84560] Updated weights for policy 0, policy_version 7048 (0.0007) +[2024-12-28 13:53:11,167][78983] Fps is (10 sec: 26624.2, 60 sec: 26419.2, 300 sec: 25561.8). Total num frames: 28893184. Throughput: 0: 6620.2. Samples: 6214214. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:53:11,168][78983] Avg episode reward: [(0, '4.601')] +[2024-12-28 13:53:11,669][84560] Updated weights for policy 0, policy_version 7058 (0.0007) +[2024-12-28 13:53:13,304][84560] Updated weights for policy 0, policy_version 7068 (0.0008) +[2024-12-28 13:53:15,084][84560] Updated weights for policy 0, policy_version 7078 (0.0008) +[2024-12-28 13:53:16,167][78983] Fps is (10 sec: 24985.6, 60 sec: 26214.4, 300 sec: 25506.3). Total num frames: 29011968. Throughput: 0: 6558.7. Samples: 6251102. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:53:16,168][78983] Avg episode reward: [(0, '4.449')] +[2024-12-28 13:53:16,912][84560] Updated weights for policy 0, policy_version 7088 (0.0009) +[2024-12-28 13:53:18,746][84560] Updated weights for policy 0, policy_version 7098 (0.0008) +[2024-12-28 13:53:20,540][84560] Updated weights for policy 0, policy_version 7108 (0.0008) +[2024-12-28 13:53:21,167][78983] Fps is (10 sec: 23347.0, 60 sec: 25941.3, 300 sec: 25450.7). Total num frames: 29126656. Throughput: 0: 6494.3. Samples: 6267968. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:53:21,169][78983] Avg episode reward: [(0, '4.525')] +[2024-12-28 13:53:22,355][84560] Updated weights for policy 0, policy_version 7118 (0.0008) +[2024-12-28 13:53:23,989][84560] Updated weights for policy 0, policy_version 7128 (0.0006) +[2024-12-28 13:53:25,538][84560] Updated weights for policy 0, policy_version 7138 (0.0006) +[2024-12-28 13:53:26,167][78983] Fps is (10 sec: 23756.8, 60 sec: 25804.8, 300 sec: 25423.0). Total num frames: 29249536. Throughput: 0: 6413.6. Samples: 6303612. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:53:26,168][78983] Avg episode reward: [(0, '4.427')] +[2024-12-28 13:53:27,339][84560] Updated weights for policy 0, policy_version 7148 (0.0008) +[2024-12-28 13:53:29,193][84560] Updated weights for policy 0, policy_version 7158 (0.0009) +[2024-12-28 13:53:31,045][84560] Updated weights for policy 0, policy_version 7168 (0.0008) +[2024-12-28 13:53:31,167][78983] Fps is (10 sec: 23347.0, 60 sec: 25463.4, 300 sec: 25353.5). Total num frames: 29360128. Throughput: 0: 6292.2. Samples: 6338006. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:53:31,168][78983] Avg episode reward: [(0, '4.478')] +[2024-12-28 13:53:32,849][84560] Updated weights for policy 0, policy_version 7178 (0.0008) +[2024-12-28 13:53:34,689][84560] Updated weights for policy 0, policy_version 7188 (0.0007) +[2024-12-28 13:53:36,167][78983] Fps is (10 sec: 22527.9, 60 sec: 25122.1, 300 sec: 25367.4). Total num frames: 29474816. Throughput: 0: 6230.5. Samples: 6354928. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:53:36,169][78983] Avg episode reward: [(0, '4.340')] +[2024-12-28 13:53:36,512][84560] Updated weights for policy 0, policy_version 7198 (0.0007) +[2024-12-28 13:53:38,018][84560] Updated weights for policy 0, policy_version 7208 (0.0006) +[2024-12-28 13:53:39,520][84560] Updated weights for policy 0, policy_version 7218 (0.0008) +[2024-12-28 13:53:41,025][84560] Updated weights for policy 0, policy_version 7228 (0.0007) +[2024-12-28 13:53:41,167][78983] Fps is (10 sec: 24576.4, 60 sec: 25122.1, 300 sec: 25436.9). Total num frames: 29605888. Throughput: 0: 6188.4. Samples: 6392466. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:53:41,168][78983] Avg episode reward: [(0, '4.383')] +[2024-12-28 13:53:42,769][84560] Updated weights for policy 0, policy_version 7238 (0.0008) +[2024-12-28 13:53:44,566][84560] Updated weights for policy 0, policy_version 7248 (0.0008) +[2024-12-28 13:53:46,167][78983] Fps is (10 sec: 24576.2, 60 sec: 24849.1, 300 sec: 25409.1). Total num frames: 29720576. Throughput: 0: 6102.8. Samples: 6428162. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:53:46,168][78983] Avg episode reward: [(0, '4.398')] +[2024-12-28 13:53:46,400][84560] Updated weights for policy 0, policy_version 7258 (0.0007) +[2024-12-28 13:53:48,887][84560] Updated weights for policy 0, policy_version 7268 (0.0008) +[2024-12-28 13:53:51,009][84560] Updated weights for policy 0, policy_version 7278 (0.0008) +[2024-12-28 13:53:51,167][78983] Fps is (10 sec: 20479.9, 60 sec: 24166.4, 300 sec: 25339.7). Total num frames: 29810688. Throughput: 0: 5942.6. Samples: 6441368. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:53:51,168][78983] Avg episode reward: [(0, '4.512')] +[2024-12-28 13:53:52,894][84560] Updated weights for policy 0, policy_version 7288 (0.0007) +[2024-12-28 13:53:54,519][84560] Updated weights for policy 0, policy_version 7298 (0.0008) +[2024-12-28 13:53:56,167][78983] Fps is (10 sec: 20889.6, 60 sec: 23893.4, 300 sec: 25381.3). Total num frames: 29929472. Throughput: 0: 5772.6. Samples: 6473982. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 13:53:56,168][78983] Avg episode reward: [(0, '4.313')] +[2024-12-28 13:53:56,210][84560] Updated weights for policy 0, policy_version 7308 (0.0006) +[2024-12-28 13:53:57,920][84560] Updated weights for policy 0, policy_version 7318 (0.0008) +[2024-12-28 13:53:59,619][84560] Updated weights for policy 0, policy_version 7328 (0.0007) +[2024-12-28 13:54:01,167][78983] Fps is (10 sec: 24166.5, 60 sec: 23756.9, 300 sec: 25381.3). Total num frames: 30052352. Throughput: 0: 5762.0. Samples: 6510394. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:54:01,168][78983] Avg episode reward: [(0, '4.314')] +[2024-12-28 13:54:01,309][84560] Updated weights for policy 0, policy_version 7338 (0.0007) +[2024-12-28 13:54:02,928][84560] Updated weights for policy 0, policy_version 7348 (0.0008) +[2024-12-28 13:54:04,542][84560] Updated weights for policy 0, policy_version 7358 (0.0007) +[2024-12-28 13:54:06,165][84560] Updated weights for policy 0, policy_version 7368 (0.0008) +[2024-12-28 13:54:06,167][78983] Fps is (10 sec: 24985.5, 60 sec: 23620.3, 300 sec: 25367.4). Total num frames: 30179328. Throughput: 0: 5805.9. Samples: 6529232. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:54:06,168][78983] Avg episode reward: [(0, '4.198')] +[2024-12-28 13:54:07,850][84560] Updated weights for policy 0, policy_version 7378 (0.0008) +[2024-12-28 13:54:09,514][84560] Updated weights for policy 0, policy_version 7388 (0.0008) +[2024-12-28 13:54:11,167][78983] Fps is (10 sec: 24575.9, 60 sec: 23415.5, 300 sec: 25325.8). Total num frames: 30298112. Throughput: 0: 5838.7. Samples: 6566352. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:54:11,169][78983] Avg episode reward: [(0, '4.520')] +[2024-12-28 13:54:11,171][84560] Updated weights for policy 0, policy_version 7398 (0.0007) +[2024-12-28 13:54:13,026][84560] Updated weights for policy 0, policy_version 7408 (0.0007) +[2024-12-28 13:54:15,004][84560] Updated weights for policy 0, policy_version 7418 (0.0008) +[2024-12-28 13:54:16,167][78983] Fps is (10 sec: 22527.6, 60 sec: 23210.6, 300 sec: 25228.6). Total num frames: 30404608. Throughput: 0: 5810.3. Samples: 6599468. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:54:16,169][78983] Avg episode reward: [(0, '4.254')] +[2024-12-28 13:54:17,059][84560] Updated weights for policy 0, policy_version 7428 (0.0008) +[2024-12-28 13:54:19,127][84560] Updated weights for policy 0, policy_version 7438 (0.0009) +[2024-12-28 13:54:21,167][78983] Fps is (10 sec: 20480.1, 60 sec: 22937.6, 300 sec: 25103.6). Total num frames: 30502912. Throughput: 0: 5761.2. Samples: 6614182. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:54:21,168][78983] Avg episode reward: [(0, '4.516')] +[2024-12-28 13:54:21,184][84543] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000007448_30507008.pth... +[2024-12-28 13:54:21,186][84560] Updated weights for policy 0, policy_version 7448 (0.0010) +[2024-12-28 13:54:21,225][84543] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000005978_24485888.pth +[2024-12-28 13:54:23,182][84560] Updated weights for policy 0, policy_version 7458 (0.0009) +[2024-12-28 13:54:25,160][84560] Updated weights for policy 0, policy_version 7468 (0.0008) +[2024-12-28 13:54:26,167][78983] Fps is (10 sec: 20480.3, 60 sec: 22664.5, 300 sec: 25034.2). Total num frames: 30609408. Throughput: 0: 5602.7. Samples: 6644586. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:54:26,168][78983] Avg episode reward: [(0, '4.388')] +[2024-12-28 13:54:27,208][84560] Updated weights for policy 0, policy_version 7478 (0.0008) +[2024-12-28 13:54:29,210][84560] Updated weights for policy 0, policy_version 7488 (0.0009) +[2024-12-28 13:54:31,167][78983] Fps is (10 sec: 20480.0, 60 sec: 22459.8, 300 sec: 24923.1). Total num frames: 30707712. Throughput: 0: 5486.4. Samples: 6675052. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:54:31,168][78983] Avg episode reward: [(0, '4.521')] +[2024-12-28 13:54:31,261][84560] Updated weights for policy 0, policy_version 7498 (0.0009) +[2024-12-28 13:54:33,309][84560] Updated weights for policy 0, policy_version 7508 (0.0010) +[2024-12-28 13:54:35,334][84560] Updated weights for policy 0, policy_version 7518 (0.0009) +[2024-12-28 13:54:36,167][78983] Fps is (10 sec: 20070.2, 60 sec: 22254.9, 300 sec: 24825.9). Total num frames: 30810112. Throughput: 0: 5525.6. Samples: 6690022. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:54:36,169][78983] Avg episode reward: [(0, '4.407')] +[2024-12-28 13:54:37,404][84560] Updated weights for policy 0, policy_version 7528 (0.0010) +[2024-12-28 13:54:39,322][84560] Updated weights for policy 0, policy_version 7538 (0.0009) +[2024-12-28 13:54:40,964][84560] Updated weights for policy 0, policy_version 7548 (0.0007) +[2024-12-28 13:54:41,167][78983] Fps is (10 sec: 21299.2, 60 sec: 21913.6, 300 sec: 24798.2). Total num frames: 30920704. Throughput: 0: 5493.2. Samples: 6721174. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:54:41,168][78983] Avg episode reward: [(0, '4.422')] +[2024-12-28 13:54:42,594][84560] Updated weights for policy 0, policy_version 7558 (0.0007) +[2024-12-28 13:54:44,225][84560] Updated weights for policy 0, policy_version 7568 (0.0007) +[2024-12-28 13:54:45,907][84560] Updated weights for policy 0, policy_version 7578 (0.0008) +[2024-12-28 13:54:46,167][78983] Fps is (10 sec: 23347.5, 60 sec: 22050.1, 300 sec: 24839.8). Total num frames: 31043584. Throughput: 0: 5513.6. Samples: 6758508. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:54:46,168][78983] Avg episode reward: [(0, '4.517')] +[2024-12-28 13:54:47,637][84560] Updated weights for policy 0, policy_version 7588 (0.0008) +[2024-12-28 13:54:49,278][84560] Updated weights for policy 0, policy_version 7598 (0.0007) +[2024-12-28 13:54:50,939][84560] Updated weights for policy 0, policy_version 7608 (0.0007) +[2024-12-28 13:54:51,167][78983] Fps is (10 sec: 24576.0, 60 sec: 22596.3, 300 sec: 24839.8). Total num frames: 31166464. Throughput: 0: 5498.7. Samples: 6776672. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:54:51,168][78983] Avg episode reward: [(0, '4.190')] +[2024-12-28 13:54:52,860][84560] Updated weights for policy 0, policy_version 7618 (0.0009) +[2024-12-28 13:54:54,666][84560] Updated weights for policy 0, policy_version 7628 (0.0007) +[2024-12-28 13:54:56,167][78983] Fps is (10 sec: 23347.3, 60 sec: 22459.7, 300 sec: 24770.4). Total num frames: 31277056. Throughput: 0: 5440.7. Samples: 6811182. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:54:56,168][78983] Avg episode reward: [(0, '4.353')] +[2024-12-28 13:54:56,358][84560] Updated weights for policy 0, policy_version 7638 (0.0007) +[2024-12-28 13:54:58,024][84560] Updated weights for policy 0, policy_version 7648 (0.0007) +[2024-12-28 13:54:59,678][84560] Updated weights for policy 0, policy_version 7658 (0.0007) +[2024-12-28 13:55:01,167][78983] Fps is (10 sec: 23346.9, 60 sec: 22459.7, 300 sec: 24728.7). Total num frames: 31399936. Throughput: 0: 5507.7. Samples: 6847316. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:55:01,168][78983] Avg episode reward: [(0, '4.538')] +[2024-12-28 13:55:01,512][84560] Updated weights for policy 0, policy_version 7668 (0.0009) +[2024-12-28 13:55:03,239][84560] Updated weights for policy 0, policy_version 7678 (0.0007) +[2024-12-28 13:55:04,958][84560] Updated weights for policy 0, policy_version 7688 (0.0008) +[2024-12-28 13:55:06,167][78983] Fps is (10 sec: 24166.3, 60 sec: 22323.2, 300 sec: 24687.1). Total num frames: 31518720. Throughput: 0: 5570.2. Samples: 6864840. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 13:55:06,168][78983] Avg episode reward: [(0, '4.273')] +[2024-12-28 13:55:06,634][84560] Updated weights for policy 0, policy_version 7698 (0.0007) +[2024-12-28 13:55:08,428][84560] Updated weights for policy 0, policy_version 7708 (0.0009) +[2024-12-28 13:55:10,347][84560] Updated weights for policy 0, policy_version 7718 (0.0010) +[2024-12-28 13:55:11,167][78983] Fps is (10 sec: 22937.8, 60 sec: 22186.7, 300 sec: 24617.6). Total num frames: 31629312. Throughput: 0: 5663.2. Samples: 6899432. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 13:55:11,168][78983] Avg episode reward: [(0, '4.625')] +[2024-12-28 13:55:12,251][84560] Updated weights for policy 0, policy_version 7728 (0.0008) +[2024-12-28 13:55:14,167][84560] Updated weights for policy 0, policy_version 7738 (0.0010) +[2024-12-28 13:55:16,074][84560] Updated weights for policy 0, policy_version 7748 (0.0008) +[2024-12-28 13:55:16,167][78983] Fps is (10 sec: 21708.8, 60 sec: 22186.7, 300 sec: 24576.0). Total num frames: 31735808. Throughput: 0: 5706.4. Samples: 6931838. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:55:16,168][78983] Avg episode reward: [(0, '4.511')] +[2024-12-28 13:55:18,003][84560] Updated weights for policy 0, policy_version 7758 (0.0008) +[2024-12-28 13:55:19,767][84560] Updated weights for policy 0, policy_version 7768 (0.0007) +[2024-12-28 13:55:21,167][78983] Fps is (10 sec: 22118.4, 60 sec: 22459.7, 300 sec: 24589.9). Total num frames: 31850496. Throughput: 0: 5724.3. Samples: 6947616. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:55:21,168][78983] Avg episode reward: [(0, '4.519')] +[2024-12-28 13:55:21,338][84560] Updated weights for policy 0, policy_version 7778 (0.0007) +[2024-12-28 13:55:22,918][84560] Updated weights for policy 0, policy_version 7788 (0.0007) +[2024-12-28 13:55:24,502][84560] Updated weights for policy 0, policy_version 7798 (0.0007) +[2024-12-28 13:55:26,078][84560] Updated weights for policy 0, policy_version 7808 (0.0008) +[2024-12-28 13:55:26,167][78983] Fps is (10 sec: 24575.5, 60 sec: 22869.3, 300 sec: 24617.6). Total num frames: 31981568. Throughput: 0: 5893.3. Samples: 6986372. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:55:26,168][78983] Avg episode reward: [(0, '4.260')] +[2024-12-28 13:55:27,653][84560] Updated weights for policy 0, policy_version 7818 (0.0007) +[2024-12-28 13:55:29,283][84560] Updated weights for policy 0, policy_version 7828 (0.0006) +[2024-12-28 13:55:30,904][84560] Updated weights for policy 0, policy_version 7838 (0.0008) +[2024-12-28 13:55:31,167][78983] Fps is (10 sec: 25804.8, 60 sec: 23347.2, 300 sec: 24589.9). Total num frames: 32108544. Throughput: 0: 5917.5. Samples: 7024794. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:55:31,168][78983] Avg episode reward: [(0, '4.411')] +[2024-12-28 13:55:32,491][84560] Updated weights for policy 0, policy_version 7848 (0.0007) +[2024-12-28 13:55:34,087][84560] Updated weights for policy 0, policy_version 7858 (0.0006) +[2024-12-28 13:55:35,725][84560] Updated weights for policy 0, policy_version 7868 (0.0008) +[2024-12-28 13:55:36,167][78983] Fps is (10 sec: 25395.8, 60 sec: 23756.9, 300 sec: 24576.0). Total num frames: 32235520. Throughput: 0: 5940.3. Samples: 7043984. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:55:36,168][78983] Avg episode reward: [(0, '4.396')] +[2024-12-28 13:55:37,312][84560] Updated weights for policy 0, policy_version 7878 (0.0008) +[2024-12-28 13:55:38,946][84560] Updated weights for policy 0, policy_version 7888 (0.0006) +[2024-12-28 13:55:40,531][84560] Updated weights for policy 0, policy_version 7898 (0.0008) +[2024-12-28 13:55:41,167][78983] Fps is (10 sec: 25394.9, 60 sec: 24029.8, 300 sec: 24562.1). Total num frames: 32362496. Throughput: 0: 6018.7. Samples: 7082026. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:55:41,168][78983] Avg episode reward: [(0, '4.303')] +[2024-12-28 13:55:42,133][84560] Updated weights for policy 0, policy_version 7908 (0.0006) +[2024-12-28 13:55:43,795][84560] Updated weights for policy 0, policy_version 7918 (0.0007) +[2024-12-28 13:55:45,426][84560] Updated weights for policy 0, policy_version 7928 (0.0007) +[2024-12-28 13:55:46,167][78983] Fps is (10 sec: 25394.9, 60 sec: 24098.1, 300 sec: 24534.3). Total num frames: 32489472. Throughput: 0: 6058.2. Samples: 7119934. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:55:46,168][78983] Avg episode reward: [(0, '4.339')] +[2024-12-28 13:55:47,049][84560] Updated weights for policy 0, policy_version 7938 (0.0007) +[2024-12-28 13:55:48,708][84560] Updated weights for policy 0, policy_version 7948 (0.0007) +[2024-12-28 13:55:50,332][84560] Updated weights for policy 0, policy_version 7958 (0.0008) +[2024-12-28 13:55:51,167][78983] Fps is (10 sec: 25395.3, 60 sec: 24166.4, 300 sec: 24520.5). Total num frames: 32616448. Throughput: 0: 6087.8. Samples: 7138790. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:55:51,168][78983] Avg episode reward: [(0, '4.313')] +[2024-12-28 13:55:51,980][84560] Updated weights for policy 0, policy_version 7968 (0.0008) +[2024-12-28 13:55:53,577][84560] Updated weights for policy 0, policy_version 7978 (0.0007) +[2024-12-28 13:55:55,186][84560] Updated weights for policy 0, policy_version 7988 (0.0007) +[2024-12-28 13:55:56,168][78983] Fps is (10 sec: 24984.6, 60 sec: 24371.0, 300 sec: 24492.7). Total num frames: 32739328. Throughput: 0: 6161.7. Samples: 7176710. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:55:56,170][78983] Avg episode reward: [(0, '4.929')] +[2024-12-28 13:55:56,171][84543] Saving new best policy, reward=4.929! +[2024-12-28 13:55:57,055][84560] Updated weights for policy 0, policy_version 7998 (0.0009) +[2024-12-28 13:55:58,947][84560] Updated weights for policy 0, policy_version 8008 (0.0009) +[2024-12-28 13:56:00,962][84560] Updated weights for policy 0, policy_version 8018 (0.0009) +[2024-12-28 13:56:01,167][78983] Fps is (10 sec: 22937.7, 60 sec: 24098.2, 300 sec: 24409.4). Total num frames: 32845824. Throughput: 0: 6154.1. Samples: 7208772. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 13:56:01,169][78983] Avg episode reward: [(0, '4.231')] +[2024-12-28 13:56:02,860][84560] Updated weights for policy 0, policy_version 8028 (0.0007) +[2024-12-28 13:56:04,757][84560] Updated weights for policy 0, policy_version 8038 (0.0008) +[2024-12-28 13:56:06,167][78983] Fps is (10 sec: 21300.1, 60 sec: 23893.3, 300 sec: 24312.2). Total num frames: 32952320. Throughput: 0: 6163.9. Samples: 7224994. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:56:06,168][78983] Avg episode reward: [(0, '4.555')] +[2024-12-28 13:56:06,563][84560] Updated weights for policy 0, policy_version 8048 (0.0009) +[2024-12-28 13:56:08,169][84560] Updated weights for policy 0, policy_version 8058 (0.0007) +[2024-12-28 13:56:09,737][84560] Updated weights for policy 0, policy_version 8068 (0.0007) +[2024-12-28 13:56:11,167][78983] Fps is (10 sec: 23756.9, 60 sec: 24234.7, 300 sec: 24312.2). Total num frames: 33083392. Throughput: 0: 6109.6. Samples: 7261304. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:56:11,168][78983] Avg episode reward: [(0, '4.413')] +[2024-12-28 13:56:11,315][84560] Updated weights for policy 0, policy_version 8078 (0.0006) +[2024-12-28 13:56:12,878][84560] Updated weights for policy 0, policy_version 8088 (0.0007) +[2024-12-28 13:56:14,755][84560] Updated weights for policy 0, policy_version 8098 (0.0009) +[2024-12-28 13:56:16,167][78983] Fps is (10 sec: 24575.7, 60 sec: 24371.1, 300 sec: 24256.6). Total num frames: 33198080. Throughput: 0: 6058.7. Samples: 7297436. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:56:16,169][78983] Avg episode reward: [(0, '4.488')] +[2024-12-28 13:56:16,603][84560] Updated weights for policy 0, policy_version 8108 (0.0008) +[2024-12-28 13:56:18,442][84560] Updated weights for policy 0, policy_version 8118 (0.0008) +[2024-12-28 13:56:20,286][84560] Updated weights for policy 0, policy_version 8128 (0.0008) +[2024-12-28 13:56:21,167][78983] Fps is (10 sec: 22527.8, 60 sec: 24302.9, 300 sec: 24187.2). Total num frames: 33308672. Throughput: 0: 6001.9. Samples: 7314072. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:56:21,168][78983] Avg episode reward: [(0, '4.302')] +[2024-12-28 13:56:21,173][84543] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000008132_33308672.pth... +[2024-12-28 13:56:21,217][84543] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000006732_27574272.pth +[2024-12-28 13:56:22,261][84560] Updated weights for policy 0, policy_version 8138 (0.0008) +[2024-12-28 13:56:24,219][84560] Updated weights for policy 0, policy_version 8148 (0.0009) +[2024-12-28 13:56:25,782][84560] Updated weights for policy 0, policy_version 8158 (0.0007) +[2024-12-28 13:56:26,167][78983] Fps is (10 sec: 22528.4, 60 sec: 24029.9, 300 sec: 24131.7). Total num frames: 33423360. Throughput: 0: 5877.1. Samples: 7346494. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:56:26,168][78983] Avg episode reward: [(0, '4.429')] +[2024-12-28 13:56:27,604][84560] Updated weights for policy 0, policy_version 8168 (0.0010) +[2024-12-28 13:56:29,487][84560] Updated weights for policy 0, policy_version 8178 (0.0009) +[2024-12-28 13:56:31,167][78983] Fps is (10 sec: 22118.5, 60 sec: 23688.5, 300 sec: 24034.5). Total num frames: 33529856. Throughput: 0: 5787.9. Samples: 7380388. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:56:31,168][78983] Avg episode reward: [(0, '4.500')] +[2024-12-28 13:56:31,437][84560] Updated weights for policy 0, policy_version 8188 (0.0009) +[2024-12-28 13:56:33,357][84560] Updated weights for policy 0, policy_version 8198 (0.0008) +[2024-12-28 13:56:35,265][84560] Updated weights for policy 0, policy_version 8208 (0.0008) +[2024-12-28 13:56:36,167][78983] Fps is (10 sec: 21298.8, 60 sec: 23347.1, 300 sec: 23951.2). Total num frames: 33636352. Throughput: 0: 5724.7. Samples: 7396404. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:56:36,169][78983] Avg episode reward: [(0, '4.429')] +[2024-12-28 13:56:37,125][84560] Updated weights for policy 0, policy_version 8218 (0.0008) +[2024-12-28 13:56:38,746][84560] Updated weights for policy 0, policy_version 8228 (0.0007) +[2024-12-28 13:56:40,340][84560] Updated weights for policy 0, policy_version 8238 (0.0007) +[2024-12-28 13:56:41,167][78983] Fps is (10 sec: 22937.6, 60 sec: 23279.0, 300 sec: 23937.3). Total num frames: 33759232. Throughput: 0: 5660.8. Samples: 7431444. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:56:41,168][78983] Avg episode reward: [(0, '4.541')] +[2024-12-28 13:56:42,155][84560] Updated weights for policy 0, policy_version 8248 (0.0008) +[2024-12-28 13:56:44,103][84560] Updated weights for policy 0, policy_version 8258 (0.0009) +[2024-12-28 13:56:46,002][84560] Updated weights for policy 0, policy_version 8268 (0.0010) +[2024-12-28 13:56:46,167][78983] Fps is (10 sec: 22938.0, 60 sec: 22937.6, 300 sec: 23840.1). Total num frames: 33865728. Throughput: 0: 5683.8. Samples: 7464544. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:56:46,168][78983] Avg episode reward: [(0, '4.568')] +[2024-12-28 13:56:47,878][84560] Updated weights for policy 0, policy_version 8278 (0.0007) +[2024-12-28 13:56:49,821][84560] Updated weights for policy 0, policy_version 8288 (0.0009) +[2024-12-28 13:56:51,167][78983] Fps is (10 sec: 21299.1, 60 sec: 22596.3, 300 sec: 23756.8). Total num frames: 33972224. Throughput: 0: 5682.5. Samples: 7480706. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:56:51,169][78983] Avg episode reward: [(0, '4.571')] +[2024-12-28 13:56:51,809][84560] Updated weights for policy 0, policy_version 8298 (0.0008) +[2024-12-28 13:56:53,654][84560] Updated weights for policy 0, policy_version 8308 (0.0009) +[2024-12-28 13:56:55,202][84560] Updated weights for policy 0, policy_version 8318 (0.0006) +[2024-12-28 13:56:56,167][78983] Fps is (10 sec: 22937.7, 60 sec: 22596.5, 300 sec: 23784.6). Total num frames: 34095104. Throughput: 0: 5621.1. Samples: 7514252. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:56:56,168][78983] Avg episode reward: [(0, '4.463')] +[2024-12-28 13:56:56,777][84560] Updated weights for policy 0, policy_version 8328 (0.0008) +[2024-12-28 13:56:58,302][84560] Updated weights for policy 0, policy_version 8338 (0.0007) +[2024-12-28 13:56:59,886][84560] Updated weights for policy 0, policy_version 8348 (0.0008) +[2024-12-28 13:57:01,167][78983] Fps is (10 sec: 24985.8, 60 sec: 22937.6, 300 sec: 23840.1). Total num frames: 34222080. Throughput: 0: 5683.3. Samples: 7553182. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 13:57:01,168][78983] Avg episode reward: [(0, '4.546')] +[2024-12-28 13:57:01,526][84560] Updated weights for policy 0, policy_version 8358 (0.0008) +[2024-12-28 13:57:03,436][84560] Updated weights for policy 0, policy_version 8368 (0.0008) +[2024-12-28 13:57:05,332][84560] Updated weights for policy 0, policy_version 8378 (0.0008) +[2024-12-28 13:57:06,167][78983] Fps is (10 sec: 23756.8, 60 sec: 23005.9, 300 sec: 23812.3). Total num frames: 34332672. Throughput: 0: 5691.3. Samples: 7570178. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 13:57:06,168][78983] Avg episode reward: [(0, '4.282')] +[2024-12-28 13:57:07,317][84560] Updated weights for policy 0, policy_version 8388 (0.0009) +[2024-12-28 13:57:09,210][84560] Updated weights for policy 0, policy_version 8398 (0.0008) +[2024-12-28 13:57:11,136][84560] Updated weights for policy 0, policy_version 8408 (0.0007) +[2024-12-28 13:57:11,167][78983] Fps is (10 sec: 21708.7, 60 sec: 22596.2, 300 sec: 23729.0). Total num frames: 34439168. Throughput: 0: 5677.9. Samples: 7601998. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 13:57:11,168][78983] Avg episode reward: [(0, '4.391')] +[2024-12-28 13:57:13,032][84560] Updated weights for policy 0, policy_version 8418 (0.0008) +[2024-12-28 13:57:14,809][84560] Updated weights for policy 0, policy_version 8428 (0.0007) +[2024-12-28 13:57:16,167][78983] Fps is (10 sec: 22118.5, 60 sec: 22596.4, 300 sec: 23673.5). Total num frames: 34553856. Throughput: 0: 5678.7. Samples: 7635928. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:57:16,168][78983] Avg episode reward: [(0, '4.250')] +[2024-12-28 13:57:16,424][84560] Updated weights for policy 0, policy_version 8438 (0.0007) +[2024-12-28 13:57:18,042][84560] Updated weights for policy 0, policy_version 8448 (0.0007) +[2024-12-28 13:57:19,635][84560] Updated weights for policy 0, policy_version 8458 (0.0007) +[2024-12-28 13:57:21,167][78983] Fps is (10 sec: 24166.5, 60 sec: 22869.4, 300 sec: 23659.6). Total num frames: 34680832. Throughput: 0: 5746.8. Samples: 7655010. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:57:21,168][78983] Avg episode reward: [(0, '4.309')] +[2024-12-28 13:57:21,217][84560] Updated weights for policy 0, policy_version 8468 (0.0007) +[2024-12-28 13:57:22,761][84560] Updated weights for policy 0, policy_version 8478 (0.0007) +[2024-12-28 13:57:24,345][84560] Updated weights for policy 0, policy_version 8488 (0.0007) +[2024-12-28 13:57:25,904][84560] Updated weights for policy 0, policy_version 8498 (0.0007) +[2024-12-28 13:57:26,167][78983] Fps is (10 sec: 25804.8, 60 sec: 23142.4, 300 sec: 23659.6). Total num frames: 34811904. Throughput: 0: 5832.7. Samples: 7693914. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:57:26,168][78983] Avg episode reward: [(0, '4.574')] +[2024-12-28 13:57:27,450][84560] Updated weights for policy 0, policy_version 8508 (0.0007) +[2024-12-28 13:57:29,061][84560] Updated weights for policy 0, policy_version 8518 (0.0006) +[2024-12-28 13:57:30,673][84560] Updated weights for policy 0, policy_version 8528 (0.0007) +[2024-12-28 13:57:31,167][78983] Fps is (10 sec: 25804.9, 60 sec: 23483.7, 300 sec: 23631.8). Total num frames: 34938880. Throughput: 0: 5958.4. Samples: 7732670. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:57:31,168][78983] Avg episode reward: [(0, '4.772')] +[2024-12-28 13:57:32,277][84560] Updated weights for policy 0, policy_version 8538 (0.0008) +[2024-12-28 13:57:34,018][84560] Updated weights for policy 0, policy_version 8548 (0.0007) +[2024-12-28 13:57:35,834][84560] Updated weights for policy 0, policy_version 8558 (0.0008) +[2024-12-28 13:57:36,167][78983] Fps is (10 sec: 24575.4, 60 sec: 23688.5, 300 sec: 23590.2). Total num frames: 35057664. Throughput: 0: 6013.2. Samples: 7751300. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:57:36,169][78983] Avg episode reward: [(0, '4.551')] +[2024-12-28 13:57:37,690][84560] Updated weights for policy 0, policy_version 8568 (0.0007) +[2024-12-28 13:57:39,538][84560] Updated weights for policy 0, policy_version 8578 (0.0008) +[2024-12-28 13:57:41,167][78983] Fps is (10 sec: 22937.7, 60 sec: 23483.8, 300 sec: 23520.8). Total num frames: 35168256. Throughput: 0: 6008.3. Samples: 7784624. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:57:41,168][78983] Avg episode reward: [(0, '4.563')] +[2024-12-28 13:57:41,406][84560] Updated weights for policy 0, policy_version 8588 (0.0008) +[2024-12-28 13:57:43,375][84560] Updated weights for policy 0, policy_version 8598 (0.0009) +[2024-12-28 13:57:45,088][84560] Updated weights for policy 0, policy_version 8608 (0.0007) +[2024-12-28 13:57:46,167][78983] Fps is (10 sec: 22528.2, 60 sec: 23620.2, 300 sec: 23465.2). Total num frames: 35282944. Throughput: 0: 5891.8. Samples: 7818314. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:57:46,168][78983] Avg episode reward: [(0, '4.329')] +[2024-12-28 13:57:46,699][84560] Updated weights for policy 0, policy_version 8618 (0.0008) +[2024-12-28 13:57:48,259][84560] Updated weights for policy 0, policy_version 8628 (0.0006) +[2024-12-28 13:57:49,845][84560] Updated weights for policy 0, policy_version 8638 (0.0006) +[2024-12-28 13:57:51,167][78983] Fps is (10 sec: 24575.7, 60 sec: 24029.8, 300 sec: 23451.3). Total num frames: 35414016. Throughput: 0: 5947.2. Samples: 7837804. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 13:57:51,169][78983] Avg episode reward: [(0, '4.407')] +[2024-12-28 13:57:51,442][84560] Updated weights for policy 0, policy_version 8648 (0.0007) +[2024-12-28 13:57:53,045][84560] Updated weights for policy 0, policy_version 8658 (0.0009) +[2024-12-28 13:57:54,683][84560] Updated weights for policy 0, policy_version 8668 (0.0007) +[2024-12-28 13:57:56,167][78983] Fps is (10 sec: 25805.2, 60 sec: 24098.1, 300 sec: 23437.5). Total num frames: 35540992. Throughput: 0: 6091.7. Samples: 7876124. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 13:57:56,168][78983] Avg episode reward: [(0, '4.409')] +[2024-12-28 13:57:56,320][84560] Updated weights for policy 0, policy_version 8678 (0.0008) +[2024-12-28 13:57:57,893][84560] Updated weights for policy 0, policy_version 8688 (0.0008) +[2024-12-28 13:57:59,493][84560] Updated weights for policy 0, policy_version 8698 (0.0007) +[2024-12-28 13:58:01,124][84560] Updated weights for policy 0, policy_version 8708 (0.0007) +[2024-12-28 13:58:01,167][78983] Fps is (10 sec: 25395.5, 60 sec: 24098.1, 300 sec: 23409.7). Total num frames: 35667968. Throughput: 0: 6184.8. Samples: 7914246. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:58:01,168][78983] Avg episode reward: [(0, '4.554')] +[2024-12-28 13:58:02,779][84560] Updated weights for policy 0, policy_version 8718 (0.0007) +[2024-12-28 13:58:04,393][84560] Updated weights for policy 0, policy_version 8728 (0.0007) +[2024-12-28 13:58:05,954][84560] Updated weights for policy 0, policy_version 8738 (0.0008) +[2024-12-28 13:58:06,167][78983] Fps is (10 sec: 25395.1, 60 sec: 24371.2, 300 sec: 23395.8). Total num frames: 35794944. Throughput: 0: 6175.9. Samples: 7932924. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:58:06,168][78983] Avg episode reward: [(0, '4.446')] +[2024-12-28 13:58:07,537][84560] Updated weights for policy 0, policy_version 8748 (0.0007) +[2024-12-28 13:58:09,091][84560] Updated weights for policy 0, policy_version 8758 (0.0007) +[2024-12-28 13:58:10,714][84560] Updated weights for policy 0, policy_version 8768 (0.0006) +[2024-12-28 13:58:11,167][78983] Fps is (10 sec: 25395.2, 60 sec: 24712.6, 300 sec: 23423.6). Total num frames: 35921920. Throughput: 0: 6174.2. Samples: 7971754. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:58:11,168][78983] Avg episode reward: [(0, '4.324')] +[2024-12-28 13:58:12,472][84560] Updated weights for policy 0, policy_version 8778 (0.0009) +[2024-12-28 13:58:14,337][84560] Updated weights for policy 0, policy_version 8788 (0.0009) +[2024-12-28 13:58:16,167][78983] Fps is (10 sec: 23756.7, 60 sec: 24644.2, 300 sec: 23409.7). Total num frames: 36032512. Throughput: 0: 6075.7. Samples: 8006078. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:58:16,169][78983] Avg episode reward: [(0, '4.484')] +[2024-12-28 13:58:16,242][84560] Updated weights for policy 0, policy_version 8798 (0.0009) +[2024-12-28 13:58:18,049][84560] Updated weights for policy 0, policy_version 8808 (0.0008) +[2024-12-28 13:58:19,967][84560] Updated weights for policy 0, policy_version 8818 (0.0008) +[2024-12-28 13:58:21,167][78983] Fps is (10 sec: 22118.4, 60 sec: 24371.2, 300 sec: 23368.0). Total num frames: 36143104. Throughput: 0: 6030.0. Samples: 8022650. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:58:21,169][78983] Avg episode reward: [(0, '4.517')] +[2024-12-28 13:58:21,174][84543] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000008824_36143104.pth... +[2024-12-28 13:58:21,211][84543] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000007448_30507008.pth +[2024-12-28 13:58:21,932][84560] Updated weights for policy 0, policy_version 8828 (0.0009) +[2024-12-28 13:58:23,534][84560] Updated weights for policy 0, policy_version 8838 (0.0007) +[2024-12-28 13:58:25,146][84560] Updated weights for policy 0, policy_version 8848 (0.0007) +[2024-12-28 13:58:26,167][78983] Fps is (10 sec: 23347.3, 60 sec: 24234.7, 300 sec: 23409.7). Total num frames: 36265984. Throughput: 0: 6064.4. Samples: 8057522. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:58:26,168][78983] Avg episode reward: [(0, '4.325')] +[2024-12-28 13:58:26,725][84560] Updated weights for policy 0, policy_version 8858 (0.0007) +[2024-12-28 13:58:28,311][84560] Updated weights for policy 0, policy_version 8868 (0.0007) +[2024-12-28 13:58:29,895][84560] Updated weights for policy 0, policy_version 8878 (0.0007) +[2024-12-28 13:58:31,167][78983] Fps is (10 sec: 25395.3, 60 sec: 24302.9, 300 sec: 23465.2). Total num frames: 36397056. 
Throughput: 0: 6177.8. Samples: 8096314. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:58:31,168][78983] Avg episode reward: [(0, '4.475')] +[2024-12-28 13:58:31,423][84560] Updated weights for policy 0, policy_version 8888 (0.0007) +[2024-12-28 13:58:32,985][84560] Updated weights for policy 0, policy_version 8898 (0.0006) +[2024-12-28 13:58:34,540][84560] Updated weights for policy 0, policy_version 8908 (0.0007) +[2024-12-28 13:58:36,114][84560] Updated weights for policy 0, policy_version 8918 (0.0008) +[2024-12-28 13:58:36,167][78983] Fps is (10 sec: 26214.2, 60 sec: 24507.8, 300 sec: 23465.2). Total num frames: 36528128. Throughput: 0: 6185.0. Samples: 8116130. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:58:36,168][78983] Avg episode reward: [(0, '4.382')] +[2024-12-28 13:58:37,690][84560] Updated weights for policy 0, policy_version 8928 (0.0006) +[2024-12-28 13:58:39,274][84560] Updated weights for policy 0, policy_version 8938 (0.0007) +[2024-12-28 13:58:40,967][84560] Updated weights for policy 0, policy_version 8948 (0.0008) +[2024-12-28 13:58:41,167][78983] Fps is (10 sec: 25804.8, 60 sec: 24780.8, 300 sec: 23506.9). Total num frames: 36655104. Throughput: 0: 6200.8. Samples: 8155160. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:58:41,169][78983] Avg episode reward: [(0, '4.322')] +[2024-12-28 13:58:42,822][84560] Updated weights for policy 0, policy_version 8958 (0.0008) +[2024-12-28 13:58:44,721][84560] Updated weights for policy 0, policy_version 8968 (0.0009) +[2024-12-28 13:58:46,167][78983] Fps is (10 sec: 23347.3, 60 sec: 24644.3, 300 sec: 23562.4). Total num frames: 36761600. Throughput: 0: 6090.2. Samples: 8188304. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:58:46,168][78983] Avg episode reward: [(0, '4.455')] +[2024-12-28 13:58:46,639][84560] Updated weights for policy 0, policy_version 8978 (0.0008) +[2024-12-28 13:58:48,525][84560] Updated weights for policy 0, policy_version 8988 (0.0009) +[2024-12-28 13:58:50,357][84560] Updated weights for policy 0, policy_version 8998 (0.0008) +[2024-12-28 13:58:51,167][78983] Fps is (10 sec: 21708.7, 60 sec: 24303.0, 300 sec: 23534.6). Total num frames: 36872192. Throughput: 0: 6036.0. Samples: 8204542. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:58:51,169][78983] Avg episode reward: [(0, '4.439')] +[2024-12-28 13:58:52,033][84560] Updated weights for policy 0, policy_version 9008 (0.0007) +[2024-12-28 13:58:53,694][84560] Updated weights for policy 0, policy_version 9018 (0.0009) +[2024-12-28 13:58:55,574][84560] Updated weights for policy 0, policy_version 9028 (0.0007) +[2024-12-28 13:58:56,167][78983] Fps is (10 sec: 22937.6, 60 sec: 24166.4, 300 sec: 23520.8). Total num frames: 36990976. Throughput: 0: 5950.8. Samples: 8239540. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 13:58:56,169][78983] Avg episode reward: [(0, '4.561')] +[2024-12-28 13:58:57,426][84560] Updated weights for policy 0, policy_version 9038 (0.0008) +[2024-12-28 13:58:59,295][84560] Updated weights for policy 0, policy_version 9048 (0.0009) +[2024-12-28 13:59:01,167][78983] Fps is (10 sec: 22528.0, 60 sec: 23825.1, 300 sec: 23451.3). Total num frames: 37097472. Throughput: 0: 5920.2. Samples: 8272488. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 13:59:01,169][78983] Avg episode reward: [(0, '4.246')] +[2024-12-28 13:59:01,182][84560] Updated weights for policy 0, policy_version 9058 (0.0007) +[2024-12-28 13:59:03,050][84560] Updated weights for policy 0, policy_version 9068 (0.0008) +[2024-12-28 13:59:04,784][84560] Updated weights for policy 0, policy_version 9078 (0.0007) +[2024-12-28 13:59:06,167][78983] Fps is (10 sec: 22528.0, 60 sec: 23688.5, 300 sec: 23451.3). Total num frames: 37216256. Throughput: 0: 5915.8. Samples: 8288862. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:59:06,168][78983] Avg episode reward: [(0, '4.461')] +[2024-12-28 13:59:06,357][84560] Updated weights for policy 0, policy_version 9088 (0.0006) +[2024-12-28 13:59:07,881][84560] Updated weights for policy 0, policy_version 9098 (0.0006) +[2024-12-28 13:59:09,435][84560] Updated weights for policy 0, policy_version 9108 (0.0007) +[2024-12-28 13:59:10,999][84560] Updated weights for policy 0, policy_version 9118 (0.0007) +[2024-12-28 13:59:11,167][78983] Fps is (10 sec: 25395.2, 60 sec: 23825.1, 300 sec: 23548.5). Total num frames: 37351424. Throughput: 0: 6013.3. Samples: 8328122. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:59:11,168][78983] Avg episode reward: [(0, '4.359')] +[2024-12-28 13:59:12,562][84560] Updated weights for policy 0, policy_version 9128 (0.0007) +[2024-12-28 13:59:14,158][84560] Updated weights for policy 0, policy_version 9138 (0.0007) +[2024-12-28 13:59:15,822][84560] Updated weights for policy 0, policy_version 9148 (0.0007) +[2024-12-28 13:59:16,167][78983] Fps is (10 sec: 26214.4, 60 sec: 24098.2, 300 sec: 23645.7). Total num frames: 37478400. Throughput: 0: 6006.2. Samples: 8366594. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:59:16,169][78983] Avg episode reward: [(0, '4.391')] +[2024-12-28 13:59:17,415][84560] Updated weights for policy 0, policy_version 9158 (0.0008) +[2024-12-28 13:59:18,963][84560] Updated weights for policy 0, policy_version 9168 (0.0006) +[2024-12-28 13:59:20,557][84560] Updated weights for policy 0, policy_version 9178 (0.0007) +[2024-12-28 13:59:21,167][78983] Fps is (10 sec: 25395.1, 60 sec: 24371.2, 300 sec: 23715.1). Total num frames: 37605376. Throughput: 0: 5999.1. Samples: 8386088. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:59:21,169][78983] Avg episode reward: [(0, '4.640')] +[2024-12-28 13:59:22,166][84560] Updated weights for policy 0, policy_version 9188 (0.0007) +[2024-12-28 13:59:23,769][84560] Updated weights for policy 0, policy_version 9198 (0.0006) +[2024-12-28 13:59:25,352][84560] Updated weights for policy 0, policy_version 9208 (0.0007) +[2024-12-28 13:59:26,167][78983] Fps is (10 sec: 25804.8, 60 sec: 24507.7, 300 sec: 23826.2). Total num frames: 37736448. Throughput: 0: 5992.0. Samples: 8424798. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:59:26,168][78983] Avg episode reward: [(0, '4.400')] +[2024-12-28 13:59:26,917][84560] Updated weights for policy 0, policy_version 9218 (0.0006) +[2024-12-28 13:59:28,491][84560] Updated weights for policy 0, policy_version 9228 (0.0007) +[2024-12-28 13:59:30,287][84560] Updated weights for policy 0, policy_version 9238 (0.0008) +[2024-12-28 13:59:31,167][78983] Fps is (10 sec: 24985.3, 60 sec: 24302.9, 300 sec: 23881.8). Total num frames: 37855232. Throughput: 0: 6077.9. Samples: 8461808. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:59:31,169][78983] Avg episode reward: [(0, '4.540')] +[2024-12-28 13:59:32,153][84560] Updated weights for policy 0, policy_version 9248 (0.0009) +[2024-12-28 13:59:33,972][84560] Updated weights for policy 0, policy_version 9258 (0.0008) +[2024-12-28 13:59:35,889][84560] Updated weights for policy 0, policy_version 9268 (0.0009) +[2024-12-28 13:59:36,167][78983] Fps is (10 sec: 22937.4, 60 sec: 23961.6, 300 sec: 23881.8). Total num frames: 37965824. Throughput: 0: 6087.4. Samples: 8478476. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 13:59:36,168][78983] Avg episode reward: [(0, '4.426')] +[2024-12-28 13:59:37,793][84560] Updated weights for policy 0, policy_version 9278 (0.0009) +[2024-12-28 13:59:39,631][84560] Updated weights for policy 0, policy_version 9288 (0.0008) +[2024-12-28 13:59:41,167][78983] Fps is (10 sec: 22528.3, 60 sec: 23756.8, 300 sec: 23854.0). Total num frames: 38080512. Throughput: 0: 6038.6. Samples: 8511278. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:59:41,168][78983] Avg episode reward: [(0, '4.408')] +[2024-12-28 13:59:41,234][84560] Updated weights for policy 0, policy_version 9298 (0.0007) +[2024-12-28 13:59:42,786][84560] Updated weights for policy 0, policy_version 9308 (0.0007) +[2024-12-28 13:59:44,330][84560] Updated weights for policy 0, policy_version 9318 (0.0007) +[2024-12-28 13:59:45,961][84560] Updated weights for policy 0, policy_version 9328 (0.0007) +[2024-12-28 13:59:46,167][78983] Fps is (10 sec: 24576.2, 60 sec: 24166.4, 300 sec: 23881.8). Total num frames: 38211584. Throughput: 0: 6167.2. Samples: 8550010. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 13:59:46,168][78983] Avg episode reward: [(0, '4.315')] +[2024-12-28 13:59:47,580][84560] Updated weights for policy 0, policy_version 9338 (0.0007) +[2024-12-28 13:59:49,374][84560] Updated weights for policy 0, policy_version 9348 (0.0009) +[2024-12-28 13:59:51,167][78983] Fps is (10 sec: 24576.0, 60 sec: 24234.7, 300 sec: 23895.6). Total num frames: 38326272. Throughput: 0: 6214.0. Samples: 8568492. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:59:51,168][78983] Avg episode reward: [(0, '4.351')] +[2024-12-28 13:59:51,197][84560] Updated weights for policy 0, policy_version 9358 (0.0008) +[2024-12-28 13:59:53,040][84560] Updated weights for policy 0, policy_version 9368 (0.0009) +[2024-12-28 13:59:54,905][84560] Updated weights for policy 0, policy_version 9378 (0.0007) +[2024-12-28 13:59:56,167][78983] Fps is (10 sec: 22528.0, 60 sec: 24098.1, 300 sec: 23854.0). Total num frames: 38436864. Throughput: 0: 6080.8. Samples: 8601756. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 13:59:56,168][78983] Avg episode reward: [(0, '4.295')] +[2024-12-28 13:59:56,799][84560] Updated weights for policy 0, policy_version 9388 (0.0008) +[2024-12-28 13:59:58,669][84560] Updated weights for policy 0, policy_version 9398 (0.0009) +[2024-12-28 14:00:00,250][84560] Updated weights for policy 0, policy_version 9408 (0.0006) +[2024-12-28 14:00:01,167][78983] Fps is (10 sec: 22937.7, 60 sec: 24302.9, 300 sec: 23854.0). Total num frames: 38555648. Throughput: 0: 5998.5. Samples: 8636528. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:00:01,168][78983] Avg episode reward: [(0, '4.584')] +[2024-12-28 14:00:01,862][84560] Updated weights for policy 0, policy_version 9418 (0.0007) +[2024-12-28 14:00:03,399][84560] Updated weights for policy 0, policy_version 9428 (0.0007) +[2024-12-28 14:00:04,978][84560] Updated weights for policy 0, policy_version 9438 (0.0008) +[2024-12-28 14:00:06,167][78983] Fps is (10 sec: 24985.7, 60 sec: 24507.7, 300 sec: 23923.4). Total num frames: 38686720. Throughput: 0: 5996.9. Samples: 8655950. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:00:06,168][78983] Avg episode reward: [(0, '4.441')] +[2024-12-28 14:00:06,574][84560] Updated weights for policy 0, policy_version 9448 (0.0007) +[2024-12-28 14:00:08,170][84560] Updated weights for policy 0, policy_version 9458 (0.0007) +[2024-12-28 14:00:10,626][84560] Updated weights for policy 0, policy_version 9468 (0.0007) +[2024-12-28 14:00:11,167][78983] Fps is (10 sec: 23756.7, 60 sec: 24029.9, 300 sec: 23923.4). Total num frames: 38793216. Throughput: 0: 5942.8. Samples: 8692226. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:00:11,168][78983] Avg episode reward: [(0, '4.455')] +[2024-12-28 14:00:12,106][84560] Updated weights for policy 0, policy_version 9478 (0.0006) +[2024-12-28 14:00:13,601][84560] Updated weights for policy 0, policy_version 9488 (0.0006) +[2024-12-28 14:00:15,295][84560] Updated weights for policy 0, policy_version 9498 (0.0008) +[2024-12-28 14:00:16,167][78983] Fps is (10 sec: 23756.8, 60 sec: 24098.1, 300 sec: 23979.0). Total num frames: 38924288. Throughput: 0: 5929.0. Samples: 8728614. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:00:16,168][78983] Avg episode reward: [(0, '4.224')] +[2024-12-28 14:00:16,824][84560] Updated weights for policy 0, policy_version 9508 (0.0007) +[2024-12-28 14:00:18,386][84560] Updated weights for policy 0, policy_version 9518 (0.0007) +[2024-12-28 14:00:19,891][84560] Updated weights for policy 0, policy_version 9528 (0.0007) +[2024-12-28 14:00:21,167][78983] Fps is (10 sec: 26624.0, 60 sec: 24234.7, 300 sec: 23992.9). Total num frames: 39059456. Throughput: 0: 6001.3. Samples: 8748536. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:00:21,168][78983] Avg episode reward: [(0, '4.590')] +[2024-12-28 14:00:21,173][84543] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000009536_39059456.pth... +[2024-12-28 14:00:21,205][84543] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000008132_33308672.pth +[2024-12-28 14:00:21,499][84560] Updated weights for policy 0, policy_version 9538 (0.0008) +[2024-12-28 14:00:23,038][84560] Updated weights for policy 0, policy_version 9548 (0.0007) +[2024-12-28 14:00:24,595][84560] Updated weights for policy 0, policy_version 9558 (0.0007) +[2024-12-28 14:00:26,135][84560] Updated weights for policy 0, policy_version 9568 (0.0007) +[2024-12-28 14:00:26,167][78983] Fps is (10 sec: 26623.7, 60 sec: 24234.6, 300 sec: 24006.7). Total num frames: 39190528. Throughput: 0: 6150.2. Samples: 8788036. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:00:26,168][78983] Avg episode reward: [(0, '4.561')] +[2024-12-28 14:00:27,665][84560] Updated weights for policy 0, policy_version 9578 (0.0006) +[2024-12-28 14:00:29,160][84560] Updated weights for policy 0, policy_version 9588 (0.0006) +[2024-12-28 14:00:30,720][84560] Updated weights for policy 0, policy_version 9598 (0.0007) +[2024-12-28 14:00:31,167][78983] Fps is (10 sec: 26214.5, 60 sec: 24439.5, 300 sec: 24020.6). Total num frames: 39321600. Throughput: 0: 6181.2. Samples: 8828164. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:00:31,168][78983] Avg episode reward: [(0, '4.349')] +[2024-12-28 14:00:32,256][84560] Updated weights for policy 0, policy_version 9608 (0.0006) +[2024-12-28 14:00:33,813][84560] Updated weights for policy 0, policy_version 9618 (0.0007) +[2024-12-28 14:00:35,330][84560] Updated weights for policy 0, policy_version 9628 (0.0006) +[2024-12-28 14:00:36,167][78983] Fps is (10 sec: 26623.9, 60 sec: 24849.1, 300 sec: 24048.4). Total num frames: 39456768. Throughput: 0: 6212.4. Samples: 8848050. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:00:36,168][78983] Avg episode reward: [(0, '4.233')] +[2024-12-28 14:00:36,911][84560] Updated weights for policy 0, policy_version 9638 (0.0006) +[2024-12-28 14:00:38,404][84560] Updated weights for policy 0, policy_version 9648 (0.0007) +[2024-12-28 14:00:39,936][84560] Updated weights for policy 0, policy_version 9658 (0.0007) +[2024-12-28 14:00:41,167][78983] Fps is (10 sec: 27033.6, 60 sec: 25190.4, 300 sec: 24076.2). Total num frames: 39591936. Throughput: 0: 6363.5. Samples: 8888114. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 14:00:41,168][78983] Avg episode reward: [(0, '4.531')] +[2024-12-28 14:00:41,454][84560] Updated weights for policy 0, policy_version 9668 (0.0007) +[2024-12-28 14:00:42,999][84560] Updated weights for policy 0, policy_version 9678 (0.0008) +[2024-12-28 14:00:44,487][84560] Updated weights for policy 0, policy_version 9688 (0.0006) +[2024-12-28 14:00:46,071][84560] Updated weights for policy 0, policy_version 9698 (0.0006) +[2024-12-28 14:00:46,167][78983] Fps is (10 sec: 26624.3, 60 sec: 25190.4, 300 sec: 24090.0). Total num frames: 39723008. Throughput: 0: 6484.4. Samples: 8928324. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 14:00:46,168][78983] Avg episode reward: [(0, '4.253')] +[2024-12-28 14:00:47,560][84560] Updated weights for policy 0, policy_version 9708 (0.0006) +[2024-12-28 14:00:49,100][84560] Updated weights for policy 0, policy_version 9718 (0.0007) +[2024-12-28 14:00:50,640][84560] Updated weights for policy 0, policy_version 9728 (0.0006) +[2024-12-28 14:00:51,167][78983] Fps is (10 sec: 26624.1, 60 sec: 25531.8, 300 sec: 24131.7). Total num frames: 39858176. Throughput: 0: 6502.2. Samples: 8948550. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:00:51,168][78983] Avg episode reward: [(0, '4.313')] +[2024-12-28 14:00:52,198][84560] Updated weights for policy 0, policy_version 9738 (0.0007) +[2024-12-28 14:00:53,711][84560] Updated weights for policy 0, policy_version 9748 (0.0007) +[2024-12-28 14:00:55,263][84560] Updated weights for policy 0, policy_version 9758 (0.0007) +[2024-12-28 14:00:56,167][78983] Fps is (10 sec: 26623.8, 60 sec: 25873.1, 300 sec: 24215.0). Total num frames: 39989248. Throughput: 0: 6582.6. Samples: 8988444. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:00:56,168][78983] Avg episode reward: [(0, '4.425')] +[2024-12-28 14:00:56,655][84543] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000009767_40005632.pth... +[2024-12-28 14:00:56,655][78983] Component Batcher_0 stopped! +[2024-12-28 14:00:56,655][84543] Stopping Batcher_0... +[2024-12-28 14:00:56,657][84543] Loop batcher_evt_loop terminating... +[2024-12-28 14:00:56,671][84560] Weights refcount: 2 0 +[2024-12-28 14:00:56,672][84560] Stopping InferenceWorker_p0-w0... +[2024-12-28 14:00:56,673][84560] Loop inference_proc0-0_evt_loop terminating... +[2024-12-28 14:00:56,673][78983] Component InferenceWorker_p0-w0 stopped! +[2024-12-28 14:00:56,679][84563] Stopping RolloutWorker_w1... +[2024-12-28 14:00:56,679][84563] Loop rollout_proc1_evt_loop terminating... +[2024-12-28 14:00:56,679][84567] Stopping RolloutWorker_w6... +[2024-12-28 14:00:56,680][84567] Loop rollout_proc6_evt_loop terminating... +[2024-12-28 14:00:56,679][78983] Component RolloutWorker_w1 stopped! +[2024-12-28 14:00:56,680][84561] Stopping RolloutWorker_w0... +[2024-12-28 14:00:56,680][84566] Stopping RolloutWorker_w5... +[2024-12-28 14:00:56,681][84561] Loop rollout_proc0_evt_loop terminating... +[2024-12-28 14:00:56,681][84566] Loop rollout_proc5_evt_loop terminating... +[2024-12-28 14:00:56,681][84568] Stopping RolloutWorker_w7... +[2024-12-28 14:00:56,681][84565] Stopping RolloutWorker_w4... +[2024-12-28 14:00:56,682][84568] Loop rollout_proc7_evt_loop terminating... +[2024-12-28 14:00:56,681][78983] Component RolloutWorker_w6 stopped! +[2024-12-28 14:00:56,682][84565] Loop rollout_proc4_evt_loop terminating... +[2024-12-28 14:00:56,682][84564] Stopping RolloutWorker_w3... +[2024-12-28 14:00:56,682][84564] Loop rollout_proc3_evt_loop terminating... +[2024-12-28 14:00:56,683][84562] Stopping RolloutWorker_w2... +[2024-12-28 14:00:56,684][84562] Loop rollout_proc2_evt_loop terminating... +[2024-12-28 14:00:56,682][78983] Component RolloutWorker_w0 stopped! +[2024-12-28 14:00:56,684][78983] Component RolloutWorker_w5 stopped! +[2024-12-28 14:00:56,685][78983] Component RolloutWorker_w7 stopped! +[2024-12-28 14:00:56,685][78983] Component RolloutWorker_w4 stopped! +[2024-12-28 14:00:56,686][78983] Component RolloutWorker_w3 stopped! +[2024-12-28 14:00:56,686][78983] Component RolloutWorker_w2 stopped! +[2024-12-28 14:00:56,692][84543] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000008824_36143104.pth +[2024-12-28 14:00:56,696][84543] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000009767_40005632.pth... +[2024-12-28 14:00:56,763][84543] Stopping LearnerWorker_p0... +[2024-12-28 14:00:56,763][84543] Loop learner_proc0_evt_loop terminating... +[2024-12-28 14:00:56,763][78983] Component LearnerWorker_p0 stopped! +[2024-12-28 14:00:56,765][78983] Waiting for process learner_proc0 to stop... +[2024-12-28 14:00:57,299][78983] Waiting for process inference_proc0-0 to join... +[2024-12-28 14:00:57,300][78983] Waiting for process rollout_proc0 to join... +[2024-12-28 14:00:57,301][78983] Waiting for process rollout_proc1 to join... +[2024-12-28 14:00:57,302][78983] Waiting for process rollout_proc2 to join... +[2024-12-28 14:00:57,302][78983] Waiting for process rollout_proc3 to join... +[2024-12-28 14:00:57,303][78983] Waiting for process rollout_proc4 to join... 
+[2024-12-28 14:00:57,304][78983] Waiting for process rollout_proc5 to join... +[2024-12-28 14:00:57,304][78983] Waiting for process rollout_proc6 to join... +[2024-12-28 14:00:57,305][78983] Waiting for process rollout_proc7 to join... +[2024-12-28 14:00:57,306][78983] Batcher 0 profile tree view: +batching: 99.7298, releasing_batches: 0.1894 +[2024-12-28 14:00:57,306][78983] InferenceWorker_p0-w0 profile tree view: +wait_policy: 0.0000 + wait_policy_total: 11.8693 +update_model: 19.5731 + weight_update: 0.0007 +one_step: 0.0017 + handle_policy_step: 1389.3927 + deserialize: 38.2134, stack: 6.1357, obs_to_device_normalize: 315.0358, forward: 575.7177, send_messages: 107.3211 + prepare_outputs: 303.5263 + to_cpu: 249.6029 +[2024-12-28 14:00:57,307][78983] Learner 0 profile tree view: +misc: 0.0319, prepare_batch: 72.7921 +train: 188.2509 + epoch_init: 0.0318, minibatch_init: 0.0335, losses_postprocess: 2.8785, kl_divergence: 3.0655, after_optimizer: 3.6337 + calculate_losses: 67.8252 + losses_init: 0.0152, forward_head: 4.2464, bptt_initial: 39.4775, tail: 3.3947, advantages_returns: 0.9313, losses: 11.8488 + bptt: 6.9742 + bptt_forward_core: 6.6820 + update: 108.6031 + clip: 4.3072 +[2024-12-28 14:00:57,307][78983] RolloutWorker_w0 profile tree view: +wait_for_trajectories: 0.7809, enqueue_policy_requests: 39.0874, env_step: 874.5563, overhead: 53.3975, complete_rollouts: 1.5428 +save_policy_outputs: 42.3175 + split_output_tensors: 20.4161 +[2024-12-28 14:00:57,308][78983] RolloutWorker_w7 profile tree view: +wait_for_trajectories: 0.7990, enqueue_policy_requests: 39.3763, env_step: 869.9811, overhead: 52.9739, complete_rollouts: 1.5244 +save_policy_outputs: 41.9468 + split_output_tensors: 20.3500 +[2024-12-28 14:00:57,309][78983] Loop Runner_EvtLoop terminating... +[2024-12-28 14:00:57,309][78983] Runner profile tree view: +main_loop: 1475.0938 +[2024-12-28 14:00:57,310][78983] Collected {0: 40005632}, FPS: 24405.1 +[2024-12-28 14:02:18,120][78983] Loading existing experiment configuration from /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/config.json +[2024-12-28 14:02:18,121][78983] Overriding arg 'num_workers' with value 1 passed from command line +[2024-12-28 14:02:18,121][78983] Adding new argument 'no_render'=True that is not in the saved config file! +[2024-12-28 14:02:18,122][78983] Adding new argument 'save_video'=True that is not in the saved config file! +[2024-12-28 14:02:18,123][78983] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! +[2024-12-28 14:02:18,123][78983] Adding new argument 'video_name'=None that is not in the saved config file! +[2024-12-28 14:02:18,124][78983] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! +[2024-12-28 14:02:18,124][78983] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! +[2024-12-28 14:02:18,125][78983] Adding new argument 'push_to_hub'=False that is not in the saved config file! +[2024-12-28 14:02:18,125][78983] Adding new argument 'hf_repository'=None that is not in the saved config file! +[2024-12-28 14:02:18,126][78983] Adding new argument 'policy_index'=0 that is not in the saved config file! +[2024-12-28 14:02:18,126][78983] Adding new argument 'eval_deterministic'=False that is not in the saved config file! +[2024-12-28 14:02:18,127][78983] Adding new argument 'train_script'=None that is not in the saved config file! 
+[2024-12-28 14:02:18,128][78983] Adding new argument 'enjoy_script'=None that is not in the saved config file! +[2024-12-28 14:02:18,128][78983] Using frameskip 1 and render_action_repeat=4 for evaluation +[2024-12-28 14:02:18,141][78983] RunningMeanStd input shape: (3, 72, 128) +[2024-12-28 14:02:18,142][78983] RunningMeanStd input shape: (1,) +[2024-12-28 14:02:18,149][78983] ConvEncoder: input_channels=3 +[2024-12-28 14:02:18,172][78983] Conv encoder output size: 512 +[2024-12-28 14:02:18,173][78983] Policy head output size: 512 +[2024-12-28 14:02:18,206][78983] Loading state from checkpoint /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000009767_40005632.pth... +[2024-12-28 14:02:18,555][78983] Num frames 100... +[2024-12-28 14:02:18,698][78983] Num frames 200... +[2024-12-28 14:02:18,829][78983] Num frames 300... +[2024-12-28 14:02:18,986][78983] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840 +[2024-12-28 14:02:18,987][78983] Avg episode reward: 3.840, avg true_objective: 3.840 +[2024-12-28 14:02:19,006][78983] Num frames 400... +[2024-12-28 14:02:19,121][78983] Num frames 500... +[2024-12-28 14:02:19,225][78983] Num frames 600... +[2024-12-28 14:02:19,333][78983] Num frames 700... +[2024-12-28 14:02:19,443][78983] Num frames 800... +[2024-12-28 14:02:19,494][78983] Avg episode rewards: #0: 4.500, true rewards: #0: 4.000 +[2024-12-28 14:02:19,495][78983] Avg episode reward: 4.500, avg true_objective: 4.000 +[2024-12-28 14:02:19,600][78983] Num frames 900... +[2024-12-28 14:02:19,708][78983] Num frames 1000... +[2024-12-28 14:02:19,816][78983] Num frames 1100... +[2024-12-28 14:02:19,956][78983] Avg episode rewards: #0: 4.280, true rewards: #0: 3.947 +[2024-12-28 14:02:19,957][78983] Avg episode reward: 4.280, avg true_objective: 3.947 +[2024-12-28 14:02:19,976][78983] Num frames 1200... +[2024-12-28 14:02:20,079][78983] Num frames 1300... +[2024-12-28 14:02:20,183][78983] Num frames 1400... +[2024-12-28 14:02:20,292][78983] Num frames 1500... +[2024-12-28 14:02:20,415][78983] Avg episode rewards: #0: 4.170, true rewards: #0: 3.920 +[2024-12-28 14:02:20,415][78983] Avg episode reward: 4.170, avg true_objective: 3.920 +[2024-12-28 14:02:20,451][78983] Num frames 1600... +[2024-12-28 14:02:20,554][78983] Num frames 1700... +[2024-12-28 14:02:20,659][78983] Num frames 1800... +[2024-12-28 14:02:20,763][78983] Num frames 1900... +[2024-12-28 14:02:20,872][78983] Avg episode rewards: #0: 4.104, true rewards: #0: 3.904 +[2024-12-28 14:02:20,873][78983] Avg episode reward: 4.104, avg true_objective: 3.904 +[2024-12-28 14:02:20,925][78983] Num frames 2000... +[2024-12-28 14:02:21,029][78983] Num frames 2100... +[2024-12-28 14:02:21,134][78983] Num frames 2200... +[2024-12-28 14:02:21,238][78983] Num frames 2300... +[2024-12-28 14:02:21,342][78983] Num frames 2400... +[2024-12-28 14:02:21,427][78983] Avg episode rewards: #0: 4.553, true rewards: #0: 4.053 +[2024-12-28 14:02:21,428][78983] Avg episode reward: 4.553, avg true_objective: 4.053 +[2024-12-28 14:02:21,500][78983] Num frames 2500... +[2024-12-28 14:02:21,608][78983] Num frames 2600... +[2024-12-28 14:02:21,712][78983] Num frames 2700... +[2024-12-28 14:02:21,821][78983] Num frames 2800... +[2024-12-28 14:02:21,923][78983] Avg episode rewards: #0: 4.640, true rewards: #0: 4.069 +[2024-12-28 14:02:21,924][78983] Avg episode reward: 4.640, avg true_objective: 4.069 +[2024-12-28 14:02:21,979][78983] Num frames 2900... +[2024-12-28 14:02:22,083][78983] Num frames 3000... 
+[2024-12-28 14:02:22,187][78983] Num frames 3100... +[2024-12-28 14:02:22,296][78983] Num frames 3200... +[2024-12-28 14:02:22,383][78983] Avg episode rewards: #0: 4.540, true rewards: #0: 4.040 +[2024-12-28 14:02:22,385][78983] Avg episode reward: 4.540, avg true_objective: 4.040 +[2024-12-28 14:02:22,458][78983] Num frames 3300... +[2024-12-28 14:02:22,564][78983] Num frames 3400... +[2024-12-28 14:02:22,671][78983] Num frames 3500... +[2024-12-28 14:02:22,779][78983] Num frames 3600... +[2024-12-28 14:02:22,914][78983] Avg episode rewards: #0: 4.644, true rewards: #0: 4.089 +[2024-12-28 14:02:22,915][78983] Avg episode reward: 4.644, avg true_objective: 4.089 +[2024-12-28 14:02:22,938][78983] Num frames 3700... +[2024-12-28 14:02:23,042][78983] Num frames 3800... +[2024-12-28 14:02:23,146][78983] Num frames 3900... +[2024-12-28 14:02:23,250][78983] Num frames 4000... +[2024-12-28 14:02:23,369][78983] Avg episode rewards: #0: 4.564, true rewards: #0: 4.064 +[2024-12-28 14:02:23,370][78983] Avg episode reward: 4.564, avg true_objective: 4.064 +[2024-12-28 14:02:27,583][78983] Replay video saved to /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/replay.mp4! +[2024-12-28 14:05:55,941][100720] Saving configuration to /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/config.json... +[2024-12-28 14:05:55,950][100720] Rollout worker 0 uses device cpu +[2024-12-28 14:05:55,951][100720] Rollout worker 1 uses device cpu +[2024-12-28 14:05:55,952][100720] Rollout worker 2 uses device cpu +[2024-12-28 14:05:55,952][100720] Rollout worker 3 uses device cpu +[2024-12-28 14:05:55,953][100720] Rollout worker 4 uses device cpu +[2024-12-28 14:05:55,954][100720] Rollout worker 5 uses device cpu +[2024-12-28 14:05:55,954][100720] Rollout worker 6 uses device cpu +[2024-12-28 14:05:55,955][100720] Rollout worker 7 uses device cpu +[2024-12-28 14:05:55,978][100720] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-12-28 14:05:55,979][100720] InferenceWorker_p0-w0: min num requests: 2 +[2024-12-28 14:05:55,993][100720] Starting all processes... +[2024-12-28 14:05:55,994][100720] Starting process learner_proc0 +[2024-12-28 14:05:56,043][100720] Starting all processes... 
+[2024-12-28 14:05:56,047][100720] Starting process inference_proc0-0 +[2024-12-28 14:05:56,047][100720] Starting process rollout_proc0 +[2024-12-28 14:05:56,048][100720] Starting process rollout_proc1 +[2024-12-28 14:05:56,048][100720] Starting process rollout_proc2 +[2024-12-28 14:05:56,049][100720] Starting process rollout_proc3 +[2024-12-28 14:05:56,052][100720] Starting process rollout_proc4 +[2024-12-28 14:05:56,052][100720] Starting process rollout_proc5 +[2024-12-28 14:05:56,104][100720] Starting process rollout_proc6 +[2024-12-28 14:05:56,104][100720] Starting process rollout_proc7 +[2024-12-28 14:05:57,430][100918] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-12-28 14:05:57,430][100918] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 +[2024-12-28 14:05:57,486][100918] Num visible devices: 1 +[2024-12-28 14:05:57,524][100918] Starting seed is not provided +[2024-12-28 14:05:57,524][100918] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-12-28 14:05:57,524][100918] Initializing actor-critic model on device cuda:0 +[2024-12-28 14:05:57,524][100918] RunningMeanStd input shape: (3, 72, 128) +[2024-12-28 14:05:57,525][100918] RunningMeanStd input shape: (1,) +[2024-12-28 14:05:57,527][100939] Worker 5 uses CPU cores [20, 21, 22, 23] +[2024-12-28 14:05:57,533][100918] ConvEncoder: input_channels=3 +[2024-12-28 14:05:57,540][100936] Worker 2 uses CPU cores [8, 9, 10, 11] +[2024-12-28 14:05:57,540][100938] Worker 3 uses CPU cores [12, 13, 14, 15] +[2024-12-28 14:05:57,551][100942] Worker 7 uses CPU cores [28, 29, 30, 31] +[2024-12-28 14:05:57,551][100941] Worker 6 uses CPU cores [24, 25, 26, 27] +[2024-12-28 14:05:57,559][100937] Worker 1 uses CPU cores [4, 5, 6, 7] +[2024-12-28 14:05:57,570][100934] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-12-28 14:05:57,570][100934] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 +[2024-12-28 14:05:57,585][100935] Worker 0 uses CPU cores [0, 1, 2, 3] +[2024-12-28 14:05:57,597][100934] Num visible devices: 1 +[2024-12-28 14:05:57,612][100940] Worker 4 uses CPU cores [16, 17, 18, 19] +[2024-12-28 14:05:57,630][100918] Conv encoder output size: 512 +[2024-12-28 14:05:57,643][100918] Policy head output size: 512 +[2024-12-28 14:05:57,659][100918] Created Actor Critic model with architecture: +[2024-12-28 14:05:57,659][100918] ActorCriticSharedWeights( + (obs_normalizer): ObservationNormalizer( + (running_mean_std): RunningMeanStdDictInPlace( + (running_mean_std): ModuleDict( + (obs): RunningMeanStdInPlace() + ) + ) + ) + (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) + (encoder): VizdoomEncoder( + (basic_encoder): ConvEncoder( + (enc): RecursiveScriptModule( + original_name=ConvEncoderImpl + (conv_head): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Conv2d) + (1): RecursiveScriptModule(original_name=ELU) + (2): RecursiveScriptModule(original_name=Conv2d) + (3): RecursiveScriptModule(original_name=ELU) + (4): RecursiveScriptModule(original_name=Conv2d) + (5): RecursiveScriptModule(original_name=ELU) + ) + (mlp_layers): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Linear) + (1): RecursiveScriptModule(original_name=ELU) + ) + ) + ) + ) + (core): ModelCoreRNN( + (core): GRU(512, 512) + ) + (decoder): MlpDecoder( + (mlp): Identity() + ) + (critic_linear): Linear(in_features=512, out_features=1, 
bias=True) + (action_parameterization): ActionParameterizationDefault( + (distribution_linear): Linear(in_features=512, out_features=5, bias=True) + ) +) +[2024-12-28 14:05:58,241][100918] Using optimizer +[2024-12-28 14:05:58,706][100918] Loading state from checkpoint /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000009767_40005632.pth... +[2024-12-28 14:05:58,728][100918] Loading model from checkpoint +[2024-12-28 14:05:58,729][100918] Loaded experiment state at self.train_step=9767, self.env_steps=40005632 +[2024-12-28 14:05:58,729][100918] Initialized policy 0 weights for model version 9767 +[2024-12-28 14:05:58,732][100918] LearnerWorker_p0 finished initialization! +[2024-12-28 14:05:58,732][100918] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2024-12-28 14:05:58,827][100934] RunningMeanStd input shape: (3, 72, 128) +[2024-12-28 14:05:58,828][100934] RunningMeanStd input shape: (1,) +[2024-12-28 14:05:58,834][100934] ConvEncoder: input_channels=3 +[2024-12-28 14:05:58,884][100934] Conv encoder output size: 512 +[2024-12-28 14:05:58,884][100934] Policy head output size: 512 +[2024-12-28 14:05:58,911][100720] Inference worker 0-0 is ready! +[2024-12-28 14:05:58,912][100720] All inference workers are ready! Signal rollout workers to start! +[2024-12-28 14:05:58,931][100936] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 14:05:58,932][100941] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 14:05:58,933][100937] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 14:05:58,933][100939] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 14:05:58,933][100940] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 14:05:58,934][100942] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 14:05:58,934][100938] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 14:05:58,938][100935] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 14:05:58,944][100720] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 40005632. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) +[2024-12-28 14:05:59,113][100941] Decorrelating experience for 0 frames... +[2024-12-28 14:05:59,128][100936] Decorrelating experience for 0 frames... +[2024-12-28 14:05:59,135][100935] Decorrelating experience for 0 frames... +[2024-12-28 14:05:59,145][100938] Decorrelating experience for 0 frames... +[2024-12-28 14:05:59,152][100942] Decorrelating experience for 0 frames... +[2024-12-28 14:05:59,295][100936] Decorrelating experience for 32 frames... +[2024-12-28 14:05:59,301][100938] Decorrelating experience for 32 frames... +[2024-12-28 14:05:59,301][100935] Decorrelating experience for 32 frames... +[2024-12-28 14:05:59,319][100942] Decorrelating experience for 32 frames... +[2024-12-28 14:05:59,328][100940] Decorrelating experience for 0 frames... +[2024-12-28 14:05:59,338][100941] Decorrelating experience for 32 frames... +[2024-12-28 14:05:59,455][100937] Decorrelating experience for 0 frames... +[2024-12-28 14:05:59,481][100940] Decorrelating experience for 32 frames... +[2024-12-28 14:05:59,481][100936] Decorrelating experience for 64 frames... +[2024-12-28 14:05:59,515][100935] Decorrelating experience for 64 frames... +[2024-12-28 14:05:59,529][100941] Decorrelating experience for 64 frames... +[2024-12-28 14:05:59,608][100939] Decorrelating experience for 0 frames... 
+[2024-12-28 14:05:59,616][100937] Decorrelating experience for 32 frames... +[2024-12-28 14:05:59,641][100938] Decorrelating experience for 64 frames... +[2024-12-28 14:05:59,669][100942] Decorrelating experience for 64 frames... +[2024-12-28 14:05:59,680][100940] Decorrelating experience for 64 frames... +[2024-12-28 14:05:59,692][100936] Decorrelating experience for 96 frames... +[2024-12-28 14:05:59,735][100935] Decorrelating experience for 96 frames... +[2024-12-28 14:05:59,763][100939] Decorrelating experience for 32 frames... +[2024-12-28 14:05:59,830][100938] Decorrelating experience for 96 frames... +[2024-12-28 14:05:59,864][100942] Decorrelating experience for 96 frames... +[2024-12-28 14:05:59,876][100940] Decorrelating experience for 96 frames... +[2024-12-28 14:05:59,926][100941] Decorrelating experience for 96 frames... +[2024-12-28 14:06:00,112][100939] Decorrelating experience for 64 frames... +[2024-12-28 14:06:00,275][100937] Decorrelating experience for 64 frames... +[2024-12-28 14:06:00,311][100939] Decorrelating experience for 96 frames... +[2024-12-28 14:06:00,478][100937] Decorrelating experience for 96 frames... +[2024-12-28 14:06:00,642][100918] Signal inference workers to stop experience collection... +[2024-12-28 14:06:00,646][100934] InferenceWorker_p0-w0: stopping experience collection +[2024-12-28 14:06:01,497][100918] Signal inference workers to resume experience collection... +[2024-12-28 14:06:01,498][100934] InferenceWorker_p0-w0: resuming experience collection +[2024-12-28 14:06:03,306][100934] Updated weights for policy 0, policy_version 9777 (0.0052) +[2024-12-28 14:06:03,944][100720] Fps is (10 sec: 10649.4, 60 sec: 10649.4, 300 sec: 10649.4). Total num frames: 40058880. Throughput: 0: 861.6. Samples: 4308. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2024-12-28 14:06:03,945][100720] Avg episode reward: [(0, '4.445')] +[2024-12-28 14:06:05,434][100934] Updated weights for policy 0, policy_version 9787 (0.0012) +[2024-12-28 14:06:07,651][100934] Updated weights for policy 0, policy_version 9797 (0.0011) +[2024-12-28 14:06:08,944][100720] Fps is (10 sec: 14745.7, 60 sec: 14745.7, 300 sec: 14745.7). Total num frames: 40153088. Throughput: 0: 3250.8. Samples: 32508. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2024-12-28 14:06:08,947][100720] Avg episode reward: [(0, '4.364')] +[2024-12-28 14:06:09,686][100934] Updated weights for policy 0, policy_version 9807 (0.0010) +[2024-12-28 14:06:11,694][100934] Updated weights for policy 0, policy_version 9817 (0.0010) +[2024-12-28 14:06:13,549][100934] Updated weights for policy 0, policy_version 9827 (0.0009) +[2024-12-28 14:06:13,944][100720] Fps is (10 sec: 20070.5, 60 sec: 16930.1, 300 sec: 16930.1). Total num frames: 40259584. Throughput: 0: 4281.5. Samples: 64222. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 14:06:13,945][100720] Avg episode reward: [(0, '4.760')] +[2024-12-28 14:06:15,302][100934] Updated weights for policy 0, policy_version 9837 (0.0008) +[2024-12-28 14:06:15,975][100720] Heartbeat connected on Batcher_0 +[2024-12-28 14:06:15,977][100720] Heartbeat connected on LearnerWorker_p0 +[2024-12-28 14:06:15,982][100720] Heartbeat connected on InferenceWorker_p0-w0 +[2024-12-28 14:06:15,984][100720] Heartbeat connected on RolloutWorker_w1 +[2024-12-28 14:06:15,985][100720] Heartbeat connected on RolloutWorker_w0 +[2024-12-28 14:06:15,986][100720] Heartbeat connected on RolloutWorker_w2 +[2024-12-28 14:06:15,987][100720] Heartbeat connected on RolloutWorker_w3 +[2024-12-28 14:06:15,990][100720] Heartbeat connected on RolloutWorker_w4 +[2024-12-28 14:06:15,993][100720] Heartbeat connected on RolloutWorker_w6 +[2024-12-28 14:06:15,994][100720] Heartbeat connected on RolloutWorker_w5 +[2024-12-28 14:06:15,996][100720] Heartbeat connected on RolloutWorker_w7 +[2024-12-28 14:06:17,015][100934] Updated weights for policy 0, policy_version 9847 (0.0008) +[2024-12-28 14:06:18,645][100934] Updated weights for policy 0, policy_version 9857 (0.0007) +[2024-12-28 14:06:18,944][100720] Fps is (10 sec: 22937.6, 60 sec: 18841.7, 300 sec: 18841.7). Total num frames: 40382464. Throughput: 0: 4104.6. Samples: 82092. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:06:18,945][100720] Avg episode reward: [(0, '4.386')] +[2024-12-28 14:06:20,179][100934] Updated weights for policy 0, policy_version 9867 (0.0006) +[2024-12-28 14:06:21,761][100934] Updated weights for policy 0, policy_version 9877 (0.0007) +[2024-12-28 14:06:23,355][100934] Updated weights for policy 0, policy_version 9887 (0.0008) +[2024-12-28 14:06:23,944][100720] Fps is (10 sec: 24985.9, 60 sec: 20152.4, 300 sec: 20152.4). Total num frames: 40509440. Throughput: 0: 4828.4. Samples: 120710. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:06:23,945][100720] Avg episode reward: [(0, '4.465')] +[2024-12-28 14:06:24,932][100934] Updated weights for policy 0, policy_version 9897 (0.0009) +[2024-12-28 14:06:26,612][100934] Updated weights for policy 0, policy_version 9907 (0.0008) +[2024-12-28 14:06:28,179][100934] Updated weights for policy 0, policy_version 9917 (0.0006) +[2024-12-28 14:06:28,944][100720] Fps is (10 sec: 25394.9, 60 sec: 21026.1, 300 sec: 21026.1). Total num frames: 40636416. Throughput: 0: 5312.4. Samples: 159372. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:06:28,945][100720] Avg episode reward: [(0, '4.396')] +[2024-12-28 14:06:29,727][100934] Updated weights for policy 0, policy_version 9927 (0.0007) +[2024-12-28 14:06:31,349][100934] Updated weights for policy 0, policy_version 9937 (0.0007) +[2024-12-28 14:06:32,977][100934] Updated weights for policy 0, policy_version 9947 (0.0009) +[2024-12-28 14:06:33,944][100720] Fps is (10 sec: 25804.8, 60 sec: 21767.4, 300 sec: 21767.4). Total num frames: 40767488. Throughput: 0: 5098.1. Samples: 178434. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 14:06:33,945][100720] Avg episode reward: [(0, '4.473')] +[2024-12-28 14:06:34,579][100934] Updated weights for policy 0, policy_version 9957 (0.0007) +[2024-12-28 14:06:36,185][100934] Updated weights for policy 0, policy_version 9967 (0.0006) +[2024-12-28 14:06:37,824][100934] Updated weights for policy 0, policy_version 9977 (0.0007) +[2024-12-28 14:06:38,944][100720] Fps is (10 sec: 25395.2, 60 sec: 22118.4, 300 sec: 22118.4). Total num frames: 40890368. Throughput: 0: 5405.9. Samples: 216234. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 14:06:38,945][100720] Avg episode reward: [(0, '4.757')] +[2024-12-28 14:06:39,453][100934] Updated weights for policy 0, policy_version 9987 (0.0007) +[2024-12-28 14:06:41,097][100934] Updated weights for policy 0, policy_version 9997 (0.0007) +[2024-12-28 14:06:42,676][100934] Updated weights for policy 0, policy_version 10007 (0.0007) +[2024-12-28 14:06:43,944][100720] Fps is (10 sec: 24985.5, 60 sec: 22482.5, 300 sec: 22482.5). Total num frames: 41017344. Throughput: 0: 5651.8. Samples: 254330. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:06:43,945][100720] Avg episode reward: [(0, '4.344')] +[2024-12-28 14:06:44,309][100934] Updated weights for policy 0, policy_version 10017 (0.0009) +[2024-12-28 14:06:45,893][100934] Updated weights for policy 0, policy_version 10027 (0.0007) +[2024-12-28 14:06:47,550][100934] Updated weights for policy 0, policy_version 10037 (0.0007) +[2024-12-28 14:06:48,944][100720] Fps is (10 sec: 25395.4, 60 sec: 22773.8, 300 sec: 22773.8). Total num frames: 41144320. Throughput: 0: 5970.3. Samples: 272970. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 14:06:48,945][100720] Avg episode reward: [(0, '4.750')] +[2024-12-28 14:06:49,212][100934] Updated weights for policy 0, policy_version 10047 (0.0007) +[2024-12-28 14:06:50,932][100934] Updated weights for policy 0, policy_version 10057 (0.0008) +[2024-12-28 14:06:52,542][100934] Updated weights for policy 0, policy_version 10067 (0.0008) +[2024-12-28 14:06:53,944][100720] Fps is (10 sec: 24985.6, 60 sec: 22937.6, 300 sec: 22937.6). Total num frames: 41267200. Throughput: 0: 6169.0. Samples: 310112. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 14:06:53,945][100720] Avg episode reward: [(0, '4.334')] +[2024-12-28 14:06:54,186][100934] Updated weights for policy 0, policy_version 10077 (0.0007) +[2024-12-28 14:06:55,806][100934] Updated weights for policy 0, policy_version 10087 (0.0008) +[2024-12-28 14:06:57,513][100934] Updated weights for policy 0, policy_version 10097 (0.0008) +[2024-12-28 14:06:58,944][100720] Fps is (10 sec: 24575.7, 60 sec: 23074.1, 300 sec: 23074.1). Total num frames: 41390080. Throughput: 0: 6285.3. Samples: 347062. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:06:58,945][100720] Avg episode reward: [(0, '4.554')] +[2024-12-28 14:06:59,207][100934] Updated weights for policy 0, policy_version 10107 (0.0008) +[2024-12-28 14:07:00,856][100934] Updated weights for policy 0, policy_version 10117 (0.0007) +[2024-12-28 14:07:02,546][100934] Updated weights for policy 0, policy_version 10127 (0.0007) +[2024-12-28 14:07:03,944][100720] Fps is (10 sec: 24576.0, 60 sec: 24234.7, 300 sec: 23189.7). Total num frames: 41512960. Throughput: 0: 6300.1. Samples: 365596. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:07:03,945][100720] Avg episode reward: [(0, '4.476')] +[2024-12-28 14:07:04,175][100934] Updated weights for policy 0, policy_version 10137 (0.0007) +[2024-12-28 14:07:05,855][100934] Updated weights for policy 0, policy_version 10147 (0.0007) +[2024-12-28 14:07:07,525][100934] Updated weights for policy 0, policy_version 10157 (0.0007) +[2024-12-28 14:07:08,944][100720] Fps is (10 sec: 24576.0, 60 sec: 24712.5, 300 sec: 23288.7). Total num frames: 41635840. Throughput: 0: 6266.0. Samples: 402680. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:07:08,945][100720] Avg episode reward: [(0, '4.229')] +[2024-12-28 14:07:09,133][100934] Updated weights for policy 0, policy_version 10167 (0.0007) +[2024-12-28 14:07:10,710][100934] Updated weights for policy 0, policy_version 10177 (0.0007) +[2024-12-28 14:07:12,533][100934] Updated weights for policy 0, policy_version 10187 (0.0009) +[2024-12-28 14:07:13,944][100720] Fps is (10 sec: 24166.2, 60 sec: 24917.3, 300 sec: 23319.9). Total num frames: 41754624. Throughput: 0: 6201.1. Samples: 438422. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:07:13,945][100720] Avg episode reward: [(0, '4.408')] +[2024-12-28 14:07:14,395][100934] Updated weights for policy 0, policy_version 10197 (0.0010) +[2024-12-28 14:07:16,285][100934] Updated weights for policy 0, policy_version 10207 (0.0008) +[2024-12-28 14:07:18,130][100934] Updated weights for policy 0, policy_version 10217 (0.0008) +[2024-12-28 14:07:18,944][100720] Fps is (10 sec: 22937.4, 60 sec: 24712.5, 300 sec: 23244.8). Total num frames: 41865216. Throughput: 0: 6144.2. Samples: 454924. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:07:18,945][100720] Avg episode reward: [(0, '4.523')] +[2024-12-28 14:07:20,092][100934] Updated weights for policy 0, policy_version 10227 (0.0009) +[2024-12-28 14:07:21,924][100934] Updated weights for policy 0, policy_version 10237 (0.0009) +[2024-12-28 14:07:23,507][100934] Updated weights for policy 0, policy_version 10247 (0.0007) +[2024-12-28 14:07:23,944][100720] Fps is (10 sec: 22528.1, 60 sec: 24507.7, 300 sec: 23226.7). Total num frames: 41979904. Throughput: 0: 6045.7. Samples: 488292. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:07:23,945][100720] Avg episode reward: [(0, '4.349')] +[2024-12-28 14:07:25,102][100934] Updated weights for policy 0, policy_version 10257 (0.0007) +[2024-12-28 14:07:26,721][100934] Updated weights for policy 0, policy_version 10267 (0.0008) +[2024-12-28 14:07:28,359][100934] Updated weights for policy 0, policy_version 10277 (0.0008) +[2024-12-28 14:07:28,944][100720] Fps is (10 sec: 24166.8, 60 sec: 24507.8, 300 sec: 23347.2). Total num frames: 42106880. Throughput: 0: 6053.7. Samples: 526748. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:07:28,945][100720] Avg episode reward: [(0, '4.455')] +[2024-12-28 14:07:29,937][100934] Updated weights for policy 0, policy_version 10287 (0.0007) +[2024-12-28 14:07:31,535][100934] Updated weights for policy 0, policy_version 10297 (0.0009) +[2024-12-28 14:07:33,341][100934] Updated weights for policy 0, policy_version 10307 (0.0008) +[2024-12-28 14:07:33,944][100720] Fps is (10 sec: 24985.7, 60 sec: 24371.2, 300 sec: 23411.9). Total num frames: 42229760. Throughput: 0: 6070.7. Samples: 546152. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:07:33,945][100720] Avg episode reward: [(0, '4.460')] +[2024-12-28 14:07:35,261][100934] Updated weights for policy 0, policy_version 10317 (0.0009) +[2024-12-28 14:07:37,209][100934] Updated weights for policy 0, policy_version 10327 (0.0009) +[2024-12-28 14:07:38,944][100720] Fps is (10 sec: 22937.0, 60 sec: 24098.1, 300 sec: 23306.2). Total num frames: 42336256. Throughput: 0: 5963.3. Samples: 578462. Policy #0 lag: (min: 0.0, avg: 1.0, max: 2.0) +[2024-12-28 14:07:38,945][100720] Avg episode reward: [(0, '4.337')] +[2024-12-28 14:07:39,060][100934] Updated weights for policy 0, policy_version 10337 (0.0008) +[2024-12-28 14:07:41,016][100934] Updated weights for policy 0, policy_version 10347 (0.0008) +[2024-12-28 14:07:43,103][100934] Updated weights for policy 0, policy_version 10357 (0.0010) +[2024-12-28 14:07:43,944][100720] Fps is (10 sec: 21298.9, 60 sec: 23756.7, 300 sec: 23210.7). Total num frames: 42442752. Throughput: 0: 5845.1. Samples: 610094. Policy #0 lag: (min: 0.0, avg: 1.0, max: 2.0) +[2024-12-28 14:07:43,945][100720] Avg episode reward: [(0, '4.416')] +[2024-12-28 14:07:44,684][100934] Updated weights for policy 0, policy_version 10367 (0.0007) +[2024-12-28 14:07:46,261][100934] Updated weights for policy 0, policy_version 10377 (0.0007) +[2024-12-28 14:07:47,879][100934] Updated weights for policy 0, policy_version 10387 (0.0007) +[2024-12-28 14:07:48,944][100720] Fps is (10 sec: 23347.8, 60 sec: 23756.8, 300 sec: 23310.0). Total num frames: 42569728. Throughput: 0: 5865.7. Samples: 629552. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:07:48,945][100720] Avg episode reward: [(0, '4.431')] +[2024-12-28 14:07:49,458][100934] Updated weights for policy 0, policy_version 10397 (0.0007) +[2024-12-28 14:07:51,016][100934] Updated weights for policy 0, policy_version 10407 (0.0007) +[2024-12-28 14:07:52,707][100934] Updated weights for policy 0, policy_version 10417 (0.0007) +[2024-12-28 14:07:53,944][100720] Fps is (10 sec: 25395.6, 60 sec: 23825.1, 300 sec: 23400.6). Total num frames: 42696704. Throughput: 0: 5885.3. Samples: 667520. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:07:53,946][100720] Avg episode reward: [(0, '4.345')] +[2024-12-28 14:07:53,952][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000010424_42696704.pth... +[2024-12-28 14:07:53,987][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000009536_39059456.pth +[2024-12-28 14:07:54,495][100934] Updated weights for policy 0, policy_version 10427 (0.0008) +[2024-12-28 14:07:56,265][100934] Updated weights for policy 0, policy_version 10437 (0.0008) +[2024-12-28 14:07:57,847][100934] Updated weights for policy 0, policy_version 10447 (0.0007) +[2024-12-28 14:07:58,944][100720] Fps is (10 sec: 24575.8, 60 sec: 23756.8, 300 sec: 23415.5). Total num frames: 42815488. Throughput: 0: 5897.1. Samples: 703790. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:07:58,945][100720] Avg episode reward: [(0, '4.280')] +[2024-12-28 14:07:59,445][100934] Updated weights for policy 0, policy_version 10457 (0.0007) +[2024-12-28 14:08:01,061][100934] Updated weights for policy 0, policy_version 10467 (0.0007) +[2024-12-28 14:08:02,721][100934] Updated weights for policy 0, policy_version 10477 (0.0007) +[2024-12-28 14:08:03,944][100720] Fps is (10 sec: 24166.2, 60 sec: 23756.8, 300 sec: 23461.9). Total num frames: 42938368. Throughput: 0: 5958.9. Samples: 723076. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:08:03,945][100720] Avg episode reward: [(0, '4.461')] +[2024-12-28 14:08:04,506][100934] Updated weights for policy 0, policy_version 10487 (0.0008) +[2024-12-28 14:08:06,387][100934] Updated weights for policy 0, policy_version 10497 (0.0009) +[2024-12-28 14:08:08,413][100934] Updated weights for policy 0, policy_version 10507 (0.0009) +[2024-12-28 14:08:08,944][100720] Fps is (10 sec: 22937.8, 60 sec: 23483.8, 300 sec: 23378.7). Total num frames: 43044864. Throughput: 0: 5954.0. Samples: 756222. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:08:08,945][100720] Avg episode reward: [(0, '4.543')] +[2024-12-28 14:08:10,436][100934] Updated weights for policy 0, policy_version 10517 (0.0009) +[2024-12-28 14:08:12,471][100934] Updated weights for policy 0, policy_version 10527 (0.0010) +[2024-12-28 14:08:13,944][100720] Fps is (10 sec: 20889.7, 60 sec: 23210.7, 300 sec: 23271.4). Total num frames: 43147264. Throughput: 0: 5772.8. Samples: 786526. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:08:13,945][100720] Avg episode reward: [(0, '4.593')] +[2024-12-28 14:08:14,466][100934] Updated weights for policy 0, policy_version 10537 (0.0008) +[2024-12-28 14:08:16,388][100934] Updated weights for policy 0, policy_version 10547 (0.0008) +[2024-12-28 14:08:18,033][100934] Updated weights for policy 0, policy_version 10557 (0.0007) +[2024-12-28 14:08:18,944][100720] Fps is (10 sec: 21708.5, 60 sec: 23279.0, 300 sec: 23259.4). Total num frames: 43261952. Throughput: 0: 5703.5. Samples: 802812. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:08:18,945][100720] Avg episode reward: [(0, '4.719')] +[2024-12-28 14:08:19,661][100934] Updated weights for policy 0, policy_version 10567 (0.0006) +[2024-12-28 14:08:21,243][100934] Updated weights for policy 0, policy_version 10577 (0.0008) +[2024-12-28 14:08:22,878][100934] Updated weights for policy 0, policy_version 10587 (0.0007) +[2024-12-28 14:08:23,944][100720] Fps is (10 sec: 24166.5, 60 sec: 23483.8, 300 sec: 23333.1). Total num frames: 43388928. Throughput: 0: 5824.9. Samples: 840582. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:08:23,945][100720] Avg episode reward: [(0, '4.356')] +[2024-12-28 14:08:24,478][100934] Updated weights for policy 0, policy_version 10597 (0.0007) +[2024-12-28 14:08:26,084][100934] Updated weights for policy 0, policy_version 10607 (0.0006) +[2024-12-28 14:08:27,699][100934] Updated weights for policy 0, policy_version 10617 (0.0007) +[2024-12-28 14:08:28,944][100720] Fps is (10 sec: 25395.5, 60 sec: 23483.7, 300 sec: 23401.8). Total num frames: 43515904. Throughput: 0: 5977.2. Samples: 879066. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:08:28,945][100720] Avg episode reward: [(0, '4.488')] +[2024-12-28 14:08:29,286][100934] Updated weights for policy 0, policy_version 10627 (0.0006) +[2024-12-28 14:08:30,906][100934] Updated weights for policy 0, policy_version 10637 (0.0006) +[2024-12-28 14:08:32,486][100934] Updated weights for policy 0, policy_version 10647 (0.0007) +[2024-12-28 14:08:33,944][100720] Fps is (10 sec: 25804.5, 60 sec: 23620.2, 300 sec: 23492.5). Total num frames: 43646976. Throughput: 0: 5966.2. Samples: 898030. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:08:33,945][100720] Avg episode reward: [(0, '4.292')] +[2024-12-28 14:08:34,039][100934] Updated weights for policy 0, policy_version 10657 (0.0007) +[2024-12-28 14:08:35,602][100934] Updated weights for policy 0, policy_version 10667 (0.0006) +[2024-12-28 14:08:37,185][100934] Updated weights for policy 0, policy_version 10677 (0.0008) +[2024-12-28 14:08:38,756][100934] Updated weights for policy 0, policy_version 10687 (0.0006) +[2024-12-28 14:08:38,944][100720] Fps is (10 sec: 26214.4, 60 sec: 24030.0, 300 sec: 23577.6). Total num frames: 43778048. Throughput: 0: 5997.1. Samples: 937388. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:08:38,945][100720] Avg episode reward: [(0, '4.445')] +[2024-12-28 14:08:40,341][100934] Updated weights for policy 0, policy_version 10697 (0.0007) +[2024-12-28 14:08:41,970][100934] Updated weights for policy 0, policy_version 10707 (0.0008) +[2024-12-28 14:08:43,616][100934] Updated weights for policy 0, policy_version 10717 (0.0007) +[2024-12-28 14:08:43,944][100720] Fps is (10 sec: 25395.0, 60 sec: 24302.9, 300 sec: 23607.8). Total num frames: 43900928. Throughput: 0: 6033.8. Samples: 975310. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:08:43,946][100720] Avg episode reward: [(0, '4.456')] +[2024-12-28 14:08:45,452][100934] Updated weights for policy 0, policy_version 10727 (0.0007) +[2024-12-28 14:08:47,276][100934] Updated weights for policy 0, policy_version 10737 (0.0008) +[2024-12-28 14:08:48,944][100720] Fps is (10 sec: 23347.1, 60 sec: 24029.9, 300 sec: 23564.1). Total num frames: 44011520. Throughput: 0: 5976.5. Samples: 992020. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:08:48,945][100720] Avg episode reward: [(0, '4.615')] +[2024-12-28 14:08:49,256][100934] Updated weights for policy 0, policy_version 10747 (0.0009) +[2024-12-28 14:08:51,186][100934] Updated weights for policy 0, policy_version 10757 (0.0008) +[2024-12-28 14:08:53,059][100934] Updated weights for policy 0, policy_version 10767 (0.0009) +[2024-12-28 14:08:53,944][100720] Fps is (10 sec: 21709.2, 60 sec: 23688.5, 300 sec: 23499.3). Total num frames: 44118016. Throughput: 0: 5949.0. Samples: 1023926. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) +[2024-12-28 14:08:53,945][100720] Avg episode reward: [(0, '4.458')] +[2024-12-28 14:08:54,836][100934] Updated weights for policy 0, policy_version 10777 (0.0009) +[2024-12-28 14:08:56,425][100934] Updated weights for policy 0, policy_version 10787 (0.0007) +[2024-12-28 14:08:57,957][100934] Updated weights for policy 0, policy_version 10797 (0.0007) +[2024-12-28 14:08:58,944][100720] Fps is (10 sec: 23756.9, 60 sec: 23893.4, 300 sec: 23574.8). Total num frames: 44249088. Throughput: 0: 6109.7. Samples: 1061460. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) +[2024-12-28 14:08:58,945][100720] Avg episode reward: [(0, '4.247')] +[2024-12-28 14:08:59,569][100934] Updated weights for policy 0, policy_version 10807 (0.0008) +[2024-12-28 14:09:01,179][100934] Updated weights for policy 0, policy_version 10817 (0.0006) +[2024-12-28 14:09:02,773][100934] Updated weights for policy 0, policy_version 10827 (0.0008) +[2024-12-28 14:09:03,944][100720] Fps is (10 sec: 25805.0, 60 sec: 23961.7, 300 sec: 23624.0). Total num frames: 44376064. Throughput: 0: 6178.8. Samples: 1080858. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:09:03,945][100720] Avg episode reward: [(0, '4.192')] +[2024-12-28 14:09:04,440][100934] Updated weights for policy 0, policy_version 10837 (0.0008) +[2024-12-28 14:09:06,355][100934] Updated weights for policy 0, policy_version 10847 (0.0009) +[2024-12-28 14:09:08,289][100934] Updated weights for policy 0, policy_version 10857 (0.0010) +[2024-12-28 14:09:08,944][100720] Fps is (10 sec: 23347.1, 60 sec: 23961.6, 300 sec: 23562.8). Total num frames: 44482560. Throughput: 0: 6104.6. Samples: 1115290. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:09:08,945][100720] Avg episode reward: [(0, '4.457')] +[2024-12-28 14:09:10,236][100934] Updated weights for policy 0, policy_version 10867 (0.0009) +[2024-12-28 14:09:12,161][100934] Updated weights for policy 0, policy_version 10877 (0.0008) +[2024-12-28 14:09:13,944][100720] Fps is (10 sec: 21299.0, 60 sec: 24029.9, 300 sec: 23504.7). Total num frames: 44589056. Throughput: 0: 5946.5. Samples: 1146658. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:09:13,945][100720] Avg episode reward: [(0, '4.193')] +[2024-12-28 14:09:14,115][100934] Updated weights for policy 0, policy_version 10887 (0.0008) +[2024-12-28 14:09:15,891][100934] Updated weights for policy 0, policy_version 10897 (0.0008) +[2024-12-28 14:09:17,497][100934] Updated weights for policy 0, policy_version 10907 (0.0008) +[2024-12-28 14:09:18,944][100720] Fps is (10 sec: 22937.7, 60 sec: 24166.5, 300 sec: 23531.5). Total num frames: 44711936. Throughput: 0: 5920.2. Samples: 1164440. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:09:18,945][100720] Avg episode reward: [(0, '4.510')] +[2024-12-28 14:09:19,064][100934] Updated weights for policy 0, policy_version 10917 (0.0006) +[2024-12-28 14:09:20,980][100934] Updated weights for policy 0, policy_version 10927 (0.0008) +[2024-12-28 14:09:22,922][100934] Updated weights for policy 0, policy_version 10937 (0.0008) +[2024-12-28 14:09:23,944][100720] Fps is (10 sec: 22937.6, 60 sec: 23825.1, 300 sec: 23477.1). Total num frames: 44818432. Throughput: 0: 5808.0. Samples: 1198746. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:09:23,945][100720] Avg episode reward: [(0, '4.398')] +[2024-12-28 14:09:24,848][100934] Updated weights for policy 0, policy_version 10947 (0.0010) +[2024-12-28 14:09:26,746][100934] Updated weights for policy 0, policy_version 10957 (0.0008) +[2024-12-28 14:09:28,720][100934] Updated weights for policy 0, policy_version 10967 (0.0008) +[2024-12-28 14:09:28,944][100720] Fps is (10 sec: 21299.1, 60 sec: 23483.7, 300 sec: 23425.2). Total num frames: 44924928. Throughput: 0: 5673.2. Samples: 1230604. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:09:28,945][100720] Avg episode reward: [(0, '4.249')] +[2024-12-28 14:09:30,499][100934] Updated weights for policy 0, policy_version 10977 (0.0008) +[2024-12-28 14:09:32,129][100934] Updated weights for policy 0, policy_version 10987 (0.0008) +[2024-12-28 14:09:33,735][100934] Updated weights for policy 0, policy_version 10997 (0.0008) +[2024-12-28 14:09:33,944][100720] Fps is (10 sec: 22937.6, 60 sec: 23347.2, 300 sec: 23452.0). Total num frames: 45047808. Throughput: 0: 5704.6. Samples: 1248728. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:09:33,945][100720] Avg episode reward: [(0, '4.525')] +[2024-12-28 14:09:35,339][100934] Updated weights for policy 0, policy_version 11007 (0.0007) +[2024-12-28 14:09:36,931][100934] Updated weights for policy 0, policy_version 11017 (0.0007) +[2024-12-28 14:09:38,499][100934] Updated weights for policy 0, policy_version 11027 (0.0007) +[2024-12-28 14:09:38,944][100720] Fps is (10 sec: 24985.7, 60 sec: 23278.9, 300 sec: 23496.2). Total num frames: 45174784. Throughput: 0: 5854.0. Samples: 1287356. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:09:38,945][100720] Avg episode reward: [(0, '4.361')] +[2024-12-28 14:09:40,080][100934] Updated weights for policy 0, policy_version 11037 (0.0007) +[2024-12-28 14:09:41,700][100934] Updated weights for policy 0, policy_version 11047 (0.0008) +[2024-12-28 14:09:43,235][100934] Updated weights for policy 0, policy_version 11057 (0.0007) +[2024-12-28 14:09:43,944][100720] Fps is (10 sec: 25804.8, 60 sec: 23415.5, 300 sec: 23556.6). Total num frames: 45305856. Throughput: 0: 5875.2. Samples: 1325844. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:09:43,945][100720] Avg episode reward: [(0, '4.622')] +[2024-12-28 14:09:44,884][100934] Updated weights for policy 0, policy_version 11067 (0.0008) +[2024-12-28 14:09:46,445][100934] Updated weights for policy 0, policy_version 11077 (0.0007) +[2024-12-28 14:09:48,006][100934] Updated weights for policy 0, policy_version 11087 (0.0007) +[2024-12-28 14:09:48,944][100720] Fps is (10 sec: 25804.6, 60 sec: 23688.5, 300 sec: 23596.5). Total num frames: 45432832. Throughput: 0: 5875.7. Samples: 1345266. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:09:48,945][100720] Avg episode reward: [(0, '4.518')] +[2024-12-28 14:09:49,598][100934] Updated weights for policy 0, policy_version 11097 (0.0008) +[2024-12-28 14:09:51,282][100934] Updated weights for policy 0, policy_version 11107 (0.0008) +[2024-12-28 14:09:53,180][100934] Updated weights for policy 0, policy_version 11117 (0.0007) +[2024-12-28 14:09:53,944][100720] Fps is (10 sec: 24166.3, 60 sec: 23825.0, 300 sec: 23582.5). Total num frames: 45547520. Throughput: 0: 5921.6. Samples: 1381762. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:09:53,945][100720] Avg episode reward: [(0, '4.522')] +[2024-12-28 14:09:53,961][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000011121_45551616.pth... 
+[2024-12-28 14:09:53,998][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000009767_40005632.pth +[2024-12-28 14:09:55,156][100934] Updated weights for policy 0, policy_version 11127 (0.0009) +[2024-12-28 14:09:57,077][100934] Updated weights for policy 0, policy_version 11137 (0.0008) +[2024-12-28 14:09:58,944][100720] Fps is (10 sec: 22118.5, 60 sec: 23415.4, 300 sec: 23534.9). Total num frames: 45654016. Throughput: 0: 5925.2. Samples: 1413292. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:09:58,945][100720] Avg episode reward: [(0, '4.371')] +[2024-12-28 14:09:59,018][100934] Updated weights for policy 0, policy_version 11147 (0.0008) +[2024-12-28 14:10:00,958][100934] Updated weights for policy 0, policy_version 11157 (0.0008) +[2024-12-28 14:10:02,818][100934] Updated weights for policy 0, policy_version 11167 (0.0008) +[2024-12-28 14:10:03,944][100720] Fps is (10 sec: 21709.0, 60 sec: 23142.4, 300 sec: 23506.0). Total num frames: 45764608. Throughput: 0: 5877.6. Samples: 1428930. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:10:03,945][100720] Avg episode reward: [(0, '4.171')] +[2024-12-28 14:10:04,465][100934] Updated weights for policy 0, policy_version 11177 (0.0008) +[2024-12-28 14:10:06,065][100934] Updated weights for policy 0, policy_version 11187 (0.0008) +[2024-12-28 14:10:07,688][100934] Updated weights for policy 0, policy_version 11197 (0.0008) +[2024-12-28 14:10:08,944][100720] Fps is (10 sec: 24166.0, 60 sec: 23551.9, 300 sec: 23560.2). Total num frames: 45895680. Throughput: 0: 5955.0. Samples: 1466720. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:10:08,945][100720] Avg episode reward: [(0, '4.507')] +[2024-12-28 14:10:09,274][100934] Updated weights for policy 0, policy_version 11207 (0.0007) +[2024-12-28 14:10:10,875][100934] Updated weights for policy 0, policy_version 11217 (0.0008) +[2024-12-28 14:10:12,489][100934] Updated weights for policy 0, policy_version 11227 (0.0006) +[2024-12-28 14:10:13,944][100720] Fps is (10 sec: 25804.7, 60 sec: 23893.3, 300 sec: 23596.2). Total num frames: 46022656. Throughput: 0: 6098.6. Samples: 1505042. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:10:13,945][100720] Avg episode reward: [(0, '4.391')] +[2024-12-28 14:10:14,090][100934] Updated weights for policy 0, policy_version 11237 (0.0007) +[2024-12-28 14:10:15,654][100934] Updated weights for policy 0, policy_version 11247 (0.0007) +[2024-12-28 14:10:17,231][100934] Updated weights for policy 0, policy_version 11257 (0.0008) +[2024-12-28 14:10:18,840][100934] Updated weights for policy 0, policy_version 11267 (0.0008) +[2024-12-28 14:10:18,944][100720] Fps is (10 sec: 25395.7, 60 sec: 23961.6, 300 sec: 23630.8). Total num frames: 46149632. Throughput: 0: 6130.9. Samples: 1524620. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:10:18,945][100720] Avg episode reward: [(0, '4.205')] +[2024-12-28 14:10:20,449][100934] Updated weights for policy 0, policy_version 11277 (0.0007) +[2024-12-28 14:10:21,996][100934] Updated weights for policy 0, policy_version 11287 (0.0007) +[2024-12-28 14:10:23,590][100934] Updated weights for policy 0, policy_version 11297 (0.0007) +[2024-12-28 14:10:23,944][100720] Fps is (10 sec: 25804.8, 60 sec: 24371.2, 300 sec: 23679.5). Total num frames: 46280704. Throughput: 0: 6127.2. Samples: 1563078. 
Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:10:23,945][100720] Avg episode reward: [(0, '4.581')] +[2024-12-28 14:10:25,208][100934] Updated weights for policy 0, policy_version 11307 (0.0007) +[2024-12-28 14:10:26,818][100934] Updated weights for policy 0, policy_version 11317 (0.0007) +[2024-12-28 14:10:28,428][100934] Updated weights for policy 0, policy_version 11327 (0.0007) +[2024-12-28 14:10:28,944][100720] Fps is (10 sec: 25804.8, 60 sec: 24712.5, 300 sec: 23711.3). Total num frames: 46407680. Throughput: 0: 6121.5. Samples: 1601312. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:10:28,945][100720] Avg episode reward: [(0, '4.357')] +[2024-12-28 14:10:30,012][100934] Updated weights for policy 0, policy_version 11337 (0.0006) +[2024-12-28 14:10:31,792][100934] Updated weights for policy 0, policy_version 11347 (0.0007) +[2024-12-28 14:10:33,704][100934] Updated weights for policy 0, policy_version 11357 (0.0009) +[2024-12-28 14:10:33,944][100720] Fps is (10 sec: 24166.3, 60 sec: 24576.0, 300 sec: 23697.2). Total num frames: 46522368. Throughput: 0: 6093.0. Samples: 1619450. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:10:33,945][100720] Avg episode reward: [(0, '4.534')] +[2024-12-28 14:10:35,524][100934] Updated weights for policy 0, policy_version 11367 (0.0009) +[2024-12-28 14:10:37,363][100934] Updated weights for policy 0, policy_version 11377 (0.0008) +[2024-12-28 14:10:38,944][100720] Fps is (10 sec: 22527.8, 60 sec: 24302.9, 300 sec: 23669.0). Total num frames: 46632960. Throughput: 0: 6012.1. Samples: 1652306. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:10:38,945][100720] Avg episode reward: [(0, '4.346')] +[2024-12-28 14:10:39,259][100934] Updated weights for policy 0, policy_version 11387 (0.0008) +[2024-12-28 14:10:41,163][100934] Updated weights for policy 0, policy_version 11397 (0.0008) +[2024-12-28 14:10:42,783][100934] Updated weights for policy 0, policy_version 11407 (0.0007) +[2024-12-28 14:10:43,944][100720] Fps is (10 sec: 22937.6, 60 sec: 24098.1, 300 sec: 23670.6). Total num frames: 46751744. Throughput: 0: 6096.0. Samples: 1687612. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:10:43,945][100720] Avg episode reward: [(0, '4.535')] +[2024-12-28 14:10:44,309][100934] Updated weights for policy 0, policy_version 11417 (0.0008) +[2024-12-28 14:10:45,863][100934] Updated weights for policy 0, policy_version 11427 (0.0007) +[2024-12-28 14:10:47,425][100934] Updated weights for policy 0, policy_version 11437 (0.0007) +[2024-12-28 14:10:48,944][100720] Fps is (10 sec: 24985.8, 60 sec: 24166.4, 300 sec: 23714.4). Total num frames: 46882816. Throughput: 0: 6189.2. Samples: 1707442. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:10:48,945][100720] Avg episode reward: [(0, '4.301')] +[2024-12-28 14:10:49,004][100934] Updated weights for policy 0, policy_version 11447 (0.0008) +[2024-12-28 14:10:50,549][100934] Updated weights for policy 0, policy_version 11457 (0.0008) +[2024-12-28 14:10:52,087][100934] Updated weights for policy 0, policy_version 11467 (0.0008) +[2024-12-28 14:10:53,677][100934] Updated weights for policy 0, policy_version 11477 (0.0008) +[2024-12-28 14:10:53,944][100720] Fps is (10 sec: 26214.4, 60 sec: 24439.5, 300 sec: 23756.8). Total num frames: 47013888. Throughput: 0: 6225.2. Samples: 1746854. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:10:53,945][100720] Avg episode reward: [(0, '4.470')] +[2024-12-28 14:10:55,233][100934] Updated weights for policy 0, policy_version 11487 (0.0007) +[2024-12-28 14:10:56,818][100934] Updated weights for policy 0, policy_version 11497 (0.0008) +[2024-12-28 14:10:58,383][100934] Updated weights for policy 0, policy_version 11507 (0.0006) +[2024-12-28 14:10:58,944][100720] Fps is (10 sec: 26214.4, 60 sec: 24849.1, 300 sec: 24020.6). Total num frames: 47144960. Throughput: 0: 6242.4. Samples: 1785948. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:10:58,945][100720] Avg episode reward: [(0, '4.275')] +[2024-12-28 14:10:59,922][100934] Updated weights for policy 0, policy_version 11517 (0.0006) +[2024-12-28 14:11:01,486][100934] Updated weights for policy 0, policy_version 11527 (0.0007) +[2024-12-28 14:11:03,258][100934] Updated weights for policy 0, policy_version 11537 (0.0009) +[2024-12-28 14:11:03,944][100720] Fps is (10 sec: 25395.0, 60 sec: 25053.8, 300 sec: 24117.8). Total num frames: 47267840. Throughput: 0: 6248.7. Samples: 1805810. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 14:11:03,945][100720] Avg episode reward: [(0, '4.513')] +[2024-12-28 14:11:05,115][100934] Updated weights for policy 0, policy_version 11547 (0.0007) +[2024-12-28 14:11:07,054][100934] Updated weights for policy 0, policy_version 11557 (0.0010) +[2024-12-28 14:11:08,937][100934] Updated weights for policy 0, policy_version 11567 (0.0008) +[2024-12-28 14:11:08,944][100720] Fps is (10 sec: 23345.7, 60 sec: 24712.4, 300 sec: 24131.6). Total num frames: 47378432. Throughput: 0: 6122.4. Samples: 1838592. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:11:08,945][100720] Avg episode reward: [(0, '4.446')] +[2024-12-28 14:11:10,812][100934] Updated weights for policy 0, policy_version 11577 (0.0008) +[2024-12-28 14:11:12,656][100934] Updated weights for policy 0, policy_version 11587 (0.0007) +[2024-12-28 14:11:13,944][100720] Fps is (10 sec: 22528.3, 60 sec: 24507.7, 300 sec: 24103.9). Total num frames: 47493120. Throughput: 0: 6029.5. Samples: 1872638. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:11:13,945][100720] Avg episode reward: [(0, '4.348')] +[2024-12-28 14:11:14,242][100934] Updated weights for policy 0, policy_version 11597 (0.0006) +[2024-12-28 14:11:15,798][100934] Updated weights for policy 0, policy_version 11607 (0.0008) +[2024-12-28 14:11:17,306][100934] Updated weights for policy 0, policy_version 11617 (0.0006) +[2024-12-28 14:11:18,867][100934] Updated weights for policy 0, policy_version 11627 (0.0006) +[2024-12-28 14:11:18,944][100720] Fps is (10 sec: 24577.6, 60 sec: 24576.0, 300 sec: 24117.8). Total num frames: 47624192. Throughput: 0: 6064.4. Samples: 1892348. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:11:18,945][100720] Avg episode reward: [(0, '4.559')] +[2024-12-28 14:11:20,449][100934] Updated weights for policy 0, policy_version 11637 (0.0007) +[2024-12-28 14:11:21,967][100934] Updated weights for policy 0, policy_version 11647 (0.0006) +[2024-12-28 14:11:23,491][100934] Updated weights for policy 0, policy_version 11657 (0.0007) +[2024-12-28 14:11:23,944][100720] Fps is (10 sec: 26214.1, 60 sec: 24576.0, 300 sec: 24131.7). Total num frames: 47755264. Throughput: 0: 6223.3. Samples: 1932354. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:11:23,945][100720] Avg episode reward: [(0, '4.408')] +[2024-12-28 14:11:25,053][100934] Updated weights for policy 0, policy_version 11667 (0.0007) +[2024-12-28 14:11:26,649][100934] Updated weights for policy 0, policy_version 11677 (0.0007) +[2024-12-28 14:11:28,218][100934] Updated weights for policy 0, policy_version 11687 (0.0008) +[2024-12-28 14:11:28,944][100720] Fps is (10 sec: 26214.4, 60 sec: 24644.3, 300 sec: 24131.7). Total num frames: 47886336. Throughput: 0: 6312.0. Samples: 1971650. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:11:28,945][100720] Avg episode reward: [(0, '4.448')] +[2024-12-28 14:11:29,758][100934] Updated weights for policy 0, policy_version 11697 (0.0006) +[2024-12-28 14:11:31,333][100934] Updated weights for policy 0, policy_version 11707 (0.0007) +[2024-12-28 14:11:32,902][100934] Updated weights for policy 0, policy_version 11717 (0.0007) +[2024-12-28 14:11:33,944][100720] Fps is (10 sec: 26214.6, 60 sec: 24917.3, 300 sec: 24159.5). Total num frames: 48017408. Throughput: 0: 6307.7. Samples: 1991288. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:11:33,945][100720] Avg episode reward: [(0, '4.436')] +[2024-12-28 14:11:34,456][100934] Updated weights for policy 0, policy_version 11727 (0.0007) +[2024-12-28 14:11:36,046][100934] Updated weights for policy 0, policy_version 11737 (0.0006) +[2024-12-28 14:11:37,883][100934] Updated weights for policy 0, policy_version 11747 (0.0007) +[2024-12-28 14:11:38,944][100720] Fps is (10 sec: 24985.6, 60 sec: 25053.9, 300 sec: 24131.7). Total num frames: 48136192. Throughput: 0: 6256.4. Samples: 2028390. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:11:38,945][100720] Avg episode reward: [(0, '4.433')] +[2024-12-28 14:11:39,763][100934] Updated weights for policy 0, policy_version 11757 (0.0009) +[2024-12-28 14:11:41,593][100934] Updated weights for policy 0, policy_version 11767 (0.0008) +[2024-12-28 14:11:43,466][100934] Updated weights for policy 0, policy_version 11777 (0.0009) +[2024-12-28 14:11:43,944][100720] Fps is (10 sec: 22937.5, 60 sec: 24917.3, 300 sec: 24076.1). Total num frames: 48246784. Throughput: 0: 6121.5. Samples: 2061416. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:11:43,945][100720] Avg episode reward: [(0, '4.278')] +[2024-12-28 14:11:45,327][100934] Updated weights for policy 0, policy_version 11787 (0.0008) +[2024-12-28 14:11:47,096][100934] Updated weights for policy 0, policy_version 11797 (0.0008) +[2024-12-28 14:11:48,625][100934] Updated weights for policy 0, policy_version 11807 (0.0008) +[2024-12-28 14:11:48,944][100720] Fps is (10 sec: 23347.2, 60 sec: 24780.8, 300 sec: 24076.1). Total num frames: 48369664. Throughput: 0: 6056.5. Samples: 2078350. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:11:48,945][100720] Avg episode reward: [(0, '4.614')] +[2024-12-28 14:11:50,206][100934] Updated weights for policy 0, policy_version 11817 (0.0007) +[2024-12-28 14:11:51,943][100934] Updated weights for policy 0, policy_version 11827 (0.0007) +[2024-12-28 14:11:53,755][100934] Updated weights for policy 0, policy_version 11837 (0.0007) +[2024-12-28 14:11:53,944][100720] Fps is (10 sec: 24166.4, 60 sec: 24576.0, 300 sec: 24062.3). Total num frames: 48488448. Throughput: 0: 6158.9. Samples: 2115738. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:11:53,945][100720] Avg episode reward: [(0, '4.571')] +[2024-12-28 14:11:53,951][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000011838_48488448.pth... +[2024-12-28 14:11:53,988][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000010424_42696704.pth +[2024-12-28 14:11:55,687][100934] Updated weights for policy 0, policy_version 11847 (0.0009) +[2024-12-28 14:11:57,559][100934] Updated weights for policy 0, policy_version 11857 (0.0008) +[2024-12-28 14:11:58,944][100720] Fps is (10 sec: 22527.9, 60 sec: 24166.4, 300 sec: 24006.7). Total num frames: 48594944. Throughput: 0: 6129.3. Samples: 2148456. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:11:58,946][100720] Avg episode reward: [(0, '4.383')] +[2024-12-28 14:11:59,370][100934] Updated weights for policy 0, policy_version 11867 (0.0007) +[2024-12-28 14:12:01,158][100934] Updated weights for policy 0, policy_version 11877 (0.0008) +[2024-12-28 14:12:02,738][100934] Updated weights for policy 0, policy_version 11887 (0.0006) +[2024-12-28 14:12:03,944][100720] Fps is (10 sec: 22937.7, 60 sec: 24166.4, 300 sec: 24006.7). Total num frames: 48717824. Throughput: 0: 6089.2. Samples: 2166364. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:12:03,945][100720] Avg episode reward: [(0, '4.494')] +[2024-12-28 14:12:04,261][100934] Updated weights for policy 0, policy_version 11897 (0.0007) +[2024-12-28 14:12:05,841][100934] Updated weights for policy 0, policy_version 11907 (0.0007) +[2024-12-28 14:12:07,476][100934] Updated weights for policy 0, policy_version 11917 (0.0008) +[2024-12-28 14:12:08,944][100720] Fps is (10 sec: 25395.1, 60 sec: 24508.0, 300 sec: 24048.4). Total num frames: 48848896. Throughput: 0: 6064.8. Samples: 2205272. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:12:08,945][100720] Avg episode reward: [(0, '4.234')] +[2024-12-28 14:12:09,054][100934] Updated weights for policy 0, policy_version 11927 (0.0006) +[2024-12-28 14:12:10,582][100934] Updated weights for policy 0, policy_version 11937 (0.0008) +[2024-12-28 14:12:12,154][100934] Updated weights for policy 0, policy_version 11947 (0.0008) +[2024-12-28 14:12:13,673][100934] Updated weights for policy 0, policy_version 11957 (0.0006) +[2024-12-28 14:12:13,944][100720] Fps is (10 sec: 26214.4, 60 sec: 24780.8, 300 sec: 24117.8). Total num frames: 48979968. Throughput: 0: 6070.3. Samples: 2244814. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:12:13,945][100720] Avg episode reward: [(0, '4.379')] +[2024-12-28 14:12:15,252][100934] Updated weights for policy 0, policy_version 11967 (0.0007) +[2024-12-28 14:12:16,831][100934] Updated weights for policy 0, policy_version 11977 (0.0008) +[2024-12-28 14:12:18,387][100934] Updated weights for policy 0, policy_version 11987 (0.0006) +[2024-12-28 14:12:18,944][100720] Fps is (10 sec: 26214.5, 60 sec: 24780.8, 300 sec: 24173.3). Total num frames: 49111040. Throughput: 0: 6066.6. Samples: 2264284. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:12:18,945][100720] Avg episode reward: [(0, '4.420')] +[2024-12-28 14:12:19,963][100934] Updated weights for policy 0, policy_version 11997 (0.0007) +[2024-12-28 14:12:21,505][100934] Updated weights for policy 0, policy_version 12007 (0.0006) +[2024-12-28 14:12:23,255][100934] Updated weights for policy 0, policy_version 12017 (0.0008) +[2024-12-28 14:12:23,944][100720] Fps is (10 sec: 25395.1, 60 sec: 24644.3, 300 sec: 24159.5). Total num frames: 49233920. Throughput: 0: 6099.5. Samples: 2302868. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:12:23,945][100720] Avg episode reward: [(0, '4.500')] +[2024-12-28 14:12:25,079][100934] Updated weights for policy 0, policy_version 12027 (0.0010) +[2024-12-28 14:12:26,952][100934] Updated weights for policy 0, policy_version 12037 (0.0009) +[2024-12-28 14:12:28,826][100934] Updated weights for policy 0, policy_version 12047 (0.0007) +[2024-12-28 14:12:28,944][100720] Fps is (10 sec: 23347.1, 60 sec: 24302.9, 300 sec: 24117.8). Total num frames: 49344512. Throughput: 0: 6104.0. Samples: 2336096. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:12:28,945][100720] Avg episode reward: [(0, '4.482')] +[2024-12-28 14:12:30,731][100934] Updated weights for policy 0, policy_version 12057 (0.0009) +[2024-12-28 14:12:32,626][100934] Updated weights for policy 0, policy_version 12067 (0.0009) +[2024-12-28 14:12:33,944][100720] Fps is (10 sec: 22528.1, 60 sec: 24029.9, 300 sec: 24145.6). Total num frames: 49459200. Throughput: 0: 6086.8. Samples: 2352256. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:12:33,945][100720] Avg episode reward: [(0, '4.249')] +[2024-12-28 14:12:34,200][100934] Updated weights for policy 0, policy_version 12077 (0.0007) +[2024-12-28 14:12:35,745][100934] Updated weights for policy 0, policy_version 12087 (0.0007) +[2024-12-28 14:12:37,292][100934] Updated weights for policy 0, policy_version 12097 (0.0007) +[2024-12-28 14:12:38,840][100934] Updated weights for policy 0, policy_version 12107 (0.0006) +[2024-12-28 14:12:38,944][100720] Fps is (10 sec: 24576.1, 60 sec: 24234.7, 300 sec: 24228.9). Total num frames: 49590272. Throughput: 0: 6110.8. Samples: 2390724. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:12:38,945][100720] Avg episode reward: [(0, '4.517')] +[2024-12-28 14:12:40,541][100934] Updated weights for policy 0, policy_version 12117 (0.0007) +[2024-12-28 14:12:42,372][100934] Updated weights for policy 0, policy_version 12127 (0.0009) +[2024-12-28 14:12:43,944][100720] Fps is (10 sec: 24575.9, 60 sec: 24302.9, 300 sec: 24187.2). Total num frames: 49704960. Throughput: 0: 6170.7. Samples: 2426138. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:12:43,945][100720] Avg episode reward: [(0, '4.291')] +[2024-12-28 14:12:44,173][100934] Updated weights for policy 0, policy_version 12137 (0.0007) +[2024-12-28 14:12:46,056][100934] Updated weights for policy 0, policy_version 12147 (0.0008) +[2024-12-28 14:12:47,938][100934] Updated weights for policy 0, policy_version 12157 (0.0008) +[2024-12-28 14:12:48,944][100720] Fps is (10 sec: 22528.0, 60 sec: 24098.1, 300 sec: 24131.7). Total num frames: 49815552. Throughput: 0: 6135.2. Samples: 2442450. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:12:48,945][100720] Avg episode reward: [(0, '4.148')] +[2024-12-28 14:12:49,852][100934] Updated weights for policy 0, policy_version 12167 (0.0008) +[2024-12-28 14:12:51,531][100934] Updated weights for policy 0, policy_version 12177 (0.0006) +[2024-12-28 14:12:53,103][100934] Updated weights for policy 0, policy_version 12187 (0.0007) +[2024-12-28 14:12:53,944][100720] Fps is (10 sec: 23347.4, 60 sec: 24166.4, 300 sec: 24145.6). Total num frames: 49938432. Throughput: 0: 6052.7. Samples: 2477642. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:12:53,945][100720] Avg episode reward: [(0, '4.577')] +[2024-12-28 14:12:54,633][100934] Updated weights for policy 0, policy_version 12197 (0.0007) +[2024-12-28 14:12:56,181][100934] Updated weights for policy 0, policy_version 12207 (0.0007) +[2024-12-28 14:12:57,727][100934] Updated weights for policy 0, policy_version 12217 (0.0006) +[2024-12-28 14:12:58,944][100720] Fps is (10 sec: 25394.5, 60 sec: 24575.9, 300 sec: 24173.3). Total num frames: 50069504. Throughput: 0: 6054.8. Samples: 2517282. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:12:58,945][100720] Avg episode reward: [(0, '4.329')] +[2024-12-28 14:12:59,289][100934] Updated weights for policy 0, policy_version 12227 (0.0007) +[2024-12-28 14:13:00,820][100934] Updated weights for policy 0, policy_version 12237 (0.0007) +[2024-12-28 14:13:03,279][100934] Updated weights for policy 0, policy_version 12247 (0.0008) +[2024-12-28 14:13:03,944][100720] Fps is (10 sec: 23756.7, 60 sec: 24302.9, 300 sec: 24173.3). Total num frames: 50176000. Throughput: 0: 5939.9. Samples: 2531578. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:13:03,945][100720] Avg episode reward: [(0, '4.507')] +[2024-12-28 14:13:05,045][100934] Updated weights for policy 0, policy_version 12257 (0.0008) +[2024-12-28 14:13:07,046][100934] Updated weights for policy 0, policy_version 12267 (0.0009) +[2024-12-28 14:13:08,839][100934] Updated weights for policy 0, policy_version 12277 (0.0008) +[2024-12-28 14:13:08,944][100720] Fps is (10 sec: 21709.4, 60 sec: 23961.6, 300 sec: 24201.1). Total num frames: 50286592. Throughput: 0: 5841.3. Samples: 2565726. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:13:08,945][100720] Avg episode reward: [(0, '4.478')] +[2024-12-28 14:13:10,610][100934] Updated weights for policy 0, policy_version 12287 (0.0008) +[2024-12-28 14:13:12,400][100934] Updated weights for policy 0, policy_version 12297 (0.0007) +[2024-12-28 14:13:13,944][100720] Fps is (10 sec: 22527.7, 60 sec: 23688.5, 300 sec: 24201.1). Total num frames: 50401280. Throughput: 0: 5875.6. Samples: 2600498. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:13:13,945][100720] Avg episode reward: [(0, '4.292')] +[2024-12-28 14:13:14,160][100934] Updated weights for policy 0, policy_version 12307 (0.0008) +[2024-12-28 14:13:15,738][100934] Updated weights for policy 0, policy_version 12317 (0.0006) +[2024-12-28 14:13:17,215][100934] Updated weights for policy 0, policy_version 12327 (0.0006) +[2024-12-28 14:13:18,719][100934] Updated weights for policy 0, policy_version 12337 (0.0007) +[2024-12-28 14:13:18,944][100720] Fps is (10 sec: 24985.7, 60 sec: 23756.8, 300 sec: 24228.9). Total num frames: 50536448. Throughput: 0: 5950.0. Samples: 2620004. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:13:18,945][100720] Avg episode reward: [(0, '4.511')] +[2024-12-28 14:13:20,246][100934] Updated weights for policy 0, policy_version 12347 (0.0006) +[2024-12-28 14:13:21,750][100934] Updated weights for policy 0, policy_version 12357 (0.0006) +[2024-12-28 14:13:23,231][100934] Updated weights for policy 0, policy_version 12367 (0.0006) +[2024-12-28 14:13:23,944][100720] Fps is (10 sec: 27033.9, 60 sec: 23961.6, 300 sec: 24256.6). Total num frames: 50671616. Throughput: 0: 6004.5. Samples: 2660926. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:13:23,945][100720] Avg episode reward: [(0, '4.441')] +[2024-12-28 14:13:24,924][100934] Updated weights for policy 0, policy_version 12377 (0.0007) +[2024-12-28 14:13:26,700][100934] Updated weights for policy 0, policy_version 12387 (0.0008) +[2024-12-28 14:13:28,474][100934] Updated weights for policy 0, policy_version 12397 (0.0007) +[2024-12-28 14:13:28,944][100720] Fps is (10 sec: 24985.5, 60 sec: 24029.9, 300 sec: 24201.1). Total num frames: 50786304. Throughput: 0: 6006.7. Samples: 2696440. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:13:28,945][100720] Avg episode reward: [(0, '4.487')] +[2024-12-28 14:13:30,270][100934] Updated weights for policy 0, policy_version 12407 (0.0008) +[2024-12-28 14:13:32,115][100934] Updated weights for policy 0, policy_version 12417 (0.0009) +[2024-12-28 14:13:33,944][100720] Fps is (10 sec: 22527.9, 60 sec: 23961.6, 300 sec: 24131.7). Total num frames: 50896896. Throughput: 0: 6019.2. Samples: 2713314. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:13:33,945][100720] Avg episode reward: [(0, '4.355')] +[2024-12-28 14:13:34,011][100934] Updated weights for policy 0, policy_version 12427 (0.0008) +[2024-12-28 14:13:35,789][100934] Updated weights for policy 0, policy_version 12437 (0.0007) +[2024-12-28 14:13:37,355][100934] Updated weights for policy 0, policy_version 12447 (0.0007) +[2024-12-28 14:13:38,868][100934] Updated weights for policy 0, policy_version 12457 (0.0007) +[2024-12-28 14:13:38,944][100720] Fps is (10 sec: 23755.8, 60 sec: 23893.2, 300 sec: 24145.5). Total num frames: 51023872. Throughput: 0: 6025.8. Samples: 2748804. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:13:38,945][100720] Avg episode reward: [(0, '4.361')] +[2024-12-28 14:13:40,383][100934] Updated weights for policy 0, policy_version 12467 (0.0007) +[2024-12-28 14:13:41,972][100934] Updated weights for policy 0, policy_version 12477 (0.0007) +[2024-12-28 14:13:43,715][100934] Updated weights for policy 0, policy_version 12487 (0.0008) +[2024-12-28 14:13:43,944][100720] Fps is (10 sec: 25395.0, 60 sec: 24098.1, 300 sec: 24201.1). Total num frames: 51150848. Throughput: 0: 5998.9. Samples: 2787230. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:13:43,945][100720] Avg episode reward: [(0, '4.508')] +[2024-12-28 14:13:45,480][100934] Updated weights for policy 0, policy_version 12497 (0.0009) +[2024-12-28 14:13:47,288][100934] Updated weights for policy 0, policy_version 12507 (0.0008) +[2024-12-28 14:13:48,944][100720] Fps is (10 sec: 24167.5, 60 sec: 24166.4, 300 sec: 24228.9). Total num frames: 51265536. Throughput: 0: 6063.0. Samples: 2804414. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:13:48,945][100720] Avg episode reward: [(0, '4.265')] +[2024-12-28 14:13:49,044][100934] Updated weights for policy 0, policy_version 12517 (0.0008) +[2024-12-28 14:13:50,816][100934] Updated weights for policy 0, policy_version 12527 (0.0007) +[2024-12-28 14:13:52,579][100934] Updated weights for policy 0, policy_version 12537 (0.0008) +[2024-12-28 14:13:53,944][100720] Fps is (10 sec: 22937.8, 60 sec: 24029.8, 300 sec: 24173.3). Total num frames: 51380224. Throughput: 0: 6075.7. Samples: 2839132. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:13:53,945][100720] Avg episode reward: [(0, '4.389')] +[2024-12-28 14:13:53,952][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000012544_51380224.pth... +[2024-12-28 14:13:53,988][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000011121_45551616.pth +[2024-12-28 14:13:54,376][100934] Updated weights for policy 0, policy_version 12547 (0.0008) +[2024-12-28 14:13:56,179][100934] Updated weights for policy 0, policy_version 12557 (0.0008) +[2024-12-28 14:13:57,915][100934] Updated weights for policy 0, policy_version 12567 (0.0008) +[2024-12-28 14:13:58,944][100720] Fps is (10 sec: 22937.5, 60 sec: 23756.9, 300 sec: 24131.7). Total num frames: 51494912. Throughput: 0: 6063.8. Samples: 2873368. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:13:58,945][100720] Avg episode reward: [(0, '4.492')] +[2024-12-28 14:13:59,735][100934] Updated weights for policy 0, policy_version 12577 (0.0008) +[2024-12-28 14:14:01,542][100934] Updated weights for policy 0, policy_version 12587 (0.0008) +[2024-12-28 14:14:03,325][100934] Updated weights for policy 0, policy_version 12597 (0.0008) +[2024-12-28 14:14:03,944][100720] Fps is (10 sec: 22937.6, 60 sec: 23893.3, 300 sec: 24159.5). Total num frames: 51609600. Throughput: 0: 6013.0. Samples: 2890590. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:14:03,945][100720] Avg episode reward: [(0, '4.346')] +[2024-12-28 14:14:05,060][100934] Updated weights for policy 0, policy_version 12607 (0.0007) +[2024-12-28 14:14:06,905][100934] Updated weights for policy 0, policy_version 12617 (0.0009) +[2024-12-28 14:14:08,837][100934] Updated weights for policy 0, policy_version 12627 (0.0007) +[2024-12-28 14:14:08,944][100720] Fps is (10 sec: 22528.0, 60 sec: 23893.3, 300 sec: 24173.3). Total num frames: 51720192. Throughput: 0: 5854.8. Samples: 2924394. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:14:08,945][100720] Avg episode reward: [(0, '4.395')] +[2024-12-28 14:14:10,703][100934] Updated weights for policy 0, policy_version 12637 (0.0009) +[2024-12-28 14:14:12,586][100934] Updated weights for policy 0, policy_version 12647 (0.0007) +[2024-12-28 14:14:13,944][100720] Fps is (10 sec: 22118.4, 60 sec: 23825.1, 300 sec: 24131.7). Total num frames: 51830784. Throughput: 0: 5794.9. Samples: 2957212. 
Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:14:13,945][100720] Avg episode reward: [(0, '4.409')] +[2024-12-28 14:14:14,481][100934] Updated weights for policy 0, policy_version 12657 (0.0010) +[2024-12-28 14:14:16,264][100934] Updated weights for policy 0, policy_version 12667 (0.0007) +[2024-12-28 14:14:18,038][100934] Updated weights for policy 0, policy_version 12677 (0.0007) +[2024-12-28 14:14:18,944][100720] Fps is (10 sec: 22528.0, 60 sec: 23483.7, 300 sec: 24159.5). Total num frames: 51945472. Throughput: 0: 5796.9. Samples: 2974176. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:14:18,945][100720] Avg episode reward: [(0, '4.376')] +[2024-12-28 14:14:19,625][100934] Updated weights for policy 0, policy_version 12687 (0.0007) +[2024-12-28 14:14:21,136][100934] Updated weights for policy 0, policy_version 12697 (0.0007) +[2024-12-28 14:14:22,655][100934] Updated weights for policy 0, policy_version 12707 (0.0007) +[2024-12-28 14:14:23,944][100720] Fps is (10 sec: 24985.6, 60 sec: 23483.7, 300 sec: 24256.7). Total num frames: 52080640. Throughput: 0: 5873.1. Samples: 3013090. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:14:23,945][100720] Avg episode reward: [(0, '4.425')] +[2024-12-28 14:14:24,166][100934] Updated weights for policy 0, policy_version 12717 (0.0006) +[2024-12-28 14:14:25,660][100934] Updated weights for policy 0, policy_version 12727 (0.0007) +[2024-12-28 14:14:27,113][100934] Updated weights for policy 0, policy_version 12737 (0.0006) +[2024-12-28 14:14:28,606][100934] Updated weights for policy 0, policy_version 12747 (0.0007) +[2024-12-28 14:14:28,944][100720] Fps is (10 sec: 27443.0, 60 sec: 23893.3, 300 sec: 24312.2). Total num frames: 52219904. Throughput: 0: 5934.5. Samples: 3054280. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:14:28,945][100720] Avg episode reward: [(0, '4.536')] +[2024-12-28 14:14:30,136][100934] Updated weights for policy 0, policy_version 12757 (0.0008) +[2024-12-28 14:14:31,621][100934] Updated weights for policy 0, policy_version 12767 (0.0008) +[2024-12-28 14:14:33,125][100934] Updated weights for policy 0, policy_version 12777 (0.0007) +[2024-12-28 14:14:33,944][100720] Fps is (10 sec: 27443.2, 60 sec: 24303.0, 300 sec: 24340.0). Total num frames: 52355072. Throughput: 0: 6006.4. Samples: 3074704. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:14:33,945][100720] Avg episode reward: [(0, '4.372')] +[2024-12-28 14:14:34,615][100934] Updated weights for policy 0, policy_version 12787 (0.0007) +[2024-12-28 14:14:36,121][100934] Updated weights for policy 0, policy_version 12797 (0.0006) +[2024-12-28 14:14:37,612][100934] Updated weights for policy 0, policy_version 12807 (0.0007) +[2024-12-28 14:14:38,944][100720] Fps is (10 sec: 27033.5, 60 sec: 24439.6, 300 sec: 24353.8). Total num frames: 52490240. Throughput: 0: 6144.8. Samples: 3115648. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:14:38,945][100720] Avg episode reward: [(0, '4.286')] +[2024-12-28 14:14:39,161][100934] Updated weights for policy 0, policy_version 12817 (0.0007) +[2024-12-28 14:14:40,672][100934] Updated weights for policy 0, policy_version 12827 (0.0007) +[2024-12-28 14:14:42,214][100934] Updated weights for policy 0, policy_version 12837 (0.0008) +[2024-12-28 14:14:43,715][100934] Updated weights for policy 0, policy_version 12847 (0.0007) +[2024-12-28 14:14:43,944][100720] Fps is (10 sec: 27033.6, 60 sec: 24576.0, 300 sec: 24381.6). Total num frames: 52625408. 
Throughput: 0: 6278.2. Samples: 3155886. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:14:43,945][100720] Avg episode reward: [(0, '4.707')] +[2024-12-28 14:14:45,265][100934] Updated weights for policy 0, policy_version 12857 (0.0007) +[2024-12-28 14:14:46,774][100934] Updated weights for policy 0, policy_version 12867 (0.0006) +[2024-12-28 14:14:48,321][100934] Updated weights for policy 0, policy_version 12877 (0.0006) +[2024-12-28 14:14:48,944][100720] Fps is (10 sec: 26624.2, 60 sec: 24849.1, 300 sec: 24437.2). Total num frames: 52756480. Throughput: 0: 6342.3. Samples: 3175994. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:14:48,945][100720] Avg episode reward: [(0, '4.196')] +[2024-12-28 14:14:49,862][100934] Updated weights for policy 0, policy_version 12887 (0.0008) +[2024-12-28 14:14:51,440][100934] Updated weights for policy 0, policy_version 12897 (0.0008) +[2024-12-28 14:14:52,956][100934] Updated weights for policy 0, policy_version 12907 (0.0006) +[2024-12-28 14:14:53,944][100720] Fps is (10 sec: 26623.7, 60 sec: 25190.4, 300 sec: 24534.3). Total num frames: 52891648. Throughput: 0: 6476.8. Samples: 3215852. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:14:53,945][100720] Avg episode reward: [(0, '4.449')] +[2024-12-28 14:14:54,489][100934] Updated weights for policy 0, policy_version 12917 (0.0007) +[2024-12-28 14:14:56,029][100934] Updated weights for policy 0, policy_version 12927 (0.0007) +[2024-12-28 14:14:57,521][100934] Updated weights for policy 0, policy_version 12937 (0.0007) +[2024-12-28 14:14:58,944][100720] Fps is (10 sec: 27033.4, 60 sec: 25531.7, 300 sec: 24617.6). Total num frames: 53026816. Throughput: 0: 6639.5. Samples: 3255988. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:14:58,945][100720] Avg episode reward: [(0, '4.408')] +[2024-12-28 14:14:59,090][100934] Updated weights for policy 0, policy_version 12947 (0.0007) +[2024-12-28 14:15:00,703][100934] Updated weights for policy 0, policy_version 12957 (0.0007) +[2024-12-28 14:15:02,321][100934] Updated weights for policy 0, policy_version 12967 (0.0007) +[2024-12-28 14:15:03,873][100934] Updated weights for policy 0, policy_version 12977 (0.0007) +[2024-12-28 14:15:03,944][100720] Fps is (10 sec: 26214.8, 60 sec: 25736.6, 300 sec: 24603.8). Total num frames: 53153792. Throughput: 0: 6681.9. Samples: 3274862. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:15:03,945][100720] Avg episode reward: [(0, '4.332')] +[2024-12-28 14:15:05,404][100934] Updated weights for policy 0, policy_version 12987 (0.0007) +[2024-12-28 14:15:06,926][100934] Updated weights for policy 0, policy_version 12997 (0.0006) +[2024-12-28 14:15:08,523][100934] Updated weights for policy 0, policy_version 13007 (0.0007) +[2024-12-28 14:15:08,944][100720] Fps is (10 sec: 25805.0, 60 sec: 26077.9, 300 sec: 24617.7). Total num frames: 53284864. Throughput: 0: 6697.6. Samples: 3314484. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:15:08,945][100720] Avg episode reward: [(0, '4.455')] +[2024-12-28 14:15:10,093][100934] Updated weights for policy 0, policy_version 13017 (0.0006) +[2024-12-28 14:15:11,640][100934] Updated weights for policy 0, policy_version 13027 (0.0006) +[2024-12-28 14:15:13,179][100934] Updated weights for policy 0, policy_version 13037 (0.0007) +[2024-12-28 14:15:13,944][100720] Fps is (10 sec: 26623.9, 60 sec: 26487.5, 300 sec: 24645.4). Total num frames: 53420032. Throughput: 0: 6663.2. Samples: 3354122. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:15:13,945][100720] Avg episode reward: [(0, '4.468')] +[2024-12-28 14:15:14,716][100934] Updated weights for policy 0, policy_version 13047 (0.0006) +[2024-12-28 14:15:16,238][100934] Updated weights for policy 0, policy_version 13057 (0.0007) +[2024-12-28 14:15:17,798][100934] Updated weights for policy 0, policy_version 13067 (0.0006) +[2024-12-28 14:15:18,944][100720] Fps is (10 sec: 26624.0, 60 sec: 26760.5, 300 sec: 24645.4). Total num frames: 53551104. Throughput: 0: 6651.2. Samples: 3374006. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:15:18,945][100720] Avg episode reward: [(0, '4.241')] +[2024-12-28 14:15:19,367][100934] Updated weights for policy 0, policy_version 13077 (0.0006) +[2024-12-28 14:15:20,917][100934] Updated weights for policy 0, policy_version 13087 (0.0007) +[2024-12-28 14:15:22,451][100934] Updated weights for policy 0, policy_version 13097 (0.0006) +[2024-12-28 14:15:23,944][100720] Fps is (10 sec: 26214.4, 60 sec: 26692.3, 300 sec: 24659.3). Total num frames: 53682176. Throughput: 0: 6623.7. Samples: 3413712. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:15:23,945][100720] Avg episode reward: [(0, '4.481')] +[2024-12-28 14:15:24,037][100934] Updated weights for policy 0, policy_version 13107 (0.0007) +[2024-12-28 14:15:25,570][100934] Updated weights for policy 0, policy_version 13117 (0.0006) +[2024-12-28 14:15:27,116][100934] Updated weights for policy 0, policy_version 13127 (0.0006) +[2024-12-28 14:15:28,637][100934] Updated weights for policy 0, policy_version 13137 (0.0006) +[2024-12-28 14:15:28,944][100720] Fps is (10 sec: 26214.5, 60 sec: 26555.8, 300 sec: 24714.9). Total num frames: 53813248. Throughput: 0: 6606.5. Samples: 3453176. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 14:15:28,945][100720] Avg episode reward: [(0, '4.602')] +[2024-12-28 14:15:30,210][100934] Updated weights for policy 0, policy_version 13147 (0.0007) +[2024-12-28 14:15:31,812][100934] Updated weights for policy 0, policy_version 13157 (0.0007) +[2024-12-28 14:15:33,339][100934] Updated weights for policy 0, policy_version 13167 (0.0007) +[2024-12-28 14:15:33,944][100720] Fps is (10 sec: 26214.5, 60 sec: 26487.5, 300 sec: 24784.3). Total num frames: 53944320. Throughput: 0: 6597.3. Samples: 3472874. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 14:15:33,945][100720] Avg episode reward: [(0, '4.366')] +[2024-12-28 14:15:34,877][100934] Updated weights for policy 0, policy_version 13177 (0.0007) +[2024-12-28 14:15:36,398][100934] Updated weights for policy 0, policy_version 13187 (0.0007) +[2024-12-28 14:15:37,951][100934] Updated weights for policy 0, policy_version 13197 (0.0007) +[2024-12-28 14:15:38,944][100720] Fps is (10 sec: 26623.9, 60 sec: 26487.5, 300 sec: 24839.8). Total num frames: 54079488. Throughput: 0: 6601.2. Samples: 3512906. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:15:38,945][100720] Avg episode reward: [(0, '4.415')] +[2024-12-28 14:15:39,481][100934] Updated weights for policy 0, policy_version 13207 (0.0006) +[2024-12-28 14:15:41,029][100934] Updated weights for policy 0, policy_version 13217 (0.0006) +[2024-12-28 14:15:42,564][100934] Updated weights for policy 0, policy_version 13227 (0.0006) +[2024-12-28 14:15:43,944][100720] Fps is (10 sec: 26624.0, 60 sec: 26419.2, 300 sec: 24839.8). Total num frames: 54210560. Throughput: 0: 6596.1. Samples: 3552812. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:15:43,945][100720] Avg episode reward: [(0, '4.722')] +[2024-12-28 14:15:44,107][100934] Updated weights for policy 0, policy_version 13237 (0.0007) +[2024-12-28 14:15:45,667][100934] Updated weights for policy 0, policy_version 13247 (0.0007) +[2024-12-28 14:15:47,207][100934] Updated weights for policy 0, policy_version 13257 (0.0006) +[2024-12-28 14:15:48,776][100934] Updated weights for policy 0, policy_version 13267 (0.0007) +[2024-12-28 14:15:48,944][100720] Fps is (10 sec: 26214.3, 60 sec: 26419.2, 300 sec: 24839.8). Total num frames: 54341632. Throughput: 0: 6614.8. Samples: 3572528. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:15:48,945][100720] Avg episode reward: [(0, '4.434')] +[2024-12-28 14:15:50,568][100934] Updated weights for policy 0, policy_version 13277 (0.0008) +[2024-12-28 14:15:52,342][100934] Updated weights for policy 0, policy_version 13287 (0.0008) +[2024-12-28 14:15:53,944][100720] Fps is (10 sec: 24575.5, 60 sec: 26077.8, 300 sec: 24784.3). Total num frames: 54456320. Throughput: 0: 6531.3. Samples: 3608394. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:15:53,946][100720] Avg episode reward: [(0, '4.369')] +[2024-12-28 14:15:53,951][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000013295_54456320.pth... +[2024-12-28 14:15:53,988][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000011838_48488448.pth +[2024-12-28 14:15:54,165][100934] Updated weights for policy 0, policy_version 13297 (0.0007) +[2024-12-28 14:15:55,977][100934] Updated weights for policy 0, policy_version 13307 (0.0008) +[2024-12-28 14:15:57,742][100934] Updated weights for policy 0, policy_version 13317 (0.0008) +[2024-12-28 14:15:58,944][100720] Fps is (10 sec: 22937.7, 60 sec: 25736.6, 300 sec: 24756.5). Total num frames: 54571008. Throughput: 0: 6410.8. Samples: 3642606. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:15:58,945][100720] Avg episode reward: [(0, '4.380')] +[2024-12-28 14:15:59,543][100934] Updated weights for policy 0, policy_version 13327 (0.0009) +[2024-12-28 14:16:01,815][100934] Updated weights for policy 0, policy_version 13337 (0.0007) +[2024-12-28 14:16:03,513][100934] Updated weights for policy 0, policy_version 13347 (0.0006) +[2024-12-28 14:16:03,944][100720] Fps is (10 sec: 22118.6, 60 sec: 25395.2, 300 sec: 24742.7). Total num frames: 54677504. Throughput: 0: 6287.4. Samples: 3656940. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:16:03,945][100720] Avg episode reward: [(0, '4.363')] +[2024-12-28 14:16:05,140][100934] Updated weights for policy 0, policy_version 13357 (0.0007) +[2024-12-28 14:16:06,717][100934] Updated weights for policy 0, policy_version 13367 (0.0007) +[2024-12-28 14:16:08,382][100934] Updated weights for policy 0, policy_version 13377 (0.0007) +[2024-12-28 14:16:08,944][100720] Fps is (10 sec: 23347.0, 60 sec: 25326.9, 300 sec: 24784.3). Total num frames: 54804480. Throughput: 0: 6241.4. Samples: 3694576. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:16:08,945][100720] Avg episode reward: [(0, '4.435')] +[2024-12-28 14:16:10,177][100934] Updated weights for policy 0, policy_version 13387 (0.0008) +[2024-12-28 14:16:12,096][100934] Updated weights for policy 0, policy_version 13397 (0.0008) +[2024-12-28 14:16:13,944][100720] Fps is (10 sec: 23347.3, 60 sec: 24849.1, 300 sec: 24701.0). Total num frames: 54910976. Throughput: 0: 6099.2. Samples: 3727642. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:16:13,946][100720] Avg episode reward: [(0, '4.413')] +[2024-12-28 14:16:14,029][100934] Updated weights for policy 0, policy_version 13407 (0.0008) +[2024-12-28 14:16:15,859][100934] Updated weights for policy 0, policy_version 13417 (0.0007) +[2024-12-28 14:16:17,672][100934] Updated weights for policy 0, policy_version 13427 (0.0007) +[2024-12-28 14:16:18,944][100720] Fps is (10 sec: 22118.1, 60 sec: 24575.9, 300 sec: 24645.4). Total num frames: 55025664. Throughput: 0: 6033.4. Samples: 3744376. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:16:18,945][100720] Avg episode reward: [(0, '4.577')] +[2024-12-28 14:16:19,380][100934] Updated weights for policy 0, policy_version 13437 (0.0007) +[2024-12-28 14:16:20,998][100934] Updated weights for policy 0, policy_version 13447 (0.0008) +[2024-12-28 14:16:22,582][100934] Updated weights for policy 0, policy_version 13457 (0.0006) +[2024-12-28 14:16:23,944][100720] Fps is (10 sec: 24166.4, 60 sec: 24507.7, 300 sec: 24631.5). Total num frames: 55152640. Throughput: 0: 5967.2. Samples: 3781430. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:16:23,945][100720] Avg episode reward: [(0, '4.277')] +[2024-12-28 14:16:24,124][100934] Updated weights for policy 0, policy_version 13467 (0.0008) +[2024-12-28 14:16:25,693][100934] Updated weights for policy 0, policy_version 13477 (0.0006) +[2024-12-28 14:16:27,211][100934] Updated weights for policy 0, policy_version 13487 (0.0006) +[2024-12-28 14:16:28,718][100934] Updated weights for policy 0, policy_version 13497 (0.0007) +[2024-12-28 14:16:28,944][100720] Fps is (10 sec: 26214.9, 60 sec: 24576.0, 300 sec: 24645.4). Total num frames: 55287808. Throughput: 0: 5971.2. Samples: 3821514. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:16:28,945][100720] Avg episode reward: [(0, '4.572')] +[2024-12-28 14:16:30,314][100934] Updated weights for policy 0, policy_version 13507 (0.0008) +[2024-12-28 14:16:31,877][100934] Updated weights for policy 0, policy_version 13517 (0.0006) +[2024-12-28 14:16:33,426][100934] Updated weights for policy 0, policy_version 13527 (0.0007) +[2024-12-28 14:16:33,944][100720] Fps is (10 sec: 26623.0, 60 sec: 24575.8, 300 sec: 24687.0). Total num frames: 55418880. Throughput: 0: 5963.3. Samples: 3840880. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:16:33,945][100720] Avg episode reward: [(0, '4.574')] +[2024-12-28 14:16:35,004][100934] Updated weights for policy 0, policy_version 13537 (0.0007) +[2024-12-28 14:16:36,573][100934] Updated weights for policy 0, policy_version 13547 (0.0008) +[2024-12-28 14:16:38,103][100934] Updated weights for policy 0, policy_version 13557 (0.0006) +[2024-12-28 14:16:38,944][100720] Fps is (10 sec: 26214.3, 60 sec: 24507.7, 300 sec: 24756.5). Total num frames: 55549952. Throughput: 0: 6045.9. Samples: 3880460. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:16:38,945][100720] Avg episode reward: [(0, '4.356')] +[2024-12-28 14:16:39,639][100934] Updated weights for policy 0, policy_version 13567 (0.0007) +[2024-12-28 14:16:41,198][100934] Updated weights for policy 0, policy_version 13577 (0.0007) +[2024-12-28 14:16:42,779][100934] Updated weights for policy 0, policy_version 13587 (0.0008) +[2024-12-28 14:16:43,944][100720] Fps is (10 sec: 26215.5, 60 sec: 24507.7, 300 sec: 24784.3). Total num frames: 55681024. Throughput: 0: 6168.5. Samples: 3920188. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:16:43,945][100720] Avg episode reward: [(0, '4.547')] +[2024-12-28 14:16:44,283][100934] Updated weights for policy 0, policy_version 13597 (0.0006) +[2024-12-28 14:16:45,823][100934] Updated weights for policy 0, policy_version 13607 (0.0007) +[2024-12-28 14:16:47,375][100934] Updated weights for policy 0, policy_version 13617 (0.0007) +[2024-12-28 14:16:48,917][100934] Updated weights for policy 0, policy_version 13627 (0.0007) +[2024-12-28 14:16:48,944][100720] Fps is (10 sec: 26622.8, 60 sec: 24575.8, 300 sec: 24839.8). Total num frames: 55816192. Throughput: 0: 6291.1. Samples: 3940042. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:16:48,945][100720] Avg episode reward: [(0, '4.655')] +[2024-12-28 14:16:50,480][100934] Updated weights for policy 0, policy_version 13637 (0.0007) +[2024-12-28 14:16:52,044][100934] Updated weights for policy 0, policy_version 13647 (0.0006) +[2024-12-28 14:16:53,606][100934] Updated weights for policy 0, policy_version 13657 (0.0007) +[2024-12-28 14:16:53,944][100720] Fps is (10 sec: 26623.5, 60 sec: 24849.1, 300 sec: 24923.1). Total num frames: 55947264. Throughput: 0: 6336.0. Samples: 3979696. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:16:53,945][100720] Avg episode reward: [(0, '4.318')] +[2024-12-28 14:16:55,162][100934] Updated weights for policy 0, policy_version 13667 (0.0006) +[2024-12-28 14:16:56,715][100934] Updated weights for policy 0, policy_version 13677 (0.0007) +[2024-12-28 14:16:58,238][100934] Updated weights for policy 0, policy_version 13687 (0.0007) +[2024-12-28 14:16:58,944][100720] Fps is (10 sec: 26215.4, 60 sec: 25122.1, 300 sec: 24950.9). Total num frames: 56078336. Throughput: 0: 6478.7. Samples: 4019184. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:16:58,945][100720] Avg episode reward: [(0, '4.519')] +[2024-12-28 14:16:59,787][100934] Updated weights for policy 0, policy_version 13697 (0.0007) +[2024-12-28 14:17:01,344][100934] Updated weights for policy 0, policy_version 13707 (0.0007) +[2024-12-28 14:17:02,916][100934] Updated weights for policy 0, policy_version 13717 (0.0007) +[2024-12-28 14:17:03,944][100720] Fps is (10 sec: 25805.0, 60 sec: 25463.5, 300 sec: 24937.0). Total num frames: 56205312. Throughput: 0: 6546.3. Samples: 4038958. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:17:03,945][100720] Avg episode reward: [(0, '4.264')] +[2024-12-28 14:17:04,726][100934] Updated weights for policy 0, policy_version 13727 (0.0008) +[2024-12-28 14:17:06,572][100934] Updated weights for policy 0, policy_version 13737 (0.0009) +[2024-12-28 14:17:08,404][100934] Updated weights for policy 0, policy_version 13747 (0.0008) +[2024-12-28 14:17:08,944][100720] Fps is (10 sec: 23757.0, 60 sec: 25190.4, 300 sec: 24867.6). Total num frames: 56315904. Throughput: 0: 6490.7. Samples: 4073512. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:17:08,945][100720] Avg episode reward: [(0, '4.427')] +[2024-12-28 14:17:10,262][100934] Updated weights for policy 0, policy_version 13757 (0.0008) +[2024-12-28 14:17:12,113][100934] Updated weights for policy 0, policy_version 13767 (0.0009) +[2024-12-28 14:17:13,944][100720] Fps is (10 sec: 22118.3, 60 sec: 25258.6, 300 sec: 24798.2). Total num frames: 56426496. Throughput: 0: 6329.8. Samples: 4106356. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:17:13,945][100720] Avg episode reward: [(0, '4.621')] +[2024-12-28 14:17:14,023][100934] Updated weights for policy 0, policy_version 13777 (0.0009) +[2024-12-28 14:17:15,618][100934] Updated weights for policy 0, policy_version 13787 (0.0006) +[2024-12-28 14:17:17,153][100934] Updated weights for policy 0, policy_version 13797 (0.0008) +[2024-12-28 14:17:18,694][100934] Updated weights for policy 0, policy_version 13807 (0.0008) +[2024-12-28 14:17:18,944][100720] Fps is (10 sec: 24166.3, 60 sec: 25531.8, 300 sec: 24825.9). Total num frames: 56557568. Throughput: 0: 6333.0. Samples: 4125862. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:17:18,945][100720] Avg episode reward: [(0, '4.287')] +[2024-12-28 14:17:20,422][100934] Updated weights for policy 0, policy_version 13817 (0.0008) +[2024-12-28 14:17:22,284][100934] Updated weights for policy 0, policy_version 13827 (0.0008) +[2024-12-28 14:17:23,944][100720] Fps is (10 sec: 24166.7, 60 sec: 25258.7, 300 sec: 24825.9). Total num frames: 56668160. Throughput: 0: 6244.2. Samples: 4161448. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:17:23,945][100720] Avg episode reward: [(0, '4.324')] +[2024-12-28 14:17:24,242][100934] Updated weights for policy 0, policy_version 13837 (0.0010) +[2024-12-28 14:17:26,146][100934] Updated weights for policy 0, policy_version 13847 (0.0008) +[2024-12-28 14:17:27,942][100934] Updated weights for policy 0, policy_version 13857 (0.0008) +[2024-12-28 14:17:28,944][100720] Fps is (10 sec: 22118.5, 60 sec: 24849.1, 300 sec: 24812.0). Total num frames: 56778752. Throughput: 0: 6096.1. Samples: 4194514. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:17:28,945][100720] Avg episode reward: [(0, '4.376')] +[2024-12-28 14:17:29,732][100934] Updated weights for policy 0, policy_version 13867 (0.0008) +[2024-12-28 14:17:31,356][100934] Updated weights for policy 0, policy_version 13877 (0.0007) +[2024-12-28 14:17:32,880][100934] Updated weights for policy 0, policy_version 13887 (0.0007) +[2024-12-28 14:17:33,944][100720] Fps is (10 sec: 23756.3, 60 sec: 24780.9, 300 sec: 24798.1). Total num frames: 56905728. Throughput: 0: 6068.5. Samples: 4213124. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:17:33,945][100720] Avg episode reward: [(0, '4.567')] +[2024-12-28 14:17:34,456][100934] Updated weights for policy 0, policy_version 13897 (0.0006) +[2024-12-28 14:17:35,964][100934] Updated weights for policy 0, policy_version 13907 (0.0006) +[2024-12-28 14:17:37,528][100934] Updated weights for policy 0, policy_version 13917 (0.0007) +[2024-12-28 14:17:38,944][100720] Fps is (10 sec: 26214.5, 60 sec: 24849.1, 300 sec: 24867.6). Total num frames: 57040896. Throughput: 0: 6074.7. Samples: 4253054. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:17:38,945][100720] Avg episode reward: [(0, '4.343')] +[2024-12-28 14:17:39,074][100934] Updated weights for policy 0, policy_version 13927 (0.0006) +[2024-12-28 14:17:40,605][100934] Updated weights for policy 0, policy_version 13937 (0.0007) +[2024-12-28 14:17:42,151][100934] Updated weights for policy 0, policy_version 13947 (0.0006) +[2024-12-28 14:17:43,691][100934] Updated weights for policy 0, policy_version 13957 (0.0007) +[2024-12-28 14:17:43,944][100720] Fps is (10 sec: 26624.4, 60 sec: 24849.0, 300 sec: 24937.0). Total num frames: 57171968. Throughput: 0: 6078.3. Samples: 4292708. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:17:43,945][100720] Avg episode reward: [(0, '4.616')] +[2024-12-28 14:17:45,236][100934] Updated weights for policy 0, policy_version 13967 (0.0006) +[2024-12-28 14:17:46,815][100934] Updated weights for policy 0, policy_version 13977 (0.0008) +[2024-12-28 14:17:48,446][100934] Updated weights for policy 0, policy_version 13987 (0.0007) +[2024-12-28 14:17:48,944][100720] Fps is (10 sec: 26214.4, 60 sec: 24781.0, 300 sec: 24964.8). Total num frames: 57303040. Throughput: 0: 6076.7. Samples: 4312410. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:17:48,945][100720] Avg episode reward: [(0, '4.601')] +[2024-12-28 14:17:49,970][100934] Updated weights for policy 0, policy_version 13997 (0.0006) +[2024-12-28 14:17:51,599][100934] Updated weights for policy 0, policy_version 14007 (0.0008) +[2024-12-28 14:17:53,211][100934] Updated weights for policy 0, policy_version 14017 (0.0006) +[2024-12-28 14:17:53,944][100720] Fps is (10 sec: 25804.7, 60 sec: 24712.6, 300 sec: 24950.9). Total num frames: 57430016. Throughput: 0: 6164.8. Samples: 4350928. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:17:53,945][100720] Avg episode reward: [(0, '4.328')] +[2024-12-28 14:17:53,950][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000014021_57430016.pth... +[2024-12-28 14:17:53,981][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000012544_51380224.pth +[2024-12-28 14:17:54,858][100934] Updated weights for policy 0, policy_version 14027 (0.0008) +[2024-12-28 14:17:56,406][100934] Updated weights for policy 0, policy_version 14037 (0.0006) +[2024-12-28 14:17:57,997][100934] Updated weights for policy 0, policy_version 14047 (0.0008) +[2024-12-28 14:17:58,944][100720] Fps is (10 sec: 25394.9, 60 sec: 24644.3, 300 sec: 25020.3). Total num frames: 57556992. Throughput: 0: 6271.2. Samples: 4388558. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:17:58,945][100720] Avg episode reward: [(0, '4.263')] +[2024-12-28 14:17:59,834][100934] Updated weights for policy 0, policy_version 14057 (0.0009) +[2024-12-28 14:18:01,753][100934] Updated weights for policy 0, policy_version 14067 (0.0008) +[2024-12-28 14:18:03,639][100934] Updated weights for policy 0, policy_version 14077 (0.0008) +[2024-12-28 14:18:03,944][100720] Fps is (10 sec: 23347.2, 60 sec: 24302.9, 300 sec: 25006.4). Total num frames: 57663488. Throughput: 0: 6202.3. Samples: 4404964. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:18:03,945][100720] Avg episode reward: [(0, '4.770')] +[2024-12-28 14:18:05,506][100934] Updated weights for policy 0, policy_version 14087 (0.0008) +[2024-12-28 14:18:07,387][100934] Updated weights for policy 0, policy_version 14097 (0.0008) +[2024-12-28 14:18:08,944][100720] Fps is (10 sec: 21708.9, 60 sec: 24302.9, 300 sec: 24992.5). Total num frames: 57774080. Throughput: 0: 6138.0. Samples: 4437658. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:18:08,945][100720] Avg episode reward: [(0, '4.200')] +[2024-12-28 14:18:09,132][100934] Updated weights for policy 0, policy_version 14107 (0.0008) +[2024-12-28 14:18:10,693][100934] Updated weights for policy 0, policy_version 14117 (0.0007) +[2024-12-28 14:18:12,260][100934] Updated weights for policy 0, policy_version 14127 (0.0007) +[2024-12-28 14:18:13,944][100720] Fps is (10 sec: 23757.0, 60 sec: 24576.0, 300 sec: 24964.8). Total num frames: 57901056. Throughput: 0: 6224.8. Samples: 4474632. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:18:13,945][100720] Avg episode reward: [(0, '4.373')] +[2024-12-28 14:18:14,070][100934] Updated weights for policy 0, policy_version 14137 (0.0009) +[2024-12-28 14:18:15,944][100934] Updated weights for policy 0, policy_version 14147 (0.0008) +[2024-12-28 14:18:17,779][100934] Updated weights for policy 0, policy_version 14157 (0.0008) +[2024-12-28 14:18:18,944][100720] Fps is (10 sec: 23756.6, 60 sec: 24234.6, 300 sec: 24881.5). Total num frames: 58011648. Throughput: 0: 6180.8. Samples: 4491258. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:18:18,945][100720] Avg episode reward: [(0, '4.483')] +[2024-12-28 14:18:19,664][100934] Updated weights for policy 0, policy_version 14167 (0.0008) +[2024-12-28 14:18:21,508][100934] Updated weights for policy 0, policy_version 14177 (0.0007) +[2024-12-28 14:18:23,321][100934] Updated weights for policy 0, policy_version 14187 (0.0008) +[2024-12-28 14:18:23,944][100720] Fps is (10 sec: 22118.5, 60 sec: 24234.7, 300 sec: 24867.6). Total num frames: 58122240. Throughput: 0: 6031.1. Samples: 4524454. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:18:23,945][100720] Avg episode reward: [(0, '4.423')] +[2024-12-28 14:18:24,950][100934] Updated weights for policy 0, policy_version 14197 (0.0008) +[2024-12-28 14:18:26,471][100934] Updated weights for policy 0, policy_version 14207 (0.0007) +[2024-12-28 14:18:28,005][100934] Updated weights for policy 0, policy_version 14217 (0.0006) +[2024-12-28 14:18:28,944][100720] Fps is (10 sec: 24576.4, 60 sec: 24644.3, 300 sec: 24950.9). Total num frames: 58257408. Throughput: 0: 6017.3. Samples: 4563486. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:18:28,945][100720] Avg episode reward: [(0, '4.406')] +[2024-12-28 14:18:29,521][100934] Updated weights for policy 0, policy_version 14227 (0.0007) +[2024-12-28 14:18:31,079][100934] Updated weights for policy 0, policy_version 14237 (0.0007) +[2024-12-28 14:18:32,608][100934] Updated weights for policy 0, policy_version 14247 (0.0007) +[2024-12-28 14:18:33,944][100720] Fps is (10 sec: 26623.7, 60 sec: 24712.6, 300 sec: 24964.8). Total num frames: 58388480. Throughput: 0: 6024.2. Samples: 4583500. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 14:18:33,945][100720] Avg episode reward: [(0, '4.363')] +[2024-12-28 14:18:34,178][100934] Updated weights for policy 0, policy_version 14257 (0.0007) +[2024-12-28 14:18:35,827][100934] Updated weights for policy 0, policy_version 14267 (0.0008) +[2024-12-28 14:18:37,620][100934] Updated weights for policy 0, policy_version 14277 (0.0008) +[2024-12-28 14:18:38,944][100720] Fps is (10 sec: 24985.6, 60 sec: 24439.5, 300 sec: 24937.0). Total num frames: 58507264. Throughput: 0: 5996.1. Samples: 4620752. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 14:18:38,945][100720] Avg episode reward: [(0, '4.495')] +[2024-12-28 14:18:39,376][100934] Updated weights for policy 0, policy_version 14287 (0.0008) +[2024-12-28 14:18:41,178][100934] Updated weights for policy 0, policy_version 14297 (0.0009) +[2024-12-28 14:18:43,011][100934] Updated weights for policy 0, policy_version 14307 (0.0009) +[2024-12-28 14:18:43,944][100720] Fps is (10 sec: 23347.1, 60 sec: 24166.4, 300 sec: 24937.0). Total num frames: 58621952. Throughput: 0: 5913.6. Samples: 4654672. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) +[2024-12-28 14:18:43,945][100720] Avg episode reward: [(0, '4.374')] +[2024-12-28 14:18:44,862][100934] Updated weights for policy 0, policy_version 14317 (0.0008) +[2024-12-28 14:18:46,641][100934] Updated weights for policy 0, policy_version 14327 (0.0008) +[2024-12-28 14:18:48,169][100934] Updated weights for policy 0, policy_version 14337 (0.0006) +[2024-12-28 14:18:48,944][100720] Fps is (10 sec: 23756.6, 60 sec: 24029.8, 300 sec: 24964.8). Total num frames: 58744832. Throughput: 0: 5933.1. Samples: 4671952. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) +[2024-12-28 14:18:48,946][100720] Avg episode reward: [(0, '4.525')] +[2024-12-28 14:18:49,697][100934] Updated weights for policy 0, policy_version 14347 (0.0006) +[2024-12-28 14:18:51,240][100934] Updated weights for policy 0, policy_version 14357 (0.0006) +[2024-12-28 14:18:52,807][100934] Updated weights for policy 0, policy_version 14367 (0.0007) +[2024-12-28 14:18:53,944][100720] Fps is (10 sec: 25395.5, 60 sec: 24098.2, 300 sec: 25020.3). Total num frames: 58875904. Throughput: 0: 6097.4. Samples: 4712042. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:18:53,945][100720] Avg episode reward: [(0, '4.377')] +[2024-12-28 14:18:54,357][100934] Updated weights for policy 0, policy_version 14377 (0.0006) +[2024-12-28 14:18:55,895][100934] Updated weights for policy 0, policy_version 14387 (0.0006) +[2024-12-28 14:18:57,421][100934] Updated weights for policy 0, policy_version 14397 (0.0006) +[2024-12-28 14:18:58,944][100720] Fps is (10 sec: 26214.6, 60 sec: 24166.5, 300 sec: 25075.9). Total num frames: 59006976. Throughput: 0: 6161.4. Samples: 4751894. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:18:58,945][100720] Avg episode reward: [(0, '4.317')] +[2024-12-28 14:18:58,948][100934] Updated weights for policy 0, policy_version 14407 (0.0007) +[2024-12-28 14:19:00,498][100934] Updated weights for policy 0, policy_version 14417 (0.0007) +[2024-12-28 14:19:02,023][100934] Updated weights for policy 0, policy_version 14427 (0.0006) +[2024-12-28 14:19:03,567][100934] Updated weights for policy 0, policy_version 14437 (0.0007) +[2024-12-28 14:19:03,944][100720] Fps is (10 sec: 26623.8, 60 sec: 24644.3, 300 sec: 25159.2). Total num frames: 59142144. Throughput: 0: 6238.8. Samples: 4772004. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:19:03,945][100720] Avg episode reward: [(0, '4.632')] +[2024-12-28 14:19:05,140][100934] Updated weights for policy 0, policy_version 14447 (0.0007) +[2024-12-28 14:19:06,695][100934] Updated weights for policy 0, policy_version 14457 (0.0006) +[2024-12-28 14:19:08,279][100934] Updated weights for policy 0, policy_version 14467 (0.0007) +[2024-12-28 14:19:08,944][100720] Fps is (10 sec: 26624.0, 60 sec: 24985.6, 300 sec: 25228.6). Total num frames: 59273216. Throughput: 0: 6375.3. Samples: 4811342. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:19:08,945][100720] Avg episode reward: [(0, '4.246')] +[2024-12-28 14:19:09,842][100934] Updated weights for policy 0, policy_version 14477 (0.0007) +[2024-12-28 14:19:11,406][100934] Updated weights for policy 0, policy_version 14487 (0.0007) +[2024-12-28 14:19:12,962][100934] Updated weights for policy 0, policy_version 14497 (0.0006) +[2024-12-28 14:19:13,944][100720] Fps is (10 sec: 26214.3, 60 sec: 25053.8, 300 sec: 25284.1). Total num frames: 59404288. Throughput: 0: 6375.5. Samples: 4850384. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2024-12-28 14:19:13,945][100720] Avg episode reward: [(0, '4.395')] +[2024-12-28 14:19:14,544][100934] Updated weights for policy 0, policy_version 14507 (0.0006) +[2024-12-28 14:19:16,353][100934] Updated weights for policy 0, policy_version 14517 (0.0008) +[2024-12-28 14:19:18,171][100934] Updated weights for policy 0, policy_version 14527 (0.0010) +[2024-12-28 14:19:18,944][100720] Fps is (10 sec: 24575.9, 60 sec: 25122.2, 300 sec: 25214.7). Total num frames: 59518976. Throughput: 0: 6323.1. Samples: 4868038. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2024-12-28 14:19:18,945][100720] Avg episode reward: [(0, '4.379')] +[2024-12-28 14:19:20,059][100934] Updated weights for policy 0, policy_version 14537 (0.0008) +[2024-12-28 14:19:21,935][100934] Updated weights for policy 0, policy_version 14547 (0.0008) +[2024-12-28 14:19:23,799][100934] Updated weights for policy 0, policy_version 14557 (0.0008) +[2024-12-28 14:19:23,944][100720] Fps is (10 sec: 22118.6, 60 sec: 25053.9, 300 sec: 25103.6). Total num frames: 59625472. Throughput: 0: 6222.4. Samples: 4900762. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:19:23,945][100720] Avg episode reward: [(0, '4.337')] +[2024-12-28 14:19:25,752][100934] Updated weights for policy 0, policy_version 14567 (0.0008) +[2024-12-28 14:19:27,317][100934] Updated weights for policy 0, policy_version 14577 (0.0008) +[2024-12-28 14:19:28,878][100934] Updated weights for policy 0, policy_version 14587 (0.0007) +[2024-12-28 14:19:28,944][100720] Fps is (10 sec: 22937.7, 60 sec: 24849.1, 300 sec: 25062.0). Total num frames: 59748352. Throughput: 0: 6266.2. Samples: 4936648. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:19:28,945][100720] Avg episode reward: [(0, '4.626')] +[2024-12-28 14:19:30,440][100934] Updated weights for policy 0, policy_version 14597 (0.0007) +[2024-12-28 14:19:31,965][100934] Updated weights for policy 0, policy_version 14607 (0.0007) +[2024-12-28 14:19:33,500][100934] Updated weights for policy 0, policy_version 14617 (0.0006) +[2024-12-28 14:19:33,944][100720] Fps is (10 sec: 25394.9, 60 sec: 24849.1, 300 sec: 25048.1). Total num frames: 59879424. Throughput: 0: 6324.4. Samples: 4956552. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:19:33,945][100720] Avg episode reward: [(0, '4.301')] +[2024-12-28 14:19:35,092][100934] Updated weights for policy 0, policy_version 14627 (0.0008) +[2024-12-28 14:19:36,647][100934] Updated weights for policy 0, policy_version 14637 (0.0007) +[2024-12-28 14:19:38,191][100934] Updated weights for policy 0, policy_version 14647 (0.0007) +[2024-12-28 14:19:38,944][100720] Fps is (10 sec: 26214.4, 60 sec: 25053.9, 300 sec: 25034.2). Total num frames: 60010496. Throughput: 0: 6309.3. Samples: 4995960. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:19:38,945][100720] Avg episode reward: [(0, '4.463')] +[2024-12-28 14:19:39,751][100934] Updated weights for policy 0, policy_version 14657 (0.0007) +[2024-12-28 14:19:41,405][100934] Updated weights for policy 0, policy_version 14667 (0.0009) +[2024-12-28 14:19:43,252][100934] Updated weights for policy 0, policy_version 14677 (0.0010) +[2024-12-28 14:19:43,944][100720] Fps is (10 sec: 24985.5, 60 sec: 25122.1, 300 sec: 24992.5). Total num frames: 60129280. Throughput: 0: 6232.1. Samples: 5032338. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:19:43,945][100720] Avg episode reward: [(0, '4.565')] +[2024-12-28 14:19:45,083][100934] Updated weights for policy 0, policy_version 14687 (0.0007) +[2024-12-28 14:19:46,957][100934] Updated weights for policy 0, policy_version 14697 (0.0007) +[2024-12-28 14:19:48,859][100934] Updated weights for policy 0, policy_version 14707 (0.0008) +[2024-12-28 14:19:48,944][100720] Fps is (10 sec: 22937.5, 60 sec: 24917.3, 300 sec: 24909.2). Total num frames: 60239872. Throughput: 0: 6152.4. Samples: 5048862. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:19:48,945][100720] Avg episode reward: [(0, '4.119')] +[2024-12-28 14:19:50,763][100934] Updated weights for policy 0, policy_version 14717 (0.0007) +[2024-12-28 14:19:52,451][100934] Updated weights for policy 0, policy_version 14727 (0.0007) +[2024-12-28 14:19:53,944][100720] Fps is (10 sec: 22937.7, 60 sec: 24712.5, 300 sec: 24853.7). Total num frames: 60358656. Throughput: 0: 6032.4. Samples: 5082800. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:19:53,945][100720] Avg episode reward: [(0, '4.506')] +[2024-12-28 14:19:53,950][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000014736_60358656.pth... +[2024-12-28 14:19:53,982][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000013295_54456320.pth +[2024-12-28 14:19:54,022][100934] Updated weights for policy 0, policy_version 14737 (0.0007) +[2024-12-28 14:19:55,655][100934] Updated weights for policy 0, policy_version 14747 (0.0008) +[2024-12-28 14:19:57,213][100934] Updated weights for policy 0, policy_version 14757 (0.0008) +[2024-12-28 14:19:58,775][100934] Updated weights for policy 0, policy_version 14767 (0.0007) +[2024-12-28 14:19:58,944][100720] Fps is (10 sec: 24985.4, 60 sec: 24712.5, 300 sec: 24867.6). Total num frames: 60489728. Throughput: 0: 6032.2. Samples: 5121834. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:19:58,946][100720] Avg episode reward: [(0, '4.350')] +[2024-12-28 14:20:00,328][100934] Updated weights for policy 0, policy_version 14777 (0.0007) +[2024-12-28 14:20:01,868][100934] Updated weights for policy 0, policy_version 14787 (0.0007) +[2024-12-28 14:20:03,463][100934] Updated weights for policy 0, policy_version 14797 (0.0006) +[2024-12-28 14:20:03,944][100720] Fps is (10 sec: 26214.7, 60 sec: 24644.3, 300 sec: 24867.6). Total num frames: 60620800. Throughput: 0: 6079.3. Samples: 5141606. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:20:03,945][100720] Avg episode reward: [(0, '4.386')] +[2024-12-28 14:20:05,032][100934] Updated weights for policy 0, policy_version 14807 (0.0007) +[2024-12-28 14:20:06,633][100934] Updated weights for policy 0, policy_version 14817 (0.0007) +[2024-12-28 14:20:08,221][100934] Updated weights for policy 0, policy_version 14827 (0.0006) +[2024-12-28 14:20:08,944][100720] Fps is (10 sec: 25805.0, 60 sec: 24576.0, 300 sec: 24839.8). Total num frames: 60747776. Throughput: 0: 6207.6. Samples: 5180104. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:20:08,945][100720] Avg episode reward: [(0, '4.421')] +[2024-12-28 14:20:09,751][100934] Updated weights for policy 0, policy_version 14837 (0.0007) +[2024-12-28 14:20:11,370][100934] Updated weights for policy 0, policy_version 14847 (0.0008) +[2024-12-28 14:20:12,949][100934] Updated weights for policy 0, policy_version 14857 (0.0007) +[2024-12-28 14:20:13,944][100720] Fps is (10 sec: 25804.8, 60 sec: 24576.0, 300 sec: 24839.8). Total num frames: 60878848. Throughput: 0: 6277.6. Samples: 5219138. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:20:13,945][100720] Avg episode reward: [(0, '4.407')] +[2024-12-28 14:20:14,532][100934] Updated weights for policy 0, policy_version 14867 (0.0008) +[2024-12-28 14:20:16,073][100934] Updated weights for policy 0, policy_version 14877 (0.0006) +[2024-12-28 14:20:17,885][100934] Updated weights for policy 0, policy_version 14887 (0.0009) +[2024-12-28 14:20:18,944][100720] Fps is (10 sec: 24985.5, 60 sec: 24644.2, 300 sec: 24798.2). Total num frames: 60997632. Throughput: 0: 6259.9. Samples: 5238246. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:20:18,945][100720] Avg episode reward: [(0, '4.308')] +[2024-12-28 14:20:19,686][100934] Updated weights for policy 0, policy_version 14897 (0.0007) +[2024-12-28 14:20:21,528][100934] Updated weights for policy 0, policy_version 14907 (0.0008) +[2024-12-28 14:20:23,305][100934] Updated weights for policy 0, policy_version 14917 (0.0008) +[2024-12-28 14:20:23,944][100720] Fps is (10 sec: 23347.1, 60 sec: 24780.8, 300 sec: 24742.6). Total num frames: 61112320. Throughput: 0: 6137.3. Samples: 5272138. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:20:23,945][100720] Avg episode reward: [(0, '4.375')] +[2024-12-28 14:20:25,152][100934] Updated weights for policy 0, policy_version 14927 (0.0008) +[2024-12-28 14:20:27,082][100934] Updated weights for policy 0, policy_version 14937 (0.0008) +[2024-12-28 14:20:28,679][100934] Updated weights for policy 0, policy_version 14947 (0.0006) +[2024-12-28 14:20:28,944][100720] Fps is (10 sec: 22937.6, 60 sec: 24644.2, 300 sec: 24687.1). Total num frames: 61227008. Throughput: 0: 6094.5. Samples: 5306590. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:20:28,945][100720] Avg episode reward: [(0, '4.461')] +[2024-12-28 14:20:30,232][100934] Updated weights for policy 0, policy_version 14957 (0.0006) +[2024-12-28 14:20:31,782][100934] Updated weights for policy 0, policy_version 14967 (0.0007) +[2024-12-28 14:20:33,317][100934] Updated weights for policy 0, policy_version 14977 (0.0006) +[2024-12-28 14:20:33,944][100720] Fps is (10 sec: 24576.0, 60 sec: 24644.3, 300 sec: 24673.2). Total num frames: 61358080. Throughput: 0: 6166.7. Samples: 5326364. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:20:33,945][100720] Avg episode reward: [(0, '4.588')] +[2024-12-28 14:20:34,915][100934] Updated weights for policy 0, policy_version 14987 (0.0008) +[2024-12-28 14:20:36,467][100934] Updated weights for policy 0, policy_version 14997 (0.0006) +[2024-12-28 14:20:38,043][100934] Updated weights for policy 0, policy_version 15007 (0.0009) +[2024-12-28 14:20:38,944][100720] Fps is (10 sec: 26214.5, 60 sec: 24644.3, 300 sec: 24673.2). Total num frames: 61489152. Throughput: 0: 6283.2. Samples: 5365544. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:20:38,945][100720] Avg episode reward: [(0, '4.504')] +[2024-12-28 14:20:39,637][100934] Updated weights for policy 0, policy_version 15017 (0.0006) +[2024-12-28 14:20:41,287][100934] Updated weights for policy 0, policy_version 15027 (0.0008) +[2024-12-28 14:20:43,099][100934] Updated weights for policy 0, policy_version 15037 (0.0008) +[2024-12-28 14:20:43,944][100720] Fps is (10 sec: 24985.6, 60 sec: 24644.3, 300 sec: 24631.5). Total num frames: 61607936. Throughput: 0: 6218.6. Samples: 5401670. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:20:43,945][100720] Avg episode reward: [(0, '4.370')] +[2024-12-28 14:20:44,958][100934] Updated weights for policy 0, policy_version 15047 (0.0010) +[2024-12-28 14:20:46,794][100934] Updated weights for policy 0, policy_version 15057 (0.0008) +[2024-12-28 14:20:48,687][100934] Updated weights for policy 0, policy_version 15067 (0.0008) +[2024-12-28 14:20:48,944][100720] Fps is (10 sec: 22937.5, 60 sec: 24644.2, 300 sec: 24617.7). Total num frames: 61718528. Throughput: 0: 6148.2. Samples: 5418276. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:20:48,945][100720] Avg episode reward: [(0, '4.526')] +[2024-12-28 14:20:50,575][100934] Updated weights for policy 0, policy_version 15077 (0.0009) +[2024-12-28 14:20:52,135][100934] Updated weights for policy 0, policy_version 15087 (0.0007) +[2024-12-28 14:20:53,674][100934] Updated weights for policy 0, policy_version 15097 (0.0007) +[2024-12-28 14:20:53,944][100720] Fps is (10 sec: 23346.9, 60 sec: 24712.5, 300 sec: 24645.4). Total num frames: 61841408. Throughput: 0: 6077.2. Samples: 5453578. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:20:53,946][100720] Avg episode reward: [(0, '4.646')] +[2024-12-28 14:20:55,231][100934] Updated weights for policy 0, policy_version 15107 (0.0007) +[2024-12-28 14:20:56,760][100934] Updated weights for policy 0, policy_version 15117 (0.0007) +[2024-12-28 14:20:58,944][100720] Fps is (10 sec: 23347.3, 60 sec: 24371.2, 300 sec: 24659.3). Total num frames: 61952000. Throughput: 0: 5968.8. Samples: 5487732. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:20:58,945][100720] Avg episode reward: [(0, '4.386')] +[2024-12-28 14:20:59,137][100934] Updated weights for policy 0, policy_version 15127 (0.0007) +[2024-12-28 14:21:00,647][100934] Updated weights for policy 0, policy_version 15137 (0.0006) +[2024-12-28 14:21:02,166][100934] Updated weights for policy 0, policy_version 15147 (0.0007) +[2024-12-28 14:21:03,820][100934] Updated weights for policy 0, policy_version 15157 (0.0007) +[2024-12-28 14:21:03,944][100720] Fps is (10 sec: 24166.7, 60 sec: 24371.2, 300 sec: 24673.2). Total num frames: 62083072. Throughput: 0: 5996.3. Samples: 5508080. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:21:03,945][100720] Avg episode reward: [(0, '4.448')] +[2024-12-28 14:21:05,314][100934] Updated weights for policy 0, policy_version 15167 (0.0007) +[2024-12-28 14:21:06,841][100934] Updated weights for policy 0, policy_version 15177 (0.0006) +[2024-12-28 14:21:08,361][100934] Updated weights for policy 0, policy_version 15187 (0.0007) +[2024-12-28 14:21:08,944][100720] Fps is (10 sec: 26624.0, 60 sec: 24507.7, 300 sec: 24770.4). Total num frames: 62218240. Throughput: 0: 6124.8. Samples: 5547754. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:21:08,945][100720] Avg episode reward: [(0, '4.365')] +[2024-12-28 14:21:09,896][100934] Updated weights for policy 0, policy_version 15197 (0.0008) +[2024-12-28 14:21:11,414][100934] Updated weights for policy 0, policy_version 15207 (0.0006) +[2024-12-28 14:21:13,039][100934] Updated weights for policy 0, policy_version 15217 (0.0007) +[2024-12-28 14:21:13,944][100720] Fps is (10 sec: 26623.8, 60 sec: 24507.7, 300 sec: 24825.9). Total num frames: 62349312. Throughput: 0: 6243.9. Samples: 5587564. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 14:21:13,945][100720] Avg episode reward: [(0, '4.287')] +[2024-12-28 14:21:14,584][100934] Updated weights for policy 0, policy_version 15227 (0.0007) +[2024-12-28 14:21:16,121][100934] Updated weights for policy 0, policy_version 15237 (0.0007) +[2024-12-28 14:21:17,626][100934] Updated weights for policy 0, policy_version 15247 (0.0006) +[2024-12-28 14:21:18,944][100720] Fps is (10 sec: 26624.0, 60 sec: 24780.8, 300 sec: 24853.7). Total num frames: 62484480. Throughput: 0: 6246.8. Samples: 5607470. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 14:21:18,945][100720] Avg episode reward: [(0, '4.430')] +[2024-12-28 14:21:19,172][100934] Updated weights for policy 0, policy_version 15257 (0.0006) +[2024-12-28 14:21:20,716][100934] Updated weights for policy 0, policy_version 15267 (0.0007) +[2024-12-28 14:21:22,261][100934] Updated weights for policy 0, policy_version 15277 (0.0006) +[2024-12-28 14:21:23,778][100934] Updated weights for policy 0, policy_version 15287 (0.0007) +[2024-12-28 14:21:23,944][100720] Fps is (10 sec: 27034.0, 60 sec: 25122.2, 300 sec: 24853.7). Total num frames: 62619648. Throughput: 0: 6262.8. Samples: 5647368. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:21:23,945][100720] Avg episode reward: [(0, '4.410')] +[2024-12-28 14:21:25,330][100934] Updated weights for policy 0, policy_version 15297 (0.0007) +[2024-12-28 14:21:26,880][100934] Updated weights for policy 0, policy_version 15307 (0.0007) +[2024-12-28 14:21:28,458][100934] Updated weights for policy 0, policy_version 15317 (0.0007) +[2024-12-28 14:21:28,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25395.2, 300 sec: 24853.7). Total num frames: 62750720. 
Throughput: 0: 6342.5. Samples: 5687084. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:21:28,945][100720] Avg episode reward: [(0, '4.572')] +[2024-12-28 14:21:30,027][100934] Updated weights for policy 0, policy_version 15327 (0.0007) +[2024-12-28 14:21:31,531][100934] Updated weights for policy 0, policy_version 15337 (0.0007) +[2024-12-28 14:21:33,083][100934] Updated weights for policy 0, policy_version 15347 (0.0008) +[2024-12-28 14:21:33,944][100720] Fps is (10 sec: 26214.3, 60 sec: 25395.2, 300 sec: 24853.7). Total num frames: 62881792. Throughput: 0: 6413.3. Samples: 5706876. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:21:33,945][100720] Avg episode reward: [(0, '4.517')] +[2024-12-28 14:21:34,596][100934] Updated weights for policy 0, policy_version 15357 (0.0007) +[2024-12-28 14:21:36,149][100934] Updated weights for policy 0, policy_version 15367 (0.0007) +[2024-12-28 14:21:37,661][100934] Updated weights for policy 0, policy_version 15377 (0.0006) +[2024-12-28 14:21:38,944][100720] Fps is (10 sec: 26624.2, 60 sec: 25463.5, 300 sec: 24867.6). Total num frames: 63016960. Throughput: 0: 6521.7. Samples: 5747052. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:21:38,945][100720] Avg episode reward: [(0, '4.331')] +[2024-12-28 14:21:39,232][100934] Updated weights for policy 0, policy_version 15387 (0.0007) +[2024-12-28 14:21:40,773][100934] Updated weights for policy 0, policy_version 15397 (0.0006) +[2024-12-28 14:21:42,325][100934] Updated weights for policy 0, policy_version 15407 (0.0007) +[2024-12-28 14:21:43,822][100934] Updated weights for policy 0, policy_version 15417 (0.0007) +[2024-12-28 14:21:43,944][100720] Fps is (10 sec: 26624.1, 60 sec: 25668.3, 300 sec: 24853.7). Total num frames: 63148032. Throughput: 0: 6647.7. Samples: 5786880. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 14:21:43,945][100720] Avg episode reward: [(0, '4.460')] +[2024-12-28 14:21:45,370][100934] Updated weights for policy 0, policy_version 15427 (0.0007) +[2024-12-28 14:21:46,939][100934] Updated weights for policy 0, policy_version 15437 (0.0008) +[2024-12-28 14:21:48,510][100934] Updated weights for policy 0, policy_version 15447 (0.0006) +[2024-12-28 14:21:48,944][100720] Fps is (10 sec: 26213.9, 60 sec: 26009.6, 300 sec: 24853.7). Total num frames: 63279104. Throughput: 0: 6635.5. Samples: 5806678. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 14:21:48,945][100720] Avg episode reward: [(0, '4.593')] +[2024-12-28 14:21:50,092][100934] Updated weights for policy 0, policy_version 15457 (0.0007) +[2024-12-28 14:21:51,662][100934] Updated weights for policy 0, policy_version 15467 (0.0006) +[2024-12-28 14:21:53,169][100934] Updated weights for policy 0, policy_version 15477 (0.0007) +[2024-12-28 14:21:53,944][100720] Fps is (10 sec: 26623.9, 60 sec: 26214.5, 300 sec: 24867.6). Total num frames: 63414272. Throughput: 0: 6628.5. Samples: 5846036. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:21:53,945][100720] Avg episode reward: [(0, '4.361')] +[2024-12-28 14:21:53,950][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000015482_63414272.pth... 
+[2024-12-28 14:21:53,983][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000014021_57430016.pth +[2024-12-28 14:21:54,733][100934] Updated weights for policy 0, policy_version 15487 (0.0007) +[2024-12-28 14:21:56,303][100934] Updated weights for policy 0, policy_version 15497 (0.0007) +[2024-12-28 14:21:57,830][100934] Updated weights for policy 0, policy_version 15507 (0.0006) +[2024-12-28 14:21:58,944][100720] Fps is (10 sec: 26624.2, 60 sec: 26555.7, 300 sec: 24881.5). Total num frames: 63545344. Throughput: 0: 6625.9. Samples: 5885728. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:21:58,945][100720] Avg episode reward: [(0, '4.427')] +[2024-12-28 14:21:59,332][100934] Updated weights for policy 0, policy_version 15517 (0.0007) +[2024-12-28 14:22:00,902][100934] Updated weights for policy 0, policy_version 15527 (0.0006) +[2024-12-28 14:22:02,426][100934] Updated weights for policy 0, policy_version 15537 (0.0007) +[2024-12-28 14:22:03,944][100720] Fps is (10 sec: 26214.2, 60 sec: 26555.7, 300 sec: 24950.9). Total num frames: 63676416. Throughput: 0: 6627.2. Samples: 5905694. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:22:03,945][100720] Avg episode reward: [(0, '4.384')] +[2024-12-28 14:22:03,991][100934] Updated weights for policy 0, policy_version 15547 (0.0007) +[2024-12-28 14:22:05,552][100934] Updated weights for policy 0, policy_version 15557 (0.0006) +[2024-12-28 14:22:07,148][100934] Updated weights for policy 0, policy_version 15567 (0.0008) +[2024-12-28 14:22:08,714][100934] Updated weights for policy 0, policy_version 15577 (0.0007) +[2024-12-28 14:22:08,944][100720] Fps is (10 sec: 26214.0, 60 sec: 26487.4, 300 sec: 25020.3). Total num frames: 63807488. Throughput: 0: 6611.4. Samples: 5944882. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:22:08,946][100720] Avg episode reward: [(0, '4.454')] +[2024-12-28 14:22:10,250][100934] Updated weights for policy 0, policy_version 15587 (0.0006) +[2024-12-28 14:22:11,820][100934] Updated weights for policy 0, policy_version 15597 (0.0007) +[2024-12-28 14:22:13,342][100934] Updated weights for policy 0, policy_version 15607 (0.0006) +[2024-12-28 14:22:13,944][100720] Fps is (10 sec: 26214.7, 60 sec: 26487.5, 300 sec: 25020.3). Total num frames: 63938560. Throughput: 0: 6618.2. Samples: 5984902. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:22:13,945][100720] Avg episode reward: [(0, '4.351')] +[2024-12-28 14:22:14,906][100934] Updated weights for policy 0, policy_version 15617 (0.0007) +[2024-12-28 14:22:16,432][100934] Updated weights for policy 0, policy_version 15627 (0.0007) +[2024-12-28 14:22:18,006][100934] Updated weights for policy 0, policy_version 15637 (0.0008) +[2024-12-28 14:22:18,944][100720] Fps is (10 sec: 26624.3, 60 sec: 26487.4, 300 sec: 25103.6). Total num frames: 64073728. Throughput: 0: 6612.5. Samples: 6004440. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:22:18,945][100720] Avg episode reward: [(0, '4.272')] +[2024-12-28 14:22:19,536][100934] Updated weights for policy 0, policy_version 15647 (0.0007) +[2024-12-28 14:22:21,112][100934] Updated weights for policy 0, policy_version 15657 (0.0006) +[2024-12-28 14:22:22,679][100934] Updated weights for policy 0, policy_version 15667 (0.0007) +[2024-12-28 14:22:23,944][100720] Fps is (10 sec: 26623.9, 60 sec: 26419.2, 300 sec: 25173.0). Total num frames: 64204800. Throughput: 0: 6596.6. Samples: 6043898. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:22:23,945][100720] Avg episode reward: [(0, '4.143')] +[2024-12-28 14:22:24,196][100934] Updated weights for policy 0, policy_version 15677 (0.0007) +[2024-12-28 14:22:25,947][100934] Updated weights for policy 0, policy_version 15687 (0.0008) +[2024-12-28 14:22:27,776][100934] Updated weights for policy 0, policy_version 15697 (0.0009) +[2024-12-28 14:22:28,944][100720] Fps is (10 sec: 24576.1, 60 sec: 26146.1, 300 sec: 25131.4). Total num frames: 64319488. Throughput: 0: 6505.5. Samples: 6079626. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:22:28,945][100720] Avg episode reward: [(0, '4.428')] +[2024-12-28 14:22:29,548][100934] Updated weights for policy 0, policy_version 15707 (0.0008) +[2024-12-28 14:22:31,373][100934] Updated weights for policy 0, policy_version 15717 (0.0008) +[2024-12-28 14:22:33,211][100934] Updated weights for policy 0, policy_version 15727 (0.0008) +[2024-12-28 14:22:33,944][100720] Fps is (10 sec: 22937.5, 60 sec: 25873.1, 300 sec: 25062.0). Total num frames: 64434176. Throughput: 0: 6442.9. Samples: 6096606. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:22:33,945][100720] Avg episode reward: [(0, '4.710')] +[2024-12-28 14:22:35,018][100934] Updated weights for policy 0, policy_version 15737 (0.0007) +[2024-12-28 14:22:36,557][100934] Updated weights for policy 0, policy_version 15747 (0.0007) +[2024-12-28 14:22:38,097][100934] Updated weights for policy 0, policy_version 15757 (0.0007) +[2024-12-28 14:22:38,944][100720] Fps is (10 sec: 24166.4, 60 sec: 25736.5, 300 sec: 25048.1). Total num frames: 64561152. Throughput: 0: 6383.5. Samples: 6133294. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:22:38,945][100720] Avg episode reward: [(0, '4.619')] +[2024-12-28 14:22:39,629][100934] Updated weights for policy 0, policy_version 15767 (0.0006) +[2024-12-28 14:22:41,187][100934] Updated weights for policy 0, policy_version 15777 (0.0007) +[2024-12-28 14:22:42,737][100934] Updated weights for policy 0, policy_version 15787 (0.0006) +[2024-12-28 14:22:43,944][100720] Fps is (10 sec: 25804.8, 60 sec: 25736.5, 300 sec: 25048.1). Total num frames: 64692224. Throughput: 0: 6387.8. Samples: 6173178. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:22:43,945][100720] Avg episode reward: [(0, '4.751')] +[2024-12-28 14:22:44,264][100934] Updated weights for policy 0, policy_version 15797 (0.0006) +[2024-12-28 14:22:45,781][100934] Updated weights for policy 0, policy_version 15807 (0.0006) +[2024-12-28 14:22:47,297][100934] Updated weights for policy 0, policy_version 15817 (0.0007) +[2024-12-28 14:22:48,841][100934] Updated weights for policy 0, policy_version 15827 (0.0006) +[2024-12-28 14:22:48,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25804.9, 300 sec: 25075.9). Total num frames: 64827392. Throughput: 0: 6390.5. Samples: 6193268. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:22:48,945][100720] Avg episode reward: [(0, '4.531')] +[2024-12-28 14:22:50,418][100934] Updated weights for policy 0, policy_version 15837 (0.0008) +[2024-12-28 14:22:51,930][100934] Updated weights for policy 0, policy_version 15847 (0.0006) +[2024-12-28 14:22:53,471][100934] Updated weights for policy 0, policy_version 15857 (0.0006) +[2024-12-28 14:22:53,944][100720] Fps is (10 sec: 26624.1, 60 sec: 25736.5, 300 sec: 25089.7). Total num frames: 64958464. Throughput: 0: 6407.0. Samples: 6233194. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:22:53,945][100720] Avg episode reward: [(0, '4.545')] +[2024-12-28 14:22:55,069][100934] Updated weights for policy 0, policy_version 15867 (0.0006) +[2024-12-28 14:22:56,596][100934] Updated weights for policy 0, policy_version 15877 (0.0007) +[2024-12-28 14:22:58,123][100934] Updated weights for policy 0, policy_version 15887 (0.0006) +[2024-12-28 14:22:58,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25804.8, 300 sec: 25186.9). Total num frames: 65093632. Throughput: 0: 6400.9. Samples: 6272944. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:22:58,945][100720] Avg episode reward: [(0, '4.315')] +[2024-12-28 14:22:59,688][100934] Updated weights for policy 0, policy_version 15897 (0.0007) +[2024-12-28 14:23:01,290][100934] Updated weights for policy 0, policy_version 15907 (0.0007) +[2024-12-28 14:23:02,849][100934] Updated weights for policy 0, policy_version 15917 (0.0007) +[2024-12-28 14:23:03,944][100720] Fps is (10 sec: 26623.9, 60 sec: 25804.8, 300 sec: 25256.4). Total num frames: 65224704. Throughput: 0: 6393.3. Samples: 6292138. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:23:03,945][100720] Avg episode reward: [(0, '4.434')] +[2024-12-28 14:23:04,355][100934] Updated weights for policy 0, policy_version 15927 (0.0006) +[2024-12-28 14:23:05,852][100934] Updated weights for policy 0, policy_version 15937 (0.0007) +[2024-12-28 14:23:07,399][100934] Updated weights for policy 0, policy_version 15947 (0.0007) +[2024-12-28 14:23:08,944][100720] Fps is (10 sec: 26214.3, 60 sec: 25804.9, 300 sec: 25270.2). Total num frames: 65355776. Throughput: 0: 6412.7. Samples: 6332472. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:23:08,945][100720] Avg episode reward: [(0, '4.528')] +[2024-12-28 14:23:08,958][100934] Updated weights for policy 0, policy_version 15957 (0.0007) +[2024-12-28 14:23:10,504][100934] Updated weights for policy 0, policy_version 15967 (0.0007) +[2024-12-28 14:23:12,063][100934] Updated weights for policy 0, policy_version 15977 (0.0006) +[2024-12-28 14:23:13,596][100934] Updated weights for policy 0, policy_version 15987 (0.0007) +[2024-12-28 14:23:13,944][100720] Fps is (10 sec: 26624.1, 60 sec: 25873.1, 300 sec: 25353.6). Total num frames: 65490944. Throughput: 0: 6498.3. Samples: 6372048. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:23:13,945][100720] Avg episode reward: [(0, '4.428')] +[2024-12-28 14:23:15,176][100934] Updated weights for policy 0, policy_version 15997 (0.0006) +[2024-12-28 14:23:16,793][100934] Updated weights for policy 0, policy_version 16007 (0.0008) +[2024-12-28 14:23:18,595][100934] Updated weights for policy 0, policy_version 16017 (0.0007) +[2024-12-28 14:23:18,944][100720] Fps is (10 sec: 25395.4, 60 sec: 25600.0, 300 sec: 25381.3). Total num frames: 65609728. Throughput: 0: 6548.1. Samples: 6391270. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:23:18,945][100720] Avg episode reward: [(0, '4.492')] +[2024-12-28 14:23:20,419][100934] Updated weights for policy 0, policy_version 16027 (0.0008) +[2024-12-28 14:23:22,217][100934] Updated weights for policy 0, policy_version 16037 (0.0007) +[2024-12-28 14:23:23,944][100720] Fps is (10 sec: 23347.0, 60 sec: 25326.9, 300 sec: 25311.9). Total num frames: 65724416. Throughput: 0: 6487.8. Samples: 6425244. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:23:23,945][100720] Avg episode reward: [(0, '4.542')] +[2024-12-28 14:23:23,986][100934] Updated weights for policy 0, policy_version 16047 (0.0008) +[2024-12-28 14:23:25,781][100934] Updated weights for policy 0, policy_version 16057 (0.0008) +[2024-12-28 14:23:27,534][100934] Updated weights for policy 0, policy_version 16067 (0.0008) +[2024-12-28 14:23:28,944][100720] Fps is (10 sec: 23756.7, 60 sec: 25463.5, 300 sec: 25284.1). Total num frames: 65847296. Throughput: 0: 6399.8. Samples: 6461168. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:23:28,945][100720] Avg episode reward: [(0, '4.537')] +[2024-12-28 14:23:29,064][100934] Updated weights for policy 0, policy_version 16077 (0.0006) +[2024-12-28 14:23:30,605][100934] Updated weights for policy 0, policy_version 16087 (0.0006) +[2024-12-28 14:23:32,227][100934] Updated weights for policy 0, policy_version 16097 (0.0009) +[2024-12-28 14:23:33,944][100720] Fps is (10 sec: 24576.2, 60 sec: 25600.0, 300 sec: 25298.0). Total num frames: 65970176. Throughput: 0: 6388.6. Samples: 6480754. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:23:33,945][100720] Avg episode reward: [(0, '4.296')] +[2024-12-28 14:23:34,045][100934] Updated weights for policy 0, policy_version 16107 (0.0008) +[2024-12-28 14:23:35,894][100934] Updated weights for policy 0, policy_version 16117 (0.0008) +[2024-12-28 14:23:37,739][100934] Updated weights for policy 0, policy_version 16127 (0.0008) +[2024-12-28 14:23:38,944][100720] Fps is (10 sec: 23347.0, 60 sec: 25326.9, 300 sec: 25284.1). Total num frames: 66080768. Throughput: 0: 6248.9. Samples: 6514394. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:23:38,945][100720] Avg episode reward: [(0, '4.570')] +[2024-12-28 14:23:39,554][100934] Updated weights for policy 0, policy_version 16137 (0.0007) +[2024-12-28 14:23:41,391][100934] Updated weights for policy 0, policy_version 16147 (0.0008) +[2024-12-28 14:23:43,165][100934] Updated weights for policy 0, policy_version 16157 (0.0007) +[2024-12-28 14:23:43,944][100720] Fps is (10 sec: 22937.1, 60 sec: 25122.1, 300 sec: 25270.2). Total num frames: 66199552. Throughput: 0: 6139.5. Samples: 6549222. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:23:43,945][100720] Avg episode reward: [(0, '4.325')] +[2024-12-28 14:23:44,698][100934] Updated weights for policy 0, policy_version 16167 (0.0006) +[2024-12-28 14:23:46,232][100934] Updated weights for policy 0, policy_version 16177 (0.0007) +[2024-12-28 14:23:47,722][100934] Updated weights for policy 0, policy_version 16187 (0.0007) +[2024-12-28 14:23:48,944][100720] Fps is (10 sec: 25395.4, 60 sec: 25122.1, 300 sec: 25284.1). Total num frames: 66334720. Throughput: 0: 6159.0. Samples: 6569294. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:23:48,945][100720] Avg episode reward: [(0, '4.558')] +[2024-12-28 14:23:49,223][100934] Updated weights for policy 0, policy_version 16197 (0.0006) +[2024-12-28 14:23:50,735][100934] Updated weights for policy 0, policy_version 16207 (0.0006) +[2024-12-28 14:23:52,308][100934] Updated weights for policy 0, policy_version 16217 (0.0008) +[2024-12-28 14:23:53,830][100934] Updated weights for policy 0, policy_version 16227 (0.0007) +[2024-12-28 14:23:53,944][100720] Fps is (10 sec: 26624.6, 60 sec: 25122.1, 300 sec: 25284.1). Total num frames: 66465792. Throughput: 0: 6160.1. Samples: 6609676. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:23:53,945][100720] Avg episode reward: [(0, '4.523')] +[2024-12-28 14:23:53,950][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000016227_66465792.pth... +[2024-12-28 14:23:53,982][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000014736_60358656.pth +[2024-12-28 14:23:55,366][100934] Updated weights for policy 0, policy_version 16237 (0.0007) +[2024-12-28 14:23:56,868][100934] Updated weights for policy 0, policy_version 16247 (0.0007) +[2024-12-28 14:23:58,526][100934] Updated weights for policy 0, policy_version 16257 (0.0008) +[2024-12-28 14:23:58,944][100720] Fps is (10 sec: 26214.3, 60 sec: 25053.8, 300 sec: 25270.2). Total num frames: 66596864. Throughput: 0: 6146.7. Samples: 6648652. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:23:58,945][100720] Avg episode reward: [(0, '4.322')] +[2024-12-28 14:24:00,365][100934] Updated weights for policy 0, policy_version 16267 (0.0009) +[2024-12-28 14:24:02,112][100934] Updated weights for policy 0, policy_version 16277 (0.0007) +[2024-12-28 14:24:03,944][100720] Fps is (10 sec: 24166.4, 60 sec: 24712.5, 300 sec: 25200.8). Total num frames: 66707456. Throughput: 0: 6097.5. Samples: 6665660. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:24:03,945][100720] Avg episode reward: [(0, '4.274')] +[2024-12-28 14:24:03,981][100934] Updated weights for policy 0, policy_version 16287 (0.0007) +[2024-12-28 14:24:05,884][100934] Updated weights for policy 0, policy_version 16297 (0.0009) +[2024-12-28 14:24:07,750][100934] Updated weights for policy 0, policy_version 16307 (0.0008) +[2024-12-28 14:24:08,944][100720] Fps is (10 sec: 22118.4, 60 sec: 24371.2, 300 sec: 25131.4). Total num frames: 66818048. Throughput: 0: 6077.6. Samples: 6698734. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:24:08,945][100720] Avg episode reward: [(0, '4.517')] +[2024-12-28 14:24:09,455][100934] Updated weights for policy 0, policy_version 16317 (0.0008) +[2024-12-28 14:24:10,981][100934] Updated weights for policy 0, policy_version 16327 (0.0007) +[2024-12-28 14:24:12,510][100934] Updated weights for policy 0, policy_version 16337 (0.0006) +[2024-12-28 14:24:13,944][100720] Fps is (10 sec: 24576.1, 60 sec: 24371.2, 300 sec: 25200.8). Total num frames: 66953216. Throughput: 0: 6151.8. Samples: 6737998. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:24:13,945][100720] Avg episode reward: [(0, '4.489')] +[2024-12-28 14:24:14,022][100934] Updated weights for policy 0, policy_version 16347 (0.0006) +[2024-12-28 14:24:15,521][100934] Updated weights for policy 0, policy_version 16357 (0.0007) +[2024-12-28 14:24:17,048][100934] Updated weights for policy 0, policy_version 16367 (0.0006) +[2024-12-28 14:24:18,595][100934] Updated weights for policy 0, policy_version 16377 (0.0006) +[2024-12-28 14:24:18,944][100720] Fps is (10 sec: 27033.7, 60 sec: 24644.3, 300 sec: 25298.0). Total num frames: 67088384. Throughput: 0: 6162.5. Samples: 6758066. 
Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:24:18,945][100720] Avg episode reward: [(0, '4.651')] +[2024-12-28 14:24:20,159][100934] Updated weights for policy 0, policy_version 16387 (0.0006) +[2024-12-28 14:24:21,724][100934] Updated weights for policy 0, policy_version 16397 (0.0007) +[2024-12-28 14:24:23,260][100934] Updated weights for policy 0, policy_version 16407 (0.0007) +[2024-12-28 14:24:23,944][100720] Fps is (10 sec: 26623.7, 60 sec: 24917.3, 300 sec: 25325.8). Total num frames: 67219456. Throughput: 0: 6300.5. Samples: 6797916. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:24:23,946][100720] Avg episode reward: [(0, '4.403')] +[2024-12-28 14:24:24,740][100934] Updated weights for policy 0, policy_version 16417 (0.0007) +[2024-12-28 14:24:26,505][100934] Updated weights for policy 0, policy_version 16427 (0.0007) +[2024-12-28 14:24:28,313][100934] Updated weights for policy 0, policy_version 16437 (0.0008) +[2024-12-28 14:24:28,944][100720] Fps is (10 sec: 24985.6, 60 sec: 24849.1, 300 sec: 25284.1). Total num frames: 67338240. Throughput: 0: 6330.5. Samples: 6834094. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:24:28,945][100720] Avg episode reward: [(0, '4.595')] +[2024-12-28 14:24:30,192][100934] Updated weights for policy 0, policy_version 16447 (0.0008) +[2024-12-28 14:24:32,024][100934] Updated weights for policy 0, policy_version 16457 (0.0009) +[2024-12-28 14:24:33,788][100934] Updated weights for policy 0, policy_version 16467 (0.0008) +[2024-12-28 14:24:33,944][100720] Fps is (10 sec: 22937.7, 60 sec: 24644.2, 300 sec: 25214.7). Total num frames: 67448832. Throughput: 0: 6254.6. Samples: 6850750. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:24:33,945][100720] Avg episode reward: [(0, '4.547')] +[2024-12-28 14:24:35,614][100934] Updated weights for policy 0, policy_version 16477 (0.0008) +[2024-12-28 14:24:37,179][100934] Updated weights for policy 0, policy_version 16487 (0.0007) +[2024-12-28 14:24:38,738][100934] Updated weights for policy 0, policy_version 16497 (0.0007) +[2024-12-28 14:24:38,944][100720] Fps is (10 sec: 23756.6, 60 sec: 24917.3, 300 sec: 25242.5). Total num frames: 67575808. Throughput: 0: 6164.5. Samples: 6887080. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:24:38,945][100720] Avg episode reward: [(0, '4.499')] +[2024-12-28 14:24:40,304][100934] Updated weights for policy 0, policy_version 16507 (0.0007) +[2024-12-28 14:24:41,817][100934] Updated weights for policy 0, policy_version 16517 (0.0006) +[2024-12-28 14:24:43,379][100934] Updated weights for policy 0, policy_version 16527 (0.0007) +[2024-12-28 14:24:43,944][100720] Fps is (10 sec: 25804.9, 60 sec: 25122.2, 300 sec: 25311.9). Total num frames: 67706880. Throughput: 0: 6182.4. Samples: 6926860. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:24:43,945][100720] Avg episode reward: [(0, '4.452')] +[2024-12-28 14:24:44,876][100934] Updated weights for policy 0, policy_version 16537 (0.0006) +[2024-12-28 14:24:46,388][100934] Updated weights for policy 0, policy_version 16547 (0.0006) +[2024-12-28 14:24:47,920][100934] Updated weights for policy 0, policy_version 16557 (0.0007) +[2024-12-28 14:24:48,944][100720] Fps is (10 sec: 26623.0, 60 sec: 25121.9, 300 sec: 25367.4). Total num frames: 67842048. Throughput: 0: 6253.3. Samples: 6947062. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:24:48,945][100720] Avg episode reward: [(0, '4.400')] +[2024-12-28 14:24:49,431][100934] Updated weights for policy 0, policy_version 16567 (0.0006) +[2024-12-28 14:24:51,020][100934] Updated weights for policy 0, policy_version 16577 (0.0008) +[2024-12-28 14:24:52,580][100934] Updated weights for policy 0, policy_version 16587 (0.0007) +[2024-12-28 14:24:53,944][100720] Fps is (10 sec: 26214.4, 60 sec: 25053.9, 300 sec: 25353.6). Total num frames: 67969024. Throughput: 0: 6392.7. Samples: 6986406. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:24:53,945][100720] Avg episode reward: [(0, '4.312')] +[2024-12-28 14:24:54,353][100934] Updated weights for policy 0, policy_version 16597 (0.0007) +[2024-12-28 14:24:56,168][100934] Updated weights for policy 0, policy_version 16607 (0.0007) +[2024-12-28 14:24:57,925][100934] Updated weights for policy 0, policy_version 16617 (0.0007) +[2024-12-28 14:24:58,944][100720] Fps is (10 sec: 24167.5, 60 sec: 24780.8, 300 sec: 25298.0). Total num frames: 68083712. Throughput: 0: 6280.5. Samples: 7020620. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:24:58,945][100720] Avg episode reward: [(0, '4.451')] +[2024-12-28 14:24:59,741][100934] Updated weights for policy 0, policy_version 16627 (0.0007) +[2024-12-28 14:25:01,596][100934] Updated weights for policy 0, policy_version 16637 (0.0008) +[2024-12-28 14:25:03,412][100934] Updated weights for policy 0, policy_version 16647 (0.0008) +[2024-12-28 14:25:03,944][100720] Fps is (10 sec: 22937.7, 60 sec: 24849.1, 300 sec: 25256.4). Total num frames: 68198400. Throughput: 0: 6204.5. Samples: 7037270. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:25:03,945][100720] Avg episode reward: [(0, '4.392')] +[2024-12-28 14:25:04,987][100934] Updated weights for policy 0, policy_version 16657 (0.0007) +[2024-12-28 14:25:06,476][100934] Updated weights for policy 0, policy_version 16667 (0.0007) +[2024-12-28 14:25:07,978][100934] Updated weights for policy 0, policy_version 16677 (0.0007) +[2024-12-28 14:25:08,944][100720] Fps is (10 sec: 24985.7, 60 sec: 25258.7, 300 sec: 25270.2). Total num frames: 68333568. Throughput: 0: 6181.3. Samples: 7076076. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:25:08,945][100720] Avg episode reward: [(0, '4.367')] +[2024-12-28 14:25:09,468][100934] Updated weights for policy 0, policy_version 16687 (0.0006) +[2024-12-28 14:25:10,985][100934] Updated weights for policy 0, policy_version 16697 (0.0006) +[2024-12-28 14:25:12,650][100934] Updated weights for policy 0, policy_version 16707 (0.0008) +[2024-12-28 14:25:13,944][100720] Fps is (10 sec: 26214.3, 60 sec: 25122.1, 300 sec: 25298.0). Total num frames: 68460544. Throughput: 0: 6232.3. Samples: 7114546. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:25:13,945][100720] Avg episode reward: [(0, '4.354')] +[2024-12-28 14:25:14,445][100934] Updated weights for policy 0, policy_version 16717 (0.0008) +[2024-12-28 14:25:16,278][100934] Updated weights for policy 0, policy_version 16727 (0.0007) +[2024-12-28 14:25:18,042][100934] Updated weights for policy 0, policy_version 16737 (0.0007) +[2024-12-28 14:25:18,944][100720] Fps is (10 sec: 24166.1, 60 sec: 24780.7, 300 sec: 25298.0). Total num frames: 68575232. Throughput: 0: 6240.1. Samples: 7131554. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:25:18,945][100720] Avg episode reward: [(0, '4.263')] +[2024-12-28 14:25:19,896][100934] Updated weights for policy 0, policy_version 16747 (0.0009) +[2024-12-28 14:25:21,732][100934] Updated weights for policy 0, policy_version 16757 (0.0007) +[2024-12-28 14:25:23,355][100934] Updated weights for policy 0, policy_version 16767 (0.0007) +[2024-12-28 14:25:23,944][100720] Fps is (10 sec: 22937.5, 60 sec: 24507.8, 300 sec: 25298.0). Total num frames: 68689920. Throughput: 0: 6193.8. Samples: 7165800. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:25:23,945][100720] Avg episode reward: [(0, '4.506')] +[2024-12-28 14:25:24,875][100934] Updated weights for policy 0, policy_version 16777 (0.0006) +[2024-12-28 14:25:26,381][100934] Updated weights for policy 0, policy_version 16787 (0.0006) +[2024-12-28 14:25:27,850][100934] Updated weights for policy 0, policy_version 16797 (0.0007) +[2024-12-28 14:25:28,944][100720] Fps is (10 sec: 25395.4, 60 sec: 24849.1, 300 sec: 25325.8). Total num frames: 68829184. Throughput: 0: 6218.9. Samples: 7206710. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:25:28,945][100720] Avg episode reward: [(0, '4.531')] +[2024-12-28 14:25:29,350][100934] Updated weights for policy 0, policy_version 16807 (0.0006) +[2024-12-28 14:25:30,834][100934] Updated weights for policy 0, policy_version 16817 (0.0007) +[2024-12-28 14:25:32,312][100934] Updated weights for policy 0, policy_version 16827 (0.0007) +[2024-12-28 14:25:33,842][100934] Updated weights for policy 0, policy_version 16837 (0.0007) +[2024-12-28 14:25:33,944][100720] Fps is (10 sec: 27442.0, 60 sec: 25258.5, 300 sec: 25339.6). Total num frames: 68964352. Throughput: 0: 6229.4. Samples: 7227386. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:25:33,945][100720] Avg episode reward: [(0, '4.555')] +[2024-12-28 14:25:35,352][100934] Updated weights for policy 0, policy_version 16847 (0.0006) +[2024-12-28 14:25:36,868][100934] Updated weights for policy 0, policy_version 16857 (0.0006) +[2024-12-28 14:25:38,348][100934] Updated weights for policy 0, policy_version 16867 (0.0006) +[2024-12-28 14:25:38,944][100720] Fps is (10 sec: 27033.7, 60 sec: 25395.2, 300 sec: 25395.2). Total num frames: 69099520. Throughput: 0: 6261.2. Samples: 7268162. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:25:38,945][100720] Avg episode reward: [(0, '4.392')] +[2024-12-28 14:25:40,041][100934] Updated weights for policy 0, policy_version 16877 (0.0008) +[2024-12-28 14:25:41,881][100934] Updated weights for policy 0, policy_version 16887 (0.0008) +[2024-12-28 14:25:43,714][100934] Updated weights for policy 0, policy_version 16897 (0.0009) +[2024-12-28 14:25:43,944][100720] Fps is (10 sec: 24986.8, 60 sec: 25122.1, 300 sec: 25409.1). Total num frames: 69214208. Throughput: 0: 6278.4. Samples: 7303148. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:25:43,945][100720] Avg episode reward: [(0, '4.446')] +[2024-12-28 14:25:45,511][100934] Updated weights for policy 0, policy_version 16907 (0.0008) +[2024-12-28 14:25:47,296][100934] Updated weights for policy 0, policy_version 16917 (0.0007) +[2024-12-28 14:25:48,944][100720] Fps is (10 sec: 22937.3, 60 sec: 24780.9, 300 sec: 25381.3). Total num frames: 69328896. Throughput: 0: 6286.7. Samples: 7320174. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:25:48,945][100720] Avg episode reward: [(0, '4.222')] +[2024-12-28 14:25:49,115][100934] Updated weights for policy 0, policy_version 16927 (0.0007) +[2024-12-28 14:25:50,691][100934] Updated weights for policy 0, policy_version 16937 (0.0008) +[2024-12-28 14:25:52,232][100934] Updated weights for policy 0, policy_version 16947 (0.0007) +[2024-12-28 14:25:53,739][100934] Updated weights for policy 0, policy_version 16957 (0.0007) +[2024-12-28 14:25:53,944][100720] Fps is (10 sec: 24576.1, 60 sec: 24849.1, 300 sec: 25450.7). Total num frames: 69459968. Throughput: 0: 6259.4. Samples: 7357748. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:25:53,945][100720] Avg episode reward: [(0, '4.376')] +[2024-12-28 14:25:53,950][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000016958_69459968.pth... +[2024-12-28 14:25:53,981][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000015482_63414272.pth +[2024-12-28 14:25:55,287][100934] Updated weights for policy 0, policy_version 16967 (0.0008) +[2024-12-28 14:25:56,816][100934] Updated weights for policy 0, policy_version 16977 (0.0007) +[2024-12-28 14:25:58,345][100934] Updated weights for policy 0, policy_version 16987 (0.0008) +[2024-12-28 14:25:58,944][100720] Fps is (10 sec: 26214.8, 60 sec: 25122.2, 300 sec: 25450.7). Total num frames: 69591040. Throughput: 0: 6297.2. Samples: 7397918. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:25:58,945][100720] Avg episode reward: [(0, '4.579')] +[2024-12-28 14:25:59,876][100934] Updated weights for policy 0, policy_version 16997 (0.0007) +[2024-12-28 14:26:01,385][100934] Updated weights for policy 0, policy_version 17007 (0.0007) +[2024-12-28 14:26:02,873][100934] Updated weights for policy 0, policy_version 17017 (0.0006) +[2024-12-28 14:26:03,944][100720] Fps is (10 sec: 26623.8, 60 sec: 25463.4, 300 sec: 25450.7). Total num frames: 69726208. Throughput: 0: 6369.1. Samples: 7418162. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:26:03,945][100720] Avg episode reward: [(0, '4.401')] +[2024-12-28 14:26:04,413][100934] Updated weights for policy 0, policy_version 17027 (0.0007) +[2024-12-28 14:26:05,912][100934] Updated weights for policy 0, policy_version 17037 (0.0007) +[2024-12-28 14:26:07,455][100934] Updated weights for policy 0, policy_version 17047 (0.0006) +[2024-12-28 14:26:08,944][100720] Fps is (10 sec: 27033.6, 60 sec: 25463.5, 300 sec: 25464.6). Total num frames: 69861376. Throughput: 0: 6508.4. Samples: 7458676. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:26:08,945][100720] Avg episode reward: [(0, '4.504')] +[2024-12-28 14:26:08,965][100934] Updated weights for policy 0, policy_version 17057 (0.0006) +[2024-12-28 14:26:10,554][100934] Updated weights for policy 0, policy_version 17067 (0.0007) +[2024-12-28 14:26:12,099][100934] Updated weights for policy 0, policy_version 17077 (0.0008) +[2024-12-28 14:26:13,601][100934] Updated weights for policy 0, policy_version 17087 (0.0006) +[2024-12-28 14:26:13,944][100720] Fps is (10 sec: 27033.7, 60 sec: 25600.0, 300 sec: 25464.6). Total num frames: 69996544. Throughput: 0: 6485.7. Samples: 7498566. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:26:13,945][100720] Avg episode reward: [(0, '4.366')] +[2024-12-28 14:26:15,151][100934] Updated weights for policy 0, policy_version 17097 (0.0007) +[2024-12-28 14:26:16,636][100934] Updated weights for policy 0, policy_version 17107 (0.0007) +[2024-12-28 14:26:18,173][100934] Updated weights for policy 0, policy_version 17117 (0.0006) +[2024-12-28 14:26:18,944][100720] Fps is (10 sec: 26623.9, 60 sec: 25873.1, 300 sec: 25450.7). Total num frames: 70127616. Throughput: 0: 6473.9. Samples: 7518708. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:26:18,945][100720] Avg episode reward: [(0, '4.553')] +[2024-12-28 14:26:19,750][100934] Updated weights for policy 0, policy_version 17127 (0.0007) +[2024-12-28 14:26:21,269][100934] Updated weights for policy 0, policy_version 17137 (0.0006) +[2024-12-28 14:26:22,799][100934] Updated weights for policy 0, policy_version 17147 (0.0007) +[2024-12-28 14:26:23,944][100720] Fps is (10 sec: 26624.1, 60 sec: 26214.4, 300 sec: 25464.6). Total num frames: 70262784. Throughput: 0: 6458.2. Samples: 7558780. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:26:23,945][100720] Avg episode reward: [(0, '4.219')] +[2024-12-28 14:26:24,349][100934] Updated weights for policy 0, policy_version 17157 (0.0007) +[2024-12-28 14:26:26,126][100934] Updated weights for policy 0, policy_version 17167 (0.0008) +[2024-12-28 14:26:27,896][100934] Updated weights for policy 0, policy_version 17177 (0.0008) +[2024-12-28 14:26:28,944][100720] Fps is (10 sec: 24985.6, 60 sec: 25804.8, 300 sec: 25409.1). Total num frames: 70377472. Throughput: 0: 6471.6. Samples: 7594370. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:26:28,945][100720] Avg episode reward: [(0, '4.377')] +[2024-12-28 14:26:29,705][100934] Updated weights for policy 0, policy_version 17187 (0.0007) +[2024-12-28 14:26:31,494][100934] Updated weights for policy 0, policy_version 17197 (0.0008) +[2024-12-28 14:26:33,296][100934] Updated weights for policy 0, policy_version 17207 (0.0009) +[2024-12-28 14:26:33,944][100720] Fps is (10 sec: 22937.6, 60 sec: 25463.7, 300 sec: 25339.7). Total num frames: 70492160. Throughput: 0: 6473.3. Samples: 7611470. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:26:33,945][100720] Avg episode reward: [(0, '4.556')] +[2024-12-28 14:26:35,125][100934] Updated weights for policy 0, policy_version 17217 (0.0008) +[2024-12-28 14:26:36,657][100934] Updated weights for policy 0, policy_version 17227 (0.0008) +[2024-12-28 14:26:38,236][100934] Updated weights for policy 0, policy_version 17237 (0.0007) +[2024-12-28 14:26:38,944][100720] Fps is (10 sec: 24166.4, 60 sec: 25326.9, 300 sec: 25325.8). Total num frames: 70619136. Throughput: 0: 6451.9. Samples: 7648084. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:26:38,945][100720] Avg episode reward: [(0, '4.449')] +[2024-12-28 14:26:39,746][100934] Updated weights for policy 0, policy_version 17247 (0.0007) +[2024-12-28 14:26:41,284][100934] Updated weights for policy 0, policy_version 17257 (0.0006) +[2024-12-28 14:26:42,822][100934] Updated weights for policy 0, policy_version 17267 (0.0007) +[2024-12-28 14:26:43,944][100720] Fps is (10 sec: 26214.4, 60 sec: 25668.3, 300 sec: 25339.7). Total num frames: 70754304. Throughput: 0: 6454.0. Samples: 7688346. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:26:43,945][100720] Avg episode reward: [(0, '4.437')] +[2024-12-28 14:26:44,336][100934] Updated weights for policy 0, policy_version 17277 (0.0007) +[2024-12-28 14:26:45,832][100934] Updated weights for policy 0, policy_version 17287 (0.0006) +[2024-12-28 14:26:47,378][100934] Updated weights for policy 0, policy_version 17297 (0.0006) +[2024-12-28 14:26:48,944][100720] Fps is (10 sec: 26624.1, 60 sec: 25941.4, 300 sec: 25325.8). Total num frames: 70885376. Throughput: 0: 6450.4. Samples: 7708430. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:26:48,945][100720] Avg episode reward: [(0, '4.552')] +[2024-12-28 14:26:49,068][100934] Updated weights for policy 0, policy_version 17307 (0.0008) +[2024-12-28 14:26:50,878][100934] Updated weights for policy 0, policy_version 17317 (0.0008) +[2024-12-28 14:26:52,695][100934] Updated weights for policy 0, policy_version 17327 (0.0010) +[2024-12-28 14:26:53,944][100720] Fps is (10 sec: 24165.9, 60 sec: 25599.9, 300 sec: 25256.3). Total num frames: 70995968. Throughput: 0: 6328.8. Samples: 7743472. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:26:53,946][100720] Avg episode reward: [(0, '4.271')] +[2024-12-28 14:26:54,517][100934] Updated weights for policy 0, policy_version 17337 (0.0007) +[2024-12-28 14:26:56,336][100934] Updated weights for policy 0, policy_version 17347 (0.0008) +[2024-12-28 14:26:58,093][100934] Updated weights for policy 0, policy_version 17357 (0.0008) +[2024-12-28 14:26:58,944][100720] Fps is (10 sec: 22528.1, 60 sec: 25326.9, 300 sec: 25200.8). Total num frames: 71110656. Throughput: 0: 6206.9. Samples: 7777874. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:26:58,945][100720] Avg episode reward: [(0, '4.492')] +[2024-12-28 14:26:59,727][100934] Updated weights for policy 0, policy_version 17367 (0.0007) +[2024-12-28 14:27:01,227][100934] Updated weights for policy 0, policy_version 17377 (0.0006) +[2024-12-28 14:27:02,806][100934] Updated weights for policy 0, policy_version 17387 (0.0007) +[2024-12-28 14:27:03,944][100720] Fps is (10 sec: 24985.9, 60 sec: 25326.9, 300 sec: 25214.7). Total num frames: 71245824. Throughput: 0: 6203.5. Samples: 7797866. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:27:03,945][100720] Avg episode reward: [(0, '4.661')] +[2024-12-28 14:27:04,409][100934] Updated weights for policy 0, policy_version 17397 (0.0007) +[2024-12-28 14:27:06,053][100934] Updated weights for policy 0, policy_version 17407 (0.0007) +[2024-12-28 14:27:07,660][100934] Updated weights for policy 0, policy_version 17417 (0.0006) +[2024-12-28 14:27:08,944][100720] Fps is (10 sec: 25804.7, 60 sec: 25122.1, 300 sec: 25186.9). Total num frames: 71368704. Throughput: 0: 6159.9. Samples: 7835974. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:27:08,945][100720] Avg episode reward: [(0, '4.384')] +[2024-12-28 14:27:09,409][100934] Updated weights for policy 0, policy_version 17427 (0.0009) +[2024-12-28 14:27:11,202][100934] Updated weights for policy 0, policy_version 17437 (0.0008) +[2024-12-28 14:27:13,087][100934] Updated weights for policy 0, policy_version 17447 (0.0010) +[2024-12-28 14:27:13,944][100720] Fps is (10 sec: 23347.2, 60 sec: 24712.5, 300 sec: 25103.6). Total num frames: 71479296. Throughput: 0: 6119.5. Samples: 7869748. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:27:13,945][100720] Avg episode reward: [(0, '4.295')] +[2024-12-28 14:27:14,947][100934] Updated weights for policy 0, policy_version 17457 (0.0009) +[2024-12-28 14:27:16,804][100934] Updated weights for policy 0, policy_version 17467 (0.0007) +[2024-12-28 14:27:18,526][100934] Updated weights for policy 0, policy_version 17477 (0.0008) +[2024-12-28 14:27:18,944][100720] Fps is (10 sec: 22528.2, 60 sec: 24439.5, 300 sec: 25048.1). Total num frames: 71593984. Throughput: 0: 6110.3. Samples: 7886432. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:27:18,945][100720] Avg episode reward: [(0, '4.583')] +[2024-12-28 14:27:20,720][100934] Updated weights for policy 0, policy_version 17487 (0.0007) +[2024-12-28 14:27:22,358][100934] Updated weights for policy 0, policy_version 17497 (0.0006) +[2024-12-28 14:27:23,944][100720] Fps is (10 sec: 22528.0, 60 sec: 24029.8, 300 sec: 25034.2). Total num frames: 71704576. Throughput: 0: 6037.5. Samples: 7919774. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:27:23,945][100720] Avg episode reward: [(0, '4.334')] +[2024-12-28 14:27:24,008][100934] Updated weights for policy 0, policy_version 17507 (0.0006) +[2024-12-28 14:27:25,564][100934] Updated weights for policy 0, policy_version 17517 (0.0006) +[2024-12-28 14:27:27,132][100934] Updated weights for policy 0, policy_version 17527 (0.0007) +[2024-12-28 14:27:28,674][100934] Updated weights for policy 0, policy_version 17537 (0.0006) +[2024-12-28 14:27:28,944][100720] Fps is (10 sec: 24166.1, 60 sec: 24302.9, 300 sec: 25089.7). Total num frames: 71835648. Throughput: 0: 6009.3. Samples: 7958764. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:27:28,945][100720] Avg episode reward: [(0, '4.465')] +[2024-12-28 14:27:30,276][100934] Updated weights for policy 0, policy_version 17547 (0.0007) +[2024-12-28 14:27:31,794][100934] Updated weights for policy 0, policy_version 17557 (0.0007) +[2024-12-28 14:27:33,326][100934] Updated weights for policy 0, policy_version 17567 (0.0007) +[2024-12-28 14:27:33,944][100720] Fps is (10 sec: 26214.3, 60 sec: 24575.9, 300 sec: 25103.6). Total num frames: 71966720. Throughput: 0: 5999.4. Samples: 7978402. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:27:33,945][100720] Avg episode reward: [(0, '4.459')] +[2024-12-28 14:27:34,893][100934] Updated weights for policy 0, policy_version 17577 (0.0007) +[2024-12-28 14:27:36,535][100934] Updated weights for policy 0, policy_version 17587 (0.0008) +[2024-12-28 14:27:38,408][100934] Updated weights for policy 0, policy_version 17597 (0.0008) +[2024-12-28 14:27:38,944][100720] Fps is (10 sec: 24985.8, 60 sec: 24439.5, 300 sec: 25062.0). Total num frames: 72085504. Throughput: 0: 6054.3. Samples: 8015916. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:27:38,946][100720] Avg episode reward: [(0, '4.213')] +[2024-12-28 14:27:40,261][100934] Updated weights for policy 0, policy_version 17607 (0.0008) +[2024-12-28 14:27:42,104][100934] Updated weights for policy 0, policy_version 17617 (0.0008) +[2024-12-28 14:27:43,944][100720] Fps is (10 sec: 22937.7, 60 sec: 24029.8, 300 sec: 24978.6). Total num frames: 72196096. Throughput: 0: 6030.9. Samples: 8049264. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:27:43,945][100720] Avg episode reward: [(0, '4.434')] +[2024-12-28 14:27:43,968][100934] Updated weights for policy 0, policy_version 17627 (0.0009) +[2024-12-28 14:27:45,803][100934] Updated weights for policy 0, policy_version 17637 (0.0007) +[2024-12-28 14:27:47,603][100934] Updated weights for policy 0, policy_version 17647 (0.0009) +[2024-12-28 14:27:48,944][100720] Fps is (10 sec: 22937.6, 60 sec: 23825.1, 300 sec: 24937.0). Total num frames: 72314880. Throughput: 0: 5953.9. Samples: 8065792. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:27:48,945][100720] Avg episode reward: [(0, '4.305')] +[2024-12-28 14:27:49,271][100934] Updated weights for policy 0, policy_version 17657 (0.0008) +[2024-12-28 14:27:50,880][100934] Updated weights for policy 0, policy_version 17667 (0.0007) +[2024-12-28 14:27:52,421][100934] Updated weights for policy 0, policy_version 17677 (0.0008) +[2024-12-28 14:27:53,944][100720] Fps is (10 sec: 24576.1, 60 sec: 24098.2, 300 sec: 24909.2). Total num frames: 72441856. Throughput: 0: 5952.6. Samples: 8103840. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:27:53,945][100720] Avg episode reward: [(0, '4.439')] +[2024-12-28 14:27:53,951][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000017686_72441856.pth... +[2024-12-28 14:27:53,988][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000016227_66465792.pth +[2024-12-28 14:27:54,086][100934] Updated weights for policy 0, policy_version 17687 (0.0008) +[2024-12-28 14:27:55,920][100934] Updated weights for policy 0, policy_version 17697 (0.0008) +[2024-12-28 14:27:57,771][100934] Updated weights for policy 0, policy_version 17707 (0.0008) +[2024-12-28 14:27:58,944][100720] Fps is (10 sec: 23756.7, 60 sec: 24029.9, 300 sec: 24839.8). Total num frames: 72552448. Throughput: 0: 5956.6. Samples: 8137794. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:27:58,945][100720] Avg episode reward: [(0, '4.233')] +[2024-12-28 14:27:59,552][100934] Updated weights for policy 0, policy_version 17717 (0.0008) +[2024-12-28 14:28:01,429][100934] Updated weights for policy 0, policy_version 17727 (0.0009) +[2024-12-28 14:28:03,310][100934] Updated weights for policy 0, policy_version 17737 (0.0009) +[2024-12-28 14:28:03,944][100720] Fps is (10 sec: 22118.4, 60 sec: 23620.3, 300 sec: 24770.4). Total num frames: 72663040. Throughput: 0: 5958.5. Samples: 8154566. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:28:03,945][100720] Avg episode reward: [(0, '4.605')] +[2024-12-28 14:28:05,021][100934] Updated weights for policy 0, policy_version 17747 (0.0007) +[2024-12-28 14:28:06,683][100934] Updated weights for policy 0, policy_version 17757 (0.0008) +[2024-12-28 14:28:08,280][100934] Updated weights for policy 0, policy_version 17767 (0.0007) +[2024-12-28 14:28:08,944][100720] Fps is (10 sec: 23756.7, 60 sec: 23688.5, 300 sec: 24742.6). Total num frames: 72790016. Throughput: 0: 6015.2. Samples: 8190456. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:28:08,945][100720] Avg episode reward: [(0, '4.370')] +[2024-12-28 14:28:09,821][100934] Updated weights for policy 0, policy_version 17777 (0.0007) +[2024-12-28 14:28:11,404][100934] Updated weights for policy 0, policy_version 17787 (0.0009) +[2024-12-28 14:28:12,951][100934] Updated weights for policy 0, policy_version 17797 (0.0007) +[2024-12-28 14:28:13,944][100720] Fps is (10 sec: 25804.9, 60 sec: 24029.9, 300 sec: 24784.3). Total num frames: 72921088. Throughput: 0: 6026.6. Samples: 8229960. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:28:13,945][100720] Avg episode reward: [(0, '4.443')] +[2024-12-28 14:28:14,500][100934] Updated weights for policy 0, policy_version 17807 (0.0007) +[2024-12-28 14:28:15,986][100934] Updated weights for policy 0, policy_version 17817 (0.0006) +[2024-12-28 14:28:17,534][100934] Updated weights for policy 0, policy_version 17827 (0.0006) +[2024-12-28 14:28:18,944][100720] Fps is (10 sec: 26214.6, 60 sec: 24302.9, 300 sec: 24839.8). Total num frames: 73052160. Throughput: 0: 6034.2. Samples: 8249938. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:28:18,944][100720] Avg episode reward: [(0, '4.289')] +[2024-12-28 14:28:19,106][100934] Updated weights for policy 0, policy_version 17837 (0.0007) +[2024-12-28 14:28:20,641][100934] Updated weights for policy 0, policy_version 17847 (0.0007) +[2024-12-28 14:28:22,427][100934] Updated weights for policy 0, policy_version 17857 (0.0008) +[2024-12-28 14:28:23,944][100720] Fps is (10 sec: 25395.1, 60 sec: 24507.8, 300 sec: 24839.8). Total num frames: 73175040. Throughput: 0: 6042.1. Samples: 8287810. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:28:23,945][100720] Avg episode reward: [(0, '4.473')] +[2024-12-28 14:28:24,217][100934] Updated weights for policy 0, policy_version 17867 (0.0008) +[2024-12-28 14:28:26,004][100934] Updated weights for policy 0, policy_version 17877 (0.0008) +[2024-12-28 14:28:27,800][100934] Updated weights for policy 0, policy_version 17887 (0.0008) +[2024-12-28 14:28:28,944][100720] Fps is (10 sec: 23756.7, 60 sec: 24234.7, 300 sec: 24812.0). Total num frames: 73289728. Throughput: 0: 6055.3. Samples: 8321752. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:28:28,945][100720] Avg episode reward: [(0, '4.666')] +[2024-12-28 14:28:29,643][100934] Updated weights for policy 0, policy_version 17897 (0.0008) +[2024-12-28 14:28:31,455][100934] Updated weights for policy 0, policy_version 17907 (0.0009) +[2024-12-28 14:28:33,877][100934] Updated weights for policy 0, policy_version 17917 (0.0007) +[2024-12-28 14:28:33,944][100720] Fps is (10 sec: 21299.3, 60 sec: 23688.6, 300 sec: 24770.4). Total num frames: 73388032. Throughput: 0: 6068.6. Samples: 8338880. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:28:33,945][100720] Avg episode reward: [(0, '4.343')] +[2024-12-28 14:28:35,414][100934] Updated weights for policy 0, policy_version 17927 (0.0008) +[2024-12-28 14:28:36,938][100934] Updated weights for policy 0, policy_version 17937 (0.0007) +[2024-12-28 14:28:38,477][100934] Updated weights for policy 0, policy_version 17947 (0.0007) +[2024-12-28 14:28:38,944][100720] Fps is (10 sec: 22937.6, 60 sec: 23893.3, 300 sec: 24812.1). Total num frames: 73519104. Throughput: 0: 5987.7. Samples: 8373284. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:28:38,945][100720] Avg episode reward: [(0, '4.533')] +[2024-12-28 14:28:40,056][100934] Updated weights for policy 0, policy_version 17957 (0.0008) +[2024-12-28 14:28:41,579][100934] Updated weights for policy 0, policy_version 17967 (0.0006) +[2024-12-28 14:28:43,145][100934] Updated weights for policy 0, policy_version 17977 (0.0007) +[2024-12-28 14:28:43,944][100720] Fps is (10 sec: 26624.0, 60 sec: 24303.0, 300 sec: 24812.0). Total num frames: 73654272. Throughput: 0: 6116.3. Samples: 8413028. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:28:43,945][100720] Avg episode reward: [(0, '4.497')] +[2024-12-28 14:28:44,683][100934] Updated weights for policy 0, policy_version 17987 (0.0007) +[2024-12-28 14:28:46,229][100934] Updated weights for policy 0, policy_version 17997 (0.0007) +[2024-12-28 14:28:47,796][100934] Updated weights for policy 0, policy_version 18007 (0.0006) +[2024-12-28 14:28:48,944][100720] Fps is (10 sec: 26624.3, 60 sec: 24507.8, 300 sec: 24812.1). Total num frames: 73785344. Throughput: 0: 6183.4. Samples: 8432820. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:28:48,944][100720] Avg episode reward: [(0, '4.374')] +[2024-12-28 14:28:49,323][100934] Updated weights for policy 0, policy_version 18017 (0.0007) +[2024-12-28 14:28:50,889][100934] Updated weights for policy 0, policy_version 18027 (0.0006) +[2024-12-28 14:28:52,419][100934] Updated weights for policy 0, policy_version 18037 (0.0006) +[2024-12-28 14:28:53,944][100720] Fps is (10 sec: 26214.3, 60 sec: 24576.0, 300 sec: 24812.0). Total num frames: 73916416. Throughput: 0: 6270.2. Samples: 8472616. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:28:53,945][100720] Avg episode reward: [(0, '4.388')] +[2024-12-28 14:28:53,977][100934] Updated weights for policy 0, policy_version 18047 (0.0007) +[2024-12-28 14:28:55,748][100934] Updated weights for policy 0, policy_version 18057 (0.0009) +[2024-12-28 14:28:57,570][100934] Updated weights for policy 0, policy_version 18067 (0.0008) +[2024-12-28 14:28:58,944][100720] Fps is (10 sec: 24575.7, 60 sec: 24644.3, 300 sec: 24825.9). Total num frames: 74031104. Throughput: 0: 6170.8. Samples: 8507646. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:28:58,945][100720] Avg episode reward: [(0, '4.230')] +[2024-12-28 14:28:59,416][100934] Updated weights for policy 0, policy_version 18077 (0.0007) +[2024-12-28 14:29:01,257][100934] Updated weights for policy 0, policy_version 18087 (0.0008) +[2024-12-28 14:29:03,066][100934] Updated weights for policy 0, policy_version 18097 (0.0008) +[2024-12-28 14:29:03,944][100720] Fps is (10 sec: 22528.0, 60 sec: 24644.3, 300 sec: 24825.9). Total num frames: 74141696. Throughput: 0: 6099.8. Samples: 8524430. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:29:03,945][100720] Avg episode reward: [(0, '4.551')] +[2024-12-28 14:29:04,803][100934] Updated weights for policy 0, policy_version 18107 (0.0007) +[2024-12-28 14:29:06,373][100934] Updated weights for policy 0, policy_version 18117 (0.0007) +[2024-12-28 14:29:07,860][100934] Updated weights for policy 0, policy_version 18127 (0.0007) +[2024-12-28 14:29:08,944][100720] Fps is (10 sec: 24576.1, 60 sec: 24780.8, 300 sec: 24825.9). Total num frames: 74276864. Throughput: 0: 6089.8. Samples: 8561850. 
Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:29:08,945][100720] Avg episode reward: [(0, '4.416')] +[2024-12-28 14:29:09,405][100934] Updated weights for policy 0, policy_version 18137 (0.0007) +[2024-12-28 14:29:11,179][100934] Updated weights for policy 0, policy_version 18147 (0.0008) +[2024-12-28 14:29:13,013][100934] Updated weights for policy 0, policy_version 18157 (0.0008) +[2024-12-28 14:29:13,944][100720] Fps is (10 sec: 24985.1, 60 sec: 24507.6, 300 sec: 24756.5). Total num frames: 74391552. Throughput: 0: 6122.3. Samples: 8597256. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:29:13,945][100720] Avg episode reward: [(0, '4.459')] +[2024-12-28 14:29:14,822][100934] Updated weights for policy 0, policy_version 18167 (0.0008) +[2024-12-28 14:29:16,655][100934] Updated weights for policy 0, policy_version 18177 (0.0009) +[2024-12-28 14:29:18,509][100934] Updated weights for policy 0, policy_version 18187 (0.0008) +[2024-12-28 14:29:18,944][100720] Fps is (10 sec: 22527.9, 60 sec: 24166.4, 300 sec: 24687.1). Total num frames: 74502144. Throughput: 0: 6114.8. Samples: 8614046. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:29:18,945][100720] Avg episode reward: [(0, '4.400')] +[2024-12-28 14:29:20,360][100934] Updated weights for policy 0, policy_version 18197 (0.0008) +[2024-12-28 14:29:21,847][100934] Updated weights for policy 0, policy_version 18207 (0.0006) +[2024-12-28 14:29:23,344][100934] Updated weights for policy 0, policy_version 18217 (0.0006) +[2024-12-28 14:29:23,944][100720] Fps is (10 sec: 24167.0, 60 sec: 24302.9, 300 sec: 24728.7). Total num frames: 74633216. Throughput: 0: 6161.4. Samples: 8650546. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:29:23,945][100720] Avg episode reward: [(0, '4.288')] +[2024-12-28 14:29:24,881][100934] Updated weights for policy 0, policy_version 18227 (0.0006) +[2024-12-28 14:29:26,425][100934] Updated weights for policy 0, policy_version 18237 (0.0006) +[2024-12-28 14:29:27,944][100934] Updated weights for policy 0, policy_version 18247 (0.0006) +[2024-12-28 14:29:28,944][100720] Fps is (10 sec: 26214.5, 60 sec: 24576.0, 300 sec: 24798.2). Total num frames: 74764288. Throughput: 0: 6168.9. Samples: 8690630. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:29:28,945][100720] Avg episode reward: [(0, '4.346')] +[2024-12-28 14:29:29,501][100934] Updated weights for policy 0, policy_version 18257 (0.0006) +[2024-12-28 14:29:31,082][100934] Updated weights for policy 0, policy_version 18267 (0.0007) +[2024-12-28 14:29:32,580][100934] Updated weights for policy 0, policy_version 18277 (0.0007) +[2024-12-28 14:29:33,944][100720] Fps is (10 sec: 26214.4, 60 sec: 25122.1, 300 sec: 24812.0). Total num frames: 74895360. Throughput: 0: 6172.3. Samples: 8710574. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:29:33,945][100720] Avg episode reward: [(0, '4.563')] +[2024-12-28 14:29:34,121][100934] Updated weights for policy 0, policy_version 18287 (0.0007) +[2024-12-28 14:29:35,634][100934] Updated weights for policy 0, policy_version 18297 (0.0006) +[2024-12-28 14:29:37,125][100934] Updated weights for policy 0, policy_version 18307 (0.0007) +[2024-12-28 14:29:38,647][100934] Updated weights for policy 0, policy_version 18317 (0.0007) +[2024-12-28 14:29:38,944][100720] Fps is (10 sec: 26623.9, 60 sec: 25190.4, 300 sec: 24825.9). Total num frames: 75030528. Throughput: 0: 6188.0. Samples: 8751074. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:29:38,945][100720] Avg episode reward: [(0, '4.214')] +[2024-12-28 14:29:40,212][100934] Updated weights for policy 0, policy_version 18327 (0.0006) +[2024-12-28 14:29:41,766][100934] Updated weights for policy 0, policy_version 18337 (0.0008) +[2024-12-28 14:29:43,276][100934] Updated weights for policy 0, policy_version 18347 (0.0006) +[2024-12-28 14:29:43,944][100720] Fps is (10 sec: 27033.6, 60 sec: 25190.4, 300 sec: 24826.0). Total num frames: 75165696. Throughput: 0: 6299.7. Samples: 8791134. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:29:43,945][100720] Avg episode reward: [(0, '4.580')] +[2024-12-28 14:29:44,786][100934] Updated weights for policy 0, policy_version 18357 (0.0006) +[2024-12-28 14:29:46,323][100934] Updated weights for policy 0, policy_version 18367 (0.0007) +[2024-12-28 14:29:47,894][100934] Updated weights for policy 0, policy_version 18377 (0.0006) +[2024-12-28 14:29:48,944][100720] Fps is (10 sec: 26623.8, 60 sec: 25190.3, 300 sec: 24839.8). Total num frames: 75296768. Throughput: 0: 6368.7. Samples: 8811024. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:29:48,945][100720] Avg episode reward: [(0, '4.297')] +[2024-12-28 14:29:49,424][100934] Updated weights for policy 0, policy_version 18387 (0.0006) +[2024-12-28 14:29:51,051][100934] Updated weights for policy 0, policy_version 18397 (0.0007) +[2024-12-28 14:29:52,591][100934] Updated weights for policy 0, policy_version 18407 (0.0007) +[2024-12-28 14:29:53,944][100720] Fps is (10 sec: 26214.4, 60 sec: 25190.4, 300 sec: 24895.4). Total num frames: 75427840. Throughput: 0: 6410.6. Samples: 8850328. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:29:53,945][100720] Avg episode reward: [(0, '4.255')] +[2024-12-28 14:29:53,949][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000018415_75427840.pth... +[2024-12-28 14:29:53,991][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000016958_69459968.pth +[2024-12-28 14:29:54,159][100934] Updated weights for policy 0, policy_version 18417 (0.0007) +[2024-12-28 14:29:55,728][100934] Updated weights for policy 0, policy_version 18427 (0.0007) +[2024-12-28 14:29:57,267][100934] Updated weights for policy 0, policy_version 18437 (0.0006) +[2024-12-28 14:29:58,784][100934] Updated weights for policy 0, policy_version 18447 (0.0006) +[2024-12-28 14:29:58,944][100720] Fps is (10 sec: 26624.3, 60 sec: 25531.8, 300 sec: 24964.8). Total num frames: 75563008. Throughput: 0: 6503.2. Samples: 8889898. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:29:58,945][100720] Avg episode reward: [(0, '4.463')] +[2024-12-28 14:30:00,313][100934] Updated weights for policy 0, policy_version 18457 (0.0006) +[2024-12-28 14:30:01,898][100934] Updated weights for policy 0, policy_version 18467 (0.0006) +[2024-12-28 14:30:03,425][100934] Updated weights for policy 0, policy_version 18477 (0.0006) +[2024-12-28 14:30:03,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25873.1, 300 sec: 24950.9). Total num frames: 75694080. Throughput: 0: 6570.8. Samples: 8909734. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:30:03,945][100720] Avg episode reward: [(0, '4.325')] +[2024-12-28 14:30:04,960][100934] Updated weights for policy 0, policy_version 18487 (0.0006) +[2024-12-28 14:30:06,516][100934] Updated weights for policy 0, policy_version 18497 (0.0006) +[2024-12-28 14:30:08,091][100934] Updated weights for policy 0, policy_version 18507 (0.0007) +[2024-12-28 14:30:08,944][100720] Fps is (10 sec: 26214.3, 60 sec: 25804.8, 300 sec: 24964.8). Total num frames: 75825152. Throughput: 0: 6647.7. Samples: 8949692. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:30:08,945][100720] Avg episode reward: [(0, '4.375')] +[2024-12-28 14:30:09,682][100934] Updated weights for policy 0, policy_version 18517 (0.0007) +[2024-12-28 14:30:11,243][100934] Updated weights for policy 0, policy_version 18527 (0.0007) +[2024-12-28 14:30:12,813][100934] Updated weights for policy 0, policy_version 18537 (0.0007) +[2024-12-28 14:30:13,944][100720] Fps is (10 sec: 26214.2, 60 sec: 26077.9, 300 sec: 25020.3). Total num frames: 75956224. Throughput: 0: 6622.5. Samples: 8988644. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:30:13,945][100720] Avg episode reward: [(0, '4.548')] +[2024-12-28 14:30:14,346][100934] Updated weights for policy 0, policy_version 18547 (0.0006) +[2024-12-28 14:30:15,887][100934] Updated weights for policy 0, policy_version 18557 (0.0006) +[2024-12-28 14:30:17,456][100934] Updated weights for policy 0, policy_version 18567 (0.0006) +[2024-12-28 14:30:18,944][100720] Fps is (10 sec: 26214.0, 60 sec: 26419.1, 300 sec: 25075.8). Total num frames: 76087296. Throughput: 0: 6617.9. Samples: 9008382. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:30:18,945][100720] Avg episode reward: [(0, '4.371')] +[2024-12-28 14:30:19,121][100934] Updated weights for policy 0, policy_version 18577 (0.0007) +[2024-12-28 14:30:20,904][100934] Updated weights for policy 0, policy_version 18587 (0.0008) +[2024-12-28 14:30:22,672][100934] Updated weights for policy 0, policy_version 18597 (0.0007) +[2024-12-28 14:30:23,944][100720] Fps is (10 sec: 24166.4, 60 sec: 26077.8, 300 sec: 24978.7). Total num frames: 76197888. Throughput: 0: 6509.3. Samples: 9043994. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:30:23,945][100720] Avg episode reward: [(0, '4.501')] +[2024-12-28 14:30:24,515][100934] Updated weights for policy 0, policy_version 18607 (0.0008) +[2024-12-28 14:30:26,364][100934] Updated weights for policy 0, policy_version 18617 (0.0007) +[2024-12-28 14:30:28,216][100934] Updated weights for policy 0, policy_version 18627 (0.0009) +[2024-12-28 14:30:28,944][100720] Fps is (10 sec: 22118.7, 60 sec: 25736.5, 300 sec: 24895.4). Total num frames: 76308480. Throughput: 0: 6365.5. Samples: 9077580. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:30:28,945][100720] Avg episode reward: [(0, '4.299')] +[2024-12-28 14:30:29,954][100934] Updated weights for policy 0, policy_version 18637 (0.0008) +[2024-12-28 14:30:31,492][100934] Updated weights for policy 0, policy_version 18647 (0.0007) +[2024-12-28 14:30:32,989][100934] Updated weights for policy 0, policy_version 18657 (0.0007) +[2024-12-28 14:30:33,944][100720] Fps is (10 sec: 24576.2, 60 sec: 25804.8, 300 sec: 24895.3). Total num frames: 76443648. Throughput: 0: 6351.9. Samples: 9096860. 
Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:30:33,945][100720] Avg episode reward: [(0, '4.361')] +[2024-12-28 14:30:34,501][100934] Updated weights for policy 0, policy_version 18667 (0.0006) +[2024-12-28 14:30:36,013][100934] Updated weights for policy 0, policy_version 18677 (0.0006) +[2024-12-28 14:30:37,506][100934] Updated weights for policy 0, policy_version 18687 (0.0007) +[2024-12-28 14:30:38,944][100720] Fps is (10 sec: 27033.6, 60 sec: 25804.8, 300 sec: 24964.8). Total num frames: 76578816. Throughput: 0: 6380.4. Samples: 9137448. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:30:38,945][100720] Avg episode reward: [(0, '4.455')] +[2024-12-28 14:30:39,024][100934] Updated weights for policy 0, policy_version 18697 (0.0006) +[2024-12-28 14:30:40,567][100934] Updated weights for policy 0, policy_version 18707 (0.0006) +[2024-12-28 14:30:42,127][100934] Updated weights for policy 0, policy_version 18717 (0.0006) +[2024-12-28 14:30:43,626][100934] Updated weights for policy 0, policy_version 18727 (0.0006) +[2024-12-28 14:30:43,944][100720] Fps is (10 sec: 27033.6, 60 sec: 25804.8, 300 sec: 25034.2). Total num frames: 76713984. Throughput: 0: 6396.7. Samples: 9177748. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:30:43,945][100720] Avg episode reward: [(0, '4.470')] +[2024-12-28 14:30:45,154][100934] Updated weights for policy 0, policy_version 18737 (0.0007) +[2024-12-28 14:30:46,698][100934] Updated weights for policy 0, policy_version 18747 (0.0006) +[2024-12-28 14:30:48,209][100934] Updated weights for policy 0, policy_version 18757 (0.0006) +[2024-12-28 14:30:48,944][100720] Fps is (10 sec: 26624.1, 60 sec: 25804.8, 300 sec: 25034.2). Total num frames: 76845056. Throughput: 0: 6398.5. Samples: 9197668. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:30:48,945][100720] Avg episode reward: [(0, '4.284')] +[2024-12-28 14:30:49,749][100934] Updated weights for policy 0, policy_version 18767 (0.0006) +[2024-12-28 14:30:51,287][100934] Updated weights for policy 0, policy_version 18777 (0.0007) +[2024-12-28 14:30:52,835][100934] Updated weights for policy 0, policy_version 18787 (0.0007) +[2024-12-28 14:30:53,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25873.1, 300 sec: 25048.1). Total num frames: 76980224. Throughput: 0: 6400.7. Samples: 9237722. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:30:53,945][100720] Avg episode reward: [(0, '4.199')] +[2024-12-28 14:30:54,379][100934] Updated weights for policy 0, policy_version 18797 (0.0006) +[2024-12-28 14:30:56,100][100934] Updated weights for policy 0, policy_version 18807 (0.0007) +[2024-12-28 14:30:57,903][100934] Updated weights for policy 0, policy_version 18817 (0.0007) +[2024-12-28 14:30:58,944][100720] Fps is (10 sec: 24985.5, 60 sec: 25531.7, 300 sec: 24978.7). Total num frames: 77094912. Throughput: 0: 6332.7. Samples: 9273614. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:30:58,945][100720] Avg episode reward: [(0, '4.541')] +[2024-12-28 14:30:59,724][100934] Updated weights for policy 0, policy_version 18827 (0.0008) +[2024-12-28 14:31:01,559][100934] Updated weights for policy 0, policy_version 18837 (0.0008) +[2024-12-28 14:31:03,436][100934] Updated weights for policy 0, policy_version 18847 (0.0009) +[2024-12-28 14:31:03,944][100720] Fps is (10 sec: 22527.9, 60 sec: 25190.4, 300 sec: 24895.3). Total num frames: 77205504. Throughput: 0: 6264.0. Samples: 9290262. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:31:03,945][100720] Avg episode reward: [(0, '4.354')] +[2024-12-28 14:31:05,292][100934] Updated weights for policy 0, policy_version 18857 (0.0008) +[2024-12-28 14:31:06,870][100934] Updated weights for policy 0, policy_version 18867 (0.0006) +[2024-12-28 14:31:08,410][100934] Updated weights for policy 0, policy_version 18877 (0.0007) +[2024-12-28 14:31:08,944][100720] Fps is (10 sec: 23756.9, 60 sec: 25122.1, 300 sec: 24867.6). Total num frames: 77332480. Throughput: 0: 6271.8. Samples: 9326224. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:31:08,945][100720] Avg episode reward: [(0, '4.465')] +[2024-12-28 14:31:09,939][100934] Updated weights for policy 0, policy_version 18887 (0.0007) +[2024-12-28 14:31:11,480][100934] Updated weights for policy 0, policy_version 18897 (0.0007) +[2024-12-28 14:31:13,045][100934] Updated weights for policy 0, policy_version 18907 (0.0006) +[2024-12-28 14:31:13,944][100720] Fps is (10 sec: 26214.6, 60 sec: 25190.5, 300 sec: 24881.5). Total num frames: 77467648. Throughput: 0: 6411.1. Samples: 9366080. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:31:13,945][100720] Avg episode reward: [(0, '4.445')] +[2024-12-28 14:31:14,561][100934] Updated weights for policy 0, policy_version 18917 (0.0006) +[2024-12-28 14:31:16,122][100934] Updated weights for policy 0, policy_version 18927 (0.0007) +[2024-12-28 14:31:17,646][100934] Updated weights for policy 0, policy_version 18937 (0.0006) +[2024-12-28 14:31:18,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25190.5, 300 sec: 24867.6). Total num frames: 77598720. Throughput: 0: 6421.8. Samples: 9385840. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:31:18,945][100720] Avg episode reward: [(0, '4.500')] +[2024-12-28 14:31:19,203][100934] Updated weights for policy 0, policy_version 18947 (0.0007) +[2024-12-28 14:31:20,798][100934] Updated weights for policy 0, policy_version 18957 (0.0007) +[2024-12-28 14:31:22,382][100934] Updated weights for policy 0, policy_version 18967 (0.0007) +[2024-12-28 14:31:23,939][100934] Updated weights for policy 0, policy_version 18977 (0.0008) +[2024-12-28 14:31:23,944][100720] Fps is (10 sec: 26214.4, 60 sec: 25531.8, 300 sec: 24923.1). Total num frames: 77729792. Throughput: 0: 6392.2. Samples: 9425096. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:31:23,945][100720] Avg episode reward: [(0, '4.226')] +[2024-12-28 14:31:25,476][100934] Updated weights for policy 0, policy_version 18987 (0.0007) +[2024-12-28 14:31:27,015][100934] Updated weights for policy 0, policy_version 18997 (0.0007) +[2024-12-28 14:31:28,580][100934] Updated weights for policy 0, policy_version 19007 (0.0006) +[2024-12-28 14:31:28,944][100720] Fps is (10 sec: 26214.5, 60 sec: 25873.1, 300 sec: 24978.7). Total num frames: 77860864. Throughput: 0: 6379.3. Samples: 9464816. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:31:28,945][100720] Avg episode reward: [(0, '4.536')] +[2024-12-28 14:31:30,119][100934] Updated weights for policy 0, policy_version 19017 (0.0007) +[2024-12-28 14:31:31,699][100934] Updated weights for policy 0, policy_version 19027 (0.0007) +[2024-12-28 14:31:33,274][100934] Updated weights for policy 0, policy_version 19037 (0.0007) +[2024-12-28 14:31:33,944][100720] Fps is (10 sec: 26214.3, 60 sec: 25804.8, 300 sec: 24992.5). Total num frames: 77991936. Throughput: 0: 6370.7. Samples: 9484350. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:31:33,945][100720] Avg episode reward: [(0, '4.395')] +[2024-12-28 14:31:34,839][100934] Updated weights for policy 0, policy_version 19047 (0.0006) +[2024-12-28 14:31:36,351][100934] Updated weights for policy 0, policy_version 19057 (0.0006) +[2024-12-28 14:31:37,926][100934] Updated weights for policy 0, policy_version 19067 (0.0007) +[2024-12-28 14:31:38,944][100720] Fps is (10 sec: 26214.2, 60 sec: 25736.5, 300 sec: 24978.7). Total num frames: 78123008. Throughput: 0: 6355.3. Samples: 9523712. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:31:38,945][100720] Avg episode reward: [(0, '4.442')] +[2024-12-28 14:31:39,503][100934] Updated weights for policy 0, policy_version 19077 (0.0007) +[2024-12-28 14:31:41,069][100934] Updated weights for policy 0, policy_version 19087 (0.0006) +[2024-12-28 14:31:42,617][100934] Updated weights for policy 0, policy_version 19097 (0.0007) +[2024-12-28 14:31:43,944][100720] Fps is (10 sec: 26214.5, 60 sec: 25668.3, 300 sec: 24978.7). Total num frames: 78254080. Throughput: 0: 6431.7. Samples: 9563042. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:31:43,945][100720] Avg episode reward: [(0, '4.120')] +[2024-12-28 14:31:44,192][100934] Updated weights for policy 0, policy_version 19107 (0.0007) +[2024-12-28 14:31:45,756][100934] Updated weights for policy 0, policy_version 19117 (0.0007) +[2024-12-28 14:31:47,343][100934] Updated weights for policy 0, policy_version 19127 (0.0006) +[2024-12-28 14:31:48,885][100934] Updated weights for policy 0, policy_version 19137 (0.0007) +[2024-12-28 14:31:48,944][100720] Fps is (10 sec: 26214.5, 60 sec: 25668.3, 300 sec: 25048.1). Total num frames: 78385152. Throughput: 0: 6495.9. Samples: 9582576. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:31:48,945][100720] Avg episode reward: [(0, '4.656')] +[2024-12-28 14:31:50,463][100934] Updated weights for policy 0, policy_version 19147 (0.0007) +[2024-12-28 14:31:52,297][100934] Updated weights for policy 0, policy_version 19157 (0.0007) +[2024-12-28 14:31:53,944][100720] Fps is (10 sec: 24575.9, 60 sec: 25326.9, 300 sec: 25048.1). Total num frames: 78499840. Throughput: 0: 6512.6. Samples: 9619290. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:31:53,945][100720] Avg episode reward: [(0, '4.157')] +[2024-12-28 14:31:53,950][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000019165_78499840.pth... +[2024-12-28 14:31:53,987][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000017686_72441856.pth +[2024-12-28 14:31:54,243][100934] Updated weights for policy 0, policy_version 19167 (0.0008) +[2024-12-28 14:31:56,081][100934] Updated weights for policy 0, policy_version 19177 (0.0008) +[2024-12-28 14:31:57,916][100934] Updated weights for policy 0, policy_version 19187 (0.0007) +[2024-12-28 14:31:58,944][100720] Fps is (10 sec: 22528.0, 60 sec: 25258.7, 300 sec: 24964.8). Total num frames: 78610432. Throughput: 0: 6358.9. Samples: 9652230. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:31:58,945][100720] Avg episode reward: [(0, '4.429')] +[2024-12-28 14:31:59,781][100934] Updated weights for policy 0, policy_version 19197 (0.0008) +[2024-12-28 14:32:01,534][100934] Updated weights for policy 0, policy_version 19207 (0.0008) +[2024-12-28 14:32:03,113][100934] Updated weights for policy 0, policy_version 19217 (0.0006) +[2024-12-28 14:32:03,944][100720] Fps is (10 sec: 23347.3, 60 sec: 25463.5, 300 sec: 24964.8). Total num frames: 78733312. Throughput: 0: 6304.6. Samples: 9669548. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:32:03,945][100720] Avg episode reward: [(0, '4.653')] +[2024-12-28 14:32:04,699][100934] Updated weights for policy 0, policy_version 19227 (0.0007) +[2024-12-28 14:32:06,232][100934] Updated weights for policy 0, policy_version 19237 (0.0007) +[2024-12-28 14:32:07,774][100934] Updated weights for policy 0, policy_version 19247 (0.0007) +[2024-12-28 14:32:08,944][100720] Fps is (10 sec: 25395.3, 60 sec: 25531.7, 300 sec: 25034.2). Total num frames: 78864384. Throughput: 0: 6312.4. Samples: 9709154. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:32:08,945][100720] Avg episode reward: [(0, '4.461')] +[2024-12-28 14:32:09,348][100934] Updated weights for policy 0, policy_version 19257 (0.0008) +[2024-12-28 14:32:10,863][100934] Updated weights for policy 0, policy_version 19267 (0.0006) +[2024-12-28 14:32:12,408][100934] Updated weights for policy 0, policy_version 19277 (0.0006) +[2024-12-28 14:32:13,944][100720] Fps is (10 sec: 26214.3, 60 sec: 25463.4, 300 sec: 25089.7). Total num frames: 78995456. Throughput: 0: 6316.8. Samples: 9749072. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:32:13,945][100720] Avg episode reward: [(0, '4.430')] +[2024-12-28 14:32:13,975][100934] Updated weights for policy 0, policy_version 19287 (0.0008) +[2024-12-28 14:32:15,514][100934] Updated weights for policy 0, policy_version 19297 (0.0007) +[2024-12-28 14:32:17,091][100934] Updated weights for policy 0, policy_version 19307 (0.0007) +[2024-12-28 14:32:18,611][100934] Updated weights for policy 0, policy_version 19317 (0.0007) +[2024-12-28 14:32:18,944][100720] Fps is (10 sec: 26623.9, 60 sec: 25531.7, 300 sec: 25173.1). Total num frames: 79130624. Throughput: 0: 6318.9. Samples: 9768702. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:32:18,945][100720] Avg episode reward: [(0, '4.233')] +[2024-12-28 14:32:20,175][100934] Updated weights for policy 0, policy_version 19327 (0.0007) +[2024-12-28 14:32:21,800][100934] Updated weights for policy 0, policy_version 19337 (0.0007) +[2024-12-28 14:32:23,390][100934] Updated weights for policy 0, policy_version 19347 (0.0007) +[2024-12-28 14:32:23,944][100720] Fps is (10 sec: 26214.3, 60 sec: 25463.4, 300 sec: 25159.2). Total num frames: 79257600. Throughput: 0: 6308.5. Samples: 9807594. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:32:23,945][100720] Avg episode reward: [(0, '4.416')] +[2024-12-28 14:32:24,951][100934] Updated weights for policy 0, policy_version 19357 (0.0007) +[2024-12-28 14:32:26,477][100934] Updated weights for policy 0, policy_version 19367 (0.0006) +[2024-12-28 14:32:28,023][100934] Updated weights for policy 0, policy_version 19377 (0.0007) +[2024-12-28 14:32:28,944][100720] Fps is (10 sec: 25804.8, 60 sec: 25463.4, 300 sec: 25159.2). Total num frames: 79388672. Throughput: 0: 6319.0. Samples: 9847398. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:32:28,945][100720] Avg episode reward: [(0, '4.300')] +[2024-12-28 14:32:29,595][100934] Updated weights for policy 0, policy_version 19387 (0.0007) +[2024-12-28 14:32:31,134][100934] Updated weights for policy 0, policy_version 19397 (0.0007) +[2024-12-28 14:32:32,684][100934] Updated weights for policy 0, policy_version 19407 (0.0007) +[2024-12-28 14:32:33,944][100720] Fps is (10 sec: 26624.1, 60 sec: 25531.7, 300 sec: 25214.7). Total num frames: 79523840. Throughput: 0: 6321.6. Samples: 9867046. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:32:33,945][100720] Avg episode reward: [(0, '4.370')] +[2024-12-28 14:32:34,219][100934] Updated weights for policy 0, policy_version 19417 (0.0006) +[2024-12-28 14:32:35,765][100934] Updated weights for policy 0, policy_version 19427 (0.0007) +[2024-12-28 14:32:37,367][100934] Updated weights for policy 0, policy_version 19437 (0.0007) +[2024-12-28 14:32:38,944][100720] Fps is (10 sec: 26214.2, 60 sec: 25463.5, 300 sec: 25270.2). Total num frames: 79650816. Throughput: 0: 6376.5. Samples: 9906234. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:32:38,945][100720] Avg episode reward: [(0, '4.341')] +[2024-12-28 14:32:38,958][100934] Updated weights for policy 0, policy_version 19447 (0.0007) +[2024-12-28 14:32:40,506][100934] Updated weights for policy 0, policy_version 19457 (0.0006) +[2024-12-28 14:32:42,021][100934] Updated weights for policy 0, policy_version 19467 (0.0007) +[2024-12-28 14:32:43,527][100934] Updated weights for policy 0, policy_version 19477 (0.0006) +[2024-12-28 14:32:43,944][100720] Fps is (10 sec: 26214.5, 60 sec: 25531.7, 300 sec: 25325.8). Total num frames: 79785984. Throughput: 0: 6532.4. Samples: 9946190. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:32:43,945][100720] Avg episode reward: [(0, '4.447')] +[2024-12-28 14:32:45,114][100934] Updated weights for policy 0, policy_version 19487 (0.0007) +[2024-12-28 14:32:46,730][100934] Updated weights for policy 0, policy_version 19497 (0.0007) +[2024-12-28 14:32:48,545][100934] Updated weights for policy 0, policy_version 19507 (0.0009) +[2024-12-28 14:32:48,944][100720] Fps is (10 sec: 25805.0, 60 sec: 25395.2, 300 sec: 25311.9). Total num frames: 79908864. Throughput: 0: 6569.7. Samples: 9965184. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:32:48,945][100720] Avg episode reward: [(0, '4.754')] +[2024-12-28 14:32:50,392][100934] Updated weights for policy 0, policy_version 19517 (0.0008) +[2024-12-28 14:32:52,232][100934] Updated weights for policy 0, policy_version 19527 (0.0009) +[2024-12-28 14:32:53,944][100720] Fps is (10 sec: 23346.7, 60 sec: 25326.9, 300 sec: 25311.9). Total num frames: 80019456. Throughput: 0: 6436.3. Samples: 9998790. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:32:53,945][100720] Avg episode reward: [(0, '4.357')] +[2024-12-28 14:32:54,053][100934] Updated weights for policy 0, policy_version 19537 (0.0007) +[2024-12-28 14:32:55,913][100934] Updated weights for policy 0, policy_version 19547 (0.0008) +[2024-12-28 14:32:57,707][100934] Updated weights for policy 0, policy_version 19557 (0.0008) +[2024-12-28 14:32:58,944][100720] Fps is (10 sec: 22118.3, 60 sec: 25326.9, 300 sec: 25311.9). Total num frames: 80130048. Throughput: 0: 6296.7. Samples: 10032424. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:32:58,945][100720] Avg episode reward: [(0, '4.672')] +[2024-12-28 14:32:59,478][100934] Updated weights for policy 0, policy_version 19567 (0.0007) +[2024-12-28 14:33:01,058][100934] Updated weights for policy 0, policy_version 19577 (0.0008) +[2024-12-28 14:33:02,632][100934] Updated weights for policy 0, policy_version 19587 (0.0006) +[2024-12-28 14:33:03,944][100720] Fps is (10 sec: 24166.9, 60 sec: 25463.5, 300 sec: 25325.8). Total num frames: 80261120. Throughput: 0: 6292.5. Samples: 10051864. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:33:03,945][100720] Avg episode reward: [(0, '4.405')] +[2024-12-28 14:33:04,165][100934] Updated weights for policy 0, policy_version 19597 (0.0007) +[2024-12-28 14:33:05,723][100934] Updated weights for policy 0, policy_version 19607 (0.0006) +[2024-12-28 14:33:07,265][100934] Updated weights for policy 0, policy_version 19617 (0.0006) +[2024-12-28 14:33:08,790][100934] Updated weights for policy 0, policy_version 19627 (0.0006) +[2024-12-28 14:33:08,944][100720] Fps is (10 sec: 26214.3, 60 sec: 25463.4, 300 sec: 25325.8). Total num frames: 80392192. Throughput: 0: 6304.0. Samples: 10091274. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 14:33:08,945][100720] Avg episode reward: [(0, '4.530')] +[2024-12-28 14:33:10,385][100934] Updated weights for policy 0, policy_version 19637 (0.0008) +[2024-12-28 14:33:11,920][100934] Updated weights for policy 0, policy_version 19647 (0.0007) +[2024-12-28 14:33:13,602][100934] Updated weights for policy 0, policy_version 19657 (0.0008) +[2024-12-28 14:33:13,944][100720] Fps is (10 sec: 25804.7, 60 sec: 25395.2, 300 sec: 25311.9). Total num frames: 80519168. Throughput: 0: 6274.0. Samples: 10129730. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 14:33:13,945][100720] Avg episode reward: [(0, '4.316')] +[2024-12-28 14:33:15,467][100934] Updated weights for policy 0, policy_version 19667 (0.0008) +[2024-12-28 14:33:17,317][100934] Updated weights for policy 0, policy_version 19677 (0.0009) +[2024-12-28 14:33:18,944][100720] Fps is (10 sec: 23756.9, 60 sec: 24985.6, 300 sec: 25270.2). Total num frames: 80629760. Throughput: 0: 6209.9. Samples: 10146490. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 14:33:18,945][100720] Avg episode reward: [(0, '4.480')] +[2024-12-28 14:33:19,182][100934] Updated weights for policy 0, policy_version 19687 (0.0008) +[2024-12-28 14:33:21,092][100934] Updated weights for policy 0, policy_version 19697 (0.0008) +[2024-12-28 14:33:22,990][100934] Updated weights for policy 0, policy_version 19707 (0.0008) +[2024-12-28 14:33:23,944][100720] Fps is (10 sec: 22118.5, 60 sec: 24712.6, 300 sec: 25256.4). Total num frames: 80740352. Throughput: 0: 6062.7. Samples: 10179054. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:33:23,945][100720] Avg episode reward: [(0, '4.248')] +[2024-12-28 14:33:24,670][100934] Updated weights for policy 0, policy_version 19717 (0.0007) +[2024-12-28 14:33:26,196][100934] Updated weights for policy 0, policy_version 19727 (0.0006) +[2024-12-28 14:33:27,744][100934] Updated weights for policy 0, policy_version 19737 (0.0007) +[2024-12-28 14:33:28,944][100720] Fps is (10 sec: 24166.5, 60 sec: 24712.5, 300 sec: 25367.4). Total num frames: 80871424. Throughput: 0: 6038.3. Samples: 10217914. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:33:28,945][100720] Avg episode reward: [(0, '4.558')] +[2024-12-28 14:33:29,282][100934] Updated weights for policy 0, policy_version 19747 (0.0007) +[2024-12-28 14:33:30,852][100934] Updated weights for policy 0, policy_version 19757 (0.0007) +[2024-12-28 14:33:32,407][100934] Updated weights for policy 0, policy_version 19767 (0.0007) +[2024-12-28 14:33:33,944][100720] Fps is (10 sec: 26214.4, 60 sec: 24644.3, 300 sec: 25367.4). Total num frames: 81002496. Throughput: 0: 6053.5. Samples: 10237590. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:33:33,945][100720] Avg episode reward: [(0, '4.397')] +[2024-12-28 14:33:33,951][100934] Updated weights for policy 0, policy_version 19777 (0.0007) +[2024-12-28 14:33:35,503][100934] Updated weights for policy 0, policy_version 19787 (0.0007) +[2024-12-28 14:33:37,057][100934] Updated weights for policy 0, policy_version 19797 (0.0006) +[2024-12-28 14:33:38,594][100934] Updated weights for policy 0, policy_version 19807 (0.0007) +[2024-12-28 14:33:38,944][100720] Fps is (10 sec: 26623.9, 60 sec: 24780.8, 300 sec: 25367.4). Total num frames: 81137664. Throughput: 0: 6186.4. Samples: 10277176. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:33:38,945][100720] Avg episode reward: [(0, '4.335')] +[2024-12-28 14:33:40,127][100934] Updated weights for policy 0, policy_version 19817 (0.0007) +[2024-12-28 14:33:41,671][100934] Updated weights for policy 0, policy_version 19827 (0.0006) +[2024-12-28 14:33:43,304][100934] Updated weights for policy 0, policy_version 19837 (0.0007) +[2024-12-28 14:33:43,944][100720] Fps is (10 sec: 26214.1, 60 sec: 24644.2, 300 sec: 25353.5). Total num frames: 81264640. Throughput: 0: 6298.6. Samples: 10315860. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:33:43,945][100720] Avg episode reward: [(0, '4.464')] +[2024-12-28 14:33:45,134][100934] Updated weights for policy 0, policy_version 19847 (0.0007) +[2024-12-28 14:33:46,952][100934] Updated weights for policy 0, policy_version 19857 (0.0009) +[2024-12-28 14:33:48,811][100934] Updated weights for policy 0, policy_version 19867 (0.0009) +[2024-12-28 14:33:48,944][100720] Fps is (10 sec: 23756.7, 60 sec: 24439.4, 300 sec: 25284.1). Total num frames: 81375232. Throughput: 0: 6238.2. Samples: 10332584. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:33:48,945][100720] Avg episode reward: [(0, '4.463')] +[2024-12-28 14:33:50,665][100934] Updated weights for policy 0, policy_version 19877 (0.0009) +[2024-12-28 14:33:52,539][100934] Updated weights for policy 0, policy_version 19887 (0.0008) +[2024-12-28 14:33:53,944][100720] Fps is (10 sec: 22527.8, 60 sec: 24507.7, 300 sec: 25284.1). Total num frames: 81489920. Throughput: 0: 6095.5. Samples: 10365572. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:33:53,945][100720] Avg episode reward: [(0, '4.608')] +[2024-12-28 14:33:53,950][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000019895_81489920.pth... 
+[2024-12-28 14:33:53,986][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000018415_75427840.pth +[2024-12-28 14:33:54,249][100934] Updated weights for policy 0, policy_version 19897 (0.0008) +[2024-12-28 14:33:55,809][100934] Updated weights for policy 0, policy_version 19907 (0.0008) +[2024-12-28 14:33:57,339][100934] Updated weights for policy 0, policy_version 19917 (0.0007) +[2024-12-28 14:33:58,868][100934] Updated weights for policy 0, policy_version 19927 (0.0007) +[2024-12-28 14:33:58,944][100720] Fps is (10 sec: 24576.2, 60 sec: 24849.1, 300 sec: 25353.6). Total num frames: 81620992. Throughput: 0: 6113.0. Samples: 10404816. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:33:58,945][100720] Avg episode reward: [(0, '4.323')] +[2024-12-28 14:34:00,393][100934] Updated weights for policy 0, policy_version 19937 (0.0006) +[2024-12-28 14:34:01,968][100934] Updated weights for policy 0, policy_version 19947 (0.0007) +[2024-12-28 14:34:03,507][100934] Updated weights for policy 0, policy_version 19957 (0.0007) +[2024-12-28 14:34:03,944][100720] Fps is (10 sec: 26214.9, 60 sec: 24849.1, 300 sec: 25339.7). Total num frames: 81752064. Throughput: 0: 6180.8. Samples: 10424624. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:34:03,945][100720] Avg episode reward: [(0, '4.346')] +[2024-12-28 14:34:05,047][100934] Updated weights for policy 0, policy_version 19967 (0.0007) +[2024-12-28 14:34:06,626][100934] Updated weights for policy 0, policy_version 19977 (0.0007) +[2024-12-28 14:34:08,182][100934] Updated weights for policy 0, policy_version 19987 (0.0007) +[2024-12-28 14:34:08,944][100720] Fps is (10 sec: 26214.2, 60 sec: 24849.1, 300 sec: 25395.2). Total num frames: 81883136. Throughput: 0: 6338.4. Samples: 10464280. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:34:08,945][100720] Avg episode reward: [(0, '4.411')] +[2024-12-28 14:34:09,743][100934] Updated weights for policy 0, policy_version 19997 (0.0007) +[2024-12-28 14:34:11,315][100934] Updated weights for policy 0, policy_version 20007 (0.0008) +[2024-12-28 14:34:12,861][100934] Updated weights for policy 0, policy_version 20017 (0.0006) +[2024-12-28 14:34:13,944][100720] Fps is (10 sec: 26623.8, 60 sec: 24985.6, 300 sec: 25478.5). Total num frames: 82018304. Throughput: 0: 6352.4. Samples: 10503774. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:34:13,945][100720] Avg episode reward: [(0, '4.389')] +[2024-12-28 14:34:14,456][100934] Updated weights for policy 0, policy_version 20027 (0.0007) +[2024-12-28 14:34:16,263][100934] Updated weights for policy 0, policy_version 20037 (0.0009) +[2024-12-28 14:34:18,113][100934] Updated weights for policy 0, policy_version 20047 (0.0008) +[2024-12-28 14:34:18,944][100720] Fps is (10 sec: 24576.0, 60 sec: 24985.6, 300 sec: 25409.1). Total num frames: 82128896. Throughput: 0: 6293.5. Samples: 10520796. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:34:18,945][100720] Avg episode reward: [(0, '4.309')] +[2024-12-28 14:34:19,959][100934] Updated weights for policy 0, policy_version 20057 (0.0007) +[2024-12-28 14:34:21,866][100934] Updated weights for policy 0, policy_version 20067 (0.0009) +[2024-12-28 14:34:23,731][100934] Updated weights for policy 0, policy_version 20077 (0.0009) +[2024-12-28 14:34:23,944][100720] Fps is (10 sec: 22118.5, 60 sec: 24985.6, 300 sec: 25339.7). Total num frames: 82239488. Throughput: 0: 6146.1. Samples: 10553750. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:34:23,945][100720] Avg episode reward: [(0, '4.330')] +[2024-12-28 14:34:25,475][100934] Updated weights for policy 0, policy_version 20087 (0.0007) +[2024-12-28 14:34:27,037][100934] Updated weights for policy 0, policy_version 20097 (0.0008) +[2024-12-28 14:34:28,546][100934] Updated weights for policy 0, policy_version 20107 (0.0007) +[2024-12-28 14:34:28,944][100720] Fps is (10 sec: 23756.8, 60 sec: 24917.3, 300 sec: 25325.8). Total num frames: 82366464. Throughput: 0: 6118.5. Samples: 10591194. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:34:28,945][100720] Avg episode reward: [(0, '4.409')] +[2024-12-28 14:34:30,158][100934] Updated weights for policy 0, policy_version 20117 (0.0007) +[2024-12-28 14:34:31,682][100934] Updated weights for policy 0, policy_version 20127 (0.0007) +[2024-12-28 14:34:33,223][100934] Updated weights for policy 0, policy_version 20137 (0.0007) +[2024-12-28 14:34:33,944][100720] Fps is (10 sec: 25804.9, 60 sec: 24917.3, 300 sec: 25311.9). Total num frames: 82497536. Throughput: 0: 6184.9. Samples: 10610902. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:34:33,945][100720] Avg episode reward: [(0, '4.547')] +[2024-12-28 14:34:34,756][100934] Updated weights for policy 0, policy_version 20147 (0.0008) +[2024-12-28 14:34:36,315][100934] Updated weights for policy 0, policy_version 20157 (0.0007) +[2024-12-28 14:34:37,891][100934] Updated weights for policy 0, policy_version 20167 (0.0007) +[2024-12-28 14:34:38,944][100720] Fps is (10 sec: 26214.5, 60 sec: 24849.1, 300 sec: 25298.0). Total num frames: 82628608. Throughput: 0: 6331.3. Samples: 10650480. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:34:38,945][100720] Avg episode reward: [(0, '4.271')] +[2024-12-28 14:34:39,430][100934] Updated weights for policy 0, policy_version 20177 (0.0006) +[2024-12-28 14:34:40,951][100934] Updated weights for policy 0, policy_version 20187 (0.0008) +[2024-12-28 14:34:42,487][100934] Updated weights for policy 0, policy_version 20197 (0.0007) +[2024-12-28 14:34:43,944][100720] Fps is (10 sec: 26624.0, 60 sec: 24985.6, 300 sec: 25311.9). Total num frames: 82763776. Throughput: 0: 6348.4. Samples: 10690496. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:34:43,945][100720] Avg episode reward: [(0, '4.140')] +[2024-12-28 14:34:44,050][100934] Updated weights for policy 0, policy_version 20207 (0.0008) +[2024-12-28 14:34:45,580][100934] Updated weights for policy 0, policy_version 20217 (0.0006) +[2024-12-28 14:34:47,180][100934] Updated weights for policy 0, policy_version 20227 (0.0007) +[2024-12-28 14:34:48,714][100934] Updated weights for policy 0, policy_version 20237 (0.0006) +[2024-12-28 14:34:48,944][100720] Fps is (10 sec: 26623.7, 60 sec: 25326.9, 300 sec: 25311.9). Total num frames: 82894848. Throughput: 0: 6349.7. Samples: 10710360. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:34:48,945][100720] Avg episode reward: [(0, '4.391')] +[2024-12-28 14:34:50,274][100934] Updated weights for policy 0, policy_version 20247 (0.0007) +[2024-12-28 14:34:51,847][100934] Updated weights for policy 0, policy_version 20257 (0.0006) +[2024-12-28 14:34:53,386][100934] Updated weights for policy 0, policy_version 20267 (0.0007) +[2024-12-28 14:34:53,944][100720] Fps is (10 sec: 26214.4, 60 sec: 25600.1, 300 sec: 25298.0). Total num frames: 83025920. Throughput: 0: 6340.1. Samples: 10749586. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:34:53,945][100720] Avg episode reward: [(0, '4.595')] +[2024-12-28 14:34:54,927][100934] Updated weights for policy 0, policy_version 20277 (0.0006) +[2024-12-28 14:34:56,448][100934] Updated weights for policy 0, policy_version 20287 (0.0007) +[2024-12-28 14:34:58,064][100934] Updated weights for policy 0, policy_version 20297 (0.0007) +[2024-12-28 14:34:58,944][100720] Fps is (10 sec: 26214.6, 60 sec: 25600.0, 300 sec: 25298.0). Total num frames: 83156992. Throughput: 0: 6345.5. Samples: 10789320. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:34:58,946][100720] Avg episode reward: [(0, '4.552')] +[2024-12-28 14:34:59,563][100934] Updated weights for policy 0, policy_version 20307 (0.0008) +[2024-12-28 14:35:01,152][100934] Updated weights for policy 0, policy_version 20317 (0.0008) +[2024-12-28 14:35:02,705][100934] Updated weights for policy 0, policy_version 20327 (0.0007) +[2024-12-28 14:35:03,944][100720] Fps is (10 sec: 26214.4, 60 sec: 25600.0, 300 sec: 25298.0). Total num frames: 83288064. Throughput: 0: 6399.8. Samples: 10808788. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:35:03,945][100720] Avg episode reward: [(0, '4.479')] +[2024-12-28 14:35:04,270][100934] Updated weights for policy 0, policy_version 20337 (0.0006) +[2024-12-28 14:35:05,800][100934] Updated weights for policy 0, policy_version 20347 (0.0007) +[2024-12-28 14:35:07,321][100934] Updated weights for policy 0, policy_version 20357 (0.0007) +[2024-12-28 14:35:08,847][100934] Updated weights for policy 0, policy_version 20367 (0.0006) +[2024-12-28 14:35:08,944][100720] Fps is (10 sec: 26623.7, 60 sec: 25668.2, 300 sec: 25311.9). Total num frames: 83423232. Throughput: 0: 6556.3. Samples: 10848786. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:35:08,945][100720] Avg episode reward: [(0, '4.501')] +[2024-12-28 14:35:10,355][100934] Updated weights for policy 0, policy_version 20377 (0.0006) +[2024-12-28 14:35:11,856][100934] Updated weights for policy 0, policy_version 20387 (0.0006) +[2024-12-28 14:35:13,437][100934] Updated weights for policy 0, policy_version 20397 (0.0008) +[2024-12-28 14:35:13,944][100720] Fps is (10 sec: 27033.6, 60 sec: 25668.3, 300 sec: 25325.8). Total num frames: 83558400. Throughput: 0: 6620.8. Samples: 10889130. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 14:35:13,945][100720] Avg episode reward: [(0, '4.332')] +[2024-12-28 14:35:14,955][100934] Updated weights for policy 0, policy_version 20407 (0.0006) +[2024-12-28 14:35:16,594][100934] Updated weights for policy 0, policy_version 20417 (0.0007) +[2024-12-28 14:35:18,394][100934] Updated weights for policy 0, policy_version 20427 (0.0007) +[2024-12-28 14:35:18,944][100720] Fps is (10 sec: 25805.1, 60 sec: 25873.1, 300 sec: 25367.4). Total num frames: 83681280. Throughput: 0: 6604.3. Samples: 10908096. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 14:35:18,945][100720] Avg episode reward: [(0, '4.193')] +[2024-12-28 14:35:20,217][100934] Updated weights for policy 0, policy_version 20437 (0.0008) +[2024-12-28 14:35:22,114][100934] Updated weights for policy 0, policy_version 20447 (0.0009) +[2024-12-28 14:35:23,937][100934] Updated weights for policy 0, policy_version 20457 (0.0009) +[2024-12-28 14:35:23,944][100720] Fps is (10 sec: 23347.3, 60 sec: 25873.1, 300 sec: 25367.4). Total num frames: 83791872. Throughput: 0: 6472.0. Samples: 10941722. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 14:35:23,945][100720] Avg episode reward: [(0, '4.450')] +[2024-12-28 14:35:25,730][100934] Updated weights for policy 0, policy_version 20467 (0.0007) +[2024-12-28 14:35:27,330][100934] Updated weights for policy 0, policy_version 20477 (0.0008) +[2024-12-28 14:35:28,861][100934] Updated weights for policy 0, policy_version 20487 (0.0007) +[2024-12-28 14:35:28,944][100720] Fps is (10 sec: 23347.1, 60 sec: 25804.8, 300 sec: 25325.8). Total num frames: 83914752. Throughput: 0: 6397.9. Samples: 10978402. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:35:28,945][100720] Avg episode reward: [(0, '4.279')] +[2024-12-28 14:35:30,376][100934] Updated weights for policy 0, policy_version 20497 (0.0007) +[2024-12-28 14:35:31,902][100934] Updated weights for policy 0, policy_version 20507 (0.0006) +[2024-12-28 14:35:33,387][100934] Updated weights for policy 0, policy_version 20517 (0.0007) +[2024-12-28 14:35:33,944][100720] Fps is (10 sec: 25804.8, 60 sec: 25873.1, 300 sec: 25325.8). Total num frames: 84049920. Throughput: 0: 6404.3. Samples: 10998552. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:35:33,946][100720] Avg episode reward: [(0, '4.474')] +[2024-12-28 14:35:34,919][100934] Updated weights for policy 0, policy_version 20527 (0.0007) +[2024-12-28 14:35:36,468][100934] Updated weights for policy 0, policy_version 20537 (0.0007) +[2024-12-28 14:35:38,017][100934] Updated weights for policy 0, policy_version 20547 (0.0007) +[2024-12-28 14:35:38,944][100720] Fps is (10 sec: 26624.1, 60 sec: 25873.1, 300 sec: 25311.9). Total num frames: 84180992. Throughput: 0: 6428.5. Samples: 11038868. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:35:38,945][100720] Avg episode reward: [(0, '4.562')] +[2024-12-28 14:35:39,722][100934] Updated weights for policy 0, policy_version 20557 (0.0008) +[2024-12-28 14:35:41,500][100934] Updated weights for policy 0, policy_version 20567 (0.0008) +[2024-12-28 14:35:43,311][100934] Updated weights for policy 0, policy_version 20577 (0.0009) +[2024-12-28 14:35:43,944][100720] Fps is (10 sec: 24575.8, 60 sec: 25531.7, 300 sec: 25256.3). Total num frames: 84295680. Throughput: 0: 6317.9. Samples: 11073628. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:35:43,945][100720] Avg episode reward: [(0, '4.488')] +[2024-12-28 14:35:45,126][100934] Updated weights for policy 0, policy_version 20587 (0.0008) +[2024-12-28 14:35:46,916][100934] Updated weights for policy 0, policy_version 20597 (0.0008) +[2024-12-28 14:35:48,747][100934] Updated weights for policy 0, policy_version 20607 (0.0008) +[2024-12-28 14:35:48,944][100720] Fps is (10 sec: 22937.5, 60 sec: 25258.7, 300 sec: 25186.9). Total num frames: 84410368. Throughput: 0: 6264.0. Samples: 11090666. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:35:48,945][100720] Avg episode reward: [(0, '4.357')] +[2024-12-28 14:35:50,407][100934] Updated weights for policy 0, policy_version 20617 (0.0008) +[2024-12-28 14:35:51,911][100934] Updated weights for policy 0, policy_version 20627 (0.0007) +[2024-12-28 14:35:53,454][100934] Updated weights for policy 0, policy_version 20637 (0.0006) +[2024-12-28 14:35:53,944][100720] Fps is (10 sec: 24575.9, 60 sec: 25258.6, 300 sec: 25242.5). Total num frames: 84541440. Throughput: 0: 6205.4. Samples: 11128028. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:35:53,945][100720] Avg episode reward: [(0, '4.311')] +[2024-12-28 14:35:53,949][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000020640_84541440.pth... +[2024-12-28 14:35:53,984][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000019165_78499840.pth +[2024-12-28 14:35:55,001][100934] Updated weights for policy 0, policy_version 20647 (0.0006) +[2024-12-28 14:35:56,488][100934] Updated weights for policy 0, policy_version 20657 (0.0006) +[2024-12-28 14:35:57,990][100934] Updated weights for policy 0, policy_version 20667 (0.0007) +[2024-12-28 14:35:58,944][100720] Fps is (10 sec: 26214.3, 60 sec: 25258.6, 300 sec: 25311.9). Total num frames: 84672512. Throughput: 0: 6206.3. Samples: 11168414. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:35:58,945][100720] Avg episode reward: [(0, '4.259')] +[2024-12-28 14:35:59,605][100934] Updated weights for policy 0, policy_version 20677 (0.0008) +[2024-12-28 14:36:01,139][100934] Updated weights for policy 0, policy_version 20687 (0.0006) +[2024-12-28 14:36:02,646][100934] Updated weights for policy 0, policy_version 20697 (0.0007) +[2024-12-28 14:36:03,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25326.9, 300 sec: 25339.7). Total num frames: 84807680. Throughput: 0: 6222.3. Samples: 11188100. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:36:03,945][100720] Avg episode reward: [(0, '4.424')] +[2024-12-28 14:36:04,177][100934] Updated weights for policy 0, policy_version 20707 (0.0007) +[2024-12-28 14:36:05,693][100934] Updated weights for policy 0, policy_version 20717 (0.0007) +[2024-12-28 14:36:07,214][100934] Updated weights for policy 0, policy_version 20727 (0.0007) +[2024-12-28 14:36:08,718][100934] Updated weights for policy 0, policy_version 20737 (0.0006) +[2024-12-28 14:36:08,944][100720] Fps is (10 sec: 27033.8, 60 sec: 25327.0, 300 sec: 25339.7). Total num frames: 84942848. Throughput: 0: 6374.7. Samples: 11228582. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:36:08,945][100720] Avg episode reward: [(0, '4.736')] +[2024-12-28 14:36:10,269][100934] Updated weights for policy 0, policy_version 20747 (0.0007) +[2024-12-28 14:36:11,803][100934] Updated weights for policy 0, policy_version 20757 (0.0008) +[2024-12-28 14:36:13,336][100934] Updated weights for policy 0, policy_version 20767 (0.0007) +[2024-12-28 14:36:13,944][100720] Fps is (10 sec: 26624.3, 60 sec: 25258.7, 300 sec: 25339.7). Total num frames: 85073920. Throughput: 0: 6446.6. Samples: 11268500. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:36:13,945][100720] Avg episode reward: [(0, '4.464')] +[2024-12-28 14:36:14,897][100934] Updated weights for policy 0, policy_version 20777 (0.0007) +[2024-12-28 14:36:16,435][100934] Updated weights for policy 0, policy_version 20787 (0.0007) +[2024-12-28 14:36:17,939][100934] Updated weights for policy 0, policy_version 20797 (0.0006) +[2024-12-28 14:36:18,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25463.5, 300 sec: 25353.5). Total num frames: 85209088. Throughput: 0: 6440.8. Samples: 11288388. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:36:18,945][100720] Avg episode reward: [(0, '4.376')] +[2024-12-28 14:36:19,499][100934] Updated weights for policy 0, policy_version 20807 (0.0006) +[2024-12-28 14:36:21,042][100934] Updated weights for policy 0, policy_version 20817 (0.0007) +[2024-12-28 14:36:22,615][100934] Updated weights for policy 0, policy_version 20827 (0.0007) +[2024-12-28 14:36:23,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25804.8, 300 sec: 25353.5). Total num frames: 85340160. Throughput: 0: 6428.8. Samples: 11328162. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:36:23,945][100720] Avg episode reward: [(0, '4.421')] +[2024-12-28 14:36:24,144][100934] Updated weights for policy 0, policy_version 20837 (0.0008) +[2024-12-28 14:36:25,655][100934] Updated weights for policy 0, policy_version 20847 (0.0007) +[2024-12-28 14:36:27,203][100934] Updated weights for policy 0, policy_version 20857 (0.0007) +[2024-12-28 14:36:28,736][100934] Updated weights for policy 0, policy_version 20867 (0.0007) +[2024-12-28 14:36:28,944][100720] Fps is (10 sec: 26624.0, 60 sec: 26009.6, 300 sec: 25367.4). Total num frames: 85475328. Throughput: 0: 6546.9. Samples: 11368240. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:36:28,945][100720] Avg episode reward: [(0, '4.392')] +[2024-12-28 14:36:30,328][100934] Updated weights for policy 0, policy_version 20877 (0.0007) +[2024-12-28 14:36:31,866][100934] Updated weights for policy 0, policy_version 20887 (0.0007) +[2024-12-28 14:36:33,406][100934] Updated weights for policy 0, policy_version 20897 (0.0007) +[2024-12-28 14:36:33,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25941.3, 300 sec: 25367.4). Total num frames: 85606400. Throughput: 0: 6606.8. Samples: 11387972. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:36:33,945][100720] Avg episode reward: [(0, '4.324')] +[2024-12-28 14:36:35,063][100934] Updated weights for policy 0, policy_version 20907 (0.0008) +[2024-12-28 14:36:36,887][100934] Updated weights for policy 0, policy_version 20917 (0.0007) +[2024-12-28 14:36:38,683][100934] Updated weights for policy 0, policy_version 20927 (0.0009) +[2024-12-28 14:36:38,944][100720] Fps is (10 sec: 24575.5, 60 sec: 25668.2, 300 sec: 25311.9). Total num frames: 85721088. Throughput: 0: 6585.3. Samples: 11424368. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:36:38,945][100720] Avg episode reward: [(0, '4.445')] +[2024-12-28 14:36:40,483][100934] Updated weights for policy 0, policy_version 20937 (0.0008) +[2024-12-28 14:36:42,258][100934] Updated weights for policy 0, policy_version 20947 (0.0008) +[2024-12-28 14:36:43,944][100720] Fps is (10 sec: 22937.6, 60 sec: 25668.3, 300 sec: 25256.4). Total num frames: 85835776. Throughput: 0: 6437.9. Samples: 11458118. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:36:43,945][100720] Avg episode reward: [(0, '4.455')] +[2024-12-28 14:36:44,100][100934] Updated weights for policy 0, policy_version 20957 (0.0007) +[2024-12-28 14:36:45,804][100934] Updated weights for policy 0, policy_version 20967 (0.0007) +[2024-12-28 14:36:47,367][100934] Updated weights for policy 0, policy_version 20977 (0.0007) +[2024-12-28 14:36:48,899][100934] Updated weights for policy 0, policy_version 20987 (0.0007) +[2024-12-28 14:36:48,944][100720] Fps is (10 sec: 24166.9, 60 sec: 25873.1, 300 sec: 25298.0). Total num frames: 85962752. Throughput: 0: 6413.5. Samples: 11476706. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:36:48,945][100720] Avg episode reward: [(0, '4.354')] +[2024-12-28 14:36:50,465][100934] Updated weights for policy 0, policy_version 20997 (0.0007) +[2024-12-28 14:36:51,995][100934] Updated weights for policy 0, policy_version 21007 (0.0006) +[2024-12-28 14:36:53,503][100934] Updated weights for policy 0, policy_version 21017 (0.0007) +[2024-12-28 14:36:53,944][100720] Fps is (10 sec: 25804.6, 60 sec: 25873.1, 300 sec: 25367.4). Total num frames: 86093824. Throughput: 0: 6404.1. Samples: 11516768. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:36:53,945][100720] Avg episode reward: [(0, '4.409')] +[2024-12-28 14:36:55,020][100934] Updated weights for policy 0, policy_version 21027 (0.0006) +[2024-12-28 14:36:56,549][100934] Updated weights for policy 0, policy_version 21037 (0.0008) +[2024-12-28 14:36:58,034][100934] Updated weights for policy 0, policy_version 21047 (0.0007) +[2024-12-28 14:36:58,944][100720] Fps is (10 sec: 27033.5, 60 sec: 26009.6, 300 sec: 25423.0). Total num frames: 86233088. Throughput: 0: 6421.2. Samples: 11557454. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:36:58,945][100720] Avg episode reward: [(0, '4.335')] +[2024-12-28 14:36:59,536][100934] Updated weights for policy 0, policy_version 21057 (0.0006) +[2024-12-28 14:37:01,125][100934] Updated weights for policy 0, policy_version 21067 (0.0007) +[2024-12-28 14:37:02,661][100934] Updated weights for policy 0, policy_version 21077 (0.0007) +[2024-12-28 14:37:03,944][100720] Fps is (10 sec: 27033.8, 60 sec: 25941.4, 300 sec: 25423.0). Total num frames: 86364160. Throughput: 0: 6417.2. Samples: 11577164. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:37:03,945][100720] Avg episode reward: [(0, '4.386')] +[2024-12-28 14:37:04,207][100934] Updated weights for policy 0, policy_version 21087 (0.0007) +[2024-12-28 14:37:05,721][100934] Updated weights for policy 0, policy_version 21097 (0.0007) +[2024-12-28 14:37:07,306][100934] Updated weights for policy 0, policy_version 21107 (0.0006) +[2024-12-28 14:37:08,836][100934] Updated weights for policy 0, policy_version 21117 (0.0007) +[2024-12-28 14:37:08,944][100720] Fps is (10 sec: 26214.5, 60 sec: 25873.1, 300 sec: 25423.0). Total num frames: 86495232. Throughput: 0: 6420.3. Samples: 11617074. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:37:08,945][100720] Avg episode reward: [(0, '4.402')] +[2024-12-28 14:37:10,371][100934] Updated weights for policy 0, policy_version 21127 (0.0007) +[2024-12-28 14:37:11,962][100934] Updated weights for policy 0, policy_version 21137 (0.0007) +[2024-12-28 14:37:13,499][100934] Updated weights for policy 0, policy_version 21147 (0.0006) +[2024-12-28 14:37:13,944][100720] Fps is (10 sec: 26214.4, 60 sec: 25873.1, 300 sec: 25409.1). Total num frames: 86626304. Throughput: 0: 6407.6. Samples: 11656584. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:37:13,945][100720] Avg episode reward: [(0, '4.478')] +[2024-12-28 14:37:15,034][100934] Updated weights for policy 0, policy_version 21157 (0.0006) +[2024-12-28 14:37:16,584][100934] Updated weights for policy 0, policy_version 21167 (0.0007) +[2024-12-28 14:37:18,108][100934] Updated weights for policy 0, policy_version 21177 (0.0006) +[2024-12-28 14:37:18,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25873.1, 300 sec: 25436.9). Total num frames: 86761472. Throughput: 0: 6414.8. Samples: 11676640. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:37:18,945][100720] Avg episode reward: [(0, '4.240')] +[2024-12-28 14:37:19,660][100934] Updated weights for policy 0, policy_version 21187 (0.0007) +[2024-12-28 14:37:21,204][100934] Updated weights for policy 0, policy_version 21197 (0.0007) +[2024-12-28 14:37:22,904][100934] Updated weights for policy 0, policy_version 21207 (0.0008) +[2024-12-28 14:37:23,944][100720] Fps is (10 sec: 25804.5, 60 sec: 25736.5, 300 sec: 25409.1). Total num frames: 86884352. Throughput: 0: 6466.1. Samples: 11715344. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:37:23,945][100720] Avg episode reward: [(0, '4.527')] +[2024-12-28 14:37:24,665][100934] Updated weights for policy 0, policy_version 21217 (0.0007) +[2024-12-28 14:37:26,441][100934] Updated weights for policy 0, policy_version 21227 (0.0008) +[2024-12-28 14:37:28,221][100934] Updated weights for policy 0, policy_version 21237 (0.0007) +[2024-12-28 14:37:28,944][100720] Fps is (10 sec: 24166.3, 60 sec: 25463.5, 300 sec: 25353.5). Total num frames: 87003136. Throughput: 0: 6485.5. Samples: 11749966. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:37:28,945][100720] Avg episode reward: [(0, '4.447')] +[2024-12-28 14:37:30,049][100934] Updated weights for policy 0, policy_version 21247 (0.0009) +[2024-12-28 14:37:31,884][100934] Updated weights for policy 0, policy_version 21257 (0.0007) +[2024-12-28 14:37:33,559][100934] Updated weights for policy 0, policy_version 21267 (0.0007) +[2024-12-28 14:37:33,944][100720] Fps is (10 sec: 23347.4, 60 sec: 25190.4, 300 sec: 25311.9). Total num frames: 87117824. Throughput: 0: 6441.0. Samples: 11766552. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:37:33,945][100720] Avg episode reward: [(0, '4.432')] +[2024-12-28 14:37:35,128][100934] Updated weights for policy 0, policy_version 21277 (0.0007) +[2024-12-28 14:37:36,639][100934] Updated weights for policy 0, policy_version 21287 (0.0006) +[2024-12-28 14:37:38,339][100934] Updated weights for policy 0, policy_version 21297 (0.0007) +[2024-12-28 14:37:38,944][100720] Fps is (10 sec: 24166.4, 60 sec: 25395.3, 300 sec: 25284.1). Total num frames: 87244800. Throughput: 0: 6406.9. Samples: 11805080. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:37:38,945][100720] Avg episode reward: [(0, '4.616')] +[2024-12-28 14:37:40,148][100934] Updated weights for policy 0, policy_version 21307 (0.0008) +[2024-12-28 14:37:41,981][100934] Updated weights for policy 0, policy_version 21317 (0.0009) +[2024-12-28 14:37:43,816][100934] Updated weights for policy 0, policy_version 21327 (0.0007) +[2024-12-28 14:37:43,944][100720] Fps is (10 sec: 23756.9, 60 sec: 25326.9, 300 sec: 25242.5). Total num frames: 87355392. Throughput: 0: 6251.0. Samples: 11838750. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:37:43,945][100720] Avg episode reward: [(0, '4.196')] +[2024-12-28 14:37:45,601][100934] Updated weights for policy 0, policy_version 21337 (0.0008) +[2024-12-28 14:37:47,429][100934] Updated weights for policy 0, policy_version 21347 (0.0008) +[2024-12-28 14:37:48,944][100720] Fps is (10 sec: 22937.6, 60 sec: 25190.4, 300 sec: 25270.3). Total num frames: 87474176. Throughput: 0: 6194.8. Samples: 11855928. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:37:48,945][100720] Avg episode reward: [(0, '4.411')] +[2024-12-28 14:37:49,080][100934] Updated weights for policy 0, policy_version 21357 (0.0007) +[2024-12-28 14:37:50,625][100934] Updated weights for policy 0, policy_version 21367 (0.0007) +[2024-12-28 14:37:52,162][100934] Updated weights for policy 0, policy_version 21377 (0.0007) +[2024-12-28 14:37:53,662][100934] Updated weights for policy 0, policy_version 21387 (0.0007) +[2024-12-28 14:37:53,944][100720] Fps is (10 sec: 24985.5, 60 sec: 25190.4, 300 sec: 25339.7). Total num frames: 87605248. Throughput: 0: 6163.7. Samples: 11894442. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:37:53,945][100720] Avg episode reward: [(0, '4.386')] +[2024-12-28 14:37:53,971][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000021389_87609344.pth... +[2024-12-28 14:37:54,001][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000019895_81489920.pth +[2024-12-28 14:37:55,214][100934] Updated weights for policy 0, policy_version 21397 (0.0006) +[2024-12-28 14:37:56,768][100934] Updated weights for policy 0, policy_version 21407 (0.0007) +[2024-12-28 14:37:58,312][100934] Updated weights for policy 0, policy_version 21417 (0.0007) +[2024-12-28 14:37:58,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25122.1, 300 sec: 25353.5). Total num frames: 87740416. Throughput: 0: 6173.0. Samples: 11934368. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:37:58,945][100720] Avg episode reward: [(0, '4.586')] +[2024-12-28 14:37:59,828][100934] Updated weights for policy 0, policy_version 21427 (0.0007) +[2024-12-28 14:38:01,386][100934] Updated weights for policy 0, policy_version 21437 (0.0007) +[2024-12-28 14:38:02,965][100934] Updated weights for policy 0, policy_version 21447 (0.0006) +[2024-12-28 14:38:03,944][100720] Fps is (10 sec: 26624.1, 60 sec: 25122.1, 300 sec: 25353.6). Total num frames: 87871488. Throughput: 0: 6164.6. Samples: 11954046. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:38:03,945][100720] Avg episode reward: [(0, '4.360')] +[2024-12-28 14:38:04,502][100934] Updated weights for policy 0, policy_version 21457 (0.0007) +[2024-12-28 14:38:06,040][100934] Updated weights for policy 0, policy_version 21467 (0.0006) +[2024-12-28 14:38:07,564][100934] Updated weights for policy 0, policy_version 21477 (0.0007) +[2024-12-28 14:38:08,944][100720] Fps is (10 sec: 26214.5, 60 sec: 25122.1, 300 sec: 25367.4). Total num frames: 88002560. Throughput: 0: 6193.9. Samples: 11994070. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:38:08,945][100720] Avg episode reward: [(0, '4.321')] +[2024-12-28 14:38:09,123][100934] Updated weights for policy 0, policy_version 21487 (0.0007) +[2024-12-28 14:38:10,636][100934] Updated weights for policy 0, policy_version 21497 (0.0006) +[2024-12-28 14:38:12,197][100934] Updated weights for policy 0, policy_version 21507 (0.0007) +[2024-12-28 14:38:13,764][100934] Updated weights for policy 0, policy_version 21517 (0.0008) +[2024-12-28 14:38:13,944][100720] Fps is (10 sec: 26623.8, 60 sec: 25190.4, 300 sec: 25450.7). Total num frames: 88137728. Throughput: 0: 6304.7. Samples: 12033678. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:38:13,945][100720] Avg episode reward: [(0, '4.563')] +[2024-12-28 14:38:15,311][100934] Updated weights for policy 0, policy_version 21527 (0.0007) +[2024-12-28 14:38:16,854][100934] Updated weights for policy 0, policy_version 21537 (0.0007) +[2024-12-28 14:38:18,395][100934] Updated weights for policy 0, policy_version 21547 (0.0007) +[2024-12-28 14:38:18,944][100720] Fps is (10 sec: 26623.9, 60 sec: 25122.1, 300 sec: 25520.2). Total num frames: 88268800. Throughput: 0: 6380.0. Samples: 12053650. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:38:18,945][100720] Avg episode reward: [(0, '4.423')] +[2024-12-28 14:38:19,962][100934] Updated weights for policy 0, policy_version 21557 (0.0007) +[2024-12-28 14:38:21,520][100934] Updated weights for policy 0, policy_version 21567 (0.0007) +[2024-12-28 14:38:23,104][100934] Updated weights for policy 0, policy_version 21577 (0.0007) +[2024-12-28 14:38:23,944][100720] Fps is (10 sec: 26214.5, 60 sec: 25258.7, 300 sec: 25520.2). Total num frames: 88399872. Throughput: 0: 6398.7. Samples: 12093020. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:38:23,945][100720] Avg episode reward: [(0, '4.475')] +[2024-12-28 14:38:24,675][100934] Updated weights for policy 0, policy_version 21587 (0.0007) +[2024-12-28 14:38:26,226][100934] Updated weights for policy 0, policy_version 21597 (0.0007) +[2024-12-28 14:38:27,811][100934] Updated weights for policy 0, policy_version 21607 (0.0007) +[2024-12-28 14:38:28,944][100720] Fps is (10 sec: 26214.4, 60 sec: 25463.5, 300 sec: 25520.2). Total num frames: 88530944. Throughput: 0: 6522.5. Samples: 12132262. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:38:28,945][100720] Avg episode reward: [(0, '4.564')] +[2024-12-28 14:38:29,364][100934] Updated weights for policy 0, policy_version 21617 (0.0006) +[2024-12-28 14:38:30,866][100934] Updated weights for policy 0, policy_version 21627 (0.0007) +[2024-12-28 14:38:32,391][100934] Updated weights for policy 0, policy_version 21637 (0.0007) +[2024-12-28 14:38:33,936][100934] Updated weights for policy 0, policy_version 21647 (0.0006) +[2024-12-28 14:38:33,944][100720] Fps is (10 sec: 26624.1, 60 sec: 25804.8, 300 sec: 25520.2). Total num frames: 88666112. Throughput: 0: 6589.5. Samples: 12152454. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:38:33,945][100720] Avg episode reward: [(0, '4.423')] +[2024-12-28 14:38:35,469][100934] Updated weights for policy 0, policy_version 21657 (0.0008) +[2024-12-28 14:38:37,002][100934] Updated weights for policy 0, policy_version 21667 (0.0006) +[2024-12-28 14:38:38,558][100934] Updated weights for policy 0, policy_version 21677 (0.0006) +[2024-12-28 14:38:38,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25873.1, 300 sec: 25534.1). Total num frames: 88797184. Throughput: 0: 6623.0. Samples: 12192476. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:38:38,945][100720] Avg episode reward: [(0, '4.389')] +[2024-12-28 14:38:40,098][100934] Updated weights for policy 0, policy_version 21687 (0.0006) +[2024-12-28 14:38:41,746][100934] Updated weights for policy 0, policy_version 21697 (0.0007) +[2024-12-28 14:38:43,601][100934] Updated weights for policy 0, policy_version 21707 (0.0008) +[2024-12-28 14:38:43,944][100720] Fps is (10 sec: 24985.5, 60 sec: 26009.6, 300 sec: 25561.8). Total num frames: 88915968. Throughput: 0: 6554.6. Samples: 12229324. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:38:43,945][100720] Avg episode reward: [(0, '4.420')] +[2024-12-28 14:38:45,381][100934] Updated weights for policy 0, policy_version 21717 (0.0008) +[2024-12-28 14:38:47,200][100934] Updated weights for policy 0, policy_version 21727 (0.0008) +[2024-12-28 14:38:48,944][100720] Fps is (10 sec: 23347.2, 60 sec: 25941.3, 300 sec: 25561.8). Total num frames: 89030656. Throughput: 0: 6493.2. Samples: 12246242. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:38:48,945][100720] Avg episode reward: [(0, '4.391')] +[2024-12-28 14:38:49,015][100934] Updated weights for policy 0, policy_version 21737 (0.0008) +[2024-12-28 14:38:50,929][100934] Updated weights for policy 0, policy_version 21747 (0.0008) +[2024-12-28 14:38:52,568][100934] Updated weights for policy 0, policy_version 21757 (0.0006) +[2024-12-28 14:38:53,944][100720] Fps is (10 sec: 23347.3, 60 sec: 25736.6, 300 sec: 25520.2). Total num frames: 89149440. Throughput: 0: 6364.9. Samples: 12280492. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:38:53,945][100720] Avg episode reward: [(0, '4.489')] +[2024-12-28 14:38:54,131][100934] Updated weights for policy 0, policy_version 21767 (0.0006) +[2024-12-28 14:38:55,666][100934] Updated weights for policy 0, policy_version 21777 (0.0007) +[2024-12-28 14:38:57,155][100934] Updated weights for policy 0, policy_version 21787 (0.0007) +[2024-12-28 14:38:58,670][100934] Updated weights for policy 0, policy_version 21797 (0.0006) +[2024-12-28 14:38:58,944][100720] Fps is (10 sec: 25395.1, 60 sec: 25736.5, 300 sec: 25534.0). Total num frames: 89284608. Throughput: 0: 6386.8. Samples: 12321084. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:38:58,945][100720] Avg episode reward: [(0, '4.305')] +[2024-12-28 14:39:00,295][100934] Updated weights for policy 0, policy_version 21807 (0.0007) +[2024-12-28 14:39:02,079][100934] Updated weights for policy 0, policy_version 21817 (0.0007) +[2024-12-28 14:39:03,944][100720] Fps is (10 sec: 24985.6, 60 sec: 25463.5, 300 sec: 25478.5). Total num frames: 89399296. Throughput: 0: 6339.2. Samples: 12338916. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:39:03,945][100720] Avg episode reward: [(0, '4.376')] +[2024-12-28 14:39:03,969][100934] Updated weights for policy 0, policy_version 21827 (0.0008) +[2024-12-28 14:39:05,748][100934] Updated weights for policy 0, policy_version 21837 (0.0007) +[2024-12-28 14:39:07,563][100934] Updated weights for policy 0, policy_version 21847 (0.0007) +[2024-12-28 14:39:08,944][100720] Fps is (10 sec: 22937.6, 60 sec: 25190.4, 300 sec: 25409.1). Total num frames: 89513984. Throughput: 0: 6214.3. Samples: 12372664. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:39:08,945][100720] Avg episode reward: [(0, '4.616')] +[2024-12-28 14:39:09,416][100934] Updated weights for policy 0, policy_version 21857 (0.0007) +[2024-12-28 14:39:11,109][100934] Updated weights for policy 0, policy_version 21867 (0.0008) +[2024-12-28 14:39:12,647][100934] Updated weights for policy 0, policy_version 21877 (0.0006) +[2024-12-28 14:39:13,944][100720] Fps is (10 sec: 24166.4, 60 sec: 25053.9, 300 sec: 25464.6). Total num frames: 89640960. Throughput: 0: 6168.1. Samples: 12409826. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:39:13,945][100720] Avg episode reward: [(0, '4.330')] +[2024-12-28 14:39:14,197][100934] Updated weights for policy 0, policy_version 21887 (0.0006) +[2024-12-28 14:39:15,675][100934] Updated weights for policy 0, policy_version 21897 (0.0006) +[2024-12-28 14:39:17,175][100934] Updated weights for policy 0, policy_version 21907 (0.0006) +[2024-12-28 14:39:18,728][100934] Updated weights for policy 0, policy_version 21917 (0.0007) +[2024-12-28 14:39:18,944][100720] Fps is (10 sec: 26214.4, 60 sec: 25122.1, 300 sec: 25547.9). Total num frames: 89776128. Throughput: 0: 6174.9. Samples: 12430326. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:39:18,945][100720] Avg episode reward: [(0, '4.335')] +[2024-12-28 14:39:20,274][100934] Updated weights for policy 0, policy_version 21927 (0.0006) +[2024-12-28 14:39:21,781][100934] Updated weights for policy 0, policy_version 21937 (0.0007) +[2024-12-28 14:39:23,290][100934] Updated weights for policy 0, policy_version 21947 (0.0007) +[2024-12-28 14:39:23,944][100720] Fps is (10 sec: 27033.5, 60 sec: 25190.4, 300 sec: 25575.7). Total num frames: 89911296. Throughput: 0: 6178.6. Samples: 12470514. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:39:23,945][100720] Avg episode reward: [(0, '4.355')] +[2024-12-28 14:39:24,853][100934] Updated weights for policy 0, policy_version 21957 (0.0007) +[2024-12-28 14:39:26,360][100934] Updated weights for policy 0, policy_version 21967 (0.0006) +[2024-12-28 14:39:27,871][100934] Updated weights for policy 0, policy_version 21977 (0.0007) +[2024-12-28 14:39:28,944][100720] Fps is (10 sec: 26624.2, 60 sec: 25190.4, 300 sec: 25575.7). Total num frames: 90042368. Throughput: 0: 6256.1. Samples: 12510850. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:39:28,945][100720] Avg episode reward: [(0, '4.194')] +[2024-12-28 14:39:29,407][100934] Updated weights for policy 0, policy_version 21987 (0.0006) +[2024-12-28 14:39:30,963][100934] Updated weights for policy 0, policy_version 21997 (0.0007) +[2024-12-28 14:39:32,470][100934] Updated weights for policy 0, policy_version 22007 (0.0006) +[2024-12-28 14:39:33,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25190.4, 300 sec: 25589.6). Total num frames: 90177536. Throughput: 0: 6321.2. Samples: 12530698. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:39:33,945][100720] Avg episode reward: [(0, '4.317')] +[2024-12-28 14:39:34,035][100934] Updated weights for policy 0, policy_version 22017 (0.0008) +[2024-12-28 14:39:35,684][100934] Updated weights for policy 0, policy_version 22027 (0.0006) +[2024-12-28 14:39:37,461][100934] Updated weights for policy 0, policy_version 22037 (0.0007) +[2024-12-28 14:39:38,944][100720] Fps is (10 sec: 25395.1, 60 sec: 24985.6, 300 sec: 25534.0). Total num frames: 90296320. Throughput: 0: 6390.8. Samples: 12568080. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:39:38,945][100720] Avg episode reward: [(0, '4.252')] +[2024-12-28 14:39:39,216][100934] Updated weights for policy 0, policy_version 22047 (0.0007) +[2024-12-28 14:39:41,080][100934] Updated weights for policy 0, policy_version 22057 (0.0008) +[2024-12-28 14:39:42,943][100934] Updated weights for policy 0, policy_version 22067 (0.0008) +[2024-12-28 14:39:43,944][100720] Fps is (10 sec: 22937.5, 60 sec: 24849.0, 300 sec: 25464.6). Total num frames: 90406912. Throughput: 0: 6231.7. Samples: 12601510. 
Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:39:43,945][100720] Avg episode reward: [(0, '4.385')] +[2024-12-28 14:39:44,768][100934] Updated weights for policy 0, policy_version 22077 (0.0007) +[2024-12-28 14:39:46,462][100934] Updated weights for policy 0, policy_version 22087 (0.0007) +[2024-12-28 14:39:47,978][100934] Updated weights for policy 0, policy_version 22097 (0.0007) +[2024-12-28 14:39:48,944][100720] Fps is (10 sec: 23756.9, 60 sec: 25053.9, 300 sec: 25450.7). Total num frames: 90533888. Throughput: 0: 6236.2. Samples: 12619544. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:39:48,945][100720] Avg episode reward: [(0, '4.283')] +[2024-12-28 14:39:49,507][100934] Updated weights for policy 0, policy_version 22107 (0.0006) +[2024-12-28 14:39:51,058][100934] Updated weights for policy 0, policy_version 22117 (0.0007) +[2024-12-28 14:39:52,609][100934] Updated weights for policy 0, policy_version 22127 (0.0007) +[2024-12-28 14:39:53,944][100720] Fps is (10 sec: 25804.8, 60 sec: 25258.6, 300 sec: 25450.7). Total num frames: 90664960. Throughput: 0: 6376.2. Samples: 12659594. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:39:53,945][100720] Avg episode reward: [(0, '4.544')] +[2024-12-28 14:39:53,950][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000022135_90664960.pth... +[2024-12-28 14:39:53,987][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000020640_84541440.pth +[2024-12-28 14:39:54,175][100934] Updated weights for policy 0, policy_version 22137 (0.0007) +[2024-12-28 14:39:55,708][100934] Updated weights for policy 0, policy_version 22147 (0.0007) +[2024-12-28 14:39:57,470][100934] Updated weights for policy 0, policy_version 22157 (0.0007) +[2024-12-28 14:39:58,944][100720] Fps is (10 sec: 25395.2, 60 sec: 25053.9, 300 sec: 25423.0). Total num frames: 90787840. Throughput: 0: 6368.9. Samples: 12696428. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:39:58,945][100720] Avg episode reward: [(0, '4.363')] +[2024-12-28 14:39:59,261][100934] Updated weights for policy 0, policy_version 22167 (0.0007) +[2024-12-28 14:40:01,139][100934] Updated weights for policy 0, policy_version 22177 (0.0010) +[2024-12-28 14:40:03,078][100934] Updated weights for policy 0, policy_version 22187 (0.0007) +[2024-12-28 14:40:03,944][100720] Fps is (10 sec: 22937.8, 60 sec: 24917.3, 300 sec: 25325.8). Total num frames: 90894336. Throughput: 0: 6280.9. Samples: 12712968. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:40:03,945][100720] Avg episode reward: [(0, '4.537')] +[2024-12-28 14:40:04,926][100934] Updated weights for policy 0, policy_version 22197 (0.0008) +[2024-12-28 14:40:06,766][100934] Updated weights for policy 0, policy_version 22207 (0.0008) +[2024-12-28 14:40:08,327][100934] Updated weights for policy 0, policy_version 22217 (0.0007) +[2024-12-28 14:40:08,944][100720] Fps is (10 sec: 22937.6, 60 sec: 25053.9, 300 sec: 25284.1). Total num frames: 91017216. Throughput: 0: 6140.4. Samples: 12746832. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:40:08,945][100720] Avg episode reward: [(0, '4.487')] +[2024-12-28 14:40:09,880][100934] Updated weights for policy 0, policy_version 22227 (0.0008) +[2024-12-28 14:40:11,415][100934] Updated weights for policy 0, policy_version 22237 (0.0006) +[2024-12-28 14:40:13,083][100934] Updated weights for policy 0, policy_version 22247 (0.0008) +[2024-12-28 14:40:13,944][100720] Fps is (10 sec: 24576.0, 60 sec: 24985.6, 300 sec: 25284.1). Total num frames: 91140096. Throughput: 0: 6091.3. Samples: 12784958. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:40:13,945][100720] Avg episode reward: [(0, '4.391')] +[2024-12-28 14:40:14,975][100934] Updated weights for policy 0, policy_version 22257 (0.0008) +[2024-12-28 14:40:16,817][100934] Updated weights for policy 0, policy_version 22267 (0.0008) +[2024-12-28 14:40:18,656][100934] Updated weights for policy 0, policy_version 22277 (0.0007) +[2024-12-28 14:40:18,944][100720] Fps is (10 sec: 23347.2, 60 sec: 24576.0, 300 sec: 25284.1). Total num frames: 91250688. Throughput: 0: 6014.4. Samples: 12801344. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:40:18,945][100720] Avg episode reward: [(0, '4.288')] +[2024-12-28 14:40:20,477][100934] Updated weights for policy 0, policy_version 22287 (0.0008) +[2024-12-28 14:40:22,366][100934] Updated weights for policy 0, policy_version 22297 (0.0008) +[2024-12-28 14:40:23,944][100720] Fps is (10 sec: 22528.1, 60 sec: 24234.7, 300 sec: 25256.4). Total num frames: 91365376. Throughput: 0: 5919.4. Samples: 12834452. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:40:23,945][100720] Avg episode reward: [(0, '4.490')] +[2024-12-28 14:40:24,079][100934] Updated weights for policy 0, policy_version 22307 (0.0008) +[2024-12-28 14:40:25,644][100934] Updated weights for policy 0, policy_version 22317 (0.0007) +[2024-12-28 14:40:27,175][100934] Updated weights for policy 0, policy_version 22327 (0.0006) +[2024-12-28 14:40:28,716][100934] Updated weights for policy 0, policy_version 22337 (0.0007) +[2024-12-28 14:40:28,944][100720] Fps is (10 sec: 24576.1, 60 sec: 24234.7, 300 sec: 25242.5). Total num frames: 91496448. Throughput: 0: 6051.1. Samples: 12873808. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:40:28,945][100720] Avg episode reward: [(0, '4.342')] +[2024-12-28 14:40:30,277][100934] Updated weights for policy 0, policy_version 22347 (0.0006) +[2024-12-28 14:40:31,795][100934] Updated weights for policy 0, policy_version 22357 (0.0008) +[2024-12-28 14:40:33,299][100934] Updated weights for policy 0, policy_version 22367 (0.0007) +[2024-12-28 14:40:33,944][100720] Fps is (10 sec: 26623.5, 60 sec: 24234.6, 300 sec: 25256.3). Total num frames: 91631616. Throughput: 0: 6096.2. Samples: 12893876. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:40:33,945][100720] Avg episode reward: [(0, '4.421')] +[2024-12-28 14:40:34,846][100934] Updated weights for policy 0, policy_version 22377 (0.0007) +[2024-12-28 14:40:36,478][100934] Updated weights for policy 0, policy_version 22387 (0.0007) +[2024-12-28 14:40:38,259][100934] Updated weights for policy 0, policy_version 22397 (0.0009) +[2024-12-28 14:40:38,944][100720] Fps is (10 sec: 25394.7, 60 sec: 24234.6, 300 sec: 25270.2). Total num frames: 91750400. Throughput: 0: 6051.3. Samples: 12931904. 
Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:40:38,945][100720] Avg episode reward: [(0, '4.404')] +[2024-12-28 14:40:40,073][100934] Updated weights for policy 0, policy_version 22407 (0.0008) +[2024-12-28 14:40:41,953][100934] Updated weights for policy 0, policy_version 22417 (0.0008) +[2024-12-28 14:40:43,773][100934] Updated weights for policy 0, policy_version 22427 (0.0008) +[2024-12-28 14:40:43,944][100720] Fps is (10 sec: 22937.6, 60 sec: 24234.6, 300 sec: 25256.3). Total num frames: 91860992. Throughput: 0: 5980.7. Samples: 12965562. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:40:43,945][100720] Avg episode reward: [(0, '4.657')] +[2024-12-28 14:40:45,594][100934] Updated weights for policy 0, policy_version 22437 (0.0008) +[2024-12-28 14:40:47,300][100934] Updated weights for policy 0, policy_version 22447 (0.0007) +[2024-12-28 14:40:48,873][100934] Updated weights for policy 0, policy_version 22457 (0.0007) +[2024-12-28 14:40:48,944][100720] Fps is (10 sec: 23347.3, 60 sec: 24166.4, 300 sec: 25228.6). Total num frames: 91983872. Throughput: 0: 5990.3. Samples: 12982534. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:40:48,945][100720] Avg episode reward: [(0, '4.378')] +[2024-12-28 14:40:50,454][100934] Updated weights for policy 0, policy_version 22467 (0.0006) +[2024-12-28 14:40:51,970][100934] Updated weights for policy 0, policy_version 22477 (0.0007) +[2024-12-28 14:40:53,552][100934] Updated weights for policy 0, policy_version 22487 (0.0007) +[2024-12-28 14:40:53,944][100720] Fps is (10 sec: 25395.5, 60 sec: 24166.4, 300 sec: 25228.6). Total num frames: 92114944. Throughput: 0: 6111.7. Samples: 13021860. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:40:53,945][100720] Avg episode reward: [(0, '4.511')] +[2024-12-28 14:40:55,130][100934] Updated weights for policy 0, policy_version 22497 (0.0008) +[2024-12-28 14:40:56,670][100934] Updated weights for policy 0, policy_version 22507 (0.0007) +[2024-12-28 14:40:58,200][100934] Updated weights for policy 0, policy_version 22517 (0.0007) +[2024-12-28 14:40:58,944][100720] Fps is (10 sec: 26214.7, 60 sec: 24302.9, 300 sec: 25214.7). Total num frames: 92246016. Throughput: 0: 6143.6. Samples: 13061418. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:40:58,945][100720] Avg episode reward: [(0, '4.328')] +[2024-12-28 14:40:59,793][100934] Updated weights for policy 0, policy_version 22527 (0.0008) +[2024-12-28 14:41:01,595][100934] Updated weights for policy 0, policy_version 22537 (0.0008) +[2024-12-28 14:41:03,491][100934] Updated weights for policy 0, policy_version 22547 (0.0009) +[2024-12-28 14:41:03,944][100720] Fps is (10 sec: 24576.1, 60 sec: 24439.5, 300 sec: 25145.3). Total num frames: 92360704. Throughput: 0: 6172.0. Samples: 13079086. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:41:03,945][100720] Avg episode reward: [(0, '4.422')] +[2024-12-28 14:41:05,302][100934] Updated weights for policy 0, policy_version 22557 (0.0008) +[2024-12-28 14:41:07,073][100934] Updated weights for policy 0, policy_version 22567 (0.0008) +[2024-12-28 14:41:08,857][100934] Updated weights for policy 0, policy_version 22577 (0.0008) +[2024-12-28 14:41:08,944][100720] Fps is (10 sec: 22937.5, 60 sec: 24302.9, 300 sec: 25089.7). Total num frames: 92475392. Throughput: 0: 6186.8. Samples: 13112858. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:41:08,945][100720] Avg episode reward: [(0, '4.410')] +[2024-12-28 14:41:10,704][100934] Updated weights for policy 0, policy_version 22587 (0.0008) +[2024-12-28 14:41:12,255][100934] Updated weights for policy 0, policy_version 22597 (0.0007) +[2024-12-28 14:41:13,770][100934] Updated weights for policy 0, policy_version 22607 (0.0007) +[2024-12-28 14:41:13,944][100720] Fps is (10 sec: 24166.4, 60 sec: 24371.2, 300 sec: 25062.0). Total num frames: 92602368. Throughput: 0: 6135.6. Samples: 13149910. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:41:13,945][100720] Avg episode reward: [(0, '4.300')] +[2024-12-28 14:41:15,292][100934] Updated weights for policy 0, policy_version 22617 (0.0007) +[2024-12-28 14:41:16,821][100934] Updated weights for policy 0, policy_version 22627 (0.0007) +[2024-12-28 14:41:18,340][100934] Updated weights for policy 0, policy_version 22637 (0.0007) +[2024-12-28 14:41:18,944][100720] Fps is (10 sec: 26214.4, 60 sec: 24780.8, 300 sec: 25075.8). Total num frames: 92737536. Throughput: 0: 6137.2. Samples: 13170048. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:41:18,945][100720] Avg episode reward: [(0, '4.565')] +[2024-12-28 14:41:19,880][100934] Updated weights for policy 0, policy_version 22647 (0.0006) +[2024-12-28 14:41:21,395][100934] Updated weights for policy 0, policy_version 22657 (0.0007) +[2024-12-28 14:41:22,918][100934] Updated weights for policy 0, policy_version 22667 (0.0006) +[2024-12-28 14:41:23,944][100720] Fps is (10 sec: 26623.9, 60 sec: 25053.8, 300 sec: 25062.0). Total num frames: 92868608. Throughput: 0: 6190.1. Samples: 13210460. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:41:23,945][100720] Avg episode reward: [(0, '4.394')] +[2024-12-28 14:41:24,470][100934] Updated weights for policy 0, policy_version 22677 (0.0006) +[2024-12-28 14:41:26,258][100934] Updated weights for policy 0, policy_version 22687 (0.0007) +[2024-12-28 14:41:28,024][100934] Updated weights for policy 0, policy_version 22697 (0.0008) +[2024-12-28 14:41:28,944][100720] Fps is (10 sec: 24985.7, 60 sec: 24849.1, 300 sec: 25020.3). Total num frames: 92987392. Throughput: 0: 6234.8. Samples: 13246128. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:41:28,945][100720] Avg episode reward: [(0, '4.584')] +[2024-12-28 14:41:29,824][100934] Updated weights for policy 0, policy_version 22707 (0.0008) +[2024-12-28 14:41:31,651][100934] Updated weights for policy 0, policy_version 22717 (0.0009) +[2024-12-28 14:41:33,474][100934] Updated weights for policy 0, policy_version 22727 (0.0008) +[2024-12-28 14:41:33,944][100720] Fps is (10 sec: 22937.2, 60 sec: 24439.4, 300 sec: 25006.4). Total num frames: 93097984. Throughput: 0: 6236.6. Samples: 13263180. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:41:33,945][100720] Avg episode reward: [(0, '4.260')] +[2024-12-28 14:41:35,229][100934] Updated weights for policy 0, policy_version 22737 (0.0008) +[2024-12-28 14:41:36,786][100934] Updated weights for policy 0, policy_version 22747 (0.0006) +[2024-12-28 14:41:38,262][100934] Updated weights for policy 0, policy_version 22757 (0.0007) +[2024-12-28 14:41:38,944][100720] Fps is (10 sec: 24166.5, 60 sec: 24644.3, 300 sec: 25062.0). Total num frames: 93229056. Throughput: 0: 6182.9. Samples: 13300090. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:41:38,945][100720] Avg episode reward: [(0, '4.476')] +[2024-12-28 14:41:39,804][100934] Updated weights for policy 0, policy_version 22767 (0.0007) +[2024-12-28 14:41:41,340][100934] Updated weights for policy 0, policy_version 22777 (0.0006) +[2024-12-28 14:41:42,866][100934] Updated weights for policy 0, policy_version 22787 (0.0006) +[2024-12-28 14:41:43,944][100720] Fps is (10 sec: 26623.4, 60 sec: 25053.7, 300 sec: 25089.7). Total num frames: 93364224. Throughput: 0: 6198.8. Samples: 13340368. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:41:43,945][100720] Avg episode reward: [(0, '4.490')] +[2024-12-28 14:41:44,382][100934] Updated weights for policy 0, policy_version 22797 (0.0006) +[2024-12-28 14:41:45,940][100934] Updated weights for policy 0, policy_version 22807 (0.0007) +[2024-12-28 14:41:47,497][100934] Updated weights for policy 0, policy_version 22817 (0.0008) +[2024-12-28 14:41:48,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25190.5, 300 sec: 25089.7). Total num frames: 93495296. Throughput: 0: 6243.6. Samples: 13360048. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:41:48,945][100720] Avg episode reward: [(0, '4.463')] +[2024-12-28 14:41:49,046][100934] Updated weights for policy 0, policy_version 22827 (0.0007) +[2024-12-28 14:41:50,591][100934] Updated weights for policy 0, policy_version 22837 (0.0008) +[2024-12-28 14:41:52,124][100934] Updated weights for policy 0, policy_version 22847 (0.0006) +[2024-12-28 14:41:53,624][100934] Updated weights for policy 0, policy_version 22857 (0.0006) +[2024-12-28 14:41:53,944][100720] Fps is (10 sec: 26625.2, 60 sec: 25258.7, 300 sec: 25075.9). Total num frames: 93630464. Throughput: 0: 6384.4. Samples: 13400154. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:41:53,945][100720] Avg episode reward: [(0, '4.453')] +[2024-12-28 14:41:53,949][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000022859_93630464.pth... +[2024-12-28 14:41:53,979][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000021389_87609344.pth +[2024-12-28 14:41:55,200][100934] Updated weights for policy 0, policy_version 22867 (0.0006) +[2024-12-28 14:41:56,874][100934] Updated weights for policy 0, policy_version 22877 (0.0007) +[2024-12-28 14:41:58,675][100934] Updated weights for policy 0, policy_version 22887 (0.0007) +[2024-12-28 14:41:58,944][100720] Fps is (10 sec: 25394.9, 60 sec: 25053.8, 300 sec: 25034.2). Total num frames: 93749248. Throughput: 0: 6383.1. Samples: 13437148. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:41:58,945][100720] Avg episode reward: [(0, '4.281')] +[2024-12-28 14:42:00,506][100934] Updated weights for policy 0, policy_version 22897 (0.0009) +[2024-12-28 14:42:02,327][100934] Updated weights for policy 0, policy_version 22907 (0.0008) +[2024-12-28 14:42:03,944][100720] Fps is (10 sec: 22937.2, 60 sec: 24985.5, 300 sec: 24964.8). Total num frames: 93859840. Throughput: 0: 6305.1. Samples: 13453778. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:42:03,945][100720] Avg episode reward: [(0, '4.555')] +[2024-12-28 14:42:04,142][100934] Updated weights for policy 0, policy_version 22917 (0.0009) +[2024-12-28 14:42:05,944][100934] Updated weights for policy 0, policy_version 22927 (0.0008) +[2024-12-28 14:42:07,687][100934] Updated weights for policy 0, policy_version 22937 (0.0007) +[2024-12-28 14:42:08,944][100720] Fps is (10 sec: 23347.4, 60 sec: 25122.1, 300 sec: 24937.0). Total num frames: 93982720. Throughput: 0: 6174.5. Samples: 13488314. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:42:08,945][100720] Avg episode reward: [(0, '4.442')] +[2024-12-28 14:42:09,206][100934] Updated weights for policy 0, policy_version 22947 (0.0007) +[2024-12-28 14:42:10,730][100934] Updated weights for policy 0, policy_version 22957 (0.0007) +[2024-12-28 14:42:12,254][100934] Updated weights for policy 0, policy_version 22967 (0.0006) +[2024-12-28 14:42:13,822][100934] Updated weights for policy 0, policy_version 22977 (0.0007) +[2024-12-28 14:42:13,944][100720] Fps is (10 sec: 25395.3, 60 sec: 25190.4, 300 sec: 24923.1). Total num frames: 94113792. Throughput: 0: 6275.4. Samples: 13528520. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:42:13,945][100720] Avg episode reward: [(0, '4.278')] +[2024-12-28 14:42:15,329][100934] Updated weights for policy 0, policy_version 22987 (0.0006) +[2024-12-28 14:42:16,858][100934] Updated weights for policy 0, policy_version 22997 (0.0007) +[2024-12-28 14:42:18,404][100934] Updated weights for policy 0, policy_version 23007 (0.0007) +[2024-12-28 14:42:18,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25190.4, 300 sec: 24964.8). Total num frames: 94248960. Throughput: 0: 6340.7. Samples: 13548508. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:42:18,945][100720] Avg episode reward: [(0, '4.544')] +[2024-12-28 14:42:19,978][100934] Updated weights for policy 0, policy_version 23017 (0.0006) +[2024-12-28 14:42:21,498][100934] Updated weights for policy 0, policy_version 23027 (0.0007) +[2024-12-28 14:42:22,995][100934] Updated weights for policy 0, policy_version 23037 (0.0006) +[2024-12-28 14:42:23,944][100720] Fps is (10 sec: 27033.9, 60 sec: 25258.7, 300 sec: 25020.3). Total num frames: 94384128. Throughput: 0: 6412.3. Samples: 13588644. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:42:23,945][100720] Avg episode reward: [(0, '4.408')] +[2024-12-28 14:42:24,568][100934] Updated weights for policy 0, policy_version 23047 (0.0007) +[2024-12-28 14:42:26,096][100934] Updated weights for policy 0, policy_version 23057 (0.0006) +[2024-12-28 14:42:27,587][100934] Updated weights for policy 0, policy_version 23067 (0.0007) +[2024-12-28 14:42:28,944][100720] Fps is (10 sec: 26624.1, 60 sec: 25463.5, 300 sec: 25075.9). Total num frames: 94515200. Throughput: 0: 6411.4. Samples: 13628878. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:42:28,945][100720] Avg episode reward: [(0, '4.466')] +[2024-12-28 14:42:29,097][100934] Updated weights for policy 0, policy_version 23077 (0.0007) +[2024-12-28 14:42:30,657][100934] Updated weights for policy 0, policy_version 23087 (0.0007) +[2024-12-28 14:42:32,154][100934] Updated weights for policy 0, policy_version 23097 (0.0006) +[2024-12-28 14:42:33,682][100934] Updated weights for policy 0, policy_version 23107 (0.0007) +[2024-12-28 14:42:33,944][100720] Fps is (10 sec: 26623.8, 60 sec: 25873.1, 300 sec: 25103.6). Total num frames: 94650368. 
Throughput: 0: 6419.4. Samples: 13648922. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:42:33,945][100720] Avg episode reward: [(0, '4.509')] +[2024-12-28 14:42:35,400][100934] Updated weights for policy 0, policy_version 23117 (0.0007) +[2024-12-28 14:42:37,153][100934] Updated weights for policy 0, policy_version 23127 (0.0007) +[2024-12-28 14:42:38,944][100720] Fps is (10 sec: 24985.5, 60 sec: 25600.0, 300 sec: 25117.5). Total num frames: 94765056. Throughput: 0: 6343.4. Samples: 13685606. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:42:38,945][100720] Avg episode reward: [(0, '4.207')] +[2024-12-28 14:42:38,977][100934] Updated weights for policy 0, policy_version 23137 (0.0009) +[2024-12-28 14:42:40,757][100934] Updated weights for policy 0, policy_version 23147 (0.0008) +[2024-12-28 14:42:42,521][100934] Updated weights for policy 0, policy_version 23157 (0.0008) +[2024-12-28 14:42:43,944][100720] Fps is (10 sec: 22937.8, 60 sec: 25258.8, 300 sec: 25103.6). Total num frames: 94879744. Throughput: 0: 6282.2. Samples: 13719848. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:42:43,945][100720] Avg episode reward: [(0, '4.221')] +[2024-12-28 14:42:44,367][100934] Updated weights for policy 0, policy_version 23167 (0.0009) +[2024-12-28 14:42:45,941][100934] Updated weights for policy 0, policy_version 23177 (0.0007) +[2024-12-28 14:42:47,511][100934] Updated weights for policy 0, policy_version 23187 (0.0006) +[2024-12-28 14:42:48,944][100720] Fps is (10 sec: 24575.8, 60 sec: 25258.6, 300 sec: 25103.6). Total num frames: 95010816. Throughput: 0: 6336.1. Samples: 13738900. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:42:48,945][100720] Avg episode reward: [(0, '4.364')] +[2024-12-28 14:42:49,040][100934] Updated weights for policy 0, policy_version 23197 (0.0007) +[2024-12-28 14:42:50,621][100934] Updated weights for policy 0, policy_version 23207 (0.0008) +[2024-12-28 14:42:52,113][100934] Updated weights for policy 0, policy_version 23217 (0.0007) +[2024-12-28 14:42:53,647][100934] Updated weights for policy 0, policy_version 23227 (0.0007) +[2024-12-28 14:42:53,944][100720] Fps is (10 sec: 26214.2, 60 sec: 25190.4, 300 sec: 25089.7). Total num frames: 95141888. Throughput: 0: 6454.8. Samples: 13778782. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:42:53,945][100720] Avg episode reward: [(0, '4.692')] +[2024-12-28 14:42:55,186][100934] Updated weights for policy 0, policy_version 23237 (0.0006) +[2024-12-28 14:42:56,662][100934] Updated weights for policy 0, policy_version 23247 (0.0006) +[2024-12-28 14:42:58,166][100934] Updated weights for policy 0, policy_version 23257 (0.0006) +[2024-12-28 14:42:58,944][100720] Fps is (10 sec: 27033.9, 60 sec: 25531.8, 300 sec: 25117.5). Total num frames: 95281152. Throughput: 0: 6463.1. Samples: 13819360. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:42:58,945][100720] Avg episode reward: [(0, '4.517')] +[2024-12-28 14:42:59,705][100934] Updated weights for policy 0, policy_version 23267 (0.0006) +[2024-12-28 14:43:01,260][100934] Updated weights for policy 0, policy_version 23277 (0.0006) +[2024-12-28 14:43:02,779][100934] Updated weights for policy 0, policy_version 23287 (0.0006) +[2024-12-28 14:43:03,944][100720] Fps is (10 sec: 27033.9, 60 sec: 25873.1, 300 sec: 25117.5). Total num frames: 95412224. Throughput: 0: 6463.4. Samples: 13839362. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:43:03,945][100720] Avg episode reward: [(0, '4.290')] +[2024-12-28 14:43:04,300][100934] Updated weights for policy 0, policy_version 23297 (0.0006) +[2024-12-28 14:43:05,849][100934] Updated weights for policy 0, policy_version 23307 (0.0006) +[2024-12-28 14:43:07,397][100934] Updated weights for policy 0, policy_version 23317 (0.0006) +[2024-12-28 14:43:08,905][100934] Updated weights for policy 0, policy_version 23327 (0.0006) +[2024-12-28 14:43:08,944][100720] Fps is (10 sec: 26624.0, 60 sec: 26077.9, 300 sec: 25117.5). Total num frames: 95547392. Throughput: 0: 6463.1. Samples: 13879482. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:43:08,945][100720] Avg episode reward: [(0, '4.415')] +[2024-12-28 14:43:10,424][100934] Updated weights for policy 0, policy_version 23337 (0.0006) +[2024-12-28 14:43:11,974][100934] Updated weights for policy 0, policy_version 23347 (0.0007) +[2024-12-28 14:43:13,498][100934] Updated weights for policy 0, policy_version 23357 (0.0006) +[2024-12-28 14:43:13,944][100720] Fps is (10 sec: 26623.8, 60 sec: 26077.9, 300 sec: 25117.5). Total num frames: 95678464. Throughput: 0: 6461.5. Samples: 13919648. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:43:13,945][100720] Avg episode reward: [(0, '4.535')] +[2024-12-28 14:43:15,040][100934] Updated weights for policy 0, policy_version 23367 (0.0007) +[2024-12-28 14:43:16,563][100934] Updated weights for policy 0, policy_version 23377 (0.0006) +[2024-12-28 14:43:18,098][100934] Updated weights for policy 0, policy_version 23387 (0.0006) +[2024-12-28 14:43:18,944][100720] Fps is (10 sec: 26623.7, 60 sec: 26077.8, 300 sec: 25131.4). Total num frames: 95813632. Throughput: 0: 6460.7. Samples: 13939654. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:43:18,945][100720] Avg episode reward: [(0, '4.302')] +[2024-12-28 14:43:19,662][100934] Updated weights for policy 0, policy_version 23397 (0.0007) +[2024-12-28 14:43:21,231][100934] Updated weights for policy 0, policy_version 23407 (0.0007) +[2024-12-28 14:43:22,806][100934] Updated weights for policy 0, policy_version 23417 (0.0008) +[2024-12-28 14:43:23,944][100720] Fps is (10 sec: 26624.2, 60 sec: 26009.6, 300 sec: 25131.4). Total num frames: 95944704. Throughput: 0: 6521.9. Samples: 13979092. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:43:23,945][100720] Avg episode reward: [(0, '4.284')] +[2024-12-28 14:43:24,367][100934] Updated weights for policy 0, policy_version 23427 (0.0006) +[2024-12-28 14:43:25,882][100934] Updated weights for policy 0, policy_version 23437 (0.0006) +[2024-12-28 14:43:27,405][100934] Updated weights for policy 0, policy_version 23447 (0.0006) +[2024-12-28 14:43:28,924][100934] Updated weights for policy 0, policy_version 23457 (0.0008) +[2024-12-28 14:43:28,944][100720] Fps is (10 sec: 26624.2, 60 sec: 26077.8, 300 sec: 25131.4). Total num frames: 96079872. Throughput: 0: 6651.2. Samples: 14019152. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:43:28,945][100720] Avg episode reward: [(0, '4.450')] +[2024-12-28 14:43:30,482][100934] Updated weights for policy 0, policy_version 23467 (0.0007) +[2024-12-28 14:43:32,061][100934] Updated weights for policy 0, policy_version 23477 (0.0007) +[2024-12-28 14:43:33,605][100934] Updated weights for policy 0, policy_version 23487 (0.0007) +[2024-12-28 14:43:33,944][100720] Fps is (10 sec: 26624.0, 60 sec: 26009.6, 300 sec: 25131.4). Total num frames: 96210944. 
Throughput: 0: 6660.7. Samples: 14038630. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:43:33,945][100720] Avg episode reward: [(0, '4.494')] +[2024-12-28 14:43:35,167][100934] Updated weights for policy 0, policy_version 23497 (0.0007) +[2024-12-28 14:43:36,680][100934] Updated weights for policy 0, policy_version 23507 (0.0006) +[2024-12-28 14:43:38,235][100934] Updated weights for policy 0, policy_version 23517 (0.0007) +[2024-12-28 14:43:38,944][100720] Fps is (10 sec: 26214.2, 60 sec: 26282.6, 300 sec: 25173.0). Total num frames: 96342016. Throughput: 0: 6663.2. Samples: 14078626. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:43:38,945][100720] Avg episode reward: [(0, '4.438')] +[2024-12-28 14:43:39,819][100934] Updated weights for policy 0, policy_version 23527 (0.0006) +[2024-12-28 14:43:41,353][100934] Updated weights for policy 0, policy_version 23537 (0.0006) +[2024-12-28 14:43:42,845][100934] Updated weights for policy 0, policy_version 23547 (0.0007) +[2024-12-28 14:43:43,944][100720] Fps is (10 sec: 26623.7, 60 sec: 26624.0, 300 sec: 25242.5). Total num frames: 96477184. Throughput: 0: 6648.9. Samples: 14118560. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:43:43,945][100720] Avg episode reward: [(0, '4.386')] +[2024-12-28 14:43:44,386][100934] Updated weights for policy 0, policy_version 23557 (0.0007) +[2024-12-28 14:43:45,936][100934] Updated weights for policy 0, policy_version 23567 (0.0007) +[2024-12-28 14:43:47,470][100934] Updated weights for policy 0, policy_version 23577 (0.0007) +[2024-12-28 14:43:48,944][100720] Fps is (10 sec: 26624.1, 60 sec: 26624.0, 300 sec: 25284.1). Total num frames: 96608256. Throughput: 0: 6644.3. Samples: 14138354. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:43:48,945][100720] Avg episode reward: [(0, '4.504')] +[2024-12-28 14:43:49,015][100934] Updated weights for policy 0, policy_version 23587 (0.0007) +[2024-12-28 14:43:50,560][100934] Updated weights for policy 0, policy_version 23597 (0.0007) +[2024-12-28 14:43:52,260][100934] Updated weights for policy 0, policy_version 23607 (0.0008) +[2024-12-28 14:43:53,944][100720] Fps is (10 sec: 25395.4, 60 sec: 26487.5, 300 sec: 25242.5). Total num frames: 96731136. Throughput: 0: 6600.7. Samples: 14176516. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:43:53,945][100720] Avg episode reward: [(0, '4.273')] +[2024-12-28 14:43:53,951][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000023616_96731136.pth... +[2024-12-28 14:43:53,992][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000022135_90664960.pth +[2024-12-28 14:43:54,066][100934] Updated weights for policy 0, policy_version 23617 (0.0009) +[2024-12-28 14:43:55,940][100934] Updated weights for policy 0, policy_version 23627 (0.0009) +[2024-12-28 14:43:57,777][100934] Updated weights for policy 0, policy_version 23637 (0.0008) +[2024-12-28 14:43:58,944][100720] Fps is (10 sec: 23347.2, 60 sec: 26009.6, 300 sec: 25228.6). Total num frames: 96841728. Throughput: 0: 6446.7. Samples: 14209748. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:43:58,945][100720] Avg episode reward: [(0, '4.533')] +[2024-12-28 14:43:59,674][100934] Updated weights for policy 0, policy_version 23647 (0.0009) +[2024-12-28 14:44:01,515][100934] Updated weights for policy 0, policy_version 23657 (0.0008) +[2024-12-28 14:44:03,108][100934] Updated weights for policy 0, policy_version 23667 (0.0006) +[2024-12-28 14:44:03,944][100720] Fps is (10 sec: 22937.7, 60 sec: 25804.8, 300 sec: 25242.5). Total num frames: 96960512. Throughput: 0: 6372.4. Samples: 14226412. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:44:03,945][100720] Avg episode reward: [(0, '4.427')] +[2024-12-28 14:44:04,613][100934] Updated weights for policy 0, policy_version 23677 (0.0007) +[2024-12-28 14:44:06,135][100934] Updated weights for policy 0, policy_version 23687 (0.0007) +[2024-12-28 14:44:07,731][100934] Updated weights for policy 0, policy_version 23697 (0.0007) +[2024-12-28 14:44:08,944][100720] Fps is (10 sec: 24985.7, 60 sec: 25736.5, 300 sec: 25256.4). Total num frames: 97091584. Throughput: 0: 6380.8. Samples: 14266226. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) +[2024-12-28 14:44:08,945][100720] Avg episode reward: [(0, '4.369')] +[2024-12-28 14:44:09,344][100934] Updated weights for policy 0, policy_version 23707 (0.0007) +[2024-12-28 14:44:10,993][100934] Updated weights for policy 0, policy_version 23717 (0.0007) +[2024-12-28 14:44:12,603][100934] Updated weights for policy 0, policy_version 23727 (0.0007) +[2024-12-28 14:44:13,944][100720] Fps is (10 sec: 25804.7, 60 sec: 25668.3, 300 sec: 25228.6). Total num frames: 97218560. Throughput: 0: 6331.5. Samples: 14304070. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) +[2024-12-28 14:44:13,945][100720] Avg episode reward: [(0, '4.371')] +[2024-12-28 14:44:14,187][100934] Updated weights for policy 0, policy_version 23737 (0.0008) +[2024-12-28 14:44:15,769][100934] Updated weights for policy 0, policy_version 23747 (0.0007) +[2024-12-28 14:44:17,311][100934] Updated weights for policy 0, policy_version 23757 (0.0007) +[2024-12-28 14:44:18,802][100934] Updated weights for policy 0, policy_version 23767 (0.0006) +[2024-12-28 14:44:18,944][100720] Fps is (10 sec: 25804.9, 60 sec: 25600.0, 300 sec: 25214.7). Total num frames: 97349632. Throughput: 0: 6340.1. Samples: 14323936. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) +[2024-12-28 14:44:18,944][100720] Avg episode reward: [(0, '4.585')] +[2024-12-28 14:44:20,375][100934] Updated weights for policy 0, policy_version 23777 (0.0006) +[2024-12-28 14:44:21,881][100934] Updated weights for policy 0, policy_version 23787 (0.0007) +[2024-12-28 14:44:23,462][100934] Updated weights for policy 0, policy_version 23797 (0.0006) +[2024-12-28 14:44:23,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25668.3, 300 sec: 25228.6). Total num frames: 97484800. Throughput: 0: 6336.5. Samples: 14363770. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 14:44:23,945][100720] Avg episode reward: [(0, '4.460')] +[2024-12-28 14:44:25,009][100934] Updated weights for policy 0, policy_version 23807 (0.0007) +[2024-12-28 14:44:26,534][100934] Updated weights for policy 0, policy_version 23817 (0.0006) +[2024-12-28 14:44:28,061][100934] Updated weights for policy 0, policy_version 23827 (0.0007) +[2024-12-28 14:44:28,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25600.0, 300 sec: 25214.7). Total num frames: 97615872. Throughput: 0: 6337.2. Samples: 14403732. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 14:44:28,945][100720] Avg episode reward: [(0, '4.421')] +[2024-12-28 14:44:29,632][100934] Updated weights for policy 0, policy_version 23837 (0.0007) +[2024-12-28 14:44:31,208][100934] Updated weights for policy 0, policy_version 23847 (0.0007) +[2024-12-28 14:44:32,771][100934] Updated weights for policy 0, policy_version 23857 (0.0006) +[2024-12-28 14:44:33,944][100720] Fps is (10 sec: 26214.5, 60 sec: 25600.0, 300 sec: 25256.4). Total num frames: 97746944. Throughput: 0: 6330.6. Samples: 14423230. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 14:44:33,945][100720] Avg episode reward: [(0, '4.484')] +[2024-12-28 14:44:34,279][100934] Updated weights for policy 0, policy_version 23867 (0.0007) +[2024-12-28 14:44:35,844][100934] Updated weights for policy 0, policy_version 23877 (0.0006) +[2024-12-28 14:44:37,389][100934] Updated weights for policy 0, policy_version 23887 (0.0006) +[2024-12-28 14:44:38,937][100934] Updated weights for policy 0, policy_version 23897 (0.0007) +[2024-12-28 14:44:38,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25668.3, 300 sec: 25339.7). Total num frames: 97882112. Throughput: 0: 6368.9. Samples: 14463118. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 14:44:38,945][100720] Avg episode reward: [(0, '4.345')] +[2024-12-28 14:44:40,470][100934] Updated weights for policy 0, policy_version 23907 (0.0006) +[2024-12-28 14:44:42,088][100934] Updated weights for policy 0, policy_version 23917 (0.0007) +[2024-12-28 14:44:43,628][100934] Updated weights for policy 0, policy_version 23927 (0.0007) +[2024-12-28 14:44:43,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25600.0, 300 sec: 25353.5). Total num frames: 98013184. Throughput: 0: 6504.8. Samples: 14502462. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:44:43,945][100720] Avg episode reward: [(0, '4.355')] +[2024-12-28 14:44:45,204][100934] Updated weights for policy 0, policy_version 23937 (0.0006) +[2024-12-28 14:44:46,771][100934] Updated weights for policy 0, policy_version 23947 (0.0006) +[2024-12-28 14:44:48,303][100934] Updated weights for policy 0, policy_version 23957 (0.0006) +[2024-12-28 14:44:48,944][100720] Fps is (10 sec: 26214.1, 60 sec: 25600.0, 300 sec: 25353.5). Total num frames: 98144256. Throughput: 0: 6569.7. Samples: 14522050. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:44:48,945][100720] Avg episode reward: [(0, '4.478')] +[2024-12-28 14:44:49,869][100934] Updated weights for policy 0, policy_version 23967 (0.0007) +[2024-12-28 14:44:51,449][100934] Updated weights for policy 0, policy_version 23977 (0.0007) +[2024-12-28 14:44:53,008][100934] Updated weights for policy 0, policy_version 23987 (0.0007) +[2024-12-28 14:44:53,944][100720] Fps is (10 sec: 26214.1, 60 sec: 25736.5, 300 sec: 25381.3). Total num frames: 98275328. Throughput: 0: 6560.1. Samples: 14561432. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:44:53,945][100720] Avg episode reward: [(0, '4.355')] +[2024-12-28 14:44:54,527][100934] Updated weights for policy 0, policy_version 23997 (0.0007) +[2024-12-28 14:44:56,092][100934] Updated weights for policy 0, policy_version 24007 (0.0006) +[2024-12-28 14:44:57,591][100934] Updated weights for policy 0, policy_version 24017 (0.0007) +[2024-12-28 14:44:58,944][100720] Fps is (10 sec: 26214.7, 60 sec: 26077.9, 300 sec: 25464.6). Total num frames: 98406400. Throughput: 0: 6612.8. Samples: 14601646. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:44:58,945][100720] Avg episode reward: [(0, '4.346')] +[2024-12-28 14:44:59,116][100934] Updated weights for policy 0, policy_version 24027 (0.0006) +[2024-12-28 14:45:00,717][100934] Updated weights for policy 0, policy_version 24037 (0.0006) +[2024-12-28 14:45:02,265][100934] Updated weights for policy 0, policy_version 24047 (0.0009) +[2024-12-28 14:45:03,788][100934] Updated weights for policy 0, policy_version 24057 (0.0007) +[2024-12-28 14:45:03,944][100720] Fps is (10 sec: 26214.6, 60 sec: 26282.7, 300 sec: 25492.4). Total num frames: 98537472. Throughput: 0: 6607.6. Samples: 14621278. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:45:03,945][100720] Avg episode reward: [(0, '4.274')] +[2024-12-28 14:45:05,326][100934] Updated weights for policy 0, policy_version 24067 (0.0007) +[2024-12-28 14:45:06,869][100934] Updated weights for policy 0, policy_version 24077 (0.0007) +[2024-12-28 14:45:08,417][100934] Updated weights for policy 0, policy_version 24087 (0.0006) +[2024-12-28 14:45:08,944][100720] Fps is (10 sec: 26623.6, 60 sec: 26350.9, 300 sec: 25534.0). Total num frames: 98672640. Throughput: 0: 6607.9. Samples: 14661124. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:45:08,945][100720] Avg episode reward: [(0, '4.559')] +[2024-12-28 14:45:09,975][100934] Updated weights for policy 0, policy_version 24097 (0.0006) +[2024-12-28 14:45:11,530][100934] Updated weights for policy 0, policy_version 24107 (0.0007) +[2024-12-28 14:45:13,083][100934] Updated weights for policy 0, policy_version 24117 (0.0007) +[2024-12-28 14:45:13,944][100720] Fps is (10 sec: 26624.0, 60 sec: 26419.2, 300 sec: 25603.5). Total num frames: 98803712. Throughput: 0: 6595.1. Samples: 14700512. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:45:13,945][100720] Avg episode reward: [(0, '4.346')] +[2024-12-28 14:45:14,641][100934] Updated weights for policy 0, policy_version 24127 (0.0008) +[2024-12-28 14:45:16,241][100934] Updated weights for policy 0, policy_version 24137 (0.0007) +[2024-12-28 14:45:17,810][100934] Updated weights for policy 0, policy_version 24147 (0.0007) +[2024-12-28 14:45:18,944][100720] Fps is (10 sec: 26214.8, 60 sec: 26419.2, 300 sec: 25659.0). Total num frames: 98934784. Throughput: 0: 6596.3. Samples: 14720064. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:45:18,945][100720] Avg episode reward: [(0, '4.472')] +[2024-12-28 14:45:19,332][100934] Updated weights for policy 0, policy_version 24157 (0.0007) +[2024-12-28 14:45:20,937][100934] Updated weights for policy 0, policy_version 24167 (0.0008) +[2024-12-28 14:45:22,497][100934] Updated weights for policy 0, policy_version 24177 (0.0007) +[2024-12-28 14:45:23,944][100720] Fps is (10 sec: 26214.4, 60 sec: 26350.9, 300 sec: 25659.0). Total num frames: 99065856. Throughput: 0: 6582.2. Samples: 14759316. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:45:23,945][100720] Avg episode reward: [(0, '4.553')] +[2024-12-28 14:45:24,060][100934] Updated weights for policy 0, policy_version 24187 (0.0006) +[2024-12-28 14:45:25,575][100934] Updated weights for policy 0, policy_version 24197 (0.0007) +[2024-12-28 14:45:27,144][100934] Updated weights for policy 0, policy_version 24207 (0.0006) +[2024-12-28 14:45:28,698][100934] Updated weights for policy 0, policy_version 24217 (0.0008) +[2024-12-28 14:45:28,944][100720] Fps is (10 sec: 26214.1, 60 sec: 26350.9, 300 sec: 25645.1). Total num frames: 99196928. 
Throughput: 0: 6587.8. Samples: 14798912. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:45:28,945][100720] Avg episode reward: [(0, '4.525')] +[2024-12-28 14:45:30,258][100934] Updated weights for policy 0, policy_version 24227 (0.0007) +[2024-12-28 14:45:31,823][100934] Updated weights for policy 0, policy_version 24237 (0.0007) +[2024-12-28 14:45:33,345][100934] Updated weights for policy 0, policy_version 24247 (0.0006) +[2024-12-28 14:45:33,944][100720] Fps is (10 sec: 26214.4, 60 sec: 26350.9, 300 sec: 25686.8). Total num frames: 99328000. Throughput: 0: 6592.3. Samples: 14818702. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:45:33,945][100720] Avg episode reward: [(0, '4.298')] +[2024-12-28 14:45:34,863][100934] Updated weights for policy 0, policy_version 24257 (0.0007) +[2024-12-28 14:45:36,455][100934] Updated weights for policy 0, policy_version 24267 (0.0006) +[2024-12-28 14:45:38,106][100934] Updated weights for policy 0, policy_version 24277 (0.0008) +[2024-12-28 14:45:38,944][100720] Fps is (10 sec: 26214.6, 60 sec: 26282.6, 300 sec: 25756.2). Total num frames: 99459072. Throughput: 0: 6587.5. Samples: 14857870. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:45:38,945][100720] Avg episode reward: [(0, '4.211')] +[2024-12-28 14:45:39,658][100934] Updated weights for policy 0, policy_version 24287 (0.0006) +[2024-12-28 14:45:41,226][100934] Updated weights for policy 0, policy_version 24297 (0.0006) +[2024-12-28 14:45:42,824][100934] Updated weights for policy 0, policy_version 24307 (0.0007) +[2024-12-28 14:45:43,944][100720] Fps is (10 sec: 26214.1, 60 sec: 26282.6, 300 sec: 25784.0). Total num frames: 99590144. Throughput: 0: 6563.1. Samples: 14896988. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:45:43,945][100720] Avg episode reward: [(0, '4.470')] +[2024-12-28 14:45:44,356][100934] Updated weights for policy 0, policy_version 24317 (0.0007) +[2024-12-28 14:45:45,898][100934] Updated weights for policy 0, policy_version 24327 (0.0007) +[2024-12-28 14:45:47,452][100934] Updated weights for policy 0, policy_version 24337 (0.0006) +[2024-12-28 14:45:48,944][100720] Fps is (10 sec: 26214.2, 60 sec: 26282.7, 300 sec: 25784.0). Total num frames: 99721216. Throughput: 0: 6569.5. Samples: 14916904. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:45:48,945][100720] Avg episode reward: [(0, '4.316')] +[2024-12-28 14:45:49,027][100934] Updated weights for policy 0, policy_version 24347 (0.0008) +[2024-12-28 14:45:50,644][100934] Updated weights for policy 0, policy_version 24357 (0.0008) +[2024-12-28 14:45:52,197][100934] Updated weights for policy 0, policy_version 24367 (0.0006) +[2024-12-28 14:45:53,735][100934] Updated weights for policy 0, policy_version 24377 (0.0006) +[2024-12-28 14:45:53,944][100720] Fps is (10 sec: 26214.6, 60 sec: 26282.7, 300 sec: 25784.0). Total num frames: 99852288. Throughput: 0: 6550.2. Samples: 14955880. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:45:53,945][100720] Avg episode reward: [(0, '4.623')] +[2024-12-28 14:45:53,950][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000024378_99852288.pth... 
+[2024-12-28 14:45:53,979][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000022859_93630464.pth +[2024-12-28 14:45:55,358][100934] Updated weights for policy 0, policy_version 24387 (0.0008) +[2024-12-28 14:45:56,924][100934] Updated weights for policy 0, policy_version 24397 (0.0006) +[2024-12-28 14:45:58,487][100934] Updated weights for policy 0, policy_version 24407 (0.0007) +[2024-12-28 14:45:58,944][100720] Fps is (10 sec: 26214.5, 60 sec: 26282.6, 300 sec: 25839.5). Total num frames: 99983360. Throughput: 0: 6543.2. Samples: 14994956. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:45:58,945][100720] Avg episode reward: [(0, '4.352')] +[2024-12-28 14:46:00,067][100934] Updated weights for policy 0, policy_version 24417 (0.0007) +[2024-12-28 14:46:01,629][100934] Updated weights for policy 0, policy_version 24427 (0.0007) +[2024-12-28 14:46:03,208][100934] Updated weights for policy 0, policy_version 24437 (0.0007) +[2024-12-28 14:46:03,944][100720] Fps is (10 sec: 25804.8, 60 sec: 26214.4, 300 sec: 25881.2). Total num frames: 100110336. Throughput: 0: 6538.8. Samples: 15014312. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:46:03,945][100720] Avg episode reward: [(0, '4.483')] +[2024-12-28 14:46:04,764][100934] Updated weights for policy 0, policy_version 24447 (0.0006) +[2024-12-28 14:46:06,310][100934] Updated weights for policy 0, policy_version 24457 (0.0009) +[2024-12-28 14:46:07,878][100934] Updated weights for policy 0, policy_version 24467 (0.0006) +[2024-12-28 14:46:08,944][100720] Fps is (10 sec: 25804.9, 60 sec: 26146.2, 300 sec: 25895.0). Total num frames: 100241408. Throughput: 0: 6542.7. Samples: 15053738. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:46:08,945][100720] Avg episode reward: [(0, '4.537')] +[2024-12-28 14:46:09,457][100934] Updated weights for policy 0, policy_version 24477 (0.0007) +[2024-12-28 14:46:11,264][100934] Updated weights for policy 0, policy_version 24487 (0.0008) +[2024-12-28 14:46:13,104][100934] Updated weights for policy 0, policy_version 24497 (0.0008) +[2024-12-28 14:46:13,944][100720] Fps is (10 sec: 24575.7, 60 sec: 25873.0, 300 sec: 25825.6). Total num frames: 100356096. Throughput: 0: 6443.3. Samples: 15088862. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:46:13,945][100720] Avg episode reward: [(0, '4.285')] +[2024-12-28 14:46:14,868][100934] Updated weights for policy 0, policy_version 24507 (0.0007) +[2024-12-28 14:46:16,664][100934] Updated weights for policy 0, policy_version 24517 (0.0008) +[2024-12-28 14:46:18,465][100934] Updated weights for policy 0, policy_version 24527 (0.0008) +[2024-12-28 14:46:18,944][100720] Fps is (10 sec: 22937.5, 60 sec: 25600.0, 300 sec: 25770.1). Total num frames: 100470784. Throughput: 0: 6387.5. Samples: 15106140. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:46:18,946][100720] Avg episode reward: [(0, '4.331')] +[2024-12-28 14:46:20,308][100934] Updated weights for policy 0, policy_version 24537 (0.0009) +[2024-12-28 14:46:21,863][100934] Updated weights for policy 0, policy_version 24547 (0.0007) +[2024-12-28 14:46:23,452][100934] Updated weights for policy 0, policy_version 24557 (0.0007) +[2024-12-28 14:46:23,944][100720] Fps is (10 sec: 23757.0, 60 sec: 25463.5, 300 sec: 25784.0). Total num frames: 100593664. Throughput: 0: 6328.9. Samples: 15142672. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:46:23,945][100720] Avg episode reward: [(0, '4.568')] +[2024-12-28 14:46:25,199][100934] Updated weights for policy 0, policy_version 24567 (0.0008) +[2024-12-28 14:46:27,058][100934] Updated weights for policy 0, policy_version 24577 (0.0009) +[2024-12-28 14:46:28,903][100934] Updated weights for policy 0, policy_version 24587 (0.0008) +[2024-12-28 14:46:28,944][100720] Fps is (10 sec: 23756.9, 60 sec: 25190.4, 300 sec: 25797.9). Total num frames: 100708352. Throughput: 0: 6208.1. Samples: 15176352. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:46:28,945][100720] Avg episode reward: [(0, '4.710')] +[2024-12-28 14:46:30,788][100934] Updated weights for policy 0, policy_version 24597 (0.0008) +[2024-12-28 14:46:32,633][100934] Updated weights for policy 0, policy_version 24607 (0.0008) +[2024-12-28 14:46:33,944][100720] Fps is (10 sec: 22527.7, 60 sec: 24849.0, 300 sec: 25728.4). Total num frames: 100818944. Throughput: 0: 6131.6. Samples: 15192826. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:46:33,945][100720] Avg episode reward: [(0, '4.635')] +[2024-12-28 14:46:34,352][100934] Updated weights for policy 0, policy_version 24617 (0.0008) +[2024-12-28 14:46:35,882][100934] Updated weights for policy 0, policy_version 24627 (0.0006) +[2024-12-28 14:46:37,429][100934] Updated weights for policy 0, policy_version 24637 (0.0007) +[2024-12-28 14:46:38,937][100934] Updated weights for policy 0, policy_version 24647 (0.0007) +[2024-12-28 14:46:38,944][100720] Fps is (10 sec: 24576.1, 60 sec: 24917.4, 300 sec: 25728.5). Total num frames: 100954112. Throughput: 0: 6114.5. Samples: 15231034. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:46:38,945][100720] Avg episode reward: [(0, '4.369')] +[2024-12-28 14:46:40,479][100934] Updated weights for policy 0, policy_version 24657 (0.0008) +[2024-12-28 14:46:42,022][100934] Updated weights for policy 0, policy_version 24667 (0.0006) +[2024-12-28 14:46:43,580][100934] Updated weights for policy 0, policy_version 24677 (0.0007) +[2024-12-28 14:46:43,944][100720] Fps is (10 sec: 26624.4, 60 sec: 24917.4, 300 sec: 25728.4). Total num frames: 101085184. Throughput: 0: 6126.9. Samples: 15270664. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:46:43,945][100720] Avg episode reward: [(0, '4.420')] +[2024-12-28 14:46:45,127][100934] Updated weights for policy 0, policy_version 24687 (0.0006) +[2024-12-28 14:46:46,624][100934] Updated weights for policy 0, policy_version 24697 (0.0006) +[2024-12-28 14:46:48,187][100934] Updated weights for policy 0, policy_version 24707 (0.0006) +[2024-12-28 14:46:48,944][100720] Fps is (10 sec: 26214.3, 60 sec: 24917.4, 300 sec: 25714.5). Total num frames: 101216256. Throughput: 0: 6145.7. Samples: 15290868. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:46:48,945][100720] Avg episode reward: [(0, '4.541')] +[2024-12-28 14:46:49,796][100934] Updated weights for policy 0, policy_version 24717 (0.0009) +[2024-12-28 14:46:51,342][100934] Updated weights for policy 0, policy_version 24727 (0.0007) +[2024-12-28 14:46:52,859][100934] Updated weights for policy 0, policy_version 24737 (0.0006) +[2024-12-28 14:46:53,944][100720] Fps is (10 sec: 26214.4, 60 sec: 24917.3, 300 sec: 25756.2). Total num frames: 101347328. Throughput: 0: 6147.2. Samples: 15330362. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:46:53,945][100720] Avg episode reward: [(0, '4.452')] +[2024-12-28 14:46:54,408][100934] Updated weights for policy 0, policy_version 24747 (0.0007) +[2024-12-28 14:46:55,935][100934] Updated weights for policy 0, policy_version 24757 (0.0007) +[2024-12-28 14:46:57,469][100934] Updated weights for policy 0, policy_version 24767 (0.0006) +[2024-12-28 14:46:58,944][100720] Fps is (10 sec: 26214.3, 60 sec: 24917.3, 300 sec: 25825.6). Total num frames: 101478400. Throughput: 0: 6234.1. Samples: 15369398. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:46:58,945][100720] Avg episode reward: [(0, '4.281')] +[2024-12-28 14:46:59,165][100934] Updated weights for policy 0, policy_version 24777 (0.0007) +[2024-12-28 14:47:01,016][100934] Updated weights for policy 0, policy_version 24787 (0.0008) +[2024-12-28 14:47:02,879][100934] Updated weights for policy 0, policy_version 24797 (0.0008) +[2024-12-28 14:47:03,944][100720] Fps is (10 sec: 24165.9, 60 sec: 24644.2, 300 sec: 25784.0). Total num frames: 101588992. Throughput: 0: 6221.3. Samples: 15386098. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:47:03,945][100720] Avg episode reward: [(0, '4.458')] +[2024-12-28 14:47:04,738][100934] Updated weights for policy 0, policy_version 24807 (0.0008) +[2024-12-28 14:47:06,683][100934] Updated weights for policy 0, policy_version 24817 (0.0009) +[2024-12-28 14:47:08,480][100934] Updated weights for policy 0, policy_version 24827 (0.0008) +[2024-12-28 14:47:08,944][100720] Fps is (10 sec: 22118.4, 60 sec: 24302.9, 300 sec: 25714.6). Total num frames: 101699584. Throughput: 0: 6140.8. Samples: 15419008. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:47:08,945][100720] Avg episode reward: [(0, '4.319')] +[2024-12-28 14:47:10,019][100934] Updated weights for policy 0, policy_version 24837 (0.0006) +[2024-12-28 14:47:11,549][100934] Updated weights for policy 0, policy_version 24847 (0.0007) +[2024-12-28 14:47:13,065][100934] Updated weights for policy 0, policy_version 24857 (0.0007) +[2024-12-28 14:47:13,944][100720] Fps is (10 sec: 24576.5, 60 sec: 24644.3, 300 sec: 25714.6). Total num frames: 101834752. Throughput: 0: 6269.9. Samples: 15458498. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:47:13,945][100720] Avg episode reward: [(0, '4.358')] +[2024-12-28 14:47:14,596][100934] Updated weights for policy 0, policy_version 24867 (0.0007) +[2024-12-28 14:47:16,116][100934] Updated weights for policy 0, policy_version 24877 (0.0006) +[2024-12-28 14:47:17,667][100934] Updated weights for policy 0, policy_version 24887 (0.0006) +[2024-12-28 14:47:18,944][100720] Fps is (10 sec: 27032.8, 60 sec: 24985.5, 300 sec: 25714.5). Total num frames: 101969920. Throughput: 0: 6354.4. Samples: 15478776. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:47:18,946][100720] Avg episode reward: [(0, '4.399')] +[2024-12-28 14:47:19,215][100934] Updated weights for policy 0, policy_version 24897 (0.0007) +[2024-12-28 14:47:20,787][100934] Updated weights for policy 0, policy_version 24907 (0.0007) +[2024-12-28 14:47:22,351][100934] Updated weights for policy 0, policy_version 24917 (0.0007) +[2024-12-28 14:47:23,866][100934] Updated weights for policy 0, policy_version 24927 (0.0007) +[2024-12-28 14:47:23,944][100720] Fps is (10 sec: 26623.7, 60 sec: 25122.1, 300 sec: 25714.5). Total num frames: 102100992. Throughput: 0: 6382.0. Samples: 15518224. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:47:23,945][100720] Avg episode reward: [(0, '4.444')] +[2024-12-28 14:47:25,459][100934] Updated weights for policy 0, policy_version 24937 (0.0006) +[2024-12-28 14:47:27,021][100934] Updated weights for policy 0, policy_version 24947 (0.0007) +[2024-12-28 14:47:28,512][100934] Updated weights for policy 0, policy_version 24957 (0.0007) +[2024-12-28 14:47:28,944][100720] Fps is (10 sec: 26214.8, 60 sec: 25395.1, 300 sec: 25700.7). Total num frames: 102232064. Throughput: 0: 6382.4. Samples: 15557874. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:47:28,945][100720] Avg episode reward: [(0, '4.229')] +[2024-12-28 14:47:30,044][100934] Updated weights for policy 0, policy_version 24967 (0.0007) +[2024-12-28 14:47:31,597][100934] Updated weights for policy 0, policy_version 24977 (0.0006) +[2024-12-28 14:47:33,112][100934] Updated weights for policy 0, policy_version 24987 (0.0007) +[2024-12-28 14:47:33,944][100720] Fps is (10 sec: 26624.2, 60 sec: 25804.9, 300 sec: 25770.1). Total num frames: 102367232. Throughput: 0: 6377.0. Samples: 15577832. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:47:33,945][100720] Avg episode reward: [(0, '4.422')] +[2024-12-28 14:47:34,699][100934] Updated weights for policy 0, policy_version 24997 (0.0008) +[2024-12-28 14:47:36,286][100934] Updated weights for policy 0, policy_version 25007 (0.0007) +[2024-12-28 14:47:37,825][100934] Updated weights for policy 0, policy_version 25017 (0.0007) +[2024-12-28 14:47:38,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25736.5, 300 sec: 25825.6). Total num frames: 102498304. Throughput: 0: 6379.8. Samples: 15617452. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:47:38,945][100720] Avg episode reward: [(0, '4.531')] +[2024-12-28 14:47:39,346][100934] Updated weights for policy 0, policy_version 25027 (0.0006) +[2024-12-28 14:47:40,921][100934] Updated weights for policy 0, policy_version 25037 (0.0007) +[2024-12-28 14:47:42,430][100934] Updated weights for policy 0, policy_version 25047 (0.0006) +[2024-12-28 14:47:43,944][100720] Fps is (10 sec: 26214.1, 60 sec: 25736.5, 300 sec: 25825.6). Total num frames: 102629376. Throughput: 0: 6397.8. Samples: 15657298. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:47:43,945][100720] Avg episode reward: [(0, '4.403')] +[2024-12-28 14:47:43,963][100934] Updated weights for policy 0, policy_version 25057 (0.0007) +[2024-12-28 14:47:45,514][100934] Updated weights for policy 0, policy_version 25067 (0.0007) +[2024-12-28 14:47:47,098][100934] Updated weights for policy 0, policy_version 25077 (0.0007) +[2024-12-28 14:47:48,644][100934] Updated weights for policy 0, policy_version 25087 (0.0006) +[2024-12-28 14:47:48,944][100720] Fps is (10 sec: 26214.6, 60 sec: 25736.5, 300 sec: 25825.6). Total num frames: 102760448. Throughput: 0: 6461.9. Samples: 15676882. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:47:48,945][100720] Avg episode reward: [(0, '4.630')] +[2024-12-28 14:47:50,199][100934] Updated weights for policy 0, policy_version 25097 (0.0006) +[2024-12-28 14:47:51,729][100934] Updated weights for policy 0, policy_version 25107 (0.0006) +[2024-12-28 14:47:53,332][100934] Updated weights for policy 0, policy_version 25117 (0.0007) +[2024-12-28 14:47:53,944][100720] Fps is (10 sec: 26624.2, 60 sec: 25804.8, 300 sec: 25811.7). Total num frames: 102895616. Throughput: 0: 6612.1. Samples: 15716552. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:47:53,945][100720] Avg episode reward: [(0, '4.434')] +[2024-12-28 14:47:53,950][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000025121_102895616.pth... +[2024-12-28 14:47:53,985][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000023616_96731136.pth +[2024-12-28 14:47:54,853][100934] Updated weights for policy 0, policy_version 25127 (0.0007) +[2024-12-28 14:47:56,429][100934] Updated weights for policy 0, policy_version 25137 (0.0006) +[2024-12-28 14:47:57,968][100934] Updated weights for policy 0, policy_version 25147 (0.0006) +[2024-12-28 14:47:58,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25804.8, 300 sec: 25811.7). Total num frames: 103026688. Throughput: 0: 6612.1. Samples: 15756044. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:47:58,945][100720] Avg episode reward: [(0, '4.427')] +[2024-12-28 14:47:59,513][100934] Updated weights for policy 0, policy_version 25157 (0.0007) +[2024-12-28 14:48:01,055][100934] Updated weights for policy 0, policy_version 25167 (0.0006) +[2024-12-28 14:48:02,583][100934] Updated weights for policy 0, policy_version 25177 (0.0008) +[2024-12-28 14:48:03,944][100720] Fps is (10 sec: 26214.5, 60 sec: 26146.2, 300 sec: 25797.9). Total num frames: 103157760. Throughput: 0: 6604.9. Samples: 15775994. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:48:03,945][100720] Avg episode reward: [(0, '4.438')] +[2024-12-28 14:48:04,120][100934] Updated weights for policy 0, policy_version 25187 (0.0007) +[2024-12-28 14:48:05,668][100934] Updated weights for policy 0, policy_version 25197 (0.0007) +[2024-12-28 14:48:07,244][100934] Updated weights for policy 0, policy_version 25207 (0.0007) +[2024-12-28 14:48:08,819][100934] Updated weights for policy 0, policy_version 25217 (0.0006) +[2024-12-28 14:48:08,944][100720] Fps is (10 sec: 26214.3, 60 sec: 26487.5, 300 sec: 25797.9). Total num frames: 103288832. Throughput: 0: 6610.5. Samples: 15815694. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:48:08,945][100720] Avg episode reward: [(0, '4.303')] +[2024-12-28 14:48:10,311][100934] Updated weights for policy 0, policy_version 25227 (0.0006) +[2024-12-28 14:48:11,863][100934] Updated weights for policy 0, policy_version 25237 (0.0007) +[2024-12-28 14:48:13,443][100934] Updated weights for policy 0, policy_version 25247 (0.0007) +[2024-12-28 14:48:13,944][100720] Fps is (10 sec: 26624.0, 60 sec: 26487.5, 300 sec: 25797.9). Total num frames: 103424000. Throughput: 0: 6614.2. Samples: 15855514. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) +[2024-12-28 14:48:13,945][100720] Avg episode reward: [(0, '4.293')] +[2024-12-28 14:48:14,966][100934] Updated weights for policy 0, policy_version 25257 (0.0006) +[2024-12-28 14:48:16,497][100934] Updated weights for policy 0, policy_version 25267 (0.0007) +[2024-12-28 14:48:18,055][100934] Updated weights for policy 0, policy_version 25277 (0.0007) +[2024-12-28 14:48:18,944][100720] Fps is (10 sec: 26623.9, 60 sec: 26419.3, 300 sec: 25797.9). Total num frames: 103555072. Throughput: 0: 6616.0. Samples: 15875552. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) +[2024-12-28 14:48:18,946][100720] Avg episode reward: [(0, '4.280')] +[2024-12-28 14:48:19,619][100934] Updated weights for policy 0, policy_version 25287 (0.0007) +[2024-12-28 14:48:21,200][100934] Updated weights for policy 0, policy_version 25297 (0.0006) +[2024-12-28 14:48:22,725][100934] Updated weights for policy 0, policy_version 25307 (0.0007) +[2024-12-28 14:48:23,944][100720] Fps is (10 sec: 26624.0, 60 sec: 26487.5, 300 sec: 25797.9). Total num frames: 103690240. Throughput: 0: 6611.3. Samples: 15914962. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) +[2024-12-28 14:48:23,945][100720] Avg episode reward: [(0, '4.250')] +[2024-12-28 14:48:24,234][100934] Updated weights for policy 0, policy_version 25317 (0.0007) +[2024-12-28 14:48:25,762][100934] Updated weights for policy 0, policy_version 25327 (0.0006) +[2024-12-28 14:48:27,296][100934] Updated weights for policy 0, policy_version 25337 (0.0007) +[2024-12-28 14:48:28,881][100934] Updated weights for policy 0, policy_version 25347 (0.0007) +[2024-12-28 14:48:28,944][100720] Fps is (10 sec: 26624.2, 60 sec: 26487.5, 300 sec: 25797.9). Total num frames: 103821312. Throughput: 0: 6613.9. Samples: 15954924. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:48:28,945][100720] Avg episode reward: [(0, '4.472')] +[2024-12-28 14:48:30,454][100934] Updated weights for policy 0, policy_version 25357 (0.0007) +[2024-12-28 14:48:32,001][100934] Updated weights for policy 0, policy_version 25367 (0.0007) +[2024-12-28 14:48:33,856][100934] Updated weights for policy 0, policy_version 25377 (0.0008) +[2024-12-28 14:48:33,944][100720] Fps is (10 sec: 25395.1, 60 sec: 26282.6, 300 sec: 25770.1). Total num frames: 103944192. Throughput: 0: 6618.6. Samples: 15974718. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:48:33,945][100720] Avg episode reward: [(0, '4.480')] +[2024-12-28 14:48:35,760][100934] Updated weights for policy 0, policy_version 25387 (0.0008) +[2024-12-28 14:48:37,566][100934] Updated weights for policy 0, policy_version 25397 (0.0007) +[2024-12-28 14:48:38,944][100720] Fps is (10 sec: 23347.2, 60 sec: 25941.4, 300 sec: 25686.8). Total num frames: 104054784. Throughput: 0: 6472.1. Samples: 16007796. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:48:38,945][100720] Avg episode reward: [(0, '4.508')] +[2024-12-28 14:48:39,346][100934] Updated weights for policy 0, policy_version 25407 (0.0007) +[2024-12-28 14:48:41,116][100934] Updated weights for policy 0, policy_version 25417 (0.0010) +[2024-12-28 14:48:42,898][100934] Updated weights for policy 0, policy_version 25427 (0.0008) +[2024-12-28 14:48:43,944][100720] Fps is (10 sec: 22937.8, 60 sec: 25736.6, 300 sec: 25645.1). Total num frames: 104173568. Throughput: 0: 6374.8. Samples: 16042910. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:48:43,945][100720] Avg episode reward: [(0, '4.615')] +[2024-12-28 14:48:44,545][100934] Updated weights for policy 0, policy_version 25437 (0.0008) +[2024-12-28 14:48:46,121][100934] Updated weights for policy 0, policy_version 25447 (0.0007) +[2024-12-28 14:48:47,660][100934] Updated weights for policy 0, policy_version 25457 (0.0008) +[2024-12-28 14:48:48,944][100720] Fps is (10 sec: 24985.6, 60 sec: 25736.6, 300 sec: 25672.9). Total num frames: 104304640. Throughput: 0: 6364.4. Samples: 16062394. 
Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:48:48,945][100720] Avg episode reward: [(0, '4.467')] +[2024-12-28 14:48:49,165][100934] Updated weights for policy 0, policy_version 25467 (0.0007) +[2024-12-28 14:48:50,716][100934] Updated weights for policy 0, policy_version 25477 (0.0007) +[2024-12-28 14:48:52,221][100934] Updated weights for policy 0, policy_version 25487 (0.0007) +[2024-12-28 14:48:53,725][100934] Updated weights for policy 0, policy_version 25497 (0.0006) +[2024-12-28 14:48:53,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25736.5, 300 sec: 25756.2). Total num frames: 104439808. Throughput: 0: 6376.2. Samples: 16102624. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:48:53,945][100720] Avg episode reward: [(0, '4.444')] +[2024-12-28 14:48:55,225][100934] Updated weights for policy 0, policy_version 25507 (0.0006) +[2024-12-28 14:48:56,793][100934] Updated weights for policy 0, policy_version 25517 (0.0006) +[2024-12-28 14:48:58,361][100934] Updated weights for policy 0, policy_version 25527 (0.0006) +[2024-12-28 14:48:58,944][100720] Fps is (10 sec: 26623.6, 60 sec: 25736.5, 300 sec: 25797.8). Total num frames: 104570880. Throughput: 0: 6385.5. Samples: 16142864. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 14:48:58,945][100720] Avg episode reward: [(0, '4.475')] +[2024-12-28 14:48:59,901][100934] Updated weights for policy 0, policy_version 25537 (0.0006) +[2024-12-28 14:49:01,419][100934] Updated weights for policy 0, policy_version 25547 (0.0006) +[2024-12-28 14:49:02,907][100934] Updated weights for policy 0, policy_version 25557 (0.0006) +[2024-12-28 14:49:03,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25804.8, 300 sec: 25811.7). Total num frames: 104706048. Throughput: 0: 6385.9. Samples: 16162916. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:49:03,945][100720] Avg episode reward: [(0, '4.377')] +[2024-12-28 14:49:04,455][100934] Updated weights for policy 0, policy_version 25567 (0.0008) +[2024-12-28 14:49:06,013][100934] Updated weights for policy 0, policy_version 25577 (0.0007) +[2024-12-28 14:49:07,565][100934] Updated weights for policy 0, policy_version 25587 (0.0008) +[2024-12-28 14:49:08,944][100720] Fps is (10 sec: 27033.9, 60 sec: 25873.1, 300 sec: 25839.5). Total num frames: 104841216. Throughput: 0: 6398.9. Samples: 16202912. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:49:08,945][100720] Avg episode reward: [(0, '4.489')] +[2024-12-28 14:49:09,051][100934] Updated weights for policy 0, policy_version 25597 (0.0006) +[2024-12-28 14:49:10,714][100934] Updated weights for policy 0, policy_version 25607 (0.0007) +[2024-12-28 14:49:12,534][100934] Updated weights for policy 0, policy_version 25617 (0.0008) +[2024-12-28 14:49:13,944][100720] Fps is (10 sec: 24985.1, 60 sec: 25531.6, 300 sec: 25784.0). Total num frames: 104955904. Throughput: 0: 6304.9. Samples: 16238644. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:49:13,945][100720] Avg episode reward: [(0, '4.442')] +[2024-12-28 14:49:14,404][100934] Updated weights for policy 0, policy_version 25627 (0.0008) +[2024-12-28 14:49:16,159][100934] Updated weights for policy 0, policy_version 25637 (0.0008) +[2024-12-28 14:49:17,949][100934] Updated weights for policy 0, policy_version 25647 (0.0007) +[2024-12-28 14:49:18,944][100720] Fps is (10 sec: 22936.4, 60 sec: 25258.5, 300 sec: 25714.5). Total num frames: 105070592. Throughput: 0: 6242.1. Samples: 16255616. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:49:18,945][100720] Avg episode reward: [(0, '4.649')] +[2024-12-28 14:49:19,887][100934] Updated weights for policy 0, policy_version 25657 (0.0009) +[2024-12-28 14:49:21,592][100934] Updated weights for policy 0, policy_version 25667 (0.0008) +[2024-12-28 14:49:23,106][100934] Updated weights for policy 0, policy_version 25677 (0.0007) +[2024-12-28 14:49:23,944][100720] Fps is (10 sec: 23757.1, 60 sec: 25053.8, 300 sec: 25686.8). Total num frames: 105193472. Throughput: 0: 6303.5. Samples: 16291454. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:49:23,945][100720] Avg episode reward: [(0, '4.289')] +[2024-12-28 14:49:24,618][100934] Updated weights for policy 0, policy_version 25687 (0.0006) +[2024-12-28 14:49:26,165][100934] Updated weights for policy 0, policy_version 25697 (0.0007) +[2024-12-28 14:49:27,894][100934] Updated weights for policy 0, policy_version 25707 (0.0008) +[2024-12-28 14:49:28,944][100720] Fps is (10 sec: 24577.4, 60 sec: 24917.3, 300 sec: 25659.0). Total num frames: 105316352. Throughput: 0: 6365.6. Samples: 16329360. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:49:28,945][100720] Avg episode reward: [(0, '4.342')] +[2024-12-28 14:49:29,682][100934] Updated weights for policy 0, policy_version 25717 (0.0008) +[2024-12-28 14:49:31,532][100934] Updated weights for policy 0, policy_version 25727 (0.0008) +[2024-12-28 14:49:33,355][100934] Updated weights for policy 0, policy_version 25737 (0.0009) +[2024-12-28 14:49:33,944][100720] Fps is (10 sec: 23756.7, 60 sec: 24780.8, 300 sec: 25589.6). Total num frames: 105431040. Throughput: 0: 6303.6. Samples: 16346056. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:49:33,945][100720] Avg episode reward: [(0, '4.356')] +[2024-12-28 14:49:35,131][100934] Updated weights for policy 0, policy_version 25747 (0.0007) +[2024-12-28 14:49:36,902][100934] Updated weights for policy 0, policy_version 25757 (0.0007) +[2024-12-28 14:49:38,564][100934] Updated weights for policy 0, policy_version 25767 (0.0007) +[2024-12-28 14:49:38,944][100720] Fps is (10 sec: 23347.2, 60 sec: 24917.3, 300 sec: 25547.9). Total num frames: 105549824. Throughput: 0: 6177.0. Samples: 16380590. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:49:38,945][100720] Avg episode reward: [(0, '4.464')] +[2024-12-28 14:49:40,053][100934] Updated weights for policy 0, policy_version 25777 (0.0006) +[2024-12-28 14:49:41,784][100934] Updated weights for policy 0, policy_version 25787 (0.0008) +[2024-12-28 14:49:43,635][100934] Updated weights for policy 0, policy_version 25797 (0.0008) +[2024-12-28 14:49:43,944][100720] Fps is (10 sec: 23756.8, 60 sec: 24917.3, 300 sec: 25506.3). Total num frames: 105668608. Throughput: 0: 6099.7. Samples: 16417352. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:49:43,945][100720] Avg episode reward: [(0, '4.540')] +[2024-12-28 14:49:45,383][100934] Updated weights for policy 0, policy_version 25807 (0.0007) +[2024-12-28 14:49:47,172][100934] Updated weights for policy 0, policy_version 25817 (0.0007) +[2024-12-28 14:49:48,944][100720] Fps is (10 sec: 23347.1, 60 sec: 24644.3, 300 sec: 25450.7). Total num frames: 105783296. Throughput: 0: 6036.6. Samples: 16434562. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:49:48,945][100720] Avg episode reward: [(0, '4.499')] +[2024-12-28 14:49:49,041][100934] Updated weights for policy 0, policy_version 25827 (0.0010) +[2024-12-28 14:49:50,817][100934] Updated weights for policy 0, policy_version 25837 (0.0008) +[2024-12-28 14:49:52,522][100934] Updated weights for policy 0, policy_version 25847 (0.0007) +[2024-12-28 14:49:53,944][100720] Fps is (10 sec: 23757.1, 60 sec: 24439.5, 300 sec: 25423.0). Total num frames: 105906176. Throughput: 0: 5921.5. Samples: 16469378. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:49:53,945][100720] Avg episode reward: [(0, '4.339')] +[2024-12-28 14:49:53,949][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000025856_105906176.pth... +[2024-12-28 14:49:53,980][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000024378_99852288.pth +[2024-12-28 14:49:54,027][100934] Updated weights for policy 0, policy_version 25857 (0.0006) +[2024-12-28 14:49:55,601][100934] Updated weights for policy 0, policy_version 25867 (0.0007) +[2024-12-28 14:49:57,077][100934] Updated weights for policy 0, policy_version 25877 (0.0008) +[2024-12-28 14:49:58,581][100934] Updated weights for policy 0, policy_version 25887 (0.0006) +[2024-12-28 14:49:58,944][100720] Fps is (10 sec: 25804.8, 60 sec: 24507.8, 300 sec: 25436.9). Total num frames: 106041344. Throughput: 0: 6029.0. Samples: 16509948. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:49:58,945][100720] Avg episode reward: [(0, '4.520')] +[2024-12-28 14:50:00,070][100934] Updated weights for policy 0, policy_version 25897 (0.0006) +[2024-12-28 14:50:01,625][100934] Updated weights for policy 0, policy_version 25907 (0.0006) +[2024-12-28 14:50:03,077][100934] Updated weights for policy 0, policy_version 25917 (0.0006) +[2024-12-28 14:50:03,944][100720] Fps is (10 sec: 27033.3, 60 sec: 24507.7, 300 sec: 25436.9). Total num frames: 106176512. Throughput: 0: 6103.4. Samples: 16530266. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:50:03,945][100720] Avg episode reward: [(0, '4.436')] +[2024-12-28 14:50:04,573][100934] Updated weights for policy 0, policy_version 25927 (0.0007) +[2024-12-28 14:50:06,104][100934] Updated weights for policy 0, policy_version 25937 (0.0006) +[2024-12-28 14:50:07,614][100934] Updated weights for policy 0, policy_version 25947 (0.0006) +[2024-12-28 14:50:08,944][100720] Fps is (10 sec: 27033.4, 60 sec: 24507.7, 300 sec: 25450.7). Total num frames: 106311680. Throughput: 0: 6217.4. Samples: 16571236. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:50:08,945][100720] Avg episode reward: [(0, '4.417')] +[2024-12-28 14:50:09,202][100934] Updated weights for policy 0, policy_version 25957 (0.0008) +[2024-12-28 14:50:10,985][100934] Updated weights for policy 0, policy_version 25967 (0.0009) +[2024-12-28 14:50:12,750][100934] Updated weights for policy 0, policy_version 25977 (0.0008) +[2024-12-28 14:50:13,944][100720] Fps is (10 sec: 24985.7, 60 sec: 24507.8, 300 sec: 25395.2). Total num frames: 106426368. Throughput: 0: 6160.9. Samples: 16606602. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:50:13,945][100720] Avg episode reward: [(0, '4.467')] +[2024-12-28 14:50:14,522][100934] Updated weights for policy 0, policy_version 25987 (0.0007) +[2024-12-28 14:50:16,300][100934] Updated weights for policy 0, policy_version 25997 (0.0008) +[2024-12-28 14:50:18,118][100934] Updated weights for policy 0, policy_version 26007 (0.0008) +[2024-12-28 14:50:18,944][100720] Fps is (10 sec: 22937.7, 60 sec: 24507.9, 300 sec: 25339.7). Total num frames: 106541056. Throughput: 0: 6171.6. Samples: 16623778. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:50:18,945][100720] Avg episode reward: [(0, '4.397')] +[2024-12-28 14:50:19,977][100934] Updated weights for policy 0, policy_version 26017 (0.0007) +[2024-12-28 14:50:21,552][100934] Updated weights for policy 0, policy_version 26027 (0.0007) +[2024-12-28 14:50:23,016][100934] Updated weights for policy 0, policy_version 26037 (0.0006) +[2024-12-28 14:50:23,944][100720] Fps is (10 sec: 24575.9, 60 sec: 24644.3, 300 sec: 25339.7). Total num frames: 106672128. Throughput: 0: 6219.3. Samples: 16660458. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:50:23,945][100720] Avg episode reward: [(0, '4.635')] +[2024-12-28 14:50:24,523][100934] Updated weights for policy 0, policy_version 26047 (0.0007) +[2024-12-28 14:50:26,065][100934] Updated weights for policy 0, policy_version 26057 (0.0007) +[2024-12-28 14:50:27,529][100934] Updated weights for policy 0, policy_version 26067 (0.0006) +[2024-12-28 14:50:28,944][100720] Fps is (10 sec: 26624.1, 60 sec: 24849.1, 300 sec: 25353.5). Total num frames: 106807296. Throughput: 0: 6303.0. Samples: 16700984. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 14:50:28,945][100720] Avg episode reward: [(0, '4.466')] +[2024-12-28 14:50:29,067][100934] Updated weights for policy 0, policy_version 26077 (0.0007) +[2024-12-28 14:50:30,590][100934] Updated weights for policy 0, policy_version 26087 (0.0006) +[2024-12-28 14:50:32,100][100934] Updated weights for policy 0, policy_version 26097 (0.0007) +[2024-12-28 14:50:33,626][100934] Updated weights for policy 0, policy_version 26107 (0.0007) +[2024-12-28 14:50:33,944][100720] Fps is (10 sec: 27033.9, 60 sec: 25190.5, 300 sec: 25367.4). Total num frames: 106942464. Throughput: 0: 6373.8. Samples: 16721382. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 14:50:33,945][100720] Avg episode reward: [(0, '4.428')] +[2024-12-28 14:50:35,168][100934] Updated weights for policy 0, policy_version 26117 (0.0006) +[2024-12-28 14:50:36,705][100934] Updated weights for policy 0, policy_version 26127 (0.0008) +[2024-12-28 14:50:38,195][100934] Updated weights for policy 0, policy_version 26137 (0.0007) +[2024-12-28 14:50:38,944][100720] Fps is (10 sec: 27033.3, 60 sec: 25463.4, 300 sec: 25381.3). Total num frames: 107077632. Throughput: 0: 6494.2. Samples: 16761616. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 14:50:38,946][100720] Avg episode reward: [(0, '4.269')] +[2024-12-28 14:50:39,722][100934] Updated weights for policy 0, policy_version 26147 (0.0007) +[2024-12-28 14:50:41,277][100934] Updated weights for policy 0, policy_version 26157 (0.0007) +[2024-12-28 14:50:42,810][100934] Updated weights for policy 0, policy_version 26167 (0.0007) +[2024-12-28 14:50:43,944][100720] Fps is (10 sec: 26623.7, 60 sec: 25668.3, 300 sec: 25381.3). Total num frames: 107208704. Throughput: 0: 6486.6. Samples: 16801846. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2024-12-28 14:50:43,945][100720] Avg episode reward: [(0, '4.420')] +[2024-12-28 14:50:44,333][100934] Updated weights for policy 0, policy_version 26177 (0.0006) +[2024-12-28 14:50:45,853][100934] Updated weights for policy 0, policy_version 26187 (0.0006) +[2024-12-28 14:50:47,409][100934] Updated weights for policy 0, policy_version 26197 (0.0006) +[2024-12-28 14:50:48,914][100934] Updated weights for policy 0, policy_version 26207 (0.0008) +[2024-12-28 14:50:48,944][100720] Fps is (10 sec: 26624.3, 60 sec: 26009.6, 300 sec: 25395.2). Total num frames: 107343872. Throughput: 0: 6484.8. Samples: 16822080. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:50:48,945][100720] Avg episode reward: [(0, '4.456')] +[2024-12-28 14:50:50,512][100934] Updated weights for policy 0, policy_version 26217 (0.0007) +[2024-12-28 14:50:52,064][100934] Updated weights for policy 0, policy_version 26227 (0.0006) +[2024-12-28 14:50:53,613][100934] Updated weights for policy 0, policy_version 26237 (0.0006) +[2024-12-28 14:50:53,944][100720] Fps is (10 sec: 26624.2, 60 sec: 26146.1, 300 sec: 25395.2). Total num frames: 107474944. Throughput: 0: 6450.9. Samples: 16861524. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:50:53,945][100720] Avg episode reward: [(0, '4.302')] +[2024-12-28 14:50:55,126][100934] Updated weights for policy 0, policy_version 26247 (0.0006) +[2024-12-28 14:50:56,782][100934] Updated weights for policy 0, policy_version 26257 (0.0008) +[2024-12-28 14:50:58,537][100934] Updated weights for policy 0, policy_version 26267 (0.0008) +[2024-12-28 14:50:58,944][100720] Fps is (10 sec: 25395.3, 60 sec: 25941.3, 300 sec: 25381.3). Total num frames: 107597824. Throughput: 0: 6493.3. Samples: 16898802. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:50:58,945][100720] Avg episode reward: [(0, '4.394')] +[2024-12-28 14:51:00,387][100934] Updated weights for policy 0, policy_version 26277 (0.0010) +[2024-12-28 14:51:02,207][100934] Updated weights for policy 0, policy_version 26287 (0.0007) +[2024-12-28 14:51:03,944][100720] Fps is (10 sec: 23346.9, 60 sec: 25531.7, 300 sec: 25311.9). Total num frames: 107708416. Throughput: 0: 6485.7. Samples: 16915634. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:51:03,945][100720] Avg episode reward: [(0, '4.513')] +[2024-12-28 14:51:04,045][100934] Updated weights for policy 0, policy_version 26297 (0.0009) +[2024-12-28 14:51:05,855][100934] Updated weights for policy 0, policy_version 26307 (0.0008) +[2024-12-28 14:51:07,499][100934] Updated weights for policy 0, policy_version 26317 (0.0007) +[2024-12-28 14:51:08,944][100720] Fps is (10 sec: 23347.1, 60 sec: 25327.0, 300 sec: 25339.7). Total num frames: 107831296. Throughput: 0: 6452.6. Samples: 16950824. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:51:08,945][100720] Avg episode reward: [(0, '4.390')] +[2024-12-28 14:51:09,006][100934] Updated weights for policy 0, policy_version 26327 (0.0007) +[2024-12-28 14:51:10,590][100934] Updated weights for policy 0, policy_version 26337 (0.0006) +[2024-12-28 14:51:12,050][100934] Updated weights for policy 0, policy_version 26347 (0.0006) +[2024-12-28 14:51:13,556][100934] Updated weights for policy 0, policy_version 26357 (0.0006) +[2024-12-28 14:51:13,944][100720] Fps is (10 sec: 25805.1, 60 sec: 25668.3, 300 sec: 25409.1). Total num frames: 107966464. Throughput: 0: 6451.8. Samples: 16991316. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:51:13,945][100720] Avg episode reward: [(0, '4.439')] +[2024-12-28 14:51:15,071][100934] Updated weights for policy 0, policy_version 26367 (0.0006) +[2024-12-28 14:51:16,560][100934] Updated weights for policy 0, policy_version 26377 (0.0007) +[2024-12-28 14:51:18,090][100934] Updated weights for policy 0, policy_version 26387 (0.0007) +[2024-12-28 14:51:18,944][100720] Fps is (10 sec: 27033.4, 60 sec: 26009.6, 300 sec: 25450.7). Total num frames: 108101632. Throughput: 0: 6450.4. Samples: 17011650. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:51:18,945][100720] Avg episode reward: [(0, '4.394')] +[2024-12-28 14:51:19,648][100934] Updated weights for policy 0, policy_version 26397 (0.0008) +[2024-12-28 14:51:21,232][100934] Updated weights for policy 0, policy_version 26407 (0.0006) +[2024-12-28 14:51:22,817][100934] Updated weights for policy 0, policy_version 26417 (0.0007) +[2024-12-28 14:51:23,944][100720] Fps is (10 sec: 26623.8, 60 sec: 26009.6, 300 sec: 25506.3). Total num frames: 108232704. Throughput: 0: 6432.8. Samples: 17051094. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:51:23,945][100720] Avg episode reward: [(0, '4.374')] +[2024-12-28 14:51:24,345][100934] Updated weights for policy 0, policy_version 26427 (0.0007) +[2024-12-28 14:51:25,866][100934] Updated weights for policy 0, policy_version 26437 (0.0007) +[2024-12-28 14:51:27,391][100934] Updated weights for policy 0, policy_version 26447 (0.0006) +[2024-12-28 14:51:28,917][100934] Updated weights for policy 0, policy_version 26457 (0.0007) +[2024-12-28 14:51:28,944][100720] Fps is (10 sec: 26624.0, 60 sec: 26009.6, 300 sec: 25589.6). Total num frames: 108367872. Throughput: 0: 6429.1. Samples: 17091156. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:51:28,945][100720] Avg episode reward: [(0, '4.591')] +[2024-12-28 14:51:30,474][100934] Updated weights for policy 0, policy_version 26467 (0.0007) +[2024-12-28 14:51:32,030][100934] Updated weights for policy 0, policy_version 26477 (0.0007) +[2024-12-28 14:51:33,582][100934] Updated weights for policy 0, policy_version 26487 (0.0007) +[2024-12-28 14:51:33,944][100720] Fps is (10 sec: 26624.2, 60 sec: 25941.3, 300 sec: 25575.7). Total num frames: 108498944. Throughput: 0: 6420.0. Samples: 17110978. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:51:33,945][100720] Avg episode reward: [(0, '4.485')] +[2024-12-28 14:51:35,071][100934] Updated weights for policy 0, policy_version 26497 (0.0006) +[2024-12-28 14:51:36,607][100934] Updated weights for policy 0, policy_version 26507 (0.0008) +[2024-12-28 14:51:38,162][100934] Updated weights for policy 0, policy_version 26517 (0.0006) +[2024-12-28 14:51:38,944][100720] Fps is (10 sec: 26624.2, 60 sec: 25941.4, 300 sec: 25589.6). Total num frames: 108634112. Throughput: 0: 6434.0. Samples: 17151054. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:51:38,945][100720] Avg episode reward: [(0, '4.530')] +[2024-12-28 14:51:39,692][100934] Updated weights for policy 0, policy_version 26527 (0.0007) +[2024-12-28 14:51:41,215][100934] Updated weights for policy 0, policy_version 26537 (0.0007) +[2024-12-28 14:51:42,718][100934] Updated weights for policy 0, policy_version 26547 (0.0006) +[2024-12-28 14:51:43,944][100720] Fps is (10 sec: 26623.7, 60 sec: 25941.3, 300 sec: 25589.6). Total num frames: 108765184. Throughput: 0: 6501.3. Samples: 17191362. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:51:43,945][100720] Avg episode reward: [(0, '4.473')] +[2024-12-28 14:51:44,260][100934] Updated weights for policy 0, policy_version 26557 (0.0007) +[2024-12-28 14:51:45,808][100934] Updated weights for policy 0, policy_version 26567 (0.0007) +[2024-12-28 14:51:47,307][100934] Updated weights for policy 0, policy_version 26577 (0.0006) +[2024-12-28 14:51:48,812][100934] Updated weights for policy 0, policy_version 26587 (0.0007) +[2024-12-28 14:51:48,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25941.3, 300 sec: 25603.5). Total num frames: 108900352. Throughput: 0: 6574.3. Samples: 17211476. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:51:48,945][100720] Avg episode reward: [(0, '4.362')] +[2024-12-28 14:51:50,419][100934] Updated weights for policy 0, policy_version 26597 (0.0006) +[2024-12-28 14:51:51,967][100934] Updated weights for policy 0, policy_version 26607 (0.0009) +[2024-12-28 14:51:53,513][100934] Updated weights for policy 0, policy_version 26617 (0.0007) +[2024-12-28 14:51:53,944][100720] Fps is (10 sec: 26623.6, 60 sec: 25941.2, 300 sec: 25603.4). Total num frames: 109031424. Throughput: 0: 6670.0. Samples: 17250974. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:51:53,946][100720] Avg episode reward: [(0, '4.365')] +[2024-12-28 14:51:53,975][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000026620_109035520.pth... +[2024-12-28 14:51:54,014][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000025121_102895616.pth +[2024-12-28 14:51:55,071][100934] Updated weights for policy 0, policy_version 26627 (0.0007) +[2024-12-28 14:51:56,619][100934] Updated weights for policy 0, policy_version 26637 (0.0006) +[2024-12-28 14:51:58,127][100934] Updated weights for policy 0, policy_version 26647 (0.0006) +[2024-12-28 14:51:58,944][100720] Fps is (10 sec: 26624.0, 60 sec: 26146.1, 300 sec: 25686.8). Total num frames: 109166592. Throughput: 0: 6654.4. Samples: 17290764. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:51:58,945][100720] Avg episode reward: [(0, '4.327')] +[2024-12-28 14:51:59,697][100934] Updated weights for policy 0, policy_version 26657 (0.0006) +[2024-12-28 14:52:01,297][100934] Updated weights for policy 0, policy_version 26667 (0.0007) +[2024-12-28 14:52:02,855][100934] Updated weights for policy 0, policy_version 26677 (0.0006) +[2024-12-28 14:52:03,944][100720] Fps is (10 sec: 26624.8, 60 sec: 26487.5, 300 sec: 25756.2). Total num frames: 109297664. Throughput: 0: 6639.3. Samples: 17310418. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:52:03,945][100720] Avg episode reward: [(0, '4.389')] +[2024-12-28 14:52:04,372][100934] Updated weights for policy 0, policy_version 26687 (0.0006) +[2024-12-28 14:52:05,939][100934] Updated weights for policy 0, policy_version 26697 (0.0007) +[2024-12-28 14:52:07,494][100934] Updated weights for policy 0, policy_version 26707 (0.0008) +[2024-12-28 14:52:08,944][100720] Fps is (10 sec: 26214.5, 60 sec: 26624.0, 300 sec: 25742.3). Total num frames: 109428736. Throughput: 0: 6644.5. Samples: 17350098. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:52:08,945][100720] Avg episode reward: [(0, '4.564')] +[2024-12-28 14:52:09,010][100934] Updated weights for policy 0, policy_version 26717 (0.0007) +[2024-12-28 14:52:10,550][100934] Updated weights for policy 0, policy_version 26727 (0.0006) +[2024-12-28 14:52:12,060][100934] Updated weights for policy 0, policy_version 26737 (0.0006) +[2024-12-28 14:52:13,591][100934] Updated weights for policy 0, policy_version 26747 (0.0006) +[2024-12-28 14:52:13,944][100720] Fps is (10 sec: 26623.7, 60 sec: 26624.0, 300 sec: 25742.3). Total num frames: 109563904. Throughput: 0: 6647.6. Samples: 17390296. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:52:13,945][100720] Avg episode reward: [(0, '4.443')] +[2024-12-28 14:52:15,163][100934] Updated weights for policy 0, policy_version 26757 (0.0006) +[2024-12-28 14:52:16,704][100934] Updated weights for policy 0, policy_version 26767 (0.0006) +[2024-12-28 14:52:18,399][100934] Updated weights for policy 0, policy_version 26777 (0.0007) +[2024-12-28 14:52:18,944][100720] Fps is (10 sec: 26214.3, 60 sec: 26487.5, 300 sec: 25728.4). Total num frames: 109690880. Throughput: 0: 6647.4. Samples: 17410112. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:52:18,945][100720] Avg episode reward: [(0, '4.271')] +[2024-12-28 14:52:20,096][100934] Updated weights for policy 0, policy_version 26787 (0.0006) +[2024-12-28 14:52:21,699][100934] Updated weights for policy 0, policy_version 26797 (0.0007) +[2024-12-28 14:52:23,497][100934] Updated weights for policy 0, policy_version 26807 (0.0009) +[2024-12-28 14:52:23,944][100720] Fps is (10 sec: 24576.0, 60 sec: 26282.7, 300 sec: 25686.8). Total num frames: 109809664. Throughput: 0: 6563.1. Samples: 17446392. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:52:23,945][100720] Avg episode reward: [(0, '4.481')] +[2024-12-28 14:52:25,298][100934] Updated weights for policy 0, policy_version 26817 (0.0008) +[2024-12-28 14:52:27,145][100934] Updated weights for policy 0, policy_version 26827 (0.0008) +[2024-12-28 14:52:28,944][100720] Fps is (10 sec: 22937.6, 60 sec: 25873.1, 300 sec: 25603.5). Total num frames: 109920256. Throughput: 0: 6415.5. Samples: 17480058. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:52:28,945][100720] Avg episode reward: [(0, '4.513')] +[2024-12-28 14:52:29,018][100934] Updated weights for policy 0, policy_version 26837 (0.0008) +[2024-12-28 14:52:30,899][100934] Updated weights for policy 0, policy_version 26847 (0.0008) +[2024-12-28 14:52:32,681][100934] Updated weights for policy 0, policy_version 26857 (0.0007) +[2024-12-28 14:52:33,944][100720] Fps is (10 sec: 22937.8, 60 sec: 25668.3, 300 sec: 25561.8). Total num frames: 110039040. Throughput: 0: 6330.0. Samples: 17496326. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:52:33,945][100720] Avg episode reward: [(0, '4.483')] +[2024-12-28 14:52:34,236][100934] Updated weights for policy 0, policy_version 26867 (0.0007) +[2024-12-28 14:52:35,790][100934] Updated weights for policy 0, policy_version 26877 (0.0007) +[2024-12-28 14:52:37,317][100934] Updated weights for policy 0, policy_version 26887 (0.0007) +[2024-12-28 14:52:38,944][100720] Fps is (10 sec: 24576.0, 60 sec: 25531.7, 300 sec: 25547.9). Total num frames: 110166016. Throughput: 0: 6314.7. Samples: 17535136. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:52:38,945][100720] Avg episode reward: [(0, '4.415')] +[2024-12-28 14:52:39,109][100934] Updated weights for policy 0, policy_version 26897 (0.0008) +[2024-12-28 14:52:40,972][100934] Updated weights for policy 0, policy_version 26907 (0.0008) +[2024-12-28 14:52:42,845][100934] Updated weights for policy 0, policy_version 26917 (0.0009) +[2024-12-28 14:52:43,944][100720] Fps is (10 sec: 23347.1, 60 sec: 25122.2, 300 sec: 25464.6). Total num frames: 110272512. Throughput: 0: 6168.4. Samples: 17568340. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:52:43,945][100720] Avg episode reward: [(0, '4.279')] +[2024-12-28 14:52:44,695][100934] Updated weights for policy 0, policy_version 26927 (0.0008) +[2024-12-28 14:52:46,550][100934] Updated weights for policy 0, policy_version 26937 (0.0008) +[2024-12-28 14:52:48,317][100934] Updated weights for policy 0, policy_version 26947 (0.0007) +[2024-12-28 14:52:48,944][100720] Fps is (10 sec: 22118.1, 60 sec: 24780.7, 300 sec: 25395.2). Total num frames: 110387200. Throughput: 0: 6100.8. Samples: 17584956. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:52:48,945][100720] Avg episode reward: [(0, '4.213')] +[2024-12-28 14:52:49,969][100934] Updated weights for policy 0, policy_version 26957 (0.0008) +[2024-12-28 14:52:51,505][100934] Updated weights for policy 0, policy_version 26967 (0.0007) +[2024-12-28 14:52:53,107][100934] Updated weights for policy 0, policy_version 26977 (0.0007) +[2024-12-28 14:52:53,944][100720] Fps is (10 sec: 24575.9, 60 sec: 24780.9, 300 sec: 25395.2). Total num frames: 110518272. Throughput: 0: 6054.8. Samples: 17622566. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:52:53,945][100720] Avg episode reward: [(0, '4.178')] +[2024-12-28 14:52:54,637][100934] Updated weights for policy 0, policy_version 26987 (0.0007) +[2024-12-28 14:52:56,180][100934] Updated weights for policy 0, policy_version 26997 (0.0007) +[2024-12-28 14:52:57,698][100934] Updated weights for policy 0, policy_version 27007 (0.0007) +[2024-12-28 14:52:58,944][100720] Fps is (10 sec: 26624.5, 60 sec: 24780.8, 300 sec: 25409.1). Total num frames: 110653440. Throughput: 0: 6051.2. Samples: 17662598. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:52:58,945][100720] Avg episode reward: [(0, '4.391')] +[2024-12-28 14:52:59,249][100934] Updated weights for policy 0, policy_version 27017 (0.0008) +[2024-12-28 14:53:00,780][100934] Updated weights for policy 0, policy_version 27027 (0.0007) +[2024-12-28 14:53:02,343][100934] Updated weights for policy 0, policy_version 27037 (0.0007) +[2024-12-28 14:53:03,889][100934] Updated weights for policy 0, policy_version 27047 (0.0006) +[2024-12-28 14:53:03,944][100720] Fps is (10 sec: 26624.2, 60 sec: 24780.8, 300 sec: 25409.1). Total num frames: 110784512. Throughput: 0: 6050.7. Samples: 17682392. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:53:03,945][100720] Avg episode reward: [(0, '4.358')] +[2024-12-28 14:53:05,393][100934] Updated weights for policy 0, policy_version 27057 (0.0008) +[2024-12-28 14:53:06,922][100934] Updated weights for policy 0, policy_version 27067 (0.0006) +[2024-12-28 14:53:08,488][100934] Updated weights for policy 0, policy_version 27077 (0.0007) +[2024-12-28 14:53:08,944][100720] Fps is (10 sec: 26624.0, 60 sec: 24849.1, 300 sec: 25409.1). Total num frames: 110919680. Throughput: 0: 6133.2. Samples: 17722384. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:53:08,945][100720] Avg episode reward: [(0, '4.482')] +[2024-12-28 14:53:10,001][100934] Updated weights for policy 0, policy_version 27087 (0.0006) +[2024-12-28 14:53:11,561][100934] Updated weights for policy 0, policy_version 27097 (0.0007) +[2024-12-28 14:53:13,103][100934] Updated weights for policy 0, policy_version 27107 (0.0007) +[2024-12-28 14:53:13,944][100720] Fps is (10 sec: 26624.0, 60 sec: 24780.8, 300 sec: 25409.1). Total num frames: 111050752. Throughput: 0: 6269.7. Samples: 17762194. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:53:13,945][100720] Avg episode reward: [(0, '4.381')] +[2024-12-28 14:53:14,633][100934] Updated weights for policy 0, policy_version 27117 (0.0006) +[2024-12-28 14:53:16,198][100934] Updated weights for policy 0, policy_version 27127 (0.0006) +[2024-12-28 14:53:17,769][100934] Updated weights for policy 0, policy_version 27137 (0.0007) +[2024-12-28 14:53:18,944][100720] Fps is (10 sec: 26214.1, 60 sec: 24849.0, 300 sec: 25395.2). Total num frames: 111181824. Throughput: 0: 6348.0. Samples: 17781988. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:53:18,945][100720] Avg episode reward: [(0, '4.273')] +[2024-12-28 14:53:19,336][100934] Updated weights for policy 0, policy_version 27147 (0.0008) +[2024-12-28 14:53:20,943][100934] Updated weights for policy 0, policy_version 27157 (0.0007) +[2024-12-28 14:53:22,511][100934] Updated weights for policy 0, policy_version 27167 (0.0007) +[2024-12-28 14:53:23,944][100720] Fps is (10 sec: 26214.4, 60 sec: 25053.9, 300 sec: 25395.2). Total num frames: 111312896. Throughput: 0: 6350.0. Samples: 17820884. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:53:23,945][100720] Avg episode reward: [(0, '4.222')] +[2024-12-28 14:53:24,053][100934] Updated weights for policy 0, policy_version 27177 (0.0006) +[2024-12-28 14:53:25,573][100934] Updated weights for policy 0, policy_version 27187 (0.0007) +[2024-12-28 14:53:27,111][100934] Updated weights for policy 0, policy_version 27197 (0.0007) +[2024-12-28 14:53:28,694][100934] Updated weights for policy 0, policy_version 27207 (0.0006) +[2024-12-28 14:53:28,944][100720] Fps is (10 sec: 26214.4, 60 sec: 25395.2, 300 sec: 25423.0). Total num frames: 111443968. Throughput: 0: 6497.8. Samples: 17860742. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:53:28,945][100720] Avg episode reward: [(0, '4.492')] +[2024-12-28 14:53:30,255][100934] Updated weights for policy 0, policy_version 27217 (0.0007) +[2024-12-28 14:53:31,826][100934] Updated weights for policy 0, policy_version 27227 (0.0006) +[2024-12-28 14:53:33,382][100934] Updated weights for policy 0, policy_version 27237 (0.0006) +[2024-12-28 14:53:33,944][100720] Fps is (10 sec: 26214.5, 60 sec: 25600.0, 300 sec: 25492.4). Total num frames: 111575040. Throughput: 0: 6566.4. Samples: 17880442. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:53:33,945][100720] Avg episode reward: [(0, '4.198')] +[2024-12-28 14:53:34,966][100934] Updated weights for policy 0, policy_version 27247 (0.0007) +[2024-12-28 14:53:36,493][100934] Updated weights for policy 0, policy_version 27257 (0.0007) +[2024-12-28 14:53:38,035][100934] Updated weights for policy 0, policy_version 27267 (0.0006) +[2024-12-28 14:53:38,944][100720] Fps is (10 sec: 26214.6, 60 sec: 25668.3, 300 sec: 25534.0). Total num frames: 111706112. Throughput: 0: 6612.1. Samples: 17920110. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:53:38,945][100720] Avg episode reward: [(0, '4.397')] +[2024-12-28 14:53:39,715][100934] Updated weights for policy 0, policy_version 27277 (0.0008) +[2024-12-28 14:53:41,521][100934] Updated weights for policy 0, policy_version 27287 (0.0008) +[2024-12-28 14:53:43,366][100934] Updated weights for policy 0, policy_version 27297 (0.0009) +[2024-12-28 14:53:43,944][100720] Fps is (10 sec: 24575.9, 60 sec: 25804.8, 300 sec: 25478.5). Total num frames: 111820800. Throughput: 0: 6490.3. Samples: 17954664. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:53:43,945][100720] Avg episode reward: [(0, '4.465')] +[2024-12-28 14:53:45,202][100934] Updated weights for policy 0, policy_version 27307 (0.0009) +[2024-12-28 14:53:46,992][100934] Updated weights for policy 0, policy_version 27317 (0.0007) +[2024-12-28 14:53:48,776][100934] Updated weights for policy 0, policy_version 27327 (0.0008) +[2024-12-28 14:53:48,944][100720] Fps is (10 sec: 22528.0, 60 sec: 25736.6, 300 sec: 25395.2). Total num frames: 111931392. Throughput: 0: 6426.0. Samples: 17971560. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:53:48,945][100720] Avg episode reward: [(0, '4.535')] +[2024-12-28 14:53:50,445][100934] Updated weights for policy 0, policy_version 27337 (0.0007) +[2024-12-28 14:53:51,991][100934] Updated weights for policy 0, policy_version 27347 (0.0007) +[2024-12-28 14:53:53,549][100934] Updated weights for policy 0, policy_version 27357 (0.0006) +[2024-12-28 14:53:53,944][100720] Fps is (10 sec: 24166.1, 60 sec: 25736.5, 300 sec: 25395.2). Total num frames: 112062464. Throughput: 0: 6365.3. Samples: 18008822. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:53:53,945][100720] Avg episode reward: [(0, '4.748')] +[2024-12-28 14:53:53,950][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000027359_112062464.pth... +[2024-12-28 14:53:53,981][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000025856_105906176.pth +[2024-12-28 14:53:55,082][100934] Updated weights for policy 0, policy_version 27367 (0.0006) +[2024-12-28 14:53:56,634][100934] Updated weights for policy 0, policy_version 27377 (0.0007) +[2024-12-28 14:53:58,165][100934] Updated weights for policy 0, policy_version 27387 (0.0007) +[2024-12-28 14:53:58,944][100720] Fps is (10 sec: 26624.1, 60 sec: 25736.5, 300 sec: 25395.2). Total num frames: 112197632. Throughput: 0: 6364.2. Samples: 18048584. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:53:58,945][100720] Avg episode reward: [(0, '4.624')] +[2024-12-28 14:53:59,716][100934] Updated weights for policy 0, policy_version 27397 (0.0007) +[2024-12-28 14:54:01,215][100934] Updated weights for policy 0, policy_version 27407 (0.0006) +[2024-12-28 14:54:02,746][100934] Updated weights for policy 0, policy_version 27417 (0.0006) +[2024-12-28 14:54:03,944][100720] Fps is (10 sec: 26624.4, 60 sec: 25736.5, 300 sec: 25381.3). Total num frames: 112328704. Throughput: 0: 6374.1. Samples: 18068820. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:54:03,945][100720] Avg episode reward: [(0, '4.521')] +[2024-12-28 14:54:04,327][100934] Updated weights for policy 0, policy_version 27427 (0.0007) +[2024-12-28 14:54:05,881][100934] Updated weights for policy 0, policy_version 27437 (0.0007) +[2024-12-28 14:54:07,404][100934] Updated weights for policy 0, policy_version 27447 (0.0006) +[2024-12-28 14:54:08,944][100720] Fps is (10 sec: 26214.4, 60 sec: 25668.3, 300 sec: 25436.9). Total num frames: 112459776. Throughput: 0: 6391.4. Samples: 18108496. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:54:08,946][100720] Avg episode reward: [(0, '4.425')] +[2024-12-28 14:54:08,956][100934] Updated weights for policy 0, policy_version 27457 (0.0006) +[2024-12-28 14:54:10,467][100934] Updated weights for policy 0, policy_version 27467 (0.0006) +[2024-12-28 14:54:12,005][100934] Updated weights for policy 0, policy_version 27477 (0.0007) +[2024-12-28 14:54:13,579][100934] Updated weights for policy 0, policy_version 27487 (0.0008) +[2024-12-28 14:54:13,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25736.5, 300 sec: 25506.3). Total num frames: 112594944. Throughput: 0: 6386.9. Samples: 18148150. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:54:13,945][100720] Avg episode reward: [(0, '4.533')] +[2024-12-28 14:54:15,132][100934] Updated weights for policy 0, policy_version 27497 (0.0007) +[2024-12-28 14:54:16,677][100934] Updated weights for policy 0, policy_version 27507 (0.0006) +[2024-12-28 14:54:18,227][100934] Updated weights for policy 0, policy_version 27517 (0.0006) +[2024-12-28 14:54:18,944][100720] Fps is (10 sec: 26624.1, 60 sec: 25736.6, 300 sec: 25534.1). Total num frames: 112726016. Throughput: 0: 6392.4. Samples: 18168100. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:54:18,945][100720] Avg episode reward: [(0, '4.295')] +[2024-12-28 14:54:19,818][100934] Updated weights for policy 0, policy_version 27527 (0.0008) +[2024-12-28 14:54:21,371][100934] Updated weights for policy 0, policy_version 27537 (0.0007) +[2024-12-28 14:54:22,882][100934] Updated weights for policy 0, policy_version 27547 (0.0006) +[2024-12-28 14:54:23,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25804.8, 300 sec: 25575.7). Total num frames: 112861184. Throughput: 0: 6387.0. Samples: 18207526. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:54:23,945][100720] Avg episode reward: [(0, '4.305')] +[2024-12-28 14:54:24,418][100934] Updated weights for policy 0, policy_version 27557 (0.0006) +[2024-12-28 14:54:25,999][100934] Updated weights for policy 0, policy_version 27567 (0.0007) +[2024-12-28 14:54:27,565][100934] Updated weights for policy 0, policy_version 27577 (0.0006) +[2024-12-28 14:54:28,944][100720] Fps is (10 sec: 26214.4, 60 sec: 25736.6, 300 sec: 25617.4). Total num frames: 112988160. Throughput: 0: 6502.4. Samples: 18247270. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:54:28,945][100720] Avg episode reward: [(0, '4.578')] +[2024-12-28 14:54:29,120][100934] Updated weights for policy 0, policy_version 27587 (0.0007) +[2024-12-28 14:54:30,675][100934] Updated weights for policy 0, policy_version 27597 (0.0006) +[2024-12-28 14:54:32,222][100934] Updated weights for policy 0, policy_version 27607 (0.0006) +[2024-12-28 14:54:33,822][100934] Updated weights for policy 0, policy_version 27617 (0.0007) +[2024-12-28 14:54:33,944][100720] Fps is (10 sec: 25804.6, 60 sec: 25736.5, 300 sec: 25659.0). 
Total num frames: 113119232. Throughput: 0: 6562.3. Samples: 18266862. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:54:33,945][100720] Avg episode reward: [(0, '4.614')] +[2024-12-28 14:54:35,730][100934] Updated weights for policy 0, policy_version 27627 (0.0009) +[2024-12-28 14:54:37,551][100934] Updated weights for policy 0, policy_version 27637 (0.0008) +[2024-12-28 14:54:38,944][100720] Fps is (10 sec: 24165.8, 60 sec: 25395.1, 300 sec: 25631.2). Total num frames: 113229824. Throughput: 0: 6510.8. Samples: 18301808. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:54:38,946][100720] Avg episode reward: [(0, '4.521')] +[2024-12-28 14:54:39,364][100934] Updated weights for policy 0, policy_version 27647 (0.0008) +[2024-12-28 14:54:41,195][100934] Updated weights for policy 0, policy_version 27657 (0.0008) +[2024-12-28 14:54:43,056][100934] Updated weights for policy 0, policy_version 27667 (0.0009) +[2024-12-28 14:54:43,944][100720] Fps is (10 sec: 22118.5, 60 sec: 25326.9, 300 sec: 25617.4). Total num frames: 113340416. Throughput: 0: 6368.8. Samples: 18335180. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:54:43,947][100720] Avg episode reward: [(0, '4.415')] +[2024-12-28 14:54:44,866][100934] Updated weights for policy 0, policy_version 27677 (0.0007) +[2024-12-28 14:54:46,382][100934] Updated weights for policy 0, policy_version 27687 (0.0006) +[2024-12-28 14:54:47,905][100934] Updated weights for policy 0, policy_version 27697 (0.0006) +[2024-12-28 14:54:48,944][100720] Fps is (10 sec: 24166.8, 60 sec: 25668.2, 300 sec: 25645.1). Total num frames: 113471488. Throughput: 0: 6345.7. Samples: 18354376. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:54:48,945][100720] Avg episode reward: [(0, '4.381')] +[2024-12-28 14:54:49,440][100934] Updated weights for policy 0, policy_version 27707 (0.0007) +[2024-12-28 14:54:51,004][100934] Updated weights for policy 0, policy_version 27717 (0.0008) +[2024-12-28 14:54:52,547][100934] Updated weights for policy 0, policy_version 27727 (0.0006) +[2024-12-28 14:54:53,944][100720] Fps is (10 sec: 26214.5, 60 sec: 25668.3, 300 sec: 25631.2). Total num frames: 113602560. Throughput: 0: 6348.7. Samples: 18394186. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:54:53,945][100720] Avg episode reward: [(0, '4.426')] +[2024-12-28 14:54:54,112][100934] Updated weights for policy 0, policy_version 27737 (0.0007) +[2024-12-28 14:54:55,701][100934] Updated weights for policy 0, policy_version 27747 (0.0008) +[2024-12-28 14:54:57,227][100934] Updated weights for policy 0, policy_version 27757 (0.0006) +[2024-12-28 14:54:58,745][100934] Updated weights for policy 0, policy_version 27767 (0.0006) +[2024-12-28 14:54:58,944][100720] Fps is (10 sec: 26624.3, 60 sec: 25668.3, 300 sec: 25631.2). Total num frames: 113737728. Throughput: 0: 6350.8. Samples: 18433938. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:54:58,945][100720] Avg episode reward: [(0, '4.595')] +[2024-12-28 14:55:00,224][100934] Updated weights for policy 0, policy_version 27777 (0.0006) +[2024-12-28 14:55:01,760][100934] Updated weights for policy 0, policy_version 27787 (0.0007) +[2024-12-28 14:55:03,331][100934] Updated weights for policy 0, policy_version 27797 (0.0008) +[2024-12-28 14:55:03,944][100720] Fps is (10 sec: 27033.4, 60 sec: 25736.5, 300 sec: 25631.2). Total num frames: 113872896. Throughput: 0: 6356.3. Samples: 18454132. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:55:03,945][100720] Avg episode reward: [(0, '4.454')] +[2024-12-28 14:55:04,847][100934] Updated weights for policy 0, policy_version 27807 (0.0007) +[2024-12-28 14:55:06,409][100934] Updated weights for policy 0, policy_version 27817 (0.0007) +[2024-12-28 14:55:07,917][100934] Updated weights for policy 0, policy_version 27827 (0.0006) +[2024-12-28 14:55:08,944][100720] Fps is (10 sec: 26623.5, 60 sec: 25736.4, 300 sec: 25686.8). Total num frames: 114003968. Throughput: 0: 6371.7. Samples: 18494256. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:55:08,945][100720] Avg episode reward: [(0, '4.447')] +[2024-12-28 14:55:09,518][100934] Updated weights for policy 0, policy_version 27837 (0.0007) +[2024-12-28 14:55:11,327][100934] Updated weights for policy 0, policy_version 27847 (0.0007) +[2024-12-28 14:55:13,086][100934] Updated weights for policy 0, policy_version 27857 (0.0008) +[2024-12-28 14:55:13,944][100720] Fps is (10 sec: 24576.2, 60 sec: 25395.2, 300 sec: 25686.8). Total num frames: 114118656. Throughput: 0: 6272.9. Samples: 18529552. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:55:13,945][100720] Avg episode reward: [(0, '4.334')] +[2024-12-28 14:55:14,928][100934] Updated weights for policy 0, policy_version 27867 (0.0008) +[2024-12-28 14:55:16,826][100934] Updated weights for policy 0, policy_version 27877 (0.0008) +[2024-12-28 14:55:18,693][100934] Updated weights for policy 0, policy_version 27887 (0.0008) +[2024-12-28 14:55:18,944][100720] Fps is (10 sec: 22528.2, 60 sec: 25053.8, 300 sec: 25617.4). Total num frames: 114229248. Throughput: 0: 6200.1. Samples: 18545868. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:55:18,945][100720] Avg episode reward: [(0, '4.470')] +[2024-12-28 14:55:20,455][100934] Updated weights for policy 0, policy_version 27897 (0.0007) +[2024-12-28 14:55:22,025][100934] Updated weights for policy 0, policy_version 27907 (0.0007) +[2024-12-28 14:55:23,551][100934] Updated weights for policy 0, policy_version 27917 (0.0007) +[2024-12-28 14:55:23,944][100720] Fps is (10 sec: 23756.9, 60 sec: 24917.3, 300 sec: 25589.6). Total num frames: 114356224. Throughput: 0: 6230.3. Samples: 18582170. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:55:23,945][100720] Avg episode reward: [(0, '4.448')] +[2024-12-28 14:55:25,079][100934] Updated weights for policy 0, policy_version 27927 (0.0007) +[2024-12-28 14:55:26,857][100934] Updated weights for policy 0, policy_version 27937 (0.0008) +[2024-12-28 14:55:28,631][100934] Updated weights for policy 0, policy_version 27947 (0.0007) +[2024-12-28 14:55:28,944][100720] Fps is (10 sec: 24576.2, 60 sec: 24780.8, 300 sec: 25534.0). Total num frames: 114475008. Throughput: 0: 6301.6. Samples: 18618752. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:55:28,945][100720] Avg episode reward: [(0, '4.344')] +[2024-12-28 14:55:30,473][100934] Updated weights for policy 0, policy_version 27957 (0.0007) +[2024-12-28 14:55:32,336][100934] Updated weights for policy 0, policy_version 27967 (0.0008) +[2024-12-28 14:55:33,944][100720] Fps is (10 sec: 22937.5, 60 sec: 24439.5, 300 sec: 25450.7). Total num frames: 114585600. Throughput: 0: 6241.0. Samples: 18635220. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:55:33,945][100720] Avg episode reward: [(0, '4.442')] +[2024-12-28 14:55:34,226][100934] Updated weights for policy 0, policy_version 27977 (0.0009) +[2024-12-28 14:55:36,123][100934] Updated weights for policy 0, policy_version 27987 (0.0008) +[2024-12-28 14:55:37,672][100934] Updated weights for policy 0, policy_version 27997 (0.0007) +[2024-12-28 14:55:38,944][100720] Fps is (10 sec: 23347.2, 60 sec: 24644.4, 300 sec: 25423.0). Total num frames: 114708480. Throughput: 0: 6126.5. Samples: 18669878. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:55:38,945][100720] Avg episode reward: [(0, '4.545')] +[2024-12-28 14:55:39,210][100934] Updated weights for policy 0, policy_version 28007 (0.0007) +[2024-12-28 14:55:40,721][100934] Updated weights for policy 0, policy_version 28017 (0.0007) +[2024-12-28 14:55:42,210][100934] Updated weights for policy 0, policy_version 28027 (0.0007) +[2024-12-28 14:55:43,725][100934] Updated weights for policy 0, policy_version 28037 (0.0006) +[2024-12-28 14:55:43,944][100720] Fps is (10 sec: 25804.9, 60 sec: 25053.9, 300 sec: 25423.0). Total num frames: 114843648. Throughput: 0: 6143.9. Samples: 18710414. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:55:43,945][100720] Avg episode reward: [(0, '4.459')] +[2024-12-28 14:55:45,253][100934] Updated weights for policy 0, policy_version 28047 (0.0006) +[2024-12-28 14:55:46,843][100934] Updated weights for policy 0, policy_version 28057 (0.0007) +[2024-12-28 14:55:48,365][100934] Updated weights for policy 0, policy_version 28067 (0.0007) +[2024-12-28 14:55:48,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25053.9, 300 sec: 25423.0). Total num frames: 114974720. Throughput: 0: 6136.3. Samples: 18730264. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2024-12-28 14:55:48,945][100720] Avg episode reward: [(0, '4.389')] +[2024-12-28 14:55:49,946][100934] Updated weights for policy 0, policy_version 28077 (0.0006) +[2024-12-28 14:55:51,474][100934] Updated weights for policy 0, policy_version 28087 (0.0006) +[2024-12-28 14:55:52,999][100934] Updated weights for policy 0, policy_version 28097 (0.0007) +[2024-12-28 14:55:53,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25122.1, 300 sec: 25464.6). Total num frames: 115109888. Throughput: 0: 6129.6. Samples: 18770086. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:55:53,945][100720] Avg episode reward: [(0, '4.644')] +[2024-12-28 14:55:53,949][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000028103_115109888.pth... +[2024-12-28 14:55:53,979][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000026620_109035520.pth +[2024-12-28 14:55:54,568][100934] Updated weights for policy 0, policy_version 28107 (0.0008) +[2024-12-28 14:55:56,113][100934] Updated weights for policy 0, policy_version 28117 (0.0007) +[2024-12-28 14:55:57,621][100934] Updated weights for policy 0, policy_version 28127 (0.0007) +[2024-12-28 14:55:58,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25053.9, 300 sec: 25534.1). Total num frames: 115240960. Throughput: 0: 6230.1. Samples: 18809906. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:55:58,945][100720] Avg episode reward: [(0, '4.366')] +[2024-12-28 14:55:59,153][100934] Updated weights for policy 0, policy_version 28137 (0.0006) +[2024-12-28 14:56:00,692][100934] Updated weights for policy 0, policy_version 28147 (0.0006) +[2024-12-28 14:56:02,260][100934] Updated weights for policy 0, policy_version 28157 (0.0008) +[2024-12-28 14:56:03,778][100934] Updated weights for policy 0, policy_version 28167 (0.0006) +[2024-12-28 14:56:03,944][100720] Fps is (10 sec: 26624.1, 60 sec: 25053.9, 300 sec: 25575.7). Total num frames: 115376128. Throughput: 0: 6310.7. Samples: 18829850. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:56:03,945][100720] Avg episode reward: [(0, '4.291')] +[2024-12-28 14:56:05,310][100934] Updated weights for policy 0, policy_version 28177 (0.0006) +[2024-12-28 14:56:06,820][100934] Updated weights for policy 0, policy_version 28187 (0.0006) +[2024-12-28 14:56:08,347][100934] Updated weights for policy 0, policy_version 28197 (0.0006) +[2024-12-28 14:56:08,944][100720] Fps is (10 sec: 26624.2, 60 sec: 25054.0, 300 sec: 25561.8). Total num frames: 115507200. Throughput: 0: 6400.7. Samples: 18870202. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:56:08,945][100720] Avg episode reward: [(0, '4.349')] +[2024-12-28 14:56:09,909][100934] Updated weights for policy 0, policy_version 28207 (0.0006) +[2024-12-28 14:56:11,410][100934] Updated weights for policy 0, policy_version 28217 (0.0006) +[2024-12-28 14:56:12,935][100934] Updated weights for policy 0, policy_version 28227 (0.0006) +[2024-12-28 14:56:13,944][100720] Fps is (10 sec: 26623.9, 60 sec: 25395.2, 300 sec: 25561.8). Total num frames: 115642368. Throughput: 0: 6483.4. Samples: 18910504. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:56:13,945][100720] Avg episode reward: [(0, '4.336')] +[2024-12-28 14:56:14,470][100934] Updated weights for policy 0, policy_version 28237 (0.0006) +[2024-12-28 14:56:16,019][100934] Updated weights for policy 0, policy_version 28247 (0.0007) +[2024-12-28 14:56:17,538][100934] Updated weights for policy 0, policy_version 28257 (0.0007) +[2024-12-28 14:56:18,944][100720] Fps is (10 sec: 27033.4, 60 sec: 25804.8, 300 sec: 25575.7). Total num frames: 115777536. Throughput: 0: 6556.6. Samples: 18930268. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:56:18,945][100720] Avg episode reward: [(0, '4.649')] +[2024-12-28 14:56:19,054][100934] Updated weights for policy 0, policy_version 28267 (0.0007) +[2024-12-28 14:56:20,630][100934] Updated weights for policy 0, policy_version 28277 (0.0006) +[2024-12-28 14:56:22,190][100934] Updated weights for policy 0, policy_version 28287 (0.0007) +[2024-12-28 14:56:23,729][100934] Updated weights for policy 0, policy_version 28297 (0.0007) +[2024-12-28 14:56:23,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25873.1, 300 sec: 25561.8). Total num frames: 115908608. Throughput: 0: 6667.5. Samples: 18969916. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:56:23,945][100720] Avg episode reward: [(0, '4.622')] +[2024-12-28 14:56:25,259][100934] Updated weights for policy 0, policy_version 28307 (0.0006) +[2024-12-28 14:56:26,803][100934] Updated weights for policy 0, policy_version 28317 (0.0007) +[2024-12-28 14:56:28,366][100934] Updated weights for policy 0, policy_version 28327 (0.0007) +[2024-12-28 14:56:28,944][100720] Fps is (10 sec: 26214.4, 60 sec: 26077.9, 300 sec: 25561.8). 
Total num frames: 116039680. Throughput: 0: 6651.8. Samples: 19009744. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:56:28,945][100720] Avg episode reward: [(0, '4.454')] +[2024-12-28 14:56:29,906][100934] Updated weights for policy 0, policy_version 28337 (0.0007) +[2024-12-28 14:56:31,440][100934] Updated weights for policy 0, policy_version 28347 (0.0006) +[2024-12-28 14:56:32,980][100934] Updated weights for policy 0, policy_version 28357 (0.0007) +[2024-12-28 14:56:33,944][100720] Fps is (10 sec: 26624.0, 60 sec: 26487.5, 300 sec: 25561.8). Total num frames: 116174848. Throughput: 0: 6656.0. Samples: 19029782. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:56:33,945][100720] Avg episode reward: [(0, '4.586')] +[2024-12-28 14:56:34,461][100934] Updated weights for policy 0, policy_version 28367 (0.0007) +[2024-12-28 14:56:36,009][100934] Updated weights for policy 0, policy_version 28377 (0.0006) +[2024-12-28 14:56:37,568][100934] Updated weights for policy 0, policy_version 28387 (0.0007) +[2024-12-28 14:56:38,944][100720] Fps is (10 sec: 27033.6, 60 sec: 26692.3, 300 sec: 25575.7). Total num frames: 116310016. Throughput: 0: 6661.7. Samples: 19069864. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:56:38,945][100720] Avg episode reward: [(0, '4.460')] +[2024-12-28 14:56:39,123][100934] Updated weights for policy 0, policy_version 28397 (0.0006) +[2024-12-28 14:56:40,690][100934] Updated weights for policy 0, policy_version 28407 (0.0006) +[2024-12-28 14:56:42,229][100934] Updated weights for policy 0, policy_version 28417 (0.0006) +[2024-12-28 14:56:43,744][100934] Updated weights for policy 0, policy_version 28427 (0.0006) +[2024-12-28 14:56:43,944][100720] Fps is (10 sec: 26624.0, 60 sec: 26624.0, 300 sec: 25561.8). Total num frames: 116441088. Throughput: 0: 6658.1. Samples: 19109520. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:56:43,945][100720] Avg episode reward: [(0, '4.510')] +[2024-12-28 14:56:45,318][100934] Updated weights for policy 0, policy_version 28437 (0.0007) +[2024-12-28 14:56:46,818][100934] Updated weights for policy 0, policy_version 28447 (0.0007) +[2024-12-28 14:56:48,356][100934] Updated weights for policy 0, policy_version 28457 (0.0006) +[2024-12-28 14:56:48,944][100720] Fps is (10 sec: 26214.4, 60 sec: 26624.0, 300 sec: 25561.8). Total num frames: 116572160. Throughput: 0: 6662.6. Samples: 19129666. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:56:48,945][100720] Avg episode reward: [(0, '4.469')] +[2024-12-28 14:56:49,951][100934] Updated weights for policy 0, policy_version 28467 (0.0007) +[2024-12-28 14:56:51,476][100934] Updated weights for policy 0, policy_version 28477 (0.0007) +[2024-12-28 14:56:53,061][100934] Updated weights for policy 0, policy_version 28487 (0.0007) +[2024-12-28 14:56:53,944][100720] Fps is (10 sec: 26214.4, 60 sec: 26555.7, 300 sec: 25547.9). Total num frames: 116703232. Throughput: 0: 6644.2. Samples: 19169190. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:56:53,945][100720] Avg episode reward: [(0, '4.219')] +[2024-12-28 14:56:54,602][100934] Updated weights for policy 0, policy_version 28497 (0.0007) +[2024-12-28 14:56:56,167][100934] Updated weights for policy 0, policy_version 28507 (0.0007) +[2024-12-28 14:56:57,689][100934] Updated weights for policy 0, policy_version 28517 (0.0007) +[2024-12-28 14:56:58,944][100720] Fps is (10 sec: 26624.0, 60 sec: 26624.0, 300 sec: 25561.8). Total num frames: 116838400. Throughput: 0: 6630.9. 
Samples: 19208896. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:56:58,945][100720] Avg episode reward: [(0, '4.420')] +[2024-12-28 14:56:59,229][100934] Updated weights for policy 0, policy_version 28527 (0.0007) +[2024-12-28 14:57:00,757][100934] Updated weights for policy 0, policy_version 28537 (0.0007) +[2024-12-28 14:57:02,392][100934] Updated weights for policy 0, policy_version 28547 (0.0007) +[2024-12-28 14:57:03,944][100720] Fps is (10 sec: 25804.7, 60 sec: 26419.2, 300 sec: 25534.0). Total num frames: 116961280. Throughput: 0: 6632.7. Samples: 19228740. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:57:03,945][100720] Avg episode reward: [(0, '4.433')] +[2024-12-28 14:57:04,194][100934] Updated weights for policy 0, policy_version 28557 (0.0008) +[2024-12-28 14:57:05,986][100934] Updated weights for policy 0, policy_version 28567 (0.0007) +[2024-12-28 14:57:07,829][100934] Updated weights for policy 0, policy_version 28577 (0.0009) +[2024-12-28 14:57:08,944][100720] Fps is (10 sec: 23756.4, 60 sec: 26146.0, 300 sec: 25464.6). Total num frames: 117075968. Throughput: 0: 6508.2. Samples: 19262786. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:57:08,946][100720] Avg episode reward: [(0, '4.603')] +[2024-12-28 14:57:09,659][100934] Updated weights for policy 0, policy_version 28587 (0.0008) +[2024-12-28 14:57:11,399][100934] Updated weights for policy 0, policy_version 28597 (0.0008) +[2024-12-28 14:57:13,152][100934] Updated weights for policy 0, policy_version 28607 (0.0007) +[2024-12-28 14:57:13,944][100720] Fps is (10 sec: 23347.3, 60 sec: 25873.1, 300 sec: 25436.9). Total num frames: 117194752. Throughput: 0: 6407.3. Samples: 19298074. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:57:13,945][100720] Avg episode reward: [(0, '4.568')] +[2024-12-28 14:57:14,658][100934] Updated weights for policy 0, policy_version 28617 (0.0006) +[2024-12-28 14:57:16,187][100934] Updated weights for policy 0, policy_version 28627 (0.0007) +[2024-12-28 14:57:17,724][100934] Updated weights for policy 0, policy_version 28637 (0.0006) +[2024-12-28 14:57:18,944][100720] Fps is (10 sec: 24985.9, 60 sec: 25804.8, 300 sec: 25478.5). Total num frames: 117325824. Throughput: 0: 6406.8. Samples: 19318090. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:57:18,945][100720] Avg episode reward: [(0, '4.434')] +[2024-12-28 14:57:19,270][100934] Updated weights for policy 0, policy_version 28647 (0.0007) +[2024-12-28 14:57:20,740][100934] Updated weights for policy 0, policy_version 28657 (0.0006) +[2024-12-28 14:57:22,296][100934] Updated weights for policy 0, policy_version 28667 (0.0006) +[2024-12-28 14:57:23,811][100934] Updated weights for policy 0, policy_version 28677 (0.0006) +[2024-12-28 14:57:23,944][100720] Fps is (10 sec: 26624.1, 60 sec: 25873.1, 300 sec: 25561.8). Total num frames: 117460992. Throughput: 0: 6411.7. Samples: 19358392. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:57:23,946][100720] Avg episode reward: [(0, '4.276')] +[2024-12-28 14:57:25,340][100934] Updated weights for policy 0, policy_version 28687 (0.0007) +[2024-12-28 14:57:26,854][100934] Updated weights for policy 0, policy_version 28697 (0.0006) +[2024-12-28 14:57:28,385][100934] Updated weights for policy 0, policy_version 28707 (0.0007) +[2024-12-28 14:57:28,944][100720] Fps is (10 sec: 27033.8, 60 sec: 25941.3, 300 sec: 25617.4). Total num frames: 117596160. Throughput: 0: 6430.1. Samples: 19398874. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:57:28,945][100720] Avg episode reward: [(0, '4.753')] +[2024-12-28 14:57:29,933][100934] Updated weights for policy 0, policy_version 28717 (0.0008) +[2024-12-28 14:57:31,514][100934] Updated weights for policy 0, policy_version 28727 (0.0007) +[2024-12-28 14:57:33,048][100934] Updated weights for policy 0, policy_version 28737 (0.0006) +[2024-12-28 14:57:33,944][100720] Fps is (10 sec: 26623.8, 60 sec: 25873.1, 300 sec: 25631.2). Total num frames: 117727232. Throughput: 0: 6418.8. Samples: 19418512. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:57:33,945][100720] Avg episode reward: [(0, '4.243')] +[2024-12-28 14:57:34,597][100934] Updated weights for policy 0, policy_version 28747 (0.0006) +[2024-12-28 14:57:36,155][100934] Updated weights for policy 0, policy_version 28757 (0.0006) +[2024-12-28 14:57:37,701][100934] Updated weights for policy 0, policy_version 28767 (0.0007) +[2024-12-28 14:57:38,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25873.1, 300 sec: 25728.4). Total num frames: 117862400. Throughput: 0: 6425.0. Samples: 19458316. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:57:38,945][100720] Avg episode reward: [(0, '4.374')] +[2024-12-28 14:57:39,206][100934] Updated weights for policy 0, policy_version 28777 (0.0006) +[2024-12-28 14:57:40,728][100934] Updated weights for policy 0, policy_version 28787 (0.0006) +[2024-12-28 14:57:42,255][100934] Updated weights for policy 0, policy_version 28797 (0.0008) +[2024-12-28 14:57:43,760][100934] Updated weights for policy 0, policy_version 28807 (0.0006) +[2024-12-28 14:57:43,944][100720] Fps is (10 sec: 27033.7, 60 sec: 25941.4, 300 sec: 25797.9). Total num frames: 117997568. Throughput: 0: 6437.9. Samples: 19498600. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:57:43,945][100720] Avg episode reward: [(0, '4.540')] +[2024-12-28 14:57:45,352][100934] Updated weights for policy 0, policy_version 28817 (0.0006) +[2024-12-28 14:57:46,914][100934] Updated weights for policy 0, policy_version 28827 (0.0006) +[2024-12-28 14:57:48,426][100934] Updated weights for policy 0, policy_version 28837 (0.0006) +[2024-12-28 14:57:48,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25941.3, 300 sec: 25797.9). Total num frames: 118128640. Throughput: 0: 6431.9. Samples: 19518176. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:57:48,945][100720] Avg episode reward: [(0, '4.300')] +[2024-12-28 14:57:50,026][100934] Updated weights for policy 0, policy_version 28847 (0.0008) +[2024-12-28 14:57:51,659][100934] Updated weights for policy 0, policy_version 28857 (0.0007) +[2024-12-28 14:57:53,398][100934] Updated weights for policy 0, policy_version 28867 (0.0008) +[2024-12-28 14:57:53,944][100720] Fps is (10 sec: 25395.0, 60 sec: 25804.8, 300 sec: 25756.2). Total num frames: 118251520. Throughput: 0: 6521.5. Samples: 19556252. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:57:53,945][100720] Avg episode reward: [(0, '4.370')] +[2024-12-28 14:57:53,949][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000028870_118251520.pth... 
+[2024-12-28 14:57:53,984][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000027359_112062464.pth +[2024-12-28 14:57:55,227][100934] Updated weights for policy 0, policy_version 28877 (0.0008) +[2024-12-28 14:57:57,031][100934] Updated weights for policy 0, policy_version 28887 (0.0008) +[2024-12-28 14:57:58,839][100934] Updated weights for policy 0, policy_version 28897 (0.0008) +[2024-12-28 14:57:58,944][100720] Fps is (10 sec: 23347.1, 60 sec: 25395.2, 300 sec: 25686.8). Total num frames: 118362112. Throughput: 0: 6493.0. Samples: 19590258. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:57:58,945][100720] Avg episode reward: [(0, '4.435')] +[2024-12-28 14:58:00,702][100934] Updated weights for policy 0, policy_version 28907 (0.0008) +[2024-12-28 14:58:02,437][100934] Updated weights for policy 0, policy_version 28917 (0.0007) +[2024-12-28 14:58:03,944][100720] Fps is (10 sec: 22937.8, 60 sec: 25326.9, 300 sec: 25631.2). Total num frames: 118480896. Throughput: 0: 6424.1. Samples: 19607176. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:58:03,945][100720] Avg episode reward: [(0, '4.491')] +[2024-12-28 14:58:03,977][100934] Updated weights for policy 0, policy_version 28927 (0.0006) +[2024-12-28 14:58:05,473][100934] Updated weights for policy 0, policy_version 28937 (0.0006) +[2024-12-28 14:58:07,047][100934] Updated weights for policy 0, policy_version 28947 (0.0007) +[2024-12-28 14:58:08,605][100934] Updated weights for policy 0, policy_version 28957 (0.0006) +[2024-12-28 14:58:08,944][100720] Fps is (10 sec: 25395.1, 60 sec: 25668.3, 300 sec: 25645.1). Total num frames: 118616064. Throughput: 0: 6412.3. Samples: 19646948. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:58:08,945][100720] Avg episode reward: [(0, '4.317')] +[2024-12-28 14:58:10,160][100934] Updated weights for policy 0, policy_version 28967 (0.0007) +[2024-12-28 14:58:11,697][100934] Updated weights for policy 0, policy_version 28977 (0.0007) +[2024-12-28 14:58:13,212][100934] Updated weights for policy 0, policy_version 28987 (0.0007) +[2024-12-28 14:58:13,944][100720] Fps is (10 sec: 26623.8, 60 sec: 25873.0, 300 sec: 25645.1). Total num frames: 118747136. Throughput: 0: 6392.1. Samples: 19686520. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:58:13,946][100720] Avg episode reward: [(0, '4.469')] +[2024-12-28 14:58:14,784][100934] Updated weights for policy 0, policy_version 28997 (0.0007) +[2024-12-28 14:58:16,325][100934] Updated weights for policy 0, policy_version 29007 (0.0007) +[2024-12-28 14:58:17,867][100934] Updated weights for policy 0, policy_version 29017 (0.0006) +[2024-12-28 14:58:18,944][100720] Fps is (10 sec: 26624.2, 60 sec: 25941.4, 300 sec: 25659.0). Total num frames: 118882304. Throughput: 0: 6401.5. Samples: 19706578. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:58:18,945][100720] Avg episode reward: [(0, '4.485')] +[2024-12-28 14:58:19,361][100934] Updated weights for policy 0, policy_version 29027 (0.0008) +[2024-12-28 14:58:20,894][100934] Updated weights for policy 0, policy_version 29037 (0.0007) +[2024-12-28 14:58:22,459][100934] Updated weights for policy 0, policy_version 29047 (0.0006) +[2024-12-28 14:58:23,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25873.0, 300 sec: 25659.0). Total num frames: 119013376. Throughput: 0: 6405.5. Samples: 19746566. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:58:23,946][100720] Avg episode reward: [(0, '4.529')] +[2024-12-28 14:58:24,002][100934] Updated weights for policy 0, policy_version 29057 (0.0006) +[2024-12-28 14:58:25,532][100934] Updated weights for policy 0, policy_version 29067 (0.0007) +[2024-12-28 14:58:27,054][100934] Updated weights for policy 0, policy_version 29077 (0.0007) +[2024-12-28 14:58:28,577][100934] Updated weights for policy 0, policy_version 29087 (0.0006) +[2024-12-28 14:58:28,944][100720] Fps is (10 sec: 26624.0, 60 sec: 25873.1, 300 sec: 25672.9). Total num frames: 119148544. Throughput: 0: 6401.1. Samples: 19786648. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:58:28,945][100720] Avg episode reward: [(0, '4.651')] +[2024-12-28 14:58:30,166][100934] Updated weights for policy 0, policy_version 29097 (0.0007) +[2024-12-28 14:58:31,733][100934] Updated weights for policy 0, policy_version 29107 (0.0007) +[2024-12-28 14:58:33,224][100934] Updated weights for policy 0, policy_version 29117 (0.0006) +[2024-12-28 14:58:33,944][100720] Fps is (10 sec: 26623.8, 60 sec: 25873.0, 300 sec: 25672.9). Total num frames: 119279616. Throughput: 0: 6402.3. Samples: 19806280. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:58:33,945][100720] Avg episode reward: [(0, '4.570')] +[2024-12-28 14:58:34,749][100934] Updated weights for policy 0, policy_version 29127 (0.0007) +[2024-12-28 14:58:36,262][100934] Updated weights for policy 0, policy_version 29137 (0.0007) +[2024-12-28 14:58:37,781][100934] Updated weights for policy 0, policy_version 29147 (0.0007) +[2024-12-28 14:58:38,944][100720] Fps is (10 sec: 26623.9, 60 sec: 25873.1, 300 sec: 25742.3). Total num frames: 119414784. Throughput: 0: 6456.3. Samples: 19846784. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:58:38,945][100720] Avg episode reward: [(0, '4.464')] +[2024-12-28 14:58:39,284][100934] Updated weights for policy 0, policy_version 29157 (0.0008) +[2024-12-28 14:58:40,841][100934] Updated weights for policy 0, policy_version 29167 (0.0007) +[2024-12-28 14:58:42,344][100934] Updated weights for policy 0, policy_version 29177 (0.0006) +[2024-12-28 14:58:43,839][100934] Updated weights for policy 0, policy_version 29187 (0.0006) +[2024-12-28 14:58:43,944][100720] Fps is (10 sec: 27033.5, 60 sec: 25873.0, 300 sec: 25825.6). Total num frames: 119549952. Throughput: 0: 6600.9. Samples: 19887298. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:58:43,947][100720] Avg episode reward: [(0, '4.639')] +[2024-12-28 14:58:45,354][100934] Updated weights for policy 0, policy_version 29197 (0.0006) +[2024-12-28 14:58:46,880][100934] Updated weights for policy 0, policy_version 29207 (0.0007) +[2024-12-28 14:58:48,476][100934] Updated weights for policy 0, policy_version 29217 (0.0007) +[2024-12-28 14:58:48,944][100720] Fps is (10 sec: 27033.5, 60 sec: 25941.3, 300 sec: 25839.5). Total num frames: 119685120. Throughput: 0: 6670.4. Samples: 19907346. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:58:48,945][100720] Avg episode reward: [(0, '4.534')] +[2024-12-28 14:58:50,067][100934] Updated weights for policy 0, policy_version 29227 (0.0007) +[2024-12-28 14:58:51,613][100934] Updated weights for policy 0, policy_version 29237 (0.0007) +[2024-12-28 14:58:53,174][100934] Updated weights for policy 0, policy_version 29247 (0.0008) +[2024-12-28 14:58:53,944][100720] Fps is (10 sec: 26624.3, 60 sec: 26077.9, 300 sec: 25825.6). 
Total num frames: 119816192. Throughput: 0: 6658.4. Samples: 19946576. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:58:53,947][100720] Avg episode reward: [(0, '4.530')] +[2024-12-28 14:58:54,711][100934] Updated weights for policy 0, policy_version 29257 (0.0007) +[2024-12-28 14:58:56,200][100934] Updated weights for policy 0, policy_version 29267 (0.0006) +[2024-12-28 14:58:57,694][100934] Updated weights for policy 0, policy_version 29277 (0.0006) +[2024-12-28 14:58:58,944][100720] Fps is (10 sec: 26624.1, 60 sec: 26487.5, 300 sec: 25839.5). Total num frames: 119951360. Throughput: 0: 6679.2. Samples: 19987084. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:58:58,945][100720] Avg episode reward: [(0, '4.668')] +[2024-12-28 14:58:59,243][100934] Updated weights for policy 0, policy_version 29287 (0.0006) +[2024-12-28 14:59:00,898][100934] Updated weights for policy 0, policy_version 29297 (0.0008) +[2024-12-28 14:59:02,636][100934] Updated weights for policy 0, policy_version 29307 (0.0007) +[2024-12-28 14:59:03,944][100720] Fps is (10 sec: 25395.2, 60 sec: 26487.5, 300 sec: 25797.9). Total num frames: 120070144. Throughput: 0: 6643.1. Samples: 20005518. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:59:03,945][100720] Avg episode reward: [(0, '4.433')] +[2024-12-28 14:59:04,415][100934] Updated weights for policy 0, policy_version 29317 (0.0008) +[2024-12-28 14:59:06,199][100934] Updated weights for policy 0, policy_version 29327 (0.0009) +[2024-12-28 14:59:08,027][100934] Updated weights for policy 0, policy_version 29337 (0.0008) +[2024-12-28 14:59:08,944][100720] Fps is (10 sec: 22937.5, 60 sec: 26077.9, 300 sec: 25714.5). Total num frames: 120180736. Throughput: 0: 6516.9. Samples: 20039826. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:59:08,945][100720] Avg episode reward: [(0, '4.508')] +[2024-12-28 14:59:09,847][100934] Updated weights for policy 0, policy_version 29347 (0.0008) +[2024-12-28 14:59:11,604][100934] Updated weights for policy 0, policy_version 29357 (0.0007) +[2024-12-28 14:59:13,098][100934] Updated weights for policy 0, policy_version 29367 (0.0007) +[2024-12-28 14:59:13,944][100720] Fps is (10 sec: 23756.8, 60 sec: 26009.6, 300 sec: 25700.7). Total num frames: 120307712. Throughput: 0: 6439.5. Samples: 20076424. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 14:59:13,945][100720] Avg episode reward: [(0, '4.322')] +[2024-12-28 14:59:14,767][100934] Updated weights for policy 0, policy_version 29377 (0.0008) +[2024-12-28 14:59:16,556][100934] Updated weights for policy 0, policy_version 29387 (0.0009) +[2024-12-28 14:59:18,297][100934] Updated weights for policy 0, policy_version 29397 (0.0007) +[2024-12-28 14:59:18,944][100720] Fps is (10 sec: 24166.2, 60 sec: 25668.2, 300 sec: 25631.2). Total num frames: 120422400. Throughput: 0: 6387.9. Samples: 20093734. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:59:18,945][100720] Avg episode reward: [(0, '4.348')] +[2024-12-28 14:59:20,127][100934] Updated weights for policy 0, policy_version 29407 (0.0008) +[2024-12-28 14:59:21,901][100934] Updated weights for policy 0, policy_version 29417 (0.0008) +[2024-12-28 14:59:23,702][100934] Updated weights for policy 0, policy_version 29427 (0.0007) +[2024-12-28 14:59:23,944][100720] Fps is (10 sec: 22937.6, 60 sec: 25395.2, 300 sec: 25589.6). Total num frames: 120537088. Throughput: 0: 6249.8. Samples: 20128026. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:59:23,945][100720] Avg episode reward: [(0, '4.456')] +[2024-12-28 14:59:25,350][100934] Updated weights for policy 0, policy_version 29437 (0.0007) +[2024-12-28 14:59:26,924][100934] Updated weights for policy 0, policy_version 29447 (0.0007) +[2024-12-28 14:59:28,471][100934] Updated weights for policy 0, policy_version 29457 (0.0006) +[2024-12-28 14:59:28,944][100720] Fps is (10 sec: 24576.3, 60 sec: 25326.9, 300 sec: 25589.6). Total num frames: 120668160. Throughput: 0: 6202.0. Samples: 20166388. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:59:28,945][100720] Avg episode reward: [(0, '4.557')] +[2024-12-28 14:59:29,960][100934] Updated weights for policy 0, policy_version 29467 (0.0006) +[2024-12-28 14:59:31,488][100934] Updated weights for policy 0, policy_version 29477 (0.0007) +[2024-12-28 14:59:33,041][100934] Updated weights for policy 0, policy_version 29487 (0.0007) +[2024-12-28 14:59:33,944][100720] Fps is (10 sec: 26624.1, 60 sec: 25395.3, 300 sec: 25672.9). Total num frames: 120803328. Throughput: 0: 6207.8. Samples: 20186698. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 14:59:33,945][100720] Avg episode reward: [(0, '4.470')] +[2024-12-28 14:59:34,536][100934] Updated weights for policy 0, policy_version 29497 (0.0006) +[2024-12-28 14:59:36,082][100934] Updated weights for policy 0, policy_version 29507 (0.0006) +[2024-12-28 14:59:37,588][100934] Updated weights for policy 0, policy_version 29517 (0.0007) +[2024-12-28 14:59:38,944][100720] Fps is (10 sec: 27033.6, 60 sec: 25395.2, 300 sec: 25756.2). Total num frames: 120938496. Throughput: 0: 6228.5. Samples: 20226858. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 14:59:38,945][100720] Avg episode reward: [(0, '4.279')] +[2024-12-28 14:59:39,096][100934] Updated weights for policy 0, policy_version 29527 (0.0007) +[2024-12-28 14:59:40,609][100934] Updated weights for policy 0, policy_version 29537 (0.0007) +[2024-12-28 14:59:42,176][100934] Updated weights for policy 0, policy_version 29547 (0.0006) +[2024-12-28 14:59:43,746][100934] Updated weights for policy 0, policy_version 29557 (0.0007) +[2024-12-28 14:59:43,944][100720] Fps is (10 sec: 26623.6, 60 sec: 25326.9, 300 sec: 25756.2). Total num frames: 121069568. Throughput: 0: 6216.3. Samples: 20266818. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 14:59:43,945][100720] Avg episode reward: [(0, '4.244')] +[2024-12-28 14:59:45,270][100934] Updated weights for policy 0, policy_version 29567 (0.0007) +[2024-12-28 14:59:46,836][100934] Updated weights for policy 0, policy_version 29577 (0.0006) +[2024-12-28 14:59:48,622][100934] Updated weights for policy 0, policy_version 29587 (0.0009) +[2024-12-28 14:59:48,944][100720] Fps is (10 sec: 25395.0, 60 sec: 25122.1, 300 sec: 25728.4). Total num frames: 121192448. Throughput: 0: 6247.0. Samples: 20286634. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 14:59:48,945][100720] Avg episode reward: [(0, '4.330')] +[2024-12-28 14:59:50,413][100934] Updated weights for policy 0, policy_version 29597 (0.0010) +[2024-12-28 14:59:52,212][100934] Updated weights for policy 0, policy_version 29607 (0.0007) +[2024-12-28 14:59:53,944][100720] Fps is (10 sec: 23756.9, 60 sec: 24849.0, 300 sec: 25659.0). Total num frames: 121307136. Throughput: 0: 6248.0. Samples: 20320986. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2024-12-28 14:59:53,945][100720] Avg episode reward: [(0, '4.256')] +[2024-12-28 14:59:53,953][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000029616_121307136.pth... +[2024-12-28 14:59:53,998][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000028103_115109888.pth +[2024-12-28 14:59:54,048][100934] Updated weights for policy 0, policy_version 29617 (0.0007) +[2024-12-28 14:59:55,825][100934] Updated weights for policy 0, policy_version 29627 (0.0007) +[2024-12-28 14:59:57,543][100934] Updated weights for policy 0, policy_version 29637 (0.0008) +[2024-12-28 14:59:58,944][100720] Fps is (10 sec: 23347.4, 60 sec: 24576.0, 300 sec: 25603.5). Total num frames: 121425920. Throughput: 0: 6226.8. Samples: 20356632. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 14:59:58,945][100720] Avg episode reward: [(0, '4.436')] +[2024-12-28 14:59:59,119][100934] Updated weights for policy 0, policy_version 29647 (0.0007) +[2024-12-28 15:00:00,658][100934] Updated weights for policy 0, policy_version 29657 (0.0008) +[2024-12-28 15:00:02,186][100934] Updated weights for policy 0, policy_version 29667 (0.0006) +[2024-12-28 15:00:03,693][100934] Updated weights for policy 0, policy_version 29677 (0.0007) +[2024-12-28 15:00:03,944][100720] Fps is (10 sec: 25395.1, 60 sec: 24849.0, 300 sec: 25617.4). Total num frames: 121561088. Throughput: 0: 6284.6. Samples: 20376540. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 15:00:03,945][100720] Avg episode reward: [(0, '4.542')] +[2024-12-28 15:00:05,231][100934] Updated weights for policy 0, policy_version 29687 (0.0007) +[2024-12-28 15:00:06,761][100934] Updated weights for policy 0, policy_version 29697 (0.0006) +[2024-12-28 15:00:08,304][100934] Updated weights for policy 0, policy_version 29707 (0.0007) +[2024-12-28 15:00:08,944][100720] Fps is (10 sec: 26623.8, 60 sec: 25190.4, 300 sec: 25672.9). Total num frames: 121692160. Throughput: 0: 6418.7. Samples: 20416868. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 15:00:08,945][100720] Avg episode reward: [(0, '4.403')] +[2024-12-28 15:00:10,060][100934] Updated weights for policy 0, policy_version 29717 (0.0007) +[2024-12-28 15:00:11,887][100934] Updated weights for policy 0, policy_version 29727 (0.0007) +[2024-12-28 15:00:13,663][100934] Updated weights for policy 0, policy_version 29737 (0.0008) +[2024-12-28 15:00:13,944][100720] Fps is (10 sec: 24576.2, 60 sec: 24985.6, 300 sec: 25686.8). Total num frames: 121806848. Throughput: 0: 6335.0. Samples: 20451462. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 15:00:13,945][100720] Avg episode reward: [(0, '4.455')] +[2024-12-28 15:00:15,436][100934] Updated weights for policy 0, policy_version 29747 (0.0008) +[2024-12-28 15:00:17,182][100934] Updated weights for policy 0, policy_version 29757 (0.0008) +[2024-12-28 15:00:18,944][100720] Fps is (10 sec: 22937.9, 60 sec: 24985.7, 300 sec: 25645.1). Total num frames: 121921536. Throughput: 0: 6272.2. Samples: 20468946. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:00:18,945][100720] Avg episode reward: [(0, '4.462')] +[2024-12-28 15:00:18,959][100934] Updated weights for policy 0, policy_version 29767 (0.0007) +[2024-12-28 15:00:20,621][100934] Updated weights for policy 0, policy_version 29777 (0.0009) +[2024-12-28 15:00:22,147][100934] Updated weights for policy 0, policy_version 29787 (0.0006) +[2024-12-28 15:00:23,615][100934] Updated weights for policy 0, policy_version 29797 (0.0007) +[2024-12-28 15:00:23,944][100720] Fps is (10 sec: 24985.7, 60 sec: 25326.9, 300 sec: 25700.7). Total num frames: 122056704. Throughput: 0: 6212.8. Samples: 20506434. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:00:23,945][100720] Avg episode reward: [(0, '4.237')] +[2024-12-28 15:00:25,244][100934] Updated weights for policy 0, policy_version 29807 (0.0007) +[2024-12-28 15:00:27,050][100934] Updated weights for policy 0, policy_version 29817 (0.0009) +[2024-12-28 15:00:28,944][100720] Fps is (10 sec: 23346.8, 60 sec: 24780.7, 300 sec: 25659.0). Total num frames: 122155008. Throughput: 0: 6040.8. Samples: 20538656. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:00:28,945][100720] Avg episode reward: [(0, '4.546')] +[2024-12-28 15:00:29,621][100934] Updated weights for policy 0, policy_version 29827 (0.0007) +[2024-12-28 15:00:31,640][100934] Updated weights for policy 0, policy_version 29837 (0.0008) +[2024-12-28 15:00:33,673][100934] Updated weights for policy 0, policy_version 29847 (0.0007) +[2024-12-28 15:00:33,944][100720] Fps is (10 sec: 20070.0, 60 sec: 24234.6, 300 sec: 25589.6). Total num frames: 122257408. Throughput: 0: 5931.9. Samples: 20553572. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:00:33,945][100720] Avg episode reward: [(0, '4.372')] +[2024-12-28 15:00:35,550][100934] Updated weights for policy 0, policy_version 29857 (0.0009) +[2024-12-28 15:00:37,075][100934] Updated weights for policy 0, policy_version 29867 (0.0006) +[2024-12-28 15:00:38,680][100934] Updated weights for policy 0, policy_version 29877 (0.0007) +[2024-12-28 15:00:38,944][100720] Fps is (10 sec: 22528.1, 60 sec: 24029.8, 300 sec: 25547.9). Total num frames: 122380288. Throughput: 0: 5941.0. Samples: 20588330. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:00:38,945][100720] Avg episode reward: [(0, '4.379')] +[2024-12-28 15:00:40,239][100934] Updated weights for policy 0, policy_version 29887 (0.0008) +[2024-12-28 15:00:41,831][100934] Updated weights for policy 0, policy_version 29897 (0.0007) +[2024-12-28 15:00:43,688][100934] Updated weights for policy 0, policy_version 29907 (0.0007) +[2024-12-28 15:00:43,944][100720] Fps is (10 sec: 24576.4, 60 sec: 23893.4, 300 sec: 25520.2). Total num frames: 122503168. Throughput: 0: 5975.6. Samples: 20625536. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:00:43,946][100720] Avg episode reward: [(0, '4.637')] +[2024-12-28 15:00:45,416][100934] Updated weights for policy 0, policy_version 29917 (0.0008) +[2024-12-28 15:00:47,177][100934] Updated weights for policy 0, policy_version 29927 (0.0007) +[2024-12-28 15:00:48,863][100934] Updated weights for policy 0, policy_version 29937 (0.0006) +[2024-12-28 15:00:48,944][100720] Fps is (10 sec: 24166.4, 60 sec: 23825.1, 300 sec: 25464.6). Total num frames: 122621952. Throughput: 0: 5920.8. Samples: 20642978. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:00:48,945][100720] Avg episode reward: [(0, '4.476')] +[2024-12-28 15:00:50,632][100934] Updated weights for policy 0, policy_version 29947 (0.0008) +[2024-12-28 15:00:52,347][100934] Updated weights for policy 0, policy_version 29957 (0.0006) +[2024-12-28 15:00:53,944][100720] Fps is (10 sec: 23756.5, 60 sec: 23893.3, 300 sec: 25423.0). Total num frames: 122740736. Throughput: 0: 5821.3. Samples: 20678826. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:00:53,945][100720] Avg episode reward: [(0, '4.479')] +[2024-12-28 15:00:53,993][100934] Updated weights for policy 0, policy_version 29967 (0.0007) +[2024-12-28 15:00:55,739][100934] Updated weights for policy 0, policy_version 29977 (0.0008) +[2024-12-28 15:00:57,442][100934] Updated weights for policy 0, policy_version 29987 (0.0008) +[2024-12-28 15:00:58,944][100720] Fps is (10 sec: 23347.2, 60 sec: 23825.0, 300 sec: 25353.5). Total num frames: 122855424. Throughput: 0: 5821.5. Samples: 20713430. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:00:58,945][100720] Avg episode reward: [(0, '4.349')] +[2024-12-28 15:00:59,480][100934] Updated weights for policy 0, policy_version 29997 (0.0008) +[2024-12-28 15:01:01,375][100934] Updated weights for policy 0, policy_version 30007 (0.0007) +[2024-12-28 15:01:03,396][100934] Updated weights for policy 0, policy_version 30017 (0.0010) +[2024-12-28 15:01:03,944][100720] Fps is (10 sec: 21709.1, 60 sec: 23279.0, 300 sec: 25256.3). Total num frames: 122957824. Throughput: 0: 5785.1. Samples: 20729276. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:01:03,947][100720] Avg episode reward: [(0, '4.388')] +[2024-12-28 15:01:05,301][100934] Updated weights for policy 0, policy_version 30027 (0.0010) +[2024-12-28 15:01:07,350][100934] Updated weights for policy 0, policy_version 30037 (0.0010) +[2024-12-28 15:01:08,944][100720] Fps is (10 sec: 20889.8, 60 sec: 22869.4, 300 sec: 25159.2). Total num frames: 123064320. Throughput: 0: 5637.8. Samples: 20760134. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:01:08,945][100720] Avg episode reward: [(0, '4.523')] +[2024-12-28 15:01:09,141][100934] Updated weights for policy 0, policy_version 30047 (0.0007) +[2024-12-28 15:01:10,867][100934] Updated weights for policy 0, policy_version 30057 (0.0008) +[2024-12-28 15:01:12,497][100934] Updated weights for policy 0, policy_version 30067 (0.0007) +[2024-12-28 15:01:13,944][100720] Fps is (10 sec: 22937.5, 60 sec: 23005.9, 300 sec: 25117.5). Total num frames: 123187200. Throughput: 0: 5732.8. Samples: 20796632. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:01:13,945][100720] Avg episode reward: [(0, '4.440')] +[2024-12-28 15:01:14,165][100934] Updated weights for policy 0, policy_version 30077 (0.0007) +[2024-12-28 15:01:15,840][100934] Updated weights for policy 0, policy_version 30087 (0.0008) +[2024-12-28 15:01:17,566][100934] Updated weights for policy 0, policy_version 30097 (0.0006) +[2024-12-28 15:01:18,944][100720] Fps is (10 sec: 24575.9, 60 sec: 23142.4, 300 sec: 25089.7). Total num frames: 123310080. Throughput: 0: 5804.5. Samples: 20814774. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:01:18,945][100720] Avg episode reward: [(0, '4.339')] +[2024-12-28 15:01:19,247][100934] Updated weights for policy 0, policy_version 30107 (0.0007) +[2024-12-28 15:01:21,338][100934] Updated weights for policy 0, policy_version 30117 (0.0009) +[2024-12-28 15:01:23,354][100934] Updated weights for policy 0, policy_version 30127 (0.0008) +[2024-12-28 15:01:23,944][100720] Fps is (10 sec: 22118.3, 60 sec: 22528.0, 300 sec: 24978.7). Total num frames: 123408384. Throughput: 0: 5753.0. Samples: 20847216. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:01:23,946][100720] Avg episode reward: [(0, '4.510')] +[2024-12-28 15:01:25,411][100934] Updated weights for policy 0, policy_version 30137 (0.0008) +[2024-12-28 15:01:27,287][100934] Updated weights for policy 0, policy_version 30147 (0.0008) +[2024-12-28 15:01:28,944][100720] Fps is (10 sec: 20889.7, 60 sec: 22732.8, 300 sec: 24895.3). Total num frames: 123518976. Throughput: 0: 5628.2. Samples: 20878806. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:01:28,945][100720] Avg episode reward: [(0, '4.474')] +[2024-12-28 15:01:29,114][100934] Updated weights for policy 0, policy_version 30157 (0.0007) +[2024-12-28 15:01:30,907][100934] Updated weights for policy 0, policy_version 30167 (0.0007) +[2024-12-28 15:01:32,630][100934] Updated weights for policy 0, policy_version 30177 (0.0007) +[2024-12-28 15:01:33,944][100720] Fps is (10 sec: 22937.7, 60 sec: 23005.9, 300 sec: 24839.8). Total num frames: 123637760. Throughput: 0: 5626.2. Samples: 20896158. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:01:33,945][100720] Avg episode reward: [(0, '4.456')] +[2024-12-28 15:01:34,280][100934] Updated weights for policy 0, policy_version 30187 (0.0006) +[2024-12-28 15:01:35,908][100934] Updated weights for policy 0, policy_version 30197 (0.0007) +[2024-12-28 15:01:37,696][100934] Updated weights for policy 0, policy_version 30207 (0.0007) +[2024-12-28 15:01:38,944][100720] Fps is (10 sec: 22937.3, 60 sec: 22801.0, 300 sec: 24770.4). Total num frames: 123748352. Throughput: 0: 5634.5. Samples: 20932380. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:01:38,945][100720] Avg episode reward: [(0, '4.462')] +[2024-12-28 15:01:39,783][100934] Updated weights for policy 0, policy_version 30217 (0.0011) +[2024-12-28 15:01:41,825][100934] Updated weights for policy 0, policy_version 30227 (0.0010) +[2024-12-28 15:01:43,791][100934] Updated weights for policy 0, policy_version 30237 (0.0010) +[2024-12-28 15:01:43,944][100720] Fps is (10 sec: 21299.2, 60 sec: 22459.7, 300 sec: 24673.2). Total num frames: 123850752. Throughput: 0: 5538.5. Samples: 20962662. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:01:43,945][100720] Avg episode reward: [(0, '4.527')] +[2024-12-28 15:01:45,816][100934] Updated weights for policy 0, policy_version 30247 (0.0009) +[2024-12-28 15:01:47,865][100934] Updated weights for policy 0, policy_version 30257 (0.0011) +[2024-12-28 15:01:48,944][100720] Fps is (10 sec: 20480.2, 60 sec: 22186.7, 300 sec: 24576.0). Total num frames: 123953152. Throughput: 0: 5526.6. Samples: 20977972. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:01:48,945][100720] Avg episode reward: [(0, '4.351')] +[2024-12-28 15:01:49,909][100934] Updated weights for policy 0, policy_version 30267 (0.0010) +[2024-12-28 15:01:51,936][100934] Updated weights for policy 0, policy_version 30277 (0.0009) +[2024-12-28 15:01:53,944][100720] Fps is (10 sec: 20070.2, 60 sec: 21845.3, 300 sec: 24451.0). Total num frames: 124051456. Throughput: 0: 5497.1. Samples: 21007502. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:01:53,945][100720] Avg episode reward: [(0, '4.532')] +[2024-12-28 15:01:53,952][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000030286_124051456.pth... +[2024-12-28 15:01:53,987][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000028870_118251520.pth +[2024-12-28 15:01:54,104][100934] Updated weights for policy 0, policy_version 30287 (0.0010) +[2024-12-28 15:01:56,238][100934] Updated weights for policy 0, policy_version 30297 (0.0009) +[2024-12-28 15:01:57,995][100934] Updated weights for policy 0, policy_version 30307 (0.0008) +[2024-12-28 15:01:58,944][100720] Fps is (10 sec: 20070.5, 60 sec: 21640.6, 300 sec: 24381.6). Total num frames: 124153856. Throughput: 0: 5374.0. Samples: 21038464. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:01:58,945][100720] Avg episode reward: [(0, '4.376')] +[2024-12-28 15:01:59,987][100934] Updated weights for policy 0, policy_version 30317 (0.0008) +[2024-12-28 15:02:01,867][100934] Updated weights for policy 0, policy_version 30327 (0.0009) +[2024-12-28 15:02:03,640][100934] Updated weights for policy 0, policy_version 30337 (0.0007) +[2024-12-28 15:02:03,944][100720] Fps is (10 sec: 21299.5, 60 sec: 21777.1, 300 sec: 24367.7). Total num frames: 124264448. Throughput: 0: 5330.4. Samples: 21054642. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:02:03,945][100720] Avg episode reward: [(0, '4.538')] +[2024-12-28 15:02:05,266][100934] Updated weights for policy 0, policy_version 30347 (0.0007) +[2024-12-28 15:02:06,908][100934] Updated weights for policy 0, policy_version 30357 (0.0007) +[2024-12-28 15:02:08,603][100934] Updated weights for policy 0, policy_version 30367 (0.0008) +[2024-12-28 15:02:08,944][100720] Fps is (10 sec: 23347.3, 60 sec: 22050.1, 300 sec: 24381.6). Total num frames: 124387328. Throughput: 0: 5421.3. Samples: 21091172. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:02:08,945][100720] Avg episode reward: [(0, '4.617')] +[2024-12-28 15:02:10,545][100934] Updated weights for policy 0, policy_version 30377 (0.0009) +[2024-12-28 15:02:12,476][100934] Updated weights for policy 0, policy_version 30387 (0.0010) +[2024-12-28 15:02:13,944][100720] Fps is (10 sec: 22937.6, 60 sec: 21777.1, 300 sec: 24298.3). Total num frames: 124493824. Throughput: 0: 5436.4. Samples: 21123444. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:02:13,945][100720] Avg episode reward: [(0, '4.359')] +[2024-12-28 15:02:14,379][100934] Updated weights for policy 0, policy_version 30397 (0.0009) +[2024-12-28 15:02:16,310][100934] Updated weights for policy 0, policy_version 30407 (0.0009) +[2024-12-28 15:02:18,218][100934] Updated weights for policy 0, policy_version 30417 (0.0009) +[2024-12-28 15:02:18,944][100720] Fps is (10 sec: 21708.7, 60 sec: 21572.3, 300 sec: 24215.0). Total num frames: 124604416. Throughput: 0: 5404.0. 
Samples: 21139338. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:02:18,945][100720] Avg episode reward: [(0, '4.549')] +[2024-12-28 15:02:20,130][100934] Updated weights for policy 0, policy_version 30427 (0.0010) +[2024-12-28 15:02:21,764][100934] Updated weights for policy 0, policy_version 30437 (0.0007) +[2024-12-28 15:02:23,349][100934] Updated weights for policy 0, policy_version 30447 (0.0006) +[2024-12-28 15:02:23,944][100720] Fps is (10 sec: 22937.7, 60 sec: 21913.6, 300 sec: 24159.5). Total num frames: 124723200. Throughput: 0: 5374.3. Samples: 21174224. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:02:23,946][100720] Avg episode reward: [(0, '4.497')] +[2024-12-28 15:02:24,993][100934] Updated weights for policy 0, policy_version 30457 (0.0008) +[2024-12-28 15:02:26,617][100934] Updated weights for policy 0, policy_version 30467 (0.0006) +[2024-12-28 15:02:28,160][100934] Updated weights for policy 0, policy_version 30477 (0.0007) +[2024-12-28 15:02:28,944][100720] Fps is (10 sec: 24576.0, 60 sec: 22186.7, 300 sec: 24145.6). Total num frames: 124850176. Throughput: 0: 5539.9. Samples: 21211956. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:02:28,945][100720] Avg episode reward: [(0, '4.483')] +[2024-12-28 15:02:30,022][100934] Updated weights for policy 0, policy_version 30487 (0.0008) +[2024-12-28 15:02:31,842][100934] Updated weights for policy 0, policy_version 30497 (0.0008) +[2024-12-28 15:02:33,783][100934] Updated weights for policy 0, policy_version 30507 (0.0007) +[2024-12-28 15:02:33,944][100720] Fps is (10 sec: 23347.1, 60 sec: 21981.9, 300 sec: 24048.4). Total num frames: 124956672. Throughput: 0: 5569.1. Samples: 21228580. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:02:33,945][100720] Avg episode reward: [(0, '4.199')] +[2024-12-28 15:02:35,753][100934] Updated weights for policy 0, policy_version 30517 (0.0008) +[2024-12-28 15:02:37,677][100934] Updated weights for policy 0, policy_version 30527 (0.0008) +[2024-12-28 15:02:38,944][100720] Fps is (10 sec: 21299.2, 60 sec: 21913.6, 300 sec: 23951.2). Total num frames: 125063168. Throughput: 0: 5620.2. Samples: 21260410. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:02:38,945][100720] Avg episode reward: [(0, '4.675')] +[2024-12-28 15:02:39,520][100934] Updated weights for policy 0, policy_version 30537 (0.0007) +[2024-12-28 15:02:41,111][100934] Updated weights for policy 0, policy_version 30547 (0.0007) +[2024-12-28 15:02:42,702][100934] Updated weights for policy 0, policy_version 30557 (0.0007) +[2024-12-28 15:02:43,944][100720] Fps is (10 sec: 23347.1, 60 sec: 22323.2, 300 sec: 23937.3). Total num frames: 125190144. Throughput: 0: 5755.2. Samples: 21297448. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:02:43,945][100720] Avg episode reward: [(0, '4.282')] +[2024-12-28 15:02:44,313][100934] Updated weights for policy 0, policy_version 30567 (0.0007) +[2024-12-28 15:02:45,873][100934] Updated weights for policy 0, policy_version 30577 (0.0006) +[2024-12-28 15:02:47,465][100934] Updated weights for policy 0, policy_version 30587 (0.0007) +[2024-12-28 15:02:48,944][100720] Fps is (10 sec: 24985.6, 60 sec: 22664.5, 300 sec: 23937.3). Total num frames: 125313024. Throughput: 0: 5828.3. Samples: 21316916. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:02:48,945][100720] Avg episode reward: [(0, '4.400')] +[2024-12-28 15:02:49,350][100934] Updated weights for policy 0, policy_version 30597 (0.0008) +[2024-12-28 15:02:51,308][100934] Updated weights for policy 0, policy_version 30607 (0.0008) +[2024-12-28 15:02:53,127][100934] Updated weights for policy 0, policy_version 30617 (0.0009) +[2024-12-28 15:02:53,944][100720] Fps is (10 sec: 23347.4, 60 sec: 22869.4, 300 sec: 23937.3). Total num frames: 125423616. Throughput: 0: 5750.8. Samples: 21349956. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 15:02:53,945][100720] Avg episode reward: [(0, '4.449')] +[2024-12-28 15:02:54,975][100934] Updated weights for policy 0, policy_version 30627 (0.0007) +[2024-12-28 15:02:56,976][100934] Updated weights for policy 0, policy_version 30637 (0.0007) +[2024-12-28 15:02:58,865][100934] Updated weights for policy 0, policy_version 30647 (0.0009) +[2024-12-28 15:02:58,944][100720] Fps is (10 sec: 21708.9, 60 sec: 22937.6, 300 sec: 23895.6). Total num frames: 125530112. Throughput: 0: 5749.9. Samples: 21382190. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 15:02:58,945][100720] Avg episode reward: [(0, '4.346')] +[2024-12-28 15:03:00,481][100934] Updated weights for policy 0, policy_version 30657 (0.0008) +[2024-12-28 15:03:02,041][100934] Updated weights for policy 0, policy_version 30667 (0.0006) +[2024-12-28 15:03:03,594][100934] Updated weights for policy 0, policy_version 30677 (0.0007) +[2024-12-28 15:03:03,944][100720] Fps is (10 sec: 23756.8, 60 sec: 23278.9, 300 sec: 23881.8). Total num frames: 125661184. Throughput: 0: 5824.6. Samples: 21401446. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 15:03:03,945][100720] Avg episode reward: [(0, '4.408')] +[2024-12-28 15:03:05,185][100934] Updated weights for policy 0, policy_version 30687 (0.0007) +[2024-12-28 15:03:06,751][100934] Updated weights for policy 0, policy_version 30697 (0.0007) +[2024-12-28 15:03:08,318][100934] Updated weights for policy 0, policy_version 30707 (0.0006) +[2024-12-28 15:03:08,944][100720] Fps is (10 sec: 26214.4, 60 sec: 23415.5, 300 sec: 23881.8). Total num frames: 125792256. Throughput: 0: 5918.4. Samples: 21440554. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) +[2024-12-28 15:03:08,945][100720] Avg episode reward: [(0, '4.469')] +[2024-12-28 15:03:09,878][100934] Updated weights for policy 0, policy_version 30717 (0.0006) +[2024-12-28 15:03:11,475][100934] Updated weights for policy 0, policy_version 30727 (0.0006) +[2024-12-28 15:03:13,241][100934] Updated weights for policy 0, policy_version 30737 (0.0008) +[2024-12-28 15:03:13,944][100720] Fps is (10 sec: 24985.3, 60 sec: 23620.2, 300 sec: 23826.2). Total num frames: 125911040. Throughput: 0: 5905.9. Samples: 21477720. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:03:13,946][100720] Avg episode reward: [(0, '4.534')] +[2024-12-28 15:03:15,130][100934] Updated weights for policy 0, policy_version 30747 (0.0008) +[2024-12-28 15:03:17,091][100934] Updated weights for policy 0, policy_version 30757 (0.0008) +[2024-12-28 15:03:18,944][100720] Fps is (10 sec: 22527.7, 60 sec: 23552.0, 300 sec: 23742.9). Total num frames: 126017536. Throughput: 0: 5890.9. Samples: 21493672. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:03:18,945][100720] Avg episode reward: [(0, '4.393')] +[2024-12-28 15:03:18,963][100934] Updated weights for policy 0, policy_version 30767 (0.0008) +[2024-12-28 15:03:20,913][100934] Updated weights for policy 0, policy_version 30777 (0.0009) +[2024-12-28 15:03:22,758][100934] Updated weights for policy 0, policy_version 30787 (0.0008) +[2024-12-28 15:03:23,944][100720] Fps is (10 sec: 22118.7, 60 sec: 23483.7, 300 sec: 23673.5). Total num frames: 126132224. Throughput: 0: 5902.8. Samples: 21526038. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:03:23,945][100720] Avg episode reward: [(0, '4.686')] +[2024-12-28 15:03:24,347][100934] Updated weights for policy 0, policy_version 30797 (0.0007) +[2024-12-28 15:03:25,931][100934] Updated weights for policy 0, policy_version 30807 (0.0007) +[2024-12-28 15:03:27,548][100934] Updated weights for policy 0, policy_version 30817 (0.0007) +[2024-12-28 15:03:28,944][100720] Fps is (10 sec: 24166.5, 60 sec: 23483.7, 300 sec: 23659.6). Total num frames: 126259200. Throughput: 0: 5934.1. Samples: 21564482. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:03:28,945][100720] Avg episode reward: [(0, '4.402')] +[2024-12-28 15:03:29,161][100934] Updated weights for policy 0, policy_version 30827 (0.0007) +[2024-12-28 15:03:30,729][100934] Updated weights for policy 0, policy_version 30837 (0.0006) +[2024-12-28 15:03:32,330][100934] Updated weights for policy 0, policy_version 30847 (0.0007) +[2024-12-28 15:03:33,905][100934] Updated weights for policy 0, policy_version 30857 (0.0008) +[2024-12-28 15:03:33,944][100720] Fps is (10 sec: 25804.8, 60 sec: 23893.4, 300 sec: 23645.7). Total num frames: 126390272. Throughput: 0: 5934.5. Samples: 21583968. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:03:33,945][100720] Avg episode reward: [(0, '4.534')] +[2024-12-28 15:03:35,467][100934] Updated weights for policy 0, policy_version 30867 (0.0008) +[2024-12-28 15:03:37,241][100934] Updated weights for policy 0, policy_version 30877 (0.0008) +[2024-12-28 15:03:38,945][100720] Fps is (10 sec: 24984.7, 60 sec: 24098.0, 300 sec: 23590.2). Total num frames: 126509056. Throughput: 0: 6025.2. Samples: 21621092. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:03:38,946][100720] Avg episode reward: [(0, '4.456')] +[2024-12-28 15:03:39,102][100934] Updated weights for policy 0, policy_version 30887 (0.0007) +[2024-12-28 15:03:40,981][100934] Updated weights for policy 0, policy_version 30897 (0.0008) +[2024-12-28 15:03:42,898][100934] Updated weights for policy 0, policy_version 30907 (0.0008) +[2024-12-28 15:03:43,944][100720] Fps is (10 sec: 22527.8, 60 sec: 23756.8, 300 sec: 23493.0). Total num frames: 126615552. Throughput: 0: 6029.1. Samples: 21653498. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:03:43,945][100720] Avg episode reward: [(0, '4.418')] +[2024-12-28 15:03:44,823][100934] Updated weights for policy 0, policy_version 30917 (0.0009) +[2024-12-28 15:03:46,730][100934] Updated weights for policy 0, policy_version 30927 (0.0007) +[2024-12-28 15:03:48,350][100934] Updated weights for policy 0, policy_version 30937 (0.0007) +[2024-12-28 15:03:48,944][100720] Fps is (10 sec: 22119.4, 60 sec: 23620.3, 300 sec: 23437.5). Total num frames: 126730240. Throughput: 0: 5962.3. Samples: 21669750. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:03:48,945][100720] Avg episode reward: [(0, '4.607')] +[2024-12-28 15:03:50,105][100934] Updated weights for policy 0, policy_version 30947 (0.0007) +[2024-12-28 15:03:51,725][100934] Updated weights for policy 0, policy_version 30957 (0.0007) +[2024-12-28 15:03:53,302][100934] Updated weights for policy 0, policy_version 30967 (0.0007) +[2024-12-28 15:03:53,944][100720] Fps is (10 sec: 24166.5, 60 sec: 23893.3, 300 sec: 23409.7). Total num frames: 126857216. Throughput: 0: 5923.1. Samples: 21707094. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 15:03:53,945][100720] Avg episode reward: [(0, '4.496')] +[2024-12-28 15:03:53,950][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000030971_126857216.pth... +[2024-12-28 15:03:53,980][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000029616_121307136.pth +[2024-12-28 15:03:54,878][100934] Updated weights for policy 0, policy_version 30977 (0.0007) +[2024-12-28 15:03:56,531][100934] Updated weights for policy 0, policy_version 30987 (0.0006) +[2024-12-28 15:03:58,155][100934] Updated weights for policy 0, policy_version 30997 (0.0007) +[2024-12-28 15:03:58,944][100720] Fps is (10 sec: 25394.9, 60 sec: 24234.6, 300 sec: 23437.4). Total num frames: 126984192. Throughput: 0: 5944.4. Samples: 21745216. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 15:03:58,945][100720] Avg episode reward: [(0, '4.276')] +[2024-12-28 15:03:59,733][100934] Updated weights for policy 0, policy_version 31007 (0.0007) +[2024-12-28 15:04:01,363][100934] Updated weights for policy 0, policy_version 31017 (0.0007) +[2024-12-28 15:04:02,945][100934] Updated weights for policy 0, policy_version 31027 (0.0006) +[2024-12-28 15:04:03,944][100720] Fps is (10 sec: 25395.3, 60 sec: 24166.4, 300 sec: 23493.0). Total num frames: 127111168. Throughput: 0: 6015.9. Samples: 21764386. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 15:04:03,945][100720] Avg episode reward: [(0, '4.422')] +[2024-12-28 15:04:04,533][100934] Updated weights for policy 0, policy_version 31037 (0.0007) +[2024-12-28 15:04:06,168][100934] Updated weights for policy 0, policy_version 31047 (0.0007) +[2024-12-28 15:04:07,806][100934] Updated weights for policy 0, policy_version 31057 (0.0007) +[2024-12-28 15:04:08,944][100720] Fps is (10 sec: 24985.9, 60 sec: 24029.9, 300 sec: 23479.1). Total num frames: 127234048. Throughput: 0: 6139.3. Samples: 21802308. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 15:04:08,945][100720] Avg episode reward: [(0, '4.396')] +[2024-12-28 15:04:09,453][100934] Updated weights for policy 0, policy_version 31067 (0.0007) +[2024-12-28 15:04:11,106][100934] Updated weights for policy 0, policy_version 31077 (0.0007) +[2024-12-28 15:04:12,741][100934] Updated weights for policy 0, policy_version 31087 (0.0008) +[2024-12-28 15:04:13,944][100720] Fps is (10 sec: 24985.6, 60 sec: 24166.4, 300 sec: 23520.8). Total num frames: 127361024. Throughput: 0: 6117.6. Samples: 21839774. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:04:13,945][100720] Avg episode reward: [(0, '4.402')] +[2024-12-28 15:04:14,394][100934] Updated weights for policy 0, policy_version 31097 (0.0007) +[2024-12-28 15:04:16,030][100934] Updated weights for policy 0, policy_version 31107 (0.0007) +[2024-12-28 15:04:17,633][100934] Updated weights for policy 0, policy_version 31117 (0.0008) +[2024-12-28 15:04:18,944][100720] Fps is (10 sec: 24985.6, 60 sec: 24439.5, 300 sec: 23548.5). Total num frames: 127483904. Throughput: 0: 6103.6. Samples: 21858632. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:04:18,945][100720] Avg episode reward: [(0, '4.426')] +[2024-12-28 15:04:19,312][100934] Updated weights for policy 0, policy_version 31127 (0.0010) +[2024-12-28 15:04:20,986][100934] Updated weights for policy 0, policy_version 31137 (0.0008) +[2024-12-28 15:04:22,747][100934] Updated weights for policy 0, policy_version 31147 (0.0009) +[2024-12-28 15:04:23,944][100720] Fps is (10 sec: 24166.4, 60 sec: 24507.7, 300 sec: 23506.9). Total num frames: 127602688. Throughput: 0: 6080.7. Samples: 21894722. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:04:23,945][100720] Avg episode reward: [(0, '4.368')] +[2024-12-28 15:04:24,624][100934] Updated weights for policy 0, policy_version 31157 (0.0009) +[2024-12-28 15:04:26,590][100934] Updated weights for policy 0, policy_version 31167 (0.0009) +[2024-12-28 15:04:28,524][100934] Updated weights for policy 0, policy_version 31177 (0.0008) +[2024-12-28 15:04:28,944][100720] Fps is (10 sec: 22528.1, 60 sec: 24166.5, 300 sec: 23409.7). Total num frames: 127709184. Throughput: 0: 6069.8. Samples: 21926638. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2024-12-28 15:04:28,945][100720] Avg episode reward: [(0, '4.503')] +[2024-12-28 15:04:30,527][100934] Updated weights for policy 0, policy_version 31187 (0.0008) +[2024-12-28 15:04:32,405][100934] Updated weights for policy 0, policy_version 31197 (0.0008) +[2024-12-28 15:04:33,944][100720] Fps is (10 sec: 21298.9, 60 sec: 23756.8, 300 sec: 23312.5). Total num frames: 127815680. Throughput: 0: 6059.6. Samples: 21942432. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 15:04:33,946][100720] Avg episode reward: [(0, '4.627')] +[2024-12-28 15:04:34,142][100934] Updated weights for policy 0, policy_version 31207 (0.0008) +[2024-12-28 15:04:35,754][100934] Updated weights for policy 0, policy_version 31217 (0.0007) +[2024-12-28 15:04:37,359][100934] Updated weights for policy 0, policy_version 31227 (0.0007) +[2024-12-28 15:04:38,944][100720] Fps is (10 sec: 23347.1, 60 sec: 23893.5, 300 sec: 23298.6). Total num frames: 127942656. Throughput: 0: 6046.6. Samples: 21979192. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 15:04:38,945][100720] Avg episode reward: [(0, '4.689')] +[2024-12-28 15:04:39,021][100934] Updated weights for policy 0, policy_version 31237 (0.0006) +[2024-12-28 15:04:40,649][100934] Updated weights for policy 0, policy_version 31247 (0.0007) +[2024-12-28 15:04:42,259][100934] Updated weights for policy 0, policy_version 31257 (0.0008) +[2024-12-28 15:04:43,858][100934] Updated weights for policy 0, policy_version 31267 (0.0007) +[2024-12-28 15:04:43,944][100720] Fps is (10 sec: 25395.4, 60 sec: 24234.7, 300 sec: 23312.5). Total num frames: 128069632. Throughput: 0: 6042.7. Samples: 22017138. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 15:04:43,945][100720] Avg episode reward: [(0, '4.369')] +[2024-12-28 15:04:45,491][100934] Updated weights for policy 0, policy_version 31277 (0.0007) +[2024-12-28 15:04:47,145][100934] Updated weights for policy 0, policy_version 31287 (0.0008) +[2024-12-28 15:04:48,795][100934] Updated weights for policy 0, policy_version 31297 (0.0009) +[2024-12-28 15:04:48,944][100720] Fps is (10 sec: 24985.6, 60 sec: 24371.2, 300 sec: 23340.3). Total num frames: 128192512. Throughput: 0: 6034.6. Samples: 22035944. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2024-12-28 15:04:48,945][100720] Avg episode reward: [(0, '4.347')] +[2024-12-28 15:04:50,759][100934] Updated weights for policy 0, policy_version 31307 (0.0008) +[2024-12-28 15:04:52,640][100934] Updated weights for policy 0, policy_version 31317 (0.0008) +[2024-12-28 15:04:53,944][100720] Fps is (10 sec: 22937.3, 60 sec: 24029.8, 300 sec: 23298.6). Total num frames: 128299008. Throughput: 0: 5932.4. Samples: 22069266. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:04:53,945][100720] Avg episode reward: [(0, '4.439')] +[2024-12-28 15:04:54,603][100934] Updated weights for policy 0, policy_version 31327 (0.0009) +[2024-12-28 15:04:56,509][100934] Updated weights for policy 0, policy_version 31337 (0.0008) +[2024-12-28 15:04:58,425][100934] Updated weights for policy 0, policy_version 31347 (0.0009) +[2024-12-28 15:04:58,944][100720] Fps is (10 sec: 21299.2, 60 sec: 23688.6, 300 sec: 23201.4). Total num frames: 128405504. Throughput: 0: 5815.5. Samples: 22101472. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:04:58,945][100720] Avg episode reward: [(0, '4.513')] +[2024-12-28 15:05:00,160][100934] Updated weights for policy 0, policy_version 31357 (0.0008) +[2024-12-28 15:05:01,779][100934] Updated weights for policy 0, policy_version 31367 (0.0008) +[2024-12-28 15:05:03,450][100934] Updated weights for policy 0, policy_version 31377 (0.0007) +[2024-12-28 15:05:03,944][100720] Fps is (10 sec: 22937.9, 60 sec: 23620.3, 300 sec: 23173.6). Total num frames: 128528384. Throughput: 0: 5812.3. Samples: 22120184. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:05:03,945][100720] Avg episode reward: [(0, '4.476')] +[2024-12-28 15:05:05,402][100934] Updated weights for policy 0, policy_version 31387 (0.0009) +[2024-12-28 15:05:07,279][100934] Updated weights for policy 0, policy_version 31397 (0.0008) +[2024-12-28 15:05:08,944][100720] Fps is (10 sec: 22937.6, 60 sec: 23347.2, 300 sec: 23145.9). Total num frames: 128634880. Throughput: 0: 5749.3. Samples: 22153440. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:05:08,945][100720] Avg episode reward: [(0, '4.675')] +[2024-12-28 15:05:09,168][100934] Updated weights for policy 0, policy_version 31407 (0.0009) +[2024-12-28 15:05:11,186][100934] Updated weights for policy 0, policy_version 31417 (0.0009) +[2024-12-28 15:05:13,195][100934] Updated weights for policy 0, policy_version 31427 (0.0009) +[2024-12-28 15:05:13,944][100720] Fps is (10 sec: 20889.0, 60 sec: 22937.5, 300 sec: 23104.2). Total num frames: 128737280. Throughput: 0: 5729.6. Samples: 22184472. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:05:13,945][100720] Avg episode reward: [(0, '4.385')] +[2024-12-28 15:05:15,192][100934] Updated weights for policy 0, policy_version 31437 (0.0008) +[2024-12-28 15:05:17,215][100934] Updated weights for policy 0, policy_version 31447 (0.0009) +[2024-12-28 15:05:18,944][100720] Fps is (10 sec: 20889.7, 60 sec: 22664.5, 300 sec: 23007.0). Total num frames: 128843776. Throughput: 0: 5714.1. Samples: 22199564. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2024-12-28 15:05:18,945][100720] Avg episode reward: [(0, '4.498')] +[2024-12-28 15:05:19,076][100934] Updated weights for policy 0, policy_version 31457 (0.0008) +[2024-12-28 15:05:20,723][100934] Updated weights for policy 0, policy_version 31467 (0.0006) +[2024-12-28 15:05:21,849][100720] Keyboard interrupt detected in the event loop EvtLoop [Runner_EvtLoop, process=main process 100720], exiting... +[2024-12-28 15:05:21,850][100918] Stopping Batcher_0... +[2024-12-28 15:05:21,851][100918] Loop batcher_evt_loop terminating... +[2024-12-28 15:05:21,850][100720] Runner profile tree view: +main_loop: 3565.8569 +[2024-12-28 15:05:21,851][100918] Saving /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000031474_128917504.pth... +[2024-12-28 15:05:21,851][100720] Collected {0: 128917504}, FPS: 24934.2 +[2024-12-28 15:05:21,864][100934] Weights refcount: 2 0 +[2024-12-28 15:05:21,865][100934] Stopping InferenceWorker_p0-w0... +[2024-12-28 15:05:21,866][100934] Loop inference_proc0-0_evt_loop terminating... +[2024-12-28 15:05:21,886][100938] Stopping RolloutWorker_w3... +[2024-12-28 15:05:21,886][100939] Stopping RolloutWorker_w5... +[2024-12-28 15:05:21,886][100939] Loop rollout_proc5_evt_loop terminating... +[2024-12-28 15:05:21,886][100938] Loop rollout_proc3_evt_loop terminating... +[2024-12-28 15:05:21,887][100936] Stopping RolloutWorker_w2... +[2024-12-28 15:05:21,887][100936] Loop rollout_proc2_evt_loop terminating... +[2024-12-28 15:05:21,888][100941] Stopping RolloutWorker_w6... +[2024-12-28 15:05:21,888][100937] Stopping RolloutWorker_w1... +[2024-12-28 15:05:21,888][100941] Loop rollout_proc6_evt_loop terminating... +[2024-12-28 15:05:21,888][100940] Stopping RolloutWorker_w4... +[2024-12-28 15:05:21,888][100937] Loop rollout_proc1_evt_loop terminating... +[2024-12-28 15:05:21,888][100940] Loop rollout_proc4_evt_loop terminating... +[2024-12-28 15:05:21,889][100935] Stopping RolloutWorker_w0... +[2024-12-28 15:05:21,889][100942] Stopping RolloutWorker_w7... +[2024-12-28 15:05:21,889][100935] Loop rollout_proc0_evt_loop terminating... +[2024-12-28 15:05:21,890][100942] Loop rollout_proc7_evt_loop terminating... +[2024-12-28 15:05:21,891][100918] Removing /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000030286_124051456.pth +[2024-12-28 15:05:21,894][100918] Stopping LearnerWorker_p0... +[2024-12-28 15:05:21,895][100918] Loop learner_proc0_evt_loop terminating... +[2024-12-28 15:05:24,631][100720] Loading existing experiment configuration from /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/config.json +[2024-12-28 15:05:24,632][100720] Overriding arg 'num_workers' with value 1 passed from command line +[2024-12-28 15:05:24,633][100720] Adding new argument 'no_render'=True that is not in the saved config file! +[2024-12-28 15:05:24,633][100720] Adding new argument 'save_video'=True that is not in the saved config file! 
+[2024-12-28 15:05:24,634][100720] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! +[2024-12-28 15:05:24,634][100720] Adding new argument 'video_name'=None that is not in the saved config file! +[2024-12-28 15:05:24,635][100720] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! +[2024-12-28 15:05:24,636][100720] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! +[2024-12-28 15:05:24,636][100720] Adding new argument 'push_to_hub'=False that is not in the saved config file! +[2024-12-28 15:05:24,637][100720] Adding new argument 'hf_repository'=None that is not in the saved config file! +[2024-12-28 15:05:24,637][100720] Adding new argument 'policy_index'=0 that is not in the saved config file! +[2024-12-28 15:05:24,637][100720] Adding new argument 'eval_deterministic'=False that is not in the saved config file! +[2024-12-28 15:05:24,638][100720] Adding new argument 'train_script'=None that is not in the saved config file! +[2024-12-28 15:05:24,638][100720] Adding new argument 'enjoy_script'=None that is not in the saved config file! +[2024-12-28 15:05:24,639][100720] Using frameskip 1 and render_action_repeat=4 for evaluation +[2024-12-28 15:05:24,652][100720] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-12-28 15:05:24,654][100720] RunningMeanStd input shape: (3, 72, 128) +[2024-12-28 15:05:24,656][100720] RunningMeanStd input shape: (1,) +[2024-12-28 15:05:24,666][100720] ConvEncoder: input_channels=3 +[2024-12-28 15:05:24,727][100720] Conv encoder output size: 512 +[2024-12-28 15:05:24,729][100720] Policy head output size: 512 +[2024-12-28 15:05:25,369][100720] Loading state from checkpoint /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000031474_128917504.pth... +[2024-12-28 15:05:25,879][100720] Num frames 100... +[2024-12-28 15:05:25,974][100720] Num frames 200... +[2024-12-28 15:05:26,070][100720] Num frames 300... +[2024-12-28 15:05:26,200][100720] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840 +[2024-12-28 15:05:26,201][100720] Avg episode reward: 3.840, avg true_objective: 3.840 +[2024-12-28 15:05:26,223][100720] Num frames 400... +[2024-12-28 15:05:26,328][100720] Num frames 500... +[2024-12-28 15:05:26,415][100720] Num frames 600... +[2024-12-28 15:05:26,506][100720] Num frames 700... +[2024-12-28 15:05:26,620][100720] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840 +[2024-12-28 15:05:26,621][100720] Avg episode reward: 3.840, avg true_objective: 3.840 +[2024-12-28 15:05:26,652][100720] Num frames 800... +[2024-12-28 15:05:26,738][100720] Num frames 900... +[2024-12-28 15:05:26,834][100720] Num frames 1000... +[2024-12-28 15:05:26,925][100720] Num frames 1100... +[2024-12-28 15:05:27,014][100720] Num frames 1200... +[2024-12-28 15:05:27,081][100720] Avg episode rewards: #0: 4.387, true rewards: #0: 4.053 +[2024-12-28 15:05:27,082][100720] Avg episode reward: 4.387, avg true_objective: 4.053 +[2024-12-28 15:05:27,157][100720] Num frames 1300... +[2024-12-28 15:05:27,243][100720] Num frames 1400... +[2024-12-28 15:05:27,332][100720] Num frames 1500... +[2024-12-28 15:05:27,427][100720] Num frames 1600... +[2024-12-28 15:05:27,478][100720] Avg episode rewards: #0: 4.250, true rewards: #0: 4.000 +[2024-12-28 15:05:27,479][100720] Avg episode reward: 4.250, avg true_objective: 4.000 +[2024-12-28 15:05:27,567][100720] Num frames 1700... +[2024-12-28 15:05:27,655][100720] Num frames 1800... 
+[2024-12-28 15:05:27,758][100720] Avg episode rewards: #0: 3.912, true rewards: #0: 3.712 +[2024-12-28 15:05:27,759][100720] Avg episode reward: 3.912, avg true_objective: 3.712 +[2024-12-28 15:05:27,800][100720] Num frames 1900... +[2024-12-28 15:05:27,888][100720] Num frames 2000... +[2024-12-28 15:05:27,973][100720] Num frames 2100... +[2024-12-28 15:05:28,062][100720] Num frames 2200... +[2024-12-28 15:05:28,149][100720] Num frames 2300... +[2024-12-28 15:05:28,205][100720] Avg episode rewards: #0: 4.173, true rewards: #0: 3.840 +[2024-12-28 15:05:28,206][100720] Avg episode reward: 4.173, avg true_objective: 3.840 +[2024-12-28 15:05:28,299][100720] Num frames 2400... +[2024-12-28 15:05:28,400][100720] Num frames 2500... +[2024-12-28 15:05:28,495][100720] Num frames 2600... +[2024-12-28 15:05:28,585][100720] Num frames 2700... +[2024-12-28 15:05:28,656][100720] Avg episode rewards: #0: 4.314, true rewards: #0: 3.886 +[2024-12-28 15:05:28,657][100720] Avg episode reward: 4.314, avg true_objective: 3.886 +[2024-12-28 15:05:28,729][100720] Num frames 2800... +[2024-12-28 15:05:28,818][100720] Num frames 2900... +[2024-12-28 15:05:28,905][100720] Num frames 3000... +[2024-12-28 15:05:28,992][100720] Num frames 3100... +[2024-12-28 15:05:29,049][100720] Avg episode rewards: #0: 4.255, true rewards: #0: 3.880 +[2024-12-28 15:05:29,049][100720] Avg episode reward: 4.255, avg true_objective: 3.880 +[2024-12-28 15:05:29,135][100720] Num frames 3200... +[2024-12-28 15:05:29,221][100720] Num frames 3300... +[2024-12-28 15:05:29,316][100720] Num frames 3400... +[2024-12-28 15:05:29,454][100720] Avg episode rewards: #0: 4.209, true rewards: #0: 3.876 +[2024-12-28 15:05:29,455][100720] Avg episode reward: 4.209, avg true_objective: 3.876 +[2024-12-28 15:05:29,467][100720] Num frames 3500... +[2024-12-28 15:05:29,561][100720] Num frames 3600... +[2024-12-28 15:05:29,651][100720] Num frames 3700... +[2024-12-28 15:05:29,741][100720] Num frames 3800... +[2024-12-28 15:05:29,859][100720] Avg episode rewards: #0: 4.172, true rewards: #0: 3.872 +[2024-12-28 15:05:29,860][100720] Avg episode reward: 4.172, avg true_objective: 3.872 +[2024-12-28 15:05:34,080][100720] Replay video saved to /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/replay.mp4! +[2024-12-28 15:06:47,425][100720] Loading existing experiment configuration from /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/config.json +[2024-12-28 15:06:47,426][100720] Overriding arg 'num_workers' with value 1 passed from command line +[2024-12-28 15:06:47,427][100720] Adding new argument 'no_render'=True that is not in the saved config file! +[2024-12-28 15:06:47,427][100720] Adding new argument 'save_video'=True that is not in the saved config file! +[2024-12-28 15:06:47,428][100720] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! +[2024-12-28 15:06:47,428][100720] Adding new argument 'video_name'=None that is not in the saved config file! +[2024-12-28 15:06:47,429][100720] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! +[2024-12-28 15:06:47,429][100720] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! +[2024-12-28 15:06:47,430][100720] Adding new argument 'push_to_hub'=True that is not in the saved config file! +[2024-12-28 15:06:47,431][100720] Adding new argument 'hf_repository'='Snorlax/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! 
+[2024-12-28 15:06:47,431][100720] Adding new argument 'policy_index'=0 that is not in the saved config file! +[2024-12-28 15:06:47,431][100720] Adding new argument 'eval_deterministic'=False that is not in the saved config file! +[2024-12-28 15:06:47,432][100720] Adding new argument 'train_script'=None that is not in the saved config file! +[2024-12-28 15:06:47,432][100720] Adding new argument 'enjoy_script'=None that is not in the saved config file! +[2024-12-28 15:06:47,433][100720] Using frameskip 1 and render_action_repeat=4 for evaluation +[2024-12-28 15:06:47,446][100720] RunningMeanStd input shape: (3, 72, 128) +[2024-12-28 15:06:47,447][100720] RunningMeanStd input shape: (1,) +[2024-12-28 15:06:47,453][100720] ConvEncoder: input_channels=3 +[2024-12-28 15:06:47,476][100720] Conv encoder output size: 512 +[2024-12-28 15:06:47,478][100720] Policy head output size: 512 +[2024-12-28 15:06:47,525][100720] Loading state from checkpoint /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/checkpoint_p0/checkpoint_000031474_128917504.pth... +[2024-12-28 15:06:47,876][100720] Num frames 100... +[2024-12-28 15:06:48,013][100720] Num frames 200... +[2024-12-28 15:06:48,142][100720] Num frames 300... +[2024-12-28 15:06:48,258][100720] Num frames 400... +[2024-12-28 15:06:48,334][100720] Avg episode rewards: #0: 4.160, true rewards: #0: 4.160 +[2024-12-28 15:06:48,335][100720] Avg episode reward: 4.160, avg true_objective: 4.160 +[2024-12-28 15:06:48,437][100720] Num frames 500... +[2024-12-28 15:06:48,550][100720] Num frames 600... +[2024-12-28 15:06:48,669][100720] Num frames 700... +[2024-12-28 15:06:48,790][100720] Num frames 800... +[2024-12-28 15:06:48,841][100720] Avg episode rewards: #0: 4.000, true rewards: #0: 4.000 +[2024-12-28 15:06:48,842][100720] Avg episode reward: 4.000, avg true_objective: 4.000 +[2024-12-28 15:06:48,966][100720] Num frames 900... +[2024-12-28 15:06:49,095][100720] Num frames 1000... +[2024-12-28 15:06:49,220][100720] Num frames 1100... +[2024-12-28 15:06:49,368][100720] Avg episode rewards: #0: 3.947, true rewards: #0: 3.947 +[2024-12-28 15:06:49,369][100720] Avg episode reward: 3.947, avg true_objective: 3.947 +[2024-12-28 15:06:49,388][100720] Num frames 1200... +[2024-12-28 15:06:49,502][100720] Num frames 1300... +[2024-12-28 15:06:49,621][100720] Num frames 1400... +[2024-12-28 15:06:49,739][100720] Num frames 1500... +[2024-12-28 15:06:49,856][100720] Num frames 1600... +[2024-12-28 15:06:49,948][100720] Avg episode rewards: #0: 4.330, true rewards: #0: 4.080 +[2024-12-28 15:06:49,949][100720] Avg episode reward: 4.330, avg true_objective: 4.080 +[2024-12-28 15:06:50,039][100720] Num frames 1700... +[2024-12-28 15:06:50,159][100720] Num frames 1800... +[2024-12-28 15:06:50,268][100720] Num frames 1900... +[2024-12-28 15:06:50,379][100720] Num frames 2000... +[2024-12-28 15:06:50,454][100720] Avg episode rewards: #0: 4.232, true rewards: #0: 4.032 +[2024-12-28 15:06:50,455][100720] Avg episode reward: 4.232, avg true_objective: 4.032 +[2024-12-28 15:06:50,559][100720] Num frames 2100... +[2024-12-28 15:06:50,675][100720] Num frames 2200... +[2024-12-28 15:06:50,785][100720] Num frames 2300... +[2024-12-28 15:06:50,905][100720] Num frames 2400... +[2024-12-28 15:06:51,029][100720] Avg episode rewards: #0: 4.440, true rewards: #0: 4.107 +[2024-12-28 15:06:51,030][100720] Avg episode reward: 4.440, avg true_objective: 4.107 +[2024-12-28 15:06:51,074][100720] Num frames 2500... +[2024-12-28 15:06:51,188][100720] Num frames 2600... 
+[2024-12-28 15:06:51,300][100720] Num frames 2700... +[2024-12-28 15:06:51,409][100720] Num frames 2800... +[2024-12-28 15:06:51,519][100720] Num frames 2900... +[2024-12-28 15:06:51,619][100720] Avg episode rewards: #0: 4.777, true rewards: #0: 4.206 +[2024-12-28 15:06:51,620][100720] Avg episode reward: 4.777, avg true_objective: 4.206 +[2024-12-28 15:06:51,679][100720] Num frames 3000... +[2024-12-28 15:06:51,788][100720] Num frames 3100... +[2024-12-28 15:06:51,908][100720] Num frames 3200... +[2024-12-28 15:06:52,025][100720] Num frames 3300... +[2024-12-28 15:06:52,107][100720] Avg episode rewards: #0: 4.660, true rewards: #0: 4.160 +[2024-12-28 15:06:52,108][100720] Avg episode reward: 4.660, avg true_objective: 4.160 +[2024-12-28 15:06:52,188][100720] Num frames 3400... +[2024-12-28 15:06:52,305][100720] Num frames 3500... +[2024-12-28 15:06:52,409][100720] Num frames 3600... +[2024-12-28 15:06:52,521][100720] Num frames 3700... +[2024-12-28 15:06:52,588][100720] Avg episode rewards: #0: 4.569, true rewards: #0: 4.124 +[2024-12-28 15:06:52,589][100720] Avg episode reward: 4.569, avg true_objective: 4.124 +[2024-12-28 15:06:52,688][100720] Num frames 3800... +[2024-12-28 15:06:52,796][100720] Num frames 3900... +[2024-12-28 15:06:52,909][100720] Num frames 4000... +[2024-12-28 15:06:53,021][100720] Num frames 4100... +[2024-12-28 15:06:53,145][100720] Avg episode rewards: #0: 4.660, true rewards: #0: 4.160 +[2024-12-28 15:06:53,146][100720] Avg episode reward: 4.660, avg true_objective: 4.160 +[2024-12-28 15:06:57,427][100720] Replay video saved to /home/zhangsz/Workspace/HF_Deep_RL_Course/unit8/train_dir/default_experiment/replay.mp4!
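
Note on the final portion of the log: it records two evaluation ("enjoy") runs of the trained policy. Both load checkpoint_000031474_128917504.pth, roll out 10 episodes, and write replay.mp4; the second run additionally sets push_to_hub=True with hf_repository='Snorlax/rl_course_vizdoom_health_gathering_supreme'. A minimal sketch of how such a run is typically launched with Sample Factory 2.x follows. The argument names mirror the "Adding new argument ..." lines above; the env id doom_health_gathering_supreme, the parse_sf_args/parse_full_cfg/enjoy imports, and the prior registration of the VizDoom envs (e.g. via a register_vizdoom_components()-style helper from the course notebook) are assumptions about the library and the course setup, not facts recorded in this log.

    # Hedged sketch, not taken from the log: re-running the evaluation that
    # produced the "Replay video saved to .../replay.mp4!" lines above.
    # Assumes Sample Factory 2.x and that the VizDoom envs/models have already
    # been registered (the course notebook does this before calling enjoy).
    from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args  # assumed SF2 API
    from sample_factory.enjoy import enjoy                                  # assumed SF2 API

    def evaluate_and_push():
        # Argument names follow the "Adding new argument ..." entries in the log;
        # the env id is an assumption inferred from the Hub repository name.
        argv = [
            "--env=doom_health_gathering_supreme",
            "--train_dir=train_dir",            # point at the train_dir used for training
            "--experiment=default_experiment",
            "--num_workers=1",
            "--no_render",
            "--save_video",
            "--max_num_episodes=10",
            "--push_to_hub",
            "--hf_repository=Snorlax/rl_course_vizdoom_health_gathering_supreme",
        ]
        parser, _ = parse_sf_args(argv=argv, evaluation=True)
        cfg = parse_full_cfg(parser, argv=argv)
        # Loads the most recent checkpoint from the experiment dir, plays episodes,
        # saves replay.mp4, and (with --push_to_hub) uploads the result.
        return enjoy(cfg)

    if __name__ == "__main__":
        evaluate_and_push()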