diff --git "a/sf_log.txt" "b/sf_log.txt"
--- "a/sf_log.txt"
+++ "b/sf_log.txt"
@@ -646,3 +646,1773 @@ main_loop: 115.9303
 [2024-09-30 00:28:24,946][1149865] Avg episode rewards: #0: 33.076, true rewards: #0: 13.476
 [2024-09-30 00:28:24,946][1149865] Avg episode reward: 33.076, avg true_objective: 13.476
 [2024-09-30 00:28:42,313][1149865] Replay video saved to /home/luyang/workspace/rl/train_dir/default_experiment/replay.mp4!
+[2024-09-30 00:29:17,655][1149865] The model has been pushed to https://huggingface.co/esperesa/rl_course_vizdoom_health_gathering_supreme
+[2024-09-30 00:33:37,627][1153456] Saving configuration to /home/luyang/workspace/rl/train_dir/default_experiment/config.json...
+[2024-09-30 00:33:37,631][1153456] Rollout worker 0 uses device cpu
+[2024-09-30 00:33:37,631][1153456] Rollout worker 1 uses device cpu
+[2024-09-30 00:33:37,631][1153456] Rollout worker 2 uses device cpu
+[2024-09-30 00:33:37,631][1153456] Rollout worker 3 uses device cpu
+[2024-09-30 00:33:37,632][1153456] Rollout worker 4 uses device cpu
+[2024-09-30 00:33:37,632][1153456] Rollout worker 5 uses device cpu
+[2024-09-30 00:33:37,632][1153456] Rollout worker 6 uses device cpu
+[2024-09-30 00:33:37,632][1153456] Rollout worker 7 uses device cpu
+[2024-09-30 00:33:37,632][1153456] Rollout worker 8 uses device cpu
+[2024-09-30 00:33:37,632][1153456] Rollout worker 9 uses device cpu
+[2024-09-30 00:33:37,632][1153456] Rollout worker 10 uses device cpu
+[2024-09-30 00:33:37,632][1153456] Rollout worker 11 uses device cpu
+[2024-09-30 00:33:37,632][1153456] Rollout worker 12 uses device cpu
+[2024-09-30 00:33:37,632][1153456] Rollout worker 13 uses device cpu
+[2024-09-30 00:33:37,632][1153456] Rollout worker 14 uses device cpu
+[2024-09-30 00:33:37,633][1153456] Rollout worker 15 uses device cpu
+[2024-09-30 00:33:37,744][1153456] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2024-09-30 00:33:37,745][1153456] InferenceWorker_p0-w0: min num requests: 5
+[2024-09-30 00:33:37,828][1153456] Starting all processes...
+[2024-09-30 00:33:37,828][1153456] Starting process learner_proc0
+[2024-09-30 00:33:39,432][1153456] Starting all processes...
+[2024-09-30 00:33:39,436][1153683] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2024-09-30 00:33:39,437][1153683] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
+[2024-09-30 00:33:39,438][1153456] Starting process inference_proc0-0
+[2024-09-30 00:33:39,438][1153456] Starting process rollout_proc0
+[2024-09-30 00:33:39,438][1153456] Starting process rollout_proc1
+[2024-09-30 00:33:39,441][1153456] Starting process rollout_proc2
+[2024-09-30 00:33:39,441][1153456] Starting process rollout_proc3
+[2024-09-30 00:33:39,444][1153456] Starting process rollout_proc4
+[2024-09-30 00:33:39,444][1153456] Starting process rollout_proc5
+[2024-09-30 00:33:39,444][1153456] Starting process rollout_proc6
+[2024-09-30 00:33:39,444][1153456] Starting process rollout_proc7
+[2024-09-30 00:33:39,447][1153456] Starting process rollout_proc8
+[2024-09-30 00:33:39,454][1153456] Starting process rollout_proc9
+[2024-09-30 00:33:39,454][1153456] Starting process rollout_proc10
+[2024-09-30 00:33:39,455][1153456] Starting process rollout_proc11
+[2024-09-30 00:33:39,455][1153456] Starting process rollout_proc12
+[2024-09-30 00:33:39,458][1153456] Starting process rollout_proc13
+[2024-09-30 00:33:39,460][1153456] Starting process rollout_proc14
+[2024-09-30 00:33:39,478][1153683] Num visible devices: 1
+[2024-09-30 00:33:39,484][1153683] Starting seed is not provided
+[2024-09-30 00:33:39,484][1153683] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2024-09-30 00:33:39,485][1153683] Initializing actor-critic model on device cuda:0
+[2024-09-30 00:33:39,485][1153683] RunningMeanStd input shape: (3, 72, 128)
+[2024-09-30 00:33:39,485][1153683] RunningMeanStd input shape: (1,)
+[2024-09-30 00:33:39,494][1153683] ConvEncoder: input_channels=3
+[2024-09-30 00:33:39,566][1153683] Conv encoder output size: 512
+[2024-09-30 00:33:39,566][1153683] Policy head output size: 512
+[2024-09-30 00:33:39,578][1153683] Created Actor Critic model with architecture:
+[2024-09-30 00:33:39,578][1153683] ActorCriticSharedWeights(
+  (obs_normalizer): ObservationNormalizer(
+    (running_mean_std): RunningMeanStdDictInPlace(
+      (running_mean_std): ModuleDict(
+        (obs): RunningMeanStdInPlace()
+      )
+    )
+  )
+  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
+  (encoder): VizdoomEncoder(
+    (basic_encoder): ConvEncoder(
+      (enc): RecursiveScriptModule(
+        original_name=ConvEncoderImpl
+        (conv_head): RecursiveScriptModule(
+          original_name=Sequential
+          (0): RecursiveScriptModule(original_name=Conv2d)
+          (1): RecursiveScriptModule(original_name=ELU)
+          (2): RecursiveScriptModule(original_name=Conv2d)
+          (3): RecursiveScriptModule(original_name=ELU)
+          (4): RecursiveScriptModule(original_name=Conv2d)
+          (5): RecursiveScriptModule(original_name=ELU)
+        )
+        (mlp_layers): RecursiveScriptModule(
+          original_name=Sequential
+          (0): RecursiveScriptModule(original_name=Linear)
+          (1): RecursiveScriptModule(original_name=ELU)
+        )
+      )
+    )
+  )
+  (core): ModelCoreRNN(
+    (core): GRU(512, 512)
+  )
+  (decoder): MlpDecoder(
+    (mlp): Identity()
+  )
+  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
+  (action_parameterization): ActionParameterizationDefault(
+    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
+  )
+)
+[2024-09-30 00:33:39,783][1153683] Using optimizer
+[2024-09-30 00:33:40,533][1153683] Loading state from checkpoint /home/luyang/workspace/rl/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2024-09-30 00:33:40,555][1153683] Loading model from checkpoint
+[2024-09-30 00:33:40,556][1153683] Loaded experiment state at self.train_step=978, self.env_steps=4005888
+[2024-09-30 00:33:40,556][1153683] Initialized policy 0 weights for model version 978
+[2024-09-30 00:33:40,558][1153683] LearnerWorker_p0 finished initialization!
+[2024-09-30 00:33:40,558][1153683] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2024-09-30 00:33:41,097][1153456] Starting process rollout_proc15
+[2024-09-30 00:33:41,101][1153813] Worker 7 uses CPU cores [42, 43, 44, 45, 46, 47]
+[2024-09-30 00:33:41,103][1153807] Worker 1 uses CPU cores [6, 7, 8, 9, 10, 11]
+[2024-09-30 00:33:41,104][1153814] Worker 8 uses CPU cores [48, 49, 50, 51, 52, 53]
+[2024-09-30 00:33:41,118][1153809] Worker 2 uses CPU cores [12, 13, 14, 15, 16, 17]
+[2024-09-30 00:33:41,128][1153812] Worker 5 uses CPU cores [30, 31, 32, 33, 34, 35]
+[2024-09-30 00:33:41,140][1153808] Worker 3 uses CPU cores [18, 19, 20, 21, 22, 23]
+[2024-09-30 00:33:41,157][1153816] Worker 10 uses CPU cores [60, 61, 62, 63, 64, 65]
+[2024-09-30 00:33:41,178][1153881] Worker 14 uses CPU cores [84, 85, 86, 87, 88, 89]
+[2024-09-30 00:33:41,182][1153882] Worker 11 uses CPU cores [66, 67, 68, 69, 70, 71]
+[2024-09-30 00:33:41,188][1153806] Worker 0 uses CPU cores [0, 1, 2, 3, 4, 5]
+[2024-09-30 00:33:41,197][1153880] Worker 13 uses CPU cores [78, 79, 80, 81, 82, 83]
+[2024-09-30 00:33:41,206][1153810] Worker 4 uses CPU cores [24, 25, 26, 27, 28, 29]
+[2024-09-30 00:33:41,206][1153883] Worker 12 uses CPU cores [72, 73, 74, 75, 76, 77]
+[2024-09-30 00:33:41,210][1153805] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2024-09-30 00:33:41,210][1153805] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
+[2024-09-30 00:33:41,223][1153811] Worker 6 uses CPU cores [36, 37, 38, 39, 40, 41]
+[2024-09-30 00:33:41,240][1153815] Worker 9 uses CPU cores [54, 55, 56, 57, 58, 59]
+[2024-09-30 00:33:41,305][1153805] Num visible devices: 1
+[2024-09-30 00:33:41,407][1153805] RunningMeanStd input shape: (3, 72, 128)
+[2024-09-30 00:33:41,408][1153805] RunningMeanStd input shape: (1,)
+[2024-09-30 00:33:41,416][1153805] ConvEncoder: input_channels=3
+[2024-09-30 00:33:41,488][1153805] Conv encoder output size: 512
+[2024-09-30 00:33:41,488][1153805] Policy head output size: 512
+[2024-09-30 00:33:42,519][1153456] Inference worker 0-0 is ready!
+[2024-09-30 00:33:42,519][1153456] All inference workers are ready! Signal rollout workers to start!
+[2024-09-30 00:33:42,520][1153456] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4005888. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2024-09-30 00:33:42,521][1154909] Worker 15 uses CPU cores [90, 91, 92, 93, 94, 95]
+[2024-09-30 00:33:42,544][1153882] Doom resolution: 160x120, resize resolution: (128, 72)
+[2024-09-30 00:33:42,545][1153808] Doom resolution: 160x120, resize resolution: (128, 72)
+[2024-09-30 00:33:42,546][1153806] Doom resolution: 160x120, resize resolution: (128, 72)
+[2024-09-30 00:33:42,546][1153810] Doom resolution: 160x120, resize resolution: (128, 72)
+[2024-09-30 00:33:42,550][1153814] Doom resolution: 160x120, resize resolution: (128, 72)
+[2024-09-30 00:33:42,551][1153883] Doom resolution: 160x120, resize resolution: (128, 72)
+[2024-09-30 00:33:42,552][1153813] Doom resolution: 160x120, resize resolution: (128, 72)
+[2024-09-30 00:33:42,554][1153811] Doom resolution: 160x120, resize resolution: (128, 72)
+[2024-09-30 00:33:42,555][1153881] Doom resolution: 160x120, resize resolution: (128, 72)
+[2024-09-30 00:33:42,565][1153807] Doom resolution: 160x120, resize resolution: (128, 72)
+[2024-09-30 00:33:42,565][1153815] Doom resolution: 160x120, resize resolution: (128, 72)
+[2024-09-30 00:33:42,567][1153809] Doom resolution: 160x120, resize resolution: (128, 72)
+[2024-09-30 00:33:42,567][1153816] Doom resolution: 160x120, resize resolution: (128, 72)
+[2024-09-30 00:33:42,570][1153880] Doom resolution: 160x120, resize resolution: (128, 72)
+[2024-09-30 00:33:42,570][1153812] Doom resolution: 160x120, resize resolution: (128, 72)
+[2024-09-30 00:33:42,640][1154909] Doom resolution: 160x120, resize resolution: (128, 72)
+[2024-09-30 00:33:42,822][1153882] Decorrelating experience for 0 frames...
+[2024-09-30 00:33:42,823][1153808] Decorrelating experience for 0 frames...
+[2024-09-30 00:33:42,827][1153810] Decorrelating experience for 0 frames...
+[2024-09-30 00:33:42,827][1153883] Decorrelating experience for 0 frames...
+[2024-09-30 00:33:42,829][1153814] Decorrelating experience for 0 frames...
+[2024-09-30 00:33:42,833][1153811] Decorrelating experience for 0 frames...
+[2024-09-30 00:33:42,842][1153815] Decorrelating experience for 0 frames...
+[2024-09-30 00:33:42,846][1153809] Decorrelating experience for 0 frames...
+[2024-09-30 00:33:43,038][1153882] Decorrelating experience for 32 frames...
+[2024-09-30 00:33:43,040][1153813] Decorrelating experience for 0 frames...
+[2024-09-30 00:33:43,042][1153814] Decorrelating experience for 32 frames...
+[2024-09-30 00:33:43,043][1153883] Decorrelating experience for 32 frames...
+[2024-09-30 00:33:43,057][1154909] Decorrelating experience for 0 frames...
+[2024-09-30 00:33:43,059][1153809] Decorrelating experience for 32 frames...
+[2024-09-30 00:33:43,079][1153881] Decorrelating experience for 0 frames...
+[2024-09-30 00:33:43,256][1153815] Decorrelating experience for 32 frames...
+[2024-09-30 00:33:43,267][1153882] Decorrelating experience for 64 frames...
+[2024-09-30 00:33:43,272][1154909] Decorrelating experience for 32 frames...
+[2024-09-30 00:33:43,274][1153806] Decorrelating experience for 0 frames...
+[2024-09-30 00:33:43,286][1153809] Decorrelating experience for 64 frames...
+[2024-09-30 00:33:43,295][1153807] Decorrelating experience for 0 frames...
+[2024-09-30 00:33:43,296][1153881] Decorrelating experience for 32 frames...
+[2024-09-30 00:33:43,296][1153811] Decorrelating experience for 32 frames...
+[2024-09-30 00:33:43,297][1153808] Decorrelating experience for 32 frames...
+[2024-09-30 00:33:43,478][1153815] Decorrelating experience for 64 frames...
+[2024-09-30 00:33:43,496][1153810] Decorrelating experience for 32 frames...
+[2024-09-30 00:33:43,498][1154909] Decorrelating experience for 64 frames...
+[2024-09-30 00:33:43,524][1153808] Decorrelating experience for 64 frames...
+[2024-09-30 00:33:43,524][1153809] Decorrelating experience for 96 frames...
+[2024-09-30 00:33:43,526][1153814] Decorrelating experience for 64 frames...
+[2024-09-30 00:33:43,528][1153883] Decorrelating experience for 64 frames...
+[2024-09-30 00:33:43,533][1153806] Decorrelating experience for 32 frames...
+[2024-09-30 00:33:43,550][1153816] Decorrelating experience for 0 frames...
+[2024-09-30 00:33:43,704][1153880] Decorrelating experience for 0 frames...
+[2024-09-30 00:33:43,711][1153882] Decorrelating experience for 96 frames...
+[2024-09-30 00:33:43,714][1153815] Decorrelating experience for 96 frames...
+[2024-09-30 00:33:43,757][1153806] Decorrelating experience for 64 frames...
+[2024-09-30 00:33:43,759][1153808] Decorrelating experience for 96 frames...
+[2024-09-30 00:33:43,773][1153881] Decorrelating experience for 64 frames...
+[2024-09-30 00:33:43,773][1153810] Decorrelating experience for 64 frames...
+[2024-09-30 00:33:43,915][1153880] Decorrelating experience for 32 frames...
+[2024-09-30 00:33:43,924][1153811] Decorrelating experience for 64 frames...
+[2024-09-30 00:33:43,935][1154909] Decorrelating experience for 96 frames...
+[2024-09-30 00:33:43,960][1153807] Decorrelating experience for 32 frames...
+[2024-09-30 00:33:44,012][1153881] Decorrelating experience for 96 frames...
+[2024-09-30 00:33:44,012][1153810] Decorrelating experience for 96 frames...
+[2024-09-30 00:33:44,013][1153813] Decorrelating experience for 32 frames...
+[2024-09-30 00:33:44,017][1153816] Decorrelating experience for 32 frames...
+[2024-09-30 00:33:44,021][1153815] Decorrelating experience for 128 frames...
+[2024-09-30 00:33:44,143][1153806] Decorrelating experience for 96 frames...
+[2024-09-30 00:33:44,158][1153812] Decorrelating experience for 0 frames...
+[2024-09-30 00:33:44,165][1153883] Decorrelating experience for 96 frames...
+[2024-09-30 00:33:44,234][1153809] Decorrelating experience for 128 frames...
+[2024-09-30 00:33:44,235][1153807] Decorrelating experience for 64 frames...
+[2024-09-30 00:33:44,240][1153816] Decorrelating experience for 64 frames...
+[2024-09-30 00:33:44,242][1153813] Decorrelating experience for 64 frames...
+[2024-09-30 00:33:44,266][1153808] Decorrelating experience for 128 frames...
+[2024-09-30 00:33:44,344][1153810] Decorrelating experience for 128 frames...
+[2024-09-30 00:33:44,396][1154909] Decorrelating experience for 128 frames...
+[2024-09-30 00:33:44,472][1153881] Decorrelating experience for 128 frames...
+[2024-09-30 00:33:44,480][1153813] Decorrelating experience for 96 frames...
+[2024-09-30 00:33:44,488][1153815] Decorrelating experience for 160 frames...
+[2024-09-30 00:33:44,488][1153816] Decorrelating experience for 96 frames...
+[2024-09-30 00:33:44,495][1153880] Decorrelating experience for 64 frames...
+[2024-09-30 00:33:44,567][1153812] Decorrelating experience for 32 frames...
+[2024-09-30 00:33:44,609][1153811] Decorrelating experience for 96 frames...
+[2024-09-30 00:33:44,701][1153806] Decorrelating experience for 128 frames...
+[2024-09-30 00:33:44,704][1154909] Decorrelating experience for 160 frames...
+[2024-09-30 00:33:44,730][1153880] Decorrelating experience for 96 frames...
+[2024-09-30 00:33:44,738][1153883] Decorrelating experience for 128 frames...
+[2024-09-30 00:33:44,738][1153881] Decorrelating experience for 160 frames...
+[2024-09-30 00:33:44,792][1153816] Decorrelating experience for 128 frames...
+[2024-09-30 00:33:44,793][1153812] Decorrelating experience for 64 frames...
+[2024-09-30 00:33:44,804][1153810] Decorrelating experience for 160 frames...
+[2024-09-30 00:33:44,819][1153814] Decorrelating experience for 96 frames...
+[2024-09-30 00:33:44,919][1153882] Decorrelating experience for 128 frames...
+[2024-09-30 00:33:44,958][1153806] Decorrelating experience for 160 frames...
+[2024-09-30 00:33:44,990][1153813] Decorrelating experience for 128 frames...
+[2024-09-30 00:33:45,006][1153883] Decorrelating experience for 160 frames...
+[2024-09-30 00:33:45,028][1153812] Decorrelating experience for 96 frames...
+[2024-09-30 00:33:45,034][1153880] Decorrelating experience for 128 frames...
+[2024-09-30 00:33:45,075][1153815] Decorrelating experience for 192 frames...
+[2024-09-30 00:33:45,131][1153808] Decorrelating experience for 160 frames...
+[2024-09-30 00:33:45,168][1153814] Decorrelating experience for 128 frames...
+[2024-09-30 00:33:45,178][1153882] Decorrelating experience for 160 frames...
+[2024-09-30 00:33:45,228][1153806] Decorrelating experience for 192 frames...
+[2024-09-30 00:33:45,247][1153813] Decorrelating experience for 160 frames...
+[2024-09-30 00:33:45,271][1153883] Decorrelating experience for 192 frames...
+[2024-09-30 00:33:45,326][1153807] Decorrelating experience for 96 frames...
+[2024-09-30 00:33:45,328][1153812] Decorrelating experience for 128 frames...
+[2024-09-30 00:33:45,362][1153809] Decorrelating experience for 160 frames...
+[2024-09-30 00:33:45,401][1153816] Decorrelating experience for 160 frames...
+[2024-09-30 00:33:45,426][1153814] Decorrelating experience for 160 frames...
+[2024-09-30 00:33:45,497][1153880] Decorrelating experience for 160 frames...
+[2024-09-30 00:33:45,526][1153806] Decorrelating experience for 224 frames...
+[2024-09-30 00:33:45,554][1153808] Decorrelating experience for 192 frames...
+[2024-09-30 00:33:45,584][1154909] Decorrelating experience for 192 frames...
+[2024-09-30 00:33:45,592][1153810] Decorrelating experience for 192 frames...
+[2024-09-30 00:33:45,609][1153812] Decorrelating experience for 160 frames...
+[2024-09-30 00:33:45,614][1153815] Decorrelating experience for 224 frames...
+[2024-09-30 00:33:45,624][1153807] Decorrelating experience for 128 frames...
+[2024-09-30 00:33:45,684][1153813] Decorrelating experience for 192 frames...
+[2024-09-30 00:33:45,773][1153882] Decorrelating experience for 192 frames...
+[2024-09-30 00:33:45,807][1153814] Decorrelating experience for 192 frames...
+[2024-09-30 00:33:45,856][1153808] Decorrelating experience for 224 frames...
+[2024-09-30 00:33:45,856][1153883] Decorrelating experience for 224 frames...
+[2024-09-30 00:33:45,879][1153810] Decorrelating experience for 224 frames...
+[2024-09-30 00:33:45,885][1154909] Decorrelating experience for 224 frames...
+[2024-09-30 00:33:45,894][1153807] Decorrelating experience for 160 frames...
+[2024-09-30 00:33:45,931][1153880] Decorrelating experience for 192 frames...
+[2024-09-30 00:33:45,962][1153813] Decorrelating experience for 224 frames...
+[2024-09-30 00:33:45,987][1153456] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2024-09-30 00:33:46,018][1153811] Decorrelating experience for 128 frames...
+[2024-09-30 00:33:46,056][1153882] Decorrelating experience for 224 frames...
+[2024-09-30 00:33:46,068][1153881] Decorrelating experience for 192 frames...
+[2024-09-30 00:33:46,129][1153812] Decorrelating experience for 192 frames...
+[2024-09-30 00:33:46,207][1153880] Decorrelating experience for 224 frames...
+[2024-09-30 00:33:46,239][1153809] Decorrelating experience for 192 frames...
+[2024-09-30 00:33:46,277][1153811] Decorrelating experience for 160 frames...
+[2024-09-30 00:33:46,324][1153807] Decorrelating experience for 192 frames...
+[2024-09-30 00:33:46,350][1153683] Signal inference workers to stop experience collection...
+[2024-09-30 00:33:46,354][1153805] InferenceWorker_p0-w0: stopping experience collection
+[2024-09-30 00:33:46,413][1153812] Decorrelating experience for 224 frames...
+[2024-09-30 00:33:46,456][1153881] Decorrelating experience for 224 frames...
+[2024-09-30 00:33:46,488][1153816] Decorrelating experience for 192 frames...
+[2024-09-30 00:33:46,546][1153809] Decorrelating experience for 224 frames...
+[2024-09-30 00:33:46,551][1153811] Decorrelating experience for 192 frames...
+[2024-09-30 00:33:46,711][1153807] Decorrelating experience for 224 frames...
+[2024-09-30 00:33:46,743][1153814] Decorrelating experience for 224 frames...
+[2024-09-30 00:33:46,789][1153816] Decorrelating experience for 224 frames...
+[2024-09-30 00:33:46,978][1153811] Decorrelating experience for 224 frames...
+[2024-09-30 00:33:47,381][1153683] Signal inference workers to resume experience collection...
+[2024-09-30 00:33:47,382][1153805] InferenceWorker_p0-w0: resuming experience collection
+[2024-09-30 00:33:48,488][1153805] Updated weights for policy 0, policy_version 989 (0.0124)
+[2024-09-30 00:33:49,140][1153805] Updated weights for policy 0, policy_version 999 (0.0006)
+[2024-09-30 00:33:49,805][1153805] Updated weights for policy 0, policy_version 1009 (0.0006)
+[2024-09-30 00:33:50,347][1153805] Updated weights for policy 0, policy_version 1019 (0.0006)
+[2024-09-30 00:33:50,892][1153805] Updated weights for policy 0, policy_version 1029 (0.0006)
+[2024-09-30 00:33:50,987][1153456] Fps is (10 sec: 25156.3, 60 sec: 25156.3, 300 sec: 25156.3). Total num frames: 4218880. Throughput: 0: 5607.8. Samples: 47480. Policy #0 lag: (min: 0.0, avg: 2.5, max: 5.0)
+[2024-09-30 00:33:50,987][1153456] Avg episode reward: [(0, '20.825')]
+[2024-09-30 00:33:51,478][1153805] Updated weights for policy 0, policy_version 1039 (0.0006)
+[2024-09-30 00:33:52,018][1153805] Updated weights for policy 0, policy_version 1049 (0.0006)
+[2024-09-30 00:33:52,549][1153805] Updated weights for policy 0, policy_version 1059 (0.0006)
+[2024-09-30 00:33:53,078][1153805] Updated weights for policy 0, policy_version 1069 (0.0006)
+[2024-09-30 00:33:53,599][1153805] Updated weights for policy 0, policy_version 1079 (0.0006)
+[2024-09-30 00:33:54,116][1153805] Updated weights for policy 0, policy_version 1089 (0.0006)
+[2024-09-30 00:33:54,662][1153805] Updated weights for policy 0, policy_version 1099 (0.0006)
+[2024-09-30 00:33:55,159][1153805] Updated weights for policy 0, policy_version 1109 (0.0006)
+[2024-09-30 00:33:55,702][1153805] Updated weights for policy 0, policy_version 1119 (0.0006)
+[2024-09-30 00:33:55,987][1153456] Fps is (10 sec: 59801.2, 60 sec: 44406.8, 300 sec: 44406.8). Total num frames: 4603904. Throughput: 0: 7749.4. Samples: 104360. Policy #0 lag: (min: 0.0, avg: 2.3, max: 6.0)
+[2024-09-30 00:33:55,987][1153456] Avg episode reward: [(0, '24.186')]
+[2024-09-30 00:33:56,193][1153805] Updated weights for policy 0, policy_version 1129 (0.0006)
+[2024-09-30 00:33:56,744][1153805] Updated weights for policy 0, policy_version 1139 (0.0006)
+[2024-09-30 00:33:57,289][1153805] Updated weights for policy 0, policy_version 1149 (0.0006)
+[2024-09-30 00:33:57,735][1153456] Heartbeat connected on Batcher_0
+[2024-09-30 00:33:57,739][1153456] Heartbeat connected on LearnerWorker_p0
+[2024-09-30 00:33:57,747][1153456] Heartbeat connected on InferenceWorker_p0-w0
+[2024-09-30 00:33:57,751][1153456] Heartbeat connected on RolloutWorker_w0
+[2024-09-30 00:33:57,755][1153456] Heartbeat connected on RolloutWorker_w1
+[2024-09-30 00:33:57,757][1153456] Heartbeat connected on RolloutWorker_w2
+[2024-09-30 00:33:57,769][1153456] Heartbeat connected on RolloutWorker_w3
+[2024-09-30 00:33:57,770][1153456] Heartbeat connected on RolloutWorker_w5
+[2024-09-30 00:33:57,772][1153456] Heartbeat connected on RolloutWorker_w4
+[2024-09-30 00:33:57,780][1153456] Heartbeat connected on RolloutWorker_w6
+[2024-09-30 00:33:57,784][1153456] Heartbeat connected on RolloutWorker_w7
+[2024-09-30 00:33:57,786][1153456] Heartbeat connected on RolloutWorker_w8
+[2024-09-30 00:33:57,789][1153456] Heartbeat connected on RolloutWorker_w9
+[2024-09-30 00:33:57,792][1153456] Heartbeat connected on RolloutWorker_w10
+[2024-09-30 00:33:57,801][1153456] Heartbeat connected on RolloutWorker_w11
+[2024-09-30 00:33:57,803][1153456] Heartbeat connected on RolloutWorker_w12
+[2024-09-30 00:33:57,803][1153456] Heartbeat connected on RolloutWorker_w13
+[2024-09-30 00:33:57,817][1153805] Updated weights for policy 0, policy_version 1159 (0.0006)
+[2024-09-30 00:33:57,823][1153456] Heartbeat connected on RolloutWorker_w14
+[2024-09-30 00:33:57,831][1153456] Heartbeat connected on RolloutWorker_w15
+[2024-09-30 00:33:58,309][1153805] Updated weights for policy 0, policy_version 1169 (0.0006)
+[2024-09-30 00:33:58,804][1153805] Updated weights for policy 0, policy_version 1179 (0.0006)
+[2024-09-30 00:33:59,290][1153805] Updated weights for policy 0, policy_version 1189 (0.0006)
+[2024-09-30 00:33:59,795][1153805] Updated weights for policy 0, policy_version 1199 (0.0006)
+[2024-09-30 00:34:00,308][1153805] Updated weights for policy 0, policy_version 1209 (0.0006)
+[2024-09-30 00:34:00,813][1153805] Updated weights for policy 0, policy_version 1219 (0.0006)
+[2024-09-30 00:34:00,987][1153456] Fps is (10 sec: 78643.4, 60 sec: 54120.2, 300 sec: 54120.2). Total num frames: 5005312. Throughput: 0: 12071.2. Samples: 222916. Policy #0 lag: (min: 1.0, avg: 2.4, max: 6.0)
+[2024-09-30 00:34:00,987][1153456] Avg episode reward: [(0, '30.110')]
+[2024-09-30 00:34:00,988][1153683] Saving new best policy, reward=30.110!
+[2024-09-30 00:34:01,305][1153805] Updated weights for policy 0, policy_version 1229 (0.0006)
+[2024-09-30 00:34:01,811][1153805] Updated weights for policy 0, policy_version 1239 (0.0006)
+[2024-09-30 00:34:02,345][1153805] Updated weights for policy 0, policy_version 1249 (0.0006)
+[2024-09-30 00:34:02,857][1153805] Updated weights for policy 0, policy_version 1259 (0.0006)
+[2024-09-30 00:34:03,366][1153805] Updated weights for policy 0, policy_version 1269 (0.0006)
+[2024-09-30 00:34:03,888][1153805] Updated weights for policy 0, policy_version 1279 (0.0006)
+[2024-09-30 00:34:04,411][1153805] Updated weights for policy 0, policy_version 1289 (0.0006)
+[2024-09-30 00:34:04,905][1153805] Updated weights for policy 0, policy_version 1299 (0.0006)
+[2024-09-30 00:34:05,396][1153805] Updated weights for policy 0, policy_version 1309 (0.0006)
+[2024-09-30 00:34:05,887][1153805] Updated weights for policy 0, policy_version 1319 (0.0006)
+[2024-09-30 00:34:05,987][1153456] Fps is (10 sec: 80691.1, 60 sec: 59868.8, 300 sec: 59868.8). Total num frames: 5410816. Throughput: 0: 14672.0. Samples: 344304. Policy #0 lag: (min: 0.0, avg: 2.3, max: 6.0)
+[2024-09-30 00:34:05,987][1153456] Avg episode reward: [(0, '26.263')]
+[2024-09-30 00:34:06,367][1153805] Updated weights for policy 0, policy_version 1329 (0.0006)
+[2024-09-30 00:34:06,902][1153805] Updated weights for policy 0, policy_version 1339 (0.0006)
+[2024-09-30 00:34:07,398][1153805] Updated weights for policy 0, policy_version 1349 (0.0006)
+[2024-09-30 00:34:07,857][1153805] Updated weights for policy 0, policy_version 1359 (0.0006)
+[2024-09-30 00:34:08,393][1153805] Updated weights for policy 0, policy_version 1369 (0.0006)
+[2024-09-30 00:34:08,923][1153805] Updated weights for policy 0, policy_version 1379 (0.0006)
+[2024-09-30 00:34:09,422][1153805] Updated weights for policy 0, policy_version 1389 (0.0006)
+[2024-09-30 00:34:09,921][1153805] Updated weights for policy 0, policy_version 1399 (0.0006)
+[2024-09-30 00:34:10,424][1153805] Updated weights for policy 0, policy_version 1409 (0.0006)
+[2024-09-30 00:34:10,932][1153805] Updated weights for policy 0, policy_version 1419 (0.0006)
+[2024-09-30 00:34:10,987][1153456] Fps is (10 sec: 81100.9, 60 sec: 63598.2, 300 sec: 63598.2). Total num frames: 5816320. Throughput: 0: 14244.4. Samples: 405492. Policy #0 lag: (min: 0.0, avg: 2.5, max: 7.0)
+[2024-09-30 00:34:10,987][1153456] Avg episode reward: [(0, '24.072')]
+[2024-09-30 00:34:11,427][1153805] Updated weights for policy 0, policy_version 1429 (0.0006)
+[2024-09-30 00:34:11,997][1153805] Updated weights for policy 0, policy_version 1439 (0.0006)
+[2024-09-30 00:34:12,604][1153805] Updated weights for policy 0, policy_version 1449 (0.0006)
+[2024-09-30 00:34:13,189][1153805] Updated weights for policy 0, policy_version 1459 (0.0006)
+[2024-09-30 00:34:13,760][1153805] Updated weights for policy 0, policy_version 1469 (0.0006)
+[2024-09-30 00:34:14,325][1153805] Updated weights for policy 0, policy_version 1479 (0.0006)
+[2024-09-30 00:34:14,867][1153805] Updated weights for policy 0, policy_version 1489 (0.0006)
+[2024-09-30 00:34:15,422][1153805] Updated weights for policy 0, policy_version 1499 (0.0006)
+[2024-09-30 00:34:15,925][1153805] Updated weights for policy 0, policy_version 1509 (0.0006)
+[2024-09-30 00:34:15,987][1153456] Fps is (10 sec: 77415.0, 60 sec: 65111.6, 300 sec: 65111.6). Total num frames: 6184960. Throughput: 0: 15510.1. Samples: 519072. Policy #0 lag: (min: 0.0, avg: 2.9, max: 6.0)
+[2024-09-30 00:34:15,987][1153456] Avg episode reward: [(0, '27.426')]
+[2024-09-30 00:34:16,430][1153805] Updated weights for policy 0, policy_version 1519 (0.0006)
+[2024-09-30 00:34:16,937][1153805] Updated weights for policy 0, policy_version 1529 (0.0006)
+[2024-09-30 00:34:17,445][1153805] Updated weights for policy 0, policy_version 1539 (0.0006)
+[2024-09-30 00:34:17,474][1153683] Signal inference workers to stop experience collection... (50 times)
+[2024-09-30 00:34:17,475][1153683] Signal inference workers to resume experience collection... (50 times)
+[2024-09-30 00:34:17,479][1153805] InferenceWorker_p0-w0: stopping experience collection (50 times)
+[2024-09-30 00:34:17,479][1153805] InferenceWorker_p0-w0: resuming experience collection (50 times)
+[2024-09-30 00:34:17,909][1153805] Updated weights for policy 0, policy_version 1549 (0.0006)
+[2024-09-30 00:34:18,408][1153805] Updated weights for policy 0, policy_version 1559 (0.0006)
+[2024-09-30 00:34:18,914][1153805] Updated weights for policy 0, policy_version 1569 (0.0006)
+[2024-09-30 00:34:19,420][1153805] Updated weights for policy 0, policy_version 1579 (0.0006)
+[2024-09-30 00:34:19,933][1153805] Updated weights for policy 0, policy_version 1589 (0.0006)
+[2024-09-30 00:34:20,433][1153805] Updated weights for policy 0, policy_version 1599 (0.0005)
+[2024-09-30 00:34:20,874][1153805] Updated weights for policy 0, policy_version 1609 (0.0006)
+[2024-09-30 00:34:20,987][1153456] Fps is (10 sec: 77824.6, 60 sec: 67296.5, 300 sec: 67296.5). Total num frames: 6594560. Throughput: 0: 16654.0. Samples: 640624. Policy #0 lag: (min: 0.0, avg: 2.2, max: 5.0)
+[2024-09-30 00:34:20,987][1153456] Avg episode reward: [(0, '26.127')]
+[2024-09-30 00:34:21,376][1153805] Updated weights for policy 0, policy_version 1619 (0.0006)
+[2024-09-30 00:34:21,913][1153805] Updated weights for policy 0, policy_version 1629 (0.0006)
+[2024-09-30 00:34:22,441][1153805] Updated weights for policy 0, policy_version 1639 (0.0005)
+[2024-09-30 00:34:22,968][1153805] Updated weights for policy 0, policy_version 1649 (0.0006)
+[2024-09-30 00:34:23,482][1153805] Updated weights for policy 0, policy_version 1659 (0.0006)
+[2024-09-30 00:34:23,975][1153805] Updated weights for policy 0, policy_version 1669 (0.0006)
+[2024-09-30 00:34:24,474][1153805] Updated weights for policy 0, policy_version 1679 (0.0006)
+[2024-09-30 00:34:25,003][1153805] Updated weights for policy 0, policy_version 1689 (0.0006)
+[2024-09-30 00:34:25,527][1153805] Updated weights for policy 0, policy_version 1699 (0.0006)
+[2024-09-30 00:34:25,987][1153456] Fps is (10 sec: 81099.9, 60 sec: 68789.9, 300 sec: 68789.9). Total num frames: 6995968. Throughput: 0: 16121.9. Samples: 700768. Policy #0 lag: (min: 0.0, avg: 2.0, max: 5.0)
+[2024-09-30 00:34:25,987][1153456] Avg episode reward: [(0, '27.073')]
+[2024-09-30 00:34:26,023][1153805] Updated weights for policy 0, policy_version 1709 (0.0006)
+[2024-09-30 00:34:26,528][1153805] Updated weights for policy 0, policy_version 1719 (0.0006)
+[2024-09-30 00:34:27,034][1153805] Updated weights for policy 0, policy_version 1729 (0.0006)
+[2024-09-30 00:34:27,559][1153805] Updated weights for policy 0, policy_version 1739 (0.0006)
+[2024-09-30 00:34:28,048][1153805] Updated weights for policy 0, policy_version 1749 (0.0006)
+[2024-09-30 00:34:28,561][1153805] Updated weights for policy 0, policy_version 1759 (0.0006)
+[2024-09-30 00:34:29,053][1153805] Updated weights for policy 0, policy_version 1769 (0.0006)
+[2024-09-30 00:34:29,700][1153805] Updated weights for policy 0, policy_version 1779 (0.0006)
+[2024-09-30 00:34:30,335][1153805] Updated weights for policy 0, policy_version 1789 (0.0006)
+[2024-09-30 00:34:30,926][1153805] Updated weights for policy 0, policy_version 1799 (0.0006)
+[2024-09-30 00:34:30,987][1153456] Fps is (10 sec: 77822.5, 60 sec: 69468.4, 300 sec: 69468.4). Total num frames: 7372800. Throughput: 0: 18231.6. Samples: 820424. Policy #0 lag: (min: 0.0, avg: 2.4, max: 5.0)
+[2024-09-30 00:34:30,987][1153456] Avg episode reward: [(0, '25.497')]
+[2024-09-30 00:34:31,523][1153805] Updated weights for policy 0, policy_version 1809 (0.0006)
+[2024-09-30 00:34:32,164][1153805] Updated weights for policy 0, policy_version 1819 (0.0006)
+[2024-09-30 00:34:32,809][1153805] Updated weights for policy 0, policy_version 1829 (0.0006)
+[2024-09-30 00:34:33,413][1153805] Updated weights for policy 0, policy_version 1839 (0.0006)
+[2024-09-30 00:34:34,008][1153805] Updated weights for policy 0, policy_version 1849 (0.0006)
+[2024-09-30 00:34:34,647][1153805] Updated weights for policy 0, policy_version 1859 (0.0006)
+[2024-09-30 00:34:35,275][1153805] Updated weights for policy 0, policy_version 1869 (0.0006)
+[2024-09-30 00:34:35,822][1153805] Updated weights for policy 0, policy_version 1879 (0.0007)
+[2024-09-30 00:34:35,987][1153456] Fps is (10 sec: 70861.3, 60 sec: 69177.4, 300 sec: 69177.4). Total num frames: 7704576. Throughput: 0: 19378.1. Samples: 919492. Policy #0 lag: (min: 0.0, avg: 3.0, max: 6.0)
+[2024-09-30 00:34:35,987][1153456] Avg episode reward: [(0, '27.382')]
+[2024-09-30 00:34:36,437][1153805] Updated weights for policy 0, policy_version 1889 (0.0006)
+[2024-09-30 00:34:36,983][1153805] Updated weights for policy 0, policy_version 1899 (0.0006)
+[2024-09-30 00:34:37,522][1153805] Updated weights for policy 0, policy_version 1909 (0.0006)
+[2024-09-30 00:34:38,151][1153805] Updated weights for policy 0, policy_version 1919 (0.0006)
+[2024-09-30 00:34:38,763][1153805] Updated weights for policy 0, policy_version 1929 (0.0006)
+[2024-09-30 00:34:39,293][1153805] Updated weights for policy 0, policy_version 1939 (0.0007)
+[2024-09-30 00:34:39,914][1153805] Updated weights for policy 0, policy_version 1949 (0.0006)
+[2024-09-30 00:34:40,439][1153805] Updated weights for policy 0, policy_version 1959 (0.0006)
+[2024-09-30 00:34:40,987][1153456] Fps is (10 sec: 68812.7, 60 sec: 69356.2, 300 sec: 69356.2). Total num frames: 8060928. Throughput: 0: 19301.7. Samples: 972936.
Policy #0 lag: (min: 0.0, avg: 2.8, max: 6.0) +[2024-09-30 00:34:40,987][1153456] Avg episode reward: [(0, '27.449')] +[2024-09-30 00:34:41,028][1153805] Updated weights for policy 0, policy_version 1969 (0.0006) +[2024-09-30 00:34:41,523][1153805] Updated weights for policy 0, policy_version 1979 (0.0006) +[2024-09-30 00:34:42,083][1153805] Updated weights for policy 0, policy_version 1989 (0.0006) +[2024-09-30 00:34:42,584][1153805] Updated weights for policy 0, policy_version 1999 (0.0006) +[2024-09-30 00:34:43,064][1153805] Updated weights for policy 0, policy_version 2009 (0.0007) +[2024-09-30 00:34:43,554][1153805] Updated weights for policy 0, policy_version 2019 (0.0006) +[2024-09-30 00:34:44,043][1153805] Updated weights for policy 0, policy_version 2029 (0.0006) +[2024-09-30 00:34:44,531][1153805] Updated weights for policy 0, policy_version 2039 (0.0006) +[2024-09-30 00:34:45,030][1153805] Updated weights for policy 0, policy_version 2049 (0.0006) +[2024-09-30 00:34:45,524][1153805] Updated weights for policy 0, policy_version 2059 (0.0006) +[2024-09-30 00:34:45,987][1153456] Fps is (10 sec: 76594.4, 60 sec: 74410.5, 300 sec: 70346.0). Total num frames: 8470528. Throughput: 0: 19239.1. Samples: 1088680. Policy #0 lag: (min: 0.0, avg: 2.3, max: 5.0) +[2024-09-30 00:34:45,987][1153456] Avg episode reward: [(0, '31.135')] +[2024-09-30 00:34:46,000][1153683] Saving new best policy, reward=31.135! 
+[2024-09-30 00:34:46,001][1153805] Updated weights for policy 0, policy_version 2069 (0.0006)
+[2024-09-30 00:34:46,586][1153805] Updated weights for policy 0, policy_version 2079 (0.0006)
+[2024-09-30 00:34:47,190][1153805] Updated weights for policy 0, policy_version 2089 (0.0006)
+[2024-09-30 00:34:47,788][1153805] Updated weights for policy 0, policy_version 2099 (0.0006)
+[2024-09-30 00:34:48,430][1153805] Updated weights for policy 0, policy_version 2109 (0.0006)
+[2024-09-30 00:34:49,037][1153805] Updated weights for policy 0, policy_version 2119 (0.0006)
+[2024-09-30 00:34:49,642][1153805] Updated weights for policy 0, policy_version 2129 (0.0006)
+[2024-09-30 00:34:50,271][1153805] Updated weights for policy 0, policy_version 2139 (0.0006)
+[2024-09-30 00:34:50,444][1153683] Signal inference workers to stop experience collection... (100 times)
+[2024-09-30 00:34:50,445][1153683] Signal inference workers to resume experience collection... (100 times)
+[2024-09-30 00:34:50,448][1153805] InferenceWorker_p0-w0: stopping experience collection (100 times)
+[2024-09-30 00:34:50,450][1153805] InferenceWorker_p0-w0: resuming experience collection (100 times)
+[2024-09-30 00:34:50,837][1153805] Updated weights for policy 0, policy_version 2149 (0.0006)
+[2024-09-30 00:34:50,987][1153456] Fps is (10 sec: 75366.2, 60 sec: 76595.0, 300 sec: 70234.0). Total num frames: 8814592. Throughput: 0: 18930.5. Samples: 1196180. Policy #0 lag: (min: 0.0, avg: 2.3, max: 6.0)
+[2024-09-30 00:34:50,987][1153456] Avg episode reward: [(0, '26.156')]
+[2024-09-30 00:34:51,431][1153805] Updated weights for policy 0, policy_version 2159 (0.0006)
+[2024-09-30 00:34:52,020][1153805] Updated weights for policy 0, policy_version 2169 (0.0006)
+[2024-09-30 00:34:52,617][1153805] Updated weights for policy 0, policy_version 2179 (0.0007)
+[2024-09-30 00:34:53,186][1153805] Updated weights for policy 0, policy_version 2189 (0.0006)
+[2024-09-30 00:34:53,758][1153805] Updated weights for policy 0, policy_version 2199 (0.0006)
+[2024-09-30 00:34:54,387][1153805] Updated weights for policy 0, policy_version 2209 (0.0006)
+[2024-09-30 00:34:54,943][1153805] Updated weights for policy 0, policy_version 2219 (0.0006)
+[2024-09-30 00:34:55,535][1153805] Updated weights for policy 0, policy_version 2229 (0.0006)
+[2024-09-30 00:34:55,987][1153456] Fps is (10 sec: 68813.5, 60 sec: 75912.6, 300 sec: 70137.4). Total num frames: 9158656. Throughput: 0: 18726.7. Samples: 1248192. Policy #0 lag: (min: 0.0, avg: 2.3, max: 6.0)
+[2024-09-30 00:34:55,987][1153456] Avg episode reward: [(0, '29.746')]
+[2024-09-30 00:34:56,118][1153805] Updated weights for policy 0, policy_version 2239 (0.0006)
+[2024-09-30 00:34:56,694][1153805] Updated weights for policy 0, policy_version 2249 (0.0006)
+[2024-09-30 00:34:57,301][1153805] Updated weights for policy 0, policy_version 2259 (0.0007)
+[2024-09-30 00:34:57,805][1153805] Updated weights for policy 0, policy_version 2269 (0.0006)
+[2024-09-30 00:34:58,374][1153805] Updated weights for policy 0, policy_version 2279 (0.0006)
+[2024-09-30 00:34:58,906][1153805] Updated weights for policy 0, policy_version 2289 (0.0006)
+[2024-09-30 00:34:59,442][1153805] Updated weights for policy 0, policy_version 2299 (0.0006)
+[2024-09-30 00:34:59,969][1153805] Updated weights for policy 0, policy_version 2309 (0.0006)
+[2024-09-30 00:35:00,474][1153805] Updated weights for policy 0, policy_version 2319 (0.0006)
+[2024-09-30 00:35:00,987][1153456] Fps is (10 sec: 72090.6, 60 sec: 75502.9, 300 sec: 70470.6). Total num frames: 9535488. Throughput: 0: 18617.4. Samples: 1356856. Policy #0 lag: (min: 0.0, avg: 2.2, max: 6.0)
+[2024-09-30 00:35:00,987][1153456] Avg episode reward: [(0, '31.327')]
+[2024-09-30 00:35:00,988][1153683] Saving new best policy, reward=31.327!
+[2024-09-30 00:35:01,059][1153805] Updated weights for policy 0, policy_version 2329 (0.0006)
+[2024-09-30 00:35:01,541][1153805] Updated weights for policy 0, policy_version 2339 (0.0006)
+[2024-09-30 00:35:02,014][1153805] Updated weights for policy 0, policy_version 2349 (0.0006)
+[2024-09-30 00:35:02,509][1153805] Updated weights for policy 0, policy_version 2359 (0.0006)
+[2024-09-30 00:35:03,003][1153805] Updated weights for policy 0, policy_version 2369 (0.0006)
+[2024-09-30 00:35:03,490][1153805] Updated weights for policy 0, policy_version 2379 (0.0006)
+[2024-09-30 00:35:03,986][1153805] Updated weights for policy 0, policy_version 2389 (0.0006)
+[2024-09-30 00:35:04,468][1153805] Updated weights for policy 0, policy_version 2399 (0.0006)
+[2024-09-30 00:35:04,947][1153805] Updated weights for policy 0, policy_version 2409 (0.0006)
+[2024-09-30 00:35:05,456][1153805] Updated weights for policy 0, policy_version 2419 (0.0006)
+[2024-09-30 00:35:05,890][1153805] Updated weights for policy 0, policy_version 2429 (0.0006)
+[2024-09-30 00:35:05,987][1153456] Fps is (10 sec: 79463.2, 60 sec: 75707.9, 300 sec: 71254.7). Total num frames: 9953280. Throughput: 0: 18651.0. Samples: 1479920. Policy #0 lag: (min: 0.0, avg: 2.7, max: 7.0)
+[2024-09-30 00:35:05,987][1153456] Avg episode reward: [(0, '29.677')]
+[2024-09-30 00:35:06,386][1153805] Updated weights for policy 0, policy_version 2439 (0.0006)
+[2024-09-30 00:35:06,873][1153805] Updated weights for policy 0, policy_version 2449 (0.0006)
+[2024-09-30 00:35:07,354][1153805] Updated weights for policy 0, policy_version 2459 (0.0006)
+[2024-09-30 00:35:07,831][1153805] Updated weights for policy 0, policy_version 2469 (0.0006)
+[2024-09-30 00:35:08,356][1153805] Updated weights for policy 0, policy_version 2479 (0.0006)
+[2024-09-30 00:35:08,867][1153805] Updated weights for policy 0, policy_version 2489 (0.0006)
+[2024-09-30 00:35:09,352][1153805] Updated weights for policy 0, policy_version 2499 (0.0006)
+[2024-09-30 00:35:09,840][1153805] Updated weights for policy 0, policy_version 2509 (0.0006)
+[2024-09-30 00:35:10,285][1153805] Updated weights for policy 0, policy_version 2519 (0.0006)
+[2024-09-30 00:35:10,741][1153805] Updated weights for policy 0, policy_version 2529 (0.0006)
+[2024-09-30 00:35:10,987][1153456] Fps is (10 sec: 84378.4, 60 sec: 76049.2, 300 sec: 72042.7). Total num frames: 10379264. Throughput: 0: 18705.0. Samples: 1542492. Policy #0 lag: (min: 0.0, avg: 2.6, max: 6.0)
+[2024-09-30 00:35:10,987][1153456] Avg episode reward: [(0, '27.577')]
+[2024-09-30 00:35:11,230][1153805] Updated weights for policy 0, policy_version 2539 (0.0006)
+[2024-09-30 00:35:11,721][1153805] Updated weights for policy 0, policy_version 2549 (0.0006)
+[2024-09-30 00:35:12,216][1153805] Updated weights for policy 0, policy_version 2559 (0.0006)
+[2024-09-30 00:35:12,704][1153805] Updated weights for policy 0, policy_version 2569 (0.0006)
+[2024-09-30 00:35:13,169][1153805] Updated weights for policy 0, policy_version 2579 (0.0006)
+[2024-09-30 00:35:13,664][1153805] Updated weights for policy 0, policy_version 2589 (0.0006)
+[2024-09-30 00:35:14,143][1153805] Updated weights for policy 0, policy_version 2599 (0.0006)
+[2024-09-30 00:35:14,638][1153805] Updated weights for policy 0, policy_version 2609 (0.0006)
+[2024-09-30 00:35:15,138][1153805] Updated weights for policy 0, policy_version 2619 (0.0006)
+[2024-09-30 00:35:15,632][1153805] Updated weights for policy 0, policy_version 2629 (0.0006)
+[2024-09-30 00:35:15,987][1153456] Fps is (10 sec: 84376.8, 60 sec: 76868.2, 300 sec: 72658.7). Total num frames: 10797056. Throughput: 0: 18878.2. Samples: 1669940. Policy #0 lag: (min: 0.0, avg: 2.5, max: 7.0)
+[2024-09-30 00:35:15,987][1153456] Avg episode reward: [(0, '27.694')]
+[2024-09-30 00:35:16,124][1153805] Updated weights for policy 0, policy_version 2639 (0.0006)
+[2024-09-30 00:35:16,624][1153805] Updated weights for policy 0, policy_version 2649 (0.0006)
+[2024-09-30 00:35:17,085][1153805] Updated weights for policy 0, policy_version 2659 (0.0006)
+[2024-09-30 00:35:17,585][1153805] Updated weights for policy 0, policy_version 2669 (0.0006)
+[2024-09-30 00:35:18,084][1153805] Updated weights for policy 0, policy_version 2679 (0.0006)
+[2024-09-30 00:35:18,552][1153805] Updated weights for policy 0, policy_version 2689 (0.0006)
+[2024-09-30 00:35:19,049][1153805] Updated weights for policy 0, policy_version 2699 (0.0006)
+[2024-09-30 00:35:19,542][1153805] Updated weights for policy 0, policy_version 2709 (0.0006)
+[2024-09-30 00:35:20,017][1153805] Updated weights for policy 0, policy_version 2719 (0.0006)
+[2024-09-30 00:35:20,512][1153805] Updated weights for policy 0, policy_version 2729 (0.0006)
+[2024-09-30 00:35:20,987][1153456] Fps is (10 sec: 83558.1, 60 sec: 77004.7, 300 sec: 73212.2). Total num frames: 11214848. Throughput: 0: 19474.6. Samples: 1795848. Policy #0 lag: (min: 0.0, avg: 2.7, max: 7.0)
+[2024-09-30 00:35:20,987][1153456] Avg episode reward: [(0, '29.085')]
+[2024-09-30 00:35:20,997][1153805] Updated weights for policy 0, policy_version 2739 (0.0006)
+[2024-09-30 00:35:21,452][1153805] Updated weights for policy 0, policy_version 2749 (0.0006)
+[2024-09-30 00:35:21,950][1153805] Updated weights for policy 0, policy_version 2759 (0.0006)
+[2024-09-30 00:35:22,446][1153805] Updated weights for policy 0, policy_version 2769 (0.0006)
+[2024-09-30 00:35:22,925][1153805] Updated weights for policy 0, policy_version 2779 (0.0006)
+[2024-09-30 00:35:23,377][1153805] Updated weights for policy 0, policy_version 2789 (0.0006)
+[2024-09-30 00:35:23,859][1153805] Updated weights for policy 0, policy_version 2799 (0.0006)
+[2024-09-30 00:35:24,310][1153805] Updated weights for policy 0, policy_version 2809 (0.0006)
+[2024-09-30 00:35:24,789][1153805] Updated weights for policy 0, policy_version 2819 (0.0006)
+[2024-09-30 00:35:25,289][1153805] Updated weights for policy 0, policy_version 2829 (0.0006)
+[2024-09-30 00:35:25,785][1153805] Updated weights for policy 0, policy_version 2839 (0.0006)
+[2024-09-30 00:35:25,987][1153456] Fps is (10 sec: 84786.5, 60 sec: 77482.6, 300 sec: 73830.8). Total num frames: 11644928. Throughput: 0: 19711.9. Samples: 1859972. Policy #0 lag: (min: 0.0, avg: 2.2, max: 4.0)
+[2024-09-30 00:35:25,987][1153456] Avg episode reward: [(0, '30.366')]
+[2024-09-30 00:35:26,243][1153805] Updated weights for policy 0, policy_version 2849 (0.0006)
+[2024-09-30 00:35:26,732][1153805] Updated weights for policy 0, policy_version 2859 (0.0006)
+[2024-09-30 00:35:27,237][1153805] Updated weights for policy 0, policy_version 2869 (0.0006)
+[2024-09-30 00:35:27,736][1153805] Updated weights for policy 0, policy_version 2879 (0.0006)
+[2024-09-30 00:35:28,219][1153805] Updated weights for policy 0, policy_version 2889 (0.0006)
+[2024-09-30 00:35:28,734][1153805] Updated weights for policy 0, policy_version 2899 (0.0006)
+[2024-09-30 00:35:29,198][1153805] Updated weights for policy 0, policy_version 2909 (0.0006)
+[2024-09-30 00:35:29,572][1153683] Signal inference workers to stop experience collection... (150 times)
+[2024-09-30 00:35:29,573][1153683] Signal inference workers to resume experience collection... (150 times)
+[2024-09-30 00:35:29,579][1153805] InferenceWorker_p0-w0: stopping experience collection (150 times)
+[2024-09-30 00:35:29,579][1153805] InferenceWorker_p0-w0: resuming experience collection (150 times)
+[2024-09-30 00:35:29,690][1153805] Updated weights for policy 0, policy_version 2919 (0.0006)
+[2024-09-30 00:35:30,178][1153805] Updated weights for policy 0, policy_version 2929 (0.0006)
+[2024-09-30 00:35:30,663][1153805] Updated weights for policy 0, policy_version 2939 (0.0006)
+[2024-09-30 00:35:30,987][1153456] Fps is (10 sec: 84785.6, 60 sec: 78165.3, 300 sec: 74279.2). Total num frames: 12062720. Throughput: 0: 19949.9. Samples: 1986428. Policy #0 lag: (min: 0.0, avg: 2.2, max: 7.0)
+[2024-09-30 00:35:30,987][1153456] Avg episode reward: [(0, '28.907')]
+[2024-09-30 00:35:31,146][1153805] Updated weights for policy 0, policy_version 2949 (0.0006)
+[2024-09-30 00:35:31,642][1153805] Updated weights for policy 0, policy_version 2959 (0.0006)
+[2024-09-30 00:35:32,128][1153805] Updated weights for policy 0, policy_version 2969 (0.0006)
+[2024-09-30 00:35:32,610][1153805] Updated weights for policy 0, policy_version 2979 (0.0006)
+[2024-09-30 00:35:33,108][1153805] Updated weights for policy 0, policy_version 2989 (0.0006)
+[2024-09-30 00:35:33,600][1153805] Updated weights for policy 0, policy_version 2999 (0.0006)
+[2024-09-30 00:35:34,079][1153805] Updated weights for policy 0, policy_version 3009 (0.0005)
+[2024-09-30 00:35:34,540][1153805] Updated weights for policy 0, policy_version 3019 (0.0006)
+[2024-09-30 00:35:34,991][1153805] Updated weights for policy 0, policy_version 3029 (0.0006)
+[2024-09-30 00:35:35,483][1153805] Updated weights for policy 0, policy_version 3039 (0.0006)
+[2024-09-30 00:35:35,982][1153805] Updated weights for policy 0, policy_version 3049 (0.0006)
+[2024-09-30 00:35:35,987][1153456] Fps is (10 sec: 84376.8, 60 sec: 79735.2, 300 sec: 74760.3). Total num frames: 12488704. Throughput: 0: 20386.2. Samples: 2113560. Policy #0 lag: (min: 0.0, avg: 2.1, max: 5.0)
+[2024-09-30 00:35:35,987][1153456] Avg episode reward: [(0, '30.962')]
+[2024-09-30 00:35:35,994][1153683] Saving /home/luyang/workspace/rl/train_dir/default_experiment/checkpoint_p0/checkpoint_000003049_12488704.pth...
+[2024-09-30 00:35:36,041][1153683] Removing /home/luyang/workspace/rl/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth
+[2024-09-30 00:35:36,477][1153805] Updated weights for policy 0, policy_version 3059 (0.0006)
+[2024-09-30 00:35:36,961][1153805] Updated weights for policy 0, policy_version 3069 (0.0006)
+[2024-09-30 00:35:37,449][1153805] Updated weights for policy 0, policy_version 3079 (0.0006)
+[2024-09-30 00:35:37,970][1153805] Updated weights for policy 0, policy_version 3089 (0.0006)
+[2024-09-30 00:35:38,472][1153805] Updated weights for policy 0, policy_version 3099 (0.0006)
+[2024-09-30 00:35:38,981][1153805] Updated weights for policy 0, policy_version 3109 (0.0006)
+[2024-09-30 00:35:39,484][1153805] Updated weights for policy 0, policy_version 3119 (0.0006)
+[2024-09-30 00:35:39,999][1153805] Updated weights for policy 0, policy_version 3129 (0.0006)
+[2024-09-30 00:35:40,501][1153805] Updated weights for policy 0, policy_version 3139 (0.0006)
+[2024-09-30 00:35:40,987][1153456] Fps is (10 sec: 83149.7, 60 sec: 80554.8, 300 sec: 75028.0). Total num frames: 12894208. Throughput: 0: 20605.2. Samples: 2175424. Policy #0 lag: (min: 0.0, avg: 2.2, max: 7.0)
+[2024-09-30 00:35:40,987][1153456] Avg episode reward: [(0, '29.171')]
+[2024-09-30 00:35:41,017][1153805] Updated weights for policy 0, policy_version 3149 (0.0006)
+[2024-09-30 00:35:41,519][1153805] Updated weights for policy 0, policy_version 3159 (0.0006)
+[2024-09-30 00:35:42,026][1153805] Updated weights for policy 0, policy_version 3169 (0.0006)
+[2024-09-30 00:35:42,545][1153805] Updated weights for policy 0, policy_version 3179 (0.0006)
+[2024-09-30 00:35:43,055][1153805] Updated weights for policy 0, policy_version 3189 (0.0006)
+[2024-09-30 00:35:43,559][1153805] Updated weights for policy 0, policy_version 3199 (0.0006)
+[2024-09-30 00:35:44,059][1153805] Updated weights for policy 0, policy_version 3209 (0.0006)
+[2024-09-30 00:35:44,590][1153805] Updated weights for policy 0, policy_version 3219 (0.0006)
+[2024-09-30 00:35:45,092][1153805] Updated weights for policy 0, policy_version 3229 (0.0006)
+[2024-09-30 00:35:45,591][1153805] Updated weights for policy 0, policy_version 3239 (0.0006)
+[2024-09-30 00:35:45,987][1153456] Fps is (10 sec: 81101.8, 60 sec: 80486.5, 300 sec: 75273.9). Total num frames: 13299712. Throughput: 0: 20868.2. Samples: 2295924. Policy #0 lag: (min: 0.0, avg: 1.5, max: 6.0)
+[2024-09-30 00:35:45,987][1153456] Avg episode reward: [(0, '32.326')]
+[2024-09-30 00:35:45,993][1153683] Saving new best policy, reward=32.326!
+[2024-09-30 00:35:46,083][1153805] Updated weights for policy 0, policy_version 3249 (0.0006)
+[2024-09-30 00:35:46,528][1153805] Updated weights for policy 0, policy_version 3259 (0.0006)
+[2024-09-30 00:35:47,059][1153805] Updated weights for policy 0, policy_version 3269 (0.0006)
+[2024-09-30 00:35:47,599][1153805] Updated weights for policy 0, policy_version 3279 (0.0006)
+[2024-09-30 00:35:48,021][1153805] Updated weights for policy 0, policy_version 3289 (0.0006)
+[2024-09-30 00:35:48,518][1153805] Updated weights for policy 0, policy_version 3299 (0.0006)
+[2024-09-30 00:35:49,019][1153805] Updated weights for policy 0, policy_version 3309 (0.0006)
+[2024-09-30 00:35:49,513][1153805] Updated weights for policy 0, policy_version 3319 (0.0006)
+[2024-09-30 00:35:50,010][1153805] Updated weights for policy 0, policy_version 3329 (0.0006)
+[2024-09-30 00:35:50,507][1153805] Updated weights for policy 0, policy_version 3339 (0.0006)
+[2024-09-30 00:35:50,987][1153456] Fps is (10 sec: 81510.6, 60 sec: 81578.9, 300 sec: 75532.6). Total num frames: 13709312. Throughput: 0: 20888.1. Samples: 2419888. Policy #0 lag: (min: 0.0, avg: 2.0, max: 6.0)
+[2024-09-30 00:35:50,987][1153456] Avg episode reward: [(0, '30.049')]
+[2024-09-30 00:35:51,006][1153805] Updated weights for policy 0, policy_version 3349 (0.0006)
+[2024-09-30 00:35:51,498][1153805] Updated weights for policy 0, policy_version 3359 (0.0006)
+[2024-09-30 00:35:51,998][1153805] Updated weights for policy 0, policy_version 3369 (0.0006)
+[2024-09-30 00:35:52,492][1153805] Updated weights for policy 0, policy_version 3379 (0.0006)
+[2024-09-30 00:35:52,990][1153805] Updated weights for policy 0, policy_version 3389 (0.0006)
+[2024-09-30 00:35:53,493][1153805] Updated weights for policy 0, policy_version 3399 (0.0006)
+[2024-09-30 00:35:54,000][1153805] Updated weights for policy 0, policy_version 3409 (0.0006)
+[2024-09-30 00:35:54,477][1153805] Updated weights for policy 0, policy_version 3419 (0.0006)
+[2024-09-30 00:35:54,996][1153805] Updated weights for policy 0, policy_version 3429 (0.0006)
+[2024-09-30 00:35:54,999][1153683] Signal inference workers to stop experience collection... (200 times)
+[2024-09-30 00:35:55,000][1153683] Signal inference workers to resume experience collection... (200 times)
+[2024-09-30 00:35:55,005][1153805] InferenceWorker_p0-w0: stopping experience collection (200 times)
+[2024-09-30 00:35:55,005][1153805] InferenceWorker_p0-w0: resuming experience collection (200 times)
+[2024-09-30 00:35:55,468][1153805] Updated weights for policy 0, policy_version 3439 (0.0006)
+[2024-09-30 00:35:55,971][1153805] Updated weights for policy 0, policy_version 3449 (0.0006)
+[2024-09-30 00:35:55,987][1153456] Fps is (10 sec: 82739.3, 60 sec: 82807.4, 300 sec: 75833.2). Total num frames: 14127104. Throughput: 0: 20878.1. Samples: 2482008. Policy #0 lag: (min: 0.0, avg: 1.5, max: 4.0)
+[2024-09-30 00:35:55,987][1153456] Avg episode reward: [(0, '30.073')]
+[2024-09-30 00:35:56,474][1153805] Updated weights for policy 0, policy_version 3459 (0.0006)
+[2024-09-30 00:35:56,968][1153805] Updated weights for policy 0, policy_version 3469 (0.0006)
+[2024-09-30 00:35:57,460][1153805] Updated weights for policy 0, policy_version 3479 (0.0006)
+[2024-09-30 00:35:57,956][1153805] Updated weights for policy 0, policy_version 3489 (0.0006)
+[2024-09-30 00:35:58,456][1153805] Updated weights for policy 0, policy_version 3499 (0.0006)
+[2024-09-30 00:35:58,951][1153805] Updated weights for policy 0, policy_version 3509 (0.0006)
+[2024-09-30 00:35:59,425][1153805] Updated weights for policy 0, policy_version 3519 (0.0006)
+[2024-09-30 00:35:59,942][1153805] Updated weights for policy 0, policy_version 3529 (0.0006)
+[2024-09-30 00:36:00,445][1153805] Updated weights for policy 0, policy_version 3539 (0.0006)
+[2024-09-30 00:36:00,868][1153805] Updated weights for policy 0, policy_version 3549 (0.0006)
+[2024-09-30 00:36:00,987][1153456] Fps is (10 sec: 83558.3, 60 sec: 83490.1, 300 sec: 76112.2). Total num frames: 14544896. Throughput: 0: 20815.6. Samples: 2606640. Policy #0 lag: (min: 0.0, avg: 2.8, max: 7.0)
+[2024-09-30 00:36:00,987][1153456] Avg episode reward: [(0, '29.898')]
+[2024-09-30 00:36:01,375][1153805] Updated weights for policy 0, policy_version 3559 (0.0006)
+[2024-09-30 00:36:01,870][1153805] Updated weights for policy 0, policy_version 3569 (0.0006)
+[2024-09-30 00:36:02,406][1153805] Updated weights for policy 0, policy_version 3579 (0.0006)
+[2024-09-30 00:36:02,918][1153805] Updated weights for policy 0, policy_version 3589 (0.0006)
+[2024-09-30 00:36:03,430][1153805] Updated weights for policy 0, policy_version 3599 (0.0006)
+[2024-09-30 00:36:03,939][1153805] Updated weights for policy 0, policy_version 3609 (0.0006)
+[2024-09-30 00:36:04,431][1153805] Updated weights for policy 0, policy_version 3619 (0.0006)
+[2024-09-30 00:36:04,919][1153805] Updated weights for policy 0, policy_version 3629 (0.0006)
+[2024-09-30 00:36:05,408][1153805] Updated weights for policy 0, policy_version 3639 (0.0006)
+[2024-09-30 00:36:05,896][1153805] Updated weights for policy 0, policy_version 3649 (0.0006)
+[2024-09-30 00:36:05,987][1153456] Fps is (10 sec: 82329.6, 60 sec: 83285.2, 300 sec: 76286.0). Total num frames: 14950400. Throughput: 0: 20749.6. Samples: 2729580. Policy #0 lag: (min: 0.0, avg: 3.0, max: 7.0)
+[2024-09-30 00:36:05,987][1153456] Avg episode reward: [(0, '28.929')]
+[2024-09-30 00:36:06,379][1153805] Updated weights for policy 0, policy_version 3659 (0.0006)
+[2024-09-30 00:36:06,854][1153805] Updated weights for policy 0, policy_version 3669 (0.0006)
+[2024-09-30 00:36:07,330][1153805] Updated weights for policy 0, policy_version 3679 (0.0006)
+[2024-09-30 00:36:07,826][1153805] Updated weights for policy 0, policy_version 3689 (0.0006)
+[2024-09-30 00:36:08,324][1153805] Updated weights for policy 0, policy_version 3699 (0.0006)
+[2024-09-30 00:36:08,795][1153805] Updated weights for policy 0, policy_version 3709 (0.0006)
+[2024-09-30 00:36:09,262][1153805] Updated weights for policy 0, policy_version 3719 (0.0006)
+[2024-09-30 00:36:09,783][1153805] Updated weights for policy 0, policy_version 3729 (0.0006)
+[2024-09-30 00:36:10,356][1153805] Updated weights for policy 0, policy_version 3739 (0.0006)
+[2024-09-30 00:36:10,949][1153683] Signal inference workers to stop experience collection... (250 times)
+[2024-09-30 00:36:10,953][1153805] InferenceWorker_p0-w0: stopping experience collection (250 times)
+[2024-09-30 00:36:10,961][1153683] Signal inference workers to resume experience collection... (250 times)
+[2024-09-30 00:36:10,962][1153805] InferenceWorker_p0-w0: resuming experience collection (250 times)
+[2024-09-30 00:36:10,963][1153805] Updated weights for policy 0, policy_version 3749 (0.0007)
+[2024-09-30 00:36:10,987][1153456] Fps is (10 sec: 81510.3, 60 sec: 83012.1, 300 sec: 76475.8). Total num frames: 15360000. Throughput: 0: 20739.1. Samples: 2793232. Policy #0 lag: (min: 0.0, avg: 2.4, max: 5.0)
+[2024-09-30 00:36:10,987][1153456] Avg episode reward: [(0, '26.908')]
+[2024-09-30 00:36:11,545][1153805] Updated weights for policy 0, policy_version 3759 (0.0006)
+[2024-09-30 00:36:12,099][1153805] Updated weights for policy 0, policy_version 3769 (0.0006)
+[2024-09-30 00:36:12,712][1153805] Updated weights for policy 0, policy_version 3779 (0.0006)
+[2024-09-30 00:36:13,288][1153805] Updated weights for policy 0, policy_version 3789 (0.0006)
+[2024-09-30 00:36:13,856][1153805] Updated weights for policy 0, policy_version 3799 (0.0006)
+[2024-09-30 00:36:14,503][1153805] Updated weights for policy 0, policy_version 3809 (0.0006)
+[2024-09-30 00:36:15,056][1153805] Updated weights for policy 0, policy_version 3819 (0.0006)
+[2024-09-30 00:36:15,624][1153805] Updated weights for policy 0, policy_version 3829 (0.0006)
+[2024-09-30 00:36:15,987][1153456] Fps is (10 sec: 75776.2, 60 sec: 81851.7, 300 sec: 76252.8). Total num frames: 15708160. Throughput: 0: 20355.5. Samples: 2902424. Policy #0 lag: (min: 0.0, avg: 1.9, max: 5.0)
+[2024-09-30 00:36:15,987][1153456] Avg episode reward: [(0, '26.893')]
+[2024-09-30 00:36:16,198][1153805] Updated weights for policy 0, policy_version 3839 (0.0006)
+[2024-09-30 00:36:16,738][1153805] Updated weights for policy 0, policy_version 3849 (0.0006)
+[2024-09-30 00:36:17,265][1153805] Updated weights for policy 0, policy_version 3859 (0.0006)
+[2024-09-30 00:36:17,784][1153805] Updated weights for policy 0, policy_version 3869 (0.0006)
+[2024-09-30 00:36:18,296][1153805] Updated weights for policy 0, policy_version 3879 (0.0006)
+[2024-09-30 00:36:18,841][1153805] Updated weights for policy 0, policy_version 3889 (0.0006)
+[2024-09-30 00:36:19,381][1153805] Updated weights for policy 0, policy_version 3899 (0.0006)
+[2024-09-30 00:36:19,926][1153805] Updated weights for policy 0, policy_version 3909 (0.0006)
+[2024-09-30 00:36:20,497][1153805] Updated weights for policy 0, policy_version 3919 (0.0006)
+[2024-09-30 00:36:20,987][1153456] Fps is (10 sec: 72908.8, 60 sec: 81237.2, 300 sec: 76250.7). Total num frames: 16089088. Throughput: 0: 20031.3. Samples: 3014964. Policy #0 lag: (min: 0.0, avg: 2.8, max: 6.0)
+[2024-09-30 00:36:20,987][1153456] Avg episode reward: [(0, '31.297')]
+[2024-09-30 00:36:21,040][1153805] Updated weights for policy 0, policy_version 3929 (0.0006)
+[2024-09-30 00:36:21,560][1153805] Updated weights for policy 0, policy_version 3939 (0.0006)
+[2024-09-30 00:36:22,089][1153805] Updated weights for policy 0, policy_version 3949 (0.0006)
+[2024-09-30 00:36:22,606][1153805] Updated weights for policy 0, policy_version 3959 (0.0006)
+[2024-09-30 00:36:23,124][1153805] Updated weights for policy 0, policy_version 3969 (0.0006)
+[2024-09-30 00:36:23,631][1153805] Updated weights for policy 0, policy_version 3979 (0.0006)
+[2024-09-30 00:36:24,165][1153805] Updated weights for policy 0, policy_version 3989 (0.0006)
+[2024-09-30 00:36:24,669][1153805] Updated weights for policy 0, policy_version 3999 (0.0006)
+[2024-09-30 00:36:25,176][1153805] Updated weights for policy 0, policy_version 4009 (0.0006)
+[2024-09-30 00:36:25,678][1153805] Updated weights for policy 0, policy_version 4019 (0.0006)
+[2024-09-30 00:36:25,987][1153456] Fps is (10 sec: 77414.2, 60 sec: 80623.0, 300 sec: 76323.9). Total num frames: 16482304. Throughput: 0: 19945.3. Samples: 3072964. Policy #0 lag: (min: 0.0, avg: 2.7, max: 6.0)
+[2024-09-30 00:36:25,987][1153456] Avg episode reward: [(0, '28.992')]
+[2024-09-30 00:36:26,193][1153805] Updated weights for policy 0, policy_version 4029 (0.0006)
+[2024-09-30 00:36:26,701][1153805] Updated weights for policy 0, policy_version 4039 (0.0006)
+[2024-09-30 00:36:27,200][1153805] Updated weights for policy 0, policy_version 4049 (0.0006)
+[2024-09-30 00:36:27,756][1153805] Updated weights for policy 0, policy_version 4059 (0.0006)
+[2024-09-30 00:36:28,252][1153805] Updated weights for policy 0, policy_version 4069 (0.0006)
+[2024-09-30 00:36:28,802][1153805] Updated weights for policy 0, policy_version 4079 (0.0006)
+[2024-09-30 00:36:29,340][1153805] Updated weights for policy 0, policy_version 4089 (0.0006)
+[2024-09-30 00:36:29,857][1153805] Updated weights for policy 0, policy_version 4099 (0.0006)
+[2024-09-30 00:36:30,377][1153805] Updated weights for policy 0, policy_version 4109 (0.0006)
+[2024-09-30 00:36:30,878][1153805] Updated weights for policy 0, policy_version 4119 (0.0006)
+[2024-09-30 00:36:30,987][1153456] Fps is (10 sec: 78642.5, 60 sec: 80213.4, 300 sec: 76392.7). Total num frames: 16875520. Throughput: 0: 19912.1. Samples: 3191968. Policy #0 lag: (min: 0.0, avg: 2.1, max: 5.0)
+[2024-09-30 00:36:30,987][1153456] Avg episode reward: [(0, '32.306')]
+[2024-09-30 00:36:31,392][1153805] Updated weights for policy 0, policy_version 4129 (0.0006)
+[2024-09-30 00:36:31,912][1153805] Updated weights for policy 0, policy_version 4139 (0.0006)
+[2024-09-30 00:36:32,416][1153805] Updated weights for policy 0, policy_version 4149 (0.0006)
+[2024-09-30 00:36:32,913][1153805] Updated weights for policy 0, policy_version 4159 (0.0006)
+[2024-09-30 00:36:33,424][1153805] Updated weights for policy 0, policy_version 4169 (0.0006)
+[2024-09-30 00:36:33,972][1153805] Updated weights for policy 0, policy_version 4179 (0.0006)
+[2024-09-30 00:36:34,497][1153805] Updated weights for policy 0, policy_version 4189 (0.0006)
+[2024-09-30 00:36:35,013][1153805] Updated weights for policy 0, policy_version 4199 (0.0006)
+[2024-09-30 00:36:35,537][1153805] Updated weights for policy 0, policy_version 4209 (0.0006)
+[2024-09-30 00:36:35,987][1153456] Fps is (10 sec: 79052.9, 60 sec: 79735.7, 300 sec: 76481.2). Total num frames: 17272832. Throughput: 0: 19802.7. Samples: 3311008. Policy #0 lag: (min: 0.0, avg: 2.1, max: 6.0)
+[2024-09-30 00:36:35,987][1153456] Avg episode reward: [(0, '28.748')]
+[2024-09-30 00:36:36,058][1153805] Updated weights for policy 0, policy_version 4219 (0.0006)
+[2024-09-30 00:36:36,601][1153805] Updated weights for policy 0, policy_version 4229 (0.0006)
+[2024-09-30 00:36:37,143][1153805] Updated weights for policy 0, policy_version 4239 (0.0006)
+[2024-09-30 00:36:37,660][1153805] Updated weights for policy 0, policy_version 4249 (0.0006)
+[2024-09-30 00:36:38,201][1153805] Updated weights for policy 0, policy_version 4259 (0.0006)
+[2024-09-30 00:36:38,733][1153805] Updated weights for policy 0, policy_version 4269 (0.0006)
+[2024-09-30 00:36:39,255][1153805] Updated weights for policy 0, policy_version 4279 (0.0007)
+[2024-09-30 00:36:39,849][1153805] Updated weights for policy 0, policy_version 4289 (0.0006)
+[2024-09-30 00:36:40,361][1153805] Updated weights for policy 0, policy_version 4299 (0.0006)
+[2024-09-30 00:36:40,931][1153805] Updated weights for policy 0, policy_version 4309 (0.0006)
+[2024-09-30 00:36:40,987][1153456] Fps is (10 sec: 77824.2, 60 sec: 79325.8, 300 sec: 76472.9). Total num frames: 17653760. Throughput: 0: 19700.2. Samples: 3368520. Policy #0 lag: (min: 0.0, avg: 2.6, max: 6.0)
+[2024-09-30 00:36:40,987][1153456] Avg episode reward: [(0, '32.397')]
+[2024-09-30 00:36:40,988][1153683] Saving new best policy, reward=32.397!
+[2024-09-30 00:36:41,473][1153805] Updated weights for policy 0, policy_version 4319 (0.0006) +[2024-09-30 00:36:41,971][1153805] Updated weights for policy 0, policy_version 4329 (0.0006) +[2024-09-30 00:36:42,475][1153805] Updated weights for policy 0, policy_version 4339 (0.0006) +[2024-09-30 00:36:42,975][1153805] Updated weights for policy 0, policy_version 4349 (0.0006) +[2024-09-30 00:36:43,513][1153805] Updated weights for policy 0, policy_version 4359 (0.0006) +[2024-09-30 00:36:44,114][1153805] Updated weights for policy 0, policy_version 4369 (0.0006) +[2024-09-30 00:36:44,624][1153805] Updated weights for policy 0, policy_version 4379 (0.0006) +[2024-09-30 00:36:45,224][1153805] Updated weights for policy 0, policy_version 4389 (0.0006) +[2024-09-30 00:36:45,814][1153805] Updated weights for policy 0, policy_version 4399 (0.0006) +[2024-09-30 00:36:45,987][1153456] Fps is (10 sec: 75775.2, 60 sec: 78847.9, 300 sec: 76442.7). Total num frames: 18030592. Throughput: 0: 19471.0. Samples: 3482836. Policy #0 lag: (min: 0.0, avg: 2.9, max: 6.0) +[2024-09-30 00:36:45,987][1153456] Avg episode reward: [(0, '30.083')] +[2024-09-30 00:36:46,425][1153805] Updated weights for policy 0, policy_version 4409 (0.0006) +[2024-09-30 00:36:46,966][1153805] Updated weights for policy 0, policy_version 4419 (0.0006) +[2024-09-30 00:36:47,568][1153805] Updated weights for policy 0, policy_version 4429 (0.0006) +[2024-09-30 00:36:48,186][1153805] Updated weights for policy 0, policy_version 4439 (0.0006) +[2024-09-30 00:36:48,789][1153805] Updated weights for policy 0, policy_version 4449 (0.0006) +[2024-09-30 00:36:49,403][1153805] Updated weights for policy 0, policy_version 4459 (0.0006) +[2024-09-30 00:36:50,084][1153805] Updated weights for policy 0, policy_version 4469 (0.0006) +[2024-09-30 00:36:50,708][1153805] Updated weights for policy 0, policy_version 4479 (0.0006) +[2024-09-30 00:36:50,987][1153456] Fps is (10 sec: 70861.2, 60 sec: 77550.9, 300 sec: 76175.1). 
Total num frames: 18362368. Throughput: 0: 19006.6. Samples: 3584876. Policy #0 lag: (min: 0.0, avg: 1.8, max: 5.0) +[2024-09-30 00:36:50,987][1153456] Avg episode reward: [(0, '31.676')] +[2024-09-30 00:36:51,338][1153805] Updated weights for policy 0, policy_version 4489 (0.0006) +[2024-09-30 00:36:51,652][1153683] Signal inference workers to stop experience collection... (300 times) +[2024-09-30 00:36:51,653][1153683] Signal inference workers to resume experience collection... (300 times) +[2024-09-30 00:36:51,658][1153805] InferenceWorker_p0-w0: stopping experience collection (300 times) +[2024-09-30 00:36:51,658][1153805] InferenceWorker_p0-w0: resuming experience collection (300 times) +[2024-09-30 00:36:51,935][1153805] Updated weights for policy 0, policy_version 4499 (0.0007) +[2024-09-30 00:36:52,592][1153805] Updated weights for policy 0, policy_version 4509 (0.0007) +[2024-09-30 00:36:53,218][1153805] Updated weights for policy 0, policy_version 4519 (0.0006) +[2024-09-30 00:36:53,857][1153805] Updated weights for policy 0, policy_version 4529 (0.0006) +[2024-09-30 00:36:54,505][1153805] Updated weights for policy 0, policy_version 4539 (0.0006) +[2024-09-30 00:36:55,103][1153805] Updated weights for policy 0, policy_version 4549 (0.0006) +[2024-09-30 00:36:55,714][1153805] Updated weights for policy 0, policy_version 4559 (0.0006) +[2024-09-30 00:36:55,987][1153456] Fps is (10 sec: 65945.6, 60 sec: 76048.9, 300 sec: 75900.1). Total num frames: 18690048. Throughput: 0: 18664.3. Samples: 3633128. 
Policy #0 lag: (min: 0.0, avg: 2.4, max: 5.0) +[2024-09-30 00:36:55,987][1153456] Avg episode reward: [(0, '31.825')] +[2024-09-30 00:36:56,368][1153805] Updated weights for policy 0, policy_version 4569 (0.0005) +[2024-09-30 00:36:57,015][1153805] Updated weights for policy 0, policy_version 4579 (0.0006) +[2024-09-30 00:36:57,622][1153805] Updated weights for policy 0, policy_version 4589 (0.0006) +[2024-09-30 00:36:58,234][1153805] Updated weights for policy 0, policy_version 4599 (0.0006) +[2024-09-30 00:36:58,833][1153805] Updated weights for policy 0, policy_version 4609 (0.0006) +[2024-09-30 00:36:59,473][1153805] Updated weights for policy 0, policy_version 4619 (0.0006) +[2024-09-30 00:37:00,103][1153805] Updated weights for policy 0, policy_version 4629 (0.0006) +[2024-09-30 00:37:00,712][1153805] Updated weights for policy 0, policy_version 4639 (0.0006) +[2024-09-30 00:37:00,987][1153456] Fps is (10 sec: 65536.0, 60 sec: 74547.2, 300 sec: 75639.1). Total num frames: 19017728. Throughput: 0: 18433.4. Samples: 3731928. Policy #0 lag: (min: 1.0, avg: 3.1, max: 6.0) +[2024-09-30 00:37:00,987][1153456] Avg episode reward: [(0, '33.036')] +[2024-09-30 00:37:01,002][1153683] Saving new best policy, reward=33.036! 
+[2024-09-30 00:37:01,285][1153805] Updated weights for policy 0, policy_version 4649 (0.0006) +[2024-09-30 00:37:01,881][1153805] Updated weights for policy 0, policy_version 4659 (0.0006) +[2024-09-30 00:37:02,478][1153805] Updated weights for policy 0, policy_version 4669 (0.0006) +[2024-09-30 00:37:03,069][1153805] Updated weights for policy 0, policy_version 4679 (0.0006) +[2024-09-30 00:37:03,661][1153805] Updated weights for policy 0, policy_version 4689 (0.0006) +[2024-09-30 00:37:04,245][1153805] Updated weights for policy 0, policy_version 4699 (0.0006) +[2024-09-30 00:37:04,843][1153805] Updated weights for policy 0, policy_version 4709 (0.0006) +[2024-09-30 00:37:05,405][1153805] Updated weights for policy 0, policy_version 4719 (0.0006) +[2024-09-30 00:37:05,938][1153805] Updated weights for policy 0, policy_version 4729 (0.0006) +[2024-09-30 00:37:05,987][1153456] Fps is (10 sec: 67994.1, 60 sec: 73659.7, 300 sec: 75511.6). Total num frames: 19369984. Throughput: 0: 18232.3. Samples: 3835416. Policy #0 lag: (min: 0.0, avg: 2.5, max: 6.0) +[2024-09-30 00:37:05,987][1153456] Avg episode reward: [(0, '35.498')] +[2024-09-30 00:37:05,993][1153683] Saving new best policy, reward=35.498! 
+[2024-09-30 00:37:06,456][1153805] Updated weights for policy 0, policy_version 4739 (0.0006) +[2024-09-30 00:37:06,976][1153805] Updated weights for policy 0, policy_version 4749 (0.0006) +[2024-09-30 00:37:07,533][1153805] Updated weights for policy 0, policy_version 4759 (0.0006) +[2024-09-30 00:37:08,044][1153805] Updated weights for policy 0, policy_version 4769 (0.0006) +[2024-09-30 00:37:08,580][1153805] Updated weights for policy 0, policy_version 4779 (0.0006) +[2024-09-30 00:37:09,180][1153805] Updated weights for policy 0, policy_version 4789 (0.0006) +[2024-09-30 00:37:09,702][1153805] Updated weights for policy 0, policy_version 4799 (0.0006) +[2024-09-30 00:37:10,238][1153805] Updated weights for policy 0, policy_version 4809 (0.0006) +[2024-09-30 00:37:10,736][1153805] Updated weights for policy 0, policy_version 4819 (0.0006) +[2024-09-30 00:37:10,987][1153456] Fps is (10 sec: 73728.7, 60 sec: 73250.2, 300 sec: 75547.4). Total num frames: 19755008. Throughput: 0: 18227.1. Samples: 3893184. 
Policy #0 lag: (min: 0.0, avg: 2.1, max: 4.0) +[2024-09-30 00:37:10,987][1153456] Avg episode reward: [(0, '34.137')] +[2024-09-30 00:37:11,235][1153805] Updated weights for policy 0, policy_version 4829 (0.0006) +[2024-09-30 00:37:11,747][1153805] Updated weights for policy 0, policy_version 4839 (0.0006) +[2024-09-30 00:37:12,291][1153805] Updated weights for policy 0, policy_version 4849 (0.0006) +[2024-09-30 00:37:12,854][1153805] Updated weights for policy 0, policy_version 4859 (0.0006) +[2024-09-30 00:37:13,396][1153805] Updated weights for policy 0, policy_version 4869 (0.0006) +[2024-09-30 00:37:13,934][1153805] Updated weights for policy 0, policy_version 4879 (0.0006) +[2024-09-30 00:37:14,487][1153805] Updated weights for policy 0, policy_version 4889 (0.0006) +[2024-09-30 00:37:15,023][1153805] Updated weights for policy 0, policy_version 4899 (0.0006) +[2024-09-30 00:37:15,574][1153805] Updated weights for policy 0, policy_version 4909 (0.0006) +[2024-09-30 00:37:15,987][1153456] Fps is (10 sec: 76595.0, 60 sec: 73796.2, 300 sec: 75562.3). Total num frames: 20135936. Throughput: 0: 18132.6. Samples: 4007936. 
Policy #0 lag: (min: 0.0, avg: 2.5, max: 6.0) +[2024-09-30 00:37:15,987][1153456] Avg episode reward: [(0, '31.965')] +[2024-09-30 00:37:16,103][1153805] Updated weights for policy 0, policy_version 4919 (0.0006) +[2024-09-30 00:37:16,646][1153805] Updated weights for policy 0, policy_version 4929 (0.0006) +[2024-09-30 00:37:17,197][1153805] Updated weights for policy 0, policy_version 4939 (0.0006) +[2024-09-30 00:37:17,753][1153805] Updated weights for policy 0, policy_version 4949 (0.0006) +[2024-09-30 00:37:18,290][1153805] Updated weights for policy 0, policy_version 4959 (0.0006) +[2024-09-30 00:37:18,847][1153805] Updated weights for policy 0, policy_version 4969 (0.0006) +[2024-09-30 00:37:19,377][1153805] Updated weights for policy 0, policy_version 4979 (0.0006) +[2024-09-30 00:37:19,945][1153805] Updated weights for policy 0, policy_version 4989 (0.0006) +[2024-09-30 00:37:20,483][1153805] Updated weights for policy 0, policy_version 4999 (0.0006) +[2024-09-30 00:37:20,987][1153456] Fps is (10 sec: 75774.5, 60 sec: 73727.8, 300 sec: 75557.8). Total num frames: 20512768. Throughput: 0: 17993.9. Samples: 4120736. 
Policy #0 lag: (min: 0.0, avg: 2.4, max: 6.0) +[2024-09-30 00:37:20,987][1153456] Avg episode reward: [(0, '31.147')] +[2024-09-30 00:37:21,069][1153805] Updated weights for policy 0, policy_version 5009 (0.0006) +[2024-09-30 00:37:21,564][1153805] Updated weights for policy 0, policy_version 5019 (0.0006) +[2024-09-30 00:37:22,124][1153805] Updated weights for policy 0, policy_version 5029 (0.0006) +[2024-09-30 00:37:22,665][1153805] Updated weights for policy 0, policy_version 5039 (0.0006) +[2024-09-30 00:37:23,191][1153805] Updated weights for policy 0, policy_version 5049 (0.0006) +[2024-09-30 00:37:23,748][1153805] Updated weights for policy 0, policy_version 5059 (0.0006) +[2024-09-30 00:37:24,304][1153805] Updated weights for policy 0, policy_version 5069 (0.0006) +[2024-09-30 00:37:24,852][1153805] Updated weights for policy 0, policy_version 5079 (0.0006) +[2024-09-30 00:37:25,379][1153805] Updated weights for policy 0, policy_version 5089 (0.0006) +[2024-09-30 00:37:25,920][1153805] Updated weights for policy 0, policy_version 5099 (0.0006) +[2024-09-30 00:37:25,987][1153456] Fps is (10 sec: 74955.9, 60 sec: 73386.5, 300 sec: 75535.2). Total num frames: 20885504. Throughput: 0: 17969.4. Samples: 4177144. Policy #0 lag: (min: 0.0, avg: 2.4, max: 5.0) +[2024-09-30 00:37:25,987][1153456] Avg episode reward: [(0, '30.199')] +[2024-09-30 00:37:26,485][1153683] Signal inference workers to stop experience collection... (350 times) +[2024-09-30 00:37:26,488][1153805] InferenceWorker_p0-w0: stopping experience collection (350 times) +[2024-09-30 00:37:26,492][1153683] Signal inference workers to resume experience collection... 
(350 times) +[2024-09-30 00:37:26,492][1153805] InferenceWorker_p0-w0: resuming experience collection (350 times) +[2024-09-30 00:37:26,494][1153805] Updated weights for policy 0, policy_version 5109 (0.0006) +[2024-09-30 00:37:27,004][1153805] Updated weights for policy 0, policy_version 5119 (0.0006) +[2024-09-30 00:37:27,520][1153805] Updated weights for policy 0, policy_version 5129 (0.0006) +[2024-09-30 00:37:28,076][1153805] Updated weights for policy 0, policy_version 5139 (0.0006) +[2024-09-30 00:37:28,632][1153805] Updated weights for policy 0, policy_version 5149 (0.0006) +[2024-09-30 00:37:29,138][1153805] Updated weights for policy 0, policy_version 5159 (0.0006) +[2024-09-30 00:37:29,663][1153805] Updated weights for policy 0, policy_version 5169 (0.0006) +[2024-09-30 00:37:30,201][1153805] Updated weights for policy 0, policy_version 5179 (0.0006) +[2024-09-30 00:37:30,744][1153805] Updated weights for policy 0, policy_version 5189 (0.0006) +[2024-09-30 00:37:30,987][1153456] Fps is (10 sec: 75776.8, 60 sec: 73250.2, 300 sec: 75567.4). Total num frames: 21270528. Throughput: 0: 17967.7. Samples: 4291380. 
Policy #0 lag: (min: 0.0, avg: 2.8, max: 7.0) +[2024-09-30 00:37:30,987][1153456] Avg episode reward: [(0, '33.326')] +[2024-09-30 00:37:31,297][1153805] Updated weights for policy 0, policy_version 5199 (0.0006) +[2024-09-30 00:37:31,833][1153805] Updated weights for policy 0, policy_version 5209 (0.0006) +[2024-09-30 00:37:32,347][1153805] Updated weights for policy 0, policy_version 5219 (0.0006) +[2024-09-30 00:37:32,883][1153805] Updated weights for policy 0, policy_version 5229 (0.0006) +[2024-09-30 00:37:33,442][1153805] Updated weights for policy 0, policy_version 5239 (0.0006) +[2024-09-30 00:37:33,983][1153805] Updated weights for policy 0, policy_version 5249 (0.0006) +[2024-09-30 00:37:34,535][1153805] Updated weights for policy 0, policy_version 5259 (0.0006) +[2024-09-30 00:37:35,084][1153805] Updated weights for policy 0, policy_version 5269 (0.0006) +[2024-09-30 00:37:35,680][1153805] Updated weights for policy 0, policy_version 5279 (0.0006) +[2024-09-30 00:37:35,987][1153456] Fps is (10 sec: 75367.2, 60 sec: 72772.2, 300 sec: 75528.0). Total num frames: 21639168. Throughput: 0: 18204.5. Samples: 4404080. Policy #0 lag: (min: 0.0, avg: 1.9, max: 5.0) +[2024-09-30 00:37:35,987][1153456] Avg episode reward: [(0, '35.834')] +[2024-09-30 00:37:35,996][1153683] Saving /home/luyang/workspace/rl/train_dir/default_experiment/checkpoint_p0/checkpoint_000005284_21643264.pth... +[2024-09-30 00:37:36,058][1153683] Removing /home/luyang/workspace/rl/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth +[2024-09-30 00:37:36,065][1153683] Saving new best policy, reward=35.834! 
+[2024-09-30 00:37:36,363][1153805] Updated weights for policy 0, policy_version 5289 (0.0006) +[2024-09-30 00:37:36,924][1153805] Updated weights for policy 0, policy_version 5299 (0.0006) +[2024-09-30 00:37:37,469][1153805] Updated weights for policy 0, policy_version 5309 (0.0006) +[2024-09-30 00:37:38,032][1153805] Updated weights for policy 0, policy_version 5319 (0.0006) +[2024-09-30 00:37:38,584][1153805] Updated weights for policy 0, policy_version 5329 (0.0006) +[2024-09-30 00:37:39,133][1153805] Updated weights for policy 0, policy_version 5339 (0.0006) +[2024-09-30 00:37:39,666][1153805] Updated weights for policy 0, policy_version 5349 (0.0006) +[2024-09-30 00:37:40,216][1153805] Updated weights for policy 0, policy_version 5359 (0.0006) +[2024-09-30 00:37:40,749][1153805] Updated weights for policy 0, policy_version 5369 (0.0006) +[2024-09-30 00:37:40,987][1153456] Fps is (10 sec: 73728.2, 60 sec: 72567.6, 300 sec: 75490.3). Total num frames: 22007808. Throughput: 0: 18301.4. Samples: 4456688. 
Policy #0 lag: (min: 0.0, avg: 2.0, max: 6.0) +[2024-09-30 00:37:40,987][1153456] Avg episode reward: [(0, '33.306')] +[2024-09-30 00:37:41,283][1153805] Updated weights for policy 0, policy_version 5379 (0.0006) +[2024-09-30 00:37:41,867][1153805] Updated weights for policy 0, policy_version 5389 (0.0006) +[2024-09-30 00:37:42,410][1153805] Updated weights for policy 0, policy_version 5399 (0.0006) +[2024-09-30 00:37:42,993][1153805] Updated weights for policy 0, policy_version 5409 (0.0006) +[2024-09-30 00:37:43,566][1153805] Updated weights for policy 0, policy_version 5419 (0.0006) +[2024-09-30 00:37:44,117][1153805] Updated weights for policy 0, policy_version 5429 (0.0006) +[2024-09-30 00:37:44,709][1153805] Updated weights for policy 0, policy_version 5439 (0.0006) +[2024-09-30 00:37:45,294][1153805] Updated weights for policy 0, policy_version 5449 (0.0006) +[2024-09-30 00:37:45,865][1153805] Updated weights for policy 0, policy_version 5459 (0.0006) +[2024-09-30 00:37:45,987][1153456] Fps is (10 sec: 72499.8, 60 sec: 72226.3, 300 sec: 75403.6). Total num frames: 22364160. Throughput: 0: 18562.6. Samples: 4567244. 
Policy #0 lag: (min: 0.0, avg: 2.8, max: 7.0) +[2024-09-30 00:37:45,987][1153456] Avg episode reward: [(0, '31.728')] +[2024-09-30 00:37:46,452][1153805] Updated weights for policy 0, policy_version 5469 (0.0006) +[2024-09-30 00:37:47,029][1153805] Updated weights for policy 0, policy_version 5479 (0.0006) +[2024-09-30 00:37:47,603][1153805] Updated weights for policy 0, policy_version 5489 (0.0006) +[2024-09-30 00:37:48,137][1153805] Updated weights for policy 0, policy_version 5499 (0.0006) +[2024-09-30 00:37:48,647][1153805] Updated weights for policy 0, policy_version 5509 (0.0006) +[2024-09-30 00:37:49,246][1153805] Updated weights for policy 0, policy_version 5519 (0.0006) +[2024-09-30 00:37:49,837][1153805] Updated weights for policy 0, policy_version 5529 (0.0006) +[2024-09-30 00:37:50,379][1153805] Updated weights for policy 0, policy_version 5539 (0.0006) +[2024-09-30 00:37:50,987][1153456] Fps is (10 sec: 71679.5, 60 sec: 72703.9, 300 sec: 75336.9). Total num frames: 22724608. Throughput: 0: 18659.8. Samples: 4675108. Policy #0 lag: (min: 0.0, avg: 2.8, max: 7.0) +[2024-09-30 00:37:50,987][1153456] Avg episode reward: [(0, '35.538')] +[2024-09-30 00:37:50,993][1153805] Updated weights for policy 0, policy_version 5549 (0.0006) +[2024-09-30 00:37:51,576][1153805] Updated weights for policy 0, policy_version 5559 (0.0006) +[2024-09-30 00:37:51,586][1153683] Signal inference workers to stop experience collection... (400 times) +[2024-09-30 00:37:51,587][1153683] Signal inference workers to resume experience collection... 
(400 times) +[2024-09-30 00:37:51,592][1153805] InferenceWorker_p0-w0: stopping experience collection (400 times) +[2024-09-30 00:37:51,592][1153805] InferenceWorker_p0-w0: resuming experience collection (400 times) +[2024-09-30 00:37:52,137][1153805] Updated weights for policy 0, policy_version 5569 (0.0006) +[2024-09-30 00:37:52,758][1153805] Updated weights for policy 0, policy_version 5579 (0.0006) +[2024-09-30 00:37:53,430][1153805] Updated weights for policy 0, policy_version 5589 (0.0006) +[2024-09-30 00:37:54,028][1153805] Updated weights for policy 0, policy_version 5599 (0.0006) +[2024-09-30 00:37:54,654][1153805] Updated weights for policy 0, policy_version 5609 (0.0006) +[2024-09-30 00:37:55,291][1153805] Updated weights for policy 0, policy_version 5619 (0.0006) +[2024-09-30 00:37:55,812][1153805] Updated weights for policy 0, policy_version 5629 (0.0006) +[2024-09-30 00:37:55,987][1153456] Fps is (10 sec: 70450.1, 60 sec: 72977.0, 300 sec: 75208.2). Total num frames: 23068672. Throughput: 0: 18507.6. Samples: 4726028. 
Policy #0 lag: (min: 0.0, avg: 2.2, max: 5.0) +[2024-09-30 00:37:55,987][1153456] Avg episode reward: [(0, '32.545')] +[2024-09-30 00:37:56,351][1153805] Updated weights for policy 0, policy_version 5639 (0.0006) +[2024-09-30 00:37:56,938][1153805] Updated weights for policy 0, policy_version 5649 (0.0006) +[2024-09-30 00:37:57,464][1153805] Updated weights for policy 0, policy_version 5659 (0.0006) +[2024-09-30 00:37:58,014][1153805] Updated weights for policy 0, policy_version 5669 (0.0006) +[2024-09-30 00:37:58,577][1153805] Updated weights for policy 0, policy_version 5679 (0.0006) +[2024-09-30 00:37:59,128][1153805] Updated weights for policy 0, policy_version 5689 (0.0006) +[2024-09-30 00:37:59,716][1153805] Updated weights for policy 0, policy_version 5699 (0.0006) +[2024-09-30 00:38:00,277][1153805] Updated weights for policy 0, policy_version 5709 (0.0006) +[2024-09-30 00:38:00,878][1153805] Updated weights for policy 0, policy_version 5719 (0.0006) +[2024-09-30 00:38:00,987][1153456] Fps is (10 sec: 70860.6, 60 sec: 73591.4, 300 sec: 75163.7). Total num frames: 23433216. Throughput: 0: 18348.6. Samples: 4833624. 
Policy #0 lag: (min: 0.0, avg: 2.8, max: 6.0) +[2024-09-30 00:38:00,987][1153456] Avg episode reward: [(0, '31.310')] +[2024-09-30 00:38:01,425][1153805] Updated weights for policy 0, policy_version 5729 (0.0006) +[2024-09-30 00:38:01,970][1153805] Updated weights for policy 0, policy_version 5739 (0.0006) +[2024-09-30 00:38:02,544][1153805] Updated weights for policy 0, policy_version 5749 (0.0006) +[2024-09-30 00:38:03,093][1153805] Updated weights for policy 0, policy_version 5759 (0.0006) +[2024-09-30 00:38:03,645][1153805] Updated weights for policy 0, policy_version 5769 (0.0006) +[2024-09-30 00:38:04,202][1153805] Updated weights for policy 0, policy_version 5779 (0.0006) +[2024-09-30 00:38:04,777][1153805] Updated weights for policy 0, policy_version 5789 (0.0005) +[2024-09-30 00:38:05,367][1153805] Updated weights for policy 0, policy_version 5799 (0.0006) +[2024-09-30 00:38:05,950][1153805] Updated weights for policy 0, policy_version 5809 (0.0006) +[2024-09-30 00:38:05,987][1153456] Fps is (10 sec: 72909.2, 60 sec: 73796.2, 300 sec: 75120.9). Total num frames: 23797760. Throughput: 0: 18245.4. Samples: 4941780. 
Policy #0 lag: (min: 1.0, avg: 2.6, max: 6.0) +[2024-09-30 00:38:05,987][1153456] Avg episode reward: [(0, '35.395')] +[2024-09-30 00:38:06,379][1153805] Updated weights for policy 0, policy_version 5819 (0.0006) +[2024-09-30 00:38:06,834][1153805] Updated weights for policy 0, policy_version 5829 (0.0006) +[2024-09-30 00:38:07,285][1153805] Updated weights for policy 0, policy_version 5839 (0.0006) +[2024-09-30 00:38:07,726][1153805] Updated weights for policy 0, policy_version 5849 (0.0006) +[2024-09-30 00:38:08,191][1153805] Updated weights for policy 0, policy_version 5859 (0.0006) +[2024-09-30 00:38:08,664][1153805] Updated weights for policy 0, policy_version 5869 (0.0006) +[2024-09-30 00:38:09,197][1153805] Updated weights for policy 0, policy_version 5879 (0.0007) +[2024-09-30 00:38:09,717][1153805] Updated weights for policy 0, policy_version 5889 (0.0006) +[2024-09-30 00:38:10,222][1153805] Updated weights for policy 0, policy_version 5899 (0.0006) +[2024-09-30 00:38:10,734][1153805] Updated weights for policy 0, policy_version 5909 (0.0006) +[2024-09-30 00:38:10,987][1153456] Fps is (10 sec: 79053.1, 60 sec: 74478.8, 300 sec: 75308.6). Total num frames: 24223744. Throughput: 0: 18459.2. Samples: 5007804. 
Policy #0 lag: (min: 0.0, avg: 2.6, max: 4.0) +[2024-09-30 00:38:10,987][1153456] Avg episode reward: [(0, '34.288')] +[2024-09-30 00:38:11,263][1153805] Updated weights for policy 0, policy_version 5919 (0.0006) +[2024-09-30 00:38:11,739][1153805] Updated weights for policy 0, policy_version 5929 (0.0006) +[2024-09-30 00:38:12,179][1153805] Updated weights for policy 0, policy_version 5939 (0.0006) +[2024-09-30 00:38:12,651][1153805] Updated weights for policy 0, policy_version 5949 (0.0006) +[2024-09-30 00:38:13,141][1153805] Updated weights for policy 0, policy_version 5959 (0.0006) +[2024-09-30 00:38:13,647][1153805] Updated weights for policy 0, policy_version 5969 (0.0006) +[2024-09-30 00:38:14,145][1153805] Updated weights for policy 0, policy_version 5979 (0.0006) +[2024-09-30 00:38:14,653][1153805] Updated weights for policy 0, policy_version 5989 (0.0006) +[2024-09-30 00:38:15,157][1153805] Updated weights for policy 0, policy_version 5999 (0.0006) +[2024-09-30 00:38:15,325][1153683] Signal inference workers to stop experience collection... (450 times) +[2024-09-30 00:38:15,326][1153683] Signal inference workers to resume experience collection... (450 times) +[2024-09-30 00:38:15,330][1153805] InferenceWorker_p0-w0: stopping experience collection (450 times) +[2024-09-30 00:38:15,330][1153805] InferenceWorker_p0-w0: resuming experience collection (450 times) +[2024-09-30 00:38:15,676][1153805] Updated weights for policy 0, policy_version 6009 (0.0006) +[2024-09-30 00:38:15,987][1153456] Fps is (10 sec: 83559.8, 60 sec: 74957.0, 300 sec: 75429.5). Total num frames: 24633344. Throughput: 0: 18669.2. Samples: 5131492. 
Policy #0 lag: (min: 0.0, avg: 2.2, max: 4.0) +[2024-09-30 00:38:15,987][1153456] Avg episode reward: [(0, '34.141')] +[2024-09-30 00:38:16,195][1153805] Updated weights for policy 0, policy_version 6019 (0.0006) +[2024-09-30 00:38:16,744][1153805] Updated weights for policy 0, policy_version 6029 (0.0006) +[2024-09-30 00:38:17,279][1153805] Updated weights for policy 0, policy_version 6039 (0.0006) +[2024-09-30 00:38:17,787][1153805] Updated weights for policy 0, policy_version 6049 (0.0006) +[2024-09-30 00:38:18,324][1153805] Updated weights for policy 0, policy_version 6059 (0.0006) +[2024-09-30 00:38:18,780][1153805] Updated weights for policy 0, policy_version 6069 (0.0006) +[2024-09-30 00:38:19,228][1153805] Updated weights for policy 0, policy_version 6079 (0.0006) +[2024-09-30 00:38:19,724][1153805] Updated weights for policy 0, policy_version 6089 (0.0006) +[2024-09-30 00:38:20,209][1153805] Updated weights for policy 0, policy_version 6099 (0.0006) +[2024-09-30 00:38:20,656][1153805] Updated weights for policy 0, policy_version 6109 (0.0006) +[2024-09-30 00:38:20,987][1153456] Fps is (10 sec: 82739.4, 60 sec: 75639.6, 300 sec: 75575.4). Total num frames: 25051136. Throughput: 0: 18873.6. Samples: 5253392. Policy #0 lag: (min: 0.0, avg: 2.1, max: 6.0) +[2024-09-30 00:38:20,987][1153456] Avg episode reward: [(0, '36.407')] +[2024-09-30 00:38:20,987][1153683] Saving new best policy, reward=36.407! 
+[2024-09-30 00:38:21,130][1153805] Updated weights for policy 0, policy_version 6119 (0.0006) +[2024-09-30 00:38:21,599][1153805] Updated weights for policy 0, policy_version 6129 (0.0006) +[2024-09-30 00:38:22,096][1153805] Updated weights for policy 0, policy_version 6139 (0.0006) +[2024-09-30 00:38:22,603][1153805] Updated weights for policy 0, policy_version 6149 (0.0006) +[2024-09-30 00:38:23,066][1153805] Updated weights for policy 0, policy_version 6159 (0.0006) +[2024-09-30 00:38:23,527][1153805] Updated weights for policy 0, policy_version 6169 (0.0006) +[2024-09-30 00:38:24,044][1153805] Updated weights for policy 0, policy_version 6179 (0.0006) +[2024-09-30 00:38:24,551][1153805] Updated weights for policy 0, policy_version 6189 (0.0006) +[2024-09-30 00:38:25,055][1153805] Updated weights for policy 0, policy_version 6199 (0.0006) +[2024-09-30 00:38:25,570][1153805] Updated weights for policy 0, policy_version 6209 (0.0006) +[2024-09-30 00:38:25,987][1153456] Fps is (10 sec: 82739.0, 60 sec: 76254.2, 300 sec: 75687.4). Total num frames: 25460736. Throughput: 0: 19130.2. Samples: 5317548. 
Policy #0 lag: (min: 0.0, avg: 1.6, max: 5.0) +[2024-09-30 00:38:25,987][1153456] Avg episode reward: [(0, '36.124')] +[2024-09-30 00:38:26,100][1153805] Updated weights for policy 0, policy_version 6219 (0.0006) +[2024-09-30 00:38:26,688][1153805] Updated weights for policy 0, policy_version 6229 (0.0006) +[2024-09-30 00:38:27,155][1153805] Updated weights for policy 0, policy_version 6239 (0.0006) +[2024-09-30 00:38:27,684][1153805] Updated weights for policy 0, policy_version 6249 (0.0006) +[2024-09-30 00:38:28,201][1153805] Updated weights for policy 0, policy_version 6259 (0.0006) +[2024-09-30 00:38:28,716][1153805] Updated weights for policy 0, policy_version 6269 (0.0006) +[2024-09-30 00:38:29,221][1153805] Updated weights for policy 0, policy_version 6279 (0.0006) +[2024-09-30 00:38:29,713][1153805] Updated weights for policy 0, policy_version 6289 (0.0006) +[2024-09-30 00:38:30,263][1153805] Updated weights for policy 0, policy_version 6299 (0.0006) +[2024-09-30 00:38:30,756][1153805] Updated weights for policy 0, policy_version 6309 (0.0006) +[2024-09-30 00:38:30,987][1153456] Fps is (10 sec: 80692.2, 60 sec: 76458.8, 300 sec: 75752.8). Total num frames: 25858048. Throughput: 0: 19327.5. Samples: 5436980. Policy #0 lag: (min: 0.0, avg: 2.0, max: 6.0) +[2024-09-30 00:38:30,987][1153456] Avg episode reward: [(0, '36.220')] +[2024-09-30 00:38:31,288][1153805] Updated weights for policy 0, policy_version 6319 (0.0006) +[2024-09-30 00:38:31,786][1153805] Updated weights for policy 0, policy_version 6329 (0.0006) +[2024-09-30 00:38:32,300][1153805] Updated weights for policy 0, policy_version 6339 (0.0006) +[2024-09-30 00:38:32,817][1153805] Updated weights for policy 0, policy_version 6349 (0.0006) +[2024-09-30 00:38:33,141][1153683] Signal inference workers to stop experience collection... (500 times) +[2024-09-30 00:38:33,142][1153683] Signal inference workers to resume experience collection... 
(500 times) +[2024-09-30 00:38:33,146][1153805] InferenceWorker_p0-w0: stopping experience collection (500 times) +[2024-09-30 00:38:33,146][1153805] InferenceWorker_p0-w0: resuming experience collection (500 times) +[2024-09-30 00:38:33,322][1153805] Updated weights for policy 0, policy_version 6359 (0.0006) +[2024-09-30 00:38:33,828][1153805] Updated weights for policy 0, policy_version 6369 (0.0006) +[2024-09-30 00:38:34,375][1153805] Updated weights for policy 0, policy_version 6379 (0.0006) +[2024-09-30 00:38:34,903][1153805] Updated weights for policy 0, policy_version 6389 (0.0006) +[2024-09-30 00:38:35,470][1153805] Updated weights for policy 0, policy_version 6399 (0.0006) +[2024-09-30 00:38:35,987][1153456] Fps is (10 sec: 78643.5, 60 sec: 76800.2, 300 sec: 75788.1). Total num frames: 26247168. Throughput: 0: 19545.0. Samples: 5554628. Policy #0 lag: (min: 0.0, avg: 2.2, max: 5.0) +[2024-09-30 00:38:35,987][1153456] Avg episode reward: [(0, '33.533')] +[2024-09-30 00:38:36,005][1153805] Updated weights for policy 0, policy_version 6409 (0.0006) +[2024-09-30 00:38:36,554][1153805] Updated weights for policy 0, policy_version 6419 (0.0006) +[2024-09-30 00:38:37,136][1153805] Updated weights for policy 0, policy_version 6429 (0.0006) +[2024-09-30 00:38:37,662][1153805] Updated weights for policy 0, policy_version 6439 (0.0006) +[2024-09-30 00:38:38,183][1153805] Updated weights for policy 0, policy_version 6449 (0.0006) +[2024-09-30 00:38:38,727][1153805] Updated weights for policy 0, policy_version 6459 (0.0006) +[2024-09-30 00:38:39,231][1153805] Updated weights for policy 0, policy_version 6469 (0.0006) +[2024-09-30 00:38:39,770][1153805] Updated weights for policy 0, policy_version 6479 (0.0006) +[2024-09-30 00:38:40,301][1153805] Updated weights for policy 0, policy_version 6489 (0.0006) +[2024-09-30 00:38:40,796][1153805] Updated weights for policy 0, policy_version 6499 (0.0006) +[2024-09-30 00:38:40,987][1153456] Fps is (10 sec: 77414.6, 60 sec: 
77073.2, 300 sec: 76699.4). Total num frames: 26632192. Throughput: 0: 19670.1. Samples: 5611176. Policy #0 lag: (min: 0.0, avg: 2.2, max: 5.0) +[2024-09-30 00:38:40,987][1153456] Avg episode reward: [(0, '33.683')] +[2024-09-30 00:38:41,295][1153805] Updated weights for policy 0, policy_version 6509 (0.0006) +[2024-09-30 00:38:41,846][1153805] Updated weights for policy 0, policy_version 6519 (0.0006) +[2024-09-30 00:38:42,349][1153805] Updated weights for policy 0, policy_version 6529 (0.0006) +[2024-09-30 00:38:42,868][1153805] Updated weights for policy 0, policy_version 6539 (0.0006) +[2024-09-30 00:38:43,369][1153805] Updated weights for policy 0, policy_version 6549 (0.0006) +[2024-09-30 00:38:43,896][1153805] Updated weights for policy 0, policy_version 6559 (0.0006) +[2024-09-30 00:38:44,423][1153805] Updated weights for policy 0, policy_version 6569 (0.0006) +[2024-09-30 00:38:44,983][1153805] Updated weights for policy 0, policy_version 6579 (0.0006) +[2024-09-30 00:38:45,539][1153805] Updated weights for policy 0, policy_version 6589 (0.0006) +[2024-09-30 00:38:45,987][1153456] Fps is (10 sec: 77823.9, 60 sec: 77687.6, 300 sec: 77310.3). Total num frames: 27025408. Throughput: 0: 19904.2. Samples: 5729312. 
Policy #0 lag: (min: 0.0, avg: 2.2, max: 6.0) +[2024-09-30 00:38:45,987][1153456] Avg episode reward: [(0, '35.226')] +[2024-09-30 00:38:46,040][1153805] Updated weights for policy 0, policy_version 6599 (0.0006) +[2024-09-30 00:38:46,534][1153805] Updated weights for policy 0, policy_version 6609 (0.0006) +[2024-09-30 00:38:47,048][1153805] Updated weights for policy 0, policy_version 6619 (0.0006) +[2024-09-30 00:38:47,568][1153805] Updated weights for policy 0, policy_version 6629 (0.0006) +[2024-09-30 00:38:48,136][1153805] Updated weights for policy 0, policy_version 6639 (0.0006) +[2024-09-30 00:38:48,693][1153805] Updated weights for policy 0, policy_version 6649 (0.0006) +[2024-09-30 00:38:49,245][1153805] Updated weights for policy 0, policy_version 6659 (0.0006) +[2024-09-30 00:38:49,768][1153805] Updated weights for policy 0, policy_version 6669 (0.0006) +[2024-09-30 00:38:50,257][1153805] Updated weights for policy 0, policy_version 6679 (0.0006) +[2024-09-30 00:38:50,790][1153805] Updated weights for policy 0, policy_version 6689 (0.0006) +[2024-09-30 00:38:50,987][1153456] Fps is (10 sec: 77823.8, 60 sec: 78097.3, 300 sec: 77310.3). Total num frames: 27410432. Throughput: 0: 20076.0. Samples: 5845196. Policy #0 lag: (min: 0.0, avg: 2.3, max: 6.0) +[2024-09-30 00:38:50,987][1153456] Avg episode reward: [(0, '36.787')] +[2024-09-30 00:38:50,987][1153683] Saving new best policy, reward=36.787! 
+[2024-09-30 00:38:51,376][1153805] Updated weights for policy 0, policy_version 6699 (0.0006) +[2024-09-30 00:38:51,882][1153805] Updated weights for policy 0, policy_version 6709 (0.0006) +[2024-09-30 00:38:52,390][1153805] Updated weights for policy 0, policy_version 6719 (0.0006) +[2024-09-30 00:38:52,903][1153805] Updated weights for policy 0, policy_version 6729 (0.0006) +[2024-09-30 00:38:53,405][1153805] Updated weights for policy 0, policy_version 6739 (0.0006) +[2024-09-30 00:38:53,881][1153805] Updated weights for policy 0, policy_version 6749 (0.0006) +[2024-09-30 00:38:54,400][1153805] Updated weights for policy 0, policy_version 6759 (0.0006) +[2024-09-30 00:38:54,906][1153805] Updated weights for policy 0, policy_version 6769 (0.0006) +[2024-09-30 00:38:55,406][1153805] Updated weights for policy 0, policy_version 6779 (0.0006) +[2024-09-30 00:38:55,933][1153805] Updated weights for policy 0, policy_version 6789 (0.0006) +[2024-09-30 00:38:55,987][1153456] Fps is (10 sec: 78643.2, 60 sec: 79053.1, 300 sec: 77310.3). Total num frames: 27811840. Throughput: 0: 19923.3. Samples: 5904348. Policy #0 lag: (min: 0.0, avg: 2.2, max: 6.0) +[2024-09-30 00:38:55,987][1153456] Avg episode reward: [(0, '36.261')] +[2024-09-30 00:38:56,457][1153805] Updated weights for policy 0, policy_version 6799 (0.0006) +[2024-09-30 00:38:56,994][1153805] Updated weights for policy 0, policy_version 6809 (0.0006) +[2024-09-30 00:38:57,514][1153805] Updated weights for policy 0, policy_version 6819 (0.0006) +[2024-09-30 00:38:57,635][1153683] Signal inference workers to stop experience collection... (550 times) +[2024-09-30 00:38:57,636][1153683] Signal inference workers to resume experience collection... 
(550 times) +[2024-09-30 00:38:57,639][1153805] InferenceWorker_p0-w0: stopping experience collection (550 times) +[2024-09-30 00:38:57,642][1153805] InferenceWorker_p0-w0: resuming experience collection (550 times) +[2024-09-30 00:38:58,012][1153805] Updated weights for policy 0, policy_version 6829 (0.0006) +[2024-09-30 00:38:58,526][1153805] Updated weights for policy 0, policy_version 6839 (0.0006) +[2024-09-30 00:38:59,060][1153805] Updated weights for policy 0, policy_version 6849 (0.0006) +[2024-09-30 00:38:59,621][1153805] Updated weights for policy 0, policy_version 6859 (0.0006) +[2024-09-30 00:39:00,137][1153805] Updated weights for policy 0, policy_version 6869 (0.0006) +[2024-09-30 00:39:00,635][1153805] Updated weights for policy 0, policy_version 6879 (0.0006) +[2024-09-30 00:39:00,987][1153456] Fps is (10 sec: 79052.6, 60 sec: 79462.6, 300 sec: 77254.8). Total num frames: 28200960. Throughput: 0: 19812.7. Samples: 6023064. Policy #0 lag: (min: 0.0, avg: 2.9, max: 6.0) +[2024-09-30 00:39:00,987][1153456] Avg episode reward: [(0, '37.537')] +[2024-09-30 00:39:00,987][1153683] Saving new best policy, reward=37.537! 
+[2024-09-30 00:39:01,189][1153805] Updated weights for policy 0, policy_version 6889 (0.0006) +[2024-09-30 00:39:01,723][1153805] Updated weights for policy 0, policy_version 6899 (0.0006) +[2024-09-30 00:39:02,306][1153805] Updated weights for policy 0, policy_version 6909 (0.0006) +[2024-09-30 00:39:02,857][1153805] Updated weights for policy 0, policy_version 6919 (0.0006) +[2024-09-30 00:39:03,423][1153805] Updated weights for policy 0, policy_version 6929 (0.0006) +[2024-09-30 00:39:03,974][1153805] Updated weights for policy 0, policy_version 6939 (0.0006) +[2024-09-30 00:39:04,545][1153805] Updated weights for policy 0, policy_version 6949 (0.0006) +[2024-09-30 00:39:05,108][1153805] Updated weights for policy 0, policy_version 6959 (0.0006) +[2024-09-30 00:39:05,646][1153805] Updated weights for policy 0, policy_version 6969 (0.0006) +[2024-09-30 00:39:05,987][1153456] Fps is (10 sec: 75776.1, 60 sec: 79530.9, 300 sec: 77129.8). Total num frames: 28569600. Throughput: 0: 19589.9. Samples: 6134936. 
Policy #0 lag: (min: 0.0, avg: 2.9, max: 6.0) +[2024-09-30 00:39:05,987][1153456] Avg episode reward: [(0, '31.062')] +[2024-09-30 00:39:06,198][1153805] Updated weights for policy 0, policy_version 6979 (0.0006) +[2024-09-30 00:39:06,782][1153805] Updated weights for policy 0, policy_version 6989 (0.0006) +[2024-09-30 00:39:07,289][1153805] Updated weights for policy 0, policy_version 6999 (0.0006) +[2024-09-30 00:39:07,855][1153805] Updated weights for policy 0, policy_version 7009 (0.0006) +[2024-09-30 00:39:08,422][1153805] Updated weights for policy 0, policy_version 7019 (0.0006) +[2024-09-30 00:39:08,962][1153805] Updated weights for policy 0, policy_version 7029 (0.0006) +[2024-09-30 00:39:09,472][1153805] Updated weights for policy 0, policy_version 7039 (0.0006) +[2024-09-30 00:39:09,974][1153805] Updated weights for policy 0, policy_version 7049 (0.0006) +[2024-09-30 00:39:10,493][1153805] Updated weights for policy 0, policy_version 7059 (0.0006) +[2024-09-30 00:39:10,987][1153456] Fps is (10 sec: 74547.6, 60 sec: 78711.7, 300 sec: 77157.5). Total num frames: 28946432. Throughput: 0: 19397.1. Samples: 6190416. Policy #0 lag: (min: 0.0, avg: 2.9, max: 6.0) +[2024-09-30 00:39:10,987][1153456] Avg episode reward: [(0, '37.698')] +[2024-09-30 00:39:10,987][1153683] Saving new best policy, reward=37.698! 
+[2024-09-30 00:39:11,073][1153805] Updated weights for policy 0, policy_version 7069 (0.0006) +[2024-09-30 00:39:11,653][1153805] Updated weights for policy 0, policy_version 7079 (0.0006) +[2024-09-30 00:39:12,211][1153805] Updated weights for policy 0, policy_version 7089 (0.0006) +[2024-09-30 00:39:12,784][1153805] Updated weights for policy 0, policy_version 7099 (0.0006) +[2024-09-30 00:39:13,328][1153805] Updated weights for policy 0, policy_version 7109 (0.0006) +[2024-09-30 00:39:13,907][1153805] Updated weights for policy 0, policy_version 7119 (0.0006) +[2024-09-30 00:39:14,423][1153805] Updated weights for policy 0, policy_version 7129 (0.0006) +[2024-09-30 00:39:14,999][1153805] Updated weights for policy 0, policy_version 7139 (0.0006) +[2024-09-30 00:39:15,563][1153805] Updated weights for policy 0, policy_version 7149 (0.0006) +[2024-09-30 00:39:15,987][1153456] Fps is (10 sec: 73727.9, 60 sec: 77892.3, 300 sec: 76990.9). Total num frames: 29306880. Throughput: 0: 19229.0. Samples: 6302284. 
Policy #0 lag: (min: 0.0, avg: 1.7, max: 5.0) +[2024-09-30 00:39:15,987][1153456] Avg episode reward: [(0, '37.531')] +[2024-09-30 00:39:16,120][1153805] Updated weights for policy 0, policy_version 7159 (0.0006) +[2024-09-30 00:39:16,717][1153805] Updated weights for policy 0, policy_version 7169 (0.0006) +[2024-09-30 00:39:17,248][1153805] Updated weights for policy 0, policy_version 7179 (0.0006) +[2024-09-30 00:39:17,807][1153805] Updated weights for policy 0, policy_version 7189 (0.0006) +[2024-09-30 00:39:18,381][1153805] Updated weights for policy 0, policy_version 7199 (0.0006) +[2024-09-30 00:39:18,895][1153805] Updated weights for policy 0, policy_version 7209 (0.0006) +[2024-09-30 00:39:19,442][1153805] Updated weights for policy 0, policy_version 7219 (0.0006) +[2024-09-30 00:39:19,998][1153805] Updated weights for policy 0, policy_version 7229 (0.0006) +[2024-09-30 00:39:20,524][1153805] Updated weights for policy 0, policy_version 7239 (0.0006) +[2024-09-30 00:39:20,987][1153456] Fps is (10 sec: 73727.2, 60 sec: 77209.7, 300 sec: 76907.6). Total num frames: 29683712. Throughput: 0: 19084.1. Samples: 6413416. Policy #0 lag: (min: 1.0, avg: 2.6, max: 6.0) +[2024-09-30 00:39:20,987][1153456] Avg episode reward: [(0, '40.574')] +[2024-09-30 00:39:20,988][1153683] Saving new best policy, reward=40.574! 
+[2024-09-30 00:39:21,072][1153805] Updated weights for policy 0, policy_version 7249 (0.0006) +[2024-09-30 00:39:21,657][1153805] Updated weights for policy 0, policy_version 7259 (0.0006) +[2024-09-30 00:39:22,183][1153805] Updated weights for policy 0, policy_version 7269 (0.0006) +[2024-09-30 00:39:22,755][1153805] Updated weights for policy 0, policy_version 7279 (0.0006) +[2024-09-30 00:39:23,315][1153805] Updated weights for policy 0, policy_version 7289 (0.0006) +[2024-09-30 00:39:23,825][1153805] Updated weights for policy 0, policy_version 7299 (0.0006) +[2024-09-30 00:39:24,351][1153805] Updated weights for policy 0, policy_version 7309 (0.0006) +[2024-09-30 00:39:24,854][1153805] Updated weights for policy 0, policy_version 7319 (0.0006) +[2024-09-30 00:39:25,379][1153805] Updated weights for policy 0, policy_version 7329 (0.0006) +[2024-09-30 00:39:25,884][1153805] Updated weights for policy 0, policy_version 7339 (0.0006) +[2024-09-30 00:39:25,987][1153456] Fps is (10 sec: 76185.6, 60 sec: 76800.0, 300 sec: 76935.4). Total num frames: 30068736. Throughput: 0: 19063.5. Samples: 6469036. Policy #0 lag: (min: 0.0, avg: 2.2, max: 5.0) +[2024-09-30 00:39:25,987][1153456] Avg episode reward: [(0, '36.766')] +[2024-09-30 00:39:26,392][1153805] Updated weights for policy 0, policy_version 7349 (0.0006) +[2024-09-30 00:39:26,886][1153805] Updated weights for policy 0, policy_version 7359 (0.0006) +[2024-09-30 00:39:26,954][1153683] Signal inference workers to stop experience collection... (600 times) +[2024-09-30 00:39:26,956][1153805] InferenceWorker_p0-w0: stopping experience collection (600 times) +[2024-09-30 00:39:26,961][1153683] Signal inference workers to resume experience collection... 
(600 times) +[2024-09-30 00:39:26,962][1153805] InferenceWorker_p0-w0: resuming experience collection (600 times) +[2024-09-30 00:39:27,404][1153805] Updated weights for policy 0, policy_version 7369 (0.0007) +[2024-09-30 00:39:27,906][1153805] Updated weights for policy 0, policy_version 7379 (0.0006) +[2024-09-30 00:39:28,419][1153805] Updated weights for policy 0, policy_version 7389 (0.0006) +[2024-09-30 00:39:28,919][1153805] Updated weights for policy 0, policy_version 7399 (0.0006) +[2024-09-30 00:39:29,413][1153805] Updated weights for policy 0, policy_version 7409 (0.0006) +[2024-09-30 00:39:29,959][1153805] Updated weights for policy 0, policy_version 7419 (0.0006) +[2024-09-30 00:39:30,484][1153805] Updated weights for policy 0, policy_version 7429 (0.0006) +[2024-09-30 00:39:30,987][1153456] Fps is (10 sec: 78233.6, 60 sec: 76799.9, 300 sec: 77157.5). Total num frames: 30466048. Throughput: 0: 19112.9. Samples: 6589396. Policy #0 lag: (min: 0.0, avg: 1.5, max: 4.0) +[2024-09-30 00:39:30,987][1153456] Avg episode reward: [(0, '38.132')] +[2024-09-30 00:39:31,012][1153805] Updated weights for policy 0, policy_version 7439 (0.0006) +[2024-09-30 00:39:31,513][1153805] Updated weights for policy 0, policy_version 7449 (0.0006) +[2024-09-30 00:39:32,024][1153805] Updated weights for policy 0, policy_version 7459 (0.0006) +[2024-09-30 00:39:32,533][1153805] Updated weights for policy 0, policy_version 7469 (0.0006) +[2024-09-30 00:39:33,012][1153805] Updated weights for policy 0, policy_version 7479 (0.0006) +[2024-09-30 00:39:33,496][1153805] Updated weights for policy 0, policy_version 7489 (0.0006) +[2024-09-30 00:39:33,987][1153805] Updated weights for policy 0, policy_version 7499 (0.0006) +[2024-09-30 00:39:34,487][1153805] Updated weights for policy 0, policy_version 7509 (0.0007) +[2024-09-30 00:39:34,974][1153805] Updated weights for policy 0, policy_version 7519 (0.0006) +[2024-09-30 00:39:35,447][1153805] Updated weights for policy 0, policy_version 
7529 (0.0007) +[2024-09-30 00:39:35,957][1153805] Updated weights for policy 0, policy_version 7539 (0.0006) +[2024-09-30 00:39:35,987][1153456] Fps is (10 sec: 81100.9, 60 sec: 77209.6, 300 sec: 77352.0). Total num frames: 30879744. Throughput: 0: 19258.4. Samples: 6711824. Policy #0 lag: (min: 0.0, avg: 1.5, max: 4.0) +[2024-09-30 00:39:35,987][1153456] Avg episode reward: [(0, '37.346')] +[2024-09-30 00:39:35,990][1153683] Saving /home/luyang/workspace/rl/train_dir/default_experiment/checkpoint_p0/checkpoint_000007540_30883840.pth... +[2024-09-30 00:39:36,036][1153683] Removing /home/luyang/workspace/rl/train_dir/default_experiment/checkpoint_p0/checkpoint_000003049_12488704.pth +[2024-09-30 00:39:36,450][1153805] Updated weights for policy 0, policy_version 7549 (0.0007) +[2024-09-30 00:39:36,946][1153805] Updated weights for policy 0, policy_version 7559 (0.0006) +[2024-09-30 00:39:37,429][1153805] Updated weights for policy 0, policy_version 7569 (0.0006) +[2024-09-30 00:39:37,933][1153805] Updated weights for policy 0, policy_version 7579 (0.0006) +[2024-09-30 00:39:38,436][1153805] Updated weights for policy 0, policy_version 7589 (0.0006) +[2024-09-30 00:39:38,945][1153805] Updated weights for policy 0, policy_version 7599 (0.0006) +[2024-09-30 00:39:39,448][1153805] Updated weights for policy 0, policy_version 7609 (0.0006) +[2024-09-30 00:39:39,963][1153805] Updated weights for policy 0, policy_version 7619 (0.0006) +[2024-09-30 00:39:40,477][1153805] Updated weights for policy 0, policy_version 7629 (0.0006) +[2024-09-30 00:39:40,987][1153456] Fps is (10 sec: 81920.2, 60 sec: 77550.8, 300 sec: 77338.1). Total num frames: 31285248. Throughput: 0: 19327.2. Samples: 6774072. 
Policy #0 lag: (min: 0.0, avg: 2.6, max: 6.0) +[2024-09-30 00:39:40,987][1153456] Avg episode reward: [(0, '32.475')] +[2024-09-30 00:39:41,008][1153805] Updated weights for policy 0, policy_version 7639 (0.0006) +[2024-09-30 00:39:41,562][1153805] Updated weights for policy 0, policy_version 7649 (0.0007) +[2024-09-30 00:39:42,115][1153805] Updated weights for policy 0, policy_version 7659 (0.0006) +[2024-09-30 00:39:42,657][1153805] Updated weights for policy 0, policy_version 7669 (0.0007) +[2024-09-30 00:39:43,176][1153805] Updated weights for policy 0, policy_version 7679 (0.0006) +[2024-09-30 00:39:43,704][1153805] Updated weights for policy 0, policy_version 7689 (0.0006) +[2024-09-30 00:39:44,267][1153805] Updated weights for policy 0, policy_version 7699 (0.0006) +[2024-09-30 00:39:44,804][1153805] Updated weights for policy 0, policy_version 7709 (0.0006) +[2024-09-30 00:39:45,342][1153805] Updated weights for policy 0, policy_version 7719 (0.0006) +[2024-09-30 00:39:45,916][1153805] Updated weights for policy 0, policy_version 7729 (0.0006) +[2024-09-30 00:39:45,987][1153456] Fps is (10 sec: 78643.1, 60 sec: 77346.1, 300 sec: 77463.1). Total num frames: 31666176. Throughput: 0: 19257.4. Samples: 6889648. 
Policy #0 lag: (min: 0.0, avg: 2.7, max: 7.0) +[2024-09-30 00:39:45,987][1153456] Avg episode reward: [(0, '38.643')] +[2024-09-30 00:39:46,449][1153805] Updated weights for policy 0, policy_version 7739 (0.0006) +[2024-09-30 00:39:46,982][1153805] Updated weights for policy 0, policy_version 7749 (0.0006) +[2024-09-30 00:39:47,494][1153805] Updated weights for policy 0, policy_version 7759 (0.0006) +[2024-09-30 00:39:48,030][1153805] Updated weights for policy 0, policy_version 7769 (0.0006) +[2024-09-30 00:39:48,558][1153805] Updated weights for policy 0, policy_version 7779 (0.0006) +[2024-09-30 00:39:49,115][1153805] Updated weights for policy 0, policy_version 7789 (0.0006) +[2024-09-30 00:39:49,690][1153805] Updated weights for policy 0, policy_version 7799 (0.0006) +[2024-09-30 00:39:50,207][1153805] Updated weights for policy 0, policy_version 7809 (0.0006) +[2024-09-30 00:39:50,748][1153805] Updated weights for policy 0, policy_version 7819 (0.0006) +[2024-09-30 00:39:50,987][1153456] Fps is (10 sec: 76185.5, 60 sec: 77277.8, 300 sec: 77588.0). Total num frames: 32047104. Throughput: 0: 19303.1. Samples: 7003576. Policy #0 lag: (min: 0.0, avg: 2.4, max: 7.0) +[2024-09-30 00:39:50,987][1153456] Avg episode reward: [(0, '36.930')] +[2024-09-30 00:39:51,230][1153805] Updated weights for policy 0, policy_version 7829 (0.0005) +[2024-09-30 00:39:51,769][1153805] Updated weights for policy 0, policy_version 7839 (0.0006) +[2024-09-30 00:39:52,271][1153805] Updated weights for policy 0, policy_version 7849 (0.0007) +[2024-09-30 00:39:52,320][1153683] Signal inference workers to stop experience collection... (650 times) +[2024-09-30 00:39:52,324][1153683] Signal inference workers to resume experience collection... 
(650 times) +[2024-09-30 00:39:52,325][1153805] InferenceWorker_p0-w0: stopping experience collection (650 times) +[2024-09-30 00:39:52,328][1153805] InferenceWorker_p0-w0: resuming experience collection (650 times) +[2024-09-30 00:39:52,784][1153805] Updated weights for policy 0, policy_version 7859 (0.0006) +[2024-09-30 00:39:53,287][1153805] Updated weights for policy 0, policy_version 7869 (0.0006) +[2024-09-30 00:39:53,804][1153805] Updated weights for policy 0, policy_version 7879 (0.0006) +[2024-09-30 00:39:54,327][1153805] Updated weights for policy 0, policy_version 7889 (0.0006) +[2024-09-30 00:39:54,840][1153805] Updated weights for policy 0, policy_version 7899 (0.0006) +[2024-09-30 00:39:55,366][1153805] Updated weights for policy 0, policy_version 7909 (0.0006) +[2024-09-30 00:39:55,876][1153805] Updated weights for policy 0, policy_version 7919 (0.0006) +[2024-09-30 00:39:55,994][1153456] Fps is (10 sec: 77768.1, 60 sec: 77200.4, 300 sec: 77655.5). Total num frames: 32444416. Throughput: 0: 19397.2. Samples: 7063428. 
Policy #0 lag: (min: 0.0, avg: 2.7, max: 7.0) +[2024-09-30 00:39:55,995][1153456] Avg episode reward: [(0, '32.909')] +[2024-09-30 00:39:56,394][1153805] Updated weights for policy 0, policy_version 7929 (0.0007) +[2024-09-30 00:39:56,906][1153805] Updated weights for policy 0, policy_version 7939 (0.0006) +[2024-09-30 00:39:57,435][1153805] Updated weights for policy 0, policy_version 7949 (0.0006) +[2024-09-30 00:39:57,972][1153805] Updated weights for policy 0, policy_version 7959 (0.0006) +[2024-09-30 00:39:58,494][1153805] Updated weights for policy 0, policy_version 7969 (0.0006) +[2024-09-30 00:39:59,032][1153805] Updated weights for policy 0, policy_version 7979 (0.0006) +[2024-09-30 00:39:59,555][1153805] Updated weights for policy 0, policy_version 7989 (0.0006) +[2024-09-30 00:40:00,089][1153805] Updated weights for policy 0, policy_version 7999 (0.0006) +[2024-09-30 00:40:00,622][1153805] Updated weights for policy 0, policy_version 8009 (0.0006) +[2024-09-30 00:40:00,987][1153456] Fps is (10 sec: 78643.9, 60 sec: 77209.7, 300 sec: 77560.2). Total num frames: 32833536. Throughput: 0: 19529.9. Samples: 7181128. 
Policy #0 lag: (min: 0.0, avg: 2.7, max: 7.0) +[2024-09-30 00:40:00,987][1153456] Avg episode reward: [(0, '38.021')] +[2024-09-30 00:40:01,066][1153805] Updated weights for policy 0, policy_version 8019 (0.0006) +[2024-09-30 00:40:01,563][1153805] Updated weights for policy 0, policy_version 8029 (0.0006) +[2024-09-30 00:40:02,005][1153805] Updated weights for policy 0, policy_version 8039 (0.0006) +[2024-09-30 00:40:02,491][1153805] Updated weights for policy 0, policy_version 8049 (0.0006) +[2024-09-30 00:40:02,984][1153805] Updated weights for policy 0, policy_version 8059 (0.0006) +[2024-09-30 00:40:03,473][1153805] Updated weights for policy 0, policy_version 8069 (0.0006) +[2024-09-30 00:40:03,958][1153805] Updated weights for policy 0, policy_version 8079 (0.0006) +[2024-09-30 00:40:04,448][1153805] Updated weights for policy 0, policy_version 8089 (0.0006) +[2024-09-30 00:40:04,923][1153805] Updated weights for policy 0, policy_version 8099 (0.0006) +[2024-09-30 00:40:05,414][1153805] Updated weights for policy 0, policy_version 8109 (0.0006) +[2024-09-30 00:40:05,872][1153805] Updated weights for policy 0, policy_version 8119 (0.0006) +[2024-09-30 00:40:05,987][1153456] Fps is (10 sec: 81978.8, 60 sec: 78233.6, 300 sec: 77574.1). Total num frames: 33263616. Throughput: 0: 19866.1. Samples: 7307388. 
Policy #0 lag: (min: 0.0, avg: 2.6, max: 6.0) +[2024-09-30 00:40:05,987][1153456] Avg episode reward: [(0, '38.524')] +[2024-09-30 00:40:06,339][1153805] Updated weights for policy 0, policy_version 8129 (0.0006) +[2024-09-30 00:40:06,803][1153805] Updated weights for policy 0, policy_version 8139 (0.0006) +[2024-09-30 00:40:07,296][1153805] Updated weights for policy 0, policy_version 8149 (0.0006) +[2024-09-30 00:40:07,803][1153805] Updated weights for policy 0, policy_version 8159 (0.0006) +[2024-09-30 00:40:08,302][1153805] Updated weights for policy 0, policy_version 8169 (0.0006) +[2024-09-30 00:40:08,809][1153805] Updated weights for policy 0, policy_version 8179 (0.0006) +[2024-09-30 00:40:09,281][1153805] Updated weights for policy 0, policy_version 8189 (0.0006) +[2024-09-30 00:40:09,784][1153805] Updated weights for policy 0, policy_version 8199 (0.0006) +[2024-09-30 00:40:10,308][1153805] Updated weights for policy 0, policy_version 8209 (0.0006) +[2024-09-30 00:40:10,864][1153805] Updated weights for policy 0, policy_version 8219 (0.0006) +[2024-09-30 00:40:10,987][1153456] Fps is (10 sec: 83966.9, 60 sec: 78779.6, 300 sec: 77546.3). Total num frames: 33673216. Throughput: 0: 20033.8. Samples: 7370560. 
Policy #0 lag: (min: 0.0, avg: 2.3, max: 4.0) +[2024-09-30 00:40:10,987][1153456] Avg episode reward: [(0, '36.684')] +[2024-09-30 00:40:11,401][1153805] Updated weights for policy 0, policy_version 8229 (0.0006) +[2024-09-30 00:40:11,959][1153805] Updated weights for policy 0, policy_version 8239 (0.0007) +[2024-09-30 00:40:12,516][1153805] Updated weights for policy 0, policy_version 8249 (0.0006) +[2024-09-30 00:40:13,065][1153805] Updated weights for policy 0, policy_version 8259 (0.0006) +[2024-09-30 00:40:13,619][1153805] Updated weights for policy 0, policy_version 8269 (0.0006) +[2024-09-30 00:40:14,143][1153805] Updated weights for policy 0, policy_version 8279 (0.0007) +[2024-09-30 00:40:14,698][1153805] Updated weights for policy 0, policy_version 8289 (0.0006) +[2024-09-30 00:40:15,264][1153805] Updated weights for policy 0, policy_version 8299 (0.0006) +[2024-09-30 00:40:15,843][1153805] Updated weights for policy 0, policy_version 8309 (0.0006) +[2024-09-30 00:40:15,987][1153456] Fps is (10 sec: 77823.4, 60 sec: 78916.2, 300 sec: 77379.7). Total num frames: 34041856. Throughput: 0: 19911.3. Samples: 7485404. Policy #0 lag: (min: 0.0, avg: 2.3, max: 4.0) +[2024-09-30 00:40:15,987][1153456] Avg episode reward: [(0, '39.926')] +[2024-09-30 00:40:16,429][1153805] Updated weights for policy 0, policy_version 8319 (0.0006) +[2024-09-30 00:40:17,073][1153805] Updated weights for policy 0, policy_version 8329 (0.0006) +[2024-09-30 00:40:17,710][1153805] Updated weights for policy 0, policy_version 8339 (0.0006) +[2024-09-30 00:40:17,710][1153683] Signal inference workers to stop experience collection... (700 times) +[2024-09-30 00:40:17,711][1153683] Signal inference workers to resume experience collection... 
(700 times) +[2024-09-30 00:40:17,714][1153805] InferenceWorker_p0-w0: stopping experience collection (700 times) +[2024-09-30 00:40:17,714][1153805] InferenceWorker_p0-w0: resuming experience collection (700 times) +[2024-09-30 00:40:18,299][1153805] Updated weights for policy 0, policy_version 8349 (0.0006) +[2024-09-30 00:40:18,871][1153805] Updated weights for policy 0, policy_version 8359 (0.0006) +[2024-09-30 00:40:19,454][1153805] Updated weights for policy 0, policy_version 8369 (0.0006) +[2024-09-30 00:40:20,063][1153805] Updated weights for policy 0, policy_version 8379 (0.0006) +[2024-09-30 00:40:20,608][1153805] Updated weights for policy 0, policy_version 8389 (0.0006) +[2024-09-30 00:40:20,987][1153456] Fps is (10 sec: 71270.5, 60 sec: 78370.1, 300 sec: 77088.1). Total num frames: 34385920. Throughput: 0: 19502.4. Samples: 7589436. Policy #0 lag: (min: 0.0, avg: 3.3, max: 6.0) +[2024-09-30 00:40:20,987][1153456] Avg episode reward: [(0, '38.640')] +[2024-09-30 00:40:21,184][1153805] Updated weights for policy 0, policy_version 8399 (0.0006) +[2024-09-30 00:40:21,798][1153805] Updated weights for policy 0, policy_version 8409 (0.0006) +[2024-09-30 00:40:22,354][1153805] Updated weights for policy 0, policy_version 8419 (0.0006) +[2024-09-30 00:40:22,894][1153805] Updated weights for policy 0, policy_version 8429 (0.0006) +[2024-09-30 00:40:23,502][1153805] Updated weights for policy 0, policy_version 8439 (0.0006) +[2024-09-30 00:40:24,060][1153805] Updated weights for policy 0, policy_version 8449 (0.0006) +[2024-09-30 00:40:24,629][1153805] Updated weights for policy 0, policy_version 8459 (0.0006) +[2024-09-30 00:40:25,202][1153805] Updated weights for policy 0, policy_version 8469 (0.0006) +[2024-09-30 00:40:25,786][1153805] Updated weights for policy 0, policy_version 8479 (0.0006) +[2024-09-30 00:40:25,987][1153456] Fps is (10 sec: 70451.3, 60 sec: 77960.4, 300 sec: 76893.8). Total num frames: 34746368. Throughput: 0: 19310.0. Samples: 7643024. 
Policy #0 lag: (min: 0.0, avg: 2.3, max: 5.0) +[2024-09-30 00:40:25,987][1153456] Avg episode reward: [(0, '40.783')] +[2024-09-30 00:40:25,990][1153683] Saving new best policy, reward=40.783! +[2024-09-30 00:40:26,314][1153805] Updated weights for policy 0, policy_version 8489 (0.0006) +[2024-09-30 00:40:26,845][1153805] Updated weights for policy 0, policy_version 8499 (0.0006) +[2024-09-30 00:40:27,376][1153805] Updated weights for policy 0, policy_version 8509 (0.0006) +[2024-09-30 00:40:27,888][1153805] Updated weights for policy 0, policy_version 8519 (0.0008) +[2024-09-30 00:40:28,413][1153805] Updated weights for policy 0, policy_version 8529 (0.0008) +[2024-09-30 00:40:28,860][1153805] Updated weights for policy 0, policy_version 8539 (0.0008) +[2024-09-30 00:40:29,372][1153805] Updated weights for policy 0, policy_version 8549 (0.0007) +[2024-09-30 00:40:29,855][1153805] Updated weights for policy 0, policy_version 8559 (0.0007) +[2024-09-30 00:40:30,345][1153805] Updated weights for policy 0, policy_version 8569 (0.0006) +[2024-09-30 00:40:30,847][1153805] Updated weights for policy 0, policy_version 8579 (0.0006) +[2024-09-30 00:40:30,987][1153456] Fps is (10 sec: 76185.3, 60 sec: 78028.7, 300 sec: 76810.4). Total num frames: 35147776. Throughput: 0: 19307.9. Samples: 7758504. 
Policy #0 lag: (min: 0.0, avg: 1.7, max: 5.0) +[2024-09-30 00:40:30,987][1153456] Avg episode reward: [(0, '37.805')] +[2024-09-30 00:40:31,352][1153805] Updated weights for policy 0, policy_version 8589 (0.0006) +[2024-09-30 00:40:31,869][1153805] Updated weights for policy 0, policy_version 8599 (0.0006) +[2024-09-30 00:40:32,391][1153805] Updated weights for policy 0, policy_version 8609 (0.0006) +[2024-09-30 00:40:32,916][1153805] Updated weights for policy 0, policy_version 8619 (0.0006) +[2024-09-30 00:40:33,368][1153805] Updated weights for policy 0, policy_version 8629 (0.0006) +[2024-09-30 00:40:33,835][1153805] Updated weights for policy 0, policy_version 8639 (0.0006) +[2024-09-30 00:40:34,343][1153805] Updated weights for policy 0, policy_version 8649 (0.0006) +[2024-09-30 00:40:34,787][1153805] Updated weights for policy 0, policy_version 8659 (0.0006) +[2024-09-30 00:40:35,295][1153805] Updated weights for policy 0, policy_version 8669 (0.0006) +[2024-09-30 00:40:35,784][1153805] Updated weights for policy 0, policy_version 8679 (0.0006) +[2024-09-30 00:40:35,987][1153456] Fps is (10 sec: 81920.0, 60 sec: 78097.0, 300 sec: 76852.1). Total num frames: 35565568. Throughput: 0: 19536.3. Samples: 7882708. 
Policy #0 lag: (min: 0.0, avg: 1.7, max: 5.0) +[2024-09-30 00:40:35,987][1153456] Avg episode reward: [(0, '37.921')] +[2024-09-30 00:40:36,285][1153805] Updated weights for policy 0, policy_version 8689 (0.0006) +[2024-09-30 00:40:36,803][1153805] Updated weights for policy 0, policy_version 8699 (0.0006) +[2024-09-30 00:40:37,307][1153805] Updated weights for policy 0, policy_version 8709 (0.0006) +[2024-09-30 00:40:37,815][1153805] Updated weights for policy 0, policy_version 8719 (0.0006) +[2024-09-30 00:40:38,319][1153805] Updated weights for policy 0, policy_version 8729 (0.0006) +[2024-09-30 00:40:38,805][1153805] Updated weights for policy 0, policy_version 8739 (0.0006) +[2024-09-30 00:40:39,292][1153805] Updated weights for policy 0, policy_version 8749 (0.0006) +[2024-09-30 00:40:39,790][1153805] Updated weights for policy 0, policy_version 8759 (0.0006) +[2024-09-30 00:40:40,269][1153805] Updated weights for policy 0, policy_version 8769 (0.0006) +[2024-09-30 00:40:40,776][1153805] Updated weights for policy 0, policy_version 8779 (0.0006) +[2024-09-30 00:40:40,987][1153456] Fps is (10 sec: 82739.4, 60 sec: 78165.3, 300 sec: 76866.0). Total num frames: 35975168. Throughput: 0: 19563.6. Samples: 7943652. 
Policy #0 lag: (min: 0.0, avg: 2.8, max: 6.0) +[2024-09-30 00:40:40,987][1153456] Avg episode reward: [(0, '38.878')] +[2024-09-30 00:40:41,258][1153805] Updated weights for policy 0, policy_version 8789 (0.0006) +[2024-09-30 00:40:41,711][1153805] Updated weights for policy 0, policy_version 8799 (0.0006) +[2024-09-30 00:40:42,190][1153805] Updated weights for policy 0, policy_version 8809 (0.0006) +[2024-09-30 00:40:42,661][1153805] Updated weights for policy 0, policy_version 8819 (0.0006) +[2024-09-30 00:40:43,121][1153805] Updated weights for policy 0, policy_version 8829 (0.0006) +[2024-09-30 00:40:43,576][1153805] Updated weights for policy 0, policy_version 8839 (0.0006) +[2024-09-30 00:40:44,070][1153805] Updated weights for policy 0, policy_version 8849 (0.0006) +[2024-09-30 00:40:44,561][1153805] Updated weights for policy 0, policy_version 8859 (0.0006) +[2024-09-30 00:40:45,043][1153805] Updated weights for policy 0, policy_version 8869 (0.0006) +[2024-09-30 00:40:45,518][1153805] Updated weights for policy 0, policy_version 8879 (0.0006) +[2024-09-30 00:40:45,985][1153805] Updated weights for policy 0, policy_version 8889 (0.0006) +[2024-09-30 00:40:45,987][1153456] Fps is (10 sec: 84377.8, 60 sec: 79052.7, 300 sec: 76949.3). Total num frames: 36409344. Throughput: 0: 19795.6. Samples: 8071932. 
Policy #0 lag: (min: 0.0, avg: 1.7, max: 4.0)
+[2024-09-30 00:40:45,987][1153456] Avg episode reward: [(0, '39.409')]
+[2024-09-30 00:40:46,476][1153805] Updated weights for policy 0, policy_version 8899 (0.0006)
+[2024-09-30 00:40:46,965][1153805] Updated weights for policy 0, policy_version 8909 (0.0006)
+[2024-09-30 00:40:47,435][1153805] Updated weights for policy 0, policy_version 8919 (0.0006)
+[2024-09-30 00:40:47,896][1153805] Updated weights for policy 0, policy_version 8929 (0.0006)
+[2024-09-30 00:40:48,376][1153805] Updated weights for policy 0, policy_version 8939 (0.0006)
+[2024-09-30 00:40:48,877][1153805] Updated weights for policy 0, policy_version 8949 (0.0006)
+[2024-09-30 00:40:49,370][1153805] Updated weights for policy 0, policy_version 8959 (0.0006)
+[2024-09-30 00:40:49,815][1153805] Updated weights for policy 0, policy_version 8969 (0.0005)
+[2024-09-30 00:40:50,314][1153805] Updated weights for policy 0, policy_version 8979 (0.0006)
+[2024-09-30 00:40:50,821][1153805] Updated weights for policy 0, policy_version 8989 (0.0006)
+[2024-09-30 00:40:50,987][1153456] Fps is (10 sec: 85607.4, 60 sec: 79735.6, 300 sec: 76963.2). Total num frames: 36831232. Throughput: 0: 19828.9. Samples: 8199688. Policy #0 lag: (min: 0.0, avg: 1.7, max: 4.0)
+[2024-09-30 00:40:50,987][1153456] Avg episode reward: [(0, '40.095')]
+[2024-09-30 00:40:51,317][1153805] Updated weights for policy 0, policy_version 8999 (0.0006)
+[2024-09-30 00:40:51,807][1153805] Updated weights for policy 0, policy_version 9009 (0.0006)
+[2024-09-30 00:40:52,304][1153805] Updated weights for policy 0, policy_version 9019 (0.0006)
+[2024-09-30 00:40:52,792][1153805] Updated weights for policy 0, policy_version 9029 (0.0006)
+[2024-09-30 00:40:53,249][1153805] Updated weights for policy 0, policy_version 9039 (0.0006)
+[2024-09-30 00:40:53,724][1153683] Signal inference workers to stop experience collection... (750 times)
+[2024-09-30 00:40:53,727][1153805] InferenceWorker_p0-w0: stopping experience collection (750 times)
+[2024-09-30 00:40:53,733][1153683] Signal inference workers to resume experience collection... (750 times)
+[2024-09-30 00:40:53,733][1153805] InferenceWorker_p0-w0: resuming experience collection (750 times)
+[2024-09-30 00:40:53,734][1153805] Updated weights for policy 0, policy_version 9049 (0.0006)
+[2024-09-30 00:40:54,154][1153805] Updated weights for policy 0, policy_version 9059 (0.0006)
+[2024-09-30 00:40:54,605][1153805] Updated weights for policy 0, policy_version 9069 (0.0006)
+[2024-09-30 00:40:55,068][1153805] Updated weights for policy 0, policy_version 9079 (0.0006)
+[2024-09-30 00:40:55,528][1153805] Updated weights for policy 0, policy_version 9089 (0.0006)
+[2024-09-30 00:40:55,963][1153805] Updated weights for policy 0, policy_version 9099 (0.0006)
+[2024-09-30 00:40:55,987][1153456] Fps is (10 sec: 86015.8, 60 sec: 80427.7, 300 sec: 77032.6). Total num frames: 37269504. Throughput: 0: 19831.6. Samples: 8262980.
Policy #0 lag: (min: 0.0, avg: 1.9, max: 5.0) +[2024-09-30 00:40:55,987][1153456] Avg episode reward: [(0, '39.394')] +[2024-09-30 00:40:56,449][1153805] Updated weights for policy 0, policy_version 9109 (0.0006) +[2024-09-30 00:40:56,943][1153805] Updated weights for policy 0, policy_version 9119 (0.0006) +[2024-09-30 00:40:57,356][1153805] Updated weights for policy 0, policy_version 9129 (0.0006) +[2024-09-30 00:40:57,845][1153805] Updated weights for policy 0, policy_version 9139 (0.0006) +[2024-09-30 00:40:58,338][1153805] Updated weights for policy 0, policy_version 9149 (0.0006) +[2024-09-30 00:40:58,825][1153805] Updated weights for policy 0, policy_version 9159 (0.0006) +[2024-09-30 00:40:59,322][1153805] Updated weights for policy 0, policy_version 9169 (0.0007) +[2024-09-30 00:40:59,747][1153805] Updated weights for policy 0, policy_version 9179 (0.0006) +[2024-09-30 00:41:00,224][1153805] Updated weights for policy 0, policy_version 9189 (0.0006) +[2024-09-30 00:41:00,704][1153805] Updated weights for policy 0, policy_version 9199 (0.0006) +[2024-09-30 00:41:00,987][1153456] Fps is (10 sec: 86834.9, 60 sec: 81100.7, 300 sec: 77115.9). Total num frames: 37699584. Throughput: 0: 20213.8. Samples: 8395024. 
Policy #0 lag: (min: 0.0, avg: 2.3, max: 7.0) +[2024-09-30 00:41:00,987][1153456] Avg episode reward: [(0, '39.348')] +[2024-09-30 00:41:01,187][1153805] Updated weights for policy 0, policy_version 9209 (0.0006) +[2024-09-30 00:41:01,725][1153805] Updated weights for policy 0, policy_version 9219 (0.0006) +[2024-09-30 00:41:02,225][1153805] Updated weights for policy 0, policy_version 9229 (0.0006) +[2024-09-30 00:41:02,750][1153805] Updated weights for policy 0, policy_version 9239 (0.0006) +[2024-09-30 00:41:03,281][1153805] Updated weights for policy 0, policy_version 9249 (0.0007) +[2024-09-30 00:41:03,808][1153805] Updated weights for policy 0, policy_version 9259 (0.0006) +[2024-09-30 00:41:04,312][1153805] Updated weights for policy 0, policy_version 9269 (0.0006) +[2024-09-30 00:41:04,850][1153805] Updated weights for policy 0, policy_version 9279 (0.0006) +[2024-09-30 00:41:05,373][1153805] Updated weights for policy 0, policy_version 9289 (0.0006) +[2024-09-30 00:41:05,876][1153805] Updated weights for policy 0, policy_version 9299 (0.0006) +[2024-09-30 00:41:05,987][1153456] Fps is (10 sec: 82330.4, 60 sec: 80486.4, 300 sec: 77060.4). Total num frames: 38092800. Throughput: 0: 20583.2. Samples: 8515676. 
Policy #0 lag: (min: 0.0, avg: 2.3, max: 7.0) +[2024-09-30 00:41:05,987][1153456] Avg episode reward: [(0, '38.735')] +[2024-09-30 00:41:06,451][1153805] Updated weights for policy 0, policy_version 9309 (0.0006) +[2024-09-30 00:41:06,939][1153805] Updated weights for policy 0, policy_version 9319 (0.0006) +[2024-09-30 00:41:07,418][1153805] Updated weights for policy 0, policy_version 9329 (0.0006) +[2024-09-30 00:41:07,901][1153805] Updated weights for policy 0, policy_version 9339 (0.0006) +[2024-09-30 00:41:08,383][1153805] Updated weights for policy 0, policy_version 9349 (0.0006) +[2024-09-30 00:41:08,874][1153805] Updated weights for policy 0, policy_version 9359 (0.0006) +[2024-09-30 00:41:09,379][1153805] Updated weights for policy 0, policy_version 9369 (0.0006) +[2024-09-30 00:41:09,857][1153805] Updated weights for policy 0, policy_version 9379 (0.0006) +[2024-09-30 00:41:10,361][1153805] Updated weights for policy 0, policy_version 9389 (0.0006) +[2024-09-30 00:41:10,865][1153805] Updated weights for policy 0, policy_version 9399 (0.0006) +[2024-09-30 00:41:10,987][1153456] Fps is (10 sec: 80691.1, 60 sec: 80554.8, 300 sec: 77282.5). Total num frames: 38506496. Throughput: 0: 20753.0. Samples: 8576908. Policy #0 lag: (min: 0.0, avg: 2.6, max: 5.0) +[2024-09-30 00:41:10,987][1153456] Avg episode reward: [(0, '40.802')] +[2024-09-30 00:41:10,987][1153683] Saving new best policy, reward=40.802! 
+[2024-09-30 00:41:11,400][1153805] Updated weights for policy 0, policy_version 9409 (0.0006) +[2024-09-30 00:41:11,983][1153805] Updated weights for policy 0, policy_version 9419 (0.0006) +[2024-09-30 00:41:12,567][1153805] Updated weights for policy 0, policy_version 9429 (0.0006) +[2024-09-30 00:41:13,159][1153805] Updated weights for policy 0, policy_version 9439 (0.0006) +[2024-09-30 00:41:13,691][1153805] Updated weights for policy 0, policy_version 9449 (0.0006) +[2024-09-30 00:41:14,257][1153805] Updated weights for policy 0, policy_version 9459 (0.0006) +[2024-09-30 00:41:14,794][1153805] Updated weights for policy 0, policy_version 9469 (0.0006) +[2024-09-30 00:41:15,351][1153805] Updated weights for policy 0, policy_version 9479 (0.0006) +[2024-09-30 00:41:15,890][1153805] Updated weights for policy 0, policy_version 9489 (0.0006) +[2024-09-30 00:41:15,987][1153456] Fps is (10 sec: 77823.7, 60 sec: 80486.5, 300 sec: 77227.0). Total num frames: 38871040. Throughput: 0: 20742.6. Samples: 8691920. Policy #0 lag: (min: 0.0, avg: 2.2, max: 5.0) +[2024-09-30 00:41:15,987][1153456] Avg episode reward: [(0, '41.660')] +[2024-09-30 00:41:15,990][1153683] Saving new best policy, reward=41.660! 
+[2024-09-30 00:41:16,508][1153805] Updated weights for policy 0, policy_version 9499 (0.0006) +[2024-09-30 00:41:17,082][1153805] Updated weights for policy 0, policy_version 9509 (0.0006) +[2024-09-30 00:41:17,628][1153805] Updated weights for policy 0, policy_version 9519 (0.0006) +[2024-09-30 00:41:18,156][1153805] Updated weights for policy 0, policy_version 9529 (0.0006) +[2024-09-30 00:41:18,679][1153805] Updated weights for policy 0, policy_version 9539 (0.0006) +[2024-09-30 00:41:19,228][1153805] Updated weights for policy 0, policy_version 9549 (0.0006) +[2024-09-30 00:41:19,750][1153805] Updated weights for policy 0, policy_version 9559 (0.0006) +[2024-09-30 00:41:20,318][1153805] Updated weights for policy 0, policy_version 9569 (0.0006) +[2024-09-30 00:41:20,879][1153805] Updated weights for policy 0, policy_version 9579 (0.0006) +[2024-09-30 00:41:20,987][1153456] Fps is (10 sec: 73317.9, 60 sec: 80896.0, 300 sec: 77143.7). Total num frames: 39239680. Throughput: 0: 20449.0. Samples: 8802912. 
Policy #0 lag: (min: 0.0, avg: 2.2, max: 5.0) +[2024-09-30 00:41:20,987][1153456] Avg episode reward: [(0, '38.028')] +[2024-09-30 00:41:21,455][1153805] Updated weights for policy 0, policy_version 9589 (0.0006) +[2024-09-30 00:41:22,014][1153805] Updated weights for policy 0, policy_version 9599 (0.0006) +[2024-09-30 00:41:22,566][1153805] Updated weights for policy 0, policy_version 9609 (0.0006) +[2024-09-30 00:41:23,075][1153805] Updated weights for policy 0, policy_version 9619 (0.0006) +[2024-09-30 00:41:23,562][1153805] Updated weights for policy 0, policy_version 9629 (0.0006) +[2024-09-30 00:41:24,088][1153805] Updated weights for policy 0, policy_version 9639 (0.0006) +[2024-09-30 00:41:24,645][1153805] Updated weights for policy 0, policy_version 9649 (0.0006) +[2024-09-30 00:41:25,171][1153805] Updated weights for policy 0, policy_version 9659 (0.0006) +[2024-09-30 00:41:25,650][1153805] Updated weights for policy 0, policy_version 9669 (0.0006) +[2024-09-30 00:41:25,987][1153456] Fps is (10 sec: 75774.7, 60 sec: 81373.7, 300 sec: 77129.8). Total num frames: 39628800. Throughput: 0: 20358.3. Samples: 8859776. 
Policy #0 lag: (min: 0.0, avg: 2.3, max: 6.0) +[2024-09-30 00:41:25,987][1153456] Avg episode reward: [(0, '40.390')] +[2024-09-30 00:41:26,178][1153805] Updated weights for policy 0, policy_version 9679 (0.0006) +[2024-09-30 00:41:26,714][1153805] Updated weights for policy 0, policy_version 9689 (0.0006) +[2024-09-30 00:41:27,245][1153805] Updated weights for policy 0, policy_version 9699 (0.0006) +[2024-09-30 00:41:27,745][1153805] Updated weights for policy 0, policy_version 9709 (0.0006) +[2024-09-30 00:41:28,229][1153805] Updated weights for policy 0, policy_version 9719 (0.0006) +[2024-09-30 00:41:28,746][1153805] Updated weights for policy 0, policy_version 9729 (0.0006) +[2024-09-30 00:41:29,299][1153805] Updated weights for policy 0, policy_version 9739 (0.0006) +[2024-09-30 00:41:29,854][1153805] Updated weights for policy 0, policy_version 9749 (0.0006) +[2024-09-30 00:41:30,375][1153805] Updated weights for policy 0, policy_version 9759 (0.0006) +[2024-09-30 00:41:30,848][1153683] Stopping Batcher_0... +[2024-09-30 00:41:30,848][1153456] Component Batcher_0 stopped! +[2024-09-30 00:41:30,848][1153683] Loop batcher_evt_loop terminating... +[2024-09-30 00:41:30,859][1153683] Saving /home/luyang/workspace/rl/train_dir/default_experiment/checkpoint_p0/checkpoint_000009768_40009728.pth... +[2024-09-30 00:41:30,867][1153805] Weights refcount: 2 0 +[2024-09-30 00:41:30,868][1153805] Stopping InferenceWorker_p0-w0... +[2024-09-30 00:41:30,868][1153456] Component InferenceWorker_p0-w0 stopped! +[2024-09-30 00:41:30,868][1153805] Loop inference_proc0-0_evt_loop terminating... +[2024-09-30 00:41:30,909][1153683] Removing /home/luyang/workspace/rl/train_dir/default_experiment/checkpoint_p0/checkpoint_000005284_21643264.pth +[2024-09-30 00:41:30,913][1153806] Stopping RolloutWorker_w0... +[2024-09-30 00:41:30,913][1153456] Component RolloutWorker_w0 stopped! +[2024-09-30 00:41:30,913][1153806] Loop rollout_proc0_evt_loop terminating... 
+[2024-09-30 00:41:30,916][1153456] Component RolloutWorker_w11 stopped! +[2024-09-30 00:41:30,916][1153882] Stopping RolloutWorker_w11... +[2024-09-30 00:41:30,917][1153882] Loop rollout_proc11_evt_loop terminating... +[2024-09-30 00:41:30,917][1153456] Component RolloutWorker_w8 stopped! +[2024-09-30 00:41:30,917][1153814] Stopping RolloutWorker_w8... +[2024-09-30 00:41:30,917][1153456] Component RolloutWorker_w9 stopped! +[2024-09-30 00:41:30,917][1153814] Loop rollout_proc8_evt_loop terminating... +[2024-09-30 00:41:30,917][1153815] Stopping RolloutWorker_w9... +[2024-09-30 00:41:30,918][1153683] Saving new best policy, reward=41.848! +[2024-09-30 00:41:30,918][1153456] Component RolloutWorker_w1 stopped! +[2024-09-30 00:41:30,918][1153807] Stopping RolloutWorker_w1... +[2024-09-30 00:41:30,918][1153815] Loop rollout_proc9_evt_loop terminating... +[2024-09-30 00:41:30,918][1153807] Loop rollout_proc1_evt_loop terminating... +[2024-09-30 00:41:30,920][1153456] Component RolloutWorker_w3 stopped! +[2024-09-30 00:41:30,920][1153808] Stopping RolloutWorker_w3... +[2024-09-30 00:41:30,921][1153456] Component RolloutWorker_w7 stopped! +[2024-09-30 00:41:30,921][1153808] Loop rollout_proc3_evt_loop terminating... +[2024-09-30 00:41:30,921][1153813] Stopping RolloutWorker_w7... +[2024-09-30 00:41:30,921][1153456] Component RolloutWorker_w15 stopped! +[2024-09-30 00:41:30,921][1154909] Stopping RolloutWorker_w15... +[2024-09-30 00:41:30,921][1153813] Loop rollout_proc7_evt_loop terminating... +[2024-09-30 00:41:30,921][1154909] Loop rollout_proc15_evt_loop terminating... +[2024-09-30 00:41:30,922][1153456] Component RolloutWorker_w5 stopped! +[2024-09-30 00:41:30,922][1153812] Stopping RolloutWorker_w5... +[2024-09-30 00:41:30,922][1153812] Loop rollout_proc5_evt_loop terminating... +[2024-09-30 00:41:30,923][1153456] Component RolloutWorker_w13 stopped! +[2024-09-30 00:41:30,923][1153880] Stopping RolloutWorker_w13... 
+[2024-09-30 00:41:30,923][1153880] Loop rollout_proc13_evt_loop terminating... +[2024-09-30 00:41:30,925][1153456] Component RolloutWorker_w2 stopped! +[2024-09-30 00:41:30,925][1153809] Stopping RolloutWorker_w2... +[2024-09-30 00:41:30,926][1153809] Loop rollout_proc2_evt_loop terminating... +[2024-09-30 00:41:30,926][1153456] Component RolloutWorker_w14 stopped! +[2024-09-30 00:41:30,926][1153881] Stopping RolloutWorker_w14... +[2024-09-30 00:41:30,927][1153881] Loop rollout_proc14_evt_loop terminating... +[2024-09-30 00:41:30,927][1153456] Component RolloutWorker_w6 stopped! +[2024-09-30 00:41:30,927][1153811] Stopping RolloutWorker_w6... +[2024-09-30 00:41:30,927][1153456] Component RolloutWorker_w4 stopped! +[2024-09-30 00:41:30,927][1153811] Loop rollout_proc6_evt_loop terminating... +[2024-09-30 00:41:30,927][1153810] Stopping RolloutWorker_w4... +[2024-09-30 00:41:30,927][1153810] Loop rollout_proc4_evt_loop terminating... +[2024-09-30 00:41:30,928][1153456] Component RolloutWorker_w12 stopped! +[2024-09-30 00:41:30,928][1153883] Stopping RolloutWorker_w12... +[2024-09-30 00:41:30,928][1153883] Loop rollout_proc12_evt_loop terminating... +[2024-09-30 00:41:30,929][1153456] Component RolloutWorker_w10 stopped! +[2024-09-30 00:41:30,929][1153816] Stopping RolloutWorker_w10... +[2024-09-30 00:41:30,930][1153816] Loop rollout_proc10_evt_loop terminating... +[2024-09-30 00:41:30,973][1153683] Saving /home/luyang/workspace/rl/train_dir/default_experiment/checkpoint_p0/checkpoint_000009768_40009728.pth... +[2024-09-30 00:41:31,094][1153683] Stopping LearnerWorker_p0... +[2024-09-30 00:41:31,095][1153683] Loop learner_proc0_evt_loop terminating... +[2024-09-30 00:41:31,095][1153456] Component LearnerWorker_p0 stopped! +[2024-09-30 00:41:31,095][1153456] Waiting for process learner_proc0 to stop... +[2024-09-30 00:41:31,697][1153456] Waiting for process inference_proc0-0 to join... +[2024-09-30 00:41:31,697][1153456] Waiting for process rollout_proc0 to join... 
+[2024-09-30 00:41:31,698][1153456] Waiting for process rollout_proc1 to join...
+[2024-09-30 00:41:31,698][1153456] Waiting for process rollout_proc2 to join...
+[2024-09-30 00:41:31,698][1153456] Waiting for process rollout_proc3 to join...
+[2024-09-30 00:41:31,698][1153456] Waiting for process rollout_proc4 to join...
+[2024-09-30 00:41:31,699][1153456] Waiting for process rollout_proc5 to join...
+[2024-09-30 00:41:31,699][1153456] Waiting for process rollout_proc6 to join...
+[2024-09-30 00:41:31,699][1153456] Waiting for process rollout_proc7 to join...
+[2024-09-30 00:41:31,699][1153456] Waiting for process rollout_proc8 to join...
+[2024-09-30 00:41:31,700][1153456] Waiting for process rollout_proc9 to join...
+[2024-09-30 00:41:31,700][1153456] Waiting for process rollout_proc10 to join...
+[2024-09-30 00:41:31,700][1153456] Waiting for process rollout_proc11 to join...
+[2024-09-30 00:41:31,700][1153456] Waiting for process rollout_proc12 to join...
+[2024-09-30 00:41:31,701][1153456] Waiting for process rollout_proc13 to join...
+[2024-09-30 00:41:31,701][1153456] Waiting for process rollout_proc14 to join...
+[2024-09-30 00:41:31,701][1153456] Waiting for process rollout_proc15 to join...
+[2024-09-30 00:41:31,701][1153456] Batcher 0 profile tree view:
+batching: 77.9109, releasing_batches: 0.2341
+[2024-09-30 00:41:31,702][1153456] InferenceWorker_p0-w0 profile tree view:
+wait_policy: 0.0000
+ wait_policy_total: 14.1223
+update_model: 8.9162
+ weight_update: 0.0006
+one_step: 0.0019
+ handle_policy_step: 425.6982
+ deserialize: 29.2039, stack: 1.9383, obs_to_device_normalize: 99.8019, forward: 200.3020, send_messages: 26.0623
+ prepare_outputs: 51.2828
+ to_cpu: 27.0228
+[2024-09-30 00:41:31,702][1153456] Learner 0 profile tree view:
+misc: 0.0281, prepare_batch: 35.7234
+train: 109.3318
+ epoch_init: 0.0392, minibatch_init: 0.0370, losses_postprocess: 1.8251, kl_divergence: 2.1194, after_optimizer: 1.1084
+ calculate_losses: 42.6500
+ losses_init: 0.0257, forward_head: 3.9350, bptt_initial: 19.0626, tail: 3.7535, advantages_returns: 1.0221, losses: 7.0241
+ bptt: 6.6162
+ bptt_forward_core: 6.2523
+ update: 58.9957
+ clip: 4.9422
+[2024-09-30 00:41:31,702][1153456] RolloutWorker_w0 profile tree view:
+wait_for_trajectories: 0.3443, enqueue_policy_requests: 17.9735, env_step: 350.5113, overhead: 16.8544, complete_rollouts: 0.3431
+save_policy_outputs: 26.8913
+ split_output_tensors: 8.9608
+[2024-09-30 00:41:31,702][1153456] RolloutWorker_w15 profile tree view:
+wait_for_trajectories: 0.3431, enqueue_policy_requests: 17.7994, env_step: 349.1141, overhead: 17.0094, complete_rollouts: 0.3473
+save_policy_outputs: 27.3007
+ split_output_tensors: 9.0405
+[2024-09-30 00:41:31,702][1153456] Loop Runner_EvtLoop terminating...
+[2024-09-30 00:41:31,702][1153456] Runner profile tree view:
+main_loop: 473.8744
+[2024-09-30 00:41:31,703][1153456] Collected {0: 40009728}, FPS: 75977.6
+[2024-09-30 00:41:31,893][1153456] Loading existing experiment configuration from /home/luyang/workspace/rl/train_dir/default_experiment/config.json
+[2024-09-30 00:41:31,894][1153456] Overriding arg 'num_workers' with value 1 passed from command line
+[2024-09-30 00:41:31,894][1153456] Adding new argument 'no_render'=True that is not in the saved config file!
+[2024-09-30 00:41:31,894][1153456] Adding new argument 'save_video'=True that is not in the saved config file!
+[2024-09-30 00:41:31,894][1153456] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2024-09-30 00:41:31,894][1153456] Adding new argument 'video_name'=None that is not in the saved config file!
+[2024-09-30 00:41:31,894][1153456] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
+[2024-09-30 00:41:31,894][1153456] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2024-09-30 00:41:31,894][1153456] Adding new argument 'push_to_hub'=True that is not in the saved config file!
+[2024-09-30 00:41:31,894][1153456] Adding new argument 'hf_repository'='esperesa/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
+[2024-09-30 00:41:31,894][1153456] Adding new argument 'policy_index'=0 that is not in the saved config file!
+[2024-09-30 00:41:31,894][1153456] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+[2024-09-30 00:41:31,894][1153456] Adding new argument 'train_script'=None that is not in the saved config file!
+[2024-09-30 00:41:31,894][1153456] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+[2024-09-30 00:41:31,894][1153456] Using frameskip 1 and render_action_repeat=4 for evaluation +[2024-09-30 00:41:31,913][1153456] Doom resolution: 160x120, resize resolution: (128, 72) +[2024-09-30 00:41:31,914][1153456] RunningMeanStd input shape: (3, 72, 128) +[2024-09-30 00:41:31,915][1153456] RunningMeanStd input shape: (1,) +[2024-09-30 00:41:31,923][1153456] ConvEncoder: input_channels=3 +[2024-09-30 00:41:31,987][1153456] Conv encoder output size: 512 +[2024-09-30 00:41:31,987][1153456] Policy head output size: 512 +[2024-09-30 00:41:32,135][1153456] Loading state from checkpoint /home/luyang/workspace/rl/train_dir/default_experiment/checkpoint_p0/checkpoint_000009768_40009728.pth... +[2024-09-30 00:41:32,713][1153456] Num frames 100... +[2024-09-30 00:41:32,786][1153456] Num frames 200... +[2024-09-30 00:41:32,860][1153456] Num frames 300... +[2024-09-30 00:41:32,934][1153456] Num frames 400... +[2024-09-30 00:41:33,007][1153456] Num frames 500... +[2024-09-30 00:41:33,081][1153456] Num frames 600... +[2024-09-30 00:41:33,155][1153456] Num frames 700... +[2024-09-30 00:41:33,229][1153456] Num frames 800... +[2024-09-30 00:41:33,304][1153456] Num frames 900... +[2024-09-30 00:41:33,378][1153456] Num frames 1000... +[2024-09-30 00:41:33,451][1153456] Num frames 1100... +[2024-09-30 00:41:33,525][1153456] Num frames 1200... +[2024-09-30 00:41:33,600][1153456] Num frames 1300... +[2024-09-30 00:41:33,673][1153456] Num frames 1400... +[2024-09-30 00:41:33,748][1153456] Num frames 1500... +[2024-09-30 00:41:33,823][1153456] Num frames 1600... +[2024-09-30 00:41:33,896][1153456] Num frames 1700... +[2024-09-30 00:41:33,970][1153456] Num frames 1800... +[2024-09-30 00:41:34,072][1153456] Avg episode rewards: #0: 56.649, true rewards: #0: 18.650 +[2024-09-30 00:41:34,072][1153456] Avg episode reward: 56.649, avg true_objective: 18.650 +[2024-09-30 00:41:34,099][1153456] Num frames 1900... +[2024-09-30 00:41:34,172][1153456] Num frames 2000... 
+[2024-09-30 00:41:34,247][1153456] Num frames 2100... +[2024-09-30 00:41:34,322][1153456] Num frames 2200... +[2024-09-30 00:41:34,396][1153456] Num frames 2300... +[2024-09-30 00:41:34,470][1153456] Num frames 2400... +[2024-09-30 00:41:34,545][1153456] Num frames 2500... +[2024-09-30 00:41:34,618][1153456] Num frames 2600... +[2024-09-30 00:41:34,693][1153456] Num frames 2700... +[2024-09-30 00:41:34,768][1153456] Num frames 2800... +[2024-09-30 00:41:34,842][1153456] Num frames 2900... +[2024-09-30 00:41:34,916][1153456] Num frames 3000... +[2024-09-30 00:41:34,991][1153456] Num frames 3100... +[2024-09-30 00:41:35,065][1153456] Num frames 3200... +[2024-09-30 00:41:35,140][1153456] Num frames 3300... +[2024-09-30 00:41:35,214][1153456] Num frames 3400... +[2024-09-30 00:41:35,289][1153456] Num frames 3500... +[2024-09-30 00:41:35,368][1153456] Num frames 3600... +[2024-09-30 00:41:35,457][1153456] Num frames 3700... +[2024-09-30 00:41:35,548][1153456] Num frames 3800... +[2024-09-30 00:41:35,638][1153456] Num frames 3900... +[2024-09-30 00:41:35,752][1153456] Avg episode rewards: #0: 56.824, true rewards: #0: 19.825 +[2024-09-30 00:41:35,752][1153456] Avg episode reward: 56.824, avg true_objective: 19.825 +[2024-09-30 00:41:35,800][1153456] Num frames 4000... +[2024-09-30 00:41:35,893][1153456] Num frames 4100... +[2024-09-30 00:41:35,980][1153456] Num frames 4200... +[2024-09-30 00:41:36,069][1153456] Num frames 4300... +[2024-09-30 00:41:36,158][1153456] Num frames 4400... +[2024-09-30 00:41:36,246][1153456] Num frames 4500... +[2024-09-30 00:41:36,332][1153456] Num frames 4600... +[2024-09-30 00:41:36,422][1153456] Num frames 4700... +[2024-09-30 00:41:36,511][1153456] Num frames 4800... +[2024-09-30 00:41:36,599][1153456] Num frames 4900... +[2024-09-30 00:41:36,687][1153456] Num frames 5000... +[2024-09-30 00:41:36,777][1153456] Num frames 5100... +[2024-09-30 00:41:36,866][1153456] Num frames 5200... +[2024-09-30 00:41:36,954][1153456] Num frames 5300... 
+[2024-09-30 00:41:37,044][1153456] Num frames 5400... +[2024-09-30 00:41:37,136][1153456] Num frames 5500... +[2024-09-30 00:41:37,226][1153456] Num frames 5600... +[2024-09-30 00:41:37,315][1153456] Num frames 5700... +[2024-09-30 00:41:37,422][1153456] Avg episode rewards: #0: 51.856, true rewards: #0: 19.190 +[2024-09-30 00:41:37,422][1153456] Avg episode reward: 51.856, avg true_objective: 19.190 +[2024-09-30 00:41:37,475][1153456] Num frames 5800... +[2024-09-30 00:41:37,564][1153456] Num frames 5900... +[2024-09-30 00:41:37,653][1153456] Num frames 6000... +[2024-09-30 00:41:37,741][1153456] Num frames 6100... +[2024-09-30 00:41:37,828][1153456] Num frames 6200... +[2024-09-30 00:41:37,919][1153456] Num frames 6300... +[2024-09-30 00:41:38,008][1153456] Num frames 6400... +[2024-09-30 00:41:38,096][1153456] Num frames 6500... +[2024-09-30 00:41:38,186][1153456] Num frames 6600... +[2024-09-30 00:41:38,275][1153456] Num frames 6700... +[2024-09-30 00:41:38,364][1153456] Num frames 6800... +[2024-09-30 00:41:38,453][1153456] Num frames 6900... +[2024-09-30 00:41:38,541][1153456] Num frames 7000... +[2024-09-30 00:41:38,630][1153456] Num frames 7100... +[2024-09-30 00:41:38,718][1153456] Num frames 7200... +[2024-09-30 00:41:38,807][1153456] Num frames 7300... +[2024-09-30 00:41:38,916][1153456] Avg episode rewards: #0: 49.392, true rewards: #0: 18.393 +[2024-09-30 00:41:38,916][1153456] Avg episode reward: 49.392, avg true_objective: 18.393 +[2024-09-30 00:41:38,972][1153456] Num frames 7400... +[2024-09-30 00:41:39,061][1153456] Num frames 7500... +[2024-09-30 00:41:39,149][1153456] Num frames 7600... +[2024-09-30 00:41:39,237][1153456] Num frames 7700... +[2024-09-30 00:41:39,325][1153456] Num frames 7800... +[2024-09-30 00:41:39,413][1153456] Num frames 7900... +[2024-09-30 00:41:39,502][1153456] Num frames 8000... +[2024-09-30 00:41:39,590][1153456] Num frames 8100... +[2024-09-30 00:41:39,678][1153456] Num frames 8200... 
+[2024-09-30 00:41:39,767][1153456] Num frames 8300... +[2024-09-30 00:41:39,857][1153456] Num frames 8400... +[2024-09-30 00:41:39,946][1153456] Num frames 8500... +[2024-09-30 00:41:40,033][1153456] Num frames 8600... +[2024-09-30 00:41:40,125][1153456] Num frames 8700... +[2024-09-30 00:41:40,217][1153456] Num frames 8800... +[2024-09-30 00:41:40,306][1153456] Num frames 8900... +[2024-09-30 00:41:40,395][1153456] Num frames 9000... +[2024-09-30 00:41:40,485][1153456] Num frames 9100... +[2024-09-30 00:41:40,575][1153456] Num frames 9200... +[2024-09-30 00:41:40,666][1153456] Num frames 9300... +[2024-09-30 00:41:40,755][1153456] Num frames 9400... +[2024-09-30 00:41:40,863][1153456] Avg episode rewards: #0: 53.513, true rewards: #0: 18.914 +[2024-09-30 00:41:40,863][1153456] Avg episode reward: 53.513, avg true_objective: 18.914 +[2024-09-30 00:41:40,926][1153456] Num frames 9500... +[2024-09-30 00:41:41,014][1153456] Num frames 9600... +[2024-09-30 00:41:41,102][1153456] Num frames 9700... +[2024-09-30 00:41:41,191][1153456] Num frames 9800... +[2024-09-30 00:41:41,279][1153456] Num frames 9900... +[2024-09-30 00:41:41,367][1153456] Num frames 10000... +[2024-09-30 00:41:41,456][1153456] Num frames 10100... +[2024-09-30 00:41:41,543][1153456] Num frames 10200... +[2024-09-30 00:41:41,633][1153456] Num frames 10300... +[2024-09-30 00:41:41,723][1153456] Num frames 10400... +[2024-09-30 00:41:41,815][1153456] Num frames 10500... +[2024-09-30 00:41:41,904][1153456] Num frames 10600... +[2024-09-30 00:41:41,993][1153456] Num frames 10700... +[2024-09-30 00:41:42,082][1153456] Num frames 10800... +[2024-09-30 00:41:42,171][1153456] Num frames 10900... +[2024-09-30 00:41:42,262][1153456] Num frames 11000... +[2024-09-30 00:41:42,353][1153456] Num frames 11100... +[2024-09-30 00:41:42,443][1153456] Num frames 11200... +[2024-09-30 00:41:42,533][1153456] Num frames 11300... +[2024-09-30 00:41:42,622][1153456] Num frames 11400... 
+[2024-09-30 00:41:42,711][1153456] Num frames 11500... +[2024-09-30 00:41:42,818][1153456] Avg episode rewards: #0: 54.261, true rewards: #0: 19.262 +[2024-09-30 00:41:42,818][1153456] Avg episode reward: 54.261, avg true_objective: 19.262 +[2024-09-30 00:41:42,881][1153456] Num frames 11600... +[2024-09-30 00:41:42,972][1153456] Num frames 11700... +[2024-09-30 00:41:43,061][1153456] Num frames 11800... +[2024-09-30 00:41:43,148][1153456] Num frames 11900... +[2024-09-30 00:41:43,238][1153456] Num frames 12000... +[2024-09-30 00:41:43,326][1153456] Num frames 12100... +[2024-09-30 00:41:43,415][1153456] Num frames 12200... +[2024-09-30 00:41:43,503][1153456] Num frames 12300... +[2024-09-30 00:41:43,592][1153456] Num frames 12400... +[2024-09-30 00:41:43,681][1153456] Num frames 12500... +[2024-09-30 00:41:43,769][1153456] Num frames 12600... +[2024-09-30 00:41:43,859][1153456] Num frames 12700... +[2024-09-30 00:41:43,949][1153456] Num frames 12800... +[2024-09-30 00:41:44,039][1153456] Num frames 12900... +[2024-09-30 00:41:44,131][1153456] Avg episode rewards: #0: 51.486, true rewards: #0: 18.487 +[2024-09-30 00:41:44,131][1153456] Avg episode reward: 51.486, avg true_objective: 18.487 +[2024-09-30 00:41:44,201][1153456] Num frames 13000... +[2024-09-30 00:41:44,290][1153456] Num frames 13100... +[2024-09-30 00:41:44,378][1153456] Num frames 13200... +[2024-09-30 00:41:44,466][1153456] Num frames 13300... +[2024-09-30 00:41:44,543][1153456] Avg episode rewards: #0: 46.031, true rewards: #0: 16.656 +[2024-09-30 00:41:44,543][1153456] Avg episode reward: 46.031, avg true_objective: 16.656 +[2024-09-30 00:41:44,626][1153456] Num frames 13400... +[2024-09-30 00:41:44,713][1153456] Num frames 13500... +[2024-09-30 00:41:44,802][1153456] Num frames 13600... +[2024-09-30 00:41:44,890][1153456] Num frames 13700... +[2024-09-30 00:41:44,978][1153456] Num frames 13800... +[2024-09-30 00:41:45,067][1153456] Num frames 13900... 
+[2024-09-30 00:41:45,156][1153456] Num frames 14000... +[2024-09-30 00:41:45,243][1153456] Num frames 14100... +[2024-09-30 00:41:45,333][1153456] Num frames 14200... +[2024-09-30 00:41:45,423][1153456] Num frames 14300... +[2024-09-30 00:41:45,511][1153456] Num frames 14400... +[2024-09-30 00:41:45,600][1153456] Num frames 14500... +[2024-09-30 00:41:45,689][1153456] Num frames 14600... +[2024-09-30 00:41:45,777][1153456] Num frames 14700... +[2024-09-30 00:41:45,869][1153456] Num frames 14800... +[2024-09-30 00:41:45,957][1153456] Num frames 14900... +[2024-09-30 00:41:46,048][1153456] Num frames 15000... +[2024-09-30 00:41:46,137][1153456] Num frames 15100... +[2024-09-30 00:41:46,226][1153456] Num frames 15200... +[2024-09-30 00:41:46,314][1153456] Num frames 15300... +[2024-09-30 00:41:46,377][1153456] Avg episode rewards: #0: 47.676, true rewards: #0: 17.010 +[2024-09-30 00:41:46,377][1153456] Avg episode reward: 47.676, avg true_objective: 17.010 +[2024-09-30 00:41:46,476][1153456] Num frames 15400... +[2024-09-30 00:41:46,563][1153456] Num frames 15500... +[2024-09-30 00:41:46,651][1153456] Num frames 15600... +[2024-09-30 00:41:46,738][1153456] Num frames 15700... +[2024-09-30 00:41:46,826][1153456] Num frames 15800... +[2024-09-30 00:41:46,915][1153456] Num frames 15900... +[2024-09-30 00:41:47,004][1153456] Num frames 16000... +[2024-09-30 00:41:47,092][1153456] Num frames 16100... +[2024-09-30 00:41:47,182][1153456] Num frames 16200... +[2024-09-30 00:41:47,272][1153456] Num frames 16300... +[2024-09-30 00:41:47,362][1153456] Num frames 16400... +[2024-09-30 00:41:47,451][1153456] Num frames 16500... +[2024-09-30 00:41:47,539][1153456] Num frames 16600... +[2024-09-30 00:41:47,629][1153456] Num frames 16700... +[2024-09-30 00:41:47,720][1153456] Num frames 16800... +[2024-09-30 00:41:47,811][1153456] Num frames 16900... +[2024-09-30 00:41:47,901][1153456] Num frames 17000... +[2024-09-30 00:41:47,989][1153456] Num frames 17100... 
+[2024-09-30 00:41:48,079][1153456] Num frames 17200... +[2024-09-30 00:41:48,169][1153456] Num frames 17300... +[2024-09-30 00:41:48,259][1153456] Num frames 17400... +[2024-09-30 00:41:48,322][1153456] Avg episode rewards: #0: 48.708, true rewards: #0: 17.409 +[2024-09-30 00:41:48,322][1153456] Avg episode reward: 48.708, avg true_objective: 17.409 +[2024-09-30 00:42:10,996][1153456] Replay video saved to /home/luyang/workspace/rl/train_dir/default_experiment/replay.mp4!