vall-e_libritts / libritts-r / log / log-train-2024-08-06-08-06-14-7
2024-08-06 08:06:14,313 INFO [trainer.py:870] (7/8) Training started
2024-08-06 08:06:14,314 INFO [trainer.py:889] (7/8) Device: cuda:7
2024-08-06 08:06:14,314 INFO [trainer.py:890] (7/8) {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 100, 'reset_interval': 200, 'valid_interval': 2000, 'env_info': {'k2-version': '1.24.3', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '279b0c87015a615b81b147251814d737a548f397', 'k2-git-date': 'Wed May 24 22:24:09 2023', 'lhotse-version': '1.26.0', 'torch-version': '2.0.1+cu118', 'torch-cuda-available': True, 'torch-cuda-version': '11.8', 'python-version': '3.10', 'icefall-git-branch': None, 'icefall-git-sha1': None, 'icefall-git-date': None, 'icefall-path': '/workspace/icefall_llm', 'k2-path': '/usr/local/lib/python3.10/dist-packages/k2/__init__.py', 'lhotse-path': '/usr/local/lib/python3.10/dist-packages/lhotse/__init__.py', 'hostname': '6867463', 'IP address': '0.104.202.7'}, 'world_size': 8, 'master_port': 12354, 'tensorboard': True, 'num_epochs': 20, 'start_epoch': 1, 'start_batch': 0, 'exp_dir': PosixPath('exp/valle'), 'optimizer_name': 'ScaledAdam', 'scheduler_name': 'Eden', 'base_lr': 0.03, 'warmup_steps': 200, 'seed': 42, 'inf_check': False, 'save_every_n': 20000, 'keep_last_k': 20, 'average_period': 0, 'accumulate_grad_steps': 1, 'dtype': 'bfloat16', 'filter_min_duration': 0.5, 'filter_max_duration': 14.0, 'train_stage': 1, 'visualize': False, 'oom_check': False, 'model_name': 'valle', 'decoder_dim': 1024, 'nhead': 16, 'num_decoder_layers': 12, 'scale_factor': 1.0, 'norm_first': True, 'add_prenet': False, 'prefix_mode': 1, 'share_embedding': True, 'prepend_bos': False, 'num_quantizers': 8, 'scaling_xformers': False, 'manifest_dir': PosixPath('data/tokenized'), 'max_duration': 320, 'bucketing_sampler': True, 'num_buckets': 6, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 0.1, 'on_the_fly_feats': False, 'shuffle': True, 'buffer_size': 40000, 'shuffle_buffer_size': 100000, 'drop_last': False, 'return_cuts': True, 'num_workers': 8, 'enable_spec_aug': False, 'spec_aug_time_warp_factor': 80, 'input_strategy': 'PrecomputedFeatures', 'dataset': 'libritts', 'text_tokens': 'data/tokenized/unique_text_tokens.k2symbols', 'sampling_rate': 24000}
2024-08-06 08:06:14,314 INFO [trainer.py:892] (7/8) About to create model
2024-08-06 08:06:15,084 INFO [trainer.py:899] (7/8) Number of model parameters: 367386628
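The parameter count above is the kind of figure usually obtained by summing tensor sizes over the model. A minimal sketch, assuming a standard PyTorch module; the repository's trainer.py may count only trainable parameters:

```python
# Illustrative sketch, not the repository's code: reproduce a count like the
# "Number of model parameters: 367386628" line for any torch.nn.Module.
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    # Sum the element count of every parameter tensor in the model.
    return sum(p.numel() for p in model.parameters())
```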
2024-08-06 08:06:16,739 INFO [trainer.py:914] (7/8) Using DDP
2024-08-06 08:06:19,151 INFO [datamodule.py:427] (7/8) About to get train cuts
2024-08-06 08:06:19,153 INFO [datamodule.py:434] (7/8) About to get dev cuts
2024-08-06 08:06:19,154 INFO [datamodule.py:292] (7/8) Disable SpecAugment
2024-08-06 08:06:19,154 INFO [datamodule.py:294] (7/8) About to create train dataset
2024-08-06 08:06:19,155 INFO [datamodule.py:323] (7/8) Using DynamicBucketingSampler
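The sampler named here follows the options logged in the config above (max_duration=320, num_buckets=6, shuffle=True, drop_last=False). A minimal sketch of a typical lhotse setup with those options, assuming a CutSet named train_cuts; the actual datamodule.py call likely passes more arguments (e.g. buffer sizes):

```python
# Illustrative sketch (assumptions noted above), not the repository's datamodule code.
from lhotse import CutSet
from lhotse.dataset import DynamicBucketingSampler

def make_train_sampler(train_cuts: CutSet) -> DynamicBucketingSampler:
    # Options mirror the logged config: 320 s of audio per batch per GPU,
    # 6 duration buckets, shuffled, last partial batch kept.
    return DynamicBucketingSampler(
        train_cuts,
        max_duration=320.0,
        num_buckets=6,
        shuffle=True,
        drop_last=False,
    )
```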
2024-08-06 08:06:19,768 INFO [datamodule.py:344] (7/8) About to create train dataloader
2024-08-06 08:06:19,768 INFO [datamodule.py:367] (7/8) About to create dev dataset
2024-08-06 08:06:20,094 INFO [datamodule.py:388] (7/8) About to create dev dataloader
2024-08-06 08:08:02,126 INFO [trainer.py:765] (7/8) Epoch 1, batch 100, train_loss[loss=4.362, ArTop10Accuracy=0.4896, over 14457.00 frames. ], tot_loss[loss=5.049, ArTop10Accuracy=0.3745, over 4762.40 frames. ], batch size: 62, lr: 2.25e-02
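ArTop10Accuracy in these lines is, by its name, a frame-level top-10 accuracy for the autoregressive (AR) stage (train_stage=1 in the config). A minimal sketch of how such a metric can be computed, assuming per-frame logits and target token ids; the model's actual implementation may mask padding or aggregate differently:

```python
# Illustrative sketch (assumption): top-10 accuracy over AR frame predictions.
import torch

def ar_top10_accuracy(logits: torch.Tensor, targets: torch.Tensor) -> float:
    """logits: (num_frames, vocab_size); targets: (num_frames,) of token ids."""
    top10 = logits.topk(k=10, dim=-1).indices            # (num_frames, 10)
    hits = (top10 == targets.unsqueeze(-1)).any(dim=-1)  # (num_frames,)
    return hits.float().mean().item()
```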
2024-08-06 08:09:28,833 INFO [trainer.py:765] (7/8) Epoch 1, batch 200, train_loss[loss=4.118, ArTop10Accuracy=0.5253, over 13509.00 frames. ], tot_loss[loss=4.493, ArTop10Accuracy=0.467, over 7761.95 frames. ], batch size: 34, lr: 3.00e-02
2024-08-06 08:10:52,434 INFO [trainer.py:765] (7/8) Epoch 1, batch 300, train_loss[loss=3.871, ArTop10Accuracy=0.5676, over 14031.00 frames. ], tot_loss[loss=4.216, ArTop10Accuracy=0.5135, over 9370.01 frames. ], batch size: 44, lr: 3.00e-02
2024-08-06 08:12:12,702 INFO [trainer.py:765] (7/8) Epoch 1, batch 400, train_loss[loss=3.659, ArTop10Accuracy=0.6127, over 10974.00 frames. ], tot_loss[loss=4.026, ArTop10Accuracy=0.5457, over 10298.14 frames. ], batch size: 15, lr: 3.00e-02
2024-08-06 08:13:40,053 INFO [trainer.py:765] (7/8) Epoch 1, batch 500, train_loss[loss=3.59, ArTop10Accuracy=0.627, over 12051.00 frames. ], tot_loss[loss=3.88, ArTop10Accuracy=0.5713, over 10855.87 frames. ], batch size: 22, lr: 2.99e-02
2024-08-06 08:15:00,246 INFO [trainer.py:765] (7/8) Epoch 1, batch 600, train_loss[loss=3.581, ArTop10Accuracy=0.6283, over 11385.00 frames. ], tot_loss[loss=3.765, ArTop10Accuracy=0.5915, over 11370.53 frames. ], batch size: 18, lr: 2.99e-02
2024-08-06 08:16:26,428 INFO [trainer.py:765] (7/8) Epoch 1, batch 700, train_loss[loss=3.46, ArTop10Accuracy=0.6474, over 9840.00 frames. ], tot_loss[loss=3.686, ArTop10Accuracy=0.6055, over 11504.33 frames. ], batch size: 12, lr: 2.99e-02
2024-08-06 08:17:43,021 INFO [trainer.py:765] (7/8) Epoch 1, batch 800, train_loss[loss=3.483, ArTop10Accuracy=0.6427, over 9441.00 frames. ], tot_loss[loss=3.625, ArTop10Accuracy=0.6167, over 11619.59 frames. ], batch size: 11, lr: 2.98e-02
2024-08-06 08:18:56,154 INFO [trainer.py:765] (7/8) Epoch 1, batch 900, train_loss[loss=3.48, ArTop10Accuracy=0.6414, over 13068.00 frames. ], tot_loss[loss=3.565, ArTop10Accuracy=0.6279, over 11679.43 frames. ], batch size: 27, lr: 2.98e-02
2024-08-06 08:20:12,866 INFO [trainer.py:765] (7/8) Epoch 1, batch 1000, train_loss[loss=3.465, ArTop10Accuracy=0.6448, over 13329.00 frames. ], tot_loss[loss=3.524, ArTop10Accuracy=0.6351, over 11878.27 frames. ], batch size: 28, lr: 2.97e-02
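The lr column is consistent with base_lr=0.03 and warmup_steps=200 from the config: it ramps from 2.25e-02 at batch 100 to 3.00e-02 at batch 200, then decays slowly (2.97e-02 by batch 1000). A minimal sketch of an Eden-style schedule that approximately reproduces these values; the exact formula and the lr_batches / lr_epochs constants are assumptions, and the repository's optim.py may differ:

```python
# Illustrative sketch (assumption): Eden-style learning-rate schedule.
# Only base_lr and warmup_steps come from the logged config; lr_batches and
# lr_epochs are guessed defaults. "epoch" means epochs completed so far.
def eden_lr(base_lr: float, batch: int, epoch: float,
            warmup_steps: int = 200, lr_batches: float = 5000.0,
            lr_epochs: float = 6.0) -> float:
    warmup = 0.5 + 0.5 * min(batch / warmup_steps, 1.0)   # linear ramp 0.5 -> 1.0
    batch_decay = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_decay = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * warmup * batch_decay * epoch_decay

# eden_lr(0.03, batch=100, epoch=0) ~= 2.25e-02, matching the log above.
```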
2024-08-06 08:20:13,547 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 9.300e+01 1.871e+02 2.675e+02 4.030e+02 9.119e+03, threshold=5.351e+02, percent-clipped=0.0
2024-08-06 08:21:29,160 INFO [trainer.py:765] (7/8) Epoch 1, batch 1100, train_loss[loss=3.453, ArTop10Accuracy=0.6489, over 14163.00 frames. ], tot_loss[loss=3.487, ArTop10Accuracy=0.6419, over 11947.56 frames. ], batch size: 35, lr: 2.96e-02
2024-08-06 08:22:45,419 INFO [trainer.py:765] (7/8) Epoch 1, batch 1200, train_loss[loss=3.473, ArTop10Accuracy=0.6507, over 12900.00 frames. ], tot_loss[loss=3.462, ArTop10Accuracy=0.6467, over 11842.32 frames. ], batch size: 103, lr: 2.96e-02
2024-08-06 08:23:45,173 INFO [trainer.py:650] (7/8) Reaches end of dataloader.
2024-08-06 08:25:36,245 INFO [trainer.py:765] (7/8) Epoch 2, batch 100, train_loss[loss=3.444, ArTop10Accuracy=0.6432, over 14592.00 frames. ], tot_loss[loss=3.421, ArTop10Accuracy=0.6525, over 4756.08 frames. ], batch size: 62, lr: 2.90e-02
2024-08-06 08:26:58,964 INFO [trainer.py:765] (7/8) Epoch 2, batch 200, train_loss[loss=3.272, ArTop10Accuracy=0.6805, over 13635.00 frames. ], tot_loss[loss=3.39, ArTop10Accuracy=0.6589, over 7735.02 frames. ], batch size: 34, lr: 2.89e-02
2024-08-06 08:28:25,540 INFO [trainer.py:765] (7/8) Epoch 2, batch 300, train_loss[loss=3.393, ArTop10Accuracy=0.6549, over 14247.00 frames. ], tot_loss[loss=3.372, ArTop10Accuracy=0.6625, over 9363.55 frames. ], batch size: 44, lr: 2.89e-02
2024-08-06 08:29:48,645 INFO [trainer.py:765] (7/8) Epoch 2, batch 400, train_loss[loss=3.26, ArTop10Accuracy=0.6889, over 10353.00 frames. ], tot_loss[loss=3.355, ArTop10Accuracy=0.6658, over 10272.53 frames. ], batch size: 14, lr: 2.88e-02
2024-08-06 08:31:22,910 INFO [trainer.py:765] (7/8) Epoch 2, batch 500, train_loss[loss=3.372, ArTop10Accuracy=0.6614, over 12180.00 frames. ], tot_loss[loss=3.339, ArTop10Accuracy=0.6693, over 10849.42 frames. ], batch size: 22, lr: 2.87e-02
2024-08-06 08:32:45,693 INFO [trainer.py:765] (7/8) Epoch 2, batch 600, train_loss[loss=3.357, ArTop10Accuracy=0.6623, over 11406.00 frames. ], tot_loss[loss=3.329, ArTop10Accuracy=0.6711, over 11357.08 frames. ], batch size: 18, lr: 2.86e-02
2024-08-06 08:34:13,589 INFO [trainer.py:765] (7/8) Epoch 2, batch 700, train_loss[loss=3.281, ArTop10Accuracy=0.6847, over 9357.00 frames. ], tot_loss[loss=3.323, ArTop10Accuracy=0.6721, over 11496.48 frames. ], batch size: 11, lr: 2.85e-02
2024-08-06 08:34:31,180 INFO [trainer.py:803] (7/8) Computing validation loss
2024-08-06 08:34:40,888 INFO [trainer.py:811] (7/8) Epoch 2, validation: loss=3.277, ArTop10Accuracy=0.6803, over 1827537.00 frames.
2024-08-06 08:34:40,889 INFO [trainer.py:814] (7/8) Maximum memory allocated so far is 28320MB
2024-08-06 08:34:41,706 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 7.953e+01 1.592e+02 2.200e+02 3.344e+02 2.949e+03, threshold=4.400e+02, percent-clipped=8.6
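Each optim.py line reports five grad-norm quantiles (min, 25%, median, 75%, max), a clipping threshold, and the percentage of updates clipped. In this log the threshold is consistently 2.0 times the median (e.g. 2.200e+02 x 2 = 4.400e+02 here), i.e. Clipping_scale times the median. A minimal sketch of how such statistics could be derived from a window of recent gradient norms; the actual ScaledAdam implementation may differ in detail:

```python
# Illustrative sketch (assumption): grad-norm statistics like those logged by optim.py.
import torch

def grad_norm_stats(recent_norms: torch.Tensor, clipping_scale: float = 2.0):
    """recent_norms: 1-D tensor of per-step gradient norms from a recent window."""
    probs = torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0], dtype=recent_norms.dtype)
    q = torch.quantile(recent_norms, probs)                 # min, q1, median, q3, max
    threshold = clipping_scale * q[2]                       # 2.0 x median, as in this log
    percent_clipped = 100.0 * (recent_norms > threshold).float().mean()
    return q.tolist(), threshold.item(), percent_clipped.item()
```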
2024-08-06 08:35:39,884 INFO [trainer.py:765] (7/8) Epoch 2, batch 800, train_loss[loss=3.251, ArTop10Accuracy=0.6922, over 10062.00 frames. ], tot_loss[loss=3.323, ArTop10Accuracy=0.6723, over 11621.00 frames. ], batch size: 12, lr: 2.84e-02
2024-08-06 08:36:56,378 INFO [trainer.py:765] (7/8) Epoch 2, batch 900, train_loss[loss=3.305, ArTop10Accuracy=0.6733, over 12777.00 frames. ], tot_loss[loss=3.309, ArTop10Accuracy=0.6752, over 11675.12 frames. ], batch size: 27, lr: 2.83e-02
2024-08-06 08:38:10,518 INFO [trainer.py:765] (7/8) Epoch 2, batch 1000, train_loss[loss=3.269, ArTop10Accuracy=0.6866, over 12738.00 frames. ], tot_loss[loss=3.299, ArTop10Accuracy=0.677, over 11877.04 frames. ], batch size: 27, lr: 2.82e-02
2024-08-06 08:39:25,065 INFO [trainer.py:765] (7/8) Epoch 2, batch 1100, train_loss[loss=3.277, ArTop10Accuracy=0.6814, over 13695.00 frames. ], tot_loss[loss=3.289, ArTop10Accuracy=0.679, over 11951.32 frames. ], batch size: 34, lr: 2.81e-02
2024-08-06 08:40:38,225 INFO [trainer.py:765] (7/8) Epoch 2, batch 1200, train_loss[loss=3.377, ArTop10Accuracy=0.6608, over 12480.00 frames. ], tot_loss[loss=3.278, ArTop10Accuracy=0.681, over 11872.33 frames. ], batch size: 103, lr: 2.80e-02
2024-08-06 08:41:38,406 INFO [trainer.py:650] (7/8) Reaches end of dataloader.
2024-08-06 08:43:36,655 INFO [trainer.py:765] (7/8) Epoch 3, batch 100, train_loss[loss=3.251, ArTop10Accuracy=0.6868, over 14670.00 frames. ], tot_loss[loss=3.247, ArTop10Accuracy=0.6854, over 4767.29 frames. ], batch size: 62, lr: 2.67e-02
2024-08-06 08:45:10,507 INFO [trainer.py:765] (7/8) Epoch 3, batch 200, train_loss[loss=3.166, ArTop10Accuracy=0.7034, over 13593.00 frames. ], tot_loss[loss=3.22, ArTop10Accuracy=0.6913, over 7727.64 frames. ], batch size: 34, lr: 2.66e-02
2024-08-06 08:46:29,264 INFO [trainer.py:765] (7/8) Epoch 3, batch 300, train_loss[loss=3.151, ArTop10Accuracy=0.7055, over 14394.00 frames. ], tot_loss[loss=3.198, ArTop10Accuracy=0.6953, over 9351.33 frames. ], batch size: 45, lr: 2.64e-02
2024-08-06 08:48:04,224 INFO [trainer.py:765] (7/8) Epoch 3, batch 400, train_loss[loss=3.084, ArTop10Accuracy=0.7187, over 10278.00 frames. ], tot_loss[loss=3.18, ArTop10Accuracy=0.6989, over 10262.26 frames. ], batch size: 14, lr: 2.63e-02
2024-08-06 08:48:40,887 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 9.282e+01 1.561e+02 1.981e+02 2.686e+02 1.768e+03, threshold=3.962e+02, percent-clipped=7.6
2024-08-06 08:49:25,547 INFO [trainer.py:765] (7/8) Epoch 3, batch 500, train_loss[loss=3.081, ArTop10Accuracy=0.7195, over 12162.00 frames. ], tot_loss[loss=3.166, ArTop10Accuracy=0.7019, over 10825.79 frames. ], batch size: 22, lr: 2.62e-02
2024-08-06 08:51:00,482 INFO [trainer.py:765] (7/8) Epoch 3, batch 600, train_loss[loss=3.024, ArTop10Accuracy=0.7292, over 11460.00 frames. ], tot_loss[loss=3.154, ArTop10Accuracy=0.7042, over 11342.13 frames. ], batch size: 18, lr: 2.61e-02
2024-08-06 08:52:31,623 INFO [trainer.py:765] (7/8) Epoch 3, batch 700, train_loss[loss=3.017, ArTop10Accuracy=0.7306, over 9441.00 frames. ], tot_loss[loss=3.146, ArTop10Accuracy=0.7056, over 11498.02 frames. ], batch size: 11, lr: 2.60e-02
2024-08-06 08:53:57,394 INFO [trainer.py:765] (7/8) Epoch 3, batch 800, train_loss[loss=3.106, ArTop10Accuracy=0.7194, over 9345.00 frames. ], tot_loss[loss=3.141, ArTop10Accuracy=0.7065, over 11638.04 frames. ], batch size: 11, lr: 2.59e-02
2024-08-06 08:55:15,124 INFO [trainer.py:765] (7/8) Epoch 3, batch 900, train_loss[loss=3.124, ArTop10Accuracy=0.7148, over 12777.00 frames. ], tot_loss[loss=3.12, ArTop10Accuracy=0.7106, over 11686.70 frames. ], batch size: 27, lr: 2.57e-02
2024-08-06 08:56:31,563 INFO [trainer.py:765] (7/8) Epoch 3, batch 1000, train_loss[loss=3.048, ArTop10Accuracy=0.7245, over 13221.00 frames. ], tot_loss[loss=3.112, ArTop10Accuracy=0.7118, over 11891.42 frames. ], batch size: 28, lr: 2.56e-02
2024-08-06 08:57:46,512 INFO [trainer.py:765] (7/8) Epoch 3, batch 1100, train_loss[loss=3.064, ArTop10Accuracy=0.7176, over 13782.00 frames. ], tot_loss[loss=3.105, ArTop10Accuracy=0.713, over 11944.77 frames. ], batch size: 34, lr: 2.55e-02
2024-08-06 08:59:01,406 INFO [trainer.py:765] (7/8) Epoch 3, batch 1200, train_loss[loss=3.197, ArTop10Accuracy=0.6902, over 11523.00 frames. ], tot_loss[loss=3.094, ArTop10Accuracy=0.7152, over 11857.67 frames. ], batch size: 101, lr: 2.54e-02
2024-08-06 09:00:01,956 INFO [trainer.py:650] (7/8) Reaches end of dataloader.
2024-08-06 09:01:50,747 INFO [trainer.py:765] (7/8) Epoch 4, batch 100, train_loss[loss=3.081, ArTop10Accuracy=0.7135, over 14556.00 frames. ], tot_loss[loss=3.068, ArTop10Accuracy=0.7198, over 4752.46 frames. ], batch size: 62, lr: 2.38e-02
2024-08-06 09:02:52,864 INFO [trainer.py:803] (7/8) Computing validation loss
2024-08-06 09:03:02,384 INFO [trainer.py:811] (7/8) Epoch 4, validation: loss=2.997, ArTop10Accuracy=0.7338, over 1827537.00 frames.
2024-08-06 09:03:02,385 INFO [trainer.py:814] (7/8) Maximum memory allocated so far is 32854MB
2024-08-06 09:03:03,370 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 1.072e+02 1.499e+02 1.782e+02 2.273e+02 1.100e+03, threshold=3.565e+02, percent-clipped=4.7
2024-08-06 09:03:29,279 INFO [trainer.py:765] (7/8) Epoch 4, batch 200, train_loss[loss=3.04, ArTop10Accuracy=0.7267, over 13431.00 frames. ], tot_loss[loss=3.05, ArTop10Accuracy=0.7235, over 7758.26 frames. ], batch size: 34, lr: 2.37e-02
2024-08-06 09:05:01,738 INFO [trainer.py:765] (7/8) Epoch 4, batch 300, train_loss[loss=3.08, ArTop10Accuracy=0.7196, over 14442.00 frames. ], tot_loss[loss=3.04, ArTop10Accuracy=0.7252, over 9376.27 frames. ], batch size: 45, lr: 2.36e-02
2024-08-06 09:06:28,158 INFO [trainer.py:765] (7/8) Epoch 4, batch 400, train_loss[loss=2.945, ArTop10Accuracy=0.7439, over 11004.00 frames. ], tot_loss[loss=3.035, ArTop10Accuracy=0.7263, over 10292.54 frames. ], batch size: 15, lr: 2.34e-02
2024-08-06 09:08:01,931 INFO [trainer.py:765] (7/8) Epoch 4, batch 500, train_loss[loss=3.035, ArTop10Accuracy=0.7318, over 12075.00 frames. ], tot_loss[loss=3.026, ArTop10Accuracy=0.7281, over 10859.34 frames. ], batch size: 22, lr: 2.33e-02
2024-08-06 09:09:28,546 INFO [trainer.py:765] (7/8) Epoch 4, batch 600, train_loss[loss=3.041, ArTop10Accuracy=0.7257, over 11595.00 frames. ], tot_loss[loss=3.022, ArTop10Accuracy=0.7289, over 11371.96 frames. ], batch size: 18, lr: 2.32e-02
2024-08-06 09:10:59,871 INFO [trainer.py:765] (7/8) Epoch 4, batch 700, train_loss[loss=2.907, ArTop10Accuracy=0.7516, over 9414.00 frames. ], tot_loss[loss=3.02, ArTop10Accuracy=0.7293, over 11502.51 frames. ], batch size: 11, lr: 2.31e-02
2024-08-06 09:12:17,518 INFO [trainer.py:765] (7/8) Epoch 4, batch 800, train_loss[loss=3.083, ArTop10Accuracy=0.7137, over 10341.00 frames. ], tot_loss[loss=3.021, ArTop10Accuracy=0.7287, over 11643.99 frames. ], batch size: 12, lr: 2.30e-02
2024-08-06 09:13:33,218 INFO [trainer.py:765] (7/8) Epoch 4, batch 900, train_loss[loss=2.993, ArTop10Accuracy=0.7329, over 13167.00 frames. ], tot_loss[loss=3.013, ArTop10Accuracy=0.7305, over 11694.20 frames. ], batch size: 28, lr: 2.29e-02
2024-08-06 09:14:47,526 INFO [trainer.py:765] (7/8) Epoch 4, batch 1000, train_loss[loss=2.966, ArTop10Accuracy=0.7427, over 13272.00 frames. ], tot_loss[loss=3.011, ArTop10Accuracy=0.7308, over 11890.45 frames. ], batch size: 28, lr: 2.28e-02
2024-08-06 09:16:02,988 INFO [trainer.py:765] (7/8) Epoch 4, batch 1100, train_loss[loss=3.074, ArTop10Accuracy=0.7127, over 14052.00 frames. ], tot_loss[loss=3.012, ArTop10Accuracy=0.7307, over 11951.97 frames. ], batch size: 35, lr: 2.26e-02
2024-08-06 09:16:53,297 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 1.100e+02 1.440e+02 1.636e+02 1.968e+02 7.702e+02, threshold=3.273e+02, percent-clipped=1.3
2024-08-06 09:17:18,350 INFO [trainer.py:765] (7/8) Epoch 4, batch 1200, train_loss[loss=3.072, ArTop10Accuracy=0.7225, over 12207.00 frames. ], tot_loss[loss=3.011, ArTop10Accuracy=0.7309, over 11891.15 frames. ], batch size: 101, lr: 2.25e-02
2024-08-06 09:18:17,193 INFO [trainer.py:650] (7/8) Reaches end of dataloader.
2024-08-06 09:20:17,179 INFO [trainer.py:765] (7/8) Epoch 5, batch 100, train_loss[loss=2.969, ArTop10Accuracy=0.738, over 14637.00 frames. ], tot_loss[loss=2.991, ArTop10Accuracy=0.7338, over 4753.25 frames. ], batch size: 63, lr: 2.10e-02
2024-08-06 09:21:52,302 INFO [trainer.py:765] (7/8) Epoch 5, batch 200, train_loss[loss=2.949, ArTop10Accuracy=0.742, over 13872.00 frames. ], tot_loss[loss=2.98, ArTop10Accuracy=0.736, over 7749.37 frames. ], batch size: 34, lr: 2.09e-02
2024-08-06 09:23:19,247 INFO [trainer.py:765] (7/8) Epoch 5, batch 300, train_loss[loss=2.975, ArTop10Accuracy=0.737, over 14184.00 frames. ], tot_loss[loss=2.972, ArTop10Accuracy=0.7381, over 9371.28 frames. ], batch size: 44, lr: 2.08e-02
2024-08-06 09:24:53,543 INFO [trainer.py:765] (7/8) Epoch 5, batch 400, train_loss[loss=2.977, ArTop10Accuracy=0.7381, over 10341.00 frames. ], tot_loss[loss=2.966, ArTop10Accuracy=0.7394, over 10277.25 frames. ], batch size: 14, lr: 2.07e-02
2024-08-06 09:26:19,423 INFO [trainer.py:765] (7/8) Epoch 5, batch 500, train_loss[loss=2.844, ArTop10Accuracy=0.7612, over 12285.00 frames. ], tot_loss[loss=2.96, ArTop10Accuracy=0.7406, over 10834.88 frames. ], batch size: 22, lr: 2.06e-02
2024-08-06 09:27:49,543 INFO [trainer.py:765] (7/8) Epoch 5, batch 600, train_loss[loss=2.869, ArTop10Accuracy=0.7593, over 11508.00 frames. ], tot_loss[loss=2.962, ArTop10Accuracy=0.7401, over 11349.06 frames. ], batch size: 18, lr: 2.05e-02
2024-08-06 09:29:21,675 INFO [trainer.py:765] (7/8) Epoch 5, batch 700, train_loss[loss=2.854, ArTop10Accuracy=0.7634, over 10257.00 frames. ], tot_loss[loss=2.967, ArTop10Accuracy=0.7394, over 11495.18 frames. ], batch size: 12, lr: 2.04e-02
2024-08-06 09:30:44,698 INFO [trainer.py:765] (7/8) Epoch 5, batch 800, train_loss[loss=2.925, ArTop10Accuracy=0.7477, over 10026.00 frames. ], tot_loss[loss=2.969, ArTop10Accuracy=0.7388, over 11633.89 frames. ], batch size: 12, lr: 2.03e-02
2024-08-06 09:31:51,245 INFO [trainer.py:803] (7/8) Computing validation loss
2024-08-06 09:32:00,762 INFO [trainer.py:811] (7/8) Epoch 5, validation: loss=2.926, ArTop10Accuracy=0.7466, over 1827537.00 frames.
2024-08-06 09:32:00,763 INFO [trainer.py:814] (7/8) Maximum memory allocated so far is 33001MB
2024-08-06 09:32:01,716 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 1.060e+02 1.349e+02 1.525e+02 1.806e+02 1.007e+03, threshold=3.049e+02, percent-clipped=2.3
2024-08-06 09:32:10,561 INFO [trainer.py:765] (7/8) Epoch 5, batch 900, train_loss[loss=2.964, ArTop10Accuracy=0.7428, over 12957.00 frames. ], tot_loss[loss=2.962, ArTop10Accuracy=0.7401, over 11679.29 frames. ], batch size: 27, lr: 2.02e-02
2024-08-06 09:33:27,329 INFO [trainer.py:765] (7/8) Epoch 5, batch 1000, train_loss[loss=2.99, ArTop10Accuracy=0.7287, over 13071.00 frames. ], tot_loss[loss=2.962, ArTop10Accuracy=0.7401, over 11873.13 frames. ], batch size: 27, lr: 2.01e-02
2024-08-06 09:34:42,306 INFO [trainer.py:765] (7/8) Epoch 5, batch 1100, train_loss[loss=2.925, ArTop10Accuracy=0.744, over 13614.00 frames. ], tot_loss[loss=2.962, ArTop10Accuracy=0.7402, over 11964.34 frames. ], batch size: 34, lr: 2.00e-02
2024-08-06 09:35:56,337 INFO [trainer.py:765] (7/8) Epoch 5, batch 1200, train_loss[loss=3.056, ArTop10Accuracy=0.7254, over 12483.00 frames. ], tot_loss[loss=2.96, ArTop10Accuracy=0.7404, over 11842.33 frames. ], batch size: 101, lr: 1.99e-02
2024-08-06 09:36:55,307 INFO [trainer.py:650] (7/8) Reaches end of dataloader.
2024-08-06 09:38:52,672 INFO [trainer.py:765] (7/8) Epoch 6, batch 100, train_loss[loss=2.971, ArTop10Accuracy=0.7363, over 14304.00 frames. ], tot_loss[loss=2.949, ArTop10Accuracy=0.7418, over 4755.22 frames. ], batch size: 62, lr: 1.85e-02
2024-08-06 09:40:19,840 INFO [trainer.py:765] (7/8) Epoch 6, batch 200, train_loss[loss=2.939, ArTop10Accuracy=0.7429, over 13713.00 frames. ], tot_loss[loss=2.942, ArTop10Accuracy=0.7432, over 7740.92 frames. ], batch size: 34, lr: 1.84e-02
2024-08-06 09:41:52,973 INFO [trainer.py:765] (7/8) Epoch 6, batch 300, train_loss[loss=2.971, ArTop10Accuracy=0.7397, over 14376.00 frames. ], tot_loss[loss=2.934, ArTop10Accuracy=0.7451, over 9374.66 frames. ], batch size: 45, lr: 1.83e-02
2024-08-06 09:43:17,836 INFO [trainer.py:765] (7/8) Epoch 6, batch 400, train_loss[loss=2.886, ArTop10Accuracy=0.7542, over 10356.00 frames. ], tot_loss[loss=2.926, ArTop10Accuracy=0.7469, over 10317.75 frames. ], batch size: 14, lr: 1.83e-02
2024-08-06 09:44:54,136 INFO [trainer.py:765] (7/8) Epoch 6, batch 500, train_loss[loss=2.996, ArTop10Accuracy=0.7306, over 12210.00 frames. ], tot_loss[loss=2.917, ArTop10Accuracy=0.7488, over 10868.63 frames. ], batch size: 22, lr: 1.82e-02
2024-08-06 09:46:22,879 INFO [trainer.py:765] (7/8) Epoch 6, batch 600, train_loss[loss=2.871, ArTop10Accuracy=0.7539, over 11418.00 frames. ], tot_loss[loss=2.922, ArTop10Accuracy=0.7478, over 11372.97 frames. ], batch size: 18, lr: 1.81e-02
2024-08-06 09:46:37,226 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 1.012e+02 1.339e+02 1.480e+02 1.701e+02 7.506e+02, threshold=2.959e+02, percent-clipped=1.1
2024-08-06 09:47:57,878 INFO [trainer.py:765] (7/8) Epoch 6, batch 700, train_loss[loss=2.864, ArTop10Accuracy=0.7643, over 9345.00 frames. ], tot_loss[loss=2.927, ArTop10Accuracy=0.7467, over 11521.82 frames. ], batch size: 11, lr: 1.80e-02
2024-08-06 09:49:15,961 INFO [trainer.py:765] (7/8) Epoch 6, batch 800, train_loss[loss=2.941, ArTop10Accuracy=0.7444, over 10086.00 frames. ], tot_loss[loss=2.925, ArTop10Accuracy=0.7472, over 11648.74 frames. ], batch size: 12, lr: 1.79e-02
2024-08-06 09:50:32,141 INFO [trainer.py:765] (7/8) Epoch 6, batch 900, train_loss[loss=2.924, ArTop10Accuracy=0.7467, over 12903.00 frames. ], tot_loss[loss=2.921, ArTop10Accuracy=0.7481, over 11680.34 frames. ], batch size: 27, lr: 1.78e-02
2024-08-06 09:51:47,308 INFO [trainer.py:765] (7/8) Epoch 6, batch 1000, train_loss[loss=2.873, ArTop10Accuracy=0.7545, over 12843.00 frames. ], tot_loss[loss=2.923, ArTop10Accuracy=0.7476, over 11881.10 frames. ], batch size: 27, lr: 1.77e-02
2024-08-06 09:53:00,927 INFO [trainer.py:765] (7/8) Epoch 6, batch 1100, train_loss[loss=2.894, ArTop10Accuracy=0.7513, over 14028.00 frames. ], tot_loss[loss=2.927, ArTop10Accuracy=0.7467, over 11949.15 frames. ], batch size: 35, lr: 1.77e-02
2024-08-06 09:54:14,343 INFO [trainer.py:765] (7/8) Epoch 6, batch 1200, train_loss[loss=3.005, ArTop10Accuracy=0.7324, over 12450.00 frames. ], tot_loss[loss=2.925, ArTop10Accuracy=0.7471, over 11859.26 frames. ], batch size: 101, lr: 1.76e-02
2024-08-06 09:55:13,167 INFO [trainer.py:650] (7/8) Reaches end of dataloader.
2024-08-06 09:57:06,705 INFO [trainer.py:765] (7/8) Epoch 7, batch 100, train_loss[loss=2.988, ArTop10Accuracy=0.7343, over 14619.00 frames. ], tot_loss[loss=2.914, ArTop10Accuracy=0.7489, over 4754.07 frames. ], batch size: 62, lr: 1.64e-02
2024-08-06 09:58:39,433 INFO [trainer.py:765] (7/8) Epoch 7, batch 200, train_loss[loss=2.872, ArTop10Accuracy=0.7554, over 13872.00 frames. ], tot_loss[loss=2.906, ArTop10Accuracy=0.7506, over 7746.71 frames. ], batch size: 35, lr: 1.64e-02
2024-08-06 10:00:06,089 INFO [trainer.py:765] (7/8) Epoch 7, batch 300, train_loss[loss=2.922, ArTop10Accuracy=0.7488, over 14031.00 frames. ], tot_loss[loss=2.899, ArTop10Accuracy=0.7517, over 9369.72 frames. ], batch size: 44, lr: 1.63e-02
2024-08-06 10:00:40,515 INFO [trainer.py:803] (7/8) Computing validation loss
2024-08-06 10:00:50,245 INFO [trainer.py:811] (7/8) Epoch 7, validation: loss=2.88, ArTop10Accuracy=0.7554, over 1827537.00 frames.
2024-08-06 10:00:50,246 INFO [trainer.py:814] (7/8) Maximum memory allocated so far is 33001MB
2024-08-06 10:00:50,983 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 1.002e+02 1.286e+02 1.429e+02 1.605e+02 1.020e+03, threshold=2.857e+02, percent-clipped=1.5
2024-08-06 10:01:49,123 INFO [trainer.py:765] (7/8) Epoch 7, batch 400, train_loss[loss=2.751, ArTop10Accuracy=0.7792, over 10182.00 frames. ], tot_loss[loss=2.894, ArTop10Accuracy=0.7523, over 10267.82 frames. ], batch size: 14, lr: 1.62e-02
2024-08-06 10:03:21,465 INFO [trainer.py:765] (7/8) Epoch 7, batch 500, train_loss[loss=2.903, ArTop10Accuracy=0.7533, over 12147.00 frames. ], tot_loss[loss=2.891, ArTop10Accuracy=0.7532, over 10812.25 frames. ], batch size: 22, lr: 1.61e-02
2024-08-06 10:04:51,890 INFO [trainer.py:765] (7/8) Epoch 7, batch 600, train_loss[loss=2.828, ArTop10Accuracy=0.7683, over 11442.00 frames. ], tot_loss[loss=2.893, ArTop10Accuracy=0.753, over 11357.88 frames. ], batch size: 18, lr: 1.61e-02
2024-08-06 10:06:25,118 INFO [trainer.py:765] (7/8) Epoch 7, batch 700, train_loss[loss=2.884, ArTop10Accuracy=0.7584, over 10107.00 frames. ], tot_loss[loss=2.899, ArTop10Accuracy=0.7519, over 11531.01 frames. ], batch size: 12, lr: 1.60e-02
2024-08-06 10:07:46,957 INFO [trainer.py:765] (7/8) Epoch 7, batch 800, train_loss[loss=2.865, ArTop10Accuracy=0.7595, over 10149.00 frames. ], tot_loss[loss=2.898, ArTop10Accuracy=0.752, over 11660.33 frames. ], batch size: 12, lr: 1.59e-02
2024-08-06 10:09:02,830 INFO [trainer.py:765] (7/8) Epoch 7, batch 900, train_loss[loss=2.844, ArTop10Accuracy=0.7642, over 13023.00 frames. ], tot_loss[loss=2.892, ArTop10Accuracy=0.7533, over 11695.51 frames. ], batch size: 27, lr: 1.59e-02
2024-08-06 10:10:19,642 INFO [trainer.py:765] (7/8) Epoch 7, batch 1000, train_loss[loss=2.875, ArTop10Accuracy=0.7589, over 13020.00 frames. ], tot_loss[loss=2.893, ArTop10Accuracy=0.753, over 11897.82 frames. ], batch size: 27, lr: 1.58e-02
2024-08-06 10:11:35,214 INFO [trainer.py:765] (7/8) Epoch 7, batch 1100, train_loss[loss=2.877, ArTop10Accuracy=0.76, over 13656.00 frames. ], tot_loss[loss=2.903, ArTop10Accuracy=0.751, over 11971.94 frames. ], batch size: 34, lr: 1.57e-02
2024-08-06 10:12:48,210 INFO [trainer.py:765] (7/8) Epoch 7, batch 1200, train_loss[loss=3.014, ArTop10Accuracy=0.7296, over 11841.00 frames. ], tot_loss[loss=2.901, ArTop10Accuracy=0.7513, over 11848.41 frames. ], batch size: 101, lr: 1.57e-02
2024-08-06 10:13:46,715 INFO [trainer.py:650] (7/8) Reaches end of dataloader.
2024-08-06 10:15:03,607 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 1.017e+02 1.283e+02 1.410e+02 1.601e+02 1.017e+03, threshold=2.820e+02, percent-clipped=0.9
2024-08-06 10:15:40,827 INFO [trainer.py:765] (7/8) Epoch 8, batch 100, train_loss[loss=2.945, ArTop10Accuracy=0.7462, over 14433.00 frames. ], tot_loss[loss=2.896, ArTop10Accuracy=0.7518, over 4743.12 frames. ], batch size: 62, lr: 1.47e-02
2024-08-06 10:17:12,869 INFO [trainer.py:765] (7/8) Epoch 8, batch 200, train_loss[loss=2.804, ArTop10Accuracy=0.773, over 13626.00 frames. ], tot_loss[loss=2.878, ArTop10Accuracy=0.7555, over 7739.33 frames. ], batch size: 34, lr: 1.46e-02
2024-08-06 10:18:37,904 INFO [trainer.py:765] (7/8) Epoch 8, batch 300, train_loss[loss=2.89, ArTop10Accuracy=0.7555, over 14010.00 frames. ], tot_loss[loss=2.87, ArTop10Accuracy=0.7572, over 9379.75 frames. ], batch size: 44, lr: 1.46e-02
2024-08-06 10:20:06,347 INFO [trainer.py:765] (7/8) Epoch 8, batch 400, train_loss[loss=2.805, ArTop10Accuracy=0.7662, over 10131.00 frames. ], tot_loss[loss=2.867, ArTop10Accuracy=0.7579, over 10274.71 frames. ], batch size: 14, lr: 1.45e-02
2024-08-06 10:21:32,417 INFO [trainer.py:765] (7/8) Epoch 8, batch 500, train_loss[loss=2.796, ArTop10Accuracy=0.7714, over 12276.00 frames. ], tot_loss[loss=2.862, ArTop10Accuracy=0.7587, over 10839.39 frames. ], batch size: 22, lr: 1.45e-02
2024-08-06 10:23:00,980 INFO [trainer.py:765] (7/8) Epoch 8, batch 600, train_loss[loss=2.787, ArTop10Accuracy=0.7728, over 11571.00 frames. ], tot_loss[loss=2.86, ArTop10Accuracy=0.7592, over 11359.98 frames. ], batch size: 18, lr: 1.44e-02
2024-08-06 10:24:37,793 INFO [trainer.py:765] (7/8) Epoch 8, batch 700, train_loss[loss=2.754, ArTop10Accuracy=0.7844, over 10128.00 frames. ], tot_loss[loss=2.866, ArTop10Accuracy=0.7583, over 11502.93 frames. ], batch size: 12, lr: 1.43e-02
2024-08-06 10:25:56,094 INFO [trainer.py:765] (7/8) Epoch 8, batch 800, train_loss[loss=2.865, ArTop10Accuracy=0.7526, over 9339.00 frames. ], tot_loss[loss=2.87, ArTop10Accuracy=0.7573, over 11619.76 frames. ], batch size: 11, lr: 1.43e-02
2024-08-06 10:27:12,252 INFO [trainer.py:765] (7/8) Epoch 8, batch 900, train_loss[loss=2.84, ArTop10Accuracy=0.7642, over 13110.00 frames. ], tot_loss[loss=2.866, ArTop10Accuracy=0.758, over 11686.94 frames. ], batch size: 27, lr: 1.42e-02
2024-08-06 10:28:25,269 INFO [trainer.py:765] (7/8) Epoch 8, batch 1000, train_loss[loss=2.851, ArTop10Accuracy=0.7588, over 12954.00 frames. ], tot_loss[loss=2.869, ArTop10Accuracy=0.7574, over 11893.33 frames. ], batch size: 27, lr: 1.42e-02
2024-08-06 10:29:07,161 INFO [trainer.py:803] (7/8) Computing validation loss
2024-08-06 10:29:16,830 INFO [trainer.py:811] (7/8) Epoch 8, validation: loss=2.858, ArTop10Accuracy=0.7594, over 1827537.00 frames.
2024-08-06 10:29:16,831 INFO [trainer.py:814] (7/8) Maximum memory allocated so far is 33001MB
2024-08-06 10:29:17,497 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 1.032e+02 1.275e+02 1.390e+02 1.547e+02 3.717e+02, threshold=2.781e+02, percent-clipped=0.7
2024-08-06 10:29:51,738 INFO [trainer.py:765] (7/8) Epoch 8, batch 1100, train_loss[loss=2.876, ArTop10Accuracy=0.753, over 13728.00 frames. ], tot_loss[loss=2.877, ArTop10Accuracy=0.756, over 11966.78 frames. ], batch size: 34, lr: 1.41e-02
2024-08-06 10:31:05,955 INFO [trainer.py:765] (7/8) Epoch 8, batch 1200, train_loss[loss=2.958, ArTop10Accuracy=0.7403, over 12834.00 frames. ], tot_loss[loss=2.877, ArTop10Accuracy=0.756, over 11882.51 frames. ], batch size: 103, lr: 1.40e-02
2024-08-06 10:32:05,689 INFO [trainer.py:650] (7/8) Reaches end of dataloader.
2024-08-06 10:34:01,263 INFO [trainer.py:765] (7/8) Epoch 9, batch 100, train_loss[loss=2.89, ArTop10Accuracy=0.7558, over 14211.00 frames. ], tot_loss[loss=2.861, ArTop10Accuracy=0.7584, over 4769.26 frames. ], batch size: 62, lr: 1.32e-02
2024-08-06 10:35:31,779 INFO [trainer.py:765] (7/8) Epoch 9, batch 200, train_loss[loss=2.84, ArTop10Accuracy=0.7597, over 13761.00 frames. ], tot_loss[loss=2.853, ArTop10Accuracy=0.7601, over 7753.37 frames. ], batch size: 34, lr: 1.32e-02
2024-08-06 10:36:57,933 INFO [trainer.py:765] (7/8) Epoch 9, batch 300, train_loss[loss=2.919, ArTop10Accuracy=0.7475, over 14613.00 frames. ], tot_loss[loss=2.846, ArTop10Accuracy=0.7615, over 9383.26 frames. ], batch size: 44, lr: 1.31e-02
2024-08-06 10:38:32,706 INFO [trainer.py:765] (7/8) Epoch 9, batch 400, train_loss[loss=2.867, ArTop10Accuracy=0.7587, over 10458.00 frames. ], tot_loss[loss=2.843, ArTop10Accuracy=0.7625, over 10292.17 frames. ], batch size: 14, lr: 1.31e-02
2024-08-06 10:39:59,263 INFO [trainer.py:765] (7/8) Epoch 9, batch 500, train_loss[loss=2.778, ArTop10Accuracy=0.7717, over 12252.00 frames. ], tot_loss[loss=2.839, ArTop10Accuracy=0.7633, over 10865.89 frames. ], batch size: 22, lr: 1.30e-02
2024-08-06 10:41:29,697 INFO [trainer.py:765] (7/8) Epoch 9, batch 600, train_loss[loss=2.849, ArTop10Accuracy=0.7639, over 11511.00 frames. ], tot_loss[loss=2.843, ArTop10Accuracy=0.7624, over 11375.79 frames. ], batch size: 18, lr: 1.30e-02
2024-08-06 10:42:58,448 INFO [trainer.py:765] (7/8) Epoch 9, batch 700, train_loss[loss=2.758, ArTop10Accuracy=0.7781, over 10119.00 frames. ], tot_loss[loss=2.846, ArTop10Accuracy=0.7617, over 11528.05 frames. ], batch size: 12, lr: 1.29e-02
2024-08-06 10:44:02,958 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 1.039e+02 1.253e+02 1.352e+02 1.493e+02 7.010e+02, threshold=2.704e+02, percent-clipped=0.6
2024-08-06 10:44:19,676 INFO [trainer.py:765] (7/8) Epoch 9, batch 800, train_loss[loss=2.705, ArTop10Accuracy=0.7909, over 10119.00 frames. ], tot_loss[loss=2.849, ArTop10Accuracy=0.7611, over 11632.30 frames. ], batch size: 12, lr: 1.29e-02
2024-08-06 10:45:35,729 INFO [trainer.py:765] (7/8) Epoch 9, batch 900, train_loss[loss=2.831, ArTop10Accuracy=0.7652, over 12933.00 frames. ], tot_loss[loss=2.843, ArTop10Accuracy=0.7623, over 11672.27 frames. ], batch size: 27, lr: 1.28e-02
2024-08-06 10:46:51,278 INFO [trainer.py:765] (7/8) Epoch 9, batch 1000, train_loss[loss=2.873, ArTop10Accuracy=0.753, over 13110.00 frames. ], tot_loss[loss=2.85, ArTop10Accuracy=0.7612, over 11899.80 frames. ], batch size: 27, lr: 1.28e-02
2024-08-06 10:48:06,254 INFO [trainer.py:765] (7/8) Epoch 9, batch 1100, train_loss[loss=2.87, ArTop10Accuracy=0.7525, over 13704.00 frames. ], tot_loss[loss=2.854, ArTop10Accuracy=0.7602, over 11962.63 frames. ], batch size: 34, lr: 1.28e-02
2024-08-06 10:49:21,061 INFO [trainer.py:765] (7/8) Epoch 9, batch 1200, train_loss[loss=2.96, ArTop10Accuracy=0.738, over 12012.00 frames. ], tot_loss[loss=2.853, ArTop10Accuracy=0.7602, over 11866.13 frames. ], batch size: 101, lr: 1.27e-02
2024-08-06 10:50:21,919 INFO [trainer.py:650] (7/8) Reaches end of dataloader.
2024-08-06 10:52:12,333 INFO [trainer.py:765] (7/8) Epoch 10, batch 100, train_loss[loss=2.963, ArTop10Accuracy=0.7393, over 14571.00 frames. ], tot_loss[loss=2.845, ArTop10Accuracy=0.7611, over 4761.85 frames. ], batch size: 63, lr: 1.20e-02
2024-08-06 10:53:44,592 INFO [trainer.py:765] (7/8) Epoch 10, batch 200, train_loss[loss=2.862, ArTop10Accuracy=0.7616, over 13656.00 frames. ], tot_loss[loss=2.838, ArTop10Accuracy=0.7631, over 7751.64 frames. ], batch size: 34, lr: 1.20e-02
2024-08-06 10:55:08,097 INFO [trainer.py:765] (7/8) Epoch 10, batch 300, train_loss[loss=2.885, ArTop10Accuracy=0.7585, over 14229.00 frames. ], tot_loss[loss=2.831, ArTop10Accuracy=0.7644, over 9362.96 frames. ], batch size: 44, lr: 1.19e-02
2024-08-06 10:56:41,184 INFO [trainer.py:765] (7/8) Epoch 10, batch 400, train_loss[loss=2.817, ArTop10Accuracy=0.7678, over 10332.00 frames. ], tot_loss[loss=2.829, ArTop10Accuracy=0.7651, over 10282.26 frames. ], batch size: 14, lr: 1.19e-02
2024-08-06 10:58:04,944 INFO [trainer.py:803] (7/8) Computing validation loss
2024-08-06 10:58:14,560 INFO [trainer.py:811] (7/8) Epoch 10, validation: loss=2.842, ArTop10Accuracy=0.7624, over 1827537.00 frames.
2024-08-06 10:58:14,560 INFO [trainer.py:814] (7/8) Maximum memory allocated so far is 33001MB
2024-08-06 10:58:15,580 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.228e+02 1.320e+02 1.458e+02 6.096e+02, threshold=2.641e+02, percent-clipped=0.6
2024-08-06 10:58:15,587 INFO [trainer.py:765] (7/8) Epoch 10, batch 500, train_loss[loss=2.76, ArTop10Accuracy=0.7781, over 12621.00 frames. ], tot_loss[loss=2.824, ArTop10Accuracy=0.7657, over 10826.79 frames. ], batch size: 23, lr: 1.19e-02
2024-08-06 10:59:42,823 INFO [trainer.py:765] (7/8) Epoch 10, batch 600, train_loss[loss=2.799, ArTop10Accuracy=0.7657, over 11370.00 frames. ], tot_loss[loss=2.827, ArTop10Accuracy=0.7653, over 11339.91 frames. ], batch size: 18, lr: 1.18e-02
2024-08-06 11:01:18,113 INFO [trainer.py:765] (7/8) Epoch 10, batch 700, train_loss[loss=2.74, ArTop10Accuracy=0.7784, over 10212.00 frames. ], tot_loss[loss=2.832, ArTop10Accuracy=0.7643, over 11501.92 frames. ], batch size: 12, lr: 1.18e-02
2024-08-06 11:02:36,923 INFO [trainer.py:765] (7/8) Epoch 10, batch 800, train_loss[loss=2.903, ArTop10Accuracy=0.7523, over 10266.00 frames. ], tot_loss[loss=2.837, ArTop10Accuracy=0.7636, over 11651.31 frames. ], batch size: 12, lr: 1.17e-02
2024-08-06 11:03:51,218 INFO [trainer.py:765] (7/8) Epoch 10, batch 900, train_loss[loss=2.735, ArTop10Accuracy=0.7816, over 13029.00 frames. ], tot_loss[loss=2.828, ArTop10Accuracy=0.7653, over 11691.04 frames. ], batch size: 27, lr: 1.17e-02
2024-08-06 11:05:06,357 INFO [trainer.py:765] (7/8) Epoch 10, batch 1000, train_loss[loss=2.796, ArTop10Accuracy=0.767, over 12981.00 frames. ], tot_loss[loss=2.832, ArTop10Accuracy=0.7644, over 11907.49 frames. ], batch size: 27, lr: 1.17e-02
2024-08-06 11:06:21,730 INFO [trainer.py:765] (7/8) Epoch 10, batch 1100, train_loss[loss=2.942, ArTop10Accuracy=0.7401, over 13641.00 frames. ], tot_loss[loss=2.836, ArTop10Accuracy=0.7638, over 11975.93 frames. ], batch size: 34, lr: 1.16e-02
2024-08-06 11:07:34,778 INFO [trainer.py:765] (7/8) Epoch 10, batch 1200, train_loss[loss=2.959, ArTop10Accuracy=0.7386, over 12051.00 frames. ], tot_loss[loss=2.837, ArTop10Accuracy=0.7635, over 11872.57 frames. ], batch size: 103, lr: 1.16e-02
2024-08-06 11:08:34,013 INFO [trainer.py:650] (7/8) Reaches end of dataloader.
2024-08-06 11:10:29,961 INFO [trainer.py:765] (7/8) Epoch 11, batch 100, train_loss[loss=2.832, ArTop10Accuracy=0.7628, over 14784.00 frames. ], tot_loss[loss=2.819, ArTop10Accuracy=0.766, over 4757.71 frames. ], batch size: 63, lr: 1.10e-02
2024-08-06 11:12:04,680 INFO [trainer.py:765] (7/8) Epoch 11, batch 200, train_loss[loss=2.848, ArTop10Accuracy=0.7591, over 13668.00 frames. ], tot_loss[loss=2.812, ArTop10Accuracy=0.7679, over 7745.12 frames. ], batch size: 34, lr: 1.10e-02
2024-08-06 11:12:22,833 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 9.884e+01 1.240e+02 1.333e+02 1.457e+02 6.939e+02, threshold=2.667e+02, percent-clipped=0.1
2024-08-06 11:13:31,557 INFO [trainer.py:765] (7/8) Epoch 11, batch 300, train_loss[loss=2.783, ArTop10Accuracy=0.7766, over 14295.00 frames. ], tot_loss[loss=2.805, ArTop10Accuracy=0.7693, over 9379.76 frames. ], batch size: 44, lr: 1.09e-02
2024-08-06 11:15:03,276 INFO [trainer.py:765] (7/8) Epoch 11, batch 400, train_loss[loss=2.744, ArTop10Accuracy=0.7853, over 10455.00 frames. ], tot_loss[loss=2.804, ArTop10Accuracy=0.7697, over 10293.17 frames. ], batch size: 14, lr: 1.09e-02
2024-08-06 11:16:29,644 INFO [trainer.py:765] (7/8) Epoch 11, batch 500, train_loss[loss=2.775, ArTop10Accuracy=0.7748, over 12375.00 frames. ], tot_loss[loss=2.801, ArTop10Accuracy=0.7703, over 10869.57 frames. ], batch size: 22, lr: 1.09e-02
2024-08-06 11:18:00,524 INFO [trainer.py:765] (7/8) Epoch 11, batch 600, train_loss[loss=2.686, ArTop10Accuracy=0.789, over 11559.00 frames. ], tot_loss[loss=2.803, ArTop10Accuracy=0.7701, over 11358.60 frames. ], batch size: 18, lr: 1.08e-02
2024-08-06 11:19:34,519 INFO [trainer.py:765] (7/8) Epoch 11, batch 700, train_loss[loss=2.631, ArTop10Accuracy=0.8006, over 9285.00 frames. ], tot_loss[loss=2.809, ArTop10Accuracy=0.7687, over 11492.14 frames. ], batch size: 11, lr: 1.08e-02
2024-08-06 11:20:55,489 INFO [trainer.py:765] (7/8) Epoch 11, batch 800, train_loss[loss=2.737, ArTop10Accuracy=0.7849, over 10227.00 frames. ], tot_loss[loss=2.816, ArTop10Accuracy=0.7675, over 11631.71 frames. ], batch size: 12, lr: 1.07e-02
2024-08-06 11:22:13,711 INFO [trainer.py:765] (7/8) Epoch 11, batch 900, train_loss[loss=2.9, ArTop10Accuracy=0.7485, over 13065.00 frames. ], tot_loss[loss=2.812, ArTop10Accuracy=0.7683, over 11678.33 frames. ], batch size: 27, lr: 1.07e-02
2024-08-06 11:23:31,805 INFO [trainer.py:765] (7/8) Epoch 11, batch 1000, train_loss[loss=2.812, ArTop10Accuracy=0.767, over 13032.00 frames. ], tot_loss[loss=2.816, ArTop10Accuracy=0.7674, over 11884.80 frames. ], batch size: 27, lr: 1.07e-02
2024-08-06 11:24:46,908 INFO [trainer.py:765] (7/8) Epoch 11, batch 1100, train_loss[loss=2.843, ArTop10Accuracy=0.7617, over 13743.00 frames. ], tot_loss[loss=2.824, ArTop10Accuracy=0.7661, over 11952.21 frames. ], batch size: 34, lr: 1.06e-02
2024-08-06 11:26:00,740 INFO [trainer.py:765] (7/8) Epoch 11, batch 1200, train_loss[loss=2.971, ArTop10Accuracy=0.7319, over 11898.00 frames. ], tot_loss[loss=2.827, ArTop10Accuracy=0.7654, over 11844.39 frames. ], batch size: 101, lr: 1.06e-02
2024-08-06 11:26:15,853 INFO [trainer.py:803] (7/8) Computing validation loss
2024-08-06 11:26:25,556 INFO [trainer.py:811] (7/8) Epoch 11, validation: loss=2.831, ArTop10Accuracy=0.7643, over 1827537.00 frames.
2024-08-06 11:26:25,556 INFO [trainer.py:814] (7/8) Maximum memory allocated so far is 33001MB
2024-08-06 11:26:26,191 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 1.029e+02 1.251e+02 1.335e+02 1.441e+02 2.942e+02, threshold=2.669e+02, percent-clipped=0.1
2024-08-06 11:27:09,788 INFO [trainer.py:650] (7/8) Reaches end of dataloader.
2024-08-06 11:29:03,457 INFO [trainer.py:765] (7/8) Epoch 12, batch 100, train_loss[loss=2.906, ArTop10Accuracy=0.7525, over 14562.00 frames. ], tot_loss[loss=2.804, ArTop10Accuracy=0.7693, over 4755.02 frames. ], batch size: 62, lr: 1.01e-02
2024-08-06 11:30:30,680 INFO [trainer.py:765] (7/8) Epoch 12, batch 200, train_loss[loss=2.829, ArTop10Accuracy=0.7614, over 13722.00 frames. ], tot_loss[loss=2.796, ArTop10Accuracy=0.771, over 7750.22 frames. ], batch size: 34, lr: 1.01e-02
2024-08-06 11:31:57,661 INFO [trainer.py:765] (7/8) Epoch 12, batch 300, train_loss[loss=2.844, ArTop10Accuracy=0.7598, over 14127.00 frames. ], tot_loss[loss=2.796, ArTop10Accuracy=0.7709, over 9370.85 frames. ], batch size: 45, lr: 1.01e-02
2024-08-06 11:33:30,744 INFO [trainer.py:765] (7/8) Epoch 12, batch 400, train_loss[loss=2.777, ArTop10Accuracy=0.7773, over 10335.00 frames. ], tot_loss[loss=2.794, ArTop10Accuracy=0.7714, over 10290.92 frames. ], batch size: 14, lr: 1.00e-02
2024-08-06 11:34:55,741 INFO [trainer.py:765] (7/8) Epoch 12, batch 500, train_loss[loss=2.726, ArTop10Accuracy=0.7936, over 12180.00 frames. ], tot_loss[loss=2.787, ArTop10Accuracy=0.7728, over 10849.09 frames. ], batch size: 22, lr: 1.00e-02
2024-08-06 11:36:29,367 INFO [trainer.py:765] (7/8) Epoch 12, batch 600, train_loss[loss=2.77, ArTop10Accuracy=0.7734, over 11346.00 frames. ], tot_loss[loss=2.795, ArTop10Accuracy=0.7714, over 11371.49 frames. ], batch size: 18, lr: 9.97e-03
2024-08-06 11:38:00,350 INFO [trainer.py:765] (7/8) Epoch 12, batch 700, train_loss[loss=2.692, ArTop10Accuracy=0.7974, over 9297.00 frames. ], tot_loss[loss=2.805, ArTop10Accuracy=0.7696, over 11504.84 frames. ], batch size: 11, lr: 9.93e-03
2024-08-06 11:39:23,617 INFO [trainer.py:765] (7/8) Epoch 12, batch 800, train_loss[loss=2.756, ArTop10Accuracy=0.7772, over 10002.00 frames. ], tot_loss[loss=2.807, ArTop10Accuracy=0.7691, over 11641.67 frames. ], batch size: 12, lr: 9.90e-03
2024-08-06 11:40:39,895 INFO [trainer.py:765] (7/8) Epoch 12, batch 900, train_loss[loss=2.87, ArTop10Accuracy=0.7533, over 13086.00 frames. ], tot_loss[loss=2.801, ArTop10Accuracy=0.7701, over 11681.88 frames. ], batch size: 27, lr: 9.87e-03
2024-08-06 11:41:14,001 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 1.041e+02 1.248e+02 1.348e+02 1.459e+02 5.540e+02, threshold=2.695e+02, percent-clipped=0.3
2024-08-06 11:41:56,195 INFO [trainer.py:765] (7/8) Epoch 12, batch 1000, train_loss[loss=2.782, ArTop10Accuracy=0.7761, over 12906.00 frames. ], tot_loss[loss=2.804, ArTop10Accuracy=0.7696, over 11878.21 frames. ], batch size: 27, lr: 9.85e-03
2024-08-06 11:43:14,326 INFO [trainer.py:765] (7/8) Epoch 12, batch 1100, train_loss[loss=2.795, ArTop10Accuracy=0.7758, over 13584.00 frames. ], tot_loss[loss=2.808, ArTop10Accuracy=0.7688, over 11961.32 frames. ], batch size: 34, lr: 9.82e-03
2024-08-06 11:44:26,162 INFO [trainer.py:765] (7/8) Epoch 12, batch 1200, train_loss[loss=2.877, ArTop10Accuracy=0.751, over 12174.00 frames. ], tot_loss[loss=2.805, ArTop10Accuracy=0.7697, over 11871.83 frames. ], batch size: 101, lr: 9.79e-03
2024-08-06 11:45:26,840 INFO [trainer.py:650] (7/8) Reaches end of dataloader.
2024-08-06 11:47:26,608 INFO [trainer.py:765] (7/8) Epoch 13, batch 100, train_loss[loss=2.856, ArTop10Accuracy=0.7638, over 14538.00 frames. ], tot_loss[loss=2.788, ArTop10Accuracy=0.7722, over 4752.84 frames. ], batch size: 63, lr: 9.37e-03
2024-08-06 11:48:54,784 INFO [trainer.py:765] (7/8) Epoch 13, batch 200, train_loss[loss=2.783, ArTop10Accuracy=0.7744, over 13587.00 frames. ], tot_loss[loss=2.785, ArTop10Accuracy=0.7726, over 7712.35 frames. ], batch size: 34, lr: 9.34e-03
2024-08-06 11:50:20,521 INFO [trainer.py:765] (7/8) Epoch 13, batch 300, train_loss[loss=2.848, ArTop10Accuracy=0.7612, over 14127.00 frames. ], tot_loss[loss=2.78, ArTop10Accuracy=0.7738, over 9346.13 frames. ], batch size: 44, lr: 9.31e-03
2024-08-06 11:51:48,770 INFO [trainer.py:765] (7/8) Epoch 13, batch 400, train_loss[loss=2.655, ArTop10Accuracy=0.7974, over 10497.00 frames. ], tot_loss[loss=2.778, ArTop10Accuracy=0.7744, over 10273.88 frames. ], batch size: 14, lr: 9.28e-03
2024-08-06 11:53:13,413 INFO [trainer.py:765] (7/8) Epoch 13, batch 500, train_loss[loss=2.696, ArTop10Accuracy=0.7907, over 12237.00 frames. ], tot_loss[loss=2.774, ArTop10Accuracy=0.7752, over 10830.80 frames. ], batch size: 22, lr: 9.26e-03
2024-08-06 11:54:52,228 INFO [trainer.py:765] (7/8) Epoch 13, batch 600, train_loss[loss=2.751, ArTop10Accuracy=0.7839, over 11358.00 frames. ], tot_loss[loss=2.776, ArTop10Accuracy=0.7748, over 11350.36 frames. ], batch size: 18, lr: 9.23e-03
2024-08-06 11:55:47,087 INFO [trainer.py:803] (7/8) Computing validation loss
2024-08-06 11:55:56,834 INFO [trainer.py:811] (7/8) Epoch 13, validation: loss=2.824, ArTop10Accuracy=0.7662, over 1827537.00 frames.
2024-08-06 11:55:56,835 INFO [trainer.py:814] (7/8) Maximum memory allocated so far is 33001MB
2024-08-06 11:55:57,718 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 1.064e+02 1.255e+02 1.343e+02 1.452e+02 4.888e+02, threshold=2.687e+02, percent-clipped=0.1
2024-08-06 11:56:28,470 INFO [trainer.py:765] (7/8) Epoch 13, batch 700, train_loss[loss=2.716, ArTop10Accuracy=0.7877, over 9372.00 frames. ], tot_loss[loss=2.778, ArTop10Accuracy=0.7744, over 11502.85 frames. ], batch size: 11, lr: 9.20e-03
2024-08-06 11:57:46,688 INFO [trainer.py:765] (7/8) Epoch 13, batch 800, train_loss[loss=2.765, ArTop10Accuracy=0.7779, over 9270.00 frames. ], tot_loss[loss=2.781, ArTop10Accuracy=0.7737, over 11637.14 frames. ], batch size: 11, lr: 9.18e-03
2024-08-06 11:59:03,294 INFO [trainer.py:765] (7/8) Epoch 13, batch 900, train_loss[loss=2.746, ArTop10Accuracy=0.7813, over 12927.00 frames. ], tot_loss[loss=2.779, ArTop10Accuracy=0.7742, over 11686.24 frames. ], batch size: 27, lr: 9.15e-03
2024-08-06 12:00:19,179 INFO [trainer.py:765] (7/8) Epoch 13, batch 1000, train_loss[loss=2.763, ArTop10Accuracy=0.784, over 12996.00 frames. ], tot_loss[loss=2.787, ArTop10Accuracy=0.7727, over 11882.38 frames. ], batch size: 27, lr: 9.13e-03
2024-08-06 12:01:34,888 INFO [trainer.py:765] (7/8) Epoch 13, batch 1100, train_loss[loss=2.788, ArTop10Accuracy=0.7706, over 13569.00 frames. ], tot_loss[loss=2.794, ArTop10Accuracy=0.7711, over 11950.05 frames. ], batch size: 34, lr: 9.10e-03
2024-08-06 12:02:48,668 INFO [trainer.py:765] (7/8) Epoch 13, batch 1200, train_loss[loss=2.882, ArTop10Accuracy=0.7557, over 12348.00 frames. ], tot_loss[loss=2.794, ArTop10Accuracy=0.7714, over 11841.55 frames. ], batch size: 101, lr: 9.08e-03
2024-08-06 12:03:48,262 INFO [trainer.py:650] (7/8) Reaches end of dataloader.
2024-08-06 12:05:45,342 INFO [trainer.py:765] (7/8) Epoch 14, batch 100, train_loss[loss=2.801, ArTop10Accuracy=0.7691, over 14286.00 frames. ], tot_loss[loss=2.774, ArTop10Accuracy=0.7744, over 4789.12 frames. ], batch size: 62, lr: 8.71e-03
2024-08-06 12:07:16,612 INFO [trainer.py:765] (7/8) Epoch 14, batch 200, train_loss[loss=2.759, ArTop10Accuracy=0.7766, over 13548.00 frames. ], tot_loss[loss=2.77, ArTop10Accuracy=0.7754, over 7763.87 frames. ], batch size: 34, lr: 8.69e-03
2024-08-06 12:08:44,319 INFO [trainer.py:765] (7/8) Epoch 14, batch 300, train_loss[loss=2.798, ArTop10Accuracy=0.7739, over 14325.00 frames. ], tot_loss[loss=2.763, ArTop10Accuracy=0.7772, over 9404.43 frames. ], batch size: 45, lr: 8.66e-03
2024-08-06 12:10:01,138 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 1.072e+02 1.266e+02 1.374e+02 1.483e+02 6.480e+02, threshold=2.748e+02, percent-clipped=0.2
2024-08-06 12:10:10,233 INFO [trainer.py:765] (7/8) Epoch 14, batch 400, train_loss[loss=2.795, ArTop10Accuracy=0.7679, over 10785.00 frames. ], tot_loss[loss=2.764, ArTop10Accuracy=0.777, over 10299.80 frames. ], batch size: 15, lr: 8.64e-03
2024-08-06 12:11:36,157 INFO [trainer.py:765] (7/8) Epoch 14, batch 500, train_loss[loss=2.688, ArTop10Accuracy=0.7904, over 12291.00 frames. ], tot_loss[loss=2.757, ArTop10Accuracy=0.7783, over 10862.13 frames. ], batch size: 22, lr: 8.62e-03
2024-08-06 12:13:05,999 INFO [trainer.py:765] (7/8) Epoch 14, batch 600, train_loss[loss=2.724, ArTop10Accuracy=0.7831, over 11988.00 frames. ], tot_loss[loss=2.762, ArTop10Accuracy=0.7775, over 11389.76 frames. ], batch size: 19, lr: 8.59e-03
2024-08-06 12:14:38,559 INFO [trainer.py:765] (7/8) Epoch 14, batch 700, train_loss[loss=2.728, ArTop10Accuracy=0.7858, over 10293.00 frames. ], tot_loss[loss=2.765, ArTop10Accuracy=0.7771, over 11532.92 frames. ], batch size: 12, lr: 8.57e-03
2024-08-06 12:15:58,076 INFO [trainer.py:765] (7/8) Epoch 14, batch 800, train_loss[loss=2.684, ArTop10Accuracy=0.7962, over 9432.00 frames. ], tot_loss[loss=2.771, ArTop10Accuracy=0.7758, over 11619.66 frames. ], batch size: 11, lr: 8.55e-03
2024-08-06 12:17:12,872 INFO [trainer.py:765] (7/8) Epoch 14, batch 900, train_loss[loss=2.82, ArTop10Accuracy=0.7688, over 12945.00 frames. ], tot_loss[loss=2.767, ArTop10Accuracy=0.7764, over 11694.97 frames. ], batch size: 27, lr: 8.52e-03
2024-08-06 12:18:29,621 INFO [trainer.py:765] (7/8) Epoch 14, batch 1000, train_loss[loss=2.708, ArTop10Accuracy=0.791, over 13026.00 frames. ], tot_loss[loss=2.771, ArTop10Accuracy=0.7758, over 11882.66 frames. ], batch size: 27, lr: 8.50e-03
2024-08-06 12:19:45,383 INFO [trainer.py:765] (7/8) Epoch 14, batch 1100, train_loss[loss=2.799, ArTop10Accuracy=0.7704, over 13656.00 frames. ], tot_loss[loss=2.778, ArTop10Accuracy=0.7744, over 11972.38 frames. ], batch size: 35, lr: 8.48e-03
2024-08-06 12:20:59,284 INFO [trainer.py:765] (7/8) Epoch 14, batch 1200, train_loss[loss=2.889, ArTop10Accuracy=0.7537, over 11946.00 frames. ], tot_loss[loss=2.777, ArTop10Accuracy=0.7747, over 11883.44 frames. ], batch size: 101, lr: 8.46e-03
2024-08-06 12:21:58,392 INFO [trainer.py:650] (7/8) Reaches end of dataloader.
2024-08-06 12:23:51,969 INFO [trainer.py:765] (7/8) Epoch 15, batch 100, train_loss[loss=2.843, ArTop10Accuracy=0.7642, over 14451.00 frames. ], tot_loss[loss=2.762, ArTop10Accuracy=0.7777, over 4748.98 frames. ], batch size: 62, lr: 8.14e-03
2024-08-06 12:24:00,606 INFO [trainer.py:803] (7/8) Computing validation loss
2024-08-06 12:24:10,290 INFO [trainer.py:811] (7/8) Epoch 15, validation: loss=2.819, ArTop10Accuracy=0.7675, over 1827537.00 frames.
2024-08-06 12:24:10,291 INFO [trainer.py:814] (7/8) Maximum memory allocated so far is 33001MB
2024-08-06 12:24:11,100 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 1.080e+02 1.284e+02 1.371e+02 1.488e+02 4.667e+02, threshold=2.743e+02, percent-clipped=0.2
2024-08-06 12:25:29,995 INFO [trainer.py:765] (7/8) Epoch 15, batch 200, train_loss[loss=2.79, ArTop10Accuracy=0.7691, over 13515.00 frames. ], tot_loss[loss=2.756, ArTop10Accuracy=0.7787, over 7748.40 frames. ], batch size: 34, lr: 8.12e-03
2024-08-06 12:26:58,700 INFO [trainer.py:765] (7/8) Epoch 15, batch 300, train_loss[loss=2.801, ArTop10Accuracy=0.7692, over 13812.00 frames. ], tot_loss[loss=2.749, ArTop10Accuracy=0.78, over 9360.32 frames. ], batch size: 44, lr: 8.09e-03
2024-08-06 12:28:28,541 INFO [trainer.py:765] (7/8) Epoch 15, batch 400, train_loss[loss=2.615, ArTop10Accuracy=0.8103, over 11025.00 frames. ], tot_loss[loss=2.751, ArTop10Accuracy=0.7795, over 10292.66 frames. ], batch size: 15, lr: 8.07e-03
2024-08-06 12:29:54,040 INFO [trainer.py:765] (7/8) Epoch 15, batch 500, train_loss[loss=2.716, ArTop10Accuracy=0.7863, over 12057.00 frames. ], tot_loss[loss=2.747, ArTop10Accuracy=0.7803, over 10846.17 frames. ], batch size: 22, lr: 8.05e-03
2024-08-06 12:31:23,300 INFO [trainer.py:765] (7/8) Epoch 15, batch 600, train_loss[loss=2.654, ArTop10Accuracy=0.7999, over 11283.00 frames. ], tot_loss[loss=2.754, ArTop10Accuracy=0.7792, over 11370.72 frames. ], batch size: 18, lr: 8.03e-03
2024-08-06 12:32:53,182 INFO [trainer.py:765] (7/8) Epoch 15, batch 700, train_loss[loss=2.632, ArTop10Accuracy=0.8045, over 10308.00 frames. ], tot_loss[loss=2.757, ArTop10Accuracy=0.7785, over 11518.17 frames. ], batch size: 12, lr: 8.01e-03
2024-08-06 12:34:18,261 INFO [trainer.py:765] (7/8) Epoch 15, batch 800, train_loss[loss=2.534, ArTop10Accuracy=0.8238, over 10224.00 frames. ], tot_loss[loss=2.759, ArTop10Accuracy=0.7781, over 11628.39 frames. ], batch size: 12, lr: 7.99e-03
2024-08-06 12:35:34,733 INFO [trainer.py:765] (7/8) Epoch 15, batch 900, train_loss[loss=2.727, ArTop10Accuracy=0.786, over 12687.00 frames. ], tot_loss[loss=2.754, ArTop10Accuracy=0.7792, over 11667.32 frames. ], batch size: 27, lr: 7.97e-03
2024-08-06 12:36:50,547 INFO [trainer.py:765] (7/8) Epoch 15, batch 1000, train_loss[loss=2.801, ArTop10Accuracy=0.7699, over 12762.00 frames. ], tot_loss[loss=2.759, ArTop10Accuracy=0.7781, over 11870.70 frames. ], batch size: 27, lr: 7.95e-03
2024-08-06 12:38:05,188 INFO [trainer.py:765] (7/8) Epoch 15, batch 1100, train_loss[loss=2.793, ArTop10Accuracy=0.7714, over 13743.00 frames. ], tot_loss[loss=2.767, ArTop10Accuracy=0.7765, over 11953.51 frames. ], batch size: 34, lr: 7.93e-03
2024-08-06 12:38:12,847 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 1.080e+02 1.293e+02 1.379e+02 1.467e+02 2.824e+02, threshold=2.759e+02, percent-clipped=0.1
2024-08-06 12:39:18,795 INFO [trainer.py:765] (7/8) Epoch 15, batch 1200, train_loss[loss=2.905, ArTop10Accuracy=0.7478, over 12228.00 frames. ], tot_loss[loss=2.766, ArTop10Accuracy=0.7768, over 11851.53 frames. ], batch size: 101, lr: 7.91e-03
2024-08-06 12:40:18,896 INFO [trainer.py:650] (7/8) Reaches end of dataloader.
2024-08-06 12:42:17,627 INFO [trainer.py:765] (7/8) Epoch 16, batch 100, train_loss[loss=2.832, ArTop10Accuracy=0.7641, over 14472.00 frames. ], tot_loss[loss=2.747, ArTop10Accuracy=0.7805, over 4759.95 frames. ], batch size: 62, lr: 7.63e-03
2024-08-06 12:43:49,572 INFO [trainer.py:765] (7/8) Epoch 16, batch 200, train_loss[loss=2.744, ArTop10Accuracy=0.7798, over 13692.00 frames. ], tot_loss[loss=2.742, ArTop10Accuracy=0.7813, over 7743.89 frames. ], batch size: 34, lr: 7.61e-03
2024-08-06 12:45:18,508 INFO [trainer.py:765] (7/8) Epoch 16, batch 300, train_loss[loss=2.813, ArTop10Accuracy=0.7684, over 14073.00 frames. ], tot_loss[loss=2.741, ArTop10Accuracy=0.7815, over 9383.12 frames. ], batch size: 44, lr: 7.59e-03
2024-08-06 12:46:45,215 INFO [trainer.py:765] (7/8) Epoch 16, batch 400, train_loss[loss=2.627, ArTop10Accuracy=0.8022, over 10203.00 frames. ], tot_loss[loss=2.737, ArTop10Accuracy=0.7819, over 10285.65 frames. ], batch size: 14, lr: 7.58e-03
2024-08-06 12:48:16,319 INFO [trainer.py:765] (7/8) Epoch 16, batch 500, train_loss[loss=2.705, ArTop10Accuracy=0.7881, over 12114.00 frames. ], tot_loss[loss=2.733, ArTop10Accuracy=0.7826, over 10840.99 frames. ], batch size: 22, lr: 7.56e-03
2024-08-06 12:49:46,651 INFO [trainer.py:765] (7/8) Epoch 16, batch 600, train_loss[loss=2.678, ArTop10Accuracy=0.7914, over 11430.00 frames. ], tot_loss[loss=2.738, ArTop10Accuracy=0.7818, over 11353.37 frames. ], batch size: 18, lr: 7.54e-03
2024-08-06 12:51:23,687 INFO [trainer.py:765] (7/8) Epoch 16, batch 700, train_loss[loss=2.576, ArTop10Accuracy=0.8136, over 10032.00 frames. ], tot_loss[loss=2.741, ArTop10Accuracy=0.7812, over 11511.29 frames. ], batch size: 12, lr: 7.52e-03
2024-08-06 12:52:43,507 INFO [trainer.py:765] (7/8) Epoch 16, batch 800, train_loss[loss=2.719, ArTop10Accuracy=0.7814, over 9390.00 frames. ], tot_loss[loss=2.75, ArTop10Accuracy=0.7794, over 11637.75 frames. ], batch size: 11, lr: 7.51e-03
2024-08-06 12:53:06,022 INFO [trainer.py:803] (7/8) Computing validation loss
2024-08-06 12:53:15,499 INFO [trainer.py:811] (7/8) Epoch 16, validation: loss=2.816, ArTop10Accuracy=0.7678, over 1827537.00 frames.
2024-08-06 12:53:15,499 INFO [trainer.py:814] (7/8) Maximum memory allocated so far is 33001MB
2024-08-06 12:53:16,192 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 1.112e+02 1.291e+02 1.391e+02 1.487e+02 3.459e+02, threshold=2.783e+02, percent-clipped=0.1
2024-08-06 12:54:06,487 INFO [trainer.py:765] (7/8) Epoch 16, batch 900, train_loss[loss=2.729, ArTop10Accuracy=0.7871, over 12846.00 frames. ], tot_loss[loss=2.75, ArTop10Accuracy=0.7795, over 11697.95 frames. ], batch size: 27, lr: 7.49e-03
2024-08-06 12:55:19,797 INFO [trainer.py:765] (7/8) Epoch 16, batch 1000, train_loss[loss=2.687, ArTop10Accuracy=0.7908, over 12930.00 frames. ], tot_loss[loss=2.751, ArTop10Accuracy=0.7794, over 11884.37 frames. ], batch size: 27, lr: 7.47e-03
2024-08-06 12:56:33,171 INFO [trainer.py:765] (7/8) Epoch 16, batch 1100, train_loss[loss=2.778, ArTop10Accuracy=0.7749, over 13665.00 frames. ], tot_loss[loss=2.758, ArTop10Accuracy=0.7782, over 11949.15 frames. ], batch size: 34, lr: 7.45e-03
2024-08-06 12:57:48,491 INFO [trainer.py:765] (7/8) Epoch 16, batch 1200, train_loss[loss=2.839, ArTop10Accuracy=0.7655, over 12969.00 frames. ], tot_loss[loss=2.758, ArTop10Accuracy=0.7781, over 11864.36 frames. ], batch size: 103, lr: 7.44e-03
2024-08-06 12:58:48,019 INFO [trainer.py:650] (7/8) Reaches end of dataloader.
2024-08-06 13:00:47,906 INFO [trainer.py:765] (7/8) Epoch 17, batch 100, train_loss[loss=2.792, ArTop10Accuracy=0.7732, over 14166.00 frames. ], tot_loss[loss=2.743, ArTop10Accuracy=0.7807, over 4760.01 frames. ], batch size: 62, lr: 7.18e-03
2024-08-06 13:02:19,308 INFO [trainer.py:765] (7/8) Epoch 17, batch 200, train_loss[loss=2.814, ArTop10Accuracy=0.7661, over 13584.00 frames. ], tot_loss[loss=2.734, ArTop10Accuracy=0.7825, over 7731.47 frames. ], batch size: 34, lr: 7.17e-03
2024-08-06 13:03:45,523 INFO [trainer.py:765] (7/8) Epoch 17, batch 300, train_loss[loss=2.769, ArTop10Accuracy=0.7733, over 14361.00 frames. ], tot_loss[loss=2.731, ArTop10Accuracy=0.7829, over 9385.16 frames. ], batch size: 45, lr: 7.15e-03
2024-08-06 13:05:21,768 INFO [trainer.py:765] (7/8) Epoch 17, batch 400, train_loss[loss=2.632, ArTop10Accuracy=0.8014, over 10311.00 frames. ], tot_loss[loss=2.728, ArTop10Accuracy=0.7838, over 10317.90 frames. ], batch size: 14, lr: 7.14e-03
2024-08-06 13:06:47,027 INFO [trainer.py:765] (7/8) Epoch 17, batch 500, train_loss[loss=2.792, ArTop10Accuracy=0.7735, over 12228.00 frames. ], tot_loss[loss=2.725, ArTop10Accuracy=0.7845, over 10862.57 frames. ], batch size: 22, lr: 7.12e-03
2024-08-06 13:07:39,886 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 1.140e+02 1.293e+02 1.386e+02 1.488e+02 3.253e+02, threshold=2.772e+02, percent-clipped=0.1
2024-08-06 13:08:22,694 INFO [trainer.py:765] (7/8) Epoch 17, batch 600, train_loss[loss=2.656, ArTop10Accuracy=0.8009, over 11367.00 frames. ], tot_loss[loss=2.726, ArTop10Accuracy=0.7845, over 11383.18 frames. ], batch size: 18, lr: 7.10e-03
2024-08-06 13:09:54,842 INFO [trainer.py:765] (7/8) Epoch 17, batch 700, train_loss[loss=2.583, ArTop10Accuracy=0.8144, over 9984.00 frames. ], tot_loss[loss=2.732, ArTop10Accuracy=0.7831, over 11520.02 frames. ], batch size: 12, lr: 7.09e-03
2024-08-06 13:11:19,487 INFO [trainer.py:765] (7/8) Epoch 17, batch 800, train_loss[loss=2.614, ArTop10Accuracy=0.8099, over 9345.00 frames. ], tot_loss[loss=2.736, ArTop10Accuracy=0.7823, over 11651.14 frames. ], batch size: 11, lr: 7.07e-03
2024-08-06 13:12:35,676 INFO [trainer.py:765] (7/8) Epoch 17, batch 900, train_loss[loss=2.75, ArTop10Accuracy=0.7803, over 12879.00 frames. ], tot_loss[loss=2.736, ArTop10Accuracy=0.7824, over 11692.53 frames. ], batch size: 27, lr: 7.06e-03
2024-08-06 13:13:53,068 INFO [trainer.py:765] (7/8) Epoch 17, batch 1000, train_loss[loss=2.727, ArTop10Accuracy=0.7848, over 12855.00 frames. ], tot_loss[loss=2.744, ArTop10Accuracy=0.7809, over 11882.04 frames. ], batch size: 27, lr: 7.04e-03
2024-08-06 13:15:08,492 INFO [trainer.py:765] (7/8) Epoch 17, batch 1100, train_loss[loss=2.74, ArTop10Accuracy=0.7812, over 14052.00 frames. ], tot_loss[loss=2.747, ArTop10Accuracy=0.7804, over 11950.75 frames. ], batch size: 35, lr: 7.02e-03
2024-08-06 13:16:22,394 INFO [trainer.py:765] (7/8) Epoch 17, batch 1200, train_loss[loss=2.863, ArTop10Accuracy=0.7594, over 12015.00 frames. ], tot_loss[loss=2.747, ArTop10Accuracy=0.7804, over 11855.95 frames. ], batch size: 101, lr: 7.01e-03
2024-08-06 13:17:21,213 INFO [trainer.py:650] (7/8) Reaches end of dataloader.
2024-08-06 13:19:16,001 INFO [trainer.py:765] (7/8) Epoch 18, batch 100, train_loss[loss=2.784, ArTop10Accuracy=0.7674, over 14079.00 frames. ], tot_loss[loss=2.724, ArTop10Accuracy=0.7844, over 4751.48 frames. ], batch size: 62, lr: 6.78e-03
2024-08-06 13:20:46,608 INFO [trainer.py:765] (7/8) Epoch 18, batch 200, train_loss[loss=2.649, ArTop10Accuracy=0.7999, over 13605.00 frames. ], tot_loss[loss=2.722, ArTop10Accuracy=0.7849, over 7735.17 frames. ], batch size: 34, lr: 6.77e-03
2024-08-06 13:21:55,110 INFO [trainer.py:803] (7/8) Computing validation loss
2024-08-06 13:22:04,751 INFO [trainer.py:811] (7/8) Epoch 18, validation: loss=2.817, ArTop10Accuracy=0.768, over 1827537.00 frames.
2024-08-06 13:22:04,752 INFO [trainer.py:814] (7/8) Maximum memory allocated so far is 33001MB
2024-08-06 13:22:05,480 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 1.131e+02 1.323e+02 1.409e+02 1.514e+02 3.209e+02, threshold=2.818e+02, percent-clipped=0.1
2024-08-06 13:22:26,587 INFO [trainer.py:765] (7/8) Epoch 18, batch 300, train_loss[loss=2.752, ArTop10Accuracy=0.7807, over 14289.00 frames. ], tot_loss[loss=2.716, ArTop10Accuracy=0.7863, over 9361.90 frames. ], batch size: 44, lr: 6.76e-03
2024-08-06 13:23:57,938 INFO [trainer.py:765] (7/8) Epoch 18, batch 400, train_loss[loss=2.653, ArTop10Accuracy=0.7984, over 10791.00 frames. ], tot_loss[loss=2.715, ArTop10Accuracy=0.7863, over 10296.98 frames. ], batch size: 15, lr: 6.74e-03
2024-08-06 13:25:34,019 INFO [trainer.py:765] (7/8) Epoch 18, batch 500, train_loss[loss=2.708, ArTop10Accuracy=0.7846, over 12288.00 frames. ], tot_loss[loss=2.713, ArTop10Accuracy=0.7865, over 10862.32 frames. ], batch size: 22, lr: 6.73e-03
2024-08-06 13:27:00,640 INFO [trainer.py:765] (7/8) Epoch 18, batch 600, train_loss[loss=2.674, ArTop10Accuracy=0.7969, over 11604.00 frames. ], tot_loss[loss=2.719, ArTop10Accuracy=0.7857, over 11368.16 frames. ], batch size: 18, lr: 6.71e-03
2024-08-06 13:28:33,590 INFO [trainer.py:765] (7/8) Epoch 18, batch 700, train_loss[loss=2.703, ArTop10Accuracy=0.7906, over 10236.00 frames. ], tot_loss[loss=2.724, ArTop10Accuracy=0.7847, over 11505.06 frames. ], batch size: 12, lr: 6.70e-03
2024-08-06 13:29:54,993 INFO [trainer.py:765] (7/8) Epoch 18, batch 800, train_loss[loss=2.754, ArTop10Accuracy=0.7758, over 10239.00 frames. ], tot_loss[loss=2.725, ArTop10Accuracy=0.7845, over 11629.35 frames. ], batch size: 12, lr: 6.68e-03
2024-08-06 13:31:12,525 INFO [trainer.py:765] (7/8) Epoch 18, batch 900, train_loss[loss=2.723, ArTop10Accuracy=0.7859, over 12897.00 frames. ], tot_loss[loss=2.721, ArTop10Accuracy=0.7852, over 11685.87 frames. ], batch size: 27, lr: 6.67e-03
2024-08-06 13:32:26,558 INFO [trainer.py:765] (7/8) Epoch 18, batch 1000, train_loss[loss=2.771, ArTop10Accuracy=0.7752, over 12876.00 frames. ], tot_loss[loss=2.727, ArTop10Accuracy=0.784, over 11871.23 frames. ], batch size: 27, lr: 6.66e-03
2024-08-06 13:33:41,504 INFO [trainer.py:765] (7/8) Epoch 18, batch 1100, train_loss[loss=2.757, ArTop10Accuracy=0.7748, over 13779.00 frames. ], tot_loss[loss=2.734, ArTop10Accuracy=0.7827, over 11937.42 frames. ], batch size: 34, lr: 6.64e-03
2024-08-06 13:34:54,682 INFO [trainer.py:765] (7/8) Epoch 18, batch 1200, train_loss[loss=2.863, ArTop10Accuracy=0.7601, over 12321.00 frames. ], tot_loss[loss=2.734, ArTop10Accuracy=0.7828, over 11837.69 frames. ], batch size: 101, lr: 6.63e-03
2024-08-06 13:35:51,070 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 1.124e+02 1.340e+02 1.433e+02 1.533e+02 2.444e+02, threshold=2.867e+02, percent-clipped=0.0
2024-08-06 13:35:54,176 INFO [trainer.py:650] (7/8) Reaches end of dataloader.
2024-08-06 13:37:48,630 INFO [trainer.py:765] (7/8) Epoch 19, batch 100, train_loss[loss=2.798, ArTop10Accuracy=0.7683, over 14376.00 frames. ], tot_loss[loss=2.723, ArTop10Accuracy=0.7846, over 4770.95 frames. ], batch size: 62, lr: 6.43e-03
2024-08-06 13:39:23,263 INFO [trainer.py:765] (7/8) Epoch 19, batch 200, train_loss[loss=2.726, ArTop10Accuracy=0.7834, over 13749.00 frames. ], tot_loss[loss=2.721, ArTop10Accuracy=0.7848, over 7745.36 frames. ], batch size: 34, lr: 6.41e-03
2024-08-06 13:40:48,366 INFO [trainer.py:765] (7/8) Epoch 19, batch 300, train_loss[loss=2.731, ArTop10Accuracy=0.7845, over 14517.00 frames. ], tot_loss[loss=2.713, ArTop10Accuracy=0.7862, over 9373.26 frames. ], batch size: 44, lr: 6.40e-03
2024-08-06 13:42:21,074 INFO [trainer.py:765] (7/8) Epoch 19, batch 400, train_loss[loss=2.731, ArTop10Accuracy=0.7861, over 10323.00 frames. ], tot_loss[loss=2.705, ArTop10Accuracy=0.7879, over 10293.26 frames. ], batch size: 14, lr: 6.39e-03
2024-08-06 13:43:44,961 INFO [trainer.py:765] (7/8) Epoch 19, batch 500, train_loss[loss=2.731, ArTop10Accuracy=0.7784, over 12534.00 frames. ], tot_loss[loss=2.7, ArTop10Accuracy=0.789, over 10849.95 frames. ], batch size: 23, lr: 6.37e-03
2024-08-06 13:45:16,688 INFO [trainer.py:765] (7/8) Epoch 19, batch 600, train_loss[loss=2.637, ArTop10Accuracy=0.8015, over 11577.00 frames. ], tot_loss[loss=2.703, ArTop10Accuracy=0.7885, over 11382.15 frames. ], batch size: 18, lr: 6.36e-03
2024-08-06 13:46:48,330 INFO [trainer.py:765] (7/8) Epoch 19, batch 700, train_loss[loss=2.646, ArTop10Accuracy=0.7997, over 10125.00 frames. ], tot_loss[loss=2.71, ArTop10Accuracy=0.787, over 11524.94 frames. ], batch size: 12, lr: 6.35e-03
2024-08-06 13:48:11,890 INFO [trainer.py:765] (7/8) Epoch 19, batch 800, train_loss[loss=2.537, ArTop10Accuracy=0.8126, over 10260.00 frames. ], tot_loss[loss=2.715, ArTop10Accuracy=0.786, over 11639.76 frames. ], batch size: 12, lr: 6.34e-03
2024-08-06 13:49:27,268 INFO [trainer.py:765] (7/8) Epoch 19, batch 900, train_loss[loss=2.684, ArTop10Accuracy=0.7954, over 13089.00 frames. ], tot_loss[loss=2.71, ArTop10Accuracy=0.7871, over 11707.01 frames. ], batch size: 27, lr: 6.32e-03
2024-08-06 13:50:40,660 INFO [trainer.py:803] (7/8) Computing validation loss
2024-08-06 13:50:50,536 INFO [trainer.py:811] (7/8) Epoch 19, validation: loss=2.818, ArTop10Accuracy=0.7679, over 1827537.00 frames.
2024-08-06 13:50:50,537 INFO [trainer.py:814] (7/8) Maximum memory allocated so far is 33001MB
2024-08-06 13:50:51,497 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 1.161e+02 1.371e+02 1.455e+02 1.550e+02 3.697e+02, threshold=2.909e+02, percent-clipped=0.2
2024-08-06 13:50:52,921 INFO [trainer.py:765] (7/8) Epoch 19, batch 1000, train_loss[loss=2.774, ArTop10Accuracy=0.773, over 12831.00 frames. ], tot_loss[loss=2.716, ArTop10Accuracy=0.7858, over 11889.69 frames. ], batch size: 27, lr: 6.31e-03
2024-08-06 13:52:08,274 INFO [trainer.py:765] (7/8) Epoch 19, batch 1100, train_loss[loss=2.682, ArTop10Accuracy=0.792, over 13770.00 frames. ], tot_loss[loss=2.725, ArTop10Accuracy=0.784, over 11949.85 frames. ], batch size: 34, lr: 6.30e-03
2024-08-06 13:53:22,320 INFO [trainer.py:765] (7/8) Epoch 19, batch 1200, train_loss[loss=2.872, ArTop10Accuracy=0.7564, over 12657.00 frames. ], tot_loss[loss=2.725, ArTop10Accuracy=0.7842, over 11853.51 frames. ], batch size: 103, lr: 6.28e-03
2024-08-06 13:54:21,906 INFO [trainer.py:650] (7/8) Reaches end of dataloader.
2024-08-06 13:56:12,912 INFO [trainer.py:765] (7/8) Epoch 20, batch 100, train_loss[loss=2.794, ArTop10Accuracy=0.7677, over 14673.00 frames. ], tot_loss[loss=2.706, ArTop10Accuracy=0.7875, over 4740.81 frames. ], batch size: 62, lr: 6.10e-03
2024-08-06 13:57:42,501 INFO [trainer.py:765] (7/8) Epoch 20, batch 200, train_loss[loss=2.691, ArTop10Accuracy=0.7936, over 13551.00 frames. ], tot_loss[loss=2.704, ArTop10Accuracy=0.788, over 7749.97 frames. ], batch size: 34, lr: 6.09e-03
2024-08-06 13:59:15,436 INFO [trainer.py:765] (7/8) Epoch 20, batch 300, train_loss[loss=2.764, ArTop10Accuracy=0.7777, over 13926.00 frames. ], tot_loss[loss=2.696, ArTop10Accuracy=0.7898, over 9400.91 frames. ], batch size: 44, lr: 6.08e-03
2024-08-06 14:00:44,362 INFO [trainer.py:765] (7/8) Epoch 20, batch 400, train_loss[loss=2.598, ArTop10Accuracy=0.8109, over 10725.00 frames. ], tot_loss[loss=2.696, ArTop10Accuracy=0.7898, over 10319.38 frames. ], batch size: 15, lr: 6.07e-03
2024-08-06 14:02:14,860 INFO [trainer.py:765] (7/8) Epoch 20, batch 500, train_loss[loss=2.713, ArTop10Accuracy=0.7871, over 12249.00 frames. ], tot_loss[loss=2.692, ArTop10Accuracy=0.7905, over 10864.98 frames. ], batch size: 22, lr: 6.06e-03
2024-08-06 14:03:40,862 INFO [trainer.py:765] (7/8) Epoch 20, batch 600, train_loss[loss=2.639, ArTop10Accuracy=0.8041, over 11127.00 frames. ], tot_loss[loss=2.694, ArTop10Accuracy=0.79, over 11364.86 frames. ], batch size: 18, lr: 6.04e-03
2024-08-06 14:05:13,870 INFO [trainer.py:765] (7/8) Epoch 20, batch 700, train_loss[loss=2.727, ArTop10Accuracy=0.7852, over 10017.00 frames. ], tot_loss[loss=2.701, ArTop10Accuracy=0.7889, over 11514.99 frames. ], batch size: 12, lr: 6.03e-03
2024-08-06 14:05:30,798 INFO [optim.py:386] (7/8) Clipping_scale=2.0, grad-norm quartiles 1.180e+02 1.365e+02 1.456e+02 1.550e+02 3.525e+02, threshold=2.913e+02, percent-clipped=0.1
2024-08-06 14:06:34,515 INFO [trainer.py:765] (7/8) Epoch 20, batch 800, train_loss[loss=2.786, ArTop10Accuracy=0.7703, over 10329.00 frames. ], tot_loss[loss=2.703, ArTop10Accuracy=0.7884, over 11642.78 frames. ], batch size: 12, lr: 6.02e-03
2024-08-06 14:07:50,950 INFO [trainer.py:765] (7/8) Epoch 20, batch 900, train_loss[loss=2.812, ArTop10Accuracy=0.7704, over 12780.00 frames. ], tot_loss[loss=2.703, ArTop10Accuracy=0.7884, over 11670.27 frames. ], batch size: 27, lr: 6.01e-03
2024-08-06 14:09:07,180 INFO [trainer.py:765] (7/8) Epoch 20, batch 1000, train_loss[loss=2.698, ArTop10Accuracy=0.7897, over 12972.00 frames. ], tot_loss[loss=2.71, ArTop10Accuracy=0.7871, over 11868.65 frames. ], batch size: 27, lr: 6.00e-03
2024-08-06 14:10:21,216 INFO [trainer.py:765] (7/8) Epoch 20, batch 1100, train_loss[loss=2.704, ArTop10Accuracy=0.7918, over 13494.00 frames. ], tot_loss[loss=2.717, ArTop10Accuracy=0.7857, over 11953.96 frames. ], batch size: 34, lr: 5.99e-03
2024-08-06 14:11:37,819 INFO [trainer.py:765] (7/8) Epoch 20, batch 1200, train_loss[loss=2.794, ArTop10Accuracy=0.7705, over 11277.00 frames. ], tot_loss[loss=2.72, ArTop10Accuracy=0.7853, over 11847.41 frames. ], batch size: 101, lr: 5.98e-03
2024-08-06 14:12:37,438 INFO [trainer.py:650] (7/8) Reaches end of dataloader.
2024-08-06 14:12:37,442 INFO [trainer.py:1069] (7/8) Done!
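The run ends here. Purely as an illustration (this snippet is not part of the original log), below is a minimal sketch of how the per-batch trainer.py:765 lines above could be parsed to recover loss, ArTop10Accuracy, and learning-rate curves. The file name is an assumption taken from this log's own name, and the regular expression simply mirrors the line format shown; adjust both as needed.

# Illustrative sketch only: parse "Epoch N, batch M, train_loss[...] tot_loss[...] lr: ..." lines.
import re

# Mirrors the trainer.py:765 line format seen in this log.
PATTERN = re.compile(
    r"Epoch (\d+), batch (\d+), "
    r"train_loss\[loss=([\d.]+), ArTop10Accuracy=([\d.]+).*?\], "
    r"tot_loss\[loss=([\d.]+), ArTop10Accuracy=([\d.]+).*?\].*?lr: ([\d.e+-]+)"
)

def parse_log(path):
    """Yield (epoch, batch, train_loss, train_acc, tot_loss, tot_acc, lr) per matching line."""
    with open(path) as f:
        for line in f:
            m = PATTERN.search(line)
            if m:
                yield (
                    int(m.group(1)), int(m.group(2)),
                    float(m.group(3)), float(m.group(4)),
                    float(m.group(5)), float(m.group(6)),
                    float(m.group(7)),
                )

if __name__ == "__main__":
    # Assumed path, matching this log file's name.
    for row in parse_log("log-train-2024-08-06-08-06-14-7"):
        print(row)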