2023-03-25 21:42:34,499 INFO [finetune.py:1046] (2/7) Training started
2023-03-25 21:42:34,499 INFO [finetune.py:1056] (2/7) Device: cuda:2
2023-03-25 21:42:34,502 INFO [finetune.py:1065] (2/7) {'frame_shift_ms': 10.0, 'allowed_excess_duration_ratio': 0.1, 'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.23.4', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '62e404dd3f3a811d73e424199b3408e309c06e1a', 'k2-git-date': 'Mon Jan 30 02:26:16 2023', 'lhotse-version': '1.12.0.dev+git.3ccfeb7.clean', 'torch-version': '1.13.0', 'torch-cuda-available': True, 'torch-cuda-version': '11.7', 'python-version': '3.8', 'icefall-git-branch': 'master', 'icefall-git-sha1': 'd74822d-dirty', 'icefall-git-date': 'Tue Mar 21 21:35:32 2023', 'icefall-path': '/home/lishaojie/icefall', 'k2-path': '/home/lishaojie/.conda/envs/env_lishaojie/lib/python3.8/site-packages/k2/__init__.py', 'lhotse-path': '/home/lishaojie/.conda/envs/env_lishaojie/lib/python3.8/site-packages/lhotse/__init__.py', 'hostname': 'cnc533', 'IP address': '127.0.1.1'}, 'world_size': 7, 'master_port': 18181, 'tensorboard': True, 'num_epochs': 30, 'start_epoch': 1, 'start_batch': 0, 'exp_dir': PosixPath('pruned_transducer_stateless7_streaming/exp1'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'base_lr': 0.004, 'lr_batches': 100000.0, 'lr_epochs': 100.0, 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'seed': 42, 'print_diagnostics': False, 'inf_check': False, 'save_every_n': 2000, 'keep_last_k': 30, 'average_period': 200, 'use_fp16': True, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'do_finetune': True, 'init_modules': 'encoder', 'finetune_ckpt': '/home/lishaojie/icefall/egs/commonvoice/ASR/pruned_transducer_stateless7_streaming/exp/english_pretrain/epoch-30.pt', 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 200, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'blank_id': 0, 'vocab_size': 500}
2023-03-25 21:42:34,502 INFO [finetune.py:1067] (2/7) About to create model
2023-03-25 21:42:34,855 INFO [zipformer.py:405] (2/7) At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
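The parameter dump above records the dataloader settings for this run ('bucketing_sampler': True, 'num_buckets': 30, 'max_duration': 200, 'shuffle': True, 'drop_last': True). A minimal sketch of how a sampler with these settings is typically constructed with lhotse, assuming the lhotse 1.12 API listed in env_info; the recipe's commonvoice_fr.py wraps this in a data module with more options, and the manifest filename below is hypothetical:

    from lhotse import CutSet
    from lhotse.dataset import DynamicBucketingSampler

    # Hypothetical manifest name; the run reads precomputed fbank cuts from data/fbank.
    cuts = CutSet.from_file("data/fbank/train_cuts.jsonl.gz")

    # Each batch is capped at max_duration seconds of audio; cuts of similar
    # length are drawn from num_buckets duration buckets to reduce padding.
    sampler = DynamicBucketingSampler(
        cuts,
        max_duration=200.0,
        num_buckets=30,
        shuffle=True,
        drop_last=True,
    )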
2023-03-25 21:42:34,864 INFO [finetune.py:1071] (2/7) Number of model parameters: 70369391
2023-03-25 21:42:34,864 INFO [finetune.py:626] (2/7) Loading checkpoint from /home/lishaojie/icefall/egs/commonvoice/ASR/pruned_transducer_stateless7_streaming/exp/english_pretrain/epoch-30.pt
2023-03-25 21:42:35,500 INFO [finetune.py:647] (2/7) Loading parameters starting with prefix encoder
2023-03-25 21:42:37,020 INFO [finetune.py:1093] (2/7) Using DDP
2023-03-25 21:42:37,686 INFO [commonvoice_fr.py:392] (2/7) About to get train cuts
2023-03-25 21:42:37,687 INFO [commonvoice_fr.py:218] (2/7) Enable MUSAN
2023-03-25 21:42:37,688 INFO [commonvoice_fr.py:219] (2/7) About to get Musan cuts
2023-03-25 21:42:39,658 INFO [commonvoice_fr.py:243] (2/7) Enable SpecAugment
2023-03-25 21:42:39,658 INFO [commonvoice_fr.py:244] (2/7) Time warp factor: 80
2023-03-25 21:42:39,658 INFO [commonvoice_fr.py:254] (2/7) Num frame mask: 10
2023-03-25 21:42:39,658 INFO [commonvoice_fr.py:267] (2/7) About to create train dataset
2023-03-25 21:42:39,658 INFO [commonvoice_fr.py:294] (2/7) Using DynamicBucketingSampler.
2023-03-25 21:42:42,364 INFO [commonvoice_fr.py:309] (2/7) About to create train dataloader
2023-03-25 21:42:42,365 INFO [commonvoice_fr.py:399] (2/7) About to get dev cuts
2023-03-25 21:42:42,365 INFO [commonvoice_fr.py:340] (2/7) About to create dev dataset
2023-03-25 21:42:42,774 INFO [commonvoice_fr.py:357] (2/7) About to create dev dataloader
2023-03-25 21:42:42,775 INFO [finetune.py:1289] (2/7) Sanity check -- see if any of the batches in epoch 1 would cause OOM.
2023-03-25 21:46:46,135 INFO [finetune.py:1317] (2/7) Maximum memory allocated so far is 4858MB
2023-03-25 21:46:46,827 INFO [finetune.py:1317] (2/7) Maximum memory allocated so far is 5345MB
2023-03-25 21:46:48,915 INFO [finetune.py:1317] (2/7) Maximum memory allocated so far is 5345MB
2023-03-25 21:46:49,576 INFO [finetune.py:1317] (2/7) Maximum memory allocated so far is 5345MB
2023-03-25 21:46:50,265 INFO [finetune.py:1317] (2/7) Maximum memory allocated so far is 5345MB
2023-03-25 21:46:50,960 INFO [finetune.py:1317] (2/7) Maximum memory allocated so far is 5345MB
2023-03-25 21:46:59,846 INFO [finetune.py:976] (2/7) Epoch 1, batch 0, loss[loss=7.456, simple_loss=6.76, pruned_loss=6.939, over 4760.00 frames. ], tot_loss[loss=7.456, simple_loss=6.76, pruned_loss=6.939, over 4760.00 frames. ], batch size: 28, lr: 2.00e-03, grad_scale: 2.0
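The checkpoint lines above reflect 'do_finetune': True with 'init_modules': 'encoder': only parameters whose names start with the encoder prefix are copied from the English pretrained epoch-30.pt, while the decoder and joiner keep their fresh initialization. A minimal sketch of what this prefix-based initialization amounts to in plain PyTorch (an illustrative helper, not icefall's exact code):

    import torch

    def load_module_prefix(model: torch.nn.Module, ckpt_path: str,
                           prefix: str = "encoder") -> None:
        """Copy only parameters whose names start with `prefix` from a checkpoint."""
        ckpt = torch.load(ckpt_path, map_location="cpu")
        state = ckpt.get("model", ckpt)  # icefall checkpoints keep weights under "model"
        subset = {k: v for k, v in state.items() if k.startswith(prefix + ".")}
        # strict=False leaves the remaining modules (decoder, joiner) untouched.
        model.load_state_dict(subset, strict=False)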
2023-03-25 21:46:59,847 INFO [finetune.py:1001] (2/7) Computing validation loss
2023-03-25 21:47:04,892 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8300, 1.6664, 1.9483, 1.2105, 1.6926, 1.9439, 1.6453, 2.2088], device='cuda:2'), covar=tensor([0.0584, 0.0981, 0.0573, 0.0810, 0.0513, 0.0621, 0.1282, 0.0351], device='cuda:2'), in_proj_covar=tensor([0.0197, 0.0216, 0.0210, 0.0195, 0.0173, 0.0219, 0.0219, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-25 21:47:07,613 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.2137, 1.5091, 1.7327, 0.7558, 1.2562, 1.6203, 1.7735, 1.5722], device='cuda:2'), covar=tensor([0.0648, 0.0270, 0.0206, 0.0365, 0.0298, 0.0533, 0.0179, 0.0426], device='cuda:2'), in_proj_covar=tensor([0.0150, 0.0176, 0.0133, 0.0143, 0.0148, 0.0145, 0.0168, 0.0183], device='cuda:2'), out_proj_covar=tensor([1.1281e-04, 1.3142e-04, 9.7380e-05, 1.0433e-04, 1.0803e-04, 1.0854e-04, 1.2654e-04, 1.3707e-04], device='cuda:2')
2023-03-25 21:47:15,851 INFO [finetune.py:1010] (2/7) Epoch 1, validation: loss=7.294, simple_loss=6.606, pruned_loss=6.863, over 2265189.00 frames.
2023-03-25 21:47:15,852 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 5345MB
2023-03-25 21:47:19,866 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=5.0, num_to_drop=2, layers_to_drop={1, 3}
2023-03-25 21:47:30,297 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=23.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 21:47:33,036 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=2.25 vs. limit=2.0
2023-03-25 21:47:51,898 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0060, 1.7939, 1.8075, 2.0681, 1.1942, 3.6934, 1.8479, 2.4530], device='cuda:2'), covar=tensor([0.1118, 0.0994, 0.0818, 0.0711, 0.0601, 0.0434, 0.0571, 0.0389], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0117, 0.0125, 0.0123, 0.0108, 0.0097, 0.0091, 0.0090], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0003, 0.0004, 0.0004], device='cuda:2')
2023-03-25 21:48:00,211 INFO [finetune.py:976] (2/7) Epoch 1, batch 50, loss[loss=4.266, simple_loss=3.993, pruned_loss=2.637, over 4248.00 frames. ], tot_loss[loss=4.286, simple_loss=3.846, pruned_loss=4.223, over 216829.18 frames. ], batch size: 66, lr: 2.20e-03, grad_scale: 0.000244140625
2023-03-25 21:48:32,986 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=83.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 21:48:51,579 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=2.00 vs. limit=2.0
2023-03-25 21:48:53,451 WARNING [finetune.py:966] (2/7) Grad scale is small: 0.000244140625
2023-03-25 21:48:53,452 INFO [finetune.py:976] (2/7) Epoch 1, batch 100, loss[loss=2.219, simple_loss=2.113, pruned_loss=1.098, over 4743.00 frames. ], tot_loss[loss=3.458, simple_loss=3.182, pruned_loss=2.681, over 381366.70 frames. ], batch size: 59, lr: 2.40e-03, grad_scale: 0.00048828125
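The grad_scale column in the batch lines traces fp16 dynamic loss scaling ('use_fp16': True): the scale starts at 2.0, collapses to 0.000244140625 (2^-12) after early overflow steps, then doubles roughly every 100 overflow-free batches, reaching 8.0 by batch 2050 further down; the 'Grad scale is small' warnings fire while it is still recovering. This matches the usual dynamic-scaling scheme (halve on overflow, grow after a run of clean steps). A minimal sketch, assuming standard torch.cuda.amp.GradScaler semantics rather than icefall's exact wrapper, with growth_interval chosen only to mirror the cadence seen here; model, optimizer, batches, and compute_loss are placeholders:

    import torch

    scaler = torch.cuda.amp.GradScaler(
        init_scale=2.0,       # the grad_scale logged at batch 0
        growth_factor=2.0,    # double after growth_interval clean steps
        backoff_factor=0.5,   # halve whenever grads contain inf/nan
        growth_interval=100,  # illustrative, not icefall's setting
    )

    for batch in batches:
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            loss = compute_loss(model, batch)
        scaler.scale(loss).backward()
        scaler.step(optimizer)  # the step is skipped if grads overflowed
        scaler.update()         # backoff or growth happens here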
2023-03-25 21:49:13,206 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 7.539e+02 2.791e+03 6.484e+03 1.700e+04 1.722e+07, threshold=1.297e+04, percent-clipped=0.0
2023-03-25 21:49:18,514 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6801, 1.0060, 0.7520, 1.5435, 1.8549, 1.0813, 1.3825, 1.3468], device='cuda:2'), covar=tensor([0.0949, 0.1294, 0.1615, 0.0958, 0.1228, 0.1707, 0.0962, 0.1231], device='cuda:2'), in_proj_covar=tensor([0.0095, 0.0097, 0.0116, 0.0097, 0.0121, 0.0092, 0.0099, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-25 21:49:28,870 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=144.0, num_to_drop=2, layers_to_drop={0, 3}
2023-03-25 21:49:35,848 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3054, 2.5072, 2.7008, 0.9771, 3.5664, 2.2418, 1.2198, 2.1499], device='cuda:2'), covar=tensor([0.5443, 0.4636, 0.5717, 0.6298, 0.2968, 0.2690, 0.6289, 0.4089], device='cuda:2'), in_proj_covar=tensor([0.0154, 0.0154, 0.0164, 0.0127, 0.0155, 0.0119, 0.0147, 0.0121], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-25 21:49:37,440 INFO [finetune.py:976] (2/7) Epoch 1, batch 150, loss[loss=1.679, simple_loss=1.519, pruned_loss=1.295, over 4930.00 frames. ], tot_loss[loss=2.87, simple_loss=2.65, pruned_loss=2.1, over 507461.57 frames. ], batch size: 33, lr: 2.60e-03, grad_scale: 0.00048828125
2023-03-25 21:50:05,775 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=11.00 vs. limit=5.0
2023-03-25 21:50:15,717 WARNING [finetune.py:966] (2/7) Grad scale is small: 0.00048828125
2023-03-25 21:50:15,717 INFO [finetune.py:976] (2/7) Epoch 1, batch 200, loss[loss=1.27, simple_loss=1.099, pruned_loss=1.189, over 4787.00 frames. ], tot_loss[loss=2.37, simple_loss=2.167, pruned_loss=1.803, over 607944.63 frames. ], batch size: 29, lr: 2.80e-03, grad_scale: 0.0009765625
2023-03-25 21:50:29,335 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 2.018e+02 7.406e+02 1.293e+03 3.197e+03 6.754e+04, threshold=2.586e+03, percent-clipped=12.0
2023-03-25 21:50:49,562 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=24.60 vs. limit=5.0
2023-03-25 21:50:54,579 INFO [finetune.py:976] (2/7) Epoch 1, batch 250, loss[loss=1.387, simple_loss=1.183, pruned_loss=1.305, over 4856.00 frames. ], tot_loss[loss=2.051, simple_loss=1.853, pruned_loss=1.631, over 685335.08 frames. ], batch size: 31, lr: 3.00e-03, grad_scale: 0.0009765625
2023-03-25 21:51:43,795 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=296.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 21:51:45,810 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=300.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-25 21:51:46,257 WARNING [finetune.py:966] (2/7) Grad scale is small: 0.0009765625
2023-03-25 21:51:46,257 INFO [finetune.py:976] (2/7) Epoch 1, batch 300, loss[loss=1.172, simple_loss=0.9853, pruned_loss=1.109, over 4810.00 frames. ], tot_loss[loss=1.845, simple_loss=1.647, pruned_loss=1.521, over 742192.50 frames. ], batch size: 25, lr: 3.20e-03, grad_scale: 0.001953125
2023-03-25 21:51:47,993 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=2.09 vs. limit=2.0
2023-03-25 21:51:58,579 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 2.075e+01 5.781e+01 1.827e+02 5.788e+02 1.230e+04, threshold=3.655e+02, percent-clipped=4.0
2023-03-25 21:52:37,276 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=2.16 vs. limit=2.0
2023-03-25 21:52:39,147 INFO [finetune.py:976] (2/7) Epoch 1, batch 350, loss[loss=1.255, simple_loss=1.052, pruned_loss=1.144, over 4144.00 frames. ], tot_loss[loss=1.701, simple_loss=1.498, pruned_loss=1.442, over 789175.10 frames. ], batch size: 65, lr: 3.40e-03, grad_scale: 0.001953125
2023-03-25 21:52:47,116 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=357.0, num_to_drop=2, layers_to_drop={2, 3}
2023-03-25 21:53:10,170 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=387.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 21:53:27,701 WARNING [finetune.py:966] (2/7) Grad scale is small: 0.001953125
2023-03-25 21:53:27,702 INFO [finetune.py:976] (2/7) Epoch 1, batch 400, loss[loss=1.393, simple_loss=1.141, pruned_loss=1.308, over 4844.00 frames. ], tot_loss[loss=1.587, simple_loss=1.379, pruned_loss=1.375, over 825033.18 frames. ], batch size: 44, lr: 3.60e-03, grad_scale: 0.00390625
2023-03-25 21:53:39,884 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.702e+01 2.277e+01 3.517e+01 1.113e+02 1.032e+03, threshold=7.035e+01, percent-clipped=3.0
2023-03-25 21:53:51,258 WARNING [optim.py:389] (2/7) Scaling gradients by 0.06621765345335007, model_norm_threshold=70.34587860107422
2023-03-25 21:53:51,344 INFO [optim.py:451] (2/7) Parameter Dominanting tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.67, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=7.539e+05, grad_sumsq = 2.933e+06, orig_rms_sq=2.571e-01
2023-03-25 21:54:00,695 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=439.0, num_to_drop=2, layers_to_drop={1, 3}
2023-03-25 21:54:05,299 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=448.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-25 21:54:06,748 INFO [finetune.py:976] (2/7) Epoch 1, batch 450, loss[loss=1.082, simple_loss=0.8687, pruned_loss=1.026, over 4822.00 frames. ], tot_loss[loss=1.479, simple_loss=1.269, pruned_loss=1.303, over 854618.90 frames. ], batch size: 30, lr: 3.80e-03, grad_scale: 0.00390625
2023-03-25 21:54:20,597 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=22.40 vs. limit=5.0
2023-03-25 21:54:43,455 WARNING [finetune.py:966] (2/7) Grad scale is small: 0.00390625
2023-03-25 21:54:43,455 INFO [finetune.py:976] (2/7) Epoch 1, batch 500, loss[loss=1.044, simple_loss=0.8331, pruned_loss=0.9694, over 4866.00 frames. ], tot_loss[loss=1.38, simple_loss=1.17, pruned_loss=1.229, over 876767.44 frames. ], batch size: 34, lr: 4.00e-03, grad_scale: 0.0078125
2023-03-25 21:54:51,141 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=2.24 vs. limit=2.0
2023-03-25 21:54:54,209 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=2.84 vs. limit=2.0
2023-03-25 21:54:57,595 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.430e+01 1.676e+01 1.950e+01 4.114e+01 1.062e+03, threshold=3.899e+01, percent-clipped=11.0
2023-03-25 21:55:10,107 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=40.05 vs. limit=5.0
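The optim.py warnings above show the optimizer's norm-based gradient clipping at work: the batch's gradient norm (about 1062 here, since 70.34587860107422 / 0.06621765345335007 ≈ 1062) exceeded the current model_norm_threshold, so every gradient was scaled by threshold/norm, and the follow-up line names the parameter contributing most of the squared norm. The recurring 'grad-norm quartiles ... threshold ... percent-clipped' lines summarize the same statistic per log window, with the threshold adapted from recent norms. A minimal sketch of this style of clipping, assuming a fixed threshold rather than icefall's adaptive, quartile-tracked one (torch.nn.utils.clip_grad_norm_ implements the same idea):

    import torch

    def clip_total_grad_norm(model: torch.nn.Module, threshold: float) -> float:
        """Rescale all gradients by threshold/norm when the total norm exceeds threshold."""
        total = torch.sqrt(sum((p.grad.detach() ** 2).sum()
                               for p in model.parameters() if p.grad is not None))
        if total > threshold:
            scale = threshold / total  # e.g. 70.346 / 1062.3 ≈ 0.0662, as logged above
            for p in model.parameters():
                if p.grad is not None:
                    p.grad.mul_(scale)
        return float(total)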
2023-03-25 21:55:28,670 INFO [finetune.py:976] (2/7) Epoch 1, batch 550, loss[loss=1.041, simple_loss=0.8258, pruned_loss=0.944, over 4833.00 frames. ], tot_loss[loss=1.294, simple_loss=1.084, pruned_loss=1.159, over 895454.83 frames. ], batch size: 40, lr: 4.00e-03, grad_scale: 0.0078125
2023-03-25 21:55:39,552 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=562.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 21:55:42,628 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=568.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 21:55:58,115 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=590.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 21:56:09,542 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=600.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-25 21:56:09,982 WARNING [finetune.py:966] (2/7) Grad scale is small: 0.0078125
2023-03-25 21:56:09,982 INFO [finetune.py:976] (2/7) Epoch 1, batch 600, loss[loss=1.122, simple_loss=0.8775, pruned_loss=1.015, over 4807.00 frames. ], tot_loss[loss=1.231, simple_loss=1.019, pruned_loss=1.105, over 910771.18 frames. ], batch size: 51, lr: 4.00e-03, grad_scale: 0.015625
2023-03-25 21:56:22,617 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.472e+01 1.758e+01 2.024e+01 2.271e+01 8.528e+01, threshold=4.048e+01, percent-clipped=5.0
2023-03-25 21:56:27,300 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=623.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-25 21:56:36,087 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=629.0, num_to_drop=2, layers_to_drop={1, 3}
2023-03-25 21:56:54,357 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=648.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 21:56:55,845 INFO [finetune.py:976] (2/7) Epoch 1, batch 650, loss[loss=1.091, simple_loss=0.8487, pruned_loss=0.9673, over 4834.00 frames. ], tot_loss[loss=1.189, simple_loss=0.9731, pruned_loss=1.067, over 921919.70 frames. ], batch size: 47, lr: 4.00e-03, grad_scale: 0.015625
2023-03-25 21:56:55,937 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=651.0, num_to_drop=2, layers_to_drop={1, 2}
2023-03-25 21:56:56,427 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=652.0, num_to_drop=2, layers_to_drop={1, 2}
2023-03-25 21:57:31,099 INFO [finetune.py:976] (2/7) Epoch 1, batch 700, loss[loss=0.9979, simple_loss=0.766, pruned_loss=0.8813, over 4893.00 frames. ], tot_loss[loss=1.151, simple_loss=0.9317, pruned_loss=1.028, over 927362.42 frames. ], batch size: 32, lr: 4.00e-03, grad_scale: 0.03125
2023-03-25 21:57:32,309 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=9.63 vs. limit=5.0
2023-03-25 21:57:38,290 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.810e+01 2.037e+01 2.232e+01 2.628e+01 5.516e+01, threshold=4.463e+01, percent-clipped=4.0
2023-03-25 21:57:54,107 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5503, 1.8719, 2.5649, 1.7161, 3.5518, 4.1950, 3.4928, 3.2967], device='cuda:2'), covar=tensor([0.0218, 0.0344, 0.0360, 0.0291, 0.0230, 0.0229, 0.0231, 0.0228], device='cuda:2'), in_proj_covar=tensor([0.0072, 0.0081, 0.0071, 0.0073, 0.0090, 0.0076, 0.0084, 0.0076], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004], device='cuda:2')
2023-03-25 21:57:59,224 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=739.0, num_to_drop=2, layers_to_drop={1, 2}
2023-03-25 21:58:01,229 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=743.0, num_to_drop=2, layers_to_drop={2, 3}
2023-03-25 21:58:05,815 INFO [finetune.py:976] (2/7) Epoch 1, batch 750, loss[loss=1.043, simple_loss=0.79, pruned_loss=0.916, over 4881.00 frames. ], tot_loss[loss=1.12, simple_loss=0.897, pruned_loss=0.9957, over 934005.17 frames. ], batch size: 35, lr: 4.00e-03, grad_scale: 0.03125
2023-03-25 21:58:27,666 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=21.05 vs. limit=5.0
2023-03-25 21:58:29,192 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=787.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 21:58:32,294 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=11.43 vs. limit=5.0
2023-03-25 21:58:36,507 INFO [finetune.py:976] (2/7) Epoch 1, batch 800, loss[loss=0.8903, simple_loss=0.6756, pruned_loss=0.7603, over 4759.00 frames. ], tot_loss[loss=1.095, simple_loss=0.8683, pruned_loss=0.9668, over 940825.78 frames. ], batch size: 28, lr: 4.00e-03, grad_scale: 0.0625
2023-03-25 21:58:45,191 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 2.059e+01 2.266e+01 2.508e+01 2.744e+01 4.199e+01, threshold=5.016e+01, percent-clipped=0.0
2023-03-25 21:59:20,566 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=847.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 21:59:22,544 INFO [finetune.py:976] (2/7) Epoch 1, batch 850, loss[loss=0.8979, simple_loss=0.6767, pruned_loss=0.7562, over 4933.00 frames. ], tot_loss[loss=1.059, simple_loss=0.8317, pruned_loss=0.9273, over 940818.62 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 0.0625
2023-03-25 21:59:39,601 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.48 vs. limit=2.0
2023-03-25 21:59:41,108 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=8.45 vs. limit=5.0
2023-03-25 21:59:54,146 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.67 vs. limit=2.0
2023-03-25 22:00:12,165 INFO [finetune.py:976] (2/7) Epoch 1, batch 900, loss[loss=0.9294, simple_loss=0.6908, pruned_loss=0.7798, over 4889.00 frames. ], tot_loss[loss=1.03, simple_loss=0.8017, pruned_loss=0.8938, over 945655.79 frames. ], batch size: 35, lr: 4.00e-03, grad_scale: 0.125
2023-03-25 22:00:16,308 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=908.0, num_to_drop=2, layers_to_drop={1, 3}
2023-03-25 22:00:25,566 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 2.101e+01 2.406e+01 2.575e+01 3.027e+01 5.726e+01, threshold=5.150e+01, percent-clipped=1.0
2023-03-25 22:00:28,255 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=918.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-25 22:00:32,455 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=924.0, num_to_drop=2, layers_to_drop={1, 2}
2023-03-25 22:00:53,627 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=946.0, num_to_drop=2, layers_to_drop={1, 2}
2023-03-25 22:00:56,170 INFO [finetune.py:976] (2/7) Epoch 1, batch 950, loss[loss=0.9452, simple_loss=0.6952, pruned_loss=0.7866, over 4900.00 frames. ], tot_loss[loss=1.01, simple_loss=0.7787, pruned_loss=0.8679, over 947572.31 frames. ], batch size: 35, lr: 4.00e-03, grad_scale: 0.125
2023-03-25 22:00:56,743 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=952.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-25 22:01:26,093 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4577, 3.5322, 3.7214, 1.5164, 4.0018, 2.7605, 0.6037, 2.5109], device='cuda:2'), covar=tensor([0.3742, 0.2881, 0.2859, 0.4457, 0.1492, 0.1999, 0.6257, 0.2394], device='cuda:2'), in_proj_covar=tensor([0.0154, 0.0153, 0.0163, 0.0127, 0.0155, 0.0119, 0.0147, 0.0121], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-25 22:01:43,866 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=1000.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:01:44,324 INFO [finetune.py:976] (2/7) Epoch 1, batch 1000, loss[loss=0.9741, simple_loss=0.7155, pruned_loss=0.795, over 4891.00 frames. ], tot_loss[loss=1.009, simple_loss=0.7705, pruned_loss=0.8584, over 950462.30 frames. ], batch size: 32, lr: 4.00e-03, grad_scale: 0.25
2023-03-25 22:01:58,290 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 2.382e+01 2.890e+01 3.153e+01 3.664e+01 7.462e+01, threshold=6.306e+01, percent-clipped=2.0
2023-03-25 22:02:21,749 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=1043.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-25 22:02:31,287 INFO [finetune.py:976] (2/7) Epoch 1, batch 1050, loss[loss=0.87, simple_loss=0.6263, pruned_loss=0.7126, over 4678.00 frames. ], tot_loss[loss=1.009, simple_loss=0.7633, pruned_loss=0.8494, over 951046.40 frames. ], batch size: 23, lr: 4.00e-03, grad_scale: 0.25
2023-03-25 22:02:46,780 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4512, 1.2230, 2.0399, 1.0042, 1.5928, 1.6803, 1.3302, 2.1098], device='cuda:2'), covar=tensor([0.1054, 0.1224, 0.0957, 0.1033, 0.0719, 0.0781, 0.1228, 0.0509], device='cuda:2'), in_proj_covar=tensor([0.0197, 0.0216, 0.0210, 0.0195, 0.0173, 0.0219, 0.0219, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-25 22:03:07,642 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=1091.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:03:18,240 INFO [finetune.py:976] (2/7) Epoch 1, batch 1100, loss[loss=1.018, simple_loss=0.7371, pruned_loss=0.8132, over 4904.00 frames. ], tot_loss[loss=1.005, simple_loss=0.7544, pruned_loss=0.837, over 952911.31 frames. ], batch size: 36, lr: 4.00e-03, grad_scale: 0.5
2023-03-25 22:03:30,770 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 2.698e+01 3.337e+01 3.640e+01 4.251e+01 7.174e+01, threshold=7.279e+01, percent-clipped=4.0
2023-03-25 22:04:04,832 INFO [finetune.py:976] (2/7) Epoch 1, batch 1150, loss[loss=0.9556, simple_loss=0.695, pruned_loss=0.7453, over 4924.00 frames. ], tot_loss[loss=1.002, simple_loss=0.7468, pruned_loss=0.8241, over 954655.97 frames. ], batch size: 38, lr: 4.00e-03, grad_scale: 0.5
2023-03-25 22:04:06,001 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1153.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:04:25,866 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=16.49 vs. limit=5.0
2023-03-25 22:04:46,447 INFO [finetune.py:976] (2/7) Epoch 1, batch 1200, loss[loss=0.8967, simple_loss=0.6596, pruned_loss=0.6794, over 4887.00 frames. ], tot_loss[loss=0.9907, simple_loss=0.7354, pruned_loss=0.8034, over 956220.35 frames. ], batch size: 35, lr: 4.00e-03, grad_scale: 1.0
2023-03-25 22:04:47,539 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=1203.0, num_to_drop=2, layers_to_drop={2, 3}
2023-03-25 22:04:59,184 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 3.248e+01 4.460e+01 5.563e+01 6.854e+01 1.013e+02, threshold=1.113e+02, percent-clipped=20.0
2023-03-25 22:04:59,281 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1214.0, num_to_drop=2, layers_to_drop={1, 3}
2023-03-25 22:05:01,647 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=1218.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-25 22:05:06,571 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=1224.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 22:05:24,080 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=1246.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:05:26,685 INFO [finetune.py:976] (2/7) Epoch 1, batch 1250, loss[loss=0.9372, simple_loss=0.6936, pruned_loss=0.6947, over 4827.00 frames. ], tot_loss[loss=0.9675, simple_loss=0.717, pruned_loss=0.7727, over 956663.47 frames. ], batch size: 40, lr: 4.00e-03, grad_scale: 1.0
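Each batch line reports three values: simple_loss (a cheap transducer loss on a trivial joiner, used to derive the pruning bounds), pruned_loss (the full RNN-T loss evaluated only inside k2's pruned lattice, 'prune_range': 5), and loss, their weighted sum. With 'simple_loss_scale': 0.5 from the parameter dump, the steady-state combination is, schematically (icefall ramps the two weights during warm-up, so early logged totals do not follow this fixed formula exactly):

    # Schematic steady-state combination; simple_loss and pruned_loss come from
    # k2's rnnt_loss_smoothed / rnnt_loss_pruned inside the model's forward.
    def combine_losses(simple_loss, pruned_loss, simple_loss_scale=0.5):
        return simple_loss_scale * simple_loss + pruned_loss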
2023-03-25 22:05:46,032 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=1266.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:05:48,634 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9485, 1.4324, 2.5282, 1.2090, 2.2555, 2.0991, 1.3069, 2.6270], device='cuda:2'), covar=tensor([0.1743, 0.2241, 0.1219, 0.2183, 0.0917, 0.1546, 0.2739, 0.0812], device='cuda:2'), in_proj_covar=tensor([0.0195, 0.0214, 0.0208, 0.0193, 0.0171, 0.0215, 0.0217, 0.0193], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-25 22:05:49,130 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=1272.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:05:55,004 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=22.34 vs. limit=5.0
2023-03-25 22:06:08,471 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=1294.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:06:14,925 INFO [finetune.py:976] (2/7) Epoch 1, batch 1300, loss[loss=0.7972, simple_loss=0.5933, pruned_loss=0.579, over 4899.00 frames. ], tot_loss[loss=0.9372, simple_loss=0.6947, pruned_loss=0.7366, over 955746.75 frames. ], batch size: 32, lr: 4.00e-03, grad_scale: 1.0
2023-03-25 22:06:23,505 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 5.599e+01 8.403e+01 9.999e+01 1.262e+02 2.600e+02, threshold=2.000e+02, percent-clipped=40.0
2023-03-25 22:06:54,512 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=29.87 vs. limit=5.0
2023-03-25 22:06:57,096 INFO [finetune.py:976] (2/7) Epoch 1, batch 1350, loss[loss=0.8217, simple_loss=0.6215, pruned_loss=0.5793, over 4909.00 frames. ], tot_loss[loss=0.9154, simple_loss=0.68, pruned_loss=0.7064, over 955824.48 frames. ], batch size: 37, lr: 4.00e-03, grad_scale: 1.0
2023-03-25 22:07:27,660 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.12 vs. limit=5.0
2023-03-25 22:07:50,277 INFO [finetune.py:976] (2/7) Epoch 1, batch 1400, loss[loss=0.8718, simple_loss=0.667, pruned_loss=0.6002, over 4844.00 frames. ], tot_loss[loss=0.9038, simple_loss=0.6746, pruned_loss=0.6833, over 955686.97 frames. ], batch size: 49, lr: 4.00e-03, grad_scale: 1.0
2023-03-25 22:07:58,273 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.277e+01 1.400e+02 1.610e+02 1.980e+02 2.974e+02, threshold=3.221e+02, percent-clipped=23.0
2023-03-25 22:08:09,182 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1434.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:08:20,139 INFO [finetune.py:976] (2/7) Epoch 1, batch 1450, loss[loss=0.7745, simple_loss=0.5922, pruned_loss=0.5274, over 4924.00 frames. ], tot_loss[loss=0.8807, simple_loss=0.6613, pruned_loss=0.6525, over 956270.42 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 1.0
2023-03-25 22:08:47,678 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1495.0, num_to_drop=2, layers_to_drop={0, 3}
2023-03-25 22:08:51,796 INFO [finetune.py:976] (2/7) Epoch 1, batch 1500, loss[loss=0.7544, simple_loss=0.5913, pruned_loss=0.4965, over 4903.00 frames. ], tot_loss[loss=0.8543, simple_loss=0.6466, pruned_loss=0.6199, over 956212.06 frames. ], batch size: 43, lr: 4.00e-03, grad_scale: 1.0
2023-03-25 22:08:51,987 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.64 vs. limit=5.0
2023-03-25 22:08:52,966 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=1503.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:09:02,840 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=1509.0, num_to_drop=2, layers_to_drop={2, 3}
2023-03-25 22:09:05,949 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.968e+01 1.844e+02 2.293e+02 2.711e+02 4.587e+02, threshold=4.586e+02, percent-clipped=13.0
2023-03-25 22:09:26,788 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0
2023-03-25 22:09:42,480 INFO [finetune.py:976] (2/7) Epoch 1, batch 1550, loss[loss=0.7, simple_loss=0.5592, pruned_loss=0.4482, over 4895.00 frames. ], tot_loss[loss=0.8236, simple_loss=0.6292, pruned_loss=0.5852, over 955575.55 frames. ], batch size: 43, lr: 4.00e-03, grad_scale: 1.0
2023-03-25 22:09:42,537 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=1551.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:09:52,850 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1566.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:10:33,765 INFO [finetune.py:976] (2/7) Epoch 1, batch 1600, loss[loss=0.6535, simple_loss=0.5118, pruned_loss=0.4224, over 4359.00 frames. ], tot_loss[loss=0.7899, simple_loss=0.6088, pruned_loss=0.5502, over 954659.13 frames. ], batch size: 19, lr: 4.00e-03, grad_scale: 2.0
2023-03-25 22:10:37,146 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.55 vs. limit=2.0
2023-03-25 22:10:40,878 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1611.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 22:10:42,942 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.367e+02 1.965e+02 2.441e+02 2.819e+02 5.041e+02, threshold=4.882e+02, percent-clipped=1.0
2023-03-25 22:10:54,884 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.5554, 3.9403, 3.9986, 4.4365, 4.2513, 4.0775, 4.6486, 1.5008], device='cuda:2'), covar=tensor([0.0707, 0.0872, 0.0773, 0.0755, 0.1262, 0.1080, 0.0627, 0.5351], device='cuda:2'), in_proj_covar=tensor([0.0367, 0.0242, 0.0256, 0.0291, 0.0344, 0.0283, 0.0306, 0.0302], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-25 22:10:55,969 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1627.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-25 22:11:18,719 INFO [finetune.py:976] (2/7) Epoch 1, batch 1650, loss[loss=0.6227, simple_loss=0.5111, pruned_loss=0.3826, over 4904.00 frames. ], tot_loss[loss=0.7552, simple_loss=0.5881, pruned_loss=0.5156, over 953564.44 frames. ], batch size: 32, lr: 4.00e-03, grad_scale: 2.0
2023-03-25 22:11:41,987 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1672.0, num_to_drop=2, layers_to_drop={1, 3}
2023-03-25 22:11:44,749 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.51 vs. limit=2.0
2023-03-25 22:12:02,514 INFO [finetune.py:976] (2/7) Epoch 1, batch 1700, loss[loss=0.6528, simple_loss=0.5315, pruned_loss=0.4013, over 4826.00 frames. ], tot_loss[loss=0.7223, simple_loss=0.5689, pruned_loss=0.4831, over 953391.87 frames. ], batch size: 51, lr: 4.00e-03, grad_scale: 2.0
2023-03-25 22:12:14,553 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.227e+02 2.187e+02 2.736e+02 3.197e+02 8.210e+02, threshold=5.471e+02, percent-clipped=2.0
2023-03-25 22:12:53,679 INFO [finetune.py:976] (2/7) Epoch 1, batch 1750, loss[loss=0.5896, simple_loss=0.4891, pruned_loss=0.3544, over 4813.00 frames. ], tot_loss[loss=0.6999, simple_loss=0.5576, pruned_loss=0.4586, over 952142.90 frames. ], batch size: 25, lr: 4.00e-03, grad_scale: 2.0
2023-03-25 22:13:29,501 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.12 vs. limit=5.0
2023-03-25 22:13:29,981 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=1790.0, num_to_drop=1, layers_to_drop={2}
2023-03-25 22:13:36,053 INFO [finetune.py:976] (2/7) Epoch 1, batch 1800, loss[loss=0.6986, simple_loss=0.5627, pruned_loss=0.4276, over 4176.00 frames. ], tot_loss[loss=0.688, simple_loss=0.5539, pruned_loss=0.4422, over 953187.66 frames. ], batch size: 65, lr: 4.00e-03, grad_scale: 2.0
2023-03-25 22:13:40,490 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=1809.0, num_to_drop=1, layers_to_drop={2}
2023-03-25 22:13:43,024 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.190e+02 2.215e+02 2.629e+02 3.291e+02 5.990e+02, threshold=5.258e+02, percent-clipped=1.0
2023-03-25 22:13:46,464 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.68 vs. limit=2.0
2023-03-25 22:13:59,217 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1838.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:14:06,687 INFO [finetune.py:976] (2/7) Epoch 1, batch 1850, loss[loss=0.4511, simple_loss=0.3992, pruned_loss=0.2537, over 4759.00 frames. ], tot_loss[loss=0.6715, simple_loss=0.5466, pruned_loss=0.4236, over 954214.82 frames. ], batch size: 26, lr: 4.00e-03, grad_scale: 2.0
2023-03-25 22:14:10,079 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=1857.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:14:10,123 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1857.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:14:17,813 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1870.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:14:51,527 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1899.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-25 22:14:52,518 INFO [finetune.py:976] (2/7) Epoch 1, batch 1900, loss[loss=0.5784, simple_loss=0.4999, pruned_loss=0.3308, over 4784.00 frames. ], tot_loss[loss=0.6541, simple_loss=0.538, pruned_loss=0.4056, over 954161.43 frames. ], batch size: 29, lr: 4.00e-03, grad_scale: 2.0
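The recurring zipformer.py:1188 lines track the encoder's stochastic layer skipping during warm-up: while batch_count is inside a stack's [warmup_begin, warmup_end) window, a forward pass may drop one or two randomly chosen layers of that stack (num_to_drop, layers_to_drop); past the window nothing is dropped (num_to_drop=0, layers_to_drop=set(), increasingly common from around batch 1272 on). A minimal sketch of the sampling step only, with an illustrative schedule rather than zipformer's exact probabilities:

    import random

    def pick_layers_to_drop(num_layers: int, batch_count: float,
                            warmup_begin: float, warmup_end: float) -> set:
        # Past the warm-up window, never drop layers.
        if batch_count >= warmup_end:
            return set()
        # Illustrative: drop 0, 1, or 2 layers at random while still warming up.
        num_to_drop = random.choice([0, 1, 2])
        return set(random.sample(range(num_layers), num_to_drop))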
2023-03-25 22:15:03,949 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.381e+02 2.208e+02 2.560e+02 3.227e+02 6.450e+02, threshold=5.121e+02, percent-clipped=1.0
2023-03-25 22:15:11,568 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1918.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-25 22:15:14,226 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=1922.0, num_to_drop=1, layers_to_drop={3}
2023-03-25 22:15:20,200 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1931.0, num_to_drop=2, layers_to_drop={1, 3}
2023-03-25 22:15:37,068 INFO [finetune.py:976] (2/7) Epoch 1, batch 1950, loss[loss=0.5023, simple_loss=0.4497, pruned_loss=0.2781, over 4831.00 frames. ], tot_loss[loss=0.6341, simple_loss=0.5267, pruned_loss=0.3871, over 954724.77 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 2.0
2023-03-25 22:15:46,015 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=1967.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:15:47,183 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3692, 1.5354, 1.1978, 1.6003, 1.4135, 3.0134, 1.0950, 1.4211], device='cuda:2'), covar=tensor([0.1102, 0.1818, 0.1292, 0.1207, 0.2004, 0.0230, 0.1970, 0.2440], device='cuda:2'), in_proj_covar=tensor([0.0070, 0.0077, 0.0069, 0.0071, 0.0089, 0.0073, 0.0083, 0.0076], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004], device='cuda:2')
2023-03-25 22:15:48,822 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0
2023-03-25 22:16:12,903 INFO [finetune.py:976] (2/7) Epoch 1, batch 2000, loss[loss=0.4965, simple_loss=0.4192, pruned_loss=0.2869, over 3025.00 frames. ], tot_loss[loss=0.6104, simple_loss=0.5119, pruned_loss=0.3673, over 953154.50 frames. ], batch size: 12, lr: 4.00e-03, grad_scale: 4.0
2023-03-25 22:16:22,849 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.403e+02 2.183e+02 2.758e+02 3.285e+02 7.843e+02, threshold=5.515e+02, percent-clipped=1.0
2023-03-25 22:16:57,263 INFO [finetune.py:976] (2/7) Epoch 1, batch 2050, loss[loss=0.5175, simple_loss=0.4504, pruned_loss=0.2924, over 4921.00 frames. ], tot_loss[loss=0.5835, simple_loss=0.4958, pruned_loss=0.3456, over 953050.78 frames. ], batch size: 37, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:17:16,714 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4509, 2.2201, 3.1397, 4.3369, 3.3081, 2.9725, 1.1521, 3.4311], device='cuda:2'), covar=tensor([0.1638, 0.1327, 0.0899, 0.0320, 0.0712, 0.1052, 0.1806, 0.0640], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0113, 0.0126, 0.0147, 0.0101, 0.0135, 0.0118, 0.0104], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-25 22:17:31,369 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=2090.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-25 22:17:36,899 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6048, 1.4767, 1.4593, 1.5953, 2.4305, 1.5358, 1.3239, 1.2406], device='cuda:2'), covar=tensor([0.4984, 0.5775, 0.4499, 0.4989, 0.4022, 0.3155, 0.6560, 0.4229], device='cuda:2'), in_proj_covar=tensor([0.0231, 0.0214, 0.0202, 0.0188, 0.0239, 0.0186, 0.0210, 0.0188], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-25 22:17:41,735 INFO [finetune.py:976] (2/7) Epoch 1, batch 2100, loss[loss=0.5251, simple_loss=0.4801, pruned_loss=0.2851, over 4819.00 frames. ], tot_loss[loss=0.5655, simple_loss=0.4868, pruned_loss=0.3298, over 955111.29 frames. ], batch size: 41, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:17:55,062 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.336e+02 2.022e+02 2.484e+02 2.961e+02 6.695e+02, threshold=4.968e+02, percent-clipped=1.0
2023-03-25 22:18:12,540 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=2138.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:18:22,690 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.9680, 1.5248, 0.7479, 1.2788, 1.3786, 2.4512, 1.2274, 1.4088], device='cuda:2'), covar=tensor([0.1226, 0.1753, 0.1310, 0.1177, 0.1916, 0.0334, 0.1759, 0.2139], device='cuda:2'), in_proj_covar=tensor([0.0068, 0.0074, 0.0067, 0.0070, 0.0086, 0.0071, 0.0080, 0.0074], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004], device='cuda:2')
2023-03-25 22:18:29,604 INFO [finetune.py:976] (2/7) Epoch 1, batch 2150, loss[loss=0.5203, simple_loss=0.4877, pruned_loss=0.2765, over 4832.00 frames. ], tot_loss[loss=0.559, simple_loss=0.4866, pruned_loss=0.3217, over 955776.24 frames. ], batch size: 49, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:19:03,492 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=2194.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:19:08,495 INFO [finetune.py:976] (2/7) Epoch 1, batch 2200, loss[loss=0.5718, simple_loss=0.5117, pruned_loss=0.3159, over 4927.00 frames. ], tot_loss[loss=0.549, simple_loss=0.4835, pruned_loss=0.312, over 955001.22 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:19:17,021 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=2213.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:19:17,478 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.568e+02 2.355e+02 2.819e+02 3.325e+02 5.172e+02, threshold=5.637e+02, percent-clipped=1.0
2023-03-25 22:19:21,485 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5316, 3.7461, 3.7591, 1.7650, 4.0210, 2.8957, 1.0113, 2.6507], device='cuda:2'), covar=tensor([0.2191, 0.1146, 0.1396, 0.2764, 0.0745, 0.0760, 0.3504, 0.1115], device='cuda:2'), in_proj_covar=tensor([0.0144, 0.0144, 0.0152, 0.0120, 0.0142, 0.0108, 0.0132, 0.0111], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-25 22:19:22,643 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=2222.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-25 22:19:28,127 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=2226.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:19:57,056 INFO [finetune.py:976] (2/7) Epoch 1, batch 2250, loss[loss=0.5408, simple_loss=0.4909, pruned_loss=0.2953, over 4900.00 frames. ], tot_loss[loss=0.5385, simple_loss=0.4789, pruned_loss=0.3027, over 954525.21 frames. ], batch size: 37, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:20:18,377 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=2267.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:20:20,123 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=2270.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:21:00,800 INFO [finetune.py:976] (2/7) Epoch 1, batch 2300, loss[loss=0.4983, simple_loss=0.4548, pruned_loss=0.2709, over 4863.00 frames. ], tot_loss[loss=0.5247, simple_loss=0.4716, pruned_loss=0.2917, over 954231.06 frames. ], batch size: 31, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:21:15,879 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.417e+02 2.050e+02 2.425e+02 2.921e+02 4.362e+02, threshold=4.850e+02, percent-clipped=0.0
2023-03-25 22:21:21,996 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=2315.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:21:37,391 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=2340.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:21:54,801 INFO [finetune.py:976] (2/7) Epoch 1, batch 2350, loss[loss=0.3566, simple_loss=0.3535, pruned_loss=0.1798, over 4782.00 frames. ], tot_loss[loss=0.5079, simple_loss=0.4606, pruned_loss=0.2798, over 953622.96 frames. ], batch size: 23, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:22:16,398 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0
2023-03-25 22:22:47,386 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.82 vs. limit=2.0
2023-03-25 22:22:57,441 INFO [finetune.py:976] (2/7) Epoch 1, batch 2400, loss[loss=0.4379, simple_loss=0.4173, pruned_loss=0.2293, over 4918.00 frames. ], tot_loss[loss=0.4946, simple_loss=0.452, pruned_loss=0.2703, over 952766.72 frames. ], batch size: 37, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:22:57,562 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=2401.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-25 22:23:05,863 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.408e+02 1.953e+02 2.427e+02 2.971e+02 6.309e+02, threshold=4.853e+02, percent-clipped=1.0
2023-03-25 22:23:17,131 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.68 vs. limit=5.0
2023-03-25 22:23:27,063 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0
2023-03-25 22:23:32,014 INFO [finetune.py:976] (2/7) Epoch 1, batch 2450, loss[loss=0.5062, simple_loss=0.4646, pruned_loss=0.2739, over 4863.00 frames. ], tot_loss[loss=0.4818, simple_loss=0.4437, pruned_loss=0.2613, over 952967.70 frames. ], batch size: 31, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:24:09,660 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0
2023-03-25 22:24:21,669 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=2494.0, num_to_drop=1, layers_to_drop={2}
2023-03-25 22:24:25,633 INFO [finetune.py:976] (2/7) Epoch 1, batch 2500, loss[loss=0.5356, simple_loss=0.5068, pruned_loss=0.2822, over 4808.00 frames. ], tot_loss[loss=0.4752, simple_loss=0.4414, pruned_loss=0.2555, over 955683.79 frames. ], batch size: 45, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:24:29,780 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=8.25 vs. limit=5.0
2023-03-25 22:24:34,884 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=2513.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:24:35,368 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.353e+02 2.241e+02 2.593e+02 3.079e+02 4.323e+02, threshold=5.185e+02, percent-clipped=0.0
2023-03-25 22:24:44,453 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=2526.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:24:54,544 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=2542.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:25:00,196 INFO [finetune.py:976] (2/7) Epoch 1, batch 2550, loss[loss=0.5238, simple_loss=0.4981, pruned_loss=0.2747, over 4728.00 frames. ], tot_loss[loss=0.4739, simple_loss=0.4436, pruned_loss=0.2529, over 957263.33 frames. ], batch size: 59, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:25:09,686 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=2561.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:25:18,672 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=2574.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:25:48,340 INFO [finetune.py:976] (2/7) Epoch 1, batch 2600, loss[loss=0.4908, simple_loss=0.4704, pruned_loss=0.2556, over 4811.00 frames. ], tot_loss[loss=0.4688, simple_loss=0.4411, pruned_loss=0.2489, over 956098.87 frames. ], batch size: 45, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:25:55,823 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.651e+02 2.205e+02 2.587e+02 2.996e+02 4.228e+02, threshold=5.174e+02, percent-clipped=0.0
2023-03-25 22:26:19,956 INFO [finetune.py:976] (2/7) Epoch 1, batch 2650, loss[loss=0.4335, simple_loss=0.4337, pruned_loss=0.2167, over 4811.00 frames. ], tot_loss[loss=0.4635, simple_loss=0.4386, pruned_loss=0.2447, over 952682.16 frames. ], batch size: 39, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:26:37,511 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.67 vs. limit=2.0
2023-03-25 22:26:45,761 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.02 vs. limit=5.0
2023-03-25 22:26:54,468 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=2685.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 22:27:06,868 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=2696.0, num_to_drop=1, layers_to_drop={3}
2023-03-25 22:27:15,135 INFO [finetune.py:976] (2/7) Epoch 1, batch 2700, loss[loss=0.3731, simple_loss=0.368, pruned_loss=0.1891, over 4738.00 frames. ], tot_loss[loss=0.4546, simple_loss=0.4338, pruned_loss=0.2381, over 953914.10 frames. ], batch size: 23, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:27:26,658 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5461, 1.7543, 1.5303, 1.7008, 0.9450, 3.5482, 1.2522, 1.8503], device='cuda:2'), covar=tensor([0.3792, 0.2505, 0.2290, 0.2287, 0.2395, 0.0190, 0.2773, 0.1558], device='cuda:2'), in_proj_covar=tensor([0.0116, 0.0100, 0.0107, 0.0105, 0.0095, 0.0084, 0.0082, 0.0081], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004], device='cuda:2')
2023-03-25 22:27:28,252 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.346e+02 2.127e+02 2.493e+02 3.058e+02 5.200e+02, threshold=4.985e+02, percent-clipped=1.0
2023-03-25 22:27:29,000 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8700, 2.3531, 1.6723, 1.4950, 2.8890, 2.6392, 2.0667, 1.9436], device='cuda:2'), covar=tensor([0.0945, 0.0628, 0.1078, 0.1155, 0.0446, 0.0760, 0.0986, 0.1311], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0133, 0.0133, 0.0121, 0.0109, 0.0131, 0.0137, 0.0167], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-25 22:28:08,151 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5386, 1.7030, 1.4973, 1.7365, 0.9251, 3.1347, 1.0381, 1.7150], device='cuda:2'), covar=tensor([0.3494, 0.2250, 0.2133, 0.2176, 0.2170, 0.0241, 0.2940, 0.1421], device='cuda:2'), in_proj_covar=tensor([0.0116, 0.0100, 0.0107, 0.0105, 0.0096, 0.0085, 0.0083, 0.0081], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004], device='cuda:2')
2023-03-25 22:28:14,526 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=2746.0, num_to_drop=2, layers_to_drop={0, 3}
2023-03-25 22:28:17,275 INFO [finetune.py:976] (2/7) Epoch 1, batch 2750, loss[loss=0.389, simple_loss=0.3845, pruned_loss=0.1968, over 4770.00 frames. ], tot_loss[loss=0.4448, simple_loss=0.4265, pruned_loss=0.2318, over 955239.78 frames. ], batch size: 26, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:28:57,075 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0
2023-03-25 22:28:58,717 INFO [finetune.py:976] (2/7) Epoch 1, batch 2800, loss[loss=0.3816, simple_loss=0.3881, pruned_loss=0.1876, over 4865.00 frames. ], tot_loss[loss=0.4352, simple_loss=0.42, pruned_loss=0.2254, over 954544.06 frames. ], batch size: 34, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:29:06,642 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.489e+02 2.264e+02 2.537e+02 3.001e+02 5.007e+02, threshold=5.073e+02, percent-clipped=1.0
2023-03-25 22:29:12,560 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=2824.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:29:40,281 INFO [finetune.py:976] (2/7) Epoch 1, batch 2850, loss[loss=0.374, simple_loss=0.3788, pruned_loss=0.1846, over 4760.00 frames. ], tot_loss[loss=0.431, simple_loss=0.4169, pruned_loss=0.2228, over 954719.59 frames. ], batch size: 28, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:30:03,440 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=2885.0, num_to_drop=2, layers_to_drop={1, 2}
2023-03-25 22:30:15,602 INFO [finetune.py:976] (2/7) Epoch 1, batch 2900, loss[loss=0.377, simple_loss=0.3715, pruned_loss=0.1912, over 4732.00 frames. ], tot_loss[loss=0.4328, simple_loss=0.4194, pruned_loss=0.2233, over 955148.21 frames. ], batch size: 23, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:30:19,197 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6551, 1.7005, 1.5417, 1.1757, 2.1327, 1.9195, 1.8191, 1.6496], device='cuda:2'), covar=tensor([0.0924, 0.0791, 0.0992, 0.1175, 0.0457, 0.0911, 0.0856, 0.1317], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0132, 0.0132, 0.0121, 0.0108, 0.0131, 0.0137, 0.0165], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-25 22:30:23,165 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.426e+02 2.100e+02 2.461e+02 2.914e+02 6.574e+02, threshold=4.923e+02, percent-clipped=3.0
2023-03-25 22:30:30,171 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8994, 2.1449, 1.4983, 1.4484, 2.8932, 2.5183, 2.0339, 1.9205], device='cuda:2'), covar=tensor([0.0862, 0.0652, 0.1010, 0.1031, 0.0357, 0.0745, 0.0842, 0.1263], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0132, 0.0132, 0.0121, 0.0108, 0.0131, 0.0137, 0.0165], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-25 22:30:37,140 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=2937.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 22:30:50,912 INFO [finetune.py:976] (2/7) Epoch 1, batch 2950, loss[loss=0.4186, simple_loss=0.3945, pruned_loss=0.2214, over 4387.00 frames. ], tot_loss[loss=0.4344, simple_loss=0.4228, pruned_loss=0.2231, over 956140.81 frames. ], batch size: 19, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:31:04,889 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6140, 1.4675, 2.0102, 3.1245, 2.2139, 2.1677, 0.8813, 2.4182], device='cuda:2'), covar=tensor([0.1758, 0.1578, 0.1217, 0.0460, 0.0854, 0.1615, 0.1956, 0.0659], device='cuda:2'), in_proj_covar=tensor([0.0098, 0.0113, 0.0129, 0.0149, 0.0101, 0.0137, 0.0120, 0.0104], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-25 22:31:27,747 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7770, 1.7509, 1.6611, 1.3651, 2.1569, 1.9886, 1.8357, 1.6865], device='cuda:2'), covar=tensor([0.0756, 0.0630, 0.0785, 0.0939, 0.0410, 0.0677, 0.0765, 0.1159], device='cuda:2'), in_proj_covar=tensor([0.0128, 0.0131, 0.0131, 0.0120, 0.0107, 0.0130, 0.0136, 0.0164], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-25 22:31:31,760 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=2996.0, num_to_drop=1, layers_to_drop={2}
2023-03-25 22:31:33,418 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=2998.0, num_to_drop=2, layers_to_drop={1, 2}
2023-03-25 22:31:35,184 INFO [finetune.py:976] (2/7) Epoch 1, batch 3000, loss[loss=0.467, simple_loss=0.4747, pruned_loss=0.2296, over 4808.00 frames. ], tot_loss[loss=0.4336, simple_loss=0.4234, pruned_loss=0.2221, over 956360.69 frames. ], batch size: 40, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:31:35,184 INFO [finetune.py:1001] (2/7) Computing validation loss
2023-03-25 22:31:56,384 INFO [finetune.py:1010] (2/7) Epoch 1, validation: loss=0.4228, simple_loss=0.4589, pruned_loss=0.1933, over 2265189.00 frames.
2023-03-25 22:31:56,384 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 5444MB
2023-03-25 22:32:17,071 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.364e+02 2.092e+02 2.490e+02 2.940e+02 5.162e+02, threshold=4.980e+02, percent-clipped=2.0
2023-03-25 22:32:25,821 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=3019.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:32:39,244 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=3041.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:32:40,984 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=3044.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:32:44,999 INFO [finetune.py:976] (2/7) Epoch 1, batch 3050, loss[loss=0.3939, simple_loss=0.4038, pruned_loss=0.192, over 4810.00 frames. ], tot_loss[loss=0.4285, simple_loss=0.4209, pruned_loss=0.2181, over 956374.30 frames. ], batch size: 38, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:33:07,436 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0
2023-03-25 22:33:09,863 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=3080.0, num_to_drop=2, layers_to_drop={0, 3}
2023-03-25 22:33:38,269 INFO [finetune.py:976] (2/7) Epoch 1, batch 3100, loss[loss=0.3984, simple_loss=0.3866, pruned_loss=0.2051, over 4819.00 frames. ], tot_loss[loss=0.42, simple_loss=0.4153, pruned_loss=0.2124, over 956975.63 frames. ], batch size: 30, lr: 4.00e-03, grad_scale: 8.0
], batch size: 30, lr: 4.00e-03, grad_scale: 8.0 2023-03-25 22:33:51,987 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.371e+02 2.009e+02 2.458e+02 3.052e+02 4.298e+02, threshold=4.916e+02, percent-clipped=0.0 2023-03-25 22:34:39,800 INFO [finetune.py:976] (2/7) Epoch 1, batch 3150, loss[loss=0.3488, simple_loss=0.3665, pruned_loss=0.1655, over 4903.00 frames. ], tot_loss[loss=0.4121, simple_loss=0.4089, pruned_loss=0.2077, over 957306.02 frames. ], batch size: 35, lr: 4.00e-03, grad_scale: 8.0 2023-03-25 22:35:06,514 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0 2023-03-25 22:35:11,093 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=3180.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 22:35:19,486 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2897, 1.5762, 1.1781, 1.6236, 1.6177, 2.8007, 1.2667, 1.5786], device='cuda:2'), covar=tensor([0.1097, 0.1558, 0.1363, 0.1008, 0.1437, 0.0332, 0.1454, 0.1716], device='cuda:2'), in_proj_covar=tensor([0.0071, 0.0075, 0.0069, 0.0073, 0.0087, 0.0074, 0.0081, 0.0074], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004], device='cuda:2') 2023-03-25 22:35:39,620 INFO [finetune.py:976] (2/7) Epoch 1, batch 3200, loss[loss=0.3793, simple_loss=0.3953, pruned_loss=0.1817, over 4768.00 frames. ], tot_loss[loss=0.4032, simple_loss=0.4018, pruned_loss=0.2023, over 956754.79 frames. ], batch size: 28, lr: 4.00e-03, grad_scale: 8.0 2023-03-25 22:35:52,730 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.131e+02 1.973e+02 2.320e+02 2.787e+02 5.091e+02, threshold=4.641e+02, percent-clipped=1.0 2023-03-25 22:36:24,630 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.0632, 4.8515, 4.7050, 3.0726, 5.0842, 3.9185, 1.2081, 3.6766], device='cuda:2'), covar=tensor([0.1959, 0.1307, 0.1280, 0.2483, 0.0578, 0.0658, 0.4115, 0.1118], device='cuda:2'), in_proj_covar=tensor([0.0149, 0.0151, 0.0157, 0.0123, 0.0147, 0.0112, 0.0138, 0.0114], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 22:36:29,139 INFO [finetune.py:976] (2/7) Epoch 1, batch 3250, loss[loss=0.4122, simple_loss=0.4088, pruned_loss=0.2078, over 4822.00 frames. ], tot_loss[loss=0.4012, simple_loss=0.4008, pruned_loss=0.2008, over 958551.20 frames. ], batch size: 30, lr: 4.00e-03, grad_scale: 8.0 2023-03-25 22:37:12,281 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=3288.0, num_to_drop=1, layers_to_drop={1} 2023-03-25 22:37:20,008 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=3293.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 22:37:24,555 INFO [finetune.py:976] (2/7) Epoch 1, batch 3300, loss[loss=0.4151, simple_loss=0.4278, pruned_loss=0.2012, over 4841.00 frames. ], tot_loss[loss=0.4022, simple_loss=0.4029, pruned_loss=0.2008, over 955820.81 frames. 
], batch size: 47, lr: 4.00e-03, grad_scale: 8.0 2023-03-25 22:37:32,705 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.553e+02 2.210e+02 2.512e+02 3.057e+02 4.555e+02, threshold=5.024e+02, percent-clipped=0.0 2023-03-25 22:38:03,255 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=3341.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 22:38:13,352 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=3349.0, num_to_drop=2, layers_to_drop={1, 2} 2023-03-25 22:38:14,398 INFO [finetune.py:976] (2/7) Epoch 1, batch 3350, loss[loss=0.4393, simple_loss=0.4321, pruned_loss=0.2232, over 4791.00 frames. ], tot_loss[loss=0.4027, simple_loss=0.4048, pruned_loss=0.2003, over 957183.83 frames. ], batch size: 51, lr: 4.00e-03, grad_scale: 8.0 2023-03-25 22:38:44,448 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=3375.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 22:38:52,677 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-25 22:38:59,533 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=3389.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 22:39:06,749 INFO [finetune.py:976] (2/7) Epoch 1, batch 3400, loss[loss=0.3468, simple_loss=0.3566, pruned_loss=0.1685, over 4762.00 frames. ], tot_loss[loss=0.4025, simple_loss=0.4049, pruned_loss=0.2001, over 956338.51 frames. ], batch size: 26, lr: 4.00e-03, grad_scale: 8.0 2023-03-25 22:39:17,328 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3608, 0.8888, 1.1697, 1.0581, 1.0015, 0.9631, 1.1140, 1.1479], device='cuda:2'), covar=tensor([ 7.0604, 14.1923, 7.7599, 9.8328, 11.8346, 7.0364, 15.1430, 7.4498], device='cuda:2'), in_proj_covar=tensor([0.0231, 0.0256, 0.0243, 0.0277, 0.0261, 0.0223, 0.0291, 0.0218], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-25 22:39:20,793 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.307e+02 1.988e+02 2.392e+02 2.720e+02 4.202e+02, threshold=4.784e+02, percent-clipped=0.0 2023-03-25 22:39:26,050 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3751, 1.1984, 1.5178, 2.3972, 1.6940, 2.0080, 0.7679, 1.8925], device='cuda:2'), covar=tensor([0.1875, 0.1624, 0.1161, 0.0613, 0.0938, 0.1146, 0.1676, 0.0846], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0115, 0.0132, 0.0153, 0.0102, 0.0140, 0.0123, 0.0106], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-25 22:39:52,094 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0 2023-03-25 22:40:08,844 INFO [finetune.py:976] (2/7) Epoch 1, batch 3450, loss[loss=0.3312, simple_loss=0.3588, pruned_loss=0.1518, over 4760.00 frames. ], tot_loss[loss=0.4006, simple_loss=0.4041, pruned_loss=0.1986, over 954499.69 frames. 
], batch size: 27, lr: 4.00e-03, grad_scale: 8.0 2023-03-25 22:40:19,969 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5561, 1.3701, 1.9062, 3.0940, 2.1721, 2.1962, 0.8568, 2.3120], device='cuda:2'), covar=tensor([0.1778, 0.1684, 0.1234, 0.0467, 0.0847, 0.1365, 0.1969, 0.0724], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0114, 0.0131, 0.0151, 0.0101, 0.0139, 0.0122, 0.0105], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-25 22:40:28,877 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6141, 1.5827, 1.0511, 1.3989, 1.4417, 1.3173, 1.3817, 2.1855], device='cuda:2'), covar=tensor([1.1678, 1.0573, 1.1162, 1.5601, 0.8364, 0.8261, 1.0709, 0.3102], device='cuda:2'), in_proj_covar=tensor([0.0231, 0.0226, 0.0206, 0.0264, 0.0220, 0.0190, 0.0226, 0.0169], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-25 22:40:37,146 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-25 22:40:40,374 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=3480.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 22:41:03,278 INFO [finetune.py:976] (2/7) Epoch 1, batch 3500, loss[loss=0.3947, simple_loss=0.3899, pruned_loss=0.1998, over 4931.00 frames. ], tot_loss[loss=0.3937, simple_loss=0.3982, pruned_loss=0.1946, over 954921.05 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 8.0 2023-03-25 22:41:14,914 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.301e+02 2.271e+02 2.832e+02 3.824e+02 1.123e+03, threshold=5.664e+02, percent-clipped=12.0 2023-03-25 22:41:30,605 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=3528.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 22:41:48,805 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=3546.0, num_to_drop=1, layers_to_drop={0} 2023-03-25 22:41:57,309 INFO [finetune.py:976] (2/7) Epoch 1, batch 3550, loss[loss=0.3456, simple_loss=0.3772, pruned_loss=0.157, over 4928.00 frames. ], tot_loss[loss=0.3894, simple_loss=0.3941, pruned_loss=0.1923, over 954716.82 frames. ], batch size: 37, lr: 4.00e-03, grad_scale: 8.0 2023-03-25 22:41:57,572 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.09 vs. limit=5.0 2023-03-25 22:42:11,380 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.02 vs. limit=5.0 2023-03-25 22:42:39,694 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=3593.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 22:42:50,748 INFO [finetune.py:976] (2/7) Epoch 1, batch 3600, loss[loss=0.3798, simple_loss=0.371, pruned_loss=0.1943, over 4074.00 frames. ], tot_loss[loss=0.3826, simple_loss=0.3887, pruned_loss=0.1882, over 953799.59 frames. 
], batch size: 65, lr: 4.00e-03, grad_scale: 8.0 2023-03-25 22:42:59,636 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=3607.0, num_to_drop=2, layers_to_drop={2, 3} 2023-03-25 22:43:09,366 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.842e+02 2.557e+02 2.866e+02 3.769e+02 9.044e+02, threshold=5.732e+02, percent-clipped=5.0 2023-03-25 22:43:22,160 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.6616, 0.4065, 0.6828, 0.5175, 0.4624, 0.4203, 0.5559, 0.5297], device='cuda:2'), covar=tensor([ 7.6925, 13.2186, 7.8178, 11.0880, 12.3002, 7.4574, 12.4946, 7.3432], device='cuda:2'), in_proj_covar=tensor([0.0224, 0.0249, 0.0235, 0.0267, 0.0252, 0.0216, 0.0281, 0.0211], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-25 22:43:44,100 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=3641.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 22:43:46,912 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=3644.0, num_to_drop=1, layers_to_drop={2} 2023-03-25 22:43:49,716 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.9702, 1.3375, 0.8985, 1.2276, 1.4402, 2.4639, 1.1762, 1.4305], device='cuda:2'), covar=tensor([0.1270, 0.1825, 0.1277, 0.1209, 0.1759, 0.0372, 0.1668, 0.1877], device='cuda:2'), in_proj_covar=tensor([0.0073, 0.0076, 0.0070, 0.0074, 0.0088, 0.0075, 0.0082, 0.0075], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-25 22:43:52,548 INFO [finetune.py:976] (2/7) Epoch 1, batch 3650, loss[loss=0.4944, simple_loss=0.4654, pruned_loss=0.2617, over 4910.00 frames. ], tot_loss[loss=0.3842, simple_loss=0.3907, pruned_loss=0.1889, over 954259.07 frames. 
], batch size: 36, lr: 4.00e-03, grad_scale: 8.0 2023-03-25 22:44:09,181 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4066, 1.5406, 1.4033, 1.6097, 0.9403, 3.5656, 1.1463, 1.7693], device='cuda:2'), covar=tensor([0.3996, 0.2705, 0.2371, 0.2489, 0.2333, 0.0189, 0.3193, 0.1753], device='cuda:2'), in_proj_covar=tensor([0.0119, 0.0102, 0.0109, 0.0108, 0.0100, 0.0087, 0.0087, 0.0085], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004], device='cuda:2') 2023-03-25 22:44:11,680 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.5917, 4.7886, 5.1468, 5.4629, 5.2671, 5.0211, 5.6260, 2.2142], device='cuda:2'), covar=tensor([0.0681, 0.0714, 0.0644, 0.0738, 0.1161, 0.1119, 0.0511, 0.5044], device='cuda:2'), in_proj_covar=tensor([0.0374, 0.0247, 0.0266, 0.0298, 0.0352, 0.0292, 0.0314, 0.0305], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 22:44:20,325 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=3675.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 22:44:21,469 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=3676.0, num_to_drop=1, layers_to_drop={1} 2023-03-25 22:44:35,076 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0475, 1.6030, 1.4218, 1.1217, 1.8000, 2.5728, 2.0644, 1.5259], device='cuda:2'), covar=tensor([0.0201, 0.0461, 0.0486, 0.0546, 0.0340, 0.0121, 0.0251, 0.0443], device='cuda:2'), in_proj_covar=tensor([0.0084, 0.0112, 0.0132, 0.0111, 0.0104, 0.0100, 0.0087, 0.0111], device='cuda:2'), out_proj_covar=tensor([6.6106e-05, 8.8689e-05, 1.0700e-04, 8.7765e-05, 8.2802e-05, 7.4949e-05, 6.7327e-05, 8.7162e-05], device='cuda:2') 2023-03-25 22:44:40,054 INFO [finetune.py:976] (2/7) Epoch 1, batch 3700, loss[loss=0.417, simple_loss=0.4328, pruned_loss=0.2006, over 4812.00 frames. ], tot_loss[loss=0.3852, simple_loss=0.3931, pruned_loss=0.1886, over 953029.61 frames. ], batch size: 39, lr: 4.00e-03, grad_scale: 8.0 2023-03-25 22:44:52,812 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.548e+02 2.567e+02 2.980e+02 3.536e+02 5.905e+02, threshold=5.959e+02, percent-clipped=1.0 2023-03-25 22:45:01,539 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.53 vs. limit=2.0 2023-03-25 22:45:09,751 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=3723.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 22:45:23,629 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=3737.0, num_to_drop=2, layers_to_drop={2, 3} 2023-03-25 22:45:43,732 INFO [finetune.py:976] (2/7) Epoch 1, batch 3750, loss[loss=0.4024, simple_loss=0.4029, pruned_loss=0.2009, over 4890.00 frames. ], tot_loss[loss=0.3853, simple_loss=0.3946, pruned_loss=0.188, over 953356.24 frames. ], batch size: 32, lr: 4.00e-03, grad_scale: 8.0 2023-03-25 22:46:33,630 INFO [finetune.py:976] (2/7) Epoch 1, batch 3800, loss[loss=0.4099, simple_loss=0.4225, pruned_loss=0.1986, over 4852.00 frames. ], tot_loss[loss=0.3842, simple_loss=0.3947, pruned_loss=0.1869, over 952941.49 frames. 
], batch size: 44, lr: 4.00e-03, grad_scale: 8.0 2023-03-25 22:46:47,078 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.580e+02 2.141e+02 2.900e+02 3.620e+02 1.043e+03, threshold=5.800e+02, percent-clipped=4.0 2023-03-25 22:46:58,348 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2149, 1.6472, 0.8619, 1.9379, 2.2002, 1.6479, 1.6983, 2.1026], device='cuda:2'), covar=tensor([0.1567, 0.1904, 0.2476, 0.1195, 0.2340, 0.2087, 0.1293, 0.1828], device='cuda:2'), in_proj_covar=tensor([0.0092, 0.0094, 0.0112, 0.0090, 0.0122, 0.0092, 0.0095, 0.0091], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-25 22:47:22,184 INFO [finetune.py:976] (2/7) Epoch 1, batch 3850, loss[loss=0.3176, simple_loss=0.3451, pruned_loss=0.145, over 4889.00 frames. ], tot_loss[loss=0.3789, simple_loss=0.3909, pruned_loss=0.1835, over 955984.94 frames. ], batch size: 43, lr: 4.00e-03, grad_scale: 8.0 2023-03-25 22:47:22,274 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.5993, 3.9158, 4.0710, 4.4247, 4.2863, 4.0299, 4.6793, 1.4446], device='cuda:2'), covar=tensor([0.0647, 0.0794, 0.0680, 0.0885, 0.1094, 0.1255, 0.0529, 0.4953], device='cuda:2'), in_proj_covar=tensor([0.0371, 0.0245, 0.0265, 0.0294, 0.0348, 0.0289, 0.0311, 0.0303], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 22:47:24,561 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4705, 1.3330, 1.1092, 1.5350, 1.6311, 1.1896, 2.1602, 1.4280], device='cuda:2'), covar=tensor([0.6699, 1.5007, 1.1239, 1.3948, 0.7763, 0.6053, 0.7738, 0.9978], device='cuda:2'), in_proj_covar=tensor([0.0156, 0.0183, 0.0222, 0.0235, 0.0197, 0.0168, 0.0175, 0.0178], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0002], device='cuda:2') 2023-03-25 22:48:07,194 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.6771, 1.5817, 1.3577, 0.8706, 1.3879, 1.7801, 1.6034, 1.4855], device='cuda:2'), covar=tensor([0.0699, 0.0373, 0.0484, 0.0524, 0.0293, 0.0234, 0.0266, 0.0369], device='cuda:2'), in_proj_covar=tensor([0.0117, 0.0140, 0.0109, 0.0118, 0.0118, 0.0110, 0.0134, 0.0139], device='cuda:2'), out_proj_covar=tensor([8.8273e-05, 1.0442e-04, 8.0142e-05, 8.6520e-05, 8.5431e-05, 8.1349e-05, 9.9830e-05, 1.0333e-04], device='cuda:2') 2023-03-25 22:48:09,999 INFO [finetune.py:976] (2/7) Epoch 1, batch 3900, loss[loss=0.3424, simple_loss=0.3609, pruned_loss=0.162, over 4899.00 frames. ], tot_loss[loss=0.3731, simple_loss=0.3853, pruned_loss=0.1805, over 955176.52 frames. 
], batch size: 43, lr: 4.00e-03, grad_scale: 8.0 2023-03-25 22:48:10,640 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=3902.0, num_to_drop=1, layers_to_drop={3} 2023-03-25 22:48:20,243 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.580e+02 2.264e+02 2.673e+02 3.196e+02 5.181e+02, threshold=5.346e+02, percent-clipped=0.0 2023-03-25 22:48:24,023 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4015, 1.3016, 1.3585, 0.8394, 1.6017, 1.4216, 1.3158, 1.2442], device='cuda:2'), covar=tensor([0.0722, 0.0856, 0.0762, 0.1087, 0.0594, 0.0826, 0.0874, 0.1464], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0131, 0.0133, 0.0122, 0.0107, 0.0131, 0.0137, 0.0160], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 22:48:30,333 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=3926.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 22:48:50,576 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=3944.0, num_to_drop=1, layers_to_drop={2} 2023-03-25 22:48:54,666 INFO [finetune.py:976] (2/7) Epoch 1, batch 3950, loss[loss=0.3586, simple_loss=0.3648, pruned_loss=0.1762, over 4829.00 frames. ], tot_loss[loss=0.3647, simple_loss=0.378, pruned_loss=0.1757, over 953892.13 frames. ], batch size: 38, lr: 4.00e-03, grad_scale: 8.0 2023-03-25 22:49:44,093 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.11 vs. limit=5.0 2023-03-25 22:49:45,842 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=3987.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 22:49:53,768 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=3992.0, num_to_drop=1, layers_to_drop={0} 2023-03-25 22:50:05,835 INFO [finetune.py:976] (2/7) Epoch 1, batch 4000, loss[loss=0.4013, simple_loss=0.4154, pruned_loss=0.1935, over 4842.00 frames. ], tot_loss[loss=0.3611, simple_loss=0.3751, pruned_loss=0.1736, over 952409.63 frames. ], batch size: 47, lr: 4.00e-03, grad_scale: 8.0 2023-03-25 22:50:18,328 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.331e+02 2.072e+02 2.562e+02 2.941e+02 5.028e+02, threshold=5.123e+02, percent-clipped=0.0 2023-03-25 22:50:31,341 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=4032.0, num_to_drop=1, layers_to_drop={0} 2023-03-25 22:50:50,289 INFO [finetune.py:976] (2/7) Epoch 1, batch 4050, loss[loss=0.3438, simple_loss=0.3749, pruned_loss=0.1564, over 4912.00 frames. ], tot_loss[loss=0.3613, simple_loss=0.3763, pruned_loss=0.1732, over 951059.12 frames. 
], batch size: 36, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 22:51:12,344 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4403, 1.2553, 0.9349, 1.1145, 1.2701, 1.0450, 1.1404, 2.0358], device='cuda:2'), covar=tensor([4.5149, 4.9289, 4.0578, 6.7994, 3.7033, 2.8228, 4.9850, 1.1961], device='cuda:2'), in_proj_covar=tensor([0.0213, 0.0204, 0.0188, 0.0239, 0.0200, 0.0173, 0.0204, 0.0153], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-25 22:51:36,700 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1513, 1.1683, 1.2729, 1.0031, 1.0219, 1.3080, 1.1429, 1.4323], device='cuda:2'), covar=tensor([0.1953, 0.2250, 0.1657, 0.1617, 0.1533, 0.1516, 0.3018, 0.1265], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0198, 0.0195, 0.0185, 0.0167, 0.0209, 0.0205, 0.0185], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 22:51:46,312 INFO [finetune.py:976] (2/7) Epoch 1, batch 4100, loss[loss=0.3407, simple_loss=0.3512, pruned_loss=0.1651, over 4731.00 frames. ], tot_loss[loss=0.3628, simple_loss=0.379, pruned_loss=0.1733, over 951155.08 frames. ], batch size: 23, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 22:52:00,028 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.481e+02 2.010e+02 2.495e+02 2.957e+02 5.246e+02, threshold=4.990e+02, percent-clipped=1.0 2023-03-25 22:52:15,085 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.93 vs. limit=5.0 2023-03-25 22:52:34,822 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=4141.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 22:52:43,041 INFO [finetune.py:976] (2/7) Epoch 1, batch 4150, loss[loss=0.3637, simple_loss=0.3911, pruned_loss=0.1681, over 4903.00 frames. ], tot_loss[loss=0.3627, simple_loss=0.3799, pruned_loss=0.1728, over 952807.07 frames. ], batch size: 37, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 22:53:43,097 INFO [finetune.py:976] (2/7) Epoch 1, batch 4200, loss[loss=0.34, simple_loss=0.3661, pruned_loss=0.1569, over 4800.00 frames. ], tot_loss[loss=0.3588, simple_loss=0.3776, pruned_loss=0.17, over 954152.73 frames. ], batch size: 25, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 22:53:43,841 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=4202.0, num_to_drop=1, layers_to_drop={1} 2023-03-25 22:53:43,868 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=4202.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 22:53:52,402 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0 2023-03-25 22:54:02,772 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.292e+02 2.089e+02 2.432e+02 2.936e+02 5.530e+02, threshold=4.864e+02, percent-clipped=1.0 2023-03-25 22:54:22,222 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. 
limit=2.0 2023-03-25 22:54:33,432 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4689, 1.5004, 1.5859, 0.8754, 1.4172, 1.8595, 1.6750, 1.4477], device='cuda:2'), covar=tensor([0.0846, 0.0569, 0.0493, 0.0621, 0.0389, 0.0310, 0.0293, 0.0479], device='cuda:2'), in_proj_covar=tensor([0.0118, 0.0141, 0.0110, 0.0120, 0.0120, 0.0111, 0.0135, 0.0140], device='cuda:2'), out_proj_covar=tensor([8.9107e-05, 1.0521e-04, 8.1024e-05, 8.7674e-05, 8.6756e-05, 8.2067e-05, 1.0075e-04, 1.0405e-04], device='cuda:2') 2023-03-25 22:54:44,474 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=4250.0, num_to_drop=1, layers_to_drop={0} 2023-03-25 22:54:45,011 INFO [finetune.py:976] (2/7) Epoch 1, batch 4250, loss[loss=0.4123, simple_loss=0.4082, pruned_loss=0.2082, over 4856.00 frames. ], tot_loss[loss=0.3517, simple_loss=0.3713, pruned_loss=0.166, over 955250.79 frames. ], batch size: 31, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 22:54:45,168 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.0572, 0.4892, 0.9456, 0.6908, 0.7461, 0.7347, 0.6557, 0.8531], device='cuda:2'), covar=tensor([4.2131, 9.2201, 5.7638, 7.4346, 8.1355, 5.1073, 9.6445, 5.3411], device='cuda:2'), in_proj_covar=tensor([0.0201, 0.0227, 0.0213, 0.0241, 0.0225, 0.0196, 0.0251, 0.0191], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-25 22:55:09,508 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6296, 1.6935, 1.5961, 1.6689, 0.9334, 2.8891, 1.0068, 1.6229], device='cuda:2'), covar=tensor([0.3483, 0.2405, 0.2075, 0.2246, 0.2283, 0.0301, 0.2846, 0.1537], device='cuda:2'), in_proj_covar=tensor([0.0120, 0.0103, 0.0110, 0.0109, 0.0102, 0.0088, 0.0089, 0.0087], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0003, 0.0004, 0.0004], device='cuda:2') 2023-03-25 22:55:09,579 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.19 vs. limit=5.0 2023-03-25 22:55:23,146 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=4282.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 22:55:27,305 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=4288.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 22:55:35,148 INFO [finetune.py:976] (2/7) Epoch 1, batch 4300, loss[loss=0.3404, simple_loss=0.356, pruned_loss=0.1623, over 4823.00 frames. ], tot_loss[loss=0.3453, simple_loss=0.3658, pruned_loss=0.1624, over 955617.66 frames. 
], batch size: 38, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 22:55:45,449 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.425e+02 1.950e+02 2.267e+02 2.860e+02 4.056e+02, threshold=4.534e+02, percent-clipped=0.0 2023-03-25 22:56:05,410 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.0465, 1.4570, 0.9183, 1.2115, 1.6114, 2.5018, 1.3057, 1.6059], device='cuda:2'), covar=tensor([0.1217, 0.1787, 0.1261, 0.1169, 0.1533, 0.0381, 0.1566, 0.1730], device='cuda:2'), in_proj_covar=tensor([0.0073, 0.0076, 0.0072, 0.0075, 0.0088, 0.0076, 0.0082, 0.0075], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-25 22:56:13,429 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=4332.0, num_to_drop=1, layers_to_drop={2} 2023-03-25 22:56:35,754 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=4349.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 22:56:36,849 INFO [finetune.py:976] (2/7) Epoch 1, batch 4350, loss[loss=0.3629, simple_loss=0.3719, pruned_loss=0.177, over 4909.00 frames. ], tot_loss[loss=0.3398, simple_loss=0.3607, pruned_loss=0.1595, over 955500.81 frames. ], batch size: 36, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 22:57:17,018 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=4380.0, num_to_drop=1, layers_to_drop={1} 2023-03-25 22:57:44,938 INFO [finetune.py:976] (2/7) Epoch 1, batch 4400, loss[loss=0.32, simple_loss=0.3321, pruned_loss=0.154, over 3988.00 frames. ], tot_loss[loss=0.3406, simple_loss=0.3618, pruned_loss=0.1597, over 955807.07 frames. ], batch size: 17, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 22:57:57,029 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.139e+02 1.978e+02 2.430e+02 2.895e+02 4.966e+02, threshold=4.860e+02, percent-clipped=1.0 2023-03-25 22:58:28,111 INFO [finetune.py:976] (2/7) Epoch 1, batch 4450, loss[loss=0.33, simple_loss=0.3556, pruned_loss=0.1522, over 4836.00 frames. ], tot_loss[loss=0.3444, simple_loss=0.3664, pruned_loss=0.1612, over 957489.01 frames. ], batch size: 30, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 22:59:09,846 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=4497.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 22:59:12,229 INFO [finetune.py:976] (2/7) Epoch 1, batch 4500, loss[loss=0.3994, simple_loss=0.4058, pruned_loss=0.1965, over 4913.00 frames. ], tot_loss[loss=0.3455, simple_loss=0.3685, pruned_loss=0.1613, over 957215.97 frames. ], batch size: 36, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 22:59:29,320 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.227e+02 2.111e+02 2.516e+02 2.889e+02 5.762e+02, threshold=5.032e+02, percent-clipped=1.0 2023-03-25 23:00:00,673 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0 2023-03-25 23:00:15,077 INFO [finetune.py:976] (2/7) Epoch 1, batch 4550, loss[loss=0.2592, simple_loss=0.2805, pruned_loss=0.119, over 4199.00 frames. ], tot_loss[loss=0.3472, simple_loss=0.3706, pruned_loss=0.1619, over 957354.14 frames. 
], batch size: 18, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:00:15,166 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.2964, 2.9045, 3.0090, 3.2140, 3.0201, 2.9086, 3.3469, 1.0711], device='cuda:2'), covar=tensor([0.1000, 0.0937, 0.0894, 0.1121, 0.1637, 0.1515, 0.1075, 0.4647], device='cuda:2'), in_proj_covar=tensor([0.0371, 0.0245, 0.0268, 0.0296, 0.0349, 0.0291, 0.0312, 0.0303], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:00:46,094 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1127, 1.4436, 0.9489, 1.2823, 1.4549, 2.4745, 1.2249, 1.5507], device='cuda:2'), covar=tensor([0.1280, 0.1835, 0.1330, 0.1187, 0.1792, 0.0413, 0.1730, 0.1998], device='cuda:2'), in_proj_covar=tensor([0.0073, 0.0076, 0.0072, 0.0076, 0.0088, 0.0076, 0.0082, 0.0075], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-25 23:00:55,783 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=4582.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:01:25,592 INFO [finetune.py:976] (2/7) Epoch 1, batch 4600, loss[loss=0.2818, simple_loss=0.3161, pruned_loss=0.1238, over 4767.00 frames. ], tot_loss[loss=0.3456, simple_loss=0.3692, pruned_loss=0.1611, over 956416.68 frames. ], batch size: 28, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:01:38,110 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.209e+02 2.082e+02 2.456e+02 3.064e+02 5.977e+02, threshold=4.911e+02, percent-clipped=1.0 2023-03-25 23:01:59,353 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=4630.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:02:18,011 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=4644.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:02:28,981 INFO [finetune.py:976] (2/7) Epoch 1, batch 4650, loss[loss=0.2812, simple_loss=0.3104, pruned_loss=0.126, over 4728.00 frames. ], tot_loss[loss=0.3395, simple_loss=0.3635, pruned_loss=0.1578, over 956678.47 frames. ], batch size: 54, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:03:06,338 INFO [finetune.py:976] (2/7) Epoch 1, batch 4700, loss[loss=0.2666, simple_loss=0.3106, pruned_loss=0.1113, over 4932.00 frames. ], tot_loss[loss=0.3329, simple_loss=0.3578, pruned_loss=0.154, over 958506.85 frames. ], batch size: 38, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:03:20,653 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.265e+02 1.870e+02 2.224e+02 2.796e+02 5.273e+02, threshold=4.448e+02, percent-clipped=2.0 2023-03-25 23:03:59,442 INFO [finetune.py:976] (2/7) Epoch 1, batch 4750, loss[loss=0.3336, simple_loss=0.3432, pruned_loss=0.162, over 4752.00 frames. ], tot_loss[loss=0.327, simple_loss=0.353, pruned_loss=0.1505, over 958755.31 frames. ], batch size: 23, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:04:36,396 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=4797.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:04:39,212 INFO [finetune.py:976] (2/7) Epoch 1, batch 4800, loss[loss=0.386, simple_loss=0.3814, pruned_loss=0.1954, over 4740.00 frames. ], tot_loss[loss=0.33, simple_loss=0.3558, pruned_loss=0.1521, over 956174.46 frames. 
], batch size: 23, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:04:56,835 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.406e+02 2.096e+02 2.556e+02 3.186e+02 5.883e+02, threshold=5.111e+02, percent-clipped=4.0 2023-03-25 23:04:59,503 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. limit=2.0 2023-03-25 23:05:00,027 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5855, 1.2675, 1.0410, 0.2197, 1.1976, 1.4247, 1.2246, 1.4558], device='cuda:2'), covar=tensor([0.0777, 0.0954, 0.1508, 0.2368, 0.1416, 0.2469, 0.2512, 0.0941], device='cuda:2'), in_proj_covar=tensor([0.0157, 0.0174, 0.0188, 0.0173, 0.0194, 0.0193, 0.0198, 0.0185], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:05:32,155 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=4845.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:05:41,982 INFO [finetune.py:976] (2/7) Epoch 1, batch 4850, loss[loss=0.329, simple_loss=0.373, pruned_loss=0.1425, over 4813.00 frames. ], tot_loss[loss=0.3345, simple_loss=0.3606, pruned_loss=0.1542, over 955206.63 frames. ], batch size: 38, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:06:28,850 INFO [finetune.py:976] (2/7) Epoch 1, batch 4900, loss[loss=0.3172, simple_loss=0.3542, pruned_loss=0.1401, over 4770.00 frames. ], tot_loss[loss=0.3353, simple_loss=0.3613, pruned_loss=0.1547, over 953330.17 frames. ], batch size: 27, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:06:45,500 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.521e+02 2.056e+02 2.408e+02 2.893e+02 5.886e+02, threshold=4.817e+02, percent-clipped=2.0 2023-03-25 23:07:15,536 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=4944.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:07:16,810 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5252, 1.3071, 1.4146, 1.5241, 2.0531, 1.4504, 1.1046, 1.1480], device='cuda:2'), covar=tensor([0.3138, 0.3163, 0.2479, 0.2395, 0.2685, 0.1823, 0.3886, 0.2414], device='cuda:2'), in_proj_covar=tensor([0.0209, 0.0194, 0.0180, 0.0167, 0.0215, 0.0165, 0.0193, 0.0170], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:07:19,810 INFO [finetune.py:976] (2/7) Epoch 1, batch 4950, loss[loss=0.3143, simple_loss=0.3544, pruned_loss=0.1371, over 4811.00 frames. ], tot_loss[loss=0.335, simple_loss=0.3621, pruned_loss=0.154, over 953959.63 frames. 
], batch size: 40, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:07:25,280 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.0737, 0.6942, 0.9403, 0.8004, 0.7034, 0.7732, 0.8035, 0.9541], device='cuda:2'), covar=tensor([4.5103, 8.7792, 5.4667, 6.9748, 7.7638, 5.1764, 9.6711, 4.8435], device='cuda:2'), in_proj_covar=tensor([0.0206, 0.0234, 0.0219, 0.0248, 0.0230, 0.0202, 0.0257, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-25 23:07:37,636 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6641, 1.1662, 0.8525, 1.5006, 1.9888, 1.0360, 1.3206, 1.6240], device='cuda:2'), covar=tensor([0.1779, 0.2163, 0.2189, 0.1305, 0.2221, 0.2175, 0.1407, 0.1990], device='cuda:2'), in_proj_covar=tensor([0.0092, 0.0095, 0.0114, 0.0091, 0.0123, 0.0094, 0.0097, 0.0092], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-25 23:08:11,782 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=4992.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:08:22,746 INFO [finetune.py:976] (2/7) Epoch 1, batch 5000, loss[loss=0.3112, simple_loss=0.3222, pruned_loss=0.1501, over 4222.00 frames. ], tot_loss[loss=0.3307, simple_loss=0.3584, pruned_loss=0.1514, over 953084.04 frames. ], batch size: 18, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:08:32,726 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.012e+02 2.131e+02 2.461e+02 3.038e+02 5.796e+02, threshold=4.923e+02, percent-clipped=4.0 2023-03-25 23:08:49,796 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3164, 1.5708, 0.7541, 2.0653, 2.5766, 1.8877, 1.7558, 2.3463], device='cuda:2'), covar=tensor([0.1453, 0.1993, 0.2433, 0.1105, 0.1925, 0.1878, 0.1330, 0.1676], device='cuda:2'), in_proj_covar=tensor([0.0092, 0.0095, 0.0114, 0.0091, 0.0123, 0.0094, 0.0097, 0.0092], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-25 23:09:12,433 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6516, 3.7280, 3.7592, 1.8105, 3.9817, 2.9228, 0.8844, 2.8586], device='cuda:2'), covar=tensor([0.2334, 0.1316, 0.1343, 0.3178, 0.0778, 0.0862, 0.4051, 0.1211], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0157, 0.0161, 0.0125, 0.0151, 0.0115, 0.0143, 0.0117], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-25 23:09:14,377 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.40 vs. limit=5.0 2023-03-25 23:09:20,249 INFO [finetune.py:976] (2/7) Epoch 1, batch 5050, loss[loss=0.3167, simple_loss=0.3458, pruned_loss=0.1438, over 4774.00 frames. ], tot_loss[loss=0.3259, simple_loss=0.3539, pruned_loss=0.1489, over 954133.30 frames. 
], batch size: 28, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:09:33,951 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6045, 1.6105, 1.1922, 1.2728, 1.7516, 2.0849, 1.6923, 1.4746], device='cuda:2'), covar=tensor([0.0360, 0.0444, 0.0612, 0.0529, 0.0448, 0.0273, 0.0406, 0.0450], device='cuda:2'), in_proj_covar=tensor([0.0083, 0.0111, 0.0130, 0.0110, 0.0103, 0.0098, 0.0087, 0.0109], device='cuda:2'), out_proj_covar=tensor([6.4887e-05, 8.7921e-05, 1.0569e-04, 8.7401e-05, 8.1700e-05, 7.3828e-05, 6.7062e-05, 8.5568e-05], device='cuda:2') 2023-03-25 23:10:01,165 INFO [finetune.py:976] (2/7) Epoch 1, batch 5100, loss[loss=0.3253, simple_loss=0.3289, pruned_loss=0.1609, over 3972.00 frames. ], tot_loss[loss=0.3198, simple_loss=0.3483, pruned_loss=0.1456, over 953759.47 frames. ], batch size: 17, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:10:01,835 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.4877, 3.9418, 4.0269, 4.3308, 4.2074, 4.0074, 4.5775, 1.3758], device='cuda:2'), covar=tensor([0.0664, 0.0729, 0.0845, 0.0899, 0.1062, 0.1251, 0.0679, 0.5111], device='cuda:2'), in_proj_covar=tensor([0.0369, 0.0245, 0.0268, 0.0295, 0.0347, 0.0290, 0.0313, 0.0302], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:10:09,454 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.918e+02 2.379e+02 2.966e+02 8.444e+02, threshold=4.758e+02, percent-clipped=2.0 2023-03-25 23:10:13,608 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7087, 1.8411, 1.6963, 1.2271, 2.0272, 1.8188, 1.7268, 1.6305], device='cuda:2'), covar=tensor([0.0824, 0.0658, 0.0922, 0.1080, 0.0467, 0.0843, 0.0902, 0.1109], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0130, 0.0134, 0.0123, 0.0106, 0.0133, 0.0139, 0.0159], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:10:28,430 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.91 vs. limit=5.0 2023-03-25 23:10:34,835 INFO [finetune.py:976] (2/7) Epoch 1, batch 5150, loss[loss=0.2868, simple_loss=0.3358, pruned_loss=0.1189, over 4917.00 frames. ], tot_loss[loss=0.3207, simple_loss=0.3485, pruned_loss=0.1465, over 952166.94 frames. ], batch size: 38, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:11:02,385 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.67 vs. limit=2.0 2023-03-25 23:11:14,840 INFO [finetune.py:976] (2/7) Epoch 1, batch 5200, loss[loss=0.3705, simple_loss=0.3951, pruned_loss=0.1729, over 4823.00 frames. ], tot_loss[loss=0.3233, simple_loss=0.3523, pruned_loss=0.1471, over 953620.36 frames. 
], batch size: 39, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:11:24,773 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.601e+02 2.261e+02 2.570e+02 3.078e+02 5.221e+02, threshold=5.140e+02, percent-clipped=2.0 2023-03-25 23:11:56,306 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3553, 1.1804, 0.9500, 1.0077, 1.0694, 0.9948, 1.0504, 1.9981], device='cuda:2'), covar=tensor([4.3441, 4.1572, 3.7308, 5.7061, 3.3742, 2.7907, 4.6201, 1.1850], device='cuda:2'), in_proj_covar=tensor([0.0220, 0.0209, 0.0193, 0.0244, 0.0204, 0.0175, 0.0209, 0.0156], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-25 23:12:06,968 INFO [finetune.py:976] (2/7) Epoch 1, batch 5250, loss[loss=0.3011, simple_loss=0.3425, pruned_loss=0.1299, over 4763.00 frames. ], tot_loss[loss=0.3234, simple_loss=0.3536, pruned_loss=0.1466, over 953715.98 frames. ], batch size: 28, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:12:45,500 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7017, 1.3824, 1.8639, 1.3069, 1.5284, 1.7097, 1.4055, 1.8845], device='cuda:2'), covar=tensor([0.1289, 0.1974, 0.1125, 0.1499, 0.0924, 0.1259, 0.2463, 0.0851], device='cuda:2'), in_proj_covar=tensor([0.0199, 0.0201, 0.0199, 0.0190, 0.0171, 0.0216, 0.0208, 0.0191], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:12:48,476 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8932, 1.2481, 0.8720, 1.6928, 2.1823, 1.6535, 1.3634, 1.8879], device='cuda:2'), covar=tensor([0.2295, 0.2970, 0.2981, 0.1685, 0.2610, 0.2715, 0.2002, 0.2686], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0096, 0.0116, 0.0092, 0.0124, 0.0095, 0.0098, 0.0093], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-25 23:12:55,129 INFO [finetune.py:976] (2/7) Epoch 1, batch 5300, loss[loss=0.3221, simple_loss=0.3576, pruned_loss=0.1433, over 4887.00 frames. ], tot_loss[loss=0.3246, simple_loss=0.3555, pruned_loss=0.1468, over 955569.16 frames. ], batch size: 35, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:13:08,327 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.321e+02 2.032e+02 2.465e+02 2.907e+02 4.480e+02, threshold=4.930e+02, percent-clipped=0.0 2023-03-25 23:13:26,822 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7929, 1.2349, 0.8358, 1.6650, 2.0360, 1.4211, 1.4268, 1.8773], device='cuda:2'), covar=tensor([0.1602, 0.2046, 0.2383, 0.1232, 0.2149, 0.2017, 0.1369, 0.1817], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0096, 0.0115, 0.0092, 0.0123, 0.0095, 0.0098, 0.0093], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-25 23:13:29,252 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=2.02 vs. limit=2.0 2023-03-25 23:13:49,133 INFO [finetune.py:976] (2/7) Epoch 1, batch 5350, loss[loss=0.2577, simple_loss=0.3018, pruned_loss=0.1068, over 4794.00 frames. ], tot_loss[loss=0.3222, simple_loss=0.3539, pruned_loss=0.1452, over 954545.38 frames. ], batch size: 25, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:14:48,110 INFO [finetune.py:976] (2/7) Epoch 1, batch 5400, loss[loss=0.3009, simple_loss=0.3392, pruned_loss=0.1313, over 4822.00 frames. ], tot_loss[loss=0.3205, simple_loss=0.3517, pruned_loss=0.1447, over 954140.98 frames. 
], batch size: 40, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:14:55,971 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.226e+02 1.939e+02 2.339e+02 2.729e+02 4.650e+02, threshold=4.678e+02, percent-clipped=0.0 2023-03-25 23:15:29,125 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7150, 3.6315, 3.5696, 1.6239, 3.8591, 2.7985, 1.0990, 2.6300], device='cuda:2'), covar=tensor([0.2007, 0.1410, 0.1632, 0.3422, 0.0815, 0.0931, 0.4009, 0.1349], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0157, 0.0161, 0.0125, 0.0151, 0.0115, 0.0142, 0.0117], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-25 23:15:39,347 INFO [finetune.py:976] (2/7) Epoch 1, batch 5450, loss[loss=0.279, simple_loss=0.3194, pruned_loss=0.1193, over 4825.00 frames. ], tot_loss[loss=0.3142, simple_loss=0.3461, pruned_loss=0.1412, over 954739.44 frames. ], batch size: 40, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:16:31,590 INFO [finetune.py:976] (2/7) Epoch 1, batch 5500, loss[loss=0.3434, simple_loss=0.3692, pruned_loss=0.1588, over 4905.00 frames. ], tot_loss[loss=0.3119, simple_loss=0.3432, pruned_loss=0.1403, over 955046.35 frames. ], batch size: 37, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:16:45,961 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.302e+02 2.026e+02 2.277e+02 2.875e+02 1.009e+03, threshold=4.553e+02, percent-clipped=5.0 2023-03-25 23:17:08,507 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.5555, 1.6053, 1.8178, 1.0463, 1.5886, 1.9010, 1.8233, 1.6041], device='cuda:2'), covar=tensor([0.1002, 0.0611, 0.0385, 0.0782, 0.0428, 0.0437, 0.0293, 0.0536], device='cuda:2'), in_proj_covar=tensor([0.0121, 0.0145, 0.0112, 0.0122, 0.0123, 0.0111, 0.0137, 0.0139], device='cuda:2'), out_proj_covar=tensor([9.1230e-05, 1.0800e-04, 8.1695e-05, 8.9891e-05, 8.8925e-05, 8.2315e-05, 1.0272e-04, 1.0351e-04], device='cuda:2') 2023-03-25 23:17:20,560 INFO [finetune.py:976] (2/7) Epoch 1, batch 5550, loss[loss=0.3705, simple_loss=0.3986, pruned_loss=0.1712, over 4822.00 frames. ], tot_loss[loss=0.3138, simple_loss=0.3448, pruned_loss=0.1414, over 956104.27 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:17:58,716 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4503, 1.2978, 1.3899, 1.5280, 1.9333, 1.4258, 1.0735, 1.1854], device='cuda:2'), covar=tensor([0.3017, 0.2880, 0.2315, 0.2130, 0.2523, 0.1732, 0.3902, 0.2302], device='cuda:2'), in_proj_covar=tensor([0.0213, 0.0198, 0.0183, 0.0170, 0.0219, 0.0167, 0.0198, 0.0173], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:18:01,090 INFO [finetune.py:976] (2/7) Epoch 1, batch 5600, loss[loss=0.3846, simple_loss=0.4086, pruned_loss=0.1803, over 4172.00 frames. ], tot_loss[loss=0.3183, simple_loss=0.3503, pruned_loss=0.1432, over 955863.11 frames. ], batch size: 65, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:18:10,653 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0 2023-03-25 23:18:19,452 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.468e+02 1.839e+02 2.287e+02 2.793e+02 4.099e+02, threshold=4.573e+02, percent-clipped=0.0 2023-03-25 23:18:51,733 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. 
limit=2.0 2023-03-25 23:18:59,387 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=5650.0, num_to_drop=1, layers_to_drop={0} 2023-03-25 23:18:59,877 INFO [finetune.py:976] (2/7) Epoch 1, batch 5650, loss[loss=0.3098, simple_loss=0.3494, pruned_loss=0.1351, over 4898.00 frames. ], tot_loss[loss=0.3216, simple_loss=0.3539, pruned_loss=0.1446, over 956731.57 frames. ], batch size: 35, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:19:00,516 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2459, 1.4193, 1.1789, 1.5454, 1.4302, 2.7873, 1.1816, 1.4835], device='cuda:2'), covar=tensor([0.1137, 0.1817, 0.1442, 0.1188, 0.1741, 0.0336, 0.1631, 0.1861], device='cuda:2'), in_proj_covar=tensor([0.0074, 0.0077, 0.0074, 0.0076, 0.0089, 0.0078, 0.0082, 0.0076], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-25 23:19:08,969 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-25 23:19:33,910 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7448, 1.8320, 1.5117, 1.7647, 1.9415, 1.5245, 2.5151, 1.7902], device='cuda:2'), covar=tensor([0.1800, 0.2574, 0.3495, 0.2982, 0.2309, 0.1848, 0.1750, 0.2320], device='cuda:2'), in_proj_covar=tensor([0.0157, 0.0185, 0.0226, 0.0238, 0.0200, 0.0172, 0.0186, 0.0180], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:19:35,523 INFO [finetune.py:976] (2/7) Epoch 1, batch 5700, loss[loss=0.3132, simple_loss=0.3239, pruned_loss=0.1512, over 4312.00 frames. ], tot_loss[loss=0.3159, simple_loss=0.3469, pruned_loss=0.1424, over 936349.11 frames. ], batch size: 18, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:19:35,606 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9785, 2.2952, 2.3936, 2.4441, 2.3974, 4.6803, 1.8263, 2.5569], device='cuda:2'), covar=tensor([0.1024, 0.1460, 0.1035, 0.1031, 0.1399, 0.0176, 0.1460, 0.1514], device='cuda:2'), in_proj_covar=tensor([0.0074, 0.0077, 0.0074, 0.0077, 0.0090, 0.0078, 0.0083, 0.0076], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-25 23:19:41,533 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=5711.0, num_to_drop=1, layers_to_drop={1} 2023-03-25 23:19:43,179 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.079e+02 1.818e+02 2.245e+02 2.685e+02 4.321e+02, threshold=4.489e+02, percent-clipped=0.0 2023-03-25 23:19:45,773 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.65 vs. limit=2.0 2023-03-25 23:20:08,411 INFO [finetune.py:976] (2/7) Epoch 2, batch 0, loss[loss=0.2719, simple_loss=0.3104, pruned_loss=0.1167, over 4807.00 frames. ], tot_loss[loss=0.2719, simple_loss=0.3104, pruned_loss=0.1167, over 4807.00 frames. ], batch size: 25, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:20:08,411 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-25 23:20:25,000 INFO [finetune.py:1010] (2/7) Epoch 2, validation: loss=0.2224, simple_loss=0.2847, pruned_loss=0.08, over 2265189.00 frames. 
2023-03-25 23:20:25,001 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 5977MB 2023-03-25 23:20:56,836 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=5755.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:21:21,809 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-25 23:21:22,832 INFO [finetune.py:976] (2/7) Epoch 2, batch 50, loss[loss=0.3157, simple_loss=0.3662, pruned_loss=0.1327, over 4798.00 frames. ], tot_loss[loss=0.3185, simple_loss=0.353, pruned_loss=0.142, over 217226.50 frames. ], batch size: 39, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:21:54,356 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.238e+02 1.870e+02 2.317e+02 2.912e+02 7.564e+02, threshold=4.633e+02, percent-clipped=3.0 2023-03-25 23:21:55,684 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=5816.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:22:11,074 INFO [finetune.py:976] (2/7) Epoch 2, batch 100, loss[loss=0.28, simple_loss=0.325, pruned_loss=0.1175, over 4827.00 frames. ], tot_loss[loss=0.3094, simple_loss=0.3432, pruned_loss=0.1378, over 381988.38 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:22:23,817 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8057, 1.1605, 1.0058, 1.5376, 2.1709, 1.0420, 1.3505, 1.6732], device='cuda:2'), covar=tensor([0.1745, 0.2377, 0.2218, 0.1455, 0.2060, 0.2154, 0.1619, 0.2104], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0096, 0.0115, 0.0092, 0.0124, 0.0095, 0.0098, 0.0093], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-25 23:22:36,192 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=5858.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:22:49,647 INFO [finetune.py:976] (2/7) Epoch 2, batch 150, loss[loss=0.3164, simple_loss=0.3445, pruned_loss=0.1442, over 4817.00 frames. ], tot_loss[loss=0.3008, simple_loss=0.3354, pruned_loss=0.1332, over 509567.82 frames. 
], batch size: 41, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:23:17,718 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6898, 1.9134, 1.5603, 1.9203, 1.1539, 4.3331, 1.5132, 2.1254], device='cuda:2'), covar=tensor([0.3656, 0.2412, 0.2111, 0.2111, 0.2004, 0.0114, 0.2887, 0.1595], device='cuda:2'), in_proj_covar=tensor([0.0124, 0.0106, 0.0112, 0.0113, 0.0107, 0.0091, 0.0094, 0.0091], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0003, 0.0004, 0.0004], device='cuda:2') 2023-03-25 23:23:18,195 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.293e+02 1.883e+02 2.329e+02 2.858e+02 5.160e+02, threshold=4.657e+02, percent-clipped=2.0 2023-03-25 23:23:21,831 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=5919.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:23:25,363 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1688, 1.7469, 2.5903, 1.6678, 2.1864, 2.1137, 1.7483, 2.3586], device='cuda:2'), covar=tensor([0.1206, 0.1587, 0.1294, 0.2001, 0.0800, 0.1408, 0.2001, 0.0840], device='cuda:2'), in_proj_covar=tensor([0.0198, 0.0201, 0.0199, 0.0190, 0.0172, 0.0217, 0.0208, 0.0192], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:23:28,109 INFO [finetune.py:976] (2/7) Epoch 2, batch 200, loss[loss=0.3525, simple_loss=0.375, pruned_loss=0.165, over 4710.00 frames. ], tot_loss[loss=0.2977, simple_loss=0.3323, pruned_loss=0.1316, over 608202.64 frames. ], batch size: 59, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:24:01,222 INFO [finetune.py:976] (2/7) Epoch 2, batch 250, loss[loss=0.2889, simple_loss=0.3501, pruned_loss=0.1138, over 4822.00 frames. ], tot_loss[loss=0.3028, simple_loss=0.3369, pruned_loss=0.1344, over 686227.83 frames. ], batch size: 40, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:24:20,397 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=5990.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:24:42,067 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6006.0, num_to_drop=1, layers_to_drop={2} 2023-03-25 23:24:48,587 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.078e+02 1.968e+02 2.365e+02 2.842e+02 7.361e+02, threshold=4.731e+02, percent-clipped=2.0 2023-03-25 23:25:01,945 INFO [finetune.py:976] (2/7) Epoch 2, batch 300, loss[loss=0.3342, simple_loss=0.3676, pruned_loss=0.1503, over 4893.00 frames. ], tot_loss[loss=0.3076, simple_loss=0.3414, pruned_loss=0.1369, over 745981.47 frames. 
], batch size: 43, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:25:19,401 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6194, 1.5367, 1.4271, 1.7828, 1.0560, 3.5606, 1.2166, 1.8659], device='cuda:2'), covar=tensor([0.3535, 0.2548, 0.2353, 0.2158, 0.2083, 0.0198, 0.3087, 0.1671], device='cuda:2'), in_proj_covar=tensor([0.0124, 0.0107, 0.0113, 0.0113, 0.0108, 0.0092, 0.0094, 0.0091], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0003, 0.0004, 0.0004], device='cuda:2') 2023-03-25 23:25:29,067 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6043.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:25:33,846 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6051.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:25:42,562 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5961, 1.3373, 1.5039, 1.5778, 2.0076, 1.4918, 1.1924, 1.2709], device='cuda:2'), covar=tensor([0.3107, 0.3042, 0.2412, 0.2444, 0.2676, 0.1846, 0.3827, 0.2507], device='cuda:2'), in_proj_covar=tensor([0.0215, 0.0200, 0.0185, 0.0171, 0.0220, 0.0168, 0.0200, 0.0174], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:25:51,354 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4868, 1.2965, 1.3792, 1.4402, 1.9319, 1.4284, 1.0197, 1.2132], device='cuda:2'), covar=tensor([0.2927, 0.2901, 0.2334, 0.2245, 0.2558, 0.1745, 0.3897, 0.2296], device='cuda:2'), in_proj_covar=tensor([0.0215, 0.0199, 0.0185, 0.0171, 0.0220, 0.0168, 0.0200, 0.0174], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:26:11,334 INFO [finetune.py:976] (2/7) Epoch 2, batch 350, loss[loss=0.2677, simple_loss=0.3049, pruned_loss=0.1153, over 4203.00 frames. ], tot_loss[loss=0.3099, simple_loss=0.344, pruned_loss=0.1379, over 789764.40 frames. ], batch size: 17, lr: 4.00e-03, grad_scale: 32.0 2023-03-25 23:26:25,148 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.42 vs. limit=2.0 2023-03-25 23:26:35,011 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6104.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:26:39,147 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6111.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:26:41,481 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.275e+02 2.097e+02 2.536e+02 2.955e+02 5.135e+02, threshold=5.071e+02, percent-clipped=1.0 2023-03-25 23:26:59,638 INFO [finetune.py:976] (2/7) Epoch 2, batch 400, loss[loss=0.3027, simple_loss=0.3491, pruned_loss=0.1281, over 4919.00 frames. ], tot_loss[loss=0.3104, simple_loss=0.3459, pruned_loss=0.1375, over 825265.91 frames. 
], batch size: 38, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:27:09,949 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6135.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:27:24,728 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0450, 2.0381, 1.8429, 1.3776, 2.3346, 2.1614, 1.9962, 1.8402], device='cuda:2'), covar=tensor([0.0821, 0.0729, 0.1047, 0.1228, 0.0402, 0.0901, 0.0909, 0.1201], device='cuda:2'), in_proj_covar=tensor([0.0135, 0.0133, 0.0138, 0.0127, 0.0108, 0.0137, 0.0143, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:27:49,727 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6170.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:27:59,427 INFO [finetune.py:976] (2/7) Epoch 2, batch 450, loss[loss=0.2732, simple_loss=0.3165, pruned_loss=0.1149, over 4933.00 frames. ], tot_loss[loss=0.3084, simple_loss=0.3439, pruned_loss=0.1364, over 855263.72 frames. ], batch size: 38, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:28:14,493 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6196.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:28:25,728 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6203.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:28:33,031 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6211.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:28:34,830 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6214.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:28:35,342 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.140e+02 1.932e+02 2.244e+02 2.718e+02 3.817e+02, threshold=4.487e+02, percent-clipped=0.0 2023-03-25 23:28:45,037 INFO [finetune.py:976] (2/7) Epoch 2, batch 500, loss[loss=0.2777, simple_loss=0.3099, pruned_loss=0.1227, over 4907.00 frames. ], tot_loss[loss=0.305, simple_loss=0.3403, pruned_loss=0.1348, over 879612.69 frames. ], batch size: 43, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:28:51,090 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. 
limit=2.0 2023-03-25 23:28:52,709 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6231.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:29:06,792 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1709, 1.1586, 1.3354, 0.9735, 1.0011, 1.2400, 1.1231, 1.3880], device='cuda:2'), covar=tensor([0.1887, 0.2339, 0.1613, 0.1690, 0.1481, 0.1633, 0.3114, 0.1310], device='cuda:2'), in_proj_covar=tensor([0.0200, 0.0202, 0.0200, 0.0190, 0.0172, 0.0218, 0.0209, 0.0193], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:29:25,564 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6264.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:29:26,186 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6427, 1.6108, 1.3192, 1.2766, 1.7731, 2.0266, 1.6931, 1.2114], device='cuda:2'), covar=tensor([0.0288, 0.0423, 0.0590, 0.0457, 0.0281, 0.0246, 0.0294, 0.0502], device='cuda:2'), in_proj_covar=tensor([0.0082, 0.0110, 0.0129, 0.0109, 0.0102, 0.0097, 0.0086, 0.0107], device='cuda:2'), out_proj_covar=tensor([6.4019e-05, 8.6702e-05, 1.0430e-04, 8.6739e-05, 8.1320e-05, 7.2397e-05, 6.6443e-05, 8.3990e-05], device='cuda:2') 2023-03-25 23:29:26,761 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6266.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:29:35,140 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6272.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:29:38,749 INFO [finetune.py:976] (2/7) Epoch 2, batch 550, loss[loss=0.3261, simple_loss=0.354, pruned_loss=0.1491, over 4832.00 frames. ], tot_loss[loss=0.3022, simple_loss=0.337, pruned_loss=0.1337, over 894625.06 frames. ], batch size: 30, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:30:17,697 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6306.0, num_to_drop=1, layers_to_drop={0} 2023-03-25 23:30:19,425 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2674, 2.6317, 1.8285, 1.5468, 2.8871, 2.6013, 2.3543, 2.1622], device='cuda:2'), covar=tensor([0.0800, 0.0594, 0.1048, 0.1286, 0.0417, 0.0840, 0.0864, 0.1251], device='cuda:2'), in_proj_covar=tensor([0.0135, 0.0132, 0.0138, 0.0127, 0.0108, 0.0137, 0.0143, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:30:29,435 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.346e+02 1.947e+02 2.353e+02 2.718e+02 5.175e+02, threshold=4.705e+02, percent-clipped=1.0 2023-03-25 23:30:41,989 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6327.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:30:48,004 INFO [finetune.py:976] (2/7) Epoch 2, batch 600, loss[loss=0.2884, simple_loss=0.3389, pruned_loss=0.119, over 4827.00 frames. ], tot_loss[loss=0.3026, simple_loss=0.337, pruned_loss=0.1341, over 908322.20 frames. 
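In the optim.py:369 records, the five numbers after "grad-norm quartiles" are the min, 25%, median, 75%, and max of recent gradient norms, and the reported threshold tracks Clipping_scale times the median: in the record above, 2.0 × 2.353e+02 ≈ 4.705e+02. A sketch of that statistic; the history length and the percent-clipped reporting window are assumptions:

```python
from collections import deque
import torch

# Sketch of grad-norm quartile tracking with a median-based clip threshold,
# consistent with the optim.py:369 numbers. Buffer size is a placeholder.

class GradNormClipper:
    def __init__(self, clipping_scale: float = 2.0, history: int = 128):
        self.scale = clipping_scale
        self.norms = deque(maxlen=history)
        self.clipped = 0
        self.steps = 0

    def clip_(self, params) -> None:
        grads = [p.grad for p in params if p.grad is not None]
        norm = torch.norm(torch.stack([g.norm() for g in grads])).item()
        self.norms.append(norm)
        q = torch.quantile(torch.tensor(list(self.norms)),
                           torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
        threshold = self.scale * q[2].item()  # clipping_scale * median
        self.steps += 1
        if norm > threshold:
            self.clipped += 1
            for g in grads:
                g.mul_(threshold / norm)  # rescale to the threshold

    def report(self) -> float:
        # Percent of steps clipped since the last report, as in the log.
        pct = 100.0 * self.clipped / max(self.steps, 1)
        self.clipped = self.steps = 0
        return pct
```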
], batch size: 39, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:30:49,401 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3338, 1.1573, 1.4301, 1.1047, 1.1328, 1.3707, 1.2427, 1.5608], device='cuda:2'), covar=tensor([0.1317, 0.2308, 0.1474, 0.1464, 0.1096, 0.1356, 0.2946, 0.0935], device='cuda:2'), in_proj_covar=tensor([0.0200, 0.0202, 0.0200, 0.0190, 0.0173, 0.0218, 0.0210, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:31:07,297 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6346.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:31:13,142 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=6354.0, num_to_drop=1, layers_to_drop={1} 2023-03-25 23:31:28,046 INFO [finetune.py:976] (2/7) Epoch 2, batch 650, loss[loss=0.3019, simple_loss=0.329, pruned_loss=0.1374, over 4758.00 frames. ], tot_loss[loss=0.307, simple_loss=0.3417, pruned_loss=0.1361, over 919542.28 frames. ], batch size: 26, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:31:42,388 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6399.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:31:51,309 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6411.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:31:53,613 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.435e+02 2.011e+02 2.373e+02 2.999e+02 4.783e+02, threshold=4.746e+02, percent-clipped=1.0 2023-03-25 23:31:59,209 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.0876, 1.2526, 0.8863, 1.4143, 1.2922, 2.4886, 1.0484, 1.3150], device='cuda:2'), covar=tensor([0.1375, 0.2283, 0.1486, 0.1209, 0.2107, 0.0482, 0.2147, 0.2398], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0078, 0.0075, 0.0078, 0.0090, 0.0079, 0.0083, 0.0077], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-25 23:32:01,499 INFO [finetune.py:976] (2/7) Epoch 2, batch 700, loss[loss=0.2592, simple_loss=0.3028, pruned_loss=0.1078, over 4744.00 frames. ], tot_loss[loss=0.3075, simple_loss=0.3431, pruned_loss=0.136, over 926787.15 frames. ], batch size: 27, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:32:22,199 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=6459.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:32:22,335 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.20 vs. limit=5.0 2023-03-25 23:32:24,555 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6462.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:32:27,956 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6467.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:32:34,494 INFO [finetune.py:976] (2/7) Epoch 2, batch 750, loss[loss=0.3416, simple_loss=0.3736, pruned_loss=0.1548, over 4834.00 frames. ], tot_loss[loss=0.3075, simple_loss=0.3435, pruned_loss=0.1358, over 933855.47 frames. 
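The zipformer.py:2441 diagnostics print one entropy value per attention head (the 8-element tensors above): low entropy means a head concentrates its weight on a few positions, high entropy means it spreads out. An illustration of how such a per-head diagnostic can be computed; the input layout and the mean reduction are assumptions:

```python
import torch

# Per-head attention-weight entropy, one scalar per head, as suggested by
# the attn_weights_entropy tensors above. Layout is an assumption.

def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
    # attn: (num_heads, batch, tgt_len, src_len), each row summing to 1.
    eps = 1.0e-20
    row_entropy = -(attn * (attn + eps).log()).sum(dim=-1)
    return row_entropy.mean(dim=(1, 2))  # average over batch and positions
```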
], batch size: 30, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:32:42,487 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6491.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:32:58,324 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6514.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:32:58,813 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.464e+02 1.998e+02 2.267e+02 2.688e+02 5.596e+02, threshold=4.534e+02, percent-clipped=2.0 2023-03-25 23:33:04,283 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6523.0, num_to_drop=1, layers_to_drop={1} 2023-03-25 23:33:06,038 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6526.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:33:07,176 INFO [finetune.py:976] (2/7) Epoch 2, batch 800, loss[loss=0.3573, simple_loss=0.3799, pruned_loss=0.1673, over 4895.00 frames. ], tot_loss[loss=0.3065, simple_loss=0.343, pruned_loss=0.135, over 939101.11 frames. ], batch size: 35, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:33:07,369 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6528.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:33:40,447 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6559.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:33:42,336 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=6562.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:33:46,515 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6567.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:33:58,058 INFO [finetune.py:976] (2/7) Epoch 2, batch 850, loss[loss=0.2908, simple_loss=0.3236, pruned_loss=0.129, over 4901.00 frames. ], tot_loss[loss=0.3027, simple_loss=0.3396, pruned_loss=0.1329, over 943324.58 frames. ], batch size: 36, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:34:37,099 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.216e+02 1.789e+02 2.218e+02 2.697e+02 5.451e+02, threshold=4.436e+02, percent-clipped=1.0 2023-03-25 23:34:42,622 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6622.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:34:46,288 INFO [finetune.py:976] (2/7) Epoch 2, batch 900, loss[loss=0.2914, simple_loss=0.3328, pruned_loss=0.125, over 4829.00 frames. ], tot_loss[loss=0.2992, simple_loss=0.3359, pruned_loss=0.1312, over 944911.02 frames. ], batch size: 30, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:34:47,135 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6629.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:34:58,624 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=2.03 vs. 
limit=2.0 2023-03-25 23:35:02,812 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6646.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:35:23,989 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3916, 1.1989, 0.9768, 1.0214, 1.1535, 1.1183, 1.1354, 1.9856], device='cuda:2'), covar=tensor([2.8147, 2.5355, 2.2473, 3.1634, 2.0375, 1.5102, 2.5303, 0.7672], device='cuda:2'), in_proj_covar=tensor([0.0233, 0.0220, 0.0200, 0.0256, 0.0213, 0.0181, 0.0218, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-25 23:35:25,036 INFO [finetune.py:976] (2/7) Epoch 2, batch 950, loss[loss=0.3383, simple_loss=0.3688, pruned_loss=0.1538, over 4919.00 frames. ], tot_loss[loss=0.2965, simple_loss=0.3329, pruned_loss=0.1301, over 947849.28 frames. ], batch size: 37, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:35:32,464 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6690.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:35:34,866 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=6694.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:35:37,915 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6699.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:35:39,284 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.56 vs. limit=2.0 2023-03-25 23:35:48,889 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.136e+02 1.729e+02 2.077e+02 2.706e+02 4.933e+02, threshold=4.155e+02, percent-clipped=1.0 2023-03-25 23:36:03,623 INFO [finetune.py:976] (2/7) Epoch 2, batch 1000, loss[loss=0.2311, simple_loss=0.2776, pruned_loss=0.09226, over 4765.00 frames. ], tot_loss[loss=0.2993, simple_loss=0.336, pruned_loss=0.1313, over 951109.17 frames. ], batch size: 26, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:36:22,930 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=6747.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:36:40,655 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-25 23:37:01,237 INFO [finetune.py:976] (2/7) Epoch 2, batch 1050, loss[loss=0.3041, simple_loss=0.3452, pruned_loss=0.1315, over 4895.00 frames. ], tot_loss[loss=0.3028, simple_loss=0.3397, pruned_loss=0.133, over 953597.18 frames. 
], batch size: 37, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:37:03,786 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.0979, 1.2795, 0.8933, 1.4149, 1.3079, 2.3471, 1.1201, 1.3680], device='cuda:2'), covar=tensor([0.1155, 0.1869, 0.1681, 0.1076, 0.1799, 0.0486, 0.1755, 0.2023], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0078, 0.0074, 0.0077, 0.0090, 0.0079, 0.0083, 0.0076], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-25 23:37:08,629 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5752, 1.3864, 1.8103, 1.2885, 1.5620, 1.7778, 1.3993, 1.8578], device='cuda:2'), covar=tensor([0.1534, 0.2418, 0.1481, 0.1868, 0.1009, 0.1430, 0.2674, 0.1087], device='cuda:2'), in_proj_covar=tensor([0.0200, 0.0202, 0.0201, 0.0192, 0.0174, 0.0219, 0.0210, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:37:09,198 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6791.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:37:19,910 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1446, 1.8262, 2.6443, 4.0101, 2.9526, 2.6416, 1.0046, 3.2903], device='cuda:2'), covar=tensor([0.2010, 0.1701, 0.1439, 0.0563, 0.0821, 0.1607, 0.2103, 0.0611], device='cuda:2'), in_proj_covar=tensor([0.0103, 0.0119, 0.0138, 0.0161, 0.0105, 0.0145, 0.0130, 0.0108], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2') 2023-03-25 23:37:29,664 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.307e+02 2.050e+02 2.547e+02 2.958e+02 5.414e+02, threshold=5.095e+02, percent-clipped=8.0 2023-03-25 23:37:37,310 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6818.0, num_to_drop=1, layers_to_drop={2} 2023-03-25 23:37:40,266 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6823.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:37:47,834 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6826.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:37:48,962 INFO [finetune.py:976] (2/7) Epoch 2, batch 1100, loss[loss=0.3406, simple_loss=0.366, pruned_loss=0.1576, over 4816.00 frames. ], tot_loss[loss=0.3036, simple_loss=0.3411, pruned_loss=0.133, over 954517.65 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:37:56,677 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=6839.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:38:18,877 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6859.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:38:23,766 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6867.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:38:28,918 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=6874.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:38:31,313 INFO [finetune.py:976] (2/7) Epoch 2, batch 1150, loss[loss=0.3026, simple_loss=0.3428, pruned_loss=0.1312, over 4814.00 frames. ], tot_loss[loss=0.3033, simple_loss=0.3412, pruned_loss=0.1327, over 955961.67 frames. 
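Alongside each entropy tensor, the log carries in_proj_covar / out_proj_covar tensors that drift only slightly between consecutive reports from the same module, which looks like an exponential moving average of some per-channel second-moment statistic. A purely hypothetical tracker; both the decay and the exact statistic being averaged are assumptions:

```python
import torch

# Hypothetical EMA of a per-channel mean-square statistic, as a stand-in
# for whatever in_proj_covar / out_proj_covar actually record.

class EmaStat:
    def __init__(self, num_channels: int, decay: float = 0.99):
        self.decay = decay
        self.value = torch.zeros(num_channels)

    def update(self, x: torch.Tensor) -> torch.Tensor:
        # x: (frames, num_channels); blend the batch statistic into the EMA,
        # which is why consecutive logged values move so slowly.
        batch_stat = (x * x).mean(dim=0)
        self.value = self.decay * self.value + (1.0 - self.decay) * batch_stat
        return self.value
```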
], batch size: 33, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:38:31,415 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6878.0, num_to_drop=1, layers_to_drop={1} 2023-03-25 23:38:34,883 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7389, 2.1891, 2.0881, 1.6872, 2.5283, 3.0671, 2.7059, 2.1382], device='cuda:2'), covar=tensor([0.0218, 0.0432, 0.0456, 0.0425, 0.0316, 0.0323, 0.0293, 0.0441], device='cuda:2'), in_proj_covar=tensor([0.0083, 0.0112, 0.0131, 0.0112, 0.0103, 0.0098, 0.0088, 0.0109], device='cuda:2'), out_proj_covar=tensor([6.4821e-05, 8.8561e-05, 1.0632e-04, 8.8825e-05, 8.2287e-05, 7.3265e-05, 6.8239e-05, 8.5445e-05], device='cuda:2') 2023-03-25 23:38:39,369 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-25 23:38:52,177 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=6907.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:38:53,377 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6909.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:38:56,974 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.223e+02 2.014e+02 2.348e+02 2.865e+02 5.036e+02, threshold=4.696e+02, percent-clipped=0.0 2023-03-25 23:38:57,049 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=6915.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:39:07,724 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6922.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:39:17,348 INFO [finetune.py:976] (2/7) Epoch 2, batch 1200, loss[loss=0.2304, simple_loss=0.2818, pruned_loss=0.0895, over 4804.00 frames. ], tot_loss[loss=0.3013, simple_loss=0.3389, pruned_loss=0.1318, over 955726.45 frames. ], batch size: 39, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:39:31,061 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6939.0, num_to_drop=1, layers_to_drop={0} 2023-03-25 23:39:35,914 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3426, 2.5160, 2.1667, 2.5369, 1.6144, 4.8723, 2.2089, 2.9064], device='cuda:2'), covar=tensor([0.2779, 0.1964, 0.1750, 0.1782, 0.1802, 0.0112, 0.2371, 0.1199], device='cuda:2'), in_proj_covar=tensor([0.0125, 0.0107, 0.0113, 0.0114, 0.0109, 0.0092, 0.0096, 0.0092], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0003, 0.0005, 0.0004], device='cuda:2') 2023-03-25 23:39:39,685 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.83 vs. limit=5.0 2023-03-25 23:39:47,663 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.96 vs. limit=2.0 2023-03-25 23:39:49,900 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=6970.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:39:49,969 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6970.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:39:56,177 INFO [finetune.py:976] (2/7) Epoch 2, batch 1250, loss[loss=0.2763, simple_loss=0.3198, pruned_loss=0.1164, over 4853.00 frames. ], tot_loss[loss=0.2975, simple_loss=0.3349, pruned_loss=0.1301, over 955203.87 frames. 
], batch size: 47, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:40:00,601 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6985.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:40:10,139 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4596, 2.0179, 1.8832, 0.8658, 2.0180, 1.8910, 1.5925, 2.0379], device='cuda:2'), covar=tensor([0.0750, 0.1111, 0.1571, 0.2579, 0.1256, 0.2357, 0.2376, 0.1039], device='cuda:2'), in_proj_covar=tensor([0.0162, 0.0183, 0.0196, 0.0180, 0.0204, 0.0202, 0.0206, 0.0193], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:40:12,025 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.2283, 3.6081, 3.7961, 4.0830, 3.9637, 3.7014, 4.2850, 1.3991], device='cuda:2'), covar=tensor([0.0635, 0.0765, 0.0754, 0.0843, 0.0988, 0.1250, 0.0628, 0.4645], device='cuda:2'), in_proj_covar=tensor([0.0371, 0.0248, 0.0272, 0.0297, 0.0350, 0.0291, 0.0314, 0.0305], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:40:23,990 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.288e+02 1.918e+02 2.412e+02 2.798e+02 4.765e+02, threshold=4.825e+02, percent-clipped=1.0 2023-03-25 23:40:39,104 INFO [finetune.py:976] (2/7) Epoch 2, batch 1300, loss[loss=0.2368, simple_loss=0.289, pruned_loss=0.09226, over 4911.00 frames. ], tot_loss[loss=0.2929, simple_loss=0.3305, pruned_loss=0.1276, over 955956.31 frames. ], batch size: 36, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:40:47,581 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-25 23:40:52,843 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=7048.0, num_to_drop=1, layers_to_drop={0} 2023-03-25 23:40:53,994 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3523, 2.8776, 2.8397, 1.3419, 3.0725, 2.2134, 0.8582, 1.8577], device='cuda:2'), covar=tensor([0.2198, 0.2199, 0.1809, 0.3517, 0.1276, 0.1128, 0.4100, 0.1761], device='cuda:2'), in_proj_covar=tensor([0.0157, 0.0164, 0.0165, 0.0129, 0.0155, 0.0119, 0.0148, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-25 23:41:17,562 INFO [finetune.py:976] (2/7) Epoch 2, batch 1350, loss[loss=0.3048, simple_loss=0.3647, pruned_loss=0.1225, over 4805.00 frames. ], tot_loss[loss=0.2919, simple_loss=0.3298, pruned_loss=0.127, over 954928.93 frames. 
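The lr column first slips from 4.00e-03 to 3.99e-03 here, while the global batch count (the zipformer batch_count field, ~7000 at this point) is still small relative to the scheduler's batch constant, so the schedule is essentially flat this early. A sketch in the shape of icefall's Eden scheduler; treat the exact exponents and form as an assumption from memory rather than a verified formula:

```python
# Sketch of an Eden-style learning-rate schedule: both factors stay near
# 1.0 while batch << lr_batches and epoch << lr_epochs, which is why the
# logged lr barely moves across thousands of batches here.

def eden_lr(base_lr: float, batch: int, epoch: int,
            lr_batches: float, lr_epochs: float) -> float:
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor
```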
], batch size: 41, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:41:24,890 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8160, 1.1582, 0.8935, 1.6770, 2.0998, 1.3915, 1.4916, 1.7097], device='cuda:2'), covar=tensor([0.1691, 0.2410, 0.2355, 0.1324, 0.2245, 0.2142, 0.1469, 0.2083], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0097, 0.0115, 0.0093, 0.0124, 0.0096, 0.0098, 0.0093], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-25 23:41:28,079 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9406, 1.1333, 0.7839, 1.8100, 2.0817, 1.7551, 1.4470, 1.9056], device='cuda:2'), covar=tensor([0.1668, 0.2353, 0.2427, 0.1222, 0.2468, 0.2133, 0.1446, 0.1980], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0097, 0.0115, 0.0093, 0.0124, 0.0096, 0.0098, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-25 23:41:38,568 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0 2023-03-25 23:41:41,175 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=7109.0, num_to_drop=1, layers_to_drop={0} 2023-03-25 23:41:47,302 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.456e+02 1.876e+02 2.208e+02 2.586e+02 5.614e+02, threshold=4.416e+02, percent-clipped=5.0 2023-03-25 23:41:49,200 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7118.0, num_to_drop=1, layers_to_drop={2} 2023-03-25 23:41:52,297 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7123.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:41:55,448 INFO [finetune.py:976] (2/7) Epoch 2, batch 1400, loss[loss=0.2546, simple_loss=0.3212, pruned_loss=0.09399, over 4805.00 frames. ], tot_loss[loss=0.2955, simple_loss=0.3342, pruned_loss=0.1285, over 956096.67 frames. 
], batch size: 39, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:42:20,946 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1541, 1.7782, 1.9033, 0.8959, 1.9339, 2.3608, 1.7905, 1.9834], device='cuda:2'), covar=tensor([0.1010, 0.0806, 0.0551, 0.0852, 0.0827, 0.0431, 0.0542, 0.0479], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0148, 0.0113, 0.0125, 0.0125, 0.0113, 0.0140, 0.0138], device='cuda:2'), out_proj_covar=tensor([9.2731e-05, 1.1033e-04, 8.2885e-05, 9.2100e-05, 9.0651e-05, 8.3470e-05, 1.0462e-04, 1.0256e-04], device='cuda:2') 2023-03-25 23:42:31,260 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=7163.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:42:33,029 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=7166.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:42:36,029 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=7171.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:42:36,093 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9067, 1.4937, 2.3079, 1.4284, 1.9309, 2.0142, 1.5841, 2.1360], device='cuda:2'), covar=tensor([0.1681, 0.2269, 0.1668, 0.2399, 0.1190, 0.1947, 0.2662, 0.1294], device='cuda:2'), in_proj_covar=tensor([0.0204, 0.0205, 0.0205, 0.0195, 0.0177, 0.0223, 0.0214, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:42:40,184 INFO [finetune.py:976] (2/7) Epoch 2, batch 1450, loss[loss=0.2466, simple_loss=0.2959, pruned_loss=0.09865, over 4757.00 frames. ], tot_loss[loss=0.296, simple_loss=0.3354, pruned_loss=0.1284, over 954360.88 frames. ], batch size: 23, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:42:40,419 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.98 vs. limit=5.0 2023-03-25 23:43:12,352 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4258, 2.0150, 1.8500, 1.4533, 2.2124, 2.9332, 2.3908, 1.9659], device='cuda:2'), covar=tensor([0.0190, 0.0478, 0.0502, 0.0500, 0.0351, 0.0271, 0.0292, 0.0399], device='cuda:2'), in_proj_covar=tensor([0.0082, 0.0112, 0.0131, 0.0112, 0.0103, 0.0097, 0.0087, 0.0108], device='cuda:2'), out_proj_covar=tensor([6.4258e-05, 8.8196e-05, 1.0618e-04, 8.8711e-05, 8.2073e-05, 7.2750e-05, 6.7548e-05, 8.4922e-05], device='cuda:2') 2023-03-25 23:43:13,695 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1957, 1.2024, 1.2968, 0.7226, 0.9879, 1.4432, 1.4185, 1.2421], device='cuda:2'), covar=tensor([0.0870, 0.0477, 0.0451, 0.0573, 0.0456, 0.0401, 0.0263, 0.0475], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0147, 0.0112, 0.0124, 0.0124, 0.0112, 0.0139, 0.0138], device='cuda:2'), out_proj_covar=tensor([9.1990e-05, 1.0945e-04, 8.2202e-05, 9.1551e-05, 8.9707e-05, 8.2871e-05, 1.0382e-04, 1.0194e-04], device='cuda:2') 2023-03-25 23:43:29,960 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.045e+02 1.933e+02 2.198e+02 2.731e+02 4.077e+02, threshold=4.395e+02, percent-clipped=0.0 2023-03-25 23:43:40,434 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=7224.0, num_to_drop=1, layers_to_drop={2} 2023-03-25 23:43:42,757 INFO [finetune.py:976] (2/7) Epoch 2, batch 1500, loss[loss=0.3031, simple_loss=0.3432, pruned_loss=0.1315, over 4820.00 frames. ], tot_loss[loss=0.2983, simple_loss=0.3373, pruned_loss=0.1296, over 955634.82 frames. 
], batch size: 33, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:43:51,517 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-25 23:43:51,997 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=7234.0, num_to_drop=1, layers_to_drop={0} 2023-03-25 23:44:11,998 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.9193, 3.4876, 3.6602, 3.7043, 3.4325, 3.2219, 4.0381, 1.3617], device='cuda:2'), covar=tensor([0.1256, 0.1726, 0.1253, 0.1708, 0.2178, 0.2358, 0.1314, 0.6461], device='cuda:2'), in_proj_covar=tensor([0.0372, 0.0249, 0.0273, 0.0299, 0.0350, 0.0293, 0.0315, 0.0306], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:44:36,528 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=7265.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:44:47,929 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=7275.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:44:49,672 INFO [finetune.py:976] (2/7) Epoch 2, batch 1550, loss[loss=0.285, simple_loss=0.3364, pruned_loss=0.1168, over 4865.00 frames. ], tot_loss[loss=0.2985, simple_loss=0.3377, pruned_loss=0.1297, over 953272.81 frames. ], batch size: 34, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:44:57,962 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7285.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:45:12,943 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-25 23:45:29,494 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3025, 1.6625, 1.1712, 2.0379, 2.3706, 1.8315, 1.7745, 1.9605], device='cuda:2'), covar=tensor([0.1964, 0.2757, 0.2730, 0.1534, 0.2599, 0.2472, 0.1844, 0.2635], device='cuda:2'), in_proj_covar=tensor([0.0092, 0.0097, 0.0115, 0.0093, 0.0124, 0.0096, 0.0098, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-25 23:45:31,202 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.380e+02 1.989e+02 2.300e+02 2.720e+02 4.709e+02, threshold=4.600e+02, percent-clipped=4.0 2023-03-25 23:45:39,141 INFO [finetune.py:976] (2/7) Epoch 2, batch 1600, loss[loss=0.2663, simple_loss=0.3117, pruned_loss=0.1105, over 4859.00 frames. ], tot_loss[loss=0.2935, simple_loss=0.333, pruned_loss=0.127, over 953281.68 frames. ], batch size: 49, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:45:42,237 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=7333.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:45:44,188 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=7336.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:46:35,387 INFO [finetune.py:976] (2/7) Epoch 2, batch 1650, loss[loss=0.2527, simple_loss=0.2927, pruned_loss=0.1063, over 4903.00 frames. ], tot_loss[loss=0.2899, simple_loss=0.3291, pruned_loss=0.1253, over 954976.98 frames. 
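The scaling.py:679 lines compare a whitening metric against a per-module limit (2.0 for the 8-group/96-channel modules, 5.0 for the 1-group/384-channel ones), and the "vs. limit" phrasing suggests a penalty is applied only when the metric exceeds the limit. As an illustration only, not a reconstruction of zipformer's actual metric: one plausible measure of how far features are from white is the largest covariance eigenvalue divided by the mean eigenvalue within each channel group (exactly 1.0 for perfectly white features):

```python
import torch

# Toy "whiteness" metric: per group, max eigenvalue of the channel
# covariance over the mean eigenvalue. An illustrative stand-in only.

def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    n, c = x.shape                        # (frames, channels)
    g = c // num_groups
    ratios = []
    for i in range(num_groups):
        feats = x[:, i * g:(i + 1) * g]
        feats = feats - feats.mean(dim=0, keepdim=True)
        cov = feats.t() @ feats / n
        eig = torch.linalg.eigvalsh(cov)  # symmetric, so real eigenvalues
        ratios.append((eig.max() / eig.mean()).item())
    return max(ratios)
```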
], batch size: 32, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:46:59,570 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=7404.0, num_to_drop=1, layers_to_drop={1} 2023-03-25 23:47:01,927 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=7407.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:47:07,156 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.344e+02 1.965e+02 2.306e+02 2.769e+02 7.650e+02, threshold=4.611e+02, percent-clipped=3.0 2023-03-25 23:47:15,137 INFO [finetune.py:976] (2/7) Epoch 2, batch 1700, loss[loss=0.3248, simple_loss=0.3401, pruned_loss=0.1547, over 4281.00 frames. ], tot_loss[loss=0.2871, simple_loss=0.326, pruned_loss=0.1241, over 954730.85 frames. ], batch size: 18, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:47:20,149 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5555, 1.3362, 1.9075, 2.9566, 2.1232, 2.1147, 1.0150, 2.3356], device='cuda:2'), covar=tensor([0.1836, 0.1676, 0.1367, 0.0678, 0.0836, 0.1654, 0.1847, 0.0750], device='cuda:2'), in_proj_covar=tensor([0.0103, 0.0119, 0.0138, 0.0160, 0.0104, 0.0144, 0.0129, 0.0107], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2') 2023-03-25 23:47:32,086 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.59 vs. limit=2.0 2023-03-25 23:47:48,729 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=7468.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:48:00,184 INFO [finetune.py:976] (2/7) Epoch 2, batch 1750, loss[loss=0.345, simple_loss=0.3882, pruned_loss=0.1509, over 4808.00 frames. ], tot_loss[loss=0.289, simple_loss=0.3279, pruned_loss=0.125, over 954219.84 frames. ], batch size: 45, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:48:14,795 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0211, 1.7668, 2.5923, 1.6778, 2.1813, 2.1965, 1.7383, 2.4234], device='cuda:2'), covar=tensor([0.2115, 0.2596, 0.1695, 0.2801, 0.1228, 0.2297, 0.2841, 0.1221], device='cuda:2'), in_proj_covar=tensor([0.0201, 0.0202, 0.0200, 0.0192, 0.0174, 0.0220, 0.0210, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:48:46,572 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.280e+02 2.019e+02 2.351e+02 2.794e+02 5.482e+02, threshold=4.701e+02, percent-clipped=1.0 2023-03-25 23:48:51,340 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5179, 1.2985, 1.3865, 1.5278, 2.0926, 1.4701, 1.1791, 1.2615], device='cuda:2'), covar=tensor([0.3216, 0.3130, 0.2506, 0.2371, 0.2576, 0.1705, 0.3790, 0.2393], device='cuda:2'), in_proj_covar=tensor([0.0220, 0.0202, 0.0188, 0.0174, 0.0224, 0.0169, 0.0205, 0.0178], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:48:53,181 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=7519.0, num_to_drop=1, layers_to_drop={2} 2023-03-25 23:49:03,491 INFO [finetune.py:976] (2/7) Epoch 2, batch 1800, loss[loss=0.2781, simple_loss=0.3348, pruned_loss=0.1107, over 4934.00 frames. ], tot_loss[loss=0.2927, simple_loss=0.3327, pruned_loss=0.1263, over 956012.86 frames. 
], batch size: 42, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:49:12,357 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7534.0, num_to_drop=1, layers_to_drop={0} 2023-03-25 23:49:44,259 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7565.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:49:57,763 INFO [finetune.py:976] (2/7) Epoch 2, batch 1850, loss[loss=0.3323, simple_loss=0.3472, pruned_loss=0.1587, over 4878.00 frames. ], tot_loss[loss=0.2958, simple_loss=0.3348, pruned_loss=0.1284, over 954040.26 frames. ], batch size: 31, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:50:05,101 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=7582.0, num_to_drop=1, layers_to_drop={0} 2023-03-25 23:50:41,637 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=7613.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:50:48,710 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.368e+02 2.140e+02 2.552e+02 3.133e+02 4.516e+02, threshold=5.105e+02, percent-clipped=0.0 2023-03-25 23:51:00,606 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=7624.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:51:02,925 INFO [finetune.py:976] (2/7) Epoch 2, batch 1900, loss[loss=0.3463, simple_loss=0.367, pruned_loss=0.1628, over 4299.00 frames. ], tot_loss[loss=0.2952, simple_loss=0.3345, pruned_loss=0.128, over 952906.58 frames. ], batch size: 65, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:51:09,870 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=7631.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:51:41,826 INFO [finetune.py:976] (2/7) Epoch 2, batch 1950, loss[loss=0.2859, simple_loss=0.317, pruned_loss=0.1274, over 4774.00 frames. ], tot_loss[loss=0.2916, simple_loss=0.3319, pruned_loss=0.1256, over 953775.99 frames. ], batch size: 26, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:51:47,786 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=7685.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:51:59,345 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7704.0, num_to_drop=1, layers_to_drop={0} 2023-03-25 23:52:07,431 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.355e+02 1.951e+02 2.306e+02 2.786e+02 5.176e+02, threshold=4.611e+02, percent-clipped=1.0 2023-03-25 23:52:17,327 INFO [finetune.py:976] (2/7) Epoch 2, batch 2000, loss[loss=0.2902, simple_loss=0.3255, pruned_loss=0.1275, over 4816.00 frames. ], tot_loss[loss=0.2871, simple_loss=0.3275, pruned_loss=0.1233, over 954872.02 frames. 
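Note how "batch size" swings from 17 up to 65 across these records while the per-batch frame counts stay in the same ~4e3-5e3 band: batches are filled to a roughly constant duration budget, so many short cuts or a few long ones occupy the same space. A generic sketch of that batching rule; the function and its max_frames budget are placeholders, not the sampler actually used:

```python
# Duration-budget batching: accumulate cuts until adding the next one
# would exceed a fixed frame budget, then emit the batch. This reproduces
# the inverse relation between cut length and "batch size" seen above.

def batches_by_duration(cut_lengths, max_frames: float):
    batch, total = [], 0.0
    for length in cut_lengths:  # frames per utterance, ideally bucketed by length
        if batch and total + length > max_frames:
            yield batch
            batch, total = [], 0.0
        batch.append(length)
        total += length
    if batch:
        yield batch
```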
], batch size: 38, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:52:19,218 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5906, 2.0640, 1.9175, 1.9131, 1.9549, 4.2864, 1.5719, 2.1485], device='cuda:2'), covar=tensor([0.0976, 0.1458, 0.1139, 0.1087, 0.1396, 0.0139, 0.1344, 0.1498], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0078, 0.0075, 0.0078, 0.0091, 0.0080, 0.0084, 0.0077], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-25 23:52:26,512 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=7743.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:52:31,890 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=7752.0, num_to_drop=1, layers_to_drop={1} 2023-03-25 23:52:39,939 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=7763.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:52:49,965 INFO [finetune.py:976] (2/7) Epoch 2, batch 2050, loss[loss=0.2018, simple_loss=0.261, pruned_loss=0.07132, over 4818.00 frames. ], tot_loss[loss=0.2824, simple_loss=0.3227, pruned_loss=0.1211, over 956405.51 frames. ], batch size: 25, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:52:52,401 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5180, 1.2622, 1.0888, 1.0420, 1.2357, 1.2031, 1.1688, 2.0377], device='cuda:2'), covar=tensor([2.8156, 2.5030, 2.1233, 3.0414, 2.0630, 1.5064, 2.5336, 0.8080], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0228, 0.0207, 0.0263, 0.0220, 0.0187, 0.0225, 0.0168], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-25 23:53:05,273 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=7802.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:53:06,507 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=7804.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:53:14,627 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.310e+02 1.893e+02 2.255e+02 2.795e+02 6.508e+02, threshold=4.510e+02, percent-clipped=3.0 2023-03-25 23:53:17,184 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7819.0, num_to_drop=1, layers_to_drop={0} 2023-03-25 23:53:26,209 INFO [finetune.py:976] (2/7) Epoch 2, batch 2100, loss[loss=0.2748, simple_loss=0.3054, pruned_loss=0.1221, over 4813.00 frames. ], tot_loss[loss=0.2816, simple_loss=0.3217, pruned_loss=0.1207, over 957096.24 frames. 
], batch size: 25, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:53:46,461 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0469, 2.2314, 2.2700, 2.2759, 2.2047, 3.7691, 1.8634, 2.1363], device='cuda:2'), covar=tensor([0.0842, 0.1213, 0.0842, 0.0844, 0.1211, 0.0240, 0.1265, 0.1396], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0078, 0.0075, 0.0078, 0.0090, 0.0080, 0.0083, 0.0077], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-25 23:53:47,052 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5897, 1.5665, 2.1167, 3.1397, 2.1869, 2.2900, 1.1197, 2.5095], device='cuda:2'), covar=tensor([0.1807, 0.1432, 0.1275, 0.0518, 0.0865, 0.1449, 0.1770, 0.0645], device='cuda:2'), in_proj_covar=tensor([0.0103, 0.0119, 0.0139, 0.0162, 0.0105, 0.0145, 0.0131, 0.0108], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2') 2023-03-25 23:53:56,797 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=7863.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:54:00,157 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=7867.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:54:08,365 INFO [finetune.py:976] (2/7) Epoch 2, batch 2150, loss[loss=0.2523, simple_loss=0.2983, pruned_loss=0.1032, over 4744.00 frames. ], tot_loss[loss=0.2844, simple_loss=0.3248, pruned_loss=0.122, over 955623.91 frames. ], batch size: 26, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:54:09,078 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4991, 1.4148, 1.1759, 1.4019, 1.6326, 1.2551, 1.9233, 1.4172], device='cuda:2'), covar=tensor([0.3070, 0.5823, 0.6083, 0.5962, 0.4066, 0.3169, 0.7382, 0.4101], device='cuda:2'), in_proj_covar=tensor([0.0161, 0.0193, 0.0236, 0.0248, 0.0209, 0.0179, 0.0199, 0.0186], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:54:49,231 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8130, 1.4931, 1.3427, 1.5555, 1.4967, 1.5071, 1.4170, 2.2807], device='cuda:2'), covar=tensor([1.9917, 1.9083, 1.6090, 2.1247, 1.5556, 1.0797, 2.0942, 0.5162], device='cuda:2'), in_proj_covar=tensor([0.0245, 0.0229, 0.0208, 0.0264, 0.0221, 0.0187, 0.0226, 0.0169], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-25 23:54:49,706 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.034e+02 1.983e+02 2.392e+02 2.856e+02 4.131e+02, threshold=4.785e+02, percent-clipped=0.0 2023-03-25 23:55:10,157 INFO [finetune.py:976] (2/7) Epoch 2, batch 2200, loss[loss=0.3093, simple_loss=0.346, pruned_loss=0.1363, over 4823.00 frames. ], tot_loss[loss=0.2869, simple_loss=0.3279, pruned_loss=0.123, over 955431.79 frames. ], batch size: 33, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:55:16,977 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7931.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:56:12,553 INFO [finetune.py:976] (2/7) Epoch 2, batch 2250, loss[loss=0.3141, simple_loss=0.359, pruned_loss=0.1345, over 4895.00 frames. ], tot_loss[loss=0.2883, simple_loss=0.3292, pruned_loss=0.1237, over 952648.91 frames. 
], batch size: 37, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:56:13,681 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=7979.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:56:14,301 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=7980.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:56:34,511 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.03 vs. limit=5.0 2023-03-25 23:57:05,130 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.321e+02 1.899e+02 2.269e+02 2.653e+02 4.132e+02, threshold=4.538e+02, percent-clipped=0.0 2023-03-25 23:57:24,778 INFO [finetune.py:976] (2/7) Epoch 2, batch 2300, loss[loss=0.2877, simple_loss=0.3229, pruned_loss=0.1263, over 4778.00 frames. ], tot_loss[loss=0.2891, simple_loss=0.3304, pruned_loss=0.1239, over 953788.82 frames. ], batch size: 26, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:57:38,271 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1115, 1.2960, 0.9830, 1.2821, 1.3277, 2.3061, 1.1191, 1.3888], device='cuda:2'), covar=tensor([0.1054, 0.1563, 0.1091, 0.0926, 0.1556, 0.0367, 0.1416, 0.1569], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0079, 0.0075, 0.0078, 0.0091, 0.0080, 0.0083, 0.0077], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-25 23:57:53,415 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=8063.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:57:57,067 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2480, 1.2583, 1.5382, 1.1061, 1.1389, 1.3878, 1.2936, 1.5362], device='cuda:2'), covar=tensor([0.1674, 0.2423, 0.1489, 0.1602, 0.1385, 0.1546, 0.2994, 0.1051], device='cuda:2'), in_proj_covar=tensor([0.0203, 0.0205, 0.0202, 0.0194, 0.0176, 0.0221, 0.0212, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-25 23:58:05,655 INFO [finetune.py:976] (2/7) Epoch 2, batch 2350, loss[loss=0.2723, simple_loss=0.3168, pruned_loss=0.1139, over 4903.00 frames. ], tot_loss[loss=0.2843, simple_loss=0.3263, pruned_loss=0.1212, over 952364.02 frames. ], batch size: 36, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:58:20,624 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=8099.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:58:39,061 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=8111.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:58:41,409 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.369e+02 1.949e+02 2.211e+02 2.649e+02 5.737e+02, threshold=4.422e+02, percent-clipped=2.0 2023-03-25 23:59:01,109 INFO [finetune.py:976] (2/7) Epoch 2, batch 2400, loss[loss=0.2352, simple_loss=0.2782, pruned_loss=0.09613, over 4827.00 frames. ], tot_loss[loss=0.2805, simple_loss=0.3226, pruned_loss=0.1192, over 955108.18 frames. ], batch size: 39, lr: 3.99e-03, grad_scale: 32.0 2023-03-25 23:59:27,562 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=8158.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:59:39,736 INFO [finetune.py:976] (2/7) Epoch 2, batch 2450, loss[loss=0.2977, simple_loss=0.3291, pruned_loss=0.1332, over 4823.00 frames. ], tot_loss[loss=0.2765, simple_loss=0.3187, pruned_loss=0.1171, over 957163.82 frames. 
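The trailing grad_scale values move in powers of two (16.0 for long stretches, 32.0 from batch 2400 here): this is the dynamic loss scale of fp16 training, which is raised after a run of overflow-free steps and halved whenever gradients overflow. A standard torch.cuda.amp loop that produces such a value; model, optimizer, and batch are placeholders:

```python
import torch

# Mixed-precision step with dynamic loss scaling. scaler.get_scale()
# returns the value reported as grad_scale in the records above.

scaler = torch.cuda.amp.GradScaler(init_scale=16.0)

def train_step(model, optimizer, batch):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = model(batch)
    scaler.scale(loss).backward()
    scaler.step(optimizer)   # silently skips the update on inf/nan gradients
    scaler.update()          # doubles the scale after stable steps, halves on overflow
    return scaler.get_scale()
```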
], batch size: 38, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:00:20,955 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.358e+02 1.915e+02 2.165e+02 2.652e+02 4.234e+02, threshold=4.330e+02, percent-clipped=0.0 2023-03-26 00:00:34,016 INFO [finetune.py:976] (2/7) Epoch 2, batch 2500, loss[loss=0.3198, simple_loss=0.3558, pruned_loss=0.1419, over 4171.00 frames. ], tot_loss[loss=0.2792, simple_loss=0.3212, pruned_loss=0.1186, over 955147.66 frames. ], batch size: 65, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:01:11,681 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=8265.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:01:20,784 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5514, 1.6281, 1.4591, 1.5900, 0.9449, 3.2657, 1.2537, 1.8010], device='cuda:2'), covar=tensor([0.3352, 0.2158, 0.2096, 0.2167, 0.2103, 0.0180, 0.2981, 0.1445], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0107, 0.0113, 0.0115, 0.0110, 0.0093, 0.0097, 0.0093], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0003, 0.0005, 0.0004], device='cuda:2') 2023-03-26 00:01:30,122 INFO [finetune.py:976] (2/7) Epoch 2, batch 2550, loss[loss=0.2259, simple_loss=0.254, pruned_loss=0.0989, over 4019.00 frames. ], tot_loss[loss=0.2831, simple_loss=0.3256, pruned_loss=0.1203, over 951302.01 frames. ], batch size: 17, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:01:31,470 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=8280.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:02:02,893 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.611e+02 2.134e+02 2.551e+02 3.097e+02 6.135e+02, threshold=5.102e+02, percent-clipped=4.0 2023-03-26 00:02:06,647 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4186, 2.1532, 1.7234, 0.8385, 1.9124, 1.9078, 1.5721, 1.9708], device='cuda:2'), covar=tensor([0.0665, 0.0791, 0.1619, 0.2283, 0.1374, 0.2142, 0.2333, 0.0990], device='cuda:2'), in_proj_covar=tensor([0.0162, 0.0187, 0.0198, 0.0182, 0.0208, 0.0203, 0.0208, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 00:02:09,741 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=8326.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:02:10,869 INFO [finetune.py:976] (2/7) Epoch 2, batch 2600, loss[loss=0.3494, simple_loss=0.3662, pruned_loss=0.1663, over 4144.00 frames. ], tot_loss[loss=0.2852, simple_loss=0.328, pruned_loss=0.1212, over 951360.62 frames. 
], batch size: 65, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:02:10,925 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=8328.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:02:21,315 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6332, 1.5883, 1.5430, 1.5992, 1.0481, 2.9403, 1.1314, 1.6581], device='cuda:2'), covar=tensor([0.3232, 0.2313, 0.2039, 0.2243, 0.2089, 0.0273, 0.2939, 0.1444], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0107, 0.0113, 0.0115, 0.0110, 0.0093, 0.0097, 0.0093], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0003, 0.0005, 0.0004], device='cuda:2') 2023-03-26 00:02:50,083 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5267, 1.5104, 1.7107, 1.8067, 1.6207, 3.2546, 1.2413, 1.5874], device='cuda:2'), covar=tensor([0.0962, 0.1623, 0.1275, 0.1078, 0.1487, 0.0246, 0.1459, 0.1671], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0079, 0.0076, 0.0078, 0.0091, 0.0081, 0.0084, 0.0078], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-26 00:03:09,436 INFO [finetune.py:976] (2/7) Epoch 2, batch 2650, loss[loss=0.327, simple_loss=0.36, pruned_loss=0.147, over 4809.00 frames. ], tot_loss[loss=0.2854, simple_loss=0.3279, pruned_loss=0.1214, over 950792.52 frames. ], batch size: 33, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:03:19,938 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.49 vs. limit=2.0 2023-03-26 00:03:28,494 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=8399.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:03:45,676 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.253e+02 1.975e+02 2.270e+02 2.796e+02 5.066e+02, threshold=4.539e+02, percent-clipped=0.0 2023-03-26 00:03:57,445 INFO [finetune.py:976] (2/7) Epoch 2, batch 2700, loss[loss=0.2762, simple_loss=0.3197, pruned_loss=0.1164, over 4742.00 frames. ], tot_loss[loss=0.2815, simple_loss=0.3252, pruned_loss=0.1189, over 950028.84 frames. 
], batch size: 59, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:04:02,583 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.0453, 0.7686, 0.9153, 0.9281, 1.1368, 1.1815, 0.9963, 0.8788], device='cuda:2'), covar=tensor([0.0324, 0.0392, 0.0594, 0.0374, 0.0318, 0.0314, 0.0334, 0.0427], device='cuda:2'), in_proj_covar=tensor([0.0082, 0.0113, 0.0132, 0.0112, 0.0103, 0.0098, 0.0088, 0.0109], device='cuda:2'), out_proj_covar=tensor([6.4135e-05, 8.9185e-05, 1.0692e-04, 8.9281e-05, 8.2093e-05, 7.2943e-05, 6.7964e-05, 8.5210e-05], device='cuda:2') 2023-03-26 00:04:17,923 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=8447.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:04:18,017 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8203, 1.6150, 1.3565, 1.7252, 2.0030, 1.5312, 2.2329, 1.7161], device='cuda:2'), covar=tensor([0.3233, 0.6312, 0.6666, 0.6556, 0.4222, 0.3221, 0.4833, 0.4338], device='cuda:2'), in_proj_covar=tensor([0.0162, 0.0194, 0.0237, 0.0250, 0.0211, 0.0180, 0.0200, 0.0187], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2') 2023-03-26 00:04:28,871 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=8458.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:04:47,921 INFO [finetune.py:976] (2/7) Epoch 2, batch 2750, loss[loss=0.2607, simple_loss=0.3053, pruned_loss=0.108, over 4796.00 frames. ], tot_loss[loss=0.2779, simple_loss=0.3218, pruned_loss=0.117, over 952515.74 frames. ], batch size: 29, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:05:09,727 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=8506.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:05:25,225 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.170e+02 1.975e+02 2.311e+02 2.901e+02 5.278e+02, threshold=4.621e+02, percent-clipped=1.0 2023-03-26 00:05:28,432 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.5580, 1.4290, 1.4809, 0.8969, 1.4510, 1.7190, 1.6768, 1.3961], device='cuda:2'), covar=tensor([0.1244, 0.0762, 0.0599, 0.0887, 0.0525, 0.0655, 0.0423, 0.0761], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0149, 0.0113, 0.0127, 0.0125, 0.0113, 0.0140, 0.0139], device='cuda:2'), out_proj_covar=tensor([9.2975e-05, 1.1110e-04, 8.2870e-05, 9.2943e-05, 9.0865e-05, 8.3447e-05, 1.0444e-04, 1.0283e-04], device='cuda:2') 2023-03-26 00:05:38,382 INFO [finetune.py:976] (2/7) Epoch 2, batch 2800, loss[loss=0.2022, simple_loss=0.2592, pruned_loss=0.07262, over 4801.00 frames. ], tot_loss[loss=0.2737, simple_loss=0.3173, pruned_loss=0.1151, over 952471.05 frames. ], batch size: 25, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:05:59,992 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-26 00:06:31,703 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3465, 0.5989, 1.2970, 1.0989, 1.0772, 1.0260, 0.9599, 1.1520], device='cuda:2'), covar=tensor([1.2168, 2.3222, 1.7162, 1.9048, 2.1371, 1.4375, 2.5270, 1.5618], device='cuda:2'), in_proj_covar=tensor([0.0222, 0.0255, 0.0243, 0.0268, 0.0244, 0.0217, 0.0277, 0.0214], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 00:06:44,212 INFO [finetune.py:976] (2/7) Epoch 2, batch 2850, loss[loss=0.2429, simple_loss=0.3091, pruned_loss=0.08839, over 4860.00 frames. 
], tot_loss[loss=0.2731, simple_loss=0.3163, pruned_loss=0.1149, over 952752.11 frames. ], batch size: 31, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:07:08,736 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.223e+02 1.882e+02 2.289e+02 2.777e+02 4.779e+02, threshold=4.578e+02, percent-clipped=1.0 2023-03-26 00:07:13,354 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=8621.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:07:18,069 INFO [finetune.py:976] (2/7) Epoch 2, batch 2900, loss[loss=0.268, simple_loss=0.3298, pruned_loss=0.1031, over 4849.00 frames. ], tot_loss[loss=0.2753, simple_loss=0.3193, pruned_loss=0.1157, over 954991.73 frames. ], batch size: 49, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:07:27,329 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8102, 1.5336, 1.3765, 1.6537, 1.4863, 1.4927, 1.4634, 2.2556], device='cuda:2'), covar=tensor([1.8323, 1.8708, 1.4646, 1.9507, 1.5810, 1.0731, 2.0038, 0.5062], device='cuda:2'), in_proj_covar=tensor([0.0247, 0.0230, 0.0208, 0.0266, 0.0222, 0.0188, 0.0227, 0.0170], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 00:07:30,897 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5620, 1.5040, 1.4589, 1.6071, 1.1682, 3.2261, 1.2533, 1.8285], device='cuda:2'), covar=tensor([0.3509, 0.2374, 0.2175, 0.2321, 0.1944, 0.0211, 0.2871, 0.1519], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0107, 0.0114, 0.0115, 0.0111, 0.0094, 0.0098, 0.0093], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0003, 0.0005, 0.0004], device='cuda:2') 2023-03-26 00:07:50,906 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-26 00:07:51,240 INFO [finetune.py:976] (2/7) Epoch 2, batch 2950, loss[loss=0.2766, simple_loss=0.3151, pruned_loss=0.1191, over 4714.00 frames. ], tot_loss[loss=0.2801, simple_loss=0.3244, pruned_loss=0.118, over 954371.92 frames. ], batch size: 23, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:08:17,805 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.163e+02 1.871e+02 2.334e+02 2.881e+02 5.890e+02, threshold=4.669e+02, percent-clipped=2.0 2023-03-26 00:08:19,843 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6379, 1.5007, 1.1994, 1.4939, 1.6763, 1.3398, 2.0561, 1.5366], device='cuda:2'), covar=tensor([0.2915, 0.5730, 0.6390, 0.5778, 0.4276, 0.3263, 0.5159, 0.4285], device='cuda:2'), in_proj_covar=tensor([0.0163, 0.0195, 0.0237, 0.0251, 0.0213, 0.0181, 0.0202, 0.0187], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 00:08:27,818 INFO [finetune.py:976] (2/7) Epoch 2, batch 3000, loss[loss=0.2839, simple_loss=0.3362, pruned_loss=0.1158, over 4906.00 frames. ], tot_loss[loss=0.2819, simple_loss=0.3262, pruned_loss=0.1188, over 955371.66 frames. 
], batch size: 36, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:08:27,818 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 00:08:32,831 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7463, 1.5681, 1.5953, 1.7073, 2.0551, 1.6207, 1.2675, 1.5207], device='cuda:2'), covar=tensor([0.2188, 0.2547, 0.1999, 0.1885, 0.1954, 0.1333, 0.3057, 0.1844], device='cuda:2'), in_proj_covar=tensor([0.0221, 0.0203, 0.0189, 0.0175, 0.0225, 0.0167, 0.0205, 0.0178], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 00:08:43,568 INFO [finetune.py:1010] (2/7) Epoch 2, validation: loss=0.1956, simple_loss=0.2636, pruned_loss=0.06384, over 2265189.00 frames. 2023-03-26 00:08:43,568 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6161MB 2023-03-26 00:09:09,157 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=8760.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:09:23,634 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-26 00:09:26,209 INFO [finetune.py:976] (2/7) Epoch 2, batch 3050, loss[loss=0.2822, simple_loss=0.3372, pruned_loss=0.1136, over 4909.00 frames. ], tot_loss[loss=0.2819, simple_loss=0.3264, pruned_loss=0.1187, over 956786.41 frames. ], batch size: 36, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:09:47,777 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9885, 1.7949, 1.5789, 1.8296, 2.0829, 1.6966, 2.3895, 1.8269], device='cuda:2'), covar=tensor([0.2525, 0.4746, 0.5544, 0.4873, 0.3264, 0.2769, 0.3429, 0.3719], device='cuda:2'), in_proj_covar=tensor([0.0163, 0.0194, 0.0237, 0.0251, 0.0212, 0.0181, 0.0203, 0.0187], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 00:10:07,397 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.149e+02 1.935e+02 2.296e+02 2.584e+02 4.666e+02, threshold=4.592e+02, percent-clipped=0.0 2023-03-26 00:10:11,744 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=8821.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:10:15,950 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7089, 1.4627, 1.1515, 1.4257, 1.3960, 1.3341, 1.3344, 2.3415], device='cuda:2'), covar=tensor([1.9564, 1.9006, 1.5349, 2.1487, 1.5755, 1.0764, 1.9260, 0.5108], device='cuda:2'), in_proj_covar=tensor([0.0247, 0.0230, 0.0209, 0.0266, 0.0222, 0.0188, 0.0227, 0.0170], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 00:10:16,913 INFO [finetune.py:976] (2/7) Epoch 2, batch 3100, loss[loss=0.2768, simple_loss=0.3166, pruned_loss=0.1185, over 4829.00 frames. ], tot_loss[loss=0.2792, simple_loss=0.3238, pruned_loss=0.1173, over 958316.37 frames. 
], batch size: 38, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:10:36,841 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4214, 2.1856, 1.9817, 0.9703, 2.0228, 1.9598, 1.6492, 1.9841], device='cuda:2'), covar=tensor([0.0944, 0.0857, 0.1620, 0.2283, 0.1476, 0.2050, 0.2052, 0.1126], device='cuda:2'), in_proj_covar=tensor([0.0163, 0.0189, 0.0200, 0.0184, 0.0210, 0.0205, 0.0210, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 00:10:59,133 INFO [finetune.py:976] (2/7) Epoch 2, batch 3150, loss[loss=0.2585, simple_loss=0.308, pruned_loss=0.1046, over 4928.00 frames. ], tot_loss[loss=0.276, simple_loss=0.3198, pruned_loss=0.1161, over 957370.59 frames. ], batch size: 42, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:11:04,147 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.93 vs. limit=2.0 2023-03-26 00:11:28,135 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.188e+02 1.838e+02 2.165e+02 2.835e+02 5.909e+02, threshold=4.329e+02, percent-clipped=1.0 2023-03-26 00:11:33,407 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=8921.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:11:43,243 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=8927.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:11:43,732 INFO [finetune.py:976] (2/7) Epoch 2, batch 3200, loss[loss=0.3149, simple_loss=0.3348, pruned_loss=0.1475, over 4759.00 frames. ], tot_loss[loss=0.2695, simple_loss=0.3136, pruned_loss=0.1127, over 956169.30 frames. ], batch size: 54, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:12:02,983 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4612, 1.1893, 1.2610, 1.1598, 1.5252, 1.6203, 1.4175, 1.1135], device='cuda:2'), covar=tensor([0.0234, 0.0375, 0.0518, 0.0380, 0.0291, 0.0322, 0.0246, 0.0439], device='cuda:2'), in_proj_covar=tensor([0.0082, 0.0112, 0.0132, 0.0112, 0.0103, 0.0097, 0.0087, 0.0108], device='cuda:2'), out_proj_covar=tensor([6.4134e-05, 8.8845e-05, 1.0677e-04, 8.8797e-05, 8.1732e-05, 7.2095e-05, 6.7177e-05, 8.4346e-05], device='cuda:2') 2023-03-26 00:12:21,692 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1618, 1.4775, 0.9738, 1.9268, 2.2455, 1.7392, 1.6583, 2.0088], device='cuda:2'), covar=tensor([0.1290, 0.1820, 0.2156, 0.1073, 0.1993, 0.2055, 0.1148, 0.1633], device='cuda:2'), in_proj_covar=tensor([0.0092, 0.0097, 0.0115, 0.0092, 0.0124, 0.0096, 0.0098, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 00:12:21,706 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7625, 1.8666, 1.7058, 1.8112, 1.0757, 3.8584, 1.6237, 2.2020], device='cuda:2'), covar=tensor([0.2942, 0.2017, 0.1826, 0.2001, 0.1908, 0.0146, 0.2492, 0.1294], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0108, 0.0113, 0.0115, 0.0111, 0.0094, 0.0098, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0003, 0.0005, 0.0004], device='cuda:2') 2023-03-26 00:12:22,922 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7008, 1.4790, 1.2805, 1.2953, 1.7492, 1.9214, 1.7354, 1.2196], device='cuda:2'), covar=tensor([0.0267, 0.0471, 0.0666, 0.0425, 0.0315, 0.0305, 0.0258, 0.0528], device='cuda:2'), in_proj_covar=tensor([0.0082, 0.0113, 0.0132, 0.0112, 0.0103, 0.0097, 0.0087, 0.0108], device='cuda:2'), 
out_proj_covar=tensor([6.4308e-05, 8.8996e-05, 1.0698e-04, 8.8941e-05, 8.1947e-05, 7.2311e-05, 6.7385e-05, 8.4568e-05], device='cuda:2') 2023-03-26 00:12:24,180 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7593, 0.9277, 1.3104, 1.3070, 1.2418, 1.3292, 1.2303, 1.3586], device='cuda:2'), covar=tensor([1.9180, 3.8933, 2.7690, 3.0191, 3.4840, 2.1288, 4.2646, 2.5879], device='cuda:2'), in_proj_covar=tensor([0.0224, 0.0256, 0.0245, 0.0269, 0.0245, 0.0218, 0.0278, 0.0215], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 00:12:25,339 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=8969.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:12:30,760 INFO [finetune.py:976] (2/7) Epoch 2, batch 3250, loss[loss=0.3024, simple_loss=0.348, pruned_loss=0.1284, over 4832.00 frames. ], tot_loss[loss=0.2714, simple_loss=0.3153, pruned_loss=0.1138, over 955767.28 frames. ], batch size: 47, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:12:36,288 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=8986.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:12:37,531 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=8988.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:13:02,670 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=9013.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:13:03,776 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.184e+02 1.894e+02 2.281e+02 2.922e+02 4.541e+02, threshold=4.561e+02, percent-clipped=3.0 2023-03-26 00:13:11,707 INFO [finetune.py:976] (2/7) Epoch 2, batch 3300, loss[loss=0.2648, simple_loss=0.32, pruned_loss=0.1048, over 4801.00 frames. ], tot_loss[loss=0.2758, simple_loss=0.3197, pruned_loss=0.1159, over 952394.70 frames. ], batch size: 45, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:13:25,340 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=9047.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:13:42,682 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=9074.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:13:44,938 INFO [finetune.py:976] (2/7) Epoch 2, batch 3350, loss[loss=0.3335, simple_loss=0.3722, pruned_loss=0.1474, over 4822.00 frames. ], tot_loss[loss=0.2797, simple_loss=0.3233, pruned_loss=0.118, over 952510.02 frames. ], batch size: 39, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:14:20,188 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.430e+02 1.942e+02 2.307e+02 2.996e+02 6.023e+02, threshold=4.614e+02, percent-clipped=2.0 2023-03-26 00:14:20,873 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=9116.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:14:28,092 INFO [finetune.py:976] (2/7) Epoch 2, batch 3400, loss[loss=0.2463, simple_loss=0.298, pruned_loss=0.09727, over 4826.00 frames. ], tot_loss[loss=0.2798, simple_loss=0.3235, pruned_loss=0.1181, over 952821.75 frames. ], batch size: 30, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:15:23,465 INFO [finetune.py:976] (2/7) Epoch 2, batch 3450, loss[loss=0.2785, simple_loss=0.3163, pruned_loss=0.1204, over 4822.00 frames. ], tot_loss[loss=0.2785, simple_loss=0.3225, pruned_loss=0.1172, over 953184.18 frames. 
], batch size: 33, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:15:33,711 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0610, 2.7012, 1.9119, 1.4955, 2.9019, 2.6301, 2.2617, 2.1723], device='cuda:2'), covar=tensor([0.0932, 0.0579, 0.1031, 0.1190, 0.0424, 0.0841, 0.0932, 0.1129], device='cuda:2'), in_proj_covar=tensor([0.0138, 0.0132, 0.0143, 0.0130, 0.0108, 0.0140, 0.0147, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 00:15:49,826 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6913, 2.4223, 2.6322, 2.7295, 3.3252, 2.6393, 2.2953, 2.2151], device='cuda:2'), covar=tensor([0.2101, 0.2051, 0.1617, 0.1609, 0.1595, 0.1070, 0.2322, 0.1714], device='cuda:2'), in_proj_covar=tensor([0.0223, 0.0205, 0.0191, 0.0177, 0.0227, 0.0169, 0.0208, 0.0180], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 00:15:59,479 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.266e+02 1.993e+02 2.349e+02 2.850e+02 4.291e+02, threshold=4.698e+02, percent-clipped=0.0 2023-03-26 00:16:12,592 INFO [finetune.py:976] (2/7) Epoch 2, batch 3500, loss[loss=0.3019, simple_loss=0.3366, pruned_loss=0.1336, over 4865.00 frames. ], tot_loss[loss=0.2729, simple_loss=0.3169, pruned_loss=0.1144, over 951396.48 frames. ], batch size: 34, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:17:13,577 INFO [finetune.py:976] (2/7) Epoch 2, batch 3550, loss[loss=0.2412, simple_loss=0.2682, pruned_loss=0.1071, over 3942.00 frames. ], tot_loss[loss=0.2695, simple_loss=0.3134, pruned_loss=0.1128, over 951817.71 frames. ], batch size: 17, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:17:16,743 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=9283.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:17:21,800 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-26 00:17:53,394 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.215e+02 1.769e+02 2.188e+02 2.766e+02 5.069e+02, threshold=4.376e+02, percent-clipped=2.0 2023-03-26 00:18:09,360 INFO [finetune.py:976] (2/7) Epoch 2, batch 3600, loss[loss=0.2095, simple_loss=0.2681, pruned_loss=0.07546, over 4777.00 frames. ], tot_loss[loss=0.2658, simple_loss=0.3099, pruned_loss=0.1109, over 952970.27 frames. ], batch size: 26, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:18:23,000 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=9342.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:18:44,748 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3973, 2.9054, 2.1000, 1.7333, 3.1488, 2.8420, 2.5422, 2.3631], device='cuda:2'), covar=tensor([0.0863, 0.0594, 0.1000, 0.1207, 0.0262, 0.0809, 0.0946, 0.1055], device='cuda:2'), in_proj_covar=tensor([0.0137, 0.0132, 0.0142, 0.0130, 0.0108, 0.0139, 0.0145, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 00:18:45,310 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=9369.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:18:51,738 INFO [finetune.py:976] (2/7) Epoch 2, batch 3650, loss[loss=0.2417, simple_loss=0.2725, pruned_loss=0.1055, over 3995.00 frames. ], tot_loss[loss=0.2691, simple_loss=0.3131, pruned_loss=0.1126, over 950895.24 frames. 
], batch size: 17, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:19:24,496 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=9414.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:19:24,983 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.270e+02 1.902e+02 2.269e+02 2.850e+02 5.426e+02, threshold=4.539e+02, percent-clipped=4.0 2023-03-26 00:19:31,906 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=9416.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:19:45,283 INFO [finetune.py:976] (2/7) Epoch 2, batch 3700, loss[loss=0.2489, simple_loss=0.3072, pruned_loss=0.09527, over 4868.00 frames. ], tot_loss[loss=0.2742, simple_loss=0.3183, pruned_loss=0.1151, over 949587.99 frames. ], batch size: 31, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:20:15,847 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=9464.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:20:23,946 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=9475.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:20:26,041 INFO [finetune.py:976] (2/7) Epoch 2, batch 3750, loss[loss=0.2729, simple_loss=0.3249, pruned_loss=0.1105, over 4816.00 frames. ], tot_loss[loss=0.2766, simple_loss=0.3211, pruned_loss=0.116, over 950885.05 frames. ], batch size: 39, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:20:28,009 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0 2023-03-26 00:20:55,392 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.357e+02 1.893e+02 2.405e+02 2.686e+02 6.929e+02, threshold=4.810e+02, percent-clipped=1.0 2023-03-26 00:21:01,529 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.53 vs. limit=2.0 2023-03-26 00:21:10,674 INFO [finetune.py:976] (2/7) Epoch 2, batch 3800, loss[loss=0.276, simple_loss=0.3, pruned_loss=0.1261, over 4745.00 frames. ], tot_loss[loss=0.2775, simple_loss=0.3219, pruned_loss=0.1165, over 951616.75 frames. ], batch size: 23, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:21:30,853 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-26 00:22:03,070 INFO [finetune.py:976] (2/7) Epoch 2, batch 3850, loss[loss=0.2512, simple_loss=0.3029, pruned_loss=0.09975, over 4906.00 frames. ], tot_loss[loss=0.2753, simple_loss=0.3196, pruned_loss=0.1155, over 951591.55 frames. ], batch size: 36, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:22:07,223 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=9583.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:22:07,875 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=9584.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:22:19,081 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.27 vs. 
limit=5.0 2023-03-26 00:22:19,549 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1840, 1.1556, 1.3248, 0.6198, 1.0158, 1.4412, 1.4721, 1.3195], device='cuda:2'), covar=tensor([0.0885, 0.0620, 0.0440, 0.0598, 0.0460, 0.0461, 0.0303, 0.0487], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0153, 0.0116, 0.0132, 0.0129, 0.0116, 0.0143, 0.0142], device='cuda:2'), out_proj_covar=tensor([9.5393e-05, 1.1435e-04, 8.4964e-05, 9.6800e-05, 9.3939e-05, 8.5853e-05, 1.0683e-04, 1.0539e-04], device='cuda:2') 2023-03-26 00:22:33,608 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.288e+02 1.890e+02 2.331e+02 2.867e+02 5.576e+02, threshold=4.662e+02, percent-clipped=3.0 2023-03-26 00:22:48,508 INFO [finetune.py:976] (2/7) Epoch 2, batch 3900, loss[loss=0.2634, simple_loss=0.3224, pruned_loss=0.1022, over 4910.00 frames. ], tot_loss[loss=0.2702, simple_loss=0.3147, pruned_loss=0.1128, over 952846.15 frames. ], batch size: 36, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:22:55,986 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=9631.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:23:08,229 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=9642.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:23:10,110 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=9645.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:23:28,458 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=9658.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:23:29,659 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.1310, 3.5706, 3.7132, 3.9737, 3.8338, 3.6494, 4.2246, 1.3024], device='cuda:2'), covar=tensor([0.0752, 0.0807, 0.0840, 0.0944, 0.1271, 0.1304, 0.0665, 0.5213], device='cuda:2'), in_proj_covar=tensor([0.0370, 0.0247, 0.0275, 0.0297, 0.0344, 0.0289, 0.0313, 0.0302], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 00:23:40,492 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=9669.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:23:46,463 INFO [finetune.py:976] (2/7) Epoch 2, batch 3950, loss[loss=0.2896, simple_loss=0.3276, pruned_loss=0.1258, over 4814.00 frames. ], tot_loss[loss=0.2672, simple_loss=0.3112, pruned_loss=0.1116, over 951636.94 frames. ], batch size: 51, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:24:00,116 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=9690.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:24:10,406 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.14 vs. limit=5.0 2023-03-26 00:24:28,618 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.008e+02 1.967e+02 2.302e+02 2.784e+02 7.100e+02, threshold=4.604e+02, percent-clipped=2.0 2023-03-26 00:24:29,311 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=9717.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:24:30,636 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=9719.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:24:36,431 INFO [finetune.py:976] (2/7) Epoch 2, batch 4000, loss[loss=0.2646, simple_loss=0.3023, pruned_loss=0.1135, over 4704.00 frames. ], tot_loss[loss=0.2674, simple_loss=0.3113, pruned_loss=0.1118, over 952722.80 frames. 
], batch size: 23, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:24:48,457 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=9743.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:24:48,544 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0 2023-03-26 00:25:10,942 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=9770.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:25:10,991 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5738, 1.6867, 1.6318, 1.0000, 1.9540, 1.8546, 1.6742, 1.5370], device='cuda:2'), covar=tensor([0.0843, 0.0755, 0.0872, 0.1219, 0.0546, 0.0882, 0.0838, 0.1247], device='cuda:2'), in_proj_covar=tensor([0.0137, 0.0132, 0.0142, 0.0129, 0.0108, 0.0139, 0.0146, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 00:25:16,246 INFO [finetune.py:976] (2/7) Epoch 2, batch 4050, loss[loss=0.2753, simple_loss=0.2936, pruned_loss=0.1285, over 4470.00 frames. ], tot_loss[loss=0.2691, simple_loss=0.3136, pruned_loss=0.1123, over 952653.46 frames. ], batch size: 19, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:25:21,472 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6985, 1.7513, 1.6048, 1.0474, 2.0460, 1.8568, 1.7214, 1.5760], device='cuda:2'), covar=tensor([0.0755, 0.0734, 0.0862, 0.1102, 0.0495, 0.0780, 0.0822, 0.1165], device='cuda:2'), in_proj_covar=tensor([0.0138, 0.0132, 0.0143, 0.0130, 0.0109, 0.0140, 0.0146, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 00:25:37,129 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=9804.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:25:44,424 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.258e+02 1.934e+02 2.184e+02 2.585e+02 5.351e+02, threshold=4.368e+02, percent-clipped=2.0 2023-03-26 00:25:57,342 INFO [finetune.py:976] (2/7) Epoch 2, batch 4100, loss[loss=0.3312, simple_loss=0.3583, pruned_loss=0.152, over 4241.00 frames. ], tot_loss[loss=0.2728, simple_loss=0.318, pruned_loss=0.1138, over 953290.62 frames. ], batch size: 65, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:26:34,082 INFO [finetune.py:976] (2/7) Epoch 2, batch 4150, loss[loss=0.304, simple_loss=0.3513, pruned_loss=0.1283, over 4906.00 frames. ], tot_loss[loss=0.275, simple_loss=0.32, pruned_loss=0.115, over 953704.84 frames. ], batch size: 35, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:26:36,585 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2299, 1.8483, 2.5674, 3.9178, 2.7064, 2.6383, 0.6834, 3.0956], device='cuda:2'), covar=tensor([0.1645, 0.1506, 0.1312, 0.0492, 0.0807, 0.1710, 0.2156, 0.0639], device='cuda:2'), in_proj_covar=tensor([0.0103, 0.0120, 0.0139, 0.0164, 0.0105, 0.0147, 0.0131, 0.0108], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2') 2023-03-26 00:27:05,051 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.284e+02 1.789e+02 2.151e+02 2.605e+02 5.306e+02, threshold=4.302e+02, percent-clipped=2.0 2023-03-26 00:27:17,333 INFO [finetune.py:976] (2/7) Epoch 2, batch 4200, loss[loss=0.2178, simple_loss=0.2705, pruned_loss=0.08257, over 4809.00 frames. ], tot_loss[loss=0.2714, simple_loss=0.3171, pruned_loss=0.1128, over 951221.97 frames. 
], batch size: 25, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:27:25,690 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=9940.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:27:58,482 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.3588, 4.6318, 4.8124, 5.1828, 5.0366, 4.7183, 5.4601, 1.6797], device='cuda:2'), covar=tensor([0.0642, 0.0750, 0.0653, 0.0712, 0.1065, 0.1411, 0.0554, 0.5213], device='cuda:2'), in_proj_covar=tensor([0.0370, 0.0246, 0.0276, 0.0297, 0.0343, 0.0288, 0.0314, 0.0302], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 00:28:05,867 INFO [finetune.py:976] (2/7) Epoch 2, batch 4250, loss[loss=0.2157, simple_loss=0.272, pruned_loss=0.07973, over 4793.00 frames. ], tot_loss[loss=0.2709, simple_loss=0.3158, pruned_loss=0.113, over 951520.30 frames. ], batch size: 25, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:28:41,979 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=10014.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:28:47,607 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.000e+02 1.798e+02 2.175e+02 2.768e+02 4.624e+02, threshold=4.351e+02, percent-clipped=3.0 2023-03-26 00:29:00,124 INFO [finetune.py:976] (2/7) Epoch 2, batch 4300, loss[loss=0.2457, simple_loss=0.294, pruned_loss=0.09872, over 4863.00 frames. ], tot_loss[loss=0.2675, simple_loss=0.3124, pruned_loss=0.1113, over 954077.34 frames. ], batch size: 44, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:29:12,680 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=10039.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:29:36,688 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=10062.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 00:29:46,293 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=10070.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:29:52,920 INFO [finetune.py:976] (2/7) Epoch 2, batch 4350, loss[loss=0.2686, simple_loss=0.3025, pruned_loss=0.1173, over 4863.00 frames. ], tot_loss[loss=0.2639, simple_loss=0.3089, pruned_loss=0.1095, over 954918.26 frames. 
], batch size: 49, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:30:06,664 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=10099.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:30:07,321 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=10100.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 00:30:16,922 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9828, 1.1144, 0.8929, 1.6270, 2.0949, 1.3525, 1.4481, 1.7882], device='cuda:2'), covar=tensor([0.1464, 0.2403, 0.2212, 0.1286, 0.2084, 0.2002, 0.1457, 0.1881], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0098, 0.0118, 0.0094, 0.0126, 0.0098, 0.0100, 0.0096], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 00:30:25,353 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.331e+02 1.930e+02 2.275e+02 2.717e+02 4.483e+02, threshold=4.550e+02, percent-clipped=1.0 2023-03-26 00:30:27,179 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=10118.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:30:34,425 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=10123.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 00:30:37,375 INFO [finetune.py:976] (2/7) Epoch 2, batch 4400, loss[loss=0.3565, simple_loss=0.3905, pruned_loss=0.1612, over 4280.00 frames. ], tot_loss[loss=0.2655, simple_loss=0.3102, pruned_loss=0.1104, over 954856.38 frames. ], batch size: 65, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:31:21,131 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4387, 1.3784, 1.6438, 2.3352, 1.6425, 2.1496, 0.8914, 1.9378], device='cuda:2'), covar=tensor([0.1711, 0.1339, 0.1111, 0.0739, 0.0872, 0.1142, 0.1517, 0.0792], device='cuda:2'), in_proj_covar=tensor([0.0104, 0.0121, 0.0140, 0.0166, 0.0106, 0.0148, 0.0132, 0.0109], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2') 2023-03-26 00:31:35,876 INFO [finetune.py:976] (2/7) Epoch 2, batch 4450, loss[loss=0.2908, simple_loss=0.344, pruned_loss=0.1188, over 4824.00 frames. ], tot_loss[loss=0.2691, simple_loss=0.3145, pruned_loss=0.1119, over 954781.84 frames. 
], batch size: 38, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:31:37,186 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8967, 4.3525, 4.1469, 2.6128, 4.4566, 3.3565, 0.7358, 2.9542], device='cuda:2'), covar=tensor([0.2698, 0.1876, 0.1569, 0.2828, 0.0931, 0.0883, 0.5212, 0.1603], device='cuda:2'), in_proj_covar=tensor([0.0158, 0.0169, 0.0168, 0.0130, 0.0158, 0.0121, 0.0148, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 00:31:56,211 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7841, 0.9880, 1.5483, 1.4010, 1.3744, 1.3255, 1.2817, 1.4274], device='cuda:2'), covar=tensor([1.2201, 2.4174, 1.8156, 2.0527, 2.2087, 1.5428, 2.5510, 1.6525], device='cuda:2'), in_proj_covar=tensor([0.0226, 0.0257, 0.0248, 0.0270, 0.0246, 0.0219, 0.0280, 0.0217], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 00:32:18,678 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.270e+02 1.960e+02 2.424e+02 3.015e+02 7.276e+02, threshold=4.848e+02, percent-clipped=5.0 2023-03-26 00:32:36,845 INFO [finetune.py:976] (2/7) Epoch 2, batch 4500, loss[loss=0.2533, simple_loss=0.2957, pruned_loss=0.1055, over 4045.00 frames. ], tot_loss[loss=0.2698, simple_loss=0.3153, pruned_loss=0.1121, over 952669.90 frames. ], batch size: 17, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:32:44,226 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=10240.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:33:20,853 INFO [finetune.py:976] (2/7) Epoch 2, batch 4550, loss[loss=0.2396, simple_loss=0.2995, pruned_loss=0.08989, over 4817.00 frames. ], tot_loss[loss=0.2709, simple_loss=0.3168, pruned_loss=0.1125, over 951599.82 frames. ], batch size: 30, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:33:32,438 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=10288.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:33:59,117 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=10314.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:34:00,222 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.209e+02 1.803e+02 2.127e+02 2.576e+02 5.771e+02, threshold=4.255e+02, percent-clipped=1.0 2023-03-26 00:34:04,994 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2113, 1.8602, 1.6399, 1.8184, 1.8482, 1.7926, 1.7568, 2.7135], device='cuda:2'), covar=tensor([1.5158, 1.5189, 1.3089, 1.8529, 1.2641, 0.8764, 1.6594, 0.4158], device='cuda:2'), in_proj_covar=tensor([0.0255, 0.0236, 0.0212, 0.0273, 0.0227, 0.0191, 0.0232, 0.0175], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 00:34:14,431 INFO [finetune.py:976] (2/7) Epoch 2, batch 4600, loss[loss=0.2485, simple_loss=0.292, pruned_loss=0.1025, over 4805.00 frames. ], tot_loss[loss=0.2688, simple_loss=0.3154, pruned_loss=0.1111, over 951865.51 frames. ], batch size: 25, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:34:42,670 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.41 vs. 
limit=5.0 2023-03-26 00:34:50,214 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=10362.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:35:01,342 INFO [finetune.py:976] (2/7) Epoch 2, batch 4650, loss[loss=0.2708, simple_loss=0.3049, pruned_loss=0.1184, over 4897.00 frames. ], tot_loss[loss=0.2659, simple_loss=0.3123, pruned_loss=0.1098, over 953935.94 frames. ], batch size: 43, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:35:06,830 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9066, 1.2194, 1.0390, 1.6349, 2.2368, 1.1257, 1.5351, 1.6639], device='cuda:2'), covar=tensor([0.1568, 0.2334, 0.2062, 0.1367, 0.1979, 0.2037, 0.1416, 0.2079], device='cuda:2'), in_proj_covar=tensor([0.0092, 0.0098, 0.0117, 0.0094, 0.0125, 0.0097, 0.0099, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 00:35:12,310 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=10395.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 00:35:14,745 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=10399.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:35:21,412 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=10409.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:35:25,530 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.369e+02 1.834e+02 2.117e+02 2.478e+02 4.313e+02, threshold=4.233e+02, percent-clipped=1.0 2023-03-26 00:35:27,271 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=10418.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 00:35:34,604 INFO [finetune.py:976] (2/7) Epoch 2, batch 4700, loss[loss=0.2253, simple_loss=0.2709, pruned_loss=0.08987, over 4924.00 frames. ], tot_loss[loss=0.2621, simple_loss=0.3083, pruned_loss=0.108, over 955398.96 frames. ], batch size: 37, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:35:46,696 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=10447.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:35:50,994 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=10454.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:35:58,772 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0 2023-03-26 00:36:01,598 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=10470.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:36:02,257 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.86 vs. limit=2.0 2023-03-26 00:36:07,252 INFO [finetune.py:976] (2/7) Epoch 2, batch 4750, loss[loss=0.2312, simple_loss=0.2877, pruned_loss=0.08738, over 4901.00 frames. ], tot_loss[loss=0.259, simple_loss=0.3051, pruned_loss=0.1065, over 955791.84 frames. 
], batch size: 32, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:36:36,931 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=10515.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 00:36:42,343 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.257e+02 1.733e+02 2.215e+02 2.656e+02 7.843e+02, threshold=4.429e+02, percent-clipped=2.0 2023-03-26 00:36:42,473 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2816, 1.3493, 1.4184, 0.6264, 1.6666, 1.3719, 1.3241, 1.2667], device='cuda:2'), covar=tensor([0.0738, 0.0774, 0.0786, 0.1158, 0.0638, 0.0877, 0.0873, 0.1412], device='cuda:2'), in_proj_covar=tensor([0.0138, 0.0131, 0.0143, 0.0129, 0.0108, 0.0140, 0.0146, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 00:36:51,176 INFO [finetune.py:976] (2/7) Epoch 2, batch 4800, loss[loss=0.2761, simple_loss=0.3242, pruned_loss=0.1141, over 4903.00 frames. ], tot_loss[loss=0.2623, simple_loss=0.3087, pruned_loss=0.108, over 956852.65 frames. ], batch size: 37, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:37:26,098 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.7995, 1.8081, 2.0443, 1.2715, 1.8685, 2.1189, 2.0061, 1.7850], device='cuda:2'), covar=tensor([0.0907, 0.0598, 0.0426, 0.0576, 0.0381, 0.0574, 0.0369, 0.0500], device='cuda:2'), in_proj_covar=tensor([0.0127, 0.0154, 0.0116, 0.0133, 0.0129, 0.0117, 0.0144, 0.0142], device='cuda:2'), out_proj_covar=tensor([9.5960e-05, 1.1438e-04, 8.4904e-05, 9.7466e-05, 9.3680e-05, 8.6036e-05, 1.0725e-04, 1.0560e-04], device='cuda:2') 2023-03-26 00:37:43,276 INFO [finetune.py:976] (2/7) Epoch 2, batch 4850, loss[loss=0.3318, simple_loss=0.364, pruned_loss=0.1498, over 4841.00 frames. ], tot_loss[loss=0.2658, simple_loss=0.3126, pruned_loss=0.1095, over 954370.10 frames. ], batch size: 49, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:38:19,132 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-26 00:38:22,410 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.244e+02 1.945e+02 2.280e+02 2.839e+02 5.226e+02, threshold=4.560e+02, percent-clipped=1.0 2023-03-26 00:38:33,266 INFO [finetune.py:976] (2/7) Epoch 2, batch 4900, loss[loss=0.2354, simple_loss=0.2806, pruned_loss=0.09513, over 4717.00 frames. ], tot_loss[loss=0.2659, simple_loss=0.3134, pruned_loss=0.1093, over 955759.77 frames. ], batch size: 23, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:38:37,608 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5553, 1.0873, 0.8474, 1.3401, 1.9102, 0.6685, 1.1626, 1.4208], device='cuda:2'), covar=tensor([0.1605, 0.2241, 0.1970, 0.1413, 0.2188, 0.2144, 0.1546, 0.2062], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0098, 0.0118, 0.0094, 0.0125, 0.0097, 0.0100, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 00:39:15,445 INFO [finetune.py:976] (2/7) Epoch 2, batch 4950, loss[loss=0.295, simple_loss=0.3339, pruned_loss=0.128, over 4298.00 frames. ], tot_loss[loss=0.2675, simple_loss=0.3155, pruned_loss=0.1097, over 957299.60 frames. ], batch size: 66, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:39:21,589 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. 
limit=2.0 2023-03-26 00:39:27,296 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=10695.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:39:40,464 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.382e+02 1.832e+02 2.144e+02 2.563e+02 4.788e+02, threshold=4.289e+02, percent-clipped=1.0 2023-03-26 00:39:42,246 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=10718.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 00:39:48,191 INFO [finetune.py:976] (2/7) Epoch 2, batch 5000, loss[loss=0.2141, simple_loss=0.2789, pruned_loss=0.0747, over 4788.00 frames. ], tot_loss[loss=0.2632, simple_loss=0.3112, pruned_loss=0.1076, over 957270.91 frames. ], batch size: 29, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:39:59,862 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=10743.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:40:00,513 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=10744.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:40:13,900 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=10765.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:40:14,498 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=10766.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 00:40:19,853 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=10774.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:40:24,906 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7201, 1.6758, 2.0905, 1.4364, 1.7604, 2.0035, 1.6663, 2.1932], device='cuda:2'), covar=tensor([0.1763, 0.2172, 0.1385, 0.1960, 0.1089, 0.1616, 0.2773, 0.0992], device='cuda:2'), in_proj_covar=tensor([0.0207, 0.0208, 0.0206, 0.0199, 0.0181, 0.0226, 0.0216, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 00:40:25,990 INFO [finetune.py:976] (2/7) Epoch 2, batch 5050, loss[loss=0.2384, simple_loss=0.2837, pruned_loss=0.09653, over 4885.00 frames. ], tot_loss[loss=0.2589, simple_loss=0.3065, pruned_loss=0.1057, over 954358.22 frames. ], batch size: 35, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:40:48,600 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=10805.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:40:52,129 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=10810.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 00:40:55,640 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.117e+02 1.711e+02 2.025e+02 2.497e+02 4.623e+02, threshold=4.049e+02, percent-clipped=1.0 2023-03-26 00:41:03,401 INFO [finetune.py:976] (2/7) Epoch 2, batch 5100, loss[loss=0.2792, simple_loss=0.3116, pruned_loss=0.1234, over 4748.00 frames. ], tot_loss[loss=0.2567, simple_loss=0.3034, pruned_loss=0.105, over 950579.58 frames. ], batch size: 27, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:41:07,761 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=10835.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:41:37,837 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.07 vs. limit=5.0 2023-03-26 00:41:47,380 INFO [finetune.py:976] (2/7) Epoch 2, batch 5150, loss[loss=0.1809, simple_loss=0.2368, pruned_loss=0.0625, over 4728.00 frames. 
], tot_loss[loss=0.2591, simple_loss=0.305, pruned_loss=0.1066, over 950287.09 frames. ], batch size: 23, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:41:54,272 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=10885.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 00:41:54,404 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-26 00:42:14,419 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1638, 1.8157, 2.0058, 1.0817, 2.1338, 2.5064, 1.8918, 2.0984], device='cuda:2'), covar=tensor([0.1058, 0.0901, 0.0525, 0.0907, 0.0704, 0.0701, 0.0583, 0.0561], device='cuda:2'), in_proj_covar=tensor([0.0128, 0.0155, 0.0117, 0.0134, 0.0131, 0.0118, 0.0145, 0.0143], device='cuda:2'), out_proj_covar=tensor([9.6707e-05, 1.1549e-04, 8.5356e-05, 9.8863e-05, 9.4762e-05, 8.6755e-05, 1.0840e-04, 1.0582e-04], device='cuda:2') 2023-03-26 00:42:24,566 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6854, 1.4205, 1.8651, 2.8970, 2.1157, 2.1871, 1.0312, 2.2690], device='cuda:2'), covar=tensor([0.1907, 0.1679, 0.1444, 0.0654, 0.0869, 0.1222, 0.1897, 0.0815], device='cuda:2'), in_proj_covar=tensor([0.0104, 0.0120, 0.0140, 0.0165, 0.0106, 0.0147, 0.0132, 0.0108], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2') 2023-03-26 00:42:25,067 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.204e+02 1.747e+02 2.089e+02 2.521e+02 5.475e+02, threshold=4.178e+02, percent-clipped=1.0 2023-03-26 00:42:37,754 INFO [finetune.py:976] (2/7) Epoch 2, batch 5200, loss[loss=0.3396, simple_loss=0.3771, pruned_loss=0.1511, over 4821.00 frames. ], tot_loss[loss=0.2616, simple_loss=0.3086, pruned_loss=0.1072, over 950342.07 frames. ], batch size: 33, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:43:00,768 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=10946.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 00:43:29,722 INFO [finetune.py:976] (2/7) Epoch 2, batch 5250, loss[loss=0.2615, simple_loss=0.2971, pruned_loss=0.113, over 4928.00 frames. ], tot_loss[loss=0.2645, simple_loss=0.312, pruned_loss=0.1085, over 949265.93 frames. ], batch size: 33, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:43:33,015 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0 2023-03-26 00:43:52,520 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2945, 1.1559, 1.1491, 1.2408, 1.5005, 1.4864, 1.2824, 1.1204], device='cuda:2'), covar=tensor([0.0284, 0.0357, 0.0525, 0.0325, 0.0241, 0.0375, 0.0305, 0.0352], device='cuda:2'), in_proj_covar=tensor([0.0081, 0.0112, 0.0133, 0.0112, 0.0102, 0.0097, 0.0087, 0.0106], device='cuda:2'), out_proj_covar=tensor([6.3080e-05, 8.8211e-05, 1.0723e-04, 8.8769e-05, 8.0799e-05, 7.2238e-05, 6.7023e-05, 8.3415e-05], device='cuda:2') 2023-03-26 00:44:03,162 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.206e+02 2.014e+02 2.376e+02 2.957e+02 8.531e+02, threshold=4.753e+02, percent-clipped=3.0 2023-03-26 00:44:10,961 INFO [finetune.py:976] (2/7) Epoch 2, batch 5300, loss[loss=0.3592, simple_loss=0.3816, pruned_loss=0.1684, over 4805.00 frames. ], tot_loss[loss=0.2673, simple_loss=0.3146, pruned_loss=0.11, over 950573.82 frames. 
], batch size: 40, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:44:44,428 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=11065.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:44:58,486 INFO [finetune.py:976] (2/7) Epoch 2, batch 5350, loss[loss=0.2455, simple_loss=0.2986, pruned_loss=0.09624, over 4814.00 frames. ], tot_loss[loss=0.2647, simple_loss=0.3128, pruned_loss=0.1083, over 950037.02 frames. ], batch size: 41, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:45:16,015 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=11100.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:45:23,045 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=11110.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 00:45:25,460 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=11113.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:45:27,193 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.217e+02 1.830e+02 2.178e+02 2.684e+02 7.389e+02, threshold=4.357e+02, percent-clipped=1.0 2023-03-26 00:45:39,911 INFO [finetune.py:976] (2/7) Epoch 2, batch 5400, loss[loss=0.3052, simple_loss=0.3483, pruned_loss=0.131, over 4661.00 frames. ], tot_loss[loss=0.2622, simple_loss=0.31, pruned_loss=0.1072, over 950676.90 frames. ], batch size: 23, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:45:41,694 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=11130.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:46:01,167 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7759, 1.5383, 1.1645, 1.2440, 1.4461, 1.4010, 1.3869, 2.2444], device='cuda:2'), covar=tensor([1.3867, 1.1769, 1.1071, 1.4596, 1.1096, 0.7589, 1.3288, 0.4023], device='cuda:2'), in_proj_covar=tensor([0.0258, 0.0238, 0.0214, 0.0275, 0.0229, 0.0191, 0.0234, 0.0176], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 00:46:05,245 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=11158.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:46:22,030 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11174.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:46:23,945 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.76 vs. limit=2.0 2023-03-26 00:46:24,380 INFO [finetune.py:976] (2/7) Epoch 2, batch 5450, loss[loss=0.2609, simple_loss=0.2998, pruned_loss=0.111, over 4934.00 frames. ], tot_loss[loss=0.2567, simple_loss=0.3046, pruned_loss=0.1044, over 951209.94 frames. ], batch size: 33, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:46:25,851 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-26 00:46:55,232 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.192e+02 1.719e+02 1.995e+02 2.396e+02 4.116e+02, threshold=3.991e+02, percent-clipped=0.0 2023-03-26 00:47:04,429 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. limit=2.0 2023-03-26 00:47:11,560 INFO [finetune.py:976] (2/7) Epoch 2, batch 5500, loss[loss=0.2541, simple_loss=0.304, pruned_loss=0.1021, over 4820.00 frames. ], tot_loss[loss=0.2511, simple_loss=0.2996, pruned_loss=0.1013, over 951066.45 frames. 
], batch size: 40, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:47:21,363 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=11235.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:47:25,458 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=11241.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 00:48:00,490 INFO [finetune.py:976] (2/7) Epoch 2, batch 5550, loss[loss=0.2634, simple_loss=0.3226, pruned_loss=0.1021, over 4910.00 frames. ], tot_loss[loss=0.2536, simple_loss=0.3019, pruned_loss=0.1026, over 950484.57 frames. ], batch size: 32, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:48:05,644 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-26 00:48:30,779 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.394e+02 1.988e+02 2.263e+02 2.636e+02 5.646e+02, threshold=4.525e+02, percent-clipped=4.0 2023-03-26 00:48:33,827 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.65 vs. limit=5.0 2023-03-26 00:48:38,280 INFO [finetune.py:976] (2/7) Epoch 2, batch 5600, loss[loss=0.3008, simple_loss=0.3454, pruned_loss=0.1281, over 4933.00 frames. ], tot_loss[loss=0.2585, simple_loss=0.3075, pruned_loss=0.1047, over 952114.44 frames. ], batch size: 38, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:49:19,621 INFO [finetune.py:976] (2/7) Epoch 2, batch 5650, loss[loss=0.2707, simple_loss=0.3287, pruned_loss=0.1063, over 4910.00 frames. ], tot_loss[loss=0.2604, simple_loss=0.3105, pruned_loss=0.1052, over 953189.60 frames. ], batch size: 36, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:49:22,614 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.5158, 1.5246, 1.6802, 1.1816, 1.5965, 1.8647, 1.8969, 1.4974], device='cuda:2'), covar=tensor([0.1019, 0.0595, 0.0386, 0.0641, 0.0331, 0.0481, 0.0261, 0.0560], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0155, 0.0117, 0.0134, 0.0131, 0.0117, 0.0145, 0.0143], device='cuda:2'), out_proj_covar=tensor([9.6933e-05, 1.1555e-04, 8.5554e-05, 9.8770e-05, 9.5159e-05, 8.6679e-05, 1.0794e-04, 1.0575e-04], device='cuda:2') 2023-03-26 00:49:44,844 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=11400.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:49:54,703 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.185e+02 1.700e+02 2.043e+02 2.333e+02 3.537e+02, threshold=4.085e+02, percent-clipped=0.0 2023-03-26 00:50:01,882 INFO [finetune.py:976] (2/7) Epoch 2, batch 5700, loss[loss=0.2337, simple_loss=0.2713, pruned_loss=0.09802, over 3848.00 frames. ], tot_loss[loss=0.2582, simple_loss=0.3064, pruned_loss=0.105, over 934828.82 frames. 
], batch size: 16, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:50:03,178 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3168, 1.5333, 1.4244, 1.6860, 1.6708, 2.9997, 1.3628, 1.6994], device='cuda:2'), covar=tensor([0.1132, 0.1626, 0.1133, 0.1015, 0.1567, 0.0288, 0.1506, 0.1583], device='cuda:2'), in_proj_covar=tensor([0.0078, 0.0081, 0.0078, 0.0080, 0.0093, 0.0083, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-26 00:50:03,184 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=11430.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:50:13,812 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=11448.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:50:15,108 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7855, 2.5356, 2.0777, 2.7259, 2.5239, 2.3192, 2.3088, 3.5608], device='cuda:2'), covar=tensor([1.0174, 1.0334, 0.9166, 1.2924, 0.8878, 0.6011, 1.0913, 0.2563], device='cuda:2'), in_proj_covar=tensor([0.0261, 0.0240, 0.0216, 0.0277, 0.0230, 0.0193, 0.0235, 0.0178], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 00:50:33,841 INFO [finetune.py:976] (2/7) Epoch 3, batch 0, loss[loss=0.341, simple_loss=0.3793, pruned_loss=0.1513, over 4698.00 frames. ], tot_loss[loss=0.341, simple_loss=0.3793, pruned_loss=0.1513, over 4698.00 frames. ], batch size: 59, lr: 3.99e-03, grad_scale: 16.0 2023-03-26 00:50:33,841 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 00:50:41,862 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6586, 0.8140, 1.4841, 1.3907, 1.3352, 1.3129, 1.2005, 1.4032], device='cuda:2'), covar=tensor([1.3061, 2.1503, 1.6675, 1.9642, 2.0819, 1.4171, 2.4182, 1.4568], device='cuda:2'), in_proj_covar=tensor([0.0227, 0.0257, 0.0250, 0.0269, 0.0244, 0.0218, 0.0279, 0.0217], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 00:50:52,076 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6080, 1.4866, 1.8775, 2.8771, 2.1796, 2.1079, 0.9763, 2.3045], device='cuda:2'), covar=tensor([0.1836, 0.1658, 0.1386, 0.0593, 0.0841, 0.1331, 0.1974, 0.0733], device='cuda:2'), in_proj_covar=tensor([0.0103, 0.0120, 0.0139, 0.0164, 0.0105, 0.0146, 0.0130, 0.0108], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2') 2023-03-26 00:50:55,329 INFO [finetune.py:1010] (2/7) Epoch 3, validation: loss=0.1864, simple_loss=0.2566, pruned_loss=0.05807, over 2265189.00 frames. 
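Note on the loss fields: both validation records in this section satisfy loss = 0.5 * simple_loss + pruned_loss (Epoch 2: 0.5 * 0.2636 + 0.06384 = 0.1956; Epoch 3 just above: 0.5 * 0.2566 + 0.05807 = 0.1864), matching the run's simple_loss_scale of 0.5. A minimal sketch of that post-warm-up combination, for reading the log only; the function name is illustrative and not icefall's actual API, and the ramp icefall applies to the scales before warm_step is deliberately omitted:

def combine_losses(simple_loss: float, pruned_loss: float,
                   simple_loss_scale: float = 0.5) -> float:
    # Post-warm-up combination consistent with the logged values:
    # the pruned RNN-T loss enters at full weight, while the simple
    # (trivial-joiner) loss is down-weighted by simple_loss_scale.
    return simple_loss_scale * simple_loss + pruned_loss

# Epoch 3 validation record above: 0.5 * 0.2566 + 0.05807 = 0.18637
assert abs(combine_losses(0.2566, 0.05807) - 0.1864) < 5e-4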
2023-03-26 00:50:55,330 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6303MB 2023-03-26 00:51:16,098 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4351, 1.4154, 1.7753, 2.8098, 2.0008, 1.9896, 0.9360, 2.2189], device='cuda:2'), covar=tensor([0.1818, 0.1536, 0.1331, 0.0593, 0.0862, 0.1568, 0.1910, 0.0692], device='cuda:2'), in_proj_covar=tensor([0.0103, 0.0120, 0.0138, 0.0164, 0.0105, 0.0146, 0.0130, 0.0107], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2') 2023-03-26 00:51:20,331 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=11478.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:51:24,759 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.01 vs. limit=5.0 2023-03-26 00:51:38,814 INFO [finetune.py:976] (2/7) Epoch 3, batch 50, loss[loss=0.3147, simple_loss=0.336, pruned_loss=0.1467, over 4231.00 frames. ], tot_loss[loss=0.2693, simple_loss=0.3154, pruned_loss=0.1116, over 212834.08 frames. ], batch size: 66, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:51:45,831 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.161e+02 1.779e+02 2.075e+02 2.495e+02 4.593e+02, threshold=4.151e+02, percent-clipped=1.0 2023-03-26 00:51:48,298 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3151, 1.7447, 1.5259, 0.6759, 1.8450, 1.8306, 1.3948, 1.6457], device='cuda:2'), covar=tensor([0.0770, 0.1277, 0.1963, 0.2500, 0.1653, 0.2066, 0.2611, 0.1366], device='cuda:2'), in_proj_covar=tensor([0.0163, 0.0192, 0.0201, 0.0185, 0.0211, 0.0208, 0.0214, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 00:51:54,921 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=11530.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:51:56,207 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11532.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:51:56,292 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.17 vs. limit=5.0 2023-03-26 00:52:02,014 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=11541.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 00:52:12,048 INFO [finetune.py:976] (2/7) Epoch 3, batch 100, loss[loss=0.2466, simple_loss=0.2873, pruned_loss=0.1029, over 4864.00 frames. ], tot_loss[loss=0.2616, simple_loss=0.3071, pruned_loss=0.1081, over 376606.39 frames. ], batch size: 31, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:52:33,516 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=11589.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 00:52:36,422 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=11593.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:52:47,407 INFO [finetune.py:976] (2/7) Epoch 3, batch 150, loss[loss=0.2344, simple_loss=0.2751, pruned_loss=0.09687, over 4710.00 frames. ], tot_loss[loss=0.2558, simple_loss=0.3015, pruned_loss=0.1051, over 504984.06 frames. 
], batch size: 23, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:53:00,296 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.308e+02 1.853e+02 2.230e+02 2.582e+02 4.758e+02, threshold=4.459e+02, percent-clipped=3.0 2023-03-26 00:53:24,123 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11635.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:53:48,109 INFO [finetune.py:976] (2/7) Epoch 3, batch 200, loss[loss=0.3099, simple_loss=0.3271, pruned_loss=0.1463, over 4061.00 frames. ], tot_loss[loss=0.2528, simple_loss=0.2992, pruned_loss=0.1032, over 603266.45 frames. ], batch size: 17, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:54:35,486 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=11696.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:54:37,340 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11699.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:54:44,848 INFO [finetune.py:976] (2/7) Epoch 3, batch 250, loss[loss=0.2169, simple_loss=0.2657, pruned_loss=0.08405, over 4680.00 frames. ], tot_loss[loss=0.2554, simple_loss=0.3024, pruned_loss=0.1042, over 678592.39 frames. ], batch size: 23, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:55:02,879 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.255e+02 1.783e+02 2.124e+02 2.629e+02 6.988e+02, threshold=4.248e+02, percent-clipped=1.0 2023-03-26 00:55:32,487 INFO [finetune.py:976] (2/7) Epoch 3, batch 300, loss[loss=0.2881, simple_loss=0.3299, pruned_loss=0.1232, over 4931.00 frames. ], tot_loss[loss=0.2582, simple_loss=0.3062, pruned_loss=0.1052, over 739491.75 frames. ], batch size: 33, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:55:40,992 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=11760.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:55:50,148 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8401, 2.3298, 2.0452, 1.1254, 2.3879, 2.1233, 1.9572, 2.1460], device='cuda:2'), covar=tensor([0.0703, 0.1130, 0.1839, 0.2715, 0.1591, 0.2095, 0.2035, 0.1286], device='cuda:2'), in_proj_covar=tensor([0.0165, 0.0194, 0.0203, 0.0187, 0.0213, 0.0209, 0.0215, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 00:56:13,668 INFO [finetune.py:976] (2/7) Epoch 3, batch 350, loss[loss=0.2191, simple_loss=0.2749, pruned_loss=0.0816, over 4699.00 frames. ], tot_loss[loss=0.261, simple_loss=0.3088, pruned_loss=0.1066, over 784570.04 frames. ], batch size: 23, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:56:20,324 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.015e+02 1.891e+02 2.265e+02 2.576e+02 3.939e+02, threshold=4.529e+02, percent-clipped=0.0 2023-03-26 00:56:20,451 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11816.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:56:30,475 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=11830.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:56:45,043 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11846.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:56:51,650 INFO [finetune.py:976] (2/7) Epoch 3, batch 400, loss[loss=0.3087, simple_loss=0.3307, pruned_loss=0.1434, over 4756.00 frames. ], tot_loss[loss=0.2637, simple_loss=0.3123, pruned_loss=0.1076, over 822513.32 frames. 
], batch size: 26, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:57:20,642 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=11877.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:57:21,181 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=11878.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:57:33,124 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=11888.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:57:55,601 INFO [finetune.py:976] (2/7) Epoch 3, batch 450, loss[loss=0.2282, simple_loss=0.2757, pruned_loss=0.0903, over 4734.00 frames. ], tot_loss[loss=0.2629, simple_loss=0.3113, pruned_loss=0.1073, over 853495.14 frames. ], batch size: 59, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:57:56,364 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=11907.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:58:12,548 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.239e+02 1.814e+02 2.317e+02 2.793e+02 4.030e+02, threshold=4.633e+02, percent-clipped=0.0 2023-03-26 00:58:15,048 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11920.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:58:39,216 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11948.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:58:50,438 INFO [finetune.py:976] (2/7) Epoch 3, batch 500, loss[loss=0.2254, simple_loss=0.2851, pruned_loss=0.08288, over 4769.00 frames. ], tot_loss[loss=0.2583, simple_loss=0.3067, pruned_loss=0.1049, over 874910.87 frames. ], batch size: 26, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:59:19,995 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=11981.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:59:27,077 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11984.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:59:31,810 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=11991.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:59:46,544 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. limit=2.0 2023-03-26 00:59:46,862 INFO [finetune.py:976] (2/7) Epoch 3, batch 550, loss[loss=0.2238, simple_loss=0.2827, pruned_loss=0.08243, over 4804.00 frames. ], tot_loss[loss=0.255, simple_loss=0.3031, pruned_loss=0.1034, over 893674.05 frames. ], batch size: 25, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:59:49,293 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=12009.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:59:53,393 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.083e+02 1.776e+02 2.066e+02 2.700e+02 4.009e+02, threshold=4.133e+02, percent-clipped=0.0 2023-03-26 01:00:13,595 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=12045.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:00:21,762 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12055.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:00:22,311 INFO [finetune.py:976] (2/7) Epoch 3, batch 600, loss[loss=0.2827, simple_loss=0.3335, pruned_loss=0.116, over 4729.00 frames. ], tot_loss[loss=0.2538, simple_loss=0.3023, pruned_loss=0.1027, over 907389.53 frames. 
], batch size: 54, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 01:00:31,521 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8165, 1.5787, 1.3810, 1.5640, 1.9510, 2.0849, 1.7272, 1.4170], device='cuda:2'), covar=tensor([0.0244, 0.0384, 0.0567, 0.0343, 0.0250, 0.0314, 0.0378, 0.0416], device='cuda:2'), in_proj_covar=tensor([0.0082, 0.0112, 0.0135, 0.0114, 0.0104, 0.0098, 0.0088, 0.0108], device='cuda:2'), out_proj_covar=tensor([6.4232e-05, 8.8402e-05, 1.0906e-04, 9.0526e-05, 8.2360e-05, 7.2951e-05, 6.7628e-05, 8.4549e-05], device='cuda:2') 2023-03-26 01:00:41,444 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=12076.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:01:16,810 INFO [finetune.py:976] (2/7) Epoch 3, batch 650, loss[loss=0.281, simple_loss=0.3417, pruned_loss=0.1102, over 4751.00 frames. ], tot_loss[loss=0.2588, simple_loss=0.3075, pruned_loss=0.105, over 917633.29 frames. ], batch size: 54, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 01:01:20,056 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9568, 1.6684, 1.7555, 1.9224, 2.3537, 1.8354, 1.5921, 1.4431], device='cuda:2'), covar=tensor([0.2486, 0.2761, 0.2127, 0.2057, 0.2372, 0.1376, 0.3156, 0.2103], device='cuda:2'), in_proj_covar=tensor([0.0224, 0.0206, 0.0194, 0.0179, 0.0228, 0.0170, 0.0210, 0.0183], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:01:23,449 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.190e+02 1.901e+02 2.250e+02 2.651e+02 5.885e+02, threshold=4.501e+02, percent-clipped=2.0 2023-03-26 01:01:23,627 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2166, 1.9284, 1.5703, 1.9733, 1.9178, 1.7513, 1.7815, 2.8060], device='cuda:2'), covar=tensor([1.0688, 1.0649, 0.9286, 1.2336, 0.9157, 0.6345, 1.1603, 0.2962], device='cuda:2'), in_proj_covar=tensor([0.0262, 0.0241, 0.0216, 0.0278, 0.0231, 0.0193, 0.0235, 0.0179], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 01:01:50,151 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=12137.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:01:57,398 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6858, 1.1703, 1.4160, 1.4128, 1.2886, 1.3443, 1.3453, 1.3538], device='cuda:2'), covar=tensor([1.2301, 2.0137, 1.5459, 1.7387, 1.9255, 1.3378, 2.3451, 1.3664], device='cuda:2'), in_proj_covar=tensor([0.0227, 0.0257, 0.0252, 0.0269, 0.0245, 0.0218, 0.0279, 0.0218], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 01:02:02,051 INFO [finetune.py:976] (2/7) Epoch 3, batch 700, loss[loss=0.2536, simple_loss=0.3067, pruned_loss=0.1003, over 4813.00 frames. ], tot_loss[loss=0.2591, simple_loss=0.3084, pruned_loss=0.1049, over 926129.31 frames. 
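[Annotation] The zipformer.py:2441 records print attn_weights_entropy: one entropy value per attention head, a diagnostic for heads that collapse to near-deterministic attention (entropy near 0) or stay near-uniform (entropy near log of the source length). A minimal sketch of the statistic, assuming weights of shape (num_heads, tgt_len, src_len) that already sum to 1 over the last axis; the exact reduction used in zipformer.py may differ:

    import torch

    def attn_weights_entropy(attn: torch.Tensor, eps: float = 1e-20) -> torch.Tensor:
        """Mean entropy of each head's attention distribution.

        attn: (num_heads, tgt_len, src_len), rows summing to 1 over src_len.
        Returns one value per head, averaged over target positions.
        """
        # -sum p*log(p) over the source axis, then average over targets.
        ent = -(attn * (attn + eps).log()).sum(dim=-1)
        return ent.mean(dim=-1)

    # A uniform head gives entropy log(src_len); a one-hot head gives ~0.
    heads = torch.softmax(torch.randn(8, 10, 25), dim=-1)
    print(attn_weights_entropy(heads))  # 8 values, like the log tensors

The log resumes below.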
], batch size: 45, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:02:03,358 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6595, 1.6366, 1.5841, 1.8008, 1.1166, 3.6162, 1.4172, 1.8783], device='cuda:2'), covar=tensor([0.3441, 0.2515, 0.2013, 0.2204, 0.2048, 0.0164, 0.2746, 0.1517], device='cuda:2'), in_proj_covar=tensor([0.0128, 0.0110, 0.0114, 0.0118, 0.0114, 0.0096, 0.0099, 0.0096], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:2') 2023-03-26 01:02:07,630 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8303, 1.7210, 1.5035, 1.5159, 1.9798, 2.2029, 1.8788, 1.7888], device='cuda:2'), covar=tensor([0.0417, 0.0451, 0.0646, 0.0478, 0.0425, 0.0525, 0.0407, 0.0453], device='cuda:2'), in_proj_covar=tensor([0.0084, 0.0114, 0.0138, 0.0116, 0.0106, 0.0099, 0.0089, 0.0110], device='cuda:2'), out_proj_covar=tensor([6.5394e-05, 8.9932e-05, 1.1146e-04, 9.2264e-05, 8.3961e-05, 7.4287e-05, 6.8960e-05, 8.6323e-05], device='cuda:2') 2023-03-26 01:02:12,359 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12172.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:02:24,536 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12188.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:02:39,290 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12202.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:02:43,052 INFO [finetune.py:976] (2/7) Epoch 3, batch 750, loss[loss=0.2288, simple_loss=0.2862, pruned_loss=0.08573, over 4874.00 frames. ], tot_loss[loss=0.2571, simple_loss=0.307, pruned_loss=0.1036, over 932531.05 frames. ], batch size: 32, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:02:43,274 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.79 vs. limit=2.0 2023-03-26 01:02:52,747 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-26 01:02:54,948 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.207e+02 1.858e+02 2.308e+02 2.738e+02 5.308e+02, threshold=4.616e+02, percent-clipped=1.0 2023-03-26 01:03:06,610 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6772, 0.7899, 1.4663, 1.3035, 1.2994, 1.3065, 1.1649, 1.3576], device='cuda:2'), covar=tensor([1.0316, 1.7490, 1.3439, 1.4586, 1.5547, 1.1126, 1.8458, 1.2586], device='cuda:2'), in_proj_covar=tensor([0.0228, 0.0258, 0.0252, 0.0270, 0.0246, 0.0219, 0.0281, 0.0219], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 01:03:14,098 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=12236.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:03:30,123 INFO [finetune.py:976] (2/7) Epoch 3, batch 800, loss[loss=0.2731, simple_loss=0.3209, pruned_loss=0.1126, over 4819.00 frames. ], tot_loss[loss=0.255, simple_loss=0.3054, pruned_loss=0.1023, over 937185.01 frames. ], batch size: 39, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:03:42,857 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12276.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:04:02,286 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. 
limit=2.0 2023-03-26 01:04:03,326 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12291.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:04:06,996 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2041, 1.9970, 2.3249, 1.0084, 2.3741, 2.6233, 2.1270, 2.2156], device='cuda:2'), covar=tensor([0.0989, 0.0741, 0.0443, 0.0834, 0.0467, 0.0726, 0.0469, 0.0547], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0156, 0.0118, 0.0135, 0.0132, 0.0118, 0.0146, 0.0144], device='cuda:2'), out_proj_covar=tensor([9.7585e-05, 1.1614e-04, 8.6006e-05, 9.9124e-05, 9.5547e-05, 8.7240e-05, 1.0918e-04, 1.0656e-04], device='cuda:2') 2023-03-26 01:04:11,114 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12304.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:04:12,243 INFO [finetune.py:976] (2/7) Epoch 3, batch 850, loss[loss=0.2379, simple_loss=0.2914, pruned_loss=0.09223, over 4911.00 frames. ], tot_loss[loss=0.2529, simple_loss=0.303, pruned_loss=0.1014, over 939653.81 frames. ], batch size: 37, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:04:19,496 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.224e+02 1.828e+02 2.109e+02 2.576e+02 5.946e+02, threshold=4.217e+02, percent-clipped=1.0 2023-03-26 01:04:20,251 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5318, 1.3237, 1.3082, 1.5420, 1.6717, 1.5680, 0.8016, 1.3612], device='cuda:2'), covar=tensor([0.2377, 0.2487, 0.2118, 0.1908, 0.1889, 0.1328, 0.3118, 0.2013], device='cuda:2'), in_proj_covar=tensor([0.0225, 0.0206, 0.0193, 0.0179, 0.0229, 0.0170, 0.0210, 0.0184], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:04:46,215 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=12339.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:04:46,839 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12340.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:05:05,883 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.29 vs. limit=5.0 2023-03-26 01:05:06,398 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12355.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:05:06,917 INFO [finetune.py:976] (2/7) Epoch 3, batch 900, loss[loss=0.2036, simple_loss=0.2533, pruned_loss=0.07698, over 4752.00 frames. ], tot_loss[loss=0.2497, simple_loss=0.2995, pruned_loss=0.09992, over 943127.15 frames. ], batch size: 54, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:05:43,765 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=12403.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:05:45,595 INFO [finetune.py:976] (2/7) Epoch 3, batch 950, loss[loss=0.202, simple_loss=0.2603, pruned_loss=0.07181, over 4832.00 frames. ], tot_loss[loss=0.2487, simple_loss=0.2982, pruned_loss=0.09961, over 946050.40 frames. 
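[Annotation] The scaling.py:679 records come from a whitening diagnostic: per group of channels it measures how far the feature covariance is from a multiple of the identity and reports when the metric exceeds the configured limit. A plausible minimal sketch of such a metric, normalized so a perfectly "white" covariance gives 1.0; the exact formula is an assumption for illustration, not a copy of icefall's scaling.py:

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
        """How far feature covariance is from a multiple of the identity.

        x: (num_frames, num_channels), channels split into num_groups.
        Returns a scalar >= 1.0; exactly 1.0 when each group's covariance
        is proportional to the identity. Assumed formula, for illustration.
        """
        n, c = x.shape
        cpg = c // num_groups                                  # channels/group
        xg = x.reshape(n, num_groups, cpg).transpose(0, 1)     # (g, n, cpg)
        cov = xg.transpose(1, 2) @ xg / n                      # (g, cpg, cpg)
        # ratio of mean squared eigenvalue to squared mean eigenvalue,
        # computed via traces: tr(C^2)/d over (tr(C)/d)^2.
        mean_eig = cov.diagonal(dim1=1, dim2=2).mean(dim=1)
        mean_eig_sq = (cov * cov).sum(dim=(1, 2)) / cpg
        return (mean_eig_sq / (mean_eig ** 2 + 1e-20)).mean()

    white = torch.randn(4000, 192)
    print(whitening_metric(white, num_groups=8))   # ~1 up to sampling noise
    print(whitening_metric(white * torch.linspace(0.1, 3.0, 192),
                           num_groups=8))          # noticeably larger

The log resumes below.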
], batch size: 30, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:05:57,755 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.215e+02 1.779e+02 2.123e+02 2.532e+02 4.452e+02, threshold=4.246e+02, percent-clipped=1.0 2023-03-26 01:06:12,419 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12432.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:06:22,021 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=12446.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:06:28,911 INFO [finetune.py:976] (2/7) Epoch 3, batch 1000, loss[loss=0.2745, simple_loss=0.2898, pruned_loss=0.1296, over 3789.00 frames. ], tot_loss[loss=0.2529, simple_loss=0.3018, pruned_loss=0.102, over 947004.18 frames. ], batch size: 16, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:06:39,188 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12472.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:07:17,582 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12502.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:07:19,892 INFO [finetune.py:976] (2/7) Epoch 3, batch 1050, loss[loss=0.2278, simple_loss=0.2944, pruned_loss=0.08057, over 4821.00 frames. ], tot_loss[loss=0.2551, simple_loss=0.3046, pruned_loss=0.1029, over 949393.01 frames. ], batch size: 30, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:07:20,612 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=12507.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:07:31,080 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.261e+02 2.008e+02 2.380e+02 2.733e+02 7.204e+02, threshold=4.759e+02, percent-clipped=3.0 2023-03-26 01:07:38,119 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=12520.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:07:39,384 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4004, 1.2304, 1.6379, 2.3287, 1.6265, 2.0466, 0.7788, 1.9610], device='cuda:2'), covar=tensor([0.1803, 0.1807, 0.1261, 0.0907, 0.1056, 0.1482, 0.1813, 0.0831], device='cuda:2'), in_proj_covar=tensor([0.0103, 0.0120, 0.0138, 0.0164, 0.0105, 0.0145, 0.0130, 0.0107], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2') 2023-03-26 01:08:10,389 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=12550.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:08:17,959 INFO [finetune.py:976] (2/7) Epoch 3, batch 1100, loss[loss=0.2544, simple_loss=0.3064, pruned_loss=0.1012, over 4867.00 frames. ], tot_loss[loss=0.2551, simple_loss=0.3048, pruned_loss=0.1027, over 949924.07 frames. ], batch size: 34, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:08:35,451 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12576.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:08:47,134 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.82 vs. limit=2.0 2023-03-26 01:08:58,279 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12604.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:08:59,394 INFO [finetune.py:976] (2/7) Epoch 3, batch 1150, loss[loss=0.2075, simple_loss=0.2595, pruned_loss=0.07775, over 4764.00 frames. ], tot_loss[loss=0.2545, simple_loss=0.3052, pruned_loss=0.1019, over 950993.45 frames. 
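[Annotation] The grad_scale field halves from 32.0 to 16.0 around batch 700 and is back at 32.0 by batch 2700, which is consistent with PyTorch-style dynamic loss scaling for fp16: halve immediately on an inf/nan overflow, double again after a run of clean steps. A minimal sketch using torch.cuda.amp.GradScaler; treating that as the mechanism behind this field is an assumption, and the growth_interval and criterion below are illustrative:

    import torch

    scaler = torch.cuda.amp.GradScaler(
        init_scale=32.0,       # matches the first grad_scale seen here
        growth_factor=2.0,     # double after enough clean steps...
        backoff_factor=0.5,    # ...halve immediately on overflow
        growth_interval=2000,  # assumed cadence, not taken from this log
    )

    def train_step(model, optimizer, criterion, batch):
        optimizer.zero_grad(set_to_none=True)
        with torch.cuda.amp.autocast():
            loss = criterion(model(batch["inputs"]), batch["targets"])
        scaler.scale(loss).backward()
        scaler.step(optimizer)   # skipped if gradients overflowed
        scaler.update()          # where the scale halves or doubles
        return scaler.get_scale()  # the number logged as grad_scale

The log resumes below.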
], batch size: 26, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:09:11,543 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.144e+02 1.712e+02 1.953e+02 2.432e+02 5.551e+02, threshold=3.906e+02, percent-clipped=1.0 2023-03-26 01:09:12,322 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0699, 1.8090, 1.4174, 2.0585, 2.0743, 1.5871, 2.5992, 2.0319], device='cuda:2'), covar=tensor([0.2628, 0.5915, 0.5848, 0.5555, 0.4019, 0.2785, 0.5132, 0.3499], device='cuda:2'), in_proj_covar=tensor([0.0165, 0.0198, 0.0240, 0.0255, 0.0220, 0.0185, 0.0209, 0.0190], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:09:21,033 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=12624.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:09:42,282 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12640.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:09:55,577 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=12652.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:10:03,871 INFO [finetune.py:976] (2/7) Epoch 3, batch 1200, loss[loss=0.2448, simple_loss=0.3047, pruned_loss=0.09249, over 4899.00 frames. ], tot_loss[loss=0.2552, simple_loss=0.3052, pruned_loss=0.1026, over 948570.23 frames. ], batch size: 43, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:10:18,199 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1222, 1.7510, 2.3567, 1.8341, 2.1668, 2.2121, 1.7893, 2.3828], device='cuda:2'), covar=tensor([0.1032, 0.1615, 0.1059, 0.1561, 0.0686, 0.0982, 0.1880, 0.0582], device='cuda:2'), in_proj_covar=tensor([0.0206, 0.0205, 0.0205, 0.0198, 0.0180, 0.0226, 0.0214, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:10:31,612 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=12688.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:10:43,572 INFO [finetune.py:976] (2/7) Epoch 3, batch 1250, loss[loss=0.2853, simple_loss=0.3018, pruned_loss=0.1344, over 4267.00 frames. ], tot_loss[loss=0.252, simple_loss=0.3019, pruned_loss=0.101, over 949888.99 frames. 
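[Annotation] The zipformer.py:1188 records appear to come from stochastic layer dropping: each encoder stack has a warmup window in batches (warmup_begin/warmup_end), and on most batches num_to_drop=0 with an occasional num_to_drop=1 once batch_count is far past warmup_end. A minimal sketch of such a scheduler; the drop probability and the linear ramp are assumptions, since the log only shows the outcomes:

    import random

    def choose_layers_to_drop(num_layers: int, batch_count: float,
                              warmup_begin: float, warmup_end: float,
                              base_prob: float = 0.075) -> set:
        """Stochastic-depth sketch: occasionally skip whole encoder layers.

        The ramp is an assumption; the log only shows num_to_drop=0 on
        most batches and occasionally 1 when batch_count >> warmup_end.
        """
        if batch_count <= warmup_begin:
            prob = 0.0
        elif batch_count >= warmup_end:
            prob = base_prob
        else:  # linear ramp across the warmup window
            prob = base_prob * (batch_count - warmup_begin) / (
                warmup_end - warmup_begin)
        return {i for i in range(num_layers) if random.random() < prob}

    drop = choose_layers_to_drop(4, batch_count=12624.0,
                                 warmup_begin=666.7, warmup_end=1333.3)
    print(f"num_to_drop={len(drop)}, layers_to_drop={drop or set()}")

The log resumes below.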
], batch size: 65, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:10:47,285 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.4222, 3.7559, 3.9646, 4.2382, 4.1666, 3.8535, 4.5106, 1.5349], device='cuda:2'), covar=tensor([0.0726, 0.0888, 0.0766, 0.0927, 0.1103, 0.1350, 0.0551, 0.4872], device='cuda:2'), in_proj_covar=tensor([0.0370, 0.0246, 0.0275, 0.0298, 0.0342, 0.0288, 0.0312, 0.0302], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:10:49,014 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.2861, 2.9040, 3.0160, 3.2032, 3.0537, 2.8574, 3.3399, 1.0971], device='cuda:2'), covar=tensor([0.0950, 0.0900, 0.0959, 0.1054, 0.1407, 0.1465, 0.1008, 0.4671], device='cuda:2'), in_proj_covar=tensor([0.0370, 0.0246, 0.0275, 0.0297, 0.0342, 0.0287, 0.0312, 0.0302], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:10:51,325 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.066e+02 1.791e+02 2.152e+02 2.550e+02 3.946e+02, threshold=4.304e+02, percent-clipped=1.0 2023-03-26 01:11:06,490 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12732.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:11:25,781 INFO [finetune.py:976] (2/7) Epoch 3, batch 1300, loss[loss=0.2548, simple_loss=0.299, pruned_loss=0.1052, over 4827.00 frames. ], tot_loss[loss=0.2488, simple_loss=0.2987, pruned_loss=0.09946, over 952304.64 frames. ], batch size: 40, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:11:29,467 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8526, 1.4867, 2.1044, 1.5287, 1.8800, 1.9614, 1.4663, 2.1291], device='cuda:2'), covar=tensor([0.1355, 0.2049, 0.1269, 0.1752, 0.0864, 0.1520, 0.2691, 0.0830], device='cuda:2'), in_proj_covar=tensor([0.0207, 0.0205, 0.0205, 0.0198, 0.0180, 0.0226, 0.0214, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:11:31,192 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=12763.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:11:48,421 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8151, 1.5797, 1.3079, 1.5170, 1.5487, 1.4374, 1.4948, 2.3517], device='cuda:2'), covar=tensor([1.3455, 1.3578, 1.0269, 1.3355, 1.0736, 0.7252, 1.3014, 0.3885], device='cuda:2'), in_proj_covar=tensor([0.0265, 0.0243, 0.0217, 0.0279, 0.0232, 0.0194, 0.0236, 0.0180], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 01:11:48,954 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=12780.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:12:19,504 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12802.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:12:22,425 INFO [finetune.py:976] (2/7) Epoch 3, batch 1350, loss[loss=0.2359, simple_loss=0.2973, pruned_loss=0.08725, over 4812.00 frames. ], tot_loss[loss=0.2509, simple_loss=0.3005, pruned_loss=0.1006, over 949180.75 frames. 
], batch size: 38, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:12:40,681 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.079e+02 1.832e+02 2.182e+02 2.674e+02 3.468e+02, threshold=4.364e+02, percent-clipped=0.0 2023-03-26 01:12:50,687 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=12824.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 01:13:09,691 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.30 vs. limit=5.0 2023-03-26 01:13:21,549 INFO [finetune.py:976] (2/7) Epoch 3, batch 1400, loss[loss=0.2632, simple_loss=0.3164, pruned_loss=0.105, over 4911.00 frames. ], tot_loss[loss=0.2531, simple_loss=0.3034, pruned_loss=0.1014, over 950722.40 frames. ], batch size: 37, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:13:50,016 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.30 vs. limit=5.0 2023-03-26 01:14:09,952 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7977, 1.0031, 1.5108, 1.4344, 1.3180, 1.3580, 1.2950, 1.4001], device='cuda:2'), covar=tensor([0.8855, 1.5980, 1.1843, 1.3873, 1.4995, 1.1116, 1.7715, 1.1152], device='cuda:2'), in_proj_covar=tensor([0.0228, 0.0256, 0.0253, 0.0269, 0.0245, 0.0219, 0.0280, 0.0219], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 01:14:19,107 INFO [finetune.py:976] (2/7) Epoch 3, batch 1450, loss[loss=0.2653, simple_loss=0.3165, pruned_loss=0.1071, over 4854.00 frames. ], tot_loss[loss=0.2551, simple_loss=0.3053, pruned_loss=0.1024, over 949186.88 frames. ], batch size: 31, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:14:38,007 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.344e+02 1.849e+02 2.229e+02 2.789e+02 4.972e+02, threshold=4.459e+02, percent-clipped=1.0 2023-03-26 01:15:13,778 INFO [finetune.py:976] (2/7) Epoch 3, batch 1500, loss[loss=0.2029, simple_loss=0.2642, pruned_loss=0.07081, over 4793.00 frames. ], tot_loss[loss=0.2563, simple_loss=0.3063, pruned_loss=0.1032, over 947795.75 frames. ], batch size: 26, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:15:40,709 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9450, 1.7072, 1.7846, 2.0144, 2.5485, 1.9703, 1.7078, 1.4693], device='cuda:2'), covar=tensor([0.2587, 0.2640, 0.2166, 0.2037, 0.2242, 0.1384, 0.3082, 0.2191], device='cuda:2'), in_proj_covar=tensor([0.0225, 0.0206, 0.0194, 0.0179, 0.0229, 0.0170, 0.0210, 0.0183], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:15:58,823 INFO [finetune.py:976] (2/7) Epoch 3, batch 1550, loss[loss=0.2609, simple_loss=0.3042, pruned_loss=0.1088, over 4892.00 frames. ], tot_loss[loss=0.2552, simple_loss=0.3054, pruned_loss=0.1025, over 950560.88 frames. ], batch size: 32, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:16:09,621 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.917e+01 1.840e+02 2.219e+02 2.788e+02 8.539e+02, threshold=4.437e+02, percent-clipped=2.0 2023-03-26 01:16:41,417 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-26 01:16:44,395 INFO [finetune.py:976] (2/7) Epoch 3, batch 1600, loss[loss=0.2383, simple_loss=0.2929, pruned_loss=0.09182, over 4832.00 frames. ], tot_loss[loss=0.2534, simple_loss=0.3035, pruned_loss=0.1017, over 951479.32 frames. 
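[Annotation] tot_loss is not a plain epoch average: its frame counts climb early in the epoch (212834 -> 376606 -> 504984 -> ...) and then level off near 950k frames. That trajectory is consistent with an exponentially decayed frame-weighted accumulator, i.e. each batch does tot <- tot * (1 - 1/reset_interval) + batch, so the reported loss averages roughly the last reset_interval batches (a decayed frame count of ~200 x ~4800 frames per batch gives the observed ~950k plateau). A minimal sketch of such a tracker; the class and field names are assumptions:

    class RunningLoss:
        """Frame-weighted loss average with exponential forgetting.

        Each update scales the accumulator by (1 - 1/reset_interval)
        before adding the new batch, so tot_loss reflects roughly the
        last reset_interval batches. Names/values are illustrative.
        """

        def __init__(self, reset_interval: int = 200):
            self.decay = 1.0 - 1.0 / reset_interval
            self.loss_sum = 0.0   # decayed sum of per-frame loss * frames
            self.frames = 0.0     # decayed frame count

        def update(self, batch_loss: float, batch_frames: float) -> None:
            self.loss_sum = self.loss_sum * self.decay + batch_loss * batch_frames
            self.frames = self.frames * self.decay + batch_frames

        @property
        def tot_loss(self) -> float:
            return self.loss_sum / max(self.frames, 1.0)

    tracker = RunningLoss()
    for _ in range(600):   # ~12 log intervals of 50 batches each
        tracker.update(batch_loss=0.25, batch_frames=4800.0)
    print(f"tot_loss={tracker.tot_loss:.4f}, over {tracker.frames:.2f} frames")
    # frames approaches 200 * 4800 = 960000, matching the log's ~950k plateau

The log resumes below.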
], batch size: 33, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:16:48,214 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9941, 1.6764, 1.4409, 1.8536, 1.6328, 1.5526, 1.5487, 2.6071], device='cuda:2'), covar=tensor([1.3151, 1.4638, 1.0396, 1.4068, 1.2794, 0.7284, 1.4990, 0.4239], device='cuda:2'), in_proj_covar=tensor([0.0266, 0.0244, 0.0217, 0.0279, 0.0233, 0.0194, 0.0236, 0.0181], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 01:16:59,135 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. limit=2.0 2023-03-26 01:17:24,600 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.5694, 1.6715, 1.9055, 1.1093, 1.6965, 1.9089, 1.9650, 1.7183], device='cuda:2'), covar=tensor([0.0929, 0.0713, 0.0406, 0.0583, 0.0417, 0.0732, 0.0357, 0.0498], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0156, 0.0118, 0.0136, 0.0133, 0.0119, 0.0146, 0.0144], device='cuda:2'), out_proj_covar=tensor([9.7792e-05, 1.1647e-04, 8.6684e-05, 9.9575e-05, 9.6338e-05, 8.7745e-05, 1.0893e-04, 1.0670e-04], device='cuda:2') 2023-03-26 01:17:41,255 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=13102.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:17:43,644 INFO [finetune.py:976] (2/7) Epoch 3, batch 1650, loss[loss=0.2303, simple_loss=0.2698, pruned_loss=0.09545, over 4236.00 frames. ], tot_loss[loss=0.2502, simple_loss=0.2999, pruned_loss=0.1002, over 952401.88 frames. ], batch size: 18, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:17:49,787 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0 2023-03-26 01:17:55,152 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.264e+02 1.841e+02 2.109e+02 2.450e+02 4.226e+02, threshold=4.217e+02, percent-clipped=0.0 2023-03-26 01:17:56,995 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=13119.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 01:18:01,246 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-26 01:18:28,106 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0 2023-03-26 01:18:32,153 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=13150.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:18:41,623 INFO [finetune.py:976] (2/7) Epoch 3, batch 1700, loss[loss=0.2816, simple_loss=0.3188, pruned_loss=0.1222, over 4908.00 frames. ], tot_loss[loss=0.25, simple_loss=0.2989, pruned_loss=0.1005, over 953628.74 frames. ], batch size: 35, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:18:54,256 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6787, 1.3208, 1.9242, 3.2919, 2.3043, 2.5125, 1.0153, 2.6015], device='cuda:2'), covar=tensor([0.1944, 0.2327, 0.1699, 0.0989, 0.1011, 0.1563, 0.2075, 0.0859], device='cuda:2'), in_proj_covar=tensor([0.0102, 0.0119, 0.0137, 0.0163, 0.0104, 0.0144, 0.0129, 0.0106], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2') 2023-03-26 01:19:22,355 INFO [finetune.py:976] (2/7) Epoch 3, batch 1750, loss[loss=0.3027, simple_loss=0.3596, pruned_loss=0.1229, over 4809.00 frames. ], tot_loss[loss=0.2505, simple_loss=0.2997, pruned_loss=0.1007, over 952263.99 frames. 
], batch size: 45, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:19:31,217 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.198e+02 1.973e+02 2.314e+02 2.693e+02 6.749e+02, threshold=4.629e+02, percent-clipped=3.0 2023-03-26 01:20:07,995 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3964, 1.5365, 0.7735, 2.1799, 2.5279, 1.8194, 1.7672, 2.2059], device='cuda:2'), covar=tensor([0.1482, 0.2342, 0.2516, 0.1246, 0.1986, 0.2014, 0.1487, 0.2037], device='cuda:2'), in_proj_covar=tensor([0.0092, 0.0098, 0.0116, 0.0094, 0.0123, 0.0097, 0.0100, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 01:20:09,791 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7479, 3.9799, 3.8514, 1.7430, 4.1194, 3.0328, 0.9376, 2.7580], device='cuda:2'), covar=tensor([0.2231, 0.1983, 0.1545, 0.3370, 0.0948, 0.0961, 0.4347, 0.1490], device='cuda:2'), in_proj_covar=tensor([0.0156, 0.0169, 0.0166, 0.0129, 0.0157, 0.0121, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 01:20:18,400 INFO [finetune.py:976] (2/7) Epoch 3, batch 1800, loss[loss=0.2562, simple_loss=0.3235, pruned_loss=0.09442, over 4815.00 frames. ], tot_loss[loss=0.2515, simple_loss=0.3024, pruned_loss=0.1003, over 952308.03 frames. ], batch size: 38, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:20:59,915 INFO [finetune.py:976] (2/7) Epoch 3, batch 1850, loss[loss=0.3332, simple_loss=0.3757, pruned_loss=0.1453, over 4926.00 frames. ], tot_loss[loss=0.2529, simple_loss=0.3037, pruned_loss=0.1011, over 952065.98 frames. ], batch size: 42, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:21:08,031 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.104e+02 1.691e+02 1.934e+02 2.488e+02 4.482e+02, threshold=3.868e+02, percent-clipped=0.0 2023-03-26 01:21:50,073 INFO [finetune.py:976] (2/7) Epoch 3, batch 1900, loss[loss=0.317, simple_loss=0.3611, pruned_loss=0.1364, over 4241.00 frames. ], tot_loss[loss=0.2548, simple_loss=0.3061, pruned_loss=0.1018, over 953348.80 frames. ], batch size: 65, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:22:09,966 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.3787, 2.6472, 2.4640, 1.5206, 2.6913, 2.4654, 2.2113, 2.4472], device='cuda:2'), covar=tensor([0.0628, 0.1067, 0.1690, 0.2517, 0.1814, 0.1980, 0.1902, 0.1285], device='cuda:2'), in_proj_covar=tensor([0.0165, 0.0197, 0.0203, 0.0188, 0.0214, 0.0209, 0.0215, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:22:40,354 INFO [finetune.py:976] (2/7) Epoch 3, batch 1950, loss[loss=0.2185, simple_loss=0.2684, pruned_loss=0.08426, over 4907.00 frames. ], tot_loss[loss=0.2521, simple_loss=0.3033, pruned_loss=0.1004, over 952512.77 frames. 
], batch size: 37, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:22:43,509 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1129, 1.3381, 0.7637, 1.8361, 2.2303, 1.6605, 1.6357, 1.9475], device='cuda:2'), covar=tensor([0.1533, 0.2286, 0.2603, 0.1394, 0.2255, 0.2335, 0.1417, 0.2167], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0099, 0.0118, 0.0094, 0.0125, 0.0098, 0.0100, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 01:22:46,997 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.125e+02 1.849e+02 2.191e+02 2.472e+02 6.030e+02, threshold=4.381e+02, percent-clipped=4.0 2023-03-26 01:22:48,826 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=13419.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:23:25,636 INFO [finetune.py:976] (2/7) Epoch 3, batch 2000, loss[loss=0.2331, simple_loss=0.2856, pruned_loss=0.09025, over 4904.00 frames. ], tot_loss[loss=0.2488, simple_loss=0.2995, pruned_loss=0.099, over 954576.15 frames. ], batch size: 43, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:23:36,602 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=13467.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:24:18,862 INFO [finetune.py:976] (2/7) Epoch 3, batch 2050, loss[loss=0.2126, simple_loss=0.2788, pruned_loss=0.0732, over 4820.00 frames. ], tot_loss[loss=0.2467, simple_loss=0.2971, pruned_loss=0.09814, over 955342.65 frames. ], batch size: 40, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:24:23,471 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.8624, 3.4065, 3.5280, 3.7378, 3.6041, 3.3789, 3.9430, 1.2069], device='cuda:2'), covar=tensor([0.0894, 0.0890, 0.0937, 0.1056, 0.1391, 0.1434, 0.0843, 0.5294], device='cuda:2'), in_proj_covar=tensor([0.0370, 0.0246, 0.0277, 0.0296, 0.0343, 0.0288, 0.0312, 0.0302], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:24:33,950 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.160e+02 1.806e+02 2.206e+02 2.674e+02 5.377e+02, threshold=4.412e+02, percent-clipped=2.0 2023-03-26 01:24:42,539 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6438, 1.5790, 1.5143, 1.6307, 1.2767, 3.3411, 1.3789, 1.9883], device='cuda:2'), covar=tensor([0.3305, 0.2191, 0.1997, 0.2194, 0.1733, 0.0193, 0.2623, 0.1245], device='cuda:2'), in_proj_covar=tensor([0.0127, 0.0109, 0.0114, 0.0116, 0.0113, 0.0095, 0.0099, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:2') 2023-03-26 01:25:00,755 INFO [finetune.py:976] (2/7) Epoch 3, batch 2100, loss[loss=0.2644, simple_loss=0.329, pruned_loss=0.09991, over 4851.00 frames. ], tot_loss[loss=0.2482, simple_loss=0.298, pruned_loss=0.09916, over 957577.59 frames. 
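[Annotation] Across these records the three loss fields satisfy loss = 0.5 * simple_loss + pruned_loss (e.g. 0.5 * 0.2980 + 0.09916 = 0.24816, logged as 0.2482 at batch 2100): the reported loss is the pruned transducer loss plus half of the simple loss used to derive the pruning bounds. A quick spot-check against values copied from the tot_loss fields above:

    # Verify loss = 0.5 * simple_loss + pruned_loss on logged records.
    records = [
        # (loss, simple_loss, pruned_loss) from tot_loss fields above
        (0.2693, 0.3154, 0.1116),    # epoch 3, batch 50
        (0.2488, 0.2995, 0.0990),    # epoch 3, batch 2000
        (0.2482, 0.2980, 0.09916),   # epoch 3, batch 2100
    ]
    for loss, simple, pruned in records:
        recon = 0.5 * simple + pruned
        assert abs(recon - loss) < 5e-4, (loss, recon)
        print(f"{loss:.4f} ~= 0.5*{simple:.4f} + {pruned:.5f} = {recon:.4f}")

The log resumes below.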
], batch size: 49, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:25:02,002 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5858, 1.5764, 1.2422, 1.4338, 1.8451, 1.7871, 1.5821, 1.3570], device='cuda:2'), covar=tensor([0.0242, 0.0275, 0.0529, 0.0318, 0.0246, 0.0285, 0.0349, 0.0362], device='cuda:2'), in_proj_covar=tensor([0.0083, 0.0113, 0.0135, 0.0116, 0.0103, 0.0098, 0.0088, 0.0108], device='cuda:2'), out_proj_covar=tensor([6.4699e-05, 8.9335e-05, 1.0889e-04, 9.1762e-05, 8.2145e-05, 7.2911e-05, 6.8043e-05, 8.4784e-05], device='cuda:2') 2023-03-26 01:25:10,529 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8222, 1.5951, 2.2046, 1.5169, 2.0349, 1.9850, 1.6295, 2.2496], device='cuda:2'), covar=tensor([0.1472, 0.2138, 0.1516, 0.2140, 0.0978, 0.1622, 0.2593, 0.1061], device='cuda:2'), in_proj_covar=tensor([0.0206, 0.0203, 0.0204, 0.0197, 0.0180, 0.0227, 0.0214, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:25:45,304 INFO [finetune.py:976] (2/7) Epoch 3, batch 2150, loss[loss=0.2635, simple_loss=0.3149, pruned_loss=0.1061, over 4808.00 frames. ], tot_loss[loss=0.2504, simple_loss=0.3005, pruned_loss=0.1002, over 954984.90 frames. ], batch size: 45, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:26:01,365 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.135e+02 1.918e+02 2.256e+02 2.684e+02 5.304e+02, threshold=4.512e+02, percent-clipped=2.0 2023-03-26 01:26:24,732 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4251, 2.2162, 1.8588, 1.0498, 2.1462, 1.8680, 1.6215, 1.9151], device='cuda:2'), covar=tensor([0.0981, 0.1066, 0.1872, 0.2492, 0.1733, 0.2785, 0.2366, 0.1312], device='cuda:2'), in_proj_covar=tensor([0.0165, 0.0196, 0.0202, 0.0188, 0.0214, 0.0209, 0.0215, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:26:27,571 INFO [finetune.py:976] (2/7) Epoch 3, batch 2200, loss[loss=0.2534, simple_loss=0.2974, pruned_loss=0.1048, over 4798.00 frames. ], tot_loss[loss=0.252, simple_loss=0.3029, pruned_loss=0.1006, over 955789.32 frames. ], batch size: 25, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:26:30,568 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7543, 4.4192, 4.2006, 2.3611, 4.5124, 3.2616, 0.8261, 3.0194], device='cuda:2'), covar=tensor([0.2821, 0.1359, 0.1368, 0.3021, 0.0692, 0.0939, 0.4710, 0.1464], device='cuda:2'), in_proj_covar=tensor([0.0156, 0.0168, 0.0166, 0.0129, 0.0156, 0.0121, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 01:27:00,297 INFO [finetune.py:976] (2/7) Epoch 3, batch 2250, loss[loss=0.268, simple_loss=0.3271, pruned_loss=0.1044, over 4903.00 frames. ], tot_loss[loss=0.2526, simple_loss=0.3036, pruned_loss=0.1008, over 954767.62 frames. ], batch size: 37, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:27:08,388 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.215e+02 1.927e+02 2.166e+02 2.564e+02 5.587e+02, threshold=4.333e+02, percent-clipped=2.0 2023-03-26 01:27:24,304 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=13737.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:27:42,151 INFO [finetune.py:976] (2/7) Epoch 3, batch 2300, loss[loss=0.2555, simple_loss=0.3046, pruned_loss=0.1032, over 4898.00 frames. 
], tot_loss[loss=0.2533, simple_loss=0.3047, pruned_loss=0.101, over 954492.38 frames. ], batch size: 43, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:27:55,077 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2428, 1.9498, 1.4142, 0.5821, 1.6167, 1.9109, 1.6631, 1.7753], device='cuda:2'), covar=tensor([0.0871, 0.0801, 0.1654, 0.2377, 0.1446, 0.2424, 0.2272, 0.0954], device='cuda:2'), in_proj_covar=tensor([0.0164, 0.0196, 0.0202, 0.0187, 0.0214, 0.0208, 0.0215, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:28:25,837 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=13798.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 01:28:36,682 INFO [finetune.py:976] (2/7) Epoch 3, batch 2350, loss[loss=0.1968, simple_loss=0.2613, pruned_loss=0.06612, over 4911.00 frames. ], tot_loss[loss=0.2498, simple_loss=0.3009, pruned_loss=0.09936, over 955211.83 frames. ], batch size: 37, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:28:50,252 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.589e+01 1.791e+02 2.182e+02 2.579e+02 6.380e+02, threshold=4.365e+02, percent-clipped=2.0 2023-03-26 01:29:27,914 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2664, 2.1404, 1.6472, 2.4632, 2.4206, 1.8835, 2.8976, 2.1821], device='cuda:2'), covar=tensor([0.2091, 0.4786, 0.5264, 0.4848, 0.3363, 0.2401, 0.4143, 0.3249], device='cuda:2'), in_proj_covar=tensor([0.0163, 0.0195, 0.0238, 0.0253, 0.0218, 0.0183, 0.0207, 0.0187], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:29:31,312 INFO [finetune.py:976] (2/7) Epoch 3, batch 2400, loss[loss=0.2562, simple_loss=0.2981, pruned_loss=0.1072, over 4756.00 frames. ], tot_loss[loss=0.2476, simple_loss=0.2982, pruned_loss=0.09851, over 957235.43 frames. ], batch size: 54, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:30:15,690 INFO [finetune.py:976] (2/7) Epoch 3, batch 2450, loss[loss=0.2918, simple_loss=0.3285, pruned_loss=0.1275, over 3998.00 frames. ], tot_loss[loss=0.2465, simple_loss=0.296, pruned_loss=0.09849, over 953265.60 frames. ], batch size: 65, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:30:26,386 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.86 vs. limit=5.0 2023-03-26 01:30:29,103 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.953e+01 1.906e+02 2.150e+02 2.578e+02 4.181e+02, threshold=4.299e+02, percent-clipped=0.0 2023-03-26 01:30:59,052 INFO [finetune.py:976] (2/7) Epoch 3, batch 2500, loss[loss=0.2238, simple_loss=0.2952, pruned_loss=0.07625, over 4903.00 frames. ], tot_loss[loss=0.2478, simple_loss=0.2979, pruned_loss=0.09889, over 954069.90 frames. 
], batch size: 35, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:31:17,976 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7778, 1.6730, 1.3512, 1.5419, 1.5748, 1.5206, 1.5052, 2.3693], device='cuda:2'), covar=tensor([1.1731, 1.1236, 0.9095, 1.2318, 0.9573, 0.6363, 1.1402, 0.3477], device='cuda:2'), in_proj_covar=tensor([0.0267, 0.0244, 0.0217, 0.0280, 0.0233, 0.0194, 0.0235, 0.0182], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 01:31:21,483 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1402, 2.0313, 1.6781, 1.5075, 2.1717, 2.5245, 2.1149, 1.9217], device='cuda:2'), covar=tensor([0.0387, 0.0366, 0.0559, 0.0389, 0.0372, 0.0390, 0.0285, 0.0395], device='cuda:2'), in_proj_covar=tensor([0.0083, 0.0114, 0.0136, 0.0117, 0.0104, 0.0099, 0.0089, 0.0109], device='cuda:2'), out_proj_covar=tensor([6.5198e-05, 9.0107e-05, 1.0944e-04, 9.2646e-05, 8.2630e-05, 7.3655e-05, 6.9040e-05, 8.5644e-05], device='cuda:2') 2023-03-26 01:31:40,716 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3451, 2.0912, 1.8477, 0.8128, 2.0290, 1.8323, 1.5512, 1.9443], device='cuda:2'), covar=tensor([0.0811, 0.0910, 0.1787, 0.2581, 0.1352, 0.2504, 0.2352, 0.1169], device='cuda:2'), in_proj_covar=tensor([0.0165, 0.0197, 0.0204, 0.0189, 0.0215, 0.0209, 0.0216, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:31:53,427 INFO [finetune.py:976] (2/7) Epoch 3, batch 2550, loss[loss=0.262, simple_loss=0.3191, pruned_loss=0.1024, over 4916.00 frames. ], tot_loss[loss=0.251, simple_loss=0.302, pruned_loss=0.1, over 954183.44 frames. ], batch size: 36, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:32:01,947 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.1153, 3.5893, 3.7457, 4.0418, 3.8574, 3.6200, 4.2226, 1.4163], device='cuda:2'), covar=tensor([0.0845, 0.0855, 0.0862, 0.0845, 0.1204, 0.1468, 0.0735, 0.5151], device='cuda:2'), in_proj_covar=tensor([0.0365, 0.0242, 0.0272, 0.0292, 0.0337, 0.0283, 0.0308, 0.0299], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:32:02,446 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.924e+01 1.822e+02 2.075e+02 2.520e+02 4.375e+02, threshold=4.150e+02, percent-clipped=1.0 2023-03-26 01:32:08,824 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9977, 1.7581, 1.4176, 1.7366, 1.7028, 1.6214, 1.5554, 2.6750], device='cuda:2'), covar=tensor([1.1475, 1.2076, 0.9848, 1.3663, 1.0567, 0.6706, 1.2717, 0.3619], device='cuda:2'), in_proj_covar=tensor([0.0268, 0.0244, 0.0217, 0.0280, 0.0233, 0.0194, 0.0235, 0.0182], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 01:32:24,845 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-26 01:32:30,593 INFO [finetune.py:976] (2/7) Epoch 3, batch 2600, loss[loss=0.2378, simple_loss=0.2953, pruned_loss=0.09015, over 4820.00 frames. ], tot_loss[loss=0.2524, simple_loss=0.3037, pruned_loss=0.1005, over 954394.34 frames. 
], batch size: 39, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:32:41,345 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=14072.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:33:04,945 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=14093.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 01:33:16,403 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=14103.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:33:18,157 INFO [finetune.py:976] (2/7) Epoch 3, batch 2650, loss[loss=0.2452, simple_loss=0.3016, pruned_loss=0.09438, over 4805.00 frames. ], tot_loss[loss=0.2518, simple_loss=0.3037, pruned_loss=0.09999, over 952713.91 frames. ], batch size: 25, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:33:34,829 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.159e+02 1.796e+02 2.195e+02 2.771e+02 4.502e+02, threshold=4.390e+02, percent-clipped=2.0 2023-03-26 01:33:37,236 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=14120.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 01:33:56,199 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=14133.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:34:18,023 INFO [finetune.py:976] (2/7) Epoch 3, batch 2700, loss[loss=0.2632, simple_loss=0.3085, pruned_loss=0.109, over 4851.00 frames. ], tot_loss[loss=0.2509, simple_loss=0.3026, pruned_loss=0.09956, over 949093.92 frames. ], batch size: 31, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:34:28,754 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=14164.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:34:49,233 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0 2023-03-26 01:34:50,974 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=14181.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 01:35:13,611 INFO [finetune.py:976] (2/7) Epoch 3, batch 2750, loss[loss=0.2064, simple_loss=0.2716, pruned_loss=0.07065, over 4761.00 frames. ], tot_loss[loss=0.2484, simple_loss=0.2996, pruned_loss=0.09857, over 950900.28 frames. ], batch size: 27, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:35:20,820 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.010e+02 1.613e+02 1.949e+02 2.418e+02 3.837e+02, threshold=3.898e+02, percent-clipped=0.0 2023-03-26 01:35:23,995 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.75 vs. limit=5.0 2023-03-26 01:35:50,233 INFO [finetune.py:976] (2/7) Epoch 3, batch 2800, loss[loss=0.2409, simple_loss=0.2844, pruned_loss=0.09868, over 4851.00 frames. ], tot_loss[loss=0.2435, simple_loss=0.2949, pruned_loss=0.09605, over 952263.22 frames. ], batch size: 49, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:36:18,508 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9846, 1.9343, 1.4394, 2.0208, 2.0842, 1.5930, 2.5708, 2.0270], device='cuda:2'), covar=tensor([0.2134, 0.4672, 0.4794, 0.4916, 0.3418, 0.2370, 0.4739, 0.3064], device='cuda:2'), in_proj_covar=tensor([0.0164, 0.0194, 0.0237, 0.0252, 0.0219, 0.0183, 0.0207, 0.0187], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:36:35,462 INFO [finetune.py:976] (2/7) Epoch 3, batch 2850, loss[loss=0.2383, simple_loss=0.2997, pruned_loss=0.08846, over 4830.00 frames. 
], tot_loss[loss=0.2424, simple_loss=0.2934, pruned_loss=0.09566, over 952953.01 frames. ], batch size: 40, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:36:43,877 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0745, 1.9500, 1.4422, 2.1043, 2.2138, 1.6608, 2.6557, 2.0675], device='cuda:2'), covar=tensor([0.2315, 0.5210, 0.5697, 0.5370, 0.3583, 0.2545, 0.5118, 0.3337], device='cuda:2'), in_proj_covar=tensor([0.0164, 0.0195, 0.0239, 0.0254, 0.0220, 0.0184, 0.0208, 0.0188], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:36:48,619 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.210e+02 1.714e+02 2.069e+02 2.397e+02 3.427e+02, threshold=4.138e+02, percent-clipped=0.0 2023-03-26 01:37:26,624 INFO [finetune.py:976] (2/7) Epoch 3, batch 2900, loss[loss=0.1745, simple_loss=0.219, pruned_loss=0.06504, over 4695.00 frames. ], tot_loss[loss=0.2422, simple_loss=0.294, pruned_loss=0.09521, over 951414.71 frames. ], batch size: 23, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:38:06,366 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=14393.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:38:25,744 INFO [finetune.py:976] (2/7) Epoch 3, batch 2950, loss[loss=0.2767, simple_loss=0.3289, pruned_loss=0.1122, over 4897.00 frames. ], tot_loss[loss=0.2457, simple_loss=0.2978, pruned_loss=0.09676, over 951593.35 frames. ], batch size: 35, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:38:33,835 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0476, 2.1319, 1.9064, 1.4432, 2.3212, 2.2320, 2.0985, 1.8261], device='cuda:2'), covar=tensor([0.0755, 0.0602, 0.0909, 0.1060, 0.0425, 0.0794, 0.0804, 0.1079], device='cuda:2'), in_proj_covar=tensor([0.0139, 0.0132, 0.0144, 0.0129, 0.0109, 0.0142, 0.0147, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:38:35,848 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=14413.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:38:38,200 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.306e+02 1.859e+02 2.169e+02 2.721e+02 5.785e+02, threshold=4.339e+02, percent-clipped=3.0 2023-03-26 01:38:44,331 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1980, 1.4309, 1.4728, 0.7887, 1.1403, 1.6084, 1.6576, 1.3750], device='cuda:2'), covar=tensor([0.1049, 0.0505, 0.0368, 0.0545, 0.0424, 0.0539, 0.0291, 0.0560], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0158, 0.0118, 0.0136, 0.0132, 0.0120, 0.0147, 0.0143], device='cuda:2'), out_proj_covar=tensor([9.8748e-05, 1.1740e-04, 8.5928e-05, 1.0001e-04, 9.5737e-05, 8.8325e-05, 1.0978e-04, 1.0638e-04], device='cuda:2') 2023-03-26 01:38:50,262 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=14428.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:39:02,087 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=14441.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:39:20,296 INFO [finetune.py:976] (2/7) Epoch 3, batch 3000, loss[loss=0.2114, simple_loss=0.281, pruned_loss=0.07091, over 4835.00 frames. ], tot_loss[loss=0.2475, simple_loss=0.2997, pruned_loss=0.09768, over 954237.16 frames. 
], batch size: 49, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:39:20,296 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 01:39:22,408 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6319, 1.6517, 1.9786, 1.4066, 1.7593, 1.8342, 1.6592, 2.0132], device='cuda:2'), covar=tensor([0.1683, 0.2178, 0.1509, 0.2063, 0.0981, 0.1609, 0.2621, 0.1032], device='cuda:2'), in_proj_covar=tensor([0.0206, 0.0204, 0.0204, 0.0196, 0.0181, 0.0225, 0.0214, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:39:25,664 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7739, 1.6551, 1.6883, 1.7192, 1.1257, 3.0892, 1.3224, 1.8244], device='cuda:2'), covar=tensor([0.3306, 0.2285, 0.1908, 0.2201, 0.1920, 0.0239, 0.2415, 0.1329], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0111, 0.0116, 0.0119, 0.0116, 0.0097, 0.0101, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:2') 2023-03-26 01:39:37,114 INFO [finetune.py:1010] (2/7) Epoch 3, validation: loss=0.1777, simple_loss=0.2485, pruned_loss=0.05342, over 2265189.00 frames. 2023-03-26 01:39:37,114 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6303MB 2023-03-26 01:39:42,143 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=14459.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:39:56,807 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=14474.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:40:03,359 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=14476.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 01:40:14,779 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8966, 1.7044, 1.4916, 1.3443, 2.0084, 2.2497, 2.0191, 1.7626], device='cuda:2'), covar=tensor([0.0384, 0.0491, 0.0527, 0.0490, 0.0432, 0.0427, 0.0304, 0.0446], device='cuda:2'), in_proj_covar=tensor([0.0083, 0.0113, 0.0135, 0.0116, 0.0105, 0.0098, 0.0089, 0.0108], device='cuda:2'), out_proj_covar=tensor([6.5058e-05, 8.9606e-05, 1.0867e-04, 9.2267e-05, 8.3083e-05, 7.3285e-05, 6.8472e-05, 8.4834e-05], device='cuda:2') 2023-03-26 01:40:15,379 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=14487.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:40:36,152 INFO [finetune.py:976] (2/7) Epoch 3, batch 3050, loss[loss=0.2732, simple_loss=0.3332, pruned_loss=0.1066, over 4760.00 frames. ], tot_loss[loss=0.2478, simple_loss=0.3001, pruned_loss=0.09778, over 953809.43 frames. 
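[Annotation] At batch 3000 the trainer pauses for validation ("Computing validation loss" ... "validation: loss=0.1777 ... over 2265189.00 frames") and then reports the peak GPU memory high-water mark. A minimal sketch of that step; the function signature and the criterion returning (loss, num_frames) are assumptions, not finetune.py's actual API:

    import torch

    def compute_validation_loss(model, valid_loader, criterion, device) -> float:
        """Frame-weighted average loss over the whole validation set."""
        was_training = model.training
        model.eval()
        loss_sum, frames = 0.0, 0.0
        with torch.no_grad():
            for batch in valid_loader:
                feats = batch["inputs"].to(device)
                loss, num_frames = criterion(model, feats, batch["supervisions"])
                loss_sum += loss.item()
                frames += num_frames
        if was_training:
            model.train()
        return loss_sum / max(frames, 1.0)

    # The "Maximum memory allocated" lines report the CUDA high-water mark,
    # e.g. torch.cuda.max_memory_allocated(device) / (1024 ** 2) -> ~6303 MB.

The log resumes below.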
], batch size: 54, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:40:36,221 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.2352, 4.6304, 4.8970, 4.8065, 4.6603, 4.4191, 5.3322, 1.7073], device='cuda:2'), covar=tensor([0.0832, 0.1290, 0.1008, 0.1340, 0.1689, 0.1760, 0.0750, 0.6652], device='cuda:2'), in_proj_covar=tensor([0.0368, 0.0245, 0.0275, 0.0295, 0.0339, 0.0285, 0.0311, 0.0301], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:40:52,886 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.290e+02 1.934e+02 2.277e+02 2.724e+02 4.940e+02, threshold=4.554e+02, percent-clipped=2.0 2023-03-26 01:41:17,990 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=14548.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:41:22,790 INFO [finetune.py:976] (2/7) Epoch 3, batch 3100, loss[loss=0.2145, simple_loss=0.2721, pruned_loss=0.07845, over 4893.00 frames. ], tot_loss[loss=0.2475, simple_loss=0.2991, pruned_loss=0.09801, over 953302.84 frames. ], batch size: 32, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:41:53,259 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0 2023-03-26 01:42:10,663 INFO [finetune.py:976] (2/7) Epoch 3, batch 3150, loss[loss=0.2499, simple_loss=0.3026, pruned_loss=0.09863, over 4873.00 frames. ], tot_loss[loss=0.2455, simple_loss=0.2967, pruned_loss=0.09712, over 952805.27 frames. ], batch size: 31, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:42:11,387 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5040, 1.2560, 1.2630, 1.2550, 1.6475, 1.5542, 1.4126, 1.2105], device='cuda:2'), covar=tensor([0.0247, 0.0366, 0.0538, 0.0319, 0.0273, 0.0443, 0.0319, 0.0378], device='cuda:2'), in_proj_covar=tensor([0.0082, 0.0113, 0.0134, 0.0115, 0.0103, 0.0098, 0.0088, 0.0108], device='cuda:2'), out_proj_covar=tensor([6.4592e-05, 8.9049e-05, 1.0795e-04, 9.1504e-05, 8.2173e-05, 7.2698e-05, 6.7965e-05, 8.4250e-05], device='cuda:2') 2023-03-26 01:42:18,350 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.181e+02 1.775e+02 2.189e+02 2.683e+02 4.981e+02, threshold=4.378e+02, percent-clipped=2.0 2023-03-26 01:42:28,642 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7254, 1.5024, 1.4830, 1.2850, 1.8153, 1.5194, 1.6640, 1.6769], device='cuda:2'), covar=tensor([0.2241, 0.4091, 0.4819, 0.4249, 0.3647, 0.2361, 0.3683, 0.3031], device='cuda:2'), in_proj_covar=tensor([0.0164, 0.0195, 0.0238, 0.0254, 0.0220, 0.0184, 0.0208, 0.0188], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:42:50,536 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3940, 2.0331, 1.5820, 0.7687, 1.7488, 1.9388, 1.6335, 1.8051], device='cuda:2'), covar=tensor([0.0850, 0.1002, 0.1636, 0.2292, 0.1346, 0.2252, 0.2274, 0.1128], device='cuda:2'), in_proj_covar=tensor([0.0165, 0.0198, 0.0205, 0.0189, 0.0217, 0.0211, 0.0217, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:43:00,562 INFO [finetune.py:976] (2/7) Epoch 3, batch 3200, loss[loss=0.239, simple_loss=0.2981, pruned_loss=0.08992, over 4786.00 frames. ], tot_loss[loss=0.241, simple_loss=0.2924, pruned_loss=0.09478, over 952804.37 frames. 
], batch size: 26, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:43:41,554 INFO [finetune.py:976] (2/7) Epoch 3, batch 3250, loss[loss=0.2863, simple_loss=0.3371, pruned_loss=0.1177, over 4808.00 frames. ], tot_loss[loss=0.2435, simple_loss=0.2949, pruned_loss=0.09602, over 954105.26 frames. ], batch size: 51, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:43:54,504 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.171e+02 1.718e+02 2.102e+02 2.544e+02 5.358e+02, threshold=4.204e+02, percent-clipped=1.0 2023-03-26 01:44:05,442 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4443, 1.3610, 1.4036, 1.7349, 1.5733, 3.1732, 1.3242, 1.5827], device='cuda:2'), covar=tensor([0.1017, 0.1737, 0.1278, 0.1025, 0.1561, 0.0244, 0.1415, 0.1606], device='cuda:2'), in_proj_covar=tensor([0.0078, 0.0081, 0.0078, 0.0080, 0.0093, 0.0083, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-26 01:44:08,044 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=14728.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:44:15,461 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=14740.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:44:31,813 INFO [finetune.py:976] (2/7) Epoch 3, batch 3300, loss[loss=0.3149, simple_loss=0.3645, pruned_loss=0.1326, over 4847.00 frames. ], tot_loss[loss=0.2479, simple_loss=0.2996, pruned_loss=0.09807, over 955748.55 frames. ], batch size: 49, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:44:33,779 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=14759.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:44:42,750 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=14769.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:44:48,714 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=14776.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:44:48,759 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=14776.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 01:45:08,519 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=14801.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:45:16,878 INFO [finetune.py:976] (2/7) Epoch 3, batch 3350, loss[loss=0.2615, simple_loss=0.3078, pruned_loss=0.1076, over 4817.00 frames. ], tot_loss[loss=0.2493, simple_loss=0.3014, pruned_loss=0.09862, over 954409.48 frames. ], batch size: 38, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:45:17,534 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=14807.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:45:29,676 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.682e+02 2.084e+02 2.593e+02 4.183e+02, threshold=4.169e+02, percent-clipped=0.0 2023-03-26 01:45:39,450 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=14824.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 01:45:52,839 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=14843.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:46:01,259 INFO [finetune.py:976] (2/7) Epoch 3, batch 3400, loss[loss=0.2449, simple_loss=0.3006, pruned_loss=0.09463, over 4832.00 frames. ], tot_loss[loss=0.2494, simple_loss=0.302, pruned_loss=0.09837, over 954586.02 frames. 
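The zipformer.py:1188 lines trace stochastic layer skipping: each encoder stack has its own warmup window (warmup_begin/warmup_end, in batches), and even at batch_count=14776, far past every warmup_end, a layer is occasionally dropped (num_to_drop=1, layers_to_drop={2} above). A sketch of that behaviour, assuming a higher drop probability inside the warmup window and a small residual probability afterwards; the probabilities are illustrative, not the values used by zipformer.py:

    import random

    def pick_layers_to_drop(batch_count, num_layers, warmup_begin, warmup_end,
                            warmup_p=0.5, residual_p=0.075,
                            rng=random.Random(0)):
        # Illustrative probabilities, not the zipformer.py values: drop
        # layers aggressively during warmup, rarely afterwards (which is
        # why num_to_drop is usually 0 but occasionally 1 above).
        if batch_count < warmup_begin:
            p = 0.0
        elif batch_count < warmup_end:
            p = warmup_p
        else:
            p = residual_p
        return {i for i in range(num_layers) if rng.random() < p}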
], batch size: 49, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:46:14,373 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0 2023-03-26 01:46:41,114 INFO [finetune.py:976] (2/7) Epoch 3, batch 3450, loss[loss=0.2203, simple_loss=0.27, pruned_loss=0.08534, over 4820.00 frames. ], tot_loss[loss=0.2478, simple_loss=0.301, pruned_loss=0.09726, over 956220.98 frames. ], batch size: 33, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:46:53,149 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.352e+02 1.935e+02 2.237e+02 2.692e+02 3.962e+02, threshold=4.475e+02, percent-clipped=0.0 2023-03-26 01:47:10,309 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.90 vs. limit=5.0 2023-03-26 01:47:27,611 INFO [finetune.py:976] (2/7) Epoch 3, batch 3500, loss[loss=0.2383, simple_loss=0.2911, pruned_loss=0.09274, over 4825.00 frames. ], tot_loss[loss=0.2442, simple_loss=0.2973, pruned_loss=0.09556, over 956084.66 frames. ], batch size: 40, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:48:19,804 INFO [finetune.py:976] (2/7) Epoch 3, batch 3550, loss[loss=0.2408, simple_loss=0.2909, pruned_loss=0.09536, over 4737.00 frames. ], tot_loss[loss=0.242, simple_loss=0.2945, pruned_loss=0.09475, over 955408.23 frames. ], batch size: 23, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:48:26,974 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.116e+02 1.780e+02 2.199e+02 2.800e+02 5.904e+02, threshold=4.398e+02, percent-clipped=2.0 2023-03-26 01:49:00,695 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.78 vs. limit=5.0 2023-03-26 01:49:03,540 INFO [finetune.py:976] (2/7) Epoch 3, batch 3600, loss[loss=0.227, simple_loss=0.2844, pruned_loss=0.08483, over 4734.00 frames. ], tot_loss[loss=0.2394, simple_loss=0.2918, pruned_loss=0.09349, over 956911.66 frames. ], batch size: 23, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:49:12,128 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=15069.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:49:41,813 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4732, 1.3184, 1.1366, 0.9056, 1.2616, 1.2872, 1.2207, 1.9481], device='cuda:2'), covar=tensor([1.0868, 0.9351, 0.7864, 1.0233, 0.8242, 0.5309, 0.9525, 0.3429], device='cuda:2'), in_proj_covar=tensor([0.0272, 0.0249, 0.0219, 0.0283, 0.0235, 0.0196, 0.0239, 0.0185], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 01:49:42,970 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=15096.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:49:49,067 INFO [finetune.py:976] (2/7) Epoch 3, batch 3650, loss[loss=0.2748, simple_loss=0.3355, pruned_loss=0.107, over 4856.00 frames. ], tot_loss[loss=0.2414, simple_loss=0.2942, pruned_loss=0.09431, over 957777.01 frames. 
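The scaling.py Whitening lines compare a metric against a limit (2.0 for the grouped activations, 5.0 for the single-group 384-channel ones); the whitening penalty only engages when the metric exceeds the limit, so metric=1.18 vs. limit=2.0 above means no penalty was applied. One plausible formulation of such a metric, assuming it measures how far each group's feature covariance is from a multiple of the identity (1.0 for perfectly white features, growing with anisotropy); this is a sketch, not the verbatim scaling.py computation:

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
        # x: (num_frames, num_channels).  Sketch, not verbatim scaling.py:
        # E[lambda^2] / E[lambda]^2 over the eigenvalues of each group's
        # covariance equals 1.0 iff the covariance is a multiple of the
        # identity, i.e. the features are perfectly "white".
        n, c = x.shape
        gsize = c // num_groups                   # e.g. 96 channels / 8 groups
        metrics = []
        for g in range(num_groups):
            xg = x[:, g * gsize:(g + 1) * gsize]
            cov = (xg.T @ xg) / n
            eigs = torch.linalg.eigvalsh(cov)
            metrics.append((eigs ** 2).mean() / eigs.mean() ** 2)
        return torch.stack(metrics).mean().item()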
], batch size: 44, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:49:56,315 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.204e+02 1.927e+02 2.238e+02 2.686e+02 4.916e+02, threshold=4.476e+02, percent-clipped=1.0 2023-03-26 01:49:56,388 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=15117.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:50:01,329 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1478, 1.2397, 0.9656, 1.3499, 1.3373, 2.4073, 1.0907, 1.3716], device='cuda:2'), covar=tensor([0.1119, 0.1966, 0.1497, 0.1098, 0.1790, 0.0410, 0.1718, 0.1908], device='cuda:2'), in_proj_covar=tensor([0.0078, 0.0081, 0.0078, 0.0079, 0.0093, 0.0083, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-26 01:50:09,998 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1703, 1.9655, 1.9222, 2.1458, 2.6444, 2.2820, 1.8040, 1.8542], device='cuda:2'), covar=tensor([0.2334, 0.2379, 0.2029, 0.1896, 0.1727, 0.1103, 0.2778, 0.1940], device='cuda:2'), in_proj_covar=tensor([0.0229, 0.0208, 0.0195, 0.0182, 0.0232, 0.0171, 0.0212, 0.0186], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:50:17,857 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7043, 1.6372, 2.1132, 2.0460, 1.8351, 3.5717, 1.4531, 1.9189], device='cuda:2'), covar=tensor([0.0944, 0.1430, 0.1364, 0.0902, 0.1348, 0.0229, 0.1271, 0.1477], device='cuda:2'), in_proj_covar=tensor([0.0078, 0.0081, 0.0078, 0.0079, 0.0093, 0.0083, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-26 01:50:28,396 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=15143.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:50:41,688 INFO [finetune.py:976] (2/7) Epoch 3, batch 3700, loss[loss=0.2192, simple_loss=0.2875, pruned_loss=0.07538, over 4919.00 frames. ], tot_loss[loss=0.2435, simple_loss=0.2971, pruned_loss=0.09502, over 955373.70 frames. ], batch size: 38, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:51:16,177 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=15191.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:51:19,595 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=15195.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:51:29,140 INFO [finetune.py:976] (2/7) Epoch 3, batch 3750, loss[loss=0.2357, simple_loss=0.298, pruned_loss=0.0867, over 4744.00 frames. ], tot_loss[loss=0.2456, simple_loss=0.2991, pruned_loss=0.09602, over 956090.23 frames. ], batch size: 54, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:51:40,516 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.151e+02 1.801e+02 2.153e+02 2.622e+02 6.720e+02, threshold=4.305e+02, percent-clipped=1.0 2023-03-26 01:52:33,802 INFO [finetune.py:976] (2/7) Epoch 3, batch 3800, loss[loss=0.2486, simple_loss=0.3005, pruned_loss=0.09836, over 4885.00 frames. ], tot_loss[loss=0.2471, simple_loss=0.3006, pruned_loss=0.09682, over 955713.16 frames. 
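The attn_weights_entropy tensors hold one value per attention head (eight per stack), with small values marking sharply focused heads and values near log(num_keys) marking diffuse ones. A sketch of the quantity being logged, assuming it is the Shannon entropy of each head's attention distribution averaged over query positions (not verbatim zipformer.py):

    import torch

    def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
        # attn: (num_heads, num_queries, num_keys), each row a distribution
        # over keys.  Returns one averaged entropy per head, as in the
        # eight-element tensors logged above.
        ent = -(attn * attn.clamp(min=1e-20).log()).sum(dim=-1)
        return ent.mean(dim=-1)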
], batch size: 35, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:52:33,932 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=15256.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 01:52:52,402 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-26 01:53:21,966 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6792, 0.9644, 1.4810, 1.4477, 1.3562, 1.3597, 1.3536, 1.4129], device='cuda:2'), covar=tensor([0.8017, 1.3044, 1.0242, 1.1283, 1.2576, 0.9148, 1.4122, 0.9565], device='cuda:2'), in_proj_covar=tensor([0.0228, 0.0255, 0.0254, 0.0267, 0.0243, 0.0218, 0.0279, 0.0220], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 01:53:22,426 INFO [finetune.py:976] (2/7) Epoch 3, batch 3850, loss[loss=0.2316, simple_loss=0.2869, pruned_loss=0.08821, over 4848.00 frames. ], tot_loss[loss=0.2459, simple_loss=0.2992, pruned_loss=0.09626, over 956096.31 frames. ], batch size: 49, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:53:39,316 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.276e+02 1.876e+02 2.253e+02 2.579e+02 5.032e+02, threshold=4.505e+02, percent-clipped=1.0 2023-03-26 01:53:49,479 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5643, 1.3799, 1.8875, 2.8766, 2.0284, 2.0965, 1.0268, 2.2540], device='cuda:2'), covar=tensor([0.1785, 0.1560, 0.1260, 0.0643, 0.0856, 0.1289, 0.1828, 0.0767], device='cuda:2'), in_proj_covar=tensor([0.0103, 0.0120, 0.0138, 0.0166, 0.0105, 0.0145, 0.0131, 0.0108], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2') 2023-03-26 01:54:11,118 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9017, 1.4970, 0.9209, 1.6937, 2.1434, 1.4550, 1.6939, 1.8739], device='cuda:2'), covar=tensor([0.1558, 0.2034, 0.2500, 0.1311, 0.2172, 0.2173, 0.1485, 0.2071], device='cuda:2'), in_proj_covar=tensor([0.0092, 0.0098, 0.0117, 0.0094, 0.0125, 0.0097, 0.0100, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 01:54:26,485 INFO [finetune.py:976] (2/7) Epoch 3, batch 3900, loss[loss=0.1911, simple_loss=0.2603, pruned_loss=0.06095, over 4780.00 frames. ], tot_loss[loss=0.242, simple_loss=0.2954, pruned_loss=0.09428, over 958135.00 frames. ], batch size: 26, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:54:34,348 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3621, 2.9413, 2.2716, 1.8213, 2.8821, 2.8417, 2.7958, 2.3740], device='cuda:2'), covar=tensor([0.0753, 0.0516, 0.0934, 0.1041, 0.0392, 0.0818, 0.0715, 0.0977], device='cuda:2'), in_proj_covar=tensor([0.0140, 0.0132, 0.0145, 0.0130, 0.0111, 0.0144, 0.0148, 0.0164], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:55:10,117 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=15396.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:55:16,819 INFO [finetune.py:976] (2/7) Epoch 3, batch 3950, loss[loss=0.2158, simple_loss=0.2791, pruned_loss=0.0762, over 4805.00 frames. ], tot_loss[loss=0.24, simple_loss=0.2927, pruned_loss=0.09366, over 954159.34 frames. 
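tot_loss is not the loss of a single batch: its frame counts hover near 950k, roughly 200 batches' worth at ~4,800 frames each, and it resets at epoch boundaries (at epoch 4, batch 0 further below, tot_loss equals the batch loss exactly). A sketch consistent with those observations, assuming a decayed, frames-weighted running sum; the decay constant is an assumption chosen to match the printed frame totals, not read from the code:

    class LossTracker:
        # Sketch (assumed): decayed, frames-weighted running sum.  With
        # decay = 1 - 1/200 the steady-state window is ~200 batches, i.e.
        # ~950k frames at ~4800 frames/batch, matching the totals above.
        def __init__(self, decay: float = 1.0 - 1.0 / 200):
            self.decay, self.loss_sum, self.frames = decay, 0.0, 0.0

        def update(self, batch_loss: float, batch_frames: float) -> float:
            self.loss_sum = self.decay * self.loss_sum + batch_loss * batch_frames
            self.frames = self.decay * self.frames + batch_frames
            return self.loss_sum / self.frames   # the printed tot_loss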
], batch size: 45, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:55:18,620 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1131, 2.3155, 2.1051, 1.5104, 2.3613, 2.3675, 2.3448, 1.9442], device='cuda:2'), covar=tensor([0.0734, 0.0672, 0.0824, 0.1043, 0.0451, 0.0797, 0.0735, 0.1003], device='cuda:2'), in_proj_covar=tensor([0.0139, 0.0132, 0.0145, 0.0130, 0.0110, 0.0143, 0.0147, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 01:55:25,255 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.128e+02 1.678e+02 2.164e+02 2.472e+02 4.231e+02, threshold=4.328e+02, percent-clipped=0.0 2023-03-26 01:55:49,726 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4772, 1.5561, 1.5170, 0.9051, 1.4961, 1.7410, 1.7474, 1.4844], device='cuda:2'), covar=tensor([0.1125, 0.0552, 0.0501, 0.0621, 0.0431, 0.0641, 0.0320, 0.0604], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0159, 0.0119, 0.0138, 0.0134, 0.0122, 0.0148, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.9128e-05, 1.1865e-04, 8.7010e-05, 1.0171e-04, 9.6784e-05, 8.9915e-05, 1.1086e-04, 1.0760e-04], device='cuda:2') 2023-03-26 01:55:52,197 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=15444.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:55:53,297 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.3662, 1.6152, 1.7421, 0.9853, 1.4943, 1.8555, 1.8760, 1.5508], device='cuda:2'), covar=tensor([0.0985, 0.0540, 0.0390, 0.0546, 0.0405, 0.0494, 0.0280, 0.0577], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0159, 0.0119, 0.0138, 0.0134, 0.0122, 0.0148, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.9145e-05, 1.1871e-04, 8.6987e-05, 1.0173e-04, 9.6805e-05, 8.9944e-05, 1.1086e-04, 1.0762e-04], device='cuda:2') 2023-03-26 01:55:59,829 INFO [finetune.py:976] (2/7) Epoch 3, batch 4000, loss[loss=0.2213, simple_loss=0.2727, pruned_loss=0.0849, over 4755.00 frames. ], tot_loss[loss=0.2386, simple_loss=0.2908, pruned_loss=0.09322, over 953101.23 frames. ], batch size: 54, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:57:05,459 INFO [finetune.py:976] (2/7) Epoch 3, batch 4050, loss[loss=0.2124, simple_loss=0.2544, pruned_loss=0.08521, over 4717.00 frames. ], tot_loss[loss=0.2428, simple_loss=0.2947, pruned_loss=0.09549, over 951664.07 frames. ], batch size: 23, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:57:20,269 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.142e+02 1.798e+02 2.110e+02 2.647e+02 5.396e+02, threshold=4.219e+02, percent-clipped=2.0 2023-03-26 01:57:58,327 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=15551.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 01:58:06,352 INFO [finetune.py:976] (2/7) Epoch 3, batch 4100, loss[loss=0.2815, simple_loss=0.3302, pruned_loss=0.1164, over 4897.00 frames. ], tot_loss[loss=0.2438, simple_loss=0.2965, pruned_loss=0.09559, over 952898.63 frames. 
], batch size: 43, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:58:28,113 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.5021, 1.6324, 1.7611, 1.0731, 1.6361, 1.9548, 1.8988, 1.6025], device='cuda:2'), covar=tensor([0.1010, 0.0591, 0.0357, 0.0554, 0.0389, 0.0411, 0.0308, 0.0528], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0160, 0.0120, 0.0139, 0.0134, 0.0122, 0.0149, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.9480e-05, 1.1935e-04, 8.7245e-05, 1.0253e-04, 9.7303e-05, 8.9941e-05, 1.1160e-04, 1.0779e-04], device='cuda:2') 2023-03-26 01:59:02,792 INFO [finetune.py:976] (2/7) Epoch 3, batch 4150, loss[loss=0.2859, simple_loss=0.3364, pruned_loss=0.1177, over 4822.00 frames. ], tot_loss[loss=0.2459, simple_loss=0.2989, pruned_loss=0.0964, over 954747.23 frames. ], batch size: 38, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 01:59:10,585 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.238e+02 1.791e+02 2.157e+02 2.467e+02 4.537e+02, threshold=4.313e+02, percent-clipped=1.0 2023-03-26 01:59:51,469 INFO [finetune.py:976] (2/7) Epoch 3, batch 4200, loss[loss=0.2393, simple_loss=0.2932, pruned_loss=0.09269, over 4763.00 frames. ], tot_loss[loss=0.2445, simple_loss=0.2981, pruned_loss=0.0954, over 954585.23 frames. ], batch size: 28, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 02:00:26,257 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0251, 1.5844, 1.7010, 1.7283, 1.5574, 1.5663, 1.6758, 1.6999], device='cuda:2'), covar=tensor([1.0450, 1.4743, 1.1214, 1.3638, 1.4194, 1.0438, 1.7133, 0.9515], device='cuda:2'), in_proj_covar=tensor([0.0225, 0.0252, 0.0250, 0.0263, 0.0240, 0.0215, 0.0275, 0.0217], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 02:00:53,256 INFO [finetune.py:976] (2/7) Epoch 3, batch 4250, loss[loss=0.254, simple_loss=0.3037, pruned_loss=0.1022, over 4783.00 frames. ], tot_loss[loss=0.2435, simple_loss=0.2967, pruned_loss=0.09517, over 954719.66 frames. ], batch size: 26, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 02:00:55,946 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.58 vs. limit=2.0 2023-03-26 02:01:00,002 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.228e+02 1.731e+02 2.073e+02 2.469e+02 5.386e+02, threshold=4.147e+02, percent-clipped=2.0 2023-03-26 02:01:00,177 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6867, 0.8125, 1.5231, 1.3991, 1.4031, 1.3316, 1.2687, 1.4539], device='cuda:2'), covar=tensor([0.8103, 1.3358, 0.9857, 1.1230, 1.2382, 0.8832, 1.4372, 0.9477], device='cuda:2'), in_proj_covar=tensor([0.0227, 0.0253, 0.0252, 0.0265, 0.0242, 0.0216, 0.0277, 0.0218], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 02:01:13,992 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.81 vs. limit=2.0 2023-03-26 02:01:16,513 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.59 vs. 
limit=2.0 2023-03-26 02:01:31,791 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0808, 1.9083, 1.7843, 2.1702, 1.5806, 4.6311, 1.7653, 2.5324], device='cuda:2'), covar=tensor([0.3495, 0.2432, 0.2010, 0.2078, 0.1747, 0.0092, 0.2621, 0.1324], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0111, 0.0116, 0.0119, 0.0116, 0.0097, 0.0101, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:2') 2023-03-26 02:01:41,172 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6229, 1.6366, 1.7582, 1.8412, 1.7091, 2.9582, 1.4606, 1.7682], device='cuda:2'), covar=tensor([0.0861, 0.1347, 0.1245, 0.0862, 0.1244, 0.0297, 0.1210, 0.1324], device='cuda:2'), in_proj_covar=tensor([0.0078, 0.0081, 0.0078, 0.0079, 0.0092, 0.0083, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-26 02:01:43,529 INFO [finetune.py:976] (2/7) Epoch 3, batch 4300, loss[loss=0.2294, simple_loss=0.2814, pruned_loss=0.08868, over 4911.00 frames. ], tot_loss[loss=0.2412, simple_loss=0.2939, pruned_loss=0.09424, over 953753.68 frames. ], batch size: 35, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 02:01:53,449 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.92 vs. limit=5.0 2023-03-26 02:02:43,903 INFO [finetune.py:976] (2/7) Epoch 3, batch 4350, loss[loss=0.1979, simple_loss=0.2511, pruned_loss=0.07231, over 4812.00 frames. ], tot_loss[loss=0.2359, simple_loss=0.2889, pruned_loss=0.09148, over 953213.75 frames. ], batch size: 25, lr: 3.98e-03, grad_scale: 32.0 2023-03-26 02:03:01,710 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.218e+02 1.712e+02 2.007e+02 2.460e+02 4.679e+02, threshold=4.015e+02, percent-clipped=1.0 2023-03-26 02:03:12,273 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4455, 2.1890, 1.8084, 0.9243, 2.1421, 1.9384, 1.7388, 2.0879], device='cuda:2'), covar=tensor([0.0968, 0.0958, 0.1832, 0.2575, 0.1449, 0.2529, 0.2459, 0.1216], device='cuda:2'), in_proj_covar=tensor([0.0167, 0.0199, 0.0205, 0.0191, 0.0218, 0.0212, 0.0218, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 02:03:24,318 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.55 vs. limit=2.0 2023-03-26 02:03:33,454 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=15851.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:03:36,424 INFO [finetune.py:976] (2/7) Epoch 3, batch 4400, loss[loss=0.2996, simple_loss=0.3434, pruned_loss=0.1278, over 4798.00 frames. ], tot_loss[loss=0.2389, simple_loss=0.2913, pruned_loss=0.09327, over 953733.47 frames. 
], batch size: 45, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:03:36,517 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3103, 1.3746, 1.3280, 1.5474, 1.4990, 2.8755, 1.1885, 1.4614], device='cuda:2'), covar=tensor([0.1110, 0.1840, 0.1454, 0.1088, 0.1625, 0.0303, 0.1671, 0.1852], device='cuda:2'), in_proj_covar=tensor([0.0078, 0.0081, 0.0078, 0.0079, 0.0092, 0.0083, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-26 02:04:05,504 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=15883.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:04:20,878 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=15899.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:04:30,843 INFO [finetune.py:976] (2/7) Epoch 3, batch 4450, loss[loss=0.2423, simple_loss=0.3086, pruned_loss=0.08797, over 4838.00 frames. ], tot_loss[loss=0.2433, simple_loss=0.2959, pruned_loss=0.09532, over 954440.82 frames. ], batch size: 47, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:04:41,781 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.0588, 0.9404, 1.0429, 0.3930, 0.7164, 1.1434, 1.1864, 1.0699], device='cuda:2'), covar=tensor([0.0929, 0.0510, 0.0414, 0.0582, 0.0486, 0.0409, 0.0300, 0.0463], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0161, 0.0120, 0.0140, 0.0135, 0.0122, 0.0149, 0.0147], device='cuda:2'), out_proj_covar=tensor([1.0028e-04, 1.1988e-04, 8.7448e-05, 1.0316e-04, 9.7795e-05, 9.0484e-05, 1.1113e-04, 1.0861e-04], device='cuda:2') 2023-03-26 02:04:48,084 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.107e+02 1.872e+02 2.224e+02 2.526e+02 5.583e+02, threshold=4.448e+02, percent-clipped=1.0 2023-03-26 02:05:16,016 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=15944.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:05:23,247 INFO [finetune.py:976] (2/7) Epoch 3, batch 4500, loss[loss=0.2069, simple_loss=0.2441, pruned_loss=0.08482, over 4095.00 frames. ], tot_loss[loss=0.2441, simple_loss=0.2968, pruned_loss=0.0957, over 952136.46 frames. ], batch size: 17, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:05:23,401 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6600, 1.0189, 1.4801, 1.4132, 1.3040, 1.3015, 1.3197, 1.3680], device='cuda:2'), covar=tensor([0.7395, 1.2292, 0.9520, 1.0728, 1.1732, 0.8951, 1.3696, 0.8887], device='cuda:2'), in_proj_covar=tensor([0.0226, 0.0253, 0.0252, 0.0265, 0.0242, 0.0216, 0.0277, 0.0218], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 02:06:17,331 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=16001.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:06:24,310 INFO [finetune.py:976] (2/7) Epoch 3, batch 4550, loss[loss=0.2703, simple_loss=0.3204, pruned_loss=0.1101, over 4896.00 frames. ], tot_loss[loss=0.2452, simple_loss=0.2982, pruned_loss=0.09608, over 952594.52 frames. ], batch size: 37, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:06:36,866 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.237e+02 1.760e+02 2.085e+02 2.487e+02 3.865e+02, threshold=4.170e+02, percent-clipped=0.0 2023-03-26 02:07:05,903 INFO [finetune.py:976] (2/7) Epoch 3, batch 4600, loss[loss=0.2301, simple_loss=0.2902, pruned_loss=0.08502, over 4913.00 frames. 
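The printed learning rate sits at 3.98e-03 for thousands of batches and only around here slips to 3.97e-03. That glacial decay is consistent with icefall's Eden schedule, in which both decay factors stay near 1.0 while the batch and epoch counts are far below the lr_batches and lr_epochs constants. A sketch of the schedule; the functional form is taken from Eden's definition, but treat the wiring and the exact crossover point as assumptions rather than the exact finetune.py behaviour:

    def eden_lr(base_lr: float, batch: int, epoch: int,
                lr_batches: float = 100_000.0, lr_epochs: float = 100.0) -> float:
        # Eden: both factors are ~1.0 while batch << lr_batches and
        # epoch << lr_epochs, so early fine-tuning sees an almost flat rate,
        # staying near the printed 3.98e-03 / 3.97e-03 values.
        batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return base_lr * batch_factor * epoch_factor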
], tot_loss[loss=0.2448, simple_loss=0.2978, pruned_loss=0.09584, over 952487.38 frames. ], batch size: 37, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:07:07,940 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-26 02:07:11,406 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=16062.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:07:35,240 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=16085.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:07:36,477 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7136, 0.8329, 1.5635, 1.4524, 1.3485, 1.3286, 1.2479, 1.4895], device='cuda:2'), covar=tensor([0.8331, 1.3666, 1.0349, 1.1696, 1.2294, 0.9164, 1.5015, 0.9496], device='cuda:2'), in_proj_covar=tensor([0.0228, 0.0255, 0.0254, 0.0266, 0.0243, 0.0217, 0.0278, 0.0220], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 02:07:59,849 INFO [finetune.py:976] (2/7) Epoch 3, batch 4650, loss[loss=0.2446, simple_loss=0.2862, pruned_loss=0.1015, over 4753.00 frames. ], tot_loss[loss=0.2433, simple_loss=0.2955, pruned_loss=0.09559, over 953446.11 frames. ], batch size: 26, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:08:07,260 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.007e+02 1.817e+02 2.201e+02 2.598e+02 3.850e+02, threshold=4.403e+02, percent-clipped=0.0 2023-03-26 02:08:39,888 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=16146.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:08:42,107 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=16149.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:08:52,381 INFO [finetune.py:976] (2/7) Epoch 3, batch 4700, loss[loss=0.2081, simple_loss=0.2654, pruned_loss=0.07539, over 4717.00 frames. ], tot_loss[loss=0.2389, simple_loss=0.2912, pruned_loss=0.09328, over 953973.24 frames. ], batch size: 23, lr: 3.97e-03, grad_scale: 64.0 2023-03-26 02:08:59,698 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0 2023-03-26 02:09:26,778 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5219, 2.1482, 1.6078, 0.7749, 1.8769, 2.0872, 1.7656, 1.9652], device='cuda:2'), covar=tensor([0.0791, 0.0944, 0.1698, 0.2255, 0.1374, 0.2073, 0.2141, 0.0937], device='cuda:2'), in_proj_covar=tensor([0.0166, 0.0198, 0.0204, 0.0190, 0.0216, 0.0210, 0.0218, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 02:09:40,755 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7275, 3.7961, 3.6723, 1.8186, 3.9197, 2.8334, 0.8514, 2.6710], device='cuda:2'), covar=tensor([0.3022, 0.2061, 0.1652, 0.3531, 0.1094, 0.1042, 0.4724, 0.1597], device='cuda:2'), in_proj_covar=tensor([0.0155, 0.0168, 0.0163, 0.0127, 0.0154, 0.0120, 0.0146, 0.0121], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 02:09:41,914 INFO [finetune.py:976] (2/7) Epoch 3, batch 4750, loss[loss=0.2657, simple_loss=0.3134, pruned_loss=0.109, over 4925.00 frames. ], tot_loss[loss=0.2368, simple_loss=0.2891, pruned_loss=0.09226, over 954087.98 frames. 
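grad_scale jumps from 32.0 to 64.0 at batch 4700 above and is back at 32.0 in the entries that follow: the signature of dynamic fp16 loss scaling, where the scale doubles after a run of overflow-free steps and halves as soon as a step produces inf/nan gradients. Illustrative use of the standard PyTorch API; the init/growth parameter values here are assumptions, not the finetune.py settings:

    import torch

    # Illustrative dynamic loss scaling; parameter values are assumptions.
    scaler = torch.cuda.amp.GradScaler(init_scale=32.0, growth_factor=2.0,
                                       backoff_factor=0.5, growth_interval=2000)
    # Typical step:
    #     with torch.cuda.amp.autocast():
    #         loss = compute_loss(batch)          # hypothetical helper
    #     scaler.scale(loss).backward()
    #     scaler.step(optimizer)
    #     scaler.update()                         # grows or backs off the scale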
], batch size: 38, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:09:44,992 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=16210.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:09:49,753 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.299e+02 1.759e+02 2.065e+02 2.416e+02 4.123e+02, threshold=4.129e+02, percent-clipped=0.0 2023-03-26 02:09:52,596 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.66 vs. limit=2.0 2023-03-26 02:10:03,619 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=16239.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:10:16,138 INFO [finetune.py:976] (2/7) Epoch 3, batch 4800, loss[loss=0.252, simple_loss=0.3112, pruned_loss=0.0964, over 4760.00 frames. ], tot_loss[loss=0.2397, simple_loss=0.292, pruned_loss=0.09369, over 954586.76 frames. ], batch size: 59, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:11:10,322 INFO [finetune.py:976] (2/7) Epoch 3, batch 4850, loss[loss=0.248, simple_loss=0.2798, pruned_loss=0.1081, over 4708.00 frames. ], tot_loss[loss=0.241, simple_loss=0.2939, pruned_loss=0.09399, over 954627.74 frames. ], batch size: 23, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:11:18,714 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.278e+02 1.868e+02 2.289e+02 2.628e+02 4.977e+02, threshold=4.577e+02, percent-clipped=4.0 2023-03-26 02:11:45,922 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-26 02:11:52,999 INFO [finetune.py:976] (2/7) Epoch 3, batch 4900, loss[loss=0.2306, simple_loss=0.2861, pruned_loss=0.0876, over 4909.00 frames. ], tot_loss[loss=0.2422, simple_loss=0.2957, pruned_loss=0.09437, over 954236.97 frames. ], batch size: 37, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:11:53,736 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=16357.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:12:00,017 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4974, 1.4364, 1.7939, 1.8387, 1.5611, 3.3402, 1.2199, 1.5915], device='cuda:2'), covar=tensor([0.1071, 0.1868, 0.1304, 0.1102, 0.1734, 0.0277, 0.1728, 0.1979], device='cuda:2'), in_proj_covar=tensor([0.0078, 0.0081, 0.0077, 0.0079, 0.0092, 0.0083, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-26 02:12:04,215 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=16371.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:12:35,109 INFO [finetune.py:976] (2/7) Epoch 3, batch 4950, loss[loss=0.2281, simple_loss=0.2861, pruned_loss=0.08504, over 4791.00 frames. ], tot_loss[loss=0.2448, simple_loss=0.2983, pruned_loss=0.0956, over 956478.19 frames. ], batch size: 51, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:12:43,743 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.165e+02 1.793e+02 2.170e+02 2.564e+02 4.726e+02, threshold=4.340e+02, percent-clipped=1.0 2023-03-26 02:12:46,797 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. 
limit=2.0 2023-03-26 02:12:53,396 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=16432.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:12:56,909 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2114, 1.9471, 1.4561, 0.5896, 1.6583, 1.8364, 1.6157, 1.8449], device='cuda:2'), covar=tensor([0.0898, 0.0724, 0.1565, 0.2089, 0.1232, 0.2353, 0.2242, 0.0843], device='cuda:2'), in_proj_covar=tensor([0.0166, 0.0199, 0.0204, 0.0191, 0.0217, 0.0211, 0.0218, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 02:12:58,137 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9783, 1.8434, 1.4506, 1.9937, 1.7772, 1.7216, 1.7173, 2.6078], device='cuda:2'), covar=tensor([1.0939, 1.1484, 0.8466, 1.1620, 0.9857, 0.6253, 1.1754, 0.3453], device='cuda:2'), in_proj_covar=tensor([0.0274, 0.0250, 0.0220, 0.0285, 0.0236, 0.0196, 0.0240, 0.0187], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 02:12:59,267 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=16441.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:13:11,536 INFO [finetune.py:976] (2/7) Epoch 3, batch 5000, loss[loss=0.2083, simple_loss=0.2634, pruned_loss=0.07659, over 4766.00 frames. ], tot_loss[loss=0.2425, simple_loss=0.296, pruned_loss=0.09448, over 957547.15 frames. ], batch size: 28, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:13:26,646 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.66 vs. limit=5.0 2023-03-26 02:13:49,630 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5337, 1.3506, 1.9966, 1.3132, 1.7596, 1.7370, 1.3199, 1.9742], device='cuda:2'), covar=tensor([0.1512, 0.2048, 0.1241, 0.1709, 0.1056, 0.1452, 0.2573, 0.1035], device='cuda:2'), in_proj_covar=tensor([0.0207, 0.0207, 0.0204, 0.0198, 0.0184, 0.0227, 0.0217, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 02:14:08,362 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=16505.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:14:08,897 INFO [finetune.py:976] (2/7) Epoch 3, batch 5050, loss[loss=0.2112, simple_loss=0.2792, pruned_loss=0.07161, over 4801.00 frames. ], tot_loss[loss=0.2406, simple_loss=0.2941, pruned_loss=0.09357, over 957155.38 frames. ], batch size: 29, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:14:27,728 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.054e+02 1.680e+02 2.024e+02 2.446e+02 4.498e+02, threshold=4.048e+02, percent-clipped=1.0 2023-03-26 02:14:35,620 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. 
limit=2.0 2023-03-26 02:14:49,243 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=16539.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:14:55,127 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9600, 1.9705, 1.6858, 1.3218, 2.1198, 2.4306, 2.1621, 1.8467], device='cuda:2'), covar=tensor([0.0345, 0.0396, 0.0589, 0.0492, 0.0364, 0.0449, 0.0333, 0.0372], device='cuda:2'), in_proj_covar=tensor([0.0084, 0.0114, 0.0137, 0.0117, 0.0104, 0.0099, 0.0090, 0.0108], device='cuda:2'), out_proj_covar=tensor([6.5556e-05, 8.9978e-05, 1.1003e-04, 9.2642e-05, 8.2683e-05, 7.3683e-05, 6.9568e-05, 8.4442e-05], device='cuda:2') 2023-03-26 02:15:09,827 INFO [finetune.py:976] (2/7) Epoch 3, batch 5100, loss[loss=0.2305, simple_loss=0.2902, pruned_loss=0.08537, over 4919.00 frames. ], tot_loss[loss=0.236, simple_loss=0.2893, pruned_loss=0.09137, over 959127.19 frames. ], batch size: 46, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:15:36,573 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=16587.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:15:37,230 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.0276, 4.3767, 4.5599, 4.8341, 4.7291, 4.4450, 5.1402, 1.6150], device='cuda:2'), covar=tensor([0.0685, 0.0716, 0.0605, 0.0833, 0.1132, 0.1308, 0.0474, 0.5512], device='cuda:2'), in_proj_covar=tensor([0.0367, 0.0246, 0.0279, 0.0297, 0.0345, 0.0290, 0.0314, 0.0305], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 02:15:39,384 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8163, 1.5558, 1.7909, 1.6694, 1.5193, 1.5387, 1.6103, 1.7153], device='cuda:2'), covar=tensor([0.7775, 1.1050, 0.8416, 1.0691, 1.2034, 0.8583, 1.3677, 0.7794], device='cuda:2'), in_proj_covar=tensor([0.0228, 0.0254, 0.0254, 0.0266, 0.0244, 0.0218, 0.0279, 0.0220], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 02:15:45,661 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6435, 0.6129, 1.3889, 1.2690, 1.2454, 1.2747, 1.1585, 1.4087], device='cuda:2'), covar=tensor([0.9369, 1.5319, 1.2563, 1.2922, 1.5313, 1.0651, 1.7729, 1.1340], device='cuda:2'), in_proj_covar=tensor([0.0228, 0.0254, 0.0255, 0.0266, 0.0244, 0.0218, 0.0279, 0.0220], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 02:15:52,951 INFO [finetune.py:976] (2/7) Epoch 3, batch 5150, loss[loss=0.2079, simple_loss=0.2583, pruned_loss=0.07877, over 4775.00 frames. ], tot_loss[loss=0.236, simple_loss=0.289, pruned_loss=0.09145, over 958493.90 frames. ], batch size: 26, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:16:12,054 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.182e+02 1.762e+02 2.113e+02 2.590e+02 4.768e+02, threshold=4.226e+02, percent-clipped=2.0 2023-03-26 02:16:47,968 INFO [finetune.py:976] (2/7) Epoch 3, batch 5200, loss[loss=0.2822, simple_loss=0.3192, pruned_loss=0.1226, over 4131.00 frames. ], tot_loss[loss=0.2394, simple_loss=0.293, pruned_loss=0.09296, over 956742.37 frames. 
], batch size: 18, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:16:48,672 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=16657.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:17:40,755 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=16705.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:17:41,306 INFO [finetune.py:976] (2/7) Epoch 3, batch 5250, loss[loss=0.2847, simple_loss=0.3305, pruned_loss=0.1195, over 4846.00 frames. ], tot_loss[loss=0.2414, simple_loss=0.2951, pruned_loss=0.09386, over 955956.49 frames. ], batch size: 47, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:17:53,305 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.211e+02 1.833e+02 2.103e+02 2.647e+02 4.683e+02, threshold=4.205e+02, percent-clipped=1.0 2023-03-26 02:18:00,562 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=16727.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:18:12,269 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=16741.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:18:21,889 INFO [finetune.py:976] (2/7) Epoch 3, batch 5300, loss[loss=0.2564, simple_loss=0.3096, pruned_loss=0.1016, over 4837.00 frames. ], tot_loss[loss=0.2425, simple_loss=0.2966, pruned_loss=0.09421, over 955782.94 frames. ], batch size: 49, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:18:22,036 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8136, 1.6717, 1.3004, 1.7001, 1.6188, 1.5280, 1.5275, 2.4316], device='cuda:2'), covar=tensor([1.1777, 1.2068, 0.8853, 1.1648, 0.9538, 0.5904, 1.0360, 0.3471], device='cuda:2'), in_proj_covar=tensor([0.0273, 0.0249, 0.0218, 0.0283, 0.0234, 0.0195, 0.0238, 0.0186], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2') 2023-03-26 02:18:23,852 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7827, 1.0827, 0.9032, 1.6721, 2.1581, 1.2105, 1.4667, 1.5693], device='cuda:2'), covar=tensor([0.1685, 0.2386, 0.2294, 0.1333, 0.1990, 0.2075, 0.1536, 0.2231], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0099, 0.0118, 0.0094, 0.0125, 0.0098, 0.0101, 0.0096], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 02:18:49,199 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=16789.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:19:07,676 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=16805.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:19:08,225 INFO [finetune.py:976] (2/7) Epoch 3, batch 5350, loss[loss=0.2969, simple_loss=0.3309, pruned_loss=0.1314, over 4208.00 frames. ], tot_loss[loss=0.2425, simple_loss=0.2966, pruned_loss=0.09416, over 956482.22 frames. ], batch size: 66, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:19:21,972 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.184e+02 1.880e+02 2.227e+02 2.511e+02 3.677e+02, threshold=4.454e+02, percent-clipped=0.0 2023-03-26 02:19:45,405 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.61 vs. 
limit=5.0 2023-03-26 02:19:46,298 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=16853.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:19:48,084 INFO [finetune.py:976] (2/7) Epoch 3, batch 5400, loss[loss=0.2818, simple_loss=0.3221, pruned_loss=0.1208, over 4809.00 frames. ], tot_loss[loss=0.2407, simple_loss=0.2944, pruned_loss=0.09352, over 955830.03 frames. ], batch size: 39, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:19:53,086 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1717, 1.2587, 1.4384, 0.6527, 1.2490, 1.5902, 1.5734, 1.3359], device='cuda:2'), covar=tensor([0.0904, 0.0544, 0.0447, 0.0504, 0.0417, 0.0470, 0.0306, 0.0603], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0158, 0.0117, 0.0137, 0.0133, 0.0121, 0.0146, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.7870e-05, 1.1750e-04, 8.5200e-05, 1.0043e-04, 9.6259e-05, 8.9446e-05, 1.0898e-04, 1.0752e-04], device='cuda:2') 2023-03-26 02:19:55,561 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=16868.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 02:20:31,377 INFO [finetune.py:976] (2/7) Epoch 3, batch 5450, loss[loss=0.1789, simple_loss=0.243, pruned_loss=0.05739, over 4756.00 frames. ], tot_loss[loss=0.2379, simple_loss=0.2911, pruned_loss=0.09235, over 956493.47 frames. ], batch size: 27, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:20:31,483 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=16906.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:20:38,653 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.130e+02 1.665e+02 2.000e+02 2.450e+02 4.433e+02, threshold=4.000e+02, percent-clipped=0.0 2023-03-26 02:20:46,027 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=16929.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 02:21:14,236 INFO [finetune.py:976] (2/7) Epoch 3, batch 5500, loss[loss=0.2005, simple_loss=0.252, pruned_loss=0.07456, over 4775.00 frames. ], tot_loss[loss=0.2341, simple_loss=0.2873, pruned_loss=0.09047, over 956749.11 frames. ], batch size: 26, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:21:24,962 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9574, 1.2631, 0.7666, 1.6927, 2.1516, 1.4317, 1.4989, 1.8630], device='cuda:2'), covar=tensor([0.1415, 0.2149, 0.2459, 0.1274, 0.1931, 0.2059, 0.1455, 0.1920], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0099, 0.0118, 0.0094, 0.0125, 0.0098, 0.0101, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 02:21:26,175 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=16967.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 02:21:43,861 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8029, 1.7191, 1.4352, 1.8971, 1.8483, 1.5332, 2.2714, 1.8526], device='cuda:2'), covar=tensor([0.2118, 0.4124, 0.4469, 0.4235, 0.3528, 0.2342, 0.3960, 0.2895], device='cuda:2'), in_proj_covar=tensor([0.0165, 0.0195, 0.0239, 0.0254, 0.0221, 0.0186, 0.0209, 0.0189], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 02:22:07,776 INFO [finetune.py:976] (2/7) Epoch 3, batch 5550, loss[loss=0.2723, simple_loss=0.3185, pruned_loss=0.1131, over 4829.00 frames. ], tot_loss[loss=0.2368, simple_loss=0.2896, pruned_loss=0.09198, over 956135.59 frames. 
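Batch sizes in these entries swing from 17-18 cuts up to 66, while the per-batch frame counts stay in the same few-thousand range. That is the expected behaviour of the dynamic bucketing sampler used for this run: batches are filled up to a fixed max_duration of audio, so a batch of long utterances holds few cuts and a batch of short ones holds many. A sketch of the construction, assuming a hypothetical lhotse CutSet named train_cuts and the max_duration=200 setting configured for this run:

    from lhotse.dataset import DynamicBucketingSampler

    # train_cuts is a hypothetical lhotse CutSet for the training data.
    sampler = DynamicBucketingSampler(
        train_cuts,
        max_duration=200.0,   # seconds of audio per batch, per the run config
        num_buckets=30,
        shuffle=True,
        drop_last=True,
    )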
], batch size: 33, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:22:13,278 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2591, 1.8675, 2.1198, 1.1220, 2.4027, 2.5459, 2.0705, 2.0583], device='cuda:2'), covar=tensor([0.1175, 0.1056, 0.0556, 0.0890, 0.0568, 0.0541, 0.0527, 0.0623], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0157, 0.0117, 0.0137, 0.0132, 0.0121, 0.0146, 0.0144], device='cuda:2'), out_proj_covar=tensor([9.7934e-05, 1.1711e-04, 8.5342e-05, 1.0049e-04, 9.5880e-05, 8.9726e-05, 1.0894e-04, 1.0696e-04], device='cuda:2') 2023-03-26 02:22:15,686 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.144e+02 1.712e+02 2.015e+02 2.380e+02 4.122e+02, threshold=4.030e+02, percent-clipped=1.0 2023-03-26 02:22:21,768 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=17027.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:22:22,554 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-26 02:22:41,565 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.79 vs. limit=5.0 2023-03-26 02:22:53,364 INFO [finetune.py:976] (2/7) Epoch 3, batch 5600, loss[loss=0.2539, simple_loss=0.3087, pruned_loss=0.09956, over 4739.00 frames. ], tot_loss[loss=0.2393, simple_loss=0.2933, pruned_loss=0.09263, over 957716.98 frames. ], batch size: 27, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:23:10,679 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=17075.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:23:11,397 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.04 vs. limit=5.0 2023-03-26 02:23:26,472 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0 2023-03-26 02:23:38,817 INFO [finetune.py:976] (2/7) Epoch 3, batch 5650, loss[loss=0.2238, simple_loss=0.2888, pruned_loss=0.07939, over 4890.00 frames. ], tot_loss[loss=0.2428, simple_loss=0.2977, pruned_loss=0.09399, over 958200.83 frames. ], batch size: 32, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:23:45,803 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.238e+02 1.755e+02 2.152e+02 2.681e+02 4.789e+02, threshold=4.305e+02, percent-clipped=1.0 2023-03-26 02:24:15,373 INFO [finetune.py:976] (2/7) Epoch 3, batch 5700, loss[loss=0.2068, simple_loss=0.2555, pruned_loss=0.07903, over 4197.00 frames. ], tot_loss[loss=0.2407, simple_loss=0.2938, pruned_loss=0.09386, over 937874.39 frames. ], batch size: 18, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:24:56,867 INFO [finetune.py:976] (2/7) Epoch 4, batch 0, loss[loss=0.2819, simple_loss=0.3287, pruned_loss=0.1176, over 4867.00 frames. ], tot_loss[loss=0.2819, simple_loss=0.3287, pruned_loss=0.1176, over 4867.00 frames. 
], batch size: 34, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:24:56,868 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 02:25:04,988 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.2162, 1.3491, 1.4085, 0.8151, 1.2644, 1.5758, 1.5926, 1.3642], device='cuda:2'), covar=tensor([0.1303, 0.0728, 0.0544, 0.0611, 0.0514, 0.0669, 0.0467, 0.0738], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0157, 0.0117, 0.0136, 0.0132, 0.0121, 0.0146, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.8181e-05, 1.1719e-04, 8.5409e-05, 1.0017e-04, 9.5355e-05, 9.0035e-05, 1.0877e-04, 1.0736e-04], device='cuda:2') 2023-03-26 02:25:18,215 INFO [finetune.py:1010] (2/7) Epoch 4, validation: loss=0.1768, simple_loss=0.2473, pruned_loss=0.0532, over 2265189.00 frames. 2023-03-26 02:25:18,216 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB 2023-03-26 02:25:22,722 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=17189.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:25:55,800 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.188e+02 1.713e+02 2.128e+02 2.708e+02 4.853e+02, threshold=4.257e+02, percent-clipped=3.0 2023-03-26 02:26:00,041 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=17224.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 02:26:03,141 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2312, 1.2516, 1.5192, 1.0985, 1.3435, 1.3513, 1.2413, 1.5694], device='cuda:2'), covar=tensor([0.1488, 0.2471, 0.1397, 0.1576, 0.0971, 0.1430, 0.3176, 0.0956], device='cuda:2'), in_proj_covar=tensor([0.0208, 0.0209, 0.0206, 0.0200, 0.0185, 0.0228, 0.0218, 0.0206], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 02:26:05,891 INFO [finetune.py:976] (2/7) Epoch 4, batch 50, loss[loss=0.243, simple_loss=0.2995, pruned_loss=0.09321, over 4818.00 frames. ], tot_loss[loss=0.2404, simple_loss=0.2954, pruned_loss=0.09272, over 217340.66 frames. ], batch size: 30, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:26:06,651 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2850, 2.5484, 2.1524, 1.7968, 2.5291, 2.6864, 2.7507, 2.0938], device='cuda:2'), covar=tensor([0.0782, 0.0587, 0.0964, 0.1050, 0.0785, 0.0729, 0.0612, 0.1103], device='cuda:2'), in_proj_covar=tensor([0.0138, 0.0132, 0.0143, 0.0129, 0.0110, 0.0142, 0.0146, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 02:26:29,251 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=17250.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:26:36,461 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=17262.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 02:26:55,371 INFO [finetune.py:976] (2/7) Epoch 4, batch 100, loss[loss=0.194, simple_loss=0.2521, pruned_loss=0.06795, over 4898.00 frames. ], tot_loss[loss=0.237, simple_loss=0.29, pruned_loss=0.09202, over 380701.95 frames. 
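Every validation line reports the same 2265189.00 frames, i.e. the loss is aggregated over the entire dev set each time (here 0.1768 for epoch 4 against 0.1777 at the previous validation, a small but real improvement). A sketch of that aggregation, assuming a frames-weighted average and a hypothetical compute_loss helper:

    import torch

    def validation_loss(model, dev_loader) -> float:
        # Frames-weighted average over the whole dev set, which is why the
        # logged frame count (2265189.00) is identical at every validation.
        model.eval()
        tot, frames = 0.0, 0.0
        with torch.no_grad():
            for batch in dev_loader:
                loss, num_frames = compute_loss(model, batch)  # hypothetical helper
                tot += loss * num_frames
                frames += num_frames
        return tot / frames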
], batch size: 35, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:27:20,322 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7155, 3.7247, 3.6286, 1.7788, 3.9225, 2.9908, 0.7701, 2.7641], device='cuda:2'), covar=tensor([0.2726, 0.1696, 0.1423, 0.3344, 0.0849, 0.0949, 0.4472, 0.1388], device='cuda:2'), in_proj_covar=tensor([0.0154, 0.0168, 0.0162, 0.0128, 0.0154, 0.0120, 0.0146, 0.0121], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 02:27:26,862 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.228e+02 1.697e+02 1.982e+02 2.273e+02 3.827e+02, threshold=3.964e+02, percent-clipped=0.0 2023-03-26 02:27:36,640 INFO [finetune.py:976] (2/7) Epoch 4, batch 150, loss[loss=0.2081, simple_loss=0.2677, pruned_loss=0.0743, over 4816.00 frames. ], tot_loss[loss=0.2308, simple_loss=0.2838, pruned_loss=0.08888, over 509695.73 frames. ], batch size: 41, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:27:45,968 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.6588, 1.4576, 1.4756, 0.9392, 1.5069, 1.7314, 1.6911, 1.4508], device='cuda:2'), covar=tensor([0.0845, 0.0685, 0.0516, 0.0560, 0.0510, 0.0409, 0.0343, 0.0558], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0158, 0.0117, 0.0137, 0.0132, 0.0121, 0.0146, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.8255e-05, 1.1723e-04, 8.5224e-05, 1.0054e-04, 9.5736e-05, 8.9709e-05, 1.0911e-04, 1.0776e-04], device='cuda:2') 2023-03-26 02:28:01,920 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=17362.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:28:05,879 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9888, 1.4887, 1.7548, 1.7995, 1.5104, 1.5954, 1.7199, 1.6037], device='cuda:2'), covar=tensor([0.8856, 1.3703, 1.0033, 1.2844, 1.3959, 0.9755, 1.5600, 0.9724], device='cuda:2'), in_proj_covar=tensor([0.0230, 0.0255, 0.0257, 0.0267, 0.0245, 0.0220, 0.0280, 0.0222], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2') 2023-03-26 02:28:22,983 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4989, 1.4842, 1.5813, 1.0289, 1.5465, 1.7723, 1.7202, 1.4468], device='cuda:2'), covar=tensor([0.0963, 0.0597, 0.0421, 0.0544, 0.0407, 0.0452, 0.0362, 0.0616], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0158, 0.0117, 0.0137, 0.0132, 0.0121, 0.0146, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.8306e-05, 1.1727e-04, 8.5066e-05, 1.0044e-04, 9.5887e-05, 8.9706e-05, 1.0908e-04, 1.0778e-04], device='cuda:2') 2023-03-26 02:28:23,538 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.0376, 3.5184, 3.6832, 3.8954, 3.7570, 3.6241, 4.1456, 1.4167], device='cuda:2'), covar=tensor([0.0876, 0.0878, 0.0837, 0.1118, 0.1342, 0.1548, 0.0785, 0.5187], device='cuda:2'), in_proj_covar=tensor([0.0360, 0.0243, 0.0275, 0.0291, 0.0340, 0.0284, 0.0308, 0.0299], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 02:28:25,906 INFO [finetune.py:976] (2/7) Epoch 4, batch 200, loss[loss=0.2851, simple_loss=0.3357, pruned_loss=0.1173, over 4852.00 frames. ], tot_loss[loss=0.233, simple_loss=0.2851, pruned_loss=0.09043, over 610430.34 frames. 
], batch size: 44, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:28:26,012 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=17383.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 02:28:55,774 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.314e+02 1.771e+02 2.098e+02 2.514e+02 4.657e+02, threshold=4.195e+02, percent-clipped=1.0 2023-03-26 02:29:01,227 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=17423.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:29:09,065 INFO [finetune.py:976] (2/7) Epoch 4, batch 250, loss[loss=0.2276, simple_loss=0.2835, pruned_loss=0.08585, over 4755.00 frames. ], tot_loss[loss=0.2347, simple_loss=0.2883, pruned_loss=0.09055, over 687141.97 frames. ], batch size: 27, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:29:17,851 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=17444.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 02:29:47,577 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.40 vs. limit=5.0 2023-03-26 02:29:49,001 INFO [finetune.py:976] (2/7) Epoch 4, batch 300, loss[loss=0.2162, simple_loss=0.2779, pruned_loss=0.0772, over 4782.00 frames. ], tot_loss[loss=0.237, simple_loss=0.2914, pruned_loss=0.09131, over 748284.78 frames. ], batch size: 28, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:29:55,485 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5842, 1.4034, 1.3779, 1.2413, 1.6856, 1.3837, 1.7515, 1.5339], device='cuda:2'), covar=tensor([0.2025, 0.3807, 0.4591, 0.3779, 0.3297, 0.2228, 0.3559, 0.2952], device='cuda:2'), in_proj_covar=tensor([0.0165, 0.0195, 0.0238, 0.0254, 0.0222, 0.0186, 0.0209, 0.0189], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 02:30:24,389 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.78 vs. limit=2.0 2023-03-26 02:30:35,038 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.302e+02 1.964e+02 2.271e+02 2.699e+02 6.272e+02, threshold=4.542e+02, percent-clipped=2.0 2023-03-26 02:30:39,314 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=17524.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 02:30:44,606 INFO [finetune.py:976] (2/7) Epoch 4, batch 350, loss[loss=0.2568, simple_loss=0.3046, pruned_loss=0.1045, over 4823.00 frames. ], tot_loss[loss=0.2413, simple_loss=0.2952, pruned_loss=0.09367, over 794724.86 frames. 
2023-03-26 02:30:44,606 INFO [finetune.py:976] (2/7) Epoch 4, batch 350, loss[loss=0.2568, simple_loss=0.3046, pruned_loss=0.1045, over 4823.00 frames. ], tot_loss[loss=0.2413, simple_loss=0.2952, pruned_loss=0.09367, over 794724.86 frames. ], batch size: 33, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:30:53,140 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=17545.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:30:54,400 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0109, 2.3206, 2.0160, 1.4959, 2.5956, 2.4485, 2.4327, 1.9063], device='cuda:2'), covar=tensor([0.0983, 0.0760, 0.1065, 0.1335, 0.0639, 0.0933, 0.0965, 0.1558], device='cuda:2'), in_proj_covar=tensor([0.0139, 0.0134, 0.0145, 0.0130, 0.0112, 0.0143, 0.0148, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 02:31:05,363 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.2678, 3.7029, 3.8353, 4.0246, 3.9861, 3.7741, 4.3369, 1.4661], device='cuda:2'), covar=tensor([0.0738, 0.0778, 0.0752, 0.0961, 0.1240, 0.1219, 0.0620, 0.4773], device='cuda:2'), in_proj_covar=tensor([0.0361, 0.0244, 0.0276, 0.0293, 0.0342, 0.0285, 0.0309, 0.0300], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 02:31:08,032 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0
2023-03-26 02:31:10,207 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=17562.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 02:31:16,311 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=17572.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 02:31:23,350 INFO [finetune.py:976] (2/7) Epoch 4, batch 400, loss[loss=0.2406, simple_loss=0.3013, pruned_loss=0.08992, over 4844.00 frames. ], tot_loss[loss=0.2407, simple_loss=0.2955, pruned_loss=0.09291, over 832358.96 frames. ], batch size: 49, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:31:46,315 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=17610.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:31:51,079 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.188e+02 1.790e+02 1.987e+02 2.567e+02 5.687e+02, threshold=3.975e+02, percent-clipped=1.0
2023-03-26 02:32:10,369 INFO [finetune.py:976] (2/7) Epoch 4, batch 450, loss[loss=0.2497, simple_loss=0.2993, pruned_loss=0.09999, over 4824.00 frames. ], tot_loss[loss=0.24, simple_loss=0.2951, pruned_loss=0.09248, over 860982.46 frames. ], batch size: 39, lr: 3.97e-03, grad_scale: 32.0
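The [zipformer.py:1188] lines record per-stack stochastic layer dropout: each encoder stack has its own warm-up window (warmup_begin/warmup_end, in batches; the five distinct windows above identify the different stacks), and on each batch a small random subset of its layers may be skipped entirely, with num_to_drop = len(layers_to_drop). Most batches drop nothing; occasionally a single layer is dropped. A sketch of such a decision rule; the probabilities and the way they taper over the warm-up window are assumptions, only the logged fields are real:

    import random

    def pick_layers_to_drop(num_layers: int, batch_count: float,
                            warmup_begin: float, warmup_end: float,
                            base_prob: float = 0.075) -> set:
        """Decide which whole layers of one encoder stack to skip this batch
        (stochastic depth).  Schedule and rates are assumed, not icefall's."""
        if batch_count < warmup_begin:
            prob = 2.0 * base_prob            # assumed: drop more while warming up
        elif batch_count < warmup_end:
            frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
            prob = base_prob * (2.0 - frac)   # taper towards the base rate
        else:
            prob = base_prob                  # long after warm-up, as in this log
        layers_to_drop = {i for i in range(num_layers) if random.random() < prob}
        return layers_to_drop                 # usually set(), occasionally {0} etc.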
2023-03-26 02:33:00,227 INFO [finetune.py:976] (2/7) Epoch 4, batch 500, loss[loss=0.2088, simple_loss=0.256, pruned_loss=0.08073, over 4762.00 frames. ], tot_loss[loss=0.236, simple_loss=0.2908, pruned_loss=0.09056, over 883571.73 frames. ], batch size: 27, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:33:10,993 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=17700.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:33:24,348 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.264e+02 1.808e+02 2.091e+02 2.485e+02 4.480e+02, threshold=4.181e+02, percent-clipped=1.0
2023-03-26 02:33:24,434 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=17718.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:33:26,387 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5300, 1.3962, 1.3149, 1.5580, 1.6531, 1.4864, 0.8844, 1.2999], device='cuda:2'), covar=tensor([0.2365, 0.2228, 0.2018, 0.1866, 0.1725, 0.1316, 0.3051, 0.1955], device='cuda:2'), in_proj_covar=tensor([0.0231, 0.0207, 0.0197, 0.0182, 0.0231, 0.0172, 0.0213, 0.0186], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 02:33:26,985 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6787, 1.7832, 1.7933, 0.9814, 1.9691, 1.8791, 1.8426, 1.6040], device='cuda:2'), covar=tensor([0.0737, 0.0692, 0.0726, 0.1088, 0.0540, 0.0730, 0.0703, 0.1204], device='cuda:2'), in_proj_covar=tensor([0.0139, 0.0133, 0.0144, 0.0128, 0.0110, 0.0141, 0.0147, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 02:33:31,285 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7733, 1.7215, 2.2959, 1.4383, 1.9193, 2.0786, 1.6715, 2.4474], device='cuda:2'), covar=tensor([0.1963, 0.2186, 0.1801, 0.2430, 0.1180, 0.1957, 0.2747, 0.1074], device='cuda:2'), in_proj_covar=tensor([0.0207, 0.0207, 0.0205, 0.0198, 0.0183, 0.0226, 0.0217, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 02:33:31,874 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=17730.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:33:34,062 INFO [finetune.py:976] (2/7) Epoch 4, batch 550, loss[loss=0.2227, simple_loss=0.2651, pruned_loss=0.09016, over 4828.00 frames. ], tot_loss[loss=0.2331, simple_loss=0.2871, pruned_loss=0.08954, over 901586.56 frames. ], batch size: 40, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:33:37,769 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=17739.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 02:34:03,326 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6808, 1.6052, 2.0312, 1.3434, 1.6701, 1.9190, 1.5946, 2.2305], device='cuda:2'), covar=tensor([0.1819, 0.2475, 0.1682, 0.2263, 0.1178, 0.1875, 0.3014, 0.1110], device='cuda:2'), in_proj_covar=tensor([0.0208, 0.0208, 0.0205, 0.0198, 0.0184, 0.0227, 0.0217, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 02:34:03,943 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=17761.0, num_to_drop=0, layers_to_drop=set()
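The [zipformer.py:2441] dumps are periodic attention diagnostics: one entropy value per attention head (hence the 8-element tensors, one entry per head), together with covariance summaries of the attention projections. Low entropy means a head attends to few positions; high entropy means diffuse attention. A sketch of the per-head entropy; the exact reduction used by the zipformer diagnostics may differ:

    import torch

    def attention_entropy(attn_weights: torch.Tensor) -> torch.Tensor:
        """Mean entropy of the attention distribution for each head.
        attn_weights: (num_heads, tgt_len, src_len), each row summing to 1.
        Returns one value per head, like the 8-element tensors in the log."""
        eps = 1e-20
        ent = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)  # (heads, tgt)
        return ent.mean(dim=-1)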
2023-03-26 02:34:17,628 INFO [finetune.py:976] (2/7) Epoch 4, batch 600, loss[loss=0.2518, simple_loss=0.2945, pruned_loss=0.1045, over 4871.00 frames. ], tot_loss[loss=0.2323, simple_loss=0.2865, pruned_loss=0.08906, over 913763.02 frames. ], batch size: 31, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:34:22,556 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=17791.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:34:41,399 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.036e+02 1.729e+02 2.101e+02 2.480e+02 7.519e+02, threshold=4.202e+02, percent-clipped=3.0
2023-03-26 02:34:50,417 INFO [finetune.py:976] (2/7) Epoch 4, batch 650, loss[loss=0.2185, simple_loss=0.2901, pruned_loss=0.07347, over 4922.00 frames. ], tot_loss[loss=0.2363, simple_loss=0.2907, pruned_loss=0.09094, over 921358.32 frames. ], batch size: 38, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:34:59,963 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=17845.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:35:31,919 INFO [finetune.py:976] (2/7) Epoch 4, batch 700, loss[loss=0.256, simple_loss=0.3143, pruned_loss=0.09888, over 4907.00 frames. ], tot_loss[loss=0.2387, simple_loss=0.2934, pruned_loss=0.09197, over 929921.01 frames. ], batch size: 37, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:35:46,368 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=17893.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:36:18,030 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.240e+02 1.804e+02 2.027e+02 2.461e+02 4.855e+02, threshold=4.055e+02, percent-clipped=2.0
2023-03-26 02:36:34,755 INFO [finetune.py:976] (2/7) Epoch 4, batch 750, loss[loss=0.2371, simple_loss=0.2982, pruned_loss=0.08806, over 4842.00 frames. ], tot_loss[loss=0.2411, simple_loss=0.2957, pruned_loss=0.09323, over 936003.78 frames. ], batch size: 44, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:36:37,361 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1818, 1.2250, 1.3411, 0.6374, 1.0904, 1.4817, 1.5080, 1.2647], device='cuda:2'), covar=tensor([0.0870, 0.0491, 0.0402, 0.0562, 0.0442, 0.0478, 0.0295, 0.0546], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0158, 0.0117, 0.0137, 0.0133, 0.0122, 0.0146, 0.0144], device='cuda:2'), out_proj_covar=tensor([9.8053e-05, 1.1720e-04, 8.4747e-05, 1.0066e-04, 9.6161e-05, 9.0447e-05, 1.0904e-04, 1.0715e-04], device='cuda:2')
2023-03-26 02:36:59,865 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.95 vs. limit=5.0
2023-03-26 02:37:08,570 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=17960.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:37:25,637 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0
2023-03-26 02:37:30,393 INFO [finetune.py:976] (2/7) Epoch 4, batch 800, loss[loss=0.27, simple_loss=0.3127, pruned_loss=0.1136, over 4807.00 frames. ], tot_loss[loss=0.2393, simple_loss=0.2945, pruned_loss=0.09205, over 938789.18 frames. ], batch size: 45, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:38:05,563 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.155e+02 1.784e+02 2.191e+02 2.808e+02 5.190e+02, threshold=4.382e+02, percent-clipped=3.0
2023-03-26 02:38:05,685 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=18018.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:38:05,895 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.60 vs. limit=2.0
2023-03-26 02:38:08,560 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=18021.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:38:20,834 INFO [finetune.py:976] (2/7) Epoch 4, batch 850, loss[loss=0.2054, simple_loss=0.2725, pruned_loss=0.06913, over 4838.00 frames. ], tot_loss[loss=0.2371, simple_loss=0.2918, pruned_loss=0.09124, over 943834.78 frames. ], batch size: 44, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:38:26,576 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=18039.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 02:38:39,174 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=18056.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:38:45,706 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=18066.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:38:48,061 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1176, 1.7983, 2.4046, 1.4750, 2.0190, 2.2744, 1.7143, 2.4504], device='cuda:2'), covar=tensor([0.1634, 0.2201, 0.1667, 0.2420, 0.1185, 0.1807, 0.2703, 0.1304], device='cuda:2'), in_proj_covar=tensor([0.0207, 0.0207, 0.0204, 0.0199, 0.0183, 0.0227, 0.0217, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 02:38:52,249 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.56 vs. limit=2.0
2023-03-26 02:38:57,814 INFO [finetune.py:976] (2/7) Epoch 4, batch 900, loss[loss=0.1956, simple_loss=0.2528, pruned_loss=0.06916, over 4910.00 frames. ], tot_loss[loss=0.2335, simple_loss=0.2878, pruned_loss=0.08964, over 945420.56 frames. ], batch size: 43, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:38:59,681 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=18086.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:39:00,270 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=18087.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 02:39:19,669 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.218e+02 1.720e+02 1.921e+02 2.312e+02 4.297e+02, threshold=3.842e+02, percent-clipped=0.0
2023-03-26 02:39:35,995 INFO [finetune.py:976] (2/7) Epoch 4, batch 950, loss[loss=0.1862, simple_loss=0.2441, pruned_loss=0.06413, over 4899.00 frames. ], tot_loss[loss=0.2323, simple_loss=0.2863, pruned_loss=0.08917, over 949599.03 frames. ], batch size: 32, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:39:46,386 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0
2023-03-26 02:40:16,965 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9637, 1.4283, 1.6746, 1.6833, 1.5373, 1.5318, 1.6185, 1.6405], device='cuda:2'), covar=tensor([0.8795, 1.3597, 0.9744, 1.1672, 1.3002, 0.9767, 1.5857, 0.9293], device='cuda:2'), in_proj_covar=tensor([0.0227, 0.0252, 0.0254, 0.0263, 0.0241, 0.0216, 0.0277, 0.0219], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2')
2023-03-26 02:40:21,704 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7666, 1.9803, 1.9200, 1.1545, 2.0990, 2.0666, 1.8963, 1.7193], device='cuda:2'), covar=tensor([0.0710, 0.0643, 0.0799, 0.1144, 0.0541, 0.0767, 0.0753, 0.1096], device='cuda:2'), in_proj_covar=tensor([0.0138, 0.0131, 0.0143, 0.0128, 0.0109, 0.0141, 0.0146, 0.0160], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 02:40:22,962 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=18174.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:40:24,040 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=18175.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:40:29,235 INFO [finetune.py:976] (2/7) Epoch 4, batch 1000, loss[loss=0.2407, simple_loss=0.3013, pruned_loss=0.09003, over 4931.00 frames. ], tot_loss[loss=0.2343, simple_loss=0.2886, pruned_loss=0.09, over 950720.37 frames. ], batch size: 33, lr: 3.97e-03, grad_scale: 64.0
2023-03-26 02:41:07,306 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.077e+02 1.690e+02 2.099e+02 2.471e+02 3.966e+02, threshold=4.198e+02, percent-clipped=1.0
2023-03-26 02:41:28,636 INFO [finetune.py:976] (2/7) Epoch 4, batch 1050, loss[loss=0.2452, simple_loss=0.293, pruned_loss=0.09869, over 4831.00 frames. ], tot_loss[loss=0.2363, simple_loss=0.2916, pruned_loss=0.0905, over 952513.02 frames. ], batch size: 30, lr: 3.97e-03, grad_scale: 64.0
2023-03-26 02:41:29,968 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=18235.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:41:30,589 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=18236.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:41:52,682 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6907, 4.5612, 4.3925, 2.2065, 4.6766, 3.4759, 0.7449, 3.3147], device='cuda:2'), covar=tensor([0.2344, 0.1719, 0.1159, 0.3152, 0.0801, 0.0899, 0.4614, 0.1248], device='cuda:2'), in_proj_covar=tensor([0.0155, 0.0169, 0.0163, 0.0129, 0.0155, 0.0122, 0.0145, 0.0121], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
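The grad_scale field in the loss lines is the fp16 loss-scaling factor: it doubles after a long enough run of overflow-free steps (32.0 -> 64.0 at batch 1000 above) and is halved when a step produces inf/nan gradients (it is back to 32.0 by batch 1350 below). This is standard dynamic loss scaling; a minimal training-step sketch with torch.cuda.amp, where the model, optimizer, and growth_interval are placeholders (it requires a CUDA device):

    import torch

    model = torch.nn.Linear(80, 500).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=4e-3)
    # init_scale mirrors the grad_scale above; growth/backoff factors of
    # 2.0/0.5 match the observed jumps 32 -> 64 and 64 -> 32.
    scaler = torch.cuda.amp.GradScaler(init_scale=32.0, growth_factor=2.0,
                                       backoff_factor=0.5, growth_interval=1000)
    for step in range(100):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            loss = model(torch.randn(8, 80, device='cuda')).pow(2).mean()
        scaler.scale(loss).backward()  # backward pass on the scaled loss
        scaler.step(optimizer)         # unscales grads; skips the step on inf/nan
        scaler.update()                # grows the scale after enough good steps,
                                       # halves it after an overflow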
2023-03-26 02:42:18,623 INFO [finetune.py:976] (2/7) Epoch 4, batch 1100, loss[loss=0.2161, simple_loss=0.2841, pruned_loss=0.074, over 4924.00 frames. ], tot_loss[loss=0.2376, simple_loss=0.2931, pruned_loss=0.09103, over 953413.91 frames. ], batch size: 29, lr: 3.97e-03, grad_scale: 64.0
2023-03-26 02:42:24,116 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5439, 1.5236, 1.4577, 1.5571, 1.1047, 3.3970, 1.2203, 1.6481], device='cuda:2'), covar=tensor([0.3902, 0.2510, 0.2281, 0.2475, 0.2092, 0.0197, 0.2697, 0.1497], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0111, 0.0116, 0.0119, 0.0116, 0.0096, 0.0100, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:2')
2023-03-26 02:42:36,346 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0
2023-03-26 02:42:49,582 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=18316.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:42:50,749 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.170e+02 1.851e+02 2.220e+02 2.754e+02 4.687e+02, threshold=4.440e+02, percent-clipped=1.0
2023-03-26 02:43:08,649 INFO [finetune.py:976] (2/7) Epoch 4, batch 1150, loss[loss=0.2484, simple_loss=0.2875, pruned_loss=0.1046, over 4268.00 frames. ], tot_loss[loss=0.2372, simple_loss=0.2928, pruned_loss=0.09078, over 951847.97 frames. ], batch size: 66, lr: 3.97e-03, grad_scale: 64.0
2023-03-26 02:43:21,099 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7781, 1.5747, 1.2624, 1.3670, 1.9008, 2.0218, 1.6779, 1.4158], device='cuda:2'), covar=tensor([0.0221, 0.0399, 0.0764, 0.0406, 0.0226, 0.0351, 0.0329, 0.0451], device='cuda:2'), in_proj_covar=tensor([0.0084, 0.0113, 0.0138, 0.0116, 0.0104, 0.0099, 0.0091, 0.0109], device='cuda:2'), out_proj_covar=tensor([6.5562e-05, 8.9398e-05, 1.1073e-04, 9.1939e-05, 8.2369e-05, 7.3644e-05, 7.0333e-05, 8.5482e-05], device='cuda:2')
2023-03-26 02:43:25,343 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=18356.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:44:03,370 INFO [finetune.py:976] (2/7) Epoch 4, batch 1200, loss[loss=0.2227, simple_loss=0.2763, pruned_loss=0.08451, over 4858.00 frames. ], tot_loss[loss=0.2353, simple_loss=0.2907, pruned_loss=0.08997, over 953930.75 frames. ], batch size: 31, lr: 3.97e-03, grad_scale: 64.0
2023-03-26 02:44:05,829 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=18386.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:44:28,301 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=18404.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:44:47,472 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.754e+02 2.082e+02 2.492e+02 3.668e+02, threshold=4.164e+02, percent-clipped=0.0
2023-03-26 02:45:07,911 INFO [finetune.py:976] (2/7) Epoch 4, batch 1250, loss[loss=0.2551, simple_loss=0.3061, pruned_loss=0.1021, over 4824.00 frames. ], tot_loss[loss=0.2322, simple_loss=0.287, pruned_loss=0.08868, over 953712.77 frames. ], batch size: 40, lr: 3.97e-03, grad_scale: 64.0
2023-03-26 02:45:09,083 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=18434.0, num_to_drop=0, layers_to_drop=set()
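In the [finetune.py:976] lines, loss[...] is the current batch and tot_loss[...] a running average whose frame count ("over N frames") climbs from ~5.1e5 and then plateaus around 9.5e5. With roughly 4.8e3 frames per batch, that behaviour is consistent with an exponentially decayed sum of per-batch statistics with a decay near 0.995 (steady state ~ 4.8e3 / 0.005 ~ 9.6e5 frames). A sketch of that bookkeeping; the decay constant is inferred from the log, and icefall's MetricsTracker may differ in detail:

    def update_tot_loss(tot_frames: float, tot_loss_sum: float,
                        batch_frames: float, batch_loss: float,
                        decay: float = 0.995):
        """One update of the running tot_loss[...] statistics: each batch
        contributes loss * frames, older batches decay geometrically, and
        the printed loss is the ratio of the two running sums."""
        tot_frames = decay * tot_frames + batch_frames
        tot_loss_sum = decay * tot_loss_sum + batch_loss * batch_frames
        return tot_frames, tot_loss_sum, tot_loss_sum / tot_frames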
2023-03-26 02:45:59,525 INFO [finetune.py:976] (2/7) Epoch 4, batch 1300, loss[loss=0.2146, simple_loss=0.2728, pruned_loss=0.07818, over 4824.00 frames. ], tot_loss[loss=0.229, simple_loss=0.2835, pruned_loss=0.08727, over 954586.90 frames. ], batch size: 38, lr: 3.97e-03, grad_scale: 64.0
2023-03-26 02:46:10,685 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=18490.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:46:22,661 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=18508.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 02:46:26,896 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4272, 1.3141, 1.3629, 1.3440, 0.8634, 2.2660, 0.7168, 1.2284], device='cuda:2'), covar=tensor([0.3537, 0.2385, 0.2127, 0.2449, 0.2015, 0.0352, 0.2743, 0.1457], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0112, 0.0116, 0.0120, 0.0116, 0.0097, 0.0101, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:2')
2023-03-26 02:46:29,709 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.069e+02 1.658e+02 2.008e+02 2.632e+02 4.281e+02, threshold=4.017e+02, percent-clipped=1.0
2023-03-26 02:46:34,687 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-26 02:46:36,931 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=18530.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:46:37,493 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=18531.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:46:41,946 INFO [finetune.py:976] (2/7) Epoch 4, batch 1350, loss[loss=0.2537, simple_loss=0.3084, pruned_loss=0.09944, over 4915.00 frames. ], tot_loss[loss=0.2315, simple_loss=0.2858, pruned_loss=0.08858, over 954288.02 frames. ], batch size: 36, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:46:48,645 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1543, 2.3012, 2.1987, 1.4806, 2.4810, 2.6198, 2.3059, 2.0296], device='cuda:2'), covar=tensor([0.0735, 0.0590, 0.0831, 0.1147, 0.0464, 0.0659, 0.0741, 0.0996], device='cuda:2'), in_proj_covar=tensor([0.0139, 0.0133, 0.0144, 0.0129, 0.0110, 0.0141, 0.0146, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 02:46:51,562 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6247, 1.4951, 1.2563, 1.3755, 1.7817, 1.7606, 1.5188, 1.2365], device='cuda:2'), covar=tensor([0.0259, 0.0343, 0.0619, 0.0336, 0.0218, 0.0335, 0.0323, 0.0447], device='cuda:2'), in_proj_covar=tensor([0.0084, 0.0113, 0.0137, 0.0117, 0.0104, 0.0099, 0.0091, 0.0109], device='cuda:2'), out_proj_covar=tensor([6.5586e-05, 8.9462e-05, 1.1065e-04, 9.2285e-05, 8.2561e-05, 7.3786e-05, 7.0280e-05, 8.5540e-05], device='cuda:2')
2023-03-26 02:46:52,136 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6768, 1.5266, 1.4033, 1.7421, 2.1232, 1.7309, 1.2145, 1.3363], device='cuda:2'), covar=tensor([0.2470, 0.2464, 0.2080, 0.1858, 0.2085, 0.1274, 0.3021, 0.2000], device='cuda:2'), in_proj_covar=tensor([0.0230, 0.0208, 0.0196, 0.0182, 0.0232, 0.0172, 0.0212, 0.0186], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 02:46:55,225 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=18551.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:47:11,776 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=18569.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 02:47:20,050 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7016, 1.4336, 1.3175, 1.0698, 1.4840, 1.4475, 1.4178, 2.0753], device='cuda:2'), covar=tensor([0.9394, 0.9123, 0.7256, 0.8734, 0.7073, 0.4863, 0.8200, 0.3285], device='cuda:2'), in_proj_covar=tensor([0.0276, 0.0251, 0.0219, 0.0283, 0.0235, 0.0196, 0.0239, 0.0188], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2')
2023-03-26 02:47:25,314 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4435, 2.0497, 1.8533, 1.0393, 2.0690, 1.8775, 1.4247, 1.8867], device='cuda:2'), covar=tensor([0.0891, 0.1150, 0.1801, 0.2521, 0.1750, 0.2632, 0.2749, 0.1383], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0201, 0.0204, 0.0191, 0.0219, 0.0211, 0.0220, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 02:47:25,815 INFO [finetune.py:976] (2/7) Epoch 4, batch 1400, loss[loss=0.2642, simple_loss=0.3228, pruned_loss=0.1028, over 4803.00 frames. ], tot_loss[loss=0.2369, simple_loss=0.2914, pruned_loss=0.09119, over 953954.35 frames. ], batch size: 45, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:47:35,077 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0121, 1.9069, 2.0243, 1.4512, 2.2970, 2.3741, 2.1434, 1.5151], device='cuda:2'), covar=tensor([0.0781, 0.0831, 0.0930, 0.1195, 0.0523, 0.0682, 0.0795, 0.1744], device='cuda:2'), in_proj_covar=tensor([0.0139, 0.0133, 0.0145, 0.0129, 0.0111, 0.0142, 0.0147, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 02:47:53,783 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=18616.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:47:55,466 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.253e+02 1.863e+02 2.207e+02 2.712e+02 5.337e+02, threshold=4.415e+02, percent-clipped=2.0
2023-03-26 02:47:59,240 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.58 vs. limit=5.0
2023-03-26 02:48:10,034 INFO [finetune.py:976] (2/7) Epoch 4, batch 1450, loss[loss=0.3146, simple_loss=0.3582, pruned_loss=0.1355, over 4805.00 frames. ], tot_loss[loss=0.2383, simple_loss=0.293, pruned_loss=0.09183, over 954262.15 frames. ], batch size: 45, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:48:17,902 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0
2023-03-26 02:48:39,172 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9917, 1.9194, 2.0284, 1.4875, 2.1688, 2.3333, 2.1305, 1.6045], device='cuda:2'), covar=tensor([0.0502, 0.0590, 0.0577, 0.0875, 0.0474, 0.0375, 0.0491, 0.1180], device='cuda:2'), in_proj_covar=tensor([0.0139, 0.0134, 0.0145, 0.0130, 0.0111, 0.0142, 0.0147, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 02:48:43,322 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=18664.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:48:49,426 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=18674.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:48:56,991 INFO [finetune.py:976] (2/7) Epoch 4, batch 1500, loss[loss=0.262, simple_loss=0.3179, pruned_loss=0.1031, over 4924.00 frames. ], tot_loss[loss=0.2415, simple_loss=0.2958, pruned_loss=0.09359, over 951755.23 frames. ], batch size: 33, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 02:49:36,301 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.377e+02 1.810e+02 2.122e+02 2.509e+02 6.153e+02, threshold=4.245e+02, percent-clipped=1.0
2023-03-26 02:49:54,647 INFO [finetune.py:976] (2/7) Epoch 4, batch 1550, loss[loss=0.2768, simple_loss=0.3245, pruned_loss=0.1146, over 4844.00 frames. ], tot_loss[loss=0.2406, simple_loss=0.295, pruned_loss=0.09308, over 951728.40 frames. ], batch size: 47, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 02:49:55,969 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=18735.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:50:35,229 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.05 vs. limit=5.0
2023-03-26 02:50:39,060 INFO [finetune.py:976] (2/7) Epoch 4, batch 1600, loss[loss=0.2302, simple_loss=0.2824, pruned_loss=0.089, over 4721.00 frames. ], tot_loss[loss=0.238, simple_loss=0.2918, pruned_loss=0.09214, over 949770.87 frames. ], batch size: 54, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 02:50:41,682 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.0655, 0.6512, 0.8844, 0.9430, 1.2207, 1.1834, 1.0763, 0.9661], device='cuda:2'), covar=tensor([0.0254, 0.0370, 0.0695, 0.0334, 0.0219, 0.0320, 0.0271, 0.0360], device='cuda:2'), in_proj_covar=tensor([0.0084, 0.0113, 0.0137, 0.0117, 0.0104, 0.0099, 0.0091, 0.0108], device='cuda:2'), out_proj_covar=tensor([6.5715e-05, 8.9203e-05, 1.1055e-04, 9.2567e-05, 8.2122e-05, 7.3727e-05, 7.0007e-05, 8.4773e-05], device='cuda:2')
2023-03-26 02:51:19,526 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.176e+02 1.645e+02 2.011e+02 2.399e+02 5.772e+02, threshold=4.021e+02, percent-clipped=1.0
2023-03-26 02:51:30,318 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=18830.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:51:30,943 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=18831.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:51:32,084 INFO [finetune.py:976] (2/7) Epoch 4, batch 1650, loss[loss=0.197, simple_loss=0.255, pruned_loss=0.0695, over 4930.00 frames. ], tot_loss[loss=0.2352, simple_loss=0.2889, pruned_loss=0.09079, over 953054.98 frames. ], batch size: 33, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 02:51:48,115 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=18846.0, num_to_drop=0, layers_to_drop=set()
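The lr column drifts from 3.97e-03 to 3.96e-03 around global batch_count ~18700 here, which is consistent with a schedule that decays smoothly in both batch and epoch, such as the Eden schedule used by these recipes to my understanding. A sketch of that formula; base_lr = 0.004 and the lr_batches/lr_epochs constants are assumptions for illustration:

    def eden_lr(base_lr: float, batch: int, epoch: int,
                lr_batches: float = 100000.0, lr_epochs: float = 100.0) -> float:
        """Eden-style learning-rate schedule: smooth decay in both the batch
        and epoch dimensions.  With the assumed constants,
        eden_lr(0.004, 18700, 4) ~ 3.96e-03, matching the lr column here."""
        batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return base_lr * batch_factor * epoch_factor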
2023-03-26 02:51:57,202 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0
2023-03-26 02:52:01,104 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.6527, 1.7601, 1.8900, 1.1155, 1.8095, 2.1308, 2.0856, 1.6334], device='cuda:2'), covar=tensor([0.1072, 0.0728, 0.0451, 0.0765, 0.0446, 0.0509, 0.0342, 0.0647], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0156, 0.0117, 0.0137, 0.0132, 0.0121, 0.0146, 0.0144], device='cuda:2'), out_proj_covar=tensor([9.7124e-05, 1.1629e-04, 8.5326e-05, 1.0043e-04, 9.5526e-05, 8.9929e-05, 1.0876e-04, 1.0652e-04], device='cuda:2')
2023-03-26 02:52:05,191 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=18864.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 02:52:05,863 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3475, 1.1742, 1.1370, 1.2591, 1.5826, 1.5269, 1.3121, 1.1405], device='cuda:2'), covar=tensor([0.0280, 0.0330, 0.0591, 0.0312, 0.0200, 0.0407, 0.0303, 0.0382], device='cuda:2'), in_proj_covar=tensor([0.0084, 0.0114, 0.0138, 0.0118, 0.0104, 0.0099, 0.0091, 0.0109], device='cuda:2'), out_proj_covar=tensor([6.5961e-05, 8.9699e-05, 1.1093e-04, 9.3054e-05, 8.2450e-05, 7.3765e-05, 7.0387e-05, 8.5241e-05], device='cuda:2')
2023-03-26 02:52:23,038 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=18878.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:52:23,630 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=18879.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:52:25,972 INFO [finetune.py:976] (2/7) Epoch 4, batch 1700, loss[loss=0.2011, simple_loss=0.2709, pruned_loss=0.06569, over 4783.00 frames. ], tot_loss[loss=0.2328, simple_loss=0.2865, pruned_loss=0.08954, over 952560.11 frames. ], batch size: 26, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 02:53:00,726 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.073e+02 1.767e+02 2.149e+02 2.599e+02 5.673e+02, threshold=4.299e+02, percent-clipped=2.0
2023-03-26 02:53:07,512 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=18930.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:53:09,237 INFO [finetune.py:976] (2/7) Epoch 4, batch 1750, loss[loss=0.2949, simple_loss=0.3554, pruned_loss=0.1172, over 4724.00 frames. ], tot_loss[loss=0.234, simple_loss=0.288, pruned_loss=0.08998, over 953748.04 frames. ], batch size: 54, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 02:53:52,361 INFO [finetune.py:976] (2/7) Epoch 4, batch 1800, loss[loss=0.2413, simple_loss=0.3038, pruned_loss=0.08935, over 4816.00 frames. ], tot_loss[loss=0.2367, simple_loss=0.291, pruned_loss=0.09118, over 953698.21 frames. ], batch size: 38, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 02:53:57,493 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=18991.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:54:32,253 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.317e+02 1.869e+02 2.115e+02 2.590e+02 5.981e+02, threshold=4.230e+02, percent-clipped=1.0
2023-03-26 02:54:50,225 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=19030.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:54:52,012 INFO [finetune.py:976] (2/7) Epoch 4, batch 1850, loss[loss=0.2572, simple_loss=0.3108, pruned_loss=0.1017, over 4780.00 frames. ], tot_loss[loss=0.2391, simple_loss=0.2939, pruned_loss=0.09214, over 954791.52 frames. ], batch size: 29, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 02:55:39,893 INFO [finetune.py:976] (2/7) Epoch 4, batch 1900, loss[loss=0.2212, simple_loss=0.2801, pruned_loss=0.08115, over 4770.00 frames. ], tot_loss[loss=0.2392, simple_loss=0.2945, pruned_loss=0.09196, over 955115.68 frames. ], batch size: 26, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 02:56:12,754 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.301e+02 1.752e+02 2.082e+02 2.658e+02 3.786e+02, threshold=4.164e+02, percent-clipped=0.0
2023-03-26 02:56:25,565 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=19129.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 02:56:25,678 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.53 vs. limit=2.0
2023-03-26 02:56:33,396 INFO [finetune.py:976] (2/7) Epoch 4, batch 1950, loss[loss=0.3384, simple_loss=0.3501, pruned_loss=0.1634, over 4345.00 frames. ], tot_loss[loss=0.2374, simple_loss=0.2924, pruned_loss=0.09116, over 954157.03 frames. ], batch size: 66, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 02:56:41,419 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=19146.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:56:51,020 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8704, 1.3192, 1.7693, 1.6190, 1.5717, 1.5486, 1.5329, 1.6544], device='cuda:2'), covar=tensor([0.6984, 0.9092, 0.7363, 0.8943, 0.9453, 0.7092, 1.0976, 0.6902], device='cuda:2'), in_proj_covar=tensor([0.0229, 0.0252, 0.0256, 0.0263, 0.0242, 0.0218, 0.0277, 0.0221], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2')
2023-03-26 02:56:53,386 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=19164.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 02:57:09,559 INFO [finetune.py:976] (2/7) Epoch 4, batch 2000, loss[loss=0.1756, simple_loss=0.2482, pruned_loss=0.05154, over 4865.00 frames. ], tot_loss[loss=0.2347, simple_loss=0.2891, pruned_loss=0.09017, over 953736.38 frames. ], batch size: 31, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 02:57:14,680 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=19190.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 02:57:17,077 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=19194.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:57:35,165 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=19212.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 02:57:39,345 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.251e+02 1.681e+02 2.009e+02 2.396e+02 5.395e+02, threshold=4.017e+02, percent-clipped=3.0
2023-03-26 02:57:43,639 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9458, 1.3742, 0.7925, 1.7411, 2.2224, 1.3921, 1.6163, 1.7812], device='cuda:2'), covar=tensor([0.1455, 0.2206, 0.2360, 0.1273, 0.1876, 0.1972, 0.1439, 0.2012], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0099, 0.0117, 0.0095, 0.0126, 0.0097, 0.0101, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 02:57:49,440 INFO [finetune.py:976] (2/7) Epoch 4, batch 2050, loss[loss=0.2019, simple_loss=0.2531, pruned_loss=0.07532, over 4753.00 frames. ], tot_loss[loss=0.2304, simple_loss=0.2847, pruned_loss=0.08807, over 954242.86 frames. ], batch size: 27, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 02:58:31,806 INFO [finetune.py:976] (2/7) Epoch 4, batch 2100, loss[loss=0.2554, simple_loss=0.3192, pruned_loss=0.09585, over 4852.00 frames. ], tot_loss[loss=0.23, simple_loss=0.2842, pruned_loss=0.08789, over 953924.94 frames. ], batch size: 47, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 02:58:34,244 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=19286.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:59:08,801 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9294, 1.7121, 1.4843, 1.4684, 1.7149, 1.6704, 1.6348, 2.4114], device='cuda:2'), covar=tensor([0.8504, 0.8282, 0.6783, 0.8336, 0.6609, 0.4372, 0.7571, 0.2725], device='cuda:2'), in_proj_covar=tensor([0.0276, 0.0252, 0.0219, 0.0283, 0.0234, 0.0195, 0.0239, 0.0188], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2')
2023-03-26 02:59:09,861 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.136e+02 1.737e+02 2.057e+02 2.371e+02 3.601e+02, threshold=4.115e+02, percent-clipped=0.0
2023-03-26 02:59:10,154 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.86 vs. limit=5.0
2023-03-26 02:59:25,760 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=19330.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:59:27,977 INFO [finetune.py:976] (2/7) Epoch 4, batch 2150, loss[loss=0.2774, simple_loss=0.3393, pruned_loss=0.1078, over 4071.00 frames. ], tot_loss[loss=0.2364, simple_loss=0.291, pruned_loss=0.09092, over 954997.52 frames. ], batch size: 65, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:00:10,912 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=19378.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:00:13,945 INFO [finetune.py:976] (2/7) Epoch 4, batch 2200, loss[loss=0.2589, simple_loss=0.3167, pruned_loss=0.1006, over 4892.00 frames. ], tot_loss[loss=0.2374, simple_loss=0.2923, pruned_loss=0.09128, over 956172.27 frames. ], batch size: 35, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:00:33,353 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=19394.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:01:05,355 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.065e+02 1.822e+02 2.092e+02 2.549e+02 4.918e+02, threshold=4.184e+02, percent-clipped=2.0
2023-03-26 03:01:17,113 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0
2023-03-26 03:01:19,363 INFO [finetune.py:976] (2/7) Epoch 4, batch 2250, loss[loss=0.2163, simple_loss=0.2742, pruned_loss=0.07919, over 4837.00 frames. ], tot_loss[loss=0.2389, simple_loss=0.2938, pruned_loss=0.09198, over 957142.12 frames. ], batch size: 44, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:01:43,573 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0
2023-03-26 03:01:46,318 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=19455.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:02:09,425 INFO [finetune.py:976] (2/7) Epoch 4, batch 2300, loss[loss=0.2773, simple_loss=0.3181, pruned_loss=0.1182, over 4808.00 frames. ], tot_loss[loss=0.2386, simple_loss=0.2941, pruned_loss=0.09159, over 957479.23 frames. ], batch size: 40, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:02:12,903 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=19485.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 03:02:14,249 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.67 vs. limit=5.0
2023-03-26 03:02:23,418 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.59 vs. limit=2.0
2023-03-26 03:02:33,044 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6782, 1.5000, 1.8283, 2.0082, 1.6101, 3.4366, 1.4206, 1.6751], device='cuda:2'), covar=tensor([0.0958, 0.1739, 0.1151, 0.0942, 0.1632, 0.0243, 0.1477, 0.1784], device='cuda:2'), in_proj_covar=tensor([0.0079, 0.0082, 0.0078, 0.0080, 0.0093, 0.0084, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2')
2023-03-26 03:02:37,769 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.125e+02 1.717e+02 2.025e+02 2.639e+02 4.089e+02, threshold=4.050e+02, percent-clipped=0.0
2023-03-26 03:02:53,437 INFO [finetune.py:976] (2/7) Epoch 4, batch 2350, loss[loss=0.2435, simple_loss=0.2962, pruned_loss=0.09545, over 4847.00 frames. ], tot_loss[loss=0.2349, simple_loss=0.2898, pruned_loss=0.09002, over 954978.76 frames. ], batch size: 47, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:03:15,151 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6645, 1.2060, 0.8874, 1.5650, 2.0677, 1.2315, 1.5346, 1.7511], device='cuda:2'), covar=tensor([0.1484, 0.2039, 0.2110, 0.1234, 0.2007, 0.2189, 0.1291, 0.1865], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0099, 0.0117, 0.0094, 0.0126, 0.0097, 0.0100, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 03:03:36,838 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0
2023-03-26 03:03:37,780 INFO [finetune.py:976] (2/7) Epoch 4, batch 2400, loss[loss=0.1701, simple_loss=0.2262, pruned_loss=0.05697, over 4688.00 frames. ], tot_loss[loss=0.231, simple_loss=0.2858, pruned_loss=0.08811, over 956961.47 frames. ], batch size: 23, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:03:40,238 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=19586.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:04:02,209 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4739, 1.6162, 1.8101, 0.8447, 1.6211, 1.9168, 1.9164, 1.5633], device='cuda:2'), covar=tensor([0.0991, 0.0659, 0.0467, 0.0674, 0.0433, 0.0617, 0.0330, 0.0751], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0157, 0.0118, 0.0137, 0.0133, 0.0121, 0.0147, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.7250e-05, 1.1684e-04, 8.5812e-05, 1.0025e-04, 9.5991e-05, 8.9696e-05, 1.0920e-04, 1.0729e-04], device='cuda:2')
2023-03-26 03:04:10,024 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=19618.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:04:10,537 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.076e+02 1.651e+02 1.936e+02 2.390e+02 3.810e+02, threshold=3.872e+02, percent-clipped=0.0
2023-03-26 03:04:19,622 INFO [finetune.py:976] (2/7) Epoch 4, batch 2450, loss[loss=0.2356, simple_loss=0.2876, pruned_loss=0.09179, over 4863.00 frames. ], tot_loss[loss=0.2267, simple_loss=0.2817, pruned_loss=0.08582, over 957803.20 frames. ], batch size: 31, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:04:20,301 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=19634.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:04:23,405 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=2.11 vs. limit=2.0
2023-03-26 03:04:59,948 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=19679.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:05:01,904 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0
2023-03-26 03:05:02,276 INFO [finetune.py:976] (2/7) Epoch 4, batch 2500, loss[loss=0.2387, simple_loss=0.3095, pruned_loss=0.08396, over 4819.00 frames. ], tot_loss[loss=0.2287, simple_loss=0.2839, pruned_loss=0.08674, over 956011.64 frames. ], batch size: 39, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:05:28,380 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7639, 1.4455, 2.1365, 1.4785, 1.8700, 2.0433, 1.5093, 2.1896], device='cuda:2'), covar=tensor([0.1344, 0.2213, 0.1177, 0.1858, 0.0875, 0.1281, 0.2680, 0.0782], device='cuda:2'), in_proj_covar=tensor([0.0210, 0.0209, 0.0206, 0.0200, 0.0184, 0.0228, 0.0218, 0.0207], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:05:30,653 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.296e+02 1.721e+02 2.075e+02 2.470e+02 4.533e+02, threshold=4.150e+02, percent-clipped=4.0
2023-03-26 03:05:40,557 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.7662, 1.5618, 1.5644, 0.9165, 1.6013, 1.8846, 1.7927, 1.4994], device='cuda:2'), covar=tensor([0.0983, 0.0699, 0.0463, 0.0613, 0.0372, 0.0427, 0.0278, 0.0573], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0156, 0.0117, 0.0135, 0.0131, 0.0120, 0.0145, 0.0144], device='cuda:2'), out_proj_covar=tensor([9.6733e-05, 1.1582e-04, 8.4867e-05, 9.9441e-05, 9.5038e-05, 8.8692e-05, 1.0818e-04, 1.0642e-04], device='cuda:2')
2023-03-26 03:05:45,256 INFO [finetune.py:976] (2/7) Epoch 4, batch 2550, loss[loss=0.2703, simple_loss=0.3237, pruned_loss=0.1085, over 4858.00 frames. ], tot_loss[loss=0.2323, simple_loss=0.2882, pruned_loss=0.08819, over 956520.97 frames. ], batch size: 44, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:05:58,493 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=19750.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:06:31,492 INFO [finetune.py:976] (2/7) Epoch 4, batch 2600, loss[loss=0.2659, simple_loss=0.3091, pruned_loss=0.1114, over 4865.00 frames. ], tot_loss[loss=0.2348, simple_loss=0.2908, pruned_loss=0.08938, over 956359.11 frames. ], batch size: 34, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:06:33,307 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=19785.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 03:07:15,920 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.335e+02 1.804e+02 2.222e+02 2.871e+02 4.406e+02, threshold=4.445e+02, percent-clipped=2.0
2023-03-26 03:07:22,941 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3859, 1.4836, 1.2765, 1.3134, 1.6703, 1.6261, 1.4895, 1.3500], device='cuda:2'), covar=tensor([0.0288, 0.0294, 0.0489, 0.0307, 0.0212, 0.0339, 0.0295, 0.0336], device='cuda:2'), in_proj_covar=tensor([0.0085, 0.0114, 0.0138, 0.0118, 0.0104, 0.0099, 0.0092, 0.0109], device='cuda:2'), out_proj_covar=tensor([6.6623e-05, 9.0297e-05, 1.1114e-04, 9.3254e-05, 8.2549e-05, 7.3968e-05, 7.0876e-05, 8.5531e-05], device='cuda:2')
2023-03-26 03:07:23,580 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0185, 1.8058, 1.4952, 1.8102, 2.1204, 1.6800, 2.3344, 1.9466], device='cuda:2'), covar=tensor([0.1786, 0.3742, 0.4455, 0.4113, 0.2853, 0.2151, 0.3854, 0.2699], device='cuda:2'), in_proj_covar=tensor([0.0166, 0.0195, 0.0238, 0.0256, 0.0224, 0.0187, 0.0211, 0.0189], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:07:34,471 INFO [finetune.py:976] (2/7) Epoch 4, batch 2650, loss[loss=0.2463, simple_loss=0.3045, pruned_loss=0.09409, over 4808.00 frames. ], tot_loss[loss=0.2371, simple_loss=0.2929, pruned_loss=0.09062, over 954391.93 frames. ], batch size: 40, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:07:34,540 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=19833.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 03:08:16,793 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.2083, 4.4595, 4.6068, 4.9919, 4.8989, 4.6118, 5.3299, 1.5330], device='cuda:2'), covar=tensor([0.0689, 0.0764, 0.0871, 0.0845, 0.1139, 0.1526, 0.0480, 0.5584], device='cuda:2'), in_proj_covar=tensor([0.0361, 0.0246, 0.0278, 0.0294, 0.0340, 0.0287, 0.0311, 0.0302], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:08:34,281 INFO [finetune.py:976] (2/7) Epoch 4, batch 2700, loss[loss=0.1901, simple_loss=0.2516, pruned_loss=0.06424, over 4823.00 frames. ], tot_loss[loss=0.2353, simple_loss=0.2911, pruned_loss=0.0898, over 953280.34 frames. ], batch size: 25, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:08:37,375 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.2500, 1.1199, 1.2850, 0.4840, 1.0890, 1.4870, 1.4361, 1.2383], device='cuda:2'), covar=tensor([0.1071, 0.0841, 0.0574, 0.0750, 0.0610, 0.0478, 0.0427, 0.0711], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0157, 0.0117, 0.0136, 0.0132, 0.0121, 0.0146, 0.0144], device='cuda:2'), out_proj_covar=tensor([9.7096e-05, 1.1660e-04, 8.5026e-05, 9.9893e-05, 9.5820e-05, 8.9443e-05, 1.0894e-04, 1.0684e-04], device='cuda:2')
2023-03-26 03:09:16,013 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.181e+02 1.716e+02 2.005e+02 2.489e+02 3.950e+02, threshold=4.009e+02, percent-clipped=0.0
2023-03-26 03:09:17,465 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0
2023-03-26 03:09:23,771 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0
2023-03-26 03:09:26,958 INFO [finetune.py:976] (2/7) Epoch 4, batch 2750, loss[loss=0.273, simple_loss=0.3094, pruned_loss=0.1183, over 4740.00 frames. ], tot_loss[loss=0.2326, simple_loss=0.2878, pruned_loss=0.08871, over 952928.61 frames. ], batch size: 59, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:09:54,786 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=19974.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:10:00,205 INFO [finetune.py:976] (2/7) Epoch 4, batch 2800, loss[loss=0.2185, simple_loss=0.2717, pruned_loss=0.08263, over 4806.00 frames. ], tot_loss[loss=0.2282, simple_loss=0.2835, pruned_loss=0.08646, over 955022.40 frames. ], batch size: 45, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:10:24,993 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.030e+02 1.781e+02 2.091e+02 2.518e+02 3.954e+02, threshold=4.183e+02, percent-clipped=0.0
2023-03-26 03:10:26,228 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=20020.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:10:34,540 INFO [finetune.py:976] (2/7) Epoch 4, batch 2850, loss[loss=0.2439, simple_loss=0.2906, pruned_loss=0.09859, over 4899.00 frames. ], tot_loss[loss=0.225, simple_loss=0.2805, pruned_loss=0.08478, over 956017.64 frames. ], batch size: 35, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:10:34,812 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.14 vs. limit=5.0
2023-03-26 03:10:45,824 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20050.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:10:48,328 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.6417, 1.6190, 1.7837, 1.0188, 1.7143, 1.9493, 1.9083, 1.5729], device='cuda:2'), covar=tensor([0.1104, 0.0774, 0.0550, 0.0697, 0.0455, 0.0665, 0.0367, 0.0661], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0158, 0.0118, 0.0137, 0.0133, 0.0122, 0.0148, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.8181e-05, 1.1751e-04, 8.5887e-05, 1.0074e-04, 9.6711e-05, 9.0165e-05, 1.1011e-04, 1.0760e-04], device='cuda:2')
2023-03-26 03:11:13,225 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6448, 1.5324, 1.5450, 1.5746, 1.1164, 3.6945, 1.3420, 1.9366], device='cuda:2'), covar=tensor([0.3567, 0.2600, 0.2101, 0.2335, 0.2069, 0.0176, 0.2764, 0.1472], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0112, 0.0116, 0.0119, 0.0116, 0.0096, 0.0101, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:2')
2023-03-26 03:11:21,580 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20081.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:11:22,237 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.95 vs. limit=5.0
2023-03-26 03:11:22,666 INFO [finetune.py:976] (2/7) Epoch 4, batch 2900, loss[loss=0.2847, simple_loss=0.3528, pruned_loss=0.1082, over 4844.00 frames. ], tot_loss[loss=0.229, simple_loss=0.2846, pruned_loss=0.08664, over 956036.06 frames. ], batch size: 49, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:11:31,814 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=20089.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:11:42,681 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=20098.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:11:56,192 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=20112.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:12:05,860 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.342e+02 1.862e+02 2.201e+02 2.748e+02 4.534e+02, threshold=4.402e+02, percent-clipped=1.0
2023-03-26 03:12:27,411 INFO [finetune.py:976] (2/7) Epoch 4, batch 2950, loss[loss=0.2573, simple_loss=0.3095, pruned_loss=0.1026, over 4751.00 frames. ], tot_loss[loss=0.2336, simple_loss=0.2894, pruned_loss=0.08893, over 954649.48 frames. ], batch size: 59, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:12:48,240 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20150.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:13:10,792 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20173.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:13:18,822 INFO [finetune.py:976] (2/7) Epoch 4, batch 3000, loss[loss=0.2899, simple_loss=0.3389, pruned_loss=0.1205, over 4892.00 frames. ], tot_loss[loss=0.2349, simple_loss=0.2905, pruned_loss=0.08962, over 955020.91 frames. ], batch size: 35, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:13:18,822 INFO [finetune.py:1001] (2/7) Computing validation loss
2023-03-26 03:13:34,007 INFO [finetune.py:1010] (2/7) Epoch 4, validation: loss=0.169, simple_loss=0.2409, pruned_loss=0.04857, over 2265189.00 frames.
2023-03-26 03:13:34,007 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB
2023-03-26 03:13:37,638 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9178, 1.3713, 2.3163, 3.8237, 2.6634, 2.7215, 0.7005, 3.2104], device='cuda:2'), covar=tensor([0.2223, 0.2460, 0.1842, 0.0968, 0.1040, 0.1662, 0.2631, 0.0790], device='cuda:2'), in_proj_covar=tensor([0.0104, 0.0120, 0.0138, 0.0168, 0.0105, 0.0144, 0.0130, 0.0106], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2')
2023-03-26 03:13:58,696 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=20204.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:14:18,154 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.180e+02 1.917e+02 2.134e+02 2.804e+02 4.274e+02, threshold=4.268e+02, percent-clipped=0.0
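At batch 3000 the trainer pauses for a validation pass: the "Computing validation loss" line is followed by a frame-weighted dev-set loss (loss=0.169 over ~2.27e6 frames, against a training tot_loss of ~0.235 at this point) and the peak CUDA memory. A sketch of such a pass; compute_batch_loss is a hypothetical helper, not icefall's actual API:

    import torch

    @torch.no_grad()
    def validate(model: torch.nn.Module, valid_loader, device: torch.device) -> float:
        """Frame-weighted average loss over the dev set, as in the
        'Epoch 4, validation: ...' line.  compute_batch_loss is assumed to
        return (summed loss over the batch, number of frames)."""
        model.eval()
        tot_loss, tot_frames = 0.0, 0.0
        for batch in valid_loader:
            loss_sum, frames = compute_batch_loss(model, batch, device)
            tot_loss += float(loss_sum)
            tot_frames += frames
        model.train()   # resume training mode before the next batch
        return tot_loss / max(tot_frames, 1.0)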
2023-03-26 03:14:27,242 INFO [finetune.py:976] (2/7) Epoch 4, batch 3050, loss[loss=0.234, simple_loss=0.2919, pruned_loss=0.08806, over 4781.00 frames. ], tot_loss[loss=0.2361, simple_loss=0.2919, pruned_loss=0.09018, over 953716.88 frames. ], batch size: 26, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:15:01,464 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.2313, 4.5611, 4.7183, 5.0230, 4.9345, 4.6272, 5.3605, 1.5749], device='cuda:2'), covar=tensor([0.0718, 0.0766, 0.0806, 0.0806, 0.1156, 0.1490, 0.0564, 0.5413], device='cuda:2'), in_proj_covar=tensor([0.0359, 0.0244, 0.0277, 0.0293, 0.0339, 0.0285, 0.0308, 0.0299], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:15:06,984 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20265.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:15:16,618 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20274.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:15:24,345 INFO [finetune.py:976] (2/7) Epoch 4, batch 3100, loss[loss=0.2242, simple_loss=0.2747, pruned_loss=0.08685, over 4846.00 frames. ], tot_loss[loss=0.2342, simple_loss=0.2898, pruned_loss=0.08932, over 952551.31 frames. ], batch size: 44, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:15:25,146 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0
2023-03-26 03:16:01,239 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.111e+02 1.549e+02 1.969e+02 2.570e+02 5.632e+02, threshold=3.937e+02, percent-clipped=1.0
2023-03-26 03:16:03,108 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=20322.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:16:14,158 INFO [finetune.py:976] (2/7) Epoch 4, batch 3150, loss[loss=0.1899, simple_loss=0.2545, pruned_loss=0.06269, over 4751.00 frames. ], tot_loss[loss=0.2325, simple_loss=0.2871, pruned_loss=0.08896, over 952559.12 frames. ], batch size: 27, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:16:51,083 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.32 vs. limit=5.0
2023-03-26 03:16:56,930 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20376.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:17:04,824 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=20388.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:17:07,164 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4288, 1.5138, 1.3240, 1.5889, 1.5862, 3.0463, 1.4110, 1.6203], device='cuda:2'), covar=tensor([0.1051, 0.1818, 0.1276, 0.1064, 0.1621, 0.0283, 0.1499, 0.1684], device='cuda:2'), in_proj_covar=tensor([0.0079, 0.0082, 0.0078, 0.0080, 0.0093, 0.0084, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 03:17:40,209 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=20418.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:17:40,661 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.143e+02 1.625e+02 1.959e+02 2.342e+02 5.079e+02, threshold=3.919e+02, percent-clipped=3.0
2023-03-26 03:17:40,811 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7325, 1.6271, 1.5048, 1.8201, 2.2464, 1.7692, 1.4508, 1.4113], device='cuda:2'), covar=tensor([0.2677, 0.2714, 0.2274, 0.2051, 0.2128, 0.1391, 0.3097, 0.2147], device='cuda:2'), in_proj_covar=tensor([0.0231, 0.0209, 0.0198, 0.0184, 0.0233, 0.0174, 0.0214, 0.0187], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:17:58,484 INFO [finetune.py:976] (2/7) Epoch 4, batch 3250, loss[loss=0.2306, simple_loss=0.2899, pruned_loss=0.0856, over 4146.00 frames. ], tot_loss[loss=0.229, simple_loss=0.2838, pruned_loss=0.08715, over 952685.61 frames. ], batch size: 65, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:18:06,356 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20445.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:18:08,886 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20449.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:18:21,894 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20468.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:18:29,630 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20479.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:18:31,929 INFO [finetune.py:976] (2/7) Epoch 4, batch 3300, loss[loss=0.2273, simple_loss=0.2879, pruned_loss=0.08339, over 4772.00 frames. ], tot_loss[loss=0.2327, simple_loss=0.2881, pruned_loss=0.08867, over 952027.29 frames. ], batch size: 28, lr: 3.96e-03, grad_scale: 32.0
2023-03-26 03:19:09,458 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.164e+02 1.750e+02 2.039e+02 2.534e+02 4.074e+02, threshold=4.078e+02, percent-clipped=2.0
2023-03-26 03:19:29,491 INFO [finetune.py:976] (2/7) Epoch 4, batch 3350, loss[loss=0.2421, simple_loss=0.3016, pruned_loss=0.09125, over 4902.00 frames. ], tot_loss[loss=0.2356, simple_loss=0.2915, pruned_loss=0.08985, over 954074.13 frames. ], batch size: 36, lr: 3.96e-03, grad_scale: 64.0
2023-03-26 03:19:29,584 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.1171, 3.5819, 3.7431, 3.9983, 3.8504, 3.6662, 4.2052, 1.3408], device='cuda:2'), covar=tensor([0.0742, 0.0735, 0.0796, 0.0878, 0.1206, 0.1426, 0.0654, 0.5039], device='cuda:2'), in_proj_covar=tensor([0.0357, 0.0243, 0.0277, 0.0292, 0.0338, 0.0285, 0.0307, 0.0299], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:19:50,345 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0
2023-03-26 03:19:59,033 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20560.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:20:17,266 INFO [finetune.py:976] (2/7) Epoch 4, batch 3400, loss[loss=0.2261, simple_loss=0.2943, pruned_loss=0.07896, over 4810.00 frames. ], tot_loss[loss=0.2363, simple_loss=0.2921, pruned_loss=0.09028, over 952835.07 frames. ], batch size: 25, lr: 3.96e-03, grad_scale: 64.0
2023-03-26 03:20:20,441 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=20588.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:20:46,762 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.150e+02 1.735e+02 2.048e+02 2.538e+02 3.974e+02, threshold=4.096e+02, percent-clipped=0.0
2023-03-26 03:21:05,098 INFO [finetune.py:976] (2/7) Epoch 4, batch 3450, loss[loss=0.2122, simple_loss=0.2593, pruned_loss=0.08252, over 4700.00 frames. ], tot_loss[loss=0.2339, simple_loss=0.2903, pruned_loss=0.08881, over 954715.19 frames. ], batch size: 54, lr: 3.96e-03, grad_scale: 64.0
2023-03-26 03:21:09,301 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8498, 1.6579, 1.5350, 1.8477, 2.3799, 1.8622, 1.3938, 1.5013], device='cuda:2'), covar=tensor([0.2138, 0.2194, 0.1846, 0.1661, 0.1733, 0.1188, 0.2763, 0.1734], device='cuda:2'), in_proj_covar=tensor([0.0231, 0.0208, 0.0197, 0.0183, 0.0233, 0.0174, 0.0213, 0.0186], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:21:22,485 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20649.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:21:45,675 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20676.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:21:51,858 INFO [finetune.py:976] (2/7) Epoch 4, batch 3500, loss[loss=0.2327, simple_loss=0.2803, pruned_loss=0.09255, over 4781.00 frames. ], tot_loss[loss=0.2317, simple_loss=0.2873, pruned_loss=0.08802, over 956942.79 frames. ], batch size: 26, lr: 3.96e-03, grad_scale: 64.0
2023-03-26 03:22:31,512 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.236e+02 1.688e+02 2.022e+02 2.523e+02 5.341e+02, threshold=4.043e+02, percent-clipped=2.0
2023-03-26 03:22:35,130 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=20724.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:22:43,663 INFO [finetune.py:976] (2/7) Epoch 4, batch 3550, loss[loss=0.2569, simple_loss=0.3089, pruned_loss=0.1024, over 4890.00 frames. ], tot_loss[loss=0.2286, simple_loss=0.2841, pruned_loss=0.08652, over 959086.64 frames. ], batch size: 35, lr: 3.96e-03, grad_scale: 64.0
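Note that tot_loss is not a plain epoch average: the frame count it is reported over hovers around 950k-959k while thousands of new frames arrive per batch, which points to an exponentially decayed, frame-weighted aggregate that settles into a steady-state window. A sketch of that bookkeeping (the decay constant below is illustrative, not the recipe's value):

class RunningLoss:
    """Frame-weighted, exponentially decayed loss aggregate."""

    def __init__(self, decay: float = 0.995) -> None:  # decay is illustrative
        self.decay = decay
        self.loss_sum = 0.0  # decayed sum of (per-frame loss * frames)
        self.frames = 0.0    # decayed sum of frames

    def update(self, batch_loss: float, batch_frames: float) -> float:
        self.loss_sum = self.decay * self.loss_sum + batch_loss * batch_frames
        self.frames = self.decay * self.frames + batch_frames
        return self.loss_sum / self.frames  # the value printed as tot_loss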
2023-03-26 03:22:56,122 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20744.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:22:56,750 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20745.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:23:21,257 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.74 vs. limit=5.0
2023-03-26 03:23:24,131 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20768.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:23:33,432 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20774.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:23:41,428 INFO [finetune.py:976] (2/7) Epoch 4, batch 3600, loss[loss=0.2236, simple_loss=0.2628, pruned_loss=0.09223, over 4764.00 frames. ], tot_loss[loss=0.2264, simple_loss=0.2812, pruned_loss=0.08575, over 958960.59 frames. ], batch size: 28, lr: 3.96e-03, grad_scale: 64.0
2023-03-26 03:23:52,914 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=20793.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:24:24,359 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=20816.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:24:31,499 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.828e+01 1.831e+02 2.152e+02 2.506e+02 5.159e+02, threshold=4.304e+02, percent-clipped=1.0
2023-03-26 03:24:39,102 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.41 vs. limit=2.0
2023-03-26 03:24:50,039 INFO [finetune.py:976] (2/7) Epoch 4, batch 3650, loss[loss=0.3365, simple_loss=0.3795, pruned_loss=0.1468, over 4829.00 frames. ], tot_loss[loss=0.2305, simple_loss=0.2851, pruned_loss=0.08797, over 957143.84 frames. ], batch size: 39, lr: 3.96e-03, grad_scale: 64.0
2023-03-26 03:25:01,170 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6628, 1.5410, 2.0057, 1.9255, 1.7511, 4.3037, 1.3359, 1.8088], device='cuda:2'), covar=tensor([0.1154, 0.2206, 0.1475, 0.1208, 0.1871, 0.0226, 0.2001, 0.2188], device='cuda:2'), in_proj_covar=tensor([0.0078, 0.0081, 0.0077, 0.0079, 0.0092, 0.0083, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2')
2023-03-26 03:25:24,791 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20860.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:25:52,756 INFO [finetune.py:976] (2/7) Epoch 4, batch 3700, loss[loss=0.2573, simple_loss=0.3042, pruned_loss=0.1052, over 4758.00 frames. ], tot_loss[loss=0.2319, simple_loss=0.2875, pruned_loss=0.08811, over 955403.03 frames. ], batch size: 28, lr: 3.96e-03, grad_scale: 64.0
2023-03-26 03:26:17,163 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=20908.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:26:24,268 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.096e+02 1.710e+02 2.022e+02 2.626e+02 5.956e+02, threshold=4.044e+02, percent-clipped=2.0
2023-03-26 03:26:34,612 INFO [finetune.py:976] (2/7) Epoch 4, batch 3750, loss[loss=0.227, simple_loss=0.2948, pruned_loss=0.07962, over 4815.00 frames. ], tot_loss[loss=0.2322, simple_loss=0.2881, pruned_loss=0.08815, over 954170.78 frames. ], batch size: 38, lr: 3.96e-03, grad_scale: 64.0
2023-03-26 03:26:46,553 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20944.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:27:04,386 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7105, 3.4074, 3.3716, 1.8759, 3.4984, 2.7266, 1.4098, 2.5645], device='cuda:2'), covar=tensor([0.2975, 0.1753, 0.1375, 0.2760, 0.1069, 0.0923, 0.3430, 0.1349], device='cuda:2'), in_proj_covar=tensor([0.0155, 0.0170, 0.0163, 0.0128, 0.0155, 0.0121, 0.0146, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 03:27:33,087 INFO [finetune.py:976] (2/7) Epoch 4, batch 3800, loss[loss=0.1703, simple_loss=0.2496, pruned_loss=0.0455, over 4917.00 frames. ], tot_loss[loss=0.2332, simple_loss=0.289, pruned_loss=0.08872, over 951846.89 frames. ], batch size: 42, lr: 3.96e-03, grad_scale: 64.0
2023-03-26 03:28:14,226 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.257e+02 1.718e+02 1.983e+02 2.372e+02 3.493e+02, threshold=3.966e+02, percent-clipped=0.0
2023-03-26 03:28:29,876 INFO [finetune.py:976] (2/7) Epoch 4, batch 3850, loss[loss=0.2855, simple_loss=0.327, pruned_loss=0.122, over 4826.00 frames. ], tot_loss[loss=0.232, simple_loss=0.2882, pruned_loss=0.08788, over 953528.41 frames. ], batch size: 33, lr: 3.96e-03, grad_scale: 64.0
2023-03-26 03:28:37,717 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=21044.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:28:45,323 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21052.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:28:54,811 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3169, 2.8831, 2.7702, 1.3300, 3.0223, 2.1461, 0.7849, 1.8725], device='cuda:2'), covar=tensor([0.2674, 0.2160, 0.1930, 0.3478, 0.1320, 0.1162, 0.4018, 0.1818], device='cuda:2'), in_proj_covar=tensor([0.0155, 0.0170, 0.0163, 0.0128, 0.0155, 0.0121, 0.0146, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 03:29:08,442 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=21074.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:29:19,707 INFO [finetune.py:976] (2/7) Epoch 4, batch 3900, loss[loss=0.2197, simple_loss=0.2695, pruned_loss=0.08495, over 4819.00 frames. ], tot_loss[loss=0.2304, simple_loss=0.2858, pruned_loss=0.08747, over 953562.00 frames. ], batch size: 38, lr: 3.96e-03, grad_scale: 64.0
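The [zipformer.py:1188] lines are layer-dropout diagnostics from the encoder stacks: each stack has its own warmup window in batches (warmup_begin/warmup_end), inside which whole layers are dropped with elevated probability. Training here is far past every window (batch_count ≈ 21k against warmup_end ≤ 4000), so num_to_drop is almost always 0; the occasional num_to_drop=1 seen later is consistent with a small residual drop probability remaining after warmup. An illustrative schedule (the probabilities are assumptions, not zipformer.py's exact constants):

import random

def layers_to_drop(batch_count: float, warmup_begin: float, warmup_end: float,
                   num_layers: int, initial_p: float = 0.5, final_p: float = 0.005):
    # Drop probability anneals from initial_p down to a small residual
    # final_p across the stack's warmup window; values are illustrative.
    if batch_count < warmup_begin:
        p = initial_p
    elif batch_count < warmup_end:
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        p = initial_p + frac * (final_p - initial_p)
    else:
        p = final_p
    return {i for i in range(num_layers) if random.random() < p}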
2023-03-26 03:32:18,161 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=21292.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:32:44,673 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21312.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:32:45,334 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5175, 1.3676, 1.3112, 1.4631, 1.7150, 1.4040, 0.9661, 1.3552], device='cuda:2'), covar=tensor([0.1651, 0.1788, 0.1526, 0.1393, 0.1369, 0.1058, 0.2490, 0.1517], device='cuda:2'), in_proj_covar=tensor([0.0232, 0.0209, 0.0198, 0.0185, 0.0235, 0.0174, 0.0215, 0.0188], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:32:45,354 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6031, 0.6711, 1.5397, 1.3443, 1.3092, 1.2834, 1.1682, 1.4033], device='cuda:2'), covar=tensor([0.4594, 0.6957, 0.6022, 0.5873, 0.6932, 0.4955, 0.7344, 0.5043], device='cuda:2'), in_proj_covar=tensor([0.0228, 0.0251, 0.0255, 0.0262, 0.0241, 0.0218, 0.0277, 0.0221], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:32:53,584 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.029e+02 1.825e+02 2.074e+02 2.571e+02 5.101e+02, threshold=4.147e+02, percent-clipped=2.0
2023-03-26 03:33:04,208 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21328.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:33:07,022 INFO [finetune.py:976] (2/7) Epoch 4, batch 4150, loss[loss=0.1952, simple_loss=0.2548, pruned_loss=0.06778, over 4223.00 frames. ], tot_loss[loss=0.2309, simple_loss=0.2876, pruned_loss=0.0871, over 952366.49 frames. ], batch size: 65, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:33:28,620 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-26 03:33:45,540 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21373.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:33:52,602 INFO [finetune.py:976] (2/7) Epoch 4, batch 4200, loss[loss=0.2421, simple_loss=0.2987, pruned_loss=0.09276, over 4731.00 frames. ], tot_loss[loss=0.2311, simple_loss=0.2883, pruned_loss=0.087, over 953532.41 frames. ], batch size: 59, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:34:02,010 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21389.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:34:07,260 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4056, 2.0588, 1.5947, 0.7459, 1.8278, 1.9060, 1.6893, 1.8555], device='cuda:2'), covar=tensor([0.0828, 0.1060, 0.1766, 0.2435, 0.1567, 0.2483, 0.2564, 0.1106], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0202, 0.0204, 0.0191, 0.0219, 0.0211, 0.0220, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:34:21,952 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=21408.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:34:39,318 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.013e+02 1.725e+02 2.024e+02 2.533e+02 3.913e+02, threshold=4.049e+02, percent-clipped=0.0
2023-03-26 03:34:50,755 INFO [finetune.py:976] (2/7) Epoch 4, batch 4250, loss[loss=0.2026, simple_loss=0.2685, pruned_loss=0.06832, over 4825.00 frames. ], tot_loss[loss=0.2285, simple_loss=0.2855, pruned_loss=0.08575, over 955159.16 frames. ], batch size: 38, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:35:01,944 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0
2023-03-26 03:35:18,393 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=21474.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:35:24,281 INFO [finetune.py:976] (2/7) Epoch 4, batch 4300, loss[loss=0.1796, simple_loss=0.2401, pruned_loss=0.05959, over 4809.00 frames. ], tot_loss[loss=0.2271, simple_loss=0.2832, pruned_loss=0.08548, over 955431.47 frames. ], batch size: 25, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:35:53,224 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.244e+02 1.671e+02 2.065e+02 2.560e+02 4.445e+02, threshold=4.130e+02, percent-clipped=1.0
2023-03-26 03:35:53,398 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8340, 0.9249, 1.6188, 1.5137, 1.4675, 1.4131, 1.3952, 1.4942], device='cuda:2'), covar=tensor([0.6474, 0.9415, 0.7735, 0.8090, 0.9307, 0.6461, 1.0855, 0.7028], device='cuda:2'), in_proj_covar=tensor([0.0228, 0.0251, 0.0255, 0.0261, 0.0241, 0.0217, 0.0277, 0.0221], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2')
2023-03-26 03:36:01,104 INFO [finetune.py:976] (2/7) Epoch 4, batch 4350, loss[loss=0.2241, simple_loss=0.2799, pruned_loss=0.08408, over 4822.00 frames. ], tot_loss[loss=0.224, simple_loss=0.2798, pruned_loss=0.0841, over 955700.87 frames. ], batch size: 45, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:36:13,769 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.38 vs. limit=5.0
2023-03-26 03:36:15,719 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-26 03:36:19,613 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0950, 2.0817, 1.9846, 1.4361, 2.3060, 2.3024, 2.2323, 1.7477], device='cuda:2'), covar=tensor([0.0627, 0.0606, 0.0785, 0.0975, 0.0461, 0.0627, 0.0584, 0.1145], device='cuda:2'), in_proj_covar=tensor([0.0139, 0.0133, 0.0146, 0.0129, 0.0112, 0.0145, 0.0148, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:36:40,601 INFO [finetune.py:976] (2/7) Epoch 4, batch 4400, loss[loss=0.2229, simple_loss=0.3013, pruned_loss=0.07222, over 4814.00 frames. ], tot_loss[loss=0.2251, simple_loss=0.2812, pruned_loss=0.08451, over 957007.82 frames. ], batch size: 39, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:36:49,014 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21595.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:36:54,164 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0
2023-03-26 03:37:05,922 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21613.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:37:15,011 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.167e+02 1.721e+02 2.036e+02 2.627e+02 4.967e+02, threshold=4.072e+02, percent-clipped=2.0
2023-03-26 03:37:22,898 INFO [finetune.py:976] (2/7) Epoch 4, batch 4450, loss[loss=0.2898, simple_loss=0.3345, pruned_loss=0.1226, over 4739.00 frames. ], tot_loss[loss=0.2288, simple_loss=0.2851, pruned_loss=0.08623, over 956852.13 frames. ], batch size: 59, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:37:38,548 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21656.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:37:51,472 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=21668.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:37:51,537 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8412, 1.6941, 1.5878, 1.8418, 1.8631, 1.5395, 2.1240, 1.8601], device='cuda:2'), covar=tensor([0.1700, 0.2987, 0.3404, 0.2723, 0.2525, 0.1859, 0.2957, 0.2144], device='cuda:2'), in_proj_covar=tensor([0.0165, 0.0193, 0.0235, 0.0254, 0.0224, 0.0186, 0.0209, 0.0188], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:38:01,167 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21674.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:38:09,677 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7792, 1.6421, 1.5694, 1.8126, 2.2106, 1.8727, 1.2880, 1.5004], device='cuda:2'), covar=tensor([0.2402, 0.2459, 0.2220, 0.2069, 0.1906, 0.1254, 0.3054, 0.2198], device='cuda:2'), in_proj_covar=tensor([0.0232, 0.0209, 0.0198, 0.0185, 0.0235, 0.0174, 0.0215, 0.0187], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:38:12,007 INFO [finetune.py:976] (2/7) Epoch 4, batch 4500, loss[loss=0.2286, simple_loss=0.2934, pruned_loss=0.08195, over 4752.00 frames. ], tot_loss[loss=0.2305, simple_loss=0.2871, pruned_loss=0.08696, over 956663.08 frames. ], batch size: 54, lr: 3.95e-03, grad_scale: 32.0
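The [scaling.py:679] lines come from Whiten modules that monitor how far the covariance of a group of activation channels is from isotropic. One standard scale-invariant form of such a metric is mean(diag(C^2)) / mean(diag(C))^2, which equals 1.0 for a perfectly white covariance and grows with anisotropy; the module only applies its gradient penalty once the metric exceeds the configured limit (2.0 or 5.0 in these records), so values below the limit, as here, are purely informational. A hedged reconstruction (the exact formula lives in icefall's scaling.py and may differ in detail):

import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    # x: (num_frames, num_channels); channels are split into num_groups.
    n, c = x.shape
    x = x.reshape(n, num_groups, c // num_groups).transpose(0, 1)  # (groups, n, c/groups)
    x = x - x.mean(dim=1, keepdim=True)
    cov = torch.matmul(x.transpose(1, 2), x) / n  # per-group covariance
    diag_mean = cov.diagonal(dim1=1, dim2=2).mean()
    sq_diag_mean = torch.matmul(cov, cov).diagonal(dim1=1, dim2=2).mean()
    return (sq_diag_mean / diag_mean ** 2).item()  # 1.0 == perfectly white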
2023-03-26 03:38:12,675 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=21684.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:38:31,069 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9937, 1.8810, 1.5170, 1.7306, 1.7857, 1.7026, 1.7207, 2.5552], device='cuda:2'), covar=tensor([0.7395, 0.8189, 0.6041, 0.8023, 0.6752, 0.4324, 0.7771, 0.2753], device='cuda:2'), in_proj_covar=tensor([0.0279, 0.0253, 0.0219, 0.0284, 0.0236, 0.0197, 0.0241, 0.0192], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:2')
2023-03-26 03:38:41,406 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=21708.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:38:45,057 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.2342, 1.1409, 1.2558, 0.5828, 1.1408, 1.4628, 1.4866, 1.2250], device='cuda:2'), covar=tensor([0.0896, 0.0701, 0.0570, 0.0605, 0.0528, 0.0471, 0.0374, 0.0639], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0156, 0.0117, 0.0136, 0.0131, 0.0120, 0.0146, 0.0144], device='cuda:2'), out_proj_covar=tensor([9.7025e-05, 1.1627e-04, 8.5326e-05, 9.9310e-05, 9.5056e-05, 8.9188e-05, 1.0908e-04, 1.0672e-04], device='cuda:2')
2023-03-26 03:38:49,075 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.156e+02 1.721e+02 2.032e+02 2.543e+02 5.339e+02, threshold=4.063e+02, percent-clipped=3.0
2023-03-26 03:38:58,599 INFO [finetune.py:976] (2/7) Epoch 4, batch 4550, loss[loss=0.1984, simple_loss=0.2525, pruned_loss=0.07215, over 4799.00 frames. ], tot_loss[loss=0.2319, simple_loss=0.2888, pruned_loss=0.08746, over 955227.22 frames. ], batch size: 26, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:39:04,468 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0
2023-03-26 03:39:09,620 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4261, 1.4731, 1.5432, 1.7115, 1.5439, 3.4050, 1.3561, 1.6028], device='cuda:2'), covar=tensor([0.1070, 0.1772, 0.1361, 0.1062, 0.1675, 0.0235, 0.1535, 0.1731], device='cuda:2'), in_proj_covar=tensor([0.0079, 0.0081, 0.0078, 0.0080, 0.0092, 0.0083, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2')
2023-03-26 03:39:14,113 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=21756.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:39:18,467 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21763.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:39:30,505 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=21774.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:39:41,974 INFO [finetune.py:976] (2/7) Epoch 4, batch 4600, loss[loss=0.2475, simple_loss=0.3095, pruned_loss=0.09274, over 4847.00 frames. ], tot_loss[loss=0.2307, simple_loss=0.2877, pruned_loss=0.08685, over 954383.09 frames. ], batch size: 49, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:40:25,680 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.219e+02 1.789e+02 2.167e+02 2.687e+02 4.147e+02, threshold=4.334e+02, percent-clipped=1.0
2023-03-26 03:40:32,331 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=21822.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:40:33,572 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21824.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:40:44,876 INFO [finetune.py:976] (2/7) Epoch 4, batch 4650, loss[loss=0.2436, simple_loss=0.296, pruned_loss=0.09557, over 4804.00 frames. ], tot_loss[loss=0.2285, simple_loss=0.2849, pruned_loss=0.08606, over 956023.68 frames. ], batch size: 51, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:41:06,170 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21850.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:41:28,342 INFO [finetune.py:976] (2/7) Epoch 4, batch 4700, loss[loss=0.2262, simple_loss=0.2803, pruned_loss=0.08608, over 4864.00 frames. ], tot_loss[loss=0.2239, simple_loss=0.2804, pruned_loss=0.0837, over 956735.75 frames. ], batch size: 31, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:41:42,834 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0
2023-03-26 03:42:02,074 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21911.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:42:03,191 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2046, 1.2923, 1.5949, 1.1009, 1.2015, 1.3881, 1.3213, 1.5666], device='cuda:2'), covar=tensor([0.1207, 0.1820, 0.1101, 0.1396, 0.0856, 0.1209, 0.2313, 0.0796], device='cuda:2'), in_proj_covar=tensor([0.0205, 0.0205, 0.0202, 0.0195, 0.0183, 0.0224, 0.0214, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:42:04,417 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8215, 1.7534, 2.3702, 1.4848, 1.9775, 2.1986, 1.7638, 2.3435], device='cuda:2'), covar=tensor([0.1735, 0.2171, 0.1445, 0.2423, 0.1121, 0.1809, 0.2615, 0.1091], device='cuda:2'), in_proj_covar=tensor([0.0205, 0.0205, 0.0202, 0.0195, 0.0183, 0.0224, 0.0214, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:42:08,439 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.749e+02 2.081e+02 2.546e+02 7.973e+02, threshold=4.162e+02, percent-clipped=1.0
2023-03-26 03:42:16,527 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.96 vs. limit=2.0
2023-03-26 03:42:16,914 INFO [finetune.py:976] (2/7) Epoch 4, batch 4750, loss[loss=0.2268, simple_loss=0.2803, pruned_loss=0.08659, over 4767.00 frames. ], tot_loss[loss=0.2214, simple_loss=0.2775, pruned_loss=0.08268, over 958636.50 frames. ], batch size: 28, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:42:29,497 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=21951.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:42:46,661 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=21968.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:42:53,039 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=21969.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:42:53,716 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21970.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:43:02,051 INFO [finetune.py:976] (2/7) Epoch 4, batch 4800, loss[loss=0.2924, simple_loss=0.3442, pruned_loss=0.1203, over 4847.00 frames. ], tot_loss[loss=0.2249, simple_loss=0.2808, pruned_loss=0.08454, over 956393.29 frames. ], batch size: 49, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:43:02,799 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=21984.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:43:15,893 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8339, 3.7209, 3.6960, 1.6946, 3.9317, 2.8643, 0.8738, 2.5856], device='cuda:2'), covar=tensor([0.2317, 0.2305, 0.1360, 0.3593, 0.0937, 0.0998, 0.4663, 0.1706], device='cuda:2'), in_proj_covar=tensor([0.0155, 0.0171, 0.0163, 0.0128, 0.0155, 0.0122, 0.0146, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 03:43:34,456 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22016.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:43:36,978 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.99 vs. limit=5.0
2023-03-26 03:43:37,302 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.143e+02 1.769e+02 1.987e+02 2.631e+02 5.032e+02, threshold=3.974e+02, percent-clipped=2.0
2023-03-26 03:43:40,590 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.72 vs. limit=5.0
2023-03-26 03:43:44,109 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22031.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:43:44,660 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22032.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:43:45,187 INFO [finetune.py:976] (2/7) Epoch 4, batch 4850, loss[loss=0.2255, simple_loss=0.2847, pruned_loss=0.08314, over 4821.00 frames. ], tot_loss[loss=0.229, simple_loss=0.2857, pruned_loss=0.08615, over 956093.07 frames. ], batch size: 33, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:44:04,206 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8198, 0.8943, 1.6204, 1.5399, 1.4646, 1.4374, 1.3916, 1.4808], device='cuda:2'), covar=tensor([0.6122, 0.9349, 0.7393, 0.8235, 0.8938, 0.6591, 1.0143, 0.6834], device='cuda:2'), in_proj_covar=tensor([0.0230, 0.0253, 0.0258, 0.0263, 0.0243, 0.0219, 0.0279, 0.0224], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:44:19,576 INFO [finetune.py:976] (2/7) Epoch 4, batch 4900, loss[loss=0.264, simple_loss=0.3302, pruned_loss=0.09885, over 4851.00 frames. ], tot_loss[loss=0.2305, simple_loss=0.2872, pruned_loss=0.08686, over 956004.60 frames. ], batch size: 44, lr: 3.95e-03, grad_scale: 32.0
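The [zipformer.py:2441] dumps report, per attention head, an entropy statistic of the attention weights together with covariance summaries of the projection activations. A head whose entropy sits near 0 (e.g. the 0.8943 entry in the 03:44:04 dump above) is attending to essentially one frame, while values near log(num_keys) mean nearly uniform attention. A sketch of the assumed form of this diagnostic (the exact reduction in zipformer.py is not shown in this log):

import torch

def attn_weights_entropy(attn_weights: torch.Tensor) -> torch.Tensor:
    # attn_weights: (num_heads, num_queries, num_keys); each row sums to 1.
    p = attn_weights.clamp(min=1e-20)
    entropy = -(p * p.log()).sum(dim=-1)  # (num_heads, num_queries)
    return entropy.mean(dim=-1)           # one value per head, as in the dumps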
2023-03-26 03:44:51,175 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0
2023-03-26 03:45:00,545 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22119.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:45:01,045 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.093e+02 1.764e+02 2.232e+02 2.515e+02 4.523e+02, threshold=4.464e+02, percent-clipped=3.0
2023-03-26 03:45:18,660 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22130.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:45:20,457 INFO [finetune.py:976] (2/7) Epoch 4, batch 4950, loss[loss=0.2359, simple_loss=0.299, pruned_loss=0.08636, over 4723.00 frames. ], tot_loss[loss=0.2309, simple_loss=0.2881, pruned_loss=0.08683, over 957868.68 frames. ], batch size: 54, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:45:50,960 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.75 vs. limit=2.0
2023-03-26 03:46:10,286 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9083, 1.3944, 1.0951, 1.8306, 2.2882, 1.8601, 1.6195, 1.8940], device='cuda:2'), covar=tensor([0.1625, 0.2451, 0.2610, 0.1477, 0.2134, 0.2912, 0.1711, 0.2267], device='cuda:2'), in_proj_covar=tensor([0.0092, 0.0099, 0.0117, 0.0094, 0.0125, 0.0098, 0.0101, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 03:46:12,764 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22176.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:46:19,505 INFO [finetune.py:976] (2/7) Epoch 4, batch 5000, loss[loss=0.193, simple_loss=0.2627, pruned_loss=0.06167, over 4754.00 frames. ], tot_loss[loss=0.2277, simple_loss=0.2854, pruned_loss=0.08504, over 957822.95 frames. ], batch size: 28, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:46:24,517 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22191.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:46:34,505 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2323, 1.8710, 1.4441, 0.5574, 1.6778, 1.8315, 1.5674, 1.7840], device='cuda:2'), covar=tensor([0.0795, 0.0991, 0.1611, 0.2232, 0.1417, 0.2676, 0.2739, 0.0941], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0203, 0.0205, 0.0192, 0.0220, 0.0212, 0.0223, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:46:35,047 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22206.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:46:40,519 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22215.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:46:43,479 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.178e+02 1.598e+02 2.028e+02 2.482e+02 4.524e+02, threshold=4.056e+02, percent-clipped=1.0
2023-03-26 03:46:43,752 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.55 vs. limit=2.0
2023-03-26 03:46:52,460 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4903, 1.3973, 1.2910, 1.3567, 1.7449, 1.7531, 1.5485, 1.3251], device='cuda:2'), covar=tensor([0.0378, 0.0362, 0.0583, 0.0388, 0.0227, 0.0422, 0.0338, 0.0431], device='cuda:2'), in_proj_covar=tensor([0.0086, 0.0115, 0.0139, 0.0119, 0.0106, 0.0101, 0.0092, 0.0110], device='cuda:2'), out_proj_covar=tensor([6.7534e-05, 9.0417e-05, 1.1168e-04, 9.4095e-05, 8.3891e-05, 7.4954e-05, 7.0677e-05, 8.5811e-05], device='cuda:2')
2023-03-26 03:46:59,737 INFO [finetune.py:976] (2/7) Epoch 4, batch 5050, loss[loss=0.2039, simple_loss=0.2641, pruned_loss=0.07191, over 4823.00 frames. ], tot_loss[loss=0.2267, simple_loss=0.2833, pruned_loss=0.08509, over 953900.60 frames. ], batch size: 30, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:47:02,284 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22237.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:47:02,294 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22237.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:47:16,044 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22251.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:47:28,178 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22269.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:47:32,934 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22276.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:47:39,818 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.1069, 4.8543, 4.6381, 2.8889, 4.9882, 3.7236, 0.9674, 3.4484], device='cuda:2'), covar=tensor([0.2237, 0.1738, 0.1167, 0.2651, 0.0698, 0.0762, 0.4698, 0.1441], device='cuda:2'), in_proj_covar=tensor([0.0157, 0.0173, 0.0164, 0.0129, 0.0157, 0.0123, 0.0148, 0.0125], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 03:47:40,348 INFO [finetune.py:976] (2/7) Epoch 4, batch 5100, loss[loss=0.1712, simple_loss=0.239, pruned_loss=0.05175, over 4776.00 frames. ], tot_loss[loss=0.2239, simple_loss=0.2795, pruned_loss=0.08416, over 950395.99 frames. ], batch size: 28, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:47:49,924 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22298.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:47:50,453 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22299.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:48:03,276 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22317.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:48:05,001 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.872e+01 1.620e+02 1.853e+02 2.165e+02 3.345e+02, threshold=3.706e+02, percent-clipped=0.0
2023-03-26 03:48:08,734 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22326.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:48:13,376 INFO [finetune.py:976] (2/7) Epoch 4, batch 5150, loss[loss=0.2031, simple_loss=0.2693, pruned_loss=0.0684, over 4813.00 frames. ], tot_loss[loss=0.224, simple_loss=0.2795, pruned_loss=0.08424, over 951938.94 frames. ], batch size: 51, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:48:50,822 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3449, 2.1507, 2.8171, 1.6801, 2.5833, 2.9269, 2.1967, 2.7603], device='cuda:2'), covar=tensor([0.1907, 0.2488, 0.1705, 0.3071, 0.1236, 0.1914, 0.2676, 0.1252], device='cuda:2'), in_proj_covar=tensor([0.0206, 0.0206, 0.0203, 0.0197, 0.0184, 0.0225, 0.0216, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:48:51,907 INFO [finetune.py:976] (2/7) Epoch 4, batch 5200, loss[loss=0.2447, simple_loss=0.3177, pruned_loss=0.08581, over 4790.00 frames. ], tot_loss[loss=0.2269, simple_loss=0.2827, pruned_loss=0.08551, over 951705.85 frames. ], batch size: 51, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:48:53,215 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22385.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:49:26,415 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22419.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:49:26,886 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.279e+02 1.811e+02 2.155e+02 2.737e+02 4.498e+02, threshold=4.310e+02, percent-clipped=4.0
2023-03-26 03:49:40,613 INFO [finetune.py:976] (2/7) Epoch 4, batch 5250, loss[loss=0.2294, simple_loss=0.2896, pruned_loss=0.08461, over 4813.00 frames. ], tot_loss[loss=0.2297, simple_loss=0.2861, pruned_loss=0.0867, over 953934.05 frames. ], batch size: 39, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:49:54,888 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22446.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:49:55,521 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1914, 2.0898, 2.0304, 2.1649, 1.6099, 4.8216, 2.0311, 2.7278], device='cuda:2'), covar=tensor([0.3065, 0.2289, 0.1834, 0.2053, 0.1755, 0.0081, 0.2243, 0.1186], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0112, 0.0116, 0.0119, 0.0116, 0.0097, 0.0100, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:2')
2023-03-26 03:49:56,757 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6000, 2.3201, 1.9935, 0.8511, 2.2029, 1.9393, 1.7601, 2.0514], device='cuda:2'), covar=tensor([0.0873, 0.1004, 0.1816, 0.2661, 0.1599, 0.2672, 0.2354, 0.1214], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0201, 0.0204, 0.0191, 0.0219, 0.0210, 0.0221, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:50:18,144 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22467.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:50:18,839 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0702, 1.6371, 1.8054, 1.8240, 1.5890, 1.6836, 1.7961, 1.7557], device='cuda:2'), covar=tensor([0.6127, 0.8895, 0.6601, 0.8203, 0.9081, 0.6585, 1.0275, 0.6493], device='cuda:2'), in_proj_covar=tensor([0.0230, 0.0252, 0.0257, 0.0261, 0.0242, 0.0219, 0.0278, 0.0223], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:50:27,829 INFO [finetune.py:976] (2/7) Epoch 4, batch 5300, loss[loss=0.252, simple_loss=0.302, pruned_loss=0.101, over 4869.00 frames. ], tot_loss[loss=0.2306, simple_loss=0.2869, pruned_loss=0.08715, over 953180.85 frames. ], batch size: 31, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:50:30,484 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22486.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:50:49,057 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22506.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:51:10,350 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.196e+02 1.644e+02 2.088e+02 2.525e+02 4.526e+02, threshold=4.176e+02, percent-clipped=1.0
2023-03-26 03:51:29,045 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22532.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:51:29,575 INFO [finetune.py:976] (2/7) Epoch 4, batch 5350, loss[loss=0.2447, simple_loss=0.2932, pruned_loss=0.09807, over 4835.00 frames. ], tot_loss[loss=0.2309, simple_loss=0.2875, pruned_loss=0.08716, over 953447.81 frames. ], batch size: 30, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:51:43,431 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22554.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:52:04,982 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22571.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:52:20,640 INFO [finetune.py:976] (2/7) Epoch 4, batch 5400, loss[loss=0.1881, simple_loss=0.2513, pruned_loss=0.06247, over 4822.00 frames. ], tot_loss[loss=0.2277, simple_loss=0.2844, pruned_loss=0.08553, over 954816.85 frames. ], batch size: 39, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:52:21,952 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6903, 1.5954, 1.5478, 1.6689, 1.2548, 3.6645, 1.4594, 2.1782], device='cuda:2'), covar=tensor([0.3403, 0.2185, 0.2012, 0.2346, 0.1823, 0.0179, 0.2708, 0.1175], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0112, 0.0116, 0.0119, 0.0115, 0.0096, 0.0100, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:2')
2023-03-26 03:52:27,239 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22593.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:52:30,395 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3149, 1.7114, 1.9846, 1.9795, 1.7538, 1.8241, 1.9854, 1.8664], device='cuda:2'), covar=tensor([0.6178, 0.9860, 0.7524, 0.8768, 0.9694, 0.6877, 1.1269, 0.6710], device='cuda:2'), in_proj_covar=tensor([0.0228, 0.0251, 0.0256, 0.0260, 0.0241, 0.0218, 0.0276, 0.0222], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:52:45,272 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.025e+02 1.598e+02 1.961e+02 2.261e+02 4.832e+02, threshold=3.922e+02, percent-clipped=2.0
2023-03-26 03:52:50,433 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22626.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:52:54,547 INFO [finetune.py:976] (2/7) Epoch 4, batch 5450, loss[loss=0.2111, simple_loss=0.2658, pruned_loss=0.07822, over 4908.00 frames. ], tot_loss[loss=0.2266, simple_loss=0.2821, pruned_loss=0.0855, over 954308.18 frames. ], batch size: 43, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:52:55,863 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.7017, 4.0713, 4.2154, 4.4590, 4.4503, 4.1905, 4.8143, 1.5570], device='cuda:2'), covar=tensor([0.0702, 0.0820, 0.0744, 0.0763, 0.0999, 0.1266, 0.0486, 0.4833], device='cuda:2'), in_proj_covar=tensor([0.0361, 0.0245, 0.0277, 0.0295, 0.0340, 0.0286, 0.0308, 0.0301], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:52:58,335 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22639.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:53:01,437 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0
2023-03-26 03:53:30,709 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22674.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:53:33,144 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.67 vs. limit=2.0
2023-03-26 03:53:37,508 INFO [finetune.py:976] (2/7) Epoch 4, batch 5500, loss[loss=0.2146, simple_loss=0.2699, pruned_loss=0.07971, over 4769.00 frames. ], tot_loss[loss=0.2231, simple_loss=0.2786, pruned_loss=0.08379, over 953975.75 frames. ], batch size: 26, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:53:48,419 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22700.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 03:54:00,990 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.350e+02 1.787e+02 2.088e+02 2.580e+02 6.017e+02, threshold=4.176e+02, percent-clipped=5.0
2023-03-26 03:54:16,190 INFO [finetune.py:976] (2/7) Epoch 4, batch 5550, loss[loss=0.2953, simple_loss=0.3512, pruned_loss=0.1197, over 4896.00 frames. ], tot_loss[loss=0.225, simple_loss=0.2804, pruned_loss=0.08478, over 952221.76 frames. ], batch size: 43, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:54:23,407 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22741.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:54:55,504 INFO [finetune.py:976] (2/7) Epoch 4, batch 5600, loss[loss=0.23, simple_loss=0.2975, pruned_loss=0.08121, over 4906.00 frames. ], tot_loss[loss=0.2283, simple_loss=0.2844, pruned_loss=0.08611, over 954099.50 frames. ], batch size: 43, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:54:57,305 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22786.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:55:01,939 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22794.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:55:28,583 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.180e+02 1.750e+02 2.147e+02 2.459e+02 4.993e+02, threshold=4.295e+02, percent-clipped=1.0
2023-03-26 03:55:31,586 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22825.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:55:31,597 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1176, 1.9082, 2.1303, 1.0178, 2.3047, 2.5604, 2.0759, 1.9876], device='cuda:2'), covar=tensor([0.0917, 0.0846, 0.0442, 0.0797, 0.0607, 0.0540, 0.0483, 0.0727], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0157, 0.0117, 0.0135, 0.0132, 0.0121, 0.0147, 0.0144], device='cuda:2'), out_proj_covar=tensor([9.7046e-05, 1.1655e-04, 8.5171e-05, 9.8861e-05, 9.5305e-05, 8.9808e-05, 1.0982e-04, 1.0693e-04], device='cuda:2')
2023-03-26 03:55:40,655 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22832.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:55:41,207 INFO [finetune.py:976] (2/7) Epoch 4, batch 5650, loss[loss=0.2172, simple_loss=0.2809, pruned_loss=0.07676, over 4890.00 frames. ], tot_loss[loss=0.2302, simple_loss=0.2873, pruned_loss=0.08653, over 953078.21 frames. ], batch size: 32, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:55:41,955 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22834.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:55:42,163 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0
2023-03-26 03:56:06,195 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22855.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:56:21,393 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22871.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:56:31,136 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22880.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:56:33,110 INFO [finetune.py:976] (2/7) Epoch 4, batch 5700, loss[loss=0.2144, simple_loss=0.2492, pruned_loss=0.08977, over 4169.00 frames. ], tot_loss[loss=0.2266, simple_loss=0.2827, pruned_loss=0.0852, over 937623.96 frames. ], batch size: 18, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:56:33,794 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2564, 1.9022, 1.4532, 0.6315, 1.7287, 1.9993, 1.6884, 1.9022], device='cuda:2'), covar=tensor([0.0819, 0.0727, 0.1237, 0.1923, 0.1306, 0.1835, 0.1998, 0.0782], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0202, 0.0204, 0.0191, 0.0218, 0.0210, 0.0221, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:56:34,968 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22886.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:56:41,784 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22893.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:57:20,677 INFO [finetune.py:976] (2/7) Epoch 5, batch 0, loss[loss=0.2444, simple_loss=0.2994, pruned_loss=0.0947, over 4760.00 frames. ], tot_loss[loss=0.2444, simple_loss=0.2994, pruned_loss=0.0947, over 4760.00 frames. ], batch size: 51, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:57:20,678 INFO [finetune.py:1001] (2/7) Computing validation loss
2023-03-26 03:57:30,558 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5477, 1.2115, 1.3900, 1.2583, 1.6923, 1.6098, 1.5163, 1.3648], device='cuda:2'), covar=tensor([0.0339, 0.0357, 0.0544, 0.0344, 0.0296, 0.0329, 0.0326, 0.0393], device='cuda:2'), in_proj_covar=tensor([0.0086, 0.0115, 0.0138, 0.0120, 0.0105, 0.0101, 0.0092, 0.0110], device='cuda:2'), out_proj_covar=tensor([6.7570e-05, 9.0440e-05, 1.1130e-04, 9.4555e-05, 8.3660e-05, 7.5062e-05, 7.0309e-05, 8.6112e-05], device='cuda:2')
2023-03-26 03:57:37,527 INFO [finetune.py:1010] (2/7) Epoch 5, validation: loss=0.1701, simple_loss=0.2413, pruned_loss=0.0494, over 2265189.00 frames.
2023-03-26 03:57:37,527 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB
2023-03-26 03:57:47,694 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1017, 1.5470, 1.8482, 1.8712, 1.6698, 1.7030, 1.7891, 1.7566], device='cuda:2'), covar=tensor([0.6523, 0.9604, 0.6990, 0.8518, 0.9962, 0.6932, 1.1759, 0.7105], device='cuda:2'), in_proj_covar=tensor([0.0229, 0.0251, 0.0257, 0.0260, 0.0240, 0.0218, 0.0276, 0.0223], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2')
2023-03-26 03:57:48,821 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22919.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:57:49,349 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.844e+01 1.616e+02 1.840e+02 2.309e+02 3.969e+02, threshold=3.680e+02, percent-clipped=0.0
2023-03-26 03:58:02,584 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22941.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:58:29,020 INFO [finetune.py:976] (2/7) Epoch 5, batch 50, loss[loss=0.2288, simple_loss=0.2876, pruned_loss=0.08502, over 4862.00 frames. ], tot_loss[loss=0.2335, simple_loss=0.2898, pruned_loss=0.08864, over 216753.44 frames. ], batch size: 31, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 03:59:05,722 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22995.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 03:59:17,857 INFO [finetune.py:976] (2/7) Epoch 5, batch 100, loss[loss=0.2341, simple_loss=0.2836, pruned_loss=0.09226, over 4826.00 frames. ], tot_loss[loss=0.2269, simple_loss=0.2821, pruned_loss=0.08582, over 381352.62 frames. ], batch size: 33, lr: 3.95e-03, grad_scale: 32.0
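At epoch boundaries (and at fixed intervals) the loop logs "Computing validation loss" and then a frame-weighted average over the full dev set; the frame count is always 2265189.00, so validation numbers are directly comparable across checkpoints (0.1701 at the start of epoch 5 here vs 0.169 during epoch 4), and peak GPU memory is reported alongside. A hedged sketch of that pass (loss_fn is a hypothetical stand-in for the recipe's loss helper):

import torch

def validate(model, dev_loader, loss_fn, device) -> float:
    # loss_fn(model, batch, device) -> (loss_tensor, num_frames); hypothetical.
    model.eval()
    tot, frames = 0.0, 0.0
    with torch.no_grad():
        for batch in dev_loader:
            loss, num_frames = loss_fn(model, batch, device)
            tot += float(loss) * num_frames
            frames += num_frames
    model.train()
    # Peak usage, as in "Maximum memory allocated so far is 6329MB":
    peak_mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
    return tot / frames  # frame-weighted validation loss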
2023-03-26 03:59:23,734 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.085e+02 1.764e+02 2.029e+02 2.456e+02 6.922e+02, threshold=4.057e+02, percent-clipped=5.0
2023-03-26 03:59:36,981 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=23041.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 03:59:51,326 INFO [finetune.py:976] (2/7) Epoch 5, batch 150, loss[loss=0.1958, simple_loss=0.2611, pruned_loss=0.06524, over 4870.00 frames. ], tot_loss[loss=0.2201, simple_loss=0.2749, pruned_loss=0.08263, over 508545.99 frames. ], batch size: 34, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 04:00:08,773 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=23089.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:00:23,410 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=23108.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:00:30,574 INFO [finetune.py:976] (2/7) Epoch 5, batch 200, loss[loss=0.253, simple_loss=0.2893, pruned_loss=0.1084, over 4930.00 frames. ], tot_loss[loss=0.2174, simple_loss=0.2723, pruned_loss=0.08123, over 610682.86 frames. ], batch size: 33, lr: 3.95e-03, grad_scale: 64.0
2023-03-26 04:00:42,596 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.047e+02 1.662e+02 1.994e+02 2.595e+02 4.858e+02, threshold=3.989e+02, percent-clipped=3.0
2023-03-26 04:01:01,382 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=23150.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:01:09,513 INFO [finetune.py:976] (2/7) Epoch 5, batch 250, loss[loss=0.2606, simple_loss=0.3054, pruned_loss=0.1079, over 4913.00 frames. ], tot_loss[loss=0.216, simple_loss=0.2726, pruned_loss=0.07972, over 685914.19 frames. ], batch size: 36, lr: 3.95e-03, grad_scale: 64.0
2023-03-26 04:01:18,071 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=23169.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:01:31,047 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=23181.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:02:00,761 INFO [finetune.py:976] (2/7) Epoch 5, batch 300, loss[loss=0.2316, simple_loss=0.3025, pruned_loss=0.08035, over 4914.00 frames. ], tot_loss[loss=0.2202, simple_loss=0.2775, pruned_loss=0.08138, over 744958.60 frames. ], batch size: 36, lr: 3.95e-03, grad_scale: 64.0
2023-03-26 04:02:11,917 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=23217.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:02:20,055 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.235e+02 1.736e+02 2.142e+02 2.597e+02 5.294e+02, threshold=4.284e+02, percent-clipped=3.0
2023-03-26 04:02:31,214 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=23229.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:03:03,014 INFO [finetune.py:976] (2/7) Epoch 5, batch 350, loss[loss=0.2327, simple_loss=0.2991, pruned_loss=0.08315, over 4892.00 frames. ], tot_loss[loss=0.226, simple_loss=0.2829, pruned_loss=0.08454, over 791109.58 frames. ], batch size: 35, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 04:03:16,133 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6253, 1.5882, 1.4112, 1.7238, 2.0161, 1.7179, 1.2870, 1.3036], device='cuda:2'), covar=tensor([0.2599, 0.2427, 0.2227, 0.2042, 0.2351, 0.1438, 0.3312, 0.2138], device='cuda:2'), in_proj_covar=tensor([0.0232, 0.0209, 0.0198, 0.0184, 0.0235, 0.0174, 0.0214, 0.0186], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 04:03:20,915 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=23278.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:03:34,248 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=23290.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:03:41,696 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=23295.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:03:49,362 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.5624, 1.4979, 1.5533, 0.9508, 1.6506, 1.8291, 1.7691, 1.3600], device='cuda:2'), covar=tensor([0.0924, 0.0625, 0.0492, 0.0562, 0.0419, 0.0539, 0.0322, 0.0651], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0157, 0.0118, 0.0136, 0.0132, 0.0122, 0.0148, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.7614e-05, 1.1666e-04, 8.5751e-05, 9.9426e-05, 9.5453e-05, 9.0216e-05, 1.1028e-04, 1.0729e-04], device='cuda:2')
2023-03-26 04:03:51,198 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.10 vs. limit=5.0
2023-03-26 04:03:51,622 INFO [finetune.py:976] (2/7) Epoch 5, batch 400, loss[loss=0.2506, simple_loss=0.3165, pruned_loss=0.09239, over 4814.00 frames. ], tot_loss[loss=0.2276, simple_loss=0.2852, pruned_loss=0.08499, over 828656.10 frames. ], batch size: 39, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 04:03:58,185 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.903e+01 1.731e+02 2.116e+02 2.565e+02 5.981e+02, threshold=4.232e+02, percent-clipped=1.0
2023-03-26 04:04:13,595 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=23343.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:04:20,851 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3284, 1.3841, 1.7188, 1.7500, 1.4920, 3.2215, 1.2129, 1.5614], device='cuda:2'), covar=tensor([0.1106, 0.1865, 0.1392, 0.1142, 0.1752, 0.0301, 0.1634, 0.1790], device='cuda:2'), in_proj_covar=tensor([0.0078, 0.0082, 0.0078, 0.0080, 0.0093, 0.0084, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2')
2023-03-26 04:04:24,904 INFO [finetune.py:976] (2/7) Epoch 5, batch 450, loss[loss=0.2907, simple_loss=0.3273, pruned_loss=0.1271, over 4371.00 frames. ], tot_loss[loss=0.2257, simple_loss=0.2837, pruned_loss=0.0839, over 857912.79 frames. ], batch size: 65, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 04:05:10,551 INFO [finetune.py:976] (2/7) Epoch 5, batch 500, loss[loss=0.2993, simple_loss=0.3287, pruned_loss=0.135, over 4906.00 frames. ], tot_loss[loss=0.2244, simple_loss=0.2818, pruned_loss=0.08346, over 879468.87 frames. ], batch size: 35, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 04:05:10,856 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.86 vs. limit=5.0
2023-03-26 04:05:16,626 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.006e+02 1.775e+02 2.029e+02 2.615e+02 5.539e+02, threshold=4.057e+02, percent-clipped=1.0
2023-03-26 04:05:42,938 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=23450.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:05:52,276 INFO [finetune.py:976] (2/7) Epoch 5, batch 550, loss[loss=0.2049, simple_loss=0.2648, pruned_loss=0.07247, over 4941.00 frames. ], tot_loss[loss=0.2206, simple_loss=0.2776, pruned_loss=0.08183, over 894788.53 frames. ], batch size: 33, lr: 3.95e-03, grad_scale: 32.0
2023-03-26 04:05:54,204 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=23464.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:06:15,691 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=23481.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:06:23,747 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=23488.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:06:30,294 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=23498.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:06:38,171 INFO [finetune.py:976] (2/7) Epoch 5, batch 600, loss[loss=0.185, simple_loss=0.2536, pruned_loss=0.05824, over 4907.00 frames. ], tot_loss[loss=0.2203, simple_loss=0.2768, pruned_loss=0.08186, over 909595.40 frames. ], batch size: 32, lr: 3.94e-03, grad_scale: 32.0
2023-03-26 04:06:43,667 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1272, 2.2255, 2.3949, 1.2714, 2.6140, 2.7534, 2.3211, 2.1133], device='cuda:2'), covar=tensor([0.1130, 0.0671, 0.0629, 0.0757, 0.0560, 0.0689, 0.0484, 0.0777], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0156, 0.0117, 0.0135, 0.0131, 0.0121, 0.0147, 0.0144], device='cuda:2'), out_proj_covar=tensor([9.7186e-05, 1.1599e-04, 8.5241e-05, 9.9033e-05, 9.4775e-05, 8.9687e-05, 1.0936e-04, 1.0698e-04], device='cuda:2')
2023-03-26 04:06:44,761 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.135e+02 1.759e+02 2.043e+02 2.434e+02 4.744e+02, threshold=4.086e+02, percent-clipped=2.0
2023-03-26 04:06:51,179 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=23529.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:06:58,365 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2482, 1.7800, 2.8051, 1.7788, 2.3994, 2.5747, 1.8462, 2.6065], device='cuda:2'), covar=tensor([0.1457, 0.2259, 0.1498, 0.2418, 0.0938, 0.1458, 0.2690, 0.0925], device='cuda:2'), in_proj_covar=tensor([0.0205, 0.0205, 0.0202, 0.0196, 0.0183, 0.0222, 0.0214, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 04:07:18,739 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=23549.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 04:07:25,881 INFO [finetune.py:976] (2/7) Epoch 5, batch 650, loss[loss=0.2056, simple_loss=0.2701, pruned_loss=0.07057, over 4801.00 frames. ], tot_loss[loss=0.2234, simple_loss=0.2807, pruned_loss=0.0831, over 919994.44 frames.
], batch size: 45, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:07:33,761 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=23573.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:07:43,070 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=23585.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:08:10,563 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.49 vs. limit=5.0 2023-03-26 04:08:14,419 INFO [finetune.py:976] (2/7) Epoch 5, batch 700, loss[loss=0.2712, simple_loss=0.3336, pruned_loss=0.1044, over 4804.00 frames. ], tot_loss[loss=0.2268, simple_loss=0.2843, pruned_loss=0.08466, over 927807.99 frames. ], batch size: 39, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:08:30,907 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.701e+02 2.127e+02 2.576e+02 5.648e+02, threshold=4.253e+02, percent-clipped=2.0 2023-03-26 04:09:03,356 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6642, 1.6563, 1.7632, 1.8561, 1.7473, 3.2057, 1.5279, 1.7672], device='cuda:2'), covar=tensor([0.0948, 0.1500, 0.0989, 0.0986, 0.1474, 0.0296, 0.1272, 0.1464], device='cuda:2'), in_proj_covar=tensor([0.0078, 0.0082, 0.0078, 0.0080, 0.0093, 0.0084, 0.0085, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-26 04:09:14,888 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7783, 1.4752, 2.4269, 3.4460, 2.5199, 2.5117, 1.2358, 2.7159], device='cuda:2'), covar=tensor([0.1830, 0.1626, 0.1214, 0.0558, 0.0790, 0.1435, 0.1862, 0.0720], device='cuda:2'), in_proj_covar=tensor([0.0104, 0.0119, 0.0135, 0.0166, 0.0103, 0.0142, 0.0128, 0.0104], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2') 2023-03-26 04:09:25,209 INFO [finetune.py:976] (2/7) Epoch 5, batch 750, loss[loss=0.2015, simple_loss=0.2762, pruned_loss=0.06344, over 4792.00 frames. ], tot_loss[loss=0.2289, simple_loss=0.2864, pruned_loss=0.08568, over 935301.48 frames. ], batch size: 29, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:10:02,096 INFO [finetune.py:976] (2/7) Epoch 5, batch 800, loss[loss=0.2153, simple_loss=0.2818, pruned_loss=0.07438, over 4895.00 frames. ], tot_loss[loss=0.2271, simple_loss=0.2852, pruned_loss=0.08452, over 941444.08 frames. 
], batch size: 37, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:10:05,277 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.5028, 1.6506, 1.7358, 1.0852, 1.6336, 1.9427, 1.9388, 1.4258], device='cuda:2'), covar=tensor([0.0861, 0.0472, 0.0480, 0.0497, 0.0439, 0.0512, 0.0253, 0.0526], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0157, 0.0118, 0.0136, 0.0132, 0.0121, 0.0147, 0.0144], device='cuda:2'), out_proj_covar=tensor([9.7337e-05, 1.1634e-04, 8.5358e-05, 9.9464e-05, 9.5297e-05, 8.9786e-05, 1.0927e-04, 1.0697e-04], device='cuda:2') 2023-03-26 04:10:08,708 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.149e+02 1.635e+02 1.954e+02 2.429e+02 4.773e+02, threshold=3.908e+02, percent-clipped=1.0 2023-03-26 04:10:23,468 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6694, 1.5598, 1.4242, 1.3640, 1.9305, 1.9214, 1.7281, 1.4229], device='cuda:2'), covar=tensor([0.0270, 0.0322, 0.0475, 0.0354, 0.0222, 0.0438, 0.0263, 0.0379], device='cuda:2'), in_proj_covar=tensor([0.0087, 0.0114, 0.0139, 0.0120, 0.0105, 0.0102, 0.0092, 0.0110], device='cuda:2'), out_proj_covar=tensor([6.8118e-05, 9.0050e-05, 1.1195e-04, 9.4718e-05, 8.3368e-05, 7.6000e-05, 7.0114e-05, 8.5987e-05], device='cuda:2') 2023-03-26 04:10:25,097 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.11 vs. limit=5.0 2023-03-26 04:10:44,791 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.37 vs. limit=5.0 2023-03-26 04:10:45,853 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3525, 1.8632, 2.7641, 1.7377, 2.5689, 2.7287, 1.9852, 2.6457], device='cuda:2'), covar=tensor([0.1564, 0.2062, 0.1528, 0.2568, 0.0935, 0.1598, 0.2383, 0.1019], device='cuda:2'), in_proj_covar=tensor([0.0203, 0.0202, 0.0200, 0.0194, 0.0182, 0.0220, 0.0212, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 04:10:56,823 INFO [finetune.py:976] (2/7) Epoch 5, batch 850, loss[loss=0.181, simple_loss=0.2413, pruned_loss=0.06036, over 4491.00 frames. ], tot_loss[loss=0.2257, simple_loss=0.2835, pruned_loss=0.08393, over 944482.40 frames. ], batch size: 20, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:11:04,266 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=23764.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:11:54,661 INFO [finetune.py:976] (2/7) Epoch 5, batch 900, loss[loss=0.1985, simple_loss=0.2655, pruned_loss=0.06577, over 4773.00 frames. ], tot_loss[loss=0.2232, simple_loss=0.2805, pruned_loss=0.0829, over 947596.76 frames. 
], batch size: 28, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:11:55,340 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=23812.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:11:59,619 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8742, 4.0169, 3.7516, 1.8471, 4.0607, 2.9904, 0.7147, 2.7261], device='cuda:2'), covar=tensor([0.2330, 0.1890, 0.1645, 0.3691, 0.0964, 0.1123, 0.4881, 0.1693], device='cuda:2'), in_proj_covar=tensor([0.0154, 0.0170, 0.0163, 0.0128, 0.0155, 0.0122, 0.0145, 0.0121], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 04:12:00,783 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.157e+02 1.642e+02 1.957e+02 2.389e+02 4.840e+02, threshold=3.913e+02, percent-clipped=2.0 2023-03-26 04:12:21,816 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=23844.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 04:12:30,076 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5275, 1.3779, 1.3844, 1.2997, 1.7766, 1.6820, 1.5592, 1.3109], device='cuda:2'), covar=tensor([0.0269, 0.0308, 0.0538, 0.0337, 0.0205, 0.0357, 0.0301, 0.0360], device='cuda:2'), in_proj_covar=tensor([0.0087, 0.0114, 0.0138, 0.0119, 0.0105, 0.0101, 0.0091, 0.0109], device='cuda:2'), out_proj_covar=tensor([6.7837e-05, 8.9427e-05, 1.1138e-04, 9.4077e-05, 8.2901e-05, 7.5332e-05, 6.9791e-05, 8.5582e-05], device='cuda:2') 2023-03-26 04:12:37,545 INFO [finetune.py:976] (2/7) Epoch 5, batch 950, loss[loss=0.2057, simple_loss=0.2593, pruned_loss=0.07607, over 4907.00 frames. ], tot_loss[loss=0.2218, simple_loss=0.2786, pruned_loss=0.08254, over 947469.26 frames. ], batch size: 36, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:12:43,778 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-26 04:12:44,903 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=23873.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:12:52,630 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=23885.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:13:28,832 INFO [finetune.py:976] (2/7) Epoch 5, batch 1000, loss[loss=0.2189, simple_loss=0.269, pruned_loss=0.08445, over 4864.00 frames. ], tot_loss[loss=0.223, simple_loss=0.2797, pruned_loss=0.08318, over 950077.93 frames. ], batch size: 31, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:13:38,585 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.160e+02 1.702e+02 2.066e+02 2.385e+02 5.722e+02, threshold=4.131e+02, percent-clipped=3.0 2023-03-26 04:13:38,656 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=23921.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:13:47,710 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=23933.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:14:14,856 INFO [finetune.py:976] (2/7) Epoch 5, batch 1050, loss[loss=0.2209, simple_loss=0.2796, pruned_loss=0.08111, over 4730.00 frames. ], tot_loss[loss=0.2255, simple_loss=0.2827, pruned_loss=0.08415, over 951921.93 frames. ], batch size: 59, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:15:19,193 INFO [finetune.py:976] (2/7) Epoch 5, batch 1100, loss[loss=0.22, simple_loss=0.2907, pruned_loss=0.07467, over 4805.00 frames. ], tot_loss[loss=0.2273, simple_loss=0.2847, pruned_loss=0.08497, over 952059.79 frames. 
], batch size: 40, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:15:28,406 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.244e+02 1.825e+02 2.109e+02 2.589e+02 5.024e+02, threshold=4.219e+02, percent-clipped=4.0 2023-03-26 04:15:38,231 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=24037.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:15:42,277 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.2380, 2.8536, 2.9745, 3.1891, 2.9969, 2.8276, 3.2803, 1.0671], device='cuda:2'), covar=tensor([0.1048, 0.1045, 0.1003, 0.0971, 0.1647, 0.1718, 0.1117, 0.4842], device='cuda:2'), in_proj_covar=tensor([0.0357, 0.0243, 0.0274, 0.0290, 0.0337, 0.0284, 0.0305, 0.0298], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 04:15:48,779 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.2948, 2.9409, 3.0240, 3.2452, 3.0301, 2.8800, 3.3517, 1.0083], device='cuda:2'), covar=tensor([0.1161, 0.0978, 0.1010, 0.1089, 0.1787, 0.1735, 0.1089, 0.4898], device='cuda:2'), in_proj_covar=tensor([0.0358, 0.0243, 0.0274, 0.0291, 0.0337, 0.0285, 0.0305, 0.0298], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 04:15:50,583 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.06 vs. limit=5.0 2023-03-26 04:15:54,485 INFO [finetune.py:976] (2/7) Epoch 5, batch 1150, loss[loss=0.2204, simple_loss=0.2949, pruned_loss=0.07298, over 4896.00 frames. ], tot_loss[loss=0.2298, simple_loss=0.2872, pruned_loss=0.08621, over 953417.42 frames. ], batch size: 35, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:16:18,916 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=24098.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:16:23,603 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.4182, 3.8382, 3.9665, 4.2519, 4.1259, 3.8993, 4.5610, 1.5316], device='cuda:2'), covar=tensor([0.0769, 0.0814, 0.0851, 0.0941, 0.1249, 0.1456, 0.0598, 0.4847], device='cuda:2'), in_proj_covar=tensor([0.0358, 0.0244, 0.0274, 0.0292, 0.0337, 0.0285, 0.0305, 0.0298], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 04:16:28,016 INFO [finetune.py:976] (2/7) Epoch 5, batch 1200, loss[loss=0.1907, simple_loss=0.2464, pruned_loss=0.0675, over 4769.00 frames. ], tot_loss[loss=0.2284, simple_loss=0.2857, pruned_loss=0.08554, over 952859.19 frames. ], batch size: 23, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:16:37,229 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.193e+02 1.721e+02 2.129e+02 2.606e+02 7.150e+02, threshold=4.257e+02, percent-clipped=3.0 2023-03-26 04:16:51,890 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=24144.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 04:17:03,454 INFO [finetune.py:976] (2/7) Epoch 5, batch 1250, loss[loss=0.2023, simple_loss=0.2483, pruned_loss=0.07811, over 4817.00 frames. ], tot_loss[loss=0.2259, simple_loss=0.2829, pruned_loss=0.08445, over 952247.09 frames. 
], batch size: 25, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:17:24,833 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5809, 1.4183, 1.4611, 1.5031, 0.8487, 2.9554, 1.0827, 1.6139], device='cuda:2'), covar=tensor([0.3484, 0.2441, 0.2041, 0.2260, 0.2100, 0.0221, 0.2642, 0.1362], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0113, 0.0117, 0.0121, 0.0116, 0.0097, 0.0101, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:2') 2023-03-26 04:17:28,510 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=24192.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:17:34,794 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.04 vs. limit=5.0 2023-03-26 04:17:42,425 INFO [finetune.py:976] (2/7) Epoch 5, batch 1300, loss[loss=0.1891, simple_loss=0.2388, pruned_loss=0.06971, over 4219.00 frames. ], tot_loss[loss=0.2214, simple_loss=0.2782, pruned_loss=0.08229, over 952766.33 frames. ], batch size: 65, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:17:56,617 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.098e+02 1.607e+02 1.851e+02 2.364e+02 3.844e+02, threshold=3.702e+02, percent-clipped=0.0 2023-03-26 04:18:34,753 INFO [finetune.py:976] (2/7) Epoch 5, batch 1350, loss[loss=0.1592, simple_loss=0.2195, pruned_loss=0.04948, over 4126.00 frames. ], tot_loss[loss=0.2204, simple_loss=0.2774, pruned_loss=0.08165, over 953592.93 frames. ], batch size: 18, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:18:40,645 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5625, 0.9962, 0.7991, 1.4169, 2.0036, 0.7700, 1.2911, 1.4595], device='cuda:2'), covar=tensor([0.1518, 0.2162, 0.1913, 0.1171, 0.1936, 0.2019, 0.1503, 0.1988], device='cuda:2'), in_proj_covar=tensor([0.0092, 0.0098, 0.0117, 0.0093, 0.0124, 0.0097, 0.0101, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 04:18:48,386 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.4635, 3.0814, 2.7616, 1.6092, 2.8340, 2.5768, 2.3343, 2.4962], device='cuda:2'), covar=tensor([0.0865, 0.0881, 0.1742, 0.2356, 0.2023, 0.2004, 0.2034, 0.1303], device='cuda:2'), in_proj_covar=tensor([0.0167, 0.0200, 0.0201, 0.0189, 0.0214, 0.0207, 0.0219, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 04:19:12,805 INFO [finetune.py:976] (2/7) Epoch 5, batch 1400, loss[loss=0.2474, simple_loss=0.3163, pruned_loss=0.08927, over 4898.00 frames. ], tot_loss[loss=0.225, simple_loss=0.2824, pruned_loss=0.08382, over 955414.70 frames. ], batch size: 43, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:19:21,620 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.173e+02 1.714e+02 2.138e+02 2.571e+02 4.877e+02, threshold=4.276e+02, percent-clipped=6.0 2023-03-26 04:19:52,882 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.74 vs. limit=2.0 2023-03-26 04:19:56,899 INFO [finetune.py:976] (2/7) Epoch 5, batch 1450, loss[loss=0.2319, simple_loss=0.302, pruned_loss=0.08088, over 4789.00 frames. ], tot_loss[loss=0.2269, simple_loss=0.285, pruned_loss=0.08439, over 956077.03 frames. 
], batch size: 51, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:20:06,981 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1681, 1.9970, 1.8676, 2.1611, 2.5751, 2.0112, 2.0757, 1.5380], device='cuda:2'), covar=tensor([0.2433, 0.2341, 0.1974, 0.1865, 0.2195, 0.1295, 0.2580, 0.2048], device='cuda:2'), in_proj_covar=tensor([0.0234, 0.0209, 0.0199, 0.0185, 0.0236, 0.0174, 0.0215, 0.0188], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 04:20:20,193 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=24393.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:20:29,365 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.8813, 3.3611, 3.5301, 3.7824, 3.6372, 3.3697, 3.9295, 1.4083], device='cuda:2'), covar=tensor([0.0821, 0.0861, 0.0842, 0.0878, 0.1276, 0.1515, 0.0766, 0.4526], device='cuda:2'), in_proj_covar=tensor([0.0357, 0.0244, 0.0273, 0.0291, 0.0337, 0.0284, 0.0305, 0.0297], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 04:20:31,767 INFO [finetune.py:976] (2/7) Epoch 5, batch 1500, loss[loss=0.2104, simple_loss=0.2769, pruned_loss=0.0719, over 4874.00 frames. ], tot_loss[loss=0.2293, simple_loss=0.2873, pruned_loss=0.08565, over 955878.12 frames. ], batch size: 35, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:20:38,324 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.023e+02 1.776e+02 2.138e+02 2.564e+02 4.291e+02, threshold=4.276e+02, percent-clipped=1.0 2023-03-26 04:21:13,454 INFO [finetune.py:976] (2/7) Epoch 5, batch 1550, loss[loss=0.2566, simple_loss=0.3117, pruned_loss=0.1007, over 4810.00 frames. ], tot_loss[loss=0.2277, simple_loss=0.2861, pruned_loss=0.08463, over 955376.94 frames. ], batch size: 38, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:21:24,659 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5391, 1.4256, 1.2697, 1.4659, 1.7722, 1.6858, 1.4964, 1.3218], device='cuda:2'), covar=tensor([0.0278, 0.0271, 0.0580, 0.0255, 0.0177, 0.0390, 0.0278, 0.0312], device='cuda:2'), in_proj_covar=tensor([0.0088, 0.0114, 0.0141, 0.0119, 0.0106, 0.0102, 0.0092, 0.0111], device='cuda:2'), out_proj_covar=tensor([6.8482e-05, 8.9996e-05, 1.1320e-04, 9.4435e-05, 8.4161e-05, 7.5913e-05, 7.0340e-05, 8.6412e-05], device='cuda:2') 2023-03-26 04:21:47,121 INFO [finetune.py:976] (2/7) Epoch 5, batch 1600, loss[loss=0.2571, simple_loss=0.2954, pruned_loss=0.1094, over 4295.00 frames. ], tot_loss[loss=0.2255, simple_loss=0.2833, pruned_loss=0.08389, over 954611.33 frames. ], batch size: 66, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:21:58,802 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.205e+02 1.770e+02 2.018e+02 2.552e+02 5.194e+02, threshold=4.037e+02, percent-clipped=4.0 2023-03-26 04:22:21,128 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=24547.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:22:33,861 INFO [finetune.py:976] (2/7) Epoch 5, batch 1650, loss[loss=0.2104, simple_loss=0.2708, pruned_loss=0.07504, over 4946.00 frames. ], tot_loss[loss=0.2226, simple_loss=0.2803, pruned_loss=0.08244, over 955946.23 frames. ], batch size: 38, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:22:42,087 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.50 vs. 
limit=2.0 2023-03-26 04:22:43,195 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=24569.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:23:17,413 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=24608.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:23:22,440 INFO [finetune.py:976] (2/7) Epoch 5, batch 1700, loss[loss=0.1814, simple_loss=0.2505, pruned_loss=0.05617, over 4824.00 frames. ], tot_loss[loss=0.2207, simple_loss=0.2779, pruned_loss=0.08179, over 956380.14 frames. ], batch size: 33, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:23:31,255 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.092e+02 1.673e+02 1.915e+02 2.251e+02 4.027e+02, threshold=3.830e+02, percent-clipped=0.0 2023-03-26 04:23:42,120 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=24630.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:23:42,192 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-26 04:24:06,425 INFO [finetune.py:976] (2/7) Epoch 5, batch 1750, loss[loss=0.2597, simple_loss=0.3083, pruned_loss=0.1055, over 4889.00 frames. ], tot_loss[loss=0.2232, simple_loss=0.2802, pruned_loss=0.08307, over 953849.50 frames. ], batch size: 35, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:24:28,022 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=24693.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:24:31,688 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-26 04:24:39,305 INFO [finetune.py:976] (2/7) Epoch 5, batch 1800, loss[loss=0.2233, simple_loss=0.2809, pruned_loss=0.08288, over 4819.00 frames. ], tot_loss[loss=0.2266, simple_loss=0.2841, pruned_loss=0.08453, over 952461.42 frames. ], batch size: 39, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:24:45,828 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.243e+02 1.800e+02 2.166e+02 2.491e+02 4.201e+02, threshold=4.331e+02, percent-clipped=2.0 2023-03-26 04:24:59,913 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=24741.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:25:12,945 INFO [finetune.py:976] (2/7) Epoch 5, batch 1850, loss[loss=0.2495, simple_loss=0.2874, pruned_loss=0.1058, over 4781.00 frames. ], tot_loss[loss=0.2281, simple_loss=0.2861, pruned_loss=0.08507, over 953491.32 frames. ], batch size: 25, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:25:13,180 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.25 vs. limit=5.0 2023-03-26 04:25:15,495 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=24765.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:25:46,398 INFO [finetune.py:976] (2/7) Epoch 5, batch 1900, loss[loss=0.2295, simple_loss=0.2938, pruned_loss=0.08264, over 4872.00 frames. ], tot_loss[loss=0.2286, simple_loss=0.287, pruned_loss=0.08509, over 954754.29 frames. 
], batch size: 31, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:25:52,457 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.150e+02 1.802e+02 2.061e+02 2.489e+02 6.200e+02, threshold=4.122e+02, percent-clipped=1.0 2023-03-26 04:25:57,985 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=24826.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:26:20,825 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=24848.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:26:29,634 INFO [finetune.py:976] (2/7) Epoch 5, batch 1950, loss[loss=0.2038, simple_loss=0.2767, pruned_loss=0.06552, over 4909.00 frames. ], tot_loss[loss=0.227, simple_loss=0.2854, pruned_loss=0.08434, over 955877.33 frames. ], batch size: 43, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:26:54,897 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-26 04:26:57,653 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=24903.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:27:01,814 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=24909.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:27:02,942 INFO [finetune.py:976] (2/7) Epoch 5, batch 2000, loss[loss=0.2618, simple_loss=0.3068, pruned_loss=0.1084, over 4834.00 frames. ], tot_loss[loss=0.2246, simple_loss=0.2825, pruned_loss=0.08333, over 956249.92 frames. ], batch size: 30, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:27:03,637 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5774, 1.4624, 1.8086, 2.9795, 2.0140, 2.3675, 1.1359, 2.4063], device='cuda:2'), covar=tensor([0.1958, 0.1538, 0.1434, 0.0599, 0.0871, 0.1234, 0.1793, 0.0697], device='cuda:2'), in_proj_covar=tensor([0.0104, 0.0119, 0.0137, 0.0167, 0.0103, 0.0143, 0.0129, 0.0105], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2') 2023-03-26 04:27:13,003 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.094e+02 1.611e+02 2.012e+02 2.424e+02 3.709e+02, threshold=4.024e+02, percent-clipped=0.0 2023-03-26 04:27:15,539 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=24925.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:27:26,330 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8443, 1.5995, 1.4107, 1.4080, 1.5424, 1.5443, 1.5292, 2.3289], device='cuda:2'), covar=tensor([0.6532, 0.6885, 0.5218, 0.6777, 0.5869, 0.3898, 0.6775, 0.2374], device='cuda:2'), in_proj_covar=tensor([0.0281, 0.0256, 0.0221, 0.0283, 0.0237, 0.0200, 0.0243, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 04:27:50,325 INFO [finetune.py:976] (2/7) Epoch 5, batch 2050, loss[loss=0.2092, simple_loss=0.2665, pruned_loss=0.07599, over 4235.00 frames. ], tot_loss[loss=0.2209, simple_loss=0.2784, pruned_loss=0.08167, over 954368.26 frames. ], batch size: 65, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:28:12,190 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=24995.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:28:23,340 INFO [finetune.py:976] (2/7) Epoch 5, batch 2100, loss[loss=0.2122, simple_loss=0.2725, pruned_loss=0.07599, over 4916.00 frames. ], tot_loss[loss=0.2203, simple_loss=0.2775, pruned_loss=0.08151, over 953665.56 frames. 
], batch size: 36, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:28:39,037 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.247e+01 1.663e+02 1.989e+02 2.457e+02 4.446e+02, threshold=3.978e+02, percent-clipped=1.0 2023-03-26 04:28:57,670 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6279, 1.2219, 2.0821, 3.1483, 2.0967, 2.2803, 1.1630, 2.5435], device='cuda:2'), covar=tensor([0.1874, 0.1770, 0.1274, 0.0560, 0.0888, 0.1709, 0.1789, 0.0624], device='cuda:2'), in_proj_covar=tensor([0.0104, 0.0119, 0.0136, 0.0166, 0.0103, 0.0142, 0.0128, 0.0104], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2') 2023-03-26 04:29:03,110 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.69 vs. limit=5.0 2023-03-26 04:29:08,262 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=25056.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 04:29:11,161 INFO [finetune.py:976] (2/7) Epoch 5, batch 2150, loss[loss=0.1766, simple_loss=0.2319, pruned_loss=0.0606, over 4313.00 frames. ], tot_loss[loss=0.2212, simple_loss=0.2793, pruned_loss=0.08153, over 954082.72 frames. ], batch size: 18, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:29:45,150 INFO [finetune.py:976] (2/7) Epoch 5, batch 2200, loss[loss=0.2031, simple_loss=0.2523, pruned_loss=0.07691, over 4683.00 frames. ], tot_loss[loss=0.2234, simple_loss=0.2822, pruned_loss=0.0823, over 954952.68 frames. ], batch size: 23, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:29:52,261 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.691e+02 1.983e+02 2.301e+02 4.176e+02, threshold=3.967e+02, percent-clipped=1.0 2023-03-26 04:29:52,346 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=25121.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:29:55,005 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.50 vs. limit=2.0 2023-03-26 04:29:56,647 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7492, 1.6201, 2.0624, 1.3679, 1.8685, 2.0552, 1.5437, 2.2042], device='cuda:2'), covar=tensor([0.1544, 0.2252, 0.1351, 0.2042, 0.1028, 0.1460, 0.2759, 0.0912], device='cuda:2'), in_proj_covar=tensor([0.0206, 0.0207, 0.0202, 0.0197, 0.0184, 0.0224, 0.0217, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 04:30:18,606 INFO [finetune.py:976] (2/7) Epoch 5, batch 2250, loss[loss=0.2375, simple_loss=0.2966, pruned_loss=0.08924, over 4901.00 frames. ], tot_loss[loss=0.2247, simple_loss=0.2838, pruned_loss=0.08285, over 954522.91 frames. 
], batch size: 36, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:30:22,871 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7983, 0.9361, 1.6017, 1.5724, 1.4602, 1.4353, 1.4364, 1.4898], device='cuda:2'), covar=tensor([0.5649, 0.8111, 0.6349, 0.7091, 0.7885, 0.5763, 0.8885, 0.6009], device='cuda:2'), in_proj_covar=tensor([0.0228, 0.0249, 0.0254, 0.0259, 0.0241, 0.0218, 0.0274, 0.0222], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2') 2023-03-26 04:30:46,374 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=25203.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:30:47,802 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=25204.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:30:51,445 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0 2023-03-26 04:30:53,043 INFO [finetune.py:976] (2/7) Epoch 5, batch 2300, loss[loss=0.241, simple_loss=0.2948, pruned_loss=0.0936, over 4863.00 frames. ], tot_loss[loss=0.2256, simple_loss=0.2848, pruned_loss=0.08318, over 954392.33 frames. ], batch size: 34, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:31:01,719 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0 2023-03-26 04:31:05,175 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.234e+02 1.840e+02 2.117e+02 2.638e+02 5.911e+02, threshold=4.234e+02, percent-clipped=5.0 2023-03-26 04:31:12,978 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8672, 2.0551, 1.8901, 1.2464, 2.0880, 2.0233, 1.9349, 1.6827], device='cuda:2'), covar=tensor([0.0692, 0.0508, 0.0740, 0.1026, 0.0508, 0.0761, 0.0647, 0.1057], device='cuda:2'), in_proj_covar=tensor([0.0138, 0.0132, 0.0143, 0.0126, 0.0110, 0.0142, 0.0144, 0.0160], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 04:31:14,024 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=25225.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:31:35,988 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=25251.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:31:42,891 INFO [finetune.py:976] (2/7) Epoch 5, batch 2350, loss[loss=0.2466, simple_loss=0.2859, pruned_loss=0.1037, over 4866.00 frames. ], tot_loss[loss=0.224, simple_loss=0.2822, pruned_loss=0.08285, over 951627.01 frames. ], batch size: 31, lr: 3.94e-03, grad_scale: 64.0 2023-03-26 04:31:51,256 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=25273.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:32:04,507 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0 2023-03-26 04:32:16,769 INFO [finetune.py:976] (2/7) Epoch 5, batch 2400, loss[loss=0.1859, simple_loss=0.2534, pruned_loss=0.05924, over 4827.00 frames. ], tot_loss[loss=0.2219, simple_loss=0.2796, pruned_loss=0.08215, over 952708.59 frames. 
], batch size: 40, lr: 3.94e-03, grad_scale: 64.0 2023-03-26 04:32:23,859 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.140e+02 1.631e+02 1.900e+02 2.318e+02 5.058e+02, threshold=3.799e+02, percent-clipped=1.0 2023-03-26 04:32:58,159 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=25351.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 04:33:04,187 INFO [finetune.py:976] (2/7) Epoch 5, batch 2450, loss[loss=0.1977, simple_loss=0.2607, pruned_loss=0.0674, over 4919.00 frames. ], tot_loss[loss=0.2212, simple_loss=0.2782, pruned_loss=0.0821, over 953538.57 frames. ], batch size: 36, lr: 3.94e-03, grad_scale: 64.0 2023-03-26 04:33:15,358 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1796, 1.1441, 1.5302, 0.9796, 1.1350, 1.3095, 1.1304, 1.3978], device='cuda:2'), covar=tensor([0.1593, 0.2372, 0.1424, 0.1665, 0.1279, 0.1637, 0.3124, 0.1206], device='cuda:2'), in_proj_covar=tensor([0.0207, 0.0207, 0.0203, 0.0197, 0.0185, 0.0225, 0.0218, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 04:33:48,800 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0 2023-03-26 04:34:02,098 INFO [finetune.py:976] (2/7) Epoch 5, batch 2500, loss[loss=0.2006, simple_loss=0.2522, pruned_loss=0.07451, over 4398.00 frames. ], tot_loss[loss=0.2222, simple_loss=0.2791, pruned_loss=0.08268, over 953394.39 frames. ], batch size: 19, lr: 3.94e-03, grad_scale: 64.0 2023-03-26 04:34:11,081 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.2608, 2.6731, 2.6176, 1.2770, 2.7362, 2.3697, 2.1783, 2.3779], device='cuda:2'), covar=tensor([0.0787, 0.0984, 0.1558, 0.2403, 0.1806, 0.2451, 0.1979, 0.1256], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0200, 0.0203, 0.0189, 0.0216, 0.0209, 0.0220, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 04:34:18,831 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.080e+02 1.793e+02 2.115e+02 2.620e+02 5.379e+02, threshold=4.229e+02, percent-clipped=6.0 2023-03-26 04:34:18,939 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=25421.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:34:26,157 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=25428.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 04:34:47,165 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8548, 1.1876, 0.9052, 1.7110, 2.0651, 1.3141, 1.5376, 1.7023], device='cuda:2'), covar=tensor([0.1527, 0.2239, 0.2204, 0.1204, 0.2068, 0.2211, 0.1482, 0.2086], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0098, 0.0116, 0.0093, 0.0124, 0.0096, 0.0100, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 04:34:47,668 INFO [finetune.py:976] (2/7) Epoch 5, batch 2550, loss[loss=0.2313, simple_loss=0.2942, pruned_loss=0.08424, over 4874.00 frames. ], tot_loss[loss=0.2265, simple_loss=0.284, pruned_loss=0.08447, over 955260.72 frames. 
], batch size: 34, lr: 3.94e-03, grad_scale: 64.0 2023-03-26 04:34:53,566 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=25469.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:35:07,249 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=25489.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 04:35:16,191 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=25504.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:35:20,834 INFO [finetune.py:976] (2/7) Epoch 5, batch 2600, loss[loss=0.2291, simple_loss=0.2861, pruned_loss=0.08608, over 4820.00 frames. ], tot_loss[loss=0.2295, simple_loss=0.2871, pruned_loss=0.08594, over 954242.86 frames. ], batch size: 47, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:35:28,043 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.057e+02 1.693e+02 2.088e+02 2.425e+02 4.415e+02, threshold=4.177e+02, percent-clipped=1.0 2023-03-26 04:35:48,650 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=25552.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:35:54,562 INFO [finetune.py:976] (2/7) Epoch 5, batch 2650, loss[loss=0.2021, simple_loss=0.2749, pruned_loss=0.06468, over 4818.00 frames. ], tot_loss[loss=0.2294, simple_loss=0.287, pruned_loss=0.08586, over 953449.40 frames. ], batch size: 38, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:36:00,136 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.7004, 4.0566, 4.2491, 4.5065, 4.3777, 4.1948, 4.7888, 1.4537], device='cuda:2'), covar=tensor([0.0691, 0.0828, 0.0675, 0.0814, 0.1227, 0.1313, 0.0514, 0.5394], device='cuda:2'), in_proj_covar=tensor([0.0356, 0.0243, 0.0273, 0.0292, 0.0336, 0.0282, 0.0303, 0.0297], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 04:36:07,129 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0 2023-03-26 04:36:33,946 INFO [finetune.py:976] (2/7) Epoch 5, batch 2700, loss[loss=0.197, simple_loss=0.2576, pruned_loss=0.06822, over 4913.00 frames. ], tot_loss[loss=0.2282, simple_loss=0.2861, pruned_loss=0.0851, over 955122.22 frames. ], batch size: 37, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:36:50,928 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.205e+02 1.711e+02 2.002e+02 2.331e+02 3.948e+02, threshold=4.004e+02, percent-clipped=0.0 2023-03-26 04:36:53,892 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.2323, 2.8303, 2.9484, 3.1326, 3.0190, 2.8341, 3.2804, 1.1009], device='cuda:2'), covar=tensor([0.1087, 0.1077, 0.1042, 0.1200, 0.1565, 0.1621, 0.1111, 0.4427], device='cuda:2'), in_proj_covar=tensor([0.0353, 0.0242, 0.0270, 0.0290, 0.0333, 0.0280, 0.0300, 0.0295], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 04:37:26,225 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=25651.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 04:37:32,345 INFO [finetune.py:976] (2/7) Epoch 5, batch 2750, loss[loss=0.1982, simple_loss=0.2675, pruned_loss=0.06446, over 4795.00 frames. ], tot_loss[loss=0.225, simple_loss=0.2829, pruned_loss=0.08357, over 953992.18 frames. 
], batch size: 29, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:37:32,458 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3910, 1.4778, 1.4253, 1.5322, 1.6411, 3.0529, 1.3724, 1.5702], device='cuda:2'), covar=tensor([0.0972, 0.1666, 0.1018, 0.0999, 0.1529, 0.0275, 0.1407, 0.1630], device='cuda:2'), in_proj_covar=tensor([0.0078, 0.0081, 0.0077, 0.0080, 0.0092, 0.0083, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-26 04:37:37,829 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9403, 1.6999, 1.4501, 1.6847, 1.6450, 1.5773, 1.5963, 2.3843], device='cuda:2'), covar=tensor([0.5956, 0.7391, 0.5158, 0.6761, 0.6218, 0.3585, 0.6815, 0.2379], device='cuda:2'), in_proj_covar=tensor([0.0280, 0.0255, 0.0220, 0.0283, 0.0237, 0.0199, 0.0242, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 04:37:58,588 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=25699.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:38:07,650 INFO [finetune.py:976] (2/7) Epoch 5, batch 2800, loss[loss=0.1923, simple_loss=0.2696, pruned_loss=0.0575, over 4824.00 frames. ], tot_loss[loss=0.2208, simple_loss=0.2783, pruned_loss=0.08162, over 954428.47 frames. ], batch size: 40, lr: 3.93e-03, grad_scale: 32.0 2023-03-26 04:38:23,887 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.133e+02 1.611e+02 1.888e+02 2.301e+02 3.388e+02, threshold=3.776e+02, percent-clipped=0.0 2023-03-26 04:38:35,388 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6977, 3.3848, 3.2492, 1.3414, 3.5702, 2.6511, 0.8746, 2.2133], device='cuda:2'), covar=tensor([0.2234, 0.2157, 0.1670, 0.3573, 0.1124, 0.1027, 0.4221, 0.1646], device='cuda:2'), in_proj_covar=tensor([0.0155, 0.0172, 0.0163, 0.0129, 0.0155, 0.0122, 0.0145, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 04:38:53,972 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0 2023-03-26 04:38:59,171 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4390, 1.4351, 1.2220, 1.3330, 1.6959, 1.6061, 1.4633, 1.2811], device='cuda:2'), covar=tensor([0.0299, 0.0293, 0.0571, 0.0294, 0.0196, 0.0408, 0.0294, 0.0341], device='cuda:2'), in_proj_covar=tensor([0.0087, 0.0112, 0.0139, 0.0118, 0.0105, 0.0101, 0.0091, 0.0109], device='cuda:2'), out_proj_covar=tensor([6.7905e-05, 8.8494e-05, 1.1137e-04, 9.3096e-05, 8.2993e-05, 7.5295e-05, 6.9426e-05, 8.5014e-05], device='cuda:2') 2023-03-26 04:39:03,402 INFO [finetune.py:976] (2/7) Epoch 5, batch 2850, loss[loss=0.2856, simple_loss=0.3138, pruned_loss=0.1287, over 4204.00 frames. ], tot_loss[loss=0.2191, simple_loss=0.2763, pruned_loss=0.08096, over 954985.91 frames. ], batch size: 18, lr: 3.93e-03, grad_scale: 32.0 2023-03-26 04:39:18,522 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=25784.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 04:39:37,534 INFO [finetune.py:976] (2/7) Epoch 5, batch 2900, loss[loss=0.2204, simple_loss=0.2872, pruned_loss=0.07674, over 4841.00 frames. ], tot_loss[loss=0.2237, simple_loss=0.2811, pruned_loss=0.08313, over 956509.17 frames. 
], batch size: 33, lr: 3.93e-03, grad_scale: 32.0 2023-03-26 04:39:37,615 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.9155, 3.3604, 3.5632, 3.7012, 3.6698, 3.4352, 3.9761, 1.5983], device='cuda:2'), covar=tensor([0.0780, 0.0977, 0.0751, 0.1000, 0.1220, 0.1360, 0.0677, 0.4479], device='cuda:2'), in_proj_covar=tensor([0.0355, 0.0244, 0.0272, 0.0291, 0.0335, 0.0281, 0.0301, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 04:39:44,762 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.812e+02 2.065e+02 2.463e+02 5.082e+02, threshold=4.130e+02, percent-clipped=4.0 2023-03-26 04:40:09,028 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=25858.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 04:40:10,731 INFO [finetune.py:976] (2/7) Epoch 5, batch 2950, loss[loss=0.2631, simple_loss=0.3138, pruned_loss=0.1062, over 4827.00 frames. ], tot_loss[loss=0.2262, simple_loss=0.2844, pruned_loss=0.08397, over 956856.91 frames. ], batch size: 47, lr: 3.93e-03, grad_scale: 32.0 2023-03-26 04:40:35,004 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7959, 3.8018, 3.5245, 1.9010, 3.8573, 2.8834, 1.0413, 2.5507], device='cuda:2'), covar=tensor([0.2213, 0.1692, 0.1460, 0.3202, 0.0924, 0.1084, 0.4171, 0.1505], device='cuda:2'), in_proj_covar=tensor([0.0157, 0.0174, 0.0165, 0.0130, 0.0157, 0.0124, 0.0147, 0.0125], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 04:40:43,942 INFO [finetune.py:976] (2/7) Epoch 5, batch 3000, loss[loss=0.2429, simple_loss=0.2937, pruned_loss=0.09609, over 4742.00 frames. ], tot_loss[loss=0.2275, simple_loss=0.2859, pruned_loss=0.08453, over 957920.66 frames. ], batch size: 54, lr: 3.93e-03, grad_scale: 32.0 2023-03-26 04:40:43,942 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 04:40:46,924 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4773, 1.6514, 1.5852, 1.6250, 1.7487, 3.0426, 1.5273, 1.7457], device='cuda:2'), covar=tensor([0.0806, 0.1422, 0.0889, 0.0834, 0.1181, 0.0335, 0.1146, 0.1292], device='cuda:2'), in_proj_covar=tensor([0.0078, 0.0081, 0.0077, 0.0080, 0.0092, 0.0083, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 04:40:52,392 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4785, 1.6660, 1.5850, 1.6125, 1.7870, 3.1064, 1.5231, 1.7654], device='cuda:2'), covar=tensor([0.0877, 0.1527, 0.0986, 0.0932, 0.1309, 0.0310, 0.1218, 0.1433], device='cuda:2'), in_proj_covar=tensor([0.0078, 0.0081, 0.0077, 0.0080, 0.0092, 0.0083, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 04:40:54,557 INFO [finetune.py:1010] (2/7) Epoch 5, validation: loss=0.1652, simple_loss=0.2371, pruned_loss=0.04667, over 2265189.00 frames. 
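[Annotation, not part of the original log] The loss fields emitted by finetune.py:976/1010 throughout this run are consistent with a fixed linear combination of the two component losses: with simple_loss_scale=0.5 from the run configuration printed at the top of this log, every record satisfies loss ≈ 0.5 * simple_loss + pruned_loss (e.g. the validation record just above: 0.5 * 0.2371 + 0.04667 ≈ 0.1652). The sketch below checks that identity against a few records copied verbatim from this log; the regex parser and record strings are illustrative assumptions, not icefall code, and the pruned-transducer recipe may weight the pruned loss differently during warm-up (warm_step=2000), which these batch counts are well past.

```python
import re

# Records copied verbatim from this log; the parser below is a minimal
# illustration, not part of icefall.
RECORDS = [
    "loss=0.2444, simple_loss=0.2994, pruned_loss=0.0947",   # epoch 5, batch 0
    "loss=0.2335, simple_loss=0.2898, pruned_loss=0.08864",  # epoch 5, batch 50 (tot_loss)
    "loss=0.1652, simple_loss=0.2371, pruned_loss=0.04667",  # epoch 5 validation
]

SIMPLE_LOSS_SCALE = 0.5  # 'simple_loss_scale' from the run configuration

pattern = re.compile(
    r"loss=(?P<loss>[\d.]+), simple_loss=(?P<simple>[\d.]+), "
    r"pruned_loss=(?P<pruned>[\d.]+)"
)

for rec in RECORDS:
    m = pattern.search(rec)
    logged = float(m.group("loss"))
    simple = float(m.group("simple"))
    pruned = float(m.group("pruned"))
    # Reconstruct the combined loss and compare it with the logged value;
    # the tolerance absorbs the 4-significant-digit rounding in the log.
    combined = SIMPLE_LOSS_SCALE * simple + pruned
    assert abs(combined - logged) < 5e-4, (rec, combined)
    print(f"{rec}  ->  0.5*{simple} + {pruned} = {combined:.4f}")
```

The same identity holds for both the per-batch loss[...] and running tot_loss[...] fields, so it can be used to sanity-check a parsed copy of this log before plotting either curve.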
2023-03-26 04:40:54,558 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB 2023-03-26 04:41:00,102 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=25919.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 04:41:01,812 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.188e+02 1.739e+02 2.096e+02 2.435e+02 4.160e+02, threshold=4.193e+02, percent-clipped=2.0 2023-03-26 04:41:27,995 INFO [finetune.py:976] (2/7) Epoch 5, batch 3050, loss[loss=0.1854, simple_loss=0.2502, pruned_loss=0.06031, over 4883.00 frames. ], tot_loss[loss=0.2259, simple_loss=0.2852, pruned_loss=0.08332, over 958352.47 frames. ], batch size: 43, lr: 3.93e-03, grad_scale: 32.0 2023-03-26 04:42:08,229 INFO [finetune.py:976] (2/7) Epoch 5, batch 3100, loss[loss=0.189, simple_loss=0.2519, pruned_loss=0.06307, over 4157.00 frames. ], tot_loss[loss=0.2228, simple_loss=0.2818, pruned_loss=0.08188, over 955780.82 frames. ], batch size: 17, lr: 3.93e-03, grad_scale: 32.0 2023-03-26 04:42:25,415 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.176e+02 1.603e+02 1.879e+02 2.413e+02 4.411e+02, threshold=3.758e+02, percent-clipped=2.0 2023-03-26 04:42:35,941 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.58 vs. limit=5.0 2023-03-26 04:42:55,844 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0405, 2.2236, 1.9780, 1.5113, 2.3847, 2.3046, 2.1557, 1.9450], device='cuda:2'), covar=tensor([0.0697, 0.0592, 0.0880, 0.0976, 0.0468, 0.0743, 0.0694, 0.0993], device='cuda:2'), in_proj_covar=tensor([0.0139, 0.0134, 0.0145, 0.0128, 0.0111, 0.0144, 0.0147, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 04:43:10,311 INFO [finetune.py:976] (2/7) Epoch 5, batch 3150, loss[loss=0.2611, simple_loss=0.2964, pruned_loss=0.1129, over 4258.00 frames. ], tot_loss[loss=0.221, simple_loss=0.2789, pruned_loss=0.08156, over 955927.23 frames. ], batch size: 65, lr: 3.93e-03, grad_scale: 32.0 2023-03-26 04:43:30,941 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=26084.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 04:43:42,310 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=26102.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:43:49,558 INFO [finetune.py:976] (2/7) Epoch 5, batch 3200, loss[loss=0.2409, simple_loss=0.2987, pruned_loss=0.09158, over 4814.00 frames. ], tot_loss[loss=0.2168, simple_loss=0.2747, pruned_loss=0.07947, over 955662.06 frames. 
], batch size: 41, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:43:58,346 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.040e+02 1.599e+02 1.999e+02 2.453e+02 4.323e+02, threshold=3.997e+02, percent-clipped=1.0 2023-03-26 04:44:04,856 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=26132.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 04:44:05,486 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.3144, 1.4644, 1.5768, 0.8586, 1.4432, 1.7339, 1.7877, 1.4037], device='cuda:2'), covar=tensor([0.1168, 0.0699, 0.0450, 0.0639, 0.0515, 0.0583, 0.0358, 0.0697], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0159, 0.0120, 0.0138, 0.0134, 0.0124, 0.0149, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.8810e-05, 1.1801e-04, 8.6740e-05, 1.0075e-04, 9.6268e-05, 9.1529e-05, 1.1082e-04, 1.0838e-04], device='cuda:2') 2023-03-26 04:44:37,843 INFO [finetune.py:976] (2/7) Epoch 5, batch 3250, loss[loss=0.2061, simple_loss=0.2614, pruned_loss=0.07544, over 4776.00 frames. ], tot_loss[loss=0.2185, simple_loss=0.2758, pruned_loss=0.08064, over 955722.79 frames. ], batch size: 23, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:44:43,927 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=26163.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:44:47,965 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=26168.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:45:40,774 INFO [finetune.py:976] (2/7) Epoch 5, batch 3300, loss[loss=0.1637, simple_loss=0.2231, pruned_loss=0.0521, over 4388.00 frames. ], tot_loss[loss=0.2217, simple_loss=0.2796, pruned_loss=0.08194, over 955960.47 frames. ], batch size: 19, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:45:47,649 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=26214.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 04:46:00,068 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.165e+02 1.721e+02 2.149e+02 2.490e+02 4.939e+02, threshold=4.298e+02, percent-clipped=1.0 2023-03-26 04:46:05,987 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=26229.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:46:13,053 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4266, 3.3798, 3.1900, 1.3385, 3.5103, 2.5009, 0.8254, 2.1600], device='cuda:2'), covar=tensor([0.2761, 0.2111, 0.1793, 0.3758, 0.1087, 0.1156, 0.4499, 0.1697], device='cuda:2'), in_proj_covar=tensor([0.0157, 0.0174, 0.0166, 0.0130, 0.0157, 0.0124, 0.0148, 0.0125], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 04:46:19,855 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=26250.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:46:24,618 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7745, 1.6124, 1.3541, 1.3715, 1.5383, 1.4984, 1.5327, 2.2243], device='cuda:2'), covar=tensor([0.6399, 0.6367, 0.4864, 0.6372, 0.5458, 0.3453, 0.6369, 0.2326], device='cuda:2'), in_proj_covar=tensor([0.0281, 0.0255, 0.0220, 0.0282, 0.0238, 0.0200, 0.0243, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 04:46:29,542 INFO [finetune.py:976] (2/7) Epoch 5, batch 3350, loss[loss=0.2301, simple_loss=0.2887, pruned_loss=0.08571, over 4764.00 frames. 
2023-03-26 04:47:12,172 INFO [finetune.py:976] (2/7) Epoch 5, batch 3400, loss[loss=0.1932, simple_loss=0.2527, pruned_loss=0.06688, over 4768.00 frames. ], tot_loss[loss=0.2253, simple_loss=0.2836, pruned_loss=0.08351, over 955424.60 frames. ], batch size: 27, lr: 3.93e-03, grad_scale: 16.0
2023-03-26 04:47:12,289 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=26311.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:47:19,911 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.097e+02 1.624e+02 1.904e+02 2.361e+02 4.543e+02, threshold=3.807e+02, percent-clipped=1.0
2023-03-26 04:47:34,579 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9535, 4.3614, 4.1293, 2.5254, 4.3974, 3.3566, 1.1131, 3.0375], device='cuda:2'), covar=tensor([0.2504, 0.1426, 0.1249, 0.2638, 0.0802, 0.0838, 0.4250, 0.1279], device='cuda:2'), in_proj_covar=tensor([0.0157, 0.0173, 0.0165, 0.0129, 0.0156, 0.0123, 0.0147, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 04:47:53,149 INFO [finetune.py:976] (2/7) Epoch 5, batch 3450, loss[loss=0.1705, simple_loss=0.24, pruned_loss=0.0505, over 4751.00 frames. ], tot_loss[loss=0.2244, simple_loss=0.2832, pruned_loss=0.08282, over 955616.74 frames. ], batch size: 28, lr: 3.93e-03, grad_scale: 16.0
2023-03-26 04:48:23,298 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.8083, 3.3132, 3.4892, 3.6732, 3.5507, 3.3379, 3.8823, 1.2950], device='cuda:2'), covar=tensor([0.0895, 0.0905, 0.0870, 0.1038, 0.1332, 0.1496, 0.0832, 0.5142], device='cuda:2'), in_proj_covar=tensor([0.0357, 0.0244, 0.0274, 0.0292, 0.0336, 0.0283, 0.0303, 0.0298], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 04:48:37,958 INFO [finetune.py:976] (2/7) Epoch 5, batch 3500, loss[loss=0.1454, simple_loss=0.2119, pruned_loss=0.03946, over 4761.00 frames. ], tot_loss[loss=0.221, simple_loss=0.2797, pruned_loss=0.08115, over 956325.83 frames. ], batch size: 26, lr: 3.93e-03, grad_scale: 16.0
2023-03-26 04:48:43,895 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=26414.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:48:51,309 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.234e+02 1.586e+02 1.962e+02 2.229e+02 4.326e+02, threshold=3.925e+02, percent-clipped=1.0
2023-03-26 04:48:52,102 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-26 04:49:20,281 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.81 vs. limit=5.0
2023-03-26 04:49:23,746 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=26458.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:49:25,514 INFO [finetune.py:976] (2/7) Epoch 5, batch 3550, loss[loss=0.2175, simple_loss=0.2642, pruned_loss=0.08543, over 4826.00 frames. ], tot_loss[loss=0.2192, simple_loss=0.2772, pruned_loss=0.08057, over 956776.67 frames. ], batch size: 40, lr: 3.93e-03, grad_scale: 16.0
2023-03-26 04:49:34,658 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=26475.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:50:11,928 INFO [finetune.py:976] (2/7) Epoch 5, batch 3600, loss[loss=0.2288, simple_loss=0.2829, pruned_loss=0.08731, over 4825.00 frames. ], tot_loss[loss=0.2162, simple_loss=0.2737, pruned_loss=0.0793, over 957169.23 frames. ], batch size: 39, lr: 3.93e-03, grad_scale: 16.0
2023-03-26 04:50:13,885 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=26514.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 04:50:19,778 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.118e+02 1.588e+02 1.920e+02 2.290e+02 3.397e+02, threshold=3.841e+02, percent-clipped=0.0
2023-03-26 04:50:20,509 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=26524.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:50:31,962 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4505, 1.5044, 1.9818, 1.7915, 1.7366, 3.9204, 1.3124, 1.7695], device='cuda:2'), covar=tensor([0.1082, 0.1860, 0.1231, 0.1046, 0.1568, 0.0209, 0.1571, 0.1761], device='cuda:2'), in_proj_covar=tensor([0.0078, 0.0082, 0.0077, 0.0080, 0.0093, 0.0083, 0.0086, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 04:50:53,327 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.71 vs. limit=2.0
2023-03-26 04:50:55,027 INFO [finetune.py:976] (2/7) Epoch 5, batch 3650, loss[loss=0.2395, simple_loss=0.3067, pruned_loss=0.08618, over 4910.00 frames. ], tot_loss[loss=0.22, simple_loss=0.2771, pruned_loss=0.08148, over 955059.41 frames. ], batch size: 36, lr: 3.93e-03, grad_scale: 16.0
2023-03-26 04:50:55,700 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=26562.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 04:51:10,393 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3239, 2.0598, 1.9242, 0.9639, 2.0805, 1.8016, 1.4274, 1.9287], device='cuda:2'), covar=tensor([0.0965, 0.1017, 0.1609, 0.2162, 0.1565, 0.2073, 0.2450, 0.1140], device='cuda:2'), in_proj_covar=tensor([0.0171, 0.0203, 0.0204, 0.0191, 0.0218, 0.0211, 0.0223, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 04:51:10,471 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0
2023-03-26 04:51:31,369 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=26606.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:51:34,854 INFO [finetune.py:976] (2/7) Epoch 5, batch 3700, loss[loss=0.2073, simple_loss=0.2775, pruned_loss=0.06856, over 4753.00 frames. ], tot_loss[loss=0.2238, simple_loss=0.2818, pruned_loss=0.08289, over 955914.11 frames. ], batch size: 28, lr: 3.93e-03, grad_scale: 16.0
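The [optim.py:369] lines report five quantiles (min/25%/median/75%/max) of recent per-batch gradient norms. In every entry the printed threshold equals clipping_scale times the median (e.g. 2.0 * 1.904e+02 ≈ 3.807e+02 above), so the mechanism appears to be median-based adaptive clipping. A hedged sketch of that diagnostic, assuming exactly that rule:

import torch

# Illustrative, not the icefall optim.py source: quantiles of recent grad
# norms, with threshold assumed to be clipping_scale * median.
def clipping_stats(grad_norms: torch.Tensor, clipping_scale: float = 2.0):
    q = torch.quantile(grad_norms,
                       torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
    threshold = clipping_scale * q[2]
    percent_clipped = 100.0 * (grad_norms > threshold).float().mean()
    return q, threshold, percent_clipped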
2023-03-26 04:51:36,811 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8336, 1.3259, 0.9196, 1.7087, 2.0828, 1.3198, 1.5176, 1.7866], device='cuda:2'), covar=tensor([0.1519, 0.2176, 0.2161, 0.1195, 0.2049, 0.2194, 0.1520, 0.1929], device='cuda:2'), in_proj_covar=tensor([0.0092, 0.0098, 0.0116, 0.0094, 0.0124, 0.0097, 0.0101, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 04:51:42,594 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.101e+02 1.866e+02 2.222e+02 2.784e+02 4.852e+02, threshold=4.444e+02, percent-clipped=6.0
2023-03-26 04:52:07,673 INFO [finetune.py:976] (2/7) Epoch 5, batch 3750, loss[loss=0.192, simple_loss=0.2607, pruned_loss=0.06164, over 4864.00 frames. ], tot_loss[loss=0.2263, simple_loss=0.2847, pruned_loss=0.08396, over 955033.52 frames. ], batch size: 31, lr: 3.93e-03, grad_scale: 16.0
2023-03-26 04:53:00,962 INFO [finetune.py:976] (2/7) Epoch 5, batch 3800, loss[loss=0.2353, simple_loss=0.2955, pruned_loss=0.08749, over 4891.00 frames. ], tot_loss[loss=0.2275, simple_loss=0.2861, pruned_loss=0.08444, over 955776.42 frames. ], batch size: 37, lr: 3.93e-03, grad_scale: 16.0
2023-03-26 04:53:08,702 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.110e+02 1.710e+02 2.076e+02 2.649e+02 5.488e+02, threshold=4.152e+02, percent-clipped=2.0
2023-03-26 04:53:50,530 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8858, 1.1979, 1.7787, 1.7294, 1.5229, 1.5349, 1.6033, 1.6325], device='cuda:2'), covar=tensor([0.6257, 0.8137, 0.6208, 0.7373, 0.7857, 0.5965, 0.8921, 0.5871], device='cuda:2'), in_proj_covar=tensor([0.0229, 0.0247, 0.0255, 0.0259, 0.0242, 0.0218, 0.0275, 0.0223], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2')
2023-03-26 04:53:57,441 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=26758.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:53:59,695 INFO [finetune.py:976] (2/7) Epoch 5, batch 3850, loss[loss=0.197, simple_loss=0.2642, pruned_loss=0.06492, over 4812.00 frames. ], tot_loss[loss=0.2253, simple_loss=0.2839, pruned_loss=0.08335, over 955465.33 frames. ], batch size: 39, lr: 3.93e-03, grad_scale: 16.0
2023-03-26 04:54:11,275 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=26770.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:55:00,183 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=26806.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:55:03,122 INFO [finetune.py:976] (2/7) Epoch 5, batch 3900, loss[loss=0.2162, simple_loss=0.2687, pruned_loss=0.08191, over 4830.00 frames. ], tot_loss[loss=0.2231, simple_loss=0.2814, pruned_loss=0.08241, over 956919.38 frames. ], batch size: 33, lr: 3.93e-03, grad_scale: 16.0
2023-03-26 04:55:09,748 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4911, 2.2779, 1.9635, 1.0181, 2.0532, 1.9438, 1.6841, 1.9676], device='cuda:2'), covar=tensor([0.0835, 0.0944, 0.1588, 0.2287, 0.1668, 0.2474, 0.2258, 0.1278], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0202, 0.0204, 0.0192, 0.0219, 0.0211, 0.0223, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 04:55:21,755 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.223e+02 1.797e+02 2.146e+02 2.516e+02 4.177e+02, threshold=4.292e+02, percent-clipped=1.0
2023-03-26 04:55:22,484 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=26824.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:55:57,245 INFO [finetune.py:976] (2/7) Epoch 5, batch 3950, loss[loss=0.2015, simple_loss=0.2663, pruned_loss=0.06829, over 4774.00 frames. ], tot_loss[loss=0.2189, simple_loss=0.2772, pruned_loss=0.08035, over 957125.07 frames. ], batch size: 28, lr: 3.93e-03, grad_scale: 16.0
2023-03-26 04:56:06,702 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=26872.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:56:29,735 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=26904.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 04:56:30,949 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=26906.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:56:32,695 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6412, 1.5054, 1.4392, 1.2612, 1.7517, 1.4298, 1.7661, 1.6482], device='cuda:2'), covar=tensor([0.1777, 0.2967, 0.3700, 0.3229, 0.2855, 0.2087, 0.3457, 0.2192], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0194, 0.0237, 0.0254, 0.0231, 0.0190, 0.0211, 0.0191], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 04:56:34,935 INFO [finetune.py:976] (2/7) Epoch 5, batch 4000, loss[loss=0.2033, simple_loss=0.2658, pruned_loss=0.07039, over 4868.00 frames. ], tot_loss[loss=0.2177, simple_loss=0.2753, pruned_loss=0.08001, over 955969.06 frames. ], batch size: 34, lr: 3.93e-03, grad_scale: 16.0
2023-03-26 04:56:39,806 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0
2023-03-26 04:56:43,169 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.227e+02 1.662e+02 2.009e+02 2.453e+02 3.802e+02, threshold=4.018e+02, percent-clipped=0.0
2023-03-26 04:57:05,882 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=26950.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:57:08,737 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=26954.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:57:18,757 INFO [finetune.py:976] (2/7) Epoch 5, batch 4050, loss[loss=0.2268, simple_loss=0.3009, pruned_loss=0.07636, over 4747.00 frames. ], tot_loss[loss=0.2227, simple_loss=0.2803, pruned_loss=0.08255, over 954589.97 frames. ], batch size: 54, lr: 3.93e-03, grad_scale: 16.0
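The [zipformer.py:1188] lines come from the Zipformer's stochastic layer skipping: each encoder stack has its own warmup window in batches (warmup_begin/warmup_end), and on some batches one of a stack's layers is dropped. A rough sketch of the sampling step, with the schedule details assumed rather than taken from the source:

import random

# Illustrative only: drop each of a stack's layers independently with a
# small probability (the real schedule in zipformer.py also depends on
# where batch_count falls relative to the stack's warmup window).
def pick_layers_to_drop(num_layers: int, drop_prob: float = 0.05):
    layers = {i for i in range(num_layers) if random.random() < drop_prob}
    return len(layers), layers  # logged as num_to_drop / layers_to_drop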
2023-03-26 04:57:26,849 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=26965.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 04:57:50,394 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5923, 0.6382, 1.4382, 1.3529, 1.2324, 1.2373, 1.2519, 1.3382], device='cuda:2'), covar=tensor([0.5012, 0.7013, 0.5869, 0.6075, 0.7015, 0.5301, 0.7458, 0.5405], device='cuda:2'), in_proj_covar=tensor([0.0229, 0.0247, 0.0254, 0.0258, 0.0242, 0.0218, 0.0274, 0.0223], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2')
2023-03-26 04:58:07,908 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.0737, 2.4344, 2.4796, 1.2401, 2.5774, 2.1835, 1.8832, 2.1031], device='cuda:2'), covar=tensor([0.0740, 0.1136, 0.2020, 0.2729, 0.1867, 0.2304, 0.2331, 0.1680], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0202, 0.0204, 0.0191, 0.0218, 0.0210, 0.0222, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 04:58:10,819 INFO [finetune.py:976] (2/7) Epoch 5, batch 4100, loss[loss=0.2146, simple_loss=0.2494, pruned_loss=0.08988, over 4109.00 frames. ], tot_loss[loss=0.2245, simple_loss=0.283, pruned_loss=0.08301, over 955134.30 frames. ], batch size: 17, lr: 3.93e-03, grad_scale: 16.0
2023-03-26 04:58:11,446 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=27011.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:58:19,552 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.223e+02 1.740e+02 2.111e+02 2.577e+02 5.326e+02, threshold=4.223e+02, percent-clipped=3.0
2023-03-26 04:58:26,650 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0735, 1.9908, 1.6709, 2.0619, 2.0189, 1.8582, 1.8758, 2.8919], device='cuda:2'), covar=tensor([0.7295, 0.8468, 0.5712, 0.8160, 0.6971, 0.3954, 0.7681, 0.2554], device='cuda:2'), in_proj_covar=tensor([0.0279, 0.0255, 0.0218, 0.0282, 0.0237, 0.0199, 0.0241, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2')
2023-03-26 04:58:58,625 INFO [finetune.py:976] (2/7) Epoch 5, batch 4150, loss[loss=0.2576, simple_loss=0.3154, pruned_loss=0.09986, over 4847.00 frames. ], tot_loss[loss=0.2255, simple_loss=0.2843, pruned_loss=0.08338, over 954884.03 frames. ], batch size: 44, lr: 3.93e-03, grad_scale: 16.0
2023-03-26 04:59:05,622 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=27070.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:59:21,210 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.43 vs. limit=5.0
2023-03-26 04:59:32,487 INFO [finetune.py:976] (2/7) Epoch 5, batch 4200, loss[loss=0.2243, simple_loss=0.2921, pruned_loss=0.07828, over 4865.00 frames. ], tot_loss[loss=0.2244, simple_loss=0.2835, pruned_loss=0.08264, over 953153.86 frames. ], batch size: 31, lr: 3.93e-03, grad_scale: 16.0
2023-03-26 04:59:37,724 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=27118.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 04:59:41,614 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.001e+02 1.682e+02 1.955e+02 2.295e+02 5.538e+02, threshold=3.911e+02, percent-clipped=3.0
2023-03-26 04:59:57,807 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=27148.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:00:10,425 INFO [finetune.py:976] (2/7) Epoch 5, batch 4250, loss[loss=0.242, simple_loss=0.2953, pruned_loss=0.09429, over 4901.00 frames. ], tot_loss[loss=0.2223, simple_loss=0.281, pruned_loss=0.08176, over 954534.55 frames. ], batch size: 35, lr: 3.93e-03, grad_scale: 16.0
2023-03-26 05:01:08,500 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=27209.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:01:09,570 INFO [finetune.py:976] (2/7) Epoch 5, batch 4300, loss[loss=0.1865, simple_loss=0.2539, pruned_loss=0.05959, over 4791.00 frames. ], tot_loss[loss=0.2187, simple_loss=0.2771, pruned_loss=0.08015, over 953158.26 frames. ], batch size: 29, lr: 3.93e-03, grad_scale: 16.0
2023-03-26 05:01:26,944 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.146e+02 1.762e+02 2.023e+02 2.453e+02 1.035e+03, threshold=4.046e+02, percent-clipped=2.0
2023-03-26 05:01:57,994 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5417, 2.6900, 2.2703, 1.7229, 2.8062, 2.7834, 2.7204, 2.3727], device='cuda:2'), covar=tensor([0.0764, 0.0637, 0.0947, 0.1136, 0.0464, 0.0795, 0.0771, 0.1053], device='cuda:2'), in_proj_covar=tensor([0.0137, 0.0132, 0.0143, 0.0127, 0.0111, 0.0142, 0.0145, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:01:59,785 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=27260.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 05:02:00,300 INFO [finetune.py:976] (2/7) Epoch 5, batch 4350, loss[loss=0.2016, simple_loss=0.2662, pruned_loss=0.06852, over 4906.00 frames. ], tot_loss[loss=0.2142, simple_loss=0.2722, pruned_loss=0.07811, over 951275.72 frames. ], batch size: 32, lr: 3.93e-03, grad_scale: 16.0
2023-03-26 05:02:14,663 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0
2023-03-26 05:02:30,556 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=27306.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:02:33,996 INFO [finetune.py:976] (2/7) Epoch 5, batch 4400, loss[loss=0.2121, simple_loss=0.2701, pruned_loss=0.07701, over 4781.00 frames. ], tot_loss[loss=0.2169, simple_loss=0.2747, pruned_loss=0.07958, over 951129.53 frames. ], batch size: 26, lr: 3.93e-03, grad_scale: 16.0
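The attn_weights_entropy dumps at [zipformer.py:2441] report one value per attention head (eight here, matching nhead=8): the entropy of the head's attention distribution, averaged over positions, which makes heads that have collapsed onto a single frame easy to spot (entropy near zero). A sketch of the statistic, with the exact formula assumed:

import torch

def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
    # attn: (num_heads, tgt_len, src_len); each row is a softmax distribution.
    ent = -(attn * (attn + 1e-20).log()).sum(dim=-1)  # (num_heads, tgt_len)
    return ent.mean(dim=-1)  # one entropy per head, as in the logged tensors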
2023-03-26 05:02:41,212 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.108e+02 1.601e+02 1.888e+02 2.389e+02 3.644e+02, threshold=3.775e+02, percent-clipped=0.0
2023-03-26 05:02:52,351 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5279, 1.4514, 1.3346, 1.6115, 1.8334, 1.5468, 1.0685, 1.2844], device='cuda:2'), covar=tensor([0.2472, 0.2267, 0.2143, 0.1878, 0.2034, 0.1313, 0.3074, 0.2095], device='cuda:2'), in_proj_covar=tensor([0.0235, 0.0210, 0.0202, 0.0185, 0.0236, 0.0176, 0.0214, 0.0189], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:03:03,522 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.1968, 4.4769, 4.7456, 5.0738, 4.8976, 4.6095, 5.3419, 1.5595], device='cuda:2'), covar=tensor([0.0704, 0.0823, 0.0655, 0.0718, 0.1248, 0.1432, 0.0502, 0.5326], device='cuda:2'), in_proj_covar=tensor([0.0357, 0.0245, 0.0276, 0.0293, 0.0339, 0.0286, 0.0307, 0.0300], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:03:07,514 INFO [finetune.py:976] (2/7) Epoch 5, batch 4450, loss[loss=0.2579, simple_loss=0.3163, pruned_loss=0.09979, over 4800.00 frames. ], tot_loss[loss=0.2204, simple_loss=0.2787, pruned_loss=0.0811, over 951720.26 frames. ], batch size: 45, lr: 3.93e-03, grad_scale: 16.0
2023-03-26 05:03:11,915 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9906, 1.8313, 1.5138, 1.7156, 2.0017, 1.6671, 2.2363, 1.9635], device='cuda:2'), covar=tensor([0.1733, 0.3051, 0.3931, 0.3574, 0.2961, 0.2068, 0.3681, 0.2405], device='cuda:2'), in_proj_covar=tensor([0.0167, 0.0193, 0.0236, 0.0253, 0.0229, 0.0188, 0.0210, 0.0189], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:03:23,471 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.6942, 3.1897, 3.4131, 3.5766, 3.4674, 3.3159, 3.7530, 1.4783], device='cuda:2'), covar=tensor([0.0816, 0.0837, 0.0703, 0.0868, 0.1222, 0.1272, 0.0808, 0.4187], device='cuda:2'), in_proj_covar=tensor([0.0356, 0.0244, 0.0276, 0.0292, 0.0337, 0.0285, 0.0306, 0.0298], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:03:40,731 INFO [finetune.py:976] (2/7) Epoch 5, batch 4500, loss[loss=0.219, simple_loss=0.2689, pruned_loss=0.08456, over 4786.00 frames. ], tot_loss[loss=0.2221, simple_loss=0.2807, pruned_loss=0.08179, over 950938.90 frames. ], batch size: 51, lr: 3.93e-03, grad_scale: 16.0
2023-03-26 05:03:48,442 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.291e+02 1.723e+02 2.077e+02 2.543e+02 6.449e+02, threshold=4.154e+02, percent-clipped=4.0
2023-03-26 05:03:57,433 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. limit=2.0
2023-03-26 05:04:14,224 INFO [finetune.py:976] (2/7) Epoch 5, batch 4550, loss[loss=0.1931, simple_loss=0.2567, pruned_loss=0.06478, over 4801.00 frames. ], tot_loss[loss=0.2236, simple_loss=0.2823, pruned_loss=0.08249, over 953440.50 frames. ], batch size: 25, lr: 3.93e-03, grad_scale: 16.0
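The Whitening lines at [scaling.py:679] fire when a feature-covariance "whiteness" statistic approaches its limit (here metric=1.43 vs. limit=2.0). One plausible form of the metric, assumed for this sketch rather than taken from scaling.py, is E[lambda^2] / E[lambda]^2 over the eigenvalues of the per-group feature covariance: it equals 1.0 when the covariance is isotropic (fully white) and grows as the spectrum becomes lopsided, both computable from traces without an eigendecomposition:

import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    # x: (num_frames, num_channels); channels split into num_groups groups.
    n, c = x.shape
    xg = x.reshape(n, num_groups, c // num_groups).transpose(0, 1)
    cov = xg.transpose(1, 2) @ xg / n                    # (num_groups, d, d)
    d = cov.shape[-1]
    mean_eig = cov.diagonal(dim1=1, dim2=2).mean(dim=1)  # E[lambda] = tr(C)/d
    mean_eig_sq = (cov * cov).sum(dim=(1, 2)) / d        # E[lambda^2] = tr(C^2)/d
    return (mean_eig_sq / mean_eig ** 2).mean()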
2023-03-26 05:07:59,303 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=27714.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 05:08:04,701 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.165e+02 1.708e+02 2.066e+02 2.363e+02 4.852e+02, threshold=4.133e+02, percent-clipped=3.0
2023-03-26 05:08:22,159 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1827, 1.2433, 1.3376, 0.5499, 1.1460, 1.4907, 1.5216, 1.1906], device='cuda:2'), covar=tensor([0.0857, 0.0491, 0.0394, 0.0524, 0.0426, 0.0429, 0.0262, 0.0597], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0160, 0.0121, 0.0138, 0.0134, 0.0125, 0.0149, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.9293e-05, 1.1853e-04, 8.7935e-05, 1.0082e-04, 9.6793e-05, 9.2521e-05, 1.1095e-04, 1.0899e-04], device='cuda:2')
2023-03-26 05:08:30,293 INFO [finetune.py:976] (2/7) Epoch 5, batch 4850, loss[loss=0.2021, simple_loss=0.2684, pruned_loss=0.0679, over 4766.00 frames. ], tot_loss[loss=0.2197, simple_loss=0.278, pruned_loss=0.08069, over 954629.62 frames. ], batch size: 28, lr: 3.92e-03, grad_scale: 16.0
2023-03-26 05:08:39,492 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=27775.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 05:08:59,124 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=27804.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:09:04,312 INFO [finetune.py:976] (2/7) Epoch 5, batch 4900, loss[loss=0.2483, simple_loss=0.307, pruned_loss=0.09482, over 4825.00 frames. ], tot_loss[loss=0.2206, simple_loss=0.2792, pruned_loss=0.08103, over 953228.71 frames. ], batch size: 38, lr: 3.92e-03, grad_scale: 16.0
2023-03-26 05:09:09,143 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1832, 1.8610, 1.4891, 0.5599, 1.6299, 1.7757, 1.5978, 1.8132], device='cuda:2'), covar=tensor([0.1065, 0.0961, 0.1472, 0.2240, 0.1448, 0.2565, 0.2454, 0.0926], device='cuda:2'), in_proj_covar=tensor([0.0171, 0.0200, 0.0203, 0.0190, 0.0218, 0.0209, 0.0222, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:09:12,047 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.126e+02 1.628e+02 1.864e+02 2.335e+02 3.818e+02, threshold=3.728e+02, percent-clipped=0.0
2023-03-26 05:09:30,677 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=27852.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:09:37,158 INFO [finetune.py:976] (2/7) Epoch 5, batch 4950, loss[loss=0.208, simple_loss=0.2745, pruned_loss=0.07071, over 4832.00 frames. ], tot_loss[loss=0.2212, simple_loss=0.2806, pruned_loss=0.08095, over 955054.07 frames. ], batch size: 30, lr: 3.92e-03, grad_scale: 16.0
2023-03-26 05:09:39,629 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9731, 1.3147, 1.7404, 1.7301, 1.5785, 1.5456, 1.6185, 1.6447], device='cuda:2'), covar=tensor([0.5709, 0.7946, 0.6439, 0.7198, 0.8446, 0.6637, 0.8904, 0.6322], device='cuda:2'), in_proj_covar=tensor([0.0229, 0.0248, 0.0255, 0.0258, 0.0242, 0.0219, 0.0275, 0.0223], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:09:49,874 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2508, 1.2673, 1.5302, 1.1138, 1.1999, 1.3998, 1.2519, 1.5938], device='cuda:2'), covar=tensor([0.1335, 0.2203, 0.1253, 0.1565, 0.0961, 0.1353, 0.3021, 0.0869], device='cuda:2'), in_proj_covar=tensor([0.0204, 0.0202, 0.0199, 0.0194, 0.0184, 0.0220, 0.0214, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:10:10,463 INFO [finetune.py:976] (2/7) Epoch 5, batch 5000, loss[loss=0.1566, simple_loss=0.221, pruned_loss=0.04605, over 4792.00 frames. ], tot_loss[loss=0.2198, simple_loss=0.279, pruned_loss=0.08036, over 957023.94 frames. ], batch size: 29, lr: 3.92e-03, grad_scale: 16.0
2023-03-26 05:10:19,081 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.230e+02 1.618e+02 1.837e+02 2.301e+02 4.829e+02, threshold=3.674e+02, percent-clipped=1.0
2023-03-26 05:10:43,560 INFO [finetune.py:976] (2/7) Epoch 5, batch 5050, loss[loss=0.2201, simple_loss=0.274, pruned_loss=0.08309, over 4889.00 frames. ], tot_loss[loss=0.2178, simple_loss=0.2763, pruned_loss=0.07965, over 958419.57 frames. ], batch size: 35, lr: 3.92e-03, grad_scale: 16.0
2023-03-26 05:11:15,395 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5950, 1.4170, 1.2793, 1.2694, 1.7923, 1.6744, 1.5698, 1.3151], device='cuda:2'), covar=tensor([0.0266, 0.0297, 0.0533, 0.0339, 0.0199, 0.0479, 0.0445, 0.0406], device='cuda:2'), in_proj_covar=tensor([0.0086, 0.0111, 0.0137, 0.0117, 0.0104, 0.0100, 0.0091, 0.0109], device='cuda:2'), out_proj_covar=tensor([6.7549e-05, 8.7588e-05, 1.1001e-04, 9.2568e-05, 8.2220e-05, 7.4337e-05, 6.9252e-05, 8.4788e-05], device='cuda:2')
2023-03-26 05:11:48,644 INFO [finetune.py:976] (2/7) Epoch 5, batch 5100, loss[loss=0.1906, simple_loss=0.2486, pruned_loss=0.06626, over 4895.00 frames. ], tot_loss[loss=0.2154, simple_loss=0.2733, pruned_loss=0.07877, over 958236.31 frames. ], batch size: 32, lr: 3.92e-03, grad_scale: 16.0
2023-03-26 05:11:59,692 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-26 05:12:02,949 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.177e+02 1.639e+02 1.875e+02 2.408e+02 3.954e+02, threshold=3.749e+02, percent-clipped=2.0
2023-03-26 05:12:13,143 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.2946, 3.6931, 3.8713, 4.0565, 4.0410, 3.8338, 4.3704, 1.3057], device='cuda:2'), covar=tensor([0.0810, 0.0776, 0.0805, 0.1064, 0.1184, 0.1515, 0.0650, 0.5365], device='cuda:2'), in_proj_covar=tensor([0.0355, 0.0244, 0.0276, 0.0293, 0.0337, 0.0285, 0.0304, 0.0298], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:12:32,814 INFO [finetune.py:976] (2/7) Epoch 5, batch 5150, loss[loss=0.2625, simple_loss=0.3216, pruned_loss=0.1017, over 4932.00 frames. ], tot_loss[loss=0.2159, simple_loss=0.2741, pruned_loss=0.07886, over 956870.46 frames. ], batch size: 33, lr: 3.92e-03, grad_scale: 16.0
2023-03-26 05:12:38,915 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=28070.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 05:13:05,226 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8872, 1.6891, 1.3899, 1.7230, 1.6823, 1.6041, 1.6085, 2.3588], device='cuda:2'), covar=tensor([0.6572, 0.7681, 0.5291, 0.7046, 0.6338, 0.3942, 0.6942, 0.2372], device='cuda:2'), in_proj_covar=tensor([0.0282, 0.0257, 0.0220, 0.0283, 0.0239, 0.0201, 0.0244, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:13:06,310 INFO [finetune.py:976] (2/7) Epoch 5, batch 5200, loss[loss=0.2603, simple_loss=0.3276, pruned_loss=0.09655, over 4903.00 frames. ], tot_loss[loss=0.22, simple_loss=0.2784, pruned_loss=0.08083, over 953151.63 frames. ], batch size: 43, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:13:13,552 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7006, 1.2108, 0.8695, 1.5830, 2.0236, 1.2676, 1.5440, 1.6050], device='cuda:2'), covar=tensor([0.1545, 0.2130, 0.2086, 0.1229, 0.1985, 0.2077, 0.1391, 0.2053], device='cuda:2'), in_proj_covar=tensor([0.0092, 0.0099, 0.0116, 0.0094, 0.0125, 0.0097, 0.0101, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 05:13:14,537 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.308e+02 1.748e+02 1.996e+02 2.342e+02 5.311e+02, threshold=3.992e+02, percent-clipped=1.0
2023-03-26 05:13:39,349 INFO [finetune.py:976] (2/7) Epoch 5, batch 5250, loss[loss=0.1667, simple_loss=0.2341, pruned_loss=0.04962, over 4692.00 frames. ], tot_loss[loss=0.2218, simple_loss=0.2806, pruned_loss=0.08149, over 954785.99 frames. ], batch size: 23, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:14:12,315 INFO [finetune.py:976] (2/7) Epoch 5, batch 5300, loss[loss=0.2294, simple_loss=0.2884, pruned_loss=0.0852, over 4817.00 frames. ], tot_loss[loss=0.223, simple_loss=0.2822, pruned_loss=0.08189, over 955159.59 frames. ], batch size: 38, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:14:19,562 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.181e+02 1.725e+02 1.957e+02 2.435e+02 6.444e+02, threshold=3.915e+02, percent-clipped=2.0
2023-03-26 05:14:19,674 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1661, 1.3736, 0.8563, 2.0114, 2.3980, 1.8613, 1.7326, 2.0988], device='cuda:2'), covar=tensor([0.1394, 0.1970, 0.2140, 0.1137, 0.1864, 0.1819, 0.1397, 0.1795], device='cuda:2'), in_proj_covar=tensor([0.0092, 0.0098, 0.0116, 0.0094, 0.0125, 0.0097, 0.0101, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 05:14:24,313 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=28229.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:14:51,817 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=28255.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 05:14:56,275 INFO [finetune.py:976] (2/7) Epoch 5, batch 5350, loss[loss=0.1961, simple_loss=0.2628, pruned_loss=0.06468, over 4815.00 frames. ], tot_loss[loss=0.2211, simple_loss=0.2813, pruned_loss=0.0804, over 955996.61 frames. ], batch size: 33, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:15:13,546 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=28287.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:15:14,108 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5388, 1.4523, 1.2563, 1.4413, 1.7606, 1.6873, 1.5270, 1.2972], device='cuda:2'), covar=tensor([0.0280, 0.0271, 0.0572, 0.0287, 0.0198, 0.0352, 0.0258, 0.0348], device='cuda:2'), in_proj_covar=tensor([0.0086, 0.0111, 0.0137, 0.0117, 0.0104, 0.0100, 0.0090, 0.0108], device='cuda:2'), out_proj_covar=tensor([6.7494e-05, 8.7175e-05, 1.0996e-04, 9.2411e-05, 8.2111e-05, 7.4190e-05, 6.9106e-05, 8.4433e-05], device='cuda:2')
2023-03-26 05:15:15,752 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=28290.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:15:28,417 INFO [finetune.py:976] (2/7) Epoch 5, batch 5400, loss[loss=0.1932, simple_loss=0.2604, pruned_loss=0.06296, over 4823.00 frames. ], tot_loss[loss=0.2178, simple_loss=0.2776, pruned_loss=0.07903, over 954547.37 frames. ], batch size: 39, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:15:31,973 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=28316.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 05:15:36,016 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.900e+01 1.568e+02 1.878e+02 2.260e+02 3.573e+02, threshold=3.756e+02, percent-clipped=0.0
2023-03-26 05:15:38,542 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.1776, 3.5884, 3.8109, 4.0219, 3.9377, 3.6460, 4.2356, 1.3309], device='cuda:2'), covar=tensor([0.0722, 0.0868, 0.0777, 0.0863, 0.1135, 0.1433, 0.0650, 0.5150], device='cuda:2'), in_proj_covar=tensor([0.0357, 0.0245, 0.0277, 0.0295, 0.0339, 0.0286, 0.0305, 0.0300], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:15:43,293 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9873, 1.3775, 1.8647, 1.7425, 1.5556, 1.5393, 1.6583, 1.6965], device='cuda:2'), covar=tensor([0.5309, 0.7392, 0.5654, 0.6720, 0.7870, 0.6080, 0.8560, 0.5590], device='cuda:2'), in_proj_covar=tensor([0.0229, 0.0248, 0.0254, 0.0257, 0.0241, 0.0219, 0.0273, 0.0223], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:15:43,348 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.75 vs. limit=2.0
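The grad_scale field toggling between 16.0 and 32.0 in the loss lines is PyTorch AMP's dynamic loss scale (this run sets use_fp16=True): it is halved when a step produces inf/nan gradients and slowly grows back otherwise. The standard pattern, sketched with a hypothetical model/batch interface:

import torch

scaler = torch.cuda.amp.GradScaler(enabled=True)

def train_step(model, batch, optimizer):
    with torch.cuda.amp.autocast(enabled=True):
        loss = model(batch)               # hypothetical: returns a scalar loss
    optimizer.zero_grad()
    scaler.scale(loss).backward()
    scaler.step(optimizer)                # skipped internally on inf/nan grads
    scaler.update()                       # halve on overflow, else grow slowly
    return scaler.get_scale()             # the value logged as grad_scale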
2023-03-26 05:15:49,737 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0047, 1.7374, 1.5146, 1.4886, 1.7471, 1.7153, 1.6646, 2.4425], device='cuda:2'), covar=tensor([0.6568, 0.6713, 0.5330, 0.6551, 0.5511, 0.3621, 0.6398, 0.2436], device='cuda:2'), in_proj_covar=tensor([0.0282, 0.0258, 0.0220, 0.0283, 0.0239, 0.0201, 0.0243, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:15:53,766 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=28348.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:16:00,358 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2924, 1.9060, 2.7623, 1.7013, 2.2975, 2.5423, 1.8880, 2.6500], device='cuda:2'), covar=tensor([0.1487, 0.2024, 0.1577, 0.2336, 0.0943, 0.1571, 0.2812, 0.0973], device='cuda:2'), in_proj_covar=tensor([0.0207, 0.0205, 0.0201, 0.0196, 0.0187, 0.0223, 0.0218, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:16:01,447 INFO [finetune.py:976] (2/7) Epoch 5, batch 5450, loss[loss=0.2132, simple_loss=0.2685, pruned_loss=0.0789, over 4827.00 frames. ], tot_loss[loss=0.2143, simple_loss=0.2737, pruned_loss=0.07741, over 954073.51 frames. ], batch size: 33, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:16:18,292 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=28370.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 05:16:21,341 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6418, 1.7194, 2.0793, 1.9114, 1.8550, 4.3790, 1.5403, 2.0145], device='cuda:2'), covar=tensor([0.1024, 0.1746, 0.1053, 0.1078, 0.1546, 0.0206, 0.1512, 0.1659], device='cuda:2'), in_proj_covar=tensor([0.0078, 0.0083, 0.0078, 0.0080, 0.0093, 0.0084, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 05:16:46,831 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0041, 1.9103, 1.8315, 2.1936, 2.3177, 2.0941, 1.8415, 1.4929], device='cuda:2'), covar=tensor([0.2394, 0.2357, 0.1957, 0.1725, 0.2240, 0.1269, 0.2653, 0.1979], device='cuda:2'), in_proj_covar=tensor([0.0234, 0.0209, 0.0201, 0.0185, 0.0236, 0.0176, 0.0214, 0.0188], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:16:47,517 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.88 vs. limit=5.0
2023-03-26 05:16:49,196 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2306, 2.1498, 2.3048, 1.2656, 2.4931, 2.7218, 2.4032, 2.0957], device='cuda:2'), covar=tensor([0.0976, 0.0659, 0.0442, 0.0701, 0.0471, 0.0439, 0.0374, 0.0628], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0159, 0.0121, 0.0138, 0.0134, 0.0124, 0.0148, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.8088e-05, 1.1790e-04, 8.7877e-05, 1.0041e-04, 9.6388e-05, 9.1653e-05, 1.0953e-04, 1.0857e-04], device='cuda:2')
2023-03-26 05:17:02,776 INFO [finetune.py:976] (2/7) Epoch 5, batch 5500, loss[loss=0.2076, simple_loss=0.2714, pruned_loss=0.07191, over 4773.00 frames. ], tot_loss[loss=0.2119, simple_loss=0.2709, pruned_loss=0.0765, over 956998.06 frames. ], batch size: 26, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:17:12,774 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=28418.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 05:17:14,972 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.4834, 3.8740, 4.0888, 4.3267, 4.1974, 3.9122, 4.5569, 1.4916], device='cuda:2'), covar=tensor([0.0801, 0.0866, 0.0848, 0.0981, 0.1269, 0.1629, 0.0637, 0.5357], device='cuda:2'), in_proj_covar=tensor([0.0355, 0.0244, 0.0276, 0.0293, 0.0337, 0.0285, 0.0304, 0.0299], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:17:21,151 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.059e+02 1.651e+02 2.036e+02 2.478e+02 5.642e+02, threshold=4.072e+02, percent-clipped=3.0
2023-03-26 05:18:04,709 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7688, 0.9895, 1.6066, 1.5744, 1.3851, 1.4166, 1.4811, 1.4774], device='cuda:2'), covar=tensor([0.4419, 0.6390, 0.5433, 0.5720, 0.6945, 0.5061, 0.6907, 0.4869], device='cuda:2'), in_proj_covar=tensor([0.0229, 0.0247, 0.0254, 0.0256, 0.0241, 0.0218, 0.0273, 0.0223], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:18:07,649 INFO [finetune.py:976] (2/7) Epoch 5, batch 5550, loss[loss=0.236, simple_loss=0.2824, pruned_loss=0.09485, over 4893.00 frames. ], tot_loss[loss=0.2145, simple_loss=0.2727, pruned_loss=0.07813, over 954314.39 frames. ], batch size: 32, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:18:36,673 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=28486.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:18:37,318 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9421, 1.9938, 1.5517, 1.5777, 2.1714, 2.2391, 2.0433, 1.8939], device='cuda:2'), covar=tensor([0.0351, 0.0332, 0.0538, 0.0349, 0.0278, 0.0526, 0.0314, 0.0406], device='cuda:2'), in_proj_covar=tensor([0.0087, 0.0111, 0.0137, 0.0117, 0.0104, 0.0100, 0.0091, 0.0108], device='cuda:2'), out_proj_covar=tensor([6.7527e-05, 8.7159e-05, 1.0982e-04, 9.2442e-05, 8.2015e-05, 7.4332e-05, 6.9268e-05, 8.4645e-05], device='cuda:2')
2023-03-26 05:18:45,469 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=28492.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:19:02,059 INFO [finetune.py:976] (2/7) Epoch 5, batch 5600, loss[loss=0.2258, simple_loss=0.2925, pruned_loss=0.07953, over 4806.00 frames. ], tot_loss[loss=0.2178, simple_loss=0.2769, pruned_loss=0.07937, over 954231.82 frames. ], batch size: 51, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:19:14,572 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.120e+02 1.753e+02 2.098e+02 2.591e+02 4.684e+02, threshold=4.196e+02, percent-clipped=2.0
2023-03-26 05:19:22,934 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0
2023-03-26 05:19:40,191 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=28547.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:19:47,574 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=28553.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:19:52,217 INFO [finetune.py:976] (2/7) Epoch 5, batch 5650, loss[loss=0.2257, simple_loss=0.2839, pruned_loss=0.08372, over 4759.00 frames. ], tot_loss[loss=0.2201, simple_loss=0.2802, pruned_loss=0.07997, over 953811.56 frames. ], batch size: 26, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:19:54,624 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6941, 1.5589, 1.5760, 1.6081, 1.1098, 3.4082, 1.3312, 1.8660], device='cuda:2'), covar=tensor([0.3310, 0.2418, 0.1972, 0.2193, 0.1969, 0.0194, 0.2463, 0.1242], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0114, 0.0117, 0.0121, 0.0117, 0.0097, 0.0100, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:2')
2023-03-26 05:19:58,757 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6319, 1.2035, 0.7689, 1.5682, 1.9204, 1.4075, 1.4299, 1.7560], device='cuda:2'), covar=tensor([0.1570, 0.2105, 0.2217, 0.1235, 0.2188, 0.2122, 0.1438, 0.1812], device='cuda:2'), in_proj_covar=tensor([0.0092, 0.0098, 0.0116, 0.0094, 0.0124, 0.0097, 0.0100, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 05:20:06,327 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=28585.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:20:21,896 INFO [finetune.py:976] (2/7) Epoch 5, batch 5700, loss[loss=0.1854, simple_loss=0.2314, pruned_loss=0.06969, over 4340.00 frames. ], tot_loss[loss=0.2189, simple_loss=0.2776, pruned_loss=0.0801, over 938349.69 frames. ], batch size: 19, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:20:21,936 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=28611.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 05:20:23,247 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. limit=2.0
2023-03-26 05:20:29,013 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.136e+02 1.580e+02 1.894e+02 2.210e+02 5.665e+02, threshold=3.789e+02, percent-clipped=1.0
2023-03-26 05:20:53,188 INFO [finetune.py:976] (2/7) Epoch 6, batch 0, loss[loss=0.1698, simple_loss=0.2558, pruned_loss=0.04186, over 4783.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2558, pruned_loss=0.04186, over 4783.00 frames. ], batch size: 29, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:20:53,188 INFO [finetune.py:1001] (2/7) Computing validation loss
2023-03-26 05:20:56,530 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4142, 1.5378, 1.4802, 1.5888, 1.6665, 2.9652, 1.4149, 1.6397], device='cuda:2'), covar=tensor([0.1047, 0.1715, 0.1058, 0.1013, 0.1499, 0.0360, 0.1436, 0.1718], device='cuda:2'), in_proj_covar=tensor([0.0079, 0.0083, 0.0078, 0.0081, 0.0094, 0.0084, 0.0087, 0.0081], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 05:21:08,970 INFO [finetune.py:1010] (2/7) Epoch 6, validation: loss=0.1659, simple_loss=0.2379, pruned_loss=0.04693, over 2265189.00 frames.
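At the epoch boundary above, the loop pauses at Epoch 6, batch 0 to compute a full validation pass over the dev set (2,265,189 frames), logging a frame-weighted loss in the same format as the training lines. A minimal sketch of that step (names and return signature are assumptions, not the finetune.py API):

import torch

def compute_validation_loss(model, valid_dl):
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    with torch.no_grad():
        for batch in valid_dl:
            loss, num_frames = model(batch)   # hypothetical interface
            tot_loss += loss.item() * num_frames
            tot_frames += num_frames
    model.train()
    return tot_loss / tot_frames              # frame-weighted average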
2023-03-26 05:21:08,971 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB
2023-03-26 05:21:15,236 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=28643.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:21:15,286 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4285, 1.3859, 1.4030, 0.7287, 1.6387, 1.4190, 1.4112, 1.3641], device='cuda:2'), covar=tensor([0.0753, 0.0884, 0.0799, 0.1185, 0.0732, 0.0911, 0.0793, 0.1311], device='cuda:2'), in_proj_covar=tensor([0.0141, 0.0136, 0.0146, 0.0131, 0.0114, 0.0145, 0.0149, 0.0166], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:21:18,005 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2299, 2.4878, 2.0597, 1.6340, 2.2992, 2.6424, 2.3644, 2.0334], device='cuda:2'), covar=tensor([0.0745, 0.0619, 0.0906, 0.1011, 0.1061, 0.0627, 0.0685, 0.1102], device='cuda:2'), in_proj_covar=tensor([0.0141, 0.0136, 0.0146, 0.0131, 0.0114, 0.0145, 0.0148, 0.0166], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:21:23,702 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.3349, 1.3829, 1.5290, 0.6860, 1.3642, 1.6631, 1.7485, 1.3770], device='cuda:2'), covar=tensor([0.0972, 0.0634, 0.0486, 0.0622, 0.0457, 0.0546, 0.0320, 0.0655], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0159, 0.0121, 0.0138, 0.0134, 0.0124, 0.0148, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.7884e-05, 1.1752e-04, 8.7878e-05, 1.0043e-04, 9.6153e-05, 9.1878e-05, 1.0956e-04, 1.0867e-04], device='cuda:2')
2023-03-26 05:21:59,715 INFO [finetune.py:976] (2/7) Epoch 6, batch 50, loss[loss=0.2226, simple_loss=0.2868, pruned_loss=0.07926, over 4779.00 frames. ], tot_loss[loss=0.2267, simple_loss=0.2841, pruned_loss=0.08467, over 213532.34 frames. ], batch size: 29, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:22:23,306 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4591, 1.2460, 1.2505, 1.3451, 1.6351, 1.5410, 1.4279, 1.2420], device='cuda:2'), covar=tensor([0.0285, 0.0281, 0.0541, 0.0285, 0.0239, 0.0494, 0.0268, 0.0387], device='cuda:2'), in_proj_covar=tensor([0.0087, 0.0111, 0.0138, 0.0118, 0.0104, 0.0101, 0.0091, 0.0109], device='cuda:2'), out_proj_covar=tensor([6.8117e-05, 8.7615e-05, 1.1100e-04, 9.3341e-05, 8.2424e-05, 7.5017e-05, 6.9768e-05, 8.5209e-05], device='cuda:2')
2023-03-26 05:22:30,499 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.152e+02 1.667e+02 1.978e+02 2.446e+02 6.098e+02, threshold=3.955e+02, percent-clipped=3.0
2023-03-26 05:22:41,773 INFO [finetune.py:976] (2/7) Epoch 6, batch 100, loss[loss=0.194, simple_loss=0.2581, pruned_loss=0.06489, over 4767.00 frames. ], tot_loss[loss=0.2161, simple_loss=0.275, pruned_loss=0.0786, over 379771.07 frames. ], batch size: 26, lr: 3.92e-03, grad_scale: 32.0
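The "Maximum memory allocated" lines read CUDA's peak-allocation counter for this rank's device; the figure holding at 6329MB means no batch since the earlier peak has needed more. The reading reduces to a one-liner:

import torch

# Peak bytes ever allocated on this rank's device, reported in MB:
peak_mb = torch.cuda.max_memory_allocated(torch.device("cuda:2")) // (1024 * 1024)
print(f"Maximum memory allocated so far is {peak_mb}MB")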
2023-03-26 05:22:52,798 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5647, 1.4315, 1.4409, 1.5600, 1.0108, 2.9290, 1.1120, 1.6335], device='cuda:2'), covar=tensor([0.3171, 0.2335, 0.2086, 0.2252, 0.1883, 0.0242, 0.2818, 0.1275], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0113, 0.0117, 0.0121, 0.0117, 0.0097, 0.0101, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:2')
2023-03-26 05:23:14,508 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.21 vs. limit=5.0
2023-03-26 05:23:15,380 INFO [finetune.py:976] (2/7) Epoch 6, batch 150, loss[loss=0.2485, simple_loss=0.2938, pruned_loss=0.1016, over 4895.00 frames. ], tot_loss[loss=0.2128, simple_loss=0.2714, pruned_loss=0.0771, over 509366.99 frames. ], batch size: 36, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:23:18,994 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4999, 1.4854, 1.2563, 1.4944, 1.7665, 1.6461, 1.5182, 1.3428], device='cuda:2'), covar=tensor([0.0327, 0.0279, 0.0574, 0.0251, 0.0210, 0.0400, 0.0281, 0.0347], device='cuda:2'), in_proj_covar=tensor([0.0087, 0.0111, 0.0137, 0.0117, 0.0104, 0.0100, 0.0091, 0.0109], device='cuda:2'), out_proj_covar=tensor([6.7911e-05, 8.7281e-05, 1.1035e-04, 9.2864e-05, 8.1920e-05, 7.4570e-05, 6.9356e-05, 8.4710e-05], device='cuda:2')
2023-03-26 05:23:37,612 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.084e+02 1.593e+02 1.877e+02 2.260e+02 4.734e+02, threshold=3.755e+02, percent-clipped=1.0
2023-03-26 05:23:48,133 INFO [finetune.py:976] (2/7) Epoch 6, batch 200, loss[loss=0.2268, simple_loss=0.2776, pruned_loss=0.08797, over 4850.00 frames. ], tot_loss[loss=0.2122, simple_loss=0.2694, pruned_loss=0.07752, over 609683.51 frames. ], batch size: 44, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:23:50,561 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=28842.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:23:55,124 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=28848.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:24:03,992 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=28861.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:24:19,124 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=28885.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:24:26,389 INFO [finetune.py:976] (2/7) Epoch 6, batch 250, loss[loss=0.2112, simple_loss=0.2663, pruned_loss=0.07808, over 4770.00 frames. ], tot_loss[loss=0.217, simple_loss=0.275, pruned_loss=0.07957, over 686682.23 frames. ], batch size: 28, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:24:47,335 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=28911.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 05:25:02,676 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=28922.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:25:03,160 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.192e+02 1.635e+02 1.950e+02 2.422e+02 4.878e+02, threshold=3.900e+02, percent-clipped=5.0
2023-03-26 05:25:03,890 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7202, 1.5495, 2.2971, 3.6093, 2.5455, 2.4901, 0.9733, 2.6875], device='cuda:2'), covar=tensor([0.1869, 0.1566, 0.1384, 0.0506, 0.0805, 0.1407, 0.2094, 0.0654], device='cuda:2'), in_proj_covar=tensor([0.0102, 0.0119, 0.0136, 0.0165, 0.0103, 0.0142, 0.0128, 0.0103], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2')
2023-03-26 05:25:09,280 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=28933.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:25:13,897 INFO [finetune.py:976] (2/7) Epoch 6, batch 300, loss[loss=0.2387, simple_loss=0.2926, pruned_loss=0.09243, over 4757.00 frames. ], tot_loss[loss=0.2203, simple_loss=0.2794, pruned_loss=0.08065, over 745738.20 frames. ], batch size: 27, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:25:16,439 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=28943.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:25:28,660 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=28959.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 05:25:47,451 INFO [finetune.py:976] (2/7) Epoch 6, batch 350, loss[loss=0.1999, simple_loss=0.2679, pruned_loss=0.06599, over 4828.00 frames. ], tot_loss[loss=0.2216, simple_loss=0.2808, pruned_loss=0.08124, over 792289.79 frames. ], batch size: 30, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:25:49,413 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=28991.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:26:02,689 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=29002.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:26:28,283 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.301e+02 1.836e+02 2.201e+02 2.620e+02 4.241e+02, threshold=4.402e+02, percent-clipped=2.0
2023-03-26 05:26:38,005 INFO [finetune.py:976] (2/7) Epoch 6, batch 400, loss[loss=0.1968, simple_loss=0.2683, pruned_loss=0.06261, over 4838.00 frames. ], tot_loss[loss=0.2217, simple_loss=0.2811, pruned_loss=0.08117, over 828238.36 frames. ], batch size: 47, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:26:45,333 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.75 vs. limit=2.0
2023-03-26 05:27:01,629 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=29063.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:27:17,311 INFO [finetune.py:976] (2/7) Epoch 6, batch 450, loss[loss=0.1798, simple_loss=0.2359, pruned_loss=0.0618, over 4893.00 frames. ], tot_loss[loss=0.2192, simple_loss=0.2793, pruned_loss=0.07956, over 857743.00 frames. ], batch size: 36, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:27:17,311 INFO [finetune.py:976] (2/7) Epoch 6, batch 450, loss[loss=0.1798, simple_loss=0.2359, pruned_loss=0.0618, over 4893.00 frames. ], tot_loss[loss=0.2192, simple_loss=0.2793, pruned_loss=0.07956, over 857743.00 frames. ], batch size: 36, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:27:25,789 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5875, 2.4178, 2.1495, 2.7463, 2.4845, 2.3224, 2.3556, 3.3453], device='cuda:2'), covar=tensor([0.6380, 0.7670, 0.5138, 0.6321, 0.5758, 0.3764, 0.6961, 0.2151], device='cuda:2'), in_proj_covar=tensor([0.0282, 0.0258, 0.0220, 0.0284, 0.0239, 0.0202, 0.0245, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:27:45,345 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.158e+02 1.708e+02 1.996e+02 2.284e+02 5.200e+02, threshold=3.993e+02, percent-clipped=1.0
2023-03-26 05:27:55,104 INFO [finetune.py:976] (2/7) Epoch 6, batch 500, loss[loss=0.201, simple_loss=0.2539, pruned_loss=0.07402, over 4246.00 frames. ], tot_loss[loss=0.2169, simple_loss=0.2764, pruned_loss=0.07869, over 879372.39 frames. ], batch size: 65, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:27:57,525 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=29142.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:28:01,669 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=29148.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:28:28,350 INFO [finetune.py:976] (2/7) Epoch 6, batch 550, loss[loss=0.2183, simple_loss=0.2792, pruned_loss=0.07869, over 4812.00 frames. ], tot_loss[loss=0.2155, simple_loss=0.2741, pruned_loss=0.07841, over 896501.41 frames. ], batch size: 38, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:28:28,995 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=29190.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:28:34,887 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=29196.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:29:00,095 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=29217.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:29:04,678 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.109e+02 1.638e+02 2.076e+02 2.525e+02 4.090e+02, threshold=4.153e+02, percent-clipped=1.0
2023-03-26 05:29:07,167 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.7708, 3.2850, 3.4254, 3.6372, 3.5020, 3.3070, 3.8159, 1.1949], device='cuda:2'), covar=tensor([0.0852, 0.0920, 0.0875, 0.0984, 0.1369, 0.1621, 0.0901, 0.5108], device='cuda:2'), in_proj_covar=tensor([0.0353, 0.0241, 0.0273, 0.0291, 0.0335, 0.0282, 0.0302, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:29:18,496 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=29236.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:29:20,235 INFO [finetune.py:976] (2/7) Epoch 6, batch 600, loss[loss=0.2396, simple_loss=0.2985, pruned_loss=0.09029, over 4868.00 frames. ], tot_loss[loss=0.2166, simple_loss=0.2752, pruned_loss=0.07901, over 909088.13 frames. ], batch size: 34, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:30:24,096 INFO [finetune.py:976] (2/7) Epoch 6, batch 650, loss[loss=0.2261, simple_loss=0.2993, pruned_loss=0.07649, over 4929.00 frames. ], tot_loss[loss=0.2197, simple_loss=0.2789, pruned_loss=0.08023, over 921390.12 frames. ], batch size: 33, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:30:33,484 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6834, 1.4931, 1.5223, 1.6154, 1.1780, 3.6572, 1.4701, 1.9560], device='cuda:2'), covar=tensor([0.3541, 0.2641, 0.2210, 0.2427, 0.1950, 0.0153, 0.2638, 0.1427], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0114, 0.0117, 0.0121, 0.0117, 0.0098, 0.0101, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:2')
2023-03-26 05:30:34,117 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=29297.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:30:41,580 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2782, 1.1532, 1.1189, 1.0994, 1.4454, 1.4331, 1.3200, 1.1176], device='cuda:2'), covar=tensor([0.0326, 0.0305, 0.0587, 0.0300, 0.0230, 0.0368, 0.0268, 0.0399], device='cuda:2'), in_proj_covar=tensor([0.0088, 0.0112, 0.0139, 0.0119, 0.0105, 0.0102, 0.0092, 0.0110], device='cuda:2'), out_proj_covar=tensor([6.8633e-05, 8.8618e-05, 1.1149e-04, 9.3832e-05, 8.2513e-05, 7.5497e-05, 7.0227e-05, 8.5874e-05], device='cuda:2')
2023-03-26 05:31:13,223 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.225e+02 1.735e+02 1.982e+02 2.438e+02 4.583e+02, threshold=3.965e+02, percent-clipped=2.0
2023-03-26 05:31:33,679 INFO [finetune.py:976] (2/7) Epoch 6, batch 700, loss[loss=0.1792, simple_loss=0.2454, pruned_loss=0.05651, over 4812.00 frames. ], tot_loss[loss=0.2204, simple_loss=0.2806, pruned_loss=0.08015, over 929659.89 frames. ], batch size: 25, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:31:46,288 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=29358.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:32:17,024 INFO [finetune.py:976] (2/7) Epoch 6, batch 750, loss[loss=0.2389, simple_loss=0.2917, pruned_loss=0.093, over 4913.00 frames. ], tot_loss[loss=0.2217, simple_loss=0.282, pruned_loss=0.08064, over 934326.51 frames. ], batch size: 36, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:32:59,099 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.656e+01 1.814e+02 2.109e+02 2.501e+02 5.044e+02, threshold=4.217e+02, percent-clipped=3.0
2023-03-26 05:33:26,483 INFO [finetune.py:976] (2/7) Epoch 6, batch 800, loss[loss=0.217, simple_loss=0.277, pruned_loss=0.07849, over 4830.00 frames. ], tot_loss[loss=0.2227, simple_loss=0.2826, pruned_loss=0.08137, over 937630.95 frames. ], batch size: 30, lr: 3.92e-03, grad_scale: 32.0
2023-03-26 05:33:29,741 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. limit=2.0
2023-03-26 05:34:10,556 INFO [finetune.py:976] (2/7) Epoch 6, batch 850, loss[loss=0.2134, simple_loss=0.2744, pruned_loss=0.0762, over 4816.00 frames. ], tot_loss[loss=0.22, simple_loss=0.2795, pruned_loss=0.08021, over 940691.63 frames. ], batch size: 33, lr: 3.92e-03, grad_scale: 32.0
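The [zipformer.py:1188] entries record a per-batch decision about whole-layer dropout in each encoder stack. A hedged sketch of such a decision rule, assuming layers are occasionally bypassed while a stack is inside its warmup window; the probability and halving rule here are illustrative, not the actual Zipformer logic:

    import random

    def choose_layers_to_drop(batch_count: float,
                              warmup_begin: float,
                              warmup_end: float,
                              num_layers: int,
                              drop_prob: float = 0.075) -> set:
        """Return the set of layer indices to bypass for this batch."""
        if batch_count >= warmup_end:   # past warmup: drop layers less often
            drop_prob *= 0.5
        num_to_drop = sum(random.random() < drop_prob for _ in range(num_layers))
        layers = set(random.sample(range(num_layers), num_to_drop))
        print(f"warmup_begin={warmup_begin}, warmup_end={warmup_end}, "
              f"batch_count={batch_count}, num_to_drop={num_to_drop}, "
              f"layers_to_drop={layers}")
        return layers

Most batches above log num_to_drop=0; the occasional layers_to_drop={0} or {2} is the rare skip event.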
2023-03-26 05:34:23,573 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0
2023-03-26 05:34:24,033 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8135, 0.9417, 1.7221, 1.5934, 1.4808, 1.4841, 1.4248, 1.5442], device='cuda:2'), covar=tensor([0.4951, 0.6851, 0.5712, 0.6220, 0.7358, 0.5406, 0.7574, 0.5473], device='cuda:2'), in_proj_covar=tensor([0.0228, 0.0246, 0.0253, 0.0255, 0.0240, 0.0218, 0.0273, 0.0222], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:34:43,561 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=29517.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:34:52,273 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.241e+02 1.650e+02 2.021e+02 2.394e+02 5.702e+02, threshold=4.042e+02, percent-clipped=1.0
2023-03-26 05:35:13,145 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3232, 2.1985, 1.9566, 2.4311, 1.8260, 5.0037, 2.1189, 2.8400], device='cuda:2'), covar=tensor([0.2992, 0.2109, 0.1853, 0.1959, 0.1604, 0.0086, 0.2091, 0.1093], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0115, 0.0118, 0.0122, 0.0118, 0.0099, 0.0102, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 05:35:14,735 INFO [finetune.py:976] (2/7) Epoch 6, batch 900, loss[loss=0.1968, simple_loss=0.2594, pruned_loss=0.06713, over 4900.00 frames. ], tot_loss[loss=0.2167, simple_loss=0.2762, pruned_loss=0.07864, over 945496.04 frames. ], batch size: 35, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:35:32,160 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=29551.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:35:46,192 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=29565.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:36:14,403 INFO [finetune.py:976] (2/7) Epoch 6, batch 950, loss[loss=0.1632, simple_loss=0.2342, pruned_loss=0.04612, over 4920.00 frames. ], tot_loss[loss=0.2142, simple_loss=0.2735, pruned_loss=0.07748, over 946673.12 frames. ], batch size: 37, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:36:21,376 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=29592.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:36:43,986 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=29612.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:36:56,223 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.131e+02 1.661e+02 1.977e+02 2.359e+02 4.237e+02, threshold=3.954e+02, percent-clipped=2.0
2023-03-26 05:37:17,825 INFO [finetune.py:976] (2/7) Epoch 6, batch 1000, loss[loss=0.1752, simple_loss=0.2368, pruned_loss=0.05679, over 4769.00 frames. ], tot_loss[loss=0.2168, simple_loss=0.2763, pruned_loss=0.0787, over 950465.99 frames. ], batch size: 26, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:37:28,436 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.1207, 2.5979, 2.2724, 1.3584, 2.3766, 2.3950, 1.8834, 2.1749], device='cuda:2'), covar=tensor([0.0489, 0.0708, 0.1047, 0.1588, 0.1130, 0.1580, 0.1726, 0.0792], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0201, 0.0202, 0.0190, 0.0217, 0.0210, 0.0220, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:37:40,147 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=29658.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:37:47,499 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=29662.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:38:20,623 INFO [finetune.py:976] (2/7) Epoch 6, batch 1050, loss[loss=0.2147, simple_loss=0.2807, pruned_loss=0.07437, over 4856.00 frames. ], tot_loss[loss=0.2178, simple_loss=0.2782, pruned_loss=0.07872, over 953144.24 frames. ], batch size: 44, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:38:20,778 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0917, 1.6906, 1.8290, 1.8112, 1.6522, 1.6779, 1.8574, 1.7666], device='cuda:2'), covar=tensor([0.6162, 0.7891, 0.6406, 0.8075, 0.8746, 0.6608, 0.9941, 0.5893], device='cuda:2'), in_proj_covar=tensor([0.0228, 0.0245, 0.0252, 0.0255, 0.0240, 0.0217, 0.0273, 0.0222], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:38:41,046 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4582, 1.4948, 1.5136, 1.7970, 1.5830, 3.1618, 1.2930, 1.5652], device='cuda:2'), covar=tensor([0.1087, 0.1890, 0.1246, 0.1045, 0.1646, 0.0299, 0.1535, 0.1762], device='cuda:2'), in_proj_covar=tensor([0.0078, 0.0082, 0.0078, 0.0080, 0.0093, 0.0084, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 05:38:41,608 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=29706.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:39:02,907 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.157e+02 1.795e+02 2.095e+02 2.533e+02 7.754e+02, threshold=4.191e+02, percent-clipped=4.0
2023-03-26 05:39:03,052 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=29723.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 05:39:09,864 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.50 vs. limit=2.0
2023-03-26 05:39:23,373 INFO [finetune.py:976] (2/7) Epoch 6, batch 1100, loss[loss=0.1903, simple_loss=0.26, pruned_loss=0.06029, over 4799.00 frames. ], tot_loss[loss=0.2188, simple_loss=0.2796, pruned_loss=0.07901, over 953500.29 frames. ], batch size: 40, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:39:41,626 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0
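The [zipformer.py:2441] dumps report the entropy of attention weights per head, a standard diagnostic for heads that collapse onto single frames (entropy near zero) or stay diffuse (high entropy). A self-contained sketch of the quantity being printed; shapes and names are illustrative:

    import torch

    def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
        """attn: (num_heads, tgt_len, src_len), rows summing to 1."""
        eps = 1.0e-20
        h = -(attn * (attn + eps).log()).sum(dim=-1)   # (num_heads, tgt_len)
        return h.mean(dim=-1)                          # one value per head

    weights = torch.softmax(torch.randn(8, 50, 50), dim=-1)
    print("attn_weights_entropy =", attn_weights_entropy(weights))

The accompanying covar/in_proj_covar/out_proj_covar tensors are further per-head covariance statistics from the same diagnostic hook.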
2023-03-26 05:40:24,911 INFO [finetune.py:976] (2/7) Epoch 6, batch 1150, loss[loss=0.2029, simple_loss=0.2615, pruned_loss=0.07212, over 4775.00 frames. ], tot_loss[loss=0.2192, simple_loss=0.2798, pruned_loss=0.07932, over 951762.77 frames. ], batch size: 51, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:40:37,971 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=29802.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:41:01,070 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.060e+02 1.742e+02 2.057e+02 2.377e+02 6.600e+02, threshold=4.115e+02, percent-clipped=1.0
2023-03-26 05:41:11,693 INFO [finetune.py:976] (2/7) Epoch 6, batch 1200, loss[loss=0.1807, simple_loss=0.2548, pruned_loss=0.05329, over 4816.00 frames. ], tot_loss[loss=0.2173, simple_loss=0.2778, pruned_loss=0.0784, over 953827.64 frames. ], batch size: 38, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:41:28,765 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=29863.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:41:45,292 INFO [finetune.py:976] (2/7) Epoch 6, batch 1250, loss[loss=0.2299, simple_loss=0.2801, pruned_loss=0.08986, over 4894.00 frames. ], tot_loss[loss=0.2154, simple_loss=0.2753, pruned_loss=0.07775, over 955063.22 frames. ], batch size: 35, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:41:47,166 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=29892.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:41:57,713 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=29907.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:42:00,123 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0
2023-03-26 05:42:04,918 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0
2023-03-26 05:42:07,774 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.170e+02 1.638e+02 1.953e+02 2.201e+02 4.150e+02, threshold=3.906e+02, percent-clipped=1.0
2023-03-26 05:42:18,518 INFO [finetune.py:976] (2/7) Epoch 6, batch 1300, loss[loss=0.1626, simple_loss=0.2297, pruned_loss=0.04776, over 4798.00 frames. ], tot_loss[loss=0.2125, simple_loss=0.2723, pruned_loss=0.0763, over 957973.71 frames. ], batch size: 29, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:42:19,141 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=29940.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:42:53,738 INFO [finetune.py:976] (2/7) Epoch 6, batch 1350, loss[loss=0.2021, simple_loss=0.267, pruned_loss=0.06863, over 4764.00 frames. ], tot_loss[loss=0.2126, simple_loss=0.2726, pruned_loss=0.07628, over 957496.87 frames. ], batch size: 54, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:43:22,437 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=30018.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 05:43:25,359 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.107e+02 1.660e+02 2.040e+02 2.486e+02 4.804e+02, threshold=4.081e+02, percent-clipped=2.0
2023-03-26 05:43:35,490 INFO [finetune.py:976] (2/7) Epoch 6, batch 1400, loss[loss=0.182, simple_loss=0.2611, pruned_loss=0.05145, over 4928.00 frames. ], tot_loss[loss=0.2155, simple_loss=0.2757, pruned_loss=0.07761, over 957370.98 frames. ], batch size: 33, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:43:54,651 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0
2023-03-26 05:44:14,311 INFO [finetune.py:976] (2/7) Epoch 6, batch 1450, loss[loss=0.238, simple_loss=0.3079, pruned_loss=0.08406, over 4824.00 frames. ], tot_loss[loss=0.2169, simple_loss=0.2769, pruned_loss=0.07842, over 955941.78 frames. ], batch size: 39, lr: 3.91e-03, grad_scale: 64.0
2023-03-26 05:44:58,651 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.184e+02 1.825e+02 2.216e+02 2.642e+02 7.386e+02, threshold=4.431e+02, percent-clipped=2.0
2023-03-26 05:44:59,432 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=30125.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:45:04,857 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.00 vs. limit=5.0
2023-03-26 05:45:16,732 INFO [finetune.py:976] (2/7) Epoch 6, batch 1500, loss[loss=0.2266, simple_loss=0.2926, pruned_loss=0.0803, over 4928.00 frames. ], tot_loss[loss=0.2178, simple_loss=0.278, pruned_loss=0.07882, over 955570.18 frames. ], batch size: 33, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:45:38,716 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=30158.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:45:59,678 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=30186.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:46:01,369 INFO [finetune.py:976] (2/7) Epoch 6, batch 1550, loss[loss=0.1997, simple_loss=0.2568, pruned_loss=0.07129, over 4819.00 frames. ], tot_loss[loss=0.218, simple_loss=0.2784, pruned_loss=0.0788, over 954767.73 frames. ], batch size: 30, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:46:12,976 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=30207.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:46:14,206 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=30209.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:46:25,109 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.142e+02 1.698e+02 2.038e+02 2.458e+02 3.858e+02, threshold=4.076e+02, percent-clipped=0.0
2023-03-26 05:46:27,716 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0
2023-03-26 05:46:34,721 INFO [finetune.py:976] (2/7) Epoch 6, batch 1600, loss[loss=0.2027, simple_loss=0.2649, pruned_loss=0.07027, over 4846.00 frames. ], tot_loss[loss=0.2167, simple_loss=0.2765, pruned_loss=0.07846, over 955369.14 frames. ], batch size: 44, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:46:34,836 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8133, 1.8119, 1.7423, 1.1040, 2.0320, 2.0047, 1.8511, 1.6347], device='cuda:2'), covar=tensor([0.0625, 0.0649, 0.0760, 0.0996, 0.0575, 0.0632, 0.0680, 0.1158], device='cuda:2'), in_proj_covar=tensor([0.0141, 0.0137, 0.0148, 0.0130, 0.0115, 0.0146, 0.0149, 0.0164], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:46:39,568 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0
2023-03-26 05:46:44,946 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=30255.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:46:55,996 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=30270.0, num_to_drop=0, layers_to_drop=set()
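The grad_scale field tracks mixed-precision loss scaling (this run trains with fp16). A hedged sketch of the standard torch.cuda.amp pattern that would produce the 32.0 -> 64.0 -> 32.0 movement seen around batch 1450: the scale is doubled after a run of successful steps and halved when a step overflows. The intervals and the model call are illustrative:

    import torch

    scaler = torch.cuda.amp.GradScaler(init_scale=32.0,
                                       growth_factor=2.0,
                                       backoff_factor=0.5,
                                       growth_interval=2000)

    def train_step(model, optimizer, features, supervisions):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast(enabled=True):
            loss = model(features, supervisions)   # fp16 forward
        scaler.scale(loss).backward()              # scaled backward
        scaler.step(optimizer)                     # skips the step on inf/nan
        scaler.update()                            # grow or back off the scale
        return scaler.get_scale()                  # the logged grad_scale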
2023-03-26 05:47:08,014 INFO [finetune.py:976] (2/7) Epoch 6, batch 1650, loss[loss=0.1542, simple_loss=0.2195, pruned_loss=0.04448, over 4816.00 frames. ], tot_loss[loss=0.2143, simple_loss=0.2737, pruned_loss=0.07746, over 955902.01 frames. ], batch size: 25, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:47:17,169 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8747, 1.6709, 1.4240, 1.4040, 1.5875, 1.5493, 1.5135, 2.3512], device='cuda:2'), covar=tensor([0.6170, 0.6377, 0.5054, 0.6126, 0.5514, 0.3515, 0.6057, 0.2420], device='cuda:2'), in_proj_covar=tensor([0.0283, 0.0259, 0.0221, 0.0286, 0.0241, 0.0203, 0.0246, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:47:27,133 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=30318.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 05:47:31,616 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.138e+02 1.710e+02 1.999e+02 2.408e+02 3.997e+02, threshold=3.998e+02, percent-clipped=0.0
2023-03-26 05:47:32,507 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.91 vs. limit=5.0
2023-03-26 05:47:38,987 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6695, 1.2183, 0.8853, 1.5467, 2.0736, 1.0156, 1.4444, 1.6443], device='cuda:2'), covar=tensor([0.1597, 0.2231, 0.2092, 0.1277, 0.2041, 0.2207, 0.1514, 0.2045], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0099, 0.0115, 0.0093, 0.0124, 0.0097, 0.0101, 0.0093], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 05:47:41,359 INFO [finetune.py:976] (2/7) Epoch 6, batch 1700, loss[loss=0.2402, simple_loss=0.2869, pruned_loss=0.09676, over 4821.00 frames. ], tot_loss[loss=0.2126, simple_loss=0.2716, pruned_loss=0.07681, over 957317.66 frames. ], batch size: 39, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:47:55,193 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=30360.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:47:58,774 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=30366.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:48:21,061 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7631, 1.8467, 2.1680, 2.0338, 2.0463, 4.4999, 1.8233, 2.0530], device='cuda:2'), covar=tensor([0.1009, 0.1705, 0.1161, 0.1055, 0.1556, 0.0206, 0.1335, 0.1644], device='cuda:2'), in_proj_covar=tensor([0.0077, 0.0082, 0.0077, 0.0080, 0.0092, 0.0083, 0.0085, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2')
2023-03-26 05:48:27,033 INFO [finetune.py:976] (2/7) Epoch 6, batch 1750, loss[loss=0.2579, simple_loss=0.3286, pruned_loss=0.0936, over 4113.00 frames. ], tot_loss[loss=0.2145, simple_loss=0.2739, pruned_loss=0.07757, over 956269.69 frames. ], batch size: 65, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:48:48,739 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=30421.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:48:50,964 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.220e+01 1.748e+02 2.265e+02 2.778e+02 4.150e+02, threshold=4.530e+02, percent-clipped=1.0
2023-03-26 05:49:00,662 INFO [finetune.py:976] (2/7) Epoch 6, batch 1800, loss[loss=0.2071, simple_loss=0.2781, pruned_loss=0.06811, over 4808.00 frames. ], tot_loss[loss=0.2174, simple_loss=0.2776, pruned_loss=0.07862, over 954983.01 frames. ], batch size: 51, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:49:13,234 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=30458.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:49:20,057 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.59 vs. limit=5.0
2023-03-26 05:49:28,999 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=30481.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:49:33,833 INFO [finetune.py:976] (2/7) Epoch 6, batch 1850, loss[loss=0.2105, simple_loss=0.2843, pruned_loss=0.06833, over 4818.00 frames. ], tot_loss[loss=0.219, simple_loss=0.279, pruned_loss=0.07949, over 954720.81 frames. ], batch size: 38, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:49:44,726 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=30506.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:50:03,763 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.078e+02 1.671e+02 2.010e+02 2.577e+02 4.739e+02, threshold=4.020e+02, percent-clipped=1.0
2023-03-26 05:50:22,793 INFO [finetune.py:976] (2/7) Epoch 6, batch 1900, loss[loss=0.2239, simple_loss=0.2724, pruned_loss=0.08775, over 4164.00 frames. ], tot_loss[loss=0.2173, simple_loss=0.2778, pruned_loss=0.07838, over 951723.09 frames. ], batch size: 65, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:50:30,783 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0
2023-03-26 05:50:31,870 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5129, 1.1076, 0.8043, 1.3896, 1.9176, 0.6773, 1.2492, 1.4671], device='cuda:2'), covar=tensor([0.1513, 0.2118, 0.1856, 0.1223, 0.2009, 0.2191, 0.1469, 0.1884], device='cuda:2'), in_proj_covar=tensor([0.0092, 0.0099, 0.0116, 0.0094, 0.0125, 0.0097, 0.0100, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 05:50:42,047 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8886, 1.5219, 2.4432, 3.6044, 2.5911, 2.5302, 1.0487, 2.8624], device='cuda:2'), covar=tensor([0.1833, 0.1670, 0.1279, 0.0540, 0.0769, 0.1891, 0.1951, 0.0521], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0117, 0.0135, 0.0164, 0.0102, 0.0140, 0.0127, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 05:50:52,224 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=30565.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:51:10,740 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0749, 2.0315, 1.8128, 2.1185, 2.5813, 2.0693, 1.7606, 1.5910], device='cuda:2'), covar=tensor([0.2112, 0.2057, 0.1772, 0.1685, 0.1912, 0.1184, 0.2504, 0.1811], device='cuda:2'), in_proj_covar=tensor([0.0235, 0.0210, 0.0203, 0.0187, 0.0238, 0.0177, 0.0215, 0.0190], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
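The [scaling.py:679] "Whitening" lines compare a per-group statistic of the activations against a limit (2.0 or 5.0 in this run). A hedged sketch of one plausible such metric, assuming it measures how far the eigenvalue spectrum of the feature covariance is from the perfectly white case: n * sum(lambda^2) / (sum(lambda))^2 equals 1.0 when all eigenvalues are equal and grows as the covariance becomes lopsided. Names and the exact formula are assumptions, not icefall's code:

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
        """x: (num_frames, num_channels); channels split into num_groups."""
        n, c = x.shape
        group = c // num_groups
        metrics = []
        for g in range(num_groups):
            xg = x[:, g * group:(g + 1) * group]
            xg = xg - xg.mean(dim=0)
            cov = (xg.T @ xg) / n                 # (group, group) covariance
            eigs = torch.linalg.eigvalsh(cov)     # real eigenvalues, ascending
            metrics.append(group * (eigs ** 2).sum() / eigs.sum() ** 2)
        return torch.stack(metrics).mean().item()

    feats = torch.randn(1000, 192)
    print(f"Whitening: num_groups=8, num_channels=192, "
          f"metric={whitening_metric(feats, 8):.2f} vs. limit=2.0")

On this reading, the logged entries are near-misses: the metric is approaching but not exceeding the limit at which a whitening penalty would activate.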
2023-03-26 05:51:20,334 INFO [finetune.py:976] (2/7) Epoch 6, batch 1950, loss[loss=0.2493, simple_loss=0.3128, pruned_loss=0.09287, over 4806.00 frames. ], tot_loss[loss=0.2175, simple_loss=0.2774, pruned_loss=0.07876, over 951129.62 frames. ], batch size: 40, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:51:28,879 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0153, 1.7134, 1.7725, 1.7976, 1.2430, 4.3864, 1.6203, 2.2896], device='cuda:2'), covar=tensor([0.3176, 0.2387, 0.1921, 0.2258, 0.1754, 0.0099, 0.2428, 0.1252], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0114, 0.0118, 0.0121, 0.0117, 0.0099, 0.0101, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 05:51:56,504 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5222, 2.2364, 1.8564, 0.8835, 1.9657, 1.9439, 1.7609, 1.9434], device='cuda:2'), covar=tensor([0.0833, 0.0868, 0.1542, 0.2210, 0.1563, 0.2487, 0.2140, 0.1084], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0202, 0.0203, 0.0191, 0.0218, 0.0211, 0.0221, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:51:58,194 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.083e+02 1.556e+02 1.883e+02 2.215e+02 4.222e+02, threshold=3.767e+02, percent-clipped=2.0
2023-03-26 05:52:05,991 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7153, 2.3663, 1.9795, 0.9903, 2.1569, 2.0367, 1.8970, 2.1048], device='cuda:2'), covar=tensor([0.0715, 0.0831, 0.1495, 0.2213, 0.1424, 0.2040, 0.1933, 0.1042], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0202, 0.0203, 0.0191, 0.0217, 0.0211, 0.0221, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:52:09,311 INFO [finetune.py:976] (2/7) Epoch 6, batch 2000, loss[loss=0.1614, simple_loss=0.2318, pruned_loss=0.04553, over 4906.00 frames. ], tot_loss[loss=0.2135, simple_loss=0.2735, pruned_loss=0.07669, over 954475.87 frames. ], batch size: 32, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:52:14,239 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9235, 1.7355, 2.2856, 1.6474, 2.1508, 2.0942, 1.6195, 2.3378], device='cuda:2'), covar=tensor([0.1380, 0.1932, 0.1320, 0.1990, 0.0843, 0.1460, 0.2522, 0.0791], device='cuda:2'), in_proj_covar=tensor([0.0204, 0.0203, 0.0197, 0.0193, 0.0181, 0.0219, 0.0215, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:52:19,614 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=30655.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:52:56,172 INFO [finetune.py:976] (2/7) Epoch 6, batch 2050, loss[loss=0.1862, simple_loss=0.2605, pruned_loss=0.05594, over 4818.00 frames. ], tot_loss[loss=0.2099, simple_loss=0.2697, pruned_loss=0.07511, over 955846.74 frames. ], batch size: 39, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:52:59,700 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7026, 3.7915, 3.6050, 1.7125, 3.7502, 2.9765, 0.7735, 2.7351], device='cuda:2'), covar=tensor([0.2599, 0.1743, 0.1661, 0.3325, 0.1039, 0.0951, 0.4470, 0.1462], device='cuda:2'), in_proj_covar=tensor([0.0156, 0.0171, 0.0164, 0.0128, 0.0156, 0.0123, 0.0145, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 05:53:14,336 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=30716.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:53:14,398 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=30716.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 05:53:20,053 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.195e+02 1.687e+02 1.889e+02 2.318e+02 4.112e+02, threshold=3.779e+02, percent-clipped=3.0
2023-03-26 05:53:20,808 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4226, 2.3291, 2.0167, 1.1121, 2.2040, 1.8893, 1.6906, 2.1510], device='cuda:2'), covar=tensor([0.0881, 0.0732, 0.1468, 0.2237, 0.1526, 0.2251, 0.2159, 0.0987], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0201, 0.0202, 0.0190, 0.0216, 0.0209, 0.0221, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:53:40,713 INFO [finetune.py:976] (2/7) Epoch 6, batch 2100, loss[loss=0.2173, simple_loss=0.2841, pruned_loss=0.07524, over 4908.00 frames. ], tot_loss[loss=0.2105, simple_loss=0.2699, pruned_loss=0.0755, over 955941.85 frames. ], batch size: 36, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:54:11,805 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.1391, 3.5595, 3.7462, 3.9741, 3.8922, 3.6649, 4.2155, 1.4543], device='cuda:2'), covar=tensor([0.0744, 0.0877, 0.0757, 0.0971, 0.1187, 0.1616, 0.0707, 0.5118], device='cuda:2'), in_proj_covar=tensor([0.0355, 0.0243, 0.0276, 0.0293, 0.0332, 0.0283, 0.0303, 0.0297], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:54:13,623 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=30781.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:54:18,836 INFO [finetune.py:976] (2/7) Epoch 6, batch 2150, loss[loss=0.2262, simple_loss=0.2887, pruned_loss=0.08184, over 4856.00 frames. ], tot_loss[loss=0.2149, simple_loss=0.2746, pruned_loss=0.07757, over 957058.79 frames. ], batch size: 31, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:54:29,017 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1486, 1.3188, 1.3914, 0.6842, 1.1892, 1.5855, 1.5934, 1.3026], device='cuda:2'), covar=tensor([0.1041, 0.0570, 0.0423, 0.0544, 0.0450, 0.0463, 0.0332, 0.0559], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0158, 0.0121, 0.0138, 0.0133, 0.0124, 0.0147, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.8493e-05, 1.1702e-04, 8.7650e-05, 1.0067e-04, 9.5606e-05, 9.1582e-05, 1.0885e-04, 1.0769e-04], device='cuda:2')
2023-03-26 05:54:42,039 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.093e+02 1.708e+02 2.011e+02 2.625e+02 4.679e+02, threshold=4.022e+02, percent-clipped=7.0
2023-03-26 05:54:46,132 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=30829.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:54:52,229 INFO [finetune.py:976] (2/7) Epoch 6, batch 2200, loss[loss=0.179, simple_loss=0.2401, pruned_loss=0.05891, over 4765.00 frames. ], tot_loss[loss=0.2162, simple_loss=0.2761, pruned_loss=0.0782, over 956022.18 frames. ], batch size: 26, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:54:55,381 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.81 vs. limit=5.0
2023-03-26 05:55:16,386 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=30865.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:55:37,465 INFO [finetune.py:976] (2/7) Epoch 6, batch 2250, loss[loss=0.2058, simple_loss=0.2716, pruned_loss=0.07001, over 4865.00 frames. ], tot_loss[loss=0.2169, simple_loss=0.2773, pruned_loss=0.07822, over 957952.41 frames. ], batch size: 34, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:55:53,172 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=30903.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:56:10,383 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=30913.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:56:22,247 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.065e+02 1.611e+02 1.958e+02 2.302e+02 5.232e+02, threshold=3.915e+02, percent-clipped=2.0
2023-03-26 05:56:22,987 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=30925.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:56:34,284 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=30934.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:56:42,904 INFO [finetune.py:976] (2/7) Epoch 6, batch 2300, loss[loss=0.1949, simple_loss=0.2368, pruned_loss=0.07652, over 4087.00 frames. ], tot_loss[loss=0.2151, simple_loss=0.2764, pruned_loss=0.07688, over 957467.97 frames. ], batch size: 18, lr: 3.91e-03, grad_scale: 32.0
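Each per-batch "loss[...]" entry breaks the pruned-transducer objective into its simple and pruned terms, and the logged values themselves pin down how they are combined. A sketch of that combination; the scales match the entries in this log, though the warmup-dependent weighting used early in training is simplified away:

    def combine_transducer_losses(simple_loss: float,
                                  pruned_loss: float,
                                  simple_loss_scale: float = 0.5,
                                  pruned_loss_scale: float = 1.0) -> float:
        # the trivial-joiner "simple" loss stabilizes training; the pruned
        # loss is the full-joiner objective inside the pruned lattice bounds
        return simple_loss_scale * simple_loss + pruned_loss_scale * pruned_loss

    # Checks out against the entries above, e.g. batch 2250:
    # 0.5 * 0.2716 + 0.07001 = 0.2058, which is the logged loss.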
2023-03-26 05:57:06,627 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.99 vs. limit=2.0
2023-03-26 05:57:16,198 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=30964.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:57:27,168 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1427, 2.2776, 2.1636, 1.5201, 2.4919, 2.4289, 2.4479, 2.1043], device='cuda:2'), covar=tensor([0.0691, 0.0667, 0.0765, 0.0987, 0.0457, 0.0727, 0.0617, 0.0903], device='cuda:2'), in_proj_covar=tensor([0.0139, 0.0135, 0.0144, 0.0128, 0.0113, 0.0144, 0.0146, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 05:57:46,774 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=30986.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:57:49,007 INFO [finetune.py:976] (2/7) Epoch 6, batch 2350, loss[loss=0.2274, simple_loss=0.2884, pruned_loss=0.08325, over 4867.00 frames. ], tot_loss[loss=0.2137, simple_loss=0.2746, pruned_loss=0.07645, over 955126.52 frames. ], batch size: 31, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:57:57,474 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=30995.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:58:19,343 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=31011.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 05:58:19,370 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31011.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 05:58:28,249 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=31016.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:58:33,049 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.055e+02 1.621e+02 1.904e+02 2.250e+02 3.149e+02, threshold=3.808e+02, percent-clipped=0.0
2023-03-26 05:58:53,274 INFO [finetune.py:976] (2/7) Epoch 6, batch 2400, loss[loss=0.1922, simple_loss=0.263, pruned_loss=0.06067, over 4869.00 frames. ], tot_loss[loss=0.2133, simple_loss=0.2732, pruned_loss=0.07672, over 955431.20 frames. ], batch size: 31, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:59:16,909 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=31064.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:59:24,027 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31072.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 05:59:44,204 INFO [finetune.py:976] (2/7) Epoch 6, batch 2450, loss[loss=0.2621, simple_loss=0.3194, pruned_loss=0.1024, over 4050.00 frames. ], tot_loss[loss=0.213, simple_loss=0.2721, pruned_loss=0.07693, over 954718.43 frames. ], batch size: 65, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 06:00:21,952 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.316e+02 1.762e+02 2.221e+02 2.552e+02 5.044e+02, threshold=4.442e+02, percent-clipped=4.0
2023-03-26 06:00:30,922 INFO [finetune.py:976] (2/7) Epoch 6, batch 2500, loss[loss=0.1984, simple_loss=0.2668, pruned_loss=0.065, over 4779.00 frames. ], tot_loss[loss=0.2143, simple_loss=0.2732, pruned_loss=0.07765, over 953609.35 frames. ], batch size: 28, lr: 3.91e-03, grad_scale: 16.0
2023-03-26 06:00:56,631 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.82 vs. limit=5.0
2023-03-26 06:01:06,650 INFO [finetune.py:976] (2/7) Epoch 6, batch 2550, loss[loss=0.2538, simple_loss=0.3138, pruned_loss=0.09691, over 4834.00 frames. ], tot_loss[loss=0.2181, simple_loss=0.2779, pruned_loss=0.07911, over 952136.18 frames. ], batch size: 47, lr: 3.91e-03, grad_scale: 16.0
2023-03-26 06:01:28,418 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6396, 1.5413, 1.3947, 1.6897, 2.1156, 1.7132, 1.2777, 1.2821], device='cuda:2'), covar=tensor([0.2300, 0.2240, 0.1992, 0.1815, 0.1864, 0.1259, 0.2732, 0.1976], device='cuda:2'), in_proj_covar=tensor([0.0236, 0.0211, 0.0204, 0.0187, 0.0238, 0.0177, 0.0214, 0.0191], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 06:01:50,622 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.038e+02 1.680e+02 2.034e+02 2.315e+02 3.655e+02, threshold=4.067e+02, percent-clipped=0.0
2023-03-26 06:02:04,703 INFO [finetune.py:976] (2/7) Epoch 6, batch 2600, loss[loss=0.2516, simple_loss=0.314, pruned_loss=0.0946, over 4813.00 frames. ], tot_loss[loss=0.2183, simple_loss=0.2788, pruned_loss=0.07884, over 953494.61 frames. ], batch size: 40, lr: 3.91e-03, grad_scale: 16.0
2023-03-26 06:02:33,438 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=31259.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:02:34,688 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.0726, 1.0337, 1.1033, 0.4189, 0.7975, 1.1960, 1.2705, 1.0943], device='cuda:2'), covar=tensor([0.0865, 0.0539, 0.0433, 0.0528, 0.0499, 0.0544, 0.0340, 0.0609], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0157, 0.0119, 0.0137, 0.0132, 0.0124, 0.0145, 0.0144], device='cuda:2'), out_proj_covar=tensor([9.7765e-05, 1.1622e-04, 8.6598e-05, 9.9804e-05, 9.4630e-05, 9.1541e-05, 1.0779e-04, 1.0656e-04], device='cuda:2')
2023-03-26 06:02:36,880 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0265, 1.9256, 1.7035, 1.9953, 1.8573, 1.8315, 1.8220, 2.5629], device='cuda:2'), covar=tensor([0.6193, 0.7505, 0.5066, 0.6806, 0.6759, 0.3736, 0.7393, 0.2348], device='cuda:2'), in_proj_covar=tensor([0.0283, 0.0257, 0.0219, 0.0283, 0.0240, 0.0202, 0.0245, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 06:02:55,613 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9334, 1.7193, 2.3058, 1.5806, 2.2586, 2.2920, 1.6734, 2.5115], device='cuda:2'), covar=tensor([0.1566, 0.2246, 0.1528, 0.2237, 0.0878, 0.1512, 0.2783, 0.0813], device='cuda:2'), in_proj_covar=tensor([0.0208, 0.0208, 0.0201, 0.0197, 0.0185, 0.0223, 0.0220, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 06:03:04,534 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=31281.0, num_to_drop=0, layers_to_drop=set()
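The "tot_loss[..., over N frames.]" field is a running, frame-weighted summary over recent batches rather than an exact epoch mean, which is why the frame count hovers around 950k instead of growing without bound. A hedged sketch of such a tracker; the decay constant is an assumption chosen so that, at roughly 4800 frames per batch, the steady-state window is about 4800 / 0.005 = 960000 frames, close to the values logged here:

    class RunningLoss:
        def __init__(self, decay: float = 0.995):
            self.decay = decay
            self.frames = 0.0
            self.loss_sum = 0.0

        def update(self, batch_loss: float, batch_frames: float) -> None:
            # decay old totals, then add this batch's frame-weighted loss
            self.frames = self.decay * self.frames + batch_frames
            self.loss_sum = self.decay * self.loss_sum + batch_loss * batch_frames

        @property
        def tot_loss(self) -> float:
            return self.loss_sum / max(self.frames, 1.0)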
2023-03-26 06:03:09,261 INFO [finetune.py:976] (2/7) Epoch 6, batch 2650, loss[loss=0.1667, simple_loss=0.2347, pruned_loss=0.04935, over 4755.00 frames. ], tot_loss[loss=0.2184, simple_loss=0.2796, pruned_loss=0.07858, over 953616.26 frames. ], batch size: 26, lr: 3.91e-03, grad_scale: 16.0
2023-03-26 06:03:09,906 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=31290.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:03:38,130 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=31311.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:03:47,924 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.100e+02 1.761e+02 2.106e+02 2.462e+02 3.966e+02, threshold=4.213e+02, percent-clipped=0.0
2023-03-26 06:03:59,092 INFO [finetune.py:976] (2/7) Epoch 6, batch 2700, loss[loss=0.1523, simple_loss=0.2232, pruned_loss=0.0407, over 4703.00 frames. ], tot_loss[loss=0.2166, simple_loss=0.2781, pruned_loss=0.07753, over 955279.24 frames. ], batch size: 23, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:04:23,324 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=31359.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:04:33,876 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=31367.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 06:04:43,303 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7164, 1.6447, 2.0835, 1.3594, 1.9109, 2.0751, 1.5853, 2.2061], device='cuda:2'), covar=tensor([0.1604, 0.2590, 0.1389, 0.2248, 0.1112, 0.1538, 0.3005, 0.0968], device='cuda:2'), in_proj_covar=tensor([0.0206, 0.0207, 0.0199, 0.0196, 0.0185, 0.0222, 0.0219, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 06:05:04,057 INFO [finetune.py:976] (2/7) Epoch 6, batch 2750, loss[loss=0.1573, simple_loss=0.2264, pruned_loss=0.04404, over 4866.00 frames. ], tot_loss[loss=0.214, simple_loss=0.2753, pruned_loss=0.07635, over 956064.85 frames. ], batch size: 31, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:05:28,054 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31409.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:05:46,888 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.163e+02 1.636e+02 1.989e+02 2.265e+02 3.886e+02, threshold=3.977e+02, percent-clipped=0.0
2023-03-26 06:05:56,330 INFO [finetune.py:976] (2/7) Epoch 6, batch 2800, loss[loss=0.1766, simple_loss=0.2361, pruned_loss=0.05852, over 4761.00 frames. ], tot_loss[loss=0.2103, simple_loss=0.271, pruned_loss=0.07485, over 956618.33 frames. ], batch size: 26, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:06:03,093 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6923, 1.5427, 1.4438, 1.7259, 2.1195, 1.7564, 1.4790, 1.3809], device='cuda:2'), covar=tensor([0.2186, 0.2258, 0.1943, 0.1767, 0.1851, 0.1256, 0.2671, 0.1935], device='cuda:2'), in_proj_covar=tensor([0.0235, 0.0209, 0.0202, 0.0186, 0.0236, 0.0176, 0.0213, 0.0190], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 06:06:16,530 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31470.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:06:42,527 INFO [finetune.py:976] (2/7) Epoch 6, batch 2850, loss[loss=0.1522, simple_loss=0.2194, pruned_loss=0.04251, over 4762.00 frames. ], tot_loss[loss=0.2104, simple_loss=0.2704, pruned_loss=0.07522, over 956436.16 frames. ], batch size: 26, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:06:43,874 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31491.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 06:06:43,986 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.18 vs. limit=5.0
2023-03-26 06:06:55,462 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7707, 1.7102, 1.7716, 1.1769, 1.9090, 1.8640, 1.8380, 1.4233], device='cuda:2'), covar=tensor([0.0662, 0.0686, 0.0727, 0.0941, 0.0580, 0.0787, 0.0706, 0.1292], device='cuda:2'), in_proj_covar=tensor([0.0141, 0.0136, 0.0146, 0.0130, 0.0114, 0.0146, 0.0147, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 06:07:05,359 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.3730, 1.5652, 1.6029, 0.9047, 1.5401, 1.7925, 1.7709, 1.4189], device='cuda:2'), covar=tensor([0.0982, 0.0635, 0.0439, 0.0630, 0.0424, 0.0642, 0.0295, 0.0659], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0159, 0.0121, 0.0138, 0.0133, 0.0125, 0.0147, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.8398e-05, 1.1748e-04, 8.7430e-05, 1.0081e-04, 9.5738e-05, 9.2587e-05, 1.0920e-04, 1.0738e-04], device='cuda:2')
2023-03-26 06:07:24,076 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.78 vs. limit=2.0
2023-03-26 06:07:26,132 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.153e+02 1.576e+02 1.974e+02 2.520e+02 5.037e+02, threshold=3.948e+02, percent-clipped=2.0
2023-03-26 06:07:27,976 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4958, 1.2466, 1.2404, 1.2946, 1.6338, 1.5484, 1.4769, 1.2448], device='cuda:2'), covar=tensor([0.0240, 0.0315, 0.0597, 0.0309, 0.0224, 0.0464, 0.0247, 0.0380], device='cuda:2'), in_proj_covar=tensor([0.0087, 0.0112, 0.0138, 0.0117, 0.0104, 0.0101, 0.0091, 0.0109], device='cuda:2'), out_proj_covar=tensor([6.7880e-05, 8.7970e-05, 1.1023e-04, 9.2499e-05, 8.1711e-05, 7.4797e-05, 6.9084e-05, 8.4874e-05], device='cuda:2')
2023-03-26 06:07:33,893 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.06 vs. limit=5.0
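The lr column decays slowly from 3.92e-03 toward 3.90e-03 across this stretch, consistent with a schedule that anneals in both batches and epochs. A hedged reimplementation of such a schedule (icefall's Eden has this shape); the constants are illustrative, chosen so the output reproduces the values logged here:

    def eden_lr(batch: float, epoch: float,
                base_lr: float = 0.004,
                lr_batches: float = 100000.0,
                lr_epochs: float = 100.0) -> float:
        batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return base_lr * batch_factor * epoch_factor

    print(f"{eden_lr(29000, 6):.2e}")  # ~3.92e-03, as at the start of this section
    print(f"{eden_lr(32000, 6):.2e}")  # ~3.90e-03, as at batch_count ~32000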
2023-03-26 06:07:42,539 INFO [finetune.py:976] (2/7) Epoch 6, batch 2900, loss[loss=0.2346, simple_loss=0.3006, pruned_loss=0.0843, over 4835.00 frames. ], tot_loss[loss=0.2136, simple_loss=0.2735, pruned_loss=0.07681, over 955726.60 frames. ], batch size: 47, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:07:51,445 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2150, 2.0896, 1.6982, 0.9491, 1.8654, 1.8175, 1.6342, 1.9267], device='cuda:2'), covar=tensor([0.0806, 0.0666, 0.1241, 0.1824, 0.1349, 0.1686, 0.1834, 0.0832], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0202, 0.0201, 0.0190, 0.0217, 0.0209, 0.0221, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 06:07:55,748 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31552.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 06:08:04,328 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=31559.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:08:19,197 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=31581.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:08:27,346 INFO [finetune.py:976] (2/7) Epoch 6, batch 2950, loss[loss=0.2168, simple_loss=0.2824, pruned_loss=0.07563, over 4901.00 frames. ], tot_loss[loss=0.2172, simple_loss=0.2778, pruned_loss=0.07827, over 956905.13 frames. ], batch size: 37, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:08:33,426 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=31590.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:08:47,769 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=31607.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:08:59,951 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.136e+02 1.780e+02 2.105e+02 2.565e+02 4.082e+02, threshold=4.210e+02, percent-clipped=1.0
2023-03-26 06:09:02,952 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=31629.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:09:09,344 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=31638.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:09:09,892 INFO [finetune.py:976] (2/7) Epoch 6, batch 3000, loss[loss=0.1968, simple_loss=0.2777, pruned_loss=0.05797, over 4918.00 frames. ], tot_loss[loss=0.2197, simple_loss=0.2803, pruned_loss=0.07954, over 957900.84 frames. ], batch size: 42, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:09:09,893 INFO [finetune.py:1001] (2/7) Computing validation loss
2023-03-26 06:09:14,632 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3023, 1.2393, 1.2451, 1.2759, 1.5387, 1.4379, 1.3999, 1.1810], device='cuda:2'), covar=tensor([0.0390, 0.0306, 0.0501, 0.0286, 0.0258, 0.0404, 0.0255, 0.0384], device='cuda:2'), in_proj_covar=tensor([0.0086, 0.0111, 0.0137, 0.0116, 0.0103, 0.0100, 0.0090, 0.0108], device='cuda:2'), out_proj_covar=tensor([6.7290e-05, 8.7190e-05, 1.0945e-04, 9.1587e-05, 8.1257e-05, 7.4104e-05, 6.8327e-05, 8.4034e-05], device='cuda:2')
2023-03-26 06:09:23,490 INFO [finetune.py:1010] (2/7) Epoch 6, validation: loss=0.1625, simple_loss=0.2344, pruned_loss=0.04534, over 2265189.00 frames.
2023-03-26 06:09:23,491 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB
2023-03-26 06:09:55,817 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=31667.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 06:10:19,291 INFO [finetune.py:976] (2/7) Epoch 6, batch 3050, loss[loss=0.1911, simple_loss=0.261, pruned_loss=0.06061, over 4747.00 frames. ], tot_loss[loss=0.2201, simple_loss=0.2812, pruned_loss=0.07956, over 957173.20 frames. ], batch size: 59, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:10:28,746 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6976, 1.6348, 1.5258, 1.7257, 2.0779, 1.6529, 1.3801, 1.4941], device='cuda:2'), covar=tensor([0.1968, 0.2032, 0.1804, 0.1677, 0.1666, 0.1216, 0.2652, 0.1809], device='cuda:2'), in_proj_covar=tensor([0.0235, 0.0209, 0.0202, 0.0186, 0.0236, 0.0176, 0.0213, 0.0190], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 06:10:29,949 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31704.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:10:33,655 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7492, 1.6231, 1.4818, 1.4088, 1.8361, 1.5246, 1.8489, 1.7850], device='cuda:2'), covar=tensor([0.1756, 0.3191, 0.4089, 0.3273, 0.3237, 0.1989, 0.3405, 0.2299], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0192, 0.0237, 0.0254, 0.0232, 0.0191, 0.0211, 0.0191], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 06:10:36,650 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=31715.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 06:10:43,059 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.029e+02 1.708e+02 2.072e+02 2.420e+02 4.871e+02, threshold=4.144e+02, percent-clipped=1.0
2023-03-26 06:10:59,684 INFO [finetune.py:976] (2/7) Epoch 6, batch 3100, loss[loss=0.2042, simple_loss=0.2742, pruned_loss=0.06705, over 4902.00 frames. ], tot_loss[loss=0.2186, simple_loss=0.2794, pruned_loss=0.07893, over 955773.91 frames. ], batch size: 36, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:11:20,229 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=31765.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:11:20,276 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31765.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:11:35,784 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31788.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:11:36,281 INFO [finetune.py:976] (2/7) Epoch 6, batch 3150, loss[loss=0.1787, simple_loss=0.2407, pruned_loss=0.05829, over 4930.00 frames. ], tot_loss[loss=0.2147, simple_loss=0.2753, pruned_loss=0.07703, over 956347.10 frames. ], batch size: 38, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:12:00,899 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31823.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:12:01,992 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.748e+01 1.682e+02 2.020e+02 2.535e+02 4.605e+02, threshold=4.040e+02, percent-clipped=1.0
2023-03-26 06:12:15,952 INFO [finetune.py:976] (2/7) Epoch 6, batch 3200, loss[loss=0.2355, simple_loss=0.2811, pruned_loss=0.09495, over 4714.00 frames. ], tot_loss[loss=0.2115, simple_loss=0.2716, pruned_loss=0.07571, over 955431.23 frames. ], batch size: 54, lr: 3.90e-03, grad_scale: 16.0
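The "Computing validation loss" entry at batch 3000 above shows validation interleaved with training at a fixed batch interval, with the result reported as a frame-weighted loss over the whole dev set (2265189.00 frames here). A hedged sketch of that control flow; the interval of 3000 is inferred from the log, and model.loss_and_frames is a hypothetical helper standing in for the real loss computation:

    import torch

    def train_one_epoch(model, loader, valid_loader, optimizer,
                        valid_interval: int = 3000):
        for batch_idx, batch in enumerate(loader):
            loss = model(batch)           # training step (details elided)
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
            if batch_idx > 0 and batch_idx % valid_interval == 0:
                model.eval()
                with torch.no_grad():     # frame-weighted validation loss
                    tot, frames = 0.0, 0.0
                    for vbatch in valid_loader:
                        vloss, vframes = model.loss_and_frames(vbatch)  # hypothetical
                        tot += vloss * vframes
                        frames += vframes
                print(f"validation: loss={tot / frames:.4f}, over {frames:.2f} frames.")
                model.train()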
2023-03-26 06:12:25,650 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=31847.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 06:12:31,921 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31849.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:13:06,113 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31878.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:13:11,183 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31884.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:13:14,083 INFO [finetune.py:976] (2/7) Epoch 6, batch 3250, loss[loss=0.2634, simple_loss=0.303, pruned_loss=0.1119, over 4727.00 frames. ], tot_loss[loss=0.2118, simple_loss=0.2718, pruned_loss=0.07591, over 954891.74 frames. ], batch size: 59, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:13:51,964 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31915.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:14:01,793 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.169e+02 1.695e+02 2.048e+02 2.376e+02 4.231e+02, threshold=4.096e+02, percent-clipped=1.0
2023-03-26 06:14:17,633 INFO [finetune.py:976] (2/7) Epoch 6, batch 3300, loss[loss=0.2148, simple_loss=0.2762, pruned_loss=0.07674, over 4754.00 frames. ], tot_loss[loss=0.2155, simple_loss=0.2764, pruned_loss=0.07732, over 956513.98 frames. ], batch size: 27, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:14:17,764 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31939.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:14:29,142 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31950.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:14:43,700 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.53 vs. limit=2.0
2023-03-26 06:14:46,574 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31976.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:14:54,838 INFO [finetune.py:976] (2/7) Epoch 6, batch 3350, loss[loss=0.2265, simple_loss=0.2894, pruned_loss=0.08182, over 4836.00 frames. ], tot_loss[loss=0.218, simple_loss=0.2788, pruned_loss=0.07854, over 955618.85 frames. ], batch size: 49, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:15:11,470 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=32011.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:15:22,692 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.186e+02 1.743e+02 2.097e+02 2.540e+02 4.089e+02, threshold=4.194e+02, percent-clipped=0.0
2023-03-26 06:15:31,460 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1319, 2.0339, 1.6904, 2.1480, 1.9629, 1.9147, 1.9171, 2.8232], device='cuda:2'), covar=tensor([0.6492, 0.7628, 0.5285, 0.7341, 0.6533, 0.3738, 0.7683, 0.2456], device='cuda:2'), in_proj_covar=tensor([0.0282, 0.0257, 0.0220, 0.0283, 0.0239, 0.0203, 0.0245, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 06:15:41,803 INFO [finetune.py:976] (2/7) Epoch 6, batch 3400, loss[loss=0.1963, simple_loss=0.2689, pruned_loss=0.06186, over 4770.00 frames. ], tot_loss[loss=0.2177, simple_loss=0.2785, pruned_loss=0.07849, over 954440.63 frames. ], batch size: 28, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:16:04,434 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=32060.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:16:13,851 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32065.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:16:18,863 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8481, 1.7300, 1.5848, 1.9168, 2.4381, 1.9285, 1.4441, 1.4556], device='cuda:2'), covar=tensor([0.2310, 0.2235, 0.2037, 0.1809, 0.1847, 0.1263, 0.2769, 0.2047], device='cuda:2'), in_proj_covar=tensor([0.0234, 0.0208, 0.0202, 0.0185, 0.0237, 0.0175, 0.0213, 0.0189], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 06:16:41,401 INFO [finetune.py:976] (2/7) Epoch 6, batch 3450, loss[loss=0.1764, simple_loss=0.2422, pruned_loss=0.05533, over 4818.00 frames. ], tot_loss[loss=0.2151, simple_loss=0.2762, pruned_loss=0.07699, over 953318.93 frames. ], batch size: 30, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:17:01,931 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0
2023-03-26 06:17:07,733 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=32113.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:17:16,933 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.122e+02 1.573e+02 1.926e+02 2.516e+02 4.351e+02, threshold=3.853e+02, percent-clipped=2.0
2023-03-26 06:17:18,882 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.3174, 2.9944, 2.6723, 1.4623, 2.8047, 2.3186, 2.3652, 2.3286], device='cuda:2'), covar=tensor([0.0749, 0.1072, 0.1608, 0.2401, 0.1865, 0.2246, 0.1810, 0.1408], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0203, 0.0201, 0.0190, 0.0217, 0.0209, 0.0220, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 06:17:25,471 INFO [finetune.py:976] (2/7) Epoch 6, batch 3500, loss[loss=0.1892, simple_loss=0.2522, pruned_loss=0.06314, over 4694.00 frames. ], tot_loss[loss=0.2137, simple_loss=0.2747, pruned_loss=0.07629, over 953220.26 frames. ], batch size: 23, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:17:29,073 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=32144.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:17:30,891 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32147.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 06:18:09,968 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=32179.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:18:15,990 INFO [finetune.py:976] (2/7) Epoch 6, batch 3550, loss[loss=0.2109, simple_loss=0.2625, pruned_loss=0.07963, over 4757.00 frames. ], tot_loss[loss=0.2116, simple_loss=0.2721, pruned_loss=0.07559, over 954316.09 frames. ], batch size: 54, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:18:19,693 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=32195.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 06:18:34,839 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0828, 1.9689, 1.6545, 2.0179, 1.9330, 1.8399, 1.8014, 2.7535], device='cuda:2'), covar=tensor([0.6407, 0.8491, 0.5267, 0.7166, 0.7037, 0.3801, 0.7719, 0.2474], device='cuda:2'), in_proj_covar=tensor([0.0282, 0.0258, 0.0220, 0.0283, 0.0240, 0.0203, 0.0245, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 06:18:36,654 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0239, 1.8022, 1.5693, 1.7188, 1.7410, 1.7098, 1.6866, 2.5772], device='cuda:2'), covar=tensor([0.6161, 0.7487, 0.5113, 0.6503, 0.5660, 0.3411, 0.6272, 0.2271], device='cuda:2'), in_proj_covar=tensor([0.0282, 0.0258, 0.0220, 0.0283, 0.0239, 0.0203, 0.0245, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 06:18:40,466 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.015e+02 1.617e+02 1.889e+02 2.318e+02 4.823e+02, threshold=3.777e+02, percent-clipped=2.0
2023-03-26 06:18:52,023 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=32234.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:19:00,023 INFO [finetune.py:976] (2/7) Epoch 6, batch 3600, loss[loss=0.1598, simple_loss=0.2324, pruned_loss=0.04359, over 4754.00 frames. ], tot_loss[loss=0.2086, simple_loss=0.2688, pruned_loss=0.07426, over 954406.23 frames. ], batch size: 27, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:19:00,861 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.95 vs. limit=2.0
2023-03-26 06:19:36,023 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=32271.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:19:57,774 INFO [finetune.py:976] (2/7) Epoch 6, batch 3650, loss[loss=0.2121, simple_loss=0.2861, pruned_loss=0.06902, over 4823.00 frames. ], tot_loss[loss=0.2122, simple_loss=0.2721, pruned_loss=0.07615, over 956445.84 frames. ], batch size: 40, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:20:14,266 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=32306.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:20:18,585 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6422, 1.5271, 2.4083, 3.4826, 2.3659, 2.4847, 1.4329, 2.6599], device='cuda:2'), covar=tensor([0.1834, 0.1562, 0.1147, 0.0604, 0.0820, 0.1447, 0.1603, 0.0626], device='cuda:2'), in_proj_covar=tensor([0.0103, 0.0118, 0.0136, 0.0167, 0.0102, 0.0142, 0.0130, 0.0104], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2')
2023-03-26 06:20:26,743 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.020e+02 1.776e+02 2.129e+02 2.587e+02 5.126e+02, threshold=4.259e+02, percent-clipped=5.0
2023-03-26 06:20:43,049 INFO [finetune.py:976] (2/7) Epoch 6, batch 3700, loss[loss=0.2144, simple_loss=0.2824, pruned_loss=0.07325, over 4782.00 frames. ], tot_loss[loss=0.2138, simple_loss=0.2747, pruned_loss=0.07646, over 956742.21 frames. ], batch size: 51, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:20:53,495 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9609, 1.4242, 1.6481, 1.6672, 1.4944, 1.5636, 1.6070, 1.6865], device='cuda:2'), covar=tensor([0.6957, 0.7838, 0.6776, 0.7329, 0.9332, 0.6545, 0.9279, 0.6195], device='cuda:2'), in_proj_covar=tensor([0.0230, 0.0245, 0.0255, 0.0256, 0.0242, 0.0220, 0.0274, 0.0224], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2')
2023-03-26 06:20:56,523 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32360.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:20:57,817 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1874, 2.1311, 1.6962, 2.2293, 2.0897, 1.9843, 1.9173, 3.0609], device='cuda:2'), covar=tensor([0.6275, 0.8025, 0.5383, 0.6531, 0.6197, 0.3759, 0.6995, 0.2341], device='cuda:2'), in_proj_covar=tensor([0.0282, 0.0258, 0.0220, 0.0283, 0.0240, 0.0203, 0.0245, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 06:20:59,653 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.8606, 0.7834, 0.8141, 0.9385, 1.0675, 0.9922, 0.9233, 0.8517], device='cuda:2'), covar=tensor([0.0333, 0.0245, 0.0494, 0.0231, 0.0260, 0.0372, 0.0258, 0.0301], device='cuda:2'), in_proj_covar=tensor([0.0086, 0.0111, 0.0136, 0.0115, 0.0103, 0.0099, 0.0090, 0.0107], device='cuda:2'), out_proj_covar=tensor([6.7265e-05, 8.7208e-05, 1.0925e-04, 9.0943e-05, 8.1001e-05, 7.3202e-05, 6.8308e-05, 8.3592e-05], device='cuda:2')
2023-03-26 06:21:16,593 INFO [finetune.py:976] (2/7) Epoch 6, batch 3750, loss[loss=0.2332, simple_loss=0.3099, pruned_loss=0.07826, over 4852.00 frames. ], tot_loss[loss=0.2156, simple_loss=0.2768, pruned_loss=0.07722, over 955377.32 frames. ], batch size: 49, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:21:37,051 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=32408.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:21:45,639 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7760, 3.3958, 3.5452, 2.0206, 3.5135, 2.8131, 1.1988, 2.6966], device='cuda:2'), covar=tensor([0.2886, 0.1606, 0.1269, 0.2614, 0.1063, 0.0770, 0.3371, 0.1240], device='cuda:2'), in_proj_covar=tensor([0.0155, 0.0171, 0.0162, 0.0127, 0.0154, 0.0122, 0.0144, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 06:21:48,323 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.045e+02 1.626e+02 1.929e+02 2.294e+02 3.909e+02, threshold=3.857e+02, percent-clipped=0.0
2023-03-26 06:21:55,283 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9372, 1.6347, 2.2252, 3.5841, 2.5547, 2.5791, 0.8927, 2.7734], device='cuda:2'), covar=tensor([0.1639, 0.1494, 0.1311, 0.0553, 0.0716, 0.2107, 0.1883, 0.0567], device='cuda:2'), in_proj_covar=tensor([0.0102, 0.0118, 0.0135, 0.0166, 0.0102, 0.0141, 0.0128, 0.0103], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2')
2023-03-26 06:21:58,657 INFO [finetune.py:976] (2/7) Epoch 6, batch 3800, loss[loss=0.2069, simple_loss=0.2701, pruned_loss=0.0719, over 4898.00 frames. ], tot_loss[loss=0.2165, simple_loss=0.2778, pruned_loss=0.07759, over 953896.18 frames.
], batch size: 36, lr: 3.90e-03, grad_scale: 16.0 2023-03-26 06:22:01,775 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32444.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:22:10,322 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.50 vs. limit=2.0 2023-03-26 06:22:24,409 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32479.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:22:31,247 INFO [finetune.py:976] (2/7) Epoch 6, batch 3850, loss[loss=0.1794, simple_loss=0.2327, pruned_loss=0.06302, over 4926.00 frames. ], tot_loss[loss=0.2144, simple_loss=0.2754, pruned_loss=0.07672, over 953830.74 frames. ], batch size: 33, lr: 3.90e-03, grad_scale: 16.0 2023-03-26 06:22:33,636 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=32492.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:22:36,267 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.79 vs. limit=2.0 2023-03-26 06:22:53,999 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.617e+01 1.620e+02 2.056e+02 2.694e+02 5.558e+02, threshold=4.113e+02, percent-clipped=4.0 2023-03-26 06:22:54,220 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-26 06:22:55,288 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=32527.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:22:57,083 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5139, 2.3427, 1.9096, 0.9793, 2.1257, 1.9794, 1.6976, 1.9906], device='cuda:2'), covar=tensor([0.0995, 0.0856, 0.1717, 0.2272, 0.1759, 0.2441, 0.2397, 0.1241], device='cuda:2'), in_proj_covar=tensor([0.0167, 0.0200, 0.0199, 0.0188, 0.0214, 0.0207, 0.0220, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 06:23:00,101 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32534.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:23:04,461 INFO [finetune.py:976] (2/7) Epoch 6, batch 3900, loss[loss=0.2123, simple_loss=0.2704, pruned_loss=0.07706, over 4838.00 frames. ], tot_loss[loss=0.2115, simple_loss=0.2724, pruned_loss=0.07533, over 954493.02 frames. ], batch size: 47, lr: 3.90e-03, grad_scale: 16.0 2023-03-26 06:23:24,748 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32571.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:23:31,334 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=32582.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:23:36,048 INFO [finetune.py:976] (2/7) Epoch 6, batch 3950, loss[loss=0.1966, simple_loss=0.2635, pruned_loss=0.06487, over 4741.00 frames. ], tot_loss[loss=0.2079, simple_loss=0.2686, pruned_loss=0.07365, over 953562.60 frames. 
], batch size: 23, lr: 3.90e-03, grad_scale: 16.0 2023-03-26 06:23:53,886 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32606.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:24:07,657 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=32619.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:24:11,251 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.084e+02 1.672e+02 2.141e+02 2.509e+02 4.478e+02, threshold=4.281e+02, percent-clipped=2.0 2023-03-26 06:24:20,839 INFO [finetune.py:976] (2/7) Epoch 6, batch 4000, loss[loss=0.1966, simple_loss=0.2641, pruned_loss=0.06461, over 4910.00 frames. ], tot_loss[loss=0.2098, simple_loss=0.2698, pruned_loss=0.07489, over 954619.98 frames. ], batch size: 37, lr: 3.90e-03, grad_scale: 16.0 2023-03-26 06:24:24,088 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.50 vs. limit=2.0 2023-03-26 06:24:30,952 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=32654.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:25:04,476 INFO [finetune.py:976] (2/7) Epoch 6, batch 4050, loss[loss=0.1946, simple_loss=0.2427, pruned_loss=0.07322, over 4711.00 frames. ], tot_loss[loss=0.2133, simple_loss=0.2741, pruned_loss=0.07629, over 955945.94 frames. ], batch size: 23, lr: 3.90e-03, grad_scale: 16.0 2023-03-26 06:25:28,436 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.148e+02 1.831e+02 2.133e+02 2.637e+02 5.226e+02, threshold=4.267e+02, percent-clipped=1.0 2023-03-26 06:25:29,761 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5002, 1.3474, 1.4001, 1.4451, 1.0213, 3.2539, 1.3236, 1.7128], device='cuda:2'), covar=tensor([0.3620, 0.2711, 0.2329, 0.2563, 0.2178, 0.0225, 0.3027, 0.1481], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0115, 0.0119, 0.0123, 0.0119, 0.0099, 0.0102, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 06:25:42,720 INFO [finetune.py:976] (2/7) Epoch 6, batch 4100, loss[loss=0.2412, simple_loss=0.3036, pruned_loss=0.08943, over 4904.00 frames. ], tot_loss[loss=0.2132, simple_loss=0.2747, pruned_loss=0.07584, over 952521.85 frames. ], batch size: 36, lr: 3.90e-03, grad_scale: 16.0 2023-03-26 06:25:56,592 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4781, 1.4270, 2.0248, 1.7666, 1.6349, 3.8807, 1.2695, 1.7052], device='cuda:2'), covar=tensor([0.1052, 0.1922, 0.1432, 0.1108, 0.1669, 0.0202, 0.1588, 0.1794], device='cuda:2'), in_proj_covar=tensor([0.0077, 0.0081, 0.0077, 0.0079, 0.0092, 0.0083, 0.0084, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-26 06:26:34,195 INFO [finetune.py:976] (2/7) Epoch 6, batch 4150, loss[loss=0.1957, simple_loss=0.2682, pruned_loss=0.06159, over 4916.00 frames. ], tot_loss[loss=0.2153, simple_loss=0.2765, pruned_loss=0.07703, over 951507.65 frames. ], batch size: 38, lr: 3.90e-03, grad_scale: 16.0 2023-03-26 06:27:18,093 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.203e+02 1.784e+02 2.149e+02 2.523e+02 4.029e+02, threshold=4.299e+02, percent-clipped=0.0 2023-03-26 06:27:37,296 INFO [finetune.py:976] (2/7) Epoch 6, batch 4200, loss[loss=0.2054, simple_loss=0.2566, pruned_loss=0.07705, over 4722.00 frames. ], tot_loss[loss=0.2151, simple_loss=0.2771, pruned_loss=0.0766, over 953503.81 frames. 
], batch size: 23, lr: 3.90e-03, grad_scale: 16.0 2023-03-26 06:28:13,052 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8363, 1.6194, 2.0328, 1.4152, 1.8387, 1.9121, 1.5804, 2.1486], device='cuda:2'), covar=tensor([0.1363, 0.2007, 0.1503, 0.2149, 0.0982, 0.1620, 0.2602, 0.0816], device='cuda:2'), in_proj_covar=tensor([0.0203, 0.0204, 0.0198, 0.0195, 0.0183, 0.0219, 0.0216, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 06:28:34,720 INFO [finetune.py:976] (2/7) Epoch 6, batch 4250, loss[loss=0.2143, simple_loss=0.274, pruned_loss=0.07726, over 4934.00 frames. ], tot_loss[loss=0.2128, simple_loss=0.2742, pruned_loss=0.07568, over 954280.14 frames. ], batch size: 38, lr: 3.90e-03, grad_scale: 16.0 2023-03-26 06:29:04,552 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.48 vs. limit=2.0 2023-03-26 06:29:25,480 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.113e+02 1.595e+02 1.920e+02 2.303e+02 3.727e+02, threshold=3.841e+02, percent-clipped=0.0 2023-03-26 06:29:44,594 INFO [finetune.py:976] (2/7) Epoch 6, batch 4300, loss[loss=0.2343, simple_loss=0.2782, pruned_loss=0.09524, over 3933.00 frames. ], tot_loss[loss=0.2102, simple_loss=0.271, pruned_loss=0.07472, over 954024.03 frames. ], batch size: 17, lr: 3.90e-03, grad_scale: 16.0 2023-03-26 06:30:17,430 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=32966.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:30:47,043 INFO [finetune.py:976] (2/7) Epoch 6, batch 4350, loss[loss=0.2251, simple_loss=0.2887, pruned_loss=0.08074, over 4813.00 frames. ], tot_loss[loss=0.2085, simple_loss=0.2686, pruned_loss=0.07424, over 954352.05 frames. ], batch size: 41, lr: 3.90e-03, grad_scale: 16.0 2023-03-26 06:30:48,360 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.1528, 3.5838, 3.7629, 3.9885, 3.8968, 3.7127, 4.2757, 1.3998], device='cuda:2'), covar=tensor([0.0870, 0.0849, 0.0844, 0.1054, 0.1297, 0.1463, 0.0626, 0.5130], device='cuda:2'), in_proj_covar=tensor([0.0351, 0.0240, 0.0272, 0.0292, 0.0332, 0.0283, 0.0300, 0.0295], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 06:31:32,013 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.139e+02 1.730e+02 2.041e+02 2.589e+02 3.941e+02, threshold=4.082e+02, percent-clipped=1.0 2023-03-26 06:31:33,359 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=33027.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:31:50,662 INFO [finetune.py:976] (2/7) Epoch 6, batch 4400, loss[loss=0.222, simple_loss=0.274, pruned_loss=0.08504, over 4908.00 frames. ], tot_loss[loss=0.211, simple_loss=0.2708, pruned_loss=0.07564, over 953883.64 frames. 
], batch size: 35, lr: 3.90e-03, grad_scale: 16.0 2023-03-26 06:32:25,085 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6365, 3.2759, 3.1347, 1.3639, 3.4696, 2.5778, 0.9534, 2.2251], device='cuda:2'), covar=tensor([0.2474, 0.1688, 0.1571, 0.3333, 0.1120, 0.0988, 0.3822, 0.1532], device='cuda:2'), in_proj_covar=tensor([0.0154, 0.0170, 0.0161, 0.0127, 0.0154, 0.0121, 0.0144, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 06:32:31,754 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=33069.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:32:54,583 INFO [finetune.py:976] (2/7) Epoch 6, batch 4450, loss[loss=0.2211, simple_loss=0.2875, pruned_loss=0.07732, over 4924.00 frames. ], tot_loss[loss=0.2133, simple_loss=0.2739, pruned_loss=0.07636, over 954619.32 frames. ], batch size: 42, lr: 3.90e-03, grad_scale: 16.0 2023-03-26 06:33:39,235 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.174e+02 1.714e+02 2.125e+02 2.690e+02 4.211e+02, threshold=4.250e+02, percent-clipped=1.0 2023-03-26 06:33:47,536 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=33130.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:33:58,504 INFO [finetune.py:976] (2/7) Epoch 6, batch 4500, loss[loss=0.1836, simple_loss=0.2415, pruned_loss=0.06282, over 4564.00 frames. ], tot_loss[loss=0.2155, simple_loss=0.2761, pruned_loss=0.07742, over 950697.38 frames. ], batch size: 20, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:34:19,038 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=33155.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:34:59,966 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0630, 2.1778, 1.8181, 1.6945, 2.3282, 2.4530, 2.1794, 1.9447], device='cuda:2'), covar=tensor([0.0287, 0.0368, 0.0470, 0.0361, 0.0230, 0.0472, 0.0299, 0.0391], device='cuda:2'), in_proj_covar=tensor([0.0087, 0.0111, 0.0137, 0.0116, 0.0103, 0.0099, 0.0090, 0.0108], device='cuda:2'), out_proj_covar=tensor([6.7712e-05, 8.7207e-05, 1.0947e-04, 9.1130e-05, 8.1497e-05, 7.3750e-05, 6.8936e-05, 8.4566e-05], device='cuda:2') 2023-03-26 06:35:01,058 INFO [finetune.py:976] (2/7) Epoch 6, batch 4550, loss[loss=0.2122, simple_loss=0.2859, pruned_loss=0.06928, over 4899.00 frames. ], tot_loss[loss=0.2151, simple_loss=0.2764, pruned_loss=0.07692, over 952585.68 frames. ], batch size: 43, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:35:33,149 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=33216.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:35:44,645 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.755e+02 2.084e+02 2.335e+02 4.621e+02, threshold=4.168e+02, percent-clipped=2.0 2023-03-26 06:36:05,052 INFO [finetune.py:976] (2/7) Epoch 6, batch 4600, loss[loss=0.1955, simple_loss=0.2551, pruned_loss=0.06798, over 4819.00 frames. ], tot_loss[loss=0.2143, simple_loss=0.2757, pruned_loss=0.07648, over 952906.08 frames. 
], batch size: 30, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:36:23,884 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7626, 1.4134, 0.9686, 1.7539, 2.1417, 1.4988, 1.7889, 1.7442], device='cuda:2'), covar=tensor([0.1498, 0.2057, 0.2168, 0.1218, 0.2004, 0.1935, 0.1301, 0.1961], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0098, 0.0114, 0.0092, 0.0123, 0.0095, 0.0100, 0.0092], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 06:36:25,096 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1541, 2.1254, 2.0529, 1.4801, 2.3474, 2.2759, 2.1877, 1.9161], device='cuda:2'), covar=tensor([0.0662, 0.0634, 0.0896, 0.1054, 0.0482, 0.0756, 0.0674, 0.1010], device='cuda:2'), in_proj_covar=tensor([0.0140, 0.0134, 0.0145, 0.0128, 0.0114, 0.0146, 0.0146, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 06:37:06,570 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.0344, 2.5819, 2.3842, 1.3388, 2.6219, 2.2604, 1.9337, 2.2331], device='cuda:2'), covar=tensor([0.1111, 0.0980, 0.1850, 0.2206, 0.1641, 0.1840, 0.2116, 0.1361], device='cuda:2'), in_proj_covar=tensor([0.0167, 0.0200, 0.0199, 0.0187, 0.0214, 0.0206, 0.0218, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 06:37:07,545 INFO [finetune.py:976] (2/7) Epoch 6, batch 4650, loss[loss=0.1926, simple_loss=0.2409, pruned_loss=0.07215, over 4827.00 frames. ], tot_loss[loss=0.2116, simple_loss=0.2727, pruned_loss=0.07528, over 954306.54 frames. ], batch size: 39, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:37:37,415 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1031, 1.9676, 2.8130, 1.5400, 2.2041, 2.2393, 1.7813, 2.4187], device='cuda:2'), covar=tensor([0.1899, 0.2431, 0.1448, 0.2702, 0.1312, 0.2147, 0.3125, 0.1477], device='cuda:2'), in_proj_covar=tensor([0.0204, 0.0204, 0.0198, 0.0196, 0.0183, 0.0220, 0.0218, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 06:37:48,445 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=33322.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:37:50,278 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.46 vs. limit=5.0 2023-03-26 06:37:50,591 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.141e+02 1.572e+02 1.961e+02 2.434e+02 3.752e+02, threshold=3.921e+02, percent-clipped=0.0 2023-03-26 06:38:10,951 INFO [finetune.py:976] (2/7) Epoch 6, batch 4700, loss[loss=0.1793, simple_loss=0.2411, pruned_loss=0.05876, over 4821.00 frames. ], tot_loss[loss=0.2092, simple_loss=0.2696, pruned_loss=0.07438, over 952793.34 frames. ], batch size: 30, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:38:43,694 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-26 06:39:20,540 INFO [finetune.py:976] (2/7) Epoch 6, batch 4750, loss[loss=0.1591, simple_loss=0.2132, pruned_loss=0.05248, over 3995.00 frames. ], tot_loss[loss=0.2078, simple_loss=0.2678, pruned_loss=0.07389, over 953633.49 frames. 
], batch size: 17, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:39:21,263 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.5923, 1.5959, 1.5030, 0.9197, 1.5502, 1.7810, 1.8295, 1.3796], device='cuda:2'), covar=tensor([0.1096, 0.0629, 0.0454, 0.0605, 0.0438, 0.0462, 0.0322, 0.0668], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0158, 0.0121, 0.0138, 0.0133, 0.0125, 0.0147, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.7141e-05, 1.1695e-04, 8.7790e-05, 1.0057e-04, 9.5325e-05, 9.1966e-05, 1.0917e-04, 1.0765e-04], device='cuda:2') 2023-03-26 06:40:04,161 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.123e+02 1.633e+02 1.917e+02 2.350e+02 3.562e+02, threshold=3.835e+02, percent-clipped=0.0 2023-03-26 06:40:04,237 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=33425.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:40:24,233 INFO [finetune.py:976] (2/7) Epoch 6, batch 4800, loss[loss=0.2703, simple_loss=0.3193, pruned_loss=0.1107, over 4819.00 frames. ], tot_loss[loss=0.2104, simple_loss=0.2704, pruned_loss=0.07524, over 954119.55 frames. ], batch size: 40, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:41:07,474 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=33472.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:41:28,614 INFO [finetune.py:976] (2/7) Epoch 6, batch 4850, loss[loss=0.2128, simple_loss=0.2749, pruned_loss=0.07537, over 4759.00 frames. ], tot_loss[loss=0.2137, simple_loss=0.2742, pruned_loss=0.0766, over 955720.60 frames. ], batch size: 54, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:41:59,896 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=33511.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:42:14,016 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.198e+02 1.841e+02 2.130e+02 2.477e+02 4.983e+02, threshold=4.260e+02, percent-clipped=3.0 2023-03-26 06:42:22,900 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4506, 2.1479, 1.6389, 0.7196, 1.9475, 1.9860, 1.7391, 1.8264], device='cuda:2'), covar=tensor([0.0741, 0.0876, 0.1626, 0.2127, 0.1260, 0.2180, 0.2287, 0.0953], device='cuda:2'), in_proj_covar=tensor([0.0167, 0.0201, 0.0200, 0.0188, 0.0216, 0.0208, 0.0219, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 06:42:24,114 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=33533.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:42:32,809 INFO [finetune.py:976] (2/7) Epoch 6, batch 4900, loss[loss=0.2104, simple_loss=0.2779, pruned_loss=0.07151, over 4825.00 frames. ], tot_loss[loss=0.2157, simple_loss=0.2762, pruned_loss=0.07754, over 954021.92 frames. ], batch size: 33, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:42:35,327 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.3116, 2.9512, 3.0473, 3.2506, 3.0885, 2.9138, 3.3787, 1.0917], device='cuda:2'), covar=tensor([0.1141, 0.0945, 0.0968, 0.1180, 0.1760, 0.1787, 0.1026, 0.4760], device='cuda:2'), in_proj_covar=tensor([0.0351, 0.0240, 0.0273, 0.0292, 0.0332, 0.0282, 0.0301, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 06:43:03,715 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. 
limit=2.0 2023-03-26 06:43:28,529 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.3465, 3.0078, 2.7158, 1.4192, 2.8181, 2.3485, 2.3277, 2.3895], device='cuda:2'), covar=tensor([0.1028, 0.1085, 0.1624, 0.2489, 0.2004, 0.2238, 0.2074, 0.1456], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0202, 0.0201, 0.0189, 0.0217, 0.0208, 0.0220, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 06:43:36,449 INFO [finetune.py:976] (2/7) Epoch 6, batch 4950, loss[loss=0.2372, simple_loss=0.2903, pruned_loss=0.09206, over 4722.00 frames. ], tot_loss[loss=0.2161, simple_loss=0.2771, pruned_loss=0.0775, over 953450.62 frames. ], batch size: 59, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:44:20,483 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=33622.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:44:22,685 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.562e+01 1.670e+02 2.087e+02 2.382e+02 5.310e+02, threshold=4.173e+02, percent-clipped=3.0 2023-03-26 06:44:41,929 INFO [finetune.py:976] (2/7) Epoch 6, batch 5000, loss[loss=0.1771, simple_loss=0.2409, pruned_loss=0.05665, over 4856.00 frames. ], tot_loss[loss=0.2136, simple_loss=0.2745, pruned_loss=0.0764, over 953312.08 frames. ], batch size: 44, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:45:07,760 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8301, 1.8969, 1.3263, 1.9980, 1.9124, 1.6145, 2.7040, 1.9168], device='cuda:2'), covar=tensor([0.1560, 0.2841, 0.3808, 0.3335, 0.2943, 0.1804, 0.2755, 0.2242], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0191, 0.0234, 0.0254, 0.0232, 0.0191, 0.0212, 0.0191], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 06:45:14,313 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=33670.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:45:19,039 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=33677.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:45:26,105 INFO [finetune.py:976] (2/7) Epoch 6, batch 5050, loss[loss=0.1916, simple_loss=0.2607, pruned_loss=0.06126, over 4793.00 frames. ], tot_loss[loss=0.2114, simple_loss=0.2716, pruned_loss=0.07555, over 953750.29 frames. 
], batch size: 29, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:45:50,301 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.151e+02 1.621e+02 1.877e+02 2.380e+02 3.773e+02, threshold=3.754e+02, percent-clipped=0.0 2023-03-26 06:45:50,414 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=33725.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:45:52,878 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5842, 1.5299, 1.3281, 1.4703, 1.7904, 1.7131, 1.5969, 1.2587], device='cuda:2'), covar=tensor([0.0302, 0.0240, 0.0531, 0.0281, 0.0190, 0.0381, 0.0249, 0.0435], device='cuda:2'), in_proj_covar=tensor([0.0085, 0.0108, 0.0135, 0.0114, 0.0102, 0.0098, 0.0089, 0.0106], device='cuda:2'), out_proj_covar=tensor([6.6655e-05, 8.5304e-05, 1.0799e-04, 8.9922e-05, 8.0374e-05, 7.2547e-05, 6.7323e-05, 8.2882e-05], device='cuda:2') 2023-03-26 06:45:58,858 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=33738.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 06:45:59,300 INFO [finetune.py:976] (2/7) Epoch 6, batch 5100, loss[loss=0.1857, simple_loss=0.2522, pruned_loss=0.05959, over 4698.00 frames. ], tot_loss[loss=0.207, simple_loss=0.2674, pruned_loss=0.07329, over 953534.22 frames. ], batch size: 23, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:46:22,562 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=33773.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:46:22,706 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.18 vs. limit=5.0 2023-03-26 06:46:26,986 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0 2023-03-26 06:46:32,748 INFO [finetune.py:976] (2/7) Epoch 6, batch 5150, loss[loss=0.2198, simple_loss=0.2849, pruned_loss=0.07734, over 4848.00 frames. ], tot_loss[loss=0.2064, simple_loss=0.2669, pruned_loss=0.07293, over 953647.27 frames. ], batch size: 49, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:46:48,264 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=33811.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:46:57,549 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.284e+02 1.768e+02 2.107e+02 2.614e+02 3.782e+02, threshold=4.214e+02, percent-clipped=1.0 2023-03-26 06:46:59,466 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=33828.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:47:06,627 INFO [finetune.py:976] (2/7) Epoch 6, batch 5200, loss[loss=0.2537, simple_loss=0.3082, pruned_loss=0.09965, over 4865.00 frames. ], tot_loss[loss=0.2127, simple_loss=0.2733, pruned_loss=0.07604, over 953338.28 frames. ], batch size: 31, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:47:19,510 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=33859.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:47:44,970 INFO [finetune.py:976] (2/7) Epoch 6, batch 5250, loss[loss=0.2481, simple_loss=0.2994, pruned_loss=0.09844, over 4155.00 frames. ], tot_loss[loss=0.2152, simple_loss=0.276, pruned_loss=0.07717, over 950716.38 frames. 
], batch size: 65, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:47:52,499 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8000, 1.7554, 1.5281, 1.8976, 2.3305, 1.8197, 1.5331, 1.4010], device='cuda:2'), covar=tensor([0.2117, 0.2169, 0.1999, 0.1692, 0.1950, 0.1239, 0.2699, 0.1969], device='cuda:2'), in_proj_covar=tensor([0.0236, 0.0210, 0.0204, 0.0186, 0.0238, 0.0176, 0.0213, 0.0191], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 06:48:10,021 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.099e+02 1.728e+02 2.112e+02 2.559e+02 4.196e+02, threshold=4.224e+02, percent-clipped=0.0 2023-03-26 06:48:18,714 INFO [finetune.py:976] (2/7) Epoch 6, batch 5300, loss[loss=0.2017, simple_loss=0.2736, pruned_loss=0.06493, over 4802.00 frames. ], tot_loss[loss=0.2155, simple_loss=0.2767, pruned_loss=0.07715, over 952596.60 frames. ], batch size: 51, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:48:19,398 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=33939.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:48:34,122 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8260, 4.5109, 4.2602, 2.2207, 4.6668, 3.5272, 0.8593, 3.2302], device='cuda:2'), covar=tensor([0.2499, 0.1519, 0.1319, 0.3157, 0.0790, 0.0807, 0.4613, 0.1308], device='cuda:2'), in_proj_covar=tensor([0.0156, 0.0173, 0.0163, 0.0128, 0.0156, 0.0123, 0.0147, 0.0125], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 06:48:53,985 INFO [finetune.py:976] (2/7) Epoch 6, batch 5350, loss[loss=0.1704, simple_loss=0.2316, pruned_loss=0.05458, over 4753.00 frames. ], tot_loss[loss=0.2153, simple_loss=0.2766, pruned_loss=0.07697, over 954531.87 frames. ], batch size: 27, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:49:07,564 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=34000.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:49:40,310 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.730e+02 2.013e+02 2.437e+02 5.230e+02, threshold=4.026e+02, percent-clipped=3.0 2023-03-26 06:49:50,501 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=34033.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 06:49:59,342 INFO [finetune.py:976] (2/7) Epoch 6, batch 5400, loss[loss=0.2312, simple_loss=0.285, pruned_loss=0.08866, over 4908.00 frames. ], tot_loss[loss=0.2132, simple_loss=0.2738, pruned_loss=0.07629, over 955297.50 frames. ], batch size: 37, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:50:02,511 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=34044.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:50:30,099 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. 
limit=2.0 2023-03-26 06:50:44,494 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5333, 1.4895, 1.9893, 3.4186, 2.2880, 2.2248, 0.9323, 2.7361], device='cuda:2'), covar=tensor([0.1813, 0.1421, 0.1360, 0.0516, 0.0792, 0.1332, 0.1997, 0.0523], device='cuda:2'), in_proj_covar=tensor([0.0102, 0.0118, 0.0137, 0.0167, 0.0103, 0.0141, 0.0130, 0.0103], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2') 2023-03-26 06:50:51,575 INFO [finetune.py:976] (2/7) Epoch 6, batch 5450, loss[loss=0.193, simple_loss=0.2533, pruned_loss=0.06638, over 4870.00 frames. ], tot_loss[loss=0.2106, simple_loss=0.271, pruned_loss=0.07508, over 956841.29 frames. ], batch size: 31, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:51:04,657 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=34105.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:51:15,351 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0854, 1.8267, 2.3453, 1.4916, 2.2112, 2.2631, 1.6872, 2.5437], device='cuda:2'), covar=tensor([0.1429, 0.1983, 0.1380, 0.2218, 0.0967, 0.1642, 0.2905, 0.0870], device='cuda:2'), in_proj_covar=tensor([0.0202, 0.0203, 0.0198, 0.0195, 0.0181, 0.0219, 0.0216, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 06:51:17,059 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.064e+02 1.588e+02 1.777e+02 2.163e+02 4.002e+02, threshold=3.553e+02, percent-clipped=0.0 2023-03-26 06:51:19,939 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=34128.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:51:25,012 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9217, 1.7410, 1.7694, 1.8199, 1.4879, 3.8230, 1.6808, 2.3335], device='cuda:2'), covar=tensor([0.3117, 0.2362, 0.1903, 0.2301, 0.1664, 0.0149, 0.2405, 0.1133], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0114, 0.0118, 0.0122, 0.0117, 0.0098, 0.0102, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:2') 2023-03-26 06:51:27,293 INFO [finetune.py:976] (2/7) Epoch 6, batch 5500, loss[loss=0.2032, simple_loss=0.2716, pruned_loss=0.06743, over 4847.00 frames. ], tot_loss[loss=0.2073, simple_loss=0.2675, pruned_loss=0.07358, over 954904.45 frames. ], batch size: 44, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:51:31,027 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=34145.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:51:50,699 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=34176.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:51:50,753 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7612, 1.6009, 1.5574, 1.7153, 1.1907, 3.5896, 1.3980, 2.1508], device='cuda:2'), covar=tensor([0.3159, 0.2339, 0.1988, 0.2297, 0.1859, 0.0167, 0.2482, 0.1152], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0114, 0.0117, 0.0122, 0.0117, 0.0098, 0.0101, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:2') 2023-03-26 06:52:00,938 INFO [finetune.py:976] (2/7) Epoch 6, batch 5550, loss[loss=0.217, simple_loss=0.2961, pruned_loss=0.06892, over 4862.00 frames. ], tot_loss[loss=0.2079, simple_loss=0.2684, pruned_loss=0.07366, over 954674.11 frames. 
], batch size: 44, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:52:05,977 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.88 vs. limit=5.0 2023-03-26 06:52:15,683 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=34206.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:52:37,389 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.199e+02 1.703e+02 1.979e+02 2.376e+02 4.570e+02, threshold=3.959e+02, percent-clipped=2.0 2023-03-26 06:52:55,405 INFO [finetune.py:976] (2/7) Epoch 6, batch 5600, loss[loss=0.2156, simple_loss=0.2814, pruned_loss=0.07494, over 4831.00 frames. ], tot_loss[loss=0.2124, simple_loss=0.2736, pruned_loss=0.07558, over 954175.46 frames. ], batch size: 49, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:52:55,515 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8685, 2.5920, 2.1114, 1.1789, 2.3828, 2.1882, 1.9790, 2.3512], device='cuda:2'), covar=tensor([0.0746, 0.0873, 0.1647, 0.2156, 0.1483, 0.2162, 0.2068, 0.0962], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0201, 0.0201, 0.0189, 0.0216, 0.0207, 0.0219, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 06:53:54,498 INFO [finetune.py:976] (2/7) Epoch 6, batch 5650, loss[loss=0.2977, simple_loss=0.3432, pruned_loss=0.1261, over 4748.00 frames. ], tot_loss[loss=0.2147, simple_loss=0.2769, pruned_loss=0.07622, over 956213.66 frames. ], batch size: 59, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:53:58,424 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=34295.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:54:29,892 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.51 vs. limit=5.0 2023-03-26 06:54:35,325 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.805e+01 1.675e+02 2.029e+02 2.481e+02 4.265e+02, threshold=4.057e+02, percent-clipped=3.0 2023-03-26 06:54:40,160 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=34333.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:54:48,433 INFO [finetune.py:976] (2/7) Epoch 6, batch 5700, loss[loss=0.1981, simple_loss=0.2381, pruned_loss=0.07908, over 4284.00 frames. ], tot_loss[loss=0.2104, simple_loss=0.271, pruned_loss=0.07486, over 933398.38 frames. ], batch size: 18, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:55:38,389 INFO [finetune.py:976] (2/7) Epoch 7, batch 0, loss[loss=0.2045, simple_loss=0.2733, pruned_loss=0.06788, over 4773.00 frames. ], tot_loss[loss=0.2045, simple_loss=0.2733, pruned_loss=0.06788, over 4773.00 frames. ], batch size: 26, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:55:38,389 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 06:55:55,927 INFO [finetune.py:1010] (2/7) Epoch 7, validation: loss=0.165, simple_loss=0.2365, pruned_loss=0.04677, over 2265189.00 frames. 
2023-03-26 06:55:55,927 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB 2023-03-26 06:56:14,684 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=34381.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:56:16,535 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9229, 1.7944, 1.4921, 1.6487, 1.6493, 1.5760, 1.7018, 2.3654], device='cuda:2'), covar=tensor([0.5313, 0.5187, 0.4397, 0.5193, 0.4971, 0.3301, 0.5107, 0.2088], device='cuda:2'), in_proj_covar=tensor([0.0282, 0.0258, 0.0220, 0.0282, 0.0240, 0.0204, 0.0244, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 06:56:36,606 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=34400.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:56:59,197 INFO [finetune.py:976] (2/7) Epoch 7, batch 50, loss[loss=0.186, simple_loss=0.2677, pruned_loss=0.05211, over 4844.00 frames. ], tot_loss[loss=0.2103, simple_loss=0.2723, pruned_loss=0.07409, over 216017.83 frames. ], batch size: 44, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:57:09,544 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.105e+02 1.600e+02 2.022e+02 2.565e+02 5.766e+02, threshold=4.045e+02, percent-clipped=4.0 2023-03-26 06:57:48,626 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2365, 2.0672, 2.6768, 1.8254, 2.3836, 2.4763, 2.0235, 2.5826], device='cuda:2'), covar=tensor([0.1013, 0.1465, 0.1139, 0.1724, 0.0602, 0.1166, 0.1846, 0.0580], device='cuda:2'), in_proj_covar=tensor([0.0203, 0.0205, 0.0199, 0.0196, 0.0181, 0.0220, 0.0217, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 06:58:05,576 INFO [finetune.py:976] (2/7) Epoch 7, batch 100, loss[loss=0.2328, simple_loss=0.2827, pruned_loss=0.09142, over 4870.00 frames. ], tot_loss[loss=0.2086, simple_loss=0.2687, pruned_loss=0.0743, over 380485.78 frames. ], batch size: 31, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:58:35,955 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.74 vs. limit=2.0 2023-03-26 06:58:45,253 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=34501.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:59:06,468 INFO [finetune.py:976] (2/7) Epoch 7, batch 150, loss[loss=0.1431, simple_loss=0.2105, pruned_loss=0.03781, over 4834.00 frames. ], tot_loss[loss=0.2019, simple_loss=0.2621, pruned_loss=0.07084, over 508215.50 frames. ], batch size: 33, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:59:17,659 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.007e+02 1.586e+02 1.902e+02 2.328e+02 6.438e+02, threshold=3.804e+02, percent-clipped=3.0 2023-03-26 06:59:21,505 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1937, 1.9921, 1.7052, 1.9409, 2.1419, 1.8623, 2.4510, 2.0971], device='cuda:2'), covar=tensor([0.1569, 0.2809, 0.3720, 0.3111, 0.2675, 0.1819, 0.3262, 0.2068], device='cuda:2'), in_proj_covar=tensor([0.0171, 0.0192, 0.0237, 0.0257, 0.0234, 0.0193, 0.0214, 0.0193], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:00:10,777 INFO [finetune.py:976] (2/7) Epoch 7, batch 200, loss[loss=0.1735, simple_loss=0.2445, pruned_loss=0.05122, over 4798.00 frames. 
], tot_loss[loss=0.2059, simple_loss=0.2648, pruned_loss=0.07353, over 607149.33 frames. ], batch size: 29, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 07:00:19,839 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3262, 1.2575, 1.6334, 2.2766, 1.5689, 1.8301, 0.9665, 1.8577], device='cuda:2'), covar=tensor([0.1582, 0.1355, 0.1050, 0.0714, 0.0840, 0.1976, 0.1374, 0.0741], device='cuda:2'), in_proj_covar=tensor([0.0102, 0.0118, 0.0136, 0.0166, 0.0102, 0.0141, 0.0129, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2') 2023-03-26 07:00:42,256 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=34593.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:00:43,445 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=34595.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:01:13,425 INFO [finetune.py:976] (2/7) Epoch 7, batch 250, loss[loss=0.2155, simple_loss=0.279, pruned_loss=0.07598, over 4904.00 frames. ], tot_loss[loss=0.2111, simple_loss=0.2703, pruned_loss=0.07598, over 683181.94 frames. ], batch size: 35, lr: 3.88e-03, grad_scale: 32.0 2023-03-26 07:01:22,730 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=34622.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:01:24,450 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.284e+02 1.737e+02 2.023e+02 2.550e+02 3.958e+02, threshold=4.047e+02, percent-clipped=1.0 2023-03-26 07:01:25,175 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=34626.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:01:45,026 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=34643.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:01:53,825 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6722, 1.4135, 1.2772, 1.1013, 1.3954, 1.4153, 1.3630, 2.0192], device='cuda:2'), covar=tensor([0.6267, 0.6111, 0.4613, 0.6089, 0.5653, 0.3348, 0.5469, 0.2324], device='cuda:2'), in_proj_covar=tensor([0.0283, 0.0258, 0.0220, 0.0283, 0.0240, 0.0205, 0.0244, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:01:57,384 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=34654.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:02:14,153 INFO [finetune.py:976] (2/7) Epoch 7, batch 300, loss[loss=0.2662, simple_loss=0.3175, pruned_loss=0.1075, over 4824.00 frames. ], tot_loss[loss=0.2126, simple_loss=0.2727, pruned_loss=0.07624, over 742800.18 frames. ], batch size: 51, lr: 3.88e-03, grad_scale: 32.0 2023-03-26 07:02:14,902 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8520, 1.8301, 1.3494, 2.0108, 1.8774, 1.5475, 2.7301, 1.8057], device='cuda:2'), covar=tensor([0.1759, 0.2746, 0.4073, 0.3433, 0.3143, 0.2093, 0.2662, 0.2401], device='cuda:2'), in_proj_covar=tensor([0.0171, 0.0192, 0.0237, 0.0256, 0.0234, 0.0193, 0.0214, 0.0193], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:02:15,001 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-26 07:02:16,184 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. 
limit=2.0 2023-03-26 07:02:26,464 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8143, 1.6418, 1.4223, 1.4736, 1.5707, 1.5860, 1.5740, 2.3008], device='cuda:2'), covar=tensor([0.5903, 0.6165, 0.4577, 0.5695, 0.5113, 0.3317, 0.5647, 0.2205], device='cuda:2'), in_proj_covar=tensor([0.0283, 0.0258, 0.0220, 0.0283, 0.0240, 0.0205, 0.0245, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:02:34,474 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=34683.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 07:02:36,954 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=34687.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:02:36,989 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9289, 1.7431, 1.5307, 1.7585, 1.7045, 1.6618, 1.6549, 2.4003], device='cuda:2'), covar=tensor([0.5529, 0.6627, 0.4382, 0.5698, 0.5613, 0.3325, 0.5656, 0.2042], device='cuda:2'), in_proj_covar=tensor([0.0282, 0.0257, 0.0220, 0.0282, 0.0239, 0.0204, 0.0244, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:02:45,046 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7437, 1.5798, 1.5484, 1.6888, 1.1950, 3.6587, 1.3847, 2.0229], device='cuda:2'), covar=tensor([0.3248, 0.2353, 0.2069, 0.2300, 0.1858, 0.0181, 0.2631, 0.1288], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0114, 0.0117, 0.0122, 0.0116, 0.0098, 0.0101, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:2') 2023-03-26 07:02:55,168 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=34700.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:03:06,648 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=34710.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 07:03:15,681 INFO [finetune.py:976] (2/7) Epoch 7, batch 350, loss[loss=0.2135, simple_loss=0.2756, pruned_loss=0.07571, over 4810.00 frames. ], tot_loss[loss=0.2143, simple_loss=0.2747, pruned_loss=0.07695, over 791940.88 frames. ], batch size: 39, lr: 3.88e-03, grad_scale: 32.0 2023-03-26 07:03:17,433 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.6791, 1.6659, 1.6643, 0.9611, 1.7361, 1.9574, 1.8563, 1.5401], device='cuda:2'), covar=tensor([0.0936, 0.0605, 0.0512, 0.0611, 0.0422, 0.0497, 0.0326, 0.0640], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0159, 0.0121, 0.0139, 0.0132, 0.0125, 0.0146, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.7369e-05, 1.1726e-04, 8.7501e-05, 1.0108e-04, 9.4581e-05, 9.2095e-05, 1.0825e-04, 1.0785e-04], device='cuda:2') 2023-03-26 07:03:27,161 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.146e+02 1.664e+02 2.043e+02 2.496e+02 5.690e+02, threshold=4.087e+02, percent-clipped=3.0 2023-03-26 07:03:36,362 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-26 07:03:54,746 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=34748.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:04:16,438 INFO [finetune.py:976] (2/7) Epoch 7, batch 400, loss[loss=0.1826, simple_loss=0.252, pruned_loss=0.05658, over 4804.00 frames. ], tot_loss[loss=0.214, simple_loss=0.2752, pruned_loss=0.07637, over 829854.82 frames. 
], batch size: 45, lr: 3.88e-03, grad_scale: 32.0 2023-03-26 07:04:18,285 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6584, 1.4814, 1.5204, 1.6144, 1.0733, 3.7028, 1.4184, 2.0616], device='cuda:2'), covar=tensor([0.3368, 0.2550, 0.2037, 0.2286, 0.1984, 0.0170, 0.2521, 0.1281], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0114, 0.0117, 0.0121, 0.0116, 0.0098, 0.0101, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:2') 2023-03-26 07:04:24,032 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=34771.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 07:04:55,068 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=34801.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:05:14,337 INFO [finetune.py:976] (2/7) Epoch 7, batch 450, loss[loss=0.1786, simple_loss=0.2385, pruned_loss=0.05933, over 4749.00 frames. ], tot_loss[loss=0.2125, simple_loss=0.2739, pruned_loss=0.07557, over 858113.62 frames. ], batch size: 26, lr: 3.88e-03, grad_scale: 32.0 2023-03-26 07:05:22,434 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0 2023-03-26 07:05:25,979 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.133e+01 1.634e+02 1.827e+02 2.211e+02 3.915e+02, threshold=3.654e+02, percent-clipped=0.0 2023-03-26 07:05:50,913 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=34849.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:06:11,591 INFO [finetune.py:976] (2/7) Epoch 7, batch 500, loss[loss=0.2018, simple_loss=0.2612, pruned_loss=0.07114, over 4896.00 frames. ], tot_loss[loss=0.2104, simple_loss=0.271, pruned_loss=0.07494, over 879455.99 frames. ], batch size: 35, lr: 3.88e-03, grad_scale: 32.0 2023-03-26 07:07:15,211 INFO [finetune.py:976] (2/7) Epoch 7, batch 550, loss[loss=0.1716, simple_loss=0.2332, pruned_loss=0.05494, over 4733.00 frames. ], tot_loss[loss=0.2073, simple_loss=0.2673, pruned_loss=0.07367, over 897603.53 frames. ], batch size: 23, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:07:26,488 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.074e+02 1.634e+02 2.013e+02 2.383e+02 4.182e+02, threshold=4.026e+02, percent-clipped=3.0 2023-03-26 07:07:53,592 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=34949.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:08:14,352 INFO [finetune.py:976] (2/7) Epoch 7, batch 600, loss[loss=0.2417, simple_loss=0.3038, pruned_loss=0.0898, over 4930.00 frames. ], tot_loss[loss=0.2088, simple_loss=0.2685, pruned_loss=0.07458, over 908399.85 frames. 
], batch size: 33, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:08:31,410 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=34978.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 07:08:34,224 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5923, 1.4326, 1.4926, 1.5441, 1.1158, 3.2752, 1.2955, 1.8494], device='cuda:2'), covar=tensor([0.3225, 0.2453, 0.1951, 0.2302, 0.1895, 0.0204, 0.2738, 0.1263], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0114, 0.0118, 0.0122, 0.0117, 0.0098, 0.0101, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 07:08:34,793 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=34982.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:09:16,193 INFO [finetune.py:976] (2/7) Epoch 7, batch 650, loss[loss=0.2154, simple_loss=0.2776, pruned_loss=0.07656, over 4131.00 frames. ], tot_loss[loss=0.2124, simple_loss=0.2729, pruned_loss=0.076, over 918972.84 frames. ], batch size: 65, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:09:27,414 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.303e+02 1.723e+02 2.031e+02 2.472e+02 3.902e+02, threshold=4.061e+02, percent-clipped=0.0 2023-03-26 07:09:34,408 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7769, 1.5920, 1.3556, 1.4128, 1.5507, 1.5207, 1.5507, 2.2682], device='cuda:2'), covar=tensor([0.5639, 0.5940, 0.4354, 0.5211, 0.5180, 0.3090, 0.5525, 0.2127], device='cuda:2'), in_proj_covar=tensor([0.0283, 0.0258, 0.0221, 0.0283, 0.0241, 0.0205, 0.0245, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:09:38,938 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. limit=2.0 2023-03-26 07:10:17,354 INFO [finetune.py:976] (2/7) Epoch 7, batch 700, loss[loss=0.1904, simple_loss=0.2647, pruned_loss=0.058, over 4789.00 frames. ], tot_loss[loss=0.2133, simple_loss=0.2742, pruned_loss=0.07622, over 926205.14 frames. ], batch size: 29, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:10:17,430 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=35066.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 07:10:46,476 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=35092.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:11:04,983 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4567, 2.2266, 1.8100, 0.9024, 1.9775, 1.8902, 1.6441, 1.9027], device='cuda:2'), covar=tensor([0.0912, 0.0894, 0.1598, 0.2201, 0.1527, 0.2409, 0.2327, 0.1077], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0202, 0.0200, 0.0188, 0.0217, 0.0207, 0.0219, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:11:11,737 INFO [finetune.py:976] (2/7) Epoch 7, batch 750, loss[loss=0.2246, simple_loss=0.2821, pruned_loss=0.08352, over 4804.00 frames. ], tot_loss[loss=0.2146, simple_loss=0.2756, pruned_loss=0.07679, over 933087.84 frames. 
], batch size: 45, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:11:23,577 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.067e+02 1.641e+02 2.027e+02 2.429e+02 3.682e+02, threshold=4.054e+02, percent-clipped=0.0 2023-03-26 07:12:01,754 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=35153.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:12:14,609 INFO [finetune.py:976] (2/7) Epoch 7, batch 800, loss[loss=0.2319, simple_loss=0.276, pruned_loss=0.09393, over 4228.00 frames. ], tot_loss[loss=0.2146, simple_loss=0.2754, pruned_loss=0.07686, over 937136.39 frames. ], batch size: 18, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:12:43,033 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-26 07:13:17,535 INFO [finetune.py:976] (2/7) Epoch 7, batch 850, loss[loss=0.1979, simple_loss=0.2675, pruned_loss=0.06419, over 4840.00 frames. ], tot_loss[loss=0.2119, simple_loss=0.2726, pruned_loss=0.07566, over 941495.68 frames. ], batch size: 44, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:13:27,730 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.064e+02 1.645e+02 1.948e+02 2.227e+02 3.525e+02, threshold=3.897e+02, percent-clipped=0.0 2023-03-26 07:13:55,181 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=35249.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:14:14,655 INFO [finetune.py:976] (2/7) Epoch 7, batch 900, loss[loss=0.1581, simple_loss=0.2249, pruned_loss=0.04566, over 4923.00 frames. ], tot_loss[loss=0.2096, simple_loss=0.2698, pruned_loss=0.07473, over 944062.21 frames. ], batch size: 46, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:14:32,301 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=35278.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 07:14:35,278 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=35282.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:14:55,279 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=35297.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:15:16,599 INFO [finetune.py:976] (2/7) Epoch 7, batch 950, loss[loss=0.2237, simple_loss=0.2855, pruned_loss=0.08094, over 4816.00 frames. ], tot_loss[loss=0.2072, simple_loss=0.2675, pruned_loss=0.07347, over 946000.41 frames. 
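[Editor's note] The scaling.py:679 records compare a "whitening" metric of some activation tensor against a fixed limit (2.0 for the 8-group checks, 5.0 for the single-group ones); such a metric is typically used to apply a penalty gradient only when it exceeds the limit, which would explain the "metric=... vs. limit=..." comparisons. One natural metric with these properties measures how far the per-group channel covariance is from a multiple of the identity: it equals 1.0 for perfectly white features and grows with the eigenvalue spread. The formulation below is an assumption inferred from the logged fields, not a copy of icefall's scaling.py.

import torch


def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    # x: (num_frames, num_channels); channels are split into groups.
    n, c = x.shape
    x = x.reshape(n, num_groups, c // num_groups).transpose(0, 1)
    x = x - x.mean(dim=1, keepdim=True)   # centre each group over frames
    cov = x.transpose(1, 2) @ x / n       # (num_groups, c/g, c/g)
    eigs = torch.linalg.eigvalsh(cov)     # eigenvalues per group
    # mean(eig^2) / mean(eig)^2 >= 1, with equality iff the covariance is
    # a multiple of the identity (fully "whitened" activations).
    return (eigs ** 2).mean() / eigs.mean() ** 2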
], batch size: 38, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:15:25,980 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3366, 2.9004, 2.7832, 1.2692, 2.9335, 2.1979, 0.9820, 1.9285], device='cuda:2'), covar=tensor([0.2518, 0.2114, 0.1927, 0.3436, 0.1447, 0.1051, 0.3803, 0.1635], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0172, 0.0162, 0.0127, 0.0155, 0.0122, 0.0145, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 07:15:31,373 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.193e+02 1.515e+02 1.811e+02 2.306e+02 3.628e+02, threshold=3.621e+02, percent-clipped=0.0 2023-03-26 07:15:31,444 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=35326.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:15:31,498 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7574, 2.7623, 3.0336, 2.8816, 2.9125, 4.8673, 2.7437, 2.7977], device='cuda:2'), covar=tensor([0.0741, 0.1148, 0.0790, 0.0716, 0.1006, 0.0138, 0.0855, 0.1097], device='cuda:2'), in_proj_covar=tensor([0.0077, 0.0081, 0.0077, 0.0079, 0.0091, 0.0083, 0.0084, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-26 07:15:33,896 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=35330.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:15:50,901 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=35345.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 07:16:18,534 INFO [finetune.py:976] (2/7) Epoch 7, batch 1000, loss[loss=0.246, simple_loss=0.2962, pruned_loss=0.09792, over 4097.00 frames. ], tot_loss[loss=0.2087, simple_loss=0.269, pruned_loss=0.07418, over 944930.83 frames. ], batch size: 65, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:16:18,628 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=35366.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 07:16:27,465 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=35374.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:16:41,132 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-26 07:16:51,626 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7498, 2.4351, 1.8522, 0.9800, 2.0551, 2.1529, 1.8830, 2.1526], device='cuda:2'), covar=tensor([0.0809, 0.0760, 0.1654, 0.2117, 0.1502, 0.1958, 0.2055, 0.0962], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0202, 0.0200, 0.0189, 0.0217, 0.0207, 0.0219, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:17:08,092 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=35406.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 07:17:08,204 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.71 vs. limit=5.0 2023-03-26 07:17:17,933 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=35414.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 07:17:19,085 INFO [finetune.py:976] (2/7) Epoch 7, batch 1050, loss[loss=0.1602, simple_loss=0.2283, pruned_loss=0.04607, over 4776.00 frames. ], tot_loss[loss=0.2122, simple_loss=0.2733, pruned_loss=0.07556, over 946347.53 frames. 
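[Editor's note] The zipformer.py:2441 dumps print attn_weights_entropy tensors with eight values per row, plausibly one per attention head, alongside covariance statistics of the attention projection parameters. Per-head entropy of the attention weights is a standard diagnostic: values near zero mean a head focuses on very few positions, while values near log(seq_len) mean it attends almost uniformly. A minimal version follows, with the weight-tensor layout assumed.

import torch


def attn_weights_entropy(attn_weights: torch.Tensor) -> torch.Tensor:
    # attn_weights: (num_heads, batch, tgt_len, src_len); each row along
    # src_len is a probability distribution produced by softmax.
    p = attn_weights.clamp(min=1e-20)
    entropy = -(p * p.log()).sum(dim=-1)  # (num_heads, batch, tgt_len)
    return entropy.mean(dim=(1, 2))       # one scalar per head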
], batch size: 26, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:17:30,142 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.274e+02 1.696e+02 1.924e+02 2.371e+02 5.787e+02, threshold=3.848e+02, percent-clipped=4.0 2023-03-26 07:17:41,397 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=35435.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:17:42,010 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8966, 1.4349, 1.7796, 1.7585, 1.5091, 1.5193, 1.6659, 1.6065], device='cuda:2'), covar=tensor([0.4708, 0.5814, 0.5207, 0.5555, 0.6656, 0.5047, 0.6653, 0.4831], device='cuda:2'), in_proj_covar=tensor([0.0230, 0.0244, 0.0255, 0.0256, 0.0243, 0.0219, 0.0272, 0.0226], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:17:59,816 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=35448.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:18:20,379 INFO [finetune.py:976] (2/7) Epoch 7, batch 1100, loss[loss=0.196, simple_loss=0.2518, pruned_loss=0.07007, over 4783.00 frames. ], tot_loss[loss=0.215, simple_loss=0.2762, pruned_loss=0.07693, over 949851.84 frames. ], batch size: 29, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:18:32,355 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5907, 1.5038, 1.2893, 1.4138, 1.7753, 1.7115, 1.5677, 1.2945], device='cuda:2'), covar=tensor([0.0299, 0.0276, 0.0586, 0.0289, 0.0239, 0.0454, 0.0319, 0.0407], device='cuda:2'), in_proj_covar=tensor([0.0088, 0.0111, 0.0138, 0.0116, 0.0104, 0.0100, 0.0091, 0.0109], device='cuda:2'), out_proj_covar=tensor([6.8994e-05, 8.7162e-05, 1.1039e-04, 9.1428e-05, 8.2355e-05, 7.4077e-05, 6.8890e-05, 8.5148e-05], device='cuda:2') 2023-03-26 07:18:42,447 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8474, 1.8655, 1.9271, 1.4683, 2.0138, 2.0681, 1.9943, 1.5983], device='cuda:2'), covar=tensor([0.0530, 0.0527, 0.0626, 0.0811, 0.0573, 0.0536, 0.0465, 0.0887], device='cuda:2'), in_proj_covar=tensor([0.0137, 0.0133, 0.0142, 0.0126, 0.0112, 0.0143, 0.0144, 0.0160], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:19:13,898 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9958, 1.7540, 1.4899, 1.8196, 1.6621, 1.5884, 1.6444, 2.4917], device='cuda:2'), covar=tensor([0.5804, 0.6407, 0.4793, 0.5523, 0.5209, 0.3391, 0.5774, 0.2110], device='cuda:2'), in_proj_covar=tensor([0.0282, 0.0257, 0.0219, 0.0281, 0.0239, 0.0204, 0.0244, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:19:22,492 INFO [finetune.py:976] (2/7) Epoch 7, batch 1150, loss[loss=0.1631, simple_loss=0.2385, pruned_loss=0.04381, over 4859.00 frames. ], tot_loss[loss=0.216, simple_loss=0.2772, pruned_loss=0.07742, over 951240.18 frames. ], batch size: 31, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:19:33,760 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.207e+02 1.764e+02 2.068e+02 2.432e+02 4.937e+02, threshold=4.137e+02, percent-clipped=2.0 2023-03-26 07:19:41,287 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=35530.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:20:02,302 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.81 vs. 
limit=2.0 2023-03-26 07:20:24,974 INFO [finetune.py:976] (2/7) Epoch 7, batch 1200, loss[loss=0.1889, simple_loss=0.2527, pruned_loss=0.06257, over 4894.00 frames. ], tot_loss[loss=0.2153, simple_loss=0.2758, pruned_loss=0.07741, over 952706.41 frames. ], batch size: 35, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:20:51,612 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=35591.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:21:18,403 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2928, 1.2241, 1.2307, 1.2042, 1.5171, 1.4688, 1.3295, 1.1433], device='cuda:2'), covar=tensor([0.0348, 0.0254, 0.0485, 0.0312, 0.0221, 0.0343, 0.0284, 0.0372], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0111, 0.0138, 0.0116, 0.0104, 0.0100, 0.0091, 0.0109], device='cuda:2'), out_proj_covar=tensor([6.9121e-05, 8.7388e-05, 1.1045e-04, 9.1722e-05, 8.2185e-05, 7.3987e-05, 6.9139e-05, 8.5076e-05], device='cuda:2') 2023-03-26 07:21:20,130 INFO [finetune.py:976] (2/7) Epoch 7, batch 1250, loss[loss=0.2512, simple_loss=0.2885, pruned_loss=0.107, over 4909.00 frames. ], tot_loss[loss=0.2114, simple_loss=0.2715, pruned_loss=0.07564, over 953262.10 frames. ], batch size: 32, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:21:31,425 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.265e+02 1.688e+02 2.018e+02 2.672e+02 1.298e+03, threshold=4.035e+02, percent-clipped=4.0 2023-03-26 07:22:18,864 INFO [finetune.py:976] (2/7) Epoch 7, batch 1300, loss[loss=0.2226, simple_loss=0.2758, pruned_loss=0.08472, over 4762.00 frames. ], tot_loss[loss=0.2088, simple_loss=0.2689, pruned_loss=0.0744, over 953819.77 frames. ], batch size: 26, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:23:05,605 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=35701.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 07:23:25,710 INFO [finetune.py:976] (2/7) Epoch 7, batch 1350, loss[loss=0.2008, simple_loss=0.2684, pruned_loss=0.06657, over 4822.00 frames. ], tot_loss[loss=0.2089, simple_loss=0.2689, pruned_loss=0.0745, over 955203.55 frames. ], batch size: 40, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:23:29,870 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.8977, 3.4259, 3.5611, 3.7468, 3.6287, 3.4760, 3.9552, 1.1516], device='cuda:2'), covar=tensor([0.0898, 0.0891, 0.0905, 0.1050, 0.1390, 0.1589, 0.0841, 0.5711], device='cuda:2'), in_proj_covar=tensor([0.0351, 0.0241, 0.0272, 0.0291, 0.0331, 0.0282, 0.0302, 0.0295], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:23:38,292 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.109e+01 1.659e+02 1.871e+02 2.249e+02 4.421e+02, threshold=3.743e+02, percent-clipped=1.0 2023-03-26 07:23:40,804 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=35730.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:24:06,286 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=35748.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:24:25,446 INFO [finetune.py:976] (2/7) Epoch 7, batch 1400, loss[loss=0.2372, simple_loss=0.2937, pruned_loss=0.09034, over 4873.00 frames. ], tot_loss[loss=0.2113, simple_loss=0.2717, pruned_loss=0.07549, over 954280.06 frames. 
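[Editor's note] Each finetune.py:976 record pairs the current batch's loss ("loss[..., over ~4800 frames]") with a running "tot_loss[..., over ~950000 frames]". The tot_loss frame count plateaus near 200 times the per-batch frame count (e.g. 952706 / ~4800 is roughly 200), consistent with an exponentially decayed, frame-weighted running sum whose effective window is about 200 batches. The sketch below follows that inference; the window length is read off the logged frame counts, not from the code.

class RunningLoss:
    # Hypothetical helper reproducing the inferred behaviour of the
    # "tot_loss[...]" fields: a frame-weighted sum that decays by
    # (1 - 1/window) each batch, so old batches fade out smoothly.
    def __init__(self, window: int = 200):
        self.decay = 1.0 - 1.0 / window
        self.loss_sum = 0.0
        self.frames = 0.0

    def update(self, batch_loss: float, batch_frames: float) -> None:
        self.loss_sum = self.loss_sum * self.decay + batch_loss * batch_frames
        self.frames = self.frames * self.decay + batch_frames

    @property
    def value(self) -> float:
        return self.loss_sum / max(self.frames, 1.0)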
], batch size: 34, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:24:33,475 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=35771.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:24:54,454 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8858, 3.9670, 3.8618, 1.9010, 4.1080, 3.0141, 0.9707, 2.8431], device='cuda:2'), covar=tensor([0.2245, 0.1597, 0.1223, 0.2991, 0.0840, 0.0893, 0.4032, 0.1391], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0171, 0.0161, 0.0127, 0.0154, 0.0121, 0.0145, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 07:25:01,855 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=35796.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:25:20,925 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=35811.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:25:23,858 INFO [finetune.py:976] (2/7) Epoch 7, batch 1450, loss[loss=0.2038, simple_loss=0.2644, pruned_loss=0.07159, over 4747.00 frames. ], tot_loss[loss=0.2128, simple_loss=0.2736, pruned_loss=0.07597, over 953531.47 frames. ], batch size: 27, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:25:33,655 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.204e+02 1.737e+02 2.011e+02 2.560e+02 4.083e+02, threshold=4.021e+02, percent-clipped=3.0 2023-03-26 07:25:41,504 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.94 vs. limit=5.0 2023-03-26 07:25:43,667 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=35832.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:26:24,786 INFO [finetune.py:976] (2/7) Epoch 7, batch 1500, loss[loss=0.2393, simple_loss=0.2925, pruned_loss=0.09308, over 4853.00 frames. ], tot_loss[loss=0.2152, simple_loss=0.2761, pruned_loss=0.07716, over 954630.63 frames. ], batch size: 31, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:26:32,637 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=35872.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:26:47,883 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=35886.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:27:11,196 INFO [finetune.py:976] (2/7) Epoch 7, batch 1550, loss[loss=0.197, simple_loss=0.2636, pruned_loss=0.06517, over 4908.00 frames. ], tot_loss[loss=0.2132, simple_loss=0.2749, pruned_loss=0.07575, over 953892.21 frames. 
], batch size: 46, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:27:17,686 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.940e+01 1.555e+02 1.902e+02 2.352e+02 4.828e+02, threshold=3.804e+02, percent-clipped=1.0 2023-03-26 07:27:35,641 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2410, 2.3858, 2.3249, 1.7334, 2.4830, 2.5441, 2.4326, 2.0037], device='cuda:2'), covar=tensor([0.0595, 0.0505, 0.0680, 0.0859, 0.0414, 0.0661, 0.0589, 0.0912], device='cuda:2'), in_proj_covar=tensor([0.0139, 0.0135, 0.0144, 0.0127, 0.0114, 0.0145, 0.0146, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:27:36,812 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5928, 3.3965, 3.3096, 1.5320, 3.5282, 2.5115, 0.8232, 2.4001], device='cuda:2'), covar=tensor([0.2694, 0.1965, 0.1533, 0.3310, 0.1155, 0.1065, 0.4399, 0.1485], device='cuda:2'), in_proj_covar=tensor([0.0154, 0.0172, 0.0162, 0.0128, 0.0155, 0.0122, 0.0147, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 07:27:41,147 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.94 vs. limit=5.0 2023-03-26 07:27:45,068 INFO [finetune.py:976] (2/7) Epoch 7, batch 1600, loss[loss=0.1793, simple_loss=0.2451, pruned_loss=0.05679, over 4839.00 frames. ], tot_loss[loss=0.2106, simple_loss=0.2721, pruned_loss=0.07454, over 954804.47 frames. ], batch size: 44, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:28:04,736 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0 2023-03-26 07:28:25,869 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=36001.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 07:28:35,455 INFO [finetune.py:976] (2/7) Epoch 7, batch 1650, loss[loss=0.2451, simple_loss=0.2904, pruned_loss=0.09992, over 4829.00 frames. ], tot_loss[loss=0.209, simple_loss=0.2702, pruned_loss=0.07389, over 953392.10 frames. ], batch size: 33, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:28:41,915 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.242e+02 1.569e+02 1.867e+02 2.342e+02 3.778e+02, threshold=3.734e+02, percent-clipped=0.0 2023-03-26 07:28:50,702 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=36030.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:29:09,239 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=36049.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 07:29:25,210 INFO [finetune.py:976] (2/7) Epoch 7, batch 1700, loss[loss=0.1851, simple_loss=0.2493, pruned_loss=0.06042, over 4890.00 frames. ], tot_loss[loss=0.208, simple_loss=0.2687, pruned_loss=0.07365, over 955219.30 frames. ], batch size: 35, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:29:39,786 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=36078.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:30:15,626 INFO [finetune.py:976] (2/7) Epoch 7, batch 1750, loss[loss=0.2085, simple_loss=0.2791, pruned_loss=0.06893, over 4909.00 frames. ], tot_loss[loss=0.2092, simple_loss=0.2699, pruned_loss=0.07427, over 956230.45 frames. 
], batch size: 43, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:30:27,792 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.647e+02 1.960e+02 2.452e+02 4.962e+02, threshold=3.920e+02, percent-clipped=3.0 2023-03-26 07:30:28,485 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=36127.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:30:45,910 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. limit=2.0 2023-03-26 07:31:18,546 INFO [finetune.py:976] (2/7) Epoch 7, batch 1800, loss[loss=0.2006, simple_loss=0.263, pruned_loss=0.06912, over 4905.00 frames. ], tot_loss[loss=0.2109, simple_loss=0.2728, pruned_loss=0.07452, over 955567.63 frames. ], batch size: 36, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:31:19,217 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=36167.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:31:38,160 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36181.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:31:47,170 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=36186.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:31:57,619 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4807, 2.8482, 2.3265, 1.8732, 2.8195, 3.0519, 2.7405, 2.3387], device='cuda:2'), covar=tensor([0.0666, 0.0560, 0.0882, 0.0958, 0.0471, 0.0686, 0.0633, 0.1035], device='cuda:2'), in_proj_covar=tensor([0.0139, 0.0134, 0.0144, 0.0127, 0.0114, 0.0145, 0.0146, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:32:21,146 INFO [finetune.py:976] (2/7) Epoch 7, batch 1850, loss[loss=0.2242, simple_loss=0.2781, pruned_loss=0.0851, over 4872.00 frames. ], tot_loss[loss=0.2126, simple_loss=0.2746, pruned_loss=0.07527, over 956389.46 frames. ], batch size: 31, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:32:33,456 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.247e+02 1.736e+02 2.131e+02 2.651e+02 6.216e+02, threshold=4.263e+02, percent-clipped=3.0 2023-03-26 07:32:40,444 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=36234.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:32:50,344 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=36242.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:33:10,923 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.58 vs. limit=5.0 2023-03-26 07:33:19,083 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9592, 1.8082, 1.4949, 1.7135, 1.9495, 1.6496, 2.1580, 1.8849], device='cuda:2'), covar=tensor([0.1480, 0.2673, 0.3647, 0.2885, 0.2722, 0.1877, 0.3403, 0.2096], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0190, 0.0236, 0.0254, 0.0233, 0.0192, 0.0212, 0.0192], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:33:21,421 INFO [finetune.py:976] (2/7) Epoch 7, batch 1900, loss[loss=0.2148, simple_loss=0.2726, pruned_loss=0.07849, over 4861.00 frames. ], tot_loss[loss=0.2135, simple_loss=0.2758, pruned_loss=0.07559, over 957278.35 frames. ], batch size: 34, lr: 3.87e-03, grad_scale: 16.0 2023-03-26 07:34:25,473 INFO [finetune.py:976] (2/7) Epoch 7, batch 1950, loss[loss=0.1277, simple_loss=0.2104, pruned_loss=0.02252, over 4808.00 frames. 
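[Editor's note] The learning rate in these records decays very slowly: 3.88e-03 gives way to 3.87e-03 around batch 1900 above, and to 3.86e-03 by batch 3500 further down. That gradual drift in both batch count and epoch is characteristic of icefall's Eden scheduler; the commonly cited Eden rule is sketched below, but treat it as an assumption rather than a guaranteed match for this run.

def eden_lr(base_lr, batch, epoch, lr_batches, lr_epochs):
    # lr = base_lr * ((batch/lr_batches)^2 + 1)^-0.25
    #              * ((epoch/lr_epochs)^2 + 1)^-0.25
    batch_factor = ((batch / lr_batches) ** 2 + 1.0) ** -0.25
    epoch_factor = ((epoch / lr_epochs) ** 2 + 1.0) ** -0.25
    return base_lr * batch_factor * epoch_factor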
], tot_loss[loss=0.2108, simple_loss=0.2735, pruned_loss=0.07412, over 958772.68 frames. ], batch size: 41, lr: 3.87e-03, grad_scale: 16.0 2023-03-26 07:34:36,926 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.969e+01 1.685e+02 2.051e+02 2.475e+02 4.640e+02, threshold=4.103e+02, percent-clipped=3.0 2023-03-26 07:34:52,947 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4764, 2.3382, 1.9360, 1.0458, 2.1666, 1.8321, 1.6889, 2.0970], device='cuda:2'), covar=tensor([0.0975, 0.0807, 0.1590, 0.2237, 0.1381, 0.2373, 0.2183, 0.1107], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0201, 0.0198, 0.0187, 0.0215, 0.0205, 0.0219, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:35:04,046 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-26 07:35:28,761 INFO [finetune.py:976] (2/7) Epoch 7, batch 2000, loss[loss=0.2044, simple_loss=0.2683, pruned_loss=0.07026, over 4852.00 frames. ], tot_loss[loss=0.2089, simple_loss=0.2708, pruned_loss=0.07354, over 956342.13 frames. ], batch size: 47, lr: 3.87e-03, grad_scale: 16.0 2023-03-26 07:36:17,711 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.1760, 3.5603, 3.7555, 4.0025, 3.9264, 3.6540, 4.2211, 1.2172], device='cuda:2'), covar=tensor([0.0806, 0.0827, 0.0867, 0.1029, 0.1246, 0.1469, 0.0666, 0.5327], device='cuda:2'), in_proj_covar=tensor([0.0346, 0.0242, 0.0272, 0.0289, 0.0330, 0.0281, 0.0301, 0.0293], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:36:30,440 INFO [finetune.py:976] (2/7) Epoch 7, batch 2050, loss[loss=0.2222, simple_loss=0.2733, pruned_loss=0.08557, over 4901.00 frames. ], tot_loss[loss=0.2071, simple_loss=0.2684, pruned_loss=0.07286, over 957361.80 frames. ], batch size: 43, lr: 3.87e-03, grad_scale: 16.0 2023-03-26 07:36:40,724 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0318, 1.9659, 2.0411, 1.2509, 2.1832, 2.2300, 2.0784, 1.7164], device='cuda:2'), covar=tensor([0.0598, 0.0677, 0.0709, 0.1062, 0.0543, 0.0715, 0.0617, 0.1096], device='cuda:2'), in_proj_covar=tensor([0.0140, 0.0135, 0.0145, 0.0128, 0.0115, 0.0146, 0.0147, 0.0164], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:36:43,670 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 7.910e+01 1.532e+02 1.893e+02 2.218e+02 7.941e+02, threshold=3.786e+02, percent-clipped=2.0 2023-03-26 07:36:44,393 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=36427.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:37:34,744 INFO [finetune.py:976] (2/7) Epoch 7, batch 2100, loss[loss=0.1834, simple_loss=0.2503, pruned_loss=0.05821, over 4756.00 frames. ], tot_loss[loss=0.2062, simple_loss=0.2674, pruned_loss=0.07249, over 958034.45 frames. 
], batch size: 26, lr: 3.87e-03, grad_scale: 16.0 2023-03-26 07:37:35,444 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=36467.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:37:45,796 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=36475.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:38:36,450 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=36515.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:38:36,992 INFO [finetune.py:976] (2/7) Epoch 7, batch 2150, loss[loss=0.2002, simple_loss=0.2617, pruned_loss=0.06936, over 4769.00 frames. ], tot_loss[loss=0.2086, simple_loss=0.2699, pruned_loss=0.07365, over 957877.49 frames. ], batch size: 28, lr: 3.87e-03, grad_scale: 16.0 2023-03-26 07:38:48,004 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.129e+02 1.787e+02 2.211e+02 2.590e+02 5.595e+02, threshold=4.423e+02, percent-clipped=4.0 2023-03-26 07:38:56,725 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0 2023-03-26 07:39:00,125 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=36537.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:39:08,398 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8758, 1.7152, 1.4334, 1.6054, 1.8413, 1.6013, 2.0744, 1.8623], device='cuda:2'), covar=tensor([0.1601, 0.2802, 0.3818, 0.3362, 0.3127, 0.1980, 0.4397, 0.2226], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0191, 0.0236, 0.0255, 0.0234, 0.0192, 0.0211, 0.0193], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:39:35,927 INFO [finetune.py:976] (2/7) Epoch 7, batch 2200, loss[loss=0.1662, simple_loss=0.2176, pruned_loss=0.05736, over 4710.00 frames. ], tot_loss[loss=0.2111, simple_loss=0.2726, pruned_loss=0.07476, over 956477.73 frames. ], batch size: 23, lr: 3.87e-03, grad_scale: 16.0 2023-03-26 07:39:57,465 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7777, 1.4280, 2.2280, 1.5264, 1.9060, 2.0224, 1.4390, 2.0151], device='cuda:2'), covar=tensor([0.1433, 0.2002, 0.1137, 0.1803, 0.0982, 0.1469, 0.2556, 0.0970], device='cuda:2'), in_proj_covar=tensor([0.0200, 0.0203, 0.0195, 0.0194, 0.0179, 0.0218, 0.0215, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:40:25,375 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36605.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:40:27,770 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36608.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:40:30,244 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36612.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:40:38,177 INFO [finetune.py:976] (2/7) Epoch 7, batch 2250, loss[loss=0.1998, simple_loss=0.2755, pruned_loss=0.06211, over 4794.00 frames. ], tot_loss[loss=0.2108, simple_loss=0.2729, pruned_loss=0.07437, over 957068.14 frames. 
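[Editor's note] Each loss record carries three values. In the pruned-transducer recipes, simple_loss comes from a cheap linear joiner and pruned_loss from the full joiner evaluated only inside the pruned lattice region, with the reported loss their weighted sum. The records here are consistent with loss = 0.5 * simple_loss + pruned_loss: for batch 2150 above, 0.5 * 0.2699 + 0.07365 = 0.2086, exactly the logged tot_loss. A one-line helper for the bookkeeping; the 0.5 scale is inferred from this arithmetic, not quoted from the code.

def combine_losses(simple_loss: float, pruned_loss: float,
                   simple_loss_scale: float = 0.5) -> float:
    # loss = scale * simple_loss + pruned_loss, matching the logged fields.
    return simple_loss_scale * simple_loss + pruned_loss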
], batch size: 45, lr: 3.87e-03, grad_scale: 16.0 2023-03-26 07:40:38,914 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6411, 1.6209, 1.3502, 1.4112, 1.9613, 1.8659, 1.6327, 1.3945], device='cuda:2'), covar=tensor([0.0355, 0.0302, 0.0533, 0.0340, 0.0203, 0.0444, 0.0339, 0.0442], device='cuda:2'), in_proj_covar=tensor([0.0088, 0.0111, 0.0138, 0.0116, 0.0103, 0.0099, 0.0091, 0.0109], device='cuda:2'), out_proj_covar=tensor([6.8923e-05, 8.7021e-05, 1.1046e-04, 9.1187e-05, 8.1528e-05, 7.3702e-05, 6.8895e-05, 8.4581e-05], device='cuda:2') 2023-03-26 07:40:49,914 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.198e+02 1.735e+02 1.950e+02 2.446e+02 5.153e+02, threshold=3.899e+02, percent-clipped=1.0 2023-03-26 07:41:41,215 INFO [finetune.py:976] (2/7) Epoch 7, batch 2300, loss[loss=0.1985, simple_loss=0.2656, pruned_loss=0.06571, over 4756.00 frames. ], tot_loss[loss=0.2107, simple_loss=0.2731, pruned_loss=0.07413, over 955521.11 frames. ], batch size: 28, lr: 3.87e-03, grad_scale: 16.0 2023-03-26 07:41:41,332 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=36666.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:41:43,639 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=36669.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:41:51,491 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=36673.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:42:18,128 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36700.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 07:42:36,647 INFO [finetune.py:976] (2/7) Epoch 7, batch 2350, loss[loss=0.1933, simple_loss=0.2498, pruned_loss=0.06839, over 4761.00 frames. ], tot_loss[loss=0.2085, simple_loss=0.2705, pruned_loss=0.07329, over 954435.31 frames. ], batch size: 28, lr: 3.87e-03, grad_scale: 16.0 2023-03-26 07:42:44,859 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.55 vs. limit=2.0 2023-03-26 07:42:49,192 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.103e+02 1.509e+02 1.866e+02 2.321e+02 4.735e+02, threshold=3.732e+02, percent-clipped=2.0 2023-03-26 07:43:29,353 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=36761.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 07:43:37,712 INFO [finetune.py:976] (2/7) Epoch 7, batch 2400, loss[loss=0.2238, simple_loss=0.2772, pruned_loss=0.08523, over 4835.00 frames. ], tot_loss[loss=0.2071, simple_loss=0.2684, pruned_loss=0.07286, over 955085.23 frames. ], batch size: 33, lr: 3.87e-03, grad_scale: 16.0 2023-03-26 07:44:28,026 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36805.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 07:44:40,195 INFO [finetune.py:976] (2/7) Epoch 7, batch 2450, loss[loss=0.1805, simple_loss=0.2558, pruned_loss=0.05262, over 4806.00 frames. ], tot_loss[loss=0.2053, simple_loss=0.266, pruned_loss=0.07226, over 954525.61 frames. 
], batch size: 39, lr: 3.87e-03, grad_scale: 16.0 2023-03-26 07:44:51,697 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.798e+02 2.140e+02 2.594e+02 4.660e+02, threshold=4.281e+02, percent-clipped=3.0 2023-03-26 07:45:10,240 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=36837.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:45:49,126 INFO [finetune.py:976] (2/7) Epoch 7, batch 2500, loss[loss=0.2396, simple_loss=0.2824, pruned_loss=0.09836, over 4823.00 frames. ], tot_loss[loss=0.2087, simple_loss=0.2689, pruned_loss=0.07427, over 956044.94 frames. ], batch size: 30, lr: 3.87e-03, grad_scale: 16.0 2023-03-26 07:45:49,281 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=36866.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 07:46:12,539 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=36885.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:46:44,088 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36911.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:46:51,752 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7108, 3.9214, 3.6153, 1.8520, 4.0261, 3.0105, 0.6860, 2.6705], device='cuda:2'), covar=tensor([0.2435, 0.1899, 0.1605, 0.3396, 0.1034, 0.0970, 0.4720, 0.1535], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0172, 0.0161, 0.0128, 0.0153, 0.0122, 0.0145, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 07:46:52,300 INFO [finetune.py:976] (2/7) Epoch 7, batch 2550, loss[loss=0.2174, simple_loss=0.2936, pruned_loss=0.07062, over 4900.00 frames. ], tot_loss[loss=0.2108, simple_loss=0.2722, pruned_loss=0.0747, over 955470.54 frames. ], batch size: 37, lr: 3.87e-03, grad_scale: 32.0 2023-03-26 07:47:03,264 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.261e+02 1.620e+02 1.912e+02 2.307e+02 6.491e+02, threshold=3.825e+02, percent-clipped=1.0 2023-03-26 07:47:28,027 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-26 07:47:45,947 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36958.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:47:47,753 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=36961.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:47:55,199 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=36964.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:47:56,361 INFO [finetune.py:976] (2/7) Epoch 7, batch 2600, loss[loss=0.2279, simple_loss=0.2798, pruned_loss=0.08798, over 4870.00 frames. ], tot_loss[loss=0.2123, simple_loss=0.2738, pruned_loss=0.0754, over 955412.32 frames. 
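[Editor's note] grad_scale in these records is the fp16 loss scale. It halved from 32.0 to 16.0 just after batch 500 (an overflow halves the scale and skips that optimizer step) and returns to 32.0 at batch 2550 above, exactly 2000 batches later, matching the default growth_interval of torch.cuda.amp.GradScaler (double the scale after 2000 consecutive finite-gradient steps). A minimal training-step sketch; model(batch) returning a scalar loss is an assumption.

import torch

scaler = torch.cuda.amp.GradScaler(init_scale=32.0)

def train_step(model, batch, optimizer):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = model(batch)    # assumption: forward returns a scalar loss
    scaler.scale(loss).backward()
    scaler.step(optimizer)     # silently skipped if gradients overflowed
    scaler.update()            # halve on overflow, double after
                               # growth_interval clean steps
    return loss.detach(), scaler.get_scale()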
], batch size: 34, lr: 3.87e-03, grad_scale: 32.0 2023-03-26 07:47:57,653 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=36968.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:48:05,157 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=36972.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:48:32,625 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7514, 1.1955, 0.8047, 1.6630, 2.0358, 1.2018, 1.5124, 1.6197], device='cuda:2'), covar=tensor([0.1426, 0.2219, 0.2181, 0.1191, 0.2044, 0.2211, 0.1446, 0.1941], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0097, 0.0113, 0.0092, 0.0123, 0.0096, 0.0100, 0.0092], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 07:48:41,493 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=37004.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:48:53,698 INFO [finetune.py:976] (2/7) Epoch 7, batch 2650, loss[loss=0.1909, simple_loss=0.2566, pruned_loss=0.06263, over 4777.00 frames. ], tot_loss[loss=0.2118, simple_loss=0.2737, pruned_loss=0.07498, over 954855.50 frames. ], batch size: 26, lr: 3.87e-03, grad_scale: 32.0 2023-03-26 07:48:56,115 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=37019.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:49:05,452 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.083e+02 1.627e+02 1.954e+02 2.393e+02 3.704e+02, threshold=3.907e+02, percent-clipped=0.0 2023-03-26 07:49:24,524 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0058, 1.5257, 2.2236, 3.5732, 2.5166, 2.6051, 0.7766, 2.7171], device='cuda:2'), covar=tensor([0.1768, 0.1831, 0.1440, 0.0763, 0.0815, 0.2180, 0.2212, 0.0681], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0118, 0.0135, 0.0166, 0.0102, 0.0140, 0.0128, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2') 2023-03-26 07:49:41,694 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=37056.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 07:49:47,736 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=37065.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:49:48,223 INFO [finetune.py:976] (2/7) Epoch 7, batch 2700, loss[loss=0.1853, simple_loss=0.2483, pruned_loss=0.0612, over 4930.00 frames. ], tot_loss[loss=0.2115, simple_loss=0.2733, pruned_loss=0.07487, over 956201.56 frames. ], batch size: 38, lr: 3.87e-03, grad_scale: 32.0 2023-03-26 07:50:22,106 INFO [finetune.py:976] (2/7) Epoch 7, batch 2750, loss[loss=0.2225, simple_loss=0.2807, pruned_loss=0.08218, over 4910.00 frames. ], tot_loss[loss=0.2085, simple_loss=0.2702, pruned_loss=0.07342, over 956661.71 frames. ], batch size: 37, lr: 3.87e-03, grad_scale: 32.0 2023-03-26 07:50:28,698 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.127e+02 1.629e+02 1.991e+02 2.307e+02 4.303e+02, threshold=3.983e+02, percent-clipped=1.0 2023-03-26 07:50:58,193 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=37161.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 07:51:03,499 INFO [finetune.py:976] (2/7) Epoch 7, batch 2800, loss[loss=0.1843, simple_loss=0.2394, pruned_loss=0.06456, over 4869.00 frames. ], tot_loss[loss=0.2053, simple_loss=0.2666, pruned_loss=0.07198, over 954910.47 frames. 
], batch size: 31, lr: 3.87e-03, grad_scale: 32.0 2023-03-26 07:51:06,558 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.3115, 1.5445, 1.6736, 0.8847, 1.5320, 1.8580, 1.8852, 1.5495], device='cuda:2'), covar=tensor([0.0816, 0.0587, 0.0392, 0.0492, 0.0377, 0.0448, 0.0273, 0.0537], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0156, 0.0120, 0.0136, 0.0131, 0.0123, 0.0145, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.6120e-05, 1.1490e-04, 8.6959e-05, 9.9098e-05, 9.4108e-05, 9.0706e-05, 1.0694e-04, 1.0666e-04], device='cuda:2') 2023-03-26 07:52:09,314 INFO [finetune.py:976] (2/7) Epoch 7, batch 2850, loss[loss=0.173, simple_loss=0.2459, pruned_loss=0.05001, over 4764.00 frames. ], tot_loss[loss=0.2051, simple_loss=0.2662, pruned_loss=0.07196, over 957694.55 frames. ], batch size: 54, lr: 3.87e-03, grad_scale: 32.0 2023-03-26 07:52:20,862 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.100e+02 1.582e+02 1.929e+02 2.327e+02 4.539e+02, threshold=3.857e+02, percent-clipped=3.0 2023-03-26 07:52:36,750 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1778, 2.0322, 1.6222, 2.1857, 2.1656, 1.8133, 2.4359, 2.0753], device='cuda:2'), covar=tensor([0.1599, 0.2753, 0.3936, 0.3106, 0.2930, 0.1944, 0.3152, 0.2414], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0190, 0.0234, 0.0254, 0.0233, 0.0191, 0.0211, 0.0191], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:53:00,978 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37261.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:53:02,779 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37264.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:53:03,915 INFO [finetune.py:976] (2/7) Epoch 7, batch 2900, loss[loss=0.3349, simple_loss=0.3754, pruned_loss=0.1472, over 4814.00 frames. ], tot_loss[loss=0.2099, simple_loss=0.271, pruned_loss=0.07439, over 956910.51 frames. ], batch size: 51, lr: 3.87e-03, grad_scale: 32.0 2023-03-26 07:53:09,377 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=37267.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:53:10,005 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37268.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:53:10,018 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=37268.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:53:52,563 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-26 07:54:04,929 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=37309.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:54:06,759 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=37312.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:54:07,994 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=37314.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:54:13,844 INFO [finetune.py:976] (2/7) Epoch 7, batch 2950, loss[loss=0.2054, simple_loss=0.2737, pruned_loss=0.06859, over 4903.00 frames. ], tot_loss[loss=0.2128, simple_loss=0.274, pruned_loss=0.07579, over 954579.54 frames. 
], batch size: 37, lr: 3.87e-03, grad_scale: 32.0 2023-03-26 07:54:13,904 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=37316.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:54:25,441 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.193e+02 1.702e+02 2.045e+02 2.514e+02 5.908e+02, threshold=4.090e+02, percent-clipped=3.0 2023-03-26 07:54:27,384 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=37329.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:55:00,455 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37356.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 07:55:05,689 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=37360.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:55:08,109 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=37364.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:55:09,240 INFO [finetune.py:976] (2/7) Epoch 7, batch 3000, loss[loss=0.1766, simple_loss=0.2628, pruned_loss=0.04517, over 4800.00 frames. ], tot_loss[loss=0.215, simple_loss=0.276, pruned_loss=0.07701, over 954385.27 frames. ], batch size: 29, lr: 3.87e-03, grad_scale: 32.0 2023-03-26 07:55:09,241 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 07:55:17,823 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0731, 1.9150, 1.8453, 1.7551, 1.8061, 1.8500, 1.8359, 2.5274], device='cuda:2'), covar=tensor([0.5231, 0.6281, 0.4508, 0.5154, 0.4972, 0.3203, 0.5304, 0.2087], device='cuda:2'), in_proj_covar=tensor([0.0284, 0.0258, 0.0220, 0.0281, 0.0240, 0.0204, 0.0244, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:55:29,694 INFO [finetune.py:1010] (2/7) Epoch 7, validation: loss=0.161, simple_loss=0.2327, pruned_loss=0.04464, over 2265189.00 frames. 2023-03-26 07:55:29,695 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB 2023-03-26 07:55:37,367 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2660, 1.4607, 1.4702, 1.5903, 1.5574, 2.9071, 1.3576, 1.5488], device='cuda:2'), covar=tensor([0.1083, 0.1716, 0.1133, 0.0994, 0.1501, 0.0296, 0.1429, 0.1626], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0081, 0.0076, 0.0079, 0.0092, 0.0083, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-26 07:55:54,502 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=37404.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 07:56:01,847 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-26 07:56:02,292 INFO [finetune.py:976] (2/7) Epoch 7, batch 3050, loss[loss=0.236, simple_loss=0.3075, pruned_loss=0.0823, over 4907.00 frames. ], tot_loss[loss=0.2169, simple_loss=0.278, pruned_loss=0.07787, over 952957.22 frames. 
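[Editor's note] At batch 3000 the loop pauses to compute a validation loss over the full dev set: a frame-weighted average over 2,265,189 frames comes out at loss=0.161 (and 0.5 * 0.2327 + 0.04464 = 0.161, the same simple/pruned combination as the training records), after which peak CUDA memory is reported. A sketch of such a pass; the per-batch model outputs and the log format are assumptions.

import torch


def compute_validation_loss(model, valid_loader, device) -> float:
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    with torch.no_grad():
        for batch in valid_loader:
            loss, num_frames = model(batch)  # assumed per-batch outputs
            tot_loss += loss.item() * num_frames
            tot_frames += num_frames
    model.train()
    max_mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
    print(f"validation: loss={tot_loss / tot_frames:.4g}, "
          f"over {tot_frames} frames; max memory {max_mb}MB")
    return tot_loss / tot_frames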
], batch size: 37, lr: 3.87e-03, grad_scale: 32.0 2023-03-26 07:56:11,552 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=37425.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:56:12,025 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.173e+02 1.581e+02 1.871e+02 2.387e+02 4.591e+02, threshold=3.742e+02, percent-clipped=1.0 2023-03-26 07:56:48,986 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37461.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 07:56:57,054 INFO [finetune.py:976] (2/7) Epoch 7, batch 3100, loss[loss=0.2008, simple_loss=0.2622, pruned_loss=0.06965, over 4819.00 frames. ], tot_loss[loss=0.2135, simple_loss=0.2753, pruned_loss=0.07585, over 953308.83 frames. ], batch size: 41, lr: 3.87e-03, grad_scale: 32.0 2023-03-26 07:57:20,731 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9809, 1.7906, 1.5525, 1.6637, 1.7541, 1.7101, 1.7270, 2.4850], device='cuda:2'), covar=tensor([0.5425, 0.5882, 0.4310, 0.5967, 0.5126, 0.3301, 0.5391, 0.2155], device='cuda:2'), in_proj_covar=tensor([0.0284, 0.0258, 0.0221, 0.0282, 0.0241, 0.0205, 0.0245, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:57:52,633 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=37509.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 07:57:52,646 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3427, 2.8955, 2.7977, 1.2928, 2.9953, 2.2260, 0.8110, 1.8903], device='cuda:2'), covar=tensor([0.2330, 0.2036, 0.1714, 0.3756, 0.1367, 0.1184, 0.4343, 0.1816], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0172, 0.0161, 0.0128, 0.0152, 0.0121, 0.0145, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 07:58:01,949 INFO [finetune.py:976] (2/7) Epoch 7, batch 3150, loss[loss=0.2057, simple_loss=0.2689, pruned_loss=0.07126, over 4755.00 frames. ], tot_loss[loss=0.2113, simple_loss=0.2727, pruned_loss=0.07496, over 955019.26 frames. ], batch size: 26, lr: 3.87e-03, grad_scale: 32.0 2023-03-26 07:58:13,088 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.225e+02 1.704e+02 2.041e+02 2.515e+02 5.799e+02, threshold=4.081e+02, percent-clipped=3.0 2023-03-26 07:58:34,669 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1345, 2.0525, 1.7973, 2.2221, 2.0110, 1.9675, 1.9448, 2.7970], device='cuda:2'), covar=tensor([0.5431, 0.6723, 0.4466, 0.5913, 0.6164, 0.3104, 0.6382, 0.2052], device='cuda:2'), in_proj_covar=tensor([0.0284, 0.0258, 0.0220, 0.0281, 0.0241, 0.0204, 0.0245, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:59:05,775 INFO [finetune.py:976] (2/7) Epoch 7, batch 3200, loss[loss=0.2065, simple_loss=0.2737, pruned_loss=0.06963, over 4734.00 frames. ], tot_loss[loss=0.2071, simple_loss=0.2681, pruned_loss=0.07306, over 954822.59 frames. 
], batch size: 59, lr: 3.87e-03, grad_scale: 32.0 2023-03-26 07:59:06,472 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37567.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:59:25,355 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0987, 1.9659, 2.0601, 1.4950, 2.0920, 2.2369, 2.1706, 1.6523], device='cuda:2'), covar=tensor([0.0477, 0.0596, 0.0732, 0.0892, 0.0559, 0.0569, 0.0511, 0.1040], device='cuda:2'), in_proj_covar=tensor([0.0137, 0.0133, 0.0143, 0.0126, 0.0112, 0.0144, 0.0146, 0.0160], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 07:59:57,144 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.63 vs. limit=2.0 2023-03-26 08:00:08,903 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37614.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:00:09,462 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=37615.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:00:09,559 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8761, 1.3864, 1.7418, 1.7575, 1.5382, 1.5153, 1.7121, 1.6200], device='cuda:2'), covar=tensor([0.5218, 0.6055, 0.5086, 0.5512, 0.6628, 0.5181, 0.6914, 0.4889], device='cuda:2'), in_proj_covar=tensor([0.0230, 0.0243, 0.0255, 0.0255, 0.0244, 0.0220, 0.0273, 0.0226], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2') 2023-03-26 08:00:14,949 INFO [finetune.py:976] (2/7) Epoch 7, batch 3250, loss[loss=0.2079, simple_loss=0.2861, pruned_loss=0.06486, over 4904.00 frames. ], tot_loss[loss=0.2071, simple_loss=0.2682, pruned_loss=0.07298, over 953934.34 frames. ], batch size: 43, lr: 3.87e-03, grad_scale: 32.0 2023-03-26 08:00:19,904 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=37624.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:00:26,116 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.216e+02 1.664e+02 1.918e+02 2.274e+02 4.430e+02, threshold=3.836e+02, percent-clipped=1.0 2023-03-26 08:01:09,566 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37660.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:01:10,705 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=37662.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:01:18,270 INFO [finetune.py:976] (2/7) Epoch 7, batch 3300, loss[loss=0.2084, simple_loss=0.275, pruned_loss=0.07092, over 4100.00 frames. ], tot_loss[loss=0.2107, simple_loss=0.272, pruned_loss=0.07469, over 951679.68 frames. 
], batch size: 65, lr: 3.87e-03, grad_scale: 32.0 2023-03-26 08:01:50,148 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2284, 2.0445, 1.7443, 2.2419, 2.2909, 1.9198, 2.6370, 2.1999], device='cuda:2'), covar=tensor([0.1400, 0.2604, 0.3486, 0.2896, 0.2540, 0.1672, 0.3782, 0.1988], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0189, 0.0233, 0.0253, 0.0232, 0.0191, 0.0210, 0.0191], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 08:02:12,454 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=37708.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:02:22,966 INFO [finetune.py:976] (2/7) Epoch 7, batch 3350, loss[loss=0.1925, simple_loss=0.2699, pruned_loss=0.05754, over 4799.00 frames. ], tot_loss[loss=0.212, simple_loss=0.2736, pruned_loss=0.07522, over 951319.59 frames. ], batch size: 45, lr: 3.87e-03, grad_scale: 32.0 2023-03-26 08:02:25,464 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=37720.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:02:34,111 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.166e+02 1.767e+02 2.019e+02 2.457e+02 5.992e+02, threshold=4.038e+02, percent-clipped=4.0 2023-03-26 08:02:44,769 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2601, 2.1507, 1.9673, 2.2813, 2.9195, 2.2176, 2.1690, 1.7173], device='cuda:2'), covar=tensor([0.2096, 0.2063, 0.1819, 0.1653, 0.1671, 0.1065, 0.2213, 0.1835], device='cuda:2'), in_proj_covar=tensor([0.0237, 0.0208, 0.0203, 0.0186, 0.0239, 0.0177, 0.0213, 0.0191], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 08:02:55,862 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6818, 1.5191, 1.5969, 1.6386, 1.0290, 3.5943, 1.4035, 2.0393], device='cuda:2'), covar=tensor([0.3276, 0.2357, 0.1961, 0.2182, 0.1871, 0.0145, 0.2514, 0.1230], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0115, 0.0119, 0.0123, 0.0117, 0.0098, 0.0102, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 08:03:05,981 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.74 vs. limit=2.0 2023-03-26 08:03:28,095 INFO [finetune.py:976] (2/7) Epoch 7, batch 3400, loss[loss=0.1962, simple_loss=0.2701, pruned_loss=0.06117, over 4744.00 frames. ], tot_loss[loss=0.213, simple_loss=0.2746, pruned_loss=0.07564, over 952312.41 frames. ], batch size: 54, lr: 3.87e-03, grad_scale: 32.0 2023-03-26 08:04:09,009 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8369, 4.1158, 3.8726, 1.9993, 4.2294, 3.1484, 0.9939, 2.8757], device='cuda:2'), covar=tensor([0.2208, 0.1714, 0.1383, 0.3185, 0.0847, 0.0948, 0.4337, 0.1342], device='cuda:2'), in_proj_covar=tensor([0.0154, 0.0172, 0.0162, 0.0128, 0.0154, 0.0122, 0.0146, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 08:04:32,123 INFO [finetune.py:976] (2/7) Epoch 7, batch 3450, loss[loss=0.303, simple_loss=0.3233, pruned_loss=0.1414, over 4220.00 frames. ], tot_loss[loss=0.2134, simple_loss=0.2751, pruned_loss=0.07584, over 953421.54 frames. 
], batch size: 66, lr: 3.87e-03, grad_scale: 32.0 2023-03-26 08:04:32,861 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0300, 1.8290, 2.6073, 3.9934, 2.7205, 2.6374, 0.6373, 3.1581], device='cuda:2'), covar=tensor([0.1784, 0.1511, 0.1329, 0.0490, 0.0758, 0.1470, 0.2373, 0.0574], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0165, 0.0101, 0.0139, 0.0128, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 08:04:43,340 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.130e+02 1.715e+02 1.991e+02 2.496e+02 6.747e+02, threshold=3.982e+02, percent-clipped=3.0 2023-03-26 08:05:36,357 INFO [finetune.py:976] (2/7) Epoch 7, batch 3500, loss[loss=0.1782, simple_loss=0.2376, pruned_loss=0.05938, over 4821.00 frames. ], tot_loss[loss=0.2111, simple_loss=0.2723, pruned_loss=0.075, over 952736.42 frames. ], batch size: 38, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:05:54,078 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.2521, 4.8368, 4.5613, 2.6771, 4.8349, 3.6674, 0.9936, 3.4574], device='cuda:2'), covar=tensor([0.2048, 0.1319, 0.1287, 0.2930, 0.0725, 0.0899, 0.4681, 0.1420], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0172, 0.0161, 0.0129, 0.0153, 0.0122, 0.0145, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 08:06:26,317 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9249, 3.9596, 3.7246, 1.6948, 3.9368, 2.8948, 0.8407, 2.6885], device='cuda:2'), covar=tensor([0.2198, 0.1542, 0.1497, 0.3469, 0.0959, 0.1135, 0.4584, 0.1605], device='cuda:2'), in_proj_covar=tensor([0.0154, 0.0172, 0.0162, 0.0129, 0.0153, 0.0122, 0.0146, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 08:06:41,166 INFO [finetune.py:976] (2/7) Epoch 7, batch 3550, loss[loss=0.1455, simple_loss=0.2133, pruned_loss=0.03889, over 4833.00 frames. ], tot_loss[loss=0.2085, simple_loss=0.2692, pruned_loss=0.07393, over 953119.44 frames. ], batch size: 33, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:06:50,299 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0 2023-03-26 08:06:51,851 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37924.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:06:58,533 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.177e+02 1.559e+02 1.846e+02 2.185e+02 4.242e+02, threshold=3.693e+02, percent-clipped=1.0 2023-03-26 08:07:12,548 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7706, 1.6633, 1.7621, 1.1618, 1.8478, 1.8697, 1.8439, 1.5477], device='cuda:2'), covar=tensor([0.0553, 0.0637, 0.0689, 0.0867, 0.0641, 0.0603, 0.0553, 0.0973], device='cuda:2'), in_proj_covar=tensor([0.0137, 0.0134, 0.0144, 0.0127, 0.0113, 0.0145, 0.0147, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 08:07:52,117 INFO [finetune.py:976] (2/7) Epoch 7, batch 3600, loss[loss=0.1814, simple_loss=0.245, pruned_loss=0.05896, over 4821.00 frames. ], tot_loss[loss=0.2055, simple_loss=0.266, pruned_loss=0.07253, over 954079.49 frames. 
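[Editor's note] Batch sizes in these records range from 16-18 when the utterances are long up to 65-66 when they are short, while the frames per batch stay in a narrow band around 4,000-4,900. That is the signature of duration-capped, bucketed sampling: each batch packs cuts of similar length up to a fixed total duration, so it holds many short cuts or few long ones. A sketch with lhotse's DynamicBucketingSampler; the file path and parameter values are illustrative, not read from this run.

from lhotse import CutSet
from lhotse.dataset import DynamicBucketingSampler

cuts = CutSet.from_file("data/fbank/cuts_train.jsonl.gz")  # path assumed
sampler = DynamicBucketingSampler(
    cuts,
    max_duration=200.0,  # seconds of audio per batch, not #utterances
    num_buckets=30,      # group cuts of similar length together
    shuffle=True,
    drop_last=True,
)
for cut_batch in sampler:
    pass  # each yielded CutSet is one duration-capped mini-batch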
], batch size: 30, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:07:56,330 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=37972.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:08:06,518 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=37979.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:08:27,598 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-26 08:08:59,066 INFO [finetune.py:976] (2/7) Epoch 7, batch 3650, loss[loss=0.1962, simple_loss=0.2601, pruned_loss=0.06618, over 4769.00 frames. ], tot_loss[loss=0.2072, simple_loss=0.2675, pruned_loss=0.07349, over 952490.57 frames. ], batch size: 26, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:09:07,268 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=38020.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:09:10,800 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.158e+02 1.699e+02 2.068e+02 2.418e+02 4.148e+02, threshold=4.136e+02, percent-clipped=4.0 2023-03-26 08:09:27,947 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=38040.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:09:46,796 INFO [finetune.py:976] (2/7) Epoch 7, batch 3700, loss[loss=0.204, simple_loss=0.2712, pruned_loss=0.06833, over 4924.00 frames. ], tot_loss[loss=0.2098, simple_loss=0.2713, pruned_loss=0.07415, over 953066.75 frames. ], batch size: 33, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:09:48,562 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=38068.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:10:19,920 INFO [finetune.py:976] (2/7) Epoch 7, batch 3750, loss[loss=0.1745, simple_loss=0.2314, pruned_loss=0.0588, over 4705.00 frames. ], tot_loss[loss=0.2096, simple_loss=0.2714, pruned_loss=0.07389, over 952839.24 frames. ], batch size: 23, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:10:23,003 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0 2023-03-26 08:10:26,924 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.737e+01 1.627e+02 1.982e+02 2.503e+02 4.763e+02, threshold=3.965e+02, percent-clipped=1.0 2023-03-26 08:10:42,086 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-26 08:10:57,173 INFO [finetune.py:976] (2/7) Epoch 7, batch 3800, loss[loss=0.2057, simple_loss=0.2489, pruned_loss=0.0813, over 3786.00 frames. ], tot_loss[loss=0.2123, simple_loss=0.274, pruned_loss=0.07528, over 954253.82 frames. ], batch size: 16, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:11:09,506 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1178, 1.0282, 1.0349, 0.5696, 0.8706, 1.1720, 1.2426, 1.0299], device='cuda:2'), covar=tensor([0.0873, 0.0512, 0.0531, 0.0541, 0.0553, 0.0622, 0.0359, 0.0661], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0156, 0.0121, 0.0137, 0.0131, 0.0123, 0.0144, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.5943e-05, 1.1476e-04, 8.7372e-05, 9.9752e-05, 9.3858e-05, 9.0885e-05, 1.0624e-04, 1.0662e-04], device='cuda:2') 2023-03-26 08:11:30,369 INFO [finetune.py:976] (2/7) Epoch 7, batch 3850, loss[loss=0.1632, simple_loss=0.2157, pruned_loss=0.05535, over 4087.00 frames. ], tot_loss[loss=0.2107, simple_loss=0.2729, pruned_loss=0.07429, over 954041.61 frames. 
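[annotation] The `warmup_begin=..., warmup_end=..., batch_count=..., num_to_drop=..., layers_to_drop=...` entries track Zipformer's stochastic layer skipping: each encoder stack has a warmup window measured in batches, inside which whole layers may be bypassed for a step. This far into training (batch_count around 38,000, well past every warmup_end <= 4,000) num_to_drop is almost always 0, with an occasional num_to_drop=1, layers_to_drop={1}. A hypothetical sketch of such a rule; the probabilities and names are illustrative, not the zipformer.py source:

```python
import random

def pick_layers_to_drop(batch_count: float, num_layers: int,
                        warmup_end: float, p_warm: float = 0.5,
                        p_steady: float = 0.05) -> set:
    # Assumed: a higher skip probability while the stack is warming up,
    # a small residual probability afterwards (hence the rare
    # num_to_drop=1 seen in this log).
    p = p_warm if batch_count < warmup_end else p_steady
    return {i for i in range(num_layers) if random.random() < p}
```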
], batch size: 17, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:11:41,373 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8425, 1.4589, 0.8375, 1.6934, 2.1592, 1.3667, 1.7554, 1.7691], device='cuda:2'), covar=tensor([0.1439, 0.1998, 0.2056, 0.1178, 0.1952, 0.1898, 0.1307, 0.1915], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0098, 0.0115, 0.0093, 0.0125, 0.0096, 0.0101, 0.0093], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-26 08:11:41,463 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.94 vs. limit=5.0 2023-03-26 08:11:43,043 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.105e+02 1.610e+02 2.090e+02 2.406e+02 4.877e+02, threshold=4.181e+02, percent-clipped=2.0 2023-03-26 08:12:24,102 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3163, 1.3810, 0.6813, 2.0425, 2.4431, 1.7672, 1.9863, 2.1977], device='cuda:2'), covar=tensor([0.1335, 0.2094, 0.2268, 0.1131, 0.1775, 0.1746, 0.1376, 0.1855], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0098, 0.0114, 0.0093, 0.0125, 0.0096, 0.0101, 0.0093], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-26 08:12:25,296 INFO [finetune.py:976] (2/7) Epoch 7, batch 3900, loss[loss=0.1837, simple_loss=0.2521, pruned_loss=0.0576, over 4824.00 frames. ], tot_loss[loss=0.208, simple_loss=0.2698, pruned_loss=0.07314, over 953906.05 frames. ], batch size: 38, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:13:28,060 INFO [finetune.py:976] (2/7) Epoch 7, batch 3950, loss[loss=0.182, simple_loss=0.2473, pruned_loss=0.05829, over 4760.00 frames. ], tot_loss[loss=0.2063, simple_loss=0.2671, pruned_loss=0.07271, over 955442.71 frames. ], batch size: 27, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:13:45,513 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.011e+02 1.693e+02 1.988e+02 2.374e+02 4.679e+02, threshold=3.976e+02, percent-clipped=1.0 2023-03-26 08:13:56,369 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=38335.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:14:11,537 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0 2023-03-26 08:14:21,687 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=38360.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:14:22,292 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6680, 1.4937, 1.5196, 1.6015, 1.4138, 3.5395, 1.5936, 2.0425], device='cuda:2'), covar=tensor([0.4317, 0.3124, 0.2473, 0.2741, 0.1814, 0.0296, 0.2265, 0.1272], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0114, 0.0118, 0.0122, 0.0116, 0.0098, 0.0101, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:2') 2023-03-26 08:14:25,157 INFO [finetune.py:976] (2/7) Epoch 7, batch 4000, loss[loss=0.2172, simple_loss=0.2798, pruned_loss=0.07729, over 4906.00 frames. ], tot_loss[loss=0.206, simple_loss=0.2666, pruned_loss=0.07267, over 955072.80 frames. 
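[annotation] The `Whitening: num_groups=..., num_channels=..., metric=... vs. limit=...` entries are activation-decorrelation diagnostics: a statistic of each group's feature covariance is compared to a limit (2.0 for the grouped 96/192-channel checks, 5.0 for the ungrouped 384-channel ones), and a penalty applies only above the limit. One standard statistic of this kind, and a plausible reading of these lines, is the eigenvalue dispersion E[lambda^2]/E[lambda]^2 of the covariance, which is 1.0 for perfectly whitened features and grows as variance concentrates in few directions (a sketch; the exact zipformer.py formula may differ):

```python
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    # x: (num_frames, num_channels); channels split into num_groups.
    n, c = x.shape
    d = c // num_groups
    x = x.reshape(n, num_groups, d).transpose(0, 1)       # (groups, n, d)
    x = x - x.mean(dim=1, keepdim=True)
    cov = x.transpose(1, 2) @ x / n                       # per-group covariance
    mean_eig = cov.diagonal(dim1=1, dim2=2).sum(-1) / d   # trace(C)/d
    mean_eig_sq = (cov * cov).sum(dim=(1, 2)) / d         # trace(C^2)/d
    return (mean_eig_sq / mean_eig ** 2).mean()           # 1.0 when white
```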
], batch size: 35, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:15:19,369 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0048, 1.8774, 1.6856, 2.0176, 2.6727, 1.9596, 1.7410, 1.4856], device='cuda:2'), covar=tensor([0.2312, 0.2171, 0.1876, 0.1755, 0.1794, 0.1151, 0.2473, 0.1868], device='cuda:2'), in_proj_covar=tensor([0.0236, 0.0209, 0.0204, 0.0186, 0.0239, 0.0177, 0.0214, 0.0192], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 08:15:29,792 INFO [finetune.py:976] (2/7) Epoch 7, batch 4050, loss[loss=0.2539, simple_loss=0.3126, pruned_loss=0.09756, over 4812.00 frames. ], tot_loss[loss=0.2089, simple_loss=0.2701, pruned_loss=0.07383, over 952354.86 frames. ], batch size: 51, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:15:33,974 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=38421.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:15:42,068 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.248e+02 1.778e+02 2.129e+02 2.625e+02 5.238e+02, threshold=4.258e+02, percent-clipped=5.0 2023-03-26 08:16:32,427 INFO [finetune.py:976] (2/7) Epoch 7, batch 4100, loss[loss=0.1888, simple_loss=0.255, pruned_loss=0.06127, over 4744.00 frames. ], tot_loss[loss=0.2098, simple_loss=0.2716, pruned_loss=0.074, over 951752.09 frames. ], batch size: 27, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:17:30,478 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8210, 1.6667, 1.6897, 1.6917, 1.3761, 4.1566, 1.6463, 2.2343], device='cuda:2'), covar=tensor([0.3420, 0.2324, 0.2103, 0.2336, 0.1724, 0.0120, 0.2492, 0.1323], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0115, 0.0119, 0.0123, 0.0117, 0.0098, 0.0101, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 08:17:31,590 INFO [finetune.py:976] (2/7) Epoch 7, batch 4150, loss[loss=0.1731, simple_loss=0.2378, pruned_loss=0.05417, over 4738.00 frames. ], tot_loss[loss=0.2111, simple_loss=0.2733, pruned_loss=0.07446, over 952342.25 frames. ], batch size: 27, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:17:43,680 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.188e+02 1.723e+02 2.145e+02 2.598e+02 6.605e+02, threshold=4.291e+02, percent-clipped=2.0 2023-03-26 08:18:34,213 INFO [finetune.py:976] (2/7) Epoch 7, batch 4200, loss[loss=0.2196, simple_loss=0.2806, pruned_loss=0.07933, over 4776.00 frames. ], tot_loss[loss=0.2108, simple_loss=0.2733, pruned_loss=0.07421, over 952731.59 frames. ], batch size: 45, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:19:34,111 INFO [finetune.py:976] (2/7) Epoch 7, batch 4250, loss[loss=0.1939, simple_loss=0.2531, pruned_loss=0.06738, over 4817.00 frames. ], tot_loss[loss=0.2087, simple_loss=0.2709, pruned_loss=0.0732, over 953058.84 frames. 
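[annotation] The `attn_weights_entropy = tensor([...])` dumps report one entropy value per attention head (the eight entries plausibly map to eight heads), with covariance statistics of the attention projections alongside: low entropy means a head concentrates on a few positions, high entropy means its weights are spread out. A minimal sketch of the per-head statistic, assuming weights of shape (num_heads, tgt_len, src_len) whose rows sum to 1 (illustrative, not the zipformer.py source):

```python
import torch

def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
    # -sum(p * log p) over source positions, averaged over target
    # positions: one scalar per head, like the logged 8-element tensors.
    eps = 1.0e-20
    entropy = -(attn * (attn + eps).log()).sum(dim=-1)
    return entropy.mean(dim=-1)
```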
], batch size: 30, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:19:44,840 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.083e+02 1.606e+02 1.980e+02 2.259e+02 5.740e+02, threshold=3.960e+02, percent-clipped=2.0 2023-03-26 08:20:02,304 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=38635.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:20:14,421 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1322, 1.0133, 1.0604, 0.4453, 0.8172, 1.1728, 1.2184, 1.0465], device='cuda:2'), covar=tensor([0.0860, 0.0526, 0.0416, 0.0511, 0.0555, 0.0514, 0.0347, 0.0613], device='cuda:2'), in_proj_covar=tensor([0.0128, 0.0154, 0.0119, 0.0136, 0.0130, 0.0123, 0.0144, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.5133e-05, 1.1381e-04, 8.6446e-05, 9.8939e-05, 9.3628e-05, 9.0401e-05, 1.0598e-04, 1.0667e-04], device='cuda:2') 2023-03-26 08:20:38,695 INFO [finetune.py:976] (2/7) Epoch 7, batch 4300, loss[loss=0.2406, simple_loss=0.2839, pruned_loss=0.09864, over 4224.00 frames. ], tot_loss[loss=0.2065, simple_loss=0.2681, pruned_loss=0.07249, over 953919.73 frames. ], batch size: 65, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:20:59,378 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=38683.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:21:33,176 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7706, 1.1808, 0.8844, 1.6467, 2.1592, 1.4794, 1.4904, 1.7581], device='cuda:2'), covar=tensor([0.1502, 0.2332, 0.2038, 0.1255, 0.1990, 0.1996, 0.1514, 0.1979], device='cuda:2'), in_proj_covar=tensor([0.0092, 0.0099, 0.0115, 0.0094, 0.0125, 0.0097, 0.0102, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-26 08:21:41,239 INFO [finetune.py:976] (2/7) Epoch 7, batch 4350, loss[loss=0.1841, simple_loss=0.2403, pruned_loss=0.06398, over 4778.00 frames. ], tot_loss[loss=0.2038, simple_loss=0.2647, pruned_loss=0.07144, over 951940.73 frames. ], batch size: 29, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:21:41,306 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=38716.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:21:52,346 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.223e+02 1.679e+02 1.871e+02 2.197e+02 5.866e+02, threshold=3.741e+02, percent-clipped=4.0 2023-03-26 08:22:01,145 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-26 08:22:43,656 INFO [finetune.py:976] (2/7) Epoch 7, batch 4400, loss[loss=0.2053, simple_loss=0.2683, pruned_loss=0.07114, over 4929.00 frames. ], tot_loss[loss=0.2035, simple_loss=0.2652, pruned_loss=0.07094, over 955534.61 frames. ], batch size: 38, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:22:46,220 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. 
limit=2.0 2023-03-26 08:23:33,670 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5735, 1.0938, 0.9339, 1.4287, 1.9731, 1.0457, 1.3245, 1.5139], device='cuda:2'), covar=tensor([0.1598, 0.2289, 0.1973, 0.1266, 0.2056, 0.2165, 0.1552, 0.2014], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0098, 0.0114, 0.0093, 0.0124, 0.0097, 0.0101, 0.0093], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-26 08:23:42,200 INFO [finetune.py:976] (2/7) Epoch 7, batch 4450, loss[loss=0.2253, simple_loss=0.291, pruned_loss=0.07978, over 4924.00 frames. ], tot_loss[loss=0.2078, simple_loss=0.2699, pruned_loss=0.07284, over 955057.92 frames. ], batch size: 36, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:23:42,284 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.1935, 3.5957, 3.8502, 4.0664, 3.9125, 3.7182, 4.2751, 1.2725], device='cuda:2'), covar=tensor([0.0763, 0.0850, 0.0766, 0.0895, 0.1265, 0.1422, 0.0747, 0.5285], device='cuda:2'), in_proj_covar=tensor([0.0349, 0.0243, 0.0274, 0.0294, 0.0332, 0.0281, 0.0304, 0.0295], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 08:23:44,166 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8079, 1.5081, 2.1971, 1.6416, 2.0491, 2.0752, 1.5614, 2.2707], device='cuda:2'), covar=tensor([0.1542, 0.2227, 0.1524, 0.2020, 0.0930, 0.1614, 0.2727, 0.0895], device='cuda:2'), in_proj_covar=tensor([0.0203, 0.0204, 0.0199, 0.0195, 0.0182, 0.0221, 0.0217, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 08:23:44,181 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.5848, 1.4889, 1.5299, 0.8761, 1.6253, 1.7395, 1.7453, 1.4131], device='cuda:2'), covar=tensor([0.0961, 0.0611, 0.0392, 0.0631, 0.0357, 0.0570, 0.0297, 0.0590], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0155, 0.0120, 0.0137, 0.0131, 0.0124, 0.0145, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.5980e-05, 1.1458e-04, 8.6845e-05, 9.9653e-05, 9.4268e-05, 9.1174e-05, 1.0655e-04, 1.0734e-04], device='cuda:2') 2023-03-26 08:23:51,889 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=38823.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:23:53,601 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.165e+02 1.712e+02 1.965e+02 2.330e+02 4.727e+02, threshold=3.929e+02, percent-clipped=4.0 2023-03-26 08:24:44,784 INFO [finetune.py:976] (2/7) Epoch 7, batch 4500, loss[loss=0.2121, simple_loss=0.2674, pruned_loss=0.07842, over 4894.00 frames. ], tot_loss[loss=0.2102, simple_loss=0.2723, pruned_loss=0.07411, over 955542.27 frames. ], batch size: 32, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:25:06,330 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=38884.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:25:49,097 INFO [finetune.py:976] (2/7) Epoch 7, batch 4550, loss[loss=0.2155, simple_loss=0.2829, pruned_loss=0.07399, over 4890.00 frames. ], tot_loss[loss=0.2119, simple_loss=0.2744, pruned_loss=0.07473, over 956359.28 frames. 
], batch size: 32, lr: 3.86e-03, grad_scale: 64.0 2023-03-26 08:25:59,508 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.041e+02 1.671e+02 2.000e+02 2.524e+02 3.434e+02, threshold=4.000e+02, percent-clipped=0.0 2023-03-26 08:26:47,278 INFO [finetune.py:976] (2/7) Epoch 7, batch 4600, loss[loss=0.1743, simple_loss=0.2431, pruned_loss=0.05273, over 4753.00 frames. ], tot_loss[loss=0.2105, simple_loss=0.2733, pruned_loss=0.07391, over 956721.65 frames. ], batch size: 27, lr: 3.86e-03, grad_scale: 64.0 2023-03-26 08:27:16,975 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2868, 2.2068, 2.3451, 0.9665, 2.5946, 2.7594, 2.2249, 2.1763], device='cuda:2'), covar=tensor([0.0899, 0.0684, 0.0524, 0.0763, 0.0453, 0.0617, 0.0575, 0.0646], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0155, 0.0120, 0.0137, 0.0132, 0.0124, 0.0144, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.5882e-05, 1.1430e-04, 8.7023e-05, 9.9814e-05, 9.4532e-05, 9.0998e-05, 1.0639e-04, 1.0719e-04], device='cuda:2') 2023-03-26 08:27:49,688 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. limit=2.0 2023-03-26 08:27:54,984 INFO [finetune.py:976] (2/7) Epoch 7, batch 4650, loss[loss=0.2347, simple_loss=0.2776, pruned_loss=0.09588, over 4825.00 frames. ], tot_loss[loss=0.2087, simple_loss=0.2707, pruned_loss=0.07334, over 955518.62 frames. ], batch size: 33, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:27:55,087 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=39016.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:28:06,170 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.761e+01 1.505e+02 1.924e+02 2.345e+02 4.238e+02, threshold=3.847e+02, percent-clipped=2.0 2023-03-26 08:28:50,940 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=39064.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:28:57,994 INFO [finetune.py:976] (2/7) Epoch 7, batch 4700, loss[loss=0.2516, simple_loss=0.298, pruned_loss=0.1026, over 4711.00 frames. ], tot_loss[loss=0.2061, simple_loss=0.2676, pruned_loss=0.07229, over 954587.98 frames. ], batch size: 23, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:29:08,145 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-26 08:29:17,886 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4496, 2.9454, 2.8412, 1.3166, 3.0858, 2.2511, 0.8144, 1.9173], device='cuda:2'), covar=tensor([0.2197, 0.2243, 0.1744, 0.3696, 0.1317, 0.1154, 0.4161, 0.1837], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0173, 0.0162, 0.0130, 0.0154, 0.0123, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 08:29:56,018 INFO [finetune.py:976] (2/7) Epoch 7, batch 4750, loss[loss=0.1922, simple_loss=0.2502, pruned_loss=0.06708, over 4746.00 frames. ], tot_loss[loss=0.205, simple_loss=0.2656, pruned_loss=0.07218, over 954855.51 frames. ], batch size: 23, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:30:08,812 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.221e+02 1.609e+02 1.819e+02 2.206e+02 4.512e+02, threshold=3.638e+02, percent-clipped=2.0 2023-03-26 08:30:30,629 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.91 vs. limit=5.0 2023-03-26 08:30:39,848 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. 
limit=2.0 2023-03-26 08:30:40,583 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.44 vs. limit=5.0 2023-03-26 08:30:58,729 INFO [finetune.py:976] (2/7) Epoch 7, batch 4800, loss[loss=0.2145, simple_loss=0.2793, pruned_loss=0.07492, over 4787.00 frames. ], tot_loss[loss=0.2074, simple_loss=0.2685, pruned_loss=0.07313, over 955151.36 frames. ], batch size: 26, lr: 3.86e-03, grad_scale: 32.0 2023-03-26 08:31:12,887 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=39179.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:31:22,013 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=39185.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:31:22,630 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5788, 1.3870, 1.6880, 1.8031, 1.5811, 3.4011, 1.3524, 1.6590], device='cuda:2'), covar=tensor([0.0937, 0.1861, 0.1178, 0.0996, 0.1616, 0.0256, 0.1468, 0.1711], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0081, 0.0076, 0.0078, 0.0092, 0.0083, 0.0084, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-26 08:31:57,083 INFO [finetune.py:976] (2/7) Epoch 7, batch 4850, loss[loss=0.2074, simple_loss=0.2742, pruned_loss=0.07033, over 4816.00 frames. ], tot_loss[loss=0.2092, simple_loss=0.2712, pruned_loss=0.07364, over 951281.20 frames. ], batch size: 51, lr: 3.86e-03, grad_scale: 16.0 2023-03-26 08:32:06,049 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.092e+02 1.691e+02 2.004e+02 2.499e+02 4.240e+02, threshold=4.008e+02, percent-clipped=2.0 2023-03-26 08:32:18,159 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=39246.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:32:30,568 INFO [finetune.py:976] (2/7) Epoch 7, batch 4900, loss[loss=0.1963, simple_loss=0.27, pruned_loss=0.0613, over 4903.00 frames. ], tot_loss[loss=0.2104, simple_loss=0.272, pruned_loss=0.07433, over 949637.06 frames. ], batch size: 43, lr: 3.86e-03, grad_scale: 16.0 2023-03-26 08:32:43,191 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=39284.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:32:46,423 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-26 08:33:03,517 INFO [finetune.py:976] (2/7) Epoch 7, batch 4950, loss[loss=0.2064, simple_loss=0.2706, pruned_loss=0.07111, over 4781.00 frames. ], tot_loss[loss=0.2119, simple_loss=0.2743, pruned_loss=0.0748, over 951782.40 frames. ], batch size: 51, lr: 3.86e-03, grad_scale: 16.0 2023-03-26 08:33:12,725 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.085e+02 1.619e+02 1.980e+02 2.423e+02 3.796e+02, threshold=3.961e+02, percent-clipped=0.0 2023-03-26 08:33:24,230 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=39345.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:33:37,224 INFO [finetune.py:976] (2/7) Epoch 7, batch 5000, loss[loss=0.161, simple_loss=0.2327, pruned_loss=0.04466, over 4876.00 frames. ], tot_loss[loss=0.2099, simple_loss=0.2723, pruned_loss=0.07375, over 953392.50 frames. 
], batch size: 32, lr: 3.86e-03, grad_scale: 16.0 2023-03-26 08:33:43,854 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3061, 2.2238, 1.7482, 2.4410, 2.4431, 1.9792, 2.8801, 2.4910], device='cuda:2'), covar=tensor([0.1457, 0.3244, 0.3287, 0.3048, 0.2413, 0.1667, 0.3620, 0.1904], device='cuda:2'), in_proj_covar=tensor([0.0172, 0.0191, 0.0235, 0.0255, 0.0235, 0.0193, 0.0212, 0.0193], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 08:34:04,440 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0 2023-03-26 08:34:10,935 INFO [finetune.py:976] (2/7) Epoch 7, batch 5050, loss[loss=0.1609, simple_loss=0.2315, pruned_loss=0.04513, over 4908.00 frames. ], tot_loss[loss=0.2067, simple_loss=0.2683, pruned_loss=0.0725, over 952910.47 frames. ], batch size: 36, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:34:16,180 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9548, 1.7728, 1.7190, 1.9226, 1.5999, 3.9204, 1.8872, 2.3526], device='cuda:2'), covar=tensor([0.4096, 0.2994, 0.2301, 0.2597, 0.1750, 0.0226, 0.2185, 0.1155], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0115, 0.0119, 0.0123, 0.0117, 0.0099, 0.0101, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 08:34:19,591 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.096e+02 1.623e+02 1.955e+02 2.404e+02 3.498e+02, threshold=3.910e+02, percent-clipped=0.0 2023-03-26 08:34:36,606 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1022, 1.8910, 1.8785, 0.7708, 2.1724, 2.4565, 2.0058, 1.9218], device='cuda:2'), covar=tensor([0.0912, 0.0741, 0.0540, 0.0781, 0.0580, 0.0444, 0.0536, 0.0645], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0156, 0.0121, 0.0137, 0.0132, 0.0124, 0.0144, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.6400e-05, 1.1518e-04, 8.7235e-05, 9.9841e-05, 9.4367e-05, 9.1545e-05, 1.0598e-04, 1.0717e-04], device='cuda:2') 2023-03-26 08:34:52,701 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=39463.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:34:54,406 INFO [finetune.py:976] (2/7) Epoch 7, batch 5100, loss[loss=0.2136, simple_loss=0.2718, pruned_loss=0.07768, over 4925.00 frames. ], tot_loss[loss=0.2034, simple_loss=0.2648, pruned_loss=0.071, over 952433.79 frames. ], batch size: 36, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:34:55,240 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.54 vs. limit=5.0 2023-03-26 08:35:05,373 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=39479.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:35:05,553 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-26 08:35:25,895 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8715, 1.7459, 1.8979, 1.1322, 1.9670, 1.9161, 1.7902, 1.5368], device='cuda:2'), covar=tensor([0.0575, 0.0709, 0.0660, 0.0971, 0.0569, 0.0746, 0.0654, 0.1251], device='cuda:2'), in_proj_covar=tensor([0.0137, 0.0134, 0.0144, 0.0126, 0.0114, 0.0145, 0.0147, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 08:35:35,512 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. 
limit=2.0 2023-03-26 08:35:39,560 INFO [finetune.py:976] (2/7) Epoch 7, batch 5150, loss[loss=0.2414, simple_loss=0.2992, pruned_loss=0.09178, over 4868.00 frames. ], tot_loss[loss=0.2055, simple_loss=0.2663, pruned_loss=0.07232, over 953033.90 frames. ], batch size: 34, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:35:47,012 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=39524.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:35:47,569 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2122, 1.7931, 2.6525, 4.2599, 3.0788, 2.7486, 1.0576, 3.3890], device='cuda:2'), covar=tensor([0.1688, 0.1568, 0.1317, 0.0518, 0.0726, 0.1428, 0.1971, 0.0471], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0116, 0.0133, 0.0163, 0.0100, 0.0139, 0.0126, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 08:35:48,213 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4433, 1.4604, 1.4552, 0.7515, 1.4730, 1.7353, 1.7511, 1.3859], device='cuda:2'), covar=tensor([0.1105, 0.0719, 0.0514, 0.0686, 0.0424, 0.0634, 0.0392, 0.0760], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0156, 0.0120, 0.0137, 0.0131, 0.0124, 0.0144, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.6153e-05, 1.1502e-04, 8.6947e-05, 9.9525e-05, 9.4202e-05, 9.1498e-05, 1.0595e-04, 1.0719e-04], device='cuda:2') 2023-03-26 08:35:48,748 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=39527.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:35:49,815 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.405e+01 1.710e+02 2.016e+02 2.412e+02 5.054e+02, threshold=4.032e+02, percent-clipped=2.0 2023-03-26 08:36:08,539 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=39541.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:36:24,637 INFO [finetune.py:976] (2/7) Epoch 7, batch 5200, loss[loss=0.213, simple_loss=0.2828, pruned_loss=0.07167, over 4816.00 frames. ], tot_loss[loss=0.2089, simple_loss=0.2706, pruned_loss=0.07363, over 952315.90 frames. ], batch size: 39, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:36:27,268 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6135, 1.5060, 1.3450, 1.3652, 1.6741, 1.3628, 1.7170, 1.5950], device='cuda:2'), covar=tensor([0.1641, 0.2672, 0.3700, 0.2926, 0.3064, 0.2029, 0.3538, 0.2145], device='cuda:2'), in_proj_covar=tensor([0.0172, 0.0191, 0.0236, 0.0255, 0.0236, 0.0194, 0.0213, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 08:37:07,820 INFO [finetune.py:976] (2/7) Epoch 7, batch 5250, loss[loss=0.1904, simple_loss=0.2691, pruned_loss=0.05585, over 4755.00 frames. ], tot_loss[loss=0.2109, simple_loss=0.2733, pruned_loss=0.07429, over 953908.69 frames. ], batch size: 28, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:37:08,684 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.19 vs. 
limit=5.0 2023-03-26 08:37:15,021 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.077e+02 1.709e+02 2.070e+02 2.577e+02 5.953e+02, threshold=4.140e+02, percent-clipped=1.0 2023-03-26 08:37:26,001 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=39639.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 08:37:26,603 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=39640.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:37:43,681 INFO [finetune.py:976] (2/7) Epoch 7, batch 5300, loss[loss=0.1635, simple_loss=0.2455, pruned_loss=0.04075, over 4822.00 frames. ], tot_loss[loss=0.2121, simple_loss=0.2745, pruned_loss=0.07481, over 953440.39 frames. ], batch size: 47, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:37:52,748 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9124, 1.7529, 1.4739, 1.6457, 1.6737, 1.6958, 1.6851, 2.4300], device='cuda:2'), covar=tensor([0.5636, 0.6221, 0.4377, 0.5460, 0.5360, 0.3221, 0.5590, 0.2129], device='cuda:2'), in_proj_covar=tensor([0.0281, 0.0256, 0.0218, 0.0278, 0.0239, 0.0204, 0.0243, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 08:38:17,432 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=39700.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 08:38:30,956 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8131, 1.6559, 2.0817, 1.4395, 1.9428, 2.0991, 1.6584, 2.2853], device='cuda:2'), covar=tensor([0.1491, 0.2109, 0.1404, 0.2057, 0.0946, 0.1308, 0.2580, 0.0928], device='cuda:2'), in_proj_covar=tensor([0.0201, 0.0203, 0.0197, 0.0195, 0.0180, 0.0220, 0.0217, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 08:38:32,614 INFO [finetune.py:976] (2/7) Epoch 7, batch 5350, loss[loss=0.2273, simple_loss=0.3015, pruned_loss=0.07658, over 4912.00 frames. ], tot_loss[loss=0.2111, simple_loss=0.274, pruned_loss=0.07408, over 954295.46 frames. ], batch size: 33, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:38:40,830 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.014e+02 1.565e+02 1.855e+02 2.323e+02 5.491e+02, threshold=3.710e+02, percent-clipped=1.0 2023-03-26 08:39:15,939 INFO [finetune.py:976] (2/7) Epoch 7, batch 5400, loss[loss=0.1704, simple_loss=0.2301, pruned_loss=0.05534, over 4024.00 frames. ], tot_loss[loss=0.2083, simple_loss=0.2706, pruned_loss=0.07298, over 953291.52 frames. 
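[annotation] The trailing `grad_scale` field is the fp16 loss scale of mixed-precision training, and across this stretch it moves the way a dynamic scaler does: 32.0 for most of the epoch, doubled to 64.0 around batch 4550, halved back to 32.0 by batch 4650, and halved again to 16.0 from batch 4850 on. The usual rule is: halve whenever a scaled gradient overflows, double again after a long enough run of finite steps. A GradScaler-style sketch (growth_interval is an assumed default, not read from this log; icefall uses torch.cuda.amp for this):

```python
def update_scale(scale: float, found_inf: bool, steps_since_growth: int,
                 growth_interval: int = 2000):
    # Halve on overflow, double after `growth_interval` finite steps.
    if found_inf:
        return scale / 2.0, 0
    if steps_since_growth + 1 >= growth_interval:
        return scale * 2.0, 0
    return scale, steps_since_growth + 1
```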
], batch size: 17, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:39:16,657 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=39767.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:39:20,659 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9045, 1.7778, 1.6489, 2.0293, 1.3811, 4.5666, 1.7102, 2.2227], device='cuda:2'), covar=tensor([0.3243, 0.2366, 0.2057, 0.2130, 0.1735, 0.0107, 0.2277, 0.1280], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0115, 0.0119, 0.0123, 0.0117, 0.0099, 0.0101, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 08:39:27,316 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.3068, 1.4945, 1.4707, 0.7311, 1.3470, 1.7160, 1.7196, 1.3854], device='cuda:2'), covar=tensor([0.0906, 0.0594, 0.0438, 0.0575, 0.0422, 0.0602, 0.0278, 0.0723], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0157, 0.0121, 0.0137, 0.0132, 0.0125, 0.0145, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.6653e-05, 1.1542e-04, 8.7167e-05, 9.9514e-05, 9.4710e-05, 9.1742e-05, 1.0658e-04, 1.0743e-04], device='cuda:2') 2023-03-26 08:39:50,815 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=39815.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:39:51,314 INFO [finetune.py:976] (2/7) Epoch 7, batch 5450, loss[loss=0.2109, simple_loss=0.2696, pruned_loss=0.0761, over 4780.00 frames. ], tot_loss[loss=0.2063, simple_loss=0.2682, pruned_loss=0.07223, over 953833.55 frames. ], batch size: 26, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:39:53,197 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=39819.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:40:03,634 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.339e+01 1.599e+02 1.928e+02 2.299e+02 3.698e+02, threshold=3.856e+02, percent-clipped=0.0 2023-03-26 08:40:03,744 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=39828.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:40:16,807 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=39841.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:40:54,099 INFO [finetune.py:976] (2/7) Epoch 7, batch 5500, loss[loss=0.1916, simple_loss=0.2521, pruned_loss=0.06559, over 4833.00 frames. ], tot_loss[loss=0.2043, simple_loss=0.2654, pruned_loss=0.07158, over 954938.82 frames. ], batch size: 33, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:41:05,348 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=39876.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:41:18,433 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=39889.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:41:57,092 INFO [finetune.py:976] (2/7) Epoch 7, batch 5550, loss[loss=0.1818, simple_loss=0.2619, pruned_loss=0.05086, over 4795.00 frames. ], tot_loss[loss=0.2042, simple_loss=0.2655, pruned_loss=0.07148, over 954458.10 frames. 
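[annotation] The `lr:` field decays very slowly here (3.87e-03 at the top of the section, 3.84e-03 by epoch 8) because these recipes schedule the learning rate with Eden, in which both the batch factor and the epoch factor are still close to 1.0 at this depth of training. A sketch of the rule (lr_batches and lr_epochs are the recipe's scheduler settings; the -0.25 exponent is Eden's):

```python
def eden_lr(base_lr: float, batch: int, epoch: float,
            lr_batches: float, lr_epochs: float) -> float:
    # lr = base_lr * ((batch/lr_batches)^2 + 1)^-0.25
    #               * ((epoch/lr_epochs)^2 + 1)^-0.25
    batch_factor = ((batch / lr_batches) ** 2 + 1.0) ** -0.25
    epoch_factor = ((epoch / lr_epochs) ** 2 + 1.0) ** -0.25
    return base_lr * batch_factor * epoch_factor
```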
], batch size: 51, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:42:07,610 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.1444, 3.6178, 3.7732, 3.9888, 3.9112, 3.7460, 4.2245, 1.4067], device='cuda:2'), covar=tensor([0.0836, 0.0895, 0.0832, 0.0959, 0.1261, 0.1491, 0.0757, 0.5262], device='cuda:2'), in_proj_covar=tensor([0.0352, 0.0245, 0.0276, 0.0294, 0.0334, 0.0282, 0.0305, 0.0297], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 08:42:09,838 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.056e+02 1.647e+02 1.995e+02 2.278e+02 3.177e+02, threshold=3.991e+02, percent-clipped=0.0 2023-03-26 08:42:28,013 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=39940.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:42:31,635 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7858, 1.1091, 0.9375, 1.6212, 1.9304, 1.4617, 1.2757, 1.4597], device='cuda:2'), covar=tensor([0.1620, 0.2662, 0.2228, 0.1452, 0.2308, 0.2206, 0.1848, 0.2266], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0097, 0.0114, 0.0092, 0.0123, 0.0096, 0.0100, 0.0092], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 08:42:58,493 INFO [finetune.py:976] (2/7) Epoch 7, batch 5600, loss[loss=0.2293, simple_loss=0.2814, pruned_loss=0.08865, over 4761.00 frames. ], tot_loss[loss=0.2082, simple_loss=0.27, pruned_loss=0.07313, over 956462.02 frames. ], batch size: 26, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:43:20,787 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=39988.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:43:29,035 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.2099, 3.6033, 3.7889, 4.0486, 3.9995, 3.7880, 4.2783, 1.3329], device='cuda:2'), covar=tensor([0.0839, 0.0937, 0.0835, 0.1006, 0.1256, 0.1513, 0.0774, 0.5329], device='cuda:2'), in_proj_covar=tensor([0.0351, 0.0245, 0.0276, 0.0294, 0.0333, 0.0282, 0.0305, 0.0297], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 08:43:30,217 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=39995.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 08:43:52,202 INFO [finetune.py:976] (2/7) Epoch 7, batch 5650, loss[loss=0.241, simple_loss=0.296, pruned_loss=0.09299, over 4891.00 frames. ], tot_loss[loss=0.2091, simple_loss=0.2719, pruned_loss=0.07312, over 956756.38 frames. ], batch size: 32, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:44:09,058 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.182e+02 1.647e+02 1.995e+02 2.469e+02 4.643e+02, threshold=3.989e+02, percent-clipped=3.0 2023-03-26 08:44:10,319 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7972, 1.2807, 0.7059, 1.6358, 1.8680, 1.3107, 1.3958, 1.6006], device='cuda:2'), covar=tensor([0.1464, 0.2294, 0.2306, 0.1267, 0.2301, 0.2060, 0.1624, 0.2011], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0097, 0.0114, 0.0092, 0.0123, 0.0096, 0.0100, 0.0092], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 08:44:50,691 INFO [finetune.py:976] (2/7) Epoch 7, batch 5700, loss[loss=0.1863, simple_loss=0.2366, pruned_loss=0.06797, over 4161.00 frames. 
], tot_loss[loss=0.2068, simple_loss=0.2682, pruned_loss=0.07266, over 940041.67 frames. ], batch size: 18, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:45:42,067 INFO [finetune.py:976] (2/7) Epoch 8, batch 0, loss[loss=0.2194, simple_loss=0.2876, pruned_loss=0.07562, over 4813.00 frames. ], tot_loss[loss=0.2194, simple_loss=0.2876, pruned_loss=0.07562, over 4813.00 frames. ], batch size: 38, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:45:42,068 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 08:45:49,207 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3304, 1.2097, 1.2056, 1.2715, 1.5279, 1.3740, 1.3179, 1.1529], device='cuda:2'), covar=tensor([0.0385, 0.0266, 0.0576, 0.0281, 0.0266, 0.0553, 0.0294, 0.0379], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0113, 0.0140, 0.0116, 0.0105, 0.0101, 0.0092, 0.0110], device='cuda:2'), out_proj_covar=tensor([7.0022e-05, 8.8500e-05, 1.1228e-04, 9.1282e-05, 8.2386e-05, 7.5241e-05, 6.9340e-05, 8.5588e-05], device='cuda:2') 2023-03-26 08:45:57,865 INFO [finetune.py:1010] (2/7) Epoch 8, validation: loss=0.1624, simple_loss=0.234, pruned_loss=0.04544, over 2265189.00 frames. 2023-03-26 08:45:57,866 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB 2023-03-26 08:46:07,997 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7280, 1.5139, 1.6278, 1.6758, 1.0985, 3.6528, 1.5177, 2.1195], device='cuda:2'), covar=tensor([0.3450, 0.2524, 0.2079, 0.2406, 0.2039, 0.0193, 0.2640, 0.1259], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0115, 0.0119, 0.0123, 0.0117, 0.0098, 0.0100, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 08:46:20,316 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40119.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:46:20,531 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.86 vs. limit=2.0 2023-03-26 08:46:26,527 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=40123.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:46:29,510 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.031e+02 1.580e+02 2.018e+02 2.508e+02 5.130e+02, threshold=4.036e+02, percent-clipped=1.0 2023-03-26 08:46:41,217 INFO [finetune.py:976] (2/7) Epoch 8, batch 50, loss[loss=0.2006, simple_loss=0.2605, pruned_loss=0.07038, over 4801.00 frames. ], tot_loss[loss=0.21, simple_loss=0.2736, pruned_loss=0.07321, over 217427.09 frames. 
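[annotation] At the epoch boundary above, training pauses for one full pass over the dev set (`Computing validation loss`, then `validation: loss=0.1624 ... over 2265189.00 frames`): losses are summed over every dev batch and normalized by the total frame count, and the same decomposition holds for the result (0.5 * 0.234 + 0.04544 = 0.1624). A sketch of that pass; compute_loss and the dataloader interface are stand-ins, not the finetune.py source:

```python
import torch

def validate(model, valid_dl, device) -> float:
    # One pass over the dev set; returns the frame-normalized loss.
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    with torch.no_grad():
        for batch in valid_dl:
            # compute_loss is a stand-in for the recipe's loss helper.
            loss, num_frames = compute_loss(model, batch, device)
            tot_loss += loss.item()
            tot_frames += num_frames
    model.train()
    return tot_loss / tot_frames
```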
], batch size: 51, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:46:44,270 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5680, 1.4311, 1.4639, 1.5493, 1.0491, 3.1069, 1.2576, 1.8528], device='cuda:2'), covar=tensor([0.3303, 0.2442, 0.2032, 0.2298, 0.1871, 0.0235, 0.2813, 0.1258], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0116, 0.0120, 0.0123, 0.0117, 0.0099, 0.0101, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 08:47:08,174 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=40167.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:47:10,680 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=40171.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:47:10,743 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8293, 1.7055, 1.5111, 1.6470, 1.9638, 1.9375, 1.7981, 1.4687], device='cuda:2'), covar=tensor([0.0289, 0.0314, 0.0526, 0.0299, 0.0238, 0.0456, 0.0286, 0.0394], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0112, 0.0140, 0.0116, 0.0105, 0.0101, 0.0091, 0.0110], device='cuda:2'), out_proj_covar=tensor([7.0010e-05, 8.8174e-05, 1.1180e-04, 9.0993e-05, 8.2370e-05, 7.5014e-05, 6.9134e-05, 8.5262e-05], device='cuda:2') 2023-03-26 08:47:26,487 INFO [finetune.py:976] (2/7) Epoch 8, batch 100, loss[loss=0.1649, simple_loss=0.2301, pruned_loss=0.04987, over 4788.00 frames. ], tot_loss[loss=0.2034, simple_loss=0.2656, pruned_loss=0.07058, over 380606.64 frames. ], batch size: 29, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:47:27,194 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40195.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:47:29,610 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1629, 1.2702, 1.1658, 1.2764, 1.3932, 2.4800, 1.1633, 1.4023], device='cuda:2'), covar=tensor([0.1167, 0.2167, 0.1279, 0.1166, 0.1935, 0.0456, 0.1824, 0.2056], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0081, 0.0075, 0.0078, 0.0092, 0.0083, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 08:47:35,865 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-26 08:47:42,322 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40218.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:47:48,308 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.570e+02 1.832e+02 2.394e+02 3.868e+02, threshold=3.663e+02, percent-clipped=0.0 2023-03-26 08:47:56,591 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.50 vs. limit=2.0 2023-03-26 08:47:57,226 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.01 vs. limit=5.0 2023-03-26 08:47:59,305 INFO [finetune.py:976] (2/7) Epoch 8, batch 150, loss[loss=0.186, simple_loss=0.2522, pruned_loss=0.05987, over 4815.00 frames. ], tot_loss[loss=0.2021, simple_loss=0.2629, pruned_loss=0.07063, over 508169.42 frames. 
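[annotation] The frame counts attached to `tot_loss[...]` show how the running average is kept: mid-epoch they sit near 9.5e5 frames, and right after the epoch-8 restart they ramp geometrically (2.17e5 at batch 50, 3.81e5 at batch 100, 5.08e5 at batch 150 above). Both are consistent with exponentially decayed sums with decay 1 - 1/200 per batch at roughly 4,800 frames per batch: the fixed point is 200 * 4,800, about 9.6e5, and the ramp is the geometric approach to it. A sketch of the update (the decay constant is inferred from these numbers, not quoted from source):

```python
def update_tracker(tot_loss_sum: float, tot_frames: float,
                   batch_loss_sum: float, batch_frames: float,
                   reset_interval: int = 200):
    # Decayed running sums; the reported tot_loss is
    # tot_loss_sum / tot_frames.
    decay = 1.0 - 1.0 / reset_interval
    return (tot_loss_sum * decay + batch_loss_sum,
            tot_frames * decay + batch_frames)
```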
], batch size: 38, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:48:06,422 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6234, 1.3419, 1.8761, 3.1465, 2.0794, 2.2590, 0.8276, 2.4667], device='cuda:2'), covar=tensor([0.1899, 0.1689, 0.1464, 0.0671, 0.0942, 0.1342, 0.2068, 0.0676], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0117, 0.0134, 0.0165, 0.0102, 0.0140, 0.0127, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 08:48:07,726 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40256.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:48:18,411 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40272.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:48:22,013 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9522, 3.7457, 3.6747, 1.8413, 3.8919, 2.9272, 0.8968, 2.7011], device='cuda:2'), covar=tensor([0.2376, 0.1725, 0.1342, 0.3106, 0.0938, 0.0875, 0.4208, 0.1358], device='cuda:2'), in_proj_covar=tensor([0.0154, 0.0173, 0.0162, 0.0130, 0.0156, 0.0123, 0.0146, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 08:48:22,630 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40279.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:48:33,049 INFO [finetune.py:976] (2/7) Epoch 8, batch 200, loss[loss=0.1759, simple_loss=0.2309, pruned_loss=0.06043, over 4726.00 frames. ], tot_loss[loss=0.204, simple_loss=0.264, pruned_loss=0.07205, over 606680.20 frames. ], batch size: 23, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:48:33,757 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40295.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 08:48:36,591 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4893, 2.3987, 1.8776, 2.6146, 2.4743, 2.1076, 2.9759, 2.4909], device='cuda:2'), covar=tensor([0.1506, 0.3056, 0.3926, 0.3432, 0.3067, 0.1933, 0.3670, 0.2134], device='cuda:2'), in_proj_covar=tensor([0.0172, 0.0190, 0.0235, 0.0255, 0.0235, 0.0193, 0.0211, 0.0193], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 08:48:55,741 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.120e+02 1.652e+02 1.957e+02 2.371e+02 3.958e+02, threshold=3.914e+02, percent-clipped=3.0 2023-03-26 08:48:57,115 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40330.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:48:58,979 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40333.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:49:05,956 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=40343.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 08:49:06,465 INFO [finetune.py:976] (2/7) Epoch 8, batch 250, loss[loss=0.2041, simple_loss=0.2805, pruned_loss=0.0638, over 4828.00 frames. ], tot_loss[loss=0.2049, simple_loss=0.266, pruned_loss=0.07192, over 683875.73 frames. 
], batch size: 47, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:49:17,085 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7354, 1.6604, 1.4624, 1.5368, 1.9567, 1.9099, 1.7500, 1.4294], device='cuda:2'), covar=tensor([0.0251, 0.0280, 0.0564, 0.0278, 0.0189, 0.0458, 0.0231, 0.0389], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0112, 0.0140, 0.0116, 0.0104, 0.0101, 0.0091, 0.0109], device='cuda:2'), out_proj_covar=tensor([6.9964e-05, 8.7553e-05, 1.1180e-04, 9.1016e-05, 8.1972e-05, 7.5209e-05, 6.8874e-05, 8.4915e-05], device='cuda:2') 2023-03-26 08:49:37,956 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40391.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:49:40,040 INFO [finetune.py:976] (2/7) Epoch 8, batch 300, loss[loss=0.2399, simple_loss=0.3058, pruned_loss=0.08704, over 4823.00 frames. ], tot_loss[loss=0.207, simple_loss=0.2694, pruned_loss=0.07235, over 742494.03 frames. ], batch size: 40, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:49:47,807 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3739, 1.1889, 1.5889, 2.4210, 1.6794, 2.1517, 0.8410, 1.9527], device='cuda:2'), covar=tensor([0.1909, 0.1714, 0.1291, 0.0773, 0.0994, 0.1151, 0.1735, 0.0788], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0117, 0.0134, 0.0165, 0.0102, 0.0140, 0.0127, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 08:49:59,867 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40423.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:50:07,992 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.190e+02 1.680e+02 2.022e+02 2.440e+02 4.521e+02, threshold=4.043e+02, percent-clipped=1.0 2023-03-26 08:50:27,979 INFO [finetune.py:976] (2/7) Epoch 8, batch 350, loss[loss=0.1776, simple_loss=0.2378, pruned_loss=0.05872, over 4771.00 frames. ], tot_loss[loss=0.209, simple_loss=0.2712, pruned_loss=0.07336, over 791276.44 frames. ], batch size: 26, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:51:00,913 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.70 vs. limit=5.0 2023-03-26 08:51:01,404 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=40471.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:51:01,461 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40471.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:51:26,233 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40492.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:51:27,353 INFO [finetune.py:976] (2/7) Epoch 8, batch 400, loss[loss=0.2247, simple_loss=0.2896, pruned_loss=0.07987, over 4756.00 frames. ], tot_loss[loss=0.2103, simple_loss=0.2725, pruned_loss=0.07407, over 827006.96 frames. 
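[annotation] Across this section `batch size:` swings from 16 to 66 while the per-batch frame count inside `loss[...]` stays pinned near 4,200-4,900. That is the signature of duration-constrained batching: each batch is filled with many short utterances or a few long ones up to a fixed total audio budget, which at the usual 10 ms frames and 4x subsampling puts ~4,800 frames around 190 s of audio. A sketch of the core rule; real samplers (lhotse's bucketing samplers) additionally group cuts of similar duration first, and `cuts` is assumed to be an iterable of objects with a `.duration` in seconds:

```python
def duration_batches(cuts, max_duration: float = 190.0):
    # Fill each batch until one more utterance would exceed the duration
    # cap; batch *size* then floats with utterance length, exactly the
    # 16-66 spread seen in this log.
    batch, total = [], 0.0
    for cut in cuts:
        if batch and total + cut.duration > max_duration:
            yield batch
            batch, total = [], 0.0
        batch.append(cut)
        total += cut.duration
    if batch:
        yield batch
```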
], batch size: 59, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:51:36,073 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7340, 1.4868, 1.4297, 1.6558, 1.9365, 1.7404, 1.0971, 1.4135], device='cuda:2'), covar=tensor([0.2119, 0.2248, 0.1931, 0.1731, 0.1738, 0.1191, 0.2800, 0.1923], device='cuda:2'), in_proj_covar=tensor([0.0237, 0.0210, 0.0204, 0.0188, 0.0240, 0.0179, 0.0215, 0.0193], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 08:51:52,934 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=40519.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:51:58,845 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.015e+02 1.662e+02 2.008e+02 2.590e+02 4.107e+02, threshold=4.016e+02, percent-clipped=2.0 2023-03-26 08:52:11,111 INFO [finetune.py:976] (2/7) Epoch 8, batch 450, loss[loss=0.2195, simple_loss=0.2669, pruned_loss=0.08609, over 4855.00 frames. ], tot_loss[loss=0.2084, simple_loss=0.2703, pruned_loss=0.07321, over 854112.29 frames. ], batch size: 44, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:52:21,159 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=40551.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:52:26,844 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40553.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:52:41,174 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=40574.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:52:54,278 INFO [finetune.py:976] (2/7) Epoch 8, batch 500, loss[loss=0.1743, simple_loss=0.246, pruned_loss=0.05131, over 4873.00 frames. ], tot_loss[loss=0.2052, simple_loss=0.2671, pruned_loss=0.07159, over 877447.69 frames. ], batch size: 34, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:53:17,850 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.064e+02 1.648e+02 1.946e+02 2.379e+02 4.476e+02, threshold=3.892e+02, percent-clipped=1.0 2023-03-26 08:53:17,930 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=40628.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:53:28,115 INFO [finetune.py:976] (2/7) Epoch 8, batch 550, loss[loss=0.1892, simple_loss=0.2487, pruned_loss=0.06488, over 4900.00 frames. ], tot_loss[loss=0.2016, simple_loss=0.2638, pruned_loss=0.06973, over 895776.90 frames. ], batch size: 43, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:53:29,494 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.78 vs. 
limit=2.0 2023-03-26 08:53:30,669 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4239, 1.3994, 1.7986, 1.6774, 1.5712, 3.3951, 1.3632, 1.6247], device='cuda:2'), covar=tensor([0.0931, 0.1719, 0.1069, 0.0998, 0.1545, 0.0251, 0.1410, 0.1636], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0081, 0.0075, 0.0078, 0.0092, 0.0083, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-26 08:53:32,528 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40651.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:53:56,756 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=40686.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:54:01,538 INFO [finetune.py:976] (2/7) Epoch 8, batch 600, loss[loss=0.2167, simple_loss=0.2912, pruned_loss=0.07117, over 4791.00 frames. ], tot_loss[loss=0.2022, simple_loss=0.2646, pruned_loss=0.06988, over 910065.98 frames. ], batch size: 29, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 08:54:11,170 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4944, 2.1720, 1.6856, 0.8041, 1.9550, 1.9526, 1.7475, 2.0815], device='cuda:2'), covar=tensor([0.0783, 0.0858, 0.1530, 0.2181, 0.1352, 0.2306, 0.2233, 0.0913], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0200, 0.0201, 0.0188, 0.0218, 0.0206, 0.0222, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 08:54:14,589 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40712.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:54:24,591 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.141e+02 1.757e+02 2.080e+02 2.524e+02 4.426e+02, threshold=4.160e+02, percent-clipped=1.0 2023-03-26 08:54:34,712 INFO [finetune.py:976] (2/7) Epoch 8, batch 650, loss[loss=0.2435, simple_loss=0.3162, pruned_loss=0.0854, over 4818.00 frames. ], tot_loss[loss=0.2036, simple_loss=0.2667, pruned_loss=0.07025, over 921444.73 frames. ], batch size: 40, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 08:54:34,971 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0 2023-03-26 08:54:39,557 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8251, 2.1557, 1.5926, 1.7350, 2.2262, 2.0450, 1.9925, 1.8211], device='cuda:2'), covar=tensor([0.0330, 0.0259, 0.0522, 0.0295, 0.0241, 0.0674, 0.0254, 0.0329], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0111, 0.0140, 0.0116, 0.0104, 0.0101, 0.0091, 0.0109], device='cuda:2'), out_proj_covar=tensor([6.9890e-05, 8.7111e-05, 1.1165e-04, 9.1142e-05, 8.1840e-05, 7.4809e-05, 6.8503e-05, 8.4635e-05], device='cuda:2') 2023-03-26 08:54:53,181 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0 2023-03-26 08:55:08,422 INFO [finetune.py:976] (2/7) Epoch 8, batch 700, loss[loss=0.1451, simple_loss=0.2221, pruned_loss=0.03405, over 4756.00 frames. ], tot_loss[loss=0.2063, simple_loss=0.2692, pruned_loss=0.07165, over 924743.34 frames. 
], batch size: 28, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 08:55:31,876 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.288e+02 1.702e+02 1.948e+02 2.422e+02 4.930e+02, threshold=3.896e+02, percent-clipped=3.0 2023-03-26 08:55:39,585 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8113, 3.9218, 3.8067, 2.0855, 3.9998, 3.0446, 0.7030, 2.8793], device='cuda:2'), covar=tensor([0.2319, 0.1759, 0.1334, 0.3120, 0.0890, 0.0970, 0.4599, 0.1296], device='cuda:2'), in_proj_covar=tensor([0.0156, 0.0174, 0.0163, 0.0131, 0.0158, 0.0124, 0.0147, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 08:55:51,723 INFO [finetune.py:976] (2/7) Epoch 8, batch 750, loss[loss=0.2294, simple_loss=0.2882, pruned_loss=0.08525, over 4894.00 frames. ], tot_loss[loss=0.2079, simple_loss=0.2712, pruned_loss=0.07229, over 933231.58 frames. ], batch size: 32, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 08:55:54,208 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=40848.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:56:01,154 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40851.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:56:20,352 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40865.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:56:32,137 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40874.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:56:55,831 INFO [finetune.py:976] (2/7) Epoch 8, batch 800, loss[loss=0.1978, simple_loss=0.2571, pruned_loss=0.06932, over 4826.00 frames. ], tot_loss[loss=0.2065, simple_loss=0.2702, pruned_loss=0.07136, over 937607.50 frames. ], batch size: 49, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 08:57:01,619 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2669, 2.2041, 1.6937, 2.4297, 2.3059, 2.0222, 2.7656, 2.2552], device='cuda:2'), covar=tensor([0.1518, 0.3391, 0.3891, 0.3360, 0.2843, 0.1801, 0.3660, 0.2355], device='cuda:2'), in_proj_covar=tensor([0.0173, 0.0190, 0.0236, 0.0256, 0.0236, 0.0194, 0.0212, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 08:57:03,992 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=40899.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:57:04,647 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40900.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:57:23,555 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=40922.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:57:26,540 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40926.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:57:27,573 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.008e+02 1.606e+02 1.985e+02 2.397e+02 9.945e+02, threshold=3.971e+02, percent-clipped=3.0 2023-03-26 08:57:27,686 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40928.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:57:45,603 INFO [finetune.py:976] (2/7) Epoch 8, batch 850, loss[loss=0.1758, simple_loss=0.2372, pruned_loss=0.05723, over 4856.00 frames. 
], tot_loss[loss=0.2042, simple_loss=0.2676, pruned_loss=0.07046, over 943246.13 frames. ], batch size: 49, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 08:58:00,396 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40961.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 08:58:08,267 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.3421, 1.4471, 1.5419, 0.8004, 1.4190, 1.6578, 1.8142, 1.4021], device='cuda:2'), covar=tensor([0.0871, 0.0569, 0.0407, 0.0542, 0.0428, 0.0498, 0.0280, 0.0648], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0155, 0.0119, 0.0137, 0.0131, 0.0124, 0.0144, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.5763e-05, 1.1425e-04, 8.5889e-05, 9.9336e-05, 9.4125e-05, 9.1109e-05, 1.0630e-04, 1.0717e-04], device='cuda:2') 2023-03-26 08:58:10,970 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=40976.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:58:18,129 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40986.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:58:22,822 INFO [finetune.py:976] (2/7) Epoch 8, batch 900, loss[loss=0.1822, simple_loss=0.2409, pruned_loss=0.06177, over 4795.00 frames. ], tot_loss[loss=0.203, simple_loss=0.2654, pruned_loss=0.07033, over 944432.50 frames. ], batch size: 29, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 08:58:25,201 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40997.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:58:28,413 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2885, 2.6228, 2.1264, 1.7961, 2.4669, 2.6204, 2.5134, 2.1203], device='cuda:2'), covar=tensor([0.0648, 0.0516, 0.0824, 0.0913, 0.0724, 0.0687, 0.0612, 0.1021], device='cuda:2'), in_proj_covar=tensor([0.0135, 0.0132, 0.0142, 0.0124, 0.0113, 0.0143, 0.0144, 0.0159], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 08:58:31,512 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=41007.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:58:46,165 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.012e+02 1.525e+02 1.868e+02 2.283e+02 3.598e+02, threshold=3.736e+02, percent-clipped=0.0 2023-03-26 08:58:50,349 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=41034.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:58:56,854 INFO [finetune.py:976] (2/7) Epoch 8, batch 950, loss[loss=0.3277, simple_loss=0.3555, pruned_loss=0.1499, over 4288.00 frames. ], tot_loss[loss=0.2025, simple_loss=0.2642, pruned_loss=0.07041, over 945910.61 frames. ], batch size: 65, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 08:58:57,586 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=41045.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:59:06,041 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=41058.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:59:30,583 INFO [finetune.py:976] (2/7) Epoch 8, batch 1000, loss[loss=0.2471, simple_loss=0.3155, pruned_loss=0.08934, over 4805.00 frames. ], tot_loss[loss=0.2042, simple_loss=0.2663, pruned_loss=0.07106, over 947754.22 frames. 
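
The `zipformer.py:1188` records come once per encoder stack: each stack has its own `warmup_begin`/`warmup_end` window (666.7–1333.3 up to 3333.3–4000.0 batches), long past at `batch_count` ≈ 41k, and most batches drop nothing, but occasionally a stack reports `num_to_drop=1` with a randomly chosen layer index (as at batch_count=40961.0 above), i.e. stochastic layer skipping that persists at a low rate after warmup. A rough sketch of that behaviour; the drop probability and its schedule are assumptions, not the exact rule in zipformer.py:

```python
import random

def layers_to_drop(num_layers: int, batch_count: float,
                   warmup_end: float, post_warmup_prob: float = 0.05) -> set:
    """LayerDrop-style skipping: nothing is dropped while a stack is still
    warming up, then each layer is independently skipped with a small
    probability (rate assumed, not taken from zipformer.py)."""
    if batch_count < warmup_end:
        return set()
    return {i for i in range(num_layers) if random.random() < post_warmup_prob}

print(layers_to_drop(num_layers=4, batch_count=40961.0, warmup_end=4000.0))
```
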
], batch size: 41, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 08:59:36,017 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=41102.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:59:38,375 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=41106.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:59:52,966 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.084e+02 1.652e+02 2.000e+02 2.359e+02 4.809e+02, threshold=4.000e+02, percent-clipped=2.0 2023-03-26 08:59:55,436 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.47 vs. limit=2.0 2023-03-26 09:00:04,080 INFO [finetune.py:976] (2/7) Epoch 8, batch 1050, loss[loss=0.1807, simple_loss=0.2352, pruned_loss=0.06305, over 4346.00 frames. ], tot_loss[loss=0.2071, simple_loss=0.2699, pruned_loss=0.07213, over 949200.92 frames. ], batch size: 19, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:00:06,646 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=41148.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:00:16,284 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=41163.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:00:18,134 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1867, 2.0587, 1.7227, 2.0678, 2.1573, 1.8128, 2.4358, 2.1935], device='cuda:2'), covar=tensor([0.1554, 0.2556, 0.3504, 0.3133, 0.2954, 0.1892, 0.3321, 0.2038], device='cuda:2'), in_proj_covar=tensor([0.0173, 0.0191, 0.0236, 0.0256, 0.0238, 0.0195, 0.0213, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:00:37,444 INFO [finetune.py:976] (2/7) Epoch 8, batch 1100, loss[loss=0.2182, simple_loss=0.2751, pruned_loss=0.08069, over 4397.00 frames. ], tot_loss[loss=0.2073, simple_loss=0.2702, pruned_loss=0.07217, over 952109.30 frames. ], batch size: 19, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:00:38,741 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=41196.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:00:54,959 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=41221.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:00:59,684 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.124e+02 1.750e+02 2.155e+02 2.664e+02 4.791e+02, threshold=4.309e+02, percent-clipped=2.0 2023-03-26 09:01:17,459 INFO [finetune.py:976] (2/7) Epoch 8, batch 1150, loss[loss=0.1975, simple_loss=0.2742, pruned_loss=0.06034, over 4831.00 frames. ], tot_loss[loss=0.2079, simple_loss=0.2717, pruned_loss=0.07201, over 954687.74 frames. 
], batch size: 47, lr: 3.84e-03, grad_scale: 32.0 2023-03-26 09:01:21,709 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5879, 3.4291, 3.3290, 1.5335, 3.5527, 2.5252, 0.7615, 2.3230], device='cuda:2'), covar=tensor([0.2033, 0.1882, 0.1408, 0.3265, 0.1136, 0.1146, 0.4238, 0.1535], device='cuda:2'), in_proj_covar=tensor([0.0155, 0.0174, 0.0162, 0.0131, 0.0158, 0.0124, 0.0148, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 09:01:30,859 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9810, 1.8326, 1.5812, 1.6867, 1.7479, 1.7479, 1.8130, 2.5618], device='cuda:2'), covar=tensor([0.5334, 0.5660, 0.4195, 0.5301, 0.5284, 0.3035, 0.4928, 0.1872], device='cuda:2'), in_proj_covar=tensor([0.0285, 0.0257, 0.0221, 0.0280, 0.0241, 0.0205, 0.0244, 0.0206], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:01:32,010 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=41256.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 09:01:41,291 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.53 vs. limit=5.0 2023-03-26 09:02:05,266 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.2919, 1.2970, 1.2988, 0.6385, 1.1916, 1.4357, 1.4809, 1.1814], device='cuda:2'), covar=tensor([0.0715, 0.0589, 0.0392, 0.0495, 0.0444, 0.0468, 0.0313, 0.0539], device='cuda:2'), in_proj_covar=tensor([0.0128, 0.0155, 0.0119, 0.0137, 0.0131, 0.0125, 0.0145, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.5606e-05, 1.1465e-04, 8.6122e-05, 9.9401e-05, 9.4221e-05, 9.1428e-05, 1.0664e-04, 1.0741e-04], device='cuda:2') 2023-03-26 09:02:12,816 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7710, 1.6869, 1.6404, 1.9047, 2.4311, 1.9277, 1.6565, 1.4826], device='cuda:2'), covar=tensor([0.2429, 0.2391, 0.1952, 0.1797, 0.1870, 0.1262, 0.2608, 0.2163], device='cuda:2'), in_proj_covar=tensor([0.0239, 0.0212, 0.0207, 0.0189, 0.0242, 0.0180, 0.0216, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:02:15,112 INFO [finetune.py:976] (2/7) Epoch 8, batch 1200, loss[loss=0.162, simple_loss=0.2394, pruned_loss=0.04226, over 4766.00 frames. ], tot_loss[loss=0.2063, simple_loss=0.2697, pruned_loss=0.07145, over 953587.95 frames. 
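
The `scaling.py:679` records fire when a whitening constraint is checked: the metric measures how far the group-wise covariance of some activations is from a multiple of the identity, with 1.0 meaning perfectly white, and the limit (2.0 or 5.0 here, depending on the module) is the value above which the constraint starts pushing back. One standard metric with these properties is the ratio of the mean squared covariance eigenvalue to the squared mean eigenvalue; whether scaling.py uses exactly this form is an assumption:

```python
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    """mean(eig^2) / mean(eig)^2 of each channel group's covariance,
    averaged over groups; equals 1.0 iff the covariance is a multiple of I."""
    n, c = x.shape
    vals = []
    for g in x.reshape(n, num_groups, c // num_groups).unbind(dim=1):
        g = g - g.mean(dim=0)
        eigs = torch.linalg.eigvalsh(g.T @ g / n)
        vals.append((eigs.square().mean() / eigs.mean().square()).item())
    return sum(vals) / len(vals)

x = torch.randn(1000, 96)                 # well-conditioned activations...
print(whitening_metric(x, num_groups=8))  # ...score close to 1.0
```
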
], batch size: 26, lr: 3.84e-03, grad_scale: 32.0 2023-03-26 09:02:24,086 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=41307.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:02:37,270 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.297e+01 1.635e+02 1.914e+02 2.289e+02 4.123e+02, threshold=3.829e+02, percent-clipped=0.0 2023-03-26 09:02:37,415 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8750, 1.6955, 1.6469, 1.9631, 2.3919, 1.9829, 1.4205, 1.5638], device='cuda:2'), covar=tensor([0.2298, 0.2304, 0.1905, 0.1679, 0.1854, 0.1162, 0.2652, 0.1897], device='cuda:2'), in_proj_covar=tensor([0.0238, 0.0211, 0.0206, 0.0188, 0.0240, 0.0179, 0.0216, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:02:42,677 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0687, 2.0943, 2.1100, 1.3636, 2.2424, 2.1758, 2.1492, 1.7164], device='cuda:2'), covar=tensor([0.0680, 0.0627, 0.0679, 0.0964, 0.0479, 0.0732, 0.0628, 0.1188], device='cuda:2'), in_proj_covar=tensor([0.0136, 0.0133, 0.0144, 0.0126, 0.0114, 0.0145, 0.0146, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:02:51,368 INFO [finetune.py:976] (2/7) Epoch 8, batch 1250, loss[loss=0.1781, simple_loss=0.2393, pruned_loss=0.05844, over 4854.00 frames. ], tot_loss[loss=0.2026, simple_loss=0.2659, pruned_loss=0.06966, over 954966.72 frames. ], batch size: 47, lr: 3.84e-03, grad_scale: 32.0 2023-03-26 09:03:02,679 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=41353.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:03:04,419 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=41355.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:03:15,724 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.4663, 3.8972, 4.0972, 4.2804, 4.2553, 3.9655, 4.5000, 1.7796], device='cuda:2'), covar=tensor([0.0808, 0.0718, 0.0883, 0.0790, 0.1170, 0.1442, 0.0727, 0.4667], device='cuda:2'), in_proj_covar=tensor([0.0347, 0.0241, 0.0273, 0.0291, 0.0330, 0.0279, 0.0301, 0.0294], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:03:24,977 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-26 09:03:32,985 INFO [finetune.py:976] (2/7) Epoch 8, batch 1300, loss[loss=0.2203, simple_loss=0.2804, pruned_loss=0.08016, over 4868.00 frames. ], tot_loss[loss=0.1993, simple_loss=0.2623, pruned_loss=0.06809, over 954587.08 frames. ], batch size: 34, lr: 3.84e-03, grad_scale: 32.0 2023-03-26 09:03:37,919 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=41401.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:03:56,246 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.077e+02 1.674e+02 1.900e+02 2.309e+02 4.379e+02, threshold=3.799e+02, percent-clipped=1.0 2023-03-26 09:04:06,253 INFO [finetune.py:976] (2/7) Epoch 8, batch 1350, loss[loss=0.2489, simple_loss=0.3013, pruned_loss=0.09822, over 4794.00 frames. ], tot_loss[loss=0.2007, simple_loss=0.2634, pruned_loss=0.06905, over 955180.78 frames. 
], batch size: 45, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:04:16,301 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=41458.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:04:20,478 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.60 vs. limit=2.0 2023-03-26 09:04:39,875 INFO [finetune.py:976] (2/7) Epoch 8, batch 1400, loss[loss=0.2178, simple_loss=0.2705, pruned_loss=0.08251, over 4899.00 frames. ], tot_loss[loss=0.2027, simple_loss=0.2659, pruned_loss=0.06976, over 956138.80 frames. ], batch size: 35, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:04:48,737 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5105, 1.3461, 1.7563, 1.8732, 1.6079, 3.4445, 1.3260, 1.5731], device='cuda:2'), covar=tensor([0.1005, 0.1848, 0.1079, 0.0961, 0.1638, 0.0257, 0.1483, 0.1790], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0081, 0.0075, 0.0078, 0.0092, 0.0083, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 09:04:58,592 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=41521.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:05:02,225 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.0706, 0.9981, 1.0522, 0.3788, 0.8169, 1.1686, 1.2218, 0.9884], device='cuda:2'), covar=tensor([0.0957, 0.0569, 0.0526, 0.0572, 0.0537, 0.0654, 0.0411, 0.0685], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0156, 0.0120, 0.0137, 0.0133, 0.0125, 0.0146, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.6397e-05, 1.1485e-04, 8.6790e-05, 9.9741e-05, 9.5205e-05, 9.1842e-05, 1.0721e-04, 1.0811e-04], device='cuda:2') 2023-03-26 09:05:03,328 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.146e+02 1.704e+02 2.004e+02 2.444e+02 3.700e+02, threshold=4.008e+02, percent-clipped=0.0 2023-03-26 09:05:12,590 INFO [finetune.py:976] (2/7) Epoch 8, batch 1450, loss[loss=0.2145, simple_loss=0.2699, pruned_loss=0.07951, over 4818.00 frames. ], tot_loss[loss=0.2056, simple_loss=0.2691, pruned_loss=0.07102, over 956710.09 frames. ], batch size: 39, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:05:21,944 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=41556.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 09:05:30,811 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=41569.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:05:46,318 INFO [finetune.py:976] (2/7) Epoch 8, batch 1500, loss[loss=0.2293, simple_loss=0.3055, pruned_loss=0.07656, over 4817.00 frames. ], tot_loss[loss=0.2069, simple_loss=0.2704, pruned_loss=0.07165, over 956828.10 frames. ], batch size: 38, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:05:53,544 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=41604.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:06:10,823 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.231e+02 1.635e+02 1.924e+02 2.365e+02 3.634e+02, threshold=3.848e+02, percent-clipped=0.0 2023-03-26 09:06:22,436 INFO [finetune.py:976] (2/7) Epoch 8, batch 1550, loss[loss=0.2051, simple_loss=0.2665, pruned_loss=0.0718, over 4866.00 frames. ], tot_loss[loss=0.208, simple_loss=0.2713, pruned_loss=0.07232, over 955179.94 frames. 
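
`grad_scale` steps between 16.0 and 32.0 through these records (32.0 from batch 1150, back to 16.0 by batch 1350). With fp16 training that is the usual dynamic loss-scaling pattern: the scaler doubles the scale after a long run of overflow-free steps and halves it whenever a step produces inf/nan gradients. A minimal sketch with PyTorch's stock scaler; the interval and factors shown are PyTorch defaults, not values read from this run:

```python
import torch

scaler = torch.cuda.amp.GradScaler(
    init_scale=16.0,      # the grad_scale most records here report
    growth_factor=2.0,    # 16.0 -> 32.0 after enough clean steps
    backoff_factor=0.5,   # 32.0 -> 16.0 after an inf/nan step
    growth_interval=2000,
)
# Typical use per step:
#   scaler.scale(loss).backward(); scaler.step(optimizer); scaler.update()
```
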
], batch size: 34, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:06:34,445 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=41653.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:06:35,066 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=41654.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:06:46,518 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1487, 1.4021, 1.5220, 0.7629, 1.3674, 1.6012, 1.6618, 1.3661], device='cuda:2'), covar=tensor([0.0982, 0.0637, 0.0632, 0.0608, 0.0537, 0.0778, 0.0425, 0.0804], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0155, 0.0121, 0.0137, 0.0133, 0.0125, 0.0146, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.6016e-05, 1.1458e-04, 8.7448e-05, 9.9653e-05, 9.5317e-05, 9.2017e-05, 1.0711e-04, 1.0824e-04], device='cuda:2') 2023-03-26 09:07:19,719 INFO [finetune.py:976] (2/7) Epoch 8, batch 1600, loss[loss=0.1712, simple_loss=0.2238, pruned_loss=0.05929, over 4027.00 frames. ], tot_loss[loss=0.2064, simple_loss=0.2692, pruned_loss=0.07182, over 954799.58 frames. ], batch size: 17, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:07:23,843 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0800, 2.0668, 2.1394, 1.3142, 2.2401, 2.2439, 1.9992, 1.7396], device='cuda:2'), covar=tensor([0.0598, 0.0639, 0.0604, 0.0945, 0.0478, 0.0626, 0.0690, 0.1140], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0130, 0.0140, 0.0124, 0.0112, 0.0141, 0.0142, 0.0159], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:07:25,549 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=41701.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:07:25,585 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=41701.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:07:38,991 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5917, 1.5432, 1.3397, 1.5097, 1.8612, 1.7431, 1.5964, 1.2984], device='cuda:2'), covar=tensor([0.0253, 0.0275, 0.0508, 0.0259, 0.0175, 0.0446, 0.0282, 0.0426], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0112, 0.0140, 0.0117, 0.0105, 0.0101, 0.0092, 0.0110], device='cuda:2'), out_proj_covar=tensor([7.0705e-05, 8.7852e-05, 1.1202e-04, 9.2120e-05, 8.2376e-05, 7.5225e-05, 6.9279e-05, 8.5356e-05], device='cuda:2') 2023-03-26 09:07:39,585 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=41715.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 09:07:43,116 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.2456, 2.7481, 2.6000, 1.3004, 2.6838, 2.2935, 2.1337, 2.1662], device='cuda:2'), covar=tensor([0.0773, 0.0925, 0.1806, 0.2412, 0.1947, 0.2036, 0.2117, 0.1439], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0201, 0.0204, 0.0189, 0.0220, 0.0207, 0.0223, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:07:47,200 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4324, 1.0205, 0.7574, 1.3687, 1.9187, 0.7453, 1.2429, 1.3637], device='cuda:2'), covar=tensor([0.1550, 0.2106, 0.1882, 0.1245, 0.2105, 0.2013, 0.1508, 0.1979], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0097, 0.0114, 0.0092, 0.0123, 0.0096, 0.0100, 0.0092], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 
09:07:48,912 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.136e+02 1.581e+02 1.949e+02 2.490e+02 4.755e+02, threshold=3.899e+02, percent-clipped=2.0 2023-03-26 09:07:57,453 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.60 vs. limit=5.0 2023-03-26 09:07:58,438 INFO [finetune.py:976] (2/7) Epoch 8, batch 1650, loss[loss=0.1934, simple_loss=0.2266, pruned_loss=0.08009, over 4217.00 frames. ], tot_loss[loss=0.2038, simple_loss=0.2659, pruned_loss=0.07091, over 955236.38 frames. ], batch size: 18, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:08:01,529 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=41749.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:08:09,826 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=41758.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:08:21,680 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.03 vs. limit=5.0 2023-03-26 09:08:42,690 INFO [finetune.py:976] (2/7) Epoch 8, batch 1700, loss[loss=0.1889, simple_loss=0.2427, pruned_loss=0.06756, over 4709.00 frames. ], tot_loss[loss=0.2014, simple_loss=0.2637, pruned_loss=0.06958, over 955931.93 frames. ], batch size: 23, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:08:50,309 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=41806.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:08:55,037 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3870, 2.7201, 2.1992, 1.7099, 2.6113, 2.8120, 2.4785, 2.2570], device='cuda:2'), covar=tensor([0.0619, 0.0529, 0.0808, 0.0949, 0.0620, 0.0659, 0.0690, 0.1002], device='cuda:2'), in_proj_covar=tensor([0.0135, 0.0132, 0.0142, 0.0125, 0.0113, 0.0143, 0.0144, 0.0160], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:09:06,690 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.178e+02 1.763e+02 2.034e+02 2.335e+02 4.675e+02, threshold=4.069e+02, percent-clipped=2.0 2023-03-26 09:09:08,509 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8113, 1.3044, 0.9500, 1.6649, 2.1990, 1.1506, 1.5635, 1.5019], device='cuda:2'), covar=tensor([0.1341, 0.2099, 0.1800, 0.1179, 0.1728, 0.1812, 0.1359, 0.1980], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0097, 0.0114, 0.0092, 0.0123, 0.0096, 0.0101, 0.0092], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 09:09:16,340 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0 2023-03-26 09:09:16,758 INFO [finetune.py:976] (2/7) Epoch 8, batch 1750, loss[loss=0.2136, simple_loss=0.2904, pruned_loss=0.06842, over 4747.00 frames. ], tot_loss[loss=0.2037, simple_loss=0.266, pruned_loss=0.07074, over 955992.72 frames. ], batch size: 54, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:09:27,923 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. limit=2.0 2023-03-26 09:09:50,589 INFO [finetune.py:976] (2/7) Epoch 8, batch 1800, loss[loss=0.2714, simple_loss=0.3268, pruned_loss=0.108, over 4226.00 frames. ], tot_loss[loss=0.2062, simple_loss=0.269, pruned_loss=0.07174, over 952463.45 frames. 
], batch size: 65, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:09:57,996 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=41906.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:10:13,605 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.102e+02 1.792e+02 2.103e+02 2.633e+02 4.479e+02, threshold=4.207e+02, percent-clipped=2.0 2023-03-26 09:10:20,702 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5492, 2.3121, 2.0213, 0.9577, 2.2339, 1.8234, 1.5694, 2.1070], device='cuda:2'), covar=tensor([0.0789, 0.0898, 0.1601, 0.2094, 0.1421, 0.2491, 0.2567, 0.1023], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0201, 0.0203, 0.0188, 0.0219, 0.0208, 0.0223, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:10:23,653 INFO [finetune.py:976] (2/7) Epoch 8, batch 1850, loss[loss=0.1954, simple_loss=0.2587, pruned_loss=0.06604, over 4752.00 frames. ], tot_loss[loss=0.207, simple_loss=0.2698, pruned_loss=0.07213, over 949146.26 frames. ], batch size: 26, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:10:26,657 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=41948.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 09:10:38,719 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=41967.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:10:57,329 INFO [finetune.py:976] (2/7) Epoch 8, batch 1900, loss[loss=0.2271, simple_loss=0.2715, pruned_loss=0.09137, over 4145.00 frames. ], tot_loss[loss=0.2087, simple_loss=0.2718, pruned_loss=0.07276, over 951301.06 frames. ], batch size: 66, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:11:08,253 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=42009.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 09:11:08,808 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=42010.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 09:11:22,120 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.207e+02 1.556e+02 1.920e+02 2.218e+02 3.872e+02, threshold=3.841e+02, percent-clipped=0.0 2023-03-26 09:11:32,121 INFO [finetune.py:976] (2/7) Epoch 8, batch 1950, loss[loss=0.1667, simple_loss=0.2336, pruned_loss=0.04988, over 4742.00 frames. ], tot_loss[loss=0.2068, simple_loss=0.2698, pruned_loss=0.07192, over 951903.85 frames. ], batch size: 59, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:11:52,054 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9420, 1.8526, 1.7104, 1.7503, 1.1869, 4.0133, 1.7062, 2.3099], device='cuda:2'), covar=tensor([0.3180, 0.2300, 0.1856, 0.2117, 0.1768, 0.0155, 0.2583, 0.1206], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0115, 0.0119, 0.0123, 0.0117, 0.0099, 0.0100, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 09:12:30,959 INFO [finetune.py:976] (2/7) Epoch 8, batch 2000, loss[loss=0.21, simple_loss=0.2656, pruned_loss=0.07726, over 4841.00 frames. ], tot_loss[loss=0.2045, simple_loss=0.2671, pruned_loss=0.07097, over 952597.04 frames. 
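
The `tot_loss[... over ~95xxxx frames ]` totals plateau near 950k frames rather than growing with the epoch, which fits an exponentially decayed running sum: if the accumulated statistics decay by a factor of (1 - 1/reset_interval) each batch, a reset_interval of 200 and roughly 4.8k frames per batch settle at about 200 × 4.8k ≈ 950k frames. The decay rule and reset_interval=200 are assumptions that happen to reproduce the observed plateau:

```python
# Steady state of  frames = frames * (1 - 1/200) + per_batch  is
# per_batch * 200; with ~4760 frames per batch that is ~952k,
# matching the "over 95xxxx frames" totals in these records.
frames = 0.0
for _ in range(2000):
    frames = frames * (1 - 1 / 200) + 4760.0
print(f"{frames:,.0f}")  # ~951,958
```
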
], batch size: 47, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:12:45,996 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4545, 1.6930, 0.8882, 2.2753, 2.6297, 1.8629, 2.0660, 2.2221], device='cuda:2'), covar=tensor([0.1184, 0.1827, 0.2001, 0.0975, 0.1736, 0.1781, 0.1316, 0.1740], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0098, 0.0114, 0.0092, 0.0124, 0.0096, 0.0101, 0.0093], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-26 09:12:56,437 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.048e+02 1.506e+02 1.840e+02 2.176e+02 3.856e+02, threshold=3.679e+02, percent-clipped=1.0 2023-03-26 09:13:06,567 INFO [finetune.py:976] (2/7) Epoch 8, batch 2050, loss[loss=0.177, simple_loss=0.235, pruned_loss=0.0595, over 4749.00 frames. ], tot_loss[loss=0.2015, simple_loss=0.2636, pruned_loss=0.06972, over 954162.40 frames. ], batch size: 28, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:13:08,805 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. limit=2.0 2023-03-26 09:13:35,857 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. limit=2.0 2023-03-26 09:13:52,346 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.86 vs. limit=2.0 2023-03-26 09:13:53,279 INFO [finetune.py:976] (2/7) Epoch 8, batch 2100, loss[loss=0.241, simple_loss=0.2901, pruned_loss=0.09596, over 4144.00 frames. ], tot_loss[loss=0.202, simple_loss=0.2639, pruned_loss=0.07005, over 952383.18 frames. ], batch size: 65, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:14:06,874 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.16 vs. limit=5.0 2023-03-26 09:14:08,507 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.0593, 4.8790, 4.6232, 2.8608, 4.9278, 3.7223, 1.1054, 3.5454], device='cuda:2'), covar=tensor([0.2451, 0.2076, 0.1364, 0.2904, 0.0681, 0.0836, 0.4556, 0.1301], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0170, 0.0158, 0.0128, 0.0154, 0.0121, 0.0144, 0.0121], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 09:14:11,168 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-26 09:14:16,279 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.091e+02 1.710e+02 1.945e+02 2.376e+02 4.149e+02, threshold=3.889e+02, percent-clipped=2.0 2023-03-26 09:14:26,997 INFO [finetune.py:976] (2/7) Epoch 8, batch 2150, loss[loss=0.2236, simple_loss=0.2812, pruned_loss=0.08306, over 4838.00 frames. ], tot_loss[loss=0.2049, simple_loss=0.2675, pruned_loss=0.07118, over 954158.84 frames. 
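
The learning rate ticks down from 3.84e-03 to 3.83e-03 around batch 2100 of epoch 8 (global batch_count ≈ 42.2k above). That trajectory is consistent with icefall's Eden schedule, lr = base_lr · ((step/lr_batches)² + 1)^(-1/4) · ((epoch/lr_epochs)² + 1)^(-1/4); treating base_lr = 4e-3, lr_batches = 1e5 and lr_epochs = 100 as assumptions, the formula crosses from 3.84e-03 to 3.83e-03 at almost exactly this step:

```python
def eden_lr(base_lr: float, step: int, epoch: int,
            lr_batches: float = 100_000.0, lr_epochs: float = 100.0) -> float:
    """Sketch of the Eden schedule assumed to drive the `lr:` values above."""
    return (base_lr
            * ((step / lr_batches) ** 2 + 1) ** -0.25
            * ((epoch / lr_epochs) ** 2 + 1) ** -0.25)

print(f"{eden_lr(0.004, 41_000, 8):.2e}")  # 3.84e-03
print(f"{eden_lr(0.004, 42_500, 8):.2e}")  # 3.83e-03
```
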
], batch size: 30, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:14:27,113 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1831, 2.1145, 1.8542, 1.0899, 1.9636, 1.7631, 1.6378, 2.0062], device='cuda:2'), covar=tensor([0.0800, 0.0618, 0.1146, 0.1507, 0.1158, 0.1715, 0.1691, 0.0818], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0202, 0.0203, 0.0189, 0.0220, 0.0208, 0.0224, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:14:38,878 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=42262.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:15:18,053 INFO [finetune.py:976] (2/7) Epoch 8, batch 2200, loss[loss=0.1868, simple_loss=0.2499, pruned_loss=0.06179, over 4728.00 frames. ], tot_loss[loss=0.2082, simple_loss=0.2709, pruned_loss=0.07271, over 952678.51 frames. ], batch size: 23, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:15:25,344 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=42304.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 09:15:29,022 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=42310.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 09:15:45,963 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.121e+02 1.543e+02 1.921e+02 2.479e+02 5.347e+02, threshold=3.843e+02, percent-clipped=1.0 2023-03-26 09:16:07,395 INFO [finetune.py:976] (2/7) Epoch 8, batch 2250, loss[loss=0.2081, simple_loss=0.2828, pruned_loss=0.0667, over 4924.00 frames. ], tot_loss[loss=0.2101, simple_loss=0.2727, pruned_loss=0.07371, over 953913.51 frames. ], batch size: 41, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:16:27,423 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=42358.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:16:37,936 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8412, 1.2732, 1.8224, 1.7378, 1.5209, 1.5453, 1.6370, 1.6693], device='cuda:2'), covar=tensor([0.4408, 0.5286, 0.4314, 0.4665, 0.5889, 0.4258, 0.5844, 0.3971], device='cuda:2'), in_proj_covar=tensor([0.0232, 0.0242, 0.0255, 0.0255, 0.0246, 0.0223, 0.0273, 0.0227], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:16:43,638 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5537, 1.3531, 1.3091, 1.5719, 1.6035, 1.6219, 0.8661, 1.3326], device='cuda:2'), covar=tensor([0.2178, 0.2181, 0.1813, 0.1631, 0.1752, 0.1147, 0.2846, 0.1894], device='cuda:2'), in_proj_covar=tensor([0.0238, 0.0211, 0.0205, 0.0188, 0.0240, 0.0179, 0.0215, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:16:47,920 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=42379.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:16:59,146 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4381, 1.0216, 0.7295, 1.3764, 1.8850, 0.7364, 1.2561, 1.3557], device='cuda:2'), covar=tensor([0.1495, 0.2179, 0.1869, 0.1206, 0.2102, 0.2190, 0.1520, 0.1894], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0098, 0.0115, 0.0093, 0.0125, 0.0096, 0.0102, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-26 09:17:09,028 INFO [finetune.py:976] (2/7) Epoch 8, batch 2300, 
loss[loss=0.2286, simple_loss=0.2878, pruned_loss=0.08467, over 4774.00 frames. ], tot_loss[loss=0.21, simple_loss=0.2732, pruned_loss=0.07339, over 953249.76 frames. ], batch size: 28, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:17:57,228 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.935e+01 1.473e+02 1.816e+02 2.175e+02 3.275e+02, threshold=3.633e+02, percent-clipped=0.0 2023-03-26 09:18:06,507 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=42440.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:18:09,263 INFO [finetune.py:976] (2/7) Epoch 8, batch 2350, loss[loss=0.1737, simple_loss=0.2383, pruned_loss=0.05459, over 4751.00 frames. ], tot_loss[loss=0.2071, simple_loss=0.27, pruned_loss=0.07203, over 955209.36 frames. ], batch size: 28, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:18:51,871 INFO [finetune.py:976] (2/7) Epoch 8, batch 2400, loss[loss=0.1996, simple_loss=0.2447, pruned_loss=0.07731, over 4829.00 frames. ], tot_loss[loss=0.2043, simple_loss=0.2664, pruned_loss=0.07108, over 954930.57 frames. ], batch size: 25, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:19:02,390 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=42506.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:19:14,093 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0 2023-03-26 09:19:25,681 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.076e+02 1.517e+02 1.798e+02 2.223e+02 5.682e+02, threshold=3.597e+02, percent-clipped=2.0 2023-03-26 09:19:35,401 INFO [finetune.py:976] (2/7) Epoch 8, batch 2450, loss[loss=0.213, simple_loss=0.2778, pruned_loss=0.07407, over 4902.00 frames. ], tot_loss[loss=0.2012, simple_loss=0.2629, pruned_loss=0.06977, over 954255.66 frames. ], batch size: 37, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:19:48,004 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=42562.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:19:51,969 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=42567.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:20:08,910 INFO [finetune.py:976] (2/7) Epoch 8, batch 2500, loss[loss=0.1735, simple_loss=0.2525, pruned_loss=0.0472, over 4801.00 frames. ], tot_loss[loss=0.2018, simple_loss=0.264, pruned_loss=0.06982, over 953255.89 frames. 
], batch size: 45, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:20:16,163 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=42604.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 09:20:16,198 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1981, 1.2892, 1.5861, 1.0903, 1.1170, 1.4243, 1.2682, 1.5323], device='cuda:2'), covar=tensor([0.1355, 0.2247, 0.1422, 0.1702, 0.1193, 0.1382, 0.2915, 0.0932], device='cuda:2'), in_proj_covar=tensor([0.0202, 0.0203, 0.0197, 0.0196, 0.0181, 0.0220, 0.0219, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:20:20,730 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=42610.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:20:26,683 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3695, 2.2574, 1.9236, 1.0004, 2.0806, 1.7895, 1.6173, 2.0696], device='cuda:2'), covar=tensor([0.0966, 0.0765, 0.1523, 0.2034, 0.1459, 0.2415, 0.2520, 0.1050], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0200, 0.0202, 0.0187, 0.0217, 0.0206, 0.0222, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:20:33,719 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.056e+02 1.709e+02 1.979e+02 2.316e+02 5.134e+02, threshold=3.959e+02, percent-clipped=4.0 2023-03-26 09:20:42,882 INFO [finetune.py:976] (2/7) Epoch 8, batch 2550, loss[loss=0.1822, simple_loss=0.25, pruned_loss=0.05714, over 4762.00 frames. ], tot_loss[loss=0.205, simple_loss=0.268, pruned_loss=0.07095, over 952830.33 frames. ], batch size: 54, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:20:46,540 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=42649.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:20:46,615 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-26 09:20:47,633 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6816, 1.2017, 0.8294, 1.6017, 2.0875, 1.0658, 1.4364, 1.4954], device='cuda:2'), covar=tensor([0.1551, 0.2180, 0.1997, 0.1203, 0.1970, 0.2014, 0.1575, 0.2103], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0097, 0.0113, 0.0092, 0.0123, 0.0095, 0.0100, 0.0092], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 09:20:53,417 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=42652.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 09:21:25,355 INFO [finetune.py:976] (2/7) Epoch 8, batch 2600, loss[loss=0.233, simple_loss=0.2898, pruned_loss=0.08809, over 4790.00 frames. ], tot_loss[loss=0.2071, simple_loss=0.2701, pruned_loss=0.07211, over 953793.11 frames. 
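
The `zipformer.py:2441` dumps are diagnostics of the self-attention modules: `attn_weights_entropy` holds, per attention head, the average entropy of that head's attention distribution, while `covar` and the projection covariances track the spread of the associated statistics across heads. Entropy near 0 means a head attends to a single frame; the ceiling is ln(T) for uniform attention over T keys. A sketch of the entropy part; the exact reduction in zipformer.py is an assumption:

```python
import torch

def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
    """Per-head mean entropy of attention weights.
    attn: (num_heads, T_query, T_key), each row a distribution over keys."""
    entropy = -(attn * (attn + 1e-20).log()).sum(dim=-1)  # (heads, T_query)
    return entropy.mean(dim=-1)                           # one value per head

uniform = torch.full((8, 10, 10), 0.1)
print(attn_weights_entropy(uniform))  # all ln(10) = 2.3026, the maximum
```
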
], batch size: 51, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:21:36,647 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=42710.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:21:49,601 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.153e+02 1.771e+02 2.168e+02 2.787e+02 4.495e+02, threshold=4.337e+02, percent-clipped=4.0 2023-03-26 09:21:53,824 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=42735.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:21:59,147 INFO [finetune.py:976] (2/7) Epoch 8, batch 2650, loss[loss=0.2057, simple_loss=0.2646, pruned_loss=0.0734, over 4829.00 frames. ], tot_loss[loss=0.2088, simple_loss=0.272, pruned_loss=0.07275, over 953393.38 frames. ], batch size: 49, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:22:37,460 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5070, 1.8105, 1.4387, 1.4096, 2.0265, 1.8939, 1.6981, 1.7052], device='cuda:2'), covar=tensor([0.0532, 0.0336, 0.0596, 0.0380, 0.0324, 0.0503, 0.0351, 0.0377], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0112, 0.0141, 0.0117, 0.0105, 0.0102, 0.0092, 0.0110], device='cuda:2'), out_proj_covar=tensor([7.0846e-05, 8.8063e-05, 1.1253e-04, 9.1783e-05, 8.2731e-05, 7.5939e-05, 6.9322e-05, 8.5506e-05], device='cuda:2') 2023-03-26 09:22:40,396 INFO [finetune.py:976] (2/7) Epoch 8, batch 2700, loss[loss=0.1693, simple_loss=0.2287, pruned_loss=0.05499, over 4230.00 frames. ], tot_loss[loss=0.2072, simple_loss=0.2704, pruned_loss=0.07206, over 952944.24 frames. ], batch size: 18, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:23:06,262 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1967, 1.9456, 2.7315, 1.5100, 2.3337, 2.3996, 1.8823, 2.6378], device='cuda:2'), covar=tensor([0.1780, 0.2256, 0.1862, 0.2852, 0.1228, 0.1914, 0.2910, 0.1168], device='cuda:2'), in_proj_covar=tensor([0.0202, 0.0205, 0.0197, 0.0196, 0.0182, 0.0221, 0.0219, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:23:27,701 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.159e+02 1.604e+02 1.897e+02 2.218e+02 3.599e+02, threshold=3.793e+02, percent-clipped=0.0 2023-03-26 09:23:28,505 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.92 vs. limit=5.0 2023-03-26 09:23:46,896 INFO [finetune.py:976] (2/7) Epoch 8, batch 2750, loss[loss=0.2122, simple_loss=0.265, pruned_loss=0.07972, over 4760.00 frames. ], tot_loss[loss=0.2055, simple_loss=0.268, pruned_loss=0.07148, over 953792.06 frames. ], batch size: 27, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:23:55,959 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=42859.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:24:02,858 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=42862.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:24:07,906 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.42 vs. limit=2.0 2023-03-26 09:24:14,754 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=42879.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:24:31,338 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.83 vs. limit=5.0 2023-03-26 09:24:31,685 INFO [finetune.py:976] (2/7) Epoch 8, batch 2800, loss[loss=0.1656, simple_loss=0.2302, pruned_loss=0.05054, over 4936.00 frames. 
], tot_loss[loss=0.202, simple_loss=0.264, pruned_loss=0.06995, over 954113.84 frames. ], batch size: 38, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:24:48,126 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-26 09:24:48,564 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=42920.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:24:54,864 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.137e+02 1.632e+02 1.941e+02 2.379e+02 3.960e+02, threshold=3.882e+02, percent-clipped=2.0 2023-03-26 09:25:02,618 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=42940.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:25:04,920 INFO [finetune.py:976] (2/7) Epoch 8, batch 2850, loss[loss=0.2449, simple_loss=0.3126, pruned_loss=0.08864, over 4822.00 frames. ], tot_loss[loss=0.2013, simple_loss=0.2629, pruned_loss=0.0699, over 951882.09 frames. ], batch size: 39, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:25:23,404 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4564, 1.4815, 1.7454, 1.8218, 1.5646, 3.5013, 1.2408, 1.6244], device='cuda:2'), covar=tensor([0.1027, 0.1805, 0.1273, 0.1014, 0.1623, 0.0234, 0.1636, 0.1809], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0081, 0.0076, 0.0078, 0.0092, 0.0083, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 09:25:38,296 INFO [finetune.py:976] (2/7) Epoch 8, batch 2900, loss[loss=0.2518, simple_loss=0.3123, pruned_loss=0.09565, over 4738.00 frames. ], tot_loss[loss=0.2046, simple_loss=0.2665, pruned_loss=0.07131, over 952618.40 frames. ], batch size: 54, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:25:45,629 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=43005.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:26:03,401 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.926e+01 1.731e+02 1.970e+02 2.373e+02 5.777e+02, threshold=3.941e+02, percent-clipped=2.0 2023-03-26 09:26:13,023 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=43035.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:26:18,880 INFO [finetune.py:976] (2/7) Epoch 8, batch 2950, loss[loss=0.2442, simple_loss=0.2996, pruned_loss=0.0944, over 4911.00 frames. ], tot_loss[loss=0.2063, simple_loss=0.2692, pruned_loss=0.07169, over 952110.70 frames. ], batch size: 36, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:26:44,528 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=43083.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:26:52,569 INFO [finetune.py:976] (2/7) Epoch 8, batch 3000, loss[loss=0.2179, simple_loss=0.2832, pruned_loss=0.07634, over 4830.00 frames. ], tot_loss[loss=0.2095, simple_loss=0.2725, pruned_loss=0.07326, over 954126.81 frames. 
], batch size: 30, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:26:52,569 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 09:26:55,812 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7977, 1.6062, 1.4962, 1.8316, 2.1865, 1.7583, 1.2749, 1.5080], device='cuda:2'), covar=tensor([0.2220, 0.2226, 0.1995, 0.1806, 0.1687, 0.1307, 0.2691, 0.1951], device='cuda:2'), in_proj_covar=tensor([0.0238, 0.0210, 0.0206, 0.0188, 0.0241, 0.0180, 0.0215, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:26:56,554 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8945, 1.1865, 1.8948, 1.6922, 1.6006, 1.5227, 1.6267, 1.6990], device='cuda:2'), covar=tensor([0.3984, 0.4909, 0.4238, 0.4889, 0.5910, 0.4474, 0.5818, 0.3871], device='cuda:2'), in_proj_covar=tensor([0.0231, 0.0241, 0.0253, 0.0254, 0.0245, 0.0222, 0.0273, 0.0227], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:27:10,877 INFO [finetune.py:1010] (2/7) Epoch 8, validation: loss=0.16, simple_loss=0.2311, pruned_loss=0.04446, over 2265189.00 frames. 2023-03-26 09:27:10,878 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB 2023-03-26 09:27:49,850 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.022e+02 1.686e+02 2.049e+02 2.426e+02 3.920e+02, threshold=4.099e+02, percent-clipped=0.0 2023-03-26 09:27:52,959 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4027, 2.2696, 1.9030, 2.4179, 2.4089, 2.1102, 2.7155, 2.3603], device='cuda:2'), covar=tensor([0.1416, 0.2439, 0.3505, 0.2638, 0.2418, 0.1638, 0.3030, 0.2065], device='cuda:2'), in_proj_covar=tensor([0.0172, 0.0189, 0.0234, 0.0254, 0.0235, 0.0194, 0.0211, 0.0193], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:27:54,162 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.3481, 2.8289, 2.6724, 1.3228, 2.8121, 2.3550, 2.2647, 2.5866], device='cuda:2'), covar=tensor([0.0884, 0.1154, 0.1700, 0.2518, 0.1807, 0.2054, 0.2023, 0.1285], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0201, 0.0202, 0.0187, 0.0218, 0.0206, 0.0223, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:27:57,675 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3652, 1.4974, 1.4849, 1.6493, 1.5781, 3.1476, 1.3222, 1.5813], device='cuda:2'), covar=tensor([0.1011, 0.1759, 0.1141, 0.0966, 0.1563, 0.0259, 0.1459, 0.1653], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0080, 0.0075, 0.0078, 0.0091, 0.0082, 0.0084, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-26 09:28:00,450 INFO [finetune.py:976] (2/7) Epoch 8, batch 3050, loss[loss=0.2251, simple_loss=0.2861, pruned_loss=0.08205, over 4827.00 frames. ], tot_loss[loss=0.2096, simple_loss=0.2733, pruned_loss=0.07295, over 954814.22 frames. 
], batch size: 47, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:28:13,421 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=43162.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:28:16,467 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.5034, 3.9315, 4.1016, 4.2881, 4.2104, 3.9687, 4.5939, 1.4047], device='cuda:2'), covar=tensor([0.0853, 0.0784, 0.0857, 0.1126, 0.1331, 0.1607, 0.0623, 0.5378], device='cuda:2'), in_proj_covar=tensor([0.0348, 0.0241, 0.0275, 0.0293, 0.0329, 0.0282, 0.0300, 0.0292], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:28:36,035 INFO [finetune.py:976] (2/7) Epoch 8, batch 3100, loss[loss=0.1741, simple_loss=0.2333, pruned_loss=0.05747, over 4850.00 frames. ], tot_loss[loss=0.2075, simple_loss=0.2704, pruned_loss=0.07233, over 954089.06 frames. ], batch size: 49, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:28:52,835 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=43210.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:29:01,135 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=43215.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:29:14,666 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.165e+02 1.652e+02 1.930e+02 2.337e+02 4.149e+02, threshold=3.860e+02, percent-clipped=1.0 2023-03-26 09:29:23,816 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=43235.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:29:34,855 INFO [finetune.py:976] (2/7) Epoch 8, batch 3150, loss[loss=0.2386, simple_loss=0.2959, pruned_loss=0.09064, over 4874.00 frames. ], tot_loss[loss=0.204, simple_loss=0.2662, pruned_loss=0.07091, over 954454.48 frames. ], batch size: 34, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:30:24,892 INFO [finetune.py:976] (2/7) Epoch 8, batch 3200, loss[loss=0.1795, simple_loss=0.2521, pruned_loss=0.0534, over 4758.00 frames. ], tot_loss[loss=0.1998, simple_loss=0.2619, pruned_loss=0.06886, over 955175.46 frames. ], batch size: 27, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:30:32,644 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=43305.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:30:49,477 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.070e+02 1.668e+02 2.087e+02 2.541e+02 1.424e+03, threshold=4.174e+02, percent-clipped=3.0 2023-03-26 09:31:03,653 INFO [finetune.py:976] (2/7) Epoch 8, batch 3250, loss[loss=0.1748, simple_loss=0.2285, pruned_loss=0.0605, over 4224.00 frames. ], tot_loss[loss=0.2013, simple_loss=0.2632, pruned_loss=0.06972, over 954898.24 frames. 
], batch size: 18, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:31:15,389 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=43353.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:31:56,294 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.0435, 2.8145, 2.5369, 1.3922, 2.5571, 2.2059, 2.1546, 2.4297], device='cuda:2'), covar=tensor([0.1037, 0.0892, 0.1625, 0.2415, 0.1939, 0.2326, 0.2043, 0.1205], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0201, 0.0202, 0.0188, 0.0220, 0.0207, 0.0224, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:31:59,779 INFO [finetune.py:976] (2/7) Epoch 8, batch 3300, loss[loss=0.1749, simple_loss=0.2361, pruned_loss=0.05687, over 4443.00 frames. ], tot_loss[loss=0.2028, simple_loss=0.2655, pruned_loss=0.07001, over 953997.79 frames. ], batch size: 19, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:32:32,407 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7526, 1.1521, 0.7777, 1.5472, 1.9227, 1.2953, 1.4155, 1.4909], device='cuda:2'), covar=tensor([0.1568, 0.2288, 0.2194, 0.1278, 0.2258, 0.2151, 0.1535, 0.2179], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0097, 0.0114, 0.0092, 0.0124, 0.0096, 0.0101, 0.0093], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-26 09:32:45,625 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.223e+02 1.691e+02 2.029e+02 2.462e+02 4.055e+02, threshold=4.059e+02, percent-clipped=0.0 2023-03-26 09:33:04,668 INFO [finetune.py:976] (2/7) Epoch 8, batch 3350, loss[loss=0.1623, simple_loss=0.2381, pruned_loss=0.04326, over 4861.00 frames. ], tot_loss[loss=0.2036, simple_loss=0.2672, pruned_loss=0.07003, over 956250.47 frames. ], batch size: 34, lr: 3.83e-03, grad_scale: 32.0 2023-03-26 09:33:51,119 INFO [finetune.py:976] (2/7) Epoch 8, batch 3400, loss[loss=0.1771, simple_loss=0.2501, pruned_loss=0.05203, over 4846.00 frames. ], tot_loss[loss=0.2058, simple_loss=0.2692, pruned_loss=0.07124, over 955479.05 frames. ], batch size: 49, lr: 3.83e-03, grad_scale: 32.0 2023-03-26 09:33:58,541 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6747, 1.3759, 1.0537, 0.2506, 1.2333, 1.4316, 1.3918, 1.4694], device='cuda:2'), covar=tensor([0.0897, 0.0947, 0.1387, 0.2043, 0.1503, 0.2455, 0.2336, 0.0857], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0201, 0.0203, 0.0189, 0.0219, 0.0208, 0.0224, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:34:04,475 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=43514.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:34:05,086 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=43515.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:34:15,395 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.089e+02 1.653e+02 1.866e+02 2.233e+02 4.638e+02, threshold=3.733e+02, percent-clipped=1.0 2023-03-26 09:34:19,626 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=43535.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:34:25,008 INFO [finetune.py:976] (2/7) Epoch 8, batch 3450, loss[loss=0.1776, simple_loss=0.2462, pruned_loss=0.05451, over 4722.00 frames. 
], tot_loss[loss=0.2052, simple_loss=0.2695, pruned_loss=0.07051, over 958322.73 frames. ], batch size: 23, lr: 3.83e-03, grad_scale: 32.0 2023-03-26 09:34:43,306 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=43563.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:34:55,896 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=43575.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:35:02,197 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=43583.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:35:06,596 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9370, 1.8299, 1.3756, 1.8000, 1.9299, 1.6533, 2.5273, 1.8745], device='cuda:2'), covar=tensor([0.1537, 0.2456, 0.3795, 0.3104, 0.2939, 0.1772, 0.2424, 0.2298], device='cuda:2'), in_proj_covar=tensor([0.0171, 0.0188, 0.0233, 0.0253, 0.0235, 0.0193, 0.0210, 0.0192], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:35:08,909 INFO [finetune.py:976] (2/7) Epoch 8, batch 3500, loss[loss=0.2212, simple_loss=0.2632, pruned_loss=0.08967, over 4896.00 frames. ], tot_loss[loss=0.2036, simple_loss=0.2667, pruned_loss=0.07025, over 956018.26 frames. ], batch size: 35, lr: 3.83e-03, grad_scale: 32.0 2023-03-26 09:35:16,434 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8799, 1.2883, 1.8480, 1.8507, 1.5892, 1.5609, 1.7059, 1.6623], device='cuda:2'), covar=tensor([0.4126, 0.5284, 0.4265, 0.4528, 0.5826, 0.4489, 0.5621, 0.4192], device='cuda:2'), in_proj_covar=tensor([0.0233, 0.0242, 0.0255, 0.0255, 0.0247, 0.0224, 0.0274, 0.0228], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 09:35:34,552 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.199e+02 1.655e+02 2.028e+02 2.395e+02 4.370e+02, threshold=4.057e+02, percent-clipped=4.0 2023-03-26 09:35:44,695 INFO [finetune.py:976] (2/7) Epoch 8, batch 3550, loss[loss=0.1968, simple_loss=0.247, pruned_loss=0.07331, over 4898.00 frames. ], tot_loss[loss=0.2013, simple_loss=0.2639, pruned_loss=0.06935, over 956434.37 frames. ], batch size: 36, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:35:56,477 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6223, 1.5258, 1.4805, 1.5794, 1.0672, 3.2322, 1.2620, 1.8263], device='cuda:2'), covar=tensor([0.2965, 0.2235, 0.2011, 0.2183, 0.1863, 0.0215, 0.2462, 0.1193], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0115, 0.0119, 0.0122, 0.0116, 0.0097, 0.0100, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 09:36:16,318 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3784, 2.9266, 2.8256, 1.3267, 3.0690, 2.2888, 0.8271, 2.0264], device='cuda:2'), covar=tensor([0.2335, 0.2623, 0.1716, 0.3559, 0.1354, 0.1138, 0.4097, 0.1706], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0172, 0.0160, 0.0129, 0.0155, 0.0122, 0.0146, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 09:36:18,004 INFO [finetune.py:976] (2/7) Epoch 8, batch 3600, loss[loss=0.1641, simple_loss=0.2345, pruned_loss=0.04684, over 4774.00 frames. ], tot_loss[loss=0.1965, simple_loss=0.2592, pruned_loss=0.06693, over 955716.22 frames. 
], batch size: 27, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:36:40,280 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.156e+02 1.641e+02 1.836e+02 2.142e+02 3.900e+02, threshold=3.673e+02, percent-clipped=0.0 2023-03-26 09:37:03,251 INFO [finetune.py:976] (2/7) Epoch 8, batch 3650, loss[loss=0.2643, simple_loss=0.3281, pruned_loss=0.1003, over 4265.00 frames. ], tot_loss[loss=0.1995, simple_loss=0.2624, pruned_loss=0.06826, over 955076.98 frames. ], batch size: 65, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:37:34,359 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0 2023-03-26 09:37:41,700 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=43787.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:37:53,053 INFO [finetune.py:976] (2/7) Epoch 8, batch 3700, loss[loss=0.2074, simple_loss=0.2798, pruned_loss=0.06751, over 4810.00 frames. ], tot_loss[loss=0.2032, simple_loss=0.2669, pruned_loss=0.0698, over 954463.83 frames. ], batch size: 39, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:38:37,141 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.066e+01 1.637e+02 2.055e+02 2.501e+02 4.825e+02, threshold=4.110e+02, percent-clipped=4.0 2023-03-26 09:38:56,858 INFO [finetune.py:976] (2/7) Epoch 8, batch 3750, loss[loss=0.2644, simple_loss=0.3068, pruned_loss=0.111, over 4807.00 frames. ], tot_loss[loss=0.2052, simple_loss=0.2692, pruned_loss=0.07062, over 955745.87 frames. ], batch size: 38, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:39:00,430 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=43848.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:39:13,801 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=43870.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:39:30,692 INFO [finetune.py:976] (2/7) Epoch 8, batch 3800, loss[loss=0.1788, simple_loss=0.2497, pruned_loss=0.05396, over 4844.00 frames. ], tot_loss[loss=0.2067, simple_loss=0.271, pruned_loss=0.07116, over 958037.21 frames. 
2023-03-26 09:39:40,906 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=43909.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:39:51,926 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7506, 1.2910, 0.9081, 1.5321, 2.1400, 1.0516, 1.4712, 1.6635], device='cuda:2'), covar=tensor([0.1473, 0.2059, 0.1959, 0.1239, 0.1892, 0.2006, 0.1459, 0.1970], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0097, 0.0114, 0.0091, 0.0123, 0.0095, 0.0100, 0.0092], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 09:39:52,552 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2125, 2.2268, 2.2456, 1.4747, 2.3388, 2.3554, 2.1889, 1.8913], device='cuda:2'), covar=tensor([0.0509, 0.0570, 0.0647, 0.0887, 0.0456, 0.0591, 0.0599, 0.1124], device='cuda:2'), in_proj_covar=tensor([0.0137, 0.0133, 0.0145, 0.0126, 0.0115, 0.0144, 0.0145, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 09:40:01,524 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.167e+02 1.597e+02 1.980e+02 2.453e+02 5.062e+02, threshold=3.959e+02, percent-clipped=3.0
2023-03-26 09:40:16,145 INFO [finetune.py:976] (2/7) Epoch 8, batch 3850, loss[loss=0.1602, simple_loss=0.2257, pruned_loss=0.04732, over 4805.00 frames. ], tot_loss[loss=0.2041, simple_loss=0.2689, pruned_loss=0.06964, over 958876.81 frames. ], batch size: 25, lr: 3.82e-03, grad_scale: 32.0
2023-03-26 09:40:26,752 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=43959.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:40:33,735 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=43970.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:40:54,207 INFO [finetune.py:976] (2/7) Epoch 8, batch 3900, loss[loss=0.1765, simple_loss=0.2445, pruned_loss=0.0542, over 4812.00 frames. ], tot_loss[loss=0.202, simple_loss=0.2658, pruned_loss=0.0691, over 956842.20 frames. ], batch size: 51, lr: 3.82e-03, grad_scale: 32.0
2023-03-26 09:41:17,077 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=44020.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:41:20,145 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0
2023-03-26 09:41:22,369 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.038e+02 1.530e+02 1.807e+02 2.225e+02 3.627e+02, threshold=3.614e+02, percent-clipped=0.0
2023-03-26 09:41:32,527 INFO [finetune.py:976] (2/7) Epoch 8, batch 3950, loss[loss=0.1597, simple_loss=0.2158, pruned_loss=0.05178, over 4750.00 frames. ], tot_loss[loss=0.1987, simple_loss=0.2618, pruned_loss=0.06787, over 957079.27 frames. ], batch size: 23, lr: 3.82e-03, grad_scale: 32.0
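[Editor's note] The zipformer.py:1188 entries track stochastic layer skipping: each encoder stack has its own warmup window in batches (warmup_begin/warmup_end), and on most batches nothing is dropped (num_to_drop=0, layers_to_drop=set()), while occasionally a single random layer index is skipped (e.g. layers_to_drop={2} later in this log). A rough sketch of that mechanism under assumed behavior; the real Zipformer code derives the drop probability from its warmup schedule rather than using a constant:

```python
import random

def choose_layers_to_drop(batch_count: float, warmup_begin: float,
                          num_layers: int, drop_prob: float = 0.05) -> set:
    """Pick a (usually empty) set of encoder-layer indices to skip
    on this batch, matching the num_to_drop/layers_to_drop fields above."""
    if batch_count < warmup_begin:
        return set()  # no dropping before the stack's warmup window opens
    # Assumed: a small probability of dropping exactly one random layer.
    if random.random() < drop_prob:
        return {random.randrange(num_layers)}
    return set()
```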
2023-03-26 09:41:53,397 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5808, 1.5535, 2.0152, 1.3117, 1.7298, 1.8492, 1.4933, 2.0607], device='cuda:2'), covar=tensor([0.1414, 0.2172, 0.1289, 0.1880, 0.0910, 0.1389, 0.3203, 0.0956], device='cuda:2'), in_proj_covar=tensor([0.0203, 0.0205, 0.0199, 0.0197, 0.0181, 0.0221, 0.0221, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 09:42:06,486 INFO [finetune.py:976] (2/7) Epoch 8, batch 4000, loss[loss=0.218, simple_loss=0.2738, pruned_loss=0.08106, over 4906.00 frames. ], tot_loss[loss=0.1979, simple_loss=0.2605, pruned_loss=0.06769, over 957590.83 frames. ], batch size: 32, lr: 3.82e-03, grad_scale: 32.0
2023-03-26 09:42:06,600 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8923, 1.8697, 1.9396, 1.2982, 1.9584, 1.9891, 1.9365, 1.5725], device='cuda:2'), covar=tensor([0.0691, 0.0655, 0.0677, 0.0922, 0.0624, 0.0704, 0.0638, 0.1274], device='cuda:2'), in_proj_covar=tensor([0.0137, 0.0134, 0.0145, 0.0126, 0.0116, 0.0145, 0.0145, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 09:42:37,513 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.105e+02 1.793e+02 2.145e+02 2.588e+02 4.712e+02, threshold=4.291e+02, percent-clipped=10.0
2023-03-26 09:42:56,500 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=44143.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:42:57,048 INFO [finetune.py:976] (2/7) Epoch 8, batch 4050, loss[loss=0.1889, simple_loss=0.2621, pruned_loss=0.05787, over 4901.00 frames. ], tot_loss[loss=0.2028, simple_loss=0.2655, pruned_loss=0.07, over 956832.59 frames. ], batch size: 43, lr: 3.82e-03, grad_scale: 32.0
2023-03-26 09:43:31,070 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=44170.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:43:49,921 INFO [finetune.py:976] (2/7) Epoch 8, batch 4100, loss[loss=0.2515, simple_loss=0.3198, pruned_loss=0.09163, over 4743.00 frames. ], tot_loss[loss=0.2059, simple_loss=0.269, pruned_loss=0.07137, over 955567.81 frames. ], batch size: 54, lr: 3.82e-03, grad_scale: 32.0
2023-03-26 09:43:55,531 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5868, 1.2718, 0.8688, 1.5056, 2.0009, 1.2461, 1.4327, 1.5901], device='cuda:2'), covar=tensor([0.1570, 0.2138, 0.2071, 0.1268, 0.2069, 0.2146, 0.1537, 0.2010], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0097, 0.0114, 0.0092, 0.0123, 0.0096, 0.0100, 0.0092], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 09:44:04,060 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0
2023-03-26 09:44:15,791 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=44218.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:44:22,432 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.086e+02 1.696e+02 1.895e+02 2.360e+02 4.949e+02, threshold=3.791e+02, percent-clipped=1.0
2023-03-26 09:44:31,518 INFO [finetune.py:976] (2/7) Epoch 8, batch 4150, loss[loss=0.2185, simple_loss=0.2671, pruned_loss=0.08492, over 4771.00 frames. ], tot_loss[loss=0.2074, simple_loss=0.2704, pruned_loss=0.07215, over 953914.17 frames. ], batch size: 26, lr: 3.82e-03, grad_scale: 32.0
2023-03-26 09:44:46,739 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=44265.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:45:07,272 INFO [finetune.py:976] (2/7) Epoch 8, batch 4200, loss[loss=0.1846, simple_loss=0.2512, pruned_loss=0.05903, over 4777.00 frames. ], tot_loss[loss=0.2069, simple_loss=0.2706, pruned_loss=0.07163, over 952727.26 frames. ], batch size: 51, lr: 3.82e-03, grad_scale: 32.0
2023-03-26 09:45:30,592 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=44315.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:45:40,452 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.242e+02 1.746e+02 1.975e+02 2.337e+02 5.106e+02, threshold=3.951e+02, percent-clipped=1.0
2023-03-26 09:45:54,906 INFO [finetune.py:976] (2/7) Epoch 8, batch 4250, loss[loss=0.1862, simple_loss=0.2553, pruned_loss=0.05857, over 4789.00 frames. ], tot_loss[loss=0.2051, simple_loss=0.2683, pruned_loss=0.07095, over 953114.04 frames. ], batch size: 29, lr: 3.82e-03, grad_scale: 32.0
2023-03-26 09:46:10,153 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.9058, 3.3560, 3.5378, 3.6688, 3.6933, 3.4940, 3.9523, 1.5481], device='cuda:2'), covar=tensor([0.0719, 0.0798, 0.0747, 0.0975, 0.1096, 0.1298, 0.0711, 0.4517], device='cuda:2'), in_proj_covar=tensor([0.0349, 0.0241, 0.0275, 0.0294, 0.0332, 0.0283, 0.0302, 0.0295], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 09:46:32,565 INFO [finetune.py:976] (2/7) Epoch 8, batch 4300, loss[loss=0.2185, simple_loss=0.2804, pruned_loss=0.07834, over 4916.00 frames. ], tot_loss[loss=0.2029, simple_loss=0.2653, pruned_loss=0.07023, over 951900.84 frames. ], batch size: 32, lr: 3.82e-03, grad_scale: 32.0
2023-03-26 09:46:51,783 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0
2023-03-26 09:46:56,827 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.088e+02 1.550e+02 1.851e+02 2.365e+02 4.860e+02, threshold=3.701e+02, percent-clipped=2.0
2023-03-26 09:47:05,847 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=44443.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:47:06,355 INFO [finetune.py:976] (2/7) Epoch 8, batch 4350, loss[loss=0.1851, simple_loss=0.2376, pruned_loss=0.06629, over 4893.00 frames. ], tot_loss[loss=0.1982, simple_loss=0.2608, pruned_loss=0.06776, over 954027.22 frames. ], batch size: 32, lr: 3.82e-03, grad_scale: 32.0
2023-03-26 09:47:30,228 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.41 vs. limit=2.0
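[Editor's note] The scaling.py:679 entries compare a "whitening" metric against a limit (2.0 for the 96-channel groups here; 5.0 for the 384-channel, single-group case later in the log). One plausible reading, sketched below under the assumption that the metric measures how far the channel covariance within each group is from a multiple of the identity; a perfectly whitened group would score 1.0, and values below the limit need no correction:

```python
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    """x: (num_frames, num_channels). Split channels into groups, estimate
    each group's covariance, and compare the mean squared eigenvalue with
    the squared mean eigenvalue. The ratio is 1.0 iff the covariance is a
    multiple of the identity (fully 'white'); larger means less white."""
    n, c = x.shape
    xg = x.reshape(n, num_groups, c // num_groups).transpose(0, 1)  # (g, n, c/g)
    cov = torch.matmul(xg.transpose(1, 2), xg) / n                   # (g, c/g, c/g)
    eigs = torch.linalg.eigvalsh(cov)                                # per-group spectra
    return ((eigs ** 2).mean(dim=1) / eigs.mean(dim=1) ** 2).mean()
```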
2023-03-26 09:47:35,937 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7370, 1.5265, 1.5122, 1.5778, 1.8525, 1.8234, 1.6914, 1.3756], device='cuda:2'), covar=tensor([0.0304, 0.0295, 0.0505, 0.0287, 0.0241, 0.0477, 0.0269, 0.0454], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0110, 0.0138, 0.0115, 0.0102, 0.0100, 0.0091, 0.0108], device='cuda:2'), out_proj_covar=tensor([6.9482e-05, 8.5897e-05, 1.1046e-04, 9.0210e-05, 8.0425e-05, 7.4415e-05, 6.8718e-05, 8.3791e-05], device='cuda:2')
2023-03-26 09:47:37,687 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=44491.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:47:39,432 INFO [finetune.py:976] (2/7) Epoch 8, batch 4400, loss[loss=0.1617, simple_loss=0.2141, pruned_loss=0.0546, over 3263.00 frames. ], tot_loss[loss=0.1988, simple_loss=0.261, pruned_loss=0.06826, over 951297.31 frames. ], batch size: 14, lr: 3.82e-03, grad_scale: 32.0
2023-03-26 09:48:05,591 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0
2023-03-26 09:48:17,047 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=44527.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:48:18,612 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.261e+02 1.746e+02 1.995e+02 2.515e+02 6.158e+02, threshold=3.991e+02, percent-clipped=2.0
2023-03-26 09:48:25,208 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9170, 1.2309, 1.8804, 1.8237, 1.6196, 1.6145, 1.7352, 1.6830], device='cuda:2'), covar=tensor([0.4168, 0.5529, 0.4385, 0.4961, 0.6239, 0.4338, 0.6206, 0.4308], device='cuda:2'), in_proj_covar=tensor([0.0231, 0.0241, 0.0253, 0.0254, 0.0247, 0.0223, 0.0273, 0.0228], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 09:48:28,720 INFO [finetune.py:976] (2/7) Epoch 8, batch 4450, loss[loss=0.2413, simple_loss=0.293, pruned_loss=0.09481, over 4230.00 frames. ], tot_loss[loss=0.2035, simple_loss=0.2662, pruned_loss=0.07043, over 951597.91 frames. ], batch size: 65, lr: 3.82e-03, grad_scale: 32.0
2023-03-26 09:48:37,007 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=44550.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:48:51,563 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=44565.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:49:19,339 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=44588.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:49:28,022 INFO [finetune.py:976] (2/7) Epoch 8, batch 4500, loss[loss=0.2423, simple_loss=0.3082, pruned_loss=0.08821, over 4801.00 frames. ], tot_loss[loss=0.2074, simple_loss=0.2703, pruned_loss=0.07226, over 951262.89 frames. ], batch size: 45, lr: 3.82e-03, grad_scale: 32.0
2023-03-26 09:49:47,681 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=44611.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:49:48,881 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=44613.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:49:50,685 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=44615.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:50:00,000 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.305e+02 1.720e+02 2.026e+02 2.465e+02 5.780e+02, threshold=4.053e+02, percent-clipped=2.0
2023-03-26 09:50:10,541 INFO [finetune.py:976] (2/7) Epoch 8, batch 4550, loss[loss=0.2155, simple_loss=0.2911, pruned_loss=0.06995, over 4725.00 frames. ], tot_loss[loss=0.2084, simple_loss=0.2716, pruned_loss=0.07261, over 952890.10 frames. ], batch size: 59, lr: 3.82e-03, grad_scale: 32.0
2023-03-26 09:50:27,737 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=44663.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:50:52,807 INFO [finetune.py:976] (2/7) Epoch 8, batch 4600, loss[loss=0.1792, simple_loss=0.249, pruned_loss=0.05477, over 4848.00 frames. ], tot_loss[loss=0.2072, simple_loss=0.2707, pruned_loss=0.07186, over 953228.95 frames. ], batch size: 49, lr: 3.82e-03, grad_scale: 32.0
2023-03-26 09:51:15,457 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.778e+01 1.566e+02 1.839e+02 2.114e+02 3.234e+02, threshold=3.678e+02, percent-clipped=0.0
2023-03-26 09:51:25,982 INFO [finetune.py:976] (2/7) Epoch 8, batch 4650, loss[loss=0.213, simple_loss=0.2741, pruned_loss=0.07598, over 4774.00 frames. ], tot_loss[loss=0.2043, simple_loss=0.2674, pruned_loss=0.0706, over 954574.61 frames. ], batch size: 54, lr: 3.82e-03, grad_scale: 32.0
2023-03-26 09:51:35,492 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6546, 3.3774, 3.2496, 1.6093, 3.5167, 2.4646, 0.7306, 2.2364], device='cuda:2'), covar=tensor([0.2479, 0.1797, 0.1609, 0.3419, 0.1039, 0.1144, 0.4622, 0.1711], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0171, 0.0160, 0.0128, 0.0156, 0.0122, 0.0146, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 09:51:47,955 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0469, 1.8551, 1.6539, 1.8857, 1.8342, 1.7777, 1.8541, 2.5566], device='cuda:2'), covar=tensor([0.5035, 0.6176, 0.4128, 0.5580, 0.5306, 0.3005, 0.5282, 0.1950], device='cuda:2'), in_proj_covar=tensor([0.0284, 0.0258, 0.0220, 0.0278, 0.0240, 0.0205, 0.0243, 0.0206], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 09:51:59,446 INFO [finetune.py:976] (2/7) Epoch 8, batch 4700, loss[loss=0.1743, simple_loss=0.2428, pruned_loss=0.05295, over 4906.00 frames. ], tot_loss[loss=0.2015, simple_loss=0.2645, pruned_loss=0.06925, over 954946.62 frames. ], batch size: 32, lr: 3.82e-03, grad_scale: 16.0
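[Editor's note] The zipformer.py:2441 dumps print one entropy per attention head (plus covariance summaries of the input/output projections) as a health check: entropy near zero means a head collapses onto a single frame, large values mean near-uniform attention. A minimal sketch of the entropy part, assuming attention weights of shape (num_heads, num_queries, num_keys) with rows summing to one:

```python
import torch

def attn_weights_entropy(attn: torch.Tensor, eps: float = 1e-20) -> torch.Tensor:
    """attn: (num_heads, num_queries, num_keys), a softmax output.
    Returns one mean entropy per head, comparable to the eight values
    printed in each attn_weights_entropy line above."""
    ent = -(attn * (attn + eps).log()).sum(dim=-1)  # (heads, queries)
    return ent.mean(dim=-1)                         # (heads,)
```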
2023-03-26 09:52:22,741 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.499e+02 1.877e+02 2.319e+02 4.193e+02, threshold=3.754e+02, percent-clipped=1.0
2023-03-26 09:52:24,546 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2897, 2.1916, 1.6833, 0.7669, 1.7739, 1.8398, 1.6618, 1.9051], device='cuda:2'), covar=tensor([0.1095, 0.0739, 0.1471, 0.2035, 0.1519, 0.2260, 0.2188, 0.0887], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0201, 0.0201, 0.0189, 0.0218, 0.0206, 0.0222, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 09:52:32,231 INFO [finetune.py:976] (2/7) Epoch 8, batch 4750, loss[loss=0.1814, simple_loss=0.2424, pruned_loss=0.06017, over 4837.00 frames. ], tot_loss[loss=0.1992, simple_loss=0.2618, pruned_loss=0.06835, over 955505.98 frames. ], batch size: 30, lr: 3.82e-03, grad_scale: 16.0
2023-03-26 09:52:58,204 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=44883.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:53:05,273 INFO [finetune.py:976] (2/7) Epoch 8, batch 4800, loss[loss=0.2071, simple_loss=0.2719, pruned_loss=0.07121, over 4906.00 frames. ], tot_loss[loss=0.2029, simple_loss=0.2652, pruned_loss=0.07034, over 953689.35 frames. ], batch size: 32, lr: 3.82e-03, grad_scale: 16.0
2023-03-26 09:53:15,867 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=44906.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:53:15,948 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8157, 1.0273, 1.7832, 1.6582, 1.4765, 1.4618, 1.5114, 1.6691], device='cuda:2'), covar=tensor([0.3999, 0.4898, 0.3736, 0.4328, 0.5464, 0.4159, 0.5482, 0.3758], device='cuda:2'), in_proj_covar=tensor([0.0233, 0.0242, 0.0254, 0.0256, 0.0248, 0.0224, 0.0275, 0.0229], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 09:53:41,287 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.083e+02 1.589e+02 1.932e+02 2.383e+02 4.430e+02, threshold=3.864e+02, percent-clipped=2.0
2023-03-26 09:53:55,404 INFO [finetune.py:976] (2/7) Epoch 8, batch 4850, loss[loss=0.213, simple_loss=0.2836, pruned_loss=0.07116, over 4906.00 frames. ], tot_loss[loss=0.2075, simple_loss=0.2697, pruned_loss=0.07259, over 951560.71 frames. ], batch size: 43, lr: 3.82e-03, grad_scale: 16.0
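[Editor's note] Note the grad_scale field dropping from 32.0 to 16.0 around batch 4700 and later climbing back (it reads 32.0 again near epoch 9, batch 950). That pattern is consistent with dynamic loss scaling under mixed precision: the scale is halved when gradients overflow and grown again after a long overflow-free stretch. A hedged sketch using PyTorch's stock GradScaler, which behaves this way; whether this run uses the stock scaler or a custom one is an assumption:

```python
import torch
from torch.cuda.amp import GradScaler, autocast

scaler = GradScaler(init_scale=32.0, growth_interval=2000)

def training_step(model, batch, optimizer, compute_loss):
    optimizer.zero_grad()
    with autocast():
        loss = compute_loss(model, batch)
    scaler.scale(loss).backward()
    scaler.step(optimizer)   # skipped internally if gradients overflowed
    scaler.update()          # halves the scale on overflow, grows it otherwise
    return loss.detach()     # log scaler.get_scale() as the grad_scale field
```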
2023-03-26 09:54:24,941 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9449, 1.7860, 1.4913, 1.7132, 1.6974, 1.6273, 1.7620, 2.4269], device='cuda:2'), covar=tensor([0.4971, 0.5434, 0.4049, 0.4739, 0.4541, 0.3041, 0.4794, 0.1979], device='cuda:2'), in_proj_covar=tensor([0.0286, 0.0259, 0.0221, 0.0279, 0.0240, 0.0206, 0.0244, 0.0207], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 09:54:43,236 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4867, 1.2887, 1.2422, 1.4992, 1.5614, 1.5327, 0.8916, 1.2837], device='cuda:2'), covar=tensor([0.2283, 0.2246, 0.1966, 0.1736, 0.1792, 0.1240, 0.2809, 0.1944], device='cuda:2'), in_proj_covar=tensor([0.0236, 0.0208, 0.0205, 0.0187, 0.0240, 0.0177, 0.0214, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 09:54:49,816 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1139, 1.7905, 1.3697, 0.5738, 1.5752, 1.6437, 1.3699, 1.6228], device='cuda:2'), covar=tensor([0.0991, 0.1015, 0.1627, 0.2200, 0.1679, 0.3003, 0.2802, 0.1100], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0202, 0.0202, 0.0189, 0.0217, 0.0206, 0.0222, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 09:54:57,803 INFO [finetune.py:976] (2/7) Epoch 8, batch 4900, loss[loss=0.2661, simple_loss=0.3241, pruned_loss=0.104, over 4850.00 frames. ], tot_loss[loss=0.209, simple_loss=0.2714, pruned_loss=0.07335, over 953210.32 frames. ], batch size: 31, lr: 3.82e-03, grad_scale: 16.0
2023-03-26 09:55:14,274 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0
2023-03-26 09:55:25,592 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.182e+02 1.694e+02 2.008e+02 2.325e+02 4.035e+02, threshold=4.016e+02, percent-clipped=1.0
2023-03-26 09:55:35,208 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.35 vs. limit=5.0
2023-03-26 09:55:44,367 INFO [finetune.py:976] (2/7) Epoch 8, batch 4950, loss[loss=0.2184, simple_loss=0.2869, pruned_loss=0.075, over 4879.00 frames. ], tot_loss[loss=0.2092, simple_loss=0.2723, pruned_loss=0.07302, over 952392.51 frames. ], batch size: 32, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 09:55:57,048 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=45056.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:56:09,634 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.53 vs. limit=5.0
2023-03-26 09:56:13,218 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=45081.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:56:17,535 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.61 vs. limit=5.0
2023-03-26 09:56:21,541 INFO [finetune.py:976] (2/7) Epoch 8, batch 5000, loss[loss=0.2126, simple_loss=0.2671, pruned_loss=0.07903, over 4891.00 frames. ], tot_loss[loss=0.2065, simple_loss=0.2698, pruned_loss=0.07162, over 953652.75 frames. ], batch size: 32, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 09:56:37,083 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=45117.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 09:56:45,212 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.085e+02 1.621e+02 1.919e+02 2.483e+02 3.797e+02, threshold=3.837e+02, percent-clipped=0.0
2023-03-26 09:56:52,682 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=45142.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:56:53,771 INFO [finetune.py:976] (2/7) Epoch 8, batch 5050, loss[loss=0.1763, simple_loss=0.2404, pruned_loss=0.05614, over 4919.00 frames. ], tot_loss[loss=0.204, simple_loss=0.2666, pruned_loss=0.07072, over 952092.12 frames. ], batch size: 36, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 09:57:19,959 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=45183.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:57:26,575 INFO [finetune.py:976] (2/7) Epoch 8, batch 5100, loss[loss=0.1897, simple_loss=0.2561, pruned_loss=0.06162, over 4923.00 frames. ], tot_loss[loss=0.2008, simple_loss=0.2632, pruned_loss=0.0692, over 952151.26 frames. ], batch size: 43, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 09:57:34,989 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=45206.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:57:55,113 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.141e+02 1.630e+02 1.903e+02 2.262e+02 3.588e+02, threshold=3.806e+02, percent-clipped=0.0
2023-03-26 09:57:55,793 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=45231.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:58:04,091 INFO [finetune.py:976] (2/7) Epoch 8, batch 5150, loss[loss=0.1965, simple_loss=0.2583, pruned_loss=0.06736, over 4690.00 frames. ], tot_loss[loss=0.201, simple_loss=0.2631, pruned_loss=0.06945, over 951731.84 frames. ], batch size: 23, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 09:58:11,269 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=45254.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 09:58:38,099 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.00 vs. limit=5.0
2023-03-26 09:58:40,358 INFO [finetune.py:976] (2/7) Epoch 8, batch 5200, loss[loss=0.2182, simple_loss=0.2929, pruned_loss=0.07172, over 4833.00 frames. ], tot_loss[loss=0.2045, simple_loss=0.2667, pruned_loss=0.07112, over 951505.63 frames. ], batch size: 49, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 09:59:02,516 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.6077, 1.6463, 1.6948, 0.9091, 1.8013, 1.9491, 1.9195, 1.4362], device='cuda:2'), covar=tensor([0.0916, 0.0635, 0.0500, 0.0596, 0.0376, 0.0618, 0.0299, 0.0678], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0158, 0.0122, 0.0137, 0.0132, 0.0125, 0.0146, 0.0149], device='cuda:2'), out_proj_covar=tensor([9.6632e-05, 1.1598e-04, 8.7763e-05, 9.9523e-05, 9.4799e-05, 9.1796e-05, 1.0737e-04, 1.0952e-04], device='cuda:2')
2023-03-26 09:59:09,668 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.225e+02 1.659e+02 1.911e+02 2.326e+02 4.760e+02, threshold=3.822e+02, percent-clipped=1.0
2023-03-26 09:59:10,648 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.55 vs. limit=5.0
2023-03-26 09:59:18,275 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5129, 1.2293, 1.7702, 1.8409, 1.4942, 3.3597, 1.1427, 1.5032], device='cuda:2'), covar=tensor([0.1164, 0.2330, 0.1385, 0.1238, 0.1997, 0.0327, 0.2102, 0.2215], device='cuda:2'), in_proj_covar=tensor([0.0077, 0.0082, 0.0076, 0.0079, 0.0093, 0.0083, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 09:59:18,770 INFO [finetune.py:976] (2/7) Epoch 8, batch 5250, loss[loss=0.2071, simple_loss=0.2778, pruned_loss=0.0682, over 4819.00 frames. ], tot_loss[loss=0.205, simple_loss=0.268, pruned_loss=0.07106, over 951018.49 frames. ], batch size: 39, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 09:59:27,655 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=45348.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 10:00:03,243 INFO [finetune.py:976] (2/7) Epoch 8, batch 5300, loss[loss=0.2056, simple_loss=0.2835, pruned_loss=0.06386, over 4846.00 frames. ], tot_loss[loss=0.2052, simple_loss=0.2682, pruned_loss=0.07108, over 950057.45 frames. ], batch size: 44, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:00:16,213 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=45409.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 10:00:18,470 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=45412.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 10:00:30,691 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.820e+01 1.516e+02 1.858e+02 2.399e+02 4.469e+02, threshold=3.716e+02, percent-clipped=2.0
2023-03-26 10:00:38,260 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0
2023-03-26 10:00:39,968 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=45437.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:00:49,408 INFO [finetune.py:976] (2/7) Epoch 8, batch 5350, loss[loss=0.1642, simple_loss=0.2445, pruned_loss=0.04188, over 4329.00 frames. ], tot_loss[loss=0.2049, simple_loss=0.2685, pruned_loss=0.07071, over 950618.16 frames. ], batch size: 66, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:01:54,083 INFO [finetune.py:976] (2/7) Epoch 8, batch 5400, loss[loss=0.205, simple_loss=0.2591, pruned_loss=0.07549, over 4767.00 frames. ], tot_loss[loss=0.2051, simple_loss=0.2678, pruned_loss=0.07115, over 952036.54 frames. ], batch size: 28, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:02:13,529 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0
2023-03-26 10:02:26,571 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.2997, 4.5834, 4.7674, 5.0657, 5.0455, 4.7738, 5.3730, 1.5153], device='cuda:2'), covar=tensor([0.0649, 0.0767, 0.0684, 0.0769, 0.1123, 0.1194, 0.0476, 0.5390], device='cuda:2'), in_proj_covar=tensor([0.0349, 0.0242, 0.0277, 0.0294, 0.0332, 0.0282, 0.0302, 0.0295], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 10:02:34,313 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.106e+02 1.640e+02 2.019e+02 2.439e+02 4.086e+02, threshold=4.037e+02, percent-clipped=1.0
2023-03-26 10:02:54,208 INFO [finetune.py:976] (2/7) Epoch 8, batch 5450, loss[loss=0.198, simple_loss=0.2681, pruned_loss=0.06392, over 4903.00 frames. ], tot_loss[loss=0.2016, simple_loss=0.2644, pruned_loss=0.06945, over 952714.57 frames. ], batch size: 32, lr: 3.81e-03, grad_scale: 16.0
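[Editor's note] The tot_loss[...] figure next to each batch loss is a frame-weighted running aggregate over roughly the last ~950k frames, which is why it moves slowly while the per-batch loss[...] values jump around. A sketch of one way to maintain such a tracker; the exponential-forgetting form below is an assumption, not a copy of the recipe's own metrics code:

```python
class RunningLoss:
    """Frame-weighted running average with exponential forgetting,
    producing slowly-moving values like the tot_loss[...] entries."""

    def __init__(self, decay: float = 0.995):
        self.decay = decay
        self.loss_sum = 0.0
        self.frames = 0.0

    def update(self, batch_loss: float, batch_frames: float) -> float:
        self.loss_sum = self.loss_sum * self.decay + batch_loss * batch_frames
        self.frames = self.frames * self.decay + batch_frames
        return self.loss_sum / self.frames  # the reported tot_loss
```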
2023-03-26 10:03:54,237 INFO [finetune.py:976] (2/7) Epoch 8, batch 5500, loss[loss=0.1513, simple_loss=0.2191, pruned_loss=0.04176, over 4778.00 frames. ], tot_loss[loss=0.1988, simple_loss=0.2613, pruned_loss=0.06816, over 952728.08 frames. ], batch size: 26, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:04:03,517 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5928, 1.4468, 1.9935, 1.8604, 1.7803, 4.0208, 1.3110, 1.9740], device='cuda:2'), covar=tensor([0.0996, 0.1782, 0.1296, 0.1024, 0.1533, 0.0232, 0.1644, 0.1653], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0081, 0.0075, 0.0078, 0.0092, 0.0083, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 10:04:18,373 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.004e+02 1.550e+02 1.829e+02 2.210e+02 3.729e+02, threshold=3.658e+02, percent-clipped=0.0
2023-03-26 10:04:28,410 INFO [finetune.py:976] (2/7) Epoch 8, batch 5550, loss[loss=0.1422, simple_loss=0.198, pruned_loss=0.04315, over 4118.00 frames. ], tot_loss[loss=0.2009, simple_loss=0.2632, pruned_loss=0.06927, over 952200.95 frames. ], batch size: 17, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:04:46,981 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.6566, 3.1664, 2.8989, 1.5445, 2.9976, 2.5179, 2.4214, 2.5501], device='cuda:2'), covar=tensor([0.0860, 0.0987, 0.1910, 0.2299, 0.1878, 0.2188, 0.1999, 0.1371], device='cuda:2'), in_proj_covar=tensor([0.0167, 0.0200, 0.0202, 0.0188, 0.0217, 0.0206, 0.0222, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 10:05:19,423 INFO [finetune.py:976] (2/7) Epoch 8, batch 5600, loss[loss=0.1911, simple_loss=0.2663, pruned_loss=0.05798, over 4864.00 frames. ], tot_loss[loss=0.2031, simple_loss=0.2659, pruned_loss=0.07015, over 952344.20 frames. ], batch size: 34, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:05:24,106 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=45702.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:05:25,254 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=45704.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 10:05:25,304 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.0920, 1.6960, 1.7427, 0.7512, 1.9984, 2.2029, 1.9277, 1.7616], device='cuda:2'), covar=tensor([0.0929, 0.0830, 0.0557, 0.0860, 0.0492, 0.0643, 0.0505, 0.0703], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0158, 0.0122, 0.0137, 0.0133, 0.0126, 0.0147, 0.0149], device='cuda:2'), out_proj_covar=tensor([9.6711e-05, 1.1624e-04, 8.7968e-05, 9.9515e-05, 9.5232e-05, 9.2242e-05, 1.0811e-04, 1.0937e-04], device='cuda:2')
2023-03-26 10:05:29,905 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=45712.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 10:05:40,352 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.219e+02 1.739e+02 1.993e+02 2.505e+02 5.014e+02, threshold=3.987e+02, percent-clipped=3.0
2023-03-26 10:05:44,491 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=45737.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:05:48,897 INFO [finetune.py:976] (2/7) Epoch 8, batch 5650, loss[loss=0.2616, simple_loss=0.3307, pruned_loss=0.09628, over 4910.00 frames. ], tot_loss[loss=0.2047, simple_loss=0.2682, pruned_loss=0.07057, over 952844.70 frames. ], batch size: 37, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:05:58,592 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=45760.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:06:00,364 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=45763.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:06:06,760 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=45774.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:06:12,042 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4835, 2.2526, 2.0337, 2.4393, 2.2297, 2.2434, 2.1361, 2.9979], device='cuda:2'), covar=tensor([0.4714, 0.5126, 0.3713, 0.4453, 0.4362, 0.2631, 0.4817, 0.1784], device='cuda:2'), in_proj_covar=tensor([0.0286, 0.0260, 0.0222, 0.0281, 0.0242, 0.0207, 0.0245, 0.0207], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 10:06:13,160 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=45785.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:06:18,322 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3618, 2.3712, 2.3664, 1.7241, 2.4272, 2.5817, 2.4168, 2.1008], device='cuda:2'), covar=tensor([0.0557, 0.0569, 0.0702, 0.0845, 0.0719, 0.0594, 0.0653, 0.0933], device='cuda:2'), in_proj_covar=tensor([0.0135, 0.0132, 0.0144, 0.0124, 0.0116, 0.0144, 0.0144, 0.0159], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 10:06:18,810 INFO [finetune.py:976] (2/7) Epoch 8, batch 5700, loss[loss=0.1964, simple_loss=0.2328, pruned_loss=0.08003, over 4200.00 frames. ], tot_loss[loss=0.2028, simple_loss=0.2646, pruned_loss=0.07052, over 933767.19 frames. ], batch size: 18, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:06:54,867 INFO [finetune.py:976] (2/7) Epoch 9, batch 0, loss[loss=0.2513, simple_loss=0.309, pruned_loss=0.09678, over 4822.00 frames. ], tot_loss[loss=0.2513, simple_loss=0.309, pruned_loss=0.09678, over 4822.00 frames. ], batch size: 38, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:06:54,868 INFO [finetune.py:1001] (2/7) Computing validation loss
2023-03-26 10:07:00,805 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7090, 1.4625, 2.0087, 2.8776, 1.9692, 2.3075, 0.9610, 2.3129], device='cuda:2'), covar=tensor([0.1881, 0.1647, 0.1277, 0.0683, 0.0977, 0.1216, 0.1919, 0.0784], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0118, 0.0135, 0.0166, 0.0103, 0.0140, 0.0127, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 10:07:11,027 INFO [finetune.py:1010] (2/7) Epoch 9, validation: loss=0.1616, simple_loss=0.233, pruned_loss=0.04515, over 2265189.00 frames.
2023-03-26 10:07:11,027 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB
2023-03-26 10:07:17,606 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.132e+01 1.600e+02 1.914e+02 2.307e+02 4.538e+02, threshold=3.829e+02, percent-clipped=2.0
2023-03-26 10:07:22,771 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=45835.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:07:33,765 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7103, 1.2341, 0.9518, 1.6345, 2.0617, 1.4193, 1.4612, 1.6542], device='cuda:2'), covar=tensor([0.1465, 0.2144, 0.2004, 0.1199, 0.2000, 0.2131, 0.1465, 0.1915], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0097, 0.0114, 0.0092, 0.0123, 0.0096, 0.0100, 0.0093], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 10:07:46,515 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=45859.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:07:55,657 INFO [finetune.py:976] (2/7) Epoch 9, batch 50, loss[loss=0.2107, simple_loss=0.2722, pruned_loss=0.07463, over 4907.00 frames. ], tot_loss[loss=0.2098, simple_loss=0.2726, pruned_loss=0.07356, over 215713.93 frames. ], batch size: 38, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:08:13,655 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.6741, 1.8055, 1.9332, 1.0200, 1.9439, 2.1733, 2.0263, 1.6187], device='cuda:2'), covar=tensor([0.0854, 0.0561, 0.0372, 0.0597, 0.0373, 0.0480, 0.0345, 0.0640], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0157, 0.0121, 0.0136, 0.0132, 0.0125, 0.0147, 0.0148], device='cuda:2'), out_proj_covar=tensor([9.6434e-05, 1.1582e-04, 8.7317e-05, 9.9096e-05, 9.4817e-05, 9.1873e-05, 1.0774e-04, 1.0883e-04], device='cuda:2')
2023-03-26 10:08:35,698 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=45920.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:08:36,814 INFO [finetune.py:976] (2/7) Epoch 9, batch 100, loss[loss=0.1773, simple_loss=0.24, pruned_loss=0.05725, over 4808.00 frames. ], tot_loss[loss=0.1987, simple_loss=0.2608, pruned_loss=0.06831, over 380889.33 frames. ], batch size: 25, lr: 3.81e-03, grad_scale: 16.0
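[Editor's note] At the epoch boundary above (finetune.py:1001/1010/1011) the loop computes a validation loss over the full dev set and reports peak GPU memory. The memory figure maps directly onto a standard PyTorch allocator call; a minimal sketch that reproduces the exact log line (the function name is illustrative):

```python
import logging
import torch

def log_peak_memory(device: torch.device) -> None:
    """Reproduces the 'Maximum memory allocated so far is NNNNMB' lines
    using PyTorch's CUDA allocator statistics."""
    mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
    logging.info(f"Maximum memory allocated so far is {mb}MB")
```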
2023-03-26 10:08:42,570 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.088e+02 1.760e+02 2.008e+02 2.423e+02 3.807e+02, threshold=4.016e+02, percent-clipped=0.0
2023-03-26 10:09:09,394 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.58 vs. limit=2.0
2023-03-26 10:09:10,374 INFO [finetune.py:976] (2/7) Epoch 9, batch 150, loss[loss=0.1839, simple_loss=0.2531, pruned_loss=0.05736, over 4828.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.2564, pruned_loss=0.06628, over 508954.83 frames. ], batch size: 33, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:09:32,040 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=46004.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 10:09:37,634 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.84 vs. limit=5.0
2023-03-26 10:09:49,161 INFO [finetune.py:976] (2/7) Epoch 9, batch 200, loss[loss=0.186, simple_loss=0.2348, pruned_loss=0.06866, over 4818.00 frames. ], tot_loss[loss=0.1927, simple_loss=0.2541, pruned_loss=0.06569, over 606351.15 frames. ], batch size: 30, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:09:58,664 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.086e+02 1.678e+02 2.056e+02 2.461e+02 4.455e+02, threshold=4.113e+02, percent-clipped=4.0
2023-03-26 10:10:22,926 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=46052.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 10:10:22,986 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8211, 1.6750, 1.4789, 1.8854, 2.2867, 1.9348, 1.4259, 1.4250], device='cuda:2'), covar=tensor([0.2472, 0.2353, 0.2220, 0.1843, 0.2026, 0.1272, 0.2962, 0.2313], device='cuda:2'), in_proj_covar=tensor([0.0238, 0.0209, 0.0206, 0.0188, 0.0242, 0.0179, 0.0214, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 10:10:26,562 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=46058.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:10:36,486 INFO [finetune.py:976] (2/7) Epoch 9, batch 250, loss[loss=0.1956, simple_loss=0.2689, pruned_loss=0.06116, over 4803.00 frames. ], tot_loss[loss=0.1986, simple_loss=0.2604, pruned_loss=0.06841, over 684297.25 frames. ], batch size: 51, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:11:09,154 INFO [finetune.py:976] (2/7) Epoch 9, batch 300, loss[loss=0.2211, simple_loss=0.2864, pruned_loss=0.07792, over 4892.00 frames. ], tot_loss[loss=0.2019, simple_loss=0.2653, pruned_loss=0.0692, over 747308.48 frames. ], batch size: 35, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:11:12,817 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.48 vs. limit=2.0
2023-03-26 10:11:14,967 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.304e+02 1.737e+02 2.021e+02 2.354e+02 3.684e+02, threshold=4.042e+02, percent-clipped=0.0
2023-03-26 10:11:15,066 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=46130.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:11:22,171 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4369, 1.3793, 1.4837, 0.8165, 1.5947, 1.5128, 1.5153, 1.2738], device='cuda:2'), covar=tensor([0.0627, 0.0835, 0.0775, 0.1002, 0.0740, 0.0778, 0.0684, 0.1341], device='cuda:2'), in_proj_covar=tensor([0.0136, 0.0132, 0.0144, 0.0124, 0.0116, 0.0144, 0.0144, 0.0159], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 10:11:22,236 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.12 vs. limit=2.0
2023-03-26 10:11:41,960 INFO [finetune.py:976] (2/7) Epoch 9, batch 350, loss[loss=0.2509, simple_loss=0.3112, pruned_loss=0.09525, over 4833.00 frames. ], tot_loss[loss=0.2047, simple_loss=0.2689, pruned_loss=0.0703, over 794541.32 frames. ], batch size: 47, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:11:52,844 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6153, 3.5936, 3.3831, 1.5640, 3.6952, 2.8256, 0.7165, 2.5184], device='cuda:2'), covar=tensor([0.2481, 0.2146, 0.1492, 0.3351, 0.1016, 0.0993, 0.4422, 0.1426], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0172, 0.0160, 0.0128, 0.0156, 0.0122, 0.0145, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 10:12:11,369 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=46215.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:12:17,566 INFO [finetune.py:976] (2/7) Epoch 9, batch 400, loss[loss=0.2434, simple_loss=0.2986, pruned_loss=0.09407, over 4890.00 frames. ], tot_loss[loss=0.2061, simple_loss=0.27, pruned_loss=0.07107, over 827987.68 frames. ], batch size: 43, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:12:23,439 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.951e+01 1.676e+02 2.053e+02 2.418e+02 4.627e+02, threshold=4.106e+02, percent-clipped=2.0
2023-03-26 10:13:00,669 INFO [finetune.py:976] (2/7) Epoch 9, batch 450, loss[loss=0.2199, simple_loss=0.2855, pruned_loss=0.07714, over 4910.00 frames. ], tot_loss[loss=0.2046, simple_loss=0.2685, pruned_loss=0.07034, over 856155.81 frames. ], batch size: 46, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:13:03,229 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=46276.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:13:35,970 INFO [finetune.py:976] (2/7) Epoch 9, batch 500, loss[loss=0.1836, simple_loss=0.2535, pruned_loss=0.05687, over 4735.00 frames. ], tot_loss[loss=0.2019, simple_loss=0.2655, pruned_loss=0.06917, over 877193.94 frames. ], batch size: 23, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:13:45,335 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.146e+02 1.586e+02 1.900e+02 2.408e+02 3.619e+02, threshold=3.799e+02, percent-clipped=0.0
2023-03-26 10:13:54,718 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=46337.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:13:59,930 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.13 vs. limit=2.0
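[Editor's note] The lr field decays very slowly across these lines (3.83e-03 early in epoch 8, 3.79e-03 by epoch 9, batch ~1800), i.e., the schedule depends on both the batch and epoch counters rather than on either alone. A sketch of an Eden-style schedule consistent with that behavior; the functional form and the lr_batches/lr_epochs constants below are assumptions chosen to reproduce the logged values approximately, not a transcript of this run's configuration:

```python
def eden_lr(base_lr: float, batch: int, epoch: int,
            lr_batches: float = 100000.0, lr_epochs: float = 100.0) -> float:
    """Eden-style learning rate: decays smoothly in both batch and epoch.
    With base_lr=0.004, batch~44000, epoch=8 this yields ~3.82e-03,
    matching the slow drift seen in the log."""
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor
```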
2023-03-26 10:14:08,381 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=46358.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:14:09,081 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.69 vs. limit=2.0
2023-03-26 10:14:17,265 INFO [finetune.py:976] (2/7) Epoch 9, batch 550, loss[loss=0.2099, simple_loss=0.2696, pruned_loss=0.0751, over 4924.00 frames. ], tot_loss[loss=0.1985, simple_loss=0.2615, pruned_loss=0.06779, over 894486.19 frames. ], batch size: 33, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:14:39,979 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=46406.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:14:43,745 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.3254, 1.5307, 1.5136, 0.8941, 1.4478, 1.7442, 1.7453, 1.3192], device='cuda:2'), covar=tensor([0.0851, 0.0485, 0.0436, 0.0497, 0.0462, 0.0525, 0.0307, 0.0630], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0156, 0.0121, 0.0136, 0.0132, 0.0125, 0.0146, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.6270e-05, 1.1497e-04, 8.7723e-05, 9.8639e-05, 9.4303e-05, 9.1793e-05, 1.0692e-04, 1.0814e-04], device='cuda:2')
2023-03-26 10:14:50,106 INFO [finetune.py:976] (2/7) Epoch 9, batch 600, loss[loss=0.2079, simple_loss=0.2804, pruned_loss=0.06765, over 4863.00 frames. ], tot_loss[loss=0.2013, simple_loss=0.2642, pruned_loss=0.0692, over 907749.06 frames. ], batch size: 31, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:14:54,847 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.149e+02 1.702e+02 1.969e+02 2.390e+02 4.680e+02, threshold=3.938e+02, percent-clipped=3.0
2023-03-26 10:14:54,941 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=46430.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:15:08,741 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7328, 1.5466, 1.4583, 1.2065, 1.6308, 1.5431, 1.5329, 2.0261], device='cuda:2'), covar=tensor([0.4661, 0.4736, 0.3755, 0.4211, 0.3752, 0.2783, 0.3903, 0.2137], device='cuda:2'), in_proj_covar=tensor([0.0287, 0.0261, 0.0222, 0.0280, 0.0243, 0.0208, 0.0245, 0.0209], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 10:15:36,213 INFO [finetune.py:976] (2/7) Epoch 9, batch 650, loss[loss=0.1844, simple_loss=0.2669, pruned_loss=0.05095, over 4826.00 frames. ], tot_loss[loss=0.2036, simple_loss=0.2669, pruned_loss=0.07019, over 918093.92 frames. ], batch size: 33, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:15:40,463 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=46478.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:15:51,879 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0
2023-03-26 10:16:05,403 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=46515.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:16:09,544 INFO [finetune.py:976] (2/7) Epoch 9, batch 700, loss[loss=0.2174, simple_loss=0.2793, pruned_loss=0.07772, over 4752.00 frames. ], tot_loss[loss=0.2061, simple_loss=0.27, pruned_loss=0.0711, over 926456.92 frames. ], batch size: 27, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:16:14,888 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.046e+02 1.666e+02 2.010e+02 2.529e+02 4.289e+02, threshold=4.019e+02, percent-clipped=2.0
2023-03-26 10:16:26,202 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4014, 1.3614, 1.2785, 1.3732, 1.6844, 1.5337, 1.4959, 1.2095], device='cuda:2'), covar=tensor([0.0317, 0.0247, 0.0473, 0.0273, 0.0196, 0.0406, 0.0254, 0.0311], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0110, 0.0139, 0.0115, 0.0102, 0.0101, 0.0091, 0.0108], device='cuda:2'), out_proj_covar=tensor([6.9912e-05, 8.6186e-05, 1.1125e-04, 9.0722e-05, 8.0368e-05, 7.5266e-05, 6.8636e-05, 8.3810e-05], device='cuda:2')
2023-03-26 10:16:37,472 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=46563.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:16:42,850 INFO [finetune.py:976] (2/7) Epoch 9, batch 750, loss[loss=0.2356, simple_loss=0.2931, pruned_loss=0.08904, over 4776.00 frames. ], tot_loss[loss=0.2071, simple_loss=0.2712, pruned_loss=0.0715, over 930301.29 frames. ], batch size: 51, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:16:58,834 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=46595.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:17:03,702 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=46602.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:17:15,989 INFO [finetune.py:976] (2/7) Epoch 9, batch 800, loss[loss=0.2304, simple_loss=0.2872, pruned_loss=0.08676, over 4925.00 frames. ], tot_loss[loss=0.206, simple_loss=0.2706, pruned_loss=0.07067, over 938637.15 frames. ], batch size: 38, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:17:20,814 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.216e+01 1.561e+02 1.863e+02 2.153e+02 3.377e+02, threshold=3.726e+02, percent-clipped=0.0
2023-03-26 10:17:22,506 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=46632.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:17:39,095 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=46656.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 10:17:40,244 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-26 10:17:43,774 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=46663.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:17:49,097 INFO [finetune.py:976] (2/7) Epoch 9, batch 850, loss[loss=0.1565, simple_loss=0.2256, pruned_loss=0.04374, over 4754.00 frames. ], tot_loss[loss=0.2028, simple_loss=0.2674, pruned_loss=0.06911, over 943067.29 frames. ], batch size: 26, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:18:34,821 INFO [finetune.py:976] (2/7) Epoch 9, batch 900, loss[loss=0.1853, simple_loss=0.2477, pruned_loss=0.0614, over 4791.00 frames. ], tot_loss[loss=0.2011, simple_loss=0.2649, pruned_loss=0.0686, over 946482.05 frames. ], batch size: 29, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:18:39,658 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.365e+01 1.501e+02 1.768e+02 2.130e+02 3.855e+02, threshold=3.537e+02, percent-clipped=1.0
2023-03-26 10:19:05,943 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5896, 1.3685, 1.8462, 2.8713, 2.0487, 2.0213, 0.9538, 2.3602], device='cuda:2'), covar=tensor([0.1545, 0.1526, 0.1211, 0.0793, 0.0787, 0.1429, 0.1734, 0.0612], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0117, 0.0133, 0.0165, 0.0102, 0.0138, 0.0126, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 10:19:09,054 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7529, 1.8172, 2.1052, 1.9014, 1.8733, 4.4877, 1.6689, 2.1374], device='cuda:2'), covar=tensor([0.0981, 0.1760, 0.1127, 0.1093, 0.1552, 0.0159, 0.1465, 0.1603], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0082, 0.0076, 0.0079, 0.0093, 0.0084, 0.0085, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 10:19:10,158 INFO [finetune.py:976] (2/7) Epoch 9, batch 950, loss[loss=0.2132, simple_loss=0.271, pruned_loss=0.07769, over 4831.00 frames. ], tot_loss[loss=0.2013, simple_loss=0.2642, pruned_loss=0.06921, over 948170.69 frames. ], batch size: 40, lr: 3.80e-03, grad_scale: 32.0
2023-03-26 10:19:34,882 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.35 vs. limit=5.0
2023-03-26 10:19:36,043 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=46810.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:19:44,273 INFO [finetune.py:976] (2/7) Epoch 9, batch 1000, loss[loss=0.1979, simple_loss=0.2723, pruned_loss=0.06177, over 4796.00 frames. ], tot_loss[loss=0.2034, simple_loss=0.2667, pruned_loss=0.07004, over 949976.37 frames. ], batch size: 29, lr: 3.80e-03, grad_scale: 32.0
2023-03-26 10:19:49,056 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.128e+02 1.690e+02 1.992e+02 2.479e+02 5.334e+02, threshold=3.983e+02, percent-clipped=2.0
2023-03-26 10:20:22,577 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=46871.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:20:23,062 INFO [finetune.py:976] (2/7) Epoch 9, batch 1050, loss[loss=0.2085, simple_loss=0.2675, pruned_loss=0.07475, over 4752.00 frames. ], tot_loss[loss=0.2046, simple_loss=0.2689, pruned_loss=0.07015, over 951351.51 frames. ], batch size: 26, lr: 3.80e-03, grad_scale: 32.0
2023-03-26 10:20:56,333 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9941, 2.2313, 1.9975, 1.7183, 2.6523, 2.7242, 2.2011, 2.1053], device='cuda:2'), covar=tensor([0.0381, 0.0304, 0.0453, 0.0327, 0.0234, 0.0426, 0.0350, 0.0361], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0108, 0.0138, 0.0114, 0.0101, 0.0099, 0.0090, 0.0107], device='cuda:2'), out_proj_covar=tensor([6.9260e-05, 8.4745e-05, 1.0990e-04, 8.9455e-05, 7.9265e-05, 7.3653e-05, 6.7971e-05, 8.2595e-05], device='cuda:2')
2023-03-26 10:21:05,008 INFO [finetune.py:976] (2/7) Epoch 9, batch 1100, loss[loss=0.2529, simple_loss=0.3065, pruned_loss=0.09968, over 4807.00 frames. ], tot_loss[loss=0.2068, simple_loss=0.2708, pruned_loss=0.07147, over 951554.24 frames. ], batch size: 51, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:21:10,487 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.274e+02 1.654e+02 2.044e+02 2.534e+02 3.814e+02, threshold=4.089e+02, percent-clipped=0.0
2023-03-26 10:21:11,166 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=46932.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:21:23,032 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=46951.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 10:21:28,248 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=46958.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:21:37,702 INFO [finetune.py:976] (2/7) Epoch 9, batch 1150, loss[loss=0.1921, simple_loss=0.2569, pruned_loss=0.06369, over 4757.00 frames. ], tot_loss[loss=0.2066, simple_loss=0.2707, pruned_loss=0.07121, over 951635.69 frames. ], batch size: 27, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:21:42,584 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=46980.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:22:10,619 INFO [finetune.py:976] (2/7) Epoch 9, batch 1200, loss[loss=0.2075, simple_loss=0.2633, pruned_loss=0.07581, over 4896.00 frames. ], tot_loss[loss=0.2048, simple_loss=0.2686, pruned_loss=0.07053, over 953113.71 frames. ], batch size: 35, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:22:11,386 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3102, 1.3347, 1.3097, 1.4194, 1.6581, 1.6019, 1.4121, 1.2370], device='cuda:2'), covar=tensor([0.0373, 0.0270, 0.0488, 0.0246, 0.0208, 0.0394, 0.0301, 0.0344], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0108, 0.0138, 0.0114, 0.0101, 0.0099, 0.0090, 0.0107], device='cuda:2'), out_proj_covar=tensor([6.9401e-05, 8.4752e-05, 1.1017e-04, 8.9327e-05, 7.9608e-05, 7.3901e-05, 6.8291e-05, 8.2712e-05], device='cuda:2')
2023-03-26 10:22:16,138 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.814e+01 1.625e+02 2.018e+02 2.406e+02 3.989e+02, threshold=4.036e+02, percent-clipped=0.0
2023-03-26 10:22:23,575 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1553, 2.2116, 2.1222, 1.5404, 2.1952, 2.3575, 2.2536, 1.8588], device='cuda:2'), covar=tensor([0.0597, 0.0543, 0.0752, 0.0869, 0.0564, 0.0655, 0.0612, 0.1032], device='cuda:2'), in_proj_covar=tensor([0.0136, 0.0133, 0.0144, 0.0124, 0.0117, 0.0144, 0.0144, 0.0160], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 10:22:43,585 INFO [finetune.py:976] (2/7) Epoch 9, batch 1250, loss[loss=0.2047, simple_loss=0.2707, pruned_loss=0.0694, over 4824.00 frames. ], tot_loss[loss=0.2037, simple_loss=0.267, pruned_loss=0.07019, over 953750.68 frames. ], batch size: 40, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:23:21,439 INFO [finetune.py:976] (2/7) Epoch 9, batch 1300, loss[loss=0.224, simple_loss=0.2865, pruned_loss=0.08075, over 4834.00 frames. ], tot_loss[loss=0.2002, simple_loss=0.2631, pruned_loss=0.0687, over 954331.75 frames. ], batch size: 33, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:23:31,783 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.169e+01 1.658e+02 1.956e+02 2.404e+02 5.414e+02, threshold=3.912e+02, percent-clipped=2.0
2023-03-26 10:23:57,904 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=47166.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:24:03,032 INFO [finetune.py:976] (2/7) Epoch 9, batch 1350, loss[loss=0.1969, simple_loss=0.2684, pruned_loss=0.06275, over 4932.00 frames. ], tot_loss[loss=0.202, simple_loss=0.2649, pruned_loss=0.06955, over 953981.04 frames. ], batch size: 38, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:24:33,217 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-26 10:24:36,409 INFO [finetune.py:976] (2/7) Epoch 9, batch 1400, loss[loss=0.2116, simple_loss=0.284, pruned_loss=0.06959, over 4893.00 frames. ], tot_loss[loss=0.2025, simple_loss=0.2663, pruned_loss=0.06932, over 954945.31 frames. ], batch size: 43, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:24:42,785 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.151e+02 1.739e+02 2.002e+02 2.347e+02 4.228e+02, threshold=4.005e+02, percent-clipped=2.0
2023-03-26 10:24:49,012 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=47241.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:24:55,116 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=47251.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:24:59,389 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=47258.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:25:09,254 INFO [finetune.py:976] (2/7) Epoch 9, batch 1450, loss[loss=0.2604, simple_loss=0.3072, pruned_loss=0.1068, over 4857.00 frames. ], tot_loss[loss=0.2041, simple_loss=0.2682, pruned_loss=0.07003, over 955507.03 frames. ], batch size: 31, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:25:27,570 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=47299.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:25:29,422 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=47302.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:25:36,195 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=47306.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:25:53,868 INFO [finetune.py:976] (2/7) Epoch 9, batch 1500, loss[loss=0.1958, simple_loss=0.2782, pruned_loss=0.05666, over 4848.00 frames. ], tot_loss[loss=0.2054, simple_loss=0.2695, pruned_loss=0.07067, over 956648.83 frames. ], batch size: 44, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:26:00,802 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.103e+02 1.648e+02 1.983e+02 2.305e+02 4.092e+02, threshold=3.967e+02, percent-clipped=1.0
2023-03-26 10:26:26,876 INFO [finetune.py:976] (2/7) Epoch 9, batch 1550, loss[loss=0.1983, simple_loss=0.2613, pruned_loss=0.06764, over 4817.00 frames. ], tot_loss[loss=0.2046, simple_loss=0.2693, pruned_loss=0.06991, over 957369.52 frames. ], batch size: 38, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:26:33,204 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=47379.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:26:50,610 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0
limit=2.0 2023-03-26 10:26:51,165 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4948, 2.7356, 2.3505, 1.8319, 2.7082, 2.8713, 2.7980, 2.4007], device='cuda:2'), covar=tensor([0.0687, 0.0593, 0.0968, 0.1003, 0.0540, 0.0819, 0.0728, 0.1021], device='cuda:2'), in_proj_covar=tensor([0.0137, 0.0132, 0.0145, 0.0125, 0.0117, 0.0145, 0.0145, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 10:26:51,392 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.59 vs. limit=2.0 2023-03-26 10:27:00,841 INFO [finetune.py:976] (2/7) Epoch 9, batch 1600, loss[loss=0.1851, simple_loss=0.2367, pruned_loss=0.0667, over 4824.00 frames. ], tot_loss[loss=0.2014, simple_loss=0.2653, pruned_loss=0.06877, over 956165.96 frames. ], batch size: 25, lr: 3.80e-03, grad_scale: 16.0 2023-03-26 10:27:06,345 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. limit=2.0 2023-03-26 10:27:07,815 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.155e+02 1.632e+02 1.991e+02 2.321e+02 5.028e+02, threshold=3.982e+02, percent-clipped=1.0 2023-03-26 10:27:14,438 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=47440.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:27:15,523 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1761, 1.9188, 1.5189, 2.0553, 1.9710, 1.7533, 2.4108, 2.0812], device='cuda:2'), covar=tensor([0.1542, 0.2869, 0.3708, 0.3193, 0.3068, 0.1975, 0.3994, 0.2107], device='cuda:2'), in_proj_covar=tensor([0.0172, 0.0188, 0.0233, 0.0253, 0.0237, 0.0194, 0.0211, 0.0193], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 10:27:30,688 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=47466.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:27:34,229 INFO [finetune.py:976] (2/7) Epoch 9, batch 1650, loss[loss=0.1503, simple_loss=0.2117, pruned_loss=0.04448, over 4675.00 frames. ], tot_loss[loss=0.1975, simple_loss=0.2612, pruned_loss=0.06691, over 955241.63 frames. ], batch size: 23, lr: 3.80e-03, grad_scale: 16.0 2023-03-26 10:27:50,620 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0 2023-03-26 10:28:02,972 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=47514.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:28:07,770 INFO [finetune.py:976] (2/7) Epoch 9, batch 1700, loss[loss=0.208, simple_loss=0.2528, pruned_loss=0.08158, over 4768.00 frames. ], tot_loss[loss=0.1967, simple_loss=0.2595, pruned_loss=0.06693, over 955895.22 frames. 
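[Note] The optim.py:369 entries report the quartiles (min/25%/median/75%/max) of recent gradient norms plus a clipping threshold. Throughout this section the threshold equals Clipping_scale (2.0) times the logged median, e.g. 2 x 1.991e+02 = 3.982e+02 for the 10:27:07 entry above. A hedged sketch of that apparent rule (the real logic in optim.py is more involved):

    import torch

    def clip_threshold(recent_grad_norms, clipping_scale=2.0):
        # Assumed rule, inferred from the logged numbers: the threshold
        # tracks clipping_scale times a running median of gradient norms.
        return clipping_scale * torch.as_tensor(recent_grad_norms).median().item()

    quartiles = [115.5, 163.2, 199.1, 232.1, 502.8]  # min..max from 10:27:07
    print(clip_threshold(quartiles))                 # 398.2, i.e. 3.982e+02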
], batch size: 26, lr: 3.80e-03, grad_scale: 16.0 2023-03-26 10:28:13,223 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.017e+02 1.617e+02 1.864e+02 2.209e+02 5.015e+02, threshold=3.728e+02, percent-clipped=2.0 2023-03-26 10:28:17,980 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1373, 1.0668, 1.0494, 0.4304, 0.8417, 1.2066, 1.2729, 1.0546], device='cuda:2'), covar=tensor([0.0887, 0.0516, 0.0476, 0.0534, 0.0485, 0.0567, 0.0339, 0.0655], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0156, 0.0121, 0.0135, 0.0132, 0.0125, 0.0146, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.5666e-05, 1.1515e-04, 8.7450e-05, 9.8036e-05, 9.4347e-05, 9.1648e-05, 1.0683e-04, 1.0838e-04], device='cuda:2') 2023-03-26 10:28:47,731 INFO [finetune.py:976] (2/7) Epoch 9, batch 1750, loss[loss=0.2156, simple_loss=0.2799, pruned_loss=0.07563, over 4746.00 frames. ], tot_loss[loss=0.2, simple_loss=0.2629, pruned_loss=0.06859, over 955206.13 frames. ], batch size: 28, lr: 3.80e-03, grad_scale: 16.0 2023-03-26 10:29:08,521 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9295, 1.4030, 0.8816, 1.6831, 2.0120, 1.6000, 1.6838, 1.7202], device='cuda:2'), covar=tensor([0.1440, 0.2066, 0.2089, 0.1197, 0.2168, 0.1974, 0.1343, 0.2002], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0096, 0.0113, 0.0092, 0.0122, 0.0096, 0.0100, 0.0092], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 10:29:13,720 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=47597.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:29:29,376 INFO [finetune.py:976] (2/7) Epoch 9, batch 1800, loss[loss=0.1724, simple_loss=0.2449, pruned_loss=0.04998, over 4867.00 frames. ], tot_loss[loss=0.2022, simple_loss=0.266, pruned_loss=0.06923, over 955484.96 frames. ], batch size: 44, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:29:33,769 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5294, 1.5341, 1.4420, 1.6351, 1.0496, 3.6692, 1.4866, 1.9553], device='cuda:2'), covar=tensor([0.3890, 0.2716, 0.2387, 0.2494, 0.2134, 0.0180, 0.2586, 0.1380], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0115, 0.0120, 0.0123, 0.0117, 0.0098, 0.0100, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 10:29:34,862 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.172e+02 1.736e+02 1.994e+02 2.573e+02 6.193e+02, threshold=3.989e+02, percent-clipped=4.0 2023-03-26 10:29:55,595 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-26 10:30:02,647 INFO [finetune.py:976] (2/7) Epoch 9, batch 1850, loss[loss=0.2233, simple_loss=0.2867, pruned_loss=0.0799, over 4729.00 frames. ], tot_loss[loss=0.2043, simple_loss=0.268, pruned_loss=0.07027, over 954218.35 frames. 
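[Note] The zipformer.py:2441 attn_weights_entropy tensors are periodic diagnostics: one entropy value per attention head, presumably computed on real attention weights inside the model. Low values mean peaky attention; the maximum is log(num_keys) for uniform attention. An illustration with invented shapes:

    import torch

    attn = torch.softmax(torch.randn(8, 16, 16), dim=-1)  # (head, query, key); toy data
    # H = -sum(p * log p) over keys, averaged over queries -> one value per head
    entropy = -(attn * attn.clamp_min(1e-20).log()).sum(dim=-1).mean(dim=-1)
    print(entropy)  # comparable in spirit to the logged attn_weights_entropy rows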
], batch size: 54, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:30:06,410 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.6885, 1.4926, 1.5173, 0.7773, 1.6213, 1.8483, 1.7749, 1.3990], device='cuda:2'), covar=tensor([0.1177, 0.0808, 0.0585, 0.0771, 0.0556, 0.0597, 0.0428, 0.0782], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0158, 0.0123, 0.0136, 0.0133, 0.0127, 0.0147, 0.0149], device='cuda:2'), out_proj_covar=tensor([9.6973e-05, 1.1639e-04, 8.8668e-05, 9.9142e-05, 9.4993e-05, 9.2878e-05, 1.0797e-04, 1.0971e-04], device='cuda:2') 2023-03-26 10:30:35,894 INFO [finetune.py:976] (2/7) Epoch 9, batch 1900, loss[loss=0.2455, simple_loss=0.293, pruned_loss=0.09902, over 4903.00 frames. ], tot_loss[loss=0.2038, simple_loss=0.2684, pruned_loss=0.06965, over 954520.87 frames. ], batch size: 36, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:30:46,261 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.162e+02 1.608e+02 1.835e+02 2.236e+02 3.803e+02, threshold=3.670e+02, percent-clipped=0.0 2023-03-26 10:30:52,997 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=47735.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:30:54,848 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=47738.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:31:21,753 INFO [finetune.py:976] (2/7) Epoch 9, batch 1950, loss[loss=0.1949, simple_loss=0.2544, pruned_loss=0.06774, over 4907.00 frames. ], tot_loss[loss=0.2011, simple_loss=0.2655, pruned_loss=0.06838, over 954958.73 frames. ], batch size: 37, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:31:31,521 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=47788.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:31:39,140 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=47799.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:31:46,697 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=47809.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 10:31:53,945 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0972, 1.8405, 1.5976, 1.7191, 2.0506, 1.7550, 2.1791, 2.0759], device='cuda:2'), covar=tensor([0.1490, 0.2795, 0.3850, 0.3015, 0.2887, 0.1966, 0.3283, 0.2093], device='cuda:2'), in_proj_covar=tensor([0.0171, 0.0187, 0.0231, 0.0252, 0.0236, 0.0193, 0.0210, 0.0193], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 10:31:55,009 INFO [finetune.py:976] (2/7) Epoch 9, batch 2000, loss[loss=0.2702, simple_loss=0.3087, pruned_loss=0.1158, over 4236.00 frames. ], tot_loss[loss=0.199, simple_loss=0.2628, pruned_loss=0.0676, over 954096.29 frames. 
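[Note] The zipformer.py:1188 entries track per-stack layer dropping: each encoder stack has its own warmup window (warmup_begin/warmup_end in batch counts), and on some steps one random layer is skipped (num_to_drop=1, layers_to_drop={0}). Since batch_count is ~47k here, far past every warmup_end, the occasional drops indicate some residual drop probability after warmup. A sketch of that assumed behaviour (probabilities invented, not taken from zipformer.py):

    import random

    def layers_to_drop(num_layers, batch_count, warmup_begin, warmup_end,
                       base_p=0.075):
        # Assumed: higher drop probability inside the warmup window, small
        # floor afterwards (hence the rare num_to_drop=1 at ~47k batches).
        p = 2.0 * base_p if warmup_begin <= batch_count < warmup_end else base_p
        return {i for i in range(num_layers) if random.random() < p}

    print(layers_to_drop(4, 47809.0, 1333.3, 2000.0))  # usually set(), sometimes {0}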
], batch size: 65, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:32:00,455 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.198e+02 1.527e+02 1.824e+02 2.186e+02 3.277e+02, threshold=3.648e+02, percent-clipped=0.0 2023-03-26 10:32:11,911 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=47849.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:32:19,276 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.2495, 1.4038, 1.5173, 0.8025, 1.2590, 1.5983, 1.6832, 1.3340], device='cuda:2'), covar=tensor([0.0915, 0.0496, 0.0449, 0.0512, 0.0513, 0.0617, 0.0348, 0.0642], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0158, 0.0123, 0.0136, 0.0133, 0.0127, 0.0147, 0.0149], device='cuda:2'), out_proj_covar=tensor([9.6767e-05, 1.1602e-04, 8.8707e-05, 9.8883e-05, 9.5187e-05, 9.2750e-05, 1.0783e-04, 1.0943e-04], device='cuda:2') 2023-03-26 10:32:35,765 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=47870.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 10:32:36,816 INFO [finetune.py:976] (2/7) Epoch 9, batch 2050, loss[loss=0.2224, simple_loss=0.2564, pruned_loss=0.09425, over 4732.00 frames. ], tot_loss[loss=0.1953, simple_loss=0.2585, pruned_loss=0.0661, over 955785.07 frames. ], batch size: 59, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:32:57,844 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=47897.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:33:15,823 INFO [finetune.py:976] (2/7) Epoch 9, batch 2100, loss[loss=0.308, simple_loss=0.341, pruned_loss=0.1375, over 4016.00 frames. ], tot_loss[loss=0.1939, simple_loss=0.2572, pruned_loss=0.06534, over 956289.07 frames. ], batch size: 65, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:33:21,286 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.115e+02 1.604e+02 1.917e+02 2.370e+02 5.169e+02, threshold=3.834e+02, percent-clipped=3.0 2023-03-26 10:33:29,786 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=47945.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:33:54,984 INFO [finetune.py:976] (2/7) Epoch 9, batch 2150, loss[loss=0.2605, simple_loss=0.3201, pruned_loss=0.1004, over 4866.00 frames. ], tot_loss[loss=0.1971, simple_loss=0.2604, pruned_loss=0.0669, over 953684.27 frames. ], batch size: 34, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:34:18,173 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5355, 1.5028, 1.6559, 0.8499, 1.6043, 1.6744, 1.5302, 1.3988], device='cuda:2'), covar=tensor([0.0550, 0.0689, 0.0586, 0.0857, 0.0688, 0.0637, 0.0606, 0.1130], device='cuda:2'), in_proj_covar=tensor([0.0136, 0.0132, 0.0144, 0.0125, 0.0116, 0.0144, 0.0145, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 10:35:00,855 INFO [finetune.py:976] (2/7) Epoch 9, batch 2200, loss[loss=0.1927, simple_loss=0.264, pruned_loss=0.06072, over 4823.00 frames. ], tot_loss[loss=0.1997, simple_loss=0.2641, pruned_loss=0.06764, over 955848.89 frames. 
], batch size: 33, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:35:11,848 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.338e+02 1.754e+02 2.052e+02 2.528e+02 4.321e+02, threshold=4.105e+02, percent-clipped=1.0 2023-03-26 10:35:18,731 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=48035.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:36:02,992 INFO [finetune.py:976] (2/7) Epoch 9, batch 2250, loss[loss=0.2374, simple_loss=0.298, pruned_loss=0.08842, over 4765.00 frames. ], tot_loss[loss=0.2021, simple_loss=0.2665, pruned_loss=0.06881, over 954597.59 frames. ], batch size: 28, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:36:03,118 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=48072.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:36:20,661 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=48083.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:36:32,289 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48094.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:36:51,956 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7570, 1.6474, 1.6145, 1.7216, 1.3111, 3.6583, 1.5130, 2.0329], device='cuda:2'), covar=tensor([0.3216, 0.2345, 0.1972, 0.2163, 0.1649, 0.0175, 0.2364, 0.1175], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0115, 0.0119, 0.0123, 0.0116, 0.0098, 0.0100, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 10:37:05,451 INFO [finetune.py:976] (2/7) Epoch 9, batch 2300, loss[loss=0.1919, simple_loss=0.2587, pruned_loss=0.06259, over 4715.00 frames. ], tot_loss[loss=0.2014, simple_loss=0.2661, pruned_loss=0.06833, over 953378.03 frames. ], batch size: 59, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:37:12,523 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.65 vs. limit=2.0 2023-03-26 10:37:16,268 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.98 vs. limit=5.0 2023-03-26 10:37:16,651 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.508e+01 1.652e+02 1.874e+02 2.360e+02 5.580e+02, threshold=3.748e+02, percent-clipped=1.0 2023-03-26 10:37:23,216 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=48133.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:37:34,217 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48144.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:37:54,847 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48165.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 10:37:59,932 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1131, 0.9750, 1.0394, 0.4352, 0.8720, 1.1773, 1.2133, 1.0175], device='cuda:2'), covar=tensor([0.0789, 0.0480, 0.0453, 0.0499, 0.0514, 0.0527, 0.0377, 0.0615], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0156, 0.0121, 0.0135, 0.0132, 0.0126, 0.0146, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.6007e-05, 1.1479e-04, 8.7587e-05, 9.7941e-05, 9.4230e-05, 9.2240e-05, 1.0741e-04, 1.0804e-04], device='cuda:2') 2023-03-26 10:38:01,014 INFO [finetune.py:976] (2/7) Epoch 9, batch 2350, loss[loss=0.1949, simple_loss=0.2597, pruned_loss=0.06505, over 4907.00 frames. ], tot_loss[loss=0.2, simple_loss=0.2641, pruned_loss=0.06795, over 955133.44 frames. 
], batch size: 43, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:38:35,587 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=48212.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 10:38:43,033 INFO [finetune.py:976] (2/7) Epoch 9, batch 2400, loss[loss=0.2227, simple_loss=0.2761, pruned_loss=0.08467, over 4847.00 frames. ], tot_loss[loss=0.1972, simple_loss=0.2607, pruned_loss=0.06685, over 956100.05 frames. ], batch size: 44, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:38:49,467 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.159e+02 1.602e+02 2.015e+02 2.421e+02 3.465e+02, threshold=4.031e+02, percent-clipped=0.0 2023-03-26 10:39:01,007 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.52 vs. limit=2.0 2023-03-26 10:39:18,902 INFO [finetune.py:976] (2/7) Epoch 9, batch 2450, loss[loss=0.2199, simple_loss=0.2773, pruned_loss=0.08124, over 4770.00 frames. ], tot_loss[loss=0.1963, simple_loss=0.2587, pruned_loss=0.06698, over 954727.35 frames. ], batch size: 26, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:39:19,252 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.54 vs. limit=2.0 2023-03-26 10:39:19,623 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=48273.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 10:40:01,691 INFO [finetune.py:976] (2/7) Epoch 9, batch 2500, loss[loss=0.2086, simple_loss=0.285, pruned_loss=0.0661, over 4911.00 frames. ], tot_loss[loss=0.1985, simple_loss=0.2611, pruned_loss=0.06802, over 954013.61 frames. ], batch size: 37, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:40:03,382 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=48324.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:40:09,017 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.093e+02 1.681e+02 1.962e+02 2.420e+02 5.026e+02, threshold=3.923e+02, percent-clipped=2.0 2023-03-26 10:40:35,356 INFO [finetune.py:976] (2/7) Epoch 9, batch 2550, loss[loss=0.1781, simple_loss=0.2564, pruned_loss=0.04987, over 4901.00 frames. ], tot_loss[loss=0.1999, simple_loss=0.2642, pruned_loss=0.06777, over 955473.13 frames. ], batch size: 35, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:40:45,886 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=48385.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:40:51,847 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=48394.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:40:53,634 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=48397.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:41:08,895 INFO [finetune.py:976] (2/7) Epoch 9, batch 2600, loss[loss=0.2168, simple_loss=0.2848, pruned_loss=0.07445, over 4766.00 frames. ], tot_loss[loss=0.2029, simple_loss=0.2671, pruned_loss=0.06931, over 953894.67 frames. 
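[Note] The reported "batch size" swings between roughly 17 and 65 cuts because the DynamicBucketingSampler caps batches by total audio duration (max_duration 200 in the config), not by cut count: buckets of short utterances pack many more cuts per batch. A toy duration-packing loop showing the effect:

    def pack_by_duration(durations, max_duration=200.0):
        # Greedy packing by summed duration, the idea behind duration-capped
        # batching (the real lhotse sampler additionally buckets by length).
        batch, total = [], 0.0
        for d in durations:
            if batch and total + d > max_duration:
                yield batch
                batch, total = [], 0.0
            batch.append(d)
            total += d
        if batch:
            yield batch

    print([len(b) for b in pack_by_duration([3.0] * 100)])  # [66, 34]: short cuts
    print([len(b) for b in pack_by_duration([11.0] * 40)])  # [18, 18, 4]: long cuts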
], batch size: 28, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:41:09,600 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=48423.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:41:12,597 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48428.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:41:15,191 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.166e+02 1.754e+02 2.041e+02 2.553e+02 4.015e+02, threshold=4.083e+02, percent-clipped=1.0 2023-03-26 10:41:23,746 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=48442.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:41:25,438 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=48444.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:41:34,010 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=48458.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:41:38,227 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=48465.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 10:41:42,379 INFO [finetune.py:976] (2/7) Epoch 9, batch 2650, loss[loss=0.173, simple_loss=0.2502, pruned_loss=0.04792, over 4879.00 frames. ], tot_loss[loss=0.2054, simple_loss=0.2695, pruned_loss=0.07064, over 953897.57 frames. ], batch size: 32, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:41:56,014 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=48484.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:42:06,173 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=48492.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:42:19,480 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=48513.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 10:42:24,896 INFO [finetune.py:976] (2/7) Epoch 9, batch 2700, loss[loss=0.1737, simple_loss=0.2448, pruned_loss=0.05129, over 4902.00 frames. ], tot_loss[loss=0.2035, simple_loss=0.2677, pruned_loss=0.06966, over 954376.11 frames. ], batch size: 36, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:42:30,333 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.052e+02 1.590e+02 1.873e+02 2.487e+02 4.580e+02, threshold=3.745e+02, percent-clipped=2.0 2023-03-26 10:43:07,957 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48568.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 10:43:10,319 INFO [finetune.py:976] (2/7) Epoch 9, batch 2750, loss[loss=0.2036, simple_loss=0.2671, pruned_loss=0.07005, over 4751.00 frames. ], tot_loss[loss=0.2012, simple_loss=0.2647, pruned_loss=0.06881, over 955680.22 frames. ], batch size: 27, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:43:45,668 INFO [finetune.py:976] (2/7) Epoch 9, batch 2800, loss[loss=0.1562, simple_loss=0.2234, pruned_loss=0.0445, over 4811.00 frames. ], tot_loss[loss=0.1977, simple_loss=0.2605, pruned_loss=0.06742, over 952726.27 frames. ], batch size: 25, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:43:51,108 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.018e+02 1.496e+02 1.797e+02 2.211e+02 4.995e+02, threshold=3.593e+02, percent-clipped=1.0 2023-03-26 10:43:53,171 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.42 vs. limit=5.0 2023-03-26 10:44:00,341 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. 
limit=2.0 2023-03-26 10:44:09,091 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.01 vs. limit=5.0 2023-03-26 10:44:19,114 INFO [finetune.py:976] (2/7) Epoch 9, batch 2850, loss[loss=0.2033, simple_loss=0.258, pruned_loss=0.07433, over 4827.00 frames. ], tot_loss[loss=0.1957, simple_loss=0.2584, pruned_loss=0.06651, over 952334.09 frames. ], batch size: 33, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:44:24,041 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48680.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:44:39,853 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8705, 1.7271, 1.7127, 1.8205, 1.4013, 3.7658, 1.5431, 2.1305], device='cuda:2'), covar=tensor([0.3230, 0.2324, 0.1963, 0.2165, 0.1665, 0.0181, 0.2480, 0.1227], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0116, 0.0120, 0.0124, 0.0117, 0.0099, 0.0101, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 10:44:46,927 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1855, 1.7253, 2.5115, 3.8717, 2.7780, 2.8382, 0.9366, 3.1358], device='cuda:2'), covar=tensor([0.1716, 0.1561, 0.1289, 0.0530, 0.0705, 0.1561, 0.1929, 0.0514], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0117, 0.0133, 0.0164, 0.0101, 0.0138, 0.0125, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 10:45:04,573 INFO [finetune.py:976] (2/7) Epoch 9, batch 2900, loss[loss=0.1969, simple_loss=0.2554, pruned_loss=0.0692, over 4742.00 frames. ], tot_loss[loss=0.1984, simple_loss=0.2617, pruned_loss=0.06751, over 952230.80 frames. ], batch size: 26, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:45:08,334 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=48728.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:45:10,029 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.274e+02 1.660e+02 1.926e+02 2.355e+02 4.281e+02, threshold=3.853e+02, percent-clipped=2.0 2023-03-26 10:45:25,494 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48753.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:45:35,073 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-26 10:45:38,483 INFO [finetune.py:976] (2/7) Epoch 9, batch 2950, loss[loss=0.1605, simple_loss=0.2328, pruned_loss=0.04409, over 4819.00 frames. ], tot_loss[loss=0.2001, simple_loss=0.2645, pruned_loss=0.06785, over 952341.50 frames. ], batch size: 25, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:45:40,981 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=48776.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:45:42,827 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48779.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:46:11,304 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=48821.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:46:11,795 INFO [finetune.py:976] (2/7) Epoch 9, batch 3000, loss[loss=0.2083, simple_loss=0.2764, pruned_loss=0.07009, over 4788.00 frames. ], tot_loss[loss=0.2023, simple_loss=0.2673, pruned_loss=0.06862, over 953556.42 frames. 
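[Note] The scaling.py:679 Whitening entries compare a per-module statistic against a limit (2.0 for the 96- and 192-channel grouped checks, 5.0 for the 384-channel num_groups=1 checks). A guess at what the metric measures, not taken from scaling.py: how far the grouped feature covariance is from isotropic, with 1.0 meaning perfectly white features:

    import torch

    def whitening_metric(x):
        # x: (frames, channels) activations for one group; illustration only.
        # mean(eig^2) / mean(eig)^2 of the covariance: 1.0 when white,
        # larger when variance concentrates in a few directions.
        cov = (x.T @ x) / x.shape[0]
        c = cov.shape[0]
        return ((cov @ cov).trace() / c / ((cov.trace() / c) ** 2)).item()

    x = torch.randn(1000, 96 // 8)  # num_channels=96 split into num_groups=8
    print(whitening_metric(x))      # ~1.0; the log shows e.g. 1.35 vs. limit=2.0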
], batch size: 29, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:46:11,795 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 10:46:22,396 INFO [finetune.py:1010] (2/7) Epoch 9, validation: loss=0.159, simple_loss=0.2302, pruned_loss=0.04393, over 2265189.00 frames. 2023-03-26 10:46:22,396 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB 2023-03-26 10:46:27,908 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.631e+02 1.894e+02 2.277e+02 3.777e+02, threshold=3.789e+02, percent-clipped=0.0 2023-03-26 10:46:32,264 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=48838.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:46:47,861 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0 2023-03-26 10:46:52,191 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=48868.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 10:46:54,487 INFO [finetune.py:976] (2/7) Epoch 9, batch 3050, loss[loss=0.2047, simple_loss=0.284, pruned_loss=0.06266, over 4902.00 frames. ], tot_loss[loss=0.2043, simple_loss=0.2694, pruned_loss=0.06962, over 954402.07 frames. ], batch size: 36, lr: 3.79e-03, grad_scale: 16.0 2023-03-26 10:47:03,402 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=48882.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:47:13,614 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=48899.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:47:24,762 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=48916.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 10:47:29,279 INFO [finetune.py:976] (2/7) Epoch 9, batch 3100, loss[loss=0.2096, simple_loss=0.2731, pruned_loss=0.073, over 4901.00 frames. ], tot_loss[loss=0.2027, simple_loss=0.2674, pruned_loss=0.069, over 955302.47 frames. ], batch size: 32, lr: 3.79e-03, grad_scale: 32.0 2023-03-26 10:47:36,135 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.216e+02 1.624e+02 1.916e+02 2.206e+02 4.881e+02, threshold=3.833e+02, percent-clipped=2.0 2023-03-26 10:47:57,748 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-26 10:48:06,164 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.27 vs. limit=5.0 2023-03-26 10:48:07,637 INFO [finetune.py:976] (2/7) Epoch 9, batch 3150, loss[loss=0.1796, simple_loss=0.2341, pruned_loss=0.06255, over 4741.00 frames. ], tot_loss[loss=0.2013, simple_loss=0.2653, pruned_loss=0.0687, over 955303.78 frames. ], batch size: 23, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:48:16,692 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=48980.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:48:51,003 INFO [finetune.py:976] (2/7) Epoch 9, batch 3200, loss[loss=0.1762, simple_loss=0.2379, pruned_loss=0.05727, over 4779.00 frames. ], tot_loss[loss=0.1967, simple_loss=0.2603, pruned_loss=0.06655, over 954882.96 frames. 
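[Note] The 10:46:11-10:46:22 block above is the periodic validation pass: the trainer switches to the dev loader, reports a frame-weighted validation loss over the full dev set (~2.27M frames) and the peak CUDA memory on this rank (6329MB). A minimal sketch of such a pass; compute_loss here is a hypothetical helper, not the finetune.py function:

    import torch

    @torch.no_grad()
    def validate(model, dev_loader, device):
        model.eval()
        tot, frames = 0.0, 0.0
        for batch in dev_loader:
            loss, num_frames = compute_loss(model, batch, device)  # hypothetical
            tot += loss.item() * num_frames
            frames += num_frames
        model.train()
        mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
        print(f"validation: loss={tot / frames:.4g}; max mem so far {mb}MB")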
], batch size: 26, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:48:55,660 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=49028.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:48:57,863 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.147e+02 1.665e+02 1.973e+02 2.326e+02 6.022e+02, threshold=3.945e+02, percent-clipped=4.0 2023-03-26 10:49:12,720 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=49053.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:49:30,123 INFO [finetune.py:976] (2/7) Epoch 9, batch 3250, loss[loss=0.2194, simple_loss=0.2926, pruned_loss=0.0731, over 4901.00 frames. ], tot_loss[loss=0.1969, simple_loss=0.2604, pruned_loss=0.06666, over 953570.58 frames. ], batch size: 43, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:49:40,136 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=49079.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:49:51,825 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.3893, 1.5546, 1.6298, 0.9655, 1.5889, 1.8307, 1.8343, 1.4614], device='cuda:2'), covar=tensor([0.0903, 0.0601, 0.0433, 0.0490, 0.0383, 0.0587, 0.0312, 0.0624], device='cuda:2'), in_proj_covar=tensor([0.0128, 0.0154, 0.0121, 0.0134, 0.0131, 0.0125, 0.0145, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.5150e-05, 1.1355e-04, 8.7165e-05, 9.7534e-05, 9.3424e-05, 9.1840e-05, 1.0665e-04, 1.0754e-04], device='cuda:2') 2023-03-26 10:50:05,718 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=49101.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:50:21,989 INFO [finetune.py:976] (2/7) Epoch 9, batch 3300, loss[loss=0.2081, simple_loss=0.2724, pruned_loss=0.07194, over 4908.00 frames. ], tot_loss[loss=0.2004, simple_loss=0.2648, pruned_loss=0.06798, over 952943.76 frames. ], batch size: 37, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:50:26,597 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=49127.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:50:28,940 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.155e+02 1.707e+02 1.914e+02 2.346e+02 3.542e+02, threshold=3.827e+02, percent-clipped=0.0 2023-03-26 10:50:56,004 INFO [finetune.py:976] (2/7) Epoch 9, batch 3350, loss[loss=0.2246, simple_loss=0.2894, pruned_loss=0.07986, over 4842.00 frames. ], tot_loss[loss=0.2021, simple_loss=0.2666, pruned_loss=0.06877, over 953353.85 frames. ], batch size: 49, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:50:59,113 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=49177.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:51:17,584 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=49194.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:51:50,171 INFO [finetune.py:976] (2/7) Epoch 9, batch 3400, loss[loss=0.156, simple_loss=0.2141, pruned_loss=0.04895, over 4744.00 frames. ], tot_loss[loss=0.2025, simple_loss=0.2672, pruned_loss=0.06895, over 952848.30 frames. 
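[Note] The learning rate drifts from 3.80e-03 to 3.77e-03 across this section while base_lr is 0.004. The values are consistent with an Eden-style schedule driven by both batch count and epoch (lr_batches 1e5, lr_epochs 100 in the config). A hedged reconstruction, formula assumed and then checked against the log:

    # lr = base_lr * ((batch^2 + B^2)/B^2)^-0.25 * ((epoch^2 + E^2)/E^2)^-0.25
    base_lr, B, E = 0.004, 100_000.0, 100.0
    for batch, epoch in [(47_000, 9), (50_400, 9)]:
        lr = (base_lr
              * (((batch ** 2 + B ** 2) / B ** 2) ** -0.25)
              * (((epoch ** 2 + E ** 2) / E ** 2) ** -0.25))
        print(f"{lr:.2e}")  # 3.80e-03 then 3.77e-03, matching the logged drift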
], batch size: 23, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:51:59,826 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.353e+01 1.606e+02 1.878e+02 2.295e+02 4.525e+02, threshold=3.756e+02, percent-clipped=2.0 2023-03-26 10:52:10,044 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=49236.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:52:13,086 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-26 10:52:41,686 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0 2023-03-26 10:52:55,270 INFO [finetune.py:976] (2/7) Epoch 9, batch 3450, loss[loss=0.2519, simple_loss=0.2976, pruned_loss=0.1031, over 4724.00 frames. ], tot_loss[loss=0.2013, simple_loss=0.2665, pruned_loss=0.06808, over 952531.51 frames. ], batch size: 23, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:53:32,520 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=49297.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:53:53,350 INFO [finetune.py:976] (2/7) Epoch 9, batch 3500, loss[loss=0.2152, simple_loss=0.2793, pruned_loss=0.07555, over 4763.00 frames. ], tot_loss[loss=0.1976, simple_loss=0.2627, pruned_loss=0.06629, over 954292.67 frames. ], batch size: 54, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:53:58,769 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.331e+01 1.641e+02 1.916e+02 2.289e+02 6.335e+02, threshold=3.833e+02, percent-clipped=2.0 2023-03-26 10:54:34,218 INFO [finetune.py:976] (2/7) Epoch 9, batch 3550, loss[loss=0.1914, simple_loss=0.2496, pruned_loss=0.0666, over 4822.00 frames. ], tot_loss[loss=0.1965, simple_loss=0.2608, pruned_loss=0.06615, over 955296.77 frames. ], batch size: 40, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:55:02,238 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8918, 1.7591, 1.6373, 2.0116, 2.3582, 2.1015, 1.4739, 1.5872], device='cuda:2'), covar=tensor([0.2348, 0.2062, 0.1971, 0.1713, 0.1754, 0.1059, 0.2637, 0.1860], device='cuda:2'), in_proj_covar=tensor([0.0237, 0.0208, 0.0206, 0.0187, 0.0240, 0.0178, 0.0213, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 10:55:09,813 INFO [finetune.py:976] (2/7) Epoch 9, batch 3600, loss[loss=0.1972, simple_loss=0.2652, pruned_loss=0.06463, over 4753.00 frames. ], tot_loss[loss=0.1956, simple_loss=0.2591, pruned_loss=0.06607, over 954458.80 frames. ], batch size: 27, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:55:09,951 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=49422.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:55:15,222 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.012e+02 1.662e+02 2.002e+02 2.382e+02 4.044e+02, threshold=4.004e+02, percent-clipped=1.0 2023-03-26 10:55:15,968 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=49432.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 10:55:43,178 INFO [finetune.py:976] (2/7) Epoch 9, batch 3650, loss[loss=0.203, simple_loss=0.266, pruned_loss=0.07002, over 4878.00 frames. ], tot_loss[loss=0.1994, simple_loss=0.2628, pruned_loss=0.06804, over 951725.91 frames. 
], batch size: 31, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:55:46,381 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=49477.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:55:50,102 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=49483.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:55:56,180 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=49493.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 10:55:56,776 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=49494.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:56:17,055 INFO [finetune.py:976] (2/7) Epoch 9, batch 3700, loss[loss=0.1498, simple_loss=0.232, pruned_loss=0.0338, over 4894.00 frames. ], tot_loss[loss=0.203, simple_loss=0.2669, pruned_loss=0.06961, over 951367.21 frames. ], batch size: 32, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:56:18,952 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=49525.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:56:22,513 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.088e+02 1.782e+02 2.076e+02 2.384e+02 4.659e+02, threshold=4.152e+02, percent-clipped=5.0 2023-03-26 10:56:29,207 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=49542.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:56:50,562 INFO [finetune.py:976] (2/7) Epoch 9, batch 3750, loss[loss=0.2139, simple_loss=0.2755, pruned_loss=0.0761, over 4920.00 frames. ], tot_loss[loss=0.2047, simple_loss=0.2687, pruned_loss=0.07037, over 951418.26 frames. ], batch size: 33, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:57:02,691 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=49592.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:57:28,146 INFO [finetune.py:976] (2/7) Epoch 9, batch 3800, loss[loss=0.2004, simple_loss=0.2767, pruned_loss=0.06204, over 4887.00 frames. ], tot_loss[loss=0.205, simple_loss=0.2695, pruned_loss=0.07028, over 951916.82 frames. ], batch size: 35, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:57:39,045 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.087e+02 1.635e+02 1.863e+02 2.259e+02 4.048e+02, threshold=3.725e+02, percent-clipped=0.0 2023-03-26 10:57:48,421 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.77 vs. limit=2.0 2023-03-26 10:58:12,409 INFO [finetune.py:976] (2/7) Epoch 9, batch 3850, loss[loss=0.2595, simple_loss=0.3013, pruned_loss=0.1089, over 4805.00 frames. ], tot_loss[loss=0.2034, simple_loss=0.2677, pruned_loss=0.06952, over 952565.94 frames. ], batch size: 51, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:58:19,222 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.42 vs. 
limit=2.0 2023-03-26 10:58:27,567 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6233, 1.2300, 0.8966, 1.5860, 2.0130, 1.2849, 1.5173, 1.6539], device='cuda:2'), covar=tensor([0.1349, 0.1930, 0.2059, 0.1130, 0.1906, 0.2147, 0.1329, 0.1735], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0097, 0.0114, 0.0093, 0.0122, 0.0096, 0.0100, 0.0092], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 10:58:34,337 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5398, 1.3381, 2.0963, 3.1258, 2.0914, 2.3165, 1.2791, 2.4069], device='cuda:2'), covar=tensor([0.1658, 0.1531, 0.1135, 0.0537, 0.0781, 0.1523, 0.1502, 0.0580], device='cuda:2'), in_proj_covar=tensor([0.0102, 0.0119, 0.0135, 0.0166, 0.0103, 0.0140, 0.0127, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 10:58:42,512 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. limit=2.0 2023-03-26 10:58:48,065 INFO [finetune.py:976] (2/7) Epoch 9, batch 3900, loss[loss=0.2183, simple_loss=0.279, pruned_loss=0.07878, over 4909.00 frames. ], tot_loss[loss=0.2003, simple_loss=0.2645, pruned_loss=0.06806, over 952845.01 frames. ], batch size: 35, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:58:58,257 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.078e+02 1.638e+02 1.913e+02 2.415e+02 4.821e+02, threshold=3.825e+02, percent-clipped=2.0 2023-03-26 10:59:00,888 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.75 vs. limit=5.0 2023-03-26 10:59:02,021 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.1459, 3.5920, 3.7674, 3.9905, 3.9126, 3.6686, 4.2138, 1.2965], device='cuda:2'), covar=tensor([0.0738, 0.0791, 0.0885, 0.0810, 0.1235, 0.1372, 0.0695, 0.5150], device='cuda:2'), in_proj_covar=tensor([0.0349, 0.0243, 0.0276, 0.0292, 0.0329, 0.0280, 0.0299, 0.0294], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 10:59:31,485 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=49766.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:59:36,466 INFO [finetune.py:976] (2/7) Epoch 9, batch 3950, loss[loss=0.2026, simple_loss=0.2627, pruned_loss=0.0713, over 4828.00 frames. ], tot_loss[loss=0.1966, simple_loss=0.2604, pruned_loss=0.06645, over 954579.18 frames. ], batch size: 30, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:59:40,666 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=49778.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:59:45,453 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.0160, 2.6647, 2.3447, 1.2289, 2.5403, 2.2028, 1.9296, 2.2199], device='cuda:2'), covar=tensor([0.1253, 0.0808, 0.1809, 0.2042, 0.1623, 0.2085, 0.2075, 0.1210], device='cuda:2'), in_proj_covar=tensor([0.0167, 0.0198, 0.0200, 0.0186, 0.0214, 0.0206, 0.0221, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 10:59:46,865 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.69 vs. 
limit=5.0 2023-03-26 10:59:47,183 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=49788.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:00:09,536 INFO [finetune.py:976] (2/7) Epoch 9, batch 4000, loss[loss=0.1538, simple_loss=0.2273, pruned_loss=0.04017, over 4764.00 frames. ], tot_loss[loss=0.195, simple_loss=0.2587, pruned_loss=0.06566, over 955742.43 frames. ], batch size: 26, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 11:00:13,729 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=49827.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:00:16,514 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.178e+02 1.645e+02 2.017e+02 2.376e+02 6.319e+02, threshold=4.034e+02, percent-clipped=3.0 2023-03-26 11:00:42,824 INFO [finetune.py:976] (2/7) Epoch 9, batch 4050, loss[loss=0.1611, simple_loss=0.2235, pruned_loss=0.04936, over 4777.00 frames. ], tot_loss[loss=0.198, simple_loss=0.2618, pruned_loss=0.06709, over 953655.68 frames. ], batch size: 26, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 11:00:56,958 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=49892.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:00:58,181 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2644, 2.8905, 2.8234, 1.2147, 2.9846, 2.2187, 0.7242, 1.8924], device='cuda:2'), covar=tensor([0.2350, 0.2398, 0.1618, 0.3494, 0.1332, 0.1200, 0.4054, 0.1643], device='cuda:2'), in_proj_covar=tensor([0.0155, 0.0176, 0.0162, 0.0130, 0.0158, 0.0124, 0.0148, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 11:01:15,998 INFO [finetune.py:976] (2/7) Epoch 9, batch 4100, loss[loss=0.1599, simple_loss=0.2211, pruned_loss=0.04929, over 4725.00 frames. ], tot_loss[loss=0.2018, simple_loss=0.2661, pruned_loss=0.06874, over 952120.60 frames. ], batch size: 23, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 11:01:22,950 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.254e+02 1.718e+02 2.083e+02 2.512e+02 3.689e+02, threshold=4.166e+02, percent-clipped=0.0 2023-03-26 11:01:28,935 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=49940.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:01:48,770 INFO [finetune.py:976] (2/7) Epoch 9, batch 4150, loss[loss=0.1641, simple_loss=0.2388, pruned_loss=0.04468, over 4769.00 frames. ], tot_loss[loss=0.2029, simple_loss=0.2676, pruned_loss=0.06911, over 952816.58 frames. ], batch size: 28, lr: 3.78e-03, grad_scale: 16.0 2023-03-26 11:02:23,457 INFO [finetune.py:976] (2/7) Epoch 9, batch 4200, loss[loss=0.166, simple_loss=0.2282, pruned_loss=0.05188, over 4092.00 frames. ], tot_loss[loss=0.2014, simple_loss=0.2667, pruned_loss=0.06806, over 953371.34 frames. ], batch size: 18, lr: 3.78e-03, grad_scale: 16.0 2023-03-26 11:02:31,378 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.147e+02 1.706e+02 2.002e+02 2.506e+02 6.230e+02, threshold=4.003e+02, percent-clipped=2.0 2023-03-26 11:03:15,796 INFO [finetune.py:976] (2/7) Epoch 9, batch 4250, loss[loss=0.1964, simple_loss=0.2536, pruned_loss=0.0696, over 4934.00 frames. ], tot_loss[loss=0.1991, simple_loss=0.2644, pruned_loss=0.06692, over 955125.44 frames. 
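[Note] grad_scale is the fp16 loss-scaling factor (use_fp16: True in the config). It doubled from 16.0 to 32.0 around batch 3100 and is back at 16.0 by batch 4150, the signature of dynamic loss scaling: grow after a long run of overflow-free steps, halve when inf/nan gradients appear. A sketch with PyTorch's stock scaler (icefall may use its own variant; hyperparameters below are the torch defaults plus the scale seen here):

    import torch

    scaler = torch.cuda.amp.GradScaler(
        init_scale=16.0,        # matches the grad_scale logged here
        growth_factor=2.0,      # 16 -> 32 after growth_interval clean steps
        backoff_factor=0.5,     # 32 -> 16 on an inf/nan gradient
        growth_interval=2000,
    )
    # Typical loop: scaler.scale(loss).backward();
    # scaler.step(optimizer); scaler.update()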
], batch size: 38, lr: 3.78e-03, grad_scale: 16.0 2023-03-26 11:03:24,159 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=50078.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:03:32,343 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=50088.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:03:53,929 INFO [finetune.py:976] (2/7) Epoch 9, batch 4300, loss[loss=0.2122, simple_loss=0.266, pruned_loss=0.07922, over 4865.00 frames. ], tot_loss[loss=0.1978, simple_loss=0.2622, pruned_loss=0.06676, over 954163.67 frames. ], batch size: 49, lr: 3.78e-03, grad_scale: 16.0 2023-03-26 11:03:53,997 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=50122.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:03:56,366 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=50126.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:03:57,607 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=50128.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:04:00,430 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.003e+02 1.608e+02 1.917e+02 2.228e+02 4.011e+02, threshold=3.835e+02, percent-clipped=1.0 2023-03-26 11:04:03,877 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=50136.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 11:04:13,064 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3684, 2.1525, 2.3011, 1.8411, 2.4179, 2.4740, 2.2920, 1.6509], device='cuda:2'), covar=tensor([0.0655, 0.0736, 0.0808, 0.0930, 0.0618, 0.0732, 0.0813, 0.1649], device='cuda:2'), in_proj_covar=tensor([0.0136, 0.0133, 0.0144, 0.0124, 0.0118, 0.0144, 0.0144, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:04:49,371 INFO [finetune.py:976] (2/7) Epoch 9, batch 4350, loss[loss=0.2171, simple_loss=0.2694, pruned_loss=0.08243, over 4884.00 frames. ], tot_loss[loss=0.1972, simple_loss=0.2607, pruned_loss=0.06686, over 953286.17 frames. ], batch size: 32, lr: 3.78e-03, grad_scale: 16.0 2023-03-26 11:05:17,880 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=50189.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:05:30,430 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=50197.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 11:06:02,455 INFO [finetune.py:976] (2/7) Epoch 9, batch 4400, loss[loss=0.2227, simple_loss=0.2838, pruned_loss=0.08078, over 4826.00 frames. ], tot_loss[loss=0.1999, simple_loss=0.2633, pruned_loss=0.06823, over 954733.04 frames. 
], batch size: 33, lr: 3.78e-03, grad_scale: 16.0 2023-03-26 11:06:14,078 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.827e+01 1.714e+02 1.989e+02 2.480e+02 5.028e+02, threshold=3.977e+02, percent-clipped=2.0 2023-03-26 11:06:24,975 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.0577, 0.9986, 1.0791, 0.4512, 0.9811, 1.2129, 1.2539, 1.0217], device='cuda:2'), covar=tensor([0.0962, 0.0548, 0.0460, 0.0562, 0.0472, 0.0552, 0.0389, 0.0690], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0156, 0.0121, 0.0135, 0.0132, 0.0126, 0.0146, 0.0148], device='cuda:2'), out_proj_covar=tensor([9.6342e-05, 1.1456e-04, 8.7541e-05, 9.8367e-05, 9.4691e-05, 9.2491e-05, 1.0728e-04, 1.0883e-04], device='cuda:2') 2023-03-26 11:06:48,112 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=50258.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:06:58,869 INFO [finetune.py:976] (2/7) Epoch 9, batch 4450, loss[loss=0.1679, simple_loss=0.2247, pruned_loss=0.0556, over 3910.00 frames. ], tot_loss[loss=0.2021, simple_loss=0.2661, pruned_loss=0.06907, over 956026.09 frames. ], batch size: 17, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:07:08,141 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7660, 1.0243, 1.7714, 1.6400, 1.5350, 1.4627, 1.5703, 1.5715], device='cuda:2'), covar=tensor([0.4240, 0.4921, 0.4046, 0.4509, 0.5643, 0.4272, 0.5176, 0.4051], device='cuda:2'), in_proj_covar=tensor([0.0233, 0.0240, 0.0253, 0.0256, 0.0250, 0.0224, 0.0273, 0.0230], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:07:18,381 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6416, 1.4708, 1.3935, 1.6868, 1.6417, 1.7023, 1.0079, 1.4168], device='cuda:2'), covar=tensor([0.2331, 0.2110, 0.2033, 0.1713, 0.1622, 0.1220, 0.2582, 0.1889], device='cuda:2'), in_proj_covar=tensor([0.0241, 0.0211, 0.0209, 0.0191, 0.0244, 0.0182, 0.0217, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:07:32,436 INFO [finetune.py:976] (2/7) Epoch 9, batch 4500, loss[loss=0.1776, simple_loss=0.2571, pruned_loss=0.04904, over 4892.00 frames. ], tot_loss[loss=0.2028, simple_loss=0.267, pruned_loss=0.06931, over 954618.36 frames. ], batch size: 35, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:07:38,449 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.049e+02 1.523e+02 1.896e+02 2.480e+02 4.445e+02, threshold=3.793e+02, percent-clipped=2.0 2023-03-26 11:07:53,268 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.73 vs. limit=5.0 2023-03-26 11:08:05,986 INFO [finetune.py:976] (2/7) Epoch 9, batch 4550, loss[loss=0.1948, simple_loss=0.2629, pruned_loss=0.06338, over 4816.00 frames. ], tot_loss[loss=0.2047, simple_loss=0.2687, pruned_loss=0.07035, over 955241.25 frames. 
], batch size: 30, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:08:10,987 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0441, 2.0846, 1.9910, 1.4737, 2.0826, 2.1628, 2.0176, 1.7827], device='cuda:2'), covar=tensor([0.0609, 0.0612, 0.0724, 0.0847, 0.0571, 0.0712, 0.0656, 0.1029], device='cuda:2'), in_proj_covar=tensor([0.0135, 0.0132, 0.0144, 0.0123, 0.0117, 0.0143, 0.0143, 0.0160], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:08:58,491 INFO [finetune.py:976] (2/7) Epoch 9, batch 4600, loss[loss=0.1562, simple_loss=0.2356, pruned_loss=0.03841, over 4900.00 frames. ], tot_loss[loss=0.203, simple_loss=0.2676, pruned_loss=0.06926, over 955000.88 frames. ], batch size: 37, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:08:58,606 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=50422.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:09:03,959 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.3995, 2.9493, 2.6349, 1.3425, 2.7715, 2.4039, 2.1919, 2.4745], device='cuda:2'), covar=tensor([0.1110, 0.0823, 0.1837, 0.2164, 0.1782, 0.2015, 0.2191, 0.1353], device='cuda:2'), in_proj_covar=tensor([0.0167, 0.0199, 0.0200, 0.0186, 0.0214, 0.0206, 0.0222, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:09:05,736 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.0809, 4.3769, 4.6218, 4.9109, 4.7738, 4.5134, 5.1875, 1.8030], device='cuda:2'), covar=tensor([0.0707, 0.0758, 0.0689, 0.0774, 0.1151, 0.1506, 0.0511, 0.5583], device='cuda:2'), in_proj_covar=tensor([0.0351, 0.0244, 0.0276, 0.0291, 0.0330, 0.0282, 0.0301, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:09:06,261 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.176e+02 1.671e+02 1.983e+02 2.416e+02 3.848e+02, threshold=3.965e+02, percent-clipped=1.0 2023-03-26 11:09:25,357 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5833, 1.4713, 1.4040, 1.5067, 1.7685, 1.7244, 1.6108, 1.3558], device='cuda:2'), covar=tensor([0.0326, 0.0349, 0.0580, 0.0302, 0.0223, 0.0587, 0.0299, 0.0421], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0109, 0.0139, 0.0114, 0.0102, 0.0102, 0.0091, 0.0108], device='cuda:2'), out_proj_covar=tensor([7.0100e-05, 8.5571e-05, 1.1108e-04, 8.9782e-05, 7.9973e-05, 7.5724e-05, 6.8961e-05, 8.3563e-05], device='cuda:2') 2023-03-26 11:09:39,359 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=50470.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:09:40,499 INFO [finetune.py:976] (2/7) Epoch 9, batch 4650, loss[loss=0.2113, simple_loss=0.2667, pruned_loss=0.07798, over 4729.00 frames. ], tot_loss[loss=0.2007, simple_loss=0.2645, pruned_loss=0.06842, over 954510.52 frames. ], batch size: 23, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:09:50,493 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=50484.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:10:15,807 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-26 11:10:18,171 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. 
limit=2.0 2023-03-26 11:10:22,781 INFO [finetune.py:976] (2/7) Epoch 9, batch 4700, loss[loss=0.1781, simple_loss=0.2309, pruned_loss=0.06271, over 4725.00 frames. ], tot_loss[loss=0.1973, simple_loss=0.2604, pruned_loss=0.0671, over 955575.48 frames. ], batch size: 23, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:10:29,350 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.001e+02 1.527e+02 1.850e+02 2.224e+02 3.838e+02, threshold=3.699e+02, percent-clipped=0.0 2023-03-26 11:10:38,046 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=50546.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:10:42,729 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=50553.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 11:10:56,119 INFO [finetune.py:976] (2/7) Epoch 9, batch 4750, loss[loss=0.2008, simple_loss=0.268, pruned_loss=0.06683, over 4935.00 frames. ], tot_loss[loss=0.194, simple_loss=0.257, pruned_loss=0.06548, over 954811.82 frames. ], batch size: 33, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:11:14,652 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-26 11:11:18,566 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=50607.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:11:29,494 INFO [finetune.py:976] (2/7) Epoch 9, batch 4800, loss[loss=0.181, simple_loss=0.2602, pruned_loss=0.0509, over 4891.00 frames. ], tot_loss[loss=0.1973, simple_loss=0.2607, pruned_loss=0.06692, over 952812.30 frames. ], batch size: 46, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:11:36,104 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.695e+01 1.587e+02 1.907e+02 2.189e+02 3.978e+02, threshold=3.813e+02, percent-clipped=2.0 2023-03-26 11:11:54,692 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8448, 2.5079, 3.3458, 4.6529, 3.4859, 3.3443, 1.7907, 3.8123], device='cuda:2'), covar=tensor([0.1517, 0.1388, 0.1117, 0.0552, 0.0571, 0.1194, 0.1702, 0.0483], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0118, 0.0135, 0.0165, 0.0102, 0.0140, 0.0127, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 11:12:03,074 INFO [finetune.py:976] (2/7) Epoch 9, batch 4850, loss[loss=0.1748, simple_loss=0.2361, pruned_loss=0.05673, over 4691.00 frames. ], tot_loss[loss=0.1996, simple_loss=0.2639, pruned_loss=0.06769, over 950928.59 frames. ], batch size: 23, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:12:03,842 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1655, 2.0002, 1.7471, 1.9290, 2.2170, 1.8588, 2.2814, 2.0855], device='cuda:2'), covar=tensor([0.1362, 0.2210, 0.3284, 0.2479, 0.2441, 0.1776, 0.2876, 0.2057], device='cuda:2'), in_proj_covar=tensor([0.0174, 0.0189, 0.0234, 0.0255, 0.0239, 0.0196, 0.0212, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:12:36,220 INFO [finetune.py:976] (2/7) Epoch 9, batch 4900, loss[loss=0.212, simple_loss=0.2746, pruned_loss=0.0747, over 4835.00 frames. ], tot_loss[loss=0.2025, simple_loss=0.2668, pruned_loss=0.06908, over 951750.06 frames. 
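[Note] tot_loss is not a plain running sum: the fractional frame counts (e.g. "over 951750.06 frames") point to an exponentially decayed, frame-weighted average. With reset_interval 200 and ~4,800 frames per batch, the effective window of ~200 batches is about 9.6e5 frames, matching the ~95xxxx totals logged throughout. A sketch of that assumed bookkeeping:

    # Assumed mechanics, not icefall's actual tracker code.
    decay = 1 - 1 / 200  # reset_interval from the config
    weighted, frames = 0.0, 0.0
    for batch_loss, batch_frames in [(0.21, 4760.0), (0.19, 4900.0), (0.20, 4800.0)]:
        weighted = weighted * decay + batch_loss * batch_frames
        frames = frames * decay + batch_frames
    print(weighted / frames, frames)  # running tot_loss and a fractional frame count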
], batch size: 30, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:12:42,295 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.028e+02 1.610e+02 1.915e+02 2.289e+02 4.400e+02, threshold=3.830e+02, percent-clipped=2.0 2023-03-26 11:12:44,192 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3248, 1.2003, 1.2009, 1.3527, 1.6014, 1.4201, 1.3039, 1.1622], device='cuda:2'), covar=tensor([0.0342, 0.0295, 0.0580, 0.0259, 0.0205, 0.0455, 0.0320, 0.0374], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0110, 0.0140, 0.0115, 0.0103, 0.0102, 0.0092, 0.0109], device='cuda:2'), out_proj_covar=tensor([7.0664e-05, 8.6104e-05, 1.1162e-04, 9.0684e-05, 8.0576e-05, 7.5974e-05, 6.9502e-05, 8.4179e-05], device='cuda:2') 2023-03-26 11:13:08,686 INFO [finetune.py:976] (2/7) Epoch 9, batch 4950, loss[loss=0.1908, simple_loss=0.2604, pruned_loss=0.06062, over 4859.00 frames. ], tot_loss[loss=0.2033, simple_loss=0.2681, pruned_loss=0.06923, over 952447.81 frames. ], batch size: 34, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:13:15,353 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=50782.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:13:16,531 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=50784.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:13:33,489 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=50808.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 11:13:51,973 INFO [finetune.py:976] (2/7) Epoch 9, batch 5000, loss[loss=0.2164, simple_loss=0.2499, pruned_loss=0.09149, over 4367.00 frames. ], tot_loss[loss=0.2007, simple_loss=0.2648, pruned_loss=0.06827, over 952531.00 frames. ], batch size: 19, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:14:03,063 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.212e+02 1.752e+02 2.044e+02 2.444e+02 6.074e+02, threshold=4.089e+02, percent-clipped=4.0 2023-03-26 11:14:03,136 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=50832.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:14:10,936 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0 2023-03-26 11:14:13,389 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=50843.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:14:23,660 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3205, 1.4756, 1.2907, 1.5311, 1.7246, 1.6183, 1.5235, 1.4057], device='cuda:2'), covar=tensor([0.0373, 0.0283, 0.0501, 0.0257, 0.0201, 0.0405, 0.0246, 0.0344], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0108, 0.0138, 0.0114, 0.0101, 0.0100, 0.0091, 0.0107], device='cuda:2'), out_proj_covar=tensor([6.9673e-05, 8.4737e-05, 1.0994e-04, 8.9383e-05, 7.9231e-05, 7.4590e-05, 6.8474e-05, 8.2687e-05], device='cuda:2') 2023-03-26 11:14:24,225 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=50853.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:14:37,403 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=50869.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 11:14:40,057 INFO [finetune.py:976] (2/7) Epoch 9, batch 5050, loss[loss=0.1594, simple_loss=0.2261, pruned_loss=0.0464, over 4315.00 frames. ], tot_loss[loss=0.1978, simple_loss=0.2615, pruned_loss=0.067, over 951756.10 frames. 
], batch size: 65, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:14:47,834 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1577, 1.7221, 2.0811, 1.9889, 1.7756, 1.7572, 1.9167, 1.9126], device='cuda:2'), covar=tensor([0.5001, 0.5514, 0.4261, 0.5064, 0.6327, 0.4613, 0.6206, 0.4269], device='cuda:2'), in_proj_covar=tensor([0.0236, 0.0242, 0.0254, 0.0257, 0.0252, 0.0227, 0.0275, 0.0231], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:14:48,399 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5684, 1.4300, 1.4511, 1.5606, 1.0402, 2.9494, 1.0900, 1.5777], device='cuda:2'), covar=tensor([0.3426, 0.2436, 0.2170, 0.2345, 0.1916, 0.0250, 0.2772, 0.1328], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0114, 0.0119, 0.0122, 0.0115, 0.0098, 0.0099, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 11:14:59,822 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.73 vs. limit=2.0 2023-03-26 11:15:00,244 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=50901.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:15:00,843 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=50902.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:15:21,255 INFO [finetune.py:976] (2/7) Epoch 9, batch 5100, loss[loss=0.1925, simple_loss=0.259, pruned_loss=0.06299, over 4856.00 frames. ], tot_loss[loss=0.1949, simple_loss=0.2582, pruned_loss=0.06584, over 951368.54 frames. ], batch size: 31, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:15:24,288 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0027, 1.8218, 1.5767, 1.7703, 1.8105, 1.7379, 1.7898, 2.4326], device='cuda:2'), covar=tensor([0.4943, 0.5457, 0.4010, 0.4634, 0.4437, 0.2776, 0.4670, 0.2068], device='cuda:2'), in_proj_covar=tensor([0.0286, 0.0259, 0.0222, 0.0280, 0.0243, 0.0208, 0.0245, 0.0211], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:15:29,783 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.163e+02 1.571e+02 1.989e+02 2.366e+02 5.072e+02, threshold=3.977e+02, percent-clipped=2.0 2023-03-26 11:15:30,577 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.69 vs. limit=5.0 2023-03-26 11:15:38,839 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=50946.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:15:55,123 INFO [finetune.py:976] (2/7) Epoch 9, batch 5150, loss[loss=0.214, simple_loss=0.2779, pruned_loss=0.0751, over 4790.00 frames. ], tot_loss[loss=0.1969, simple_loss=0.2596, pruned_loss=0.06713, over 949044.52 frames. 
], batch size: 41, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:16:06,319 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1875, 1.6715, 2.3526, 1.6792, 2.2022, 2.3588, 1.6089, 2.4367], device='cuda:2'), covar=tensor([0.1128, 0.1855, 0.1347, 0.1843, 0.0814, 0.1312, 0.2626, 0.0791], device='cuda:2'), in_proj_covar=tensor([0.0198, 0.0204, 0.0194, 0.0190, 0.0178, 0.0217, 0.0217, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:16:07,557 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.9986, 1.6403, 1.8749, 0.7611, 1.9988, 2.1951, 1.7728, 1.6821], device='cuda:2'), covar=tensor([0.1140, 0.1040, 0.0537, 0.0810, 0.0515, 0.0768, 0.0612, 0.0988], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0156, 0.0123, 0.0135, 0.0132, 0.0127, 0.0147, 0.0149], device='cuda:2'), out_proj_covar=tensor([9.7109e-05, 1.1507e-04, 8.8351e-05, 9.8047e-05, 9.4560e-05, 9.2661e-05, 1.0769e-04, 1.0900e-04], device='cuda:2') 2023-03-26 11:16:19,659 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=51007.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:16:26,023 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.93 vs. limit=2.0 2023-03-26 11:16:29,130 INFO [finetune.py:976] (2/7) Epoch 9, batch 5200, loss[loss=0.1999, simple_loss=0.2762, pruned_loss=0.06179, over 4829.00 frames. ], tot_loss[loss=0.2009, simple_loss=0.2646, pruned_loss=0.06862, over 948976.18 frames. ], batch size: 33, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:16:37,436 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.247e+02 1.691e+02 2.095e+02 2.506e+02 4.401e+02, threshold=4.191e+02, percent-clipped=2.0 2023-03-26 11:17:12,603 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=51062.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:17:18,566 INFO [finetune.py:976] (2/7) Epoch 9, batch 5250, loss[loss=0.2207, simple_loss=0.2831, pruned_loss=0.07915, over 4829.00 frames. ], tot_loss[loss=0.2032, simple_loss=0.2676, pruned_loss=0.0694, over 952059.69 frames. ], batch size: 30, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:17:50,690 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6630, 1.5989, 1.9774, 1.3952, 1.6371, 1.8749, 1.4499, 2.1119], device='cuda:2'), covar=tensor([0.1247, 0.1741, 0.1132, 0.1615, 0.0910, 0.1342, 0.2574, 0.0657], device='cuda:2'), in_proj_covar=tensor([0.0197, 0.0203, 0.0193, 0.0190, 0.0178, 0.0216, 0.0216, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:17:51,198 INFO [finetune.py:976] (2/7) Epoch 9, batch 5300, loss[loss=0.2337, simple_loss=0.3072, pruned_loss=0.08012, over 4894.00 frames. ], tot_loss[loss=0.2053, simple_loss=0.2694, pruned_loss=0.07065, over 949868.00 frames. 
], batch size: 43, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:17:51,931 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=51123.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:17:57,262 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.167e+02 1.689e+02 2.023e+02 2.414e+02 5.734e+02, threshold=4.045e+02, percent-clipped=1.0 2023-03-26 11:18:01,389 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51138.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:18:19,581 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51164.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 11:18:24,357 INFO [finetune.py:976] (2/7) Epoch 9, batch 5350, loss[loss=0.1607, simple_loss=0.2321, pruned_loss=0.04459, over 4774.00 frames. ], tot_loss[loss=0.2051, simple_loss=0.2695, pruned_loss=0.07031, over 952605.01 frames. ], batch size: 29, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:18:24,449 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=51172.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:18:56,212 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=51202.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:19:18,523 INFO [finetune.py:976] (2/7) Epoch 9, batch 5400, loss[loss=0.2252, simple_loss=0.266, pruned_loss=0.09217, over 4288.00 frames. ], tot_loss[loss=0.2009, simple_loss=0.2652, pruned_loss=0.06835, over 952358.55 frames. ], batch size: 65, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:19:19,286 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1492, 2.0201, 1.7972, 2.1130, 2.0020, 1.9740, 1.9219, 2.8666], device='cuda:2'), covar=tensor([0.4594, 0.6231, 0.4164, 0.5932, 0.5251, 0.2987, 0.5606, 0.1992], device='cuda:2'), in_proj_covar=tensor([0.0287, 0.0259, 0.0223, 0.0281, 0.0243, 0.0209, 0.0246, 0.0213], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:19:26,734 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.913e+01 1.608e+02 1.826e+02 2.251e+02 3.272e+02, threshold=3.651e+02, percent-clipped=0.0 2023-03-26 11:19:32,699 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=51233.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:19:48,894 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=51250.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:20:03,230 INFO [finetune.py:976] (2/7) Epoch 9, batch 5450, loss[loss=0.1886, simple_loss=0.2494, pruned_loss=0.06389, over 4908.00 frames. ], tot_loss[loss=0.1969, simple_loss=0.261, pruned_loss=0.06646, over 952552.98 frames. ], batch size: 36, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:20:31,733 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51302.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:20:50,272 INFO [finetune.py:976] (2/7) Epoch 9, batch 5500, loss[loss=0.1598, simple_loss=0.2281, pruned_loss=0.04576, over 4913.00 frames. ], tot_loss[loss=0.1938, simple_loss=0.2574, pruned_loss=0.06507, over 952967.37 frames. 
], batch size: 36, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:20:56,825 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.040e+02 1.581e+02 1.869e+02 2.249e+02 3.902e+02, threshold=3.738e+02, percent-clipped=2.0 2023-03-26 11:21:44,838 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9179, 2.0077, 2.0091, 1.3055, 2.0297, 2.0288, 2.0170, 1.6961], device='cuda:2'), covar=tensor([0.0619, 0.0631, 0.0766, 0.0938, 0.0581, 0.0772, 0.0637, 0.1058], device='cuda:2'), in_proj_covar=tensor([0.0137, 0.0133, 0.0145, 0.0124, 0.0118, 0.0144, 0.0144, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:21:48,954 INFO [finetune.py:976] (2/7) Epoch 9, batch 5550, loss[loss=0.1309, simple_loss=0.2045, pruned_loss=0.02862, over 4786.00 frames. ], tot_loss[loss=0.1951, simple_loss=0.2587, pruned_loss=0.0657, over 951466.63 frames. ], batch size: 26, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:21:59,532 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=51382.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:22:07,969 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=51392.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:22:22,099 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2295, 2.1212, 1.7498, 0.8571, 1.9038, 1.7726, 1.6171, 1.9401], device='cuda:2'), covar=tensor([0.0914, 0.0702, 0.1560, 0.1946, 0.1425, 0.2204, 0.2182, 0.0926], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0198, 0.0201, 0.0187, 0.0215, 0.0206, 0.0222, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:22:24,403 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51418.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:22:27,577 INFO [finetune.py:976] (2/7) Epoch 9, batch 5600, loss[loss=0.1788, simple_loss=0.2558, pruned_loss=0.05089, over 4736.00 frames. ], tot_loss[loss=0.2008, simple_loss=0.2651, pruned_loss=0.06823, over 952865.03 frames. 
], batch size: 27, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:22:29,975 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1074, 1.7822, 1.7524, 2.0414, 2.6446, 2.0071, 1.9807, 1.6145], device='cuda:2'), covar=tensor([0.2259, 0.2337, 0.2042, 0.1739, 0.2033, 0.1232, 0.2294, 0.1954], device='cuda:2'), in_proj_covar=tensor([0.0237, 0.0209, 0.0207, 0.0189, 0.0241, 0.0181, 0.0214, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:22:33,286 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.626e+02 2.005e+02 2.362e+02 4.096e+02, threshold=4.011e+02, percent-clipped=2.0 2023-03-26 11:22:34,579 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5058, 1.3848, 1.4197, 1.3685, 0.7612, 2.1845, 0.7458, 1.2669], device='cuda:2'), covar=tensor([0.3261, 0.2375, 0.2081, 0.2396, 0.2037, 0.0387, 0.2642, 0.1288], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0115, 0.0119, 0.0123, 0.0116, 0.0098, 0.0099, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 11:22:36,875 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=51438.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:22:39,807 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=51443.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:22:46,020 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=51453.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:22:52,433 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=51464.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:22:57,071 INFO [finetune.py:976] (2/7) Epoch 9, batch 5650, loss[loss=0.1958, simple_loss=0.2703, pruned_loss=0.06062, over 4817.00 frames. ], tot_loss[loss=0.2039, simple_loss=0.2688, pruned_loss=0.0695, over 953120.01 frames. ], batch size: 40, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:23:05,307 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=51486.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:23:10,695 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=51495.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:23:20,775 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=51512.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:23:26,711 INFO [finetune.py:976] (2/7) Epoch 9, batch 5700, loss[loss=0.1442, simple_loss=0.2071, pruned_loss=0.04067, over 4004.00 frames. ], tot_loss[loss=0.2015, simple_loss=0.2646, pruned_loss=0.06922, over 933952.55 frames. 
], batch size: 17, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:23:30,371 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51528.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:23:32,889 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.987e+01 1.624e+02 1.963e+02 2.341e+02 6.572e+02, threshold=3.927e+02, percent-clipped=1.0 2023-03-26 11:23:36,556 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4455, 1.6051, 1.7544, 0.9149, 1.6445, 1.7630, 1.8294, 1.5575], device='cuda:2'), covar=tensor([0.0943, 0.0597, 0.0384, 0.0570, 0.0455, 0.0640, 0.0344, 0.0669], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0155, 0.0121, 0.0134, 0.0132, 0.0125, 0.0145, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.5656e-05, 1.1371e-04, 8.7407e-05, 9.7399e-05, 9.4154e-05, 9.1753e-05, 1.0646e-04, 1.0816e-04], device='cuda:2') 2023-03-26 11:23:57,292 INFO [finetune.py:976] (2/7) Epoch 10, batch 0, loss[loss=0.2106, simple_loss=0.2768, pruned_loss=0.07224, over 4812.00 frames. ], tot_loss[loss=0.2106, simple_loss=0.2768, pruned_loss=0.07224, over 4812.00 frames. ], batch size: 39, lr: 3.76e-03, grad_scale: 16.0 2023-03-26 11:23:57,292 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 11:24:16,169 INFO [finetune.py:1010] (2/7) Epoch 10, validation: loss=0.1604, simple_loss=0.2317, pruned_loss=0.04451, over 2265189.00 frames. 2023-03-26 11:24:16,170 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB 2023-03-26 11:24:22,500 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=51556.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:24:30,970 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.51 vs. limit=2.0 2023-03-26 11:24:52,580 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7103, 1.2600, 0.9143, 1.7132, 2.0138, 1.4099, 1.5438, 1.7536], device='cuda:2'), covar=tensor([0.1454, 0.2096, 0.2182, 0.1197, 0.2087, 0.2307, 0.1475, 0.1862], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0096, 0.0113, 0.0092, 0.0121, 0.0095, 0.0100, 0.0091], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 11:24:58,290 INFO [finetune.py:976] (2/7) Epoch 10, batch 50, loss[loss=0.1943, simple_loss=0.2634, pruned_loss=0.06263, over 4896.00 frames. ], tot_loss[loss=0.2031, simple_loss=0.2681, pruned_loss=0.06907, over 216337.14 frames. ], batch size: 37, lr: 3.76e-03, grad_scale: 16.0 2023-03-26 11:25:01,141 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=51602.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:25:20,208 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.916e+01 1.735e+02 2.131e+02 2.642e+02 7.480e+02, threshold=4.262e+02, percent-clipped=4.0 2023-03-26 11:25:31,992 INFO [finetune.py:976] (2/7) Epoch 10, batch 100, loss[loss=0.2047, simple_loss=0.2591, pruned_loss=0.07518, over 4783.00 frames. ], tot_loss[loss=0.1971, simple_loss=0.2615, pruned_loss=0.06635, over 380563.45 frames. ], batch size: 29, lr: 3.76e-03, grad_scale: 16.0 2023-03-26 11:25:32,633 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=51650.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:26:04,781 INFO [finetune.py:976] (2/7) Epoch 10, batch 150, loss[loss=0.1882, simple_loss=0.2521, pruned_loss=0.06219, over 4813.00 frames. 
], tot_loss[loss=0.193, simple_loss=0.2571, pruned_loss=0.06448, over 508398.88 frames. ], batch size: 39, lr: 3.76e-03, grad_scale: 16.0 2023-03-26 11:26:18,700 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=51718.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:26:33,393 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.046e+02 1.594e+02 1.858e+02 2.240e+02 3.308e+02, threshold=3.716e+02, percent-clipped=0.0 2023-03-26 11:26:37,114 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51738.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:26:47,597 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=51747.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:26:48,179 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51748.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:26:52,005 INFO [finetune.py:976] (2/7) Epoch 10, batch 200, loss[loss=0.1833, simple_loss=0.2424, pruned_loss=0.0621, over 4788.00 frames. ], tot_loss[loss=0.1926, simple_loss=0.256, pruned_loss=0.06461, over 608912.89 frames. ], batch size: 29, lr: 3.76e-03, grad_scale: 16.0 2023-03-26 11:27:04,851 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=51766.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:27:25,424 INFO [finetune.py:976] (2/7) Epoch 10, batch 250, loss[loss=0.2081, simple_loss=0.2672, pruned_loss=0.07452, over 4827.00 frames. ], tot_loss[loss=0.1982, simple_loss=0.2621, pruned_loss=0.06716, over 683497.74 frames. ], batch size: 30, lr: 3.76e-03, grad_scale: 16.0 2023-03-26 11:27:33,014 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=51808.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:27:45,690 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=51828.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:27:48,003 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.029e+02 1.605e+02 1.930e+02 2.331e+02 5.576e+02, threshold=3.861e+02, percent-clipped=5.0 2023-03-26 11:27:58,881 INFO [finetune.py:976] (2/7) Epoch 10, batch 300, loss[loss=0.2137, simple_loss=0.2751, pruned_loss=0.07616, over 4764.00 frames. ], tot_loss[loss=0.2, simple_loss=0.2649, pruned_loss=0.06755, over 744030.61 frames. ], batch size: 29, lr: 3.76e-03, grad_scale: 16.0 2023-03-26 11:28:00,156 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51851.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:28:04,960 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8327, 1.2142, 1.7950, 1.7368, 1.5320, 1.4988, 1.6363, 1.6775], device='cuda:2'), covar=tensor([0.4070, 0.4657, 0.3772, 0.4172, 0.5088, 0.4071, 0.4945, 0.3646], device='cuda:2'), in_proj_covar=tensor([0.0235, 0.0241, 0.0255, 0.0257, 0.0252, 0.0227, 0.0276, 0.0230], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:28:17,689 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=51876.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 11:28:31,954 INFO [finetune.py:976] (2/7) Epoch 10, batch 350, loss[loss=0.2307, simple_loss=0.2919, pruned_loss=0.08474, over 4898.00 frames. ], tot_loss[loss=0.2028, simple_loss=0.2677, pruned_loss=0.069, over 791709.60 frames. 
], batch size: 36, lr: 3.76e-03, grad_scale: 16.0 2023-03-26 11:28:46,751 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.87 vs. limit=5.0 2023-03-26 11:28:54,278 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.052e+02 1.678e+02 2.044e+02 2.443e+02 3.814e+02, threshold=4.089e+02, percent-clipped=0.0 2023-03-26 11:29:04,641 INFO [finetune.py:976] (2/7) Epoch 10, batch 400, loss[loss=0.1995, simple_loss=0.2715, pruned_loss=0.06379, over 4926.00 frames. ], tot_loss[loss=0.2012, simple_loss=0.2669, pruned_loss=0.0677, over 828721.50 frames. ], batch size: 33, lr: 3.76e-03, grad_scale: 16.0 2023-03-26 11:29:12,576 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0 2023-03-26 11:29:17,477 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6658, 1.4418, 2.2911, 3.3405, 2.2777, 2.4075, 1.1081, 2.5456], device='cuda:2'), covar=tensor([0.1830, 0.1611, 0.1276, 0.0676, 0.0857, 0.1568, 0.1888, 0.0730], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0165, 0.0102, 0.0139, 0.0126, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 11:29:47,991 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.72 vs. limit=5.0 2023-03-26 11:29:56,981 INFO [finetune.py:976] (2/7) Epoch 10, batch 450, loss[loss=0.2157, simple_loss=0.2767, pruned_loss=0.07737, over 4919.00 frames. ], tot_loss[loss=0.2007, simple_loss=0.266, pruned_loss=0.06765, over 857448.08 frames. ], batch size: 37, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:30:13,852 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.21 vs. limit=5.0 2023-03-26 11:30:21,154 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.724e+02 2.025e+02 2.574e+02 4.346e+02, threshold=4.050e+02, percent-clipped=1.0 2023-03-26 11:30:24,956 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=52038.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:30:26,197 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=52040.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:30:30,963 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=52048.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:30:31,463 INFO [finetune.py:976] (2/7) Epoch 10, batch 500, loss[loss=0.1699, simple_loss=0.2361, pruned_loss=0.05183, over 4762.00 frames. ], tot_loss[loss=0.1982, simple_loss=0.2628, pruned_loss=0.06677, over 880574.29 frames. ], batch size: 27, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:30:56,805 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=52086.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:31:02,787 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=52096.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:31:04,604 INFO [finetune.py:976] (2/7) Epoch 10, batch 550, loss[loss=0.1961, simple_loss=0.2643, pruned_loss=0.06393, over 4852.00 frames. ], tot_loss[loss=0.1967, simple_loss=0.2603, pruned_loss=0.06655, over 898861.17 frames. 
], batch size: 49, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:31:05,923 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=52101.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:31:07,069 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=52103.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:31:10,816 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.0261, 2.7868, 2.5423, 3.1543, 2.7986, 2.8387, 2.7726, 3.7368], device='cuda:2'), covar=tensor([0.3442, 0.4286, 0.3093, 0.3803, 0.3679, 0.2129, 0.3971, 0.1288], device='cuda:2'), in_proj_covar=tensor([0.0287, 0.0259, 0.0222, 0.0279, 0.0243, 0.0209, 0.0244, 0.0212], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:31:27,059 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.020e+02 1.591e+02 1.822e+02 2.163e+02 6.487e+02, threshold=3.643e+02, percent-clipped=1.0 2023-03-26 11:31:37,961 INFO [finetune.py:976] (2/7) Epoch 10, batch 600, loss[loss=0.2343, simple_loss=0.3036, pruned_loss=0.08246, over 4860.00 frames. ], tot_loss[loss=0.1969, simple_loss=0.2603, pruned_loss=0.06675, over 912255.05 frames. ], batch size: 44, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:31:39,251 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=52151.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:32:10,102 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1736, 2.0556, 1.7088, 2.1303, 2.0057, 1.9432, 1.9855, 2.9331], device='cuda:2'), covar=tensor([0.4543, 0.5944, 0.4143, 0.5387, 0.5332, 0.2900, 0.5273, 0.1726], device='cuda:2'), in_proj_covar=tensor([0.0288, 0.0259, 0.0222, 0.0280, 0.0244, 0.0209, 0.0245, 0.0213], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:32:11,832 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7865, 1.6558, 1.6501, 1.7260, 1.1759, 4.2488, 1.6361, 2.0930], device='cuda:2'), covar=tensor([0.3156, 0.2385, 0.2089, 0.2140, 0.1777, 0.0111, 0.2563, 0.1308], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0116, 0.0120, 0.0123, 0.0116, 0.0099, 0.0099, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 11:32:15,957 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=52193.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:32:19,559 INFO [finetune.py:976] (2/7) Epoch 10, batch 650, loss[loss=0.2142, simple_loss=0.2897, pruned_loss=0.06938, over 4813.00 frames. ], tot_loss[loss=0.2001, simple_loss=0.2642, pruned_loss=0.06802, over 921162.18 frames. 
], batch size: 45, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:32:19,618 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=52199.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:32:42,612 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.128e+02 1.681e+02 1.969e+02 2.336e+02 3.855e+02, threshold=3.938e+02, percent-clipped=2.0 2023-03-26 11:32:46,418 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8568, 1.7076, 2.1604, 1.9644, 1.9370, 4.4017, 1.6526, 1.9988], device='cuda:2'), covar=tensor([0.0925, 0.1702, 0.1182, 0.1031, 0.1540, 0.0190, 0.1400, 0.1624], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0082, 0.0076, 0.0078, 0.0091, 0.0082, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 11:32:53,490 INFO [finetune.py:976] (2/7) Epoch 10, batch 700, loss[loss=0.2777, simple_loss=0.3251, pruned_loss=0.1151, over 4231.00 frames. ], tot_loss[loss=0.201, simple_loss=0.2656, pruned_loss=0.06818, over 929938.79 frames. ], batch size: 65, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:32:54,963 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-26 11:32:56,644 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=52254.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:33:26,212 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1621, 1.8128, 1.8603, 0.9410, 1.9775, 2.1824, 1.9049, 1.8150], device='cuda:2'), covar=tensor([0.1034, 0.0745, 0.0596, 0.0687, 0.0494, 0.0672, 0.0502, 0.0759], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0154, 0.0121, 0.0134, 0.0131, 0.0124, 0.0144, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.5619e-05, 1.1309e-04, 8.7005e-05, 9.6825e-05, 9.3733e-05, 9.0706e-05, 1.0589e-04, 1.0731e-04], device='cuda:2') 2023-03-26 11:33:26,705 INFO [finetune.py:976] (2/7) Epoch 10, batch 750, loss[loss=0.1901, simple_loss=0.2585, pruned_loss=0.06089, over 4736.00 frames. ], tot_loss[loss=0.2023, simple_loss=0.2673, pruned_loss=0.06865, over 935640.25 frames. ], batch size: 59, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:33:45,061 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=52312.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:34:02,791 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.081e+02 1.612e+02 1.864e+02 2.364e+02 4.342e+02, threshold=3.728e+02, percent-clipped=1.0 2023-03-26 11:34:15,210 INFO [finetune.py:976] (2/7) Epoch 10, batch 800, loss[loss=0.1895, simple_loss=0.2513, pruned_loss=0.06383, over 4815.00 frames. ], tot_loss[loss=0.201, simple_loss=0.2666, pruned_loss=0.06776, over 938918.88 frames. ], batch size: 33, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:34:30,566 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=52373.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:34:49,110 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=52396.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:34:50,910 INFO [finetune.py:976] (2/7) Epoch 10, batch 850, loss[loss=0.1454, simple_loss=0.2106, pruned_loss=0.04006, over 4795.00 frames. ], tot_loss[loss=0.1996, simple_loss=0.2649, pruned_loss=0.06718, over 943476.02 frames. 
], batch size: 51, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:34:54,129 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=52403.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:35:14,904 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.098e+02 1.556e+02 1.848e+02 2.239e+02 3.627e+02, threshold=3.695e+02, percent-clipped=0.0 2023-03-26 11:35:25,628 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=2.04 vs. limit=2.0 2023-03-26 11:35:36,905 INFO [finetune.py:976] (2/7) Epoch 10, batch 900, loss[loss=0.2176, simple_loss=0.2685, pruned_loss=0.08334, over 4285.00 frames. ], tot_loss[loss=0.1966, simple_loss=0.2612, pruned_loss=0.06598, over 944384.69 frames. ], batch size: 65, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:35:38,210 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=52451.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:35:59,109 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2118, 2.3060, 2.2433, 1.6682, 2.2714, 2.5193, 2.3152, 1.9045], device='cuda:2'), covar=tensor([0.0676, 0.0558, 0.0745, 0.0887, 0.0548, 0.0684, 0.0624, 0.1039], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0132, 0.0142, 0.0123, 0.0118, 0.0141, 0.0141, 0.0159], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:36:16,908 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-26 11:36:25,700 INFO [finetune.py:976] (2/7) Epoch 10, batch 950, loss[loss=0.2357, simple_loss=0.2921, pruned_loss=0.08967, over 4869.00 frames. ], tot_loss[loss=0.1963, simple_loss=0.2598, pruned_loss=0.06642, over 946640.71 frames. ], batch size: 34, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:36:46,477 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0 2023-03-26 11:36:46,733 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.124e+02 1.551e+02 1.918e+02 2.238e+02 5.409e+02, threshold=3.837e+02, percent-clipped=4.0 2023-03-26 11:37:01,194 INFO [finetune.py:976] (2/7) Epoch 10, batch 1000, loss[loss=0.2483, simple_loss=0.3097, pruned_loss=0.09343, over 4150.00 frames. ], tot_loss[loss=0.1963, simple_loss=0.2602, pruned_loss=0.06617, over 947415.66 frames. ], batch size: 65, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:37:01,273 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=52549.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:37:04,948 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=52555.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:37:07,533 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.60 vs. limit=2.0 2023-03-26 11:37:14,819 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.04 vs. limit=5.0 2023-03-26 11:37:26,846 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. 
limit=2.0 2023-03-26 11:37:57,104 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9789, 1.9726, 2.0264, 1.3492, 2.0502, 2.2431, 2.0870, 1.5957], device='cuda:2'), covar=tensor([0.0604, 0.0679, 0.0709, 0.0916, 0.0599, 0.0611, 0.0617, 0.1181], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0131, 0.0142, 0.0122, 0.0117, 0.0141, 0.0141, 0.0159], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:38:00,466 INFO [finetune.py:976] (2/7) Epoch 10, batch 1050, loss[loss=0.1965, simple_loss=0.2696, pruned_loss=0.06169, over 4821.00 frames. ], tot_loss[loss=0.1977, simple_loss=0.2627, pruned_loss=0.06633, over 948335.42 frames. ], batch size: 39, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:38:01,504 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.47 vs. limit=2.0 2023-03-26 11:38:21,468 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=52616.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 11:38:31,486 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.240e+02 1.591e+02 1.928e+02 2.293e+02 3.930e+02, threshold=3.855e+02, percent-clipped=1.0 2023-03-26 11:38:44,943 INFO [finetune.py:976] (2/7) Epoch 10, batch 1100, loss[loss=0.1969, simple_loss=0.2608, pruned_loss=0.06655, over 4850.00 frames. ], tot_loss[loss=0.1979, simple_loss=0.2634, pruned_loss=0.06622, over 950031.41 frames. ], batch size: 44, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:38:59,646 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=52668.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:39:14,442 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.96 vs. limit=2.0 2023-03-26 11:39:16,689 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4772, 1.5029, 1.4631, 1.8262, 1.7635, 1.7699, 1.1885, 1.2969], device='cuda:2'), covar=tensor([0.2301, 0.2135, 0.1802, 0.1543, 0.1966, 0.1155, 0.2676, 0.1949], device='cuda:2'), in_proj_covar=tensor([0.0236, 0.0207, 0.0206, 0.0187, 0.0240, 0.0180, 0.0212, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:39:17,758 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=52696.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:39:19,452 INFO [finetune.py:976] (2/7) Epoch 10, batch 1150, loss[loss=0.1665, simple_loss=0.2391, pruned_loss=0.04697, over 4920.00 frames. ], tot_loss[loss=0.1986, simple_loss=0.2645, pruned_loss=0.06632, over 951381.22 frames. ], batch size: 38, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:39:19,699 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-26 11:39:40,544 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0 2023-03-26 11:39:40,840 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.071e+02 1.654e+02 1.930e+02 2.314e+02 4.484e+02, threshold=3.861e+02, percent-clipped=2.0 2023-03-26 11:39:48,712 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=52744.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:39:52,581 INFO [finetune.py:976] (2/7) Epoch 10, batch 1200, loss[loss=0.1975, simple_loss=0.2702, pruned_loss=0.06238, over 4814.00 frames. ], tot_loss[loss=0.1979, simple_loss=0.2636, pruned_loss=0.06608, over 953551.87 frames. 
], batch size: 41, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:39:56,846 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=52752.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:40:00,360 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1993, 2.0311, 1.6341, 2.1092, 2.0059, 1.9783, 1.8862, 2.9341], device='cuda:2'), covar=tensor([0.4557, 0.5773, 0.4155, 0.5419, 0.5194, 0.2687, 0.5507, 0.1933], device='cuda:2'), in_proj_covar=tensor([0.0286, 0.0260, 0.0222, 0.0279, 0.0243, 0.0209, 0.0246, 0.0213], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:40:23,851 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1139, 1.9019, 1.6776, 1.8224, 2.0664, 1.7604, 2.3223, 2.0670], device='cuda:2'), covar=tensor([0.1435, 0.2406, 0.3195, 0.2992, 0.2797, 0.1791, 0.3335, 0.2014], device='cuda:2'), in_proj_covar=tensor([0.0173, 0.0187, 0.0231, 0.0251, 0.0237, 0.0195, 0.0211, 0.0193], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:40:35,762 INFO [finetune.py:976] (2/7) Epoch 10, batch 1250, loss[loss=0.163, simple_loss=0.2299, pruned_loss=0.04801, over 4859.00 frames. ], tot_loss[loss=0.1957, simple_loss=0.2611, pruned_loss=0.06517, over 954038.12 frames. ], batch size: 31, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:40:47,075 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=52813.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:40:49,490 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3507, 1.2561, 1.5083, 2.4590, 1.6564, 2.1197, 0.8755, 2.0564], device='cuda:2'), covar=tensor([0.1792, 0.1594, 0.1214, 0.0715, 0.0958, 0.1130, 0.1695, 0.0704], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0117, 0.0135, 0.0165, 0.0102, 0.0139, 0.0126, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 11:41:05,426 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.045e+02 1.514e+02 1.794e+02 2.223e+02 4.744e+02, threshold=3.588e+02, percent-clipped=2.0 2023-03-26 11:41:19,445 INFO [finetune.py:976] (2/7) Epoch 10, batch 1300, loss[loss=0.1523, simple_loss=0.2209, pruned_loss=0.04189, over 4915.00 frames. ], tot_loss[loss=0.1925, simple_loss=0.2575, pruned_loss=0.06377, over 953121.79 frames. ], batch size: 36, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:41:19,554 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=52849.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:41:20,255 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.78 vs. limit=2.0 2023-03-26 11:41:51,931 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=52897.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:41:53,102 INFO [finetune.py:976] (2/7) Epoch 10, batch 1350, loss[loss=0.2269, simple_loss=0.2932, pruned_loss=0.08029, over 4914.00 frames. ], tot_loss[loss=0.1938, simple_loss=0.2586, pruned_loss=0.06445, over 956081.77 frames. 
], batch size: 36, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:42:01,952 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=52911.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:42:15,925 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.950e+01 1.660e+02 2.003e+02 2.564e+02 3.985e+02, threshold=4.006e+02, percent-clipped=2.0 2023-03-26 11:42:16,059 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7035, 1.4962, 1.0604, 0.2740, 1.3274, 1.4496, 1.4276, 1.4253], device='cuda:2'), covar=tensor([0.0930, 0.0841, 0.1391, 0.1940, 0.1362, 0.2464, 0.2445, 0.0870], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0202, 0.0203, 0.0189, 0.0216, 0.0208, 0.0224, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:42:30,878 INFO [finetune.py:976] (2/7) Epoch 10, batch 1400, loss[loss=0.1835, simple_loss=0.2613, pruned_loss=0.05286, over 4865.00 frames. ], tot_loss[loss=0.197, simple_loss=0.2624, pruned_loss=0.06582, over 955060.08 frames. ], batch size: 31, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:42:37,889 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0 2023-03-26 11:42:48,554 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=52968.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:42:49,303 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-26 11:43:11,431 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9231, 3.9148, 3.7833, 2.1665, 4.0512, 3.0520, 1.1948, 2.8095], device='cuda:2'), covar=tensor([0.2212, 0.2174, 0.1530, 0.3259, 0.0956, 0.0949, 0.4523, 0.1590], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0175, 0.0161, 0.0129, 0.0157, 0.0123, 0.0146, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 11:43:14,420 INFO [finetune.py:976] (2/7) Epoch 10, batch 1450, loss[loss=0.2073, simple_loss=0.2714, pruned_loss=0.07157, over 4892.00 frames. ], tot_loss[loss=0.1978, simple_loss=0.2636, pruned_loss=0.06602, over 954644.97 frames. ], batch size: 35, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:43:22,108 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8649, 1.6893, 1.5247, 1.4462, 1.8900, 1.5559, 1.7941, 1.7579], device='cuda:2'), covar=tensor([0.1567, 0.2333, 0.3623, 0.2875, 0.2973, 0.2030, 0.3018, 0.2102], device='cuda:2'), in_proj_covar=tensor([0.0173, 0.0188, 0.0232, 0.0252, 0.0238, 0.0195, 0.0211, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:43:35,022 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=53016.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:43:45,115 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.022e+02 1.605e+02 1.913e+02 2.318e+02 4.347e+02, threshold=3.826e+02, percent-clipped=3.0 2023-03-26 11:43:55,917 INFO [finetune.py:976] (2/7) Epoch 10, batch 1500, loss[loss=0.2434, simple_loss=0.312, pruned_loss=0.08746, over 4829.00 frames. ], tot_loss[loss=0.199, simple_loss=0.2645, pruned_loss=0.06678, over 953607.59 frames. 
], batch size: 49, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:44:00,680 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=53056.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:44:29,472 INFO [finetune.py:976] (2/7) Epoch 10, batch 1550, loss[loss=0.2246, simple_loss=0.2739, pruned_loss=0.08768, over 4838.00 frames. ], tot_loss[loss=0.1991, simple_loss=0.2647, pruned_loss=0.06677, over 951985.03 frames. ], batch size: 30, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:44:35,502 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=53108.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:44:41,518 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=53117.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:44:52,482 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.115e+02 1.652e+02 2.005e+02 2.543e+02 4.651e+02, threshold=4.009e+02, percent-clipped=4.0 2023-03-26 11:45:03,285 INFO [finetune.py:976] (2/7) Epoch 10, batch 1600, loss[loss=0.1499, simple_loss=0.2214, pruned_loss=0.03919, over 4870.00 frames. ], tot_loss[loss=0.1985, simple_loss=0.2635, pruned_loss=0.06682, over 953783.64 frames. ], batch size: 34, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:45:48,099 INFO [finetune.py:976] (2/7) Epoch 10, batch 1650, loss[loss=0.1513, simple_loss=0.2156, pruned_loss=0.0435, over 4763.00 frames. ], tot_loss[loss=0.1959, simple_loss=0.2607, pruned_loss=0.06552, over 953997.67 frames. ], batch size: 28, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:45:50,825 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2996, 2.0606, 1.8374, 2.2014, 2.0940, 2.0720, 2.0096, 3.1163], device='cuda:2'), covar=tensor([0.4178, 0.5912, 0.3795, 0.4937, 0.4840, 0.2550, 0.5290, 0.1565], device='cuda:2'), in_proj_covar=tensor([0.0286, 0.0259, 0.0222, 0.0279, 0.0243, 0.0208, 0.0244, 0.0212], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:45:52,559 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.4658, 3.8411, 4.0843, 4.2718, 4.2188, 3.9580, 4.5349, 1.4699], device='cuda:2'), covar=tensor([0.0719, 0.0828, 0.0715, 0.1002, 0.1166, 0.1445, 0.0586, 0.5358], device='cuda:2'), in_proj_covar=tensor([0.0346, 0.0242, 0.0274, 0.0289, 0.0326, 0.0279, 0.0298, 0.0292], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:45:56,036 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=53211.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 11:46:10,716 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.364e+01 1.592e+02 1.774e+02 2.189e+02 3.836e+02, threshold=3.549e+02, percent-clipped=0.0 2023-03-26 11:46:16,364 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=53241.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:46:23,558 INFO [finetune.py:976] (2/7) Epoch 10, batch 1700, loss[loss=0.1641, simple_loss=0.2332, pruned_loss=0.04748, over 4891.00 frames. ], tot_loss[loss=0.1944, simple_loss=0.2589, pruned_loss=0.06494, over 955832.41 frames. 
], batch size: 32, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:46:29,715 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=53259.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 11:46:36,991 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8227, 1.7755, 2.0413, 2.0457, 1.9553, 3.5862, 1.5616, 1.8167], device='cuda:2'), covar=tensor([0.0846, 0.1532, 0.0940, 0.0839, 0.1380, 0.0306, 0.1427, 0.1500], device='cuda:2'), in_proj_covar=tensor([0.0077, 0.0081, 0.0075, 0.0078, 0.0091, 0.0083, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 11:46:37,017 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8650, 1.6112, 1.4913, 1.5265, 1.9911, 1.8879, 1.6668, 1.5562], device='cuda:2'), covar=tensor([0.0270, 0.0327, 0.0569, 0.0325, 0.0226, 0.0514, 0.0391, 0.0372], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0108, 0.0138, 0.0114, 0.0101, 0.0102, 0.0091, 0.0108], device='cuda:2'), out_proj_covar=tensor([7.0149e-05, 8.4498e-05, 1.1012e-04, 8.9359e-05, 7.9097e-05, 7.5320e-05, 6.8914e-05, 8.2951e-05], device='cuda:2') 2023-03-26 11:46:44,531 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8499, 1.7952, 1.7674, 1.7555, 1.3916, 3.7117, 1.8393, 2.2179], device='cuda:2'), covar=tensor([0.3020, 0.2041, 0.1860, 0.2052, 0.1569, 0.0198, 0.2510, 0.1166], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0115, 0.0120, 0.0123, 0.0116, 0.0099, 0.0099, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 11:46:56,425 INFO [finetune.py:976] (2/7) Epoch 10, batch 1750, loss[loss=0.222, simple_loss=0.2854, pruned_loss=0.07925, over 4828.00 frames. ], tot_loss[loss=0.1958, simple_loss=0.2603, pruned_loss=0.06567, over 953705.69 frames. ], batch size: 33, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:46:58,855 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=53302.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:47:00,066 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=53304.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:47:07,268 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=53315.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:47:18,950 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.172e+02 1.605e+02 1.832e+02 2.176e+02 4.638e+02, threshold=3.664e+02, percent-clipped=2.0 2023-03-26 11:47:29,901 INFO [finetune.py:976] (2/7) Epoch 10, batch 1800, loss[loss=0.2048, simple_loss=0.272, pruned_loss=0.06883, over 4824.00 frames. ], tot_loss[loss=0.1986, simple_loss=0.2636, pruned_loss=0.06686, over 953648.29 frames. ], batch size: 33, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:47:45,418 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=53365.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:47:46,660 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=53367.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:47:56,055 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=53376.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:48:26,360 INFO [finetune.py:976] (2/7) Epoch 10, batch 1850, loss[loss=0.1954, simple_loss=0.276, pruned_loss=0.05742, over 4812.00 frames. 
], tot_loss[loss=0.2004, simple_loss=0.2652, pruned_loss=0.06785, over 953849.08 frames. ], batch size: 45, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:48:32,716 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=53408.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:48:35,096 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=53412.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:48:51,323 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=53428.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 11:48:58,641 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.045e+02 1.752e+02 2.111e+02 2.637e+02 7.323e+02, threshold=4.222e+02, percent-clipped=6.0 2023-03-26 11:49:10,459 INFO [finetune.py:976] (2/7) Epoch 10, batch 1900, loss[loss=0.2107, simple_loss=0.2877, pruned_loss=0.06688, over 4907.00 frames. ], tot_loss[loss=0.201, simple_loss=0.2662, pruned_loss=0.06791, over 955577.21 frames. ], batch size: 37, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:49:14,808 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=53456.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:49:43,866 INFO [finetune.py:976] (2/7) Epoch 10, batch 1950, loss[loss=0.2013, simple_loss=0.2571, pruned_loss=0.07276, over 4790.00 frames. ], tot_loss[loss=0.1986, simple_loss=0.264, pruned_loss=0.06656, over 956420.00 frames. ], batch size: 25, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:49:58,804 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7268, 3.8470, 3.7391, 1.9735, 3.9934, 2.9049, 0.9815, 2.6747], device='cuda:2'), covar=tensor([0.2587, 0.2095, 0.1449, 0.3168, 0.0998, 0.0997, 0.4434, 0.1454], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0174, 0.0161, 0.0129, 0.0157, 0.0123, 0.0146, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 11:50:09,883 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.065e+02 1.535e+02 1.778e+02 2.101e+02 3.650e+02, threshold=3.555e+02, percent-clipped=0.0 2023-03-26 11:50:22,182 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1445, 1.9146, 1.6437, 1.6582, 1.8787, 1.8524, 1.8666, 2.6150], device='cuda:2'), covar=tensor([0.4658, 0.4848, 0.4130, 0.5245, 0.4389, 0.2866, 0.4656, 0.2022], device='cuda:2'), in_proj_covar=tensor([0.0286, 0.0260, 0.0222, 0.0280, 0.0243, 0.0209, 0.0244, 0.0212], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:50:22,800 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8499, 1.7628, 1.4821, 1.5606, 2.1810, 2.1565, 1.7262, 1.5445], device='cuda:2'), covar=tensor([0.0267, 0.0314, 0.0522, 0.0334, 0.0203, 0.0316, 0.0333, 0.0434], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0107, 0.0137, 0.0113, 0.0100, 0.0100, 0.0090, 0.0107], device='cuda:2'), out_proj_covar=tensor([6.9553e-05, 8.3802e-05, 1.0939e-04, 8.8548e-05, 7.8368e-05, 7.4162e-05, 6.8349e-05, 8.2137e-05], device='cuda:2') 2023-03-26 11:50:24,671 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0 2023-03-26 11:50:29,329 INFO [finetune.py:976] (2/7) Epoch 10, batch 2000, loss[loss=0.2007, simple_loss=0.2548, pruned_loss=0.07334, over 4691.00 frames. ], tot_loss[loss=0.1971, simple_loss=0.2617, pruned_loss=0.06622, over 954556.87 frames. 
], batch size: 23, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:51:16,689 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9711, 1.3120, 1.9296, 1.8578, 1.6611, 1.6429, 1.7385, 1.7496], device='cuda:2'), covar=tensor([0.3938, 0.4870, 0.4194, 0.4393, 0.5786, 0.4131, 0.5589, 0.3858], device='cuda:2'), in_proj_covar=tensor([0.0234, 0.0239, 0.0252, 0.0256, 0.0250, 0.0226, 0.0273, 0.0228], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:51:21,833 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=53597.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:51:23,505 INFO [finetune.py:976] (2/7) Epoch 10, batch 2050, loss[loss=0.1464, simple_loss=0.2164, pruned_loss=0.03815, over 4773.00 frames. ], tot_loss[loss=0.1944, simple_loss=0.2583, pruned_loss=0.06521, over 953450.19 frames. ], batch size: 28, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:51:44,833 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.079e+02 1.628e+02 1.908e+02 2.249e+02 5.707e+02, threshold=3.816e+02, percent-clipped=1.0 2023-03-26 11:51:56,177 INFO [finetune.py:976] (2/7) Epoch 10, batch 2100, loss[loss=0.1277, simple_loss=0.1999, pruned_loss=0.02778, over 4703.00 frames. ], tot_loss[loss=0.1946, simple_loss=0.2586, pruned_loss=0.06528, over 952931.91 frames. ], batch size: 23, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:52:03,970 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=53660.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:52:13,510 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=53671.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:52:37,059 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4457, 1.3066, 1.2137, 1.4597, 1.6726, 1.4736, 0.9855, 1.2514], device='cuda:2'), covar=tensor([0.2228, 0.2165, 0.2021, 0.1801, 0.1661, 0.1313, 0.2687, 0.2004], device='cuda:2'), in_proj_covar=tensor([0.0234, 0.0206, 0.0205, 0.0187, 0.0238, 0.0178, 0.0211, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:52:37,553 INFO [finetune.py:976] (2/7) Epoch 10, batch 2150, loss[loss=0.2338, simple_loss=0.3103, pruned_loss=0.07869, over 4819.00 frames. ], tot_loss[loss=0.1976, simple_loss=0.262, pruned_loss=0.06658, over 950615.90 frames. 
], batch size: 47, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:52:52,717 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=53712.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:53:03,973 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=53723.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 11:53:19,770 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.135e+02 1.764e+02 2.057e+02 2.459e+02 5.535e+02, threshold=4.114e+02, percent-clipped=2.0 2023-03-26 11:53:19,905 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8309, 2.5030, 2.3683, 1.2985, 2.5023, 2.0237, 1.8776, 2.2429], device='cuda:2'), covar=tensor([0.1172, 0.0953, 0.1845, 0.2329, 0.1715, 0.2381, 0.2429, 0.1388], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0202, 0.0203, 0.0189, 0.0217, 0.0209, 0.0225, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:53:34,458 INFO [finetune.py:976] (2/7) Epoch 10, batch 2200, loss[loss=0.2106, simple_loss=0.2753, pruned_loss=0.07293, over 4889.00 frames. ], tot_loss[loss=0.1999, simple_loss=0.2648, pruned_loss=0.06755, over 952262.21 frames. ], batch size: 35, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:53:42,704 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-26 11:53:43,154 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=53760.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:54:07,987 INFO [finetune.py:976] (2/7) Epoch 10, batch 2250, loss[loss=0.1617, simple_loss=0.2341, pruned_loss=0.04462, over 4922.00 frames. ], tot_loss[loss=0.2002, simple_loss=0.2656, pruned_loss=0.06737, over 953702.24 frames. ], batch size: 33, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:54:30,209 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.107e+02 1.658e+02 1.958e+02 2.430e+02 3.560e+02, threshold=3.915e+02, percent-clipped=0.0 2023-03-26 11:54:41,557 INFO [finetune.py:976] (2/7) Epoch 10, batch 2300, loss[loss=0.1837, simple_loss=0.2465, pruned_loss=0.06044, over 4886.00 frames. ], tot_loss[loss=0.1998, simple_loss=0.2654, pruned_loss=0.06712, over 952172.11 frames. ], batch size: 32, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:54:42,828 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7814, 1.3665, 0.8409, 1.7844, 2.2053, 1.4201, 1.6966, 1.6473], device='cuda:2'), covar=tensor([0.1535, 0.2130, 0.2274, 0.1315, 0.1997, 0.2215, 0.1566, 0.2096], device='cuda:2'), in_proj_covar=tensor([0.0088, 0.0094, 0.0111, 0.0091, 0.0120, 0.0094, 0.0098, 0.0091], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 11:54:47,639 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3250, 2.2019, 1.8844, 2.2656, 2.2651, 1.9464, 2.6626, 2.3304], device='cuda:2'), covar=tensor([0.1386, 0.2370, 0.3230, 0.2864, 0.2751, 0.1715, 0.3144, 0.1767], device='cuda:2'), in_proj_covar=tensor([0.0174, 0.0188, 0.0232, 0.0253, 0.0239, 0.0195, 0.0212, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:55:09,001 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.47 vs. 
limit=5.0 2023-03-26 11:55:15,981 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=53897.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:55:17,109 INFO [finetune.py:976] (2/7) Epoch 10, batch 2350, loss[loss=0.1674, simple_loss=0.2265, pruned_loss=0.05409, over 4724.00 frames. ], tot_loss[loss=0.1966, simple_loss=0.2618, pruned_loss=0.0657, over 952196.38 frames. ], batch size: 59, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:55:47,268 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.820e+01 1.627e+02 1.969e+02 2.442e+02 4.599e+02, threshold=3.938e+02, percent-clipped=2.0 2023-03-26 11:55:58,276 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=53945.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:55:59,709 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.13 vs. limit=5.0 2023-03-26 11:56:05,529 INFO [finetune.py:976] (2/7) Epoch 10, batch 2400, loss[loss=0.2397, simple_loss=0.2827, pruned_loss=0.09839, over 4935.00 frames. ], tot_loss[loss=0.1938, simple_loss=0.2585, pruned_loss=0.06453, over 952965.38 frames. ], batch size: 33, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 11:56:15,959 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=53960.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:56:24,610 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=53971.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:56:41,502 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.68 vs. limit=2.0 2023-03-26 11:56:41,931 INFO [finetune.py:976] (2/7) Epoch 10, batch 2450, loss[loss=0.158, simple_loss=0.2207, pruned_loss=0.04767, over 4757.00 frames. ], tot_loss[loss=0.1919, simple_loss=0.2561, pruned_loss=0.06384, over 953068.41 frames. ], batch size: 28, lr: 3.74e-03, grad_scale: 64.0 2023-03-26 11:56:49,144 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=54008.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:56:56,921 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=54019.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:56:59,863 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=54023.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 11:57:05,184 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.081e+02 1.615e+02 1.942e+02 2.268e+02 4.833e+02, threshold=3.884e+02, percent-clipped=2.0 2023-03-26 11:57:16,019 INFO [finetune.py:976] (2/7) Epoch 10, batch 2500, loss[loss=0.1824, simple_loss=0.2576, pruned_loss=0.05356, over 4825.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.2588, pruned_loss=0.06512, over 953833.89 frames. 
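The recurring `zipformer.py:1188` lines record a per-batch layer-skipping decision: a warmup interval in batches (`warmup_begin`/`warmup_end`) and the set of encoder layers dropped for this batch. `batch_count` here is far past `warmup_end`, yet `num_to_drop=1` still occurs occasionally, so some residual drop probability evidently remains after warmup. A sketch under that reading; the probabilities and the linear ramp are guesses, not the Zipformer source:

```python
# Hypothetical stochastic layer-drop schedule matching the logged fields.
import random

def choose_layers_to_drop(batch_count, num_layers,
                          warmup_begin=1333.3, warmup_end=2000.0,
                          warmup_prob=0.5, final_prob=0.075):
    """Return (num_to_drop, layers_to_drop) for this batch."""
    if batch_count <= warmup_begin:
        prob = warmup_prob
    elif batch_count >= warmup_end:
        prob = final_prob                 # small residual rate after warmup
    else:                                 # linear ramp during warmup
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        prob = warmup_prob + frac * (final_prob - warmup_prob)
    layers_to_drop = {i for i in range(num_layers) if random.random() < prob}
    return len(layers_to_drop), layers_to_drop

# e.g. choose_layers_to_drop(53723.0, num_layers=4) -> (1, {0}) on some batches
```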
], batch size: 39, lr: 3.74e-03, grad_scale: 64.0 2023-03-26 11:57:23,276 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8394, 1.6704, 2.0869, 1.3526, 1.8550, 2.1262, 1.6168, 2.3012], device='cuda:2'), covar=tensor([0.1337, 0.2186, 0.1535, 0.2104, 0.0996, 0.1370, 0.2663, 0.0927], device='cuda:2'), in_proj_covar=tensor([0.0197, 0.0203, 0.0193, 0.0191, 0.0177, 0.0215, 0.0217, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 11:57:32,425 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3628, 2.9286, 2.7728, 1.2115, 2.9735, 2.2761, 0.6836, 1.9127], device='cuda:2'), covar=tensor([0.2228, 0.2320, 0.1777, 0.3737, 0.1508, 0.1156, 0.4183, 0.1703], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0173, 0.0160, 0.0128, 0.0156, 0.0123, 0.0145, 0.0121], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 11:57:42,139 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=54071.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:58:00,118 INFO [finetune.py:976] (2/7) Epoch 10, batch 2550, loss[loss=0.2135, simple_loss=0.2768, pruned_loss=0.07509, over 4866.00 frames. ], tot_loss[loss=0.1989, simple_loss=0.2636, pruned_loss=0.06713, over 951552.65 frames. ], batch size: 31, lr: 3.74e-03, grad_scale: 64.0 2023-03-26 11:58:35,815 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.936e+01 1.671e+02 2.051e+02 2.356e+02 3.900e+02, threshold=4.103e+02, percent-clipped=1.0 2023-03-26 11:58:46,742 INFO [finetune.py:976] (2/7) Epoch 10, batch 2600, loss[loss=0.1744, simple_loss=0.2484, pruned_loss=0.0502, over 4808.00 frames. ], tot_loss[loss=0.1979, simple_loss=0.2632, pruned_loss=0.06628, over 951018.31 frames. ], batch size: 39, lr: 3.74e-03, grad_scale: 64.0 2023-03-26 11:59:19,472 INFO [finetune.py:976] (2/7) Epoch 10, batch 2650, loss[loss=0.2331, simple_loss=0.2967, pruned_loss=0.08472, over 4790.00 frames. ], tot_loss[loss=0.1992, simple_loss=0.2647, pruned_loss=0.06679, over 949943.65 frames. ], batch size: 25, lr: 3.74e-03, grad_scale: 64.0 2023-03-26 11:59:21,635 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-26 11:59:43,742 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.585e+01 1.566e+02 1.779e+02 2.159e+02 3.883e+02, threshold=3.557e+02, percent-clipped=0.0 2023-03-26 11:59:45,097 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4303, 1.4887, 1.2690, 1.4635, 1.7478, 1.7801, 1.4752, 1.3358], device='cuda:2'), covar=tensor([0.0410, 0.0297, 0.0570, 0.0294, 0.0236, 0.0367, 0.0381, 0.0367], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0108, 0.0138, 0.0114, 0.0101, 0.0102, 0.0091, 0.0107], device='cuda:2'), out_proj_covar=tensor([7.0733e-05, 8.4499e-05, 1.0972e-04, 8.9204e-05, 7.9275e-05, 7.5548e-05, 6.8801e-05, 8.2471e-05], device='cuda:2') 2023-03-26 11:59:53,473 INFO [finetune.py:976] (2/7) Epoch 10, batch 2700, loss[loss=0.1852, simple_loss=0.2484, pruned_loss=0.06099, over 4923.00 frames. ], tot_loss[loss=0.1974, simple_loss=0.2632, pruned_loss=0.06576, over 951293.17 frames. 
], batch size: 33, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 11:59:55,901 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4501, 1.5902, 1.7619, 1.7526, 1.6037, 3.3223, 1.4221, 1.6774], device='cuda:2'), covar=tensor([0.0952, 0.1557, 0.1080, 0.0875, 0.1437, 0.0256, 0.1313, 0.1566], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0080, 0.0074, 0.0077, 0.0090, 0.0081, 0.0084, 0.0078], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2') 2023-03-26 12:00:26,576 INFO [finetune.py:976] (2/7) Epoch 10, batch 2750, loss[loss=0.1859, simple_loss=0.2513, pruned_loss=0.06029, over 4821.00 frames. ], tot_loss[loss=0.1959, simple_loss=0.261, pruned_loss=0.06539, over 952238.26 frames. ], batch size: 38, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:00:50,903 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.123e+02 1.545e+02 1.929e+02 2.415e+02 3.548e+02, threshold=3.859e+02, percent-clipped=0.0 2023-03-26 12:01:01,566 INFO [finetune.py:976] (2/7) Epoch 10, batch 2800, loss[loss=0.1745, simple_loss=0.2425, pruned_loss=0.05326, over 4937.00 frames. ], tot_loss[loss=0.1942, simple_loss=0.2585, pruned_loss=0.06493, over 953525.82 frames. ], batch size: 33, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:01:02,293 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6655, 1.2246, 0.8147, 1.5620, 2.0892, 0.9877, 1.5046, 1.5314], device='cuda:2'), covar=tensor([0.1494, 0.2136, 0.2085, 0.1247, 0.2030, 0.2150, 0.1444, 0.2134], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0095, 0.0112, 0.0092, 0.0121, 0.0095, 0.0099, 0.0091], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 12:01:08,308 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.69 vs. limit=2.0 2023-03-26 12:01:14,861 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-26 12:01:23,251 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4804, 1.4228, 1.5660, 0.7569, 1.5647, 1.5729, 1.4943, 1.3100], device='cuda:2'), covar=tensor([0.0568, 0.0749, 0.0646, 0.0932, 0.0833, 0.0657, 0.0577, 0.1249], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0132, 0.0142, 0.0123, 0.0118, 0.0141, 0.0141, 0.0160], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:01:24,698 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. limit=2.0 2023-03-26 12:01:48,149 INFO [finetune.py:976] (2/7) Epoch 10, batch 2850, loss[loss=0.2049, simple_loss=0.2651, pruned_loss=0.07229, over 4858.00 frames. ], tot_loss[loss=0.1929, simple_loss=0.2571, pruned_loss=0.06437, over 955236.31 frames. 
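The `zipformer.py:2441` diagnostics print one attention-entropy value per head (eight values, matching `nhead=8`), alongside covariance statistics of the projection weights. The per-head entropy itself is straightforward to compute; a self-contained re-implementation for illustration, not the module's own code:

```python
# Entropy of each head's attention distribution, averaged over queries.
import torch

def attn_weights_entropy(attn_weights, eps=1e-20):
    """attn_weights: (num_heads, tgt_len, src_len); rows sum to 1."""
    p = attn_weights.clamp(min=eps)
    ent = -(p * p.log()).sum(dim=-1)    # (num_heads, tgt_len)
    return ent.mean(dim=-1)             # one scalar per head

weights = torch.softmax(torch.randn(8, 50, 50), dim=-1)
print(attn_weights_entropy(weights))    # 8 values, like the logged tensors
```

A head whose entropy drifts toward zero attends to essentially one position; values near log(src_len) mean nearly uniform attention.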
], batch size: 31, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:01:54,530 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6005, 1.4665, 1.3702, 1.6639, 1.6522, 1.7037, 0.8823, 1.4160], device='cuda:2'), covar=tensor([0.2047, 0.2030, 0.1860, 0.1552, 0.1502, 0.1064, 0.2576, 0.1818], device='cuda:2'), in_proj_covar=tensor([0.0238, 0.0210, 0.0210, 0.0191, 0.0243, 0.0182, 0.0215, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:02:08,218 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4069, 1.2913, 1.9272, 2.7537, 1.9009, 2.0076, 0.8957, 2.1493], device='cuda:2'), covar=tensor([0.1998, 0.1830, 0.1418, 0.0873, 0.0940, 0.1623, 0.2086, 0.0889], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0116, 0.0134, 0.0164, 0.0101, 0.0137, 0.0126, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 12:02:10,452 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.958e+01 1.628e+02 1.894e+02 2.190e+02 3.699e+02, threshold=3.787e+02, percent-clipped=0.0 2023-03-26 12:02:11,844 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9265, 1.7481, 1.5297, 1.6141, 1.6600, 1.6014, 1.6682, 2.3575], device='cuda:2'), covar=tensor([0.4741, 0.5041, 0.3805, 0.4890, 0.4954, 0.2717, 0.4624, 0.2078], device='cuda:2'), in_proj_covar=tensor([0.0285, 0.0259, 0.0222, 0.0279, 0.0243, 0.0208, 0.0245, 0.0211], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:02:22,196 INFO [finetune.py:976] (2/7) Epoch 10, batch 2900, loss[loss=0.1613, simple_loss=0.2357, pruned_loss=0.04345, over 4755.00 frames. ], tot_loss[loss=0.196, simple_loss=0.2604, pruned_loss=0.06581, over 955600.34 frames. ], batch size: 26, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:02:23,613 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.71 vs. limit=2.0 2023-03-26 12:02:39,667 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4666, 1.4440, 2.0084, 1.7807, 1.6809, 4.0371, 1.2910, 1.6815], device='cuda:2'), covar=tensor([0.1108, 0.1989, 0.1421, 0.1151, 0.1693, 0.0181, 0.1685, 0.1833], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0081, 0.0075, 0.0078, 0.0091, 0.0082, 0.0084, 0.0078], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 12:02:57,316 INFO [finetune.py:976] (2/7) Epoch 10, batch 2950, loss[loss=0.2924, simple_loss=0.3385, pruned_loss=0.1231, over 4142.00 frames. ], tot_loss[loss=0.1984, simple_loss=0.2637, pruned_loss=0.06657, over 955925.38 frames. 
], batch size: 65, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:03:14,106 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1505, 2.1506, 2.2365, 1.5375, 2.2513, 2.3925, 2.2135, 1.8794], device='cuda:2'), covar=tensor([0.0604, 0.0597, 0.0687, 0.0905, 0.0578, 0.0689, 0.0678, 0.1022], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0133, 0.0143, 0.0124, 0.0119, 0.0142, 0.0142, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:03:18,744 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.146e+02 1.694e+02 2.010e+02 2.318e+02 4.609e+02, threshold=4.019e+02, percent-clipped=2.0 2023-03-26 12:03:40,000 INFO [finetune.py:976] (2/7) Epoch 10, batch 3000, loss[loss=0.1504, simple_loss=0.2331, pruned_loss=0.03388, over 4755.00 frames. ], tot_loss[loss=0.1996, simple_loss=0.2651, pruned_loss=0.06711, over 954634.30 frames. ], batch size: 27, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:03:40,000 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 12:03:48,813 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9438, 1.0831, 2.0024, 1.8296, 1.6846, 1.6026, 1.6662, 1.8268], device='cuda:2'), covar=tensor([0.4316, 0.4968, 0.4480, 0.4672, 0.6094, 0.4318, 0.5728, 0.3881], device='cuda:2'), in_proj_covar=tensor([0.0235, 0.0238, 0.0252, 0.0256, 0.0251, 0.0227, 0.0274, 0.0230], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:03:56,629 INFO [finetune.py:1010] (2/7) Epoch 10, validation: loss=0.1584, simple_loss=0.2295, pruned_loss=0.04366, over 2265189.00 frames. 2023-03-26 12:03:56,629 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB 2023-03-26 12:04:29,066 INFO [finetune.py:976] (2/7) Epoch 10, batch 3050, loss[loss=0.2102, simple_loss=0.2731, pruned_loss=0.07365, over 4807.00 frames. ], tot_loss[loss=0.1996, simple_loss=0.2657, pruned_loss=0.06669, over 955208.89 frames. ], batch size: 38, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:04:47,444 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.12 vs. limit=5.0 2023-03-26 12:04:52,092 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.154e+02 1.606e+02 1.839e+02 2.259e+02 4.011e+02, threshold=3.679e+02, percent-clipped=0.0 2023-03-26 12:05:02,811 INFO [finetune.py:976] (2/7) Epoch 10, batch 3100, loss[loss=0.1628, simple_loss=0.2226, pruned_loss=0.05152, over 4830.00 frames. ], tot_loss[loss=0.1995, simple_loss=0.265, pruned_loss=0.06706, over 954682.37 frames. ], batch size: 25, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:05:05,275 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1429, 2.0930, 1.7277, 2.1351, 2.0209, 1.9451, 1.9502, 2.9957], device='cuda:2'), covar=tensor([0.5071, 0.6156, 0.4377, 0.5714, 0.5522, 0.2931, 0.5898, 0.1974], device='cuda:2'), in_proj_covar=tensor([0.0286, 0.0258, 0.0222, 0.0279, 0.0242, 0.0208, 0.0245, 0.0211], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:05:14,411 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=54664.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 12:05:36,458 INFO [finetune.py:976] (2/7) Epoch 10, batch 3150, loss[loss=0.1988, simple_loss=0.2469, pruned_loss=0.07537, over 4806.00 frames. 
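At batch 3000 the script pauses for a validation pass (`finetune.py:1001` / `finetune.py:1010`) and reports peak CUDA memory. A sketch of the shape of that step, with an assumed `model(batch) -> (loss, num_frames)` interface standing in for the real computation:

```python
# Sketch of a periodic validation pass plus the peak-memory report.
import torch

def compute_validation_loss(model, valid_dl):
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    with torch.no_grad():
        for batch in valid_dl:
            loss, num_frames = model(batch)   # assumed interface
            tot_loss += loss.item() * num_frames
            tot_frames += num_frames
    model.train()
    return tot_loss / tot_frames              # frame-weighted average

def log_max_memory(device):
    mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
    print(f"Maximum memory allocated so far is {mb}MB")
```

The frame-weighted average matches the log's "validation: loss=0.1584 ... over 2265189.00 frames" style of reporting.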
], tot_loss[loss=0.1977, simple_loss=0.2629, pruned_loss=0.0663, over 955263.30 frames. ], batch size: 51, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:05:54,654 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=54725.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 12:05:56,473 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.6976, 1.7093, 1.7882, 1.0458, 1.9504, 2.1363, 1.9736, 1.5345], device='cuda:2'), covar=tensor([0.0900, 0.0680, 0.0433, 0.0522, 0.0359, 0.0417, 0.0303, 0.0656], device='cuda:2'), in_proj_covar=tensor([0.0127, 0.0153, 0.0120, 0.0132, 0.0130, 0.0124, 0.0144, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.4257e-05, 1.1253e-04, 8.6705e-05, 9.5480e-05, 9.3294e-05, 9.0191e-05, 1.0526e-04, 1.0733e-04], device='cuda:2') 2023-03-26 12:05:59,383 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.009e+02 1.653e+02 1.993e+02 2.298e+02 5.311e+02, threshold=3.986e+02, percent-clipped=2.0 2023-03-26 12:06:10,109 INFO [finetune.py:976] (2/7) Epoch 10, batch 3200, loss[loss=0.1972, simple_loss=0.2622, pruned_loss=0.06613, over 4788.00 frames. ], tot_loss[loss=0.1936, simple_loss=0.2584, pruned_loss=0.06436, over 955892.71 frames. ], batch size: 29, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:06:10,943 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-26 12:06:11,488 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.21 vs. limit=5.0 2023-03-26 12:06:53,305 INFO [finetune.py:976] (2/7) Epoch 10, batch 3250, loss[loss=0.2267, simple_loss=0.2852, pruned_loss=0.08406, over 4892.00 frames. ], tot_loss[loss=0.1963, simple_loss=0.2605, pruned_loss=0.06605, over 955005.80 frames. ], batch size: 32, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:07:26,174 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.100e+02 1.711e+02 2.094e+02 2.546e+02 5.601e+02, threshold=4.189e+02, percent-clipped=2.0 2023-03-26 12:07:29,118 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2175, 1.8831, 1.3385, 0.5276, 1.5924, 1.8181, 1.4538, 1.7430], device='cuda:2'), covar=tensor([0.0720, 0.0888, 0.1413, 0.1919, 0.1289, 0.1999, 0.2402, 0.0792], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0201, 0.0201, 0.0186, 0.0216, 0.0207, 0.0224, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:07:46,368 INFO [finetune.py:976] (2/7) Epoch 10, batch 3300, loss[loss=0.2009, simple_loss=0.2762, pruned_loss=0.06283, over 4802.00 frames. ], tot_loss[loss=0.2004, simple_loss=0.2653, pruned_loss=0.0678, over 954687.29 frames. ], batch size: 29, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:07:52,439 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.47 vs. 
limit=2.0 2023-03-26 12:07:53,534 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.7135, 1.4965, 1.4283, 1.0915, 1.5221, 1.7676, 1.7124, 1.3641], device='cuda:2'), covar=tensor([0.0912, 0.0569, 0.0535, 0.0447, 0.0471, 0.0533, 0.0294, 0.0588], device='cuda:2'), in_proj_covar=tensor([0.0127, 0.0154, 0.0121, 0.0132, 0.0130, 0.0124, 0.0144, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.4532e-05, 1.1281e-04, 8.6928e-05, 9.5563e-05, 9.3292e-05, 9.0540e-05, 1.0557e-04, 1.0740e-04], device='cuda:2') 2023-03-26 12:07:55,810 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=54862.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:08:16,672 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.48 vs. limit=5.0 2023-03-26 12:08:20,112 INFO [finetune.py:976] (2/7) Epoch 10, batch 3350, loss[loss=0.2546, simple_loss=0.3162, pruned_loss=0.09647, over 4886.00 frames. ], tot_loss[loss=0.2015, simple_loss=0.2667, pruned_loss=0.06819, over 955196.68 frames. ], batch size: 35, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:08:47,735 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=54923.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 12:08:57,773 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.164e+02 1.693e+02 1.965e+02 2.442e+02 4.084e+02, threshold=3.930e+02, percent-clipped=0.0 2023-03-26 12:09:07,543 INFO [finetune.py:976] (2/7) Epoch 10, batch 3400, loss[loss=0.2033, simple_loss=0.2727, pruned_loss=0.06692, over 4895.00 frames. ], tot_loss[loss=0.2022, simple_loss=0.2676, pruned_loss=0.06835, over 954236.83 frames. ], batch size: 37, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:09:43,512 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7846, 1.5935, 1.3714, 1.1978, 1.5616, 1.5454, 1.5241, 2.1253], device='cuda:2'), covar=tensor([0.5004, 0.4918, 0.4011, 0.4746, 0.4705, 0.2960, 0.4542, 0.2252], device='cuda:2'), in_proj_covar=tensor([0.0285, 0.0259, 0.0222, 0.0278, 0.0242, 0.0208, 0.0245, 0.0212], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:09:55,310 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.75 vs. limit=2.0 2023-03-26 12:09:56,944 INFO [finetune.py:976] (2/7) Epoch 10, batch 3450, loss[loss=0.252, simple_loss=0.3088, pruned_loss=0.09758, over 4914.00 frames. ], tot_loss[loss=0.2025, simple_loss=0.2681, pruned_loss=0.06843, over 954961.75 frames. ], batch size: 36, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:10:01,587 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.78 vs. 
limit=5.0 2023-03-26 12:10:16,047 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=55020.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 12:10:17,923 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7514, 2.4578, 2.1133, 1.0518, 2.2828, 2.1115, 1.9326, 2.3498], device='cuda:2'), covar=tensor([0.0804, 0.0994, 0.1715, 0.2128, 0.1564, 0.2034, 0.2041, 0.0934], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0201, 0.0201, 0.0186, 0.0216, 0.0206, 0.0223, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:10:30,756 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.349e+01 1.534e+02 1.957e+02 2.350e+02 5.428e+02, threshold=3.914e+02, percent-clipped=3.0 2023-03-26 12:10:39,044 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3904, 2.2332, 1.8209, 0.8816, 1.9179, 1.8772, 1.6866, 2.0294], device='cuda:2'), covar=tensor([0.0916, 0.0795, 0.1396, 0.2002, 0.1489, 0.2036, 0.2125, 0.0972], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0201, 0.0201, 0.0186, 0.0215, 0.0206, 0.0223, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:10:51,432 INFO [finetune.py:976] (2/7) Epoch 10, batch 3500, loss[loss=0.1711, simple_loss=0.2296, pruned_loss=0.05633, over 4761.00 frames. ], tot_loss[loss=0.2009, simple_loss=0.2656, pruned_loss=0.06808, over 952941.40 frames. ], batch size: 54, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:10:52,709 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9246, 4.2109, 3.8449, 2.0724, 4.0509, 3.2120, 0.9958, 2.8175], device='cuda:2'), covar=tensor([0.2269, 0.1522, 0.1453, 0.3160, 0.1022, 0.0920, 0.4549, 0.1446], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0173, 0.0158, 0.0127, 0.0154, 0.0122, 0.0144, 0.0121], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 12:11:36,219 INFO [finetune.py:976] (2/7) Epoch 10, batch 3550, loss[loss=0.1684, simple_loss=0.2347, pruned_loss=0.05102, over 4853.00 frames. ], tot_loss[loss=0.197, simple_loss=0.2616, pruned_loss=0.06624, over 953906.65 frames. 
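The `scaling.py:679` lines compare a per-module whitening metric against a fixed limit (`metric=4.78 vs. limit=5.0` above), presumably so that a corrective penalty can kick in once the limit is exceeded. One plausible form of such a metric, equal to 1.0 for perfectly white (isotropic) features and growing with eigenvalue spread; this is an illustrative guess, not the icefall definition:

```python
# Illustrative whitening metric: eigenvalue spread of the per-group
# channel covariance, normalized so that white features score 1.0.
import torch

def whitening_metric(x, num_groups):
    """x: (num_frames, num_channels); channels are split into num_groups."""
    n, c = x.shape
    x = x.reshape(n, num_groups, c // num_groups).transpose(0, 1)
    x = x - x.mean(dim=1, keepdim=True)
    cov = torch.matmul(x.transpose(1, 2), x) / n     # per-group covariance
    eigs = torch.linalg.eigvalsh(cov)                # per-group eigenvalues
    metric = (eigs ** 2).mean(dim=-1) / eigs.mean(dim=-1) ** 2
    return metric.mean()                             # 1.0 iff eigenvalues equal

x = torch.randn(1000, 96)
print(whitening_metric(x, num_groups=8).item(), "vs. limit=2.0")
```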
], batch size: 47, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:11:37,530 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4786, 1.1688, 1.1852, 1.1471, 1.6439, 1.5802, 1.4296, 1.2745], device='cuda:2'), covar=tensor([0.0309, 0.0392, 0.0794, 0.0415, 0.0264, 0.0431, 0.0311, 0.0457], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0109, 0.0139, 0.0114, 0.0101, 0.0103, 0.0092, 0.0108], device='cuda:2'), out_proj_covar=tensor([7.0927e-05, 8.5410e-05, 1.1063e-04, 8.9626e-05, 7.9314e-05, 7.6471e-05, 6.9713e-05, 8.2996e-05], device='cuda:2') 2023-03-26 12:11:58,138 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1536, 2.1316, 1.7670, 2.1543, 2.2022, 1.8658, 2.4643, 2.2092], device='cuda:2'), covar=tensor([0.1184, 0.2290, 0.2916, 0.2609, 0.2342, 0.1500, 0.3092, 0.1724], device='cuda:2'), in_proj_covar=tensor([0.0174, 0.0187, 0.0233, 0.0253, 0.0238, 0.0196, 0.0212, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:11:58,578 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.369e+01 1.530e+02 1.896e+02 2.280e+02 4.793e+02, threshold=3.791e+02, percent-clipped=5.0 2023-03-26 12:12:09,207 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=55148.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:12:09,742 INFO [finetune.py:976] (2/7) Epoch 10, batch 3600, loss[loss=0.2338, simple_loss=0.2782, pruned_loss=0.09467, over 4808.00 frames. ], tot_loss[loss=0.1944, simple_loss=0.2582, pruned_loss=0.06533, over 954646.92 frames. ], batch size: 39, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:12:43,417 INFO [finetune.py:976] (2/7) Epoch 10, batch 3650, loss[loss=0.2386, simple_loss=0.2801, pruned_loss=0.09857, over 4241.00 frames. ], tot_loss[loss=0.1954, simple_loss=0.2593, pruned_loss=0.06578, over 954293.14 frames. ], batch size: 18, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:12:49,692 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=55209.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:12:52,577 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=55211.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:12:56,771 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=55218.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 12:13:05,133 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7196, 1.6975, 1.5115, 1.5813, 2.0824, 1.8801, 1.7038, 1.4675], device='cuda:2'), covar=tensor([0.0295, 0.0270, 0.0563, 0.0329, 0.0198, 0.0550, 0.0325, 0.0408], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0109, 0.0138, 0.0114, 0.0101, 0.0102, 0.0091, 0.0107], device='cuda:2'), out_proj_covar=tensor([7.0504e-05, 8.4845e-05, 1.0998e-04, 8.9048e-05, 7.8876e-05, 7.5876e-05, 6.9269e-05, 8.2553e-05], device='cuda:2') 2023-03-26 12:13:14,907 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.143e+02 1.616e+02 1.938e+02 2.270e+02 4.700e+02, threshold=3.875e+02, percent-clipped=1.0 2023-03-26 12:13:26,509 INFO [finetune.py:976] (2/7) Epoch 10, batch 3700, loss[loss=0.1647, simple_loss=0.2208, pruned_loss=0.05431, over 4676.00 frames. ], tot_loss[loss=0.1979, simple_loss=0.2629, pruned_loss=0.06646, over 955113.48 frames. 
], batch size: 23, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:13:40,332 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=55272.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:14:00,006 INFO [finetune.py:976] (2/7) Epoch 10, batch 3750, loss[loss=0.165, simple_loss=0.241, pruned_loss=0.04451, over 4824.00 frames. ], tot_loss[loss=0.1997, simple_loss=0.2651, pruned_loss=0.0672, over 952844.68 frames. ], batch size: 38, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:14:16,772 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=55320.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 12:14:33,804 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.113e+02 1.583e+02 1.835e+02 2.150e+02 3.880e+02, threshold=3.669e+02, percent-clipped=1.0 2023-03-26 12:14:45,562 INFO [finetune.py:976] (2/7) Epoch 10, batch 3800, loss[loss=0.1573, simple_loss=0.2312, pruned_loss=0.04175, over 4839.00 frames. ], tot_loss[loss=0.2025, simple_loss=0.2681, pruned_loss=0.06844, over 951777.78 frames. ], batch size: 49, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:14:57,816 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=55368.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 12:15:27,044 INFO [finetune.py:976] (2/7) Epoch 10, batch 3850, loss[loss=0.1686, simple_loss=0.2388, pruned_loss=0.04925, over 4925.00 frames. ], tot_loss[loss=0.2011, simple_loss=0.2663, pruned_loss=0.06793, over 951363.71 frames. ], batch size: 33, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:15:38,106 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.5588, 1.5913, 1.5330, 1.0048, 1.6160, 1.8396, 1.7780, 1.4083], device='cuda:2'), covar=tensor([0.0987, 0.0680, 0.0517, 0.0539, 0.0432, 0.0562, 0.0325, 0.0694], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0156, 0.0123, 0.0134, 0.0132, 0.0125, 0.0146, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.5531e-05, 1.1425e-04, 8.8418e-05, 9.7162e-05, 9.4680e-05, 9.0995e-05, 1.0681e-04, 1.0779e-04], device='cuda:2') 2023-03-26 12:15:49,874 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.001e+02 1.578e+02 1.920e+02 2.344e+02 4.809e+02, threshold=3.839e+02, percent-clipped=3.0 2023-03-26 12:16:01,518 INFO [finetune.py:976] (2/7) Epoch 10, batch 3900, loss[loss=0.1757, simple_loss=0.2422, pruned_loss=0.05466, over 4751.00 frames. ], tot_loss[loss=0.1988, simple_loss=0.2634, pruned_loss=0.06713, over 953843.86 frames. 
], batch size: 27, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:16:14,916 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8609, 0.9706, 1.8474, 1.7184, 1.5862, 1.5470, 1.5843, 1.6653], device='cuda:2'), covar=tensor([0.3896, 0.4606, 0.3743, 0.3899, 0.5225, 0.4004, 0.4984, 0.3591], device='cuda:2'), in_proj_covar=tensor([0.0235, 0.0239, 0.0253, 0.0257, 0.0252, 0.0229, 0.0274, 0.0230], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:16:15,488 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9975, 1.9360, 1.9861, 1.3145, 2.0754, 2.1203, 2.0402, 1.6558], device='cuda:2'), covar=tensor([0.0557, 0.0606, 0.0681, 0.0921, 0.0588, 0.0673, 0.0600, 0.1078], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0133, 0.0143, 0.0124, 0.0120, 0.0142, 0.0143, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:16:44,730 INFO [finetune.py:976] (2/7) Epoch 10, batch 3950, loss[loss=0.172, simple_loss=0.2387, pruned_loss=0.0527, over 4822.00 frames. ], tot_loss[loss=0.1962, simple_loss=0.2603, pruned_loss=0.06609, over 954354.40 frames. ], batch size: 39, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:16:48,773 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=55504.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:16:58,346 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=55518.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 12:17:13,739 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.106e+02 1.580e+02 1.857e+02 2.217e+02 3.906e+02, threshold=3.714e+02, percent-clipped=1.0 2023-03-26 12:17:35,771 INFO [finetune.py:976] (2/7) Epoch 10, batch 4000, loss[loss=0.2216, simple_loss=0.3004, pruned_loss=0.0714, over 4817.00 frames. ], tot_loss[loss=0.196, simple_loss=0.2597, pruned_loss=0.06615, over 955125.41 frames. ], batch size: 40, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:17:36,471 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5652, 1.4626, 2.1083, 1.8815, 1.7482, 3.8478, 1.3432, 1.8802], device='cuda:2'), covar=tensor([0.0950, 0.1803, 0.1188, 0.0920, 0.1489, 0.0218, 0.1531, 0.1578], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0081, 0.0075, 0.0077, 0.0090, 0.0082, 0.0084, 0.0078], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 12:17:48,293 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=55566.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:17:48,908 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=55567.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:17:59,385 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.53 vs. limit=5.0 2023-03-26 12:18:09,105 INFO [finetune.py:976] (2/7) Epoch 10, batch 4050, loss[loss=0.2344, simple_loss=0.3077, pruned_loss=0.08052, over 4811.00 frames. ], tot_loss[loss=0.1988, simple_loss=0.263, pruned_loss=0.06728, over 955691.19 frames. ], batch size: 41, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:18:23,272 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.13 vs. 
limit=2.0 2023-03-26 12:18:34,628 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.120e+02 1.736e+02 2.138e+02 2.510e+02 4.140e+02, threshold=4.276e+02, percent-clipped=4.0 2023-03-26 12:18:44,820 INFO [finetune.py:976] (2/7) Epoch 10, batch 4100, loss[loss=0.2224, simple_loss=0.2833, pruned_loss=0.08075, over 4790.00 frames. ], tot_loss[loss=0.2012, simple_loss=0.2661, pruned_loss=0.0681, over 955902.14 frames. ], batch size: 25, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:19:17,498 INFO [finetune.py:976] (2/7) Epoch 10, batch 4150, loss[loss=0.1949, simple_loss=0.2673, pruned_loss=0.06128, over 4896.00 frames. ], tot_loss[loss=0.2008, simple_loss=0.2665, pruned_loss=0.06753, over 954125.65 frames. ], batch size: 36, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:19:49,995 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.152e+02 1.688e+02 2.035e+02 2.462e+02 3.895e+02, threshold=4.069e+02, percent-clipped=0.0 2023-03-26 12:19:59,099 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=55748.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:19:59,612 INFO [finetune.py:976] (2/7) Epoch 10, batch 4200, loss[loss=0.1654, simple_loss=0.2287, pruned_loss=0.05108, over 4912.00 frames. ], tot_loss[loss=0.1989, simple_loss=0.2651, pruned_loss=0.06636, over 956809.14 frames. ], batch size: 37, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:20:53,373 INFO [finetune.py:976] (2/7) Epoch 10, batch 4250, loss[loss=0.1546, simple_loss=0.2245, pruned_loss=0.04233, over 4757.00 frames. ], tot_loss[loss=0.1974, simple_loss=0.2629, pruned_loss=0.06596, over 957949.87 frames. ], batch size: 26, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:20:54,236 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.0347, 3.5273, 3.6921, 3.8856, 3.7832, 3.5156, 4.1288, 1.2824], device='cuda:2'), covar=tensor([0.0835, 0.0834, 0.0792, 0.0953, 0.1254, 0.1772, 0.0755, 0.5347], device='cuda:2'), in_proj_covar=tensor([0.0346, 0.0243, 0.0274, 0.0289, 0.0328, 0.0282, 0.0299, 0.0293], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:20:57,628 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=55804.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:21:06,135 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=55809.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:21:38,840 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.929e+01 1.631e+02 1.908e+02 2.201e+02 4.056e+02, threshold=3.816e+02, percent-clipped=0.0 2023-03-26 12:21:48,081 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=55848.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:21:48,566 INFO [finetune.py:976] (2/7) Epoch 10, batch 4300, loss[loss=0.212, simple_loss=0.2707, pruned_loss=0.07663, over 4784.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.26, pruned_loss=0.06454, over 958721.93 frames. 
], batch size: 29, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:21:50,382 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=55852.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:22:10,848 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=55867.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:22:36,977 INFO [finetune.py:976] (2/7) Epoch 10, batch 4350, loss[loss=0.1918, simple_loss=0.2628, pruned_loss=0.06036, over 4927.00 frames. ], tot_loss[loss=0.1918, simple_loss=0.2569, pruned_loss=0.06329, over 959850.34 frames. ], batch size: 38, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:22:48,863 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=55909.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:22:58,948 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=55915.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:23:23,253 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.076e+02 1.622e+02 1.928e+02 2.410e+02 3.855e+02, threshold=3.856e+02, percent-clipped=1.0 2023-03-26 12:23:37,719 INFO [finetune.py:976] (2/7) Epoch 10, batch 4400, loss[loss=0.2514, simple_loss=0.32, pruned_loss=0.09142, over 4755.00 frames. ], tot_loss[loss=0.1919, simple_loss=0.2574, pruned_loss=0.06323, over 958196.04 frames. ], batch size: 54, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:23:56,977 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4346, 1.3583, 1.2635, 1.3119, 1.6164, 1.5381, 1.4682, 1.2386], device='cuda:2'), covar=tensor([0.0358, 0.0322, 0.0568, 0.0301, 0.0251, 0.0511, 0.0281, 0.0405], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0109, 0.0138, 0.0113, 0.0100, 0.0102, 0.0091, 0.0106], device='cuda:2'), out_proj_covar=tensor([7.0424e-05, 8.5401e-05, 1.1013e-04, 8.8932e-05, 7.8498e-05, 7.5751e-05, 6.9272e-05, 8.1874e-05], device='cuda:2') 2023-03-26 12:24:11,805 INFO [finetune.py:976] (2/7) Epoch 10, batch 4450, loss[loss=0.2051, simple_loss=0.2751, pruned_loss=0.06759, over 4894.00 frames. ], tot_loss[loss=0.197, simple_loss=0.2622, pruned_loss=0.06592, over 954525.55 frames. ], batch size: 35, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:24:36,681 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.054e+02 1.562e+02 1.840e+02 2.258e+02 4.729e+02, threshold=3.681e+02, percent-clipped=2.0 2023-03-26 12:24:40,411 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8545, 1.2495, 1.8696, 1.8067, 1.5699, 1.5199, 1.6467, 1.6517], device='cuda:2'), covar=tensor([0.3916, 0.4439, 0.3640, 0.4227, 0.5109, 0.3942, 0.5071, 0.3577], device='cuda:2'), in_proj_covar=tensor([0.0235, 0.0238, 0.0252, 0.0257, 0.0252, 0.0228, 0.0273, 0.0230], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:24:46,909 INFO [finetune.py:976] (2/7) Epoch 10, batch 4500, loss[loss=0.2212, simple_loss=0.2849, pruned_loss=0.07871, over 4921.00 frames. ], tot_loss[loss=0.1971, simple_loss=0.2628, pruned_loss=0.06565, over 954913.63 frames. ], batch size: 33, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:25:19,178 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0 2023-03-26 12:25:31,193 INFO [finetune.py:976] (2/7) Epoch 10, batch 4550, loss[loss=0.2317, simple_loss=0.2963, pruned_loss=0.08358, over 4832.00 frames. ], tot_loss[loss=0.2001, simple_loss=0.2656, pruned_loss=0.0673, over 954685.88 frames. 
], batch size: 49, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:25:34,330 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=56104.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:25:40,425 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4745, 1.4140, 1.5122, 0.8420, 1.5673, 1.4870, 1.4369, 1.3250], device='cuda:2'), covar=tensor([0.0616, 0.0757, 0.0691, 0.0946, 0.0789, 0.0792, 0.0670, 0.1211], device='cuda:2'), in_proj_covar=tensor([0.0135, 0.0135, 0.0145, 0.0125, 0.0121, 0.0144, 0.0144, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:25:53,264 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.089e+02 1.686e+02 1.941e+02 2.447e+02 3.858e+02, threshold=3.882e+02, percent-clipped=3.0 2023-03-26 12:26:02,519 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=56145.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:26:04,873 INFO [finetune.py:976] (2/7) Epoch 10, batch 4600, loss[loss=0.2345, simple_loss=0.29, pruned_loss=0.08955, over 4809.00 frames. ], tot_loss[loss=0.1992, simple_loss=0.2649, pruned_loss=0.06678, over 955409.62 frames. ], batch size: 39, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:26:07,542 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.42 vs. limit=2.0 2023-03-26 12:26:09,848 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6352, 1.4011, 2.0404, 3.2431, 2.1663, 2.3785, 0.9880, 2.4973], device='cuda:2'), covar=tensor([0.1645, 0.1467, 0.1244, 0.0498, 0.0760, 0.1513, 0.1789, 0.0583], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0116, 0.0134, 0.0163, 0.0100, 0.0137, 0.0126, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 12:26:09,937 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.75 vs. limit=2.0 2023-03-26 12:26:40,465 INFO [finetune.py:976] (2/7) Epoch 10, batch 4650, loss[loss=0.1953, simple_loss=0.2546, pruned_loss=0.06795, over 4940.00 frames. ], tot_loss[loss=0.1989, simple_loss=0.2634, pruned_loss=0.0672, over 954612.85 frames. 
], batch size: 33, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:26:40,537 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8046, 3.8443, 3.7628, 1.7673, 3.9724, 2.9809, 0.8307, 2.6922], device='cuda:2'), covar=tensor([0.2213, 0.2280, 0.1380, 0.3240, 0.0992, 0.0972, 0.4254, 0.1488], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0176, 0.0160, 0.0129, 0.0156, 0.0122, 0.0146, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 12:26:43,715 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=56204.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:26:44,991 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=56206.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:26:51,097 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3359, 2.8625, 2.7518, 1.2817, 2.9612, 2.1581, 0.6331, 1.8761], device='cuda:2'), covar=tensor([0.2345, 0.2222, 0.1747, 0.3416, 0.1435, 0.1176, 0.4295, 0.1749], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0175, 0.0159, 0.0129, 0.0156, 0.0122, 0.0146, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 12:26:57,344 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.66 vs. limit=2.0 2023-03-26 12:27:11,826 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.027e+02 1.561e+02 1.851e+02 2.355e+02 3.865e+02, threshold=3.702e+02, percent-clipped=0.0 2023-03-26 12:27:23,135 INFO [finetune.py:976] (2/7) Epoch 10, batch 4700, loss[loss=0.1986, simple_loss=0.2479, pruned_loss=0.07461, over 4926.00 frames. ], tot_loss[loss=0.1953, simple_loss=0.2595, pruned_loss=0.06555, over 956085.30 frames. ], batch size: 38, lr: 3.73e-03, grad_scale: 64.0 2023-03-26 12:27:25,093 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8503, 1.8073, 1.7030, 1.7478, 1.3628, 4.0143, 1.6490, 2.1056], device='cuda:2'), covar=tensor([0.3124, 0.2177, 0.1941, 0.2156, 0.1610, 0.0157, 0.2537, 0.1239], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0115, 0.0120, 0.0123, 0.0116, 0.0098, 0.0099, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 12:28:03,981 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9667, 1.8352, 1.5279, 1.6883, 1.8935, 1.6514, 2.1290, 1.9251], device='cuda:2'), covar=tensor([0.1562, 0.2333, 0.3447, 0.2811, 0.2922, 0.1831, 0.3010, 0.1960], device='cuda:2'), in_proj_covar=tensor([0.0174, 0.0187, 0.0232, 0.0252, 0.0239, 0.0196, 0.0211, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:28:09,015 INFO [finetune.py:976] (2/7) Epoch 10, batch 4750, loss[loss=0.1946, simple_loss=0.2571, pruned_loss=0.06608, over 4872.00 frames. ], tot_loss[loss=0.1924, simple_loss=0.2563, pruned_loss=0.06423, over 955189.96 frames. 
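The `grad_scale` field flips between 32.0 and 64.0 across these batches (64.0 from batch 4700 above, back down later), the signature of dynamic fp16 loss scaling: the scale doubles after a long run of overflow-free steps and is cut back when a gradient overflows. A minimal loop showing the mechanism with `torch.cuda.amp`; the fine-tuning script's own optimizer wrapping may differ:

```python
# Dynamic loss scaling for fp16 training, as reflected in `grad_scale`.
import torch

scaler = torch.cuda.amp.GradScaler(init_scale=32.0, growth_factor=2.0,
                                   backoff_factor=0.5, growth_interval=2000)

def training_step(model, optimizer, batch):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = model(batch)        # assumed to return a scalar loss
    scaler.scale(loss).backward()
    scaler.step(optimizer)         # silently skips the step on overflow
    scaler.update()                # grows or shrinks the scale
    return loss.detach(), scaler.get_scale()
```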
], batch size: 31, lr: 3.73e-03, grad_scale: 64.0 2023-03-26 12:28:15,120 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5851, 1.4172, 2.0429, 3.0629, 2.1170, 2.3460, 0.9545, 2.4969], device='cuda:2'), covar=tensor([0.1696, 0.1518, 0.1297, 0.0635, 0.0798, 0.1321, 0.1917, 0.0565], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0133, 0.0163, 0.0100, 0.0137, 0.0125, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 12:28:16,375 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3659, 1.2490, 1.2761, 1.2925, 0.7485, 2.2481, 0.7480, 1.1774], device='cuda:2'), covar=tensor([0.3417, 0.2471, 0.2140, 0.2486, 0.2138, 0.0362, 0.2822, 0.1395], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0115, 0.0120, 0.0123, 0.0116, 0.0098, 0.0099, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 12:28:28,569 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.9748, 0.8409, 0.8328, 0.8642, 1.1962, 1.0917, 1.0303, 0.9126], device='cuda:2'), covar=tensor([0.0343, 0.0283, 0.0669, 0.0310, 0.0238, 0.0362, 0.0278, 0.0356], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0110, 0.0140, 0.0114, 0.0101, 0.0103, 0.0092, 0.0107], device='cuda:2'), out_proj_covar=tensor([7.0935e-05, 8.5775e-05, 1.1108e-04, 8.9383e-05, 7.8821e-05, 7.6383e-05, 6.9585e-05, 8.2696e-05], device='cuda:2') 2023-03-26 12:28:30,218 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.051e+02 1.456e+02 1.800e+02 2.273e+02 6.888e+02, threshold=3.601e+02, percent-clipped=2.0 2023-03-26 12:28:42,336 INFO [finetune.py:976] (2/7) Epoch 10, batch 4800, loss[loss=0.2663, simple_loss=0.3261, pruned_loss=0.1032, over 4849.00 frames. ], tot_loss[loss=0.1996, simple_loss=0.2629, pruned_loss=0.0681, over 953738.18 frames. 
], batch size: 49, lr: 3.73e-03, grad_scale: 64.0 2023-03-26 12:28:57,436 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4944, 1.9163, 1.4618, 1.5780, 2.1393, 1.8511, 1.7927, 1.7558], device='cuda:2'), covar=tensor([0.0449, 0.0295, 0.0496, 0.0310, 0.0233, 0.0639, 0.0292, 0.0395], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0110, 0.0139, 0.0114, 0.0100, 0.0103, 0.0092, 0.0107], device='cuda:2'), out_proj_covar=tensor([7.0844e-05, 8.5598e-05, 1.1062e-04, 8.9343e-05, 7.8499e-05, 7.6383e-05, 6.9361e-05, 8.2579e-05], device='cuda:2') 2023-03-26 12:29:09,690 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6411, 1.2127, 1.0092, 1.6232, 2.1360, 1.0420, 1.5224, 1.4487], device='cuda:2'), covar=tensor([0.1614, 0.2255, 0.1979, 0.1244, 0.1971, 0.2202, 0.1490, 0.2227], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0097, 0.0114, 0.0093, 0.0122, 0.0095, 0.0100, 0.0092], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 12:29:12,058 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4777, 1.5562, 2.0411, 1.8717, 1.8649, 4.1331, 1.4498, 1.8214], device='cuda:2'), covar=tensor([0.1036, 0.1780, 0.1236, 0.1015, 0.1525, 0.0196, 0.1485, 0.1684], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0081, 0.0076, 0.0078, 0.0092, 0.0082, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 12:29:14,947 INFO [finetune.py:976] (2/7) Epoch 10, batch 4850, loss[loss=0.1888, simple_loss=0.2638, pruned_loss=0.05686, over 4838.00 frames. ], tot_loss[loss=0.2008, simple_loss=0.2654, pruned_loss=0.06813, over 954816.17 frames. ], batch size: 33, lr: 3.73e-03, grad_scale: 64.0 2023-03-26 12:29:19,690 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=56404.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:29:26,343 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5461, 1.4050, 1.3062, 1.5653, 1.7449, 1.6166, 0.9616, 1.3430], device='cuda:2'), covar=tensor([0.2145, 0.2061, 0.1917, 0.1576, 0.1552, 0.1235, 0.2525, 0.1784], device='cuda:2'), in_proj_covar=tensor([0.0236, 0.0207, 0.0207, 0.0188, 0.0240, 0.0181, 0.0213, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:29:37,025 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.167e+02 1.738e+02 2.004e+02 2.451e+02 5.164e+02, threshold=4.009e+02, percent-clipped=2.0 2023-03-26 12:29:40,750 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-26 12:29:48,224 INFO [finetune.py:976] (2/7) Epoch 10, batch 4900, loss[loss=0.1696, simple_loss=0.237, pruned_loss=0.0511, over 4406.00 frames. ], tot_loss[loss=0.2019, simple_loss=0.2669, pruned_loss=0.06842, over 955515.84 frames. 
], batch size: 19, lr: 3.73e-03, grad_scale: 64.0 2023-03-26 12:29:50,492 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=56452.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:30:12,073 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4255, 2.6951, 2.5078, 1.8883, 2.5732, 2.8147, 2.7201, 2.3422], device='cuda:2'), covar=tensor([0.0678, 0.0521, 0.0718, 0.0878, 0.0666, 0.0699, 0.0596, 0.0898], device='cuda:2'), in_proj_covar=tensor([0.0135, 0.0134, 0.0144, 0.0126, 0.0121, 0.0144, 0.0144, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:30:26,420 INFO [finetune.py:976] (2/7) Epoch 10, batch 4950, loss[loss=0.1714, simple_loss=0.2427, pruned_loss=0.05002, over 4899.00 frames. ], tot_loss[loss=0.2022, simple_loss=0.2675, pruned_loss=0.06846, over 956185.31 frames. ], batch size: 36, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:30:32,446 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=56501.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:30:34,357 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=56504.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:30:49,789 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=56523.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 12:30:55,720 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.081e+02 1.612e+02 1.965e+02 2.275e+02 4.231e+02, threshold=3.931e+02, percent-clipped=1.0 2023-03-26 12:31:06,809 INFO [finetune.py:976] (2/7) Epoch 10, batch 5000, loss[loss=0.1959, simple_loss=0.2582, pruned_loss=0.06677, over 4767.00 frames. ], tot_loss[loss=0.2003, simple_loss=0.265, pruned_loss=0.06776, over 954499.44 frames. ], batch size: 51, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:31:08,678 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=56552.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:31:08,705 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.1343, 3.6079, 3.7564, 4.0059, 3.8516, 3.5990, 4.2375, 1.3620], device='cuda:2'), covar=tensor([0.0909, 0.0857, 0.0869, 0.1108, 0.1432, 0.1956, 0.0815, 0.5705], device='cuda:2'), in_proj_covar=tensor([0.0350, 0.0246, 0.0278, 0.0292, 0.0334, 0.0287, 0.0303, 0.0297], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:31:29,653 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=56584.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 12:31:31,501 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0812, 1.9786, 2.1571, 1.3652, 2.1517, 2.2334, 2.1082, 1.6657], device='cuda:2'), covar=tensor([0.0581, 0.0624, 0.0637, 0.0943, 0.0635, 0.0645, 0.0580, 0.1093], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0134, 0.0144, 0.0125, 0.0120, 0.0144, 0.0143, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:31:39,220 INFO [finetune.py:976] (2/7) Epoch 10, batch 5050, loss[loss=0.1902, simple_loss=0.2611, pruned_loss=0.05964, over 4820.00 frames. ], tot_loss[loss=0.1979, simple_loss=0.262, pruned_loss=0.06684, over 953224.14 frames. 
], batch size: 38, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:32:04,810 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.243e+02 1.580e+02 1.790e+02 2.049e+02 5.062e+02, threshold=3.579e+02, percent-clipped=1.0 2023-03-26 12:32:14,687 INFO [finetune.py:976] (2/7) Epoch 10, batch 5100, loss[loss=0.1739, simple_loss=0.2372, pruned_loss=0.05533, over 4824.00 frames. ], tot_loss[loss=0.1932, simple_loss=0.2573, pruned_loss=0.06455, over 956142.31 frames. ], batch size: 38, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:32:18,878 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.65 vs. limit=5.0 2023-03-26 12:32:55,111 INFO [finetune.py:976] (2/7) Epoch 10, batch 5150, loss[loss=0.1881, simple_loss=0.2636, pruned_loss=0.05632, over 4738.00 frames. ], tot_loss[loss=0.1943, simple_loss=0.2582, pruned_loss=0.06515, over 954589.16 frames. ], batch size: 27, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:33:11,359 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1999, 1.9002, 2.1095, 2.0408, 1.8034, 1.8517, 2.0285, 2.0707], device='cuda:2'), covar=tensor([0.3567, 0.4285, 0.3141, 0.4548, 0.4970, 0.4200, 0.5321, 0.3023], device='cuda:2'), in_proj_covar=tensor([0.0236, 0.0240, 0.0253, 0.0258, 0.0253, 0.0230, 0.0275, 0.0230], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:33:23,763 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.67 vs. limit=2.0 2023-03-26 12:33:27,185 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.919e+01 1.632e+02 1.974e+02 2.331e+02 5.610e+02, threshold=3.948e+02, percent-clipped=3.0 2023-03-26 12:33:35,168 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.0668, 2.7676, 2.2725, 1.2880, 2.5613, 2.3898, 2.0888, 2.4539], device='cuda:2'), covar=tensor([0.0507, 0.0651, 0.1098, 0.1652, 0.1009, 0.1422, 0.1748, 0.0801], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0200, 0.0202, 0.0187, 0.0215, 0.0208, 0.0224, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:33:36,882 INFO [finetune.py:976] (2/7) Epoch 10, batch 5200, loss[loss=0.2226, simple_loss=0.2957, pruned_loss=0.07475, over 4911.00 frames. ], tot_loss[loss=0.1979, simple_loss=0.2625, pruned_loss=0.06669, over 953817.49 frames. ], batch size: 36, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:34:10,223 INFO [finetune.py:976] (2/7) Epoch 10, batch 5250, loss[loss=0.1744, simple_loss=0.2589, pruned_loss=0.04493, over 4805.00 frames. ], tot_loss[loss=0.1995, simple_loss=0.2645, pruned_loss=0.06719, over 955684.20 frames. 
], batch size: 40, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:34:11,649 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=56801.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:34:12,868 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=56803.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:34:27,060 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7848, 1.3103, 0.8442, 1.7067, 2.1063, 1.5246, 1.5964, 1.7242], device='cuda:2'), covar=tensor([0.1575, 0.2182, 0.2180, 0.1260, 0.2000, 0.2074, 0.1467, 0.2066], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0097, 0.0114, 0.0093, 0.0122, 0.0096, 0.0100, 0.0092], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-26 12:34:32,517 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2187, 1.8004, 2.4779, 3.9685, 2.7518, 2.8676, 1.1349, 3.3001], device='cuda:2'), covar=tensor([0.1696, 0.1489, 0.1414, 0.0597, 0.0795, 0.1481, 0.1784, 0.0499], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0117, 0.0136, 0.0165, 0.0101, 0.0138, 0.0126, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 12:34:34,254 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.164e+02 1.706e+02 2.047e+02 2.503e+02 5.084e+02, threshold=4.093e+02, percent-clipped=2.0 2023-03-26 12:34:43,965 INFO [finetune.py:976] (2/7) Epoch 10, batch 5300, loss[loss=0.2343, simple_loss=0.2956, pruned_loss=0.08649, over 4808.00 frames. ], tot_loss[loss=0.2005, simple_loss=0.2658, pruned_loss=0.0676, over 954876.48 frames. ], batch size: 39, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:34:44,026 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=56849.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:34:50,141 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5285, 1.3996, 1.5626, 0.8757, 1.4849, 1.5596, 1.4840, 1.3374], device='cuda:2'), covar=tensor([0.0549, 0.0766, 0.0652, 0.0909, 0.0939, 0.0668, 0.0605, 0.1155], device='cuda:2'), in_proj_covar=tensor([0.0135, 0.0135, 0.0144, 0.0126, 0.0121, 0.0144, 0.0144, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 12:34:53,676 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=56864.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:35:04,619 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=56879.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 12:35:17,628 INFO [finetune.py:976] (2/7) Epoch 10, batch 5350, loss[loss=0.1842, simple_loss=0.2548, pruned_loss=0.05681, over 4764.00 frames. ], tot_loss[loss=0.1996, simple_loss=0.2653, pruned_loss=0.06698, over 954797.87 frames. 
2023-03-26 12:35:23,291 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=56908.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:35:25,163 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3267, 2.2124, 2.3555, 1.6885, 2.3018, 2.4894, 2.4394, 1.9523], device='cuda:2'), covar=tensor([0.0621, 0.0658, 0.0722, 0.0917, 0.0682, 0.0728, 0.0620, 0.1038], device='cuda:2'), in_proj_covar=tensor([0.0135, 0.0135, 0.0144, 0.0126, 0.0121, 0.0144, 0.0144, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 12:35:49,118 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.924e+01 1.584e+02 1.842e+02 2.192e+02 3.665e+02, threshold=3.684e+02, percent-clipped=0.0
2023-03-26 12:36:02,275 INFO [finetune.py:976] (2/7) Epoch 10, batch 5400, loss[loss=0.2076, simple_loss=0.2845, pruned_loss=0.06534, over 4905.00 frames. ], tot_loss[loss=0.1973, simple_loss=0.2627, pruned_loss=0.06596, over 956138.70 frames. ], batch size: 36, lr: 3.72e-03, grad_scale: 64.0
2023-03-26 12:36:09,084 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7407, 1.6327, 1.4808, 1.7860, 2.1719, 1.7867, 1.2508, 1.4885], device='cuda:2'), covar=tensor([0.1936, 0.1878, 0.1743, 0.1424, 0.1381, 0.1152, 0.2457, 0.1690], device='cuda:2'), in_proj_covar=tensor([0.0234, 0.0206, 0.0207, 0.0187, 0.0239, 0.0180, 0.0213, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 12:36:15,031 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=56969.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:36:32,494 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=56993.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:36:33,685 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=56995.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:36:35,991 INFO [finetune.py:976] (2/7) Epoch 10, batch 5450, loss[loss=0.1433, simple_loss=0.2083, pruned_loss=0.03913, over 4773.00 frames. ], tot_loss[loss=0.1963, simple_loss=0.2606, pruned_loss=0.06597, over 956003.65 frames. ], batch size: 26, lr: 3.72e-03, grad_scale: 64.0
2023-03-26 12:36:47,110 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5603, 1.2822, 1.9553, 3.0431, 2.1064, 2.4954, 0.7276, 2.4524], device='cuda:2'), covar=tensor([0.2252, 0.2271, 0.1666, 0.0941, 0.1116, 0.1322, 0.2519, 0.0885], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0164, 0.0101, 0.0138, 0.0126, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 12:36:57,712 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.491e+02 1.807e+02 2.344e+02 4.842e+02, threshold=3.613e+02, percent-clipped=5.0
2023-03-26 12:37:09,488 INFO [finetune.py:976] (2/7) Epoch 10, batch 5500, loss[loss=0.161, simple_loss=0.2268, pruned_loss=0.04761, over 4214.00 frames. ], tot_loss[loss=0.1933, simple_loss=0.2572, pruned_loss=0.06472, over 956146.07 frames. ], batch size: 18, lr: 3.72e-03, grad_scale: 64.0
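In the [finetune.py:976] entries, loss[...] describes the current batch and tot_loss[...] a frame-weighted running average over recent batches (hence the ~950k-frame counts). Every logged entry satisfies loss = 0.5 * simple_loss + pruned_loss (e.g. 0.5 * 0.2582 + 0.06515 = 0.19425, the tot_loss of 0.1943 above), i.e. the smoothed "simple" transducer loss enters at half weight alongside the pruned loss. A sketch of that bookkeeping; the exponential-decay constant is an assumption, suggested by the fractional frame counts in the log:

```python
# Sketch of the loss bookkeeping behind the "loss[...]" / "tot_loss[...]"
# entries. Grounded in the logged numbers: every entry satisfies
#   loss = simple_loss_scale * simple_loss + pruned_loss
# with simple_loss_scale = 0.5. The decay constant is an assumption.

class LossTracker:
    """Frame-weighted running average of per-batch losses."""
    def __init__(self, decay=0.999):
        self.decay = decay
        self.sums = {"loss": 0.0, "simple_loss": 0.0, "pruned_loss": 0.0}
        self.frames = 0.0

    def update(self, simple_loss, pruned_loss, num_frames,
               simple_loss_scale=0.5):
        loss = simple_loss_scale * simple_loss + pruned_loss
        self.frames = self.decay * self.frames + num_frames
        for k, v in (("loss", loss), ("simple_loss", simple_loss),
                     ("pruned_loss", pruned_loss)):
            self.sums[k] = self.decay * self.sums[k] + v * num_frames
        tot = {k: s / self.frames for k, s in self.sums.items()}
        return loss, tot  # per-batch loss and running tot_loss

tracker = LossTracker()
loss, tot = tracker.update(simple_loss=0.2582, pruned_loss=0.06515,
                           num_frames=4738)
print(f"loss={loss:.5f}")  # 0.19425, i.e. the logged tot_loss of 0.1943
```

Weighting by frames rather than by batches keeps the average from being dominated by short utterances.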
2023-03-26 12:37:12,665 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57054.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:37:13,840 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57056.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:37:22,416 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0
2023-03-26 12:37:43,350 INFO [finetune.py:976] (2/7) Epoch 10, batch 5550, loss[loss=0.1888, simple_loss=0.2602, pruned_loss=0.0587, over 4812.00 frames. ], tot_loss[loss=0.194, simple_loss=0.2585, pruned_loss=0.06473, over 955099.10 frames. ], batch size: 41, lr: 3.72e-03, grad_scale: 64.0
2023-03-26 12:37:54,002 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.68 vs. limit=2.0
2023-03-26 12:38:06,740 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.869e+01 1.587e+02 1.788e+02 2.090e+02 3.209e+02, threshold=3.576e+02, percent-clipped=0.0
2023-03-26 12:38:25,602 INFO [finetune.py:976] (2/7) Epoch 10, batch 5600, loss[loss=0.1907, simple_loss=0.2557, pruned_loss=0.06283, over 4748.00 frames. ], tot_loss[loss=0.1969, simple_loss=0.2622, pruned_loss=0.06583, over 953088.79 frames. ], batch size: 27, lr: 3.72e-03, grad_scale: 64.0
2023-03-26 12:38:35,035 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57159.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:38:41,582 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.49 vs. limit=2.0
2023-03-26 12:38:46,644 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57179.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 12:38:50,750 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=57186.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:38:57,557 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=57197.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:38:58,646 INFO [finetune.py:976] (2/7) Epoch 10, batch 5650, loss[loss=0.1883, simple_loss=0.2561, pruned_loss=0.06024, over 4756.00 frames. ], tot_loss[loss=0.1985, simple_loss=0.2646, pruned_loss=0.06618, over 953510.09 frames. ], batch size: 27, lr: 3.72e-03, grad_scale: 64.0
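The [scaling.py:679] Whitening entries come from a constraint that keeps each group's channel covariance close to "white", i.e. proportional to the identity; the logged metric is compared against a per-module limit (5.0 for the single-group 384-channel case, 2.0 for the 8-group cases). One standard statistic with exactly this behavior, assumed here rather than taken from the source, is E[lam^2]/E[lam]^2 over the eigenvalues lam of the covariance, which equals 1 for perfectly white features and grows as variance concentrates in a few directions:

```python
# Sketch of a whitening diagnostic like "Whitening: num_groups=...,
# num_channels=..., metric=... vs. limit=...". Assumption: the metric is
# E[lam^2]/E[lam]^2 over covariance eigenvalues; this is a plausible
# reading of the log, not the verbatim icefall code.
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    """x: (..., num_channels) activations; channels split into groups."""
    num_channels = x.shape[-1]
    cpg = num_channels // num_groups                     # channels per group
    x = x.reshape(-1, num_groups, cpg).transpose(0, 1)   # (G, N, cpg)
    covar = x.transpose(1, 2) @ x / x.shape[1]           # (G, cpg, cpg)
    # E[lam^2] = trace(C @ C) / d; for symmetric C this is sum(C*C) / d.
    mean_sq = (covar * covar).sum(dim=(1, 2)) / cpg
    # E[lam] = trace(C) / d
    mean = covar.diagonal(dim1=1, dim2=2).sum(dim=1) / cpg
    return (mean_sq / mean ** 2).mean().item()

x = torch.randn(1000, 384)                 # white features
print(whitening_metric(x, num_groups=1))   # ~1 + 384/1000 from sampling
                                           # noise; 1.0 as N -> infinity
```

When the metric exceeds the limit, the module can apply a penalty gradient that pushes the activations back toward a white covariance.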
2023-03-26 12:39:04,236 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9134, 1.3835, 1.9196, 1.8361, 1.6251, 1.6307, 1.8020, 1.7291], device='cuda:2'), covar=tensor([0.4300, 0.4843, 0.3979, 0.4537, 0.5509, 0.4394, 0.5397, 0.3793], device='cuda:2'), in_proj_covar=tensor([0.0236, 0.0238, 0.0253, 0.0257, 0.0252, 0.0229, 0.0274, 0.0230], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 12:39:15,288 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=57227.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 12:39:19,289 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.057e+02 1.551e+02 1.804e+02 2.162e+02 3.713e+02, threshold=3.608e+02, percent-clipped=1.0
2023-03-26 12:39:21,163 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9984, 1.9497, 2.3870, 2.3064, 2.1255, 3.7935, 1.8257, 2.1508], device='cuda:2'), covar=tensor([0.0771, 0.1305, 0.0900, 0.0740, 0.1205, 0.0302, 0.1153, 0.1321], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0075, 0.0077, 0.0091, 0.0082, 0.0084, 0.0078], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 12:39:25,354 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=57244.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:39:27,126 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57247.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:39:28,225 INFO [finetune.py:976] (2/7) Epoch 10, batch 5700, loss[loss=0.1647, simple_loss=0.2185, pruned_loss=0.05544, over 4369.00 frames. ], tot_loss[loss=0.1959, simple_loss=0.2602, pruned_loss=0.06581, over 933481.51 frames. ], batch size: 19, lr: 3.72e-03, grad_scale: 32.0
2023-03-26 12:39:34,005 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57258.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 12:39:36,089 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0
2023-03-26 12:39:37,766 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57264.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:40:00,694 INFO [finetune.py:976] (2/7) Epoch 11, batch 0, loss[loss=0.1812, simple_loss=0.2449, pruned_loss=0.05878, over 4919.00 frames. ], tot_loss[loss=0.1812, simple_loss=0.2449, pruned_loss=0.05878, over 4919.00 frames. ], batch size: 38, lr: 3.72e-03, grad_scale: 16.0
2023-03-26 12:40:00,694 INFO [finetune.py:1001] (2/7) Computing validation loss
2023-03-26 12:40:16,055 INFO [finetune.py:1010] (2/7) Epoch 11, validation: loss=0.1597, simple_loss=0.2306, pruned_loss=0.04438, over 2265189.00 frames.
2023-03-26 12:40:16,056 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB
2023-03-26 12:40:37,199 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57305.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:40:50,170 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.75 vs. limit=2.0
2023-03-26 12:40:59,547 INFO [finetune.py:976] (2/7) Epoch 11, batch 50, loss[loss=0.1646, simple_loss=0.2334, pruned_loss=0.04788, over 4750.00 frames. ], tot_loss[loss=0.2036, simple_loss=0.2683, pruned_loss=0.0694, over 217110.18 frames. ], batch size: 28, lr: 3.72e-03, grad_scale: 16.0
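At the epoch-10/11 boundary the trainer runs a validation pass ([finetune.py:1001/1010]) and reports peak GPU memory ([finetune.py:1011]). A minimal sketch of such a pass follows; compute_loss is a hypothetical stand-in for the model's loss function (returning a per-frame loss and the frame count), and the frame-weighted reduction mirrors the "over N frames" wording in the log:

```python
# Sketch of a validation pass like the one logged above. compute_loss
# is an assumed callable, not an icefall API.
import torch

def compute_validation_loss(model, valid_loader, compute_loss, device):
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    with torch.no_grad():
        for batch in valid_loader:
            loss, num_frames = compute_loss(model, batch, device)
            tot_loss += loss.item() * num_frames   # frame-weighted sum
            tot_frames += num_frames
    model.train()
    mem_mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
    print(f"validation: loss={tot_loss / tot_frames:.4f}, "
          f"over {tot_frames:.2f} frames.")
    print(f"Maximum memory allocated so far is {mem_mb}MB")
    return tot_loss / tot_frames
```

torch.cuda.max_memory_allocated() reports the high-water mark since the last reset, which is why the logged value only ever grows within a run.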
2023-03-26 12:41:07,741 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0458, 2.1957, 1.9271, 1.7177, 2.4866, 2.5014, 2.2377, 2.0295], device='cuda:2'), covar=tensor([0.0348, 0.0308, 0.0509, 0.0374, 0.0279, 0.0493, 0.0323, 0.0364], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0108, 0.0137, 0.0112, 0.0099, 0.0101, 0.0091, 0.0106], device='cuda:2'), out_proj_covar=tensor([7.0171e-05, 8.4021e-05, 1.0866e-04, 8.7867e-05, 7.7660e-05, 7.5156e-05, 6.8920e-05, 8.1553e-05], device='cuda:2')
2023-03-26 12:41:10,027 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.036e+02 1.580e+02 1.868e+02 2.535e+02 4.204e+02, threshold=3.735e+02, percent-clipped=3.0
2023-03-26 12:41:18,572 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57349.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:41:19,794 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57351.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:41:38,106 INFO [finetune.py:976] (2/7) Epoch 11, batch 100, loss[loss=0.1725, simple_loss=0.2447, pruned_loss=0.05021, over 4799.00 frames. ], tot_loss[loss=0.1984, simple_loss=0.263, pruned_loss=0.06684, over 381472.85 frames. ], batch size: 45, lr: 3.72e-03, grad_scale: 16.0
2023-03-26 12:41:51,427 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=57398.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:42:11,567 INFO [finetune.py:976] (2/7) Epoch 11, batch 150, loss[loss=0.2129, simple_loss=0.2723, pruned_loss=0.07676, over 4890.00 frames. ], tot_loss[loss=0.1943, simple_loss=0.2577, pruned_loss=0.06539, over 509294.10 frames. ], batch size: 35, lr: 3.72e-03, grad_scale: 16.0
2023-03-26 12:42:11,809 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.23 vs. limit=5.0
2023-03-26 12:42:16,966 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.085e+02 1.728e+02 2.070e+02 2.489e+02 4.280e+02, threshold=4.140e+02, percent-clipped=3.0
2023-03-26 12:42:31,667 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57459.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:42:31,690 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57459.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:42:33,507 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=57462.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:42:43,503 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5626, 1.4534, 1.4881, 1.5210, 0.8966, 3.3003, 1.2836, 1.8154], device='cuda:2'), covar=tensor([0.3402, 0.2534, 0.2218, 0.2448, 0.2170, 0.0196, 0.2721, 0.1312], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0115, 0.0119, 0.0122, 0.0115, 0.0098, 0.0099, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 12:42:44,007 INFO [finetune.py:976] (2/7) Epoch 11, batch 200, loss[loss=0.2122, simple_loss=0.2788, pruned_loss=0.07282, over 4902.00 frames. ], tot_loss[loss=0.1928, simple_loss=0.2561, pruned_loss=0.06472, over 608966.58 frames. ], batch size: 36, lr: 3.72e-03, grad_scale: 16.0
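The [zipformer.py:2441] dumps report a per-head entropy of the attention weights, alongside what appear to be running covariance statistics of the attention projections (covar, in_proj_covar, out_proj_covar). Low entropy means a head attends to very few frames; the maximum is the log of the source length. A sketch of the entropy part only; the (heads, tgt_len, src_len) shape is an assumption:

```python
# Sketch of the attention-entropy diagnostic behind the
# "attn_weights_entropy = tensor([...])" lines: per-head entropy of the
# softmax attention distribution, a quick check that heads neither
# collapse onto one frame (entropy ~ 0) nor stay uniform (~ log T).
import torch

def attn_weights_entropy(attn: torch.Tensor, eps: float = 1e-20):
    """attn: (num_heads, tgt_len, src_len), rows summing to 1."""
    ent = -(attn * (attn + eps).log()).sum(dim=-1)  # (heads, tgt_len)
    return ent.mean(dim=-1)                         # one value per head

w = torch.softmax(torch.randn(8, 50, 50), dim=-1)
print(attn_weights_entropy(w))  # 8 per-head entropies, <= log(50) ~ 3.9
```

The eight values per dump here are consistent with nhead=8 attention heads per encoder layer.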
2023-03-26 12:42:56,418 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=57495.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:43:03,709 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=57507.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:43:14,484 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57523.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:43:17,309 INFO [finetune.py:976] (2/7) Epoch 11, batch 250, loss[loss=0.1808, simple_loss=0.2654, pruned_loss=0.0481, over 4731.00 frames. ], tot_loss[loss=0.1919, simple_loss=0.2563, pruned_loss=0.06377, over 685311.69 frames. ], batch size: 59, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:43:22,637 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.287e+02 1.584e+02 1.966e+02 2.356e+02 4.681e+02, threshold=3.932e+02, percent-clipped=1.0
2023-03-26 12:43:27,979 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57542.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:43:34,250 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0
2023-03-26 12:43:43,763 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57553.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 12:43:44,470 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.68 vs. limit=2.0
2023-03-26 12:43:45,611 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57556.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:43:55,239 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57564.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:44:04,118 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.78 vs. limit=2.0
2023-03-26 12:44:08,383 INFO [finetune.py:976] (2/7) Epoch 11, batch 300, loss[loss=0.1491, simple_loss=0.2128, pruned_loss=0.04272, over 4261.00 frames. ], tot_loss[loss=0.1956, simple_loss=0.2608, pruned_loss=0.06516, over 745971.71 frames. ], batch size: 18, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:44:24,431 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57600.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:44:31,738 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=57612.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:44:40,866 INFO [finetune.py:976] (2/7) Epoch 11, batch 350, loss[loss=0.1748, simple_loss=0.2528, pruned_loss=0.04842, over 4773.00 frames. ], tot_loss[loss=0.1984, simple_loss=0.2644, pruned_loss=0.06618, over 793792.81 frames. ], batch size: 28, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:44:46,738 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.100e+02 1.573e+02 1.819e+02 2.403e+02 4.156e+02, threshold=3.639e+02, percent-clipped=1.0
2023-03-26 12:44:56,808 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57649.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:44:58,452 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57651.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:45:14,039 INFO [finetune.py:976] (2/7) Epoch 11, batch 400, loss[loss=0.2195, simple_loss=0.2793, pruned_loss=0.07985, over 4881.00 frames. ], tot_loss[loss=0.1988, simple_loss=0.2653, pruned_loss=0.06617, over 829366.89 frames. ], batch size: 32, lr: 3.71e-03, grad_scale: 16.0
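The learning rate ticks down from 3.72e-03 to 3.71e-03 in this stretch, which is consistent with icefall's Eden scheduler: with base_lr=0.004, lr_batches=100000 and lr_epochs=100 (this run's settings, treated here as assumptions), Eden's formula reproduces 3.71e-03 at roughly this batch count (~57.5k) and epoch (11). A sketch:

```python
# Sketch of the Eden learning-rate schedule that would produce the slow
# lr decay logged here (3.72e-03 -> 3.71e-03). The formula is Eden's
# published form; the hyperparameter values are this run's settings.
def eden_lr(base_lr, batch, epoch, lr_batches=100000.0, lr_epochs=100.0):
    batch_factor = ((batch**2 + lr_batches**2) / lr_batches**2) ** -0.25
    epoch_factor = ((epoch**2 + lr_epochs**2) / lr_epochs**2) ** -0.25
    return base_lr * batch_factor * epoch_factor

print(f"{eden_lr(0.004, batch=57500, epoch=11):.2e}")  # 3.71e-03
```

Because both factors decay with a -0.25 exponent, the schedule changes very gently at this point in training, which is why the logged lr moves by only 1e-05 over hundreds of batches.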
2023-03-26 12:45:30,903 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=57697.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:45:32,135 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=57699.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:45:35,171 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0
2023-03-26 12:45:49,608 INFO [finetune.py:976] (2/7) Epoch 11, batch 450, loss[loss=0.1965, simple_loss=0.2774, pruned_loss=0.05781, over 4932.00 frames. ], tot_loss[loss=0.1969, simple_loss=0.2632, pruned_loss=0.06532, over 854697.35 frames. ], batch size: 42, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:45:53,883 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=57733.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:45:55,473 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.158e+02 1.602e+02 1.902e+02 2.220e+02 3.989e+02, threshold=3.804e+02, percent-clipped=2.0
2023-03-26 12:46:15,573 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57754.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:46:23,461 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-26 12:46:23,884 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4942, 2.3509, 2.8182, 1.8487, 2.5916, 2.5839, 2.1050, 2.8471], device='cuda:2'), covar=tensor([0.1397, 0.1777, 0.1630, 0.2243, 0.0899, 0.1649, 0.2666, 0.0879], device='cuda:2'), in_proj_covar=tensor([0.0200, 0.0206, 0.0195, 0.0192, 0.0179, 0.0217, 0.0219, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 12:46:32,834 INFO [finetune.py:976] (2/7) Epoch 11, batch 500, loss[loss=0.1754, simple_loss=0.2364, pruned_loss=0.05719, over 4831.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.2604, pruned_loss=0.06424, over 878364.70 frames. ], batch size: 33, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:46:39,378 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5142, 1.4092, 1.9692, 2.9252, 1.9582, 2.0724, 0.9076, 2.3923], device='cuda:2'), covar=tensor([0.1757, 0.1577, 0.1236, 0.0611, 0.0863, 0.1367, 0.1908, 0.0582], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0116, 0.0134, 0.0162, 0.0101, 0.0137, 0.0125, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 12:46:45,695 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57794.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:47:01,356 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57818.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:47:06,699 INFO [finetune.py:976] (2/7) Epoch 11, batch 550, loss[loss=0.1983, simple_loss=0.2505, pruned_loss=0.07303, over 4870.00 frames. ], tot_loss[loss=0.1934, simple_loss=0.2585, pruned_loss=0.06414, over 895982.43 frames. ], batch size: 31, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:47:11,533 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.185e+02 1.635e+02 1.936e+02 2.160e+02 3.511e+02, threshold=3.871e+02, percent-clipped=0.0
2023-03-26 12:47:16,791 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57842.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:47:20,234 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.5863, 1.7361, 1.8632, 1.0345, 1.8144, 2.0975, 2.0013, 1.5757], device='cuda:2'), covar=tensor([0.1011, 0.0624, 0.0361, 0.0576, 0.0427, 0.0534, 0.0294, 0.0817], device='cuda:2'), in_proj_covar=tensor([0.0127, 0.0153, 0.0120, 0.0132, 0.0130, 0.0123, 0.0142, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.4172e-05, 1.1240e-04, 8.6302e-05, 9.5122e-05, 9.3206e-05, 8.9835e-05, 1.0412e-04, 1.0622e-04], device='cuda:2')
2023-03-26 12:47:23,741 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57851.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:47:24,988 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57853.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 12:47:34,736 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0714, 2.1188, 2.0369, 2.3190, 2.8731, 2.2102, 2.1603, 1.6682], device='cuda:2'), covar=tensor([0.2573, 0.2339, 0.2136, 0.1828, 0.1935, 0.1295, 0.2420, 0.2294], device='cuda:2'), in_proj_covar=tensor([0.0236, 0.0208, 0.0208, 0.0189, 0.0241, 0.0181, 0.0214, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 12:47:40,101 INFO [finetune.py:976] (2/7) Epoch 11, batch 600, loss[loss=0.1724, simple_loss=0.2318, pruned_loss=0.05654, over 4826.00 frames. ], tot_loss[loss=0.1931, simple_loss=0.258, pruned_loss=0.06417, over 909437.45 frames. ], batch size: 30, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:47:48,476 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=57890.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:47:56,177 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57900.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:47:56,759 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=57901.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:48:13,625 INFO [finetune.py:976] (2/7) Epoch 11, batch 650, loss[loss=0.1891, simple_loss=0.2646, pruned_loss=0.05676, over 4838.00 frames. ], tot_loss[loss=0.1961, simple_loss=0.2616, pruned_loss=0.06532, over 919683.45 frames. ], batch size: 49, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:48:18,498 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.178e+02 1.568e+02 1.897e+02 2.360e+02 4.682e+02, threshold=3.793e+02, percent-clipped=3.0
2023-03-26 12:48:27,449 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=57948.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:48:42,040 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8351, 3.8265, 3.6866, 1.9187, 3.9023, 2.7771, 0.8057, 2.7198], device='cuda:2'), covar=tensor([0.2232, 0.1610, 0.1304, 0.2968, 0.0973, 0.0973, 0.4295, 0.1340], device='cuda:2'), in_proj_covar=tensor([0.0150, 0.0171, 0.0157, 0.0127, 0.0154, 0.0120, 0.0144, 0.0120], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 12:48:48,746 INFO [finetune.py:976] (2/7) Epoch 11, batch 700, loss[loss=0.1752, simple_loss=0.2484, pruned_loss=0.05104, over 4771.00 frames. ], tot_loss[loss=0.1972, simple_loss=0.2632, pruned_loss=0.0656, over 927651.61 frames. ], batch size: 28, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:49:24,695 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1829, 1.7495, 2.1207, 2.0572, 1.7867, 1.7791, 1.9476, 1.8779], device='cuda:2'), covar=tensor([0.4579, 0.5393, 0.4178, 0.4959, 0.6245, 0.4725, 0.6167, 0.4070], device='cuda:2'), in_proj_covar=tensor([0.0236, 0.0239, 0.0254, 0.0258, 0.0253, 0.0229, 0.0275, 0.0230], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 12:49:44,562 INFO [finetune.py:976] (2/7) Epoch 11, batch 750, loss[loss=0.2047, simple_loss=0.2701, pruned_loss=0.06961, over 4911.00 frames. ], tot_loss[loss=0.1989, simple_loss=0.2648, pruned_loss=0.06654, over 933170.39 frames. ], batch size: 33, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:49:49,414 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.116e+01 1.579e+02 1.894e+02 2.321e+02 4.436e+02, threshold=3.789e+02, percent-clipped=3.0
2023-03-26 12:50:02,173 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=58054.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:50:18,110 INFO [finetune.py:976] (2/7) Epoch 11, batch 800, loss[loss=0.1753, simple_loss=0.2454, pruned_loss=0.05262, over 4823.00 frames. ], tot_loss[loss=0.197, simple_loss=0.2639, pruned_loss=0.06506, over 939418.76 frames. ], batch size: 30, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:50:21,351 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0
2023-03-26 12:50:25,468 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=58089.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:50:33,870 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=58102.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:50:45,551 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=58118.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:50:51,413 INFO [finetune.py:976] (2/7) Epoch 11, batch 850, loss[loss=0.1833, simple_loss=0.2542, pruned_loss=0.05622, over 4848.00 frames. ], tot_loss[loss=0.1954, simple_loss=0.262, pruned_loss=0.06438, over 940532.56 frames. ], batch size: 47, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:50:56,226 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.394e+01 1.505e+02 1.749e+02 2.082e+02 4.545e+02, threshold=3.498e+02, percent-clipped=2.0
2023-03-26 12:50:59,950 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=58141.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:51:01,777 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=58144.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:51:05,917 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=58151.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:51:23,204 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=58166.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:51:35,956 INFO [finetune.py:976] (2/7) Epoch 11, batch 900, loss[loss=0.2128, simple_loss=0.2726, pruned_loss=0.07644, over 4825.00 frames. ], tot_loss[loss=0.193, simple_loss=0.2589, pruned_loss=0.06351, over 943587.05 frames. ], batch size: 40, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:51:41,973 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.85 vs. limit=5.0
2023-03-26 12:51:56,280 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0
2023-03-26 12:51:57,410 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=58199.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:51:59,443 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=58202.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:52:01,295 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=58205.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:52:03,581 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.98 vs. limit=5.0
2023-03-26 12:52:12,174 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9712, 1.8627, 1.7104, 1.8230, 1.5837, 4.4899, 1.9180, 2.3556], device='cuda:2'), covar=tensor([0.3466, 0.2461, 0.2147, 0.2336, 0.1601, 0.0108, 0.2533, 0.1260], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0116, 0.0121, 0.0123, 0.0115, 0.0098, 0.0099, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 12:52:17,042 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3490, 2.2258, 1.8688, 2.3062, 2.3016, 1.9622, 2.6050, 2.3310], device='cuda:2'), covar=tensor([0.1319, 0.2424, 0.3242, 0.2775, 0.2605, 0.1791, 0.3420, 0.1791], device='cuda:2'), in_proj_covar=tensor([0.0174, 0.0187, 0.0232, 0.0254, 0.0238, 0.0196, 0.0212, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 12:52:17,500 INFO [finetune.py:976] (2/7) Epoch 11, batch 950, loss[loss=0.1735, simple_loss=0.2357, pruned_loss=0.05568, over 4802.00 frames. ], tot_loss[loss=0.1917, simple_loss=0.2572, pruned_loss=0.06314, over 945553.67 frames. ], batch size: 25, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:52:22,879 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.011e+02 1.516e+02 1.975e+02 2.310e+02 4.008e+02, threshold=3.950e+02, percent-clipped=1.0
2023-03-26 12:52:51,452 INFO [finetune.py:976] (2/7) Epoch 11, batch 1000, loss[loss=0.2058, simple_loss=0.2758, pruned_loss=0.06789, over 4750.00 frames. ], tot_loss[loss=0.1944, simple_loss=0.2601, pruned_loss=0.06439, over 947242.00 frames. ], batch size: 54, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:53:46,405 INFO [finetune.py:976] (2/7) Epoch 11, batch 1050, loss[loss=0.2557, simple_loss=0.3162, pruned_loss=0.09758, over 4800.00 frames. ], tot_loss[loss=0.1973, simple_loss=0.2637, pruned_loss=0.06547, over 950010.68 frames. ], batch size: 51, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:53:51,319 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.197e+02 1.617e+02 2.003e+02 2.375e+02 3.670e+02, threshold=4.006e+02, percent-clipped=0.0
2023-03-26 12:53:53,950 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.81 vs. limit=5.0
2023-03-26 12:54:08,771 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2128, 2.3592, 2.4123, 1.0360, 2.7320, 2.9577, 2.5010, 2.0132], device='cuda:2'), covar=tensor([0.1031, 0.0659, 0.0572, 0.0733, 0.0563, 0.0457, 0.0417, 0.0645], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0152, 0.0119, 0.0131, 0.0129, 0.0122, 0.0141, 0.0144], device='cuda:2'), out_proj_covar=tensor([9.3479e-05, 1.1150e-04, 8.5907e-05, 9.4660e-05, 9.1961e-05, 8.9095e-05, 1.0349e-04, 1.0550e-04], device='cuda:2')
2023-03-26 12:54:42,548 INFO [finetune.py:976] (2/7) Epoch 11, batch 1100, loss[loss=0.1957, simple_loss=0.266, pruned_loss=0.06267, over 4796.00 frames. ], tot_loss[loss=0.1968, simple_loss=0.2637, pruned_loss=0.06492, over 952284.64 frames. ], batch size: 29, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:54:55,680 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=58389.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:55:35,370 INFO [finetune.py:976] (2/7) Epoch 11, batch 1150, loss[loss=0.172, simple_loss=0.2453, pruned_loss=0.04933, over 4728.00 frames. ], tot_loss[loss=0.198, simple_loss=0.2652, pruned_loss=0.06545, over 953457.74 frames. ], batch size: 59, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:55:40,653 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.092e+02 1.672e+02 1.870e+02 2.321e+02 4.403e+02, threshold=3.740e+02, percent-clipped=1.0
2023-03-26 12:55:41,941 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=58437.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:55:54,582 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5497, 1.5379, 1.8168, 1.9030, 1.5743, 3.5269, 1.3080, 1.6574], device='cuda:2'), covar=tensor([0.0963, 0.1739, 0.1092, 0.0923, 0.1557, 0.0236, 0.1471, 0.1704], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0080, 0.0073, 0.0077, 0.0090, 0.0080, 0.0083, 0.0077], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:2')
2023-03-26 12:56:08,438 INFO [finetune.py:976] (2/7) Epoch 11, batch 1200, loss[loss=0.1995, simple_loss=0.2664, pruned_loss=0.06633, over 4907.00 frames. ], tot_loss[loss=0.1962, simple_loss=0.263, pruned_loss=0.06466, over 952333.52 frames. ], batch size: 36, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:56:08,568 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0034, 1.8787, 1.7418, 2.0649, 2.4870, 2.0597, 1.9054, 1.6820], device='cuda:2'), covar=tensor([0.1690, 0.1805, 0.1470, 0.1355, 0.1607, 0.0996, 0.2125, 0.1630], device='cuda:2'), in_proj_covar=tensor([0.0236, 0.0208, 0.0207, 0.0189, 0.0242, 0.0181, 0.0213, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 12:56:15,626 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1472, 2.0438, 1.6942, 2.1127, 2.1260, 1.8149, 2.4300, 2.1147], device='cuda:2'), covar=tensor([0.1368, 0.2398, 0.3318, 0.2681, 0.2566, 0.1742, 0.2982, 0.1975], device='cuda:2'), in_proj_covar=tensor([0.0174, 0.0187, 0.0231, 0.0253, 0.0238, 0.0195, 0.0211, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 12:56:21,455 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=58497.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:56:23,266 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=58500.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:56:37,014 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6741, 1.4173, 0.9272, 0.2216, 1.2047, 1.4910, 1.3067, 1.2658], device='cuda:2'), covar=tensor([0.0753, 0.0846, 0.1174, 0.1748, 0.1344, 0.1985, 0.2220, 0.0839], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0202, 0.0203, 0.0189, 0.0216, 0.0209, 0.0224, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 12:56:40,496 INFO [finetune.py:976] (2/7) Epoch 11, batch 1250, loss[loss=0.15, simple_loss=0.2286, pruned_loss=0.03572, over 4935.00 frames. ], tot_loss[loss=0.1933, simple_loss=0.2594, pruned_loss=0.06355, over 952849.28 frames. ], batch size: 33, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:56:46,790 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.332e+01 1.581e+02 1.822e+02 2.261e+02 4.369e+02, threshold=3.644e+02, percent-clipped=3.0
2023-03-26 12:57:08,441 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9524, 1.3897, 1.9571, 1.8205, 1.6225, 1.6325, 1.7463, 1.8489], device='cuda:2'), covar=tensor([0.4066, 0.4674, 0.3613, 0.4230, 0.5126, 0.4116, 0.5394, 0.3481], device='cuda:2'), in_proj_covar=tensor([0.0236, 0.0238, 0.0252, 0.0256, 0.0252, 0.0228, 0.0273, 0.0229], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 12:57:15,450 INFO [finetune.py:976] (2/7) Epoch 11, batch 1300, loss[loss=0.1725, simple_loss=0.2418, pruned_loss=0.05165, over 4816.00 frames. ], tot_loss[loss=0.1918, simple_loss=0.2569, pruned_loss=0.06335, over 955410.66 frames. ], batch size: 45, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:57:48,893 INFO [finetune.py:976] (2/7) Epoch 11, batch 1350, loss[loss=0.231, simple_loss=0.3025, pruned_loss=0.07977, over 4904.00 frames. ], tot_loss[loss=0.1946, simple_loss=0.2592, pruned_loss=0.06502, over 954765.68 frames. ], batch size: 43, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:57:54,733 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.484e+01 1.581e+02 1.914e+02 2.266e+02 4.857e+02, threshold=3.829e+02, percent-clipped=2.0
2023-03-26 12:58:20,742 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0
2023-03-26 12:58:23,950 INFO [finetune.py:976] (2/7) Epoch 11, batch 1400, loss[loss=0.2008, simple_loss=0.2752, pruned_loss=0.06319, over 4818.00 frames. ], tot_loss[loss=0.1963, simple_loss=0.2613, pruned_loss=0.06567, over 955222.27 frames. ], batch size: 39, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:58:26,994 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=58681.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:58:56,046 INFO [finetune.py:976] (2/7) Epoch 11, batch 1450, loss[loss=0.2129, simple_loss=0.2875, pruned_loss=0.06914, over 4922.00 frames. ], tot_loss[loss=0.1983, simple_loss=0.2639, pruned_loss=0.06637, over 956724.02 frames. ], batch size: 42, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:59:01,960 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.107e+02 1.669e+02 2.009e+02 2.318e+02 4.324e+02, threshold=4.017e+02, percent-clipped=1.0
2023-03-26 12:59:04,812 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0982, 2.0106, 2.1624, 1.6165, 2.0596, 2.3117, 2.1824, 1.7024], device='cuda:2'), covar=tensor([0.0537, 0.0599, 0.0628, 0.0881, 0.0738, 0.0586, 0.0604, 0.1073], device='cuda:2'), in_proj_covar=tensor([0.0135, 0.0134, 0.0143, 0.0126, 0.0121, 0.0144, 0.0145, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 12:59:07,263 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=58742.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:59:35,103 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=58775.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:59:36,228 INFO [finetune.py:976] (2/7) Epoch 11, batch 1500, loss[loss=0.2056, simple_loss=0.2771, pruned_loss=0.0671, over 4881.00 frames. ], tot_loss[loss=0.199, simple_loss=0.2651, pruned_loss=0.06639, over 954258.48 frames. ], batch size: 35, lr: 3.70e-03, grad_scale: 16.0
2023-03-26 12:59:53,766 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7821, 0.9813, 1.5957, 1.4627, 1.3575, 1.3241, 1.3823, 1.5214], device='cuda:2'), covar=tensor([0.4777, 0.5050, 0.4791, 0.4807, 0.6140, 0.5105, 0.5982, 0.4572], device='cuda:2'), in_proj_covar=tensor([0.0236, 0.0238, 0.0252, 0.0256, 0.0253, 0.0228, 0.0273, 0.0229], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 12:59:58,358 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=58797.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:00:04,317 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=58800.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:00:34,186 INFO [finetune.py:976] (2/7) Epoch 11, batch 1550, loss[loss=0.204, simple_loss=0.2641, pruned_loss=0.07197, over 4751.00 frames. ], tot_loss[loss=0.1981, simple_loss=0.2643, pruned_loss=0.06597, over 953561.18 frames. ], batch size: 54, lr: 3.70e-03, grad_scale: 16.0
2023-03-26 13:00:39,950 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.185e+02 1.569e+02 1.959e+02 2.197e+02 4.059e+02, threshold=3.918e+02, percent-clipped=1.0
2023-03-26 13:00:41,219 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=58836.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:00:47,623 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=58845.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:00:49,963 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=58848.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:01:07,920 INFO [finetune.py:976] (2/7) Epoch 11, batch 1600, loss[loss=0.174, simple_loss=0.2335, pruned_loss=0.05725, over 4745.00 frames. ], tot_loss[loss=0.1967, simple_loss=0.2623, pruned_loss=0.06558, over 953331.67 frames. ], batch size: 59, lr: 3.70e-03, grad_scale: 16.0
2023-03-26 13:01:28,381 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=58906.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:01:50,384 INFO [finetune.py:976] (2/7) Epoch 11, batch 1650, loss[loss=0.1715, simple_loss=0.2363, pruned_loss=0.05333, over 4870.00 frames. ], tot_loss[loss=0.194, simple_loss=0.259, pruned_loss=0.06446, over 950992.55 frames. ], batch size: 31, lr: 3.70e-03, grad_scale: 16.0
2023-03-26 13:01:55,256 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.149e+02 1.664e+02 1.923e+02 2.390e+02 4.121e+02, threshold=3.846e+02, percent-clipped=1.0
2023-03-26 13:01:56,537 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7370, 3.7183, 3.5288, 1.9464, 3.8934, 2.9047, 0.9768, 2.6567], device='cuda:2'), covar=tensor([0.2527, 0.1840, 0.1498, 0.3270, 0.1021, 0.1000, 0.4480, 0.1455], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0175, 0.0160, 0.0129, 0.0157, 0.0121, 0.0147, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 13:02:18,253 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=58967.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:02:19,477 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7497, 1.3466, 0.9538, 1.7475, 2.1019, 1.5164, 1.5410, 1.7904], device='cuda:2'), covar=tensor([0.1326, 0.1820, 0.1940, 0.1028, 0.1828, 0.1910, 0.1229, 0.1579], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0097, 0.0114, 0.0093, 0.0121, 0.0096, 0.0100, 0.0092], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 13:02:24,167 INFO [finetune.py:976] (2/7) Epoch 11, batch 1700, loss[loss=0.1739, simple_loss=0.2262, pruned_loss=0.06082, over 3946.00 frames. ], tot_loss[loss=0.1919, simple_loss=0.2565, pruned_loss=0.06364, over 953615.89 frames. ], batch size: 17, lr: 3.70e-03, grad_scale: 16.0
2023-03-26 13:02:41,310 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7931, 1.4463, 2.2265, 3.4994, 2.3090, 2.6515, 0.7507, 2.7733], device='cuda:2'), covar=tensor([0.2012, 0.2200, 0.1670, 0.0974, 0.1025, 0.1976, 0.2418, 0.0707], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0163, 0.0100, 0.0137, 0.0124, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 13:02:57,899 INFO [finetune.py:976] (2/7) Epoch 11, batch 1750, loss[loss=0.1848, simple_loss=0.2469, pruned_loss=0.06137, over 4754.00 frames. ], tot_loss[loss=0.1932, simple_loss=0.2579, pruned_loss=0.06427, over 952582.85 frames. ], batch size: 27, lr: 3.70e-03, grad_scale: 16.0
2023-03-26 13:03:02,754 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.140e+02 1.620e+02 1.895e+02 2.249e+02 5.052e+02, threshold=3.790e+02, percent-clipped=2.0
2023-03-26 13:03:04,084 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=59037.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:03:33,696 INFO [finetune.py:976] (2/7) Epoch 11, batch 1800, loss[loss=0.2107, simple_loss=0.2763, pruned_loss=0.07254, over 4753.00 frames. ], tot_loss[loss=0.1937, simple_loss=0.2583, pruned_loss=0.06453, over 948732.05 frames. ], batch size: 27, lr: 3.70e-03, grad_scale: 16.0
2023-03-26 13:03:34,721 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4553, 1.5299, 1.5329, 0.7892, 1.6036, 1.7799, 1.7081, 1.3564], device='cuda:2'), covar=tensor([0.1068, 0.0714, 0.0475, 0.0637, 0.0448, 0.0637, 0.0397, 0.0825], device='cuda:2'), in_proj_covar=tensor([0.0127, 0.0153, 0.0121, 0.0132, 0.0130, 0.0124, 0.0143, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.4322e-05, 1.1258e-04, 8.7298e-05, 9.5661e-05, 9.2882e-05, 9.0086e-05, 1.0459e-04, 1.0655e-04], device='cuda:2')
2023-03-26 13:03:44,260 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.3310, 2.8340, 2.6535, 1.4528, 2.7622, 2.3426, 2.2596, 2.4385], device='cuda:2'), covar=tensor([0.0767, 0.1085, 0.2021, 0.2181, 0.1700, 0.1982, 0.2075, 0.1175], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0200, 0.0202, 0.0188, 0.0216, 0.0209, 0.0223, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:04:14,434 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0
2023-03-26 13:04:19,595 INFO [finetune.py:976] (2/7) Epoch 11, batch 1850, loss[loss=0.1883, simple_loss=0.2539, pruned_loss=0.06134, over 4781.00 frames. ], tot_loss[loss=0.195, simple_loss=0.2598, pruned_loss=0.06511, over 948440.31 frames. ], batch size: 26, lr: 3.70e-03, grad_scale: 16.0
2023-03-26 13:04:22,097 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=59131.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:04:24,430 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.037e+02 1.668e+02 2.065e+02 2.636e+02 4.490e+02, threshold=4.130e+02, percent-clipped=5.0
2023-03-26 13:04:43,997 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=59164.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:04:57,372 INFO [finetune.py:976] (2/7) Epoch 11, batch 1900, loss[loss=0.2034, simple_loss=0.2688, pruned_loss=0.06897, over 4813.00 frames. ], tot_loss[loss=0.1968, simple_loss=0.2622, pruned_loss=0.06574, over 949052.22 frames. ], batch size: 38, lr: 3.70e-03, grad_scale: 16.0
2023-03-26 13:05:45,620 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.0408, 1.0308, 1.0303, 0.3409, 0.8821, 1.2003, 1.2308, 0.9964], device='cuda:2'), covar=tensor([0.0884, 0.0585, 0.0526, 0.0595, 0.0586, 0.0656, 0.0418, 0.0665], device='cuda:2'), in_proj_covar=tensor([0.0128, 0.0154, 0.0121, 0.0132, 0.0131, 0.0124, 0.0143, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.4332e-05, 1.1310e-04, 8.7511e-05, 9.5649e-05, 9.3418e-05, 9.0445e-05, 1.0490e-04, 1.0711e-04], device='cuda:2')
2023-03-26 13:05:46,804 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=59225.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:05:47,935 INFO [finetune.py:976] (2/7) Epoch 11, batch 1950, loss[loss=0.1827, simple_loss=0.2533, pruned_loss=0.05603, over 4886.00 frames. ], tot_loss[loss=0.1953, simple_loss=0.261, pruned_loss=0.0648, over 949305.80 frames. ], batch size: 32, lr: 3.70e-03, grad_scale: 16.0
2023-03-26 13:05:59,308 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.090e+02 1.570e+02 1.817e+02 2.294e+02 4.310e+02, threshold=3.633e+02, percent-clipped=1.0
2023-03-26 13:06:29,860 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=59262.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:06:31,754 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0
2023-03-26 13:06:51,936 INFO [finetune.py:976] (2/7) Epoch 11, batch 2000, loss[loss=0.1692, simple_loss=0.2244, pruned_loss=0.05695, over 4821.00 frames. ], tot_loss[loss=0.1923, simple_loss=0.258, pruned_loss=0.06333, over 952772.09 frames. ], batch size: 39, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:07:37,498 INFO [finetune.py:976] (2/7) Epoch 11, batch 2050, loss[loss=0.1696, simple_loss=0.2329, pruned_loss=0.05317, over 4819.00 frames. ], tot_loss[loss=0.191, simple_loss=0.2559, pruned_loss=0.06308, over 951752.29 frames. ], batch size: 25, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:07:42,276 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.122e+01 1.513e+02 1.843e+02 2.174e+02 3.611e+02, threshold=3.686e+02, percent-clipped=0.0
2023-03-26 13:07:44,123 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=59337.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:07:52,790 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5013, 1.4576, 1.7761, 1.8417, 1.5287, 3.2004, 1.3130, 1.6010], device='cuda:2'), covar=tensor([0.0952, 0.1627, 0.1031, 0.0876, 0.1477, 0.0250, 0.1386, 0.1603], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0080, 0.0074, 0.0077, 0.0091, 0.0080, 0.0084, 0.0078], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 13:08:17,284 INFO [finetune.py:976] (2/7) Epoch 11, batch 2100, loss[loss=0.1586, simple_loss=0.2282, pruned_loss=0.04452, over 4765.00 frames. ], tot_loss[loss=0.1927, simple_loss=0.2574, pruned_loss=0.06401, over 951611.74 frames. ], batch size: 27, lr: 3.70e-03, grad_scale: 32.0
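grad_scale is the dynamic loss scale of mixed-precision (use_fp16) training. It halved on overflowing batches earlier in the log (64 -> 32 -> 16) and jumps back from 16.0 to 32.0 exactly at "Epoch 11, batch 2000" above, matching torch.cuda.amp.GradScaler's default growth_interval of 2000 consecutive clean steps. A sketch of the surrounding fp16 step; compute_loss is a hypothetical stand-in for the model's loss function:

```python
# Sketch of the fp16 training step that produces the "grad_scale" field.
# GradScaler halves the scale when a batch yields inf/NaN grads and, by
# default, doubles it after growth_interval=2000 clean steps, which is
# consistent with the 16.0 -> 32.0 jump at batch 2000 logged above.
import torch

def train_step(model, optimizer, scaler, batch, compute_loss):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=True):
        loss = compute_loss(model, batch)
    scaler.scale(loss).backward()   # backward on the scaled loss
    scaler.step(optimizer)          # unscales; skips the step on inf/NaN
    scaler.update()                 # halve on overflow, grow when clean
    return loss.detach(), scaler.get_scale()  # logged as grad_scale

scaler = torch.cuda.amp.GradScaler(init_scale=64.0)
```

The scale only affects numerical range, not the effective gradients, since scaler.step() unscales before the optimizer update.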
2023-03-26 13:08:22,237 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=59385.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:08:32,751 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6120, 2.4067, 2.9610, 1.9185, 2.6743, 2.9353, 2.2048, 3.0610], device='cuda:2'), covar=tensor([0.1377, 0.1729, 0.1573, 0.2322, 0.0881, 0.1638, 0.2631, 0.0903], device='cuda:2'), in_proj_covar=tensor([0.0199, 0.0208, 0.0195, 0.0192, 0.0179, 0.0217, 0.0220, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:08:54,140 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0
2023-03-26 13:09:08,918 INFO [finetune.py:976] (2/7) Epoch 11, batch 2150, loss[loss=0.1686, simple_loss=0.2485, pruned_loss=0.04435, over 4786.00 frames. ], tot_loss[loss=0.1958, simple_loss=0.2614, pruned_loss=0.06505, over 952805.65 frames. ], batch size: 26, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:09:13,344 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=59431.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:09:15,687 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.234e+02 1.596e+02 1.893e+02 2.254e+02 5.168e+02, threshold=3.786e+02, percent-clipped=3.0
2023-03-26 13:09:54,868 INFO [finetune.py:976] (2/7) Epoch 11, batch 2200, loss[loss=0.2009, simple_loss=0.2752, pruned_loss=0.06329, over 4895.00 frames. ], tot_loss[loss=0.1984, simple_loss=0.2646, pruned_loss=0.06608, over 953894.26 frames. ], batch size: 35, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:09:56,666 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=59479.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:10:10,714 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0
2023-03-26 13:10:13,079 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=59502.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:10:21,086 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.12 vs. limit=2.0
2023-03-26 13:10:25,070 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=59520.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:10:30,584 INFO [finetune.py:976] (2/7) Epoch 11, batch 2250, loss[loss=0.2247, simple_loss=0.2927, pruned_loss=0.07835, over 4824.00 frames. ], tot_loss[loss=0.1999, simple_loss=0.2658, pruned_loss=0.06703, over 954323.62 frames. ], batch size: 39, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:10:37,559 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.128e+02 1.729e+02 2.023e+02 2.518e+02 3.990e+02, threshold=4.047e+02, percent-clipped=2.0
2023-03-26 13:11:02,714 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=59562.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:11:03,341 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=59563.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:11:13,470 INFO [finetune.py:976] (2/7) Epoch 11, batch 2300, loss[loss=0.1762, simple_loss=0.2427, pruned_loss=0.05484, over 4825.00 frames. ], tot_loss[loss=0.2001, simple_loss=0.2664, pruned_loss=0.0669, over 954124.20 frames. ], batch size: 30, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:11:16,186 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0
2023-03-26 13:11:23,111 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5778, 1.4639, 1.4474, 1.4951, 0.9136, 2.9621, 1.0864, 1.4891], device='cuda:2'), covar=tensor([0.3170, 0.2380, 0.2177, 0.2383, 0.1982, 0.0239, 0.2721, 0.1329], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0115, 0.0119, 0.0122, 0.0114, 0.0097, 0.0098, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 13:11:35,295 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=59610.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:11:46,668 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0
2023-03-26 13:11:47,097 INFO [finetune.py:976] (2/7) Epoch 11, batch 2350, loss[loss=0.1422, simple_loss=0.2116, pruned_loss=0.03639, over 4839.00 frames. ], tot_loss[loss=0.1963, simple_loss=0.2624, pruned_loss=0.06514, over 954950.69 frames. ], batch size: 47, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:11:52,461 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.309e+01 1.451e+02 1.728e+02 2.097e+02 4.600e+02, threshold=3.455e+02, percent-clipped=1.0
2023-03-26 13:12:19,961 INFO [finetune.py:976] (2/7) Epoch 11, batch 2400, loss[loss=0.1746, simple_loss=0.2353, pruned_loss=0.05696, over 4898.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.2601, pruned_loss=0.06448, over 956974.61 frames. ], batch size: 32, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:12:21,793 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3523, 2.2700, 2.0459, 1.2757, 2.1068, 1.9892, 1.8285, 2.1606], device='cuda:2'), covar=tensor([0.0893, 0.0654, 0.1234, 0.1758, 0.1412, 0.1675, 0.1806, 0.0823], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0200, 0.0202, 0.0188, 0.0217, 0.0209, 0.0223, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:12:38,419 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7623, 1.5315, 2.1515, 3.3374, 2.1546, 2.4725, 1.0429, 2.6065], device='cuda:2'), covar=tensor([0.1784, 0.1492, 0.1312, 0.0549, 0.0881, 0.1172, 0.1983, 0.0630], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0164, 0.0101, 0.0138, 0.0126, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 13:12:53,269 INFO [finetune.py:976] (2/7) Epoch 11, batch 2450, loss[loss=0.2043, simple_loss=0.2511, pruned_loss=0.07872, over 4774.00 frames. ], tot_loss[loss=0.1928, simple_loss=0.2574, pruned_loss=0.06407, over 956607.92 frames. ], batch size: 26, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:13:01,211 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.102e+02 1.641e+02 1.877e+02 2.149e+02 5.374e+02, threshold=3.753e+02, percent-clipped=2.0
2023-03-26 13:13:37,051 INFO [finetune.py:976] (2/7) Epoch 11, batch 2500, loss[loss=0.2076, simple_loss=0.2824, pruned_loss=0.06635, over 4802.00 frames. ], tot_loss[loss=0.1954, simple_loss=0.2598, pruned_loss=0.06549, over 956131.63 frames. ], batch size: 41, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:13:46,768 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.1239, 2.5739, 3.1748, 2.2126, 3.0999, 3.3905, 2.5670, 3.4028], device='cuda:2'), covar=tensor([0.1083, 0.1714, 0.1223, 0.1833, 0.0659, 0.1076, 0.2171, 0.0690], device='cuda:2'), in_proj_covar=tensor([0.0199, 0.0207, 0.0195, 0.0191, 0.0179, 0.0217, 0.0219, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:14:09,088 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.38 vs. limit=5.0
2023-03-26 13:14:29,699 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=59820.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:14:32,133 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2840, 1.5637, 0.7251, 2.2059, 2.4525, 1.7036, 1.9532, 1.8878], device='cuda:2'), covar=tensor([0.1491, 0.2073, 0.2346, 0.1172, 0.1971, 0.2071, 0.1389, 0.2002], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0097, 0.0114, 0.0094, 0.0121, 0.0095, 0.0100, 0.0092], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 13:14:33,894 INFO [finetune.py:976] (2/7) Epoch 11, batch 2550, loss[loss=0.2005, simple_loss=0.2696, pruned_loss=0.06565, over 4898.00 frames. ], tot_loss[loss=0.1965, simple_loss=0.2621, pruned_loss=0.0654, over 957193.19 frames. ], batch size: 43, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:14:40,182 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.135e+02 1.636e+02 1.885e+02 2.323e+02 4.849e+02, threshold=3.771e+02, percent-clipped=2.0
2023-03-26 13:14:45,067 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1247, 1.5784, 2.5932, 3.8170, 2.6585, 2.6212, 0.9548, 2.9417], device='cuda:2'), covar=tensor([0.1567, 0.1614, 0.1126, 0.0501, 0.0725, 0.1883, 0.1815, 0.0515], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0116, 0.0133, 0.0163, 0.0101, 0.0137, 0.0125, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 13:14:49,246 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=59848.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:14:57,186 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=59858.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:15:03,244 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.8398, 3.3643, 3.5372, 3.7233, 3.6107, 3.3934, 3.9588, 1.2577], device='cuda:2'), covar=tensor([0.0993, 0.0956, 0.0923, 0.1149, 0.1503, 0.1710, 0.0824, 0.5702], device='cuda:2'), in_proj_covar=tensor([0.0346, 0.0242, 0.0274, 0.0292, 0.0329, 0.0284, 0.0300, 0.0293], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:15:03,807 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=59868.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:15:09,218 INFO [finetune.py:976] (2/7) Epoch 11, batch 2600, loss[loss=0.1673, simple_loss=0.2422, pruned_loss=0.04623, over 4777.00 frames. ], tot_loss[loss=0.1967, simple_loss=0.2626, pruned_loss=0.06542, over 957038.38 frames. ], batch size: 29, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:15:18,110 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=59889.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:15:31,236 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=59909.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:15:42,446 INFO [finetune.py:976] (2/7) Epoch 11, batch 2650, loss[loss=0.1833, simple_loss=0.2543, pruned_loss=0.05616, over 4819.00 frames. ], tot_loss[loss=0.198, simple_loss=0.2639, pruned_loss=0.06603, over 956456.99 frames. ], batch size: 33, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:15:47,335 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.166e+02 1.549e+02 1.976e+02 2.444e+02 3.877e+02, threshold=3.952e+02, percent-clipped=1.0
2023-03-26 13:16:03,035 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=59950.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:16:29,561 INFO [finetune.py:976] (2/7) Epoch 11, batch 2700, loss[loss=0.1816, simple_loss=0.249, pruned_loss=0.05709, over 4785.00 frames. ], tot_loss[loss=0.1959, simple_loss=0.2621, pruned_loss=0.06488, over 954606.69 frames. ], batch size: 28, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:16:48,707 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8216, 1.7373, 1.5373, 1.4603, 1.8967, 1.6061, 1.8632, 1.8314], device='cuda:2'), covar=tensor([0.1422, 0.2164, 0.3223, 0.2556, 0.2599, 0.1758, 0.2869, 0.2008], device='cuda:2'), in_proj_covar=tensor([0.0173, 0.0185, 0.0229, 0.0251, 0.0236, 0.0194, 0.0209, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:16:57,161 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6688, 1.5399, 1.4003, 1.5284, 1.8939, 1.7753, 1.5852, 1.3497], device='cuda:2'), covar=tensor([0.0306, 0.0282, 0.0610, 0.0289, 0.0198, 0.0482, 0.0295, 0.0412], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0109, 0.0140, 0.0115, 0.0102, 0.0104, 0.0093, 0.0108], device='cuda:2'), out_proj_covar=tensor([7.1390e-05, 8.5021e-05, 1.1155e-04, 8.9683e-05, 8.0096e-05, 7.7482e-05, 7.0435e-05, 8.3393e-05], device='cuda:2')
2023-03-26 13:17:04,325 INFO [finetune.py:976] (2/7) Epoch 11, batch 2750, loss[loss=0.1965, simple_loss=0.2624, pruned_loss=0.06533, over 4871.00 frames. ], tot_loss[loss=0.1951, simple_loss=0.2601, pruned_loss=0.06499, over 955051.99 frames. ], batch size: 31, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:17:09,211 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.070e+02 1.603e+02 1.823e+02 2.284e+02 4.397e+02, threshold=3.646e+02, percent-clipped=1.0
2023-03-26 13:17:12,385 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=60040.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:17:21,588 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=60052.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:17:26,735 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.7915, 3.2727, 3.4591, 3.6904, 3.5607, 3.3356, 3.8747, 1.2516], device='cuda:2'), covar=tensor([0.0843, 0.0901, 0.0842, 0.1021, 0.1210, 0.1516, 0.0823, 0.5289], device='cuda:2'), in_proj_covar=tensor([0.0347, 0.0242, 0.0275, 0.0293, 0.0330, 0.0285, 0.0301, 0.0294], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:17:30,381 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=60065.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:17:37,360 INFO [finetune.py:976] (2/7) Epoch 11, batch 2800, loss[loss=0.1744, simple_loss=0.2279, pruned_loss=0.06044, over 4759.00 frames. ], tot_loss[loss=0.1922, simple_loss=0.2568, pruned_loss=0.06377, over 956893.32 frames. ], batch size: 26, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:17:38,097 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2417, 1.3308, 1.6005, 1.1166, 1.2266, 1.4137, 1.3007, 1.6107], device='cuda:2'), covar=tensor([0.1214, 0.1971, 0.1169, 0.1377, 0.0908, 0.1189, 0.2861, 0.0780], device='cuda:2'), in_proj_covar=tensor([0.0198, 0.0205, 0.0193, 0.0189, 0.0177, 0.0215, 0.0217, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:17:38,103 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=60078.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:17:54,538 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=60101.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 13:18:02,772 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=60113.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 13:18:10,616 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=60126.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:18:11,113 INFO [finetune.py:976] (2/7) Epoch 11, batch 2850, loss[loss=0.2221, simple_loss=0.2897, pruned_loss=0.07722, over 4808.00 frames. ], tot_loss[loss=0.1913, simple_loss=0.2556, pruned_loss=0.06349, over 952680.68 frames. ], batch size: 51, lr: 3.69e-03, grad_scale: 32.0
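Note on the zipformer.py:1188 records: they are per-stack layer-dropout bookkeeping. Each encoder stack has a warmup window in batches (warmup_begin/warmup_end), and on some steps whole layers are skipped (num_to_drop=1, layers_to_drop={2} above). A sketch of the bookkeeping only; the sampling rule and the probability are assumptions, not the exact Zipformer schedule:

    import random

    def pick_layers_to_drop(num_layers: int, batch_count: float,
                            warmup_end: float, p_drop: float = 0.075) -> set:
        # assumed: no layers are dropped while the stack is still warming up
        if batch_count < warmup_end:
            return set()
        # assumed: each layer is independently skipped with a small probability
        return {i for i in range(num_layers) if random.random() < p_drop}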
2023-03-26 13:18:17,974 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.190e+02 1.579e+02 1.818e+02 2.348e+02 4.165e+02, threshold=3.636e+02, percent-clipped=3.0
2023-03-26 13:18:19,783 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.0623, 4.9750, 4.6892, 2.7388, 5.0969, 3.8045, 1.3112, 3.5078], device='cuda:2'), covar=tensor([0.2072, 0.1707, 0.1227, 0.2918, 0.0834, 0.0793, 0.4146, 0.1280], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0174, 0.0159, 0.0128, 0.0155, 0.0121, 0.0146, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 13:18:21,039 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=60139.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:18:38,134 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0
2023-03-26 13:18:39,223 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60158.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:18:51,668 INFO [finetune.py:976] (2/7) Epoch 11, batch 2900, loss[loss=0.218, simple_loss=0.2681, pruned_loss=0.084, over 4761.00 frames. ], tot_loss[loss=0.1924, simple_loss=0.2573, pruned_loss=0.0638, over 954022.53 frames. ], batch size: 26, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:19:12,029 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=60198.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:19:21,312 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=60204.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:19:22,530 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=60206.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:19:31,841 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9836, 1.9373, 1.5680, 1.8427, 2.0440, 1.6744, 2.2325, 1.9665], device='cuda:2'), covar=tensor([0.1455, 0.2287, 0.3416, 0.2672, 0.2618, 0.1760, 0.3260, 0.1940], device='cuda:2'), in_proj_covar=tensor([0.0174, 0.0185, 0.0230, 0.0252, 0.0236, 0.0194, 0.0209, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:19:51,301 INFO [finetune.py:976] (2/7) Epoch 11, batch 2950, loss[loss=0.2026, simple_loss=0.2799, pruned_loss=0.06265, over 4889.00 frames. ], tot_loss[loss=0.1948, simple_loss=0.2599, pruned_loss=0.06484, over 953634.55 frames. ], batch size: 43, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:20:00,142 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.062e+02 1.723e+02 2.035e+02 2.444e+02 4.360e+02, threshold=4.070e+02, percent-clipped=6.0
2023-03-26 13:20:06,689 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=60245.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:20:12,660 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3650, 1.2656, 1.1752, 1.3385, 1.6276, 1.4509, 1.3454, 1.1231], device='cuda:2'), covar=tensor([0.0305, 0.0279, 0.0615, 0.0281, 0.0206, 0.0513, 0.0306, 0.0445], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0108, 0.0139, 0.0113, 0.0101, 0.0103, 0.0093, 0.0107], device='cuda:2'), out_proj_covar=tensor([7.0425e-05, 8.4274e-05, 1.1060e-04, 8.8586e-05, 7.8899e-05, 7.6222e-05, 7.0148e-05, 8.2421e-05], device='cuda:2')
2023-03-26 13:20:16,639 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=60259.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:20:28,426 INFO [finetune.py:976] (2/7) Epoch 11, batch 3000, loss[loss=0.1755, simple_loss=0.2427, pruned_loss=0.0541, over 4173.00 frames. ], tot_loss[loss=0.1974, simple_loss=0.2625, pruned_loss=0.06609, over 953605.44 frames. ], batch size: 18, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:20:28,426 INFO [finetune.py:1001] (2/7) Computing validation loss
2023-03-26 13:20:38,898 INFO [finetune.py:1010] (2/7) Epoch 11, validation: loss=0.1572, simple_loss=0.2284, pruned_loss=0.04301, over 2265189.00 frames.
2023-03-26 13:20:38,898 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB
2023-03-26 13:20:55,352 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0357, 2.2060, 2.1304, 1.3410, 2.1957, 2.2119, 2.2248, 1.8756], device='cuda:2'), covar=tensor([0.0669, 0.0617, 0.0637, 0.0934, 0.0569, 0.0657, 0.0573, 0.1002], device='cuda:2'), in_proj_covar=tensor([0.0137, 0.0136, 0.0144, 0.0127, 0.0122, 0.0146, 0.0147, 0.0166], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:21:13,696 INFO [finetune.py:976] (2/7) Epoch 11, batch 3050, loss[loss=0.1906, simple_loss=0.2503, pruned_loss=0.06541, over 4904.00 frames. ], tot_loss[loss=0.198, simple_loss=0.2639, pruned_loss=0.06602, over 954581.25 frames. ], batch size: 37, lr: 3.69e-03, grad_scale: 32.0
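Note on the two loss groups: loss[...] is the current batch alone, while tot_loss[...] is a running, frame-weighted average whose effective window stays near ~955k training frames; the validation records instead average over the full dev set (2,265,189 frames). A sketch of a frame-weighted running average of the kind the tot_loss numbers suggest; the decay constant and class name are assumptions:

    class FrameWeightedAverage:
        def __init__(self, decay: float = 0.999):
            self.decay, self.num, self.den = decay, 0.0, 0.0

        def update(self, batch_loss: float, batch_frames: float) -> float:
            # decay the old statistics, then fold in the new batch weighted by frames
            self.num = self.decay * self.num + batch_loss * batch_frames
            self.den = self.decay * self.den + batch_frames
            return self.num / self.den  # value reported as tot_loss[..., over N frames]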
2023-03-26 13:21:19,479 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.076e+02 1.587e+02 1.939e+02 2.482e+02 4.597e+02, threshold=3.877e+02, percent-clipped=2.0
2023-03-26 13:21:34,480 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4542, 2.2999, 1.8541, 0.9371, 2.1033, 1.9455, 1.8125, 2.0857], device='cuda:2'), covar=tensor([0.0881, 0.0854, 0.1486, 0.2051, 0.1389, 0.2258, 0.2205, 0.0914], device='cuda:2'), in_proj_covar=tensor([0.0166, 0.0199, 0.0201, 0.0186, 0.0214, 0.0207, 0.0222, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:21:45,840 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.0583, 1.8107, 1.9726, 0.8830, 2.1571, 2.3327, 2.0288, 1.8501], device='cuda:2'), covar=tensor([0.0912, 0.0745, 0.0523, 0.0760, 0.0495, 0.0719, 0.0431, 0.0683], device='cuda:2'), in_proj_covar=tensor([0.0127, 0.0153, 0.0121, 0.0132, 0.0129, 0.0124, 0.0143, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.3649e-05, 1.1176e-04, 8.7267e-05, 9.5474e-05, 9.2200e-05, 9.0269e-05, 1.0440e-04, 1.0649e-04], device='cuda:2')
2023-03-26 13:21:48,238 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0
2023-03-26 13:21:49,322 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5776, 1.2952, 1.7773, 3.0686, 2.0536, 2.3955, 0.8834, 2.5597], device='cuda:2'), covar=tensor([0.2175, 0.2224, 0.1739, 0.1084, 0.1083, 0.1327, 0.2386, 0.0741], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0133, 0.0163, 0.0101, 0.0137, 0.0125, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 13:21:56,081 INFO [finetune.py:976] (2/7) Epoch 11, batch 3100, loss[loss=0.1936, simple_loss=0.2711, pruned_loss=0.05806, over 4816.00 frames. ], tot_loss[loss=0.1951, simple_loss=0.2608, pruned_loss=0.06472, over 954281.28 frames. ], batch size: 38, lr: 3.69e-03, grad_scale: 32.0
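Note on the scaling.py:679 "Whitening" records: they compare a whiteness statistic of a layer's activations against a limit (2.0 for the grouped 96/192-channel checks, 5.0 for the ungrouped 384-channel one); values near 1.0 mean the channel covariance is close to isotropic. One standard metric with exactly this behaviour is E[lambda^2]/E[lambda]^2 over the covariance eigenvalues, computable from traces alone; whether scaling.py uses precisely this form is an assumption:

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
        # x: (num_frames, num_channels); channels are split into groups as in the log
        n, c = x.shape
        g = c // num_groups
        xg = x.reshape(n, num_groups, g).permute(1, 0, 2)      # (groups, n, g)
        cov = xg.transpose(1, 2) @ xg / n                      # per-group covariance
        mean_eig = cov.diagonal(dim1=1, dim2=2).sum(-1) / g    # trace(C)/g   = E[lambda]
        mean_eig_sq = (cov * cov).sum(dim=(1, 2)) / g          # trace(C@C)/g = E[lambda^2]
        return (mean_eig_sq / mean_eig.pow(2)).mean().item()   # 1.0 for a perfectly white signal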
2023-03-26 13:21:56,168 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.3763, 4.5861, 4.9051, 5.2222, 5.0936, 4.8429, 5.4465, 1.6985], device='cuda:2'), covar=tensor([0.0679, 0.0842, 0.0806, 0.0797, 0.1053, 0.1369, 0.0562, 0.5604], device='cuda:2'), in_proj_covar=tensor([0.0348, 0.0243, 0.0275, 0.0291, 0.0330, 0.0284, 0.0301, 0.0294], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:21:58,584 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3233, 2.1663, 1.6846, 2.2093, 2.2266, 1.9160, 2.5504, 2.2804], device='cuda:2'), covar=tensor([0.1207, 0.2169, 0.3070, 0.2802, 0.2374, 0.1649, 0.3101, 0.1696], device='cuda:2'), in_proj_covar=tensor([0.0175, 0.0186, 0.0231, 0.0253, 0.0237, 0.0195, 0.0211, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:22:08,706 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=60396.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 13:22:16,689 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=60408.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 13:22:25,058 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=60421.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:22:29,560 INFO [finetune.py:976] (2/7) Epoch 11, batch 3150, loss[loss=0.1901, simple_loss=0.2455, pruned_loss=0.06734, over 4835.00 frames. ], tot_loss[loss=0.1932, simple_loss=0.2586, pruned_loss=0.0639, over 954020.40 frames. ], batch size: 47, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:22:34,349 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=60434.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:22:34,872 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.045e+02 1.624e+02 1.838e+02 2.200e+02 4.980e+02, threshold=3.676e+02, percent-clipped=1.0
2023-03-26 13:22:37,949 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7949, 1.7373, 1.5250, 1.9358, 2.4936, 1.9644, 1.5912, 1.4953], device='cuda:2'), covar=tensor([0.2213, 0.2145, 0.2017, 0.1536, 0.1555, 0.1205, 0.2370, 0.1918], device='cuda:2'), in_proj_covar=tensor([0.0236, 0.0206, 0.0207, 0.0188, 0.0240, 0.0181, 0.0211, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:22:44,072 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0
2023-03-26 13:22:56,470 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0266, 1.7621, 2.4293, 1.6247, 2.0706, 2.3802, 1.6816, 2.4651], device='cuda:2'), covar=tensor([0.1152, 0.1951, 0.1078, 0.1747, 0.0786, 0.1195, 0.2458, 0.0762], device='cuda:2'), in_proj_covar=tensor([0.0195, 0.0203, 0.0191, 0.0188, 0.0176, 0.0213, 0.0216, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:23:01,693 INFO [finetune.py:976] (2/7) Epoch 11, batch 3200, loss[loss=0.1735, simple_loss=0.2409, pruned_loss=0.05298, over 4904.00 frames. ], tot_loss[loss=0.1917, simple_loss=0.2564, pruned_loss=0.06346, over 953862.15 frames. ], batch size: 36, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:23:20,604 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60504.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:23:37,313 INFO [finetune.py:976] (2/7) Epoch 11, batch 3250, loss[loss=0.1532, simple_loss=0.2209, pruned_loss=0.04277, over 4792.00 frames. ], tot_loss[loss=0.192, simple_loss=0.2564, pruned_loss=0.06376, over 951670.58 frames. ], batch size: 26, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:23:48,950 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.174e+02 1.626e+02 1.982e+02 2.397e+02 3.737e+02, threshold=3.964e+02, percent-clipped=1.0
2023-03-26 13:23:59,824 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60545.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:24:00,671 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0
2023-03-26 13:24:04,035 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=60552.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:24:05,274 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=60554.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:24:27,367 INFO [finetune.py:976] (2/7) Epoch 11, batch 3300, loss[loss=0.2221, simple_loss=0.2987, pruned_loss=0.07274, over 4818.00 frames. ], tot_loss[loss=0.1956, simple_loss=0.2609, pruned_loss=0.0651, over 952518.22 frames. ], batch size: 38, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:24:45,613 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=60593.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:25:28,944 INFO [finetune.py:976] (2/7) Epoch 11, batch 3350, loss[loss=0.2142, simple_loss=0.2862, pruned_loss=0.07103, over 4904.00 frames. ], tot_loss[loss=0.1981, simple_loss=0.2636, pruned_loss=0.06632, over 952852.19 frames. ], batch size: 36, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:25:34,911 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.073e+02 1.701e+02 2.036e+02 2.469e+02 3.577e+02, threshold=4.071e+02, percent-clipped=0.0
2023-03-26 13:26:02,928 INFO [finetune.py:976] (2/7) Epoch 11, batch 3400, loss[loss=0.1922, simple_loss=0.2584, pruned_loss=0.06302, over 4922.00 frames. ], tot_loss[loss=0.1985, simple_loss=0.2644, pruned_loss=0.06629, over 954033.93 frames. ], batch size: 33, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:26:16,538 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60696.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:26:24,733 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60708.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:26:30,787 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=60718.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:26:31,938 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7852, 4.0744, 3.6807, 2.1636, 4.0739, 3.1040, 0.7854, 2.6587], device='cuda:2'), covar=tensor([0.2297, 0.1444, 0.1604, 0.2827, 0.0840, 0.0874, 0.4542, 0.1398], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0174, 0.0160, 0.0128, 0.0156, 0.0122, 0.0146, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 13:26:32,539 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60721.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:26:36,599 INFO [finetune.py:976] (2/7) Epoch 11, batch 3450, loss[loss=0.2059, simple_loss=0.2643, pruned_loss=0.07375, over 4814.00 frames. ], tot_loss[loss=0.1977, simple_loss=0.2638, pruned_loss=0.06579, over 954222.41 frames. ], batch size: 38, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:26:41,018 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60734.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:26:41,507 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.902e+01 1.594e+02 1.892e+02 2.253e+02 3.493e+02, threshold=3.783e+02, percent-clipped=0.0
2023-03-26 13:26:52,718 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=60744.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:27:06,423 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=60756.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:27:15,656 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6216, 1.0519, 0.8328, 1.5026, 2.0575, 1.0881, 1.3983, 1.6388], device='cuda:2'), covar=tensor([0.1493, 0.2193, 0.2056, 0.1181, 0.1933, 0.1944, 0.1491, 0.1847], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0097, 0.0114, 0.0094, 0.0121, 0.0095, 0.0100, 0.0091], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 13:27:25,456 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=60769.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:27:36,413 INFO [finetune.py:976] (2/7) Epoch 11, batch 3500, loss[loss=0.1535, simple_loss=0.2305, pruned_loss=0.03829, over 4739.00 frames. ], tot_loss[loss=0.1956, simple_loss=0.2615, pruned_loss=0.06491, over 954793.53 frames. ], batch size: 23, lr: 3.69e-03, grad_scale: 32.0
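Note on the zipformer.py:2441 dumps: attn_weights_entropy reports one value per attention head (eight here), alongside covariance summaries of the attention projections; a head whose entropy collapses toward zero is attending to very few positions. A sketch of the per-head statistic; the averaging over query positions is an assumption about the reduction:

    import torch

    def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
        # attn: (num_heads, num_queries, num_keys); each row is a softmax distribution
        ent = -(attn * (attn + 1e-20).log()).sum(dim=-1)  # entropy per (head, query)
        return ent.mean(dim=-1)                           # -> one value per head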
2023-03-26 13:27:37,745 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=60779.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:27:39,439 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=60782.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:28:14,078 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.0654, 3.4981, 3.6991, 3.9089, 3.8709, 3.6150, 4.1216, 1.4026], device='cuda:2'), covar=tensor([0.0732, 0.0771, 0.0879, 0.0862, 0.1018, 0.1292, 0.0753, 0.5148], device='cuda:2'), in_proj_covar=tensor([0.0347, 0.0242, 0.0274, 0.0289, 0.0330, 0.0281, 0.0300, 0.0293], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:28:15,227 INFO [finetune.py:976] (2/7) Epoch 11, batch 3550, loss[loss=0.1707, simple_loss=0.2095, pruned_loss=0.06593, over 3413.00 frames. ], tot_loss[loss=0.1936, simple_loss=0.2587, pruned_loss=0.06429, over 954027.26 frames. ], batch size: 14, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:28:20,663 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.181e+02 1.566e+02 1.863e+02 2.348e+02 4.575e+02, threshold=3.726e+02, percent-clipped=4.0
2023-03-26 13:28:34,186 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60854.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:28:49,078 INFO [finetune.py:976] (2/7) Epoch 11, batch 3600, loss[loss=0.1552, simple_loss=0.2221, pruned_loss=0.04416, over 4826.00 frames. ], tot_loss[loss=0.1914, simple_loss=0.2558, pruned_loss=0.06347, over 952580.54 frames. ], batch size: 30, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:29:17,739 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=60902.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:29:26,109 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0
2023-03-26 13:29:39,499 INFO [finetune.py:976] (2/7) Epoch 11, batch 3650, loss[loss=0.2266, simple_loss=0.2958, pruned_loss=0.07866, over 4739.00 frames. ], tot_loss[loss=0.194, simple_loss=0.259, pruned_loss=0.06449, over 953966.84 frames. ], batch size: 59, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:29:42,104 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8459, 1.6752, 1.4247, 1.5437, 1.6006, 1.5479, 1.6408, 2.2838], device='cuda:2'), covar=tensor([0.4291, 0.4509, 0.3503, 0.3954, 0.4279, 0.2525, 0.3958, 0.1867], device='cuda:2'), in_proj_covar=tensor([0.0282, 0.0258, 0.0222, 0.0275, 0.0242, 0.0208, 0.0244, 0.0214], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:29:44,364 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.012e+02 1.638e+02 1.962e+02 2.312e+02 3.604e+02, threshold=3.924e+02, percent-clipped=0.0
2023-03-26 13:30:18,547 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8982, 1.7013, 2.2675, 1.5001, 2.1052, 2.1323, 1.5604, 2.2597], device='cuda:2'), covar=tensor([0.1356, 0.2119, 0.1437, 0.2165, 0.0831, 0.1514, 0.3084, 0.0973], device='cuda:2'), in_proj_covar=tensor([0.0197, 0.0204, 0.0193, 0.0190, 0.0177, 0.0215, 0.0217, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:30:33,791 INFO [finetune.py:976] (2/7) Epoch 11, batch 3700, loss[loss=0.2347, simple_loss=0.3037, pruned_loss=0.08288, over 4910.00 frames. ], tot_loss[loss=0.1949, simple_loss=0.2613, pruned_loss=0.0643, over 954302.53 frames. ], batch size: 36, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:30:48,005 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=2.02 vs. limit=2.0
2023-03-26 13:31:15,830 INFO [finetune.py:976] (2/7) Epoch 11, batch 3750, loss[loss=0.1802, simple_loss=0.2491, pruned_loss=0.05567, over 4767.00 frames. ], tot_loss[loss=0.194, simple_loss=0.2608, pruned_loss=0.06364, over 954192.64 frames. ], batch size: 27, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:31:20,653 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.055e+02 1.587e+02 1.819e+02 2.276e+02 4.586e+02, threshold=3.638e+02, percent-clipped=1.0
2023-03-26 13:31:36,734 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8074, 1.8279, 1.7012, 1.7285, 1.4142, 4.1201, 1.7115, 2.3328], device='cuda:2'), covar=tensor([0.3385, 0.2420, 0.2003, 0.2267, 0.1536, 0.0135, 0.2341, 0.1136], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0123, 0.0115, 0.0098, 0.0098, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 13:31:38,966 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=61061.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:31:47,669 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=61074.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:31:49,400 INFO [finetune.py:976] (2/7) Epoch 11, batch 3800, loss[loss=0.1737, simple_loss=0.2478, pruned_loss=0.04983, over 4921.00 frames. ], tot_loss[loss=0.1943, simple_loss=0.2616, pruned_loss=0.06347, over 953740.22 frames. ], batch size: 38, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:32:10,049 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9548, 1.3287, 1.9054, 1.8233, 1.6767, 1.6368, 1.7586, 1.6990], device='cuda:2'), covar=tensor([0.4218, 0.4727, 0.3892, 0.4384, 0.5667, 0.4115, 0.5447, 0.3879], device='cuda:2'), in_proj_covar=tensor([0.0238, 0.0238, 0.0254, 0.0259, 0.0255, 0.0231, 0.0274, 0.0232], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:32:29,696 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=61122.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:32:32,630 INFO [finetune.py:976] (2/7) Epoch 11, batch 3850, loss[loss=0.2024, simple_loss=0.2494, pruned_loss=0.07765, over 4800.00 frames. ], tot_loss[loss=0.1931, simple_loss=0.26, pruned_loss=0.0631, over 954139.80 frames. ], batch size: 51, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:32:33,241 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1939, 2.1657, 2.1555, 1.3975, 2.2050, 2.3488, 2.1827, 1.8302], device='cuda:2'), covar=tensor([0.0605, 0.0616, 0.0728, 0.0993, 0.0607, 0.0675, 0.0638, 0.1098], device='cuda:2'), in_proj_covar=tensor([0.0135, 0.0134, 0.0142, 0.0125, 0.0120, 0.0144, 0.0144, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:32:37,924 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.075e+02 1.518e+02 1.864e+02 2.279e+02 4.215e+02, threshold=3.727e+02, percent-clipped=1.0
2023-03-26 13:32:42,326 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5318, 1.4502, 1.5666, 0.8329, 1.5962, 1.5458, 1.4937, 1.3505], device='cuda:2'), covar=tensor([0.0615, 0.0817, 0.0707, 0.0993, 0.0767, 0.0760, 0.0689, 0.1254], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0133, 0.0142, 0.0124, 0.0120, 0.0143, 0.0144, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:33:05,949 INFO [finetune.py:976] (2/7) Epoch 11, batch 3900, loss[loss=0.1834, simple_loss=0.2407, pruned_loss=0.0631, over 4749.00 frames. ], tot_loss[loss=0.1912, simple_loss=0.2571, pruned_loss=0.06263, over 952718.36 frames. ], batch size: 27, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:33:13,244 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6338, 1.4857, 1.0362, 0.2718, 1.1851, 1.4758, 1.4659, 1.4215], device='cuda:2'), covar=tensor([0.0995, 0.0822, 0.1360, 0.1959, 0.1517, 0.2536, 0.2129, 0.0933], device='cuda:2'), in_proj_covar=tensor([0.0166, 0.0199, 0.0200, 0.0185, 0.0214, 0.0207, 0.0221, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:33:39,747 INFO [finetune.py:976] (2/7) Epoch 11, batch 3950, loss[loss=0.1626, simple_loss=0.2301, pruned_loss=0.04757, over 4771.00 frames. ], tot_loss[loss=0.1898, simple_loss=0.2551, pruned_loss=0.06221, over 953972.69 frames. ], batch size: 28, lr: 3.68e-03, grad_scale: 32.0
2023-03-26 13:33:45,060 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.129e+02 1.570e+02 1.907e+02 2.309e+02 4.377e+02, threshold=3.813e+02, percent-clipped=3.0
2023-03-26 13:34:01,978 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4134, 1.5012, 1.7290, 1.7980, 1.6132, 3.2978, 1.3455, 1.6381], device='cuda:2'), covar=tensor([0.1034, 0.1855, 0.1101, 0.0980, 0.1609, 0.0258, 0.1573, 0.1703], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0081, 0.0075, 0.0077, 0.0092, 0.0081, 0.0085, 0.0078], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 13:34:06,066 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4384, 1.5600, 1.5659, 0.8136, 1.6085, 1.8141, 1.8430, 1.3665], device='cuda:2'), covar=tensor([0.0875, 0.0575, 0.0453, 0.0570, 0.0492, 0.0550, 0.0301, 0.0718], device='cuda:2'), in_proj_covar=tensor([0.0127, 0.0153, 0.0121, 0.0132, 0.0130, 0.0125, 0.0144, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.4103e-05, 1.1237e-04, 8.7306e-05, 9.5383e-05, 9.2739e-05, 9.0808e-05, 1.0501e-04, 1.0677e-04], device='cuda:2')
2023-03-26 13:34:12,380 INFO [finetune.py:976] (2/7) Epoch 11, batch 4000, loss[loss=0.2188, simple_loss=0.3002, pruned_loss=0.0687, over 4822.00 frames. ], tot_loss[loss=0.1901, simple_loss=0.2551, pruned_loss=0.06254, over 954657.73 frames. ], batch size: 33, lr: 3.68e-03, grad_scale: 64.0
2023-03-26 13:34:37,646 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.18 vs. limit=5.0
2023-03-26 13:34:55,752 INFO [finetune.py:976] (2/7) Epoch 11, batch 4050, loss[loss=0.17, simple_loss=0.2404, pruned_loss=0.04979, over 4826.00 frames. ], tot_loss[loss=0.1933, simple_loss=0.2585, pruned_loss=0.06408, over 953938.82 frames. ], batch size: 30, lr: 3.68e-03, grad_scale: 64.0
2023-03-26 13:35:04,888 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.154e+02 1.652e+02 2.086e+02 2.571e+02 4.987e+02, threshold=4.171e+02, percent-clipped=6.0
2023-03-26 13:35:11,043 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.73 vs. limit=2.0
2023-03-26 13:35:24,418 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.2386, 3.7041, 3.8728, 4.0392, 3.9551, 3.7809, 4.2963, 1.4815], device='cuda:2'), covar=tensor([0.0824, 0.0917, 0.0861, 0.0983, 0.1415, 0.1514, 0.0804, 0.5040], device='cuda:2'), in_proj_covar=tensor([0.0348, 0.0241, 0.0274, 0.0288, 0.0329, 0.0281, 0.0299, 0.0293], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:35:41,823 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=61374.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:35:43,531 INFO [finetune.py:976] (2/7) Epoch 11, batch 4100, loss[loss=0.2454, simple_loss=0.3148, pruned_loss=0.08805, over 4760.00 frames. ], tot_loss[loss=0.1954, simple_loss=0.261, pruned_loss=0.06492, over 954973.61 frames. ], batch size: 59, lr: 3.68e-03, grad_scale: 64.0
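Note on the printed learning rate: it decays smoothly (3.70e-03 earlier in the epoch, 3.68e-03 here, 3.67e-03 further below) as batch_count climbs through ~60k, consistent with icefall's Eden schedule. A sketch, taking base_lr=0.004, lr_batches=100000 and lr_epochs=100 as assumptions about this run's configuration; with those values the formula reproduces the printed rates:

    def eden_lr(base_lr: float, batch: float, epoch: float,
                lr_batches: float = 100_000.0, lr_epochs: float = 100.0) -> float:
        # smooth power-law decay in both the batch index and the epoch index
        return (base_lr
                * ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
                * ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25)

    # eden_lr(0.004, batch=60_000, epoch=11) ~= 3.69e-03, matching the records nearby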
2023-03-26 13:36:11,798 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=61417.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:36:20,057 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=61422.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:36:26,637 INFO [finetune.py:976] (2/7) Epoch 11, batch 4150, loss[loss=0.2066, simple_loss=0.2812, pruned_loss=0.06603, over 4900.00 frames. ], tot_loss[loss=0.1977, simple_loss=0.2631, pruned_loss=0.06613, over 952160.75 frames. ], batch size: 37, lr: 3.68e-03, grad_scale: 32.0
2023-03-26 13:36:31,421 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9341, 1.8030, 1.7755, 1.8269, 1.3561, 3.8068, 1.5532, 2.2102], device='cuda:2'), covar=tensor([0.3135, 0.2288, 0.1943, 0.2261, 0.1652, 0.0141, 0.2480, 0.1137], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0115, 0.0119, 0.0123, 0.0115, 0.0098, 0.0098, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 13:36:32,502 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.161e+02 1.629e+02 1.982e+02 2.519e+02 5.426e+02, threshold=3.964e+02, percent-clipped=4.0
2023-03-26 13:36:59,824 INFO [finetune.py:976] (2/7) Epoch 11, batch 4200, loss[loss=0.2036, simple_loss=0.2636, pruned_loss=0.07178, over 4733.00 frames. ], tot_loss[loss=0.1971, simple_loss=0.2627, pruned_loss=0.06578, over 951201.17 frames. ], batch size: 23, lr: 3.68e-03, grad_scale: 32.0
2023-03-26 13:37:17,621 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9262, 1.2342, 1.8982, 1.8430, 1.6623, 1.6784, 1.7675, 1.6866], device='cuda:2'), covar=tensor([0.3858, 0.4275, 0.3555, 0.3829, 0.5104, 0.3766, 0.4633, 0.3501], device='cuda:2'), in_proj_covar=tensor([0.0237, 0.0238, 0.0253, 0.0258, 0.0255, 0.0231, 0.0274, 0.0232], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:37:35,243 INFO [finetune.py:976] (2/7) Epoch 11, batch 4250, loss[loss=0.1796, simple_loss=0.2415, pruned_loss=0.05882, over 4776.00 frames. ], tot_loss[loss=0.1947, simple_loss=0.2601, pruned_loss=0.06471, over 951278.05 frames. ], batch size: 26, lr: 3.68e-03, grad_scale: 32.0
2023-03-26 13:37:45,940 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.771e+01 1.547e+02 1.858e+02 2.245e+02 5.805e+02, threshold=3.715e+02, percent-clipped=2.0
2023-03-26 13:38:05,964 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0
2023-03-26 13:38:15,490 INFO [finetune.py:976] (2/7) Epoch 11, batch 4300, loss[loss=0.1742, simple_loss=0.237, pruned_loss=0.05572, over 4898.00 frames. ], tot_loss[loss=0.1927, simple_loss=0.2574, pruned_loss=0.06394, over 953242.35 frames. ], batch size: 32, lr: 3.68e-03, grad_scale: 32.0
2023-03-26 13:38:40,086 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-26 13:38:48,430 INFO [finetune.py:976] (2/7) Epoch 11, batch 4350, loss[loss=0.2013, simple_loss=0.2632, pruned_loss=0.06966, over 4944.00 frames. ], tot_loss[loss=0.1905, simple_loss=0.255, pruned_loss=0.06302, over 953627.53 frames. ], batch size: 33, lr: 3.68e-03, grad_scale: 32.0
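Note on grad_scale: it is the fp16 dynamic loss scale. It doubled from 32.0 to 64.0 at batch 4000, fell back to 32.0 by batch 4150, and drops to 16.0 further below, the classic pattern of a scaler that grows periodically while steps stay finite and halves on overflow. A sketch in the spirit of torch.cuda.amp.GradScaler; the growth interval and factors here are assumptions:

    class DynamicLossScale:
        def __init__(self, scale: float = 32.0, growth_interval: int = 2000):
            self.scale = scale
            self.growth_interval = growth_interval
            self._good_steps = 0

        def update(self, found_inf: bool) -> None:
            if found_inf:
                self.scale *= 0.5       # overflow detected: halve and skip the step
                self._good_steps = 0
            else:
                self._good_steps += 1
                if self._good_steps == self.growth_interval:
                    self.scale *= 2.0   # long stable run: try a larger scale
                    self._good_steps = 0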
2023-03-26 13:38:54,823 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.000e+02 1.580e+02 1.801e+02 2.212e+02 3.446e+02, threshold=3.603e+02, percent-clipped=0.0
2023-03-26 13:38:59,290 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.53 vs. limit=2.0
2023-03-26 13:39:21,856 INFO [finetune.py:976] (2/7) Epoch 11, batch 4400, loss[loss=0.1487, simple_loss=0.217, pruned_loss=0.04019, over 4790.00 frames. ], tot_loss[loss=0.1932, simple_loss=0.2577, pruned_loss=0.06437, over 953948.53 frames. ], batch size: 26, lr: 3.68e-03, grad_scale: 32.0
2023-03-26 13:39:35,892 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6505, 2.0418, 1.7020, 1.6802, 2.3503, 2.1915, 2.0050, 1.9424], device='cuda:2'), covar=tensor([0.0457, 0.0318, 0.0491, 0.0337, 0.0251, 0.0484, 0.0347, 0.0360], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0108, 0.0140, 0.0114, 0.0102, 0.0103, 0.0092, 0.0107], device='cuda:2'), out_proj_covar=tensor([7.0914e-05, 8.3967e-05, 1.1139e-04, 8.9114e-05, 7.9526e-05, 7.6283e-05, 6.9807e-05, 8.2407e-05], device='cuda:2')
2023-03-26 13:39:37,346 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0
2023-03-26 13:39:53,783 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=61717.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:40:04,332 INFO [finetune.py:976] (2/7) Epoch 11, batch 4450, loss[loss=0.1814, simple_loss=0.2494, pruned_loss=0.05673, over 4825.00 frames. ], tot_loss[loss=0.1969, simple_loss=0.2618, pruned_loss=0.06598, over 953970.56 frames. ], batch size: 33, lr: 3.68e-03, grad_scale: 32.0
2023-03-26 13:40:07,491 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=61732.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:40:14,298 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.223e+02 1.628e+02 1.977e+02 2.534e+02 3.640e+02, threshold=3.954e+02, percent-clipped=2.0
2023-03-26 13:40:16,771 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0
2023-03-26 13:40:23,008 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=61743.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:40:49,787 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=61765.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:40:56,535 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6912, 1.5640, 1.4629, 1.6510, 1.1315, 3.6022, 1.3748, 1.9039], device='cuda:2'), covar=tensor([0.3294, 0.2393, 0.2283, 0.2399, 0.1910, 0.0158, 0.2677, 0.1311], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0115, 0.0119, 0.0122, 0.0114, 0.0098, 0.0098, 0.0096], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 13:40:57,018 INFO [finetune.py:976] (2/7) Epoch 11, batch 4500, loss[loss=0.1983, simple_loss=0.2679, pruned_loss=0.06433, over 4805.00 frames. ], tot_loss[loss=0.1974, simple_loss=0.2631, pruned_loss=0.06584, over 953598.32 frames. ], batch size: 25, lr: 3.68e-03, grad_scale: 32.0
2023-03-26 13:41:07,869 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=61793.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:41:16,082 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=61804.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:41:33,078 INFO [finetune.py:976] (2/7) Epoch 11, batch 4550, loss[loss=0.1937, simple_loss=0.2617, pruned_loss=0.06284, over 4923.00 frames. ], tot_loss[loss=0.1978, simple_loss=0.2638, pruned_loss=0.06588, over 955108.06 frames. ], batch size: 38, lr: 3.68e-03, grad_scale: 32.0
2023-03-26 13:41:43,495 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.535e+01 1.607e+02 1.951e+02 2.245e+02 3.846e+02, threshold=3.902e+02, percent-clipped=0.0
2023-03-26 13:41:57,039 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4604, 1.3816, 1.7043, 1.7516, 1.5741, 3.2705, 1.3306, 1.5452], device='cuda:2'), covar=tensor([0.0901, 0.1802, 0.1011, 0.0876, 0.1549, 0.0234, 0.1468, 0.1689], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0080, 0.0084, 0.0078], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 13:42:14,196 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.76 vs. limit=5.0
2023-03-26 13:42:15,220 INFO [finetune.py:976] (2/7) Epoch 11, batch 4600, loss[loss=0.1914, simple_loss=0.2633, pruned_loss=0.05981, over 4919.00 frames. ], tot_loss[loss=0.1977, simple_loss=0.2637, pruned_loss=0.06586, over 955943.90 frames. ], batch size: 38, lr: 3.68e-03, grad_scale: 32.0
2023-03-26 13:42:40,461 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=61914.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:42:45,759 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0067, 1.2194, 1.8841, 1.8458, 1.6738, 1.6191, 1.7629, 1.6790], device='cuda:2'), covar=tensor([0.3557, 0.4483, 0.3569, 0.4230, 0.5082, 0.4017, 0.4918, 0.3527], device='cuda:2'), in_proj_covar=tensor([0.0237, 0.0238, 0.0254, 0.0258, 0.0255, 0.0230, 0.0274, 0.0232], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:42:48,594 INFO [finetune.py:976] (2/7) Epoch 11, batch 4650, loss[loss=0.2007, simple_loss=0.2713, pruned_loss=0.06506, over 4916.00 frames. ], tot_loss[loss=0.1955, simple_loss=0.2609, pruned_loss=0.06503, over 955690.31 frames. ], batch size: 46, lr: 3.68e-03, grad_scale: 32.0
2023-03-26 13:42:56,050 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.057e+02 1.606e+02 1.934e+02 2.317e+02 5.626e+02, threshold=3.867e+02, percent-clipped=3.0
2023-03-26 13:43:31,733 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=61975.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:43:32,823 INFO [finetune.py:976] (2/7) Epoch 11, batch 4700, loss[loss=0.129, simple_loss=0.1919, pruned_loss=0.03307, over 4049.00 frames. ], tot_loss[loss=0.1911, simple_loss=0.2565, pruned_loss=0.06288, over 954390.92 frames. ], batch size: 17, lr: 3.68e-03, grad_scale: 32.0
2023-03-26 13:44:19,771 INFO [finetune.py:976] (2/7) Epoch 11, batch 4750, loss[loss=0.1609, simple_loss=0.2221, pruned_loss=0.04982, over 4904.00 frames. ], tot_loss[loss=0.19, simple_loss=0.2549, pruned_loss=0.06256, over 955963.05 frames. ], batch size: 43, lr: 3.68e-03, grad_scale: 32.0
2023-03-26 13:44:25,329 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.95 vs. limit=2.0
2023-03-26 13:44:25,603 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.066e+02 1.474e+02 1.769e+02 2.148e+02 4.944e+02, threshold=3.539e+02, percent-clipped=1.0
2023-03-26 13:44:28,191 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2918, 2.3626, 2.4976, 1.0429, 2.8386, 3.0153, 2.5687, 2.1848], device='cuda:2'), covar=tensor([0.0960, 0.0763, 0.0503, 0.0791, 0.0570, 0.0417, 0.0501, 0.0792], device='cuda:2'), in_proj_covar=tensor([0.0128, 0.0154, 0.0122, 0.0133, 0.0131, 0.0126, 0.0144, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.4700e-05, 1.1262e-04, 8.7850e-05, 9.5863e-05, 9.3747e-05, 9.1940e-05, 1.0541e-04, 1.0739e-04], device='cuda:2')
2023-03-26 13:44:29,473 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.45 vs. limit=5.0
2023-03-26 13:44:53,404 INFO [finetune.py:976] (2/7) Epoch 11, batch 4800, loss[loss=0.2046, simple_loss=0.2737, pruned_loss=0.06778, over 4804.00 frames. ], tot_loss[loss=0.1923, simple_loss=0.2579, pruned_loss=0.06336, over 955676.34 frames. ], batch size: 45, lr: 3.68e-03, grad_scale: 32.0
2023-03-26 13:45:06,473 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62088.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:45:13,196 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62099.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:45:41,901 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=62122.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:45:50,588 INFO [finetune.py:976] (2/7) Epoch 11, batch 4850, loss[loss=0.2095, simple_loss=0.2792, pruned_loss=0.0699, over 4812.00 frames. ], tot_loss[loss=0.1955, simple_loss=0.2618, pruned_loss=0.06455, over 954342.02 frames. ], batch size: 45, lr: 3.68e-03, grad_scale: 32.0
2023-03-26 13:46:01,539 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.225e+02 1.730e+02 2.037e+02 2.587e+02 8.043e+02, threshold=4.075e+02, percent-clipped=4.0
2023-03-26 13:46:45,228 INFO [finetune.py:976] (2/7) Epoch 11, batch 4900, loss[loss=0.1755, simple_loss=0.2494, pruned_loss=0.05081, over 4782.00 frames. ], tot_loss[loss=0.197, simple_loss=0.2634, pruned_loss=0.06528, over 954639.40 frames. ], batch size: 29, lr: 3.68e-03, grad_scale: 32.0
2023-03-26 13:46:48,909 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=62183.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:47:49,178 INFO [finetune.py:976] (2/7) Epoch 11, batch 4950, loss[loss=0.1902, simple_loss=0.2175, pruned_loss=0.08148, over 3388.00 frames. ], tot_loss[loss=0.1994, simple_loss=0.2656, pruned_loss=0.06662, over 952025.78 frames. ], batch size: 14, lr: 3.68e-03, grad_scale: 32.0
2023-03-26 13:47:56,654 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.284e+02 1.728e+02 2.029e+02 2.471e+02 5.736e+02, threshold=4.057e+02, percent-clipped=2.0
2023-03-26 13:48:18,913 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62270.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:48:20,812 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0
2023-03-26 13:48:21,237 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.2894, 3.7001, 3.9154, 4.1169, 4.0304, 3.7477, 4.3213, 1.3200], device='cuda:2'), covar=tensor([0.0715, 0.0879, 0.0806, 0.0919, 0.1129, 0.1529, 0.0706, 0.5654], device='cuda:2'), in_proj_covar=tensor([0.0350, 0.0245, 0.0278, 0.0292, 0.0333, 0.0284, 0.0304, 0.0297], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:48:21,269 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=62273.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:48:24,032 INFO [finetune.py:976] (2/7) Epoch 11, batch 5000, loss[loss=0.1786, simple_loss=0.2453, pruned_loss=0.05597, over 4914.00 frames. ], tot_loss[loss=0.1966, simple_loss=0.2626, pruned_loss=0.0653, over 951888.95 frames. ], batch size: 36, lr: 3.68e-03, grad_scale: 32.0
2023-03-26 13:48:26,085 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.86 vs. limit=2.0
2023-03-26 13:48:34,215 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2418, 2.0673, 1.7243, 1.9195, 2.1470, 1.8956, 2.3572, 2.2182], device='cuda:2'), covar=tensor([0.1305, 0.2362, 0.3200, 0.2730, 0.2480, 0.1655, 0.3697, 0.1750], device='cuda:2'), in_proj_covar=tensor([0.0175, 0.0186, 0.0232, 0.0252, 0.0239, 0.0196, 0.0210, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:48:41,438 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.2173, 3.6854, 3.8575, 3.9646, 3.9948, 3.7470, 4.2687, 1.3381], device='cuda:2'), covar=tensor([0.0720, 0.0757, 0.0762, 0.0941, 0.1018, 0.1310, 0.0606, 0.5114], device='cuda:2'), in_proj_covar=tensor([0.0348, 0.0244, 0.0277, 0.0290, 0.0331, 0.0283, 0.0302, 0.0294], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:48:57,126 INFO [finetune.py:976] (2/7) Epoch 11, batch 5050, loss[loss=0.1542, simple_loss=0.2123, pruned_loss=0.0481, over 4239.00 frames. ], tot_loss[loss=0.1944, simple_loss=0.2598, pruned_loss=0.06449, over 952764.83 frames. ], batch size: 18, lr: 3.68e-03, grad_scale: 16.0
2023-03-26 13:49:02,472 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=62334.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:49:04,171 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.067e+02 1.504e+02 1.759e+02 2.068e+02 4.473e+02, threshold=3.518e+02, percent-clipped=1.0
2023-03-26 13:49:32,190 INFO [finetune.py:976] (2/7) Epoch 11, batch 5100, loss[loss=0.1851, simple_loss=0.2501, pruned_loss=0.0601, over 4692.00 frames. ], tot_loss[loss=0.1907, simple_loss=0.2559, pruned_loss=0.06275, over 953742.65 frames. ], batch size: 23, lr: 3.68e-03, grad_scale: 16.0
2023-03-26 13:49:40,039 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=62388.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:49:42,303 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=62391.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:49:47,647 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=62399.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:50:04,765 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3002, 1.6816, 2.1476, 2.0706, 1.8335, 1.8797, 2.0233, 2.0187], device='cuda:2'), covar=tensor([0.4883, 0.5021, 0.3925, 0.4770, 0.6437, 0.4753, 0.6299, 0.4090], device='cuda:2'), in_proj_covar=tensor([0.0238, 0.0239, 0.0255, 0.0260, 0.0256, 0.0230, 0.0275, 0.0232], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:50:05,685 INFO [finetune.py:976] (2/7) Epoch 11, batch 5150, loss[loss=0.1588, simple_loss=0.2339, pruned_loss=0.04186, over 4816.00 frames. ], tot_loss[loss=0.1907, simple_loss=0.2555, pruned_loss=0.06298, over 954843.19 frames. ], batch size: 33, lr: 3.67e-03, grad_scale: 16.0
2023-03-26 13:50:12,136 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=62436.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:50:12,674 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.110e+02 1.578e+02 2.001e+02 2.432e+02 3.455e+02, threshold=4.003e+02, percent-clipped=0.0
2023-03-26 13:50:25,611 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4391, 2.2350, 2.0699, 1.0530, 2.2418, 1.8400, 1.6896, 2.1476], device='cuda:2'), covar=tensor([0.0953, 0.0981, 0.1571, 0.2155, 0.1499, 0.2338, 0.2227, 0.1064], device='cuda:2'), in_proj_covar=tensor([0.0166, 0.0198, 0.0201, 0.0185, 0.0215, 0.0207, 0.0222, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:50:26,776 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=62447.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:50:30,443 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=62452.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:50:41,645 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8882, 1.9265, 1.3768, 1.9638, 1.9401, 1.7025, 2.6734, 1.8895], device='cuda:2'), covar=tensor([0.1429, 0.2045, 0.3356, 0.2796, 0.2677, 0.1702, 0.2325, 0.2010], device='cuda:2'), in_proj_covar=tensor([0.0175, 0.0187, 0.0232, 0.0254, 0.0240, 0.0197, 0.0212, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:50:53,135 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0
2023-03-26 13:50:55,269 INFO [finetune.py:976] (2/7) Epoch 11, batch 5200, loss[loss=0.1576, simple_loss=0.2199, pruned_loss=0.04761, over 4694.00 frames. ], tot_loss[loss=0.1941, simple_loss=0.2594, pruned_loss=0.06443, over 956484.32 frames. ], batch size: 23, lr: 3.67e-03, grad_scale: 16.0
2023-03-26 13:50:56,961 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62478.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:51:36,854 INFO [finetune.py:976] (2/7) Epoch 11, batch 5250, loss[loss=0.1921, simple_loss=0.268, pruned_loss=0.05817, over 4820.00 frames. ], tot_loss[loss=0.1963, simple_loss=0.2619, pruned_loss=0.06535, over 956509.59 frames. ], batch size: 39, lr: 3.67e-03, grad_scale: 16.0
2023-03-26 13:51:46,761 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5727, 1.1653, 0.8080, 1.4022, 1.9884, 1.0185, 1.3521, 1.4340], device='cuda:2'), covar=tensor([0.1485, 0.2205, 0.2042, 0.1302, 0.1915, 0.1996, 0.1493, 0.1990], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0096, 0.0113, 0.0093, 0.0119, 0.0094, 0.0099, 0.0090], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 13:51:54,382 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.247e+02 1.618e+02 1.949e+02 2.406e+02 7.235e+02, threshold=3.897e+02, percent-clipped=3.0
2023-03-26 13:52:03,534 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=62545.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:52:19,515 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=62570.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:52:23,690 INFO [finetune.py:976] (2/7) Epoch 11, batch 5300, loss[loss=0.1917, simple_loss=0.2668, pruned_loss=0.05834, over 4850.00 frames. ], tot_loss[loss=0.1967, simple_loss=0.2628, pruned_loss=0.06528, over 955174.18 frames. ], batch size: 44, lr: 3.67e-03, grad_scale: 16.0
2023-03-26 13:52:29,605 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=62585.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:52:44,306 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=62606.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:52:48,665 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0
2023-03-26 13:52:51,100 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.47 vs. limit=2.0
2023-03-26 13:52:52,191 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=62618.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:52:57,602 INFO [finetune.py:976] (2/7) Epoch 11, batch 5350, loss[loss=0.1989, simple_loss=0.2583, pruned_loss=0.06974, over 4813.00 frames. ], tot_loss[loss=0.1963, simple_loss=0.2625, pruned_loss=0.06507, over 952410.37 frames. ], batch size: 38, lr: 3.67e-03, grad_scale: 16.0
2023-03-26 13:52:58,902 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62629.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:53:03,686 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.3476, 3.7479, 4.0223, 4.1856, 4.1260, 3.8628, 4.3947, 1.3899], device='cuda:2'), covar=tensor([0.0651, 0.0771, 0.0736, 0.0846, 0.1063, 0.1393, 0.0625, 0.5326], device='cuda:2'), in_proj_covar=tensor([0.0350, 0.0244, 0.0277, 0.0291, 0.0333, 0.0286, 0.0303, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:53:04,200 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.067e+02 1.504e+02 1.845e+02 2.238e+02 3.589e+02, threshold=3.690e+02, percent-clipped=0.0
2023-03-26 13:53:10,794 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=62646.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:53:24,275 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=62666.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:53:30,763 INFO [finetune.py:976] (2/7) Epoch 11, batch 5400, loss[loss=0.1747, simple_loss=0.2504, pruned_loss=0.04954, over 4775.00 frames. ], tot_loss[loss=0.1943, simple_loss=0.2602, pruned_loss=0.06421, over 953010.03 frames. ], batch size: 29, lr: 3.67e-03, grad_scale: 16.0
2023-03-26 13:53:39,228 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3405, 1.9360, 2.3555, 2.2489, 1.9866, 1.9734, 2.1885, 2.1302], device='cuda:2'), covar=tensor([0.4122, 0.4776, 0.3712, 0.4526, 0.5809, 0.4488, 0.5479, 0.3395], device='cuda:2'), in_proj_covar=tensor([0.0238, 0.0239, 0.0255, 0.0261, 0.0257, 0.0231, 0.0276, 0.0234], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 13:54:04,661 INFO [finetune.py:976] (2/7) Epoch 11, batch 5450, loss[loss=0.1917, simple_loss=0.2514, pruned_loss=0.06604, over 4828.00 frames. ], tot_loss[loss=0.191, simple_loss=0.2566, pruned_loss=0.06269, over 955276.94 frames. ], batch size: 41, lr: 3.67e-03, grad_scale: 16.0
2023-03-26 13:54:04,795 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=62727.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:54:10,763 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.513e+01 1.463e+02 1.876e+02 2.335e+02 4.427e+02, threshold=3.751e+02, percent-clipped=2.0
2023-03-26 13:54:17,806 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62747.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:54:18,450 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6714, 1.4876, 2.1143, 3.3072, 2.3597, 2.2934, 0.8612, 2.6577], device='cuda:2'), covar=tensor([0.1666, 0.1409, 0.1322, 0.0553, 0.0736, 0.1699, 0.1907, 0.0548], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0115, 0.0133, 0.0163, 0.0100, 0.0138, 0.0125, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 13:54:36,197 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=62773.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:54:36,389 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0
limit=2.0 2023-03-26 13:54:38,553 INFO [finetune.py:976] (2/7) Epoch 11, batch 5500, loss[loss=0.2172, simple_loss=0.2806, pruned_loss=0.07694, over 4902.00 frames. ], tot_loss[loss=0.1893, simple_loss=0.2543, pruned_loss=0.06208, over 955433.15 frames. ], batch size: 37, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 13:54:39,231 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=62778.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:54:48,406 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.99 vs. limit=5.0 2023-03-26 13:54:59,169 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.83 vs. limit=5.0 2023-03-26 13:55:12,360 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=62826.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:55:12,913 INFO [finetune.py:976] (2/7) Epoch 11, batch 5550, loss[loss=0.2031, simple_loss=0.2725, pruned_loss=0.06681, over 4915.00 frames. ], tot_loss[loss=0.1914, simple_loss=0.2567, pruned_loss=0.06301, over 956320.17 frames. ], batch size: 37, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 13:55:18,035 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=62834.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:55:19,883 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.162e+02 1.580e+02 1.841e+02 2.336e+02 5.980e+02, threshold=3.683e+02, percent-clipped=6.0 2023-03-26 13:56:07,690 INFO [finetune.py:976] (2/7) Epoch 11, batch 5600, loss[loss=0.2145, simple_loss=0.2849, pruned_loss=0.0721, over 4744.00 frames. ], tot_loss[loss=0.1941, simple_loss=0.2602, pruned_loss=0.06397, over 956229.20 frames. ], batch size: 59, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 13:56:22,195 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62901.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:56:33,440 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.78 vs. limit=2.0 2023-03-26 13:56:34,628 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-26 13:56:37,252 INFO [finetune.py:976] (2/7) Epoch 11, batch 5650, loss[loss=0.1651, simple_loss=0.2289, pruned_loss=0.05069, over 4702.00 frames. ], tot_loss[loss=0.197, simple_loss=0.2636, pruned_loss=0.06514, over 957045.55 frames. ], batch size: 23, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 13:56:38,486 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=62929.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:56:42,200 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. 
limit=2.0 2023-03-26 13:56:48,738 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.326e+01 1.606e+02 1.910e+02 2.279e+02 4.497e+02, threshold=3.820e+02, percent-clipped=2.0 2023-03-26 13:56:51,134 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62941.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:57:16,383 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0571, 1.7323, 2.3130, 1.6051, 1.9486, 2.2754, 1.7245, 2.4423], device='cuda:2'), covar=tensor([0.1059, 0.2028, 0.1149, 0.1520, 0.0908, 0.1101, 0.2540, 0.0700], device='cuda:2'), in_proj_covar=tensor([0.0199, 0.0208, 0.0195, 0.0193, 0.0181, 0.0217, 0.0219, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 13:57:18,725 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3600, 1.3223, 1.5468, 2.3072, 1.6250, 2.1019, 0.8924, 1.9011], device='cuda:2'), covar=tensor([0.1640, 0.1342, 0.1127, 0.0657, 0.0849, 0.1134, 0.1496, 0.0677], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0133, 0.0162, 0.0100, 0.0137, 0.0124, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 13:57:23,400 INFO [finetune.py:976] (2/7) Epoch 11, batch 5700, loss[loss=0.2006, simple_loss=0.2439, pruned_loss=0.07863, over 3968.00 frames. ], tot_loss[loss=0.1949, simple_loss=0.2599, pruned_loss=0.06497, over 939656.71 frames. ], batch size: 17, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 13:57:23,434 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=62977.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:57:34,712 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2637, 2.6933, 2.5840, 1.3180, 2.8577, 2.0676, 0.8372, 1.8015], device='cuda:2'), covar=tensor([0.2122, 0.2454, 0.1790, 0.3667, 0.1199, 0.1231, 0.4163, 0.1872], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0173, 0.0158, 0.0128, 0.0155, 0.0122, 0.0146, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 13:57:54,978 INFO [finetune.py:976] (2/7) Epoch 12, batch 0, loss[loss=0.2179, simple_loss=0.2796, pruned_loss=0.07808, over 4866.00 frames. ], tot_loss[loss=0.2179, simple_loss=0.2796, pruned_loss=0.07808, over 4866.00 frames. ], batch size: 34, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 13:57:54,978 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 13:58:03,953 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0841, 1.8267, 1.7242, 1.7221, 1.8244, 1.8041, 1.8060, 2.4865], device='cuda:2'), covar=tensor([0.4390, 0.5444, 0.3809, 0.4426, 0.4390, 0.2667, 0.4483, 0.1995], device='cuda:2'), in_proj_covar=tensor([0.0284, 0.0260, 0.0222, 0.0275, 0.0242, 0.0209, 0.0245, 0.0216], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 13:58:11,586 INFO [finetune.py:1010] (2/7) Epoch 12, validation: loss=0.16, simple_loss=0.2305, pruned_loss=0.04472, over 2265189.00 frames. 2023-03-26 13:58:11,587 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB 2023-03-26 13:58:16,729 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.30 vs. 
limit=5.0 2023-03-26 13:58:19,016 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1619, 2.1995, 2.1519, 1.3808, 2.2259, 2.2414, 2.2037, 1.9463], device='cuda:2'), covar=tensor([0.0670, 0.0649, 0.0835, 0.1017, 0.0640, 0.0823, 0.0754, 0.1117], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0134, 0.0141, 0.0125, 0.0121, 0.0143, 0.0143, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 13:58:22,060 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=63022.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:58:37,034 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.014e+02 1.590e+02 1.966e+02 2.351e+02 4.424e+02, threshold=3.931e+02, percent-clipped=2.0 2023-03-26 13:58:48,895 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.60 vs. limit=5.0 2023-03-26 13:58:49,958 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=63047.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:59:00,857 INFO [finetune.py:976] (2/7) Epoch 12, batch 50, loss[loss=0.2114, simple_loss=0.2751, pruned_loss=0.07386, over 4899.00 frames. ], tot_loss[loss=0.1954, simple_loss=0.2616, pruned_loss=0.06459, over 215836.35 frames. ], batch size: 36, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 13:59:11,373 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4857, 1.3047, 1.1149, 1.2190, 1.7833, 1.7032, 1.5578, 1.1664], device='cuda:2'), covar=tensor([0.0289, 0.0313, 0.0800, 0.0371, 0.0224, 0.0368, 0.0288, 0.0420], device='cuda:2'), in_proj_covar=tensor([0.0092, 0.0109, 0.0142, 0.0115, 0.0102, 0.0104, 0.0093, 0.0109], device='cuda:2'), out_proj_covar=tensor([7.2100e-05, 8.5138e-05, 1.1260e-04, 8.9654e-05, 7.9727e-05, 7.7273e-05, 7.0406e-05, 8.3889e-05], device='cuda:2') 2023-03-26 13:59:42,645 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=63095.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:59:54,682 INFO [finetune.py:976] (2/7) Epoch 12, batch 100, loss[loss=0.1482, simple_loss=0.2108, pruned_loss=0.04278, over 4824.00 frames. ], tot_loss[loss=0.1907, simple_loss=0.2558, pruned_loss=0.06281, over 381605.06 frames. ], batch size: 25, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 14:00:15,390 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=63129.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:00:21,135 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.120e+02 1.723e+02 1.978e+02 2.544e+02 5.107e+02, threshold=3.957e+02, percent-clipped=1.0 2023-03-26 14:00:50,143 INFO [finetune.py:976] (2/7) Epoch 12, batch 150, loss[loss=0.2261, simple_loss=0.2798, pruned_loss=0.08622, over 4817.00 frames. ], tot_loss[loss=0.188, simple_loss=0.2519, pruned_loss=0.06209, over 510493.51 frames. ], batch size: 39, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 14:01:47,575 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=63201.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:01:56,080 INFO [finetune.py:976] (2/7) Epoch 12, batch 200, loss[loss=0.1615, simple_loss=0.2268, pruned_loss=0.04811, over 4795.00 frames. ], tot_loss[loss=0.1876, simple_loss=0.251, pruned_loss=0.06207, over 609014.16 frames. ], batch size: 25, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 14:02:08,033 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. 
limit=2.0 2023-03-26 14:02:17,488 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63221.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:02:32,829 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.005e+02 1.551e+02 1.870e+02 2.223e+02 3.918e+02, threshold=3.740e+02, percent-clipped=0.0 2023-03-26 14:02:41,474 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=63241.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:02:46,790 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=63249.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:02:51,341 INFO [finetune.py:976] (2/7) Epoch 12, batch 250, loss[loss=0.1736, simple_loss=0.2403, pruned_loss=0.05348, over 4827.00 frames. ], tot_loss[loss=0.19, simple_loss=0.2549, pruned_loss=0.0626, over 687795.18 frames. ], batch size: 25, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 14:03:08,676 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63282.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:03:09,461 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-26 14:03:13,375 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=63289.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:03:23,972 INFO [finetune.py:976] (2/7) Epoch 12, batch 300, loss[loss=0.1955, simple_loss=0.2636, pruned_loss=0.06373, over 4889.00 frames. ], tot_loss[loss=0.1933, simple_loss=0.2584, pruned_loss=0.06409, over 746109.54 frames. ], batch size: 35, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 14:03:28,795 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8579, 1.7711, 1.5832, 1.9596, 2.3628, 2.0318, 1.4849, 1.5364], device='cuda:2'), covar=tensor([0.2322, 0.2062, 0.2028, 0.1712, 0.1539, 0.1212, 0.2533, 0.2039], device='cuda:2'), in_proj_covar=tensor([0.0241, 0.0208, 0.0210, 0.0189, 0.0243, 0.0183, 0.0213, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:03:40,229 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=63322.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:03:51,185 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.087e+02 1.663e+02 2.076e+02 2.406e+02 5.777e+02, threshold=4.151e+02, percent-clipped=4.0 2023-03-26 14:04:08,575 INFO [finetune.py:976] (2/7) Epoch 12, batch 350, loss[loss=0.1871, simple_loss=0.2551, pruned_loss=0.05954, over 4751.00 frames. ], tot_loss[loss=0.1958, simple_loss=0.261, pruned_loss=0.06527, over 791708.14 frames. ], batch size: 27, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 14:04:27,600 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=63370.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:04:31,252 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7022, 0.6551, 1.6870, 1.5343, 1.4811, 1.3853, 1.5064, 1.5731], device='cuda:2'), covar=tensor([0.3465, 0.4113, 0.3386, 0.3754, 0.4472, 0.3657, 0.4389, 0.3217], device='cuda:2'), in_proj_covar=tensor([0.0239, 0.0239, 0.0256, 0.0261, 0.0258, 0.0232, 0.0276, 0.0234], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:04:59,619 INFO [finetune.py:976] (2/7) Epoch 12, batch 400, loss[loss=0.1933, simple_loss=0.2604, pruned_loss=0.06313, over 4896.00 frames. 
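
Each optim.py line prints five grad-norm statistics (minimum, the three quartiles, and maximum over a window of recent steps) together with a clipping threshold. In every entry here the threshold equals Clipping_scale times the logged median, up to display rounding: 2.0 * 1.870e+02 = 3.740e+02 in the line just above. The clipping level therefore adapts to the recent distribution of gradient norms, and percent-clipped is the share of recent steps whose norm exceeded it. A rough sketch of that bookkeeping, assuming a fixed-size history (the optimizer's real implementation may differ in detail):

    from collections import deque
    import numpy as np

    class MedianGradClipper:
        """Clip gradients to clipping_scale * median of recent grad norms."""

        def __init__(self, clipping_scale: float = 2.0, history: int = 128):
            self.clipping_scale = clipping_scale
            self.norms = deque(maxlen=history)
            self.clipped = 0
            self.seen = 0

        def observe(self, grad_norm: float) -> float:
            """Record one step's grad norm; return the factor (<= 1) to scale by."""
            self.norms.append(grad_norm)
            self.seen += 1
            q = np.quantile(self.norms, [0.0, 0.25, 0.5, 0.75, 1.0])
            threshold = self.clipping_scale * q[2]  # scale * median, as logged
            if grad_norm > threshold:
                self.clipped += 1
                return threshold / grad_norm
            return 1.0
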
], tot_loss[loss=0.1974, simple_loss=0.2631, pruned_loss=0.0659, over 828354.83 frames. ], batch size: 43, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:05:02,025 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4778, 1.3812, 1.4693, 1.6538, 1.5157, 3.0506, 1.3067, 1.5445], device='cuda:2'), covar=tensor([0.0968, 0.1795, 0.1092, 0.0955, 0.1609, 0.0307, 0.1525, 0.1716], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0082, 0.0075, 0.0078, 0.0092, 0.0082, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 14:05:08,489 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9033, 1.4027, 1.9418, 1.7872, 1.6034, 1.6263, 1.7690, 1.8028], device='cuda:2'), covar=tensor([0.4333, 0.4600, 0.3530, 0.4106, 0.5261, 0.4003, 0.5035, 0.3497], device='cuda:2'), in_proj_covar=tensor([0.0239, 0.0239, 0.0256, 0.0261, 0.0258, 0.0232, 0.0276, 0.0234], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:05:10,711 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63420.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:05:11,915 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63422.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:05:16,640 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=63429.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:05:21,322 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.052e+02 1.591e+02 1.854e+02 2.332e+02 4.296e+02, threshold=3.709e+02, percent-clipped=1.0 2023-03-26 14:05:38,140 INFO [finetune.py:976] (2/7) Epoch 12, batch 450, loss[loss=0.1386, simple_loss=0.2072, pruned_loss=0.03504, over 4776.00 frames. ], tot_loss[loss=0.196, simple_loss=0.2619, pruned_loss=0.06503, over 856779.43 frames. ], batch size: 27, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:05:57,264 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=63477.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:05:59,776 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63481.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:06:00,967 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63483.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:06:08,820 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7213, 1.2517, 0.8189, 1.5384, 2.1158, 1.0588, 1.3713, 1.6400], device='cuda:2'), covar=tensor([0.1407, 0.2052, 0.2028, 0.1193, 0.1854, 0.1982, 0.1502, 0.1801], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0096, 0.0114, 0.0093, 0.0121, 0.0096, 0.0100, 0.0091], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 14:06:15,171 INFO [finetune.py:976] (2/7) Epoch 12, batch 500, loss[loss=0.1902, simple_loss=0.2508, pruned_loss=0.06473, over 4928.00 frames. ], tot_loss[loss=0.1928, simple_loss=0.2586, pruned_loss=0.06343, over 879510.19 frames. 
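
The learning rate decays very slowly here (3.67e-03 gives way to 3.66e-03 at epoch 12 batch 400, roughly 63,400 global batches in). This is consistent with the Eden schedule used by these icefall recipes, where the rate falls off as inverse fourth roots of the batch and epoch counts; with base_lr = 0.004, lr_batches = 1e5 and lr_epochs = 100 (the values this run appears to use, treated as assumptions here), batch 63,400 in epoch 12 gives about 3.66e-03. A sketch for checking:

    def eden_lr(base_lr: float, batch: int, epoch: int,
                lr_batches: float = 100_000.0, lr_epochs: float = 100.0) -> float:
        # Eden schedule: two inverse-fourth-root decay factors, one driven
        # by the global batch count and one by the epoch count.
        batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return base_lr * batch_factor * epoch_factor

    # eden_lr(0.004, batch=63_400, epoch=12) ≈ 3.66e-03, matching the log;
    # by batch ~64_600 it has slipped to ≈ 3.65e-03, also as logged.
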
], batch size: 33, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:06:37,050 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.336e+01 1.553e+02 1.855e+02 2.331e+02 4.193e+02, threshold=3.711e+02, percent-clipped=1.0 2023-03-26 14:06:37,736 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.0728, 4.3907, 4.7049, 4.6401, 4.5745, 4.4031, 5.1104, 1.5846], device='cuda:2'), covar=tensor([0.1082, 0.1705, 0.1239, 0.1471, 0.2183, 0.2489, 0.1369, 0.7738], device='cuda:2'), in_proj_covar=tensor([0.0347, 0.0244, 0.0276, 0.0290, 0.0330, 0.0283, 0.0301, 0.0294], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:06:48,874 INFO [finetune.py:976] (2/7) Epoch 12, batch 550, loss[loss=0.184, simple_loss=0.2522, pruned_loss=0.05793, over 4767.00 frames. ], tot_loss[loss=0.191, simple_loss=0.2561, pruned_loss=0.06289, over 895758.59 frames. ], batch size: 28, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:06:58,444 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63569.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:07:03,808 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=63577.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:07:10,286 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63586.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:07:16,999 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. limit=2.0 2023-03-26 14:07:22,327 INFO [finetune.py:976] (2/7) Epoch 12, batch 600, loss[loss=0.2115, simple_loss=0.285, pruned_loss=0.06898, over 4865.00 frames. ], tot_loss[loss=0.1918, simple_loss=0.2565, pruned_loss=0.0635, over 909511.62 frames. ], batch size: 31, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:07:40,171 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63630.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:07:44,849 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.064e+02 1.685e+02 2.017e+02 2.531e+02 3.696e+02, threshold=4.034e+02, percent-clipped=0.0 2023-03-26 14:07:51,102 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63647.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:07:56,393 INFO [finetune.py:976] (2/7) Epoch 12, batch 650, loss[loss=0.174, simple_loss=0.2505, pruned_loss=0.04879, over 4912.00 frames. ], tot_loss[loss=0.1942, simple_loss=0.2598, pruned_loss=0.06434, over 919561.16 frames. 
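
tot_loss is not a plain epoch average: individual batches carry only ~4,000-5,000 frames, yet the tot_loss frame counts sit near 9.5e5 and are fractional. Both facts fit a decayed running sum, tot = tot * (1 - 1/reset_interval) + batch, whose steady-state frame count is reset_interval times the mean per-batch count (200 * ~4,800 ≈ 9.6e5, suggesting a reset_interval of 200). A sketch of such a tracker (class name ours):

    class DecayingLossTracker:
        """Running per-frame loss with exponential forgetting."""

        def __init__(self, reset_interval: int = 200):
            self.decay = 1.0 - 1.0 / reset_interval
            self.loss_sum = 0.0
            self.frames = 0.0

        def update(self, batch_loss_sum: float, batch_frames: float) -> None:
            # Old statistics fade by (1 - 1/reset_interval) per step, so
            # self.frames converges to reset_interval * mean(batch_frames) --
            # hence fractional counts like "over 956509.59 frames" above.
            self.loss_sum = self.loss_sum * self.decay + batch_loss_sum
            self.frames = self.frames * self.decay + batch_frames

        @property
        def per_frame_loss(self) -> float:
            return self.loss_sum / max(self.frames, 1.0)
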
], batch size: 36, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:07:57,096 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7154, 1.3377, 1.0174, 1.6951, 2.0531, 1.6319, 1.4813, 1.7013], device='cuda:2'), covar=tensor([0.1357, 0.1847, 0.2060, 0.1055, 0.1834, 0.2212, 0.1336, 0.1699], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0096, 0.0114, 0.0093, 0.0120, 0.0095, 0.0100, 0.0091], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 14:08:22,076 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9533, 1.7638, 1.5464, 1.6254, 1.6720, 1.6469, 1.6987, 2.4119], device='cuda:2'), covar=tensor([0.4011, 0.4673, 0.3472, 0.4272, 0.4574, 0.2345, 0.4001, 0.1636], device='cuda:2'), in_proj_covar=tensor([0.0286, 0.0260, 0.0224, 0.0277, 0.0244, 0.0210, 0.0247, 0.0217], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:08:29,867 INFO [finetune.py:976] (2/7) Epoch 12, batch 700, loss[loss=0.1901, simple_loss=0.2622, pruned_loss=0.05894, over 4805.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.2611, pruned_loss=0.06398, over 927588.61 frames. ], batch size: 40, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:08:57,204 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. limit=2.0 2023-03-26 14:08:59,558 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.97 vs. limit=5.0 2023-03-26 14:08:59,836 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.062e+02 1.754e+02 2.049e+02 2.499e+02 4.974e+02, threshold=4.098e+02, percent-clipped=3.0 2023-03-26 14:09:08,358 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1483, 1.9238, 1.6851, 1.8755, 1.8659, 1.8161, 1.8831, 2.6296], device='cuda:2'), covar=tensor([0.4487, 0.5049, 0.3659, 0.4391, 0.4403, 0.2774, 0.4220, 0.1809], device='cuda:2'), in_proj_covar=tensor([0.0283, 0.0259, 0.0222, 0.0275, 0.0243, 0.0208, 0.0245, 0.0216], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:09:11,206 INFO [finetune.py:976] (2/7) Epoch 12, batch 750, loss[loss=0.189, simple_loss=0.2771, pruned_loss=0.05041, over 4756.00 frames. ], tot_loss[loss=0.1956, simple_loss=0.2623, pruned_loss=0.06443, over 934320.39 frames. ], batch size: 51, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:09:25,543 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=63776.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:09:26,767 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=63778.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:09:56,449 INFO [finetune.py:976] (2/7) Epoch 12, batch 800, loss[loss=0.2226, simple_loss=0.2769, pruned_loss=0.08412, over 4836.00 frames. ], tot_loss[loss=0.1952, simple_loss=0.2623, pruned_loss=0.06402, over 941296.47 frames. 
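
The zipformer.py:1188 lines track stochastic layer dropping. Five staggered warmup windows appear in this log (666.7-1333.3 through 3333.3-4000.0 batches), one per encoder stack; inside its window a stack drops layers with elevated probability, and past warmup_end only a small residual probability remains. At batch_count ≈ 63-65k this run is far beyond every window, so num_to_drop is almost always 0, with rare entries such as num_to_drop=1, layers_to_drop={3} further down. A schematic of the idea; the ramp shape and the probabilities are illustrative, not the recipe's exact constants:

    import random

    def layers_to_drop(batch_count: float, warmup_begin: float, warmup_end: float,
                       num_layers: int, warm_p: float = 0.5,
                       residual_p: float = 0.05) -> set:
        # Drop probability ramps down across the warmup window, then keeps a
        # small residual floor (which would explain the occasional
        # num_to_drop=1 entries long after warmup has ended).
        if batch_count <= warmup_begin:
            p = warm_p
        elif batch_count >= warmup_end:
            p = residual_p
        else:
            frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
            p = warm_p + frac * (residual_p - warm_p)
        return {i for i in range(num_layers) if random.random() < p}
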
], batch size: 44, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:10:04,902 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63810.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:10:26,006 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.160e+02 1.587e+02 1.868e+02 2.134e+02 3.136e+02, threshold=3.736e+02, percent-clipped=1.0 2023-03-26 14:10:26,711 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.3876, 4.6737, 4.9499, 5.1779, 5.1243, 4.8521, 5.4885, 1.6885], device='cuda:2'), covar=tensor([0.0608, 0.0700, 0.0671, 0.0792, 0.1064, 0.1290, 0.0517, 0.5082], device='cuda:2'), in_proj_covar=tensor([0.0347, 0.0243, 0.0275, 0.0288, 0.0329, 0.0281, 0.0299, 0.0292], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:10:32,468 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63845.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:10:38,486 INFO [finetune.py:976] (2/7) Epoch 12, batch 850, loss[loss=0.2099, simple_loss=0.268, pruned_loss=0.07584, over 4844.00 frames. ], tot_loss[loss=0.1936, simple_loss=0.2602, pruned_loss=0.06356, over 943368.11 frames. ], batch size: 47, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:10:51,308 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63871.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:10:54,967 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=63877.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:11:22,731 INFO [finetune.py:976] (2/7) Epoch 12, batch 900, loss[loss=0.1929, simple_loss=0.2588, pruned_loss=0.06354, over 4790.00 frames. ], tot_loss[loss=0.1909, simple_loss=0.2566, pruned_loss=0.06255, over 945843.37 frames. ], batch size: 51, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:11:23,444 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63906.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:11:29,458 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63916.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:11:35,894 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=63925.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:11:35,904 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=63925.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:11:44,046 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.021e+02 1.611e+02 1.873e+02 2.372e+02 4.297e+02, threshold=3.747e+02, percent-clipped=2.0 2023-03-26 14:11:47,182 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=63942.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:11:56,116 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.79 vs. limit=5.0 2023-03-26 14:11:56,453 INFO [finetune.py:976] (2/7) Epoch 12, batch 950, loss[loss=0.1555, simple_loss=0.2229, pruned_loss=0.04403, over 4798.00 frames. ], tot_loss[loss=0.1884, simple_loss=0.2539, pruned_loss=0.06139, over 948126.20 frames. 
], batch size: 25, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:12:10,883 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63977.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:12:31,104 INFO [finetune.py:976] (2/7) Epoch 12, batch 1000, loss[loss=0.1751, simple_loss=0.2504, pruned_loss=0.04986, over 4914.00 frames. ], tot_loss[loss=0.1911, simple_loss=0.2574, pruned_loss=0.06241, over 950923.41 frames. ], batch size: 36, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:12:43,084 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64023.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:12:51,952 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.200e+02 1.649e+02 1.875e+02 2.259e+02 3.443e+02, threshold=3.751e+02, percent-clipped=0.0 2023-03-26 14:12:57,315 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7075, 1.1486, 0.8776, 1.6189, 2.0762, 1.3172, 1.4518, 1.6792], device='cuda:2'), covar=tensor([0.1412, 0.2149, 0.2134, 0.1210, 0.1928, 0.2009, 0.1486, 0.1844], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0096, 0.0113, 0.0093, 0.0120, 0.0095, 0.0100, 0.0091], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 14:13:04,254 INFO [finetune.py:976] (2/7) Epoch 12, batch 1050, loss[loss=0.2158, simple_loss=0.2786, pruned_loss=0.07655, over 4822.00 frames. ], tot_loss[loss=0.1922, simple_loss=0.2592, pruned_loss=0.06254, over 951285.51 frames. ], batch size: 40, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:13:17,520 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64076.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:13:19,228 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64078.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:13:22,961 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64084.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:13:30,554 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1595, 2.2945, 1.9412, 1.9796, 2.5439, 2.4619, 2.1367, 2.1139], device='cuda:2'), covar=tensor([0.0296, 0.0300, 0.0517, 0.0331, 0.0273, 0.0618, 0.0353, 0.0362], device='cuda:2'), in_proj_covar=tensor([0.0092, 0.0109, 0.0140, 0.0114, 0.0102, 0.0104, 0.0094, 0.0109], device='cuda:2'), out_proj_covar=tensor([7.2011e-05, 8.4832e-05, 1.1133e-04, 8.8898e-05, 7.9634e-05, 7.7297e-05, 7.0746e-05, 8.3721e-05], device='cuda:2') 2023-03-26 14:13:37,900 INFO [finetune.py:976] (2/7) Epoch 12, batch 1100, loss[loss=0.1879, simple_loss=0.2693, pruned_loss=0.05323, over 4868.00 frames. ], tot_loss[loss=0.1949, simple_loss=0.2619, pruned_loss=0.06393, over 951205.45 frames. ], batch size: 34, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:13:38,774 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.35 vs. 
limit=5.0 2023-03-26 14:13:39,279 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0959, 1.9034, 1.6478, 1.8494, 1.7866, 1.8130, 1.8434, 2.5354], device='cuda:2'), covar=tensor([0.3982, 0.4988, 0.3538, 0.4488, 0.4630, 0.2448, 0.4352, 0.1814], device='cuda:2'), in_proj_covar=tensor([0.0284, 0.0260, 0.0224, 0.0276, 0.0244, 0.0210, 0.0247, 0.0218], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:13:53,907 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=64124.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:13:55,109 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=64126.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:14:05,898 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.091e+02 1.584e+02 1.925e+02 2.329e+02 4.054e+02, threshold=3.850e+02, percent-clipped=2.0 2023-03-26 14:14:17,902 INFO [finetune.py:976] (2/7) Epoch 12, batch 1150, loss[loss=0.207, simple_loss=0.2646, pruned_loss=0.07466, over 4817.00 frames. ], tot_loss[loss=0.1962, simple_loss=0.2633, pruned_loss=0.06457, over 952337.59 frames. ], batch size: 33, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:14:19,274 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-26 14:14:25,687 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64166.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:14:30,518 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-26 14:14:48,855 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64201.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:14:56,319 INFO [finetune.py:976] (2/7) Epoch 12, batch 1200, loss[loss=0.2107, simple_loss=0.2657, pruned_loss=0.07787, over 4850.00 frames. ], tot_loss[loss=0.1942, simple_loss=0.2609, pruned_loss=0.0637, over 953821.35 frames. ], batch size: 44, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:15:14,938 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64225.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:15:24,847 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64232.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:15:31,343 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.004e+02 1.575e+02 1.833e+02 2.193e+02 5.344e+02, threshold=3.667e+02, percent-clipped=2.0 2023-03-26 14:15:34,437 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64242.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:15:43,192 INFO [finetune.py:976] (2/7) Epoch 12, batch 1250, loss[loss=0.1895, simple_loss=0.2516, pruned_loss=0.06367, over 4904.00 frames. ], tot_loss[loss=0.1917, simple_loss=0.2583, pruned_loss=0.06255, over 955354.50 frames. 
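
The scaling.py:679 lines come from a whitening constraint on intermediate activations. A metric is computed from the per-group feature covariance; it equals 1.0 when the groups' covariances are a common multiple of the identity and grows as channels become correlated or unevenly scaled, and a penalty gradient appears to be applied only while the metric exceeds the limit. So "metric=1.36 vs. limit=2.0" means the constraint is currently inactive, while the num_channels=384 probes running at 4-5 against limit=5.0 are close to triggering. One way to compute such a metric (a rough re-derivation, not necessarily icefall's exact formula):

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
        """x: (num_frames, num_channels). Returns a value >= 1.0 that equals
        1.0 when every group's covariance is the same multiple of I."""
        n, c = x.shape
        g = c // num_groups
        xg = x.reshape(n, num_groups, g).permute(1, 2, 0)   # (groups, g, n)
        covar = torch.matmul(xg, xg.transpose(1, 2)) / n    # (groups, g, g)
        mean_diag = covar.diagonal(dim1=1, dim2=2).mean()
        # Mean squared covariance entry relative to what a scaled identity
        # would give: off-diagonal energy or uneven scales push this above 1.
        return (covar ** 2).mean() * g / (mean_diag ** 2)
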
], batch size: 32, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:15:55,100 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64272.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:15:55,714 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=64273.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:16:01,292 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3603, 2.2769, 1.7570, 2.4529, 2.4057, 2.0230, 2.8991, 2.4545], device='cuda:2'), covar=tensor([0.1326, 0.2662, 0.3236, 0.2714, 0.2453, 0.1638, 0.3228, 0.1769], device='cuda:2'), in_proj_covar=tensor([0.0177, 0.0189, 0.0234, 0.0256, 0.0242, 0.0199, 0.0213, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:16:09,178 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=64290.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:16:09,877 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8546, 1.0410, 1.8377, 1.7144, 1.5944, 1.5446, 1.6364, 1.6775], device='cuda:2'), covar=tensor([0.3511, 0.4042, 0.3177, 0.3632, 0.4423, 0.3526, 0.4306, 0.3169], device='cuda:2'), in_proj_covar=tensor([0.0238, 0.0238, 0.0255, 0.0260, 0.0256, 0.0231, 0.0274, 0.0233], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:16:11,092 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64293.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:16:27,223 INFO [finetune.py:976] (2/7) Epoch 12, batch 1300, loss[loss=0.2047, simple_loss=0.2702, pruned_loss=0.0696, over 4828.00 frames. ], tot_loss[loss=0.1895, simple_loss=0.2552, pruned_loss=0.06185, over 954767.42 frames. ], batch size: 38, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:16:48,476 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.098e+02 1.610e+02 1.842e+02 2.244e+02 4.381e+02, threshold=3.684e+02, percent-clipped=1.0 2023-03-26 14:16:53,427 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64345.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:16:55,249 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8455, 1.6516, 2.3695, 1.5919, 2.1075, 2.1454, 1.5779, 2.3807], device='cuda:2'), covar=tensor([0.1570, 0.2121, 0.1385, 0.1977, 0.0914, 0.1605, 0.2956, 0.0837], device='cuda:2'), in_proj_covar=tensor([0.0197, 0.0206, 0.0194, 0.0191, 0.0179, 0.0214, 0.0218, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:16:59,917 INFO [finetune.py:976] (2/7) Epoch 12, batch 1350, loss[loss=0.2109, simple_loss=0.2803, pruned_loss=0.07076, over 4917.00 frames. ], tot_loss[loss=0.1891, simple_loss=0.2549, pruned_loss=0.06166, over 955722.32 frames. 
], batch size: 36, lr: 3.66e-03, grad_scale: 32.0 2023-03-26 14:17:10,589 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7781, 3.0496, 2.8456, 1.9893, 2.8355, 3.2337, 3.0313, 2.7310], device='cuda:2'), covar=tensor([0.0654, 0.0640, 0.0713, 0.0967, 0.0623, 0.0692, 0.0664, 0.0955], device='cuda:2'), in_proj_covar=tensor([0.0136, 0.0134, 0.0141, 0.0125, 0.0122, 0.0143, 0.0144, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:17:16,046 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64379.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:17:33,430 INFO [finetune.py:976] (2/7) Epoch 12, batch 1400, loss[loss=0.2019, simple_loss=0.279, pruned_loss=0.0624, over 4739.00 frames. ], tot_loss[loss=0.1921, simple_loss=0.2587, pruned_loss=0.06274, over 956678.35 frames. ], batch size: 59, lr: 3.66e-03, grad_scale: 32.0 2023-03-26 14:17:34,183 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64406.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:17:37,166 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4321, 2.1699, 1.7570, 0.7861, 1.9190, 1.9041, 1.7554, 1.9268], device='cuda:2'), covar=tensor([0.0833, 0.0858, 0.1574, 0.2233, 0.1517, 0.2422, 0.2196, 0.0994], device='cuda:2'), in_proj_covar=tensor([0.0167, 0.0200, 0.0203, 0.0187, 0.0215, 0.0209, 0.0224, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:17:54,257 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.198e+02 1.617e+02 1.936e+02 2.295e+02 3.610e+02, threshold=3.872e+02, percent-clipped=0.0 2023-03-26 14:18:06,657 INFO [finetune.py:976] (2/7) Epoch 12, batch 1450, loss[loss=0.1833, simple_loss=0.2626, pruned_loss=0.05197, over 4775.00 frames. ], tot_loss[loss=0.1948, simple_loss=0.2619, pruned_loss=0.06386, over 957582.57 frames. ], batch size: 29, lr: 3.66e-03, grad_scale: 32.0 2023-03-26 14:18:13,309 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64465.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:18:13,915 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64466.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:18:37,423 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64501.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:18:39,730 INFO [finetune.py:976] (2/7) Epoch 12, batch 1500, loss[loss=0.1633, simple_loss=0.2283, pruned_loss=0.04914, over 4898.00 frames. ], tot_loss[loss=0.1958, simple_loss=0.2627, pruned_loss=0.06447, over 957220.24 frames. 
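
grad_scale rises from 16.0 (batch 1300) to 32.0 (batch 1350) above and stays there: the standard dynamic loss-scaling behavior of the fp16 training this run uses, where the scaler doubles its scale after a long run of overflow-free steps and halves it whenever an inf/nan gradient forces a skipped step. A minimal generic torch.cuda.amp loop (not the recipe's actual training code):

    import torch

    scaler = torch.cuda.amp.GradScaler(init_scale=16.0, growth_interval=2000)

    def train_step(model, optimizer, batch, loss_fn):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            loss = loss_fn(model, batch)
        scaler.scale(loss).backward()  # grads carry the current scale
        scaler.step(optimizer)         # unscales; skips the step on inf/nan
        scaler.update()                # x2 after growth_interval clean steps,
                                       # x0.5 on overflow
        return scaler.get_scale()      # the value logged as grad_scale
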
], batch size: 36, lr: 3.66e-03, grad_scale: 32.0 2023-03-26 14:18:46,089 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=64514.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:18:54,914 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64526.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:19:01,495 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.120e+02 1.738e+02 2.083e+02 2.672e+02 4.064e+02, threshold=4.165e+02, percent-clipped=1.0 2023-03-26 14:19:15,859 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=64549.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:19:19,447 INFO [finetune.py:976] (2/7) Epoch 12, batch 1550, loss[loss=0.2039, simple_loss=0.2733, pruned_loss=0.0672, over 4779.00 frames. ], tot_loss[loss=0.1955, simple_loss=0.2625, pruned_loss=0.06429, over 953454.74 frames. ], batch size: 51, lr: 3.66e-03, grad_scale: 32.0 2023-03-26 14:19:33,897 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64572.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:19:45,159 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64588.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:19:45,837 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64589.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:19:56,453 INFO [finetune.py:976] (2/7) Epoch 12, batch 1600, loss[loss=0.23, simple_loss=0.2976, pruned_loss=0.08123, over 4867.00 frames. ], tot_loss[loss=0.1937, simple_loss=0.2601, pruned_loss=0.06365, over 954994.97 frames. ], batch size: 34, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:20:03,274 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6809, 1.7719, 2.2184, 1.9809, 2.0187, 4.3081, 1.6916, 1.9971], device='cuda:2'), covar=tensor([0.0914, 0.1649, 0.1046, 0.0985, 0.1329, 0.0163, 0.1363, 0.1600], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0082, 0.0075, 0.0078, 0.0093, 0.0082, 0.0086, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 14:20:08,117 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=64620.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:20:30,240 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.026e+02 1.633e+02 1.922e+02 2.431e+02 4.177e+02, threshold=3.845e+02, percent-clipped=1.0 2023-03-26 14:20:41,558 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64648.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:20:43,267 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64650.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:20:49,612 INFO [finetune.py:976] (2/7) Epoch 12, batch 1650, loss[loss=0.1727, simple_loss=0.2387, pruned_loss=0.0533, over 4920.00 frames. ], tot_loss[loss=0.1904, simple_loss=0.2567, pruned_loss=0.06204, over 956424.27 frames. 
], batch size: 37, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:21:05,197 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64679.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:21:19,529 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64701.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:21:21,766 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64703.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:21:22,888 INFO [finetune.py:976] (2/7) Epoch 12, batch 1700, loss[loss=0.2236, simple_loss=0.2886, pruned_loss=0.07933, over 4821.00 frames. ], tot_loss[loss=0.1893, simple_loss=0.2551, pruned_loss=0.06176, over 958829.95 frames. ], batch size: 38, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:21:27,412 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64709.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:21:46,816 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=64727.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:21:53,446 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.135e+02 1.591e+02 1.930e+02 2.225e+02 5.420e+02, threshold=3.861e+02, percent-clipped=2.0 2023-03-26 14:21:58,926 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64745.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:22:03,093 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64752.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:22:05,308 INFO [finetune.py:976] (2/7) Epoch 12, batch 1750, loss[loss=0.1264, simple_loss=0.2029, pruned_loss=0.02495, over 4756.00 frames. ], tot_loss[loss=0.1925, simple_loss=0.2582, pruned_loss=0.06336, over 956873.99 frames. ], batch size: 26, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:22:10,822 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64764.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:22:22,641 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6491, 2.1841, 1.7488, 0.9102, 2.0149, 2.0398, 1.6621, 1.9597], device='cuda:2'), covar=tensor([0.0695, 0.1016, 0.1418, 0.2011, 0.1584, 0.1931, 0.2309, 0.1039], device='cuda:2'), in_proj_covar=tensor([0.0165, 0.0199, 0.0200, 0.0185, 0.0214, 0.0207, 0.0223, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:22:31,949 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7737, 1.3831, 0.8353, 1.6121, 2.0592, 1.5289, 1.5403, 1.7090], device='cuda:2'), covar=tensor([0.1515, 0.2044, 0.2016, 0.1223, 0.2115, 0.1954, 0.1409, 0.1928], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0095, 0.0113, 0.0092, 0.0119, 0.0094, 0.0098, 0.0090], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 14:22:38,079 INFO [finetune.py:976] (2/7) Epoch 12, batch 1800, loss[loss=0.236, simple_loss=0.2983, pruned_loss=0.08686, over 4796.00 frames. ], tot_loss[loss=0.1948, simple_loss=0.2615, pruned_loss=0.06406, over 957936.18 frames. 
], batch size: 29, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:22:38,808 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64806.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:22:43,528 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64813.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:22:48,350 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64821.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:22:58,985 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.145e+02 1.622e+02 1.968e+02 2.271e+02 4.247e+02, threshold=3.936e+02, percent-clipped=1.0 2023-03-26 14:23:11,388 INFO [finetune.py:976] (2/7) Epoch 12, batch 1850, loss[loss=0.2319, simple_loss=0.2928, pruned_loss=0.08549, over 4819.00 frames. ], tot_loss[loss=0.197, simple_loss=0.2633, pruned_loss=0.0654, over 958953.13 frames. ], batch size: 38, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:23:31,337 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.98 vs. limit=5.0 2023-03-26 14:23:33,493 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64888.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:23:42,259 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5944, 2.1643, 1.8400, 1.0045, 2.1878, 1.9552, 1.7449, 2.0113], device='cuda:2'), covar=tensor([0.1069, 0.1185, 0.1882, 0.2305, 0.1483, 0.2339, 0.2490, 0.1215], device='cuda:2'), in_proj_covar=tensor([0.0166, 0.0199, 0.0201, 0.0186, 0.0215, 0.0207, 0.0223, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:23:45,140 INFO [finetune.py:976] (2/7) Epoch 12, batch 1900, loss[loss=0.2105, simple_loss=0.2748, pruned_loss=0.0731, over 4816.00 frames. ], tot_loss[loss=0.1986, simple_loss=0.2651, pruned_loss=0.06608, over 960322.02 frames. ], batch size: 47, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:24:05,582 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=64936.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:24:06,597 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.024e+02 1.653e+02 1.911e+02 2.364e+02 4.358e+02, threshold=3.822e+02, percent-clipped=3.0 2023-03-26 14:24:06,755 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5423, 1.4146, 1.3025, 1.6199, 1.6656, 1.5664, 1.0062, 1.3135], device='cuda:2'), covar=tensor([0.2088, 0.2137, 0.1830, 0.1611, 0.1550, 0.1203, 0.2451, 0.1841], device='cuda:2'), in_proj_covar=tensor([0.0241, 0.0208, 0.0210, 0.0190, 0.0242, 0.0183, 0.0213, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:24:12,013 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64945.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:24:18,921 INFO [finetune.py:976] (2/7) Epoch 12, batch 1950, loss[loss=0.1849, simple_loss=0.2304, pruned_loss=0.06974, over 4308.00 frames. ], tot_loss[loss=0.1974, simple_loss=0.2636, pruned_loss=0.06562, over 956616.96 frames. 
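
The attn_weights_entropy dumps are periodic diagnostics, not losses: each row appears to hold one value per attention head (eight here), giving the average entropy of that head's softmaxed attention distribution, with covariance statistics of the input/output projections printed alongside. Low entropy means a head attends very selectively; values near log(src_len) mean nearly uniform attention. A sketch of the entropy part, with the tensor layout an assumption on our side:

    import torch

    def attention_entropy_per_head(attn_weights: torch.Tensor) -> torch.Tensor:
        """attn_weights: (num_heads, batch, tgt_len, src_len), each src row
        already softmaxed. Returns one averaged entropy per head."""
        eps = 1.0e-20
        entropy = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
        return entropy.mean(dim=(1, 2))  # average over batch and positions
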
], batch size: 18, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:24:40,804 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64976.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:24:58,112 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65001.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:24:59,919 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65004.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:25:00,451 INFO [finetune.py:976] (2/7) Epoch 12, batch 2000, loss[loss=0.207, simple_loss=0.2615, pruned_loss=0.07627, over 4866.00 frames. ], tot_loss[loss=0.194, simple_loss=0.2598, pruned_loss=0.06408, over 954401.18 frames. ], batch size: 31, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:25:21,641 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.067e+02 1.609e+02 1.880e+02 2.233e+02 7.388e+02, threshold=3.760e+02, percent-clipped=1.0 2023-03-26 14:25:21,798 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=65037.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 14:25:34,387 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=65049.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:25:42,374 INFO [finetune.py:976] (2/7) Epoch 12, batch 2050, loss[loss=0.1667, simple_loss=0.2257, pruned_loss=0.05385, over 4865.00 frames. ], tot_loss[loss=0.1912, simple_loss=0.2562, pruned_loss=0.06313, over 955376.90 frames. ], batch size: 31, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:25:42,494 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=65055.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:25:45,412 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65059.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:26:21,865 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65101.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:26:24,690 INFO [finetune.py:976] (2/7) Epoch 12, batch 2100, loss[loss=0.162, simple_loss=0.231, pruned_loss=0.04646, over 4898.00 frames. ], tot_loss[loss=0.1914, simple_loss=0.2565, pruned_loss=0.06318, over 955935.34 frames. ], batch size: 32, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:26:27,109 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65108.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:26:32,504 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=65116.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:26:35,466 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65121.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:26:52,594 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.054e+02 1.674e+02 1.985e+02 2.398e+02 5.597e+02, threshold=3.971e+02, percent-clipped=1.0 2023-03-26 14:27:08,169 INFO [finetune.py:976] (2/7) Epoch 12, batch 2150, loss[loss=0.1702, simple_loss=0.2346, pruned_loss=0.05292, over 4748.00 frames. ], tot_loss[loss=0.194, simple_loss=0.2592, pruned_loss=0.06443, over 955879.74 frames. ], batch size: 27, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:27:16,100 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. 
limit=2.0 2023-03-26 14:27:17,726 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=65169.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:27:41,484 INFO [finetune.py:976] (2/7) Epoch 12, batch 2200, loss[loss=0.2157, simple_loss=0.2833, pruned_loss=0.0741, over 4820.00 frames. ], tot_loss[loss=0.1965, simple_loss=0.2618, pruned_loss=0.06564, over 953599.97 frames. ], batch size: 33, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:28:03,263 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.110e+02 1.670e+02 2.055e+02 2.491e+02 4.530e+02, threshold=4.111e+02, percent-clipped=2.0 2023-03-26 14:28:08,797 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65245.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:28:15,257 INFO [finetune.py:976] (2/7) Epoch 12, batch 2250, loss[loss=0.1697, simple_loss=0.24, pruned_loss=0.04966, over 4908.00 frames. ], tot_loss[loss=0.1985, simple_loss=0.2642, pruned_loss=0.06638, over 953076.54 frames. ], batch size: 38, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:28:17,936 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-26 14:28:27,092 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7229, 1.5003, 2.3022, 3.6407, 2.4627, 2.4668, 1.0852, 2.8381], device='cuda:2'), covar=tensor([0.1771, 0.1623, 0.1345, 0.0571, 0.0787, 0.1490, 0.1918, 0.0542], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0116, 0.0134, 0.0165, 0.0100, 0.0138, 0.0126, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 14:28:36,008 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=65285.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:28:41,279 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=65293.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:28:48,530 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65304.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:28:49,046 INFO [finetune.py:976] (2/7) Epoch 12, batch 2300, loss[loss=0.1923, simple_loss=0.2643, pruned_loss=0.06017, over 4759.00 frames. ], tot_loss[loss=0.1963, simple_loss=0.2628, pruned_loss=0.06491, over 955206.29 frames. ], batch size: 26, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:29:07,488 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65332.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 14:29:10,444 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.717e+01 1.673e+02 1.949e+02 2.267e+02 6.743e+02, threshold=3.897e+02, percent-clipped=1.0 2023-03-26 14:29:16,554 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=65346.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:29:20,081 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=65352.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:29:22,340 INFO [finetune.py:976] (2/7) Epoch 12, batch 2350, loss[loss=0.1846, simple_loss=0.2617, pruned_loss=0.05377, over 4905.00 frames. ], tot_loss[loss=0.1942, simple_loss=0.2607, pruned_loss=0.06385, over 956645.92 frames. 
], batch size: 46, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:29:24,816 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65359.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:29:38,194 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6201, 1.6343, 2.0518, 1.2610, 1.7491, 1.8275, 1.6019, 2.0738], device='cuda:2'), covar=tensor([0.1437, 0.2107, 0.1154, 0.1723, 0.0860, 0.1423, 0.2720, 0.0955], device='cuda:2'), in_proj_covar=tensor([0.0198, 0.0208, 0.0197, 0.0194, 0.0179, 0.0217, 0.0219, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:30:02,647 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65401.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:30:04,996 INFO [finetune.py:976] (2/7) Epoch 12, batch 2400, loss[loss=0.1389, simple_loss=0.2049, pruned_loss=0.03642, over 4758.00 frames. ], tot_loss[loss=0.1911, simple_loss=0.2571, pruned_loss=0.06257, over 956179.88 frames. ], batch size: 59, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:30:06,250 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=65407.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:30:06,311 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1064, 0.9745, 0.9942, 0.3982, 0.8838, 1.1838, 1.2187, 1.0072], device='cuda:2'), covar=tensor([0.0860, 0.0644, 0.0502, 0.0531, 0.0522, 0.0604, 0.0401, 0.0656], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0152, 0.0121, 0.0131, 0.0130, 0.0126, 0.0142, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.2970e-05, 1.1143e-04, 8.6971e-05, 9.4606e-05, 9.2613e-05, 9.1432e-05, 1.0393e-04, 1.0595e-04], device='cuda:2') 2023-03-26 14:30:07,397 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65408.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:30:09,181 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65411.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:30:26,323 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.174e+02 1.534e+02 1.885e+02 2.327e+02 5.518e+02, threshold=3.771e+02, percent-clipped=1.0 2023-03-26 14:30:34,674 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=65449.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:30:38,806 INFO [finetune.py:976] (2/7) Epoch 12, batch 2450, loss[loss=0.2029, simple_loss=0.2727, pruned_loss=0.06661, over 4828.00 frames. ], tot_loss[loss=0.1884, simple_loss=0.254, pruned_loss=0.06145, over 956576.97 frames. ], batch size: 33, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:30:39,470 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=65456.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:31:31,329 INFO [finetune.py:976] (2/7) Epoch 12, batch 2500, loss[loss=0.2426, simple_loss=0.2851, pruned_loss=0.1001, over 4896.00 frames. ], tot_loss[loss=0.1905, simple_loss=0.2562, pruned_loss=0.06234, over 956078.32 frames. 
], batch size: 35, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:31:49,235 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=65532.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:31:52,645 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.473e+01 1.650e+02 2.020e+02 2.341e+02 4.049e+02, threshold=4.040e+02, percent-clipped=1.0 2023-03-26 14:32:06,902 INFO [finetune.py:976] (2/7) Epoch 12, batch 2550, loss[loss=0.2193, simple_loss=0.2811, pruned_loss=0.07875, over 4853.00 frames. ], tot_loss[loss=0.1934, simple_loss=0.2603, pruned_loss=0.06325, over 954598.65 frames. ], batch size: 47, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:32:17,601 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6829, 1.8901, 1.6394, 1.5730, 2.1690, 2.0849, 1.8397, 1.8474], device='cuda:2'), covar=tensor([0.0451, 0.0325, 0.0560, 0.0360, 0.0394, 0.0656, 0.0389, 0.0429], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0107, 0.0138, 0.0113, 0.0101, 0.0103, 0.0092, 0.0107], device='cuda:2'), out_proj_covar=tensor([7.0782e-05, 8.3422e-05, 1.0992e-04, 8.8086e-05, 7.9092e-05, 7.6420e-05, 6.9824e-05, 8.2594e-05], device='cuda:2') 2023-03-26 14:32:40,914 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=65593.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 14:32:48,714 INFO [finetune.py:976] (2/7) Epoch 12, batch 2600, loss[loss=0.1838, simple_loss=0.264, pruned_loss=0.05182, over 4919.00 frames. ], tot_loss[loss=0.1944, simple_loss=0.2621, pruned_loss=0.06334, over 957221.66 frames. ], batch size: 42, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:33:06,613 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65632.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:33:10,009 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.524e+01 1.645e+02 2.049e+02 2.494e+02 4.393e+02, threshold=4.097e+02, percent-clipped=1.0 2023-03-26 14:33:12,519 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65641.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:33:22,372 INFO [finetune.py:976] (2/7) Epoch 12, batch 2650, loss[loss=0.2182, simple_loss=0.2833, pruned_loss=0.07656, over 4812.00 frames. ], tot_loss[loss=0.1949, simple_loss=0.2628, pruned_loss=0.06351, over 955507.53 frames. ], batch size: 39, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:33:38,618 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=65680.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:33:55,610 INFO [finetune.py:976] (2/7) Epoch 12, batch 2700, loss[loss=0.1563, simple_loss=0.2279, pruned_loss=0.04233, over 4708.00 frames. ], tot_loss[loss=0.1941, simple_loss=0.2616, pruned_loss=0.06335, over 955024.17 frames. ], batch size: 23, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:33:59,778 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65711.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:34:01,568 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-26 14:34:17,009 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.115e+02 1.547e+02 1.884e+02 2.200e+02 3.210e+02, threshold=3.769e+02, percent-clipped=0.0 2023-03-26 14:34:30,279 INFO [finetune.py:976] (2/7) Epoch 12, batch 2750, loss[loss=0.1873, simple_loss=0.2492, pruned_loss=0.06275, over 4890.00 frames. ], tot_loss[loss=0.1934, simple_loss=0.2596, pruned_loss=0.06363, over 954416.98 frames. 
], batch size: 32, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:34:38,003 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=65759.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:35:23,474 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0 2023-03-26 14:35:30,775 INFO [finetune.py:976] (2/7) Epoch 12, batch 2800, loss[loss=0.1865, simple_loss=0.2633, pruned_loss=0.05482, over 4809.00 frames. ], tot_loss[loss=0.1904, simple_loss=0.2561, pruned_loss=0.06235, over 954767.58 frames. ], batch size: 38, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:35:52,229 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.914e+01 1.578e+02 1.887e+02 2.176e+02 5.167e+02, threshold=3.774e+02, percent-clipped=1.0 2023-03-26 14:35:52,981 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=65838.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:36:04,209 INFO [finetune.py:976] (2/7) Epoch 12, batch 2850, loss[loss=0.2353, simple_loss=0.2884, pruned_loss=0.09112, over 4022.00 frames. ], tot_loss[loss=0.1891, simple_loss=0.2544, pruned_loss=0.06191, over 955042.01 frames. ], batch size: 65, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:36:36,962 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65888.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 14:36:45,225 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=65899.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 14:36:48,785 INFO [finetune.py:976] (2/7) Epoch 12, batch 2900, loss[loss=0.2147, simple_loss=0.2793, pruned_loss=0.07504, over 4835.00 frames. ], tot_loss[loss=0.1921, simple_loss=0.2579, pruned_loss=0.06315, over 955329.24 frames. ], batch size: 33, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:37:10,261 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.038e+02 1.527e+02 1.842e+02 2.377e+02 4.547e+02, threshold=3.684e+02, percent-clipped=3.0 2023-03-26 14:37:10,403 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.1185, 2.7925, 2.4882, 1.1815, 2.6726, 2.2036, 2.1514, 2.4160], device='cuda:2'), covar=tensor([0.1049, 0.0837, 0.1764, 0.2290, 0.1825, 0.2356, 0.2228, 0.1250], device='cuda:2'), in_proj_covar=tensor([0.0166, 0.0198, 0.0200, 0.0185, 0.0215, 0.0207, 0.0223, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:37:12,786 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65941.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:37:27,695 INFO [finetune.py:976] (2/7) Epoch 12, batch 2950, loss[loss=0.2365, simple_loss=0.2855, pruned_loss=0.09374, over 4740.00 frames. ], tot_loss[loss=0.1936, simple_loss=0.2603, pruned_loss=0.06343, over 955763.41 frames. ], batch size: 59, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:37:42,550 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.25 vs. limit=5.0 2023-03-26 14:38:02,421 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=65989.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:38:26,404 INFO [finetune.py:976] (2/7) Epoch 12, batch 3000, loss[loss=0.2054, simple_loss=0.2673, pruned_loss=0.07171, over 4848.00 frames. ], tot_loss[loss=0.1954, simple_loss=0.2619, pruned_loss=0.06441, over 955182.89 frames. 
], batch size: 44, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:38:26,404 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 14:38:31,274 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2447, 2.0987, 1.5449, 0.5915, 1.7915, 1.8582, 1.7366, 1.9153], device='cuda:2'), covar=tensor([0.0973, 0.0749, 0.1535, 0.1973, 0.1428, 0.2736, 0.2338, 0.0838], device='cuda:2'), in_proj_covar=tensor([0.0166, 0.0198, 0.0201, 0.0185, 0.0214, 0.0207, 0.0223, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:38:37,072 INFO [finetune.py:1010] (2/7) Epoch 12, validation: loss=0.1571, simple_loss=0.2281, pruned_loss=0.04309, over 2265189.00 frames. 2023-03-26 14:38:37,073 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB 2023-03-26 14:38:53,476 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-26 14:38:58,511 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.011e+02 1.619e+02 1.943e+02 2.343e+02 4.325e+02, threshold=3.886e+02, percent-clipped=3.0 2023-03-26 14:39:21,449 INFO [finetune.py:976] (2/7) Epoch 12, batch 3050, loss[loss=0.1742, simple_loss=0.2464, pruned_loss=0.05098, over 4763.00 frames. ], tot_loss[loss=0.1955, simple_loss=0.2625, pruned_loss=0.06429, over 956390.09 frames. ], batch size: 26, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:39:32,825 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.98 vs. limit=5.0 2023-03-26 14:39:53,013 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-26 14:40:36,352 INFO [finetune.py:976] (2/7) Epoch 12, batch 3100, loss[loss=0.1453, simple_loss=0.2169, pruned_loss=0.03683, over 4705.00 frames. ], tot_loss[loss=0.1923, simple_loss=0.2593, pruned_loss=0.0627, over 956013.33 frames. ], batch size: 59, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:41:19,788 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.081e+02 1.628e+02 1.972e+02 2.398e+02 4.316e+02, threshold=3.945e+02, percent-clipped=3.0 2023-03-26 14:41:27,575 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=66144.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:41:40,144 INFO [finetune.py:976] (2/7) Epoch 12, batch 3150, loss[loss=0.2474, simple_loss=0.291, pruned_loss=0.1019, over 4898.00 frames. ], tot_loss[loss=0.1905, simple_loss=0.2567, pruned_loss=0.06213, over 956735.21 frames. 
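[annotation] In every optim.py record in this section the reported threshold equals Clipping_scale times the third of the five grad-norm statistics (e.g. 2.0 x 1.943e+02 = 3.886e+02 just above), so the five numbers read naturally as min/Q1/median/Q3/max of recently observed gradient norms. A hedged sketch of such median-based clipping; the window size and return convention are assumptions:

    import torch

    def median_clip_threshold(recent_norms, new_norm, scale=2.0, window=128):
        """Track recent gradient norms, compute their quartiles, and derive
        a clipping threshold as scale * median -- matching the
        'grad-norm quartiles ... threshold=...' records. Sketch only."""
        recent_norms.append(float(new_norm))
        del recent_norms[:-window]                  # assumed sliding window
        t = torch.tensor(recent_norms)
        q = torch.quantile(t, torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
        threshold = scale * q[2].item()             # 2 x median, as logged
        return threshold, float(new_norm) > threshold

    norms = []
    for g in (117.4, 153.4, 188.5, 232.7, 551.8):   # quartile-like values above
        thr, clipped = median_clip_threshold(norms, g)
    print(thr, clipped)                             # ~377.0, True (would be clipped)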
], batch size: 43, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:41:43,355 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=66160.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:42:24,559 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=66188.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 14:42:32,946 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=66194.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 14:42:35,820 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2657, 2.9424, 2.7790, 1.1330, 3.0289, 2.2836, 0.6899, 1.8329], device='cuda:2'), covar=tensor([0.2256, 0.2212, 0.1924, 0.3528, 0.1307, 0.1076, 0.4092, 0.1637], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0175, 0.0161, 0.0129, 0.0156, 0.0122, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 14:42:40,455 INFO [finetune.py:976] (2/7) Epoch 12, batch 3200, loss[loss=0.1717, simple_loss=0.2466, pruned_loss=0.04834, over 4872.00 frames. ], tot_loss[loss=0.1882, simple_loss=0.2539, pruned_loss=0.06128, over 956792.13 frames. ], batch size: 34, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:42:40,564 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=66205.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 14:42:51,180 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=66221.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:43:01,312 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=66236.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 14:43:01,823 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.059e+02 1.602e+02 1.878e+02 2.419e+02 4.134e+02, threshold=3.755e+02, percent-clipped=1.0 2023-03-26 14:43:03,240 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.75 vs. limit=2.0 2023-03-26 14:43:11,527 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.49 vs. limit=2.0 2023-03-26 14:43:14,214 INFO [finetune.py:976] (2/7) Epoch 12, batch 3250, loss[loss=0.1644, simple_loss=0.2296, pruned_loss=0.04959, over 4795.00 frames. ], tot_loss[loss=0.1905, simple_loss=0.2559, pruned_loss=0.0626, over 958283.75 frames. 
], batch size: 25, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:43:14,951 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.5585, 3.9814, 4.1819, 4.3787, 4.3093, 4.0154, 4.6733, 1.4446], device='cuda:2'), covar=tensor([0.0690, 0.0804, 0.0709, 0.0835, 0.1027, 0.1524, 0.0533, 0.5318], device='cuda:2'), in_proj_covar=tensor([0.0347, 0.0242, 0.0274, 0.0289, 0.0327, 0.0282, 0.0299, 0.0295], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:43:16,204 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=66258.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:43:28,790 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6124, 1.0417, 0.8485, 1.5031, 2.0060, 1.1005, 1.2456, 1.4829], device='cuda:2'), covar=tensor([0.1579, 0.2394, 0.2087, 0.1315, 0.1933, 0.2040, 0.1717, 0.2101], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0096, 0.0112, 0.0092, 0.0120, 0.0094, 0.0100, 0.0091], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 14:43:42,482 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2760, 2.8721, 2.6448, 1.1857, 2.9680, 2.2823, 0.8060, 1.7952], device='cuda:2'), covar=tensor([0.2618, 0.2235, 0.1956, 0.3652, 0.1495, 0.1110, 0.4065, 0.1827], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0175, 0.0160, 0.0129, 0.0156, 0.0122, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 14:43:48,267 INFO [finetune.py:976] (2/7) Epoch 12, batch 3300, loss[loss=0.1847, simple_loss=0.2612, pruned_loss=0.05406, over 4829.00 frames. ], tot_loss[loss=0.1938, simple_loss=0.2604, pruned_loss=0.0636, over 959032.10 frames. ], batch size: 47, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:43:56,984 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=66319.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:44:14,654 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.247e+02 1.642e+02 1.904e+02 2.370e+02 4.024e+02, threshold=3.808e+02, percent-clipped=2.0 2023-03-26 14:44:29,721 INFO [finetune.py:976] (2/7) Epoch 12, batch 3350, loss[loss=0.1461, simple_loss=0.2196, pruned_loss=0.03631, over 4697.00 frames. ], tot_loss[loss=0.1965, simple_loss=0.2628, pruned_loss=0.06509, over 956445.40 frames. ], batch size: 23, lr: 3.64e-03, grad_scale: 64.0 2023-03-26 14:44:41,265 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9905, 2.2861, 1.4011, 2.8411, 3.0404, 2.4509, 2.6700, 2.7062], device='cuda:2'), covar=tensor([0.1078, 0.1668, 0.1728, 0.0898, 0.1555, 0.1602, 0.1132, 0.1626], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0096, 0.0113, 0.0092, 0.0120, 0.0094, 0.0100, 0.0091], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 14:45:02,613 INFO [finetune.py:976] (2/7) Epoch 12, batch 3400, loss[loss=0.1969, simple_loss=0.2699, pruned_loss=0.06188, over 4784.00 frames. ], tot_loss[loss=0.1955, simple_loss=0.2621, pruned_loss=0.06445, over 955085.90 frames. 
], batch size: 51, lr: 3.64e-03, grad_scale: 64.0 2023-03-26 14:45:10,998 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5658, 1.4570, 1.4422, 1.4659, 1.1031, 3.1050, 1.2947, 1.6372], device='cuda:2'), covar=tensor([0.3280, 0.2379, 0.2093, 0.2338, 0.1872, 0.0227, 0.2740, 0.1397], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0116, 0.0120, 0.0124, 0.0115, 0.0098, 0.0098, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 14:45:12,840 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6689, 1.5757, 1.4956, 1.5487, 1.0352, 3.6207, 1.3559, 1.9808], device='cuda:2'), covar=tensor([0.3327, 0.2458, 0.2149, 0.2340, 0.1920, 0.0165, 0.2482, 0.1243], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0116, 0.0120, 0.0124, 0.0115, 0.0098, 0.0098, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 14:45:14,664 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8354, 1.6447, 1.4435, 1.2759, 1.6000, 1.5721, 1.5294, 2.1860], device='cuda:2'), covar=tensor([0.4325, 0.4326, 0.3455, 0.4017, 0.4043, 0.2428, 0.3927, 0.1897], device='cuda:2'), in_proj_covar=tensor([0.0285, 0.0260, 0.0224, 0.0277, 0.0244, 0.0211, 0.0247, 0.0218], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:45:24,436 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.132e+02 1.693e+02 1.994e+02 2.432e+02 3.824e+02, threshold=3.988e+02, percent-clipped=2.0 2023-03-26 14:45:36,094 INFO [finetune.py:976] (2/7) Epoch 12, batch 3450, loss[loss=0.2041, simple_loss=0.2636, pruned_loss=0.07233, over 4846.00 frames. ], tot_loss[loss=0.1946, simple_loss=0.2611, pruned_loss=0.06403, over 954634.75 frames. ], batch size: 44, lr: 3.64e-03, grad_scale: 64.0 2023-03-26 14:46:02,749 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=66494.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:46:06,887 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=66500.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:46:09,848 INFO [finetune.py:976] (2/7) Epoch 12, batch 3500, loss[loss=0.1634, simple_loss=0.221, pruned_loss=0.05291, over 4247.00 frames. ], tot_loss[loss=0.1933, simple_loss=0.2593, pruned_loss=0.06369, over 953241.12 frames. 
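[annotation] The lr field decays very slowly here (3.65e-03 down toward 3.62e-03 over a few thousand batches), which is consistent with icefall's Eden schedule, where the rate shrinks smoothly with both the batch and epoch counters. A hedged sketch; the base rate and the two time constants below are assumed values, used only to show the formula reproduces the logged magnitudes:

    def eden_lr(base_lr: float, batch: float, epoch: float,
                lr_batches: float = 100000.0, lr_epochs: float = 100.0) -> float:
        """Eden-style learning rate: power-law decay in both batch and
        epoch. Treating this as the exact schedule behind the 'lr:'
        field is an assumption."""
        return (base_lr
                * ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
                * ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25)

    print(eden_lr(0.004, batch=65400, epoch=12))  # ~3.65e-03, as in these records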
], batch size: 18, lr: 3.64e-03, grad_scale: 64.0 2023-03-26 14:46:20,214 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1178, 2.1389, 2.1209, 1.4820, 2.1969, 2.3059, 2.2303, 1.7779], device='cuda:2'), covar=tensor([0.0598, 0.0561, 0.0746, 0.0949, 0.0624, 0.0652, 0.0594, 0.1042], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0133, 0.0141, 0.0123, 0.0122, 0.0142, 0.0142, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:46:20,799 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=66516.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:46:23,838 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=66521.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 14:46:29,886 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.2104, 1.2651, 1.2473, 0.6503, 1.1834, 1.4770, 1.5181, 1.2154], device='cuda:2'), covar=tensor([0.0816, 0.0499, 0.0458, 0.0489, 0.0464, 0.0514, 0.0258, 0.0547], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0153, 0.0123, 0.0132, 0.0130, 0.0126, 0.0144, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.3562e-05, 1.1173e-04, 8.8473e-05, 9.4949e-05, 9.2418e-05, 9.1719e-05, 1.0479e-04, 1.0670e-04], device='cuda:2') 2023-03-26 14:46:36,430 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.070e+02 1.655e+02 1.937e+02 2.486e+02 6.010e+02, threshold=3.875e+02, percent-clipped=2.0 2023-03-26 14:46:40,000 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=66542.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:46:57,615 INFO [finetune.py:976] (2/7) Epoch 12, batch 3550, loss[loss=0.2277, simple_loss=0.2855, pruned_loss=0.08499, over 4880.00 frames. ], tot_loss[loss=0.1914, simple_loss=0.2564, pruned_loss=0.06318, over 953124.27 frames. ], batch size: 35, lr: 3.64e-03, grad_scale: 64.0 2023-03-26 14:47:23,068 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=66582.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 14:47:43,651 INFO [finetune.py:976] (2/7) Epoch 12, batch 3600, loss[loss=0.1664, simple_loss=0.2388, pruned_loss=0.04698, over 4795.00 frames. ], tot_loss[loss=0.189, simple_loss=0.254, pruned_loss=0.06199, over 952576.92 frames. ], batch size: 29, lr: 3.64e-03, grad_scale: 64.0 2023-03-26 14:47:53,889 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=66614.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:48:03,343 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5328, 1.4287, 1.8419, 1.8972, 1.6279, 3.3525, 1.3922, 1.6683], device='cuda:2'), covar=tensor([0.0957, 0.1888, 0.1287, 0.0918, 0.1516, 0.0236, 0.1455, 0.1689], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0081, 0.0085, 0.0078], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 14:48:06,566 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.41 vs. 
limit=2.0 2023-03-26 14:48:08,249 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5254, 1.3557, 1.7819, 2.0002, 1.5433, 3.3088, 1.2537, 1.5962], device='cuda:2'), covar=tensor([0.0897, 0.1893, 0.1073, 0.0852, 0.1644, 0.0244, 0.1614, 0.1767], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0081, 0.0085, 0.0078], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 14:48:08,735 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.078e+02 1.581e+02 1.999e+02 2.430e+02 3.919e+02, threshold=3.999e+02, percent-clipped=1.0 2023-03-26 14:48:21,113 INFO [finetune.py:976] (2/7) Epoch 12, batch 3650, loss[loss=0.1782, simple_loss=0.2566, pruned_loss=0.04986, over 4842.00 frames. ], tot_loss[loss=0.1907, simple_loss=0.2563, pruned_loss=0.06256, over 951199.88 frames. ], batch size: 49, lr: 3.64e-03, grad_scale: 64.0 2023-03-26 14:48:54,888 INFO [finetune.py:976] (2/7) Epoch 12, batch 3700, loss[loss=0.18, simple_loss=0.2413, pruned_loss=0.05938, over 4713.00 frames. ], tot_loss[loss=0.1935, simple_loss=0.26, pruned_loss=0.06353, over 951908.02 frames. ], batch size: 23, lr: 3.64e-03, grad_scale: 64.0 2023-03-26 14:49:14,938 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=66731.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 14:49:18,458 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.046e+02 1.675e+02 1.999e+02 2.466e+02 3.717e+02, threshold=3.997e+02, percent-clipped=0.0 2023-03-26 14:49:38,724 INFO [finetune.py:976] (2/7) Epoch 12, batch 3750, loss[loss=0.1674, simple_loss=0.2412, pruned_loss=0.04681, over 4825.00 frames. ], tot_loss[loss=0.1944, simple_loss=0.2614, pruned_loss=0.06374, over 951709.08 frames. ], batch size: 39, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:50:03,084 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=66792.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 14:50:08,993 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=66800.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:50:12,433 INFO [finetune.py:976] (2/7) Epoch 12, batch 3800, loss[loss=0.2146, simple_loss=0.2942, pruned_loss=0.0675, over 4815.00 frames. ], tot_loss[loss=0.1966, simple_loss=0.2634, pruned_loss=0.06489, over 949986.75 frames. ], batch size: 40, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:50:19,266 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=66816.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:50:22,219 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.65 vs. limit=2.0 2023-03-26 14:50:34,225 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.175e+02 1.668e+02 2.101e+02 2.666e+02 4.038e+02, threshold=4.202e+02, percent-clipped=1.0 2023-03-26 14:50:40,357 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=66848.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:50:45,463 INFO [finetune.py:976] (2/7) Epoch 12, batch 3850, loss[loss=0.2015, simple_loss=0.2731, pruned_loss=0.06498, over 4829.00 frames. ], tot_loss[loss=0.1954, simple_loss=0.2622, pruned_loss=0.06429, over 952182.08 frames. 
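[annotation] The zipformer.py:1188 records describe a per-stack layer-skip decision: each stack has a warmup window (warmup_begin/warmup_end, in batches), and on a given forward pass zero or more of its layers are skipped. A hedged sketch of one plausible rule; the drop probability and the condition are assumptions, not the actual zipformer.py logic:

    import random

    def layers_to_drop(batch_count: float, warmup_end: float,
                       num_layers: int, p_drop: float = 0.075) -> set:
        """Occasionally skip one randomly chosen layer once past warmup,
        yielding mostly 'num_to_drop=0' with sporadic
        'layers_to_drop={i}' records like those above. Sketch only."""
        drop = set()
        if batch_count > warmup_end and random.random() < p_drop:
            drop.add(random.randrange(num_layers))
        return drop

    print(layers_to_drop(66205.0, warmup_end=4000.0, num_layers=4))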
], batch size: 47, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:50:45,580 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2266, 2.1985, 1.9385, 1.1623, 2.0903, 1.8655, 1.7355, 2.0177], device='cuda:2'), covar=tensor([0.0814, 0.0651, 0.1262, 0.1648, 0.1180, 0.1645, 0.1795, 0.0818], device='cuda:2'), in_proj_covar=tensor([0.0164, 0.0196, 0.0198, 0.0182, 0.0212, 0.0205, 0.0220, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:50:51,371 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=66864.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 14:51:00,117 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=66877.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:51:16,186 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7726, 1.5547, 1.9901, 1.3245, 1.6406, 1.9724, 1.5552, 2.0549], device='cuda:2'), covar=tensor([0.0980, 0.1899, 0.1091, 0.1489, 0.0756, 0.0987, 0.2326, 0.0672], device='cuda:2'), in_proj_covar=tensor([0.0197, 0.0207, 0.0195, 0.0193, 0.0180, 0.0216, 0.0218, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:51:18,885 INFO [finetune.py:976] (2/7) Epoch 12, batch 3900, loss[loss=0.2422, simple_loss=0.2964, pruned_loss=0.094, over 4209.00 frames. ], tot_loss[loss=0.1947, simple_loss=0.2605, pruned_loss=0.06446, over 953634.99 frames. ], batch size: 66, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:51:20,242 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6388, 2.4254, 3.0208, 1.8051, 2.4309, 3.1387, 2.3301, 3.0322], device='cuda:2'), covar=tensor([0.1380, 0.2054, 0.1460, 0.2342, 0.1136, 0.1362, 0.2495, 0.0943], device='cuda:2'), in_proj_covar=tensor([0.0197, 0.0207, 0.0195, 0.0193, 0.0180, 0.0215, 0.0218, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:51:24,978 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=66914.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:51:40,885 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.325e+01 1.578e+02 1.785e+02 2.294e+02 5.103e+02, threshold=3.570e+02, percent-clipped=1.0 2023-03-26 14:51:51,223 INFO [finetune.py:976] (2/7) Epoch 12, batch 3950, loss[loss=0.1863, simple_loss=0.2588, pruned_loss=0.05688, over 4848.00 frames. ], tot_loss[loss=0.1906, simple_loss=0.2561, pruned_loss=0.06251, over 952885.01 frames. 
], batch size: 49, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:51:53,023 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=66957.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:51:58,482 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=66962.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:52:44,782 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5338, 1.3881, 1.2839, 1.5430, 1.6279, 1.5942, 0.9152, 1.3238], device='cuda:2'), covar=tensor([0.2244, 0.2189, 0.2093, 0.1671, 0.1583, 0.1297, 0.2701, 0.1947], device='cuda:2'), in_proj_covar=tensor([0.0242, 0.0208, 0.0211, 0.0191, 0.0242, 0.0183, 0.0214, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:52:47,107 INFO [finetune.py:976] (2/7) Epoch 12, batch 4000, loss[loss=0.1733, simple_loss=0.2464, pruned_loss=0.05006, over 4818.00 frames. ], tot_loss[loss=0.1901, simple_loss=0.2554, pruned_loss=0.06243, over 954696.93 frames. ], batch size: 41, lr: 3.63e-03, grad_scale: 32.0 2023-03-26 14:52:55,753 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.74 vs. limit=2.0 2023-03-26 14:53:04,480 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=67018.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 14:53:18,612 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.442e+01 1.595e+02 2.014e+02 2.521e+02 4.335e+02, threshold=4.027e+02, percent-clipped=3.0 2023-03-26 14:53:28,874 INFO [finetune.py:976] (2/7) Epoch 12, batch 4050, loss[loss=0.1931, simple_loss=0.2611, pruned_loss=0.0626, over 4757.00 frames. ], tot_loss[loss=0.1938, simple_loss=0.2597, pruned_loss=0.064, over 953706.91 frames. ], batch size: 28, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 14:53:47,291 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3388, 1.3142, 1.2091, 1.3087, 1.6097, 1.4578, 1.3599, 1.1470], device='cuda:2'), covar=tensor([0.0322, 0.0261, 0.0594, 0.0282, 0.0224, 0.0479, 0.0331, 0.0411], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0109, 0.0140, 0.0114, 0.0103, 0.0105, 0.0095, 0.0109], device='cuda:2'), out_proj_covar=tensor([7.2512e-05, 8.5085e-05, 1.1140e-04, 8.8878e-05, 8.0220e-05, 7.7985e-05, 7.1599e-05, 8.4113e-05], device='cuda:2') 2023-03-26 14:53:47,895 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=67083.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:53:50,803 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=67087.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 14:54:00,460 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.42 vs. limit=2.0 2023-03-26 14:54:02,028 INFO [finetune.py:976] (2/7) Epoch 12, batch 4100, loss[loss=0.2239, simple_loss=0.2976, pruned_loss=0.07509, over 4816.00 frames. ], tot_loss[loss=0.1956, simple_loss=0.2623, pruned_loss=0.0645, over 954664.29 frames. 
], batch size: 40, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 14:54:17,481 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3893, 1.3180, 1.5203, 0.9037, 1.4130, 1.4998, 1.5036, 1.2499], device='cuda:2'), covar=tensor([0.0456, 0.0591, 0.0497, 0.0766, 0.1035, 0.0451, 0.0443, 0.0946], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0133, 0.0141, 0.0124, 0.0122, 0.0141, 0.0141, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:54:29,986 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.152e+02 1.725e+02 1.998e+02 2.409e+02 3.172e+02, threshold=3.997e+02, percent-clipped=0.0 2023-03-26 14:54:34,087 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=67144.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:54:44,209 INFO [finetune.py:976] (2/7) Epoch 12, batch 4150, loss[loss=0.2183, simple_loss=0.2852, pruned_loss=0.07572, over 4907.00 frames. ], tot_loss[loss=0.197, simple_loss=0.2636, pruned_loss=0.06523, over 951135.00 frames. ], batch size: 37, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 14:54:48,609 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2189, 1.2167, 1.1229, 1.1886, 1.4933, 1.3722, 1.2698, 1.1615], device='cuda:2'), covar=tensor([0.0368, 0.0273, 0.0594, 0.0281, 0.0220, 0.0408, 0.0350, 0.0389], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0109, 0.0139, 0.0113, 0.0102, 0.0104, 0.0094, 0.0109], device='cuda:2'), out_proj_covar=tensor([7.2156e-05, 8.4520e-05, 1.1046e-04, 8.8186e-05, 7.9655e-05, 7.7236e-05, 7.1229e-05, 8.3393e-05], device='cuda:2') 2023-03-26 14:54:59,052 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=67177.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 14:55:02,680 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9775, 1.8459, 1.5532, 1.7708, 1.7078, 1.6945, 1.7716, 2.4172], device='cuda:2'), covar=tensor([0.4155, 0.4462, 0.3519, 0.4024, 0.4133, 0.2637, 0.4006, 0.1803], device='cuda:2'), in_proj_covar=tensor([0.0285, 0.0260, 0.0224, 0.0277, 0.0244, 0.0211, 0.0247, 0.0219], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:55:17,545 INFO [finetune.py:976] (2/7) Epoch 12, batch 4200, loss[loss=0.1978, simple_loss=0.2721, pruned_loss=0.06175, over 4918.00 frames. ], tot_loss[loss=0.1968, simple_loss=0.2639, pruned_loss=0.06486, over 950977.51 frames. 
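[annotation] The attn_weights_entropy tensors dumped at zipformer.py:2441 carry eight values per row, matching the eight attention heads, so they read as a per-head entropy diagnostic: low values mean a head focuses on few positions, high values mean diffuse attention. A minimal sketch under that reading; the tensor layout is an assumption:

    import torch

    def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
        """Shannon entropy of each head's attention distribution,
        averaged over batch and query positions.
        attn: (num_heads, batch, tgt_len, src_len), rows summing to 1."""
        eps = 1.0e-20
        h = -(attn * (attn + eps).log()).sum(dim=-1)  # entropy per query
        return h.mean(dim=(1, 2))                     # one value per head

    w = torch.softmax(torch.randn(8, 2, 10, 10), dim=-1)
    print(attn_weights_entropy(w))  # 8 entropies, like the logged rows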
], batch size: 42, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 14:55:30,583 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=67225.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:55:39,456 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.066e+02 1.549e+02 1.852e+02 2.427e+02 4.145e+02, threshold=3.704e+02, percent-clipped=1.0 2023-03-26 14:55:40,670 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6948, 2.0829, 1.6009, 1.6635, 2.2466, 2.1584, 1.9280, 1.9055], device='cuda:2'), covar=tensor([0.0428, 0.0321, 0.0545, 0.0339, 0.0275, 0.0554, 0.0342, 0.0363], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0109, 0.0139, 0.0113, 0.0102, 0.0104, 0.0095, 0.0109], device='cuda:2'), out_proj_covar=tensor([7.2388e-05, 8.4727e-05, 1.1069e-04, 8.8460e-05, 7.9847e-05, 7.7403e-05, 7.1499e-05, 8.3761e-05], device='cuda:2') 2023-03-26 14:55:50,530 INFO [finetune.py:976] (2/7) Epoch 12, batch 4250, loss[loss=0.2298, simple_loss=0.2821, pruned_loss=0.08874, over 4794.00 frames. ], tot_loss[loss=0.1946, simple_loss=0.2609, pruned_loss=0.06411, over 952981.29 frames. ], batch size: 51, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 14:55:51,893 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5697, 2.4104, 1.6820, 0.8466, 1.9713, 2.1241, 1.8443, 2.0156], device='cuda:2'), covar=tensor([0.0824, 0.0722, 0.1752, 0.2124, 0.1342, 0.1994, 0.2193, 0.0991], device='cuda:2'), in_proj_covar=tensor([0.0165, 0.0197, 0.0198, 0.0184, 0.0212, 0.0206, 0.0222, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:56:00,995 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.68 vs. limit=5.0 2023-03-26 14:56:01,490 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.0023, 4.3698, 4.5453, 4.8086, 4.7428, 4.4902, 5.1307, 1.6483], device='cuda:2'), covar=tensor([0.0743, 0.0859, 0.0709, 0.0873, 0.1206, 0.1401, 0.0526, 0.5386], device='cuda:2'), in_proj_covar=tensor([0.0350, 0.0242, 0.0275, 0.0290, 0.0328, 0.0281, 0.0301, 0.0294], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 14:56:02,849 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-26 14:56:03,428 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.10 vs. limit=5.0 2023-03-26 14:56:32,184 INFO [finetune.py:976] (2/7) Epoch 12, batch 4300, loss[loss=0.1822, simple_loss=0.238, pruned_loss=0.06319, over 4700.00 frames. ], tot_loss[loss=0.1923, simple_loss=0.2583, pruned_loss=0.06314, over 954949.46 frames. 
], batch size: 23, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 14:56:37,181 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=67313.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 14:56:40,887 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7513, 1.6644, 1.4933, 1.4511, 1.9934, 1.8539, 1.6486, 1.4605], device='cuda:2'), covar=tensor([0.0275, 0.0272, 0.0510, 0.0302, 0.0189, 0.0443, 0.0301, 0.0368], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0109, 0.0140, 0.0114, 0.0103, 0.0105, 0.0095, 0.0109], device='cuda:2'), out_proj_covar=tensor([7.2481e-05, 8.4948e-05, 1.1111e-04, 8.8787e-05, 8.0184e-05, 7.7645e-05, 7.1571e-05, 8.4026e-05], device='cuda:2') 2023-03-26 14:56:54,399 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.954e+01 1.550e+02 1.913e+02 2.348e+02 5.397e+02, threshold=3.825e+02, percent-clipped=3.0 2023-03-26 14:57:05,080 INFO [finetune.py:976] (2/7) Epoch 12, batch 4350, loss[loss=0.1418, simple_loss=0.2156, pruned_loss=0.03398, over 4798.00 frames. ], tot_loss[loss=0.1892, simple_loss=0.2549, pruned_loss=0.06168, over 956037.51 frames. ], batch size: 29, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 14:57:28,519 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=67387.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 14:57:39,911 INFO [finetune.py:976] (2/7) Epoch 12, batch 4400, loss[loss=0.2299, simple_loss=0.2801, pruned_loss=0.08988, over 4739.00 frames. ], tot_loss[loss=0.1885, simple_loss=0.2544, pruned_loss=0.0613, over 955091.97 frames. ], batch size: 27, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 14:57:59,631 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-26 14:58:14,139 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=67435.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 14:58:16,972 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.120e+02 1.656e+02 1.972e+02 2.339e+02 4.406e+02, threshold=3.944e+02, percent-clipped=2.0 2023-03-26 14:58:17,064 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=67439.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:58:30,809 INFO [finetune.py:976] (2/7) Epoch 12, batch 4450, loss[loss=0.1941, simple_loss=0.2611, pruned_loss=0.06355, over 4783.00 frames. ], tot_loss[loss=0.192, simple_loss=0.2583, pruned_loss=0.06282, over 954522.96 frames. ], batch size: 26, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 14:58:48,778 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6871, 1.2173, 0.8118, 1.5556, 2.0643, 1.1006, 1.4509, 1.5247], device='cuda:2'), covar=tensor([0.1535, 0.2211, 0.2008, 0.1254, 0.2010, 0.1900, 0.1555, 0.2030], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0096, 0.0113, 0.0092, 0.0121, 0.0094, 0.0100, 0.0091], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 14:59:03,971 INFO [finetune.py:976] (2/7) Epoch 12, batch 4500, loss[loss=0.2126, simple_loss=0.2741, pruned_loss=0.07556, over 4795.00 frames. ], tot_loss[loss=0.1934, simple_loss=0.26, pruned_loss=0.06336, over 954962.33 frames. 
], batch size: 25, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 14:59:09,849 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=67513.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:59:26,033 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.163e+02 1.686e+02 1.980e+02 2.352e+02 4.001e+02, threshold=3.961e+02, percent-clipped=1.0 2023-03-26 14:59:37,240 INFO [finetune.py:976] (2/7) Epoch 12, batch 4550, loss[loss=0.1737, simple_loss=0.2471, pruned_loss=0.05014, over 4801.00 frames. ], tot_loss[loss=0.1934, simple_loss=0.261, pruned_loss=0.06286, over 955760.55 frames. ], batch size: 40, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 14:59:46,601 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0 2023-03-26 14:59:56,259 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=67574.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 15:00:19,986 INFO [finetune.py:976] (2/7) Epoch 12, batch 4600, loss[loss=0.196, simple_loss=0.2699, pruned_loss=0.06102, over 4807.00 frames. ], tot_loss[loss=0.1939, simple_loss=0.2614, pruned_loss=0.06321, over 956301.70 frames. ], batch size: 39, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:00:21,963 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.5136, 1.6254, 1.7877, 0.9214, 1.7290, 1.9566, 1.9605, 1.5093], device='cuda:2'), covar=tensor([0.0854, 0.0586, 0.0460, 0.0541, 0.0426, 0.0494, 0.0275, 0.0622], device='cuda:2'), in_proj_covar=tensor([0.0127, 0.0155, 0.0124, 0.0133, 0.0133, 0.0128, 0.0146, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.4050e-05, 1.1349e-04, 8.9090e-05, 9.6169e-05, 9.4190e-05, 9.2791e-05, 1.0613e-04, 1.0732e-04], device='cuda:2') 2023-03-26 15:00:24,953 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=67613.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:00:42,115 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.172e+02 1.477e+02 1.878e+02 2.272e+02 4.960e+02, threshold=3.756e+02, percent-clipped=1.0 2023-03-26 15:00:42,844 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9199, 1.8325, 2.0311, 1.4078, 1.9858, 2.1110, 2.0623, 1.6014], device='cuda:2'), covar=tensor([0.0554, 0.0583, 0.0609, 0.0858, 0.0672, 0.0541, 0.0500, 0.0992], device='cuda:2'), in_proj_covar=tensor([0.0135, 0.0134, 0.0142, 0.0125, 0.0123, 0.0141, 0.0142, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:00:53,235 INFO [finetune.py:976] (2/7) Epoch 12, batch 4650, loss[loss=0.1593, simple_loss=0.2211, pruned_loss=0.04878, over 4788.00 frames. ], tot_loss[loss=0.1924, simple_loss=0.2593, pruned_loss=0.06274, over 957540.52 frames. 
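[annotation] The grad_scale field behaves like dynamic fp16 loss scaling: in this section it grows 32 -> 64 after a stretch of clean steps, then backs off 64 -> 32 -> 16. The standard PyTorch GradScaler knobs below would produce exactly this grow/halve pattern; whether finetune.py uses GradScaler with these settings is an assumption:

    import torch

    scaler = torch.cuda.amp.GradScaler(
        init_scale=32.0,       # matches the grad_scale early in this section
        growth_factor=2.0,     # 32 -> 64 after sustained finite gradients
        backoff_factor=0.5,    # 64 -> 32 -> 16 when overflows are hit
        growth_interval=2000,  # assumed number of clean steps before growing
    )
    # Typical step (model, optimizer, loss omitted):
    #   scaler.scale(loss).backward()
    #   scaler.step(optimizer)   # skipped if gradients were inf/nan
    #   scaler.update()          # where grad_scale changes take effect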
], batch size: 29, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:00:56,981 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=67661.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:01:06,754 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.5719, 4.0417, 4.2134, 4.4076, 4.3433, 4.1276, 4.6698, 1.5993], device='cuda:2'), covar=tensor([0.0676, 0.0721, 0.0737, 0.0801, 0.1127, 0.1381, 0.0689, 0.5135], device='cuda:2'), in_proj_covar=tensor([0.0350, 0.0242, 0.0275, 0.0289, 0.0328, 0.0280, 0.0300, 0.0293], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:01:10,459 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=67680.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:01:15,910 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8041, 1.7018, 1.5291, 1.8798, 2.1690, 1.9100, 1.4172, 1.5189], device='cuda:2'), covar=tensor([0.2527, 0.2191, 0.2185, 0.1793, 0.1782, 0.1314, 0.2701, 0.2104], device='cuda:2'), in_proj_covar=tensor([0.0239, 0.0206, 0.0211, 0.0190, 0.0240, 0.0182, 0.0213, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:01:17,719 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=67692.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:01:31,298 INFO [finetune.py:976] (2/7) Epoch 12, batch 4700, loss[loss=0.1685, simple_loss=0.2205, pruned_loss=0.05831, over 4735.00 frames. ], tot_loss[loss=0.1892, simple_loss=0.2557, pruned_loss=0.06134, over 957939.03 frames. ], batch size: 23, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:01:31,419 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8830, 1.8340, 1.7599, 1.8843, 1.3654, 3.3093, 1.4919, 2.0143], device='cuda:2'), covar=tensor([0.2799, 0.2069, 0.1800, 0.2018, 0.1550, 0.0248, 0.2290, 0.1035], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0116, 0.0120, 0.0123, 0.0115, 0.0098, 0.0097, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 15:01:32,005 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.7553, 4.1363, 4.3453, 4.4887, 4.4771, 4.2192, 4.7927, 2.0019], device='cuda:2'), covar=tensor([0.0653, 0.0865, 0.0798, 0.0935, 0.1194, 0.1434, 0.0690, 0.5010], device='cuda:2'), in_proj_covar=tensor([0.0351, 0.0243, 0.0276, 0.0290, 0.0329, 0.0280, 0.0301, 0.0294], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:01:56,970 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.008e+02 1.553e+02 1.823e+02 2.116e+02 3.808e+02, threshold=3.646e+02, percent-clipped=1.0 2023-03-26 15:01:57,092 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=67739.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:01:58,303 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=67741.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:02:05,943 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=67753.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 15:02:07,552 INFO [finetune.py:976] (2/7) Epoch 12, batch 4750, loss[loss=0.1971, simple_loss=0.2631, pruned_loss=0.06557, over 4864.00 frames. 
], tot_loss[loss=0.1881, simple_loss=0.2539, pruned_loss=0.06111, over 956961.30 frames. ], batch size: 44, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:02:28,911 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=67787.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:02:40,329 INFO [finetune.py:976] (2/7) Epoch 12, batch 4800, loss[loss=0.2574, simple_loss=0.3112, pruned_loss=0.1018, over 4241.00 frames. ], tot_loss[loss=0.1915, simple_loss=0.2573, pruned_loss=0.06288, over 956267.37 frames. ], batch size: 65, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:02:53,509 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9671, 1.8710, 1.5655, 1.8011, 1.9484, 1.5864, 2.2007, 1.8600], device='cuda:2'), covar=tensor([0.1403, 0.2110, 0.3194, 0.2571, 0.2671, 0.1827, 0.3314, 0.1989], device='cuda:2'), in_proj_covar=tensor([0.0179, 0.0188, 0.0234, 0.0257, 0.0243, 0.0200, 0.0214, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:03:07,511 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.086e+02 1.756e+02 1.975e+02 2.556e+02 4.813e+02, threshold=3.950e+02, percent-clipped=3.0 2023-03-26 15:03:10,219 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-26 15:03:16,793 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0578, 2.0758, 2.0873, 1.5410, 2.1176, 2.3396, 2.2500, 1.8280], device='cuda:2'), covar=tensor([0.0640, 0.0580, 0.0766, 0.0913, 0.0617, 0.0589, 0.0640, 0.1034], device='cuda:2'), in_proj_covar=tensor([0.0136, 0.0134, 0.0143, 0.0125, 0.0123, 0.0142, 0.0143, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:03:25,930 INFO [finetune.py:976] (2/7) Epoch 12, batch 4850, loss[loss=0.1956, simple_loss=0.2527, pruned_loss=0.06927, over 4885.00 frames. ], tot_loss[loss=0.1944, simple_loss=0.2613, pruned_loss=0.06378, over 957431.36 frames. ], batch size: 32, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:03:39,910 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=67869.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 15:04:03,161 INFO [finetune.py:976] (2/7) Epoch 12, batch 4900, loss[loss=0.2309, simple_loss=0.2832, pruned_loss=0.08927, over 4777.00 frames. ], tot_loss[loss=0.196, simple_loss=0.2627, pruned_loss=0.06458, over 958142.13 frames. ], batch size: 26, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:04:26,939 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.121e+02 1.717e+02 1.971e+02 2.418e+02 4.222e+02, threshold=3.942e+02, percent-clipped=1.0 2023-03-26 15:04:36,657 INFO [finetune.py:976] (2/7) Epoch 12, batch 4950, loss[loss=0.2003, simple_loss=0.2694, pruned_loss=0.06557, over 4821.00 frames. ], tot_loss[loss=0.1965, simple_loss=0.264, pruned_loss=0.06451, over 958627.72 frames. ], batch size: 39, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:04:53,986 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=67981.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:05:12,320 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.53 vs. 
limit=5.0 2023-03-26 15:05:17,967 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9959, 2.9002, 2.2331, 1.5206, 2.5805, 2.3922, 2.1240, 2.5080], device='cuda:2'), covar=tensor([0.0616, 0.0635, 0.1306, 0.1640, 0.1078, 0.1623, 0.1634, 0.0781], device='cuda:2'), in_proj_covar=tensor([0.0166, 0.0198, 0.0200, 0.0186, 0.0214, 0.0207, 0.0222, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:05:20,898 INFO [finetune.py:976] (2/7) Epoch 12, batch 5000, loss[loss=0.1692, simple_loss=0.2365, pruned_loss=0.05094, over 4918.00 frames. ], tot_loss[loss=0.1941, simple_loss=0.2615, pruned_loss=0.06336, over 958168.19 frames. ], batch size: 36, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:05:21,632 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9795, 2.3069, 1.8472, 1.7542, 2.4280, 2.5497, 2.0621, 2.0267], device='cuda:2'), covar=tensor([0.0334, 0.0275, 0.0564, 0.0366, 0.0303, 0.0492, 0.0412, 0.0372], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0109, 0.0141, 0.0114, 0.0103, 0.0105, 0.0095, 0.0110], device='cuda:2'), out_proj_covar=tensor([7.2583e-05, 8.4741e-05, 1.1154e-04, 8.8723e-05, 8.0193e-05, 7.7910e-05, 7.1981e-05, 8.4292e-05], device='cuda:2') 2023-03-26 15:05:41,205 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=68036.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:05:43,410 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.964e+01 1.543e+02 1.867e+02 2.301e+02 3.447e+02, threshold=3.734e+02, percent-clipped=0.0 2023-03-26 15:05:46,817 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=68042.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:05:50,385 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=68048.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 15:05:54,511 INFO [finetune.py:976] (2/7) Epoch 12, batch 5050, loss[loss=0.1588, simple_loss=0.2267, pruned_loss=0.04543, over 4761.00 frames. ], tot_loss[loss=0.1909, simple_loss=0.258, pruned_loss=0.06191, over 956349.46 frames. ], batch size: 27, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:06:15,504 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=68087.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 15:06:27,689 INFO [finetune.py:976] (2/7) Epoch 12, batch 5100, loss[loss=0.1756, simple_loss=0.2498, pruned_loss=0.05067, over 4834.00 frames. ], tot_loss[loss=0.1885, simple_loss=0.2548, pruned_loss=0.06107, over 957861.07 frames. ], batch size: 30, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:06:59,402 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.056e+02 1.565e+02 1.837e+02 2.198e+02 4.078e+02, threshold=3.675e+02, percent-clipped=2.0 2023-03-26 15:07:05,468 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=68148.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 15:07:10,948 INFO [finetune.py:976] (2/7) Epoch 12, batch 5150, loss[loss=0.2037, simple_loss=0.2806, pruned_loss=0.06344, over 4912.00 frames. ], tot_loss[loss=0.1885, simple_loss=0.2547, pruned_loss=0.06115, over 955944.26 frames. 
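[annotation] The scaling.py:679 records compare a per-module whitening metric against a fixed limit (2.0 for the grouped 96-channel cases, 5.0 for the single-group 384-channel cases above). One standard whiteness measure with that behaviour is E[lambda^2] / (E[lambda])^2 over the eigenvalues of the feature covariance, computable from traces alone: it equals 1.0 for perfectly white features and grows with anisotropy. Treating this as the exact metric in scaling.py is an assumption; a sketch:

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
        """x: (num_frames, num_channels). Split channels into groups and,
        for each group covariance C, measure trace(C @ C) * d / trace(C)^2,
        i.e. E[eig^2] / E[eig]^2, then average over groups."""
        n, c = x.shape
        d = c // num_groups
        xg = x.reshape(n, num_groups, d).permute(1, 0, 2)  # (groups, n, d)
        cov = xg.transpose(1, 2) @ xg / n                  # (groups, d, d)
        num = (cov * cov).sum(dim=(1, 2)) / d              # trace(C @ C) / d
        den = (torch.diagonal(cov, dim1=1, dim2=2).sum(dim=1) / d) ** 2
        return (num / den).mean()

    print(whitening_metric(torch.randn(1000, 96), num_groups=8))  # near 1.0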
], batch size: 37, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:07:19,524 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=68169.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:07:20,145 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1954, 2.1858, 1.8903, 1.1540, 2.0430, 1.8336, 1.6693, 1.9412], device='cuda:2'), covar=tensor([0.1072, 0.0759, 0.1460, 0.1930, 0.1377, 0.1999, 0.2006, 0.1073], device='cuda:2'), in_proj_covar=tensor([0.0165, 0.0198, 0.0199, 0.0185, 0.0213, 0.0207, 0.0222, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:07:23,357 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0 2023-03-26 15:07:38,473 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0 2023-03-26 15:07:43,719 INFO [finetune.py:976] (2/7) Epoch 12, batch 5200, loss[loss=0.1787, simple_loss=0.2541, pruned_loss=0.0516, over 4781.00 frames. ], tot_loss[loss=0.1915, simple_loss=0.2582, pruned_loss=0.06243, over 953918.34 frames. ], batch size: 28, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:07:51,087 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=68217.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:08:05,785 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.176e+02 1.664e+02 1.889e+02 2.252e+02 3.665e+02, threshold=3.778e+02, percent-clipped=0.0 2023-03-26 15:08:12,439 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.3116, 3.7153, 3.9224, 4.1100, 4.0480, 3.7586, 4.4123, 1.3850], device='cuda:2'), covar=tensor([0.0815, 0.0878, 0.0830, 0.1013, 0.1248, 0.1558, 0.0686, 0.5646], device='cuda:2'), in_proj_covar=tensor([0.0351, 0.0242, 0.0276, 0.0291, 0.0329, 0.0282, 0.0300, 0.0294], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:08:16,523 INFO [finetune.py:976] (2/7) Epoch 12, batch 5250, loss[loss=0.2082, simple_loss=0.2802, pruned_loss=0.06811, over 4920.00 frames. ], tot_loss[loss=0.1926, simple_loss=0.2601, pruned_loss=0.0626, over 953376.57 frames. ], batch size: 33, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:08:24,819 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4865, 2.1269, 2.8865, 1.8074, 2.5144, 2.7470, 2.0328, 2.8622], device='cuda:2'), covar=tensor([0.1382, 0.2053, 0.1393, 0.2458, 0.0886, 0.1527, 0.2604, 0.0853], device='cuda:2'), in_proj_covar=tensor([0.0197, 0.0207, 0.0197, 0.0194, 0.0180, 0.0216, 0.0218, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:09:03,119 INFO [finetune.py:976] (2/7) Epoch 12, batch 5300, loss[loss=0.1915, simple_loss=0.2567, pruned_loss=0.06312, over 4913.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.2619, pruned_loss=0.06356, over 952398.31 frames. 
], batch size: 42, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:09:24,991 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=68336.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:09:25,582 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=68337.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:09:26,706 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.294e+02 1.838e+02 2.123e+02 2.651e+02 4.524e+02, threshold=4.245e+02, percent-clipped=5.0 2023-03-26 15:09:32,247 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=68348.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:09:36,484 INFO [finetune.py:976] (2/7) Epoch 12, batch 5350, loss[loss=0.1981, simple_loss=0.2661, pruned_loss=0.06503, over 4919.00 frames. ], tot_loss[loss=0.1929, simple_loss=0.2607, pruned_loss=0.06252, over 954384.56 frames. ], batch size: 38, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:09:50,609 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4190, 1.6488, 1.7570, 1.0327, 1.6181, 1.8838, 1.9107, 1.5271], device='cuda:2'), covar=tensor([0.1027, 0.0676, 0.0532, 0.0648, 0.0524, 0.0757, 0.0379, 0.0916], device='cuda:2'), in_proj_covar=tensor([0.0128, 0.0157, 0.0125, 0.0134, 0.0134, 0.0129, 0.0147, 0.0149], device='cuda:2'), out_proj_covar=tensor([9.4814e-05, 1.1453e-04, 9.0404e-05, 9.6472e-05, 9.4720e-05, 9.3655e-05, 1.0719e-04, 1.0831e-04], device='cuda:2') 2023-03-26 15:09:55,984 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=68384.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:09:56,636 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6729, 2.4396, 2.0172, 1.0333, 2.2591, 2.0327, 1.8369, 2.2358], device='cuda:2'), covar=tensor([0.0687, 0.0776, 0.1371, 0.2064, 0.1268, 0.2028, 0.1928, 0.0844], device='cuda:2'), in_proj_covar=tensor([0.0165, 0.0197, 0.0198, 0.0184, 0.0212, 0.0206, 0.0221, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:10:04,679 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=68396.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:10:06,564 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=68399.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:10:10,285 INFO [finetune.py:976] (2/7) Epoch 12, batch 5400, loss[loss=0.2106, simple_loss=0.2712, pruned_loss=0.07505, over 4805.00 frames. ], tot_loss[loss=0.1906, simple_loss=0.2578, pruned_loss=0.06166, over 955418.87 frames. ], batch size: 51, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:10:40,850 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.097e+02 1.541e+02 1.801e+02 2.082e+02 4.267e+02, threshold=3.602e+02, percent-clipped=1.0 2023-03-26 15:10:44,357 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=68443.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 15:10:51,598 INFO [finetune.py:976] (2/7) Epoch 12, batch 5450, loss[loss=0.1591, simple_loss=0.2315, pruned_loss=0.04338, over 4790.00 frames. ], tot_loss[loss=0.1884, simple_loss=0.255, pruned_loss=0.06088, over 955445.67 frames. 
], batch size: 29, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:10:54,766 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=68460.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:11:24,505 INFO [finetune.py:976] (2/7) Epoch 12, batch 5500, loss[loss=0.1665, simple_loss=0.2327, pruned_loss=0.05009, over 4927.00 frames. ], tot_loss[loss=0.1856, simple_loss=0.2519, pruned_loss=0.05961, over 956046.13 frames. ], batch size: 38, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:11:29,594 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.63 vs. limit=2.0 2023-03-26 15:11:47,045 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.070e+02 1.509e+02 1.942e+02 2.407e+02 6.603e+02, threshold=3.884e+02, percent-clipped=3.0 2023-03-26 15:11:59,912 INFO [finetune.py:976] (2/7) Epoch 12, batch 5550, loss[loss=0.1639, simple_loss=0.2369, pruned_loss=0.04545, over 4895.00 frames. ], tot_loss[loss=0.1866, simple_loss=0.2525, pruned_loss=0.06034, over 954571.02 frames. ], batch size: 32, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:12:23,726 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=68578.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:12:39,652 INFO [finetune.py:976] (2/7) Epoch 12, batch 5600, loss[loss=0.1721, simple_loss=0.2399, pruned_loss=0.05219, over 4771.00 frames. ], tot_loss[loss=0.1907, simple_loss=0.2573, pruned_loss=0.06206, over 955682.08 frames. ], batch size: 26, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:12:58,318 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=68637.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:12:59,421 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.238e+02 1.664e+02 1.965e+02 2.319e+02 3.885e+02, threshold=3.931e+02, percent-clipped=1.0 2023-03-26 15:12:59,528 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=68639.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:13:09,174 INFO [finetune.py:976] (2/7) Epoch 12, batch 5650, loss[loss=0.2029, simple_loss=0.2876, pruned_loss=0.05915, over 4805.00 frames. ], tot_loss[loss=0.1944, simple_loss=0.2616, pruned_loss=0.06356, over 955620.77 frames. 
], batch size: 45, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:13:21,413 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8546, 1.8839, 1.6389, 2.0487, 2.5362, 2.0154, 1.7292, 1.5479], device='cuda:2'), covar=tensor([0.2356, 0.2082, 0.1918, 0.1698, 0.1668, 0.1196, 0.2432, 0.2053], device='cuda:2'), in_proj_covar=tensor([0.0239, 0.0207, 0.0210, 0.0190, 0.0240, 0.0182, 0.0214, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:13:22,608 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.7534, 1.9441, 1.9333, 1.1151, 2.0074, 2.2288, 2.2218, 1.6791], device='cuda:2'), covar=tensor([0.0948, 0.0579, 0.0470, 0.0598, 0.0388, 0.0649, 0.0291, 0.0677], device='cuda:2'), in_proj_covar=tensor([0.0127, 0.0156, 0.0125, 0.0133, 0.0133, 0.0129, 0.0147, 0.0148], device='cuda:2'), out_proj_covar=tensor([9.4255e-05, 1.1419e-04, 9.0117e-05, 9.6202e-05, 9.4549e-05, 9.3702e-05, 1.0676e-04, 1.0790e-04], device='cuda:2') 2023-03-26 15:13:27,856 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=68685.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:13:27,925 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=68685.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:13:41,827 INFO [finetune.py:976] (2/7) Epoch 12, batch 5700, loss[loss=0.168, simple_loss=0.2198, pruned_loss=0.05809, over 4414.00 frames. ], tot_loss[loss=0.1914, simple_loss=0.2575, pruned_loss=0.06267, over 942613.25 frames. ], batch size: 19, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:13:44,352 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6274, 2.2264, 2.6107, 2.4794, 2.2083, 2.2394, 2.3736, 2.3995], device='cuda:2'), covar=tensor([0.3230, 0.4106, 0.3354, 0.3584, 0.4958, 0.3434, 0.5079, 0.3237], device='cuda:2'), in_proj_covar=tensor([0.0239, 0.0238, 0.0255, 0.0261, 0.0257, 0.0233, 0.0275, 0.0233], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:14:27,874 INFO [finetune.py:976] (2/7) Epoch 13, batch 0, loss[loss=0.2248, simple_loss=0.2772, pruned_loss=0.08617, over 4850.00 frames. ], tot_loss[loss=0.2248, simple_loss=0.2772, pruned_loss=0.08617, over 4850.00 frames. ], batch size: 31, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:14:27,874 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 15:14:42,136 INFO [finetune.py:1010] (2/7) Epoch 13, validation: loss=0.1598, simple_loss=0.23, pruned_loss=0.04482, over 2265189.00 frames. 
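[annotation] At the epoch boundary above (end of epoch 12, batch 0 of epoch 13) the trainer pauses to compute a validation loss over the full dev set, reported as a frame-weighted average, then logs peak GPU memory. A hedged sketch of that step; the model(batch) interface is an illustrative assumption, not the finetune.py signature:

    import torch

    def compute_validation_loss(model, dev_loader):
        """No-grad pass over the dev loader accumulating frame-weighted
        loss -- the 'validation: loss=..., over N frames' records. Sketch."""
        model.eval()
        tot, frames = 0.0, 0.0
        with torch.no_grad():
            for batch in dev_loader:
                loss, num_frames = model(batch)   # assumed interface
                tot += loss.item() * num_frames
                frames += num_frames
        model.train()
        return tot / frames                       # e.g. 0.1598 over 2265189 frames

    if torch.cuda.is_available():                 # cf. the 'Maximum memory' records
        print(f"max mem: {torch.cuda.max_memory_allocated() // 2**20}MB")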
2023-03-26 15:14:42,136 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB 2023-03-26 15:14:47,269 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.040e+02 1.546e+02 1.915e+02 2.253e+02 4.332e+02, threshold=3.830e+02, percent-clipped=1.0 2023-03-26 15:14:49,807 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=68743.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 15:14:52,137 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=68746.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:14:56,352 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2858, 2.0362, 2.1528, 0.9333, 2.4670, 2.5219, 2.2132, 1.9730], device='cuda:2'), covar=tensor([0.0827, 0.0743, 0.0517, 0.0716, 0.0419, 0.0546, 0.0454, 0.0699], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0155, 0.0123, 0.0132, 0.0132, 0.0127, 0.0145, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.3373e-05, 1.1311e-04, 8.8848e-05, 9.5292e-05, 9.3380e-05, 9.2446e-05, 1.0547e-04, 1.0665e-04], device='cuda:2') 2023-03-26 15:14:58,510 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=68755.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:15:13,119 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0238, 1.8412, 2.1238, 1.5855, 2.0979, 2.3926, 2.0595, 1.4507], device='cuda:2'), covar=tensor([0.0765, 0.0906, 0.0821, 0.1055, 0.0760, 0.0644, 0.0824, 0.1736], device='cuda:2'), in_proj_covar=tensor([0.0135, 0.0133, 0.0142, 0.0125, 0.0122, 0.0141, 0.0142, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:15:15,977 INFO [finetune.py:976] (2/7) Epoch 13, batch 50, loss[loss=0.1988, simple_loss=0.2692, pruned_loss=0.06422, over 4813.00 frames. ], tot_loss[loss=0.205, simple_loss=0.2695, pruned_loss=0.07023, over 217386.09 frames. ], batch size: 39, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:15:21,852 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=68791.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 15:15:54,085 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1153, 2.0135, 2.1622, 1.4683, 2.1593, 2.2844, 2.1054, 1.8221], device='cuda:2'), covar=tensor([0.0533, 0.0627, 0.0652, 0.0927, 0.0612, 0.0587, 0.0623, 0.1025], device='cuda:2'), in_proj_covar=tensor([0.0135, 0.0133, 0.0142, 0.0125, 0.0122, 0.0141, 0.0142, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:15:57,118 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8973, 4.8241, 4.5727, 2.5950, 4.9296, 3.7367, 0.9400, 3.3197], device='cuda:2'), covar=tensor([0.2479, 0.1537, 0.1322, 0.2958, 0.0579, 0.0823, 0.4495, 0.1368], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0175, 0.0161, 0.0129, 0.0157, 0.0122, 0.0146, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 15:15:57,664 INFO [finetune.py:976] (2/7) Epoch 13, batch 100, loss[loss=0.143, simple_loss=0.2174, pruned_loss=0.03432, over 4786.00 frames. ], tot_loss[loss=0.1936, simple_loss=0.2586, pruned_loss=0.06426, over 380869.35 frames. 
], batch size: 29, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:16:02,754 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.134e+02 1.681e+02 1.901e+02 2.429e+02 4.753e+02, threshold=3.802e+02, percent-clipped=2.0 2023-03-26 15:16:31,426 INFO [finetune.py:976] (2/7) Epoch 13, batch 150, loss[loss=0.1748, simple_loss=0.248, pruned_loss=0.05081, over 4934.00 frames. ], tot_loss[loss=0.1886, simple_loss=0.253, pruned_loss=0.06211, over 506467.68 frames. ], batch size: 33, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:17:03,380 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4324, 1.3442, 1.5773, 2.4641, 1.7291, 2.1884, 0.9134, 2.0700], device='cuda:2'), covar=tensor([0.1745, 0.1357, 0.1101, 0.0723, 0.0884, 0.1060, 0.1450, 0.0698], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0115, 0.0134, 0.0164, 0.0101, 0.0137, 0.0126, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 15:17:05,107 INFO [finetune.py:976] (2/7) Epoch 13, batch 200, loss[loss=0.1577, simple_loss=0.2199, pruned_loss=0.04777, over 4908.00 frames. ], tot_loss[loss=0.1873, simple_loss=0.2507, pruned_loss=0.06196, over 605374.28 frames. ], batch size: 32, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:17:05,753 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=68934.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:17:09,208 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.222e+02 1.603e+02 1.930e+02 2.189e+02 8.191e+02, threshold=3.861e+02, percent-clipped=2.0 2023-03-26 15:17:46,315 INFO [finetune.py:976] (2/7) Epoch 13, batch 250, loss[loss=0.1958, simple_loss=0.2686, pruned_loss=0.06147, over 4897.00 frames. ], tot_loss[loss=0.1909, simple_loss=0.2548, pruned_loss=0.06349, over 682356.15 frames. ], batch size: 35, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:18:15,599 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3020, 2.1418, 2.2334, 1.0323, 2.6026, 2.7138, 2.2980, 2.0517], device='cuda:2'), covar=tensor([0.0885, 0.0565, 0.0450, 0.0625, 0.0347, 0.0443, 0.0374, 0.0539], device='cuda:2'), in_proj_covar=tensor([0.0125, 0.0153, 0.0122, 0.0130, 0.0130, 0.0126, 0.0144, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.2185e-05, 1.1163e-04, 8.8128e-05, 9.3833e-05, 9.2412e-05, 9.1053e-05, 1.0463e-04, 1.0520e-04], device='cuda:2') 2023-03-26 15:18:19,710 INFO [finetune.py:976] (2/7) Epoch 13, batch 300, loss[loss=0.1973, simple_loss=0.2744, pruned_loss=0.06007, over 4832.00 frames. ], tot_loss[loss=0.1949, simple_loss=0.2597, pruned_loss=0.06502, over 742010.38 frames. ], batch size: 30, lr: 3.62e-03, grad_scale: 32.0 2023-03-26 15:18:23,316 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.215e+02 1.585e+02 1.877e+02 2.328e+02 4.201e+02, threshold=3.755e+02, percent-clipped=2.0 2023-03-26 15:18:24,575 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69041.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:18:25,874 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69043.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:18:34,515 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69055.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:18:55,352 INFO [finetune.py:976] (2/7) Epoch 13, batch 350, loss[loss=0.2099, simple_loss=0.2745, pruned_loss=0.07268, over 4704.00 frames. 
], tot_loss[loss=0.1972, simple_loss=0.2626, pruned_loss=0.06589, over 788612.36 frames. ], batch size: 59, lr: 3.62e-03, grad_scale: 32.0 2023-03-26 15:19:18,510 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=69103.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:19:19,630 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69104.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:19:41,449 INFO [finetune.py:976] (2/7) Epoch 13, batch 400, loss[loss=0.2557, simple_loss=0.2929, pruned_loss=0.1093, over 4068.00 frames. ], tot_loss[loss=0.1956, simple_loss=0.2623, pruned_loss=0.06448, over 825494.59 frames. ], batch size: 65, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:19:50,063 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.154e+02 1.689e+02 1.999e+02 2.345e+02 4.076e+02, threshold=3.998e+02, percent-clipped=3.0 2023-03-26 15:20:09,233 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-26 15:20:09,753 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69162.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:20:13,409 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69168.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:20:23,367 INFO [finetune.py:976] (2/7) Epoch 13, batch 450, loss[loss=0.1922, simple_loss=0.2577, pruned_loss=0.06331, over 4825.00 frames. ], tot_loss[loss=0.194, simple_loss=0.2608, pruned_loss=0.06356, over 855451.65 frames. ], batch size: 30, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:20:29,467 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8885, 1.7735, 1.5615, 1.8570, 2.2966, 1.9516, 1.6389, 1.4892], device='cuda:2'), covar=tensor([0.2005, 0.1942, 0.1838, 0.1596, 0.1690, 0.1133, 0.2338, 0.1817], device='cuda:2'), in_proj_covar=tensor([0.0238, 0.0206, 0.0209, 0.0190, 0.0239, 0.0181, 0.0212, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:20:30,157 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. limit=2.0 2023-03-26 15:21:04,190 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69223.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:21:05,419 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69225.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:21:07,854 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69229.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:21:09,751 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.77 vs. limit=2.0 2023-03-26 15:21:10,194 INFO [finetune.py:976] (2/7) Epoch 13, batch 500, loss[loss=0.143, simple_loss=0.2198, pruned_loss=0.03309, over 4810.00 frames. ], tot_loss[loss=0.1907, simple_loss=0.2577, pruned_loss=0.06185, over 878332.08 frames. ], batch size: 39, lr: 3.61e-03, grad_scale: 32.0
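The [optim.py:369] entries above report the min/25%/median/75%/max of recent gradient norms, and in every entry the threshold equals clipping_scale (2.0) times the logged median (e.g. 2.0 * 1.999e+02 = 3.998e+02 at 15:19:50), with percent-clipped the share of recent batches whose norm exceeded it. A hedged sketch of that bookkeeping; the rolling-window size and the exact clipping rule are assumptions:

import torch

def clip_with_quartile_stats(params, norm_history, clipping_scale=2.0, window=400):
    grads = [p.grad for p in params if p.grad is not None]
    total_norm = torch.norm(torch.stack([g.detach().norm(2) for g in grads]))
    norm_history.append(total_norm.item())
    del norm_history[:-window]                      # keep a rolling window
    q = torch.quantile(torch.tensor(norm_history),
                       torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
    threshold = clipping_scale * q[2].item()        # 2.0 * median, as in the log
    if total_norm.item() > threshold:               # counted in percent-clipped
        for g in grads:
            g.mul_(threshold / total_norm.item())
    return q, threshold

p = torch.nn.Parameter(torch.randn(10))
p.grad = torch.randn(10)
q, thr = clip_with_quartile_stats([p], norm_history=[])
print(f"grad-norm quartiles {q.tolist()}, threshold={thr:.3e}")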
2023-03-26 15:21:10,908 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69234.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:21:14,298 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.151e+02 1.659e+02 1.928e+02 2.205e+02 4.798e+02, threshold=3.855e+02, percent-clipped=1.0 2023-03-26 15:21:18,084 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69245.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 15:21:34,686 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.57 vs. limit=2.0 2023-03-26 15:21:37,034 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69273.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:21:43,321 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=69282.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:21:43,877 INFO [finetune.py:976] (2/7) Epoch 13, batch 550, loss[loss=0.1701, simple_loss=0.2394, pruned_loss=0.05042, over 4764.00 frames. ], tot_loss[loss=0.1877, simple_loss=0.2541, pruned_loss=0.06067, over 896062.93 frames. ], batch size: 54, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:21:45,840 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69286.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:21:59,419 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69306.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 15:22:08,855 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69320.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:22:17,555 INFO [finetune.py:976] (2/7) Epoch 13, batch 600, loss[loss=0.1603, simple_loss=0.2279, pruned_loss=0.04637, over 4915.00 frames. ], tot_loss[loss=0.1891, simple_loss=0.255, pruned_loss=0.06164, over 911076.52 frames. ], batch size: 36, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:22:18,297 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69334.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 15:22:21,204 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.059e+02 1.536e+02 1.861e+02 2.296e+02 3.946e+02, threshold=3.721e+02, percent-clipped=1.0 2023-03-26 15:22:22,517 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69341.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:22:50,005 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69368.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:22:58,854 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69381.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:22:59,983 INFO [finetune.py:976] (2/7) Epoch 13, batch 650, loss[loss=0.2226, simple_loss=0.298, pruned_loss=0.07358, over 4932.00 frames. ], tot_loss[loss=0.1911, simple_loss=0.2575, pruned_loss=0.06233, over 919997.88 frames.
], batch size: 42, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:23:03,692 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=69389.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:23:06,754 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69394.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:23:10,343 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69399.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:23:20,690 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8059, 1.6747, 2.2065, 1.6516, 2.0358, 2.0539, 1.6434, 2.2202], device='cuda:2'), covar=tensor([0.1183, 0.1704, 0.1206, 0.1727, 0.0661, 0.1273, 0.2314, 0.0672], device='cuda:2'), in_proj_covar=tensor([0.0198, 0.0206, 0.0196, 0.0194, 0.0180, 0.0217, 0.0218, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:23:30,667 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69429.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:23:33,419 INFO [finetune.py:976] (2/7) Epoch 13, batch 700, loss[loss=0.2423, simple_loss=0.2997, pruned_loss=0.09245, over 4197.00 frames. ], tot_loss[loss=0.1932, simple_loss=0.2599, pruned_loss=0.06329, over 925004.37 frames. ], batch size: 65, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:23:37,530 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.135e+02 1.702e+02 1.957e+02 2.425e+02 4.096e+02, threshold=3.913e+02, percent-clipped=2.0 2023-03-26 15:23:47,843 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69455.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:23:54,773 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6383, 1.5228, 1.4711, 1.5602, 1.1941, 3.2975, 1.3343, 1.7872], device='cuda:2'), covar=tensor([0.3163, 0.2291, 0.2027, 0.2217, 0.1719, 0.0202, 0.2660, 0.1253], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0116, 0.0120, 0.0123, 0.0115, 0.0098, 0.0098, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 15:23:58,424 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.5572, 4.7625, 5.1240, 5.3381, 5.2783, 4.9809, 5.6771, 2.1506], device='cuda:2'), covar=tensor([0.0695, 0.0801, 0.0645, 0.0907, 0.1232, 0.1457, 0.0484, 0.4943], device='cuda:2'), in_proj_covar=tensor([0.0352, 0.0243, 0.0277, 0.0292, 0.0329, 0.0283, 0.0302, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:24:06,514 INFO [finetune.py:976] (2/7) Epoch 13, batch 750, loss[loss=0.2101, simple_loss=0.2865, pruned_loss=0.06681, over 4861.00 frames. ], tot_loss[loss=0.1931, simple_loss=0.2598, pruned_loss=0.06314, over 930106.27 frames. 
], batch size: 34, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:24:40,542 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69518.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:24:43,531 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9923, 1.6667, 2.3862, 3.7353, 2.7188, 2.6999, 0.7527, 2.9821], device='cuda:2'), covar=tensor([0.1822, 0.1727, 0.1462, 0.0777, 0.0748, 0.1883, 0.2333, 0.0552], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0116, 0.0134, 0.0165, 0.0101, 0.0138, 0.0126, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 15:24:44,559 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69524.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:24:44,800 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.52 vs. limit=2.0 2023-03-26 15:24:50,486 INFO [finetune.py:976] (2/7) Epoch 13, batch 800, loss[loss=0.2024, simple_loss=0.2836, pruned_loss=0.06063, over 4806.00 frames. ], tot_loss[loss=0.1927, simple_loss=0.2601, pruned_loss=0.06268, over 934967.72 frames. ], batch size: 41, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:24:57,839 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.091e+02 1.694e+02 1.982e+02 2.355e+02 4.334e+02, threshold=3.964e+02, percent-clipped=1.0 2023-03-26 15:25:08,475 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8125, 1.7233, 1.5326, 1.9303, 2.1656, 1.8680, 1.5144, 1.4846], device='cuda:2'), covar=tensor([0.2074, 0.2011, 0.1826, 0.1531, 0.1675, 0.1202, 0.2478, 0.1914], device='cuda:2'), in_proj_covar=tensor([0.0238, 0.0207, 0.0209, 0.0190, 0.0240, 0.0182, 0.0212, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:25:47,578 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69581.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:25:48,750 INFO [finetune.py:976] (2/7) Epoch 13, batch 850, loss[loss=0.1687, simple_loss=0.2362, pruned_loss=0.0506, over 4824.00 frames. ], tot_loss[loss=0.1912, simple_loss=0.2579, pruned_loss=0.06225, over 939219.54 frames. 
], batch size: 30, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:25:52,434 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1390, 1.9036, 2.0288, 0.9595, 2.3285, 2.5000, 2.1259, 1.9027], device='cuda:2'), covar=tensor([0.0947, 0.0718, 0.0522, 0.0668, 0.0436, 0.0676, 0.0428, 0.0706], device='cuda:2'), in_proj_covar=tensor([0.0125, 0.0153, 0.0123, 0.0130, 0.0130, 0.0126, 0.0143, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.2788e-05, 1.1173e-04, 8.8324e-05, 9.3617e-05, 9.2401e-05, 9.1099e-05, 1.0450e-04, 1.0596e-04], device='cuda:2') 2023-03-26 15:26:03,125 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69601.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 15:26:06,192 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2311, 2.1395, 1.6733, 2.2059, 2.1418, 1.8502, 2.4883, 2.2906], device='cuda:2'), covar=tensor([0.1343, 0.2202, 0.3168, 0.2797, 0.2626, 0.1826, 0.3469, 0.1794], device='cuda:2'), in_proj_covar=tensor([0.0180, 0.0188, 0.0234, 0.0256, 0.0244, 0.0200, 0.0214, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:26:19,783 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9785, 1.4911, 2.0260, 1.9335, 1.7017, 1.6821, 1.8954, 1.7644], device='cuda:2'), covar=tensor([0.3940, 0.4462, 0.3537, 0.4140, 0.5369, 0.3988, 0.4686, 0.3527], device='cuda:2'), in_proj_covar=tensor([0.0240, 0.0238, 0.0255, 0.0261, 0.0257, 0.0233, 0.0275, 0.0233], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:26:21,352 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69629.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 15:26:24,227 INFO [finetune.py:976] (2/7) Epoch 13, batch 900, loss[loss=0.1549, simple_loss=0.223, pruned_loss=0.04345, over 4774.00 frames. ], tot_loss[loss=0.189, simple_loss=0.2549, pruned_loss=0.06158, over 944503.66 frames. ], batch size: 28, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:26:27,889 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.145e+02 1.604e+02 1.856e+02 2.224e+02 3.601e+02, threshold=3.711e+02, percent-clipped=0.0 2023-03-26 15:26:55,507 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69674.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 15:26:56,705 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69676.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:27:06,463 INFO [finetune.py:976] (2/7) Epoch 13, batch 950, loss[loss=0.2042, simple_loss=0.2615, pruned_loss=0.07348, over 4763.00 frames. ], tot_loss[loss=0.1886, simple_loss=0.254, pruned_loss=0.06162, over 949482.34 frames. ], batch size: 23, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:27:29,365 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69699.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:28:02,965 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69724.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:28:08,358 INFO [finetune.py:976] (2/7) Epoch 13, batch 1000, loss[loss=0.1593, simple_loss=0.2413, pruned_loss=0.03869, over 4763.00 frames. ], tot_loss[loss=0.1895, simple_loss=0.2558, pruned_loss=0.06162, over 950798.02 frames. ], batch size: 28, lr: 3.61e-03, grad_scale: 32.0
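The [zipformer.py:2441] dumps above are a per-head attention diagnostic: eight numbers, one entropy per attention head, with running covariance statistics of the attention projections alongside (covar, in_proj_covar, out_proj_covar; not reproduced here). A self-contained sketch of the entropy part, assuming the weights are softmax distributions over source positions:

import torch

def attn_weights_entropy(attn, eps=1.0e-20):
    # attn: (num_heads, tgt_len, src_len); each row sums to 1
    p = attn.clamp(min=eps)
    return -(p * p.log()).sum(dim=-1).mean(dim=-1)  # mean entropy per head

attn = torch.softmax(torch.randn(8, 64, 64), dim=-1)
print(attn_weights_entropy(attn))  # 8 values, like the dumps above

Low entropy for a head means it concentrates its weight on few positions; values near log(src_len) mean nearly uniform attention.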
2023-03-26 15:28:10,718 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69735.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 15:28:12,999 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.062e+02 1.598e+02 1.856e+02 2.406e+02 4.029e+02, threshold=3.712e+02, percent-clipped=2.0 2023-03-26 15:28:18,441 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=69747.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:28:20,308 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69750.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:28:21,484 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.5622, 1.4619, 1.4959, 0.9859, 1.6294, 1.8319, 1.7088, 1.3132], device='cuda:2'), covar=tensor([0.0966, 0.0767, 0.0462, 0.0562, 0.0384, 0.0545, 0.0347, 0.0752], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0153, 0.0123, 0.0130, 0.0130, 0.0126, 0.0144, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.3083e-05, 1.1158e-04, 8.8272e-05, 9.3607e-05, 9.2106e-05, 9.1281e-05, 1.0479e-04, 1.0585e-04], device='cuda:2') 2023-03-26 15:28:26,974 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0817, 1.3983, 0.8965, 2.0238, 2.4140, 1.7670, 1.6842, 1.7920], device='cuda:2'), covar=tensor([0.1443, 0.1974, 0.2116, 0.1093, 0.1873, 0.2135, 0.1322, 0.1931], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0096, 0.0113, 0.0092, 0.0121, 0.0094, 0.0099, 0.0090], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 15:28:52,947 INFO [finetune.py:976] (2/7) Epoch 13, batch 1050, loss[loss=0.1822, simple_loss=0.2475, pruned_loss=0.05849, over 4892.00 frames. ], tot_loss[loss=0.1927, simple_loss=0.2599, pruned_loss=0.0628, over 952804.66 frames.
], batch size: 32, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:29:03,459 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9659, 1.0772, 1.9669, 1.8266, 1.6955, 1.6171, 1.7313, 1.7859], device='cuda:2'), covar=tensor([0.3699, 0.3961, 0.3430, 0.3605, 0.4823, 0.3592, 0.4664, 0.3166], device='cuda:2'), in_proj_covar=tensor([0.0239, 0.0237, 0.0254, 0.0261, 0.0257, 0.0232, 0.0274, 0.0232], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:29:05,252 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5811, 1.5105, 1.3492, 1.6317, 1.5919, 1.6384, 0.8881, 1.3836], device='cuda:2'), covar=tensor([0.1990, 0.1869, 0.1796, 0.1515, 0.1437, 0.1077, 0.2449, 0.1785], device='cuda:2'), in_proj_covar=tensor([0.0238, 0.0206, 0.0210, 0.0189, 0.0239, 0.0182, 0.0212, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:29:38,099 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69818.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:29:48,094 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69824.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:29:48,125 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5563, 2.9016, 2.6515, 1.9946, 2.7862, 3.0158, 2.7946, 2.4395], device='cuda:2'), covar=tensor([0.0598, 0.0486, 0.0670, 0.0880, 0.0556, 0.0592, 0.0585, 0.0905], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0131, 0.0141, 0.0123, 0.0122, 0.0141, 0.0141, 0.0160], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:29:59,168 INFO [finetune.py:976] (2/7) Epoch 13, batch 1100, loss[loss=0.185, simple_loss=0.2518, pruned_loss=0.05909, over 4703.00 frames. ], tot_loss[loss=0.1934, simple_loss=0.2605, pruned_loss=0.06311, over 953346.66 frames. ], batch size: 59, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:30:02,881 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.063e+02 1.609e+02 1.898e+02 2.282e+02 6.010e+02, threshold=3.795e+02, percent-clipped=2.0 2023-03-26 15:30:35,886 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=69866.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:30:43,313 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=69872.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:30:51,913 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69881.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:30:53,488 INFO [finetune.py:976] (2/7) Epoch 13, batch 1150, loss[loss=0.1868, simple_loss=0.2654, pruned_loss=0.05408, over 4925.00 frames. ], tot_loss[loss=0.1951, simple_loss=0.2626, pruned_loss=0.06374, over 954560.36 frames. ], batch size: 41, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:31:12,298 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69901.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 15:31:24,472 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0 2023-03-26 15:31:34,977 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.65 vs. limit=5.0 2023-03-26 15:31:36,257 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. 
limit=2.0 2023-03-26 15:31:42,210 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=69929.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:31:42,257 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69929.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 15:31:44,609 INFO [finetune.py:976] (2/7) Epoch 13, batch 1200, loss[loss=0.1939, simple_loss=0.2512, pruned_loss=0.06826, over 4752.00 frames. ], tot_loss[loss=0.1929, simple_loss=0.2604, pruned_loss=0.06266, over 955389.06 frames. ], batch size: 28, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:31:48,754 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.031e+02 1.603e+02 1.893e+02 2.321e+02 3.158e+02, threshold=3.786e+02, percent-clipped=0.0 2023-03-26 15:31:55,835 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=69949.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 15:32:13,213 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69976.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:32:13,761 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=69977.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:32:17,837 INFO [finetune.py:976] (2/7) Epoch 13, batch 1250, loss[loss=0.1836, simple_loss=0.2545, pruned_loss=0.05629, over 4866.00 frames. ], tot_loss[loss=0.1887, simple_loss=0.2561, pruned_loss=0.06066, over 955854.18 frames. ], batch size: 31, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:32:45,355 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4734, 1.5616, 1.5892, 0.7209, 1.5740, 1.8110, 1.8217, 1.4293], device='cuda:2'), covar=tensor([0.0932, 0.0656, 0.0510, 0.0644, 0.0453, 0.0631, 0.0302, 0.0806], device='cuda:2'), in_proj_covar=tensor([0.0124, 0.0151, 0.0120, 0.0128, 0.0129, 0.0125, 0.0141, 0.0143], device='cuda:2'), out_proj_covar=tensor([9.1747e-05, 1.1010e-04, 8.6705e-05, 9.2450e-05, 9.1264e-05, 9.0504e-05, 1.0303e-04, 1.0419e-04], device='cuda:2') 2023-03-26 15:32:46,508 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=70024.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:32:46,535 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=70024.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:32:47,252 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.60 vs. limit=5.0 2023-03-26 15:32:50,169 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=70030.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 15:32:52,397 INFO [finetune.py:976] (2/7) Epoch 13, batch 1300, loss[loss=0.1574, simple_loss=0.2327, pruned_loss=0.04099, over 4827.00 frames. ], tot_loss[loss=0.1863, simple_loss=0.2531, pruned_loss=0.05971, over 956071.74 frames. ], batch size: 39, lr: 3.61e-03, grad_scale: 32.0
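The [scaling.py:679] "Whitening" entries compare a whiteness statistic of a module's activations against a limit, per group of channels (num_groups=8 over 96 or 192 channels, or a single group of 384 above). One natural metric with the logged behavior is the mean squared eigenvalue of the per-group channel covariance divided by its squared mean eigenvalue: it equals 1.0 for perfectly white features and grows as variance concentrates in a few directions. Whether icefall's scaling.py uses exactly this formula is an assumption; a sketch:

import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    # x: (num_frames, num_channels), channels split into num_groups groups
    n, c = x.shape
    groups = x.reshape(n, num_groups, c // num_groups).transpose(0, 1)
    metrics = []
    for g in groups:                        # g: (num_frames, c // num_groups)
        g = g - g.mean(dim=0)
        cov = (g.t() @ g) / n
        eigs = torch.linalg.eigvalsh(cov)   # real eigenvalues, symmetric cov
        metrics.append((eigs ** 2).mean() / eigs.mean() ** 2)
    return torch.stack(metrics).mean()

x = torch.randn(2000, 96)                   # white noise: metric close to 1.0
print(f"Whitening: num_groups=8, num_channels=96, metric={whitening_metric(x, 8):.2f} vs. limit=2.0")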
2023-03-26 15:32:52,535 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.2056, 1.3487, 1.4459, 0.7461, 1.3352, 1.5262, 1.6027, 1.3132], device='cuda:2'), covar=tensor([0.0782, 0.0571, 0.0516, 0.0502, 0.0501, 0.0700, 0.0339, 0.0658], device='cuda:2'), in_proj_covar=tensor([0.0124, 0.0150, 0.0120, 0.0128, 0.0129, 0.0125, 0.0141, 0.0143], device='cuda:2'), out_proj_covar=tensor([9.1692e-05, 1.1003e-04, 8.6630e-05, 9.2386e-05, 9.1150e-05, 9.0448e-05, 1.0294e-04, 1.0406e-04], device='cuda:2') 2023-03-26 15:32:56,053 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.095e+02 1.649e+02 1.897e+02 2.309e+02 4.234e+02, threshold=3.795e+02, percent-clipped=2.0 2023-03-26 15:33:03,831 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=70050.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:33:06,697 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2473, 2.8184, 2.4440, 1.8840, 2.7148, 2.7599, 2.6158, 2.4766], device='cuda:2'), covar=tensor([0.0701, 0.0536, 0.0777, 0.0859, 0.0657, 0.0712, 0.0661, 0.0920], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0130, 0.0140, 0.0123, 0.0122, 0.0140, 0.0140, 0.0160], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:33:10,857 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6630, 3.8246, 3.6297, 2.0808, 3.9971, 3.0124, 0.9251, 2.6735], device='cuda:2'), covar=tensor([0.2696, 0.2361, 0.1484, 0.3029, 0.0958, 0.1037, 0.4297, 0.1653], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0175, 0.0160, 0.0129, 0.0156, 0.0121, 0.0145, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 15:33:16,379 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-26 15:33:19,135 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=70072.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:33:25,839 INFO [finetune.py:976] (2/7) Epoch 13, batch 1350, loss[loss=0.2082, simple_loss=0.2843, pruned_loss=0.06606, over 4927.00 frames. ], tot_loss[loss=0.1871, simple_loss=0.2535, pruned_loss=0.06033, over 955577.81 frames. ], batch size: 42, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:33:36,433 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=70098.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:33:53,160 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=70110.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:34:08,077 INFO [finetune.py:976] (2/7) Epoch 13, batch 1400, loss[loss=0.1883, simple_loss=0.2681, pruned_loss=0.05428, over 4901.00 frames. ], tot_loss[loss=0.188, simple_loss=0.2549, pruned_loss=0.06055, over 956660.99 frames. ], batch size: 37, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:34:12,153 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.202e+02 1.588e+02 1.939e+02 2.393e+02 8.943e+02, threshold=3.877e+02, percent-clipped=1.0 2023-03-26 15:34:34,202 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=70171.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:34:41,771 INFO [finetune.py:976] (2/7) Epoch 13, batch 1450, loss[loss=0.1843, simple_loss=0.2569, pruned_loss=0.05586, over 4792.00 frames.
], tot_loss[loss=0.188, simple_loss=0.2556, pruned_loss=0.06014, over 956427.32 frames. ], batch size: 45, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:35:26,432 INFO [finetune.py:976] (2/7) Epoch 13, batch 1500, loss[loss=0.2608, simple_loss=0.3177, pruned_loss=0.1019, over 4759.00 frames. ], tot_loss[loss=0.1917, simple_loss=0.2591, pruned_loss=0.06217, over 957060.36 frames. ], batch size: 54, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:35:30,134 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.195e+02 1.613e+02 1.899e+02 2.364e+02 4.350e+02, threshold=3.798e+02, percent-clipped=1.0 2023-03-26 15:35:46,434 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-26 15:35:46,863 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=70260.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:35:53,449 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1809, 1.9372, 2.5886, 1.6730, 2.2064, 2.4988, 1.9128, 2.5626], device='cuda:2'), covar=tensor([0.1615, 0.2072, 0.1666, 0.2402, 0.1085, 0.1689, 0.2685, 0.1102], device='cuda:2'), in_proj_covar=tensor([0.0196, 0.0205, 0.0195, 0.0192, 0.0178, 0.0214, 0.0217, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:36:10,535 INFO [finetune.py:976] (2/7) Epoch 13, batch 1550, loss[loss=0.2094, simple_loss=0.2708, pruned_loss=0.074, over 4886.00 frames. ], tot_loss[loss=0.1917, simple_loss=0.2593, pruned_loss=0.06208, over 957687.23 frames. ], batch size: 32, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:36:49,014 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4669, 1.4829, 1.5721, 0.8601, 1.6287, 1.8772, 1.7886, 1.3680], device='cuda:2'), covar=tensor([0.0939, 0.0595, 0.0541, 0.0548, 0.0423, 0.0544, 0.0325, 0.0717], device='cuda:2'), in_proj_covar=tensor([0.0125, 0.0151, 0.0122, 0.0129, 0.0130, 0.0126, 0.0142, 0.0144], device='cuda:2'), out_proj_covar=tensor([9.2455e-05, 1.1064e-04, 8.7716e-05, 9.3188e-05, 9.1858e-05, 9.1104e-05, 1.0348e-04, 1.0456e-04], device='cuda:2') 2023-03-26 15:36:49,641 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=70321.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:36:58,937 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=70330.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 15:37:00,630 INFO [finetune.py:976] (2/7) Epoch 13, batch 1600, loss[loss=0.1176, simple_loss=0.1908, pruned_loss=0.02225, over 4746.00 frames. ], tot_loss[loss=0.1906, simple_loss=0.2578, pruned_loss=0.0617, over 957879.17 frames. ], batch size: 23, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:37:04,737 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.057e+02 1.529e+02 1.873e+02 2.318e+02 5.550e+02, threshold=3.745e+02, percent-clipped=4.0 2023-03-26 15:37:18,500 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.46 vs. limit=2.0 2023-03-26 15:37:30,809 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=70378.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 15:37:34,182 INFO [finetune.py:976] (2/7) Epoch 13, batch 1650, loss[loss=0.1754, simple_loss=0.2355, pruned_loss=0.05765, over 4860.00 frames. ], tot_loss[loss=0.1895, simple_loss=0.2555, pruned_loss=0.06174, over 957399.93 frames. 
], batch size: 44, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:37:52,234 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-26 15:38:08,087 INFO [finetune.py:976] (2/7) Epoch 13, batch 1700, loss[loss=0.1947, simple_loss=0.2682, pruned_loss=0.06059, over 4924.00 frames. ], tot_loss[loss=0.1866, simple_loss=0.2526, pruned_loss=0.06029, over 957787.15 frames. ], batch size: 37, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:38:10,637 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6425, 2.4690, 2.0151, 1.0177, 2.2410, 2.0461, 1.9089, 2.1609], device='cuda:2'), covar=tensor([0.0843, 0.0766, 0.1697, 0.2100, 0.1636, 0.2234, 0.1945, 0.1019], device='cuda:2'), in_proj_covar=tensor([0.0166, 0.0197, 0.0200, 0.0186, 0.0215, 0.0208, 0.0224, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:38:11,732 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.265e+02 1.610e+02 1.926e+02 2.276e+02 4.227e+02, threshold=3.852e+02, percent-clipped=1.0 2023-03-26 15:38:30,202 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=70466.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:38:41,456 INFO [finetune.py:976] (2/7) Epoch 13, batch 1750, loss[loss=0.1903, simple_loss=0.2529, pruned_loss=0.06384, over 4739.00 frames. ], tot_loss[loss=0.1893, simple_loss=0.2555, pruned_loss=0.06154, over 956519.62 frames. ], batch size: 23, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:39:24,239 INFO [finetune.py:976] (2/7) Epoch 13, batch 1800, loss[loss=0.1939, simple_loss=0.2555, pruned_loss=0.06621, over 4877.00 frames. ], tot_loss[loss=0.1904, simple_loss=0.2578, pruned_loss=0.06149, over 956313.35 frames. ], batch size: 32, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:39:28,346 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.316e+01 1.597e+02 2.051e+02 2.548e+02 3.844e+02, threshold=4.101e+02, percent-clipped=0.0 2023-03-26 15:39:47,467 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8220, 1.7111, 1.4867, 1.3730, 1.8717, 1.5604, 1.8465, 1.8045], device='cuda:2'), covar=tensor([0.1560, 0.2309, 0.3458, 0.2847, 0.2782, 0.1973, 0.3095, 0.2047], device='cuda:2'), in_proj_covar=tensor([0.0178, 0.0186, 0.0231, 0.0253, 0.0242, 0.0199, 0.0212, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:39:49,318 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8340, 1.0527, 1.8085, 1.7381, 1.6073, 1.5433, 1.6225, 1.7009], device='cuda:2'), covar=tensor([0.3623, 0.4168, 0.3505, 0.3666, 0.5033, 0.3764, 0.4374, 0.3408], device='cuda:2'), in_proj_covar=tensor([0.0241, 0.0239, 0.0256, 0.0263, 0.0260, 0.0234, 0.0277, 0.0234], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:39:58,058 INFO [finetune.py:976] (2/7) Epoch 13, batch 1850, loss[loss=0.1589, simple_loss=0.2265, pruned_loss=0.04564, over 4787.00 frames. ], tot_loss[loss=0.1912, simple_loss=0.2587, pruned_loss=0.06186, over 954596.88 frames. ], batch size: 26, lr: 3.60e-03, grad_scale: 32.0
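The slow drift of lr above (3.62e-03 down to 3.60e-03 so far, reaching 3.59e-03 later in this log) is the Eden schedule from icefall's optim.py, given the run's configured base_lr=0.004, lr_batches=1.0e5 and lr_epochs=100: the learning rate is the product of two quartic-root decay factors, one in batches and one in epochs. A sketch per our reading of the formula (fractional-epoch handling in the real scheduler may differ):

def eden_lr(base_lr: float, batch: float, epoch: float,
            lr_batches: float = 100000.0, lr_epochs: float = 100.0) -> float:
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor

print(f"{eden_lr(0.004, 69000, 12):.2e}")   # ~3.62e-03
print(f"{eden_lr(0.004, 72000, 13):.2e}")   # ~3.59e-03, matching the drift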
2023-03-26 15:40:07,837 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4682, 1.3648, 1.3370, 1.4420, 1.0179, 2.9864, 1.0461, 1.6688], device='cuda:2'), covar=tensor([0.4379, 0.3122, 0.2550, 0.2962, 0.2036, 0.0378, 0.2636, 0.1266], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0116, 0.0120, 0.0124, 0.0115, 0.0098, 0.0098, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 15:40:26,914 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=70616.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:40:42,677 INFO [finetune.py:976] (2/7) Epoch 13, batch 1900, loss[loss=0.1974, simple_loss=0.2698, pruned_loss=0.06255, over 4893.00 frames. ], tot_loss[loss=0.192, simple_loss=0.2597, pruned_loss=0.06217, over 954961.09 frames. ], batch size: 37, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:40:43,965 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.46 vs. limit=5.0 2023-03-26 15:40:46,778 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.033e+02 1.570e+02 1.884e+02 2.217e+02 6.026e+02, threshold=3.769e+02, percent-clipped=2.0 2023-03-26 15:41:27,334 INFO [finetune.py:976] (2/7) Epoch 13, batch 1950, loss[loss=0.1447, simple_loss=0.2218, pruned_loss=0.03383, over 4814.00 frames. ], tot_loss[loss=0.1912, simple_loss=0.2581, pruned_loss=0.0622, over 954338.78 frames. ], batch size: 39, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:41:32,676 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7439, 1.5959, 2.1185, 3.4848, 2.5209, 2.4191, 1.0126, 2.8431], device='cuda:2'), covar=tensor([0.1673, 0.1463, 0.1298, 0.0577, 0.0725, 0.1453, 0.1941, 0.0492], device='cuda:2'), in_proj_covar=tensor([0.0102, 0.0118, 0.0136, 0.0166, 0.0102, 0.0140, 0.0129, 0.0103], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:2') 2023-03-26 15:42:06,899 INFO [finetune.py:976] (2/7) Epoch 13, batch 2000, loss[loss=0.1923, simple_loss=0.2477, pruned_loss=0.06841, over 4820.00 frames. ], tot_loss[loss=0.1888, simple_loss=0.2552, pruned_loss=0.0612, over 955575.36 frames. ], batch size: 40, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:42:15,811 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.177e+02 1.535e+02 1.807e+02 2.194e+02 3.140e+02, threshold=3.615e+02, percent-clipped=0.0 2023-03-26 15:42:36,900 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=70766.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:42:48,489 INFO [finetune.py:976] (2/7) Epoch 13, batch 2050, loss[loss=0.1915, simple_loss=0.26, pruned_loss=0.06147, over 4856.00 frames. ], tot_loss[loss=0.1864, simple_loss=0.2522, pruned_loss=0.06025, over 955032.48 frames.
], batch size: 49, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:43:09,367 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=70814.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:43:11,813 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6708, 2.7449, 2.6461, 2.2040, 2.9346, 3.1435, 2.9271, 2.1404], device='cuda:2'), covar=tensor([0.0732, 0.0646, 0.0785, 0.0902, 0.0679, 0.0683, 0.0738, 0.1486], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0131, 0.0140, 0.0123, 0.0122, 0.0140, 0.0140, 0.0159], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:43:22,315 INFO [finetune.py:976] (2/7) Epoch 13, batch 2100, loss[loss=0.2167, simple_loss=0.2836, pruned_loss=0.07494, over 4781.00 frames. ], tot_loss[loss=0.1864, simple_loss=0.2521, pruned_loss=0.06034, over 955151.29 frames. ], batch size: 54, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:43:26,462 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.827e+01 1.609e+02 1.892e+02 2.240e+02 3.187e+02, threshold=3.783e+02, percent-clipped=0.0 2023-03-26 15:43:56,100 INFO [finetune.py:976] (2/7) Epoch 13, batch 2150, loss[loss=0.2284, simple_loss=0.2939, pruned_loss=0.08141, over 4928.00 frames. ], tot_loss[loss=0.1907, simple_loss=0.257, pruned_loss=0.06222, over 955875.85 frames. ], batch size: 33, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:44:14,246 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3821, 1.5126, 1.9059, 1.7670, 1.6084, 3.5775, 1.3443, 1.5832], device='cuda:2'), covar=tensor([0.1352, 0.2415, 0.1286, 0.1279, 0.1989, 0.0306, 0.2100, 0.2485], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0080, 0.0084, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 15:44:35,057 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=70916.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:44:46,790 INFO [finetune.py:976] (2/7) Epoch 13, batch 2200, loss[loss=0.2244, simple_loss=0.2814, pruned_loss=0.08373, over 4788.00 frames. ], tot_loss[loss=0.1928, simple_loss=0.2596, pruned_loss=0.063, over 955730.13 frames. ], batch size: 29, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:44:50,484 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.006e+02 1.701e+02 1.958e+02 2.316e+02 4.574e+02, threshold=3.916e+02, percent-clipped=1.0 2023-03-26 15:45:07,669 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=70964.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:45:12,622 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0696, 1.9633, 1.5768, 1.9173, 2.0263, 1.7348, 2.3402, 2.0391], device='cuda:2'), covar=tensor([0.1464, 0.2219, 0.3215, 0.2724, 0.2680, 0.1750, 0.3115, 0.1912], device='cuda:2'), in_proj_covar=tensor([0.0179, 0.0188, 0.0234, 0.0255, 0.0245, 0.0200, 0.0213, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:45:19,186 INFO [finetune.py:976] (2/7) Epoch 13, batch 2250, loss[loss=0.228, simple_loss=0.2973, pruned_loss=0.07941, over 4805.00 frames. ], tot_loss[loss=0.1941, simple_loss=0.2615, pruned_loss=0.06338, over 954616.78 frames. 
], batch size: 40, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:45:26,597 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=70992.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:45:39,694 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2573, 1.7407, 2.0580, 2.0485, 1.8478, 1.8817, 2.0600, 1.9525], device='cuda:2'), covar=tensor([0.4942, 0.5042, 0.3865, 0.4589, 0.5683, 0.4334, 0.5895, 0.3828], device='cuda:2'), in_proj_covar=tensor([0.0240, 0.0238, 0.0255, 0.0262, 0.0259, 0.0234, 0.0275, 0.0233], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:46:01,452 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.3309, 2.9420, 3.0718, 3.3037, 3.1031, 2.8640, 3.3570, 1.0005], device='cuda:2'), covar=tensor([0.0994, 0.0963, 0.1045, 0.0935, 0.1543, 0.1826, 0.0998, 0.5277], device='cuda:2'), in_proj_covar=tensor([0.0349, 0.0244, 0.0276, 0.0291, 0.0330, 0.0282, 0.0301, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:46:02,071 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=71030.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:46:03,707 INFO [finetune.py:976] (2/7) Epoch 13, batch 2300, loss[loss=0.2093, simple_loss=0.2685, pruned_loss=0.07503, over 4762.00 frames. ], tot_loss[loss=0.1924, simple_loss=0.2602, pruned_loss=0.06228, over 953474.57 frames. ], batch size: 27, lr: 3.60e-03, grad_scale: 64.0 2023-03-26 15:46:08,250 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.237e+01 1.685e+02 2.000e+02 2.324e+02 3.629e+02, threshold=3.999e+02, percent-clipped=0.0 2023-03-26 15:46:23,772 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=71053.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:46:46,189 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-26 15:46:59,590 INFO [finetune.py:976] (2/7) Epoch 13, batch 2350, loss[loss=0.1893, simple_loss=0.2513, pruned_loss=0.06367, over 4736.00 frames. ], tot_loss[loss=0.1911, simple_loss=0.2585, pruned_loss=0.06188, over 955225.34 frames. ], batch size: 54, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:47:10,260 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=71091.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:48:00,798 INFO [finetune.py:976] (2/7) Epoch 13, batch 2400, loss[loss=0.1425, simple_loss=0.2111, pruned_loss=0.03693, over 4819.00 frames. ], tot_loss[loss=0.1878, simple_loss=0.2549, pruned_loss=0.06035, over 956394.72 frames. ], batch size: 40, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:48:09,284 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.076e+01 1.502e+02 1.791e+02 2.104e+02 3.987e+02, threshold=3.583e+02, percent-clipped=0.0 2023-03-26 15:49:04,011 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.77 vs. limit=2.0 2023-03-26 15:49:05,621 INFO [finetune.py:976] (2/7) Epoch 13, batch 2450, loss[loss=0.1396, simple_loss=0.2111, pruned_loss=0.03404, over 4770.00 frames. ], tot_loss[loss=0.1855, simple_loss=0.252, pruned_loss=0.05947, over 955080.74 frames. ], batch size: 28, lr: 3.60e-03, grad_scale: 32.0
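grad_scale in these entries is the dynamic fp16 loss scale ('use_fp16': True for this run): PyTorch AMP doubles it after a stretch of overflow-free steps and halves it when inf/nan gradients force a skipped step, which is why it climbs to 64.0 at batch 2300 above and is back at 32.0 by batch 2350. A minimal sketch of the mechanism using torch.cuda.amp.GradScaler; the init_scale and growth_interval values are illustrative, not this run's exact settings:

import torch

scaler = torch.cuda.amp.GradScaler(init_scale=16.0, growth_factor=2.0,
                                   backoff_factor=0.5, growth_interval=2000)

# Inside the training loop (model, optimizer, criterion, batch assumed defined):
#     with torch.cuda.amp.autocast():
#         loss = criterion(model(batch))
#     scaler.scale(loss).backward()
#     scaler.step(optimizer)     # silently skips the step on inf/nan grads
#     scaler.update()            # grows or backs off the scale
#     print(scaler.get_scale())  # the grad_scale value seen in the log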
2023-03-26 15:49:54,490 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.0258, 0.9615, 0.9073, 1.1386, 1.2014, 1.1052, 0.9947, 0.9507], device='cuda:2'), covar=tensor([0.0336, 0.0321, 0.0614, 0.0274, 0.0278, 0.0453, 0.0357, 0.0416], device='cuda:2'), in_proj_covar=tensor([0.0095, 0.0110, 0.0142, 0.0114, 0.0103, 0.0106, 0.0096, 0.0110], device='cuda:2'), out_proj_covar=tensor([7.4010e-05, 8.5520e-05, 1.1242e-04, 8.8798e-05, 8.0267e-05, 7.8834e-05, 7.2617e-05, 8.4584e-05], device='cuda:2') 2023-03-26 15:50:04,537 INFO [finetune.py:976] (2/7) Epoch 13, batch 2500, loss[loss=0.2168, simple_loss=0.2767, pruned_loss=0.07844, over 4823.00 frames. ], tot_loss[loss=0.1885, simple_loss=0.2553, pruned_loss=0.06083, over 954735.55 frames. ], batch size: 40, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:50:04,706 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2710, 2.1241, 1.6875, 2.1507, 2.1792, 1.8820, 2.5148, 2.2232], device='cuda:2'), covar=tensor([0.1365, 0.2223, 0.3538, 0.2769, 0.2644, 0.1844, 0.2977, 0.1915], device='cuda:2'), in_proj_covar=tensor([0.0178, 0.0187, 0.0233, 0.0254, 0.0243, 0.0198, 0.0212, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:50:08,818 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.134e+02 1.629e+02 1.890e+02 2.415e+02 4.682e+02, threshold=3.780e+02, percent-clipped=4.0 2023-03-26 15:50:15,637 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9787, 1.8044, 1.7441, 2.0437, 2.4934, 1.9593, 1.8884, 1.4690], device='cuda:2'), covar=tensor([0.2213, 0.2133, 0.1875, 0.1739, 0.1941, 0.1197, 0.2178, 0.1872], device='cuda:2'), in_proj_covar=tensor([0.0238, 0.0207, 0.0209, 0.0190, 0.0239, 0.0182, 0.0213, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:50:41,417 INFO [finetune.py:976] (2/7) Epoch 13, batch 2550, loss[loss=0.1652, simple_loss=0.243, pruned_loss=0.04373, over 4810.00 frames. ], tot_loss[loss=0.1886, simple_loss=0.2566, pruned_loss=0.06033, over 956885.19 frames. ], batch size: 51, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:50:43,391 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9151, 1.7279, 1.5264, 1.6022, 1.6319, 1.6207, 1.6608, 2.3610], device='cuda:2'), covar=tensor([0.3490, 0.4138, 0.3113, 0.3764, 0.4014, 0.2355, 0.3727, 0.1605], device='cuda:2'), in_proj_covar=tensor([0.0285, 0.0259, 0.0223, 0.0277, 0.0245, 0.0212, 0.0247, 0.0221], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:51:22,575 INFO [finetune.py:976] (2/7) Epoch 13, batch 2600, loss[loss=0.1943, simple_loss=0.2578, pruned_loss=0.06539, over 4792.00 frames. ], tot_loss[loss=0.1901, simple_loss=0.2581, pruned_loss=0.06109, over 955279.26 frames.
], batch size: 51, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:51:23,293 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3873, 1.2538, 1.2638, 1.2891, 0.6411, 2.2277, 0.6766, 1.1754], device='cuda:2'), covar=tensor([0.3579, 0.2692, 0.2481, 0.2603, 0.2425, 0.0371, 0.2886, 0.1504], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0116, 0.0121, 0.0124, 0.0115, 0.0098, 0.0098, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 15:51:25,169 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8254, 1.5312, 2.2788, 1.4960, 2.0263, 2.1710, 1.4949, 2.2753], device='cuda:2'), covar=tensor([0.1491, 0.2407, 0.1311, 0.2004, 0.0968, 0.1522, 0.3142, 0.0868], device='cuda:2'), in_proj_covar=tensor([0.0196, 0.0206, 0.0194, 0.0191, 0.0179, 0.0215, 0.0217, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:51:26,871 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.157e+02 1.678e+02 1.922e+02 2.428e+02 5.321e+02, threshold=3.843e+02, percent-clipped=3.0 2023-03-26 15:51:31,779 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=71348.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:51:51,343 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0 2023-03-26 15:51:55,370 INFO [finetune.py:976] (2/7) Epoch 13, batch 2650, loss[loss=0.1837, simple_loss=0.2578, pruned_loss=0.05484, over 4885.00 frames. ], tot_loss[loss=0.1911, simple_loss=0.2593, pruned_loss=0.06143, over 953867.97 frames. ], batch size: 35, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:51:58,311 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=71386.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:52:29,325 INFO [finetune.py:976] (2/7) Epoch 13, batch 2700, loss[loss=0.1856, simple_loss=0.2526, pruned_loss=0.05932, over 4902.00 frames. ], tot_loss[loss=0.1896, simple_loss=0.2576, pruned_loss=0.0608, over 954452.27 frames. ], batch size: 36, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:52:34,538 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.057e+02 1.578e+02 1.884e+02 2.307e+02 4.300e+02, threshold=3.769e+02, percent-clipped=2.0 2023-03-26 15:52:45,514 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.73 vs. 
limit=2.0 2023-03-26 15:52:46,071 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9792, 1.7825, 2.0691, 1.2435, 1.8935, 2.0273, 1.9586, 1.5820], device='cuda:2'), covar=tensor([0.0582, 0.0713, 0.0595, 0.0944, 0.0770, 0.0719, 0.0612, 0.1180], device='cuda:2'), in_proj_covar=tensor([0.0136, 0.0134, 0.0144, 0.0126, 0.0125, 0.0144, 0.0143, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:52:48,344 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.1315, 2.9170, 2.6087, 1.4232, 2.7169, 2.2768, 2.1707, 2.4520], device='cuda:2'), covar=tensor([0.1020, 0.0762, 0.1731, 0.2224, 0.1707, 0.2066, 0.2075, 0.1187], device='cuda:2'), in_proj_covar=tensor([0.0165, 0.0195, 0.0198, 0.0185, 0.0211, 0.0206, 0.0220, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:53:02,923 INFO [finetune.py:976] (2/7) Epoch 13, batch 2750, loss[loss=0.1751, simple_loss=0.233, pruned_loss=0.05859, over 4757.00 frames. ], tot_loss[loss=0.1879, simple_loss=0.2552, pruned_loss=0.06034, over 953993.02 frames. ], batch size: 28, lr: 3.59e-03, grad_scale: 32.0 2023-03-26 15:53:19,115 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6999, 2.5333, 1.9392, 0.9613, 2.1469, 2.1839, 1.9262, 2.1365], device='cuda:2'), covar=tensor([0.0744, 0.0718, 0.1360, 0.1964, 0.1318, 0.1956, 0.1928, 0.0874], device='cuda:2'), in_proj_covar=tensor([0.0165, 0.0195, 0.0198, 0.0185, 0.0212, 0.0206, 0.0221, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 15:53:34,926 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4905, 1.4589, 1.5196, 0.7490, 1.5392, 1.7125, 1.7035, 1.3235], device='cuda:2'), covar=tensor([0.0876, 0.0646, 0.0492, 0.0575, 0.0440, 0.0541, 0.0311, 0.0727], device='cuda:2'), in_proj_covar=tensor([0.0127, 0.0153, 0.0122, 0.0130, 0.0131, 0.0127, 0.0143, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.3632e-05, 1.1167e-04, 8.7933e-05, 9.3505e-05, 9.2790e-05, 9.1819e-05, 1.0419e-04, 1.0561e-04], device='cuda:2') 2023-03-26 15:53:36,642 INFO [finetune.py:976] (2/7) Epoch 13, batch 2800, loss[loss=0.1746, simple_loss=0.2343, pruned_loss=0.0574, over 4903.00 frames. ], tot_loss[loss=0.1854, simple_loss=0.2524, pruned_loss=0.05918, over 955544.28 frames. ], batch size: 36, lr: 3.59e-03, grad_scale: 32.0 2023-03-26 15:53:40,882 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.138e+02 1.564e+02 1.863e+02 2.304e+02 3.302e+02, threshold=3.726e+02, percent-clipped=0.0 2023-03-26 15:54:07,213 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0 2023-03-26 15:54:08,311 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-26 15:54:23,057 INFO [finetune.py:976] (2/7) Epoch 13, batch 2850, loss[loss=0.2075, simple_loss=0.2535, pruned_loss=0.08077, over 4823.00 frames. ], tot_loss[loss=0.185, simple_loss=0.2518, pruned_loss=0.0591, over 954296.61 frames. 
2023-03-26 15:54:23,057 INFO [finetune.py:976] (2/7) Epoch 13, batch 2850, loss[loss=0.2075, simple_loss=0.2535, pruned_loss=0.08077, over 4823.00 frames. ], tot_loss[loss=0.185, simple_loss=0.2518, pruned_loss=0.0591, over 954296.61 frames. ], batch size: 30, lr: 3.59e-03, grad_scale: 32.0
2023-03-26 15:54:52,328 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=71616.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 15:54:56,238 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9237, 1.8638, 2.0142, 1.4463, 1.8979, 2.1308, 2.0628, 1.6710], device='cuda:2'), covar=tensor([0.0529, 0.0582, 0.0591, 0.0861, 0.0938, 0.0526, 0.0482, 0.0911], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0133, 0.0143, 0.0125, 0.0124, 0.0143, 0.0142, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 15:55:06,322 INFO [finetune.py:976] (2/7) Epoch 13, batch 2900, loss[loss=0.1981, simple_loss=0.28, pruned_loss=0.05805, over 4929.00 frames. ], tot_loss[loss=0.1872, simple_loss=0.2547, pruned_loss=0.05986, over 952288.40 frames. ], batch size: 42, lr: 3.59e-03, grad_scale: 32.0
2023-03-26 15:55:15,494 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.635e+01 1.661e+02 1.944e+02 2.530e+02 6.475e+02, threshold=3.888e+02, percent-clipped=5.0
2023-03-26 15:55:24,626 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=71648.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 15:55:49,611 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=71677.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 15:55:58,581 INFO [finetune.py:976] (2/7) Epoch 13, batch 2950, loss[loss=0.195, simple_loss=0.2574, pruned_loss=0.06634, over 4896.00 frames. ], tot_loss[loss=0.1891, simple_loss=0.2568, pruned_loss=0.06075, over 951634.57 frames. ], batch size: 32, lr: 3.59e-03, grad_scale: 32.0
2023-03-26 15:56:00,485 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=71686.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 15:56:09,905 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.3205, 3.7036, 3.9337, 4.1786, 4.0757, 3.8671, 4.3974, 1.3607], device='cuda:2'), covar=tensor([0.0687, 0.0764, 0.0696, 0.0874, 0.1211, 0.1497, 0.0598, 0.5179], device='cuda:2'), in_proj_covar=tensor([0.0350, 0.0243, 0.0276, 0.0290, 0.0330, 0.0282, 0.0301, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 15:56:11,115 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=71696.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 15:56:39,790 INFO [finetune.py:976] (2/7) Epoch 13, batch 3000, loss[loss=0.1673, simple_loss=0.2246, pruned_loss=0.05505, over 4206.00 frames. ], tot_loss[loss=0.19, simple_loss=0.2577, pruned_loss=0.06114, over 951522.69 frames. ], batch size: 18, lr: 3.59e-03, grad_scale: 32.0
2023-03-26 15:56:39,790 INFO [finetune.py:1001] (2/7) Computing validation loss
2023-03-26 15:56:50,410 INFO [finetune.py:1010] (2/7) Epoch 13, validation: loss=0.1572, simple_loss=0.2278, pruned_loss=0.04333, over 2265189.00 frames.
2023-03-26 15:56:50,410 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB
2023-03-26 15:56:51,091 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=71734.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 15:56:55,669 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.204e+02 1.624e+02 1.953e+02 2.376e+02 4.887e+02, threshold=3.907e+02, percent-clipped=1.0
2023-03-26 15:57:22,728 INFO [finetune.py:976] (2/7) Epoch 13, batch 3050, loss[loss=0.2175, simple_loss=0.2799, pruned_loss=0.07759, over 4918.00 frames. ], tot_loss[loss=0.1899, simple_loss=0.2579, pruned_loss=0.06089, over 951452.67 frames. ], batch size: 33, lr: 3.59e-03, grad_scale: 32.0
2023-03-26 15:57:55,478 INFO [finetune.py:976] (2/7) Epoch 13, batch 3100, loss[loss=0.1914, simple_loss=0.2529, pruned_loss=0.06499, over 4912.00 frames. ], tot_loss[loss=0.1891, simple_loss=0.257, pruned_loss=0.06063, over 952878.06 frames. ], batch size: 43, lr: 3.59e-03, grad_scale: 32.0
2023-03-26 15:58:01,084 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.937e+01 1.560e+02 1.843e+02 2.215e+02 5.565e+02, threshold=3.687e+02, percent-clipped=1.0
2023-03-26 15:58:08,165 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1115, 1.0482, 1.0755, 0.4917, 0.9185, 1.2099, 1.2752, 1.0274], device='cuda:2'), covar=tensor([0.0735, 0.0490, 0.0445, 0.0466, 0.0529, 0.0559, 0.0333, 0.0565], device='cuda:2'), in_proj_covar=tensor([0.0125, 0.0152, 0.0121, 0.0129, 0.0130, 0.0126, 0.0142, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.2834e-05, 1.1089e-04, 8.7332e-05, 9.2646e-05, 9.2054e-05, 9.1662e-05, 1.0324e-04, 1.0496e-04], device='cuda:2')
2023-03-26 15:58:26,832 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.0782, 0.9934, 1.0895, 0.5011, 0.8975, 1.1741, 1.2276, 1.0345], device='cuda:2'), covar=tensor([0.0816, 0.0557, 0.0452, 0.0484, 0.0485, 0.0613, 0.0361, 0.0656], device='cuda:2'), in_proj_covar=tensor([0.0125, 0.0152, 0.0121, 0.0128, 0.0130, 0.0126, 0.0142, 0.0144], device='cuda:2'), out_proj_covar=tensor([9.2714e-05, 1.1073e-04, 8.7264e-05, 9.2492e-05, 9.1906e-05, 9.1482e-05, 1.0315e-04, 1.0479e-04], device='cuda:2')
2023-03-26 15:58:29,154 INFO [finetune.py:976] (2/7) Epoch 13, batch 3150, loss[loss=0.1799, simple_loss=0.2508, pruned_loss=0.05452, over 4824.00 frames. ], tot_loss[loss=0.1877, simple_loss=0.2549, pruned_loss=0.06026, over 953309.78 frames. ], batch size: 41, lr: 3.59e-03, grad_scale: 32.0
2023-03-26 15:58:43,944 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6193, 1.9131, 2.2104, 2.0077, 1.8295, 4.3043, 1.5080, 1.8971], device='cuda:2'), covar=tensor([0.0957, 0.1618, 0.1182, 0.0947, 0.1521, 0.0217, 0.1460, 0.1603], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0082, 0.0074, 0.0078, 0.0092, 0.0082, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 15:58:53,749 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=71919.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 15:59:03,051 INFO [finetune.py:976] (2/7) Epoch 13, batch 3200, loss[loss=0.1877, simple_loss=0.2646, pruned_loss=0.05542, over 4791.00 frames. ], tot_loss[loss=0.1856, simple_loss=0.252, pruned_loss=0.05961, over 954217.25 frames. ], batch size: 45, lr: 3.59e-03, grad_scale: 32.0
2023-03-26 15:59:07,313 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.138e+02 1.561e+02 1.912e+02 2.265e+02 3.518e+02, threshold=3.824e+02, percent-clipped=0.0
2023-03-26 15:59:40,776 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=71972.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 15:59:49,829 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.0345, 3.4991, 3.6734, 3.8766, 3.7665, 3.5892, 4.1238, 1.2525], device='cuda:2'), covar=tensor([0.0875, 0.0931, 0.0756, 0.1148, 0.1378, 0.1614, 0.0802, 0.5893], device='cuda:2'), in_proj_covar=tensor([0.0349, 0.0243, 0.0276, 0.0290, 0.0330, 0.0282, 0.0302, 0.0297], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 15:59:54,377 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=71980.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 15:59:56,074 INFO [finetune.py:976] (2/7) Epoch 13, batch 3250, loss[loss=0.2334, simple_loss=0.3025, pruned_loss=0.08217, over 4749.00 frames. ], tot_loss[loss=0.1859, simple_loss=0.2526, pruned_loss=0.05967, over 954692.29 frames. ], batch size: 54, lr: 3.59e-03, grad_scale: 32.0
2023-03-26 16:00:04,187 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5436, 1.4153, 1.3648, 1.6300, 1.5815, 1.5990, 0.9370, 1.3487], device='cuda:2'), covar=tensor([0.1878, 0.1872, 0.1717, 0.1327, 0.1304, 0.1047, 0.2319, 0.1629], device='cuda:2'), in_proj_covar=tensor([0.0238, 0.0208, 0.0210, 0.0190, 0.0241, 0.0184, 0.0214, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 16:00:39,651 INFO [finetune.py:976] (2/7) Epoch 13, batch 3300, loss[loss=0.1817, simple_loss=0.2578, pruned_loss=0.0528, over 4906.00 frames. ], tot_loss[loss=0.1884, simple_loss=0.2555, pruned_loss=0.06068, over 954386.02 frames. ], batch size: 36, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:00:44,481 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.095e+02 1.593e+02 1.995e+02 2.341e+02 5.205e+02, threshold=3.991e+02, percent-clipped=4.0
2023-03-26 16:01:29,182 INFO [finetune.py:976] (2/7) Epoch 13, batch 3350, loss[loss=0.2417, simple_loss=0.3055, pruned_loss=0.089, over 4122.00 frames. ], tot_loss[loss=0.1905, simple_loss=0.258, pruned_loss=0.06151, over 954619.54 frames. ], batch size: 65, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:01:45,266 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0870, 1.9977, 1.5575, 1.9568, 1.9582, 1.6804, 2.3221, 1.9853], device='cuda:2'), covar=tensor([0.1402, 0.2236, 0.3350, 0.2745, 0.2862, 0.1900, 0.3130, 0.2005], device='cuda:2'), in_proj_covar=tensor([0.0180, 0.0187, 0.0234, 0.0255, 0.0245, 0.0199, 0.0214, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 16:01:58,845 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.62 vs. limit=2.0
2023-03-26 16:01:59,486 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.55 vs. limit=2.0
2023-03-26 16:02:07,936 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=72129.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:02:11,038 INFO [finetune.py:976] (2/7) Epoch 13, batch 3400, loss[loss=0.1471, simple_loss=0.2265, pruned_loss=0.03384, over 4747.00 frames. ], tot_loss[loss=0.192, simple_loss=0.2603, pruned_loss=0.06189, over 957141.40 frames. ], batch size: 27, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:02:16,763 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.109e+02 1.708e+02 2.008e+02 2.371e+02 4.954e+02, threshold=4.015e+02, percent-clipped=4.0
2023-03-26 16:02:49,871 INFO [finetune.py:976] (2/7) Epoch 13, batch 3450, loss[loss=0.2039, simple_loss=0.2672, pruned_loss=0.07029, over 4862.00 frames. ], tot_loss[loss=0.1924, simple_loss=0.2604, pruned_loss=0.06214, over 958045.00 frames. ], batch size: 34, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:02:55,185 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=72190.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:03:06,715 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0
2023-03-26 16:03:23,344 INFO [finetune.py:976] (2/7) Epoch 13, batch 3500, loss[loss=0.1796, simple_loss=0.2494, pruned_loss=0.05487, over 3997.00 frames. ], tot_loss[loss=0.1892, simple_loss=0.2567, pruned_loss=0.06089, over 957595.52 frames. ], batch size: 17, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:03:29,064 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.069e+02 1.641e+02 1.993e+02 2.438e+02 4.377e+02, threshold=3.986e+02, percent-clipped=2.0
2023-03-26 16:03:44,661 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9001, 1.8465, 1.6875, 1.8377, 1.3752, 4.1754, 1.5484, 2.0436], device='cuda:2'), covar=tensor([0.3251, 0.2441, 0.2095, 0.2373, 0.1709, 0.0150, 0.2663, 0.1282], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0115, 0.0119, 0.0123, 0.0115, 0.0097, 0.0097, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 16:03:49,883 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=72272.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:03:51,650 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=72275.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:03:56,422 INFO [finetune.py:976] (2/7) Epoch 13, batch 3550, loss[loss=0.1668, simple_loss=0.2365, pruned_loss=0.04856, over 4909.00 frames. ], tot_loss[loss=0.1871, simple_loss=0.2541, pruned_loss=0.06007, over 956680.30 frames. ], batch size: 35, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:04:04,830 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-26 16:04:14,407 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.07 vs. limit=5.0
2023-03-26 16:04:37,444 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=72320.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:04:47,568 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=72326.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:04:51,694 INFO [finetune.py:976] (2/7) Epoch 13, batch 3600, loss[loss=0.1352, simple_loss=0.2143, pruned_loss=0.02811, over 4934.00 frames. ], tot_loss[loss=0.1855, simple_loss=0.2521, pruned_loss=0.05938, over 956328.95 frames. ], batch size: 38, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:04:58,281 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.845e+01 1.525e+02 1.754e+02 2.048e+02 3.586e+02, threshold=3.507e+02, percent-clipped=0.0
2023-03-26 16:05:42,445 INFO [finetune.py:976] (2/7) Epoch 13, batch 3650, loss[loss=0.1409, simple_loss=0.2136, pruned_loss=0.03416, over 4769.00 frames. ], tot_loss[loss=0.1871, simple_loss=0.2541, pruned_loss=0.06008, over 955632.80 frames. ], batch size: 26, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:05:50,442 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=72387.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:06:14,599 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4269, 1.1331, 0.7681, 1.3087, 1.8374, 0.7072, 1.2264, 1.3521], device='cuda:2'), covar=tensor([0.1672, 0.2095, 0.1814, 0.1270, 0.2037, 0.1949, 0.1518, 0.1964], device='cuda:2'), in_proj_covar=tensor([0.0088, 0.0094, 0.0111, 0.0091, 0.0119, 0.0093, 0.0098, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 16:06:22,139 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.2962, 2.9197, 3.0202, 3.2000, 3.0582, 2.8834, 3.3364, 0.9938], device='cuda:2'), covar=tensor([0.1065, 0.1056, 0.1150, 0.1295, 0.1695, 0.1804, 0.1067, 0.5367], device='cuda:2'), in_proj_covar=tensor([0.0345, 0.0241, 0.0274, 0.0289, 0.0329, 0.0279, 0.0301, 0.0294], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 16:06:54,367 INFO [finetune.py:976] (2/7) Epoch 13, batch 3700, loss[loss=0.238, simple_loss=0.3135, pruned_loss=0.08119, over 4819.00 frames. ], tot_loss[loss=0.1894, simple_loss=0.257, pruned_loss=0.06091, over 954475.00 frames. ], batch size: 45, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:06:55,111 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2660, 2.1666, 1.6255, 2.1914, 2.1271, 1.8878, 2.5059, 2.2129], device='cuda:2'), covar=tensor([0.1352, 0.2006, 0.3173, 0.2556, 0.2532, 0.1654, 0.2855, 0.1788], device='cuda:2'), in_proj_covar=tensor([0.0180, 0.0187, 0.0234, 0.0254, 0.0245, 0.0199, 0.0213, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 16:07:04,384 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.108e+02 1.616e+02 1.915e+02 2.308e+02 4.437e+02, threshold=3.829e+02, percent-clipped=1.0
2023-03-26 16:07:17,014 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6441, 1.4411, 2.1513, 3.1201, 2.1585, 2.2568, 1.1248, 2.4594], device='cuda:2'), covar=tensor([0.1667, 0.1537, 0.1151, 0.0592, 0.0785, 0.1772, 0.1707, 0.0606], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0115, 0.0133, 0.0164, 0.0100, 0.0137, 0.0125, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 16:07:52,818 INFO [finetune.py:976] (2/7) Epoch 13, batch 3750, loss[loss=0.1728, simple_loss=0.249, pruned_loss=0.04827, over 4774.00 frames. ], tot_loss[loss=0.1905, simple_loss=0.2588, pruned_loss=0.06109, over 955727.37 frames. ], batch size: 29, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:07:54,151 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=72485.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:08:29,282 INFO [finetune.py:976] (2/7) Epoch 13, batch 3800, loss[loss=0.1951, simple_loss=0.248, pruned_loss=0.07108, over 4800.00 frames. ], tot_loss[loss=0.1912, simple_loss=0.2595, pruned_loss=0.06146, over 954945.37 frames. ], batch size: 25, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:08:33,611 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.76 vs. limit=2.0
2023-03-26 16:08:34,664 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.445e+01 1.578e+02 1.803e+02 2.155e+02 3.901e+02, threshold=3.607e+02, percent-clipped=1.0
2023-03-26 16:08:56,828 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=72575.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:09:02,584 INFO [finetune.py:976] (2/7) Epoch 13, batch 3850, loss[loss=0.2085, simple_loss=0.2616, pruned_loss=0.07768, over 4806.00 frames. ], tot_loss[loss=0.1884, simple_loss=0.257, pruned_loss=0.05989, over 956801.13 frames. ], batch size: 45, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:09:09,807 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.3540, 3.7738, 3.9545, 4.1545, 4.1384, 3.8076, 4.4070, 1.4564], device='cuda:2'), covar=tensor([0.0670, 0.0831, 0.0815, 0.0909, 0.1133, 0.1548, 0.0727, 0.5002], device='cuda:2'), in_proj_covar=tensor([0.0345, 0.0241, 0.0273, 0.0288, 0.0328, 0.0278, 0.0300, 0.0293], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 16:09:12,218 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6910, 1.2446, 0.9573, 1.6050, 2.0230, 1.2964, 1.4041, 1.6153], device='cuda:2'), covar=tensor([0.1218, 0.1679, 0.1637, 0.0966, 0.1693, 0.1831, 0.1241, 0.1507], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0094, 0.0111, 0.0091, 0.0119, 0.0093, 0.0098, 0.0090], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 16:09:30,596 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=72623.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:09:46,306 INFO [finetune.py:976] (2/7) Epoch 13, batch 3900, loss[loss=0.1845, simple_loss=0.2486, pruned_loss=0.06021, over 4754.00 frames. ], tot_loss[loss=0.1865, simple_loss=0.2542, pruned_loss=0.0594, over 956455.01 frames. ], batch size: 27, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:09:51,187 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.402e+01 1.493e+02 1.751e+02 2.217e+02 3.590e+02, threshold=3.501e+02, percent-clipped=0.0
2023-03-26 16:10:05,181 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0
2023-03-26 16:10:18,075 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=72682.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:10:18,645 INFO [finetune.py:976] (2/7) Epoch 13, batch 3950, loss[loss=0.154, simple_loss=0.2175, pruned_loss=0.04529, over 4901.00 frames. ], tot_loss[loss=0.1842, simple_loss=0.2516, pruned_loss=0.05835, over 957057.79 frames. ], batch size: 46, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:10:25,888 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0
2023-03-26 16:10:43,212 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8968, 3.5646, 3.4723, 1.7455, 3.7466, 2.7489, 0.8353, 2.5231], device='cuda:2'), covar=tensor([0.2301, 0.2121, 0.1542, 0.3060, 0.1044, 0.0966, 0.4138, 0.1571], device='cuda:2'), in_proj_covar=tensor([0.0150, 0.0174, 0.0159, 0.0128, 0.0156, 0.0121, 0.0145, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 16:10:45,753 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8825, 1.4259, 1.9706, 1.8669, 1.6671, 1.6485, 1.8431, 1.8010], device='cuda:2'), covar=tensor([0.3712, 0.3981, 0.3155, 0.3706, 0.4632, 0.3582, 0.4232, 0.3099], device='cuda:2'), in_proj_covar=tensor([0.0241, 0.0238, 0.0256, 0.0264, 0.0260, 0.0236, 0.0276, 0.0234], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 16:11:10,484 INFO [finetune.py:976] (2/7) Epoch 13, batch 4000, loss[loss=0.158, simple_loss=0.2309, pruned_loss=0.04258, over 4832.00 frames. ], tot_loss[loss=0.183, simple_loss=0.2502, pruned_loss=0.0579, over 956739.34 frames. ], batch size: 30, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:11:16,800 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.220e+02 1.596e+02 1.921e+02 2.181e+02 4.609e+02, threshold=3.842e+02, percent-clipped=3.0
2023-03-26 16:11:21,108 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.5199, 1.7013, 1.8096, 0.9120, 1.7037, 2.0324, 1.9581, 1.5740], device='cuda:2'), covar=tensor([0.0905, 0.0654, 0.0495, 0.0586, 0.0480, 0.0574, 0.0357, 0.0707], device='cuda:2'), in_proj_covar=tensor([0.0125, 0.0152, 0.0122, 0.0128, 0.0130, 0.0125, 0.0142, 0.0144], device='cuda:2'), out_proj_covar=tensor([9.2713e-05, 1.1067e-04, 8.7595e-05, 9.2374e-05, 9.2337e-05, 9.1062e-05, 1.0306e-04, 1.0475e-04], device='cuda:2')
2023-03-26 16:11:44,631 INFO [finetune.py:976] (2/7) Epoch 13, batch 4050, loss[loss=0.1648, simple_loss=0.2242, pruned_loss=0.05269, over 4719.00 frames. ], tot_loss[loss=0.1868, simple_loss=0.2541, pruned_loss=0.0597, over 957023.88 frames. ], batch size: 23, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:11:46,017 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=72785.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:11:56,927 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8167, 1.6409, 1.4587, 1.2009, 1.5837, 1.6298, 1.5623, 2.1759], device='cuda:2'), covar=tensor([0.4103, 0.4274, 0.3222, 0.3933, 0.3882, 0.2345, 0.3692, 0.1809], device='cuda:2'), in_proj_covar=tensor([0.0285, 0.0259, 0.0223, 0.0276, 0.0245, 0.0211, 0.0246, 0.0220], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 16:12:05,271 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3942, 2.3110, 2.0279, 2.4860, 2.2520, 2.1812, 2.1714, 3.2447], device='cuda:2'), covar=tensor([0.4315, 0.5119, 0.3499, 0.4595, 0.4511, 0.2674, 0.4879, 0.1596], device='cuda:2'), in_proj_covar=tensor([0.0285, 0.0259, 0.0223, 0.0276, 0.0245, 0.0211, 0.0246, 0.0220], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 16:12:39,938 INFO [finetune.py:976] (2/7) Epoch 13, batch 4100, loss[loss=0.1627, simple_loss=0.2347, pruned_loss=0.04537, over 4788.00 frames. ], tot_loss[loss=0.1887, simple_loss=0.2565, pruned_loss=0.06045, over 956290.77 frames. ], batch size: 29, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:12:39,999 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=72833.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:12:45,292 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.089e+02 1.592e+02 1.875e+02 2.230e+02 3.624e+02, threshold=3.749e+02, percent-clipped=0.0
2023-03-26 16:13:12,220 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8878, 1.3672, 1.9038, 1.8588, 1.6843, 1.6179, 1.8357, 1.7296], device='cuda:2'), covar=tensor([0.3640, 0.3927, 0.3285, 0.3678, 0.4713, 0.3589, 0.4376, 0.3213], device='cuda:2'), in_proj_covar=tensor([0.0241, 0.0237, 0.0256, 0.0264, 0.0261, 0.0236, 0.0276, 0.0234], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 16:13:13,288 INFO [finetune.py:976] (2/7) Epoch 13, batch 4150, loss[loss=0.2399, simple_loss=0.3052, pruned_loss=0.08731, over 4827.00 frames. ], tot_loss[loss=0.1899, simple_loss=0.2576, pruned_loss=0.06105, over 955148.74 frames. ], batch size: 47, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:13:26,386 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=72894.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:13:44,272 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9613, 1.8496, 1.5612, 1.7810, 1.9264, 1.6453, 2.1802, 1.9471], device='cuda:2'), covar=tensor([0.1380, 0.2250, 0.3199, 0.2668, 0.2712, 0.1740, 0.3209, 0.1814], device='cuda:2'), in_proj_covar=tensor([0.0181, 0.0188, 0.0236, 0.0255, 0.0245, 0.0200, 0.0215, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 16:14:03,786 INFO [finetune.py:976] (2/7) Epoch 13, batch 4200, loss[loss=0.1817, simple_loss=0.2536, pruned_loss=0.05484, over 4758.00 frames. ], tot_loss[loss=0.1896, simple_loss=0.2579, pruned_loss=0.06067, over 956118.97 frames. ], batch size: 28, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:14:08,708 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.693e+01 1.496e+02 1.812e+02 2.169e+02 4.504e+02, threshold=3.624e+02, percent-clipped=2.0
2023-03-26 16:14:18,721 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=72955.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:14:30,128 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=72964.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 16:14:52,547 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=72982.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:14:53,055 INFO [finetune.py:976] (2/7) Epoch 13, batch 4250, loss[loss=0.225, simple_loss=0.2711, pruned_loss=0.0895, over 4313.00 frames. ], tot_loss[loss=0.1892, simple_loss=0.2566, pruned_loss=0.06091, over 956485.80 frames. ], batch size: 65, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:15:15,246 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4451, 1.4907, 1.6367, 1.7450, 1.5481, 3.2034, 1.3437, 1.6056], device='cuda:2'), covar=tensor([0.0927, 0.1692, 0.1086, 0.0914, 0.1456, 0.0268, 0.1417, 0.1683], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0073, 0.0077, 0.0091, 0.0081, 0.0084, 0.0078], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 16:15:19,025 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.71 vs. limit=5.0
2023-03-26 16:15:21,375 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=73025.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 16:15:23,174 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5954, 1.5579, 1.9353, 1.8679, 1.6233, 3.5577, 1.3083, 1.7173], device='cuda:2'), covar=tensor([0.0941, 0.1771, 0.1084, 0.0968, 0.1589, 0.0234, 0.1547, 0.1703], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0081, 0.0084, 0.0078], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 16:15:24,376 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=73030.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:15:26,658 INFO [finetune.py:976] (2/7) Epoch 13, batch 4300, loss[loss=0.1978, simple_loss=0.248, pruned_loss=0.07374, over 4252.00 frames. ], tot_loss[loss=0.187, simple_loss=0.254, pruned_loss=0.05998, over 955804.30 frames. ], batch size: 18, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:15:31,990 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.052e+02 1.498e+02 1.782e+02 2.254e+02 4.055e+02, threshold=3.563e+02, percent-clipped=2.0
2023-03-26 16:15:32,761 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0389, 1.9057, 1.6175, 1.7241, 1.7870, 1.7848, 1.7924, 2.5255], device='cuda:2'), covar=tensor([0.4129, 0.4368, 0.3562, 0.4189, 0.4350, 0.2696, 0.4301, 0.1711], device='cuda:2'), in_proj_covar=tensor([0.0285, 0.0259, 0.0223, 0.0276, 0.0246, 0.0212, 0.0246, 0.0221], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 16:15:59,433 INFO [finetune.py:976] (2/7) Epoch 13, batch 4350, loss[loss=0.1431, simple_loss=0.2173, pruned_loss=0.03439, over 4827.00 frames. ], tot_loss[loss=0.184, simple_loss=0.2508, pruned_loss=0.05857, over 955979.15 frames. ], batch size: 30, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:16:34,970 INFO [finetune.py:976] (2/7) Epoch 13, batch 4400, loss[loss=0.2157, simple_loss=0.2964, pruned_loss=0.0675, over 4841.00 frames. ], tot_loss[loss=0.1858, simple_loss=0.2526, pruned_loss=0.05954, over 955852.79 frames. ], batch size: 49, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:16:39,866 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8886, 1.7949, 1.5579, 1.7758, 1.7107, 1.6812, 1.7397, 2.4399], device='cuda:2'), covar=tensor([0.4098, 0.4520, 0.3247, 0.3844, 0.4096, 0.2442, 0.3828, 0.1602], device='cuda:2'), in_proj_covar=tensor([0.0286, 0.0260, 0.0223, 0.0277, 0.0246, 0.0212, 0.0247, 0.0221], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 16:16:40,306 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.328e+01 1.433e+02 1.829e+02 2.142e+02 3.915e+02, threshold=3.659e+02, percent-clipped=1.0
2023-03-26 16:17:08,723 INFO [finetune.py:976] (2/7) Epoch 13, batch 4450, loss[loss=0.1889, simple_loss=0.2613, pruned_loss=0.05823, over 4759.00 frames. ], tot_loss[loss=0.1888, simple_loss=0.2562, pruned_loss=0.06071, over 955785.54 frames. ], batch size: 28, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:17:50,324 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9742, 1.8320, 1.6038, 1.6761, 1.7771, 1.7587, 1.7515, 2.4363], device='cuda:2'), covar=tensor([0.4304, 0.4637, 0.3508, 0.3823, 0.4057, 0.2525, 0.3815, 0.1843], device='cuda:2'), in_proj_covar=tensor([0.0288, 0.0261, 0.0224, 0.0278, 0.0247, 0.0213, 0.0248, 0.0222], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 16:17:53,151 INFO [finetune.py:976] (2/7) Epoch 13, batch 4500, loss[loss=0.2181, simple_loss=0.3005, pruned_loss=0.06787, over 4801.00 frames. ], tot_loss[loss=0.191, simple_loss=0.2587, pruned_loss=0.06164, over 955055.42 frames. ], batch size: 51, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:17:55,115 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2383, 1.1837, 1.3800, 1.9804, 1.3956, 1.7659, 0.7622, 1.6599], device='cuda:2'), covar=tensor([0.1208, 0.1012, 0.0795, 0.0606, 0.0635, 0.0917, 0.1106, 0.0521], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0116, 0.0133, 0.0165, 0.0101, 0.0138, 0.0126, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 16:17:56,940 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4123, 1.4641, 1.7687, 1.6528, 1.5241, 3.2008, 1.3204, 1.5236], device='cuda:2'), covar=tensor([0.0943, 0.1644, 0.1068, 0.0953, 0.1496, 0.0247, 0.1420, 0.1668], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0073, 0.0077, 0.0091, 0.0081, 0.0084, 0.0078], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 16:17:58,018 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.220e+02 1.732e+02 2.105e+02 2.505e+02 4.470e+02, threshold=4.210e+02, percent-clipped=3.0
2023-03-26 16:18:04,482 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=73250.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:18:21,101 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.54 vs. limit=5.0
2023-03-26 16:18:26,891 INFO [finetune.py:976] (2/7) Epoch 13, batch 4550, loss[loss=0.2498, simple_loss=0.2993, pruned_loss=0.1001, over 4730.00 frames. ], tot_loss[loss=0.1921, simple_loss=0.2598, pruned_loss=0.06215, over 954488.77 frames. ], batch size: 59, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:18:34,965 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=73293.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:19:07,530 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=73320.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 16:19:19,978 INFO [finetune.py:976] (2/7) Epoch 13, batch 4600, loss[loss=0.1981, simple_loss=0.2621, pruned_loss=0.06712, over 4821.00 frames. ], tot_loss[loss=0.1901, simple_loss=0.2581, pruned_loss=0.06106, over 954300.07 frames. ], batch size: 33, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:19:24,888 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.809e+01 1.600e+02 1.901e+02 2.194e+02 3.702e+02, threshold=3.803e+02, percent-clipped=0.0
2023-03-26 16:19:42,013 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=73354.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:20:11,745 INFO [finetune.py:976] (2/7) Epoch 13, batch 4650, loss[loss=0.1655, simple_loss=0.2383, pruned_loss=0.04633, over 4826.00 frames. ], tot_loss[loss=0.1887, simple_loss=0.2561, pruned_loss=0.06065, over 954709.23 frames. ], batch size: 33, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:20:22,117 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.04 vs. limit=5.0
2023-03-26 16:20:45,672 INFO [finetune.py:976] (2/7) Epoch 13, batch 4700, loss[loss=0.1253, simple_loss=0.1991, pruned_loss=0.02577, over 4940.00 frames. ], tot_loss[loss=0.187, simple_loss=0.2537, pruned_loss=0.06009, over 953511.68 frames. ], batch size: 38, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:20:50,434 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.031e+02 1.614e+02 1.909e+02 2.257e+02 3.771e+02, threshold=3.817e+02, percent-clipped=0.0
2023-03-26 16:20:52,822 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0543, 1.7163, 2.3959, 1.6238, 2.0845, 2.1829, 1.7085, 2.4251], device='cuda:2'), covar=tensor([0.1303, 0.2039, 0.1397, 0.2007, 0.0895, 0.1529, 0.2560, 0.0871], device='cuda:2'), in_proj_covar=tensor([0.0197, 0.0207, 0.0194, 0.0192, 0.0180, 0.0216, 0.0218, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 16:21:10,716 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3110, 2.9216, 2.7920, 1.1887, 2.9931, 2.2669, 0.6694, 1.8005], device='cuda:2'), covar=tensor([0.2538, 0.2595, 0.2048, 0.3695, 0.1532, 0.1189, 0.4397, 0.1808], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0175, 0.0161, 0.0129, 0.0157, 0.0122, 0.0148, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 16:21:18,783 INFO [finetune.py:976] (2/7) Epoch 13, batch 4750, loss[loss=0.1435, simple_loss=0.2168, pruned_loss=0.03511, over 4777.00 frames. ], tot_loss[loss=0.1839, simple_loss=0.2507, pruned_loss=0.05856, over 953070.11 frames. ], batch size: 28, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:21:37,290 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3911, 1.5027, 1.2222, 1.3955, 1.7413, 1.6085, 1.4712, 1.2505], device='cuda:2'), covar=tensor([0.0367, 0.0293, 0.0555, 0.0300, 0.0219, 0.0468, 0.0306, 0.0417], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0108, 0.0138, 0.0111, 0.0100, 0.0104, 0.0095, 0.0108], device='cuda:2'), out_proj_covar=tensor([7.2447e-05, 8.3535e-05, 1.0938e-04, 8.6621e-05, 7.8049e-05, 7.6998e-05, 7.1396e-05, 8.2813e-05], device='cuda:2')
2023-03-26 16:21:51,903 INFO [finetune.py:976] (2/7) Epoch 13, batch 4800, loss[loss=0.2365, simple_loss=0.3119, pruned_loss=0.08061, over 4810.00 frames. ], tot_loss[loss=0.1855, simple_loss=0.2525, pruned_loss=0.05927, over 954394.24 frames. ], batch size: 45, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:21:57,193 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.195e+02 1.637e+02 2.007e+02 2.318e+02 3.852e+02, threshold=4.014e+02, percent-clipped=1.0
2023-03-26 16:22:03,311 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=73550.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:22:24,805 INFO [finetune.py:976] (2/7) Epoch 13, batch 4850, loss[loss=0.1885, simple_loss=0.2585, pruned_loss=0.05928, over 4816.00 frames. ], tot_loss[loss=0.1881, simple_loss=0.2559, pruned_loss=0.06017, over 956125.66 frames. ], batch size: 38, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:22:30,110 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=73590.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:22:33,148 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.76 vs. limit=2.0
2023-03-26 16:22:37,208 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=73598.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:22:40,360 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7610, 1.6548, 2.1513, 3.4270, 2.4311, 2.3617, 1.0466, 2.7113], device='cuda:2'), covar=tensor([0.1597, 0.1277, 0.1181, 0.0500, 0.0691, 0.1359, 0.1663, 0.0572], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0115, 0.0133, 0.0165, 0.0101, 0.0138, 0.0126, 0.0103], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 16:23:00,190 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=73620.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 16:23:08,514 INFO [finetune.py:976] (2/7) Epoch 13, batch 4900, loss[loss=0.1954, simple_loss=0.2806, pruned_loss=0.05511, over 4892.00 frames. ], tot_loss[loss=0.1903, simple_loss=0.2582, pruned_loss=0.0612, over 954887.20 frames. ], batch size: 35, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:23:14,277 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.113e+02 1.751e+02 2.108e+02 2.596e+02 5.059e+02, threshold=4.217e+02, percent-clipped=3.0
2023-03-26 16:23:19,742 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=73649.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:23:21,029 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=73651.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:23:31,880 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=73668.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 16:23:41,382 INFO [finetune.py:976] (2/7) Epoch 13, batch 4950, loss[loss=0.1789, simple_loss=0.2586, pruned_loss=0.04961, over 4923.00 frames. ], tot_loss[loss=0.1907, simple_loss=0.2593, pruned_loss=0.0611, over 955714.75 frames. ], batch size: 38, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:23:46,323 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0
2023-03-26 16:24:05,676 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.11 vs. limit=5.0
2023-03-26 16:24:11,629 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.72 vs. limit=2.0
2023-03-26 16:24:24,416 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=73732.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:24:24,937 INFO [finetune.py:976] (2/7) Epoch 13, batch 5000, loss[loss=0.1944, simple_loss=0.2556, pruned_loss=0.06656, over 4117.00 frames. ], tot_loss[loss=0.1887, simple_loss=0.257, pruned_loss=0.06016, over 955193.68 frames. ], batch size: 65, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:24:33,743 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.082e+02 1.586e+02 1.888e+02 2.371e+02 3.310e+02, threshold=3.776e+02, percent-clipped=1.0
2023-03-26 16:25:17,331 INFO [finetune.py:976] (2/7) Epoch 13, batch 5050, loss[loss=0.1772, simple_loss=0.2356, pruned_loss=0.05937, over 4902.00 frames. ], tot_loss[loss=0.1865, simple_loss=0.2542, pruned_loss=0.05937, over 956507.17 frames. ], batch size: 43, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:25:26,625 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=73793.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:25:40,940 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.3287, 4.6491, 4.8829, 5.1436, 5.0664, 4.7505, 5.4387, 1.5221], device='cuda:2'), covar=tensor([0.0649, 0.0715, 0.0719, 0.0811, 0.1091, 0.1387, 0.0532, 0.5644], device='cuda:2'), in_proj_covar=tensor([0.0350, 0.0243, 0.0277, 0.0291, 0.0333, 0.0280, 0.0302, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 16:25:53,900 INFO [finetune.py:976] (2/7) Epoch 13, batch 5100, loss[loss=0.1506, simple_loss=0.2173, pruned_loss=0.0419, over 4829.00 frames. ], tot_loss[loss=0.1844, simple_loss=0.2517, pruned_loss=0.05856, over 955702.98 frames. ], batch size: 30, lr: 3.57e-03, grad_scale: 16.0
2023-03-26 16:25:59,155 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.804e+01 1.435e+02 1.752e+02 2.086e+02 3.868e+02, threshold=3.504e+02, percent-clipped=1.0
2023-03-26 16:26:03,555 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2111, 2.1874, 1.6548, 2.2360, 2.2150, 1.9451, 2.6082, 2.3009], device='cuda:2'), covar=tensor([0.1489, 0.2315, 0.3384, 0.2732, 0.2726, 0.1840, 0.3320, 0.1811], device='cuda:2'), in_proj_covar=tensor([0.0182, 0.0190, 0.0237, 0.0258, 0.0247, 0.0202, 0.0215, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 16:26:11,156 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.95 vs. limit=2.0
2023-03-26 16:26:27,642 INFO [finetune.py:976] (2/7) Epoch 13, batch 5150, loss[loss=0.2048, simple_loss=0.2677, pruned_loss=0.071, over 4904.00 frames. ], tot_loss[loss=0.1851, simple_loss=0.2521, pruned_loss=0.05904, over 956717.76 frames. ], batch size: 35, lr: 3.57e-03, grad_scale: 16.0
2023-03-26 16:26:53,725 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8930, 4.8355, 4.5694, 2.8498, 4.9809, 3.8674, 1.1484, 3.2051], device='cuda:2'), covar=tensor([0.2390, 0.2120, 0.1427, 0.2760, 0.0748, 0.0769, 0.4435, 0.1470], device='cuda:2'), in_proj_covar=tensor([0.0150, 0.0174, 0.0161, 0.0128, 0.0157, 0.0122, 0.0147, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 16:26:55,555 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1905, 1.8568, 2.5696, 4.2738, 3.0100, 2.7820, 1.0478, 3.5973], device='cuda:2'), covar=tensor([0.1726, 0.1505, 0.1356, 0.0457, 0.0686, 0.1349, 0.1869, 0.0376], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0115, 0.0133, 0.0165, 0.0101, 0.0138, 0.0126, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 16:26:56,221 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0841, 1.9792, 1.6641, 1.8959, 1.8575, 1.8443, 1.8882, 2.6217], device='cuda:2'), covar=tensor([0.4145, 0.4817, 0.3555, 0.4301, 0.4386, 0.2503, 0.4254, 0.1693], device='cuda:2'), in_proj_covar=tensor([0.0286, 0.0260, 0.0224, 0.0276, 0.0246, 0.0213, 0.0247, 0.0223], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 16:27:01,331 INFO [finetune.py:976] (2/7) Epoch 13, batch 5200, loss[loss=0.2308, simple_loss=0.3047, pruned_loss=0.07844, over 4848.00 frames. ], tot_loss[loss=0.1886, simple_loss=0.256, pruned_loss=0.06054, over 955929.60 frames. ], batch size: 44, lr: 3.57e-03, grad_scale: 16.0
2023-03-26 16:27:06,219 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.103e+02 1.675e+02 1.952e+02 2.217e+02 3.649e+02, threshold=3.904e+02, percent-clipped=2.0
2023-03-26 16:27:09,262 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1532, 1.3998, 1.4741, 0.7322, 1.3467, 1.6338, 1.7126, 1.3999], device='cuda:2'), covar=tensor([0.0859, 0.0610, 0.0485, 0.0533, 0.0503, 0.0629, 0.0329, 0.0686], device='cuda:2'), in_proj_covar=tensor([0.0125, 0.0152, 0.0123, 0.0129, 0.0131, 0.0125, 0.0142, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.2734e-05, 1.1084e-04, 8.8227e-05, 9.2571e-05, 9.2920e-05, 9.0820e-05, 1.0310e-04, 1.0531e-04], device='cuda:2')
2023-03-26 16:27:09,803 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=73946.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:27:11,713 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=73949.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:27:25,461 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. limit=2.0
2023-03-26 16:27:33,222 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0
2023-03-26 16:27:34,704 INFO [finetune.py:976] (2/7) Epoch 13, batch 5250, loss[loss=0.1824, simple_loss=0.2653, pruned_loss=0.04977, over 4904.00 frames. ], tot_loss[loss=0.1895, simple_loss=0.2579, pruned_loss=0.06057, over 955070.71 frames. ], batch size: 35, lr: 3.57e-03, grad_scale: 16.0
2023-03-26 16:27:43,893 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=7.26 vs. limit=5.0
2023-03-26 16:27:44,365 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=73997.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:28:11,315 INFO [finetune.py:976] (2/7) Epoch 13, batch 5300, loss[loss=0.2249, simple_loss=0.289, pruned_loss=0.08039, over 4773.00 frames. ], tot_loss[loss=0.1913, simple_loss=0.2595, pruned_loss=0.06151, over 953941.63 frames. ], batch size: 51, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:28:17,126 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.042e+02 1.638e+02 2.059e+02 2.515e+02 4.122e+02, threshold=4.117e+02, percent-clipped=3.0
2023-03-26 16:28:33,135 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=74064.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:28:45,017 INFO [finetune.py:976] (2/7) Epoch 13, batch 5350, loss[loss=0.2023, simple_loss=0.2681, pruned_loss=0.06827, over 4887.00 frames. ], tot_loss[loss=0.1915, simple_loss=0.2598, pruned_loss=0.06159, over 953899.76 frames. ], batch size: 32, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:28:46,281 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8984, 4.5523, 4.3588, 2.7024, 4.7566, 3.6967, 0.7998, 3.1801], device='cuda:2'), covar=tensor([0.2377, 0.1475, 0.1332, 0.2653, 0.0753, 0.0717, 0.4661, 0.1285], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0174, 0.0161, 0.0128, 0.0157, 0.0122, 0.0146, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 16:28:48,546 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=74088.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:29:05,719 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1703, 2.1928, 1.9830, 2.2888, 2.7808, 2.1624, 2.3207, 1.6895], device='cuda:2'), covar=tensor([0.2143, 0.1896, 0.1862, 0.1639, 0.1725, 0.1117, 0.1959, 0.1819], device='cuda:2'), in_proj_covar=tensor([0.0236, 0.0206, 0.0209, 0.0189, 0.0239, 0.0182, 0.0212, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 16:29:10,660 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0
2023-03-26 16:29:12,965 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=74125.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:29:14,757 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8384, 3.9697, 3.7435, 1.9309, 4.0760, 3.2010, 0.8151, 2.6896], device='cuda:2'), covar=tensor([0.2115, 0.1682, 0.1466, 0.3020, 0.0849, 0.0832, 0.4354, 0.1319], device='cuda:2'), in_proj_covar=tensor([0.0150, 0.0173, 0.0160, 0.0127, 0.0156, 0.0121, 0.0145, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 16:29:18,157 INFO [finetune.py:976] (2/7) Epoch 13, batch 5400, loss[loss=0.1581, simple_loss=0.2313, pruned_loss=0.04246, over 4746.00 frames. ], tot_loss[loss=0.1887, simple_loss=0.2565, pruned_loss=0.06041, over 954612.99 frames. ], batch size: 26, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:29:27,916 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.066e+02 1.534e+02 1.853e+02 2.261e+02 4.254e+02, threshold=3.706e+02, percent-clipped=1.0
2023-03-26 16:30:11,887 INFO [finetune.py:976] (2/7) Epoch 13, batch 5450, loss[loss=0.1735, simple_loss=0.2449, pruned_loss=0.05104, over 4907.00 frames. ], tot_loss[loss=0.1856, simple_loss=0.2531, pruned_loss=0.05907, over 954781.45 frames. ], batch size: 36, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:30:19,483 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0
2023-03-26 16:30:52,417 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-26 16:30:56,457 INFO [finetune.py:976] (2/7) Epoch 13, batch 5500, loss[loss=0.1833, simple_loss=0.247, pruned_loss=0.05976, over 4842.00 frames. ], tot_loss[loss=0.1827, simple_loss=0.2499, pruned_loss=0.0578, over 954990.09 frames. ], batch size: 44, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:31:01,346 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=74240.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:31:01,847 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.610e+01 1.534e+02 1.911e+02 2.187e+02 5.924e+02, threshold=3.822e+02, percent-clipped=2.0
2023-03-26 16:31:05,016 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=74246.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:31:12,155 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7844, 1.4516, 0.8959, 1.7018, 1.9830, 1.6083, 1.5229, 1.6874], device='cuda:2'), covar=tensor([0.1318, 0.1753, 0.1889, 0.1063, 0.1982, 0.1763, 0.1273, 0.1676], device='cuda:2'), in_proj_covar=tensor([0.0088, 0.0094, 0.0111, 0.0091, 0.0119, 0.0093, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 16:31:30,480 INFO [finetune.py:976] (2/7) Epoch 13, batch 5550, loss[loss=0.1642, simple_loss=0.2389, pruned_loss=0.04473, over 4886.00 frames. ], tot_loss[loss=0.1855, simple_loss=0.2522, pruned_loss=0.05941, over 952745.56 frames. ], batch size: 32, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:31:37,728 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=74294.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:31:42,505 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=74301.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:31:50,861 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.12 vs. limit=5.0
2023-03-26 16:32:02,279 INFO [finetune.py:976] (2/7) Epoch 13, batch 5600, loss[loss=0.1886, simple_loss=0.2634, pruned_loss=0.05695, over 4805.00 frames. ], tot_loss[loss=0.1879, simple_loss=0.2556, pruned_loss=0.06007, over 952815.17 frames. ], batch size: 45, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:32:06,847 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.439e+01 1.528e+02 1.833e+02 2.251e+02 4.644e+02, threshold=3.666e+02, percent-clipped=1.0
2023-03-26 16:32:31,595 INFO [finetune.py:976] (2/7) Epoch 13, batch 5650, loss[loss=0.1675, simple_loss=0.2302, pruned_loss=0.05237, over 4747.00 frames. ], tot_loss[loss=0.1902, simple_loss=0.2583, pruned_loss=0.06099, over 952765.08 frames. ], batch size: 23, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:32:35,045 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=74388.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:32:49,735 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0
2023-03-26 16:32:54,212 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=74420.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:33:00,315 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.68 vs. limit=2.0
2023-03-26 16:33:01,852 INFO [finetune.py:976] (2/7) Epoch 13, batch 5700, loss[loss=0.2101, simple_loss=0.2528, pruned_loss=0.08367, over 4052.00 frames. ], tot_loss[loss=0.1884, simple_loss=0.2546, pruned_loss=0.06108, over 933123.85 frames. ], batch size: 17, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:33:03,646 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=74436.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:33:06,486 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.627e+01 1.487e+02 1.916e+02 2.529e+02 4.839e+02, threshold=3.833e+02, percent-clipped=5.0
2023-03-26 16:33:15,441 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6879, 2.4785, 2.6664, 1.4241, 2.8838, 3.1477, 2.7927, 2.3309], device='cuda:2'), covar=tensor([0.0699, 0.0668, 0.0444, 0.0611, 0.0416, 0.0528, 0.0372, 0.0579], device='cuda:2'), in_proj_covar=tensor([0.0127, 0.0154, 0.0124, 0.0130, 0.0132, 0.0127, 0.0144, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.3735e-05, 1.1245e-04, 8.9077e-05, 9.3807e-05, 9.3987e-05, 9.2350e-05, 1.0441e-04, 1.0665e-04], device='cuda:2')
2023-03-26 16:33:31,118 INFO [finetune.py:976] (2/7) Epoch 14, batch 0, loss[loss=0.2003, simple_loss=0.2619, pruned_loss=0.06934, over 4865.00 frames. ], tot_loss[loss=0.2003, simple_loss=0.2619, pruned_loss=0.06934, over 4865.00 frames. ], batch size: 31, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:33:31,118 INFO [finetune.py:1001] (2/7) Computing validation loss
2023-03-26 16:33:41,691 INFO [finetune.py:1010] (2/7) Epoch 14, validation: loss=0.1582, simple_loss=0.2295, pruned_loss=0.04344, over 2265189.00 frames.
2023-03-26 16:33:41,691 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB
2023-03-26 16:33:53,661 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6738, 1.5392, 1.5670, 1.6313, 1.3897, 3.7407, 1.5450, 2.1278], device='cuda:2'), covar=tensor([0.3747, 0.2936, 0.2403, 0.2740, 0.1959, 0.0211, 0.2631, 0.1271], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0116, 0.0120, 0.0123, 0.0115, 0.0098, 0.0097, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 16:34:14,919 INFO [finetune.py:976] (2/7) Epoch 14, batch 50, loss[loss=0.1572, simple_loss=0.2191, pruned_loss=0.04769, over 4702.00 frames. ], tot_loss[loss=0.1889, simple_loss=0.2569, pruned_loss=0.06041, over 217931.12 frames. ], batch size: 23, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:34:42,636 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.553e+01 1.578e+02 1.920e+02 2.248e+02 3.729e+02, threshold=3.841e+02, percent-clipped=1.0
2023-03-26 16:35:04,283 INFO [finetune.py:976] (2/7) Epoch 14, batch 100, loss[loss=0.2468, simple_loss=0.3017, pruned_loss=0.09595, over 4896.00 frames. ], tot_loss[loss=0.1884, simple_loss=0.2546, pruned_loss=0.06112, over 380307.10 frames. ], batch size: 36, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:35:17,576 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.63 vs. limit=5.0
2023-03-26 16:35:31,065 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=74595.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:35:31,611 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=74596.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:35:41,805 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0421, 1.8389, 2.4585, 1.7156, 2.2513, 2.4250, 1.6989, 2.5276], device='cuda:2'), covar=tensor([0.1369, 0.2097, 0.1491, 0.2061, 0.0828, 0.1530, 0.2798, 0.0952], device='cuda:2'), in_proj_covar=tensor([0.0196, 0.0204, 0.0193, 0.0190, 0.0178, 0.0215, 0.0216, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 16:35:49,899 INFO [finetune.py:976] (2/7) Epoch 14, batch 150, loss[loss=0.1829, simple_loss=0.2565, pruned_loss=0.05463, over 4834.00 frames. ], tot_loss[loss=0.1835, simple_loss=0.2492, pruned_loss=0.05895, over 509310.83 frames. ], batch size: 33, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:35:53,591 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1740, 2.0783, 1.9429, 2.2026, 1.8121, 4.7910, 1.9069, 2.3929], device='cuda:2'), covar=tensor([0.3081, 0.2292, 0.1895, 0.2080, 0.1419, 0.0105, 0.2192, 0.1144], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0116, 0.0121, 0.0124, 0.0116, 0.0098, 0.0097, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 16:36:18,543 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.42 vs. limit=5.0
2023-03-26 16:36:22,702 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.045e+02 1.552e+02 1.795e+02 2.139e+02 3.747e+02, threshold=3.589e+02, percent-clipped=0.0
2023-03-26 16:36:27,709 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0
2023-03-26 16:36:32,541 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=74656.0, num_to_drop=0, layers_to_drop=set()
], batch size: 47, lr: 3.57e-03, grad_scale: 32.0 2023-03-26 16:36:49,131 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0363, 1.6050, 1.2096, 1.8658, 2.1258, 1.8896, 1.7310, 1.8653], device='cuda:2'), covar=tensor([0.1125, 0.1646, 0.1745, 0.0952, 0.1707, 0.1879, 0.1140, 0.1436], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0094, 0.0111, 0.0092, 0.0119, 0.0093, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 16:36:55,112 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6973, 1.5879, 1.5474, 1.5989, 1.1436, 3.3664, 1.3093, 1.8135], device='cuda:2'), covar=tensor([0.3519, 0.2612, 0.2299, 0.2607, 0.2021, 0.0220, 0.2697, 0.1334], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0116, 0.0121, 0.0124, 0.0115, 0.0098, 0.0097, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 16:37:02,943 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6416, 1.4811, 1.4872, 1.5409, 1.0995, 3.3268, 1.2834, 1.7385], device='cuda:2'), covar=tensor([0.3424, 0.2613, 0.2261, 0.2477, 0.1932, 0.0205, 0.2716, 0.1361], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0116, 0.0121, 0.0124, 0.0115, 0.0098, 0.0097, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 16:37:09,842 INFO [finetune.py:976] (2/7) Epoch 14, batch 250, loss[loss=0.1306, simple_loss=0.2039, pruned_loss=0.02861, over 4777.00 frames. ], tot_loss[loss=0.1866, simple_loss=0.2526, pruned_loss=0.06026, over 687537.01 frames. ], batch size: 26, lr: 3.57e-03, grad_scale: 32.0 2023-03-26 16:37:15,966 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=74720.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:37:30,097 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.051e+02 1.719e+02 2.081e+02 2.468e+02 4.342e+02, threshold=4.162e+02, percent-clipped=2.0 2023-03-26 16:37:42,698 INFO [finetune.py:976] (2/7) Epoch 14, batch 300, loss[loss=0.1613, simple_loss=0.2362, pruned_loss=0.04322, over 4813.00 frames. ], tot_loss[loss=0.1898, simple_loss=0.2568, pruned_loss=0.06144, over 747594.46 frames. ], batch size: 40, lr: 3.56e-03, grad_scale: 32.0 2023-03-26 16:37:42,829 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1875, 1.2925, 1.3294, 0.7334, 1.2787, 1.5558, 1.6307, 1.2462], device='cuda:2'), covar=tensor([0.0979, 0.0645, 0.0505, 0.0531, 0.0487, 0.0631, 0.0372, 0.0722], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0153, 0.0123, 0.0130, 0.0131, 0.0127, 0.0142, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.3224e-05, 1.1182e-04, 8.8622e-05, 9.3363e-05, 9.2616e-05, 9.1726e-05, 1.0341e-04, 1.0595e-04], device='cuda:2') 2023-03-26 16:37:48,012 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=74768.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:38:03,448 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.13 vs. limit=2.0 2023-03-26 16:38:16,371 INFO [finetune.py:976] (2/7) Epoch 14, batch 350, loss[loss=0.1872, simple_loss=0.2588, pruned_loss=0.05782, over 4785.00 frames. ], tot_loss[loss=0.1915, simple_loss=0.2588, pruned_loss=0.06211, over 794542.72 frames. 
], batch size: 29, lr: 3.56e-03, grad_scale: 32.0 2023-03-26 16:38:36,813 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.618e+01 1.648e+02 1.967e+02 2.475e+02 5.107e+02, threshold=3.933e+02, percent-clipped=3.0 2023-03-26 16:38:49,813 INFO [finetune.py:976] (2/7) Epoch 14, batch 400, loss[loss=0.2188, simple_loss=0.2837, pruned_loss=0.07695, over 4864.00 frames. ], tot_loss[loss=0.1935, simple_loss=0.2608, pruned_loss=0.06304, over 830478.78 frames. ], batch size: 34, lr: 3.56e-03, grad_scale: 32.0 2023-03-26 16:39:11,277 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.65 vs. limit=5.0 2023-03-26 16:39:13,573 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=74896.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:39:22,425 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=74909.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:39:23,514 INFO [finetune.py:976] (2/7) Epoch 14, batch 450, loss[loss=0.258, simple_loss=0.291, pruned_loss=0.1125, over 4831.00 frames. ], tot_loss[loss=0.1931, simple_loss=0.2603, pruned_loss=0.06296, over 857857.62 frames. ], batch size: 30, lr: 3.56e-03, grad_scale: 32.0 2023-03-26 16:39:43,596 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.160e+02 1.587e+02 1.868e+02 2.260e+02 4.285e+02, threshold=3.737e+02, percent-clipped=2.0 2023-03-26 16:39:45,925 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=74944.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:39:50,153 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=74951.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:39:58,628 INFO [finetune.py:976] (2/7) Epoch 14, batch 500, loss[loss=0.2824, simple_loss=0.3212, pruned_loss=0.1218, over 4334.00 frames. ], tot_loss[loss=0.1902, simple_loss=0.2572, pruned_loss=0.06156, over 878991.78 frames. ], batch size: 66, lr: 3.56e-03, grad_scale: 32.0 2023-03-26 16:40:00,594 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=74964.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:40:00,658 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-26 16:40:08,960 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=74970.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:40:45,475 INFO [finetune.py:976] (2/7) Epoch 14, batch 550, loss[loss=0.2005, simple_loss=0.2651, pruned_loss=0.06798, over 4837.00 frames. ], tot_loss[loss=0.1876, simple_loss=0.2543, pruned_loss=0.06051, over 895615.29 frames. ], batch size: 33, lr: 3.56e-03, grad_scale: 32.0 2023-03-26 16:40:57,875 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=75025.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:41:09,612 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.017e+02 1.564e+02 1.846e+02 2.204e+02 7.411e+02, threshold=3.691e+02, percent-clipped=3.0 2023-03-26 16:41:32,953 INFO [finetune.py:976] (2/7) Epoch 14, batch 600, loss[loss=0.1917, simple_loss=0.265, pruned_loss=0.05918, over 4854.00 frames. ], tot_loss[loss=0.1878, simple_loss=0.2546, pruned_loss=0.06047, over 908214.66 frames. ], batch size: 44, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:42:10,229 INFO [finetune.py:976] (2/7) Epoch 14, batch 650, loss[loss=0.1619, simple_loss=0.2509, pruned_loss=0.03644, over 4793.00 frames. 
], tot_loss[loss=0.1895, simple_loss=0.2568, pruned_loss=0.06105, over 919338.27 frames. ], batch size: 29, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:42:27,566 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9425, 1.8549, 2.1031, 1.3912, 1.9976, 2.1842, 2.0402, 1.6347], device='cuda:2'), covar=tensor([0.0563, 0.0656, 0.0574, 0.0858, 0.0659, 0.0644, 0.0575, 0.1079], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0133, 0.0142, 0.0123, 0.0124, 0.0141, 0.0141, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 16:42:30,910 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.139e+02 1.620e+02 1.922e+02 2.248e+02 3.855e+02, threshold=3.845e+02, percent-clipped=1.0 2023-03-26 16:42:43,796 INFO [finetune.py:976] (2/7) Epoch 14, batch 700, loss[loss=0.1984, simple_loss=0.2664, pruned_loss=0.06517, over 4885.00 frames. ], tot_loss[loss=0.1922, simple_loss=0.2593, pruned_loss=0.0626, over 927290.08 frames. ], batch size: 32, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:42:45,100 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1867, 1.7444, 2.5019, 4.1152, 2.9004, 2.7561, 0.8726, 3.3279], device='cuda:2'), covar=tensor([0.1679, 0.1465, 0.1338, 0.0490, 0.0683, 0.1707, 0.2035, 0.0469], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0116, 0.0133, 0.0164, 0.0100, 0.0137, 0.0126, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 16:43:16,884 INFO [finetune.py:976] (2/7) Epoch 14, batch 750, loss[loss=0.154, simple_loss=0.2352, pruned_loss=0.03643, over 4766.00 frames. ], tot_loss[loss=0.1927, simple_loss=0.2604, pruned_loss=0.06248, over 933574.06 frames. ], batch size: 28, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:43:28,266 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=75228.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:43:30,757 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8259, 1.2931, 1.8595, 1.7467, 1.5522, 1.5091, 1.7175, 1.6629], device='cuda:2'), covar=tensor([0.3330, 0.3705, 0.2973, 0.3496, 0.4412, 0.3521, 0.4133, 0.2997], device='cuda:2'), in_proj_covar=tensor([0.0242, 0.0237, 0.0255, 0.0263, 0.0261, 0.0235, 0.0276, 0.0234], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 16:43:37,703 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.931e+01 1.583e+02 1.834e+02 2.163e+02 4.783e+02, threshold=3.668e+02, percent-clipped=1.0 2023-03-26 16:43:44,259 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=75251.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:43:50,665 INFO [finetune.py:976] (2/7) Epoch 14, batch 800, loss[loss=0.2008, simple_loss=0.2632, pruned_loss=0.0692, over 4831.00 frames. ], tot_loss[loss=0.1924, simple_loss=0.2598, pruned_loss=0.06245, over 937417.53 frames. 
], batch size: 49, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:43:53,645 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=75265.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:43:56,658 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=75270.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:44:06,645 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.6581, 3.2737, 2.8825, 1.4535, 3.1234, 2.6746, 2.4989, 2.8247], device='cuda:2'), covar=tensor([0.0949, 0.0852, 0.1905, 0.2210, 0.1526, 0.1994, 0.2094, 0.1081], device='cuda:2'), in_proj_covar=tensor([0.0165, 0.0194, 0.0198, 0.0184, 0.0212, 0.0207, 0.0222, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 16:44:09,556 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=75289.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:44:16,560 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=75299.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:44:24,289 INFO [finetune.py:976] (2/7) Epoch 14, batch 850, loss[loss=0.1456, simple_loss=0.2175, pruned_loss=0.03686, over 4823.00 frames. ], tot_loss[loss=0.1892, simple_loss=0.2564, pruned_loss=0.06102, over 942618.43 frames. ], batch size: 41, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:44:30,270 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=75320.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:44:37,475 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=75331.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 16:44:44,944 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.704e+01 1.586e+02 1.985e+02 2.275e+02 3.825e+02, threshold=3.970e+02, percent-clipped=2.0 2023-03-26 16:44:48,775 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-26 16:44:57,416 INFO [finetune.py:976] (2/7) Epoch 14, batch 900, loss[loss=0.1986, simple_loss=0.2695, pruned_loss=0.06389, over 4904.00 frames. ], tot_loss[loss=0.188, simple_loss=0.2542, pruned_loss=0.06092, over 944519.44 frames. ], batch size: 37, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:45:44,930 INFO [finetune.py:976] (2/7) Epoch 14, batch 950, loss[loss=0.1721, simple_loss=0.2359, pruned_loss=0.05411, over 4156.00 frames. ], tot_loss[loss=0.1879, simple_loss=0.2534, pruned_loss=0.06119, over 944704.53 frames. ], batch size: 17, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:45:50,487 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2711, 2.1119, 2.2251, 0.9484, 2.4173, 2.7229, 2.1748, 2.0761], device='cuda:2'), covar=tensor([0.1089, 0.0794, 0.0642, 0.0834, 0.0572, 0.1043, 0.0599, 0.0742], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0153, 0.0122, 0.0129, 0.0131, 0.0127, 0.0142, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.2945e-05, 1.1141e-04, 8.7772e-05, 9.2910e-05, 9.2785e-05, 9.1585e-05, 1.0336e-04, 1.0588e-04], device='cuda:2') 2023-03-26 16:45:52,306 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.79 vs. 
limit=2.0 2023-03-26 16:46:05,784 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.942e+01 1.566e+02 1.871e+02 2.243e+02 4.539e+02, threshold=3.743e+02, percent-clipped=1.0 2023-03-26 16:46:18,859 INFO [finetune.py:976] (2/7) Epoch 14, batch 1000, loss[loss=0.2251, simple_loss=0.2931, pruned_loss=0.07851, over 4206.00 frames. ], tot_loss[loss=0.1891, simple_loss=0.2554, pruned_loss=0.0614, over 946514.20 frames. ], batch size: 65, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:47:07,211 INFO [finetune.py:976] (2/7) Epoch 14, batch 1050, loss[loss=0.1565, simple_loss=0.2429, pruned_loss=0.03503, over 4800.00 frames. ], tot_loss[loss=0.1902, simple_loss=0.257, pruned_loss=0.06171, over 948319.29 frames. ], batch size: 29, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:47:31,085 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.109e+01 1.606e+02 2.003e+02 2.356e+02 8.983e+02, threshold=4.007e+02, percent-clipped=2.0 2023-03-26 16:47:44,013 INFO [finetune.py:976] (2/7) Epoch 14, batch 1100, loss[loss=0.21, simple_loss=0.2694, pruned_loss=0.07524, over 4853.00 frames. ], tot_loss[loss=0.192, simple_loss=0.259, pruned_loss=0.06248, over 949210.94 frames. ], batch size: 44, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:47:47,089 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=75565.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:48:00,099 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=75584.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:48:18,069 INFO [finetune.py:976] (2/7) Epoch 14, batch 1150, loss[loss=0.1889, simple_loss=0.2534, pruned_loss=0.06219, over 4806.00 frames. ], tot_loss[loss=0.1928, simple_loss=0.26, pruned_loss=0.06281, over 948356.59 frames. ], batch size: 51, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:48:19,334 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=75613.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:48:24,058 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=75620.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:48:28,194 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=75626.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 16:48:38,760 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.806e+01 1.586e+02 1.986e+02 2.337e+02 5.787e+02, threshold=3.972e+02, percent-clipped=2.0 2023-03-26 16:48:51,183 INFO [finetune.py:976] (2/7) Epoch 14, batch 1200, loss[loss=0.2071, simple_loss=0.2598, pruned_loss=0.07719, over 4871.00 frames. ], tot_loss[loss=0.192, simple_loss=0.259, pruned_loss=0.06244, over 950754.04 frames. ], batch size: 31, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:48:56,005 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.51 vs. 
limit=2.0 2023-03-26 16:48:56,399 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=75668.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:48:58,295 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0209, 1.4996, 2.5385, 1.5093, 2.1907, 2.2538, 1.4883, 2.3371], device='cuda:2'), covar=tensor([0.1442, 0.2355, 0.1024, 0.1954, 0.1000, 0.1410, 0.2833, 0.1087], device='cuda:2'), in_proj_covar=tensor([0.0194, 0.0204, 0.0191, 0.0189, 0.0175, 0.0213, 0.0214, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 16:49:24,700 INFO [finetune.py:976] (2/7) Epoch 14, batch 1250, loss[loss=0.1884, simple_loss=0.2547, pruned_loss=0.06103, over 4816.00 frames. ], tot_loss[loss=0.1892, simple_loss=0.2562, pruned_loss=0.06108, over 952673.07 frames. ], batch size: 38, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:49:28,434 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-26 16:49:45,240 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.316e+01 1.497e+02 1.829e+02 2.293e+02 4.240e+02, threshold=3.659e+02, percent-clipped=2.0 2023-03-26 16:49:48,966 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0 2023-03-26 16:49:51,973 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1733, 2.1865, 1.7670, 2.3020, 2.0429, 2.0064, 2.0593, 2.8950], device='cuda:2'), covar=tensor([0.4346, 0.4697, 0.3704, 0.4421, 0.4688, 0.2828, 0.4774, 0.1777], device='cuda:2'), in_proj_covar=tensor([0.0289, 0.0262, 0.0226, 0.0278, 0.0248, 0.0215, 0.0250, 0.0224], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 16:49:53,145 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7314, 1.4847, 2.0582, 3.0745, 2.1389, 2.2487, 1.1795, 2.4910], device='cuda:2'), covar=tensor([0.1686, 0.1484, 0.1247, 0.0611, 0.0833, 0.1278, 0.1704, 0.0648], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0116, 0.0133, 0.0165, 0.0101, 0.0138, 0.0126, 0.0103], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 16:49:57,807 INFO [finetune.py:976] (2/7) Epoch 14, batch 1300, loss[loss=0.1525, simple_loss=0.2133, pruned_loss=0.04579, over 4693.00 frames. ], tot_loss[loss=0.1845, simple_loss=0.2511, pruned_loss=0.0589, over 953118.99 frames. ], batch size: 23, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:50:23,272 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0626, 1.7123, 1.2323, 2.0111, 2.4189, 1.6655, 1.9877, 1.9028], device='cuda:2'), covar=tensor([0.1344, 0.1816, 0.1780, 0.1024, 0.1692, 0.1848, 0.1202, 0.1738], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0095, 0.0112, 0.0092, 0.0119, 0.0093, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 16:50:31,734 INFO [finetune.py:976] (2/7) Epoch 14, batch 1350, loss[loss=0.201, simple_loss=0.2633, pruned_loss=0.06934, over 4901.00 frames. ], tot_loss[loss=0.185, simple_loss=0.2518, pruned_loss=0.05911, over 954834.57 frames. 
], batch size: 35, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:51:07,732 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.659e+01 1.636e+02 1.953e+02 2.256e+02 6.748e+02, threshold=3.906e+02, percent-clipped=1.0 2023-03-26 16:51:19,719 INFO [finetune.py:976] (2/7) Epoch 14, batch 1400, loss[loss=0.2163, simple_loss=0.2877, pruned_loss=0.07243, over 4868.00 frames. ], tot_loss[loss=0.1895, simple_loss=0.2569, pruned_loss=0.06099, over 957686.19 frames. ], batch size: 31, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:51:35,778 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=75884.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:51:51,270 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.8095, 3.3750, 3.5602, 3.6819, 3.5956, 3.3463, 3.9040, 1.2647], device='cuda:2'), covar=tensor([0.0842, 0.0899, 0.0842, 0.1059, 0.1299, 0.1689, 0.0858, 0.5075], device='cuda:2'), in_proj_covar=tensor([0.0351, 0.0246, 0.0279, 0.0293, 0.0334, 0.0286, 0.0304, 0.0298], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 16:51:53,045 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.4586, 3.8518, 4.1252, 4.3019, 4.2173, 3.9657, 4.5746, 1.3224], device='cuda:2'), covar=tensor([0.0734, 0.0883, 0.0837, 0.1030, 0.1180, 0.1598, 0.0722, 0.5690], device='cuda:2'), in_proj_covar=tensor([0.0351, 0.0246, 0.0279, 0.0293, 0.0333, 0.0286, 0.0304, 0.0298], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 16:51:53,578 INFO [finetune.py:976] (2/7) Epoch 14, batch 1450, loss[loss=0.1883, simple_loss=0.2669, pruned_loss=0.05486, over 4915.00 frames. ], tot_loss[loss=0.1904, simple_loss=0.2586, pruned_loss=0.06111, over 957893.98 frames. ], batch size: 38, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:51:54,363 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.44 vs. limit=5.0 2023-03-26 16:51:54,495 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-26 16:52:08,067 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=75926.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 16:52:17,401 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=75932.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:52:27,769 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.094e+02 1.664e+02 1.939e+02 2.609e+02 1.085e+03, threshold=3.877e+02, percent-clipped=5.0 2023-03-26 16:52:44,458 INFO [finetune.py:976] (2/7) Epoch 14, batch 1500, loss[loss=0.1868, simple_loss=0.2668, pruned_loss=0.05338, over 4905.00 frames. ], tot_loss[loss=0.1929, simple_loss=0.2612, pruned_loss=0.06226, over 957361.91 frames. ], batch size: 36, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:52:47,795 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0 2023-03-26 16:52:52,970 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=75974.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:53:19,429 INFO [finetune.py:976] (2/7) Epoch 14, batch 1550, loss[loss=0.1742, simple_loss=0.2453, pruned_loss=0.05158, over 4898.00 frames. ], tot_loss[loss=0.1916, simple_loss=0.2596, pruned_loss=0.06178, over 956182.09 frames. 
], batch size: 37, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:53:40,206 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.569e+01 1.489e+02 1.761e+02 2.263e+02 3.823e+02, threshold=3.522e+02, percent-clipped=0.0 2023-03-26 16:53:45,168 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6020, 1.6025, 1.3443, 1.5493, 2.0249, 1.8445, 1.7027, 1.4677], device='cuda:2'), covar=tensor([0.0322, 0.0282, 0.0562, 0.0327, 0.0185, 0.0518, 0.0239, 0.0353], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0108, 0.0140, 0.0113, 0.0100, 0.0105, 0.0095, 0.0107], device='cuda:2'), out_proj_covar=tensor([7.2558e-05, 8.3880e-05, 1.1054e-04, 8.7585e-05, 7.7963e-05, 7.8015e-05, 7.1707e-05, 8.2008e-05], device='cuda:2') 2023-03-26 16:53:53,247 INFO [finetune.py:976] (2/7) Epoch 14, batch 1600, loss[loss=0.17, simple_loss=0.2365, pruned_loss=0.0518, over 4832.00 frames. ], tot_loss[loss=0.1903, simple_loss=0.2578, pruned_loss=0.06142, over 957564.00 frames. ], batch size: 30, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:53:53,355 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4094, 1.6130, 1.8522, 1.7432, 1.6827, 3.4411, 1.4502, 1.6702], device='cuda:2'), covar=tensor([0.0976, 0.1665, 0.1042, 0.0938, 0.1450, 0.0256, 0.1395, 0.1605], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0073, 0.0077, 0.0091, 0.0080, 0.0085, 0.0078], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 16:54:26,637 INFO [finetune.py:976] (2/7) Epoch 14, batch 1650, loss[loss=0.1685, simple_loss=0.2375, pruned_loss=0.04976, over 4820.00 frames. ], tot_loss[loss=0.187, simple_loss=0.254, pruned_loss=0.06, over 959032.14 frames. ], batch size: 25, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:54:47,811 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.271e+01 1.580e+02 1.872e+02 2.187e+02 4.946e+02, threshold=3.744e+02, percent-clipped=3.0 2023-03-26 16:55:00,264 INFO [finetune.py:976] (2/7) Epoch 14, batch 1700, loss[loss=0.2115, simple_loss=0.2715, pruned_loss=0.07575, over 4739.00 frames. ], tot_loss[loss=0.1861, simple_loss=0.2525, pruned_loss=0.05979, over 958447.00 frames. ], batch size: 59, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:55:34,231 INFO [finetune.py:976] (2/7) Epoch 14, batch 1750, loss[loss=0.2304, simple_loss=0.3016, pruned_loss=0.0796, over 4814.00 frames. ], tot_loss[loss=0.1873, simple_loss=0.2546, pruned_loss=0.06003, over 958457.66 frames. ], batch size: 45, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:55:36,203 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3103, 1.3949, 1.4525, 1.4672, 1.5486, 2.9597, 1.3123, 1.4768], device='cuda:2'), covar=tensor([0.1038, 0.1859, 0.1127, 0.1052, 0.1665, 0.0268, 0.1611, 0.1807], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0080, 0.0073, 0.0077, 0.0091, 0.0080, 0.0084, 0.0078], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 16:55:55,253 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.088e+02 1.620e+02 1.973e+02 2.349e+02 4.562e+02, threshold=3.945e+02, percent-clipped=4.0 2023-03-26 16:56:04,964 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=2.01 vs. limit=2.0 2023-03-26 16:56:15,150 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.86 vs. 
limit=2.0 2023-03-26 16:56:17,743 INFO [finetune.py:976] (2/7) Epoch 14, batch 1800, loss[loss=0.1994, simple_loss=0.2902, pruned_loss=0.0543, over 4905.00 frames. ], tot_loss[loss=0.1884, simple_loss=0.2565, pruned_loss=0.06011, over 957340.61 frames. ], batch size: 36, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:56:27,657 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=76269.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 16:56:41,575 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6041, 2.8682, 2.2980, 1.8370, 2.5260, 2.9587, 2.8275, 2.4417], device='cuda:2'), covar=tensor([0.0601, 0.0500, 0.0809, 0.0844, 0.0683, 0.0641, 0.0588, 0.0872], device='cuda:2'), in_proj_covar=tensor([0.0135, 0.0134, 0.0143, 0.0125, 0.0125, 0.0143, 0.0142, 0.0165], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 16:56:47,390 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=76294.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:56:58,601 INFO [finetune.py:976] (2/7) Epoch 14, batch 1850, loss[loss=0.1815, simple_loss=0.2437, pruned_loss=0.05965, over 4753.00 frames. ], tot_loss[loss=0.1898, simple_loss=0.258, pruned_loss=0.06081, over 958037.85 frames. ], batch size: 26, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:57:07,117 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=76323.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 16:57:08,360 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8488, 1.6094, 1.4957, 1.2615, 1.6030, 1.6058, 1.5733, 2.1737], device='cuda:2'), covar=tensor([0.4170, 0.4647, 0.3458, 0.3963, 0.4072, 0.2521, 0.3932, 0.1904], device='cuda:2'), in_proj_covar=tensor([0.0286, 0.0259, 0.0224, 0.0276, 0.0246, 0.0213, 0.0247, 0.0222], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 16:57:11,383 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=76330.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 16:57:19,099 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.107e+02 1.514e+02 1.907e+02 2.299e+02 3.483e+02, threshold=3.815e+02, percent-clipped=0.0 2023-03-26 16:57:35,301 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=76355.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:57:38,872 INFO [finetune.py:976] (2/7) Epoch 14, batch 1900, loss[loss=0.1913, simple_loss=0.2643, pruned_loss=0.05913, over 4810.00 frames. ], tot_loss[loss=0.189, simple_loss=0.2581, pruned_loss=0.05998, over 958132.86 frames. ], batch size: 41, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:57:52,360 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.88 vs. limit=5.0 2023-03-26 16:57:57,152 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=76384.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 16:58:14,596 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.43 vs. limit=5.0 2023-03-26 16:58:15,528 INFO [finetune.py:976] (2/7) Epoch 14, batch 1950, loss[loss=0.1781, simple_loss=0.253, pruned_loss=0.05159, over 4824.00 frames. ], tot_loss[loss=0.1892, simple_loss=0.2579, pruned_loss=0.06022, over 958013.89 frames. 
], batch size: 38, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:58:28,718 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0 2023-03-26 16:58:31,079 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.77 vs. limit=2.0 2023-03-26 16:58:35,771 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.938e+01 1.459e+02 1.786e+02 2.082e+02 3.715e+02, threshold=3.572e+02, percent-clipped=0.0 2023-03-26 16:58:45,212 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.6189, 1.5591, 1.5603, 0.8111, 1.6038, 1.8944, 1.8295, 1.3990], device='cuda:2'), covar=tensor([0.0959, 0.0735, 0.0525, 0.0652, 0.0439, 0.0460, 0.0364, 0.0696], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0153, 0.0122, 0.0130, 0.0131, 0.0127, 0.0142, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.3406e-05, 1.1204e-04, 8.7912e-05, 9.3423e-05, 9.2941e-05, 9.1991e-05, 1.0314e-04, 1.0603e-04], device='cuda:2') 2023-03-26 16:58:49,143 INFO [finetune.py:976] (2/7) Epoch 14, batch 2000, loss[loss=0.196, simple_loss=0.2659, pruned_loss=0.063, over 4869.00 frames. ], tot_loss[loss=0.1864, simple_loss=0.2547, pruned_loss=0.0591, over 957030.80 frames. ], batch size: 31, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:58:49,843 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5207, 1.2046, 2.0546, 3.1618, 2.1096, 2.5162, 0.8308, 2.6973], device='cuda:2'), covar=tensor([0.2059, 0.2217, 0.1652, 0.0939, 0.1019, 0.1753, 0.2497, 0.0716], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0116, 0.0134, 0.0165, 0.0101, 0.0139, 0.0127, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 16:59:22,662 INFO [finetune.py:976] (2/7) Epoch 14, batch 2050, loss[loss=0.1974, simple_loss=0.2638, pruned_loss=0.06551, over 4816.00 frames. ], tot_loss[loss=0.185, simple_loss=0.2521, pruned_loss=0.05898, over 955176.90 frames. ], batch size: 38, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:59:42,964 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.833e+01 1.460e+02 1.798e+02 2.157e+02 5.136e+02, threshold=3.595e+02, percent-clipped=3.0 2023-03-26 16:59:56,036 INFO [finetune.py:976] (2/7) Epoch 14, batch 2100, loss[loss=0.135, simple_loss=0.2083, pruned_loss=0.03083, over 4773.00 frames. ], tot_loss[loss=0.1838, simple_loss=0.2504, pruned_loss=0.05866, over 955076.68 frames. 
], batch size: 26, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 17:00:10,538 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4192, 1.3693, 1.5055, 1.5312, 1.5825, 2.9372, 1.3484, 1.4606], device='cuda:2'), covar=tensor([0.0898, 0.1796, 0.1028, 0.0945, 0.1481, 0.0300, 0.1464, 0.1696], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0080, 0.0073, 0.0077, 0.0091, 0.0080, 0.0085, 0.0078], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 17:00:20,161 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2295, 2.0737, 2.7992, 1.6918, 2.4935, 2.6477, 2.0013, 2.9575], device='cuda:2'), covar=tensor([0.1605, 0.1926, 0.1503, 0.2352, 0.0924, 0.1687, 0.2695, 0.0863], device='cuda:2'), in_proj_covar=tensor([0.0195, 0.0205, 0.0192, 0.0190, 0.0177, 0.0215, 0.0218, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 17:00:29,581 INFO [finetune.py:976] (2/7) Epoch 14, batch 2150, loss[loss=0.204, simple_loss=0.2893, pruned_loss=0.05932, over 4905.00 frames. ], tot_loss[loss=0.1879, simple_loss=0.2549, pruned_loss=0.06046, over 955466.21 frames. ], batch size: 37, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 17:00:38,763 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=76625.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 17:00:40,650 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.0788, 0.9542, 0.9874, 0.5172, 0.8630, 1.1804, 1.2256, 0.9918], device='cuda:2'), covar=tensor([0.0876, 0.0555, 0.0556, 0.0502, 0.0529, 0.0624, 0.0392, 0.0658], device='cuda:2'), in_proj_covar=tensor([0.0125, 0.0152, 0.0121, 0.0129, 0.0130, 0.0126, 0.0141, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.2765e-05, 1.1106e-04, 8.7193e-05, 9.2852e-05, 9.2016e-05, 9.1054e-05, 1.0245e-04, 1.0513e-04], device='cuda:2') 2023-03-26 17:00:50,410 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.070e+02 1.680e+02 1.855e+02 2.274e+02 3.771e+02, threshold=3.710e+02, percent-clipped=2.0 2023-03-26 17:00:55,370 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=76650.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:01:02,483 INFO [finetune.py:976] (2/7) Epoch 14, batch 2200, loss[loss=0.209, simple_loss=0.2737, pruned_loss=0.07214, over 4938.00 frames. ], tot_loss[loss=0.1907, simple_loss=0.2583, pruned_loss=0.06153, over 955728.48 frames. ], batch size: 38, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 17:01:12,827 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9529, 1.8709, 2.4364, 1.6063, 2.1958, 2.3383, 1.8329, 2.5591], device='cuda:2'), covar=tensor([0.1375, 0.2184, 0.1346, 0.2039, 0.0963, 0.1543, 0.2524, 0.0835], device='cuda:2'), in_proj_covar=tensor([0.0196, 0.0206, 0.0193, 0.0191, 0.0178, 0.0215, 0.0218, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 17:01:21,649 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=76679.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 17:01:57,531 INFO [finetune.py:976] (2/7) Epoch 14, batch 2250, loss[loss=0.1926, simple_loss=0.2657, pruned_loss=0.05978, over 4908.00 frames. ], tot_loss[loss=0.1907, simple_loss=0.2581, pruned_loss=0.06166, over 954034.14 frames. 
], batch size: 36, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 17:02:18,736 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.028e+02 1.580e+02 1.848e+02 2.151e+02 3.368e+02, threshold=3.695e+02, percent-clipped=0.0 2023-03-26 17:02:25,549 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-26 17:02:31,272 INFO [finetune.py:976] (2/7) Epoch 14, batch 2300, loss[loss=0.1855, simple_loss=0.2521, pruned_loss=0.05944, over 4862.00 frames. ], tot_loss[loss=0.1899, simple_loss=0.2582, pruned_loss=0.06083, over 955432.22 frames. ], batch size: 34, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 17:02:52,990 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6732, 1.5595, 2.0351, 1.3810, 1.9057, 2.0217, 1.5734, 2.1876], device='cuda:2'), covar=tensor([0.1649, 0.2319, 0.1678, 0.2133, 0.0923, 0.1712, 0.2860, 0.0951], device='cuda:2'), in_proj_covar=tensor([0.0196, 0.0207, 0.0194, 0.0191, 0.0178, 0.0215, 0.0219, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 17:03:06,755 INFO [finetune.py:976] (2/7) Epoch 14, batch 2350, loss[loss=0.1415, simple_loss=0.2163, pruned_loss=0.03333, over 4926.00 frames. ], tot_loss[loss=0.1887, simple_loss=0.2564, pruned_loss=0.0605, over 956360.78 frames. ], batch size: 33, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 17:03:25,942 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=76839.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:03:28,174 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.075e+02 1.531e+02 1.863e+02 2.217e+02 4.521e+02, threshold=3.725e+02, percent-clipped=2.0 2023-03-26 17:03:35,384 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-26 17:03:40,655 INFO [finetune.py:976] (2/7) Epoch 14, batch 2400, loss[loss=0.1742, simple_loss=0.23, pruned_loss=0.05924, over 4115.00 frames. ], tot_loss[loss=0.1863, simple_loss=0.2533, pruned_loss=0.05962, over 957451.51 frames. ], batch size: 18, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 17:04:06,920 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=76900.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:04:14,025 INFO [finetune.py:976] (2/7) Epoch 14, batch 2450, loss[loss=0.2086, simple_loss=0.2576, pruned_loss=0.07975, over 4203.00 frames. ], tot_loss[loss=0.1842, simple_loss=0.2507, pruned_loss=0.05888, over 953255.36 frames. 
], batch size: 18, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 17:04:23,095 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=76925.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 17:04:34,661 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.650e+01 1.627e+02 1.914e+02 2.448e+02 4.488e+02, threshold=3.829e+02, percent-clipped=3.0 2023-03-26 17:04:34,776 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7342, 1.2811, 0.8241, 1.6497, 2.0119, 1.5034, 1.5871, 1.6751], device='cuda:2'), covar=tensor([0.1420, 0.2008, 0.2018, 0.1190, 0.1988, 0.2026, 0.1373, 0.1830], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0094, 0.0112, 0.0091, 0.0119, 0.0093, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 17:04:40,096 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=76950.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:04:40,207 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.43 vs. limit=5.0 2023-03-26 17:04:47,674 INFO [finetune.py:976] (2/7) Epoch 14, batch 2500, loss[loss=0.2553, simple_loss=0.3321, pruned_loss=0.08922, over 4818.00 frames. ], tot_loss[loss=0.1865, simple_loss=0.2531, pruned_loss=0.05995, over 954674.05 frames. ], batch size: 38, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 17:04:49,011 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2191, 1.9246, 2.6451, 1.6342, 2.3249, 2.5088, 1.8530, 2.7060], device='cuda:2'), covar=tensor([0.1473, 0.1887, 0.1570, 0.2319, 0.0928, 0.1670, 0.2383, 0.0934], device='cuda:2'), in_proj_covar=tensor([0.0195, 0.0205, 0.0192, 0.0190, 0.0177, 0.0214, 0.0217, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 17:04:55,524 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=76973.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 17:04:59,660 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=76979.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 17:05:12,268 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=76998.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:05:21,741 INFO [finetune.py:976] (2/7) Epoch 14, batch 2550, loss[loss=0.1519, simple_loss=0.2247, pruned_loss=0.03955, over 4764.00 frames. ], tot_loss[loss=0.1882, simple_loss=0.2556, pruned_loss=0.06041, over 955162.01 frames. ], batch size: 28, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 17:05:32,005 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=77027.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 17:05:33,997 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. 
limit=2.0 2023-03-26 17:05:42,422 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.113e+02 1.599e+02 1.863e+02 2.531e+02 5.028e+02, threshold=3.725e+02, percent-clipped=2.0 2023-03-26 17:05:46,166 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9282, 4.1287, 3.8016, 2.0264, 4.2222, 3.0856, 1.3851, 2.8695], device='cuda:2'), covar=tensor([0.2022, 0.1785, 0.1493, 0.3168, 0.0916, 0.1096, 0.3786, 0.1415], device='cuda:2'), in_proj_covar=tensor([0.0149, 0.0172, 0.0158, 0.0127, 0.0155, 0.0121, 0.0144, 0.0121], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 17:05:55,386 INFO [finetune.py:976] (2/7) Epoch 14, batch 2600, loss[loss=0.1781, simple_loss=0.2527, pruned_loss=0.05176, over 4902.00 frames. ], tot_loss[loss=0.188, simple_loss=0.256, pruned_loss=0.06007, over 953689.60 frames. ], batch size: 36, lr: 3.55e-03, grad_scale: 32.0 2023-03-26 17:06:18,040 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=77095.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:06:19,254 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9415, 2.0234, 1.9808, 1.4366, 2.0197, 2.1085, 2.0268, 1.7344], device='cuda:2'), covar=tensor([0.0519, 0.0517, 0.0627, 0.0783, 0.0582, 0.0580, 0.0528, 0.0949], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0132, 0.0141, 0.0123, 0.0123, 0.0141, 0.0140, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 17:06:29,341 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5817, 1.3275, 2.1152, 3.3359, 2.1781, 2.2538, 1.0529, 2.6460], device='cuda:2'), covar=tensor([0.1651, 0.1510, 0.1284, 0.0556, 0.0822, 0.2154, 0.1817, 0.0540], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0116, 0.0134, 0.0165, 0.0101, 0.0139, 0.0126, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 17:06:30,487 INFO [finetune.py:976] (2/7) Epoch 14, batch 2650, loss[loss=0.1803, simple_loss=0.2551, pruned_loss=0.05279, over 4920.00 frames. ], tot_loss[loss=0.1895, simple_loss=0.2579, pruned_loss=0.06057, over 955543.93 frames. ], batch size: 42, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:07:08,839 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.161e+02 1.606e+02 1.948e+02 2.371e+02 3.624e+02, threshold=3.895e+02, percent-clipped=0.0 2023-03-26 17:07:22,921 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=77156.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 17:07:26,328 INFO [finetune.py:976] (2/7) Epoch 14, batch 2700, loss[loss=0.1959, simple_loss=0.2743, pruned_loss=0.05878, over 4904.00 frames. ], tot_loss[loss=0.1875, simple_loss=0.2561, pruned_loss=0.05938, over 956484.29 frames. 
], batch size: 37, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:07:57,705 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=77195.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:07:59,549 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5556, 1.4457, 2.0566, 1.9651, 1.7476, 4.1451, 1.3878, 1.7364], device='cuda:2'), covar=tensor([0.0966, 0.1828, 0.1237, 0.0898, 0.1503, 0.0235, 0.1459, 0.1619], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0081, 0.0074, 0.0077, 0.0092, 0.0081, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 17:08:07,956 INFO [finetune.py:976] (2/7) Epoch 14, batch 2750, loss[loss=0.1728, simple_loss=0.235, pruned_loss=0.05523, over 4812.00 frames. ], tot_loss[loss=0.1853, simple_loss=0.2533, pruned_loss=0.05865, over 957676.72 frames. ], batch size: 41, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:08:08,056 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=77211.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:08:15,773 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7935, 3.5895, 3.4682, 1.9538, 3.8270, 2.9133, 1.2821, 2.5871], device='cuda:2'), covar=tensor([0.2483, 0.2146, 0.1756, 0.3775, 0.1140, 0.1138, 0.4641, 0.1722], device='cuda:2'), in_proj_covar=tensor([0.0150, 0.0173, 0.0159, 0.0127, 0.0156, 0.0121, 0.0145, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 17:08:18,370 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-26 17:08:28,394 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.127e+02 1.514e+02 1.871e+02 2.163e+02 3.576e+02, threshold=3.742e+02, percent-clipped=0.0 2023-03-26 17:08:40,953 INFO [finetune.py:976] (2/7) Epoch 14, batch 2800, loss[loss=0.1528, simple_loss=0.2279, pruned_loss=0.03888, over 4691.00 frames. ], tot_loss[loss=0.1819, simple_loss=0.2495, pruned_loss=0.05712, over 955512.56 frames. ], batch size: 23, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:08:45,842 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0032, 1.3314, 1.9547, 1.9177, 1.7261, 1.6145, 1.8494, 1.7901], device='cuda:2'), covar=tensor([0.3560, 0.4025, 0.3169, 0.3356, 0.4650, 0.3558, 0.4447, 0.3152], device='cuda:2'), in_proj_covar=tensor([0.0242, 0.0238, 0.0256, 0.0265, 0.0263, 0.0236, 0.0276, 0.0235], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 17:08:48,261 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=77272.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:08:48,342 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.49 vs. limit=2.0 2023-03-26 17:08:48,865 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=77273.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:08:52,207 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=77277.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:09:14,627 INFO [finetune.py:976] (2/7) Epoch 14, batch 2850, loss[loss=0.1711, simple_loss=0.2447, pruned_loss=0.04876, over 4894.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.249, pruned_loss=0.0572, over 954521.49 frames. 
], batch size: 32, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:09:29,738 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=77334.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:09:32,167 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=77338.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:09:34,910 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.028e+02 1.563e+02 1.884e+02 2.320e+02 5.201e+02, threshold=3.768e+02, percent-clipped=2.0 2023-03-26 17:09:47,964 INFO [finetune.py:976] (2/7) Epoch 14, batch 2900, loss[loss=0.2559, simple_loss=0.3201, pruned_loss=0.09579, over 4802.00 frames. ], tot_loss[loss=0.1844, simple_loss=0.2522, pruned_loss=0.05826, over 954305.25 frames. ], batch size: 41, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:10:21,780 INFO [finetune.py:976] (2/7) Epoch 14, batch 2950, loss[loss=0.1963, simple_loss=0.266, pruned_loss=0.06329, over 4827.00 frames. ], tot_loss[loss=0.1876, simple_loss=0.2558, pruned_loss=0.05967, over 955128.16 frames. ], batch size: 30, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:10:41,985 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.681e+01 1.651e+02 1.924e+02 2.203e+02 4.754e+02, threshold=3.848e+02, percent-clipped=2.0 2023-03-26 17:10:48,021 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=77451.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 17:10:54,978 INFO [finetune.py:976] (2/7) Epoch 14, batch 3000, loss[loss=0.1661, simple_loss=0.2494, pruned_loss=0.04145, over 4791.00 frames. ], tot_loss[loss=0.1896, simple_loss=0.2581, pruned_loss=0.06052, over 955287.23 frames. ], batch size: 29, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:10:54,978 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 17:11:01,057 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6676, 1.5740, 1.5569, 1.5731, 0.9916, 3.0405, 1.1488, 1.6172], device='cuda:2'), covar=tensor([0.3474, 0.2438, 0.2129, 0.2436, 0.1933, 0.0262, 0.2669, 0.1277], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0116, 0.0120, 0.0124, 0.0116, 0.0098, 0.0097, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 17:11:02,911 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.2354, 1.3910, 1.3907, 0.7610, 1.2949, 1.5397, 1.6571, 1.3004], device='cuda:2'), covar=tensor([0.0895, 0.0594, 0.0469, 0.0512, 0.0501, 0.0595, 0.0312, 0.0754], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0152, 0.0122, 0.0129, 0.0130, 0.0127, 0.0142, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.3363e-05, 1.1119e-04, 8.7663e-05, 9.2947e-05, 9.2162e-05, 9.1958e-05, 1.0294e-04, 1.0563e-04], device='cuda:2') 2023-03-26 17:11:09,361 INFO [finetune.py:1010] (2/7) Epoch 14, validation: loss=0.1563, simple_loss=0.2268, pruned_loss=0.04293, over 2265189.00 frames. 2023-03-26 17:11:09,361 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6329MB 2023-03-26 17:11:34,088 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=77495.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:11:44,250 INFO [finetune.py:976] (2/7) Epoch 14, batch 3050, loss[loss=0.1538, simple_loss=0.2174, pruned_loss=0.0451, over 4303.00 frames. ], tot_loss[loss=0.1893, simple_loss=0.2581, pruned_loss=0.06026, over 954701.92 frames. 
], batch size: 18, lr: 3.54e-03, grad_scale: 32.0
2023-03-26 17:12:13,224 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.117e+02 1.567e+02 1.800e+02 2.244e+02 5.193e+02, threshold=3.600e+02, percent-clipped=2.0
2023-03-26 17:12:13,931 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=77543.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:12:35,684 INFO [finetune.py:976] (2/7) Epoch 14, batch 3100, loss[loss=0.1757, simple_loss=0.2371, pruned_loss=0.05718, over 4924.00 frames. ], tot_loss[loss=0.1873, simple_loss=0.2558, pruned_loss=0.05939, over 953619.55 frames. ], batch size: 33, lr: 3.54e-03, grad_scale: 32.0
2023-03-26 17:12:39,910 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=77567.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:12:46,082 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9977, 1.7236, 2.4090, 3.6420, 2.5695, 2.5721, 0.9670, 2.9551], device='cuda:2'), covar=tensor([0.1567, 0.1450, 0.1322, 0.0593, 0.0748, 0.1819, 0.1910, 0.0474], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0116, 0.0133, 0.0165, 0.0101, 0.0138, 0.0125, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 17:13:22,407 INFO [finetune.py:976] (2/7) Epoch 14, batch 3150, loss[loss=0.1798, simple_loss=0.2518, pruned_loss=0.05387, over 4915.00 frames. ], tot_loss[loss=0.1848, simple_loss=0.2528, pruned_loss=0.05837, over 951974.93 frames. ], batch size: 43, lr: 3.54e-03, grad_scale: 32.0
2023-03-26 17:13:25,547 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=77616.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:13:35,436 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=77629.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:13:37,878 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=77633.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:13:43,292 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.167e+02 1.643e+02 1.968e+02 2.398e+02 4.679e+02, threshold=3.936e+02, percent-clipped=3.0
2023-03-26 17:13:56,367 INFO [finetune.py:976] (2/7) Epoch 14, batch 3200, loss[loss=0.1849, simple_loss=0.2322, pruned_loss=0.06885, over 4823.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.2491, pruned_loss=0.05711, over 952833.53 frames. ], batch size: 30, lr: 3.54e-03, grad_scale: 32.0
2023-03-26 17:14:06,649 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=77677.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:14:06,660 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=77677.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:14:07,858 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.53 vs. limit=5.0
2023-03-26 17:14:17,379 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.9765, 4.3305, 4.5089, 4.7799, 4.6813, 4.4395, 5.1380, 1.5921], device='cuda:2'), covar=tensor([0.0720, 0.0819, 0.0803, 0.0891, 0.1228, 0.1502, 0.0458, 0.5575], device='cuda:2'), in_proj_covar=tensor([0.0345, 0.0242, 0.0275, 0.0292, 0.0329, 0.0281, 0.0297, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:14:21,620 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.22 vs. limit=5.0
2023-03-26 17:14:29,513 INFO [finetune.py:976] (2/7) Epoch 14, batch 3250, loss[loss=0.2208, simple_loss=0.2762, pruned_loss=0.08268, over 4699.00 frames. ], tot_loss[loss=0.1823, simple_loss=0.2495, pruned_loss=0.05756, over 950844.29 frames. ], batch size: 23, lr: 3.54e-03, grad_scale: 32.0
2023-03-26 17:14:47,312 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=77738.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:14:49,600 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.063e+02 1.623e+02 1.885e+02 2.287e+02 7.301e+02, threshold=3.769e+02, percent-clipped=4.0
2023-03-26 17:14:52,766 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7396, 2.8685, 2.5618, 2.0269, 2.7203, 3.0495, 3.0060, 2.5008], device='cuda:2'), covar=tensor([0.0576, 0.0570, 0.0812, 0.0868, 0.0567, 0.0652, 0.0612, 0.0969], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0132, 0.0141, 0.0123, 0.0123, 0.0140, 0.0140, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:14:55,594 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=77751.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 17:15:02,065 INFO [finetune.py:976] (2/7) Epoch 14, batch 3300, loss[loss=0.2373, simple_loss=0.305, pruned_loss=0.08481, over 4896.00 frames. ], tot_loss[loss=0.1864, simple_loss=0.2542, pruned_loss=0.05934, over 952274.40 frames. ], batch size: 43, lr: 3.54e-03, grad_scale: 32.0
2023-03-26 17:15:27,674 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=77799.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:15:35,621 INFO [finetune.py:976] (2/7) Epoch 14, batch 3350, loss[loss=0.1635, simple_loss=0.2438, pruned_loss=0.04163, over 4747.00 frames. ], tot_loss[loss=0.1878, simple_loss=0.2562, pruned_loss=0.05971, over 950877.11 frames. ], batch size: 27, lr: 3.54e-03, grad_scale: 32.0
2023-03-26 17:15:39,753 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4325, 1.4526, 1.3866, 0.8504, 1.4160, 1.6265, 1.6918, 1.3472], device='cuda:2'), covar=tensor([0.0839, 0.0560, 0.0547, 0.0479, 0.0558, 0.0509, 0.0282, 0.0607], device='cuda:2'), in_proj_covar=tensor([0.0127, 0.0153, 0.0122, 0.0130, 0.0131, 0.0128, 0.0143, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.3913e-05, 1.1201e-04, 8.8074e-05, 9.3617e-05, 9.3059e-05, 9.2808e-05, 1.0384e-04, 1.0630e-04], device='cuda:2')
2023-03-26 17:15:41,692 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0
2023-03-26 17:15:57,263 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.144e+02 1.683e+02 1.958e+02 2.277e+02 5.309e+02, threshold=3.915e+02, percent-clipped=1.0
2023-03-26 17:15:58,001 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9363, 1.7869, 2.3466, 1.6077, 2.1095, 2.2832, 1.8026, 2.3752], device='cuda:2'), covar=tensor([0.1451, 0.1980, 0.1478, 0.2032, 0.0933, 0.1341, 0.2315, 0.0934], device='cuda:2'), in_proj_covar=tensor([0.0195, 0.0204, 0.0191, 0.0190, 0.0176, 0.0213, 0.0215, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:16:09,331 INFO [finetune.py:976] (2/7) Epoch 14, batch 3400, loss[loss=0.1911, simple_loss=0.2559, pruned_loss=0.06311, over 4922.00 frames. ], tot_loss[loss=0.1898, simple_loss=0.2581, pruned_loss=0.06075, over 949454.16 frames. ], batch size: 29, lr: 3.54e-03, grad_scale: 32.0
2023-03-26 17:16:18,560 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=77867.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:16:51,414 INFO [finetune.py:976] (2/7) Epoch 14, batch 3450, loss[loss=0.1701, simple_loss=0.2343, pruned_loss=0.05293, over 4895.00 frames. ], tot_loss[loss=0.19, simple_loss=0.2583, pruned_loss=0.06085, over 950018.55 frames. ], batch size: 32, lr: 3.54e-03, grad_scale: 32.0
2023-03-26 17:16:53,841 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=77915.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:17:03,230 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=77929.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:17:05,724 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=77933.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:17:09,734 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.0749, 0.9609, 1.0379, 0.4063, 0.8350, 1.1904, 1.1978, 1.0183], device='cuda:2'), covar=tensor([0.0751, 0.0536, 0.0475, 0.0483, 0.0515, 0.0524, 0.0344, 0.0571], device='cuda:2'), in_proj_covar=tensor([0.0127, 0.0153, 0.0122, 0.0130, 0.0131, 0.0127, 0.0143, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.3736e-05, 1.1172e-04, 8.7958e-05, 9.3498e-05, 9.2818e-05, 9.2324e-05, 1.0387e-04, 1.0631e-04], device='cuda:2')
2023-03-26 17:17:11,408 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.160e+01 1.485e+02 1.825e+02 2.093e+02 4.049e+02, threshold=3.650e+02, percent-clipped=1.0
2023-03-26 17:17:33,219 INFO [finetune.py:976] (2/7) Epoch 14, batch 3500, loss[loss=0.265, simple_loss=0.3203, pruned_loss=0.1049, over 4815.00 frames. ], tot_loss[loss=0.1876, simple_loss=0.2553, pruned_loss=0.05989, over 951479.62 frames. ], batch size: 38, lr: 3.54e-03, grad_scale: 32.0
2023-03-26 17:17:40,921 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=77972.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:17:44,500 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=77977.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:17:45,790 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2780, 2.2329, 2.2725, 1.7901, 2.1771, 2.6424, 2.5654, 1.9946], device='cuda:2'), covar=tensor([0.0467, 0.0537, 0.0697, 0.0924, 0.1406, 0.0456, 0.0488, 0.1013], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0132, 0.0141, 0.0124, 0.0124, 0.0141, 0.0141, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:17:46,959 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=77981.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:18:20,967 INFO [finetune.py:976] (2/7) Epoch 14, batch 3550, loss[loss=0.1536, simple_loss=0.2291, pruned_loss=0.03902, over 4827.00 frames. ], tot_loss[loss=0.1857, simple_loss=0.253, pruned_loss=0.05922, over 952405.42 frames. ], batch size: 39, lr: 3.54e-03, grad_scale: 32.0
2023-03-26 17:18:24,857 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.13 vs. limit=5.0
2023-03-26 17:18:35,858 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=78033.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:18:41,172 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.505e+02 1.960e+02 2.472e+02 4.194e+02, threshold=3.920e+02, percent-clipped=4.0
2023-03-26 17:18:54,344 INFO [finetune.py:976] (2/7) Epoch 14, batch 3600, loss[loss=0.1677, simple_loss=0.2364, pruned_loss=0.04952, over 4801.00 frames. ], tot_loss[loss=0.1845, simple_loss=0.251, pruned_loss=0.05896, over 952317.73 frames. ], batch size: 29, lr: 3.54e-03, grad_scale: 32.0
2023-03-26 17:19:04,160 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9274, 1.1940, 1.9156, 1.8903, 1.6789, 1.5956, 1.7180, 1.7433], device='cuda:2'), covar=tensor([0.3420, 0.3819, 0.3153, 0.3308, 0.4341, 0.3481, 0.4401, 0.2896], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0239, 0.0257, 0.0266, 0.0264, 0.0237, 0.0278, 0.0235], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:19:28,413 INFO [finetune.py:976] (2/7) Epoch 14, batch 3650, loss[loss=0.1982, simple_loss=0.2681, pruned_loss=0.06416, over 4863.00 frames. ], tot_loss[loss=0.1858, simple_loss=0.2522, pruned_loss=0.05966, over 951690.22 frames. ], batch size: 31, lr: 3.54e-03, grad_scale: 32.0
2023-03-26 17:19:48,728 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.050e+02 1.711e+02 2.033e+02 2.406e+02 8.151e+02, threshold=4.067e+02, percent-clipped=4.0
2023-03-26 17:20:02,234 INFO [finetune.py:976] (2/7) Epoch 14, batch 3700, loss[loss=0.1753, simple_loss=0.2343, pruned_loss=0.05817, over 4154.00 frames. ], tot_loss[loss=0.1887, simple_loss=0.256, pruned_loss=0.06072, over 953032.94 frames. ], batch size: 18, lr: 3.54e-03, grad_scale: 32.0
2023-03-26 17:20:16,627 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.3680, 1.4431, 1.4904, 0.8805, 1.4229, 1.6452, 1.7153, 1.3419], device='cuda:2'), covar=tensor([0.0816, 0.0567, 0.0432, 0.0450, 0.0442, 0.0521, 0.0322, 0.0604], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0152, 0.0122, 0.0130, 0.0129, 0.0126, 0.0142, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.3069e-05, 1.1104e-04, 8.7808e-05, 9.3173e-05, 9.1667e-05, 9.1595e-05, 1.0283e-04, 1.0532e-04], device='cuda:2')
2023-03-26 17:20:35,985 INFO [finetune.py:976] (2/7) Epoch 14, batch 3750, loss[loss=0.1901, simple_loss=0.2574, pruned_loss=0.06143, over 4760.00 frames. ], tot_loss[loss=0.1892, simple_loss=0.2574, pruned_loss=0.06053, over 954130.71 frames. ], batch size: 26, lr: 3.54e-03, grad_scale: 32.0
2023-03-26 17:20:49,422 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=78232.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:20:55,808 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.186e+02 1.631e+02 1.949e+02 2.239e+02 4.423e+02, threshold=3.899e+02, percent-clipped=1.0
2023-03-26 17:21:08,217 INFO [finetune.py:976] (2/7) Epoch 14, batch 3800, loss[loss=0.2091, simple_loss=0.2746, pruned_loss=0.07177, over 4818.00 frames. ], tot_loss[loss=0.1903, simple_loss=0.259, pruned_loss=0.06083, over 955202.41 frames. ], batch size: 33, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:21:15,902 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=78272.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:21:26,195 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4540, 1.3349, 1.3492, 1.3423, 0.8793, 2.2816, 0.7277, 1.2440], device='cuda:2'), covar=tensor([0.3457, 0.2624, 0.2194, 0.2568, 0.1970, 0.0346, 0.2831, 0.1328], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0115, 0.0120, 0.0124, 0.0115, 0.0097, 0.0097, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 17:21:31,034 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=78293.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 17:21:49,334 INFO [finetune.py:976] (2/7) Epoch 14, batch 3850, loss[loss=0.1864, simple_loss=0.2477, pruned_loss=0.06257, over 4922.00 frames. ], tot_loss[loss=0.1893, simple_loss=0.2577, pruned_loss=0.0605, over 955361.60 frames. ], batch size: 33, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:21:55,864 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=78320.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:21:56,643 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.42 vs. limit=2.0
2023-03-26 17:22:03,820 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=78333.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:22:09,072 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1741, 2.0626, 2.1158, 1.4439, 2.1489, 2.2386, 2.2387, 1.7852], device='cuda:2'), covar=tensor([0.0552, 0.0612, 0.0635, 0.0870, 0.0639, 0.0692, 0.0574, 0.1068], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0133, 0.0141, 0.0124, 0.0124, 0.0141, 0.0140, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:22:10,190 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.042e+02 1.518e+02 1.784e+02 2.158e+02 4.566e+02, threshold=3.568e+02, percent-clipped=2.0
2023-03-26 17:22:17,624 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0
2023-03-26 17:22:22,702 INFO [finetune.py:976] (2/7) Epoch 14, batch 3900, loss[loss=0.1537, simple_loss=0.2284, pruned_loss=0.0395, over 4835.00 frames. ], tot_loss[loss=0.1874, simple_loss=0.255, pruned_loss=0.05988, over 952353.98 frames. ], batch size: 47, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:22:45,628 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=78381.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:23:09,953 INFO [finetune.py:976] (2/7) Epoch 14, batch 3950, loss[loss=0.1611, simple_loss=0.2278, pruned_loss=0.04722, over 4818.00 frames. ], tot_loss[loss=0.1865, simple_loss=0.2533, pruned_loss=0.05983, over 952901.77 frames. ], batch size: 41, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:23:23,138 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8273, 1.6940, 1.6076, 1.7550, 1.1821, 3.7552, 1.5452, 2.0634], device='cuda:2'), covar=tensor([0.3297, 0.2529, 0.2127, 0.2429, 0.1902, 0.0165, 0.2589, 0.1226], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0115, 0.0121, 0.0124, 0.0116, 0.0098, 0.0097, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 17:23:37,906 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.052e+02 1.587e+02 1.893e+02 2.259e+02 3.905e+02, threshold=3.786e+02, percent-clipped=1.0
2023-03-26 17:23:50,374 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=78460.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:23:50,893 INFO [finetune.py:976] (2/7) Epoch 14, batch 4000, loss[loss=0.1898, simple_loss=0.2539, pruned_loss=0.06288, over 4815.00 frames. ], tot_loss[loss=0.1837, simple_loss=0.2505, pruned_loss=0.05839, over 953142.66 frames. ], batch size: 40, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:24:24,846 INFO [finetune.py:976] (2/7) Epoch 14, batch 4050, loss[loss=0.1774, simple_loss=0.2486, pruned_loss=0.05312, over 4811.00 frames. ], tot_loss[loss=0.188, simple_loss=0.2552, pruned_loss=0.06041, over 955469.37 frames. ], batch size: 40, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:24:31,530 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=78521.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:24:36,691 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3698, 0.8988, 0.7285, 1.2275, 1.7547, 0.6498, 1.0847, 1.2137], device='cuda:2'), covar=tensor([0.1269, 0.1981, 0.1563, 0.1083, 0.1589, 0.1869, 0.1504, 0.1794], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0094, 0.0111, 0.0092, 0.0119, 0.0094, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 17:24:45,461 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.285e+01 1.581e+02 1.919e+02 2.315e+02 3.488e+02, threshold=3.837e+02, percent-clipped=0.0
2023-03-26 17:24:54,882 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=78556.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:24:57,772 INFO [finetune.py:976] (2/7) Epoch 14, batch 4100, loss[loss=0.1869, simple_loss=0.2621, pruned_loss=0.05587, over 4848.00 frames. ], tot_loss[loss=0.1913, simple_loss=0.2591, pruned_loss=0.06179, over 954895.86 frames. ], batch size: 49, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:24:58,468 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6023, 0.9496, 0.8575, 1.4123, 1.9928, 1.2177, 1.1805, 1.3345], device='cuda:2'), covar=tensor([0.2049, 0.3461, 0.2528, 0.1846, 0.2220, 0.2826, 0.2269, 0.3057], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0095, 0.0111, 0.0092, 0.0119, 0.0094, 0.0100, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2')
2023-03-26 17:25:03,291 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-26 17:25:16,504 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=78588.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 17:25:31,551 INFO [finetune.py:976] (2/7) Epoch 14, batch 4150, loss[loss=0.2176, simple_loss=0.2818, pruned_loss=0.07664, over 4737.00 frames. ], tot_loss[loss=0.1931, simple_loss=0.2609, pruned_loss=0.06265, over 957030.85 frames. ], batch size: 59, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:25:35,290 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=78617.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:25:43,109 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.81 vs. limit=5.0
2023-03-26 17:25:52,377 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.171e+02 1.587e+02 1.869e+02 2.218e+02 3.242e+02, threshold=3.739e+02, percent-clipped=0.0
2023-03-26 17:26:04,875 INFO [finetune.py:976] (2/7) Epoch 14, batch 4200, loss[loss=0.1352, simple_loss=0.2044, pruned_loss=0.03301, over 4812.00 frames. ], tot_loss[loss=0.1928, simple_loss=0.2611, pruned_loss=0.06223, over 957466.71 frames. ], batch size: 25, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:26:29,139 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1911, 2.2776, 2.2116, 1.7120, 1.9888, 2.5125, 2.4380, 1.9386], device='cuda:2'), covar=tensor([0.0510, 0.0527, 0.0650, 0.0834, 0.1576, 0.0584, 0.0521, 0.0967], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0134, 0.0142, 0.0124, 0.0124, 0.0141, 0.0141, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:26:37,998 INFO [finetune.py:976] (2/7) Epoch 14, batch 4250, loss[loss=0.1584, simple_loss=0.2352, pruned_loss=0.04078, over 4930.00 frames. ], tot_loss[loss=0.1895, simple_loss=0.2576, pruned_loss=0.06068, over 956077.49 frames. ], batch size: 38, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:26:40,462 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7952, 1.8675, 1.5708, 1.9683, 2.3407, 1.9042, 1.5198, 1.4478], device='cuda:2'), covar=tensor([0.2394, 0.2085, 0.2123, 0.1822, 0.1795, 0.1357, 0.2695, 0.2304], device='cuda:2'), in_proj_covar=tensor([0.0241, 0.0208, 0.0211, 0.0192, 0.0242, 0.0185, 0.0216, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:27:05,885 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.090e+02 1.501e+02 1.722e+02 2.085e+02 5.543e+02, threshold=3.444e+02, percent-clipped=2.0
2023-03-26 17:27:17,139 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5491, 1.0750, 0.7495, 1.4142, 1.9594, 0.7099, 1.3361, 1.5118], device='cuda:2'), covar=tensor([0.1405, 0.2073, 0.1763, 0.1201, 0.1922, 0.1930, 0.1476, 0.1815], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0095, 0.0111, 0.0092, 0.0119, 0.0094, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 17:27:21,259 INFO [finetune.py:976] (2/7) Epoch 14, batch 4300, loss[loss=0.1431, simple_loss=0.2201, pruned_loss=0.03303, over 4817.00 frames. ], tot_loss[loss=0.1867, simple_loss=0.2546, pruned_loss=0.05937, over 956569.58 frames. ], batch size: 25, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:27:59,985 INFO [finetune.py:976] (2/7) Epoch 14, batch 4350, loss[loss=0.1319, simple_loss=0.1919, pruned_loss=0.036, over 4455.00 frames. ], tot_loss[loss=0.1831, simple_loss=0.2508, pruned_loss=0.05769, over 957714.04 frames. ], batch size: 20, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:28:06,805 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=78816.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:28:34,427 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.179e+02 1.620e+02 1.919e+02 2.276e+02 4.946e+02, threshold=3.838e+02, percent-clipped=3.0
2023-03-26 17:28:54,136 INFO [finetune.py:976] (2/7) Epoch 14, batch 4400, loss[loss=0.1787, simple_loss=0.2574, pruned_loss=0.04999, over 4709.00 frames. ], tot_loss[loss=0.1846, simple_loss=0.2523, pruned_loss=0.05847, over 956656.56 frames. ], batch size: 59, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:29:02,550 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9059, 2.7069, 2.4162, 3.2057, 2.8732, 2.6940, 3.4029, 2.8462], device='cuda:2'), covar=tensor([0.1314, 0.2229, 0.3058, 0.2380, 0.2501, 0.1509, 0.2469, 0.1769], device='cuda:2'), in_proj_covar=tensor([0.0180, 0.0187, 0.0235, 0.0253, 0.0244, 0.0199, 0.0213, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:29:12,612 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=78888.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:29:27,939 INFO [finetune.py:976] (2/7) Epoch 14, batch 4450, loss[loss=0.2477, simple_loss=0.3068, pruned_loss=0.09433, over 4754.00 frames. ], tot_loss[loss=0.187, simple_loss=0.2549, pruned_loss=0.05956, over 953306.01 frames. ], batch size: 59, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:29:28,614 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=78912.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:29:44,126 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=78936.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:29:48,695 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.157e+02 1.623e+02 2.075e+02 2.454e+02 4.700e+02, threshold=4.150e+02, percent-clipped=3.0
2023-03-26 17:30:01,641 INFO [finetune.py:976] (2/7) Epoch 14, batch 4500, loss[loss=0.1876, simple_loss=0.2602, pruned_loss=0.05746, over 4762.00 frames. ], tot_loss[loss=0.1872, simple_loss=0.2558, pruned_loss=0.05928, over 952697.82 frames. ], batch size: 54, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:30:03,574 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9694, 1.3767, 1.9440, 1.9112, 1.7411, 1.6243, 1.8522, 1.7398], device='cuda:2'), covar=tensor([0.3342, 0.3886, 0.3536, 0.3740, 0.4604, 0.3664, 0.4456, 0.3290], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0240, 0.0257, 0.0267, 0.0266, 0.0239, 0.0279, 0.0237], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:30:04,118 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=78965.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:30:04,709 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3103, 1.3639, 1.4001, 1.4665, 1.5048, 2.8964, 1.2695, 1.4224], device='cuda:2'), covar=tensor([0.0973, 0.1814, 0.1170, 0.0986, 0.1527, 0.0292, 0.1505, 0.1697], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0081, 0.0074, 0.0077, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 17:30:10,583 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.7091, 1.5790, 1.5375, 0.9342, 1.6654, 1.8720, 1.8747, 1.4673], device='cuda:2'), covar=tensor([0.0872, 0.0584, 0.0488, 0.0542, 0.0440, 0.0458, 0.0293, 0.0564], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0153, 0.0123, 0.0130, 0.0130, 0.0127, 0.0142, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.3159e-05, 1.1138e-04, 8.8364e-05, 9.3252e-05, 9.2131e-05, 9.1685e-05, 1.0275e-04, 1.0543e-04], device='cuda:2')
2023-03-26 17:30:18,502 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0
2023-03-26 17:30:34,870 INFO [finetune.py:976] (2/7) Epoch 14, batch 4550, loss[loss=0.2092, simple_loss=0.2671, pruned_loss=0.07563, over 4180.00 frames. ], tot_loss[loss=0.1894, simple_loss=0.2582, pruned_loss=0.06032, over 954208.63 frames. ], batch size: 65, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:30:36,871 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.93 vs. limit=5.0
2023-03-26 17:30:40,995 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2740, 1.2073, 1.5762, 2.4289, 1.6385, 2.0524, 0.8676, 2.0069], device='cuda:2'), covar=tensor([0.1832, 0.1521, 0.1140, 0.0750, 0.0891, 0.1301, 0.1622, 0.0696], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0117, 0.0133, 0.0164, 0.0101, 0.0137, 0.0126, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 17:30:44,021 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79026.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:30:54,526 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.032e+02 1.627e+02 1.835e+02 2.182e+02 4.419e+02, threshold=3.671e+02, percent-clipped=1.0
2023-03-26 17:31:04,055 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0
2023-03-26 17:31:08,635 INFO [finetune.py:976] (2/7) Epoch 14, batch 4600, loss[loss=0.1831, simple_loss=0.2511, pruned_loss=0.05749, over 4822.00 frames. ], tot_loss[loss=0.1882, simple_loss=0.2572, pruned_loss=0.0596, over 954420.15 frames. ], batch size: 47, lr: 3.53e-03, grad_scale: 64.0
2023-03-26 17:31:27,820 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.4200, 3.8845, 4.0353, 4.1492, 4.2120, 3.9266, 4.5109, 1.8563], device='cuda:2'), covar=tensor([0.0751, 0.0785, 0.0759, 0.0988, 0.1040, 0.1380, 0.0602, 0.4804], device='cuda:2'), in_proj_covar=tensor([0.0345, 0.0241, 0.0273, 0.0291, 0.0331, 0.0281, 0.0298, 0.0295], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:31:42,443 INFO [finetune.py:976] (2/7) Epoch 14, batch 4650, loss[loss=0.1579, simple_loss=0.2307, pruned_loss=0.04259, over 4730.00 frames. ], tot_loss[loss=0.1872, simple_loss=0.2553, pruned_loss=0.05951, over 953079.37 frames. ], batch size: 23, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:31:45,552 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=79116.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:32:02,946 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.138e+02 1.520e+02 1.886e+02 2.415e+02 4.094e+02, threshold=3.771e+02, percent-clipped=1.0
2023-03-26 17:32:24,492 INFO [finetune.py:976] (2/7) Epoch 14, batch 4700, loss[loss=0.147, simple_loss=0.2257, pruned_loss=0.03416, over 4831.00 frames. ], tot_loss[loss=0.1848, simple_loss=0.2526, pruned_loss=0.05845, over 953284.68 frames. ], batch size: 38, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:32:26,311 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=79164.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:32:58,004 INFO [finetune.py:976] (2/7) Epoch 14, batch 4750, loss[loss=0.2513, simple_loss=0.2814, pruned_loss=0.1106, over 4476.00 frames. ], tot_loss[loss=0.1838, simple_loss=0.251, pruned_loss=0.05829, over 954193.04 frames. ], batch size: 19, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:32:59,189 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=79212.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:33:15,695 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3373, 2.2026, 2.3013, 1.0109, 2.6107, 2.8893, 2.4241, 2.1563], device='cuda:2'), covar=tensor([0.0909, 0.0757, 0.0466, 0.0751, 0.0438, 0.0691, 0.0447, 0.0742], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0152, 0.0123, 0.0130, 0.0131, 0.0127, 0.0142, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.3383e-05, 1.1102e-04, 8.8634e-05, 9.3142e-05, 9.2618e-05, 9.1688e-05, 1.0319e-04, 1.0584e-04], device='cuda:2')
2023-03-26 17:33:32,018 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.558e+01 1.654e+02 1.996e+02 2.359e+02 6.861e+02, threshold=3.993e+02, percent-clipped=2.0
2023-03-26 17:33:32,980 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0
2023-03-26 17:33:51,146 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=79260.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:33:51,674 INFO [finetune.py:976] (2/7) Epoch 14, batch 4800, loss[loss=0.2093, simple_loss=0.2789, pruned_loss=0.0698, over 4891.00 frames. ], tot_loss[loss=0.1855, simple_loss=0.2529, pruned_loss=0.05908, over 955871.08 frames. ], batch size: 43, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:34:06,290 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.63 vs. limit=5.0
2023-03-26 17:34:10,267 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=79285.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 17:34:11,529 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1549, 1.9362, 1.9717, 0.9288, 2.2223, 2.4536, 2.1554, 1.9147], device='cuda:2'), covar=tensor([0.0955, 0.0804, 0.0615, 0.0786, 0.0632, 0.0752, 0.0516, 0.0784], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0151, 0.0123, 0.0129, 0.0130, 0.0126, 0.0142, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.2895e-05, 1.1027e-04, 8.8324e-05, 9.2787e-05, 9.2109e-05, 9.1140e-05, 1.0253e-04, 1.0527e-04], device='cuda:2')
2023-03-26 17:34:27,365 INFO [finetune.py:976] (2/7) Epoch 14, batch 4850, loss[loss=0.17, simple_loss=0.2428, pruned_loss=0.04859, over 4753.00 frames. ], tot_loss[loss=0.1884, simple_loss=0.2564, pruned_loss=0.06024, over 956182.83 frames. ], batch size: 27, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:34:35,463 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=79321.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:34:42,729 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=79333.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:34:49,132 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.151e+02 1.639e+02 1.937e+02 2.312e+02 4.640e+02, threshold=3.873e+02, percent-clipped=1.0
2023-03-26 17:34:51,065 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79346.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 17:35:00,513 INFO [finetune.py:976] (2/7) Epoch 14, batch 4900, loss[loss=0.1862, simple_loss=0.2451, pruned_loss=0.06368, over 4267.00 frames. ], tot_loss[loss=0.1901, simple_loss=0.258, pruned_loss=0.06106, over 956129.65 frames. ], batch size: 65, lr: 3.53e-03, grad_scale: 32.0
2023-03-26 17:35:03,436 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=79364.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:35:03,585 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0
2023-03-26 17:35:11,495 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=79375.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 17:35:23,209 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79394.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:35:34,335 INFO [finetune.py:976] (2/7) Epoch 14, batch 4950, loss[loss=0.1441, simple_loss=0.2205, pruned_loss=0.03385, over 4913.00 frames. ], tot_loss[loss=0.1905, simple_loss=0.2589, pruned_loss=0.06105, over 956001.24 frames. ], batch size: 38, lr: 3.52e-03, grad_scale: 32.0
2023-03-26 17:35:45,284 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79425.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:35:51,946 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79436.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 17:35:55,994 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.049e+02 1.541e+02 1.877e+02 2.434e+02 3.585e+02, threshold=3.755e+02, percent-clipped=0.0
2023-03-26 17:36:07,907 INFO [finetune.py:976] (2/7) Epoch 14, batch 5000, loss[loss=0.1952, simple_loss=0.2674, pruned_loss=0.06149, over 4913.00 frames. ], tot_loss[loss=0.1893, simple_loss=0.2576, pruned_loss=0.06052, over 956696.90 frames. ], batch size: 36, lr: 3.52e-03, grad_scale: 32.0
2023-03-26 17:36:41,415 INFO [finetune.py:976] (2/7) Epoch 14, batch 5050, loss[loss=0.1759, simple_loss=0.2374, pruned_loss=0.05721, over 4799.00 frames. ], tot_loss[loss=0.1864, simple_loss=0.254, pruned_loss=0.05942, over 955767.20 frames. ], batch size: 51, lr: 3.52e-03, grad_scale: 32.0
2023-03-26 17:36:55,524 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5938, 1.5508, 1.8947, 1.1680, 1.6054, 1.7562, 1.4946, 1.9458], device='cuda:2'), covar=tensor([0.1448, 0.2128, 0.1450, 0.1970, 0.0928, 0.1541, 0.2698, 0.0907], device='cuda:2'), in_proj_covar=tensor([0.0195, 0.0205, 0.0192, 0.0190, 0.0176, 0.0214, 0.0216, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:36:58,563 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7714, 1.6910, 1.5987, 1.7028, 1.2003, 3.5711, 1.4707, 1.8399], device='cuda:2'), covar=tensor([0.3105, 0.2236, 0.2070, 0.2359, 0.1748, 0.0210, 0.2380, 0.1310], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0124, 0.0114, 0.0097, 0.0097, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 17:37:02,696 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.152e+02 1.527e+02 1.776e+02 2.127e+02 3.568e+02, threshold=3.553e+02, percent-clipped=0.0
2023-03-26 17:37:05,272 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9835, 1.3975, 1.9927, 1.9668, 1.7694, 1.6879, 1.8953, 1.7871], device='cuda:2'), covar=tensor([0.3917, 0.4204, 0.3328, 0.3629, 0.5065, 0.3894, 0.4500, 0.3327], device='cuda:2'), in_proj_covar=tensor([0.0245, 0.0241, 0.0258, 0.0268, 0.0266, 0.0239, 0.0280, 0.0236], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:37:07,039 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5603, 3.3307, 3.2231, 1.4757, 3.4658, 2.5196, 0.8320, 2.3237], device='cuda:2'), covar=tensor([0.2313, 0.2328, 0.1719, 0.3550, 0.1203, 0.1125, 0.4436, 0.1751], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0173, 0.0160, 0.0127, 0.0157, 0.0122, 0.0146, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 17:37:11,927 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.04 vs. limit=5.0
2023-03-26 17:37:14,551 INFO [finetune.py:976] (2/7) Epoch 14, batch 5100, loss[loss=0.1561, simple_loss=0.2225, pruned_loss=0.04492, over 4794.00 frames. ], tot_loss[loss=0.185, simple_loss=0.252, pruned_loss=0.05901, over 955300.85 frames. ], batch size: 26, lr: 3.52e-03, grad_scale: 32.0
2023-03-26 17:37:38,105 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=79585.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:37:57,946 INFO [finetune.py:976] (2/7) Epoch 14, batch 5150, loss[loss=0.1392, simple_loss=0.2131, pruned_loss=0.03267, over 4865.00 frames. ], tot_loss[loss=0.1848, simple_loss=0.2515, pruned_loss=0.05899, over 954412.15 frames. ], batch size: 31, lr: 3.52e-03, grad_scale: 32.0
2023-03-26 17:38:04,612 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=79621.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:38:20,360 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=79641.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 17:38:21,527 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.081e+02 1.601e+02 1.924e+02 2.369e+02 3.228e+02, threshold=3.849e+02, percent-clipped=0.0
2023-03-26 17:38:23,477 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79646.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:38:38,713 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=79657.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:38:41,522 INFO [finetune.py:976] (2/7) Epoch 14, batch 5200, loss[loss=0.2606, simple_loss=0.3229, pruned_loss=0.09919, over 4860.00 frames. ], tot_loss[loss=0.1856, simple_loss=0.2532, pruned_loss=0.05896, over 956104.01 frames. ], batch size: 44, lr: 3.52e-03, grad_scale: 32.0
2023-03-26 17:38:50,979 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=79669.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:38:51,657 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5412, 1.5594, 1.4173, 1.4314, 1.7013, 1.3948, 1.8550, 1.5674], device='cuda:2'), covar=tensor([0.1361, 0.1839, 0.2596, 0.2299, 0.2146, 0.1561, 0.2483, 0.1606], device='cuda:2'), in_proj_covar=tensor([0.0180, 0.0186, 0.0233, 0.0254, 0.0243, 0.0199, 0.0212, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:38:58,388 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6029, 1.3403, 1.7988, 1.7964, 1.5898, 3.5042, 1.2972, 1.4925], device='cuda:2'), covar=tensor([0.0999, 0.1883, 0.1247, 0.1067, 0.1679, 0.0199, 0.1656, 0.1904], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0081, 0.0074, 0.0078, 0.0092, 0.0081, 0.0085, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 17:39:10,049 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=79689.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:39:27,297 INFO [finetune.py:976] (2/7) Epoch 14, batch 5250, loss[loss=0.1663, simple_loss=0.2397, pruned_loss=0.04642, over 4693.00 frames. ], tot_loss[loss=0.1864, simple_loss=0.255, pruned_loss=0.05897, over 956573.69 frames. ], batch size: 23, lr: 3.52e-03, grad_scale: 32.0
2023-03-26 17:39:32,146 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79718.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:39:33,309 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=79720.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:39:40,419 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=79731.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 17:39:49,075 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.058e+02 1.683e+02 1.987e+02 2.478e+02 3.642e+02, threshold=3.974e+02, percent-clipped=0.0
2023-03-26 17:39:57,110 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3909, 2.0418, 1.7440, 0.8361, 2.0786, 1.7838, 1.4834, 1.9405], device='cuda:2'), covar=tensor([0.0778, 0.1061, 0.1598, 0.2020, 0.1366, 0.2275, 0.2391, 0.1064], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0196, 0.0201, 0.0184, 0.0215, 0.0208, 0.0225, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:39:59,971 INFO [finetune.py:976] (2/7) Epoch 14, batch 5300, loss[loss=0.159, simple_loss=0.2299, pruned_loss=0.044, over 4744.00 frames. ], tot_loss[loss=0.1888, simple_loss=0.2572, pruned_loss=0.06019, over 956894.95 frames. ], batch size: 59, lr: 3.52e-03, grad_scale: 32.0
2023-03-26 17:40:00,722 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=79762.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:40:10,112 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6411, 1.4746, 1.0325, 0.3191, 1.3084, 1.3946, 1.4380, 1.4179], device='cuda:2'), covar=tensor([0.0781, 0.0723, 0.1095, 0.1646, 0.1243, 0.1933, 0.1927, 0.0789], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0196, 0.0201, 0.0184, 0.0215, 0.0208, 0.0225, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:40:22,299 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5144, 2.2371, 2.8242, 1.5259, 2.5191, 2.7259, 2.0652, 2.9371], device='cuda:2'), covar=tensor([0.1567, 0.2006, 0.1850, 0.2497, 0.0984, 0.1733, 0.2634, 0.1004], device='cuda:2'), in_proj_covar=tensor([0.0194, 0.0204, 0.0192, 0.0190, 0.0176, 0.0213, 0.0216, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:40:33,375 INFO [finetune.py:976] (2/7) Epoch 14, batch 5350, loss[loss=0.1818, simple_loss=0.2537, pruned_loss=0.05493, over 4773.00 frames. ], tot_loss[loss=0.189, simple_loss=0.2578, pruned_loss=0.06005, over 955830.76 frames. ], batch size: 28, lr: 3.52e-03, grad_scale: 32.0
2023-03-26 17:40:39,028 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.02 vs. limit=5.0
2023-03-26 17:40:41,355 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79823.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:40:52,660 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.2114, 4.5713, 4.7676, 5.0610, 4.9671, 4.6989, 5.3205, 1.6453], device='cuda:2'), covar=tensor([0.0666, 0.0794, 0.0688, 0.0775, 0.1043, 0.1370, 0.0523, 0.5286], device='cuda:2'), in_proj_covar=tensor([0.0346, 0.0241, 0.0272, 0.0290, 0.0329, 0.0281, 0.0297, 0.0294], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:40:55,393 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.090e+02 1.503e+02 1.776e+02 2.291e+02 4.117e+02, threshold=3.553e+02, percent-clipped=3.0
2023-03-26 17:41:00,224 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1459, 1.3352, 0.7407, 2.0449, 2.3382, 1.7770, 1.8215, 1.9708], device='cuda:2'), covar=tensor([0.1373, 0.1988, 0.2109, 0.1107, 0.1863, 0.1740, 0.1399, 0.1880], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0092, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 17:41:06,814 INFO [finetune.py:976] (2/7) Epoch 14, batch 5400, loss[loss=0.1966, simple_loss=0.2546, pruned_loss=0.06928, over 4816.00 frames. ], tot_loss[loss=0.1864, simple_loss=0.2549, pruned_loss=0.05901, over 955762.99 frames. ], batch size: 40, lr: 3.52e-03, grad_scale: 32.0
2023-03-26 17:41:40,250 INFO [finetune.py:976] (2/7) Epoch 14, batch 5450, loss[loss=0.1442, simple_loss=0.2187, pruned_loss=0.03483, over 4868.00 frames. ], tot_loss[loss=0.1842, simple_loss=0.2516, pruned_loss=0.05836, over 954477.00 frames. ], batch size: 34, lr: 3.52e-03, grad_scale: 32.0
2023-03-26 17:41:51,292 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6897, 1.4186, 1.2266, 1.3656, 1.8748, 1.9068, 1.5911, 1.3605], device='cuda:2'), covar=tensor([0.0244, 0.0342, 0.0763, 0.0340, 0.0222, 0.0382, 0.0297, 0.0403], device='cuda:2'), in_proj_covar=tensor([0.0094, 0.0108, 0.0141, 0.0113, 0.0101, 0.0107, 0.0097, 0.0108], device='cuda:2'), out_proj_covar=tensor([7.2669e-05, 8.4017e-05, 1.1195e-04, 8.7817e-05, 7.8541e-05, 7.9047e-05, 7.2911e-05, 8.2871e-05], device='cuda:2')
2023-03-26 17:41:59,681 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=79941.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:41:59,698 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=79941.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 17:42:00,807 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.028e+02 1.485e+02 1.773e+02 2.175e+02 4.141e+02, threshold=3.546e+02, percent-clipped=3.0
2023-03-26 17:42:14,249 INFO [finetune.py:976] (2/7) Epoch 14, batch 5500, loss[loss=0.1688, simple_loss=0.2448, pruned_loss=0.04638, over 4809.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2489, pruned_loss=0.05735, over 954718.13 frames. ], batch size: 45, lr: 3.52e-03, grad_scale: 32.0
2023-03-26 17:42:32,371 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=79989.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 17:42:32,390 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=79989.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:42:44,821 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7357, 1.4900, 1.0655, 0.2700, 1.2796, 1.4531, 1.3984, 1.3960], device='cuda:2'), covar=tensor([0.0821, 0.0759, 0.1324, 0.1789, 0.1305, 0.2326, 0.2132, 0.0888], device='cuda:2'), in_proj_covar=tensor([0.0167, 0.0195, 0.0199, 0.0183, 0.0213, 0.0207, 0.0224, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:42:54,668 INFO [finetune.py:976] (2/7) Epoch 14, batch 5550, loss[loss=0.2079, simple_loss=0.2831, pruned_loss=0.06637, over 4861.00 frames. ], tot_loss[loss=0.1839, simple_loss=0.2513, pruned_loss=0.05822, over 954643.05 frames. ], batch size: 31, lr: 3.52e-03, grad_scale: 32.0
2023-03-26 17:42:55,965 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=80013.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:43:00,253 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=80020.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:43:07,413 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=80031.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 17:43:11,543 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=80037.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:43:15,065 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.061e+02 1.609e+02 1.862e+02 2.441e+02 4.163e+02, threshold=3.724e+02, percent-clipped=2.0
2023-03-26 17:43:25,570 INFO [finetune.py:976] (2/7) Epoch 14, batch 5600, loss[loss=0.1808, simple_loss=0.2641, pruned_loss=0.04879, over 4816.00 frames. ], tot_loss[loss=0.1872, simple_loss=0.2553, pruned_loss=0.05952, over 953323.61 frames. ], batch size: 38, lr: 3.52e-03, grad_scale: 32.0
2023-03-26 17:43:27,363 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6692, 1.5344, 2.2213, 3.0374, 2.0939, 2.1597, 1.3279, 2.4648], device='cuda:2'), covar=tensor([0.1531, 0.1301, 0.0991, 0.0532, 0.0739, 0.2099, 0.1438, 0.0549], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0116, 0.0133, 0.0163, 0.0100, 0.0137, 0.0124, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 17:43:29,671 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=80068.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:43:36,039 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=80079.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 17:44:00,967 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.7574, 1.5701, 1.5843, 0.9550, 1.7220, 1.9475, 1.8847, 1.4229], device='cuda:2'), covar=tensor([0.1057, 0.0778, 0.0591, 0.0698, 0.0482, 0.0556, 0.0364, 0.0817], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0153, 0.0123, 0.0129, 0.0130, 0.0128, 0.0142, 0.0145], device='cuda:2'), out_proj_covar=tensor([9.3173e-05, 1.1117e-04, 8.8660e-05, 9.3000e-05, 9.2484e-05, 9.2401e-05, 1.0253e-04, 1.0502e-04], device='cuda:2')
2023-03-26 17:44:12,672 INFO [finetune.py:976] (2/7) Epoch 14, batch 5650, loss[loss=0.2396, simple_loss=0.3022, pruned_loss=0.08851, over 4728.00 frames. ], tot_loss[loss=0.1886, simple_loss=0.2574, pruned_loss=0.05992, over 952450.18 frames. ], batch size: 59, lr: 3.52e-03, grad_scale: 32.0
2023-03-26 17:44:21,812 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=80118.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:44:41,678 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.115e+02 1.565e+02 1.838e+02 2.201e+02 4.652e+02, threshold=3.676e+02, percent-clipped=2.0
2023-03-26 17:44:56,306 INFO [finetune.py:976] (2/7) Epoch 14, batch 5700, loss[loss=0.1711, simple_loss=0.2261, pruned_loss=0.05806, over 4460.00 frames. ], tot_loss[loss=0.1858, simple_loss=0.2534, pruned_loss=0.05916, over 936418.07 frames. ], batch size: 19, lr: 3.52e-03, grad_scale: 32.0
2023-03-26 17:45:28,231 INFO [finetune.py:976] (2/7) Epoch 15, batch 0, loss[loss=0.168, simple_loss=0.2444, pruned_loss=0.04573, over 4824.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2444, pruned_loss=0.04573, over 4824.00 frames. ], batch size: 30, lr: 3.52e-03, grad_scale: 32.0
2023-03-26 17:45:28,231 INFO [finetune.py:1001] (2/7) Computing validation loss
2023-03-26 17:45:30,622 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3361, 2.0444, 1.4767, 0.5875, 1.8569, 1.9786, 1.8122, 1.9376], device='cuda:2'), covar=tensor([0.0966, 0.0823, 0.1621, 0.2147, 0.1314, 0.2558, 0.2235, 0.0859], device='cuda:2'), in_proj_covar=tensor([0.0167, 0.0195, 0.0200, 0.0183, 0.0213, 0.0207, 0.0224, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:45:42,541 INFO [finetune.py:1010] (2/7) Epoch 15, validation: loss=0.1586, simple_loss=0.2288, pruned_loss=0.0442, over 2265189.00 frames.
2023-03-26 17:45:42,542 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB
2023-03-26 17:45:42,686 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3591, 2.3192, 1.8083, 2.5218, 2.3249, 1.9333, 2.8603, 2.4185], device='cuda:2'), covar=tensor([0.1335, 0.2576, 0.3321, 0.2821, 0.2667, 0.1768, 0.3418, 0.1868], device='cuda:2'), in_proj_covar=tensor([0.0180, 0.0187, 0.0234, 0.0255, 0.0245, 0.0200, 0.0213, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:46:12,248 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7334, 1.1482, 0.8097, 1.5476, 2.0756, 1.0848, 1.4318, 1.5882], device='cuda:2'), covar=tensor([0.1475, 0.2140, 0.2046, 0.1262, 0.1958, 0.2020, 0.1483, 0.1932], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0095, 0.0111, 0.0093, 0.0120, 0.0094, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 17:46:15,191 INFO [finetune.py:976] (2/7) Epoch 15, batch 50, loss[loss=0.1948, simple_loss=0.2577, pruned_loss=0.06594, over 4757.00 frames. ], tot_loss[loss=0.1905, simple_loss=0.2587, pruned_loss=0.06115, over 216497.04 frames. ], batch size: 28, lr: 3.52e-03, grad_scale: 32.0
2023-03-26 17:46:17,640 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=80241.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:46:17,688 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8698, 1.7895, 1.3607, 1.6755, 1.9678, 1.6500, 2.4019, 1.8131], device='cuda:2'), covar=tensor([0.1443, 0.1989, 0.3422, 0.2741, 0.2803, 0.1642, 0.2856, 0.1972], device='cuda:2'), in_proj_covar=tensor([0.0180, 0.0187, 0.0234, 0.0254, 0.0244, 0.0199, 0.0213, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:46:18,753 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.025e+02 1.470e+02 1.896e+02 2.201e+02 3.299e+02, threshold=3.792e+02, percent-clipped=0.0
2023-03-26 17:46:21,719 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6359, 1.5519, 1.4538, 1.5809, 1.0593, 3.3072, 1.2476, 1.6228], device='cuda:2'), covar=tensor([0.3498, 0.2431, 0.2219, 0.2623, 0.1990, 0.0244, 0.2737, 0.1370], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0116, 0.0120, 0.0124, 0.0115, 0.0098, 0.0097, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 17:46:37,379 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0
2023-03-26 17:46:45,381 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=80283.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:46:48,315 INFO [finetune.py:976] (2/7) Epoch 15, batch 100, loss[loss=0.1907, simple_loss=0.2493, pruned_loss=0.06608, over 4824.00 frames. ], tot_loss[loss=0.1864, simple_loss=0.2538, pruned_loss=0.05947, over 382153.98 frames. ], batch size: 30, lr: 3.52e-03, grad_scale: 32.0
2023-03-26 17:46:48,983 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=80289.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:47:05,102 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=80313.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:47:10,584 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.66 vs. limit=2.0
2023-03-26 17:47:16,529 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.78 vs. limit=2.0
2023-03-26 17:47:21,598 INFO [finetune.py:976] (2/7) Epoch 15, batch 150, loss[loss=0.1549, simple_loss=0.2229, pruned_loss=0.04346, over 4821.00 frames. ], tot_loss[loss=0.183, simple_loss=0.2494, pruned_loss=0.05828, over 508507.94 frames. ], batch size: 30, lr: 3.51e-03, grad_scale: 32.0
2023-03-26 17:47:25,132 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.875e+01 1.589e+02 1.860e+02 2.189e+02 4.694e+02, threshold=3.721e+02, percent-clipped=1.0
2023-03-26 17:47:25,895 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=80344.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:47:36,664 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=80361.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:47:54,529 INFO [finetune.py:976] (2/7) Epoch 15, batch 200, loss[loss=0.1629, simple_loss=0.2461, pruned_loss=0.03986, over 4917.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.2484, pruned_loss=0.05825, over 609182.57 frames. ], batch size: 37, lr: 3.51e-03, grad_scale: 32.0
2023-03-26 17:47:57,330 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-26 17:47:58,241 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=80393.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:48:16,699 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=80418.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:48:34,170 INFO [finetune.py:976] (2/7) Epoch 15, batch 250, loss[loss=0.2143, simple_loss=0.2949, pruned_loss=0.06689, over 4834.00 frames. ], tot_loss[loss=0.186, simple_loss=0.2527, pruned_loss=0.0596, over 686827.90 frames. ], batch size: 49, lr: 3.51e-03, grad_scale: 32.0
2023-03-26 17:48:37,160 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.156e+02 1.638e+02 2.049e+02 2.410e+02 5.367e+02, threshold=4.098e+02, percent-clipped=2.0
2023-03-26 17:48:48,467 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=80454.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:48:58,604 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=80463.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:49:00,359 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=80466.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:49:20,044 INFO [finetune.py:976] (2/7) Epoch 15, batch 300, loss[loss=0.1686, simple_loss=0.2483, pruned_loss=0.04441, over 4826.00 frames. ], tot_loss[loss=0.1899, simple_loss=0.2574, pruned_loss=0.0612, over 745965.40 frames. ], batch size: 30, lr: 3.51e-03, grad_scale: 32.0
2023-03-26 17:49:24,809 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=80494.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:50:03,935 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=80524.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:50:12,916 INFO [finetune.py:976] (2/7) Epoch 15, batch 350, loss[loss=0.2015, simple_loss=0.2749, pruned_loss=0.06403, over 4711.00 frames. ], tot_loss[loss=0.191, simple_loss=0.2589, pruned_loss=0.06155, over 791858.24 frames. ], batch size: 59, lr: 3.51e-03, grad_scale: 32.0
2023-03-26 17:50:16,427 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.008e+02 1.511e+02 1.809e+02 2.185e+02 3.892e+02, threshold=3.618e+02, percent-clipped=0.0
2023-03-26 17:50:24,825 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=80555.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:50:47,435 INFO [finetune.py:976] (2/7) Epoch 15, batch 400, loss[loss=0.1958, simple_loss=0.2719, pruned_loss=0.05986, over 4790.00 frames. ], tot_loss[loss=0.192, simple_loss=0.2603, pruned_loss=0.06186, over 828158.06 frames. ], batch size: 51, lr: 3.51e-03, grad_scale: 32.0
2023-03-26 17:50:59,711 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.61 vs. limit=5.0
2023-03-26 17:51:20,765 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7544, 1.5811, 2.2412, 3.2910, 2.3192, 2.3834, 1.0777, 2.7226], device='cuda:2'), covar=tensor([0.1637, 0.1371, 0.1245, 0.0588, 0.0741, 0.1587, 0.1801, 0.0507], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0164, 0.0101, 0.0137, 0.0124, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 17:51:29,122 INFO [finetune.py:976] (2/7) Epoch 15, batch 450, loss[loss=0.1443, simple_loss=0.2051, pruned_loss=0.04181, over 4162.00 frames. ], tot_loss[loss=0.1902, simple_loss=0.2581, pruned_loss=0.06116, over 855681.55 frames. ], batch size: 18, lr: 3.51e-03, grad_scale: 32.0
2023-03-26 17:51:29,784 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=80639.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:51:31,566 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0154, 1.9686, 2.0217, 1.4224, 2.1308, 2.1584, 2.1262, 1.7971], device='cuda:2'), covar=tensor([0.0533, 0.0650, 0.0643, 0.0839, 0.0625, 0.0648, 0.0541, 0.0975], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0131, 0.0140, 0.0122, 0.0122, 0.0139, 0.0139, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:51:32,681 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.016e+02 1.580e+02 1.854e+02 2.177e+02 4.594e+02, threshold=3.707e+02, percent-clipped=2.0
2023-03-26 17:51:38,685 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9782, 1.7117, 2.1832, 1.6679, 2.0966, 2.1455, 1.6996, 2.2629], device='cuda:2'), covar=tensor([0.1060, 0.1704, 0.1300, 0.1421, 0.0672, 0.1153, 0.2128, 0.0771], device='cuda:2'), in_proj_covar=tensor([0.0194, 0.0204, 0.0192, 0.0190, 0.0176, 0.0214, 0.0217, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:52:03,114 INFO [finetune.py:976] (2/7) Epoch 15, batch 500, loss[loss=0.2109, simple_loss=0.2722, pruned_loss=0.07484, over 4143.00 frames. ], tot_loss[loss=0.187, simple_loss=0.2548, pruned_loss=0.05964, over 876725.76 frames. ], batch size: 18, lr: 3.51e-03, grad_scale: 32.0
2023-03-26 17:52:36,003 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=80736.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:52:37,124 INFO [finetune.py:976] (2/7) Epoch 15, batch 550, loss[loss=0.1384, simple_loss=0.2096, pruned_loss=0.03362, over 4865.00 frames. ], tot_loss[loss=0.1841, simple_loss=0.2511, pruned_loss=0.05857, over 890743.85 frames. ], batch size: 31, lr: 3.51e-03, grad_scale: 32.0
2023-03-26 17:52:40,201 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.067e+02 1.496e+02 1.725e+02 2.011e+02 3.976e+02, threshold=3.451e+02, percent-clipped=1.0
2023-03-26 17:52:44,383 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=80749.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:52:49,602 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9425, 1.8486, 1.7372, 1.9331, 1.3493, 4.6649, 1.7538, 2.1671], device='cuda:2'), covar=tensor([0.3426, 0.2435, 0.2092, 0.2333, 0.1833, 0.0133, 0.2452, 0.1280], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0123, 0.0115, 0.0097, 0.0097, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 17:53:01,301 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=80773.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:53:10,746 INFO [finetune.py:976] (2/7) Epoch 15, batch 600, loss[loss=0.2128, simple_loss=0.2816, pruned_loss=0.07205, over 4745.00 frames. ], tot_loss[loss=0.1842, simple_loss=0.2513, pruned_loss=0.0586, over 905015.66 frames. ], batch size: 54, lr: 3.51e-03, grad_scale: 32.0
2023-03-26 17:53:16,806 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=80797.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:53:32,700 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=80819.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:53:44,643 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=80834.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:53:45,833 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9651, 3.7815, 3.5991, 1.8552, 3.7947, 2.8816, 1.3481, 2.7272], device='cuda:2'), covar=tensor([0.2046, 0.1544, 0.1542, 0.3249, 0.0959, 0.1026, 0.3966, 0.1393], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0173, 0.0159, 0.0128, 0.0157, 0.0122, 0.0146, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 17:53:47,023 INFO [finetune.py:976] (2/7) Epoch 15, batch 650, loss[loss=0.2225, simple_loss=0.286, pruned_loss=0.07955, over 4905.00 frames. ], tot_loss[loss=0.1871, simple_loss=0.2543, pruned_loss=0.0599, over 917951.69 frames. ], batch size: 43, lr: 3.51e-03, grad_scale: 32.0
2023-03-26 17:53:50,575 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.196e+02 1.643e+02 1.965e+02 2.358e+02 6.399e+02, threshold=3.929e+02, percent-clipped=5.0
2023-03-26 17:53:55,342 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=80850.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:54:07,241 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4798, 2.3626, 2.0892, 1.0280, 2.2543, 1.8311, 1.6387, 2.2124], device='cuda:2'), covar=tensor([0.0882, 0.0958, 0.1455, 0.2110, 0.1385, 0.2364, 0.2289, 0.0875], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0196, 0.0201, 0.0184, 0.0215, 0.0207, 0.0226, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:54:16,869 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1415, 1.8502, 2.4306, 1.6728, 2.3313, 2.4193, 1.7976, 2.4781], device='cuda:2'), covar=tensor([0.1389, 0.2014, 0.1592, 0.1989, 0.0938, 0.1592, 0.2695, 0.1057], device='cuda:2'), in_proj_covar=tensor([0.0193, 0.0203, 0.0192, 0.0190, 0.0175, 0.0214, 0.0216, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:54:29,238 INFO [finetune.py:976] (2/7) Epoch 15, batch 700, loss[loss=0.2857, simple_loss=0.3377, pruned_loss=0.1169, over 4102.00 frames. ], tot_loss[loss=0.1888, simple_loss=0.2562, pruned_loss=0.06071, over 924390.43 frames. ], batch size: 65, lr: 3.51e-03, grad_scale: 32.0
2023-03-26 17:54:30,581 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7251, 1.4750, 0.9417, 0.2078, 1.1808, 1.5016, 1.4793, 1.3565], device='cuda:2'), covar=tensor([0.1107, 0.0927, 0.1567, 0.1973, 0.1547, 0.2399, 0.2357, 0.0937], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0195, 0.0200, 0.0183, 0.0214, 0.0207, 0.0225, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:55:14,815 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0561, 1.8998, 1.6315, 1.8969, 2.0081, 1.6798, 2.2359, 2.0003], device='cuda:2'), covar=tensor([0.1248, 0.2116, 0.2958, 0.2288, 0.2364, 0.1664, 0.3213, 0.1746], device='cuda:2'), in_proj_covar=tensor([0.0180, 0.0186, 0.0233, 0.0253, 0.0243, 0.0199, 0.0212, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:55:23,119 INFO [finetune.py:976] (2/7) Epoch 15, batch 750, loss[loss=0.1802, simple_loss=0.2491, pruned_loss=0.05561, over 4742.00 frames. ], tot_loss[loss=0.1908, simple_loss=0.2585, pruned_loss=0.0616, over 930808.97 frames. ], batch size: 59, lr: 3.51e-03, grad_scale: 32.0
2023-03-26 17:55:23,802 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=80939.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:55:26,162 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.122e+02 1.628e+02 1.856e+02 2.303e+02 3.612e+02, threshold=3.712e+02, percent-clipped=0.0
2023-03-26 17:55:33,348 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.1883, 2.9029, 3.0136, 2.9773, 2.8337, 2.7214, 3.2576, 0.9874], device='cuda:2'), covar=tensor([0.1652, 0.1811, 0.1763, 0.2166, 0.2632, 0.2695, 0.1720, 0.7748], device='cuda:2'), in_proj_covar=tensor([0.0346, 0.0243, 0.0271, 0.0291, 0.0328, 0.0280, 0.0296, 0.0295], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 17:55:35,352 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0
2023-03-26 17:55:56,364 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=80987.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 17:55:56,907 INFO [finetune.py:976] (2/7) Epoch 15, batch 800, loss[loss=0.1481, simple_loss=0.2229, pruned_loss=0.03665, over 4784.00 frames. ], tot_loss[loss=0.1903, simple_loss=0.2584, pruned_loss=0.0611, over 936270.63 frames. ], batch size: 26, lr: 3.51e-03, grad_scale: 32.0
2023-03-26 17:56:38,264 INFO [finetune.py:976] (2/7) Epoch 15, batch 850, loss[loss=0.202, simple_loss=0.2728, pruned_loss=0.06563, over 4892.00 frames. ], tot_loss[loss=0.1875, simple_loss=0.2557, pruned_loss=0.05961, over 941890.60 frames.
], batch size: 35, lr: 3.51e-03, grad_scale: 32.0 2023-03-26 17:56:40,803 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9023, 1.3176, 0.8138, 1.7554, 2.0984, 1.2020, 1.5315, 1.6326], device='cuda:2'), covar=tensor([0.1334, 0.1986, 0.1972, 0.1117, 0.1711, 0.1838, 0.1407, 0.1964], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0097, 0.0112, 0.0094, 0.0121, 0.0095, 0.0100, 0.0090], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-26 17:56:41,290 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.001e+02 1.679e+02 1.976e+02 2.340e+02 3.768e+02, threshold=3.952e+02, percent-clipped=2.0 2023-03-26 17:56:44,995 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81049.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:57:11,961 INFO [finetune.py:976] (2/7) Epoch 15, batch 900, loss[loss=0.1677, simple_loss=0.2346, pruned_loss=0.05042, over 4928.00 frames. ], tot_loss[loss=0.1842, simple_loss=0.2523, pruned_loss=0.05803, over 945398.68 frames. ], batch size: 38, lr: 3.51e-03, grad_scale: 64.0 2023-03-26 17:57:14,452 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=81092.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:57:17,457 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=81097.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:57:25,237 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-26 17:57:29,871 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=81116.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:57:31,671 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81119.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:57:38,664 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=81129.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:57:45,622 INFO [finetune.py:976] (2/7) Epoch 15, batch 950, loss[loss=0.1826, simple_loss=0.2543, pruned_loss=0.05544, over 4894.00 frames. ], tot_loss[loss=0.1838, simple_loss=0.2514, pruned_loss=0.05811, over 949470.55 frames. 
], batch size: 35, lr: 3.51e-03, grad_scale: 64.0 2023-03-26 17:57:48,668 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.129e+01 1.456e+02 1.848e+02 2.216e+02 5.430e+02, threshold=3.695e+02, percent-clipped=2.0 2023-03-26 17:57:52,982 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81150.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:58:03,766 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=81167.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:58:11,296 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=81177.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:58:14,954 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9711, 1.7839, 1.5357, 1.6378, 1.6456, 1.6947, 1.7231, 2.4300], device='cuda:2'), covar=tensor([0.3831, 0.4332, 0.3415, 0.4037, 0.4321, 0.2346, 0.3945, 0.1730], device='cuda:2'), in_proj_covar=tensor([0.0285, 0.0258, 0.0225, 0.0276, 0.0246, 0.0214, 0.0249, 0.0224], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 17:58:19,387 INFO [finetune.py:976] (2/7) Epoch 15, batch 1000, loss[loss=0.2142, simple_loss=0.2731, pruned_loss=0.07767, over 4061.00 frames. ], tot_loss[loss=0.1864, simple_loss=0.2539, pruned_loss=0.05943, over 950806.25 frames. ], batch size: 65, lr: 3.51e-03, grad_scale: 64.0 2023-03-26 17:58:22,515 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4788, 1.2664, 1.2940, 1.3429, 1.7044, 1.5386, 1.3950, 1.2106], device='cuda:2'), covar=tensor([0.0283, 0.0285, 0.0565, 0.0302, 0.0224, 0.0458, 0.0315, 0.0387], device='cuda:2'), in_proj_covar=tensor([0.0095, 0.0110, 0.0143, 0.0114, 0.0102, 0.0108, 0.0098, 0.0110], device='cuda:2'), out_proj_covar=tensor([7.3509e-05, 8.5072e-05, 1.1367e-04, 8.8660e-05, 7.9449e-05, 8.0022e-05, 7.3599e-05, 8.3879e-05], device='cuda:2') 2023-03-26 17:58:25,510 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=81198.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:58:52,895 INFO [finetune.py:976] (2/7) Epoch 15, batch 1050, loss[loss=0.1696, simple_loss=0.2472, pruned_loss=0.04596, over 4916.00 frames. ], tot_loss[loss=0.1874, simple_loss=0.2562, pruned_loss=0.05933, over 952291.10 frames. ], batch size: 42, lr: 3.51e-03, grad_scale: 64.0 2023-03-26 17:58:56,385 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.183e+02 1.566e+02 1.800e+02 2.282e+02 3.514e+02, threshold=3.601e+02, percent-clipped=0.0 2023-03-26 17:59:31,695 INFO [finetune.py:976] (2/7) Epoch 15, batch 1100, loss[loss=0.2166, simple_loss=0.285, pruned_loss=0.07407, over 4819.00 frames. ], tot_loss[loss=0.187, simple_loss=0.2561, pruned_loss=0.05896, over 952870.63 frames. ], batch size: 38, lr: 3.51e-03, grad_scale: 64.0 2023-03-26 17:59:43,369 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=81299.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:59:49,386 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=81301.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:00:16,530 INFO [finetune.py:976] (2/7) Epoch 15, batch 1150, loss[loss=0.1814, simple_loss=0.2564, pruned_loss=0.05317, over 4929.00 frames. ], tot_loss[loss=0.1881, simple_loss=0.2573, pruned_loss=0.05946, over 952113.12 frames. 
], batch size: 41, lr: 3.51e-03, grad_scale: 64.0 2023-03-26 18:00:23,240 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.122e+02 1.635e+02 2.084e+02 2.407e+02 3.907e+02, threshold=4.168e+02, percent-clipped=1.0 2023-03-26 18:00:41,598 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=81360.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:00:42,872 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=81362.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:00:56,498 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0752, 1.8916, 2.4217, 1.5539, 2.2069, 2.5266, 1.8441, 2.5437], device='cuda:2'), covar=tensor([0.1415, 0.2009, 0.1496, 0.1989, 0.1031, 0.1136, 0.2485, 0.0890], device='cuda:2'), in_proj_covar=tensor([0.0193, 0.0203, 0.0191, 0.0189, 0.0175, 0.0212, 0.0215, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:00:59,899 INFO [finetune.py:976] (2/7) Epoch 15, batch 1200, loss[loss=0.1708, simple_loss=0.2465, pruned_loss=0.04752, over 4904.00 frames. ], tot_loss[loss=0.1871, simple_loss=0.256, pruned_loss=0.05909, over 951713.37 frames. ], batch size: 46, lr: 3.51e-03, grad_scale: 64.0 2023-03-26 18:01:03,391 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81392.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:01:27,560 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.9873, 4.3930, 4.5281, 4.8144, 4.7377, 4.4477, 5.1155, 1.4902], device='cuda:2'), covar=tensor([0.0645, 0.0824, 0.0663, 0.0700, 0.1034, 0.1311, 0.0435, 0.5565], device='cuda:2'), in_proj_covar=tensor([0.0346, 0.0242, 0.0271, 0.0290, 0.0328, 0.0280, 0.0295, 0.0294], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:01:27,576 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81429.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:01:35,102 INFO [finetune.py:976] (2/7) Epoch 15, batch 1250, loss[loss=0.2185, simple_loss=0.272, pruned_loss=0.08251, over 4759.00 frames. ], tot_loss[loss=0.1872, simple_loss=0.2551, pruned_loss=0.05964, over 953567.50 frames. 
], batch size: 28, lr: 3.51e-03, grad_scale: 64.0 2023-03-26 18:01:40,099 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=81440.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:01:42,327 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.103e+01 1.546e+02 1.830e+02 2.259e+02 3.665e+02, threshold=3.660e+02, percent-clipped=0.0 2023-03-26 18:01:55,868 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6722, 1.5511, 2.3159, 3.6296, 2.4065, 2.4630, 1.0207, 2.9704], device='cuda:2'), covar=tensor([0.1890, 0.1577, 0.1407, 0.0518, 0.0812, 0.1527, 0.2072, 0.0472], device='cuda:2'), in_proj_covar=tensor([0.0098, 0.0114, 0.0131, 0.0162, 0.0099, 0.0135, 0.0123, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 18:02:05,449 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=81472.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:02:08,447 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=81477.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:02:15,564 INFO [finetune.py:976] (2/7) Epoch 15, batch 1300, loss[loss=0.1683, simple_loss=0.2436, pruned_loss=0.04645, over 4937.00 frames. ], tot_loss[loss=0.1841, simple_loss=0.2518, pruned_loss=0.05818, over 953399.69 frames. ], batch size: 42, lr: 3.50e-03, grad_scale: 64.0 2023-03-26 18:02:49,394 INFO [finetune.py:976] (2/7) Epoch 15, batch 1350, loss[loss=0.163, simple_loss=0.2264, pruned_loss=0.04975, over 4713.00 frames. ], tot_loss[loss=0.1839, simple_loss=0.2516, pruned_loss=0.05816, over 955707.03 frames. ], batch size: 23, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:02:53,475 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.113e+02 1.608e+02 1.859e+02 2.257e+02 3.880e+02, threshold=3.719e+02, percent-clipped=1.0 2023-03-26 18:03:20,604 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9050, 1.9752, 1.7376, 1.7068, 2.3767, 2.4042, 2.0862, 1.9364], device='cuda:2'), covar=tensor([0.0376, 0.0364, 0.0591, 0.0336, 0.0307, 0.0564, 0.0314, 0.0353], device='cuda:2'), in_proj_covar=tensor([0.0094, 0.0109, 0.0142, 0.0113, 0.0100, 0.0107, 0.0097, 0.0109], device='cuda:2'), out_proj_covar=tensor([7.2579e-05, 8.4245e-05, 1.1249e-04, 8.7436e-05, 7.8236e-05, 7.9221e-05, 7.2828e-05, 8.3121e-05], device='cuda:2') 2023-03-26 18:03:22,722 INFO [finetune.py:976] (2/7) Epoch 15, batch 1400, loss[loss=0.1558, simple_loss=0.2207, pruned_loss=0.0454, over 4188.00 frames. ], tot_loss[loss=0.187, simple_loss=0.2552, pruned_loss=0.05943, over 955981.61 frames. ], batch size: 18, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:03:24,648 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6089, 1.3717, 2.1342, 3.2890, 2.0876, 2.1908, 1.1297, 2.6486], device='cuda:2'), covar=tensor([0.1796, 0.1476, 0.1251, 0.0525, 0.0868, 0.1951, 0.1671, 0.0509], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0163, 0.0099, 0.0136, 0.0123, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 18:03:56,017 INFO [finetune.py:976] (2/7) Epoch 15, batch 1450, loss[loss=0.176, simple_loss=0.2554, pruned_loss=0.04828, over 4827.00 frames. ], tot_loss[loss=0.1882, simple_loss=0.2568, pruned_loss=0.05977, over 954685.66 frames. 
], batch size: 47, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:04:00,100 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.106e+02 1.620e+02 1.887e+02 2.237e+02 3.719e+02, threshold=3.774e+02, percent-clipped=1.0 2023-03-26 18:04:07,939 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=81655.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:04:09,558 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=81657.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:04:18,093 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=81670.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:04:18,122 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3410, 2.2439, 1.8555, 2.3568, 2.1752, 2.1122, 2.0606, 3.1151], device='cuda:2'), covar=tensor([0.3936, 0.5299, 0.3605, 0.4583, 0.4338, 0.2596, 0.4678, 0.1767], device='cuda:2'), in_proj_covar=tensor([0.0283, 0.0257, 0.0224, 0.0274, 0.0244, 0.0213, 0.0247, 0.0223], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:04:24,629 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6656, 3.3966, 3.2085, 1.5560, 3.4512, 2.7230, 0.6882, 2.3316], device='cuda:2'), covar=tensor([0.2328, 0.1833, 0.1561, 0.3379, 0.1211, 0.1088, 0.4541, 0.1571], device='cuda:2'), in_proj_covar=tensor([0.0150, 0.0174, 0.0158, 0.0128, 0.0156, 0.0121, 0.0145, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 18:04:29,491 INFO [finetune.py:976] (2/7) Epoch 15, batch 1500, loss[loss=0.1898, simple_loss=0.2635, pruned_loss=0.05803, over 4879.00 frames. ], tot_loss[loss=0.1888, simple_loss=0.2576, pruned_loss=0.05997, over 954412.43 frames. ], batch size: 32, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:05:16,702 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=81731.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 18:05:16,769 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-26 18:05:20,802 INFO [finetune.py:976] (2/7) Epoch 15, batch 1550, loss[loss=0.1822, simple_loss=0.251, pruned_loss=0.05664, over 4839.00 frames. ], tot_loss[loss=0.1878, simple_loss=0.257, pruned_loss=0.05932, over 956733.62 frames. ], batch size: 44, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:05:20,953 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9685, 1.2835, 1.9441, 1.9037, 1.6878, 1.6515, 1.8057, 1.7787], device='cuda:2'), covar=tensor([0.3693, 0.3790, 0.3023, 0.3320, 0.4231, 0.3521, 0.4226, 0.2984], device='cuda:2'), in_proj_covar=tensor([0.0243, 0.0238, 0.0256, 0.0266, 0.0265, 0.0238, 0.0279, 0.0235], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:05:24,955 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.055e+02 1.530e+02 1.898e+02 2.293e+02 4.636e+02, threshold=3.795e+02, percent-clipped=1.0 2023-03-26 18:05:50,088 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81772.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:06:03,757 INFO [finetune.py:976] (2/7) Epoch 15, batch 1600, loss[loss=0.1832, simple_loss=0.2518, pruned_loss=0.05732, over 4911.00 frames. ], tot_loss[loss=0.1858, simple_loss=0.2543, pruned_loss=0.05866, over 958409.26 frames. 
], batch size: 46, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:06:25,747 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=81820.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:06:37,140 INFO [finetune.py:976] (2/7) Epoch 15, batch 1650, loss[loss=0.153, simple_loss=0.2166, pruned_loss=0.04468, over 4824.00 frames. ], tot_loss[loss=0.1828, simple_loss=0.2506, pruned_loss=0.05755, over 957201.25 frames. ], batch size: 25, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:06:40,762 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.362e+01 1.564e+02 1.826e+02 2.251e+02 4.924e+02, threshold=3.651e+02, percent-clipped=3.0 2023-03-26 18:07:18,076 INFO [finetune.py:976] (2/7) Epoch 15, batch 1700, loss[loss=0.1966, simple_loss=0.2724, pruned_loss=0.06035, over 4751.00 frames. ], tot_loss[loss=0.1816, simple_loss=0.2489, pruned_loss=0.05709, over 954962.45 frames. ], batch size: 54, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:07:21,291 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0 2023-03-26 18:07:51,481 INFO [finetune.py:976] (2/7) Epoch 15, batch 1750, loss[loss=0.2399, simple_loss=0.3142, pruned_loss=0.08279, over 4811.00 frames. ], tot_loss[loss=0.1861, simple_loss=0.2529, pruned_loss=0.05967, over 954667.62 frames. ], batch size: 38, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:07:55,584 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.016e+02 1.510e+02 1.914e+02 2.293e+02 4.004e+02, threshold=3.828e+02, percent-clipped=1.0 2023-03-26 18:07:57,576 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4938, 1.6346, 1.4361, 1.8374, 1.9825, 1.6820, 1.3442, 1.2544], device='cuda:2'), covar=tensor([0.2584, 0.2055, 0.1980, 0.1657, 0.2036, 0.1413, 0.2644, 0.2128], device='cuda:2'), in_proj_covar=tensor([0.0241, 0.0208, 0.0212, 0.0192, 0.0242, 0.0186, 0.0217, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:08:00,587 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.64 vs. limit=2.0 2023-03-26 18:08:02,978 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81955.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:08:04,192 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81957.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:08:24,912 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=81987.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:08:25,411 INFO [finetune.py:976] (2/7) Epoch 15, batch 1800, loss[loss=0.1921, simple_loss=0.266, pruned_loss=0.0591, over 4800.00 frames. ], tot_loss[loss=0.1879, simple_loss=0.2561, pruned_loss=0.05981, over 953960.60 frames. 
], batch size: 51, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:08:36,208 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=82003.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:08:37,855 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=82005.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:08:51,332 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.8789, 1.8223, 2.0314, 1.1774, 1.9909, 2.2915, 2.1046, 1.7745], device='cuda:2'), covar=tensor([0.0936, 0.0707, 0.0411, 0.0586, 0.0478, 0.0551, 0.0408, 0.0577], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0151, 0.0123, 0.0129, 0.0130, 0.0127, 0.0141, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.2728e-05, 1.0994e-04, 8.8504e-05, 9.2230e-05, 9.1852e-05, 9.1658e-05, 1.0217e-04, 1.0532e-04], device='cuda:2') 2023-03-26 18:08:52,498 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=82026.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 18:09:00,022 INFO [finetune.py:976] (2/7) Epoch 15, batch 1850, loss[loss=0.2363, simple_loss=0.2968, pruned_loss=0.08794, over 4807.00 frames. ], tot_loss[loss=0.1887, simple_loss=0.2566, pruned_loss=0.06038, over 954643.10 frames. ], batch size: 45, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:09:03,672 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.098e+02 1.664e+02 1.894e+02 2.440e+02 3.763e+02, threshold=3.787e+02, percent-clipped=0.0 2023-03-26 18:09:06,718 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=82048.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:09:11,015 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=82055.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:09:33,282 INFO [finetune.py:976] (2/7) Epoch 15, batch 1900, loss[loss=0.2299, simple_loss=0.2962, pruned_loss=0.0818, over 4924.00 frames. ], tot_loss[loss=0.1887, simple_loss=0.257, pruned_loss=0.0602, over 952759.93 frames. ], batch size: 41, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:09:51,812 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=82116.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:10:16,117 INFO [finetune.py:976] (2/7) Epoch 15, batch 1950, loss[loss=0.2001, simple_loss=0.263, pruned_loss=0.06861, over 4865.00 frames. ], tot_loss[loss=0.1868, simple_loss=0.2548, pruned_loss=0.05937, over 953049.40 frames. ], batch size: 31, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:10:24,221 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.964e+01 1.525e+02 1.906e+02 2.226e+02 4.434e+02, threshold=3.812e+02, percent-clipped=2.0 2023-03-26 18:11:01,528 INFO [finetune.py:976] (2/7) Epoch 15, batch 2000, loss[loss=0.1883, simple_loss=0.2497, pruned_loss=0.06343, over 4826.00 frames. ], tot_loss[loss=0.1859, simple_loss=0.2532, pruned_loss=0.05926, over 954023.11 frames. ], batch size: 30, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:11:38,383 INFO [finetune.py:976] (2/7) Epoch 15, batch 2050, loss[loss=0.2102, simple_loss=0.2823, pruned_loss=0.06905, over 4871.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.2496, pruned_loss=0.05773, over 955178.51 frames. 
], batch size: 31, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:11:42,512 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.084e+02 1.417e+02 1.727e+02 2.274e+02 4.171e+02, threshold=3.454e+02, percent-clipped=1.0 2023-03-26 18:12:24,920 INFO [finetune.py:976] (2/7) Epoch 15, batch 2100, loss[loss=0.2109, simple_loss=0.2891, pruned_loss=0.06634, over 4815.00 frames. ], tot_loss[loss=0.182, simple_loss=0.249, pruned_loss=0.05745, over 955459.09 frames. ], batch size: 51, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:12:25,017 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5011, 1.4414, 1.5577, 1.6891, 1.5377, 2.6967, 1.2518, 1.4123], device='cuda:2'), covar=tensor([0.1004, 0.2026, 0.1648, 0.0963, 0.1666, 0.0436, 0.1804, 0.2037], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0082, 0.0074, 0.0077, 0.0092, 0.0081, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 18:12:54,227 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=82326.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:13:02,490 INFO [finetune.py:976] (2/7) Epoch 15, batch 2150, loss[loss=0.1776, simple_loss=0.2439, pruned_loss=0.05562, over 4816.00 frames. ], tot_loss[loss=0.1846, simple_loss=0.2524, pruned_loss=0.05838, over 954892.06 frames. ], batch size: 38, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:13:06,128 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=82343.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:13:06,660 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.137e+02 1.626e+02 1.861e+02 2.291e+02 4.001e+02, threshold=3.721e+02, percent-clipped=2.0 2023-03-26 18:13:07,967 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=82346.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:13:16,746 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0 2023-03-26 18:13:21,303 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4818, 2.6801, 2.4864, 1.9024, 2.6270, 2.9403, 2.7896, 2.4093], device='cuda:2'), covar=tensor([0.0626, 0.0483, 0.0663, 0.0874, 0.0626, 0.0623, 0.0586, 0.0867], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0134, 0.0142, 0.0124, 0.0125, 0.0142, 0.0141, 0.0164], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:13:25,988 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=82374.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:13:35,326 INFO [finetune.py:976] (2/7) Epoch 15, batch 2200, loss[loss=0.2195, simple_loss=0.298, pruned_loss=0.07047, over 4804.00 frames. ], tot_loss[loss=0.1866, simple_loss=0.2551, pruned_loss=0.05901, over 956518.34 frames. 
], batch size: 41, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:13:48,059 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=82407.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:13:50,432 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=82411.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:14:02,327 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1676, 2.0997, 1.8661, 2.1519, 1.9939, 2.0204, 2.0080, 2.7960], device='cuda:2'), covar=tensor([0.3681, 0.5068, 0.3447, 0.4658, 0.4838, 0.2401, 0.4730, 0.1671], device='cuda:2'), in_proj_covar=tensor([0.0285, 0.0259, 0.0225, 0.0275, 0.0246, 0.0214, 0.0248, 0.0225], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:14:08,110 INFO [finetune.py:976] (2/7) Epoch 15, batch 2250, loss[loss=0.2186, simple_loss=0.2828, pruned_loss=0.07724, over 4824.00 frames. ], tot_loss[loss=0.1887, simple_loss=0.2567, pruned_loss=0.06035, over 954776.67 frames. ], batch size: 33, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:14:08,256 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0246, 1.9425, 1.6609, 1.8617, 1.7510, 1.7985, 1.8210, 2.5486], device='cuda:2'), covar=tensor([0.3529, 0.4335, 0.3290, 0.4193, 0.4311, 0.2362, 0.4164, 0.1616], device='cuda:2'), in_proj_covar=tensor([0.0285, 0.0259, 0.0225, 0.0275, 0.0246, 0.0214, 0.0248, 0.0225], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:14:12,179 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.061e+02 1.505e+02 1.711e+02 2.071e+02 3.892e+02, threshold=3.421e+02, percent-clipped=2.0 2023-03-26 18:14:38,256 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1461, 2.7743, 2.6150, 1.1614, 2.8338, 2.1450, 0.8739, 1.9062], device='cuda:2'), covar=tensor([0.2165, 0.1844, 0.1671, 0.3526, 0.1236, 0.1164, 0.3817, 0.1556], device='cuda:2'), in_proj_covar=tensor([0.0149, 0.0172, 0.0157, 0.0127, 0.0155, 0.0121, 0.0144, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 18:14:41,717 INFO [finetune.py:976] (2/7) Epoch 15, batch 2300, loss[loss=0.1947, simple_loss=0.2564, pruned_loss=0.06653, over 4870.00 frames. ], tot_loss[loss=0.1885, simple_loss=0.2567, pruned_loss=0.06016, over 955111.59 frames. ], batch size: 31, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:14:45,300 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=82493.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 18:14:45,975 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. limit=2.0 2023-03-26 18:15:17,445 INFO [finetune.py:976] (2/7) Epoch 15, batch 2350, loss[loss=0.1611, simple_loss=0.2289, pruned_loss=0.04664, over 4903.00 frames. ], tot_loss[loss=0.1862, simple_loss=0.2541, pruned_loss=0.05913, over 951841.51 frames. 
], batch size: 36, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:15:21,099 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.644e+02 1.984e+02 2.390e+02 4.799e+02, threshold=3.967e+02, percent-clipped=3.0 2023-03-26 18:15:28,708 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=82554.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 18:15:44,890 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=82569.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:16:00,849 INFO [finetune.py:976] (2/7) Epoch 15, batch 2400, loss[loss=0.195, simple_loss=0.2578, pruned_loss=0.06605, over 4828.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.2502, pruned_loss=0.05739, over 954251.84 frames. ], batch size: 39, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:16:29,666 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.86 vs. limit=5.0 2023-03-26 18:16:37,311 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=82630.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:16:42,547 INFO [finetune.py:976] (2/7) Epoch 15, batch 2450, loss[loss=0.2433, simple_loss=0.2962, pruned_loss=0.09519, over 4902.00 frames. ], tot_loss[loss=0.1813, simple_loss=0.2486, pruned_loss=0.05699, over 954409.97 frames. ], batch size: 32, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:16:45,674 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=82643.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:16:46,161 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.277e+01 1.596e+02 1.937e+02 2.264e+02 4.235e+02, threshold=3.875e+02, percent-clipped=1.0 2023-03-26 18:16:56,851 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8994, 1.6783, 1.5214, 1.6281, 2.1429, 2.0376, 1.7258, 1.4573], device='cuda:2'), covar=tensor([0.0241, 0.0300, 0.0544, 0.0304, 0.0186, 0.0367, 0.0351, 0.0403], device='cuda:2'), in_proj_covar=tensor([0.0094, 0.0109, 0.0142, 0.0113, 0.0100, 0.0107, 0.0097, 0.0109], device='cuda:2'), out_proj_covar=tensor([7.2822e-05, 8.4377e-05, 1.1262e-04, 8.7428e-05, 7.7991e-05, 7.9048e-05, 7.3282e-05, 8.3210e-05], device='cuda:2') 2023-03-26 18:17:18,143 INFO [finetune.py:976] (2/7) Epoch 15, batch 2500, loss[loss=0.2037, simple_loss=0.2745, pruned_loss=0.06645, over 4826.00 frames. ], tot_loss[loss=0.1837, simple_loss=0.2514, pruned_loss=0.05806, over 956643.34 frames. ], batch size: 38, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:17:20,059 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=82691.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:17:35,740 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=82702.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:17:46,343 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=82711.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:18:03,677 INFO [finetune.py:976] (2/7) Epoch 15, batch 2550, loss[loss=0.1778, simple_loss=0.2465, pruned_loss=0.05458, over 4831.00 frames. ], tot_loss[loss=0.1863, simple_loss=0.2553, pruned_loss=0.05863, over 956740.78 frames. 
], batch size: 33, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:18:04,850 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7177, 3.8780, 3.7640, 1.9675, 3.9276, 2.9116, 0.8765, 2.6636], device='cuda:2'), covar=tensor([0.2260, 0.1813, 0.1275, 0.2958, 0.0883, 0.0943, 0.4215, 0.1421], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0174, 0.0159, 0.0128, 0.0157, 0.0122, 0.0145, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 18:18:07,767 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.099e+02 1.574e+02 1.850e+02 2.269e+02 4.152e+02, threshold=3.700e+02, percent-clipped=3.0 2023-03-26 18:18:17,861 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=82759.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:18:33,529 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0334, 1.2124, 1.9777, 1.9101, 1.7757, 1.7153, 1.7930, 1.8488], device='cuda:2'), covar=tensor([0.3344, 0.3843, 0.3459, 0.3563, 0.4664, 0.3611, 0.4272, 0.3215], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0239, 0.0257, 0.0268, 0.0267, 0.0240, 0.0280, 0.0236], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:18:36,861 INFO [finetune.py:976] (2/7) Epoch 15, batch 2600, loss[loss=0.2207, simple_loss=0.2892, pruned_loss=0.07606, over 4822.00 frames. ], tot_loss[loss=0.1876, simple_loss=0.2567, pruned_loss=0.05926, over 955067.45 frames. ], batch size: 30, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:18:45,278 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7195, 1.2716, 0.8514, 1.6153, 1.9803, 1.3812, 1.5003, 1.5407], device='cuda:2'), covar=tensor([0.1472, 0.2088, 0.2036, 0.1247, 0.2025, 0.2088, 0.1429, 0.2010], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0096, 0.0112, 0.0093, 0.0121, 0.0095, 0.0100, 0.0090], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-26 18:19:10,656 INFO [finetune.py:976] (2/7) Epoch 15, batch 2650, loss[loss=0.2332, simple_loss=0.2924, pruned_loss=0.087, over 4739.00 frames. ], tot_loss[loss=0.1881, simple_loss=0.2565, pruned_loss=0.05986, over 954751.51 frames. ], batch size: 59, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:19:14,265 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.056e+02 1.579e+02 1.879e+02 2.251e+02 6.929e+02, threshold=3.759e+02, percent-clipped=2.0 2023-03-26 18:19:17,848 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=82849.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 18:19:43,179 INFO [finetune.py:976] (2/7) Epoch 15, batch 2700, loss[loss=0.1918, simple_loss=0.2538, pruned_loss=0.06492, over 4920.00 frames. ], tot_loss[loss=0.1864, simple_loss=0.2553, pruned_loss=0.05882, over 956207.00 frames. 
], batch size: 38, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:20:08,049 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=82925.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:20:12,218 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6200, 1.4960, 2.3780, 3.3379, 2.2808, 2.5014, 0.9521, 2.7676], device='cuda:2'), covar=tensor([0.1626, 0.1474, 0.1079, 0.0553, 0.0751, 0.1617, 0.1785, 0.0459], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0116, 0.0132, 0.0162, 0.0099, 0.0137, 0.0123, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 18:20:15,445 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.44 vs. limit=2.0 2023-03-26 18:20:16,388 INFO [finetune.py:976] (2/7) Epoch 15, batch 2750, loss[loss=0.1606, simple_loss=0.2285, pruned_loss=0.04631, over 4755.00 frames. ], tot_loss[loss=0.1844, simple_loss=0.2525, pruned_loss=0.05812, over 954827.79 frames. ], batch size: 27, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:20:20,500 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.092e+02 1.574e+02 1.758e+02 2.107e+02 4.076e+02, threshold=3.515e+02, percent-clipped=2.0 2023-03-26 18:20:33,223 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=82963.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:20:49,687 INFO [finetune.py:976] (2/7) Epoch 15, batch 2800, loss[loss=0.1887, simple_loss=0.2482, pruned_loss=0.06461, over 4931.00 frames. ], tot_loss[loss=0.1811, simple_loss=0.2488, pruned_loss=0.05667, over 954986.88 frames. ], batch size: 38, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:21:07,073 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=83002.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:21:21,927 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=83024.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:21:37,765 INFO [finetune.py:976] (2/7) Epoch 15, batch 2850, loss[loss=0.2169, simple_loss=0.2878, pruned_loss=0.07293, over 4841.00 frames. ], tot_loss[loss=0.1809, simple_loss=0.248, pruned_loss=0.05686, over 955570.73 frames. 
], batch size: 49, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:21:41,397 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.061e+02 1.604e+02 1.866e+02 2.227e+02 4.125e+02, threshold=3.733e+02, percent-clipped=3.0 2023-03-26 18:21:42,116 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5343, 1.4607, 2.2693, 3.3216, 2.2109, 2.3611, 1.1485, 2.8216], device='cuda:2'), covar=tensor([0.1739, 0.1555, 0.1192, 0.0645, 0.0810, 0.1531, 0.1794, 0.0477], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0117, 0.0132, 0.0164, 0.0100, 0.0138, 0.0124, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 18:21:48,542 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=83050.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:21:54,549 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2122, 1.2636, 1.5962, 1.0113, 1.1820, 1.4112, 1.2728, 1.5924], device='cuda:2'), covar=tensor([0.1313, 0.2022, 0.1171, 0.1480, 0.1008, 0.1303, 0.2635, 0.0908], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0202, 0.0192, 0.0189, 0.0174, 0.0212, 0.0216, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:22:15,027 INFO [finetune.py:976] (2/7) Epoch 15, batch 2900, loss[loss=0.1589, simple_loss=0.2416, pruned_loss=0.03816, over 4810.00 frames. ], tot_loss[loss=0.1841, simple_loss=0.2521, pruned_loss=0.05802, over 955359.30 frames. ], batch size: 39, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:22:57,705 INFO [finetune.py:976] (2/7) Epoch 15, batch 2950, loss[loss=0.1723, simple_loss=0.2466, pruned_loss=0.04905, over 4861.00 frames. ], tot_loss[loss=0.1882, simple_loss=0.2568, pruned_loss=0.05976, over 953863.80 frames. ], batch size: 31, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:23:01,328 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.293e+02 1.748e+02 2.030e+02 2.368e+02 3.585e+02, threshold=4.059e+02, percent-clipped=0.0 2023-03-26 18:23:08,908 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=83149.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 18:23:37,596 INFO [finetune.py:976] (2/7) Epoch 15, batch 3000, loss[loss=0.2047, simple_loss=0.2845, pruned_loss=0.06241, over 4815.00 frames. ], tot_loss[loss=0.1901, simple_loss=0.2587, pruned_loss=0.06073, over 954052.20 frames. ], batch size: 39, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:23:37,596 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 18:23:48,368 INFO [finetune.py:1010] (2/7) Epoch 15, validation: loss=0.1564, simple_loss=0.2269, pruned_loss=0.04296, over 2265189.00 frames. 2023-03-26 18:23:48,369 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB 2023-03-26 18:23:51,554 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=2.02 vs. limit=2.0
2023-03-26 18:23:59,729 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=83196.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:24:00,272 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=83197.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 18:24:00,912 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5267, 1.4142, 1.4170, 1.4744, 1.1036, 3.1016, 1.2860, 1.6331], device='cuda:2'), covar=tensor([0.3391, 0.2583, 0.2154, 0.2346, 0.1832, 0.0218, 0.2907, 0.1311], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0116, 0.0120, 0.0123, 0.0114, 0.0097, 0.0097, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 18:24:21,534 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=83225.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:24:23,383 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0005, 2.0010, 1.7413, 2.0829, 2.4989, 2.0674, 1.8759, 1.5337], device='cuda:2'), covar=tensor([0.2213, 0.1924, 0.1876, 0.1537, 0.1862, 0.1132, 0.2190, 0.1880], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0210, 0.0212, 0.0194, 0.0244, 0.0186, 0.0218, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:24:30,316 INFO [finetune.py:976] (2/7) Epoch 15, batch 3050, loss[loss=0.2065, simple_loss=0.2867, pruned_loss=0.06313, over 4770.00 frames. ], tot_loss[loss=0.1898, simple_loss=0.259, pruned_loss=0.06032, over 955335.04 frames. ], batch size: 51, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:24:34,916 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.082e+02 1.542e+02 1.763e+02 2.135e+02 3.801e+02, threshold=3.526e+02, percent-clipped=0.0 2023-03-26 18:24:44,056 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=83257.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:24:51,841 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2929, 2.1319, 1.7430, 2.1924, 2.1185, 1.8825, 2.4389, 2.2711], device='cuda:2'), covar=tensor([0.1174, 0.1954, 0.2951, 0.2548, 0.2661, 0.1611, 0.3496, 0.1617], device='cuda:2'), in_proj_covar=tensor([0.0181, 0.0187, 0.0235, 0.0254, 0.0245, 0.0201, 0.0213, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:24:54,193 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=83273.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:25:01,276 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=83284.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:25:04,100 INFO [finetune.py:976] (2/7) Epoch 15, batch 3100, loss[loss=0.1884, simple_loss=0.2505, pruned_loss=0.06316, over 4842.00 frames. ], tot_loss[loss=0.1869, simple_loss=0.2559, pruned_loss=0.05897, over 954521.96 frames. ], batch size: 47, lr: 3.49e-03, grad_scale: 32.0
2023-03-26 18:25:21,164 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.7352, 4.1258, 4.3043, 4.5577, 4.4987, 4.2335, 4.8336, 1.5422], device='cuda:2'), covar=tensor([0.0596, 0.0717, 0.0709, 0.0712, 0.1017, 0.1296, 0.0507, 0.5311], device='cuda:2'), in_proj_covar=tensor([0.0350, 0.0244, 0.0276, 0.0291, 0.0333, 0.0282, 0.0301, 0.0297], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:25:24,780 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=83319.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:25:37,274 INFO [finetune.py:976] (2/7) Epoch 15, batch 3150, loss[loss=0.1667, simple_loss=0.2444, pruned_loss=0.04451, over 4744.00 frames. ], tot_loss[loss=0.1839, simple_loss=0.2523, pruned_loss=0.0578, over 955125.50 frames. ], batch size: 27, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:25:41,382 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.111e+02 1.524e+02 1.821e+02 2.258e+02 3.585e+02, threshold=3.643e+02, percent-clipped=2.0 2023-03-26 18:25:42,590 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=83345.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:25:45,567 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.47 vs. limit=5.0 2023-03-26 18:26:12,519 INFO [finetune.py:976] (2/7) Epoch 15, batch 3200, loss[loss=0.1696, simple_loss=0.2454, pruned_loss=0.04691, over 4935.00 frames. ], tot_loss[loss=0.1812, simple_loss=0.249, pruned_loss=0.05673, over 954565.49 frames. ], batch size: 38, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:26:49,452 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4472, 1.9282, 2.2384, 2.2613, 2.0666, 2.0867, 2.2049, 2.1794], device='cuda:2'), covar=tensor([0.4334, 0.4190, 0.4109, 0.4318, 0.5732, 0.4109, 0.5347, 0.3957], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0238, 0.0256, 0.0268, 0.0266, 0.0239, 0.0280, 0.0235], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:26:55,876 INFO [finetune.py:976] (2/7) Epoch 15, batch 3250, loss[loss=0.1568, simple_loss=0.2309, pruned_loss=0.04133, over 4768.00 frames. ], tot_loss[loss=0.1832, simple_loss=0.2506, pruned_loss=0.05794, over 953554.37 frames. ], batch size: 26, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:27:00,084 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.648e+01 1.539e+02 1.854e+02 2.232e+02 3.646e+02, threshold=3.708e+02, percent-clipped=1.0 2023-03-26 18:27:00,250 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0704, 1.8769, 1.6157, 1.6761, 1.7952, 1.7649, 1.8341, 2.4965], device='cuda:2'), covar=tensor([0.3825, 0.3992, 0.3177, 0.3847, 0.4003, 0.2383, 0.3712, 0.1712], device='cuda:2'), in_proj_covar=tensor([0.0283, 0.0257, 0.0225, 0.0273, 0.0245, 0.0213, 0.0247, 0.0224], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:27:29,590 INFO [finetune.py:976] (2/7) Epoch 15, batch 3300, loss[loss=0.1887, simple_loss=0.2601, pruned_loss=0.05863, over 4799.00 frames. ], tot_loss[loss=0.1868, simple_loss=0.2543, pruned_loss=0.05962, over 953419.30 frames. ], batch size: 45, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:27:31,028 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.71 vs. limit=2.0
2023-03-26 18:27:47,189 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2642, 2.1843, 2.3544, 1.0300, 2.6271, 2.8517, 2.5048, 2.1198], device='cuda:2'), covar=tensor([0.0879, 0.0748, 0.0539, 0.0751, 0.0612, 0.0578, 0.0468, 0.0744], device='cuda:2'), in_proj_covar=tensor([0.0124, 0.0150, 0.0123, 0.0128, 0.0129, 0.0126, 0.0140, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.1802e-05, 1.0954e-04, 8.8521e-05, 9.1668e-05, 9.1424e-05, 9.0796e-05, 1.0103e-04, 1.0565e-04], device='cuda:2') 2023-03-26 18:27:54,232 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5274, 1.3107, 1.0865, 1.3577, 1.7912, 1.6891, 1.4554, 1.2277], device='cuda:2'), covar=tensor([0.0324, 0.0417, 0.0854, 0.0361, 0.0228, 0.0532, 0.0373, 0.0432], device='cuda:2'), in_proj_covar=tensor([0.0094, 0.0109, 0.0143, 0.0113, 0.0100, 0.0107, 0.0098, 0.0110], device='cuda:2'), out_proj_covar=tensor([7.3203e-05, 8.4830e-05, 1.1334e-04, 8.7792e-05, 7.8440e-05, 7.9374e-05, 7.3825e-05, 8.3567e-05], device='cuda:2') 2023-03-26 18:28:07,468 INFO [finetune.py:976] (2/7) Epoch 15, batch 3350, loss[loss=0.1565, simple_loss=0.2292, pruned_loss=0.04183, over 4836.00 frames. ], tot_loss[loss=0.1881, simple_loss=0.2562, pruned_loss=0.06004, over 954244.49 frames. ], batch size: 30, lr: 3.49e-03, grad_scale: 64.0 2023-03-26 18:28:14,630 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.071e+02 1.792e+02 2.041e+02 2.510e+02 5.102e+02, threshold=4.082e+02, percent-clipped=3.0 2023-03-26 18:28:14,785 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6259, 1.5273, 1.3945, 1.7724, 1.8475, 1.6902, 1.2745, 1.3424], device='cuda:2'), covar=tensor([0.2403, 0.2317, 0.2100, 0.1681, 0.1972, 0.1333, 0.2693, 0.2088], device='cuda:2'), in_proj_covar=tensor([0.0241, 0.0208, 0.0210, 0.0192, 0.0243, 0.0185, 0.0216, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:28:21,871 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=83552.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:28:54,246 INFO [finetune.py:976] (2/7) Epoch 15, batch 3400, loss[loss=0.1919, simple_loss=0.2592, pruned_loss=0.0623, over 4858.00 frames. ], tot_loss[loss=0.1889, simple_loss=0.2571, pruned_loss=0.06032, over 956199.17 frames. ], batch size: 31, lr: 3.49e-03, grad_scale: 64.0 2023-03-26 18:29:03,974 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5976, 1.5610, 1.3468, 1.7542, 1.9260, 1.6944, 1.3044, 1.3345], device='cuda:2'), covar=tensor([0.2270, 0.2015, 0.1935, 0.1556, 0.1702, 0.1239, 0.2333, 0.1889], device='cuda:2'), in_proj_covar=tensor([0.0241, 0.0208, 0.0210, 0.0192, 0.0243, 0.0185, 0.0216, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:29:24,883 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=83619.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:29:37,312 INFO [finetune.py:976] (2/7) Epoch 15, batch 3450, loss[loss=0.1571, simple_loss=0.2257, pruned_loss=0.04423, over 4877.00 frames. ], tot_loss[loss=0.1879, simple_loss=0.2564, pruned_loss=0.05975, over 954194.45 frames.
], batch size: 32, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:29:39,048 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=83640.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:29:41,997 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.066e+02 1.651e+02 1.928e+02 2.236e+02 3.717e+02, threshold=3.855e+02, percent-clipped=0.0 2023-03-26 18:29:54,586 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0746, 1.9692, 1.6567, 1.8261, 2.0449, 1.7468, 2.1482, 2.0092], device='cuda:2'), covar=tensor([0.1217, 0.1849, 0.2941, 0.2273, 0.2357, 0.1617, 0.2792, 0.1669], device='cuda:2'), in_proj_covar=tensor([0.0180, 0.0186, 0.0233, 0.0252, 0.0243, 0.0200, 0.0211, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:29:57,381 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=83667.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:30:11,002 INFO [finetune.py:976] (2/7) Epoch 15, batch 3500, loss[loss=0.1941, simple_loss=0.2545, pruned_loss=0.06691, over 4823.00 frames. ], tot_loss[loss=0.1872, simple_loss=0.2548, pruned_loss=0.0598, over 953042.45 frames. ], batch size: 38, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:30:32,396 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.50 vs. limit=2.0 2023-03-26 18:30:44,676 INFO [finetune.py:976] (2/7) Epoch 15, batch 3550, loss[loss=0.1461, simple_loss=0.2284, pruned_loss=0.03189, over 4830.00 frames. ], tot_loss[loss=0.1841, simple_loss=0.2512, pruned_loss=0.05852, over 954090.47 frames. ], batch size: 41, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:30:49,417 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.222e+01 1.573e+02 1.880e+02 2.102e+02 4.250e+02, threshold=3.760e+02, percent-clipped=2.0 2023-03-26 18:30:59,115 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=83760.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:31:18,472 INFO [finetune.py:976] (2/7) Epoch 15, batch 3600, loss[loss=0.1627, simple_loss=0.2406, pruned_loss=0.0424, over 4761.00 frames. ], tot_loss[loss=0.1826, simple_loss=0.2493, pruned_loss=0.05799, over 956163.81 frames. ], batch size: 28, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:31:41,546 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3386, 2.0190, 2.8231, 1.6320, 2.4836, 2.6638, 1.9520, 2.9003], device='cuda:2'), covar=tensor([0.1302, 0.1902, 0.1347, 0.2182, 0.0801, 0.1260, 0.2496, 0.0719], device='cuda:2'), in_proj_covar=tensor([0.0193, 0.0204, 0.0192, 0.0190, 0.0176, 0.0212, 0.0217, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:31:48,179 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=83821.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:31:59,980 INFO [finetune.py:976] (2/7) Epoch 15, batch 3650, loss[loss=0.1356, simple_loss=0.2102, pruned_loss=0.03046, over 4773.00 frames. ], tot_loss[loss=0.1838, simple_loss=0.2511, pruned_loss=0.05828, over 956078.17 frames. 
], batch size: 28, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:32:04,776 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.180e+02 1.603e+02 1.941e+02 2.306e+02 4.863e+02, threshold=3.882e+02, percent-clipped=1.0 2023-03-26 18:32:09,567 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=83852.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:32:22,793 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0196, 1.9346, 1.6085, 1.8888, 1.8185, 1.8164, 1.8156, 2.5371], device='cuda:2'), covar=tensor([0.3954, 0.4494, 0.3468, 0.4287, 0.4166, 0.2363, 0.4310, 0.1718], device='cuda:2'), in_proj_covar=tensor([0.0284, 0.0258, 0.0225, 0.0273, 0.0245, 0.0214, 0.0247, 0.0225], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:32:33,847 INFO [finetune.py:976] (2/7) Epoch 15, batch 3700, loss[loss=0.195, simple_loss=0.2683, pruned_loss=0.06087, over 4936.00 frames. ], tot_loss[loss=0.1869, simple_loss=0.2555, pruned_loss=0.05919, over 955973.90 frames. ], batch size: 33, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:32:41,698 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=83900.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:32:41,901 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0 2023-03-26 18:33:07,588 INFO [finetune.py:976] (2/7) Epoch 15, batch 3750, loss[loss=0.2198, simple_loss=0.2793, pruned_loss=0.08021, over 4891.00 frames. ], tot_loss[loss=0.1882, simple_loss=0.2569, pruned_loss=0.05973, over 955095.40 frames. ], batch size: 35, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:33:08,903 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=83940.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:33:11,799 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.005e+02 1.596e+02 1.977e+02 2.275e+02 5.079e+02, threshold=3.955e+02, percent-clipped=1.0 2023-03-26 18:33:14,854 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=83949.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:33:53,423 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=83985.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:33:55,619 INFO [finetune.py:976] (2/7) Epoch 15, batch 3800, loss[loss=0.1561, simple_loss=0.2258, pruned_loss=0.0432, over 4727.00 frames. ], tot_loss[loss=0.1888, simple_loss=0.2577, pruned_loss=0.05992, over 953112.70 frames. 
], batch size: 54, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:33:55,677 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=83988.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:34:11,217 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84010.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 18:34:20,503 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1528, 1.8771, 1.8608, 2.0301, 1.7996, 1.9152, 1.8468, 2.5490], device='cuda:2'), covar=tensor([0.3397, 0.4557, 0.3132, 0.3943, 0.3968, 0.2410, 0.4302, 0.1612], device='cuda:2'), in_proj_covar=tensor([0.0285, 0.0258, 0.0225, 0.0274, 0.0246, 0.0214, 0.0248, 0.0225], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:34:32,158 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5417, 2.8903, 2.7268, 1.9758, 2.8131, 2.9758, 2.8696, 2.5089], device='cuda:2'), covar=tensor([0.0669, 0.0595, 0.0767, 0.0952, 0.0590, 0.0741, 0.0695, 0.1032], device='cuda:2'), in_proj_covar=tensor([0.0136, 0.0137, 0.0144, 0.0125, 0.0126, 0.0143, 0.0144, 0.0166], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:34:36,933 INFO [finetune.py:976] (2/7) Epoch 15, batch 3850, loss[loss=0.1882, simple_loss=0.2628, pruned_loss=0.05685, over 4850.00 frames. ], tot_loss[loss=0.1881, simple_loss=0.2566, pruned_loss=0.05975, over 954033.42 frames. ], batch size: 44, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:34:38,912 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.76 vs. limit=5.0 2023-03-26 18:34:41,716 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.557e+01 1.496e+02 1.862e+02 2.338e+02 3.560e+02, threshold=3.724e+02, percent-clipped=0.0 2023-03-26 18:34:42,464 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84046.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:35:10,033 INFO [finetune.py:976] (2/7) Epoch 15, batch 3900, loss[loss=0.1682, simple_loss=0.2333, pruned_loss=0.05153, over 4751.00 frames. ], tot_loss[loss=0.1852, simple_loss=0.2533, pruned_loss=0.05853, over 953353.50 frames. ], batch size: 27, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:35:16,538 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84097.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:35:28,912 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84116.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:35:34,443 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2810, 2.1125, 1.8243, 2.0085, 2.1897, 1.9758, 2.4778, 2.2486], device='cuda:2'), covar=tensor([0.1400, 0.2267, 0.3282, 0.2766, 0.2790, 0.1783, 0.2738, 0.1874], device='cuda:2'), in_proj_covar=tensor([0.0181, 0.0188, 0.0235, 0.0255, 0.0246, 0.0201, 0.0212, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:35:43,581 INFO [finetune.py:976] (2/7) Epoch 15, batch 3950, loss[loss=0.1854, simple_loss=0.2457, pruned_loss=0.06252, over 4822.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.2502, pruned_loss=0.05744, over 954388.20 frames. 
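The zipformer.py:1188 lines track each encoder stack's warmup window (warmup_begin/warmup_end, in batches) and occasionally report num_to_drop=1 with a randomly chosen layer, i.e. stochastic layer skipping for regularization, as in the batch_count=84010.0 record above. One plausible selection rule, sketched with an assumed drop probability rather than the recipe's actual schedule:

```python
import random

def pick_layers_to_drop(num_layers: int, batch_count: float,
                        warmup_end: float, drop_prob: float = 0.05) -> set:
    """Assumed sketch of the num_to_drop / layers_to_drop choice logged by
    zipformer.py: usually keep every layer, occasionally skip one at random
    once the stack is past its warmup window."""
    if batch_count < warmup_end or random.random() >= drop_prob:
        return set()                        # the common num_to_drop=0 case
    return {random.randrange(num_layers)}   # e.g. layers_to_drop={0}
```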
], batch size: 33, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:35:47,770 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.097e+02 1.469e+02 1.885e+02 2.278e+02 4.120e+02, threshold=3.770e+02, percent-clipped=1.0 2023-03-26 18:35:57,215 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84158.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:36:16,808 INFO [finetune.py:976] (2/7) Epoch 15, batch 4000, loss[loss=0.2138, simple_loss=0.2763, pruned_loss=0.07564, over 4903.00 frames. ], tot_loss[loss=0.1834, simple_loss=0.2502, pruned_loss=0.05836, over 954622.60 frames. ], batch size: 43, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:36:18,656 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8878, 1.2831, 1.8857, 1.7947, 1.5958, 1.5963, 1.7354, 1.7047], device='cuda:2'), covar=tensor([0.3956, 0.4387, 0.3326, 0.4040, 0.5011, 0.3824, 0.4806, 0.3362], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0238, 0.0256, 0.0267, 0.0266, 0.0239, 0.0279, 0.0235], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:36:24,507 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84199.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:36:47,895 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.99 vs. limit=2.0 2023-03-26 18:36:57,613 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84236.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:36:59,238 INFO [finetune.py:976] (2/7) Epoch 15, batch 4050, loss[loss=0.202, simple_loss=0.2753, pruned_loss=0.06432, over 4901.00 frames. ], tot_loss[loss=0.1864, simple_loss=0.2538, pruned_loss=0.05953, over 954116.92 frames. ], batch size: 43, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:37:07,827 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.087e+02 1.586e+02 1.909e+02 2.268e+02 5.729e+02, threshold=3.818e+02, percent-clipped=2.0 2023-03-26 18:37:21,553 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84260.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:37:23,270 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9512, 1.8837, 1.8968, 1.8799, 1.4718, 3.6605, 1.6183, 2.0923], device='cuda:2'), covar=tensor([0.2949, 0.2299, 0.1871, 0.2182, 0.1650, 0.0199, 0.2309, 0.1141], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0123, 0.0114, 0.0097, 0.0096, 0.0096], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 18:37:39,953 INFO [finetune.py:976] (2/7) Epoch 15, batch 4100, loss[loss=0.2112, simple_loss=0.2713, pruned_loss=0.07558, over 4836.00 frames. ], tot_loss[loss=0.187, simple_loss=0.2547, pruned_loss=0.05964, over 953226.51 frames. ], batch size: 30, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:37:46,445 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84297.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:37:52,176 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84305.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 18:38:09,591 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. 
limit=2.0 2023-03-26 18:38:11,863 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84336.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:38:13,429 INFO [finetune.py:976] (2/7) Epoch 15, batch 4150, loss[loss=0.1814, simple_loss=0.2521, pruned_loss=0.05532, over 4906.00 frames. ], tot_loss[loss=0.1874, simple_loss=0.2555, pruned_loss=0.05967, over 954077.61 frames. ], batch size: 46, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:38:15,329 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84341.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:38:18,112 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.082e+02 1.618e+02 1.997e+02 2.307e+02 7.274e+02, threshold=3.993e+02, percent-clipped=3.0 2023-03-26 18:38:49,843 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9352, 1.4272, 2.0171, 1.9301, 1.7254, 1.7326, 1.8603, 1.8586], device='cuda:2'), covar=tensor([0.3917, 0.4283, 0.3265, 0.3611, 0.4623, 0.3661, 0.4863, 0.3099], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0238, 0.0256, 0.0268, 0.0267, 0.0240, 0.0279, 0.0235], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:38:50,307 INFO [finetune.py:976] (2/7) Epoch 15, batch 4200, loss[loss=0.1679, simple_loss=0.2389, pruned_loss=0.04847, over 4900.00 frames. ], tot_loss[loss=0.1875, simple_loss=0.2557, pruned_loss=0.05963, over 954202.12 frames. ], batch size: 37, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:39:04,814 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84397.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:39:13,827 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-26 18:39:17,847 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=84416.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:39:31,951 INFO [finetune.py:976] (2/7) Epoch 15, batch 4250, loss[loss=0.1573, simple_loss=0.2441, pruned_loss=0.03522, over 4787.00 frames. ], tot_loss[loss=0.187, simple_loss=0.2549, pruned_loss=0.05952, over 954930.00 frames. ], batch size: 29, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:39:36,665 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.324e+01 1.559e+02 1.825e+02 2.300e+02 4.289e+02, threshold=3.650e+02, percent-clipped=1.0 2023-03-26 18:39:37,494 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.17 vs. limit=5.0 2023-03-26 18:39:47,480 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84453.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:39:58,789 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=84464.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:40:14,294 INFO [finetune.py:976] (2/7) Epoch 15, batch 4300, loss[loss=0.1604, simple_loss=0.2253, pruned_loss=0.04773, over 4823.00 frames. ], tot_loss[loss=0.1837, simple_loss=0.2516, pruned_loss=0.05794, over 956348.45 frames. 
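The scaling.py Whitening lines compare a per-group whiteness statistic against a limit (2.0 for the grouped 96- and 192-channel checks, 5.0 for the ungrouped 384-channel one, as in the records above). A metric with the logged behavior, equal to 1.0 exactly when each group's feature covariance is a multiple of the identity and growing as the eigenvalue spread widens, is d * tr(C^2) / tr(C)^2; this sketch assumes that form rather than quoting scaling.py:

```python
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    """Assumed form of the 'Whitening' metric above: 1.0 when each channel
    group's covariance is proportional to the identity, larger otherwise.
    x has shape (num_frames, num_channels)."""
    num_frames, num_channels = x.shape
    d = num_channels // num_groups
    x = x.reshape(num_frames, num_groups, d).transpose(0, 1)  # (groups, frames, d)
    covar = x.transpose(1, 2) @ x / num_frames                # (groups, d, d)
    trace = covar.diagonal(dim1=1, dim2=2).sum(-1)            # tr(C) per group
    sq_sum = (covar ** 2).sum(dim=(1, 2))                     # tr(C^2) per group
    # Cauchy-Schwarz gives d * tr(C^2) >= tr(C)^2, so the metric is >= 1;
    # a penalty would apply only when it exceeds the logged limit.
    return (sq_sum * d / trace ** 2).mean().item()
```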
], batch size: 30, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:40:34,485 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8383, 1.1994, 1.8968, 1.7482, 1.5740, 1.5044, 1.7108, 1.7136], device='cuda:2'), covar=tensor([0.3388, 0.3580, 0.2874, 0.3367, 0.4027, 0.3378, 0.3750, 0.2796], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0237, 0.0256, 0.0268, 0.0267, 0.0240, 0.0279, 0.0236], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:40:39,091 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84525.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:40:47,844 INFO [finetune.py:976] (2/7) Epoch 15, batch 4350, loss[loss=0.1668, simple_loss=0.2241, pruned_loss=0.05476, over 4734.00 frames. ], tot_loss[loss=0.1809, simple_loss=0.248, pruned_loss=0.05693, over 956630.94 frames. ], batch size: 23, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:40:52,223 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.160e+02 1.502e+02 1.820e+02 2.196e+02 3.984e+02, threshold=3.641e+02, percent-clipped=2.0 2023-03-26 18:40:58,847 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84555.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:41:19,639 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84586.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:41:21,100 INFO [finetune.py:976] (2/7) Epoch 15, batch 4400, loss[loss=0.1803, simple_loss=0.2366, pruned_loss=0.06198, over 4758.00 frames. ], tot_loss[loss=0.181, simple_loss=0.2483, pruned_loss=0.05687, over 956755.90 frames. ], batch size: 26, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:41:24,147 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.5634, 3.9694, 4.1640, 4.3847, 4.3377, 4.0040, 4.6281, 1.4253], device='cuda:2'), covar=tensor([0.0697, 0.0850, 0.0757, 0.1032, 0.1058, 0.1399, 0.0555, 0.5589], device='cuda:2'), in_proj_covar=tensor([0.0349, 0.0243, 0.0275, 0.0291, 0.0332, 0.0283, 0.0298, 0.0297], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:41:24,149 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84592.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:41:32,779 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=84605.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:41:32,967 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0 2023-03-26 18:41:51,202 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5491, 1.5069, 1.9985, 3.2430, 2.1151, 2.3093, 0.9802, 2.6986], device='cuda:2'), covar=tensor([0.1897, 0.1469, 0.1319, 0.0597, 0.0879, 0.1335, 0.1925, 0.0497], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0116, 0.0133, 0.0164, 0.0100, 0.0137, 0.0124, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 18:41:51,851 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84633.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:41:54,826 INFO [finetune.py:976] (2/7) Epoch 15, batch 4450, loss[loss=0.1999, simple_loss=0.2763, pruned_loss=0.06181, over 4827.00 frames. ], tot_loss[loss=0.1844, simple_loss=0.2527, pruned_loss=0.05811, over 956499.86 frames. 
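Each tot_loss[...] is a frame-weighted average of recent batches, and the fractional frame totals (e.g. "over 956630.94 frames" above) suggest an exponentially forgetting accumulator rather than a hard window. A minimal sketch under that assumption; the decay constant here is invented:

```python
class RunningFrameWeightedLoss:
    """Sketch of the tot_loss[...] aggregation: an exponentially forgetting,
    frame-weighted loss average. The 0.999 decay is an assumption."""
    def __init__(self, decay: float = 0.999):
        self.decay, self.loss_sum, self.frames = decay, 0.0, 0.0

    def update(self, batch_loss: float, batch_frames: float) -> float:
        self.loss_sum = self.decay * self.loss_sum + batch_loss * batch_frames
        self.frames = self.decay * self.frames + batch_frames
        return self.loss_sum / self.frames  # the value printed as tot_loss
```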
], batch size: 40, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:41:57,753 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=84641.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:42:01,981 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.076e+02 1.595e+02 1.952e+02 2.292e+02 4.719e+02, threshold=3.904e+02, percent-clipped=1.0 2023-03-26 18:42:07,100 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=84653.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:42:23,813 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1798, 1.8341, 1.9011, 0.9087, 2.1356, 2.3304, 2.0104, 1.7556], device='cuda:2'), covar=tensor([0.1031, 0.0896, 0.0621, 0.0751, 0.0545, 0.0765, 0.0504, 0.0823], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0152, 0.0124, 0.0130, 0.0132, 0.0127, 0.0143, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.2524e-05, 1.1082e-04, 8.9295e-05, 9.2835e-05, 9.3423e-05, 9.1523e-05, 1.0294e-04, 1.0671e-04], device='cuda:2') 2023-03-26 18:42:46,431 INFO [finetune.py:976] (2/7) Epoch 15, batch 4500, loss[loss=0.1953, simple_loss=0.2661, pruned_loss=0.06224, over 4910.00 frames. ], tot_loss[loss=0.1867, simple_loss=0.2549, pruned_loss=0.05927, over 955626.96 frames. ], batch size: 36, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:42:47,111 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=84689.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:42:49,431 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84692.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:42:50,727 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84694.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:43:00,972 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. limit=2.0 2023-03-26 18:43:20,117 INFO [finetune.py:976] (2/7) Epoch 15, batch 4550, loss[loss=0.1914, simple_loss=0.2729, pruned_loss=0.05495, over 4900.00 frames. ], tot_loss[loss=0.1888, simple_loss=0.2568, pruned_loss=0.06035, over 954788.96 frames. ], batch size: 37, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:43:25,286 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.592e+02 2.005e+02 2.406e+02 4.528e+02, threshold=4.009e+02, percent-clipped=3.0 2023-03-26 18:43:30,260 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=84753.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:43:42,543 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6001, 1.4771, 1.4118, 1.5108, 1.1360, 3.4187, 1.3263, 1.7620], device='cuda:2'), covar=tensor([0.3362, 0.2595, 0.2225, 0.2461, 0.1847, 0.0189, 0.2707, 0.1398], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0124, 0.0115, 0.0097, 0.0097, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 18:43:46,350 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.77 vs. limit=5.0 2023-03-26 18:43:53,702 INFO [finetune.py:976] (2/7) Epoch 15, batch 4600, loss[loss=0.1504, simple_loss=0.218, pruned_loss=0.04141, over 4806.00 frames. ], tot_loss[loss=0.1884, simple_loss=0.2565, pruned_loss=0.06016, over 955437.51 frames. 
], batch size: 25, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:44:06,844 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84800.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:44:07,394 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=84801.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:44:25,141 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.2243, 1.2909, 1.3667, 0.6530, 1.1944, 1.5127, 1.5963, 1.2318], device='cuda:2'), covar=tensor([0.0749, 0.0508, 0.0420, 0.0459, 0.0424, 0.0505, 0.0261, 0.0622], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0152, 0.0124, 0.0130, 0.0131, 0.0127, 0.0143, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.2556e-05, 1.1061e-04, 8.9163e-05, 9.2784e-05, 9.2856e-05, 9.1374e-05, 1.0286e-04, 1.0672e-04], device='cuda:2') 2023-03-26 18:44:35,145 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7576, 1.6576, 2.0740, 1.3419, 1.7808, 1.9955, 1.6748, 2.1480], device='cuda:2'), covar=tensor([0.1060, 0.1968, 0.1271, 0.1516, 0.0848, 0.1143, 0.2760, 0.0860], device='cuda:2'), in_proj_covar=tensor([0.0194, 0.0205, 0.0194, 0.0191, 0.0177, 0.0214, 0.0219, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:44:36,214 INFO [finetune.py:976] (2/7) Epoch 15, batch 4650, loss[loss=0.2009, simple_loss=0.2713, pruned_loss=0.06527, over 4810.00 frames. ], tot_loss[loss=0.1864, simple_loss=0.2539, pruned_loss=0.05949, over 955678.68 frames. ], batch size: 39, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:44:40,389 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.176e+02 1.584e+02 1.933e+02 2.372e+02 3.946e+02, threshold=3.865e+02, percent-clipped=0.0 2023-03-26 18:44:47,570 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=84855.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:44:56,243 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84861.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:45:17,973 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84881.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:45:22,918 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8179, 1.7185, 1.6021, 1.7464, 1.2248, 4.3990, 1.6113, 1.9570], device='cuda:2'), covar=tensor([0.3517, 0.2482, 0.2148, 0.2437, 0.1820, 0.0110, 0.2376, 0.1287], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0116, 0.0120, 0.0124, 0.0115, 0.0097, 0.0097, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 18:45:25,445 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.44 vs. limit=2.0 2023-03-26 18:45:26,374 INFO [finetune.py:976] (2/7) Epoch 15, batch 4700, loss[loss=0.1293, simple_loss=0.1965, pruned_loss=0.03107, over 4802.00 frames. ], tot_loss[loss=0.1843, simple_loss=0.2513, pruned_loss=0.05867, over 954699.40 frames. 
], batch size: 25, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:45:29,346 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=84892.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:45:36,038 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=84903.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:45:39,616 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.76 vs. limit=2.0 2023-03-26 18:45:46,238 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5568, 1.3739, 1.2933, 1.5346, 1.6359, 1.4810, 0.9582, 1.3315], device='cuda:2'), covar=tensor([0.1990, 0.1953, 0.1796, 0.1530, 0.1591, 0.1213, 0.2690, 0.1820], device='cuda:2'), in_proj_covar=tensor([0.0239, 0.0206, 0.0209, 0.0191, 0.0240, 0.0184, 0.0214, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:45:59,759 INFO [finetune.py:976] (2/7) Epoch 15, batch 4750, loss[loss=0.1643, simple_loss=0.2317, pruned_loss=0.04848, over 4821.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.2494, pruned_loss=0.05776, over 954634.56 frames. ], batch size: 51, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:46:01,518 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=84940.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:46:04,956 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.000e+02 1.651e+02 1.892e+02 2.436e+02 4.596e+02, threshold=3.784e+02, percent-clipped=1.0 2023-03-26 18:46:08,166 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8073, 1.7399, 1.8535, 1.1907, 1.9014, 1.8682, 1.8507, 1.5000], device='cuda:2'), covar=tensor([0.0562, 0.0652, 0.0647, 0.0862, 0.0635, 0.0701, 0.0617, 0.1247], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0135, 0.0142, 0.0123, 0.0124, 0.0141, 0.0143, 0.0164], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:46:08,764 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84951.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:46:33,699 INFO [finetune.py:976] (2/7) Epoch 15, batch 4800, loss[loss=0.2121, simple_loss=0.2833, pruned_loss=0.0704, over 4915.00 frames. ], tot_loss[loss=0.1851, simple_loss=0.2521, pruned_loss=0.05907, over 951936.61 frames. ], batch size: 36, lr: 3.47e-03, grad_scale: 32.0 2023-03-26 18:46:34,391 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84989.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:46:36,729 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=84992.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:46:50,494 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85012.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:47:04,999 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.50 vs. 
limit=2.0 2023-03-26 18:47:05,339 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5505, 3.4325, 3.2352, 1.3648, 3.5613, 2.6670, 1.2438, 2.3342], device='cuda:2'), covar=tensor([0.2271, 0.1923, 0.1466, 0.3658, 0.1156, 0.1023, 0.3679, 0.1540], device='cuda:2'), in_proj_covar=tensor([0.0150, 0.0174, 0.0159, 0.0128, 0.0158, 0.0123, 0.0145, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 18:47:07,603 INFO [finetune.py:976] (2/7) Epoch 15, batch 4850, loss[loss=0.1722, simple_loss=0.2506, pruned_loss=0.04688, over 4848.00 frames. ], tot_loss[loss=0.1886, simple_loss=0.2565, pruned_loss=0.06039, over 952298.14 frames. ], batch size: 49, lr: 3.47e-03, grad_scale: 32.0 2023-03-26 18:47:08,860 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=85040.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:47:10,612 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7547, 1.6454, 1.5418, 1.6819, 1.5258, 4.5729, 1.9571, 2.3109], device='cuda:2'), covar=tensor([0.4262, 0.3172, 0.2439, 0.2972, 0.1771, 0.0203, 0.2195, 0.1146], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0124, 0.0115, 0.0097, 0.0097, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 18:47:10,635 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9801, 2.1179, 1.7515, 1.9107, 2.5050, 2.4997, 2.1032, 2.0232], device='cuda:2'), covar=tensor([0.0313, 0.0320, 0.0572, 0.0303, 0.0193, 0.0394, 0.0286, 0.0373], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0109, 0.0142, 0.0113, 0.0100, 0.0107, 0.0098, 0.0108], device='cuda:2'), out_proj_covar=tensor([7.2340e-05, 8.4192e-05, 1.1254e-04, 8.7141e-05, 7.7741e-05, 7.8859e-05, 7.3435e-05, 8.2586e-05], device='cuda:2') 2023-03-26 18:47:12,314 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.155e+02 1.585e+02 1.858e+02 2.141e+02 6.123e+02, threshold=3.716e+02, percent-clipped=1.0 2023-03-26 18:47:19,224 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0 2023-03-26 18:47:20,383 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1272, 1.8250, 2.2460, 2.0904, 1.8623, 1.8668, 2.0517, 2.0555], device='cuda:2'), covar=tensor([0.4060, 0.4254, 0.3192, 0.3995, 0.5152, 0.4006, 0.5255, 0.3225], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0239, 0.0258, 0.0269, 0.0268, 0.0241, 0.0280, 0.0237], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:47:25,855 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.77 vs. limit=2.0 2023-03-26 18:47:50,157 INFO [finetune.py:976] (2/7) Epoch 15, batch 4900, loss[loss=0.2173, simple_loss=0.2861, pruned_loss=0.0743, over 4896.00 frames. ], tot_loss[loss=0.1889, simple_loss=0.2572, pruned_loss=0.06028, over 951094.22 frames. ], batch size: 36, lr: 3.47e-03, grad_scale: 32.0 2023-03-26 18:47:53,289 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.77 vs. 
limit=2.0 2023-03-26 18:47:59,543 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9547, 1.7331, 2.2936, 1.4858, 2.0980, 2.2786, 1.6399, 2.4133], device='cuda:2'), covar=tensor([0.1599, 0.2041, 0.1695, 0.2244, 0.1088, 0.1675, 0.2910, 0.0986], device='cuda:2'), in_proj_covar=tensor([0.0193, 0.0203, 0.0192, 0.0190, 0.0176, 0.0213, 0.0217, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:48:06,023 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85107.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:48:11,341 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85115.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:48:11,957 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4754, 1.3881, 1.3753, 1.4630, 0.9155, 2.9094, 0.9990, 1.3824], device='cuda:2'), covar=tensor([0.3259, 0.2532, 0.2107, 0.2323, 0.1931, 0.0252, 0.2559, 0.1384], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0124, 0.0115, 0.0097, 0.0096, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 18:48:26,707 INFO [finetune.py:976] (2/7) Epoch 15, batch 4950, loss[loss=0.2135, simple_loss=0.2724, pruned_loss=0.07727, over 4776.00 frames. ], tot_loss[loss=0.1892, simple_loss=0.258, pruned_loss=0.06017, over 953381.31 frames. ], batch size: 51, lr: 3.47e-03, grad_scale: 32.0 2023-03-26 18:48:31,435 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.158e+02 1.624e+02 1.884e+02 2.194e+02 3.725e+02, threshold=3.769e+02, percent-clipped=1.0 2023-03-26 18:48:39,187 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85156.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:48:47,001 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85168.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 18:48:52,392 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85176.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:48:55,804 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=85181.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:49:00,447 INFO [finetune.py:976] (2/7) Epoch 15, batch 5000, loss[loss=0.1761, simple_loss=0.2411, pruned_loss=0.05554, over 4725.00 frames. ], tot_loss[loss=0.1878, simple_loss=0.2561, pruned_loss=0.05974, over 954118.40 frames. ], batch size: 23, lr: 3.47e-03, grad_scale: 32.0 2023-03-26 18:49:25,343 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.77 vs. limit=2.0 2023-03-26 18:49:36,185 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=85229.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:49:37,539 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.42 vs. 
limit=2.0 2023-03-26 18:49:41,002 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4771, 1.5242, 2.1080, 1.8730, 1.7254, 3.8606, 1.4048, 1.7123], device='cuda:2'), covar=tensor([0.1004, 0.1716, 0.1330, 0.0975, 0.1549, 0.0207, 0.1508, 0.1636], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0081, 0.0073, 0.0078, 0.0092, 0.0080, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 18:49:42,104 INFO [finetune.py:976] (2/7) Epoch 15, batch 5050, loss[loss=0.1766, simple_loss=0.2475, pruned_loss=0.05283, over 4788.00 frames. ], tot_loss[loss=0.1849, simple_loss=0.2529, pruned_loss=0.05849, over 955912.57 frames. ], batch size: 59, lr: 3.47e-03, grad_scale: 32.0 2023-03-26 18:49:46,816 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.063e+02 1.593e+02 1.872e+02 2.269e+02 5.264e+02, threshold=3.745e+02, percent-clipped=1.0 2023-03-26 18:50:22,632 INFO [finetune.py:976] (2/7) Epoch 15, batch 5100, loss[loss=0.1979, simple_loss=0.264, pruned_loss=0.06589, over 4834.00 frames. ], tot_loss[loss=0.183, simple_loss=0.2502, pruned_loss=0.05788, over 952071.46 frames. ], batch size: 33, lr: 3.47e-03, grad_scale: 32.0 2023-03-26 18:50:23,337 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=85289.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:50:31,517 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-26 18:50:33,842 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-26 18:50:42,799 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85307.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:50:57,629 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.86 vs. limit=2.0 2023-03-26 18:51:02,801 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=85337.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:51:03,346 INFO [finetune.py:976] (2/7) Epoch 15, batch 5150, loss[loss=0.1822, simple_loss=0.2623, pruned_loss=0.05102, over 4920.00 frames. ], tot_loss[loss=0.1837, simple_loss=0.2509, pruned_loss=0.05825, over 951664.69 frames. ], batch size: 42, lr: 3.47e-03, grad_scale: 32.0 2023-03-26 18:51:08,102 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.937e+01 1.526e+02 1.888e+02 2.256e+02 3.382e+02, threshold=3.776e+02, percent-clipped=0.0 2023-03-26 18:51:20,122 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-26 18:51:29,575 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9172, 1.7543, 1.5587, 1.4472, 1.9207, 1.6518, 1.8687, 1.9004], device='cuda:2'), covar=tensor([0.1511, 0.2206, 0.3336, 0.2601, 0.2867, 0.1842, 0.2751, 0.2032], device='cuda:2'), in_proj_covar=tensor([0.0181, 0.0188, 0.0233, 0.0253, 0.0245, 0.0201, 0.0212, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:51:37,046 INFO [finetune.py:976] (2/7) Epoch 15, batch 5200, loss[loss=0.1947, simple_loss=0.2593, pruned_loss=0.06503, over 4938.00 frames. ], tot_loss[loss=0.1863, simple_loss=0.254, pruned_loss=0.05927, over 952384.89 frames. ], batch size: 33, lr: 3.47e-03, grad_scale: 32.0 2023-03-26 18:51:46,973 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. 
limit=2.0 2023-03-26 18:51:54,267 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-26 18:52:04,182 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85428.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:52:10,614 INFO [finetune.py:976] (2/7) Epoch 15, batch 5250, loss[loss=0.1504, simple_loss=0.2254, pruned_loss=0.03768, over 4763.00 frames. ], tot_loss[loss=0.1867, simple_loss=0.255, pruned_loss=0.05915, over 951794.47 frames. ], batch size: 28, lr: 3.47e-03, grad_scale: 32.0 2023-03-26 18:52:15,821 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.083e+02 1.657e+02 1.928e+02 2.523e+02 8.274e+02, threshold=3.856e+02, percent-clipped=2.0 2023-03-26 18:52:23,067 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=85456.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:52:25,299 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3111, 1.9721, 1.9943, 1.1280, 2.2628, 2.4341, 2.1095, 1.8420], device='cuda:2'), covar=tensor([0.1005, 0.0676, 0.0597, 0.0640, 0.0512, 0.0601, 0.0447, 0.0750], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0153, 0.0125, 0.0130, 0.0132, 0.0128, 0.0144, 0.0148], device='cuda:2'), out_proj_covar=tensor([9.3123e-05, 1.1117e-04, 8.9664e-05, 9.3066e-05, 9.3186e-05, 9.2284e-05, 1.0369e-04, 1.0692e-04], device='cuda:2') 2023-03-26 18:52:27,683 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85463.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 18:52:33,028 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85471.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:52:40,799 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85483.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:52:44,235 INFO [finetune.py:976] (2/7) Epoch 15, batch 5300, loss[loss=0.1747, simple_loss=0.2501, pruned_loss=0.04969, over 4767.00 frames. ], tot_loss[loss=0.1891, simple_loss=0.2576, pruned_loss=0.06029, over 951683.78 frames. ], batch size: 28, lr: 3.47e-03, grad_scale: 32.0 2023-03-26 18:52:44,950 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85489.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:52:54,981 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=85504.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:53:14,956 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1447, 2.1616, 2.2090, 1.6109, 2.1757, 2.3355, 2.3314, 1.8543], device='cuda:2'), covar=tensor([0.0565, 0.0612, 0.0723, 0.0880, 0.0636, 0.0693, 0.0556, 0.1032], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0134, 0.0141, 0.0122, 0.0123, 0.0140, 0.0141, 0.0164], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:53:19,680 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85530.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:53:24,908 INFO [finetune.py:976] (2/7) Epoch 15, batch 5350, loss[loss=0.1702, simple_loss=0.244, pruned_loss=0.04819, over 4879.00 frames. ], tot_loss[loss=0.19, simple_loss=0.2586, pruned_loss=0.06064, over 950955.71 frames. 
], batch size: 32, lr: 3.47e-03, grad_scale: 32.0 2023-03-26 18:53:28,695 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85544.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:53:29,185 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.094e+02 1.542e+02 1.806e+02 2.197e+02 4.190e+02, threshold=3.613e+02, percent-clipped=2.0 2023-03-26 18:53:30,127 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.19 vs. limit=5.0 2023-03-26 18:53:54,510 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8250, 1.3427, 0.9089, 1.6620, 2.1430, 1.3830, 1.5241, 1.6273], device='cuda:2'), covar=tensor([0.1395, 0.2087, 0.1899, 0.1227, 0.1859, 0.1934, 0.1465, 0.1975], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0095, 0.0110, 0.0093, 0.0119, 0.0094, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 18:53:58,024 INFO [finetune.py:976] (2/7) Epoch 15, batch 5400, loss[loss=0.236, simple_loss=0.2792, pruned_loss=0.09643, over 4864.00 frames. ], tot_loss[loss=0.1865, simple_loss=0.2542, pruned_loss=0.05941, over 950484.27 frames. ], batch size: 31, lr: 3.47e-03, grad_scale: 32.0 2023-03-26 18:54:00,443 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85591.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:54:09,518 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.73 vs. limit=2.0 2023-03-26 18:54:11,205 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=85607.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:54:23,547 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85625.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:54:31,764 INFO [finetune.py:976] (2/7) Epoch 15, batch 5450, loss[loss=0.1382, simple_loss=0.2133, pruned_loss=0.0315, over 4825.00 frames. ], tot_loss[loss=0.1838, simple_loss=0.2513, pruned_loss=0.05811, over 950864.60 frames. 
], batch size: 30, lr: 3.47e-03, grad_scale: 64.0 2023-03-26 18:54:41,082 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.884e+01 1.516e+02 1.902e+02 2.390e+02 5.288e+02, threshold=3.804e+02, percent-clipped=4.0 2023-03-26 18:54:52,063 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=85655.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:55:05,691 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7430, 1.6583, 1.5456, 1.6476, 0.9870, 3.7586, 1.4036, 1.8214], device='cuda:2'), covar=tensor([0.3451, 0.2570, 0.2253, 0.2511, 0.2124, 0.0207, 0.2605, 0.1338], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0115, 0.0119, 0.0123, 0.0114, 0.0097, 0.0096, 0.0096], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 18:55:06,908 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7784, 1.6070, 1.4049, 1.3123, 1.5611, 1.5741, 1.5260, 2.1454], device='cuda:2'), covar=tensor([0.3814, 0.3604, 0.3140, 0.3303, 0.3524, 0.2299, 0.3284, 0.1716], device='cuda:2'), in_proj_covar=tensor([0.0283, 0.0258, 0.0224, 0.0273, 0.0245, 0.0213, 0.0247, 0.0224], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:55:12,319 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0917, 1.8625, 2.1327, 1.6513, 2.1188, 2.2260, 2.1912, 1.4279], device='cuda:2'), covar=tensor([0.0728, 0.0894, 0.0751, 0.0972, 0.0845, 0.0763, 0.0680, 0.1906], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0134, 0.0140, 0.0122, 0.0123, 0.0139, 0.0140, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:55:16,879 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85686.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 18:55:17,972 INFO [finetune.py:976] (2/7) Epoch 15, batch 5500, loss[loss=0.1974, simple_loss=0.2485, pruned_loss=0.0731, over 4430.00 frames. ], tot_loss[loss=0.1807, simple_loss=0.248, pruned_loss=0.05669, over 953386.98 frames. ], batch size: 19, lr: 3.47e-03, grad_scale: 64.0 2023-03-26 18:55:27,826 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-26 18:56:02,558 INFO [finetune.py:976] (2/7) Epoch 15, batch 5550, loss[loss=0.1881, simple_loss=0.2613, pruned_loss=0.05745, over 4835.00 frames. ], tot_loss[loss=0.1843, simple_loss=0.2519, pruned_loss=0.05831, over 953542.13 frames. 
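The grad_scale field steps between 32.0 and 64.0 across these records (32.0 through Epoch 15, batch 5400; 64.0 from batch 5450; back to 32.0 by Epoch 16, batch 50), the doubling/halving pattern of dynamic fp16 loss scaling: grow after a long overflow-free stretch, halve on an inf/nan gradient. A GradScaler configured along these lines would behave the same way; the constructor arguments are assumptions, not values from the recipe:

```python
import torch

# Dynamic fp16 loss scaling consistent with the grad_scale values above.
scaler = torch.cuda.amp.GradScaler(
    init_scale=32.0,      # the value seen through Epoch 15, batch 5400
    growth_factor=2.0,    # 32.0 -> 64.0, as at batch 5450
    backoff_factor=0.5,   # 64.0 -> 32.0 on an inf/nan gradient
    growth_interval=2000, # assumed growth cadence
)
```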
], batch size: 33, lr: 3.47e-03, grad_scale: 64.0 2023-03-26 18:56:06,722 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.623e+01 1.589e+02 1.875e+02 2.150e+02 4.153e+02, threshold=3.750e+02, percent-clipped=1.0 2023-03-26 18:56:17,603 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8310, 1.6286, 2.1323, 1.3438, 1.8810, 2.0577, 1.6086, 2.2973], device='cuda:2'), covar=tensor([0.1348, 0.2247, 0.1370, 0.1957, 0.0975, 0.1365, 0.2776, 0.0811], device='cuda:2'), in_proj_covar=tensor([0.0193, 0.0205, 0.0193, 0.0191, 0.0177, 0.0213, 0.0218, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:56:19,298 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=85763.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:56:24,636 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=85771.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:56:28,781 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-26 18:56:32,620 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85784.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:56:34,954 INFO [finetune.py:976] (2/7) Epoch 15, batch 5600, loss[loss=0.1554, simple_loss=0.218, pruned_loss=0.04642, over 4032.00 frames. ], tot_loss[loss=0.1848, simple_loss=0.2539, pruned_loss=0.05789, over 954085.73 frames. ], batch size: 17, lr: 3.47e-03, grad_scale: 64.0 2023-03-26 18:56:48,431 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=85811.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:56:53,122 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=85819.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:56:55,444 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6758, 1.5804, 2.0846, 3.4577, 2.3833, 2.2652, 1.0362, 2.7657], device='cuda:2'), covar=tensor([0.1717, 0.1430, 0.1280, 0.0537, 0.0761, 0.1560, 0.1836, 0.0478], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0165, 0.0101, 0.0139, 0.0125, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 18:57:04,149 INFO [finetune.py:976] (2/7) Epoch 15, batch 5650, loss[loss=0.2472, simple_loss=0.3016, pruned_loss=0.09636, over 4192.00 frames. ], tot_loss[loss=0.1875, simple_loss=0.2567, pruned_loss=0.05912, over 951330.22 frames. ], batch size: 65, lr: 3.47e-03, grad_scale: 64.0 2023-03-26 18:57:04,773 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85839.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:57:08,244 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.126e+02 1.563e+02 1.888e+02 2.328e+02 3.522e+02, threshold=3.776e+02, percent-clipped=0.0 2023-03-26 18:57:26,034 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85875.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:57:32,564 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85886.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:57:33,723 INFO [finetune.py:976] (2/7) Epoch 15, batch 5700, loss[loss=0.1194, simple_loss=0.1874, pruned_loss=0.02564, over 3991.00 frames. ], tot_loss[loss=0.1842, simple_loss=0.2525, pruned_loss=0.0579, over 938858.61 frames. 
], batch size: 17, lr: 3.47e-03, grad_scale: 64.0 2023-03-26 18:58:02,782 INFO [finetune.py:976] (2/7) Epoch 16, batch 0, loss[loss=0.1777, simple_loss=0.2551, pruned_loss=0.05018, over 4888.00 frames. ], tot_loss[loss=0.1777, simple_loss=0.2551, pruned_loss=0.05018, over 4888.00 frames. ], batch size: 35, lr: 3.46e-03, grad_scale: 64.0 2023-03-26 18:58:02,782 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 18:58:09,878 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8103, 1.3895, 0.9589, 1.6109, 2.0813, 1.1544, 1.6211, 1.6335], device='cuda:2'), covar=tensor([0.1374, 0.1833, 0.1752, 0.1181, 0.1745, 0.1940, 0.1232, 0.1870], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0095, 0.0110, 0.0093, 0.0119, 0.0094, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 18:58:17,920 INFO [finetune.py:1010] (2/7) Epoch 16, validation: loss=0.1572, simple_loss=0.2278, pruned_loss=0.04329, over 2265189.00 frames. 2023-03-26 18:58:17,921 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB 2023-03-26 18:58:22,812 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5797, 1.4041, 1.3157, 1.4997, 1.8433, 1.7417, 1.4585, 1.3235], device='cuda:2'), covar=tensor([0.0323, 0.0356, 0.0601, 0.0314, 0.0195, 0.0458, 0.0351, 0.0381], device='cuda:2'), in_proj_covar=tensor([0.0094, 0.0109, 0.0143, 0.0113, 0.0100, 0.0107, 0.0097, 0.0109], device='cuda:2'), out_proj_covar=tensor([7.2658e-05, 8.4242e-05, 1.1341e-04, 8.7361e-05, 7.8285e-05, 7.8926e-05, 7.3140e-05, 8.2796e-05], device='cuda:2') 2023-03-26 18:58:25,993 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0 2023-03-26 18:58:26,968 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85930.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 18:58:29,350 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9418, 1.8236, 1.6840, 1.8141, 1.2789, 4.6202, 1.6792, 2.2335], device='cuda:2'), covar=tensor([0.3322, 0.2456, 0.2102, 0.2391, 0.1780, 0.0108, 0.2467, 0.1237], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0114, 0.0118, 0.0123, 0.0113, 0.0096, 0.0095, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 18:58:30,593 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85936.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:58:36,423 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.062e+02 1.566e+02 1.783e+02 2.274e+02 8.459e+02, threshold=3.567e+02, percent-clipped=4.0 2023-03-26 18:58:40,699 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4842, 2.4453, 1.9025, 2.5705, 2.5424, 2.0317, 3.0516, 2.4957], device='cuda:2'), covar=tensor([0.1304, 0.2234, 0.3169, 0.2768, 0.2461, 0.1732, 0.2681, 0.1734], device='cuda:2'), in_proj_covar=tensor([0.0182, 0.0188, 0.0234, 0.0254, 0.0244, 0.0202, 0.0212, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 18:58:44,317 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85957.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:58:49,654 INFO [finetune.py:976] (2/7) Epoch 16, batch 50, loss[loss=0.1922, simple_loss=0.2546, pruned_loss=0.06496, over 4882.00 frames. 
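At the first batch of each epoch the trainer pauses to compute a validation loss over the whole dev set (here roughly 2.27M frames) before resuming, reporting the same loss/simple_loss/pruned_loss decomposition plus peak CUDA memory. A minimal frame-weighted validation pass in that spirit; the per-batch loss_fn interface is hypothetical, not the recipe's API:

```python
import torch

def compute_validation_loss(loss_fn, valid_dl):
    """Sketch of the 'Computing validation loss' step above: sum the loss
    over every dev batch, weight by frames, report the average.
    loss_fn(batch) -> (summed_loss, num_frames) is a hypothetical interface."""
    tot_loss, tot_frames = 0.0, 0.0
    with torch.no_grad():
        for batch in valid_dl:
            loss, num_frames = loss_fn(batch)
            tot_loss += float(loss)
            tot_frames += num_frames
    # Peak usage, as in "Maximum memory allocated so far is 6366MB":
    peak_mb = torch.cuda.max_memory_allocated() // (1024 * 1024)
    return tot_loss / tot_frames, peak_mb
```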
], tot_loss[loss=0.1873, simple_loss=0.2565, pruned_loss=0.05906, over 215914.51 frames. ], batch size: 35, lr: 3.46e-03, grad_scale: 32.0 2023-03-26 18:58:59,304 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85981.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 18:59:05,875 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85991.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 18:59:14,747 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86002.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:59:17,224 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-26 18:59:23,676 INFO [finetune.py:976] (2/7) Epoch 16, batch 100, loss[loss=0.1205, simple_loss=0.1869, pruned_loss=0.02705, over 4236.00 frames. ], tot_loss[loss=0.1751, simple_loss=0.2446, pruned_loss=0.05284, over 378533.37 frames. ], batch size: 18, lr: 3.46e-03, grad_scale: 32.0 2023-03-26 18:59:24,989 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86018.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:59:44,127 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.132e+02 1.611e+02 1.877e+02 2.147e+02 3.763e+02, threshold=3.754e+02, percent-clipped=3.0 2023-03-26 18:59:47,928 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86052.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:00:00,112 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86063.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:00:06,620 INFO [finetune.py:976] (2/7) Epoch 16, batch 150, loss[loss=0.1526, simple_loss=0.2198, pruned_loss=0.04276, over 4768.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2421, pruned_loss=0.05339, over 507087.90 frames. ], batch size: 26, lr: 3.46e-03, grad_scale: 32.0 2023-03-26 19:00:08,581 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86069.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:00:17,301 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.72 vs. limit=2.0 2023-03-26 19:00:27,240 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86084.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:00:43,681 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8303, 3.9863, 3.7475, 1.9241, 4.0898, 2.9972, 0.8222, 2.8724], device='cuda:2'), covar=tensor([0.2327, 0.1951, 0.1675, 0.3660, 0.1029, 0.1101, 0.4965, 0.1589], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0174, 0.0160, 0.0129, 0.0158, 0.0123, 0.0146, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 19:00:50,258 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86113.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:00:51,950 INFO [finetune.py:976] (2/7) Epoch 16, batch 200, loss[loss=0.1993, simple_loss=0.2638, pruned_loss=0.06741, over 4825.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.2419, pruned_loss=0.05434, over 605650.64 frames. 
], batch size: 33, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:01:04,040 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86130.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:01:05,147 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86132.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:01:09,828 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86139.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:01:14,012 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.089e+02 1.565e+02 1.801e+02 2.285e+02 3.660e+02, threshold=3.601e+02, percent-clipped=0.0
2023-03-26 19:01:27,190 INFO [finetune.py:976] (2/7) Epoch 16, batch 250, loss[loss=0.1782, simple_loss=0.2543, pruned_loss=0.05104, over 4909.00 frames. ], tot_loss[loss=0.18, simple_loss=0.2469, pruned_loss=0.05653, over 684187.07 frames. ], batch size: 37, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:01:40,820 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86186.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:01:41,402 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86187.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:02:00,655 INFO [finetune.py:976] (2/7) Epoch 16, batch 300, loss[loss=0.2143, simple_loss=0.2733, pruned_loss=0.07767, over 4909.00 frames. ], tot_loss[loss=0.1835, simple_loss=0.2514, pruned_loss=0.0578, over 742488.04 frames. ], batch size: 37, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:02:04,372 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6675, 1.5898, 1.3526, 1.7123, 1.9211, 1.6743, 1.3034, 1.3562], device='cuda:2'), covar=tensor([0.2193, 0.2032, 0.1967, 0.1614, 0.1627, 0.1196, 0.2487, 0.1967], device='cuda:2'), in_proj_covar=tensor([0.0240, 0.0207, 0.0211, 0.0191, 0.0241, 0.0184, 0.0214, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:02:04,970 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1935, 2.1005, 1.6395, 2.0707, 2.0686, 1.8166, 2.4230, 2.2068], device='cuda:2'), covar=tensor([0.1360, 0.2017, 0.3115, 0.2658, 0.2597, 0.1786, 0.3370, 0.1744], device='cuda:2'), in_proj_covar=tensor([0.0182, 0.0189, 0.0235, 0.0255, 0.0245, 0.0203, 0.0213, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:02:11,170 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86231.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:02:12,977 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86234.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:02:13,730 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.39 vs. limit=5.0
2023-03-26 19:02:20,698 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.190e+02 1.625e+02 1.967e+02 2.251e+02 5.649e+02, threshold=3.935e+02, percent-clipped=3.0
2023-03-26 19:02:20,831 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86246.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:02:23,812 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5892, 3.4730, 3.3459, 1.4733, 3.5645, 2.6540, 0.8125, 2.4248], device='cuda:2'), covar=tensor([0.2281, 0.2188, 0.1652, 0.3591, 0.1121, 0.1113, 0.4380, 0.1543], device='cuda:2'), in_proj_covar=tensor([0.0150, 0.0173, 0.0159, 0.0127, 0.0157, 0.0122, 0.0145, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 19:02:34,278 INFO [finetune.py:976] (2/7) Epoch 16, batch 350, loss[loss=0.2258, simple_loss=0.2894, pruned_loss=0.08113, over 4792.00 frames. ], tot_loss[loss=0.1846, simple_loss=0.2532, pruned_loss=0.058, over 790945.98 frames. ], batch size: 51, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:02:39,871 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1600, 1.9759, 2.1157, 0.7700, 2.3205, 2.5736, 2.1079, 1.9234], device='cuda:2'), covar=tensor([0.1091, 0.0775, 0.0546, 0.0883, 0.0589, 0.0638, 0.0551, 0.0753], device='cuda:2'), in_proj_covar=tensor([0.0125, 0.0151, 0.0124, 0.0128, 0.0129, 0.0127, 0.0142, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.1856e-05, 1.0970e-04, 8.8804e-05, 9.1725e-05, 9.1320e-05, 9.1600e-05, 1.0220e-04, 1.0557e-04], device='cuda:2')
2023-03-26 19:02:44,512 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86281.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:02:47,961 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86286.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 19:03:01,132 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86307.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:03:05,210 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86313.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:03:07,485 INFO [finetune.py:976] (2/7) Epoch 16, batch 400, loss[loss=0.1872, simple_loss=0.2554, pruned_loss=0.05949, over 4854.00 frames. ], tot_loss[loss=0.1862, simple_loss=0.255, pruned_loss=0.05869, over 828626.70 frames. ], batch size: 49, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:03:08,675 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4967, 1.5990, 2.3061, 2.0916, 1.8695, 4.3689, 1.5235, 1.6919], device='cuda:2'), covar=tensor([0.1274, 0.2289, 0.1212, 0.1196, 0.1896, 0.0269, 0.1999, 0.2304], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0078, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 19:03:15,905 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86329.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:03:34,374 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.307e+01 1.538e+02 1.774e+02 2.172e+02 4.200e+02, threshold=3.548e+02, percent-clipped=1.0
2023-03-26 19:03:35,735 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1089, 1.8619, 1.9869, 0.8297, 2.1666, 2.5595, 2.0972, 1.8117], device='cuda:2'), covar=tensor([0.0869, 0.0824, 0.0513, 0.0764, 0.0521, 0.0592, 0.0466, 0.0752], device='cuda:2'), in_proj_covar=tensor([0.0125, 0.0151, 0.0124, 0.0128, 0.0130, 0.0127, 0.0142, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.2080e-05, 1.1004e-04, 8.8955e-05, 9.1800e-05, 9.1824e-05, 9.1781e-05, 1.0246e-04, 1.0591e-04], device='cuda:2')
2023-03-26 19:03:45,095 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86358.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:03:50,329 INFO [finetune.py:976] (2/7) Epoch 16, batch 450, loss[loss=0.1939, simple_loss=0.2594, pruned_loss=0.06421, over 4904.00 frames. ], tot_loss[loss=0.1848, simple_loss=0.2534, pruned_loss=0.05811, over 857038.61 frames. ], batch size: 37, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:04:18,722 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86408.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:04:24,032 INFO [finetune.py:976] (2/7) Epoch 16, batch 500, loss[loss=0.1929, simple_loss=0.2521, pruned_loss=0.06684, over 4308.00 frames. ], tot_loss[loss=0.1841, simple_loss=0.2519, pruned_loss=0.05814, over 877906.11 frames. ], batch size: 19, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:04:30,021 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86425.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:04:40,629 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86440.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:04:43,598 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86445.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:04:44,072 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.779e+01 1.556e+02 1.875e+02 2.226e+02 4.465e+02, threshold=3.750e+02, percent-clipped=2.0
2023-03-26 19:04:57,163 INFO [finetune.py:976] (2/7) Epoch 16, batch 550, loss[loss=0.1465, simple_loss=0.2183, pruned_loss=0.03738, over 4936.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.2495, pruned_loss=0.05779, over 897157.27 frames. ], batch size: 33, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:05:31,548 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86501.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:05:39,630 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86506.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 19:05:50,204 INFO [finetune.py:976] (2/7) Epoch 16, batch 600, loss[loss=0.2144, simple_loss=0.2701, pruned_loss=0.07934, over 4871.00 frames. ], tot_loss[loss=0.1819, simple_loss=0.2495, pruned_loss=0.05712, over 910882.75 frames. ], batch size: 31, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:06:04,172 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86531.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:06:14,603 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.101e+02 1.575e+02 1.922e+02 2.222e+02 3.111e+02, threshold=3.844e+02, percent-clipped=0.0
2023-03-26 19:06:15,334 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1453, 1.5341, 0.8602, 1.8892, 2.3536, 1.8058, 1.7201, 1.8435], device='cuda:2'), covar=tensor([0.1416, 0.1957, 0.2091, 0.1179, 0.1863, 0.1887, 0.1343, 0.1879], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0095, 0.0111, 0.0093, 0.0119, 0.0094, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 19:06:26,823 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0296, 1.9219, 1.6887, 1.9159, 1.4260, 4.6622, 1.7504, 2.3204], device='cuda:2'), covar=tensor([0.3352, 0.2510, 0.2184, 0.2340, 0.1750, 0.0096, 0.2508, 0.1260], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0124, 0.0115, 0.0097, 0.0096, 0.0096], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 19:06:27,336 INFO [finetune.py:976] (2/7) Epoch 16, batch 650, loss[loss=0.1415, simple_loss=0.2117, pruned_loss=0.0357, over 4757.00 frames. ], tot_loss[loss=0.1851, simple_loss=0.2537, pruned_loss=0.05824, over 919954.84 frames. ], batch size: 28, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:06:36,233 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86579.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:06:36,303 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9640, 2.2682, 1.6808, 1.8588, 2.5966, 2.4906, 2.1602, 2.0552], device='cuda:2'), covar=tensor([0.0284, 0.0289, 0.0576, 0.0345, 0.0210, 0.0491, 0.0301, 0.0339], device='cuda:2'), in_proj_covar=tensor([0.0092, 0.0106, 0.0141, 0.0111, 0.0098, 0.0105, 0.0096, 0.0107], device='cuda:2'), out_proj_covar=tensor([7.1539e-05, 8.2477e-05, 1.1165e-04, 8.5674e-05, 7.6561e-05, 7.7482e-05, 7.2488e-05, 8.1761e-05], device='cuda:2')
2023-03-26 19:06:37,379 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86580.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:06:39,262 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6906, 1.5894, 1.4120, 1.8393, 2.1459, 1.7746, 1.4051, 1.3789], device='cuda:2'), covar=tensor([0.2201, 0.2062, 0.1917, 0.1511, 0.1673, 0.1222, 0.2412, 0.1921], device='cuda:2'), in_proj_covar=tensor([0.0240, 0.0208, 0.0211, 0.0191, 0.0242, 0.0184, 0.0214, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:06:41,054 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86586.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 19:06:52,264 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86602.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:06:59,431 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86613.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:07:01,142 INFO [finetune.py:976] (2/7) Epoch 16, batch 700, loss[loss=0.1949, simple_loss=0.2659, pruned_loss=0.0619, over 4829.00 frames. ], tot_loss[loss=0.186, simple_loss=0.2551, pruned_loss=0.05841, over 929026.98 frames. ], batch size: 33, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:07:13,496 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86634.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 19:07:17,805 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86641.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:07:21,644 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.125e+02 1.513e+02 1.866e+02 2.326e+02 3.823e+02, threshold=3.732e+02, percent-clipped=0.0
2023-03-26 19:07:29,572 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86658.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:07:30,989 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.31 vs. limit=5.0
2023-03-26 19:07:31,329 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86661.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:07:34,797 INFO [finetune.py:976] (2/7) Epoch 16, batch 750, loss[loss=0.213, simple_loss=0.267, pruned_loss=0.07953, over 4753.00 frames. ], tot_loss[loss=0.1872, simple_loss=0.2566, pruned_loss=0.05893, over 935404.79 frames. ], batch size: 26, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:07:39,223 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0
2023-03-26 19:08:02,109 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86706.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:08:03,364 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86708.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:08:08,617 INFO [finetune.py:976] (2/7) Epoch 16, batch 800, loss[loss=0.1871, simple_loss=0.2602, pruned_loss=0.05696, over 4742.00 frames. ], tot_loss[loss=0.1869, simple_loss=0.2562, pruned_loss=0.05877, over 941218.28 frames. ], batch size: 54, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:08:14,187 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86725.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:08:26,712 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.87 vs. limit=5.0
2023-03-26 19:08:28,783 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.094e+02 1.501e+02 1.840e+02 2.208e+02 4.378e+02, threshold=3.681e+02, percent-clipped=4.0
2023-03-26 19:08:40,241 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86756.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:08:49,459 INFO [finetune.py:976] (2/7) Epoch 16, batch 850, loss[loss=0.1709, simple_loss=0.2465, pruned_loss=0.04768, over 4839.00 frames. ], tot_loss[loss=0.1841, simple_loss=0.2532, pruned_loss=0.05746, over 943623.73 frames. ], batch size: 33, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:08:54,221 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86773.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:09:06,034 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6781, 1.2063, 0.8720, 1.5961, 2.0614, 1.4597, 1.4480, 1.5594], device='cuda:2'), covar=tensor([0.1465, 0.2062, 0.1974, 0.1257, 0.1941, 0.1936, 0.1430, 0.1885], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0095, 0.0111, 0.0093, 0.0119, 0.0094, 0.0100, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2')
2023-03-26 19:09:09,590 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86796.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:09:13,598 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86801.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 19:09:21,903 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86814.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:09:23,014 INFO [finetune.py:976] (2/7) Epoch 16, batch 900, loss[loss=0.1231, simple_loss=0.1967, pruned_loss=0.02472, over 4837.00 frames. ], tot_loss[loss=0.18, simple_loss=0.2491, pruned_loss=0.05548, over 946305.59 frames. ], batch size: 30, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:09:27,879 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6967, 1.6484, 1.4399, 1.8057, 1.9315, 1.7882, 1.3482, 1.4466], device='cuda:2'), covar=tensor([0.2345, 0.2149, 0.2055, 0.1656, 0.1791, 0.1255, 0.2765, 0.2042], device='cuda:2'), in_proj_covar=tensor([0.0241, 0.0208, 0.0212, 0.0192, 0.0243, 0.0185, 0.0216, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:09:43,090 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.048e+02 1.502e+02 1.904e+02 2.205e+02 3.944e+02, threshold=3.808e+02, percent-clipped=2.0
2023-03-26 19:09:45,004 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6872, 1.5409, 1.0280, 0.2792, 1.2376, 1.4976, 1.4699, 1.4512], device='cuda:2'), covar=tensor([0.0869, 0.0866, 0.1496, 0.2043, 0.1392, 0.2310, 0.2397, 0.0892], device='cuda:2'), in_proj_covar=tensor([0.0167, 0.0194, 0.0198, 0.0182, 0.0210, 0.0206, 0.0222, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:09:46,821 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9595, 1.8503, 1.6676, 1.9797, 2.4629, 2.0882, 1.6546, 1.5738], device='cuda:2'), covar=tensor([0.2138, 0.1966, 0.1914, 0.1659, 0.1740, 0.1141, 0.2406, 0.1953], device='cuda:2'), in_proj_covar=tensor([0.0240, 0.0208, 0.0211, 0.0191, 0.0242, 0.0185, 0.0215, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:09:56,619 INFO [finetune.py:976] (2/7) Epoch 16, batch 950, loss[loss=0.1643, simple_loss=0.2364, pruned_loss=0.04607, over 4798.00 frames. ], tot_loss[loss=0.1795, simple_loss=0.2483, pruned_loss=0.05535, over 949934.84 frames. ], batch size: 51, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:10:02,688 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86875.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:10:04,476 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86878.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:10:08,746 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.08 vs. limit=5.0
2023-03-26 19:10:20,395 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86902.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:10:40,037 INFO [finetune.py:976] (2/7) Epoch 16, batch 1000, loss[loss=0.2146, simple_loss=0.2604, pruned_loss=0.0844, over 4249.00 frames. ], tot_loss[loss=0.1827, simple_loss=0.2513, pruned_loss=0.05705, over 952664.07 frames. ], batch size: 18, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:11:01,990 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86936.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:11:08,279 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86939.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:11:12,980 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.150e+02 1.633e+02 1.920e+02 2.320e+02 4.350e+02, threshold=3.840e+02, percent-clipped=2.0
2023-03-26 19:11:20,312 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86950.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:11:34,824 INFO [finetune.py:976] (2/7) Epoch 16, batch 1050, loss[loss=0.1913, simple_loss=0.271, pruned_loss=0.05577, over 4812.00 frames. ], tot_loss[loss=0.1848, simple_loss=0.2546, pruned_loss=0.05751, over 954556.40 frames. ], batch size: 38, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:11:58,503 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87002.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 19:12:08,331 INFO [finetune.py:976] (2/7) Epoch 16, batch 1100, loss[loss=0.156, simple_loss=0.2402, pruned_loss=0.0359, over 4917.00 frames. ], tot_loss[loss=0.1873, simple_loss=0.2571, pruned_loss=0.05873, over 955666.89 frames. ], batch size: 38, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:12:27,410 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.131e+02 1.613e+02 1.835e+02 2.273e+02 4.124e+02, threshold=3.670e+02, percent-clipped=1.0
2023-03-26 19:12:39,658 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87063.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 19:12:41,762 INFO [finetune.py:976] (2/7) Epoch 16, batch 1150, loss[loss=0.1472, simple_loss=0.2138, pruned_loss=0.04034, over 4776.00 frames. ], tot_loss[loss=0.1882, simple_loss=0.2577, pruned_loss=0.05939, over 956779.99 frames. ], batch size: 29, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:12:45,540 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0
2023-03-26 19:13:01,410 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=87096.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:13:04,442 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=87101.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 19:13:15,215 INFO [finetune.py:976] (2/7) Epoch 16, batch 1200, loss[loss=0.1586, simple_loss=0.2373, pruned_loss=0.03991, over 4766.00 frames. ], tot_loss[loss=0.1863, simple_loss=0.2557, pruned_loss=0.05847, over 954949.98 frames. ], batch size: 51, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:13:33,239 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=87144.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:13:34,371 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.093e+01 1.544e+02 1.890e+02 2.251e+02 4.242e+02, threshold=3.781e+02, percent-clipped=1.0
2023-03-26 19:13:34,642 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.73 vs. limit=2.0
2023-03-26 19:13:36,665 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=87149.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:13:50,131 INFO [finetune.py:976] (2/7) Epoch 16, batch 1250, loss[loss=0.1353, simple_loss=0.2133, pruned_loss=0.0287, over 4794.00 frames. ], tot_loss[loss=0.1846, simple_loss=0.2531, pruned_loss=0.05805, over 953234.70 frames. ], batch size: 29, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:13:53,126 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87170.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:14:31,063 INFO [finetune.py:976] (2/7) Epoch 16, batch 1300, loss[loss=0.1645, simple_loss=0.2282, pruned_loss=0.0504, over 4810.00 frames. ], tot_loss[loss=0.1807, simple_loss=0.2489, pruned_loss=0.0563, over 952352.41 frames. ], batch size: 51, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:14:39,435 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8898, 1.2226, 1.9490, 1.9368, 1.7365, 1.6792, 1.8009, 1.8307], device='cuda:2'), covar=tensor([0.3912, 0.3967, 0.3459, 0.3705, 0.4661, 0.3700, 0.4602, 0.3170], device='cuda:2'), in_proj_covar=tensor([0.0245, 0.0238, 0.0257, 0.0269, 0.0267, 0.0240, 0.0280, 0.0237], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:14:41,214 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87230.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:14:43,586 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87234.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:14:44,804 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=87236.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:14:51,269 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.006e+02 1.578e+02 1.944e+02 2.250e+02 4.130e+02, threshold=3.887e+02, percent-clipped=1.0
2023-03-26 19:15:04,398 INFO [finetune.py:976] (2/7) Epoch 16, batch 1350, loss[loss=0.1431, simple_loss=0.2135, pruned_loss=0.03638, over 4824.00 frames. ], tot_loss[loss=0.1805, simple_loss=0.2485, pruned_loss=0.05624, over 951711.36 frames. ], batch size: 30, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:15:16,954 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=87284.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:15:21,328 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87291.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:15:44,163 INFO [finetune.py:976] (2/7) Epoch 16, batch 1400, loss[loss=0.1917, simple_loss=0.2553, pruned_loss=0.06406, over 4889.00 frames. ], tot_loss[loss=0.1849, simple_loss=0.2533, pruned_loss=0.05821, over 952143.62 frames. ], batch size: 32, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:15:50,407 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8818, 1.4077, 1.9852, 1.8304, 1.6619, 1.6262, 1.8056, 1.8209], device='cuda:2'), covar=tensor([0.3913, 0.4069, 0.3259, 0.3723, 0.4671, 0.3650, 0.4305, 0.3025], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0239, 0.0258, 0.0270, 0.0269, 0.0241, 0.0282, 0.0238], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:16:19,254 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.049e+02 1.619e+02 1.873e+02 2.376e+02 3.982e+02, threshold=3.745e+02, percent-clipped=1.0
2023-03-26 19:16:31,530 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87358.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 19:16:38,507 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87362.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:16:41,378 INFO [finetune.py:976] (2/7) Epoch 16, batch 1450, loss[loss=0.2337, simple_loss=0.3054, pruned_loss=0.08107, over 4838.00 frames. ], tot_loss[loss=0.1863, simple_loss=0.2557, pruned_loss=0.05847, over 954314.16 frames. ], batch size: 47, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:16:42,862 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0
2023-03-26 19:17:17,381 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5311, 1.4867, 1.6567, 1.7112, 1.7653, 3.2595, 1.5736, 1.6129], device='cuda:2'), covar=tensor([0.0930, 0.1772, 0.1173, 0.0968, 0.1471, 0.0235, 0.1342, 0.1689], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0082, 0.0075, 0.0078, 0.0093, 0.0081, 0.0086, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 19:17:18,471 INFO [finetune.py:976] (2/7) Epoch 16, batch 1500, loss[loss=0.2328, simple_loss=0.2966, pruned_loss=0.0845, over 4720.00 frames. ], tot_loss[loss=0.1875, simple_loss=0.257, pruned_loss=0.05897, over 954661.31 frames. ], batch size: 59, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:17:22,255 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.97 vs. limit=5.0
2023-03-26 19:17:23,385 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87423.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:17:39,130 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.097e+02 1.635e+02 2.033e+02 2.421e+02 4.092e+02, threshold=4.066e+02, percent-clipped=1.0
2023-03-26 19:17:48,791 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87461.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:17:52,196 INFO [finetune.py:976] (2/7) Epoch 16, batch 1550, loss[loss=0.133, simple_loss=0.2138, pruned_loss=0.02605, over 4759.00 frames. ], tot_loss[loss=0.1867, simple_loss=0.2562, pruned_loss=0.05862, over 956156.06 frames. ], batch size: 28, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:17:52,941 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.6736, 1.6185, 1.6486, 0.9679, 1.8004, 2.0102, 1.8866, 1.5193], device='cuda:2'), covar=tensor([0.0882, 0.0633, 0.0427, 0.0551, 0.0373, 0.0467, 0.0320, 0.0650], device='cuda:2'), in_proj_covar=tensor([0.0125, 0.0151, 0.0124, 0.0128, 0.0131, 0.0127, 0.0143, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.2411e-05, 1.1020e-04, 8.9094e-05, 9.1775e-05, 9.2722e-05, 9.1769e-05, 1.0274e-04, 1.0640e-04], device='cuda:2')
2023-03-26 19:17:55,187 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=87470.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:18:18,833 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87505.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:18:25,477 INFO [finetune.py:976] (2/7) Epoch 16, batch 1600, loss[loss=0.1601, simple_loss=0.23, pruned_loss=0.04512, over 4907.00 frames. ], tot_loss[loss=0.1852, simple_loss=0.254, pruned_loss=0.05819, over 954267.11 frames. ], batch size: 46, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:18:27,231 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=87518.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:18:29,749 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87522.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:18:38,562 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=87534.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:18:38,600 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6946, 1.5148, 1.0905, 0.2307, 1.3254, 1.4837, 1.4261, 1.4712], device='cuda:2'), covar=tensor([0.0873, 0.0831, 0.1244, 0.1958, 0.1277, 0.2254, 0.2282, 0.0843], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0196, 0.0201, 0.0184, 0.0213, 0.0208, 0.0224, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:18:41,928 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4351, 1.1971, 1.2639, 1.2891, 1.6081, 1.5935, 1.4103, 1.2390], device='cuda:2'), covar=tensor([0.0390, 0.0368, 0.0547, 0.0367, 0.0230, 0.0378, 0.0384, 0.0425], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0107, 0.0141, 0.0112, 0.0099, 0.0105, 0.0097, 0.0107], device='cuda:2'), out_proj_covar=tensor([7.2228e-05, 8.2860e-05, 1.1173e-04, 8.6331e-05, 7.7102e-05, 7.7584e-05, 7.2603e-05, 8.1929e-05], device='cuda:2')
2023-03-26 19:18:46,617 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.707e+01 1.405e+02 1.628e+02 1.991e+02 3.372e+02, threshold=3.256e+02, percent-clipped=0.0
2023-03-26 19:18:48,602 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9850, 1.5624, 2.2378, 1.4820, 2.0365, 2.1257, 1.5483, 2.2471], device='cuda:2'), covar=tensor([0.1101, 0.1798, 0.1211, 0.1789, 0.0736, 0.1261, 0.2398, 0.0732], device='cuda:2'), in_proj_covar=tensor([0.0193, 0.0205, 0.0192, 0.0190, 0.0178, 0.0214, 0.0217, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:18:59,343 INFO [finetune.py:976] (2/7) Epoch 16, batch 1650, loss[loss=0.1597, simple_loss=0.2355, pruned_loss=0.04192, over 4751.00 frames. ], tot_loss[loss=0.184, simple_loss=0.2521, pruned_loss=0.05794, over 955981.49 frames. ], batch size: 27, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:18:59,475 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87566.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 19:19:00,687 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1598, 1.7951, 2.4048, 1.5077, 2.1381, 2.3147, 1.7336, 2.4689], device='cuda:2'), covar=tensor([0.1344, 0.2223, 0.1520, 0.2261, 0.1029, 0.1545, 0.2917, 0.0991], device='cuda:2'), in_proj_covar=tensor([0.0193, 0.0205, 0.0192, 0.0190, 0.0178, 0.0214, 0.0217, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:19:10,146 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=87582.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:19:11,429 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87584.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:19:13,571 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87586.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:19:17,422 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87589.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:19:29,735 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.3359, 3.7965, 3.9612, 4.1727, 4.1186, 3.8208, 4.4562, 1.4712], device='cuda:2'), covar=tensor([0.0816, 0.0946, 0.0946, 0.1167, 0.1274, 0.1615, 0.0716, 0.5909], device='cuda:2'), in_proj_covar=tensor([0.0352, 0.0246, 0.0277, 0.0295, 0.0337, 0.0285, 0.0300, 0.0299], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:19:36,457 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87605.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:19:41,845 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5059, 1.3785, 1.2497, 1.4701, 1.7128, 1.5022, 1.1169, 1.2794], device='cuda:2'), covar=tensor([0.1950, 0.1955, 0.1817, 0.1540, 0.1510, 0.1242, 0.2407, 0.1760], device='cuda:2'), in_proj_covar=tensor([0.0239, 0.0208, 0.0210, 0.0191, 0.0242, 0.0185, 0.0215, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:19:43,483 INFO [finetune.py:976] (2/7) Epoch 16, batch 1700, loss[loss=0.1938, simple_loss=0.2686, pruned_loss=0.0595, over 4850.00 frames. ], tot_loss[loss=0.1827, simple_loss=0.2503, pruned_loss=0.05759, over 954408.26 frames. ], batch size: 44, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:19:44,817 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6667, 1.5664, 2.1726, 3.2665, 2.2239, 2.4303, 1.2652, 2.6886], device='cuda:2'), covar=tensor([0.1805, 0.1503, 0.1278, 0.0546, 0.0850, 0.1702, 0.1679, 0.0525], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0117, 0.0135, 0.0165, 0.0101, 0.0139, 0.0125, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 19:20:03,773 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87645.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 19:20:04,233 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.164e+02 1.606e+02 1.879e+02 2.372e+02 9.403e+02, threshold=3.758e+02, percent-clipped=6.0
2023-03-26 19:20:06,845 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87650.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:20:11,628 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=87658.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 19:20:16,842 INFO [finetune.py:976] (2/7) Epoch 16, batch 1750, loss[loss=0.2048, simple_loss=0.284, pruned_loss=0.06277, over 4932.00 frames. ], tot_loss[loss=0.1852, simple_loss=0.2528, pruned_loss=0.05883, over 956228.39 frames. ], batch size: 38, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:20:16,971 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87666.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 19:20:23,012 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8985, 1.2310, 1.8607, 1.8665, 1.6246, 1.5794, 1.7739, 1.7138], device='cuda:2'), covar=tensor([0.3289, 0.3711, 0.3043, 0.3207, 0.4166, 0.3368, 0.3898, 0.3071], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0239, 0.0258, 0.0270, 0.0269, 0.0242, 0.0281, 0.0238], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:20:44,154 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=87706.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 19:20:50,640 INFO [finetune.py:976] (2/7) Epoch 16, batch 1800, loss[loss=0.1903, simple_loss=0.2655, pruned_loss=0.05752, over 4737.00 frames. ], tot_loss[loss=0.1863, simple_loss=0.2549, pruned_loss=0.05887, over 956108.10 frames. ], batch size: 27, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:20:51,913 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87718.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:21:13,298 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.163e+02 1.635e+02 1.907e+02 2.399e+02 5.758e+02, threshold=3.813e+02, percent-clipped=1.0
2023-03-26 19:21:36,191 INFO [finetune.py:976] (2/7) Epoch 16, batch 1850, loss[loss=0.1843, simple_loss=0.2606, pruned_loss=0.05399, over 4874.00 frames. ], tot_loss[loss=0.1881, simple_loss=0.2567, pruned_loss=0.05975, over 956080.13 frames. ], batch size: 35, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:21:38,617 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.8382, 3.3826, 3.5214, 3.7247, 3.5669, 3.3283, 3.8969, 1.2273], device='cuda:2'), covar=tensor([0.0987, 0.1055, 0.1097, 0.1094, 0.1616, 0.1894, 0.1110, 0.5557], device='cuda:2'), in_proj_covar=tensor([0.0349, 0.0245, 0.0276, 0.0293, 0.0335, 0.0282, 0.0298, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:21:58,579 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0101, 1.8950, 1.6836, 1.9225, 1.7853, 1.7632, 1.7905, 2.6196], device='cuda:2'), covar=tensor([0.3937, 0.4553, 0.3369, 0.4099, 0.4455, 0.2502, 0.4121, 0.1734], device='cuda:2'), in_proj_covar=tensor([0.0286, 0.0261, 0.0226, 0.0275, 0.0248, 0.0216, 0.0250, 0.0228], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:22:18,774 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0370, 1.6260, 2.3602, 1.4706, 2.1185, 2.1873, 1.6491, 2.3982], device='cuda:2'), covar=tensor([0.1347, 0.2190, 0.1543, 0.2243, 0.1002, 0.1611, 0.2647, 0.0953], device='cuda:2'), in_proj_covar=tensor([0.0194, 0.0206, 0.0193, 0.0191, 0.0178, 0.0215, 0.0219, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:22:22,284 INFO [finetune.py:976] (2/7) Epoch 16, batch 1900, loss[loss=0.19, simple_loss=0.2604, pruned_loss=0.05983, over 4878.00 frames. ], tot_loss[loss=0.1882, simple_loss=0.2572, pruned_loss=0.05956, over 957293.05 frames. ], batch size: 32, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:22:23,004 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87817.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:22:29,081 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.22 vs. limit=5.0
2023-03-26 19:22:41,851 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.250e+02 1.530e+02 1.792e+02 2.215e+02 4.706e+02, threshold=3.584e+02, percent-clipped=3.0
2023-03-26 19:22:49,534 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.74 vs. limit=2.0
2023-03-26 19:22:51,888 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3567, 2.2577, 1.9923, 2.3772, 2.1873, 2.2223, 2.2388, 3.1794], device='cuda:2'), covar=tensor([0.3982, 0.5167, 0.3360, 0.4449, 0.4542, 0.2477, 0.4564, 0.1568], device='cuda:2'), in_proj_covar=tensor([0.0286, 0.0261, 0.0226, 0.0275, 0.0248, 0.0216, 0.0250, 0.0228], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:22:53,012 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87861.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 19:22:55,973 INFO [finetune.py:976] (2/7) Epoch 16, batch 1950, loss[loss=0.1651, simple_loss=0.2347, pruned_loss=0.04768, over 4254.00 frames. ], tot_loss[loss=0.1856, simple_loss=0.2548, pruned_loss=0.05824, over 955548.53 frames. ], batch size: 66, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:23:09,171 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=87886.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:23:29,599 INFO [finetune.py:976] (2/7) Epoch 16, batch 2000, loss[loss=0.1856, simple_loss=0.2526, pruned_loss=0.05929, over 4810.00 frames. ], tot_loss[loss=0.1835, simple_loss=0.2522, pruned_loss=0.05735, over 955662.98 frames. ], batch size: 41, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:23:31,531 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6400, 1.5168, 2.2547, 3.3463, 2.2342, 2.4493, 1.1326, 2.7394], device='cuda:2'), covar=tensor([0.1675, 0.1397, 0.1116, 0.0462, 0.0792, 0.1441, 0.1770, 0.0504], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0116, 0.0133, 0.0163, 0.0100, 0.0138, 0.0124, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 19:23:41,096 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=87934.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:23:45,289 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87940.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 19:23:47,656 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6968, 1.9443, 1.5664, 1.5999, 2.2512, 2.0947, 1.7828, 1.8458], device='cuda:2'), covar=tensor([0.0383, 0.0310, 0.0484, 0.0346, 0.0228, 0.0618, 0.0338, 0.0375], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0107, 0.0142, 0.0111, 0.0099, 0.0106, 0.0097, 0.0107], device='cuda:2'), out_proj_covar=tensor([7.2176e-05, 8.2928e-05, 1.1233e-04, 8.6191e-05, 7.7107e-05, 7.7919e-05, 7.2562e-05, 8.1914e-05], device='cuda:2')
2023-03-26 19:23:48,825 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87945.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:23:49,306 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.076e+02 1.562e+02 1.812e+02 2.199e+02 5.123e+02, threshold=3.624e+02, percent-clipped=1.0
2023-03-26 19:23:50,659 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3867, 2.2375, 1.9922, 2.5382, 2.1478, 2.1605, 2.1469, 3.1176], device='cuda:2'), covar=tensor([0.3952, 0.5393, 0.3499, 0.4479, 0.4850, 0.2659, 0.5078, 0.1715], device='cuda:2'), in_proj_covar=tensor([0.0285, 0.0261, 0.0226, 0.0274, 0.0248, 0.0215, 0.0250, 0.0228], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:23:52,384 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7844, 1.6888, 1.6309, 1.6906, 1.4000, 3.8723, 1.6250, 2.1563], device='cuda:2'), covar=tensor([0.3106, 0.2424, 0.2052, 0.2306, 0.1704, 0.0189, 0.2458, 0.1162], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0115, 0.0121, 0.0124, 0.0114, 0.0097, 0.0097, 0.0097], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 19:23:59,900 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87961.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 19:24:00,731 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.47 vs. limit=5.0
2023-03-26 19:24:02,864 INFO [finetune.py:976] (2/7) Epoch 16, batch 2050, loss[loss=0.1485, simple_loss=0.219, pruned_loss=0.039, over 4825.00 frames. ], tot_loss[loss=0.1795, simple_loss=0.248, pruned_loss=0.05543, over 958021.44 frames. ], batch size: 41, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:24:37,647 INFO [finetune.py:976] (2/7) Epoch 16, batch 2100, loss[loss=0.1678, simple_loss=0.2407, pruned_loss=0.0474, over 4895.00 frames. ], tot_loss[loss=0.1815, simple_loss=0.2491, pruned_loss=0.05688, over 958280.14 frames. ], batch size: 32, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:24:41,374 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88018.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:24:49,501 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.86 vs. limit=2.0
2023-03-26 19:24:59,878 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.339e+01 1.671e+02 1.963e+02 2.403e+02 4.597e+02, threshold=3.926e+02, percent-clipped=2.0
2023-03-26 19:25:13,435 INFO [finetune.py:976] (2/7) Epoch 16, batch 2150, loss[loss=0.1858, simple_loss=0.2691, pruned_loss=0.05128, over 4913.00 frames. ], tot_loss[loss=0.1849, simple_loss=0.2534, pruned_loss=0.05823, over 957845.09 frames. ], batch size: 36, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:25:13,503 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=88066.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:25:35,017 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=88100.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:25:46,508 INFO [finetune.py:976] (2/7) Epoch 16, batch 2200, loss[loss=0.1897, simple_loss=0.2557, pruned_loss=0.06192, over 4797.00 frames. ], tot_loss[loss=0.187, simple_loss=0.256, pruned_loss=0.05905, over 957482.60 frames. ], batch size: 29, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:25:47,208 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88117.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:25:52,080 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.3720, 3.8383, 3.9858, 4.1952, 4.1387, 3.8678, 4.4876, 1.3813], device='cuda:2'), covar=tensor([0.0827, 0.0891, 0.0851, 0.1058, 0.1197, 0.1606, 0.0660, 0.5807], device='cuda:2'), in_proj_covar=tensor([0.0349, 0.0245, 0.0276, 0.0291, 0.0335, 0.0282, 0.0298, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:26:05,785 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.079e+02 1.658e+02 1.940e+02 2.471e+02 6.986e+02, threshold=3.880e+02, percent-clipped=3.0
2023-03-26 19:26:15,395 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88161.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 19:26:15,438 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=88161.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:26:18,733 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=88165.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:26:19,284 INFO [finetune.py:976] (2/7) Epoch 16, batch 2250, loss[loss=0.2047, simple_loss=0.2735, pruned_loss=0.06793, over 4827.00 frames. ], tot_loss[loss=0.1901, simple_loss=0.2589, pruned_loss=0.06061, over 957678.45 frames. ], batch size: 49, lr: 3.45e-03, grad_scale: 32.0
2023-03-26 19:26:31,472 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1925, 1.7097, 2.2187, 2.1382, 1.8353, 1.8672, 2.0950, 2.0522], device='cuda:2'), covar=tensor([0.3772, 0.4144, 0.3059, 0.3918, 0.4980, 0.3796, 0.4671, 0.3017], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0236, 0.0256, 0.0268, 0.0268, 0.0241, 0.0279, 0.0236], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:26:56,477 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=88209.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:27:05,825 INFO [finetune.py:976] (2/7) Epoch 16, batch 2300, loss[loss=0.2229, simple_loss=0.2802, pruned_loss=0.08277, over 4252.00 frames. ], tot_loss[loss=0.1884, simple_loss=0.2575, pruned_loss=0.05966, over 958731.69 frames. ], batch size: 66, lr: 3.44e-03, grad_scale: 32.0
2023-03-26 19:27:29,752 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88240.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:27:33,299 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88245.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:27:34,393 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.097e+02 1.462e+02 1.840e+02 2.103e+02 3.666e+02, threshold=3.679e+02, percent-clipped=0.0
2023-03-26 19:27:43,886 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88261.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 19:27:47,285 INFO [finetune.py:976] (2/7) Epoch 16, batch 2350, loss[loss=0.1884, simple_loss=0.2567, pruned_loss=0.06002, over 4865.00 frames. ], tot_loss[loss=0.1879, simple_loss=0.2563, pruned_loss=0.05976, over 958073.76 frames. ], batch size: 34, lr: 3.44e-03, grad_scale: 32.0
2023-03-26 19:27:50,966 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1308, 2.0113, 1.7820, 1.9916, 1.9203, 1.9206, 1.9788, 2.6802], device='cuda:2'), covar=tensor([0.4168, 0.4611, 0.3672, 0.4067, 0.4290, 0.2503, 0.4071, 0.1733], device='cuda:2'), in_proj_covar=tensor([0.0285, 0.0260, 0.0225, 0.0274, 0.0247, 0.0214, 0.0249, 0.0226], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:28:01,663 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=88288.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:28:03,570 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=88291.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 19:28:04,697 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=88293.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:28:07,109 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0929, 1.9151, 1.5516, 1.7688, 1.8779, 1.8338, 1.8839, 2.6135], device='cuda:2'), covar=tensor([0.3874, 0.4088, 0.3504, 0.3802, 0.3818, 0.2544, 0.3748, 0.1748], device='cuda:2'), in_proj_covar=tensor([0.0285, 0.0260, 0.0225, 0.0274, 0.0247, 0.0214, 0.0249, 0.0226], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:28:15,422 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=88309.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:28:20,094 INFO [finetune.py:976] (2/7) Epoch 16, batch 2400, loss[loss=0.206, simple_loss=0.272, pruned_loss=0.07001, over 4911.00 frames. ], tot_loss[loss=0.1845, simple_loss=0.2525, pruned_loss=0.05827, over 958601.27 frames. ], batch size: 37, lr: 3.44e-03, grad_scale: 32.0
2023-03-26 19:28:40,399 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.006e+02 1.525e+02 1.766e+02 2.106e+02 3.774e+02, threshold=3.532e+02, percent-clipped=1.0
2023-03-26 19:28:44,072 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=88352.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 19:28:52,862 INFO [finetune.py:976] (2/7) Epoch 16, batch 2450, loss[loss=0.1753, simple_loss=0.238, pruned_loss=0.05625, over 4875.00 frames. ], tot_loss[loss=0.1823, simple_loss=0.25, pruned_loss=0.05733, over 959193.39 frames. ], batch size: 31, lr: 3.44e-03, grad_scale: 32.0
2023-03-26 19:29:26,833 INFO [finetune.py:976] (2/7) Epoch 16, batch 2500, loss[loss=0.1946, simple_loss=0.2613, pruned_loss=0.06397, over 4891.00 frames. ], tot_loss[loss=0.1838, simple_loss=0.2513, pruned_loss=0.05813, over 958905.79 frames. ], batch size: 32, lr: 3.44e-03, grad_scale: 32.0
2023-03-26 19:29:48,236 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.125e+02 1.743e+02 2.000e+02 2.601e+02 5.270e+02, threshold=4.000e+02, percent-clipped=5.0
2023-03-26 19:29:54,229 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=88456.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:30:00,708 INFO [finetune.py:976] (2/7) Epoch 16, batch 2550, loss[loss=0.1916, simple_loss=0.2667, pruned_loss=0.05822, over 4753.00 frames. ], tot_loss[loss=0.1868, simple_loss=0.2551, pruned_loss=0.05931, over 957910.84 frames. ], batch size: 54, lr: 3.44e-03, grad_scale: 32.0
2023-03-26 19:30:33,907 INFO [finetune.py:976] (2/7) Epoch 16, batch 2600, loss[loss=0.1568, simple_loss=0.2254, pruned_loss=0.04409, over 4779.00 frames. ], tot_loss[loss=0.1881, simple_loss=0.2566, pruned_loss=0.05974, over 955615.07 frames. ], batch size: 25, lr: 3.44e-03, grad_scale: 32.0
2023-03-26 19:30:34,674 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.81 vs. limit=2.0
2023-03-26 19:30:55,464 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.436e+01 1.666e+02 1.940e+02 2.279e+02 3.712e+02, threshold=3.880e+02, percent-clipped=0.0
2023-03-26 19:30:59,281 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1570, 2.0885, 1.7220, 2.0609, 2.0704, 1.7934, 2.4114, 2.1836], device='cuda:2'), covar=tensor([0.1338, 0.2094, 0.2854, 0.2601, 0.2586, 0.1635, 0.3230, 0.1700], device='cuda:2'), in_proj_covar=tensor([0.0182, 0.0187, 0.0234, 0.0252, 0.0244, 0.0201, 0.0212, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:31:07,484 INFO [finetune.py:976] (2/7) Epoch 16, batch 2650, loss[loss=0.1821, simple_loss=0.2485, pruned_loss=0.0578, over 4904.00 frames. ], tot_loss[loss=0.1879, simple_loss=0.2567, pruned_loss=0.05949, over 956181.26 frames. ], batch size: 38, lr: 3.44e-03, grad_scale: 32.0
2023-03-26 19:31:12,344 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.4702, 3.9071, 4.0934, 4.3312, 4.2255, 3.9846, 4.5674, 1.3915], device='cuda:2'), covar=tensor([0.0742, 0.0855, 0.0859, 0.1016, 0.1257, 0.1556, 0.0689, 0.5737], device='cuda:2'), in_proj_covar=tensor([0.0349, 0.0245, 0.0277, 0.0292, 0.0335, 0.0283, 0.0298, 0.0295], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:31:41,339 INFO [finetune.py:976] (2/7) Epoch 16, batch 2700, loss[loss=0.1449, simple_loss=0.2179, pruned_loss=0.03595, over 4840.00 frames. ], tot_loss[loss=0.1857, simple_loss=0.255, pruned_loss=0.05822, over 956451.17 frames. ], batch size: 44, lr: 3.44e-03, grad_scale: 32.0
2023-03-26 19:32:05,538 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.033e+02 1.504e+02 1.815e+02 2.296e+02 4.078e+02, threshold=3.631e+02, percent-clipped=1.0
2023-03-26 19:32:05,623 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=88647.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 19:32:26,939 INFO [finetune.py:976] (2/7) Epoch 16, batch 2750, loss[loss=0.2015, simple_loss=0.2774, pruned_loss=0.06276, over 4934.00 frames. ], tot_loss[loss=0.1833, simple_loss=0.2521, pruned_loss=0.05729, over 956566.19 frames. ], batch size: 33, lr: 3.44e-03, grad_scale: 32.0
2023-03-26 19:32:42,721 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.62 vs. limit=2.0
2023-03-26 19:32:46,902 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0
2023-03-26 19:33:17,053 INFO [finetune.py:976] (2/7) Epoch 16, batch 2800, loss[loss=0.1747, simple_loss=0.2444, pruned_loss=0.05254, over 4817.00 frames. ], tot_loss[loss=0.1805, simple_loss=0.2491, pruned_loss=0.05597, over 956104.10 frames. ], batch size: 38, lr: 3.44e-03, grad_scale: 32.0
2023-03-26 19:33:37,857 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.207e+02 1.516e+02 1.815e+02 2.286e+02 3.246e+02, threshold=3.631e+02, percent-clipped=0.0
2023-03-26 19:33:43,836 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-26 19:33:44,964 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88756.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:33:50,910 INFO [finetune.py:976] (2/7) Epoch 16, batch 2850, loss[loss=0.1728, simple_loss=0.2408, pruned_loss=0.05241, over 4772.00 frames. ], tot_loss[loss=0.1791, simple_loss=0.2475, pruned_loss=0.05537, over 955473.87 frames. ], batch size: 26, lr: 3.44e-03, grad_scale: 32.0
2023-03-26 19:33:58,299 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.29 vs. limit=5.0
2023-03-26 19:34:17,618 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=88804.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:34:24,821 INFO [finetune.py:976] (2/7) Epoch 16, batch 2900, loss[loss=0.1256, simple_loss=0.2031, pruned_loss=0.02411, over 4778.00 frames. ], tot_loss[loss=0.1812, simple_loss=0.2502, pruned_loss=0.05614, over 954631.91 frames. ], batch size: 26, lr: 3.44e-03, grad_scale: 32.0
2023-03-26 19:34:28,982 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.5337, 3.9380, 4.1259, 4.3369, 4.2434, 4.0269, 4.6293, 1.5005], device='cuda:2'), covar=tensor([0.0773, 0.0925, 0.0982, 0.1025, 0.1396, 0.1635, 0.0661, 0.5796], device='cuda:2'), in_proj_covar=tensor([0.0348, 0.0244, 0.0275, 0.0291, 0.0333, 0.0281, 0.0297, 0.0293], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:34:45,220 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.025e+02 1.632e+02 1.968e+02 2.500e+02 4.348e+02, threshold=3.936e+02, percent-clipped=6.0
2023-03-26 19:34:58,809 INFO [finetune.py:976] (2/7) Epoch 16, batch 2950, loss[loss=0.2149, simple_loss=0.2837, pruned_loss=0.07308, over 4787.00 frames. ], tot_loss[loss=0.1829, simple_loss=0.2529, pruned_loss=0.05645, over 955756.60 frames. ], batch size: 51, lr: 3.44e-03, grad_scale: 32.0
2023-03-26 19:35:04,891 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4525, 1.4074, 1.6648, 1.6604, 1.5357, 3.2534, 1.3470, 1.4945], device='cuda:2'), covar=tensor([0.1017, 0.1807, 0.1130, 0.1025, 0.1617, 0.0234, 0.1495, 0.1834], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0073, 0.0077, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 19:35:32,640 INFO [finetune.py:976] (2/7) Epoch 16, batch 3000, loss[loss=0.1801, simple_loss=0.2584, pruned_loss=0.05093, over 4870.00 frames. ], tot_loss[loss=0.1853, simple_loss=0.2551, pruned_loss=0.05772, over 954455.50 frames. ], batch size: 34, lr: 3.44e-03, grad_scale: 32.0
2023-03-26 19:35:32,640 INFO [finetune.py:1001] (2/7) Computing validation loss
2023-03-26 19:35:38,729 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3791, 1.3466, 1.2604, 1.4555, 1.6976, 1.5035, 1.3273, 1.2339], device='cuda:2'), covar=tensor([0.0372, 0.0331, 0.0714, 0.0303, 0.0224, 0.0536, 0.0414, 0.0447], device='cuda:2'), in_proj_covar=tensor([0.0094, 0.0108, 0.0143, 0.0112, 0.0099, 0.0107, 0.0097, 0.0108], device='cuda:2'), out_proj_covar=tensor([7.2988e-05, 8.3872e-05, 1.1301e-04, 8.6599e-05, 7.7392e-05, 7.8828e-05, 7.3023e-05, 8.2483e-05], device='cuda:2')
2023-03-26 19:35:41,740 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0488, 1.9944, 1.8287, 1.9941, 2.0726, 1.8851, 2.3207, 2.0807], device='cuda:2'), covar=tensor([0.1234, 0.2485, 0.2798, 0.2305, 0.2357, 0.1548, 0.2896, 0.1787], device='cuda:2'), in_proj_covar=tensor([0.0181, 0.0186, 0.0233, 0.0251, 0.0242, 0.0200, 0.0210, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:35:49,227 INFO [finetune.py:1010] (2/7) Epoch 16, validation: loss=0.1563, simple_loss=0.2263, pruned_loss=0.04316, over 2265189.00 frames.
2023-03-26 19:35:49,228 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB
2023-03-26 19:35:58,834 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8166, 1.5740, 1.4544, 1.2557, 1.5766, 1.5618, 1.5749, 2.1374], device='cuda:2'), covar=tensor([0.3463, 0.3653, 0.2959, 0.3438, 0.3475, 0.2032, 0.3351, 0.1550], device='cuda:2'), in_proj_covar=tensor([0.0288, 0.0262, 0.0227, 0.0276, 0.0249, 0.0217, 0.0251, 0.0229], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:36:10,236 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0821, 1.8919, 1.7595, 1.9076, 1.7836, 1.7656, 1.8276, 2.4490], device='cuda:2'), covar=tensor([0.3293, 0.3759, 0.2963, 0.3265, 0.3855, 0.2161, 0.3370, 0.1469], device='cuda:2'), in_proj_covar=tensor([0.0288, 0.0262, 0.0227, 0.0276, 0.0249, 0.0217, 0.0251, 0.0229], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:36:10,682 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.136e+02 1.661e+02 1.990e+02 2.439e+02 3.546e+02, threshold=3.980e+02, percent-clipped=0.0
2023-03-26 19:36:11,254 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88947.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 19:36:23,200 INFO [finetune.py:976] (2/7) Epoch 16, batch 3050, loss[loss=0.2364, simple_loss=0.3058, pruned_loss=0.08347, over 4838.00 frames. ], tot_loss[loss=0.1874, simple_loss=0.2572, pruned_loss=0.05885, over 953045.32 frames. ], batch size: 49, lr: 3.44e-03, grad_scale: 32.0
2023-03-26 19:36:43,520 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=88995.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 19:36:44,768 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3872, 1.4768, 1.2595, 1.5180, 1.8206, 1.5904, 1.4413, 1.3043], device='cuda:2'), covar=tensor([0.0371, 0.0341, 0.0566, 0.0286, 0.0175, 0.0612, 0.0360, 0.0399], device='cuda:2'), in_proj_covar=tensor([0.0094, 0.0108, 0.0143, 0.0112, 0.0099, 0.0107, 0.0097, 0.0108], device='cuda:2'), out_proj_covar=tensor([7.3022e-05, 8.3670e-05, 1.1323e-04, 8.6609e-05, 7.7337e-05, 7.8762e-05, 7.2858e-05, 8.2250e-05], device='cuda:2')
2023-03-26 19:36:57,478 INFO [finetune.py:976] (2/7) Epoch 16, batch 3100, loss[loss=0.1832, simple_loss=0.251, pruned_loss=0.05773, over 4897.00 frames. ], tot_loss[loss=0.1861, simple_loss=0.2556, pruned_loss=0.05824, over 953194.47 frames. ], batch size: 43, lr: 3.44e-03, grad_scale: 32.0
2023-03-26 19:37:06,396 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89027.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:37:07,168 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-26 19:37:20,956 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.966e+01 1.506e+02 1.838e+02 2.198e+02 3.411e+02, threshold=3.676e+02, percent-clipped=0.0
2023-03-26 19:37:33,683 INFO [finetune.py:976] (2/7) Epoch 16, batch 3150, loss[loss=0.1964, simple_loss=0.2568, pruned_loss=0.06797, over 4909.00 frames. ], tot_loss[loss=0.1842, simple_loss=0.2534, pruned_loss=0.05746, over 954533.18 frames. ], batch size: 32, lr: 3.44e-03, grad_scale: 32.0
2023-03-26 19:37:56,583 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89088.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:38:25,683 INFO [finetune.py:976] (2/7) Epoch 16, batch 3200, loss[loss=0.1454, simple_loss=0.2153, pruned_loss=0.03777, over 4809.00 frames. ], tot_loss[loss=0.1807, simple_loss=0.2493, pruned_loss=0.05609, over 955512.63 frames. ], batch size: 25, lr: 3.44e-03, grad_scale: 32.0
2023-03-26 19:38:34,385 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.44 vs. limit=2.0
2023-03-26 19:38:50,101 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.129e+02 1.609e+02 1.908e+02 2.339e+02 4.086e+02, threshold=3.816e+02, percent-clipped=1.0
2023-03-26 19:38:50,257 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5674, 2.2124, 1.7341, 0.8249, 1.9909, 1.9907, 1.8322, 2.1128], device='cuda:2'), covar=tensor([0.0673, 0.0789, 0.1408, 0.1918, 0.1215, 0.2037, 0.1910, 0.0765], device='cuda:2'), in_proj_covar=tensor([0.0166, 0.0194, 0.0198, 0.0181, 0.0210, 0.0204, 0.0221, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:38:50,328 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0
2023-03-26 19:38:53,279 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2431, 2.1106, 2.2411, 1.1612, 2.5542, 2.7353, 2.3985, 1.9652], device='cuda:2'), covar=tensor([0.0892, 0.0826, 0.0519, 0.0728, 0.0525, 0.0536, 0.0509, 0.0681], device='cuda:2'), in_proj_covar=tensor([0.0124, 0.0150, 0.0123, 0.0126, 0.0130, 0.0127, 0.0142, 0.0146], device='cuda:2'), out_proj_covar=tensor([9.1430e-05, 1.0925e-04, 8.7801e-05, 8.9911e-05, 9.2157e-05, 9.1760e-05, 1.0192e-04, 1.0514e-04], device='cuda:2')
2023-03-26 19:38:59,803 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5251, 1.3486, 1.2667, 1.5135, 1.7283, 1.5314, 1.0778, 1.2695], device='cuda:2'), covar=tensor([0.2028, 0.2010, 0.1941, 0.1637, 0.1534, 0.1217, 0.2419, 0.1880], device='cuda:2'), in_proj_covar=tensor([0.0242, 0.0209, 0.0212, 0.0193, 0.0245, 0.0186, 0.0215, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 19:39:02,083 INFO [finetune.py:976] (2/7) Epoch 16, batch 3250, loss[loss=0.2178, simple_loss=0.2842, pruned_loss=0.07575, over 4863.00 frames. ], tot_loss[loss=0.1819, simple_loss=0.2497, pruned_loss=0.05704, over 953314.89 frames. ], batch size: 34, lr: 3.44e-03, grad_scale: 32.0
2023-03-26 19:39:04,010 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89169.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:39:21,688 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7422, 1.2093, 0.8815, 1.5781, 1.9895, 1.5057, 1.4894, 1.6266], device='cuda:2'), covar=tensor([0.1502, 0.2266, 0.2048, 0.1276, 0.2076, 0.2082, 0.1474, 0.1975], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0095, 0.0111, 0.0093, 0.0119, 0.0094, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 19:39:28,306 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4107, 1.3605, 1.5979, 1.6157, 1.5510, 3.0534, 1.2995, 1.4648], device='cuda:2'), covar=tensor([0.1064, 0.1918, 0.1492, 0.1101, 0.1621, 0.0302, 0.1612, 0.1891], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0081, 0.0074, 0.0078, 0.0092, 0.0081, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 19:39:35,940 INFO [finetune.py:976] (2/7) Epoch 16, batch 3300, loss[loss=0.1537, simple_loss=0.2287, pruned_loss=0.03938, over 4752.00 frames. ], tot_loss[loss=0.1835, simple_loss=0.2522, pruned_loss=0.05737, over 954193.48 frames. ], batch size: 26, lr: 3.44e-03, grad_scale: 32.0
2023-03-26 19:39:45,107 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89230.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:39:56,763 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.235e+02 1.765e+02 2.004e+02 2.308e+02 3.942e+02, threshold=4.007e+02, percent-clipped=1.0
2023-03-26 19:40:09,186 INFO [finetune.py:976] (2/7) Epoch 16, batch 3350, loss[loss=0.2241, simple_loss=0.2852, pruned_loss=0.08154, over 4812.00 frames. ], tot_loss[loss=0.1867, simple_loss=0.2556, pruned_loss=0.05892, over 954075.95 frames.
], batch size: 38, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:40:26,865 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4963, 3.4922, 3.2481, 1.5758, 3.5657, 2.6359, 0.8182, 2.3607], device='cuda:2'), covar=tensor([0.2135, 0.1633, 0.1502, 0.3182, 0.0985, 0.0982, 0.4074, 0.1352], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0176, 0.0160, 0.0129, 0.0158, 0.0123, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 19:40:42,689 INFO [finetune.py:976] (2/7) Epoch 16, batch 3400, loss[loss=0.1676, simple_loss=0.2476, pruned_loss=0.04382, over 4922.00 frames. ], tot_loss[loss=0.188, simple_loss=0.2569, pruned_loss=0.05957, over 953959.85 frames. ], batch size: 38, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:40:56,405 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5265, 1.4181, 1.3414, 1.4086, 1.0946, 3.4297, 1.3585, 1.6933], device='cuda:2'), covar=tensor([0.4172, 0.3297, 0.2577, 0.3119, 0.2032, 0.0326, 0.2501, 0.1315], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0115, 0.0119, 0.0123, 0.0114, 0.0097, 0.0096, 0.0096], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 19:41:12,554 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.032e+02 1.581e+02 1.832e+02 2.219e+02 5.301e+02, threshold=3.664e+02, percent-clipped=1.0 2023-03-26 19:41:24,424 INFO [finetune.py:976] (2/7) Epoch 16, batch 3450, loss[loss=0.2026, simple_loss=0.2612, pruned_loss=0.07205, over 4425.00 frames. ], tot_loss[loss=0.1871, simple_loss=0.2561, pruned_loss=0.05902, over 953612.07 frames. ], batch size: 19, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:41:29,836 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89374.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:41:35,742 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=89383.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:41:48,194 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7167, 3.7261, 3.5524, 1.7360, 3.8811, 2.7653, 0.7088, 2.6566], device='cuda:2'), covar=tensor([0.2143, 0.1844, 0.1476, 0.3223, 0.0969, 0.0998, 0.4311, 0.1413], device='cuda:2'), in_proj_covar=tensor([0.0150, 0.0175, 0.0159, 0.0128, 0.0158, 0.0123, 0.0147, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 19:41:58,330 INFO [finetune.py:976] (2/7) Epoch 16, batch 3500, loss[loss=0.1372, simple_loss=0.2068, pruned_loss=0.0338, over 4928.00 frames. ], tot_loss[loss=0.1851, simple_loss=0.2536, pruned_loss=0.05834, over 955340.81 frames. 
], batch size: 33, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:42:04,411 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89425.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:42:10,961 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89435.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:42:13,373 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89439.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 19:42:18,649 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.936e+01 1.517e+02 1.946e+02 2.225e+02 4.216e+02, threshold=3.891e+02, percent-clipped=3.0 2023-03-26 19:42:31,135 INFO [finetune.py:976] (2/7) Epoch 16, batch 3550, loss[loss=0.1516, simple_loss=0.2157, pruned_loss=0.04372, over 4822.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.2507, pruned_loss=0.05713, over 956724.55 frames. ], batch size: 25, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:42:40,902 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3046, 1.8228, 1.8138, 1.0187, 1.9446, 2.0642, 1.9862, 1.7556], device='cuda:2'), covar=tensor([0.0734, 0.0577, 0.0503, 0.0598, 0.0513, 0.0564, 0.0389, 0.0550], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0153, 0.0125, 0.0128, 0.0132, 0.0130, 0.0143, 0.0148], device='cuda:2'), out_proj_covar=tensor([9.3115e-05, 1.1122e-04, 8.9509e-05, 9.1377e-05, 9.3680e-05, 9.3461e-05, 1.0323e-04, 1.0698e-04], device='cuda:2') 2023-03-26 19:42:43,948 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89486.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:42:53,370 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89500.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 19:42:59,271 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89509.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:43:06,068 INFO [finetune.py:976] (2/7) Epoch 16, batch 3600, loss[loss=0.1728, simple_loss=0.2414, pruned_loss=0.05211, over 4826.00 frames. ], tot_loss[loss=0.1809, simple_loss=0.2485, pruned_loss=0.05661, over 956619.76 frames. ], batch size: 33, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:43:14,213 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=89525.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:43:37,732 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.924e+01 1.526e+02 1.890e+02 2.215e+02 3.895e+02, threshold=3.780e+02, percent-clipped=1.0 2023-03-26 19:43:45,473 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.00 vs. limit=5.0 2023-03-26 19:44:03,076 INFO [finetune.py:976] (2/7) Epoch 16, batch 3650, loss[loss=0.1617, simple_loss=0.2473, pruned_loss=0.03811, over 4758.00 frames. ], tot_loss[loss=0.1834, simple_loss=0.2512, pruned_loss=0.05778, over 955372.62 frames. 
], batch size: 28, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:44:06,130 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89570.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:44:07,371 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.1263, 2.6597, 2.5016, 1.2514, 2.6486, 2.2263, 2.2325, 2.5000], device='cuda:2'), covar=tensor([0.0982, 0.1004, 0.1924, 0.2377, 0.1925, 0.2336, 0.2157, 0.1250], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0196, 0.0200, 0.0183, 0.0213, 0.0207, 0.0223, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 19:44:21,046 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0885, 1.9316, 1.7402, 1.9341, 1.7899, 1.7870, 1.8173, 2.4381], device='cuda:2'), covar=tensor([0.3436, 0.4164, 0.3076, 0.3489, 0.4069, 0.2344, 0.3960, 0.1648], device='cuda:2'), in_proj_covar=tensor([0.0288, 0.0263, 0.0228, 0.0277, 0.0251, 0.0218, 0.0252, 0.0230], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 19:44:36,728 INFO [finetune.py:976] (2/7) Epoch 16, batch 3700, loss[loss=0.16, simple_loss=0.2319, pruned_loss=0.04403, over 4833.00 frames. ], tot_loss[loss=0.1847, simple_loss=0.2536, pruned_loss=0.05791, over 954985.15 frames. ], batch size: 30, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:44:43,480 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-26 19:44:57,073 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.548e+01 1.593e+02 1.994e+02 2.376e+02 3.738e+02, threshold=3.989e+02, percent-clipped=0.0 2023-03-26 19:45:10,209 INFO [finetune.py:976] (2/7) Epoch 16, batch 3750, loss[loss=0.176, simple_loss=0.2636, pruned_loss=0.0442, over 4789.00 frames. ], tot_loss[loss=0.1845, simple_loss=0.2533, pruned_loss=0.05783, over 953207.14 frames. ], batch size: 45, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:45:19,302 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89680.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:45:21,101 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=89683.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:45:43,305 INFO [finetune.py:976] (2/7) Epoch 16, batch 3800, loss[loss=0.1765, simple_loss=0.2535, pruned_loss=0.04979, over 4838.00 frames. ], tot_loss[loss=0.1863, simple_loss=0.2557, pruned_loss=0.05844, over 954305.89 frames. 
], batch size: 30, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:45:49,274 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7228, 3.9218, 3.6927, 1.9236, 4.0764, 2.9190, 1.0014, 2.7191], device='cuda:2'), covar=tensor([0.2302, 0.1762, 0.1518, 0.3081, 0.0868, 0.0969, 0.3994, 0.1332], device='cuda:2'), in_proj_covar=tensor([0.0150, 0.0175, 0.0158, 0.0128, 0.0157, 0.0122, 0.0146, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 19:45:52,764 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=89730.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:45:53,369 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=89731.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:46:00,033 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89741.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:46:00,618 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5518, 1.4701, 1.4058, 1.5155, 1.2811, 3.4184, 1.4036, 1.7193], device='cuda:2'), covar=tensor([0.3497, 0.2654, 0.2381, 0.2550, 0.1798, 0.0200, 0.2679, 0.1384], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0123, 0.0114, 0.0097, 0.0096, 0.0096], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 19:46:03,522 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.091e+02 1.549e+02 1.882e+02 2.353e+02 4.344e+02, threshold=3.764e+02, percent-clipped=2.0 2023-03-26 19:46:19,024 INFO [finetune.py:976] (2/7) Epoch 16, batch 3850, loss[loss=0.1537, simple_loss=0.2241, pruned_loss=0.04169, over 4767.00 frames. ], tot_loss[loss=0.1847, simple_loss=0.2542, pruned_loss=0.05762, over 954217.04 frames. 
], batch size: 28, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:46:29,108 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=89781.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:46:38,078 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=89795.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 19:46:38,099 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5615, 1.6491, 2.0611, 2.7763, 1.9669, 2.2305, 1.3953, 2.2810], device='cuda:2'), covar=tensor([0.1448, 0.1146, 0.0973, 0.0586, 0.0741, 0.1722, 0.1285, 0.0540], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0117, 0.0135, 0.0167, 0.0102, 0.0140, 0.0126, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 19:46:38,741 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8051, 1.7834, 1.5777, 1.9536, 2.3912, 1.9800, 1.7142, 1.4901], device='cuda:2'), covar=tensor([0.2039, 0.1876, 0.1815, 0.1461, 0.1653, 0.1163, 0.2270, 0.1831], device='cuda:2'), in_proj_covar=tensor([0.0240, 0.0208, 0.0210, 0.0191, 0.0242, 0.0184, 0.0215, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 19:46:39,349 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4261, 2.3242, 1.9014, 0.9086, 2.1275, 1.9155, 1.7639, 2.1940], device='cuda:2'), covar=tensor([0.1111, 0.0706, 0.1593, 0.1962, 0.1339, 0.2210, 0.2121, 0.0910], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0195, 0.0200, 0.0182, 0.0213, 0.0207, 0.0223, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 19:46:44,626 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.2564, 2.8593, 2.9877, 3.1669, 3.0160, 2.8949, 3.2830, 1.0155], device='cuda:2'), covar=tensor([0.0902, 0.0992, 0.0990, 0.0971, 0.1408, 0.1555, 0.1068, 0.4914], device='cuda:2'), in_proj_covar=tensor([0.0349, 0.0245, 0.0276, 0.0293, 0.0332, 0.0281, 0.0298, 0.0294], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 19:46:52,160 INFO [finetune.py:976] (2/7) Epoch 16, batch 3900, loss[loss=0.1921, simple_loss=0.2627, pruned_loss=0.06074, over 4903.00 frames. ], tot_loss[loss=0.1835, simple_loss=0.2523, pruned_loss=0.05734, over 954406.70 frames. 
], batch size: 35, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:46:58,247 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=89825.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:47:12,472 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.084e+02 1.500e+02 1.857e+02 2.274e+02 5.172e+02, threshold=3.715e+02, percent-clipped=1.0 2023-03-26 19:47:15,640 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3858, 1.2671, 1.2707, 1.3141, 1.6649, 1.5072, 1.3502, 1.2182], device='cuda:2'), covar=tensor([0.0337, 0.0326, 0.0559, 0.0320, 0.0208, 0.0487, 0.0353, 0.0390], device='cuda:2'), in_proj_covar=tensor([0.0096, 0.0109, 0.0145, 0.0114, 0.0101, 0.0109, 0.0100, 0.0109], device='cuda:2'), out_proj_covar=tensor([7.4242e-05, 8.4452e-05, 1.1452e-04, 8.7791e-05, 7.8347e-05, 8.0138e-05, 7.4841e-05, 8.3367e-05], device='cuda:2') 2023-03-26 19:47:23,890 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=89865.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:47:23,926 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89865.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:47:24,440 INFO [finetune.py:976] (2/7) Epoch 16, batch 3950, loss[loss=0.1499, simple_loss=0.2258, pruned_loss=0.03698, over 4696.00 frames. ], tot_loss[loss=0.1804, simple_loss=0.2491, pruned_loss=0.05581, over 951474.09 frames. ], batch size: 23, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:47:29,791 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=89873.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:47:30,483 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0490, 2.0033, 1.6651, 1.9072, 2.0783, 1.7970, 2.3734, 2.0480], device='cuda:2'), covar=tensor([0.1450, 0.2267, 0.3118, 0.2717, 0.2766, 0.1719, 0.3468, 0.1903], device='cuda:2'), in_proj_covar=tensor([0.0181, 0.0187, 0.0233, 0.0252, 0.0243, 0.0201, 0.0211, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 19:47:57,663 INFO [finetune.py:976] (2/7) Epoch 16, batch 4000, loss[loss=0.2626, simple_loss=0.3128, pruned_loss=0.1062, over 4816.00 frames. ], tot_loss[loss=0.1807, simple_loss=0.2489, pruned_loss=0.05627, over 952691.88 frames. ], batch size: 51, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:48:04,744 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89926.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 19:48:18,302 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.466e+01 1.676e+02 1.974e+02 2.469e+02 4.779e+02, threshold=3.947e+02, percent-clipped=6.0 2023-03-26 19:48:32,940 INFO [finetune.py:976] (2/7) Epoch 16, batch 4050, loss[loss=0.176, simple_loss=0.2508, pruned_loss=0.0506, over 4865.00 frames. ], tot_loss[loss=0.1855, simple_loss=0.2537, pruned_loss=0.05865, over 953882.26 frames. ], batch size: 31, lr: 3.43e-03, grad_scale: 64.0 2023-03-26 19:48:46,061 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.57 vs. limit=5.0 2023-03-26 19:49:28,775 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.28 vs. limit=5.0 2023-03-26 19:49:29,801 INFO [finetune.py:976] (2/7) Epoch 16, batch 4100, loss[loss=0.2102, simple_loss=0.2735, pruned_loss=0.07341, over 4170.00 frames. ], tot_loss[loss=0.1855, simple_loss=0.2547, pruned_loss=0.05814, over 952314.86 frames. 
], batch size: 65, lr: 3.43e-03, grad_scale: 64.0 2023-03-26 19:49:43,247 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90030.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:49:46,968 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=90036.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:49:51,023 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3634, 1.3111, 1.6598, 2.4655, 1.5989, 2.1628, 0.8560, 2.0565], device='cuda:2'), covar=tensor([0.1754, 0.1468, 0.1143, 0.0730, 0.1005, 0.1374, 0.1614, 0.0648], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0166, 0.0101, 0.0140, 0.0125, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 19:49:54,440 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.029e+02 1.559e+02 1.839e+02 2.160e+02 6.359e+02, threshold=3.678e+02, percent-clipped=1.0 2023-03-26 19:49:58,241 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9062, 1.6173, 2.3069, 1.3432, 2.0880, 2.0916, 1.5642, 2.3166], device='cuda:2'), covar=tensor([0.1342, 0.2154, 0.1636, 0.2365, 0.1006, 0.1733, 0.2804, 0.0969], device='cuda:2'), in_proj_covar=tensor([0.0194, 0.0206, 0.0193, 0.0192, 0.0178, 0.0214, 0.0220, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 19:50:06,383 INFO [finetune.py:976] (2/7) Epoch 16, batch 4150, loss[loss=0.1665, simple_loss=0.2342, pruned_loss=0.04944, over 4714.00 frames. ], tot_loss[loss=0.187, simple_loss=0.2564, pruned_loss=0.05881, over 953883.42 frames. ], batch size: 23, lr: 3.43e-03, grad_scale: 64.0 2023-03-26 19:50:14,191 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=90078.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:50:16,546 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90081.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:50:26,452 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90095.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 19:50:39,513 INFO [finetune.py:976] (2/7) Epoch 16, batch 4200, loss[loss=0.1607, simple_loss=0.2259, pruned_loss=0.04778, over 4158.00 frames. ], tot_loss[loss=0.1872, simple_loss=0.2571, pruned_loss=0.05868, over 953089.83 frames. 
], batch size: 65, lr: 3.43e-03, grad_scale: 64.0 2023-03-26 19:50:47,974 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=90129.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:50:48,627 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=90130.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:50:56,644 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.3558, 4.6500, 4.9061, 5.1124, 5.0814, 4.8985, 5.4650, 1.8299], device='cuda:2'), covar=tensor([0.0600, 0.0862, 0.0670, 0.0746, 0.1010, 0.1200, 0.0461, 0.4852], device='cuda:2'), in_proj_covar=tensor([0.0350, 0.0246, 0.0278, 0.0296, 0.0335, 0.0282, 0.0300, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 19:50:57,834 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=90143.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 19:51:00,648 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.087e+02 1.547e+02 1.785e+02 2.134e+02 3.751e+02, threshold=3.570e+02, percent-clipped=1.0 2023-03-26 19:51:12,329 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90165.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:51:12,436 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-26 19:51:12,840 INFO [finetune.py:976] (2/7) Epoch 16, batch 4250, loss[loss=0.2396, simple_loss=0.2937, pruned_loss=0.09273, over 4738.00 frames. ], tot_loss[loss=0.1852, simple_loss=0.2544, pruned_loss=0.05806, over 951872.39 frames. ], batch size: 23, lr: 3.43e-03, grad_scale: 64.0 2023-03-26 19:51:29,549 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=90191.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:51:36,062 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4674, 1.3657, 1.4685, 0.8270, 1.5011, 1.5175, 1.5080, 1.3291], device='cuda:2'), covar=tensor([0.0626, 0.0786, 0.0717, 0.1015, 0.0849, 0.0727, 0.0601, 0.1235], device='cuda:2'), in_proj_covar=tensor([0.0135, 0.0136, 0.0143, 0.0125, 0.0124, 0.0142, 0.0143, 0.0166], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 19:51:43,667 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=90213.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:51:45,897 INFO [finetune.py:976] (2/7) Epoch 16, batch 4300, loss[loss=0.1945, simple_loss=0.2456, pruned_loss=0.0717, over 4870.00 frames. ], tot_loss[loss=0.1834, simple_loss=0.2518, pruned_loss=0.05747, over 954354.33 frames. ], batch size: 34, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:51:48,956 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=90221.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 19:51:56,158 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=90232.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:52:05,693 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. 
limit=2.0 2023-03-26 19:52:07,167 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.180e+02 1.551e+02 1.797e+02 2.190e+02 3.764e+02, threshold=3.594e+02, percent-clipped=1.0 2023-03-26 19:52:18,553 INFO [finetune.py:976] (2/7) Epoch 16, batch 4350, loss[loss=0.2049, simple_loss=0.2558, pruned_loss=0.07704, over 4851.00 frames. ], tot_loss[loss=0.181, simple_loss=0.2489, pruned_loss=0.05653, over 953436.69 frames. ], batch size: 44, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:52:30,633 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1181, 0.9315, 0.9328, 0.3079, 0.7647, 1.1305, 1.1052, 0.9231], device='cuda:2'), covar=tensor([0.0888, 0.0757, 0.0634, 0.0608, 0.0653, 0.0538, 0.0428, 0.0649], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0151, 0.0124, 0.0127, 0.0131, 0.0129, 0.0142, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.2588e-05, 1.1007e-04, 8.8922e-05, 9.0778e-05, 9.2596e-05, 9.2692e-05, 1.0252e-04, 1.0631e-04], device='cuda:2') 2023-03-26 19:52:36,975 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=90293.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:52:51,897 INFO [finetune.py:976] (2/7) Epoch 16, batch 4400, loss[loss=0.1734, simple_loss=0.2264, pruned_loss=0.06017, over 4055.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.2497, pruned_loss=0.05683, over 954032.22 frames. ], batch size: 17, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:53:03,773 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3278, 2.2675, 1.7226, 2.5318, 2.2863, 1.9006, 2.8013, 2.3328], device='cuda:2'), covar=tensor([0.1364, 0.2552, 0.3254, 0.2765, 0.2617, 0.1711, 0.3745, 0.1924], device='cuda:2'), in_proj_covar=tensor([0.0183, 0.0187, 0.0234, 0.0253, 0.0245, 0.0202, 0.0212, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 19:53:04,944 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90336.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:53:04,989 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5296, 1.4945, 1.3770, 1.6060, 1.8528, 1.7692, 1.5694, 1.3165], device='cuda:2'), covar=tensor([0.0365, 0.0279, 0.0587, 0.0291, 0.0191, 0.0375, 0.0302, 0.0385], device='cuda:2'), in_proj_covar=tensor([0.0095, 0.0109, 0.0145, 0.0113, 0.0101, 0.0109, 0.0099, 0.0109], device='cuda:2'), out_proj_covar=tensor([7.4063e-05, 8.4263e-05, 1.1460e-04, 8.7437e-05, 7.8343e-05, 8.0129e-05, 7.4308e-05, 8.3216e-05], device='cuda:2') 2023-03-26 19:53:13,597 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.069e+02 1.586e+02 1.845e+02 2.241e+02 4.760e+02, threshold=3.689e+02, percent-clipped=4.0 2023-03-26 19:53:15,500 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=90351.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:53:21,424 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0 2023-03-26 19:53:25,406 INFO [finetune.py:976] (2/7) Epoch 16, batch 4450, loss[loss=0.1775, simple_loss=0.2464, pruned_loss=0.05428, over 4870.00 frames. ], tot_loss[loss=0.1819, simple_loss=0.2511, pruned_loss=0.05634, over 952421.66 frames. 
], batch size: 31, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:53:37,365 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=90384.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:53:55,614 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0 2023-03-26 19:54:05,567 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=90412.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:54:07,882 INFO [finetune.py:976] (2/7) Epoch 16, batch 4500, loss[loss=0.1655, simple_loss=0.2363, pruned_loss=0.04733, over 4126.00 frames. ], tot_loss[loss=0.1848, simple_loss=0.254, pruned_loss=0.05782, over 952831.93 frames. ], batch size: 17, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:54:25,401 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-26 19:54:35,005 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-26 19:54:36,203 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.70 vs. limit=5.0 2023-03-26 19:54:39,423 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7932, 1.0412, 1.7960, 1.7933, 1.5939, 1.5363, 1.7377, 1.6901], device='cuda:2'), covar=tensor([0.3597, 0.3871, 0.3234, 0.3331, 0.4523, 0.3414, 0.4052, 0.3038], device='cuda:2'), in_proj_covar=tensor([0.0247, 0.0239, 0.0259, 0.0271, 0.0270, 0.0244, 0.0281, 0.0238], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 19:54:40,946 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.141e+02 1.659e+02 2.056e+02 2.631e+02 3.688e+02, threshold=4.111e+02, percent-clipped=0.0 2023-03-26 19:54:53,431 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=90459.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:55:01,919 INFO [finetune.py:976] (2/7) Epoch 16, batch 4550, loss[loss=0.2037, simple_loss=0.2638, pruned_loss=0.07185, over 4718.00 frames. ], tot_loss[loss=0.1868, simple_loss=0.2559, pruned_loss=0.05885, over 953447.88 frames. 
], batch size: 54, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:55:01,997 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6926, 3.6063, 3.4646, 1.7726, 3.7204, 2.6951, 0.9892, 2.5339], device='cuda:2'), covar=tensor([0.2778, 0.1810, 0.1499, 0.3305, 0.1008, 0.1074, 0.4104, 0.1541], device='cuda:2'), in_proj_covar=tensor([0.0150, 0.0175, 0.0159, 0.0128, 0.0157, 0.0123, 0.0146, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 19:55:18,096 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=90486.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:55:24,683 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=90496.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:55:25,303 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6776, 1.6315, 1.5804, 1.6292, 1.1475, 3.6513, 1.5597, 2.0651], device='cuda:2'), covar=tensor([0.3240, 0.2448, 0.2093, 0.2293, 0.1775, 0.0163, 0.2585, 0.1176], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0115, 0.0120, 0.0123, 0.0113, 0.0096, 0.0096, 0.0096], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 19:55:38,625 INFO [finetune.py:976] (2/7) Epoch 16, batch 4600, loss[loss=0.1814, simple_loss=0.2549, pruned_loss=0.05402, over 4773.00 frames. ], tot_loss[loss=0.1858, simple_loss=0.2553, pruned_loss=0.05813, over 955546.70 frames. ], batch size: 28, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:55:41,647 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=90520.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:55:42,188 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90521.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 19:55:47,514 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8259, 2.7208, 2.2649, 1.1985, 2.4711, 2.1232, 1.9153, 2.3982], device='cuda:2'), covar=tensor([0.0694, 0.0756, 0.1358, 0.1902, 0.1156, 0.1962, 0.1963, 0.0928], device='cuda:2'), in_proj_covar=tensor([0.0167, 0.0193, 0.0197, 0.0181, 0.0211, 0.0205, 0.0221, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 19:55:50,511 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9526, 1.4470, 1.9793, 1.8844, 1.6967, 1.6516, 1.8872, 1.8181], device='cuda:2'), covar=tensor([0.3521, 0.3792, 0.2892, 0.3391, 0.4435, 0.3638, 0.4235, 0.2828], device='cuda:2'), in_proj_covar=tensor([0.0248, 0.0240, 0.0259, 0.0272, 0.0272, 0.0245, 0.0282, 0.0239], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 19:55:59,293 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.556e+01 1.469e+02 1.715e+02 2.012e+02 4.010e+02, threshold=3.429e+02, percent-clipped=0.0 2023-03-26 19:55:59,995 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6620, 3.6715, 3.4853, 1.9183, 3.8396, 2.7611, 0.8014, 2.6556], device='cuda:2'), covar=tensor([0.2334, 0.2116, 0.1664, 0.3210, 0.1023, 0.1069, 0.4581, 0.1568], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0175, 0.0159, 0.0129, 0.0158, 0.0123, 0.0147, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 19:56:01,830 INFO 
[zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1829, 2.0637, 2.2978, 1.4874, 2.2314, 2.3204, 2.3531, 1.8851], device='cuda:2'), covar=tensor([0.0570, 0.0626, 0.0610, 0.0958, 0.0608, 0.0668, 0.0574, 0.0976], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0134, 0.0141, 0.0123, 0.0123, 0.0140, 0.0141, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 19:56:03,486 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.3193, 2.9212, 3.0453, 3.2172, 3.0811, 2.8924, 3.3489, 0.9635], device='cuda:2'), covar=tensor([0.1102, 0.1200, 0.1160, 0.1287, 0.1755, 0.1859, 0.1118, 0.5538], device='cuda:2'), in_proj_covar=tensor([0.0350, 0.0245, 0.0278, 0.0294, 0.0335, 0.0282, 0.0300, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 19:56:06,291 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=90557.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:56:11,544 INFO [finetune.py:976] (2/7) Epoch 16, batch 4650, loss[loss=0.1748, simple_loss=0.2373, pruned_loss=0.05618, over 4929.00 frames. ], tot_loss[loss=0.1837, simple_loss=0.2523, pruned_loss=0.05755, over 956186.30 frames. ], batch size: 38, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 19:56:13,932 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=90569.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:56:26,169 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=90588.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:56:45,054 INFO [finetune.py:976] (2/7) Epoch 16, batch 4700, loss[loss=0.1601, simple_loss=0.2162, pruned_loss=0.05204, over 4818.00 frames. ], tot_loss[loss=0.1824, simple_loss=0.2503, pruned_loss=0.05722, over 956332.30 frames. ], batch size: 39, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 19:56:52,705 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.0874, 3.5149, 3.7441, 3.8448, 3.8710, 3.6673, 4.1401, 1.3482], device='cuda:2'), covar=tensor([0.0690, 0.0856, 0.0832, 0.1034, 0.0978, 0.1290, 0.0663, 0.5373], device='cuda:2'), in_proj_covar=tensor([0.0348, 0.0244, 0.0276, 0.0293, 0.0333, 0.0280, 0.0299, 0.0295], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 19:56:59,130 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.2011, 3.6439, 3.8591, 4.0400, 3.9949, 3.7576, 4.2543, 1.3701], device='cuda:2'), covar=tensor([0.0766, 0.0885, 0.0993, 0.1172, 0.1101, 0.1459, 0.0762, 0.5402], device='cuda:2'), in_proj_covar=tensor([0.0347, 0.0243, 0.0276, 0.0293, 0.0333, 0.0280, 0.0299, 0.0295], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 19:57:05,662 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.103e+02 1.593e+02 1.858e+02 2.163e+02 3.767e+02, threshold=3.717e+02, percent-clipped=1.0 2023-03-26 19:57:18,493 INFO [finetune.py:976] (2/7) Epoch 16, batch 4750, loss[loss=0.198, simple_loss=0.2787, pruned_loss=0.05862, over 4821.00 frames. ], tot_loss[loss=0.1811, simple_loss=0.249, pruned_loss=0.05657, over 957731.58 frames. 
], batch size: 39, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 19:57:45,612 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=90707.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:57:52,074 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0 2023-03-26 19:57:52,458 INFO [finetune.py:976] (2/7) Epoch 16, batch 4800, loss[loss=0.1493, simple_loss=0.2222, pruned_loss=0.03826, over 4744.00 frames. ], tot_loss[loss=0.1826, simple_loss=0.2512, pruned_loss=0.05702, over 957183.59 frames. ], batch size: 26, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 19:58:13,288 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.231e+02 1.629e+02 1.979e+02 2.321e+02 4.531e+02, threshold=3.957e+02, percent-clipped=1.0 2023-03-26 19:58:16,920 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4841, 1.3622, 1.4970, 0.7819, 1.5449, 1.5222, 1.5080, 1.2748], device='cuda:2'), covar=tensor([0.0600, 0.0802, 0.0713, 0.1071, 0.0792, 0.0709, 0.0624, 0.1268], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0134, 0.0141, 0.0123, 0.0123, 0.0140, 0.0141, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 19:58:17,514 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5544, 1.4457, 1.4610, 1.5227, 1.1553, 3.1974, 1.2586, 1.7154], device='cuda:2'), covar=tensor([0.3248, 0.2532, 0.2231, 0.2335, 0.1810, 0.0234, 0.2567, 0.1291], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0123, 0.0113, 0.0097, 0.0096, 0.0096], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 19:58:25,070 INFO [finetune.py:976] (2/7) Epoch 16, batch 4850, loss[loss=0.2021, simple_loss=0.2653, pruned_loss=0.06942, over 4850.00 frames. ], tot_loss[loss=0.1859, simple_loss=0.255, pruned_loss=0.05845, over 957420.52 frames. ], batch size: 31, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 19:58:39,086 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90786.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:58:57,838 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=90815.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:58:58,403 INFO [finetune.py:976] (2/7) Epoch 16, batch 4900, loss[loss=0.2434, simple_loss=0.2921, pruned_loss=0.09741, over 4885.00 frames. ], tot_loss[loss=0.1876, simple_loss=0.2568, pruned_loss=0.05919, over 957360.46 frames. 
], batch size: 32, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 19:59:11,165 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=90834.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:59:11,238 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7050, 1.6672, 1.3915, 1.4498, 1.7890, 1.4744, 1.8031, 1.7202], device='cuda:2'), covar=tensor([0.1418, 0.1871, 0.2916, 0.2353, 0.2368, 0.1663, 0.2754, 0.1684], device='cuda:2'), in_proj_covar=tensor([0.0183, 0.0187, 0.0234, 0.0253, 0.0245, 0.0202, 0.0212, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 19:59:24,120 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.061e+02 1.543e+02 1.926e+02 2.205e+02 3.945e+02, threshold=3.852e+02, percent-clipped=0.0 2023-03-26 19:59:25,319 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=90849.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:59:27,120 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=90852.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:59:42,911 INFO [finetune.py:976] (2/7) Epoch 16, batch 4950, loss[loss=0.2714, simple_loss=0.3327, pruned_loss=0.105, over 4824.00 frames. ], tot_loss[loss=0.1887, simple_loss=0.2579, pruned_loss=0.0597, over 954928.48 frames. ], batch size: 39, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:00:12,904 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90888.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:00:34,912 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=90910.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:00:38,891 INFO [finetune.py:976] (2/7) Epoch 16, batch 5000, loss[loss=0.1561, simple_loss=0.2397, pruned_loss=0.03621, over 4842.00 frames. ], tot_loss[loss=0.1863, simple_loss=0.2552, pruned_loss=0.05867, over 951832.72 frames. ], batch size: 47, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:00:53,034 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=90936.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:00:53,686 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6327, 1.2020, 0.8438, 1.4662, 2.0665, 1.0547, 1.2925, 1.4130], device='cuda:2'), covar=tensor([0.1523, 0.2220, 0.1926, 0.1260, 0.1847, 0.1982, 0.1553, 0.1992], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0096, 0.0112, 0.0093, 0.0120, 0.0095, 0.0099, 0.0090], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 20:01:00,204 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.882e+01 1.531e+02 1.843e+02 2.134e+02 5.620e+02, threshold=3.687e+02, percent-clipped=2.0 2023-03-26 20:01:11,975 INFO [finetune.py:976] (2/7) Epoch 16, batch 5050, loss[loss=0.1641, simple_loss=0.2255, pruned_loss=0.05133, over 4746.00 frames. ], tot_loss[loss=0.1844, simple_loss=0.2526, pruned_loss=0.0581, over 953311.78 frames. ], batch size: 26, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:01:40,195 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=91007.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:01:45,574 INFO [finetune.py:976] (2/7) Epoch 16, batch 5100, loss[loss=0.176, simple_loss=0.2383, pruned_loss=0.05684, over 4901.00 frames. ], tot_loss[loss=0.1807, simple_loss=0.2489, pruned_loss=0.05626, over 955102.94 frames. 
], batch size: 36, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:02:07,770 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.054e+02 1.471e+02 1.778e+02 2.096e+02 3.940e+02, threshold=3.556e+02, percent-clipped=2.0 2023-03-26 20:02:12,115 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=91055.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:02:19,223 INFO [finetune.py:976] (2/7) Epoch 16, batch 5150, loss[loss=0.2047, simple_loss=0.2816, pruned_loss=0.06385, over 4762.00 frames. ], tot_loss[loss=0.1813, simple_loss=0.2494, pruned_loss=0.0566, over 955615.48 frames. ], batch size: 54, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:02:52,451 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=91115.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:02:52,957 INFO [finetune.py:976] (2/7) Epoch 16, batch 5200, loss[loss=0.2071, simple_loss=0.2749, pruned_loss=0.06964, over 4916.00 frames. ], tot_loss[loss=0.1835, simple_loss=0.252, pruned_loss=0.05752, over 955937.13 frames. ], batch size: 38, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:03:07,942 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8095, 1.3257, 1.8972, 1.8647, 1.6444, 1.6233, 1.8221, 1.7632], device='cuda:2'), covar=tensor([0.4077, 0.3897, 0.3274, 0.3686, 0.4829, 0.3605, 0.4352, 0.3051], device='cuda:2'), in_proj_covar=tensor([0.0247, 0.0239, 0.0258, 0.0271, 0.0270, 0.0245, 0.0281, 0.0238], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:03:14,339 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.009e+02 1.594e+02 1.966e+02 2.465e+02 4.658e+02, threshold=3.932e+02, percent-clipped=3.0 2023-03-26 20:03:17,837 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=91152.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:03:24,391 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=91163.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:03:26,646 INFO [finetune.py:976] (2/7) Epoch 16, batch 5250, loss[loss=0.2137, simple_loss=0.2792, pruned_loss=0.07413, over 4751.00 frames. ], tot_loss[loss=0.1846, simple_loss=0.2537, pruned_loss=0.05774, over 956116.97 frames. ], batch size: 27, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:03:31,008 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8961, 1.4507, 1.9726, 1.8448, 1.6620, 1.6328, 1.7911, 1.8396], device='cuda:2'), covar=tensor([0.3875, 0.3838, 0.3200, 0.3829, 0.4804, 0.3773, 0.4413, 0.3013], device='cuda:2'), in_proj_covar=tensor([0.0247, 0.0239, 0.0258, 0.0271, 0.0270, 0.0245, 0.0281, 0.0238], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:03:49,258 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=91200.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:03:53,266 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=91205.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:03:59,840 INFO [finetune.py:976] (2/7) Epoch 16, batch 5300, loss[loss=0.1664, simple_loss=0.2423, pruned_loss=0.0453, over 4776.00 frames. ], tot_loss[loss=0.1842, simple_loss=0.2534, pruned_loss=0.05751, over 955582.32 frames. 
], batch size: 28, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:04:21,114 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.091e+02 1.597e+02 1.843e+02 2.222e+02 3.769e+02, threshold=3.686e+02, percent-clipped=0.0 2023-03-26 20:04:30,008 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.63 vs. limit=2.0 2023-03-26 20:04:33,500 INFO [finetune.py:976] (2/7) Epoch 16, batch 5350, loss[loss=0.1873, simple_loss=0.2536, pruned_loss=0.06049, over 4805.00 frames. ], tot_loss[loss=0.1838, simple_loss=0.2535, pruned_loss=0.05702, over 957682.49 frames. ], batch size: 41, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:04:50,622 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=91285.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:05:29,809 INFO [finetune.py:976] (2/7) Epoch 16, batch 5400, loss[loss=0.1868, simple_loss=0.2543, pruned_loss=0.05968, over 4816.00 frames. ], tot_loss[loss=0.1815, simple_loss=0.2509, pruned_loss=0.05608, over 954841.70 frames. ], batch size: 39, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:06:01,737 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=91346.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:06:03,346 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.781e+01 1.583e+02 1.812e+02 2.291e+02 3.767e+02, threshold=3.624e+02, percent-clipped=1.0 2023-03-26 20:06:15,750 INFO [finetune.py:976] (2/7) Epoch 16, batch 5450, loss[loss=0.1571, simple_loss=0.2229, pruned_loss=0.04563, over 4777.00 frames. ], tot_loss[loss=0.1801, simple_loss=0.249, pruned_loss=0.05561, over 956592.09 frames. ], batch size: 26, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:06:49,422 INFO [finetune.py:976] (2/7) Epoch 16, batch 5500, loss[loss=0.2061, simple_loss=0.2815, pruned_loss=0.06534, over 4864.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.2458, pruned_loss=0.05422, over 956007.86 frames. ], batch size: 44, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:07:10,222 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.789e+01 1.460e+02 1.744e+02 2.187e+02 6.443e+02, threshold=3.488e+02, percent-clipped=1.0 2023-03-26 20:07:22,092 INFO [finetune.py:976] (2/7) Epoch 16, batch 5550, loss[loss=0.1859, simple_loss=0.2634, pruned_loss=0.05417, over 4816.00 frames. ], tot_loss[loss=0.1786, simple_loss=0.2473, pruned_loss=0.05499, over 955899.71 frames. ], batch size: 39, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:07:23,905 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0572, 1.8709, 2.0046, 1.3252, 1.9692, 2.0347, 2.0933, 1.6462], device='cuda:2'), covar=tensor([0.0496, 0.0618, 0.0642, 0.0915, 0.0710, 0.0728, 0.0568, 0.1093], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0135, 0.0142, 0.0124, 0.0124, 0.0141, 0.0142, 0.0165], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:07:47,594 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=91505.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:07:53,902 INFO [finetune.py:976] (2/7) Epoch 16, batch 5600, loss[loss=0.2098, simple_loss=0.2883, pruned_loss=0.06569, over 4792.00 frames. ], tot_loss[loss=0.1822, simple_loss=0.2517, pruned_loss=0.05632, over 954602.15 frames. 
], batch size: 51, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:08:05,578 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8453, 1.3257, 1.7211, 1.7544, 1.5744, 1.5735, 1.6959, 1.6702], device='cuda:2'), covar=tensor([0.5102, 0.4957, 0.4448, 0.4536, 0.6225, 0.4695, 0.5585, 0.4540], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0239, 0.0258, 0.0270, 0.0269, 0.0244, 0.0280, 0.0237], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:08:09,003 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=91542.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 20:08:13,218 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.075e+02 1.603e+02 1.969e+02 2.458e+02 5.397e+02, threshold=3.938e+02, percent-clipped=5.0 2023-03-26 20:08:16,177 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=91553.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:08:23,624 INFO [finetune.py:976] (2/7) Epoch 16, batch 5650, loss[loss=0.2163, simple_loss=0.2942, pruned_loss=0.06919, over 4815.00 frames. ], tot_loss[loss=0.185, simple_loss=0.2551, pruned_loss=0.05742, over 955398.43 frames. ], batch size: 38, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:08:42,620 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0 2023-03-26 20:08:45,685 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=91603.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 20:08:53,175 INFO [finetune.py:976] (2/7) Epoch 16, batch 5700, loss[loss=0.1748, simple_loss=0.2326, pruned_loss=0.05852, over 4372.00 frames. ], tot_loss[loss=0.1832, simple_loss=0.252, pruned_loss=0.05721, over 936111.35 frames. ], batch size: 19, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:09:07,889 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=91641.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:09:21,371 INFO [finetune.py:976] (2/7) Epoch 17, batch 0, loss[loss=0.1947, simple_loss=0.2608, pruned_loss=0.06433, over 4901.00 frames. ], tot_loss[loss=0.1947, simple_loss=0.2608, pruned_loss=0.06433, over 4901.00 frames. 
], batch size: 36, lr: 3.41e-03, grad_scale: 32.0 2023-03-26 20:09:21,371 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 20:09:23,550 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8013, 1.0949, 1.9279, 1.7927, 1.6783, 1.5749, 1.6762, 1.7684], device='cuda:2'), covar=tensor([0.3679, 0.3909, 0.3304, 0.3512, 0.4877, 0.3785, 0.4387, 0.3000], device='cuda:2'), in_proj_covar=tensor([0.0247, 0.0240, 0.0259, 0.0271, 0.0270, 0.0245, 0.0282, 0.0238], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:09:23,610 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.8834, 3.4337, 3.5570, 3.7539, 3.6384, 3.3932, 3.9480, 1.3087], device='cuda:2'), covar=tensor([0.0895, 0.0859, 0.0869, 0.1090, 0.1393, 0.1652, 0.0734, 0.5283], device='cuda:2'), in_proj_covar=tensor([0.0345, 0.0241, 0.0272, 0.0290, 0.0330, 0.0278, 0.0296, 0.0291], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:09:24,125 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3889, 1.2507, 1.2460, 1.3985, 1.6863, 1.5096, 1.3429, 1.2102], device='cuda:2'), covar=tensor([0.0359, 0.0302, 0.0605, 0.0306, 0.0207, 0.0512, 0.0351, 0.0358], device='cuda:2'), in_proj_covar=tensor([0.0095, 0.0108, 0.0144, 0.0113, 0.0100, 0.0108, 0.0098, 0.0109], device='cuda:2'), out_proj_covar=tensor([7.3808e-05, 8.3862e-05, 1.1375e-04, 8.6997e-05, 7.7713e-05, 7.9723e-05, 7.3573e-05, 8.3274e-05], device='cuda:2') 2023-03-26 20:09:32,014 INFO [finetune.py:1010] (2/7) Epoch 17, validation: loss=0.1591, simple_loss=0.2283, pruned_loss=0.04492, over 2265189.00 frames. 2023-03-26 20:09:32,015 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB 2023-03-26 20:09:35,491 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.774e+01 1.479e+02 1.757e+02 2.057e+02 5.096e+02, threshold=3.514e+02, percent-clipped=1.0 2023-03-26 20:10:00,075 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.71 vs. limit=2.0 2023-03-26 20:10:07,336 INFO [finetune.py:976] (2/7) Epoch 17, batch 50, loss[loss=0.1609, simple_loss=0.2323, pruned_loss=0.04478, over 4842.00 frames. ], tot_loss[loss=0.1872, simple_loss=0.256, pruned_loss=0.05923, over 217100.79 frames. ], batch size: 44, lr: 3.41e-03, grad_scale: 32.0 2023-03-26 20:10:16,236 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2589, 1.9855, 1.4319, 0.5419, 1.6439, 1.8362, 1.6769, 1.8211], device='cuda:2'), covar=tensor([0.0878, 0.0851, 0.1544, 0.1861, 0.1387, 0.2297, 0.2242, 0.0821], device='cuda:2'), in_proj_covar=tensor([0.0167, 0.0194, 0.0198, 0.0181, 0.0211, 0.0205, 0.0222, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:10:23,532 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.62 vs. 
limit=2.0 2023-03-26 20:10:32,213 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4634, 1.2847, 1.8948, 2.9895, 2.0204, 2.2533, 0.9728, 2.5230], device='cuda:2'), covar=tensor([0.2212, 0.1998, 0.1669, 0.0992, 0.1056, 0.1521, 0.2172, 0.0740], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0116, 0.0132, 0.0164, 0.0100, 0.0137, 0.0123, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 20:10:52,788 INFO [finetune.py:976] (2/7) Epoch 17, batch 100, loss[loss=0.1832, simple_loss=0.2436, pruned_loss=0.06145, over 4738.00 frames. ], tot_loss[loss=0.1814, simple_loss=0.2497, pruned_loss=0.05653, over 382140.78 frames. ], batch size: 54, lr: 3.41e-03, grad_scale: 32.0 2023-03-26 20:11:01,250 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.123e+02 1.600e+02 1.810e+02 2.096e+02 3.529e+02, threshold=3.620e+02, percent-clipped=1.0 2023-03-26 20:11:09,178 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.51 vs. limit=2.0 2023-03-26 20:11:37,625 INFO [finetune.py:976] (2/7) Epoch 17, batch 150, loss[loss=0.1695, simple_loss=0.2448, pruned_loss=0.04716, over 4865.00 frames. ], tot_loss[loss=0.1759, simple_loss=0.2439, pruned_loss=0.05392, over 511143.07 frames. ], batch size: 34, lr: 3.41e-03, grad_scale: 32.0 2023-03-26 20:12:11,013 INFO [finetune.py:976] (2/7) Epoch 17, batch 200, loss[loss=0.1811, simple_loss=0.2426, pruned_loss=0.05976, over 4915.00 frames. ], tot_loss[loss=0.176, simple_loss=0.2434, pruned_loss=0.05428, over 610442.35 frames. ], batch size: 35, lr: 3.41e-03, grad_scale: 32.0 2023-03-26 20:12:11,695 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3680, 1.3822, 1.9613, 1.6893, 1.4642, 3.3755, 1.2299, 1.4549], device='cuda:2'), covar=tensor([0.1046, 0.1837, 0.1212, 0.0981, 0.1608, 0.0244, 0.1591, 0.1814], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 20:12:14,524 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.078e+02 1.595e+02 1.938e+02 2.273e+02 4.627e+02, threshold=3.876e+02, percent-clipped=4.0 2023-03-26 20:12:43,712 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=91891.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:12:44,795 INFO [finetune.py:976] (2/7) Epoch 17, batch 250, loss[loss=0.2131, simple_loss=0.2867, pruned_loss=0.06968, over 4754.00 frames. ], tot_loss[loss=0.1804, simple_loss=0.2486, pruned_loss=0.05611, over 687437.40 frames. 
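The tot_loss[...] figures are frame-weighted running averages rather than plain epoch means: the "over N frames" count climbs from roughly 217k at batch 50 through 382k, 511k, 610k toward a plateau near 955k, with shrinking increments, which is consistent with a decaying sum using a decay factor of about 1 - 1/200 (steady state of about 200 batches' worth of frames). A sketch under that assumption; RunningLoss is a hypothetical name, not icefall's actual tracker:

```python
# Sketch of the frame-weighted running loss behind "tot_loss[... over N frames]".
# The decay constant is inferred from the logged frame counts, not from source.
class RunningLoss:
    def __init__(self, reset_interval=200):
        self.decay = 1.0 - 1.0 / reset_interval
        self.loss_sum = 0.0   # decayed sum of (loss * frames) over batches
        self.frames = 0.0     # decayed effective frame count

    def update(self, batch_loss, batch_frames):
        self.loss_sum = self.loss_sum * self.decay + batch_loss * batch_frames
        self.frames = self.frames * self.decay + batch_frames

    @property
    def value(self):
        return self.loss_sum / max(self.frames, 1.0)

tracker = RunningLoss()
tracker.update(0.2163, 4815)   # e.g. one batch "over 4815.00 frames"
print(f"tot_loss={tracker.value:.4f} over {tracker.frames:.2f} frames")
```

The fractional frame counts in the log (e.g. 955398.43) are exactly what such exponential decay produces.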
], batch size: 59, lr: 3.41e-03, grad_scale: 32.0 2023-03-26 20:12:47,910 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=91898.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 20:12:50,140 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6618, 1.5169, 1.3310, 1.6363, 2.0126, 1.8244, 1.6033, 1.4312], device='cuda:2'), covar=tensor([0.0292, 0.0339, 0.0599, 0.0284, 0.0187, 0.0460, 0.0289, 0.0364], device='cuda:2'), in_proj_covar=tensor([0.0095, 0.0108, 0.0144, 0.0113, 0.0100, 0.0108, 0.0098, 0.0109], device='cuda:2'), out_proj_covar=tensor([7.4044e-05, 8.3918e-05, 1.1392e-04, 8.7083e-05, 7.7791e-05, 7.9603e-05, 7.3818e-05, 8.3000e-05], device='cuda:2') 2023-03-26 20:13:12,305 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1005, 2.0675, 1.7031, 1.8888, 1.8997, 1.8411, 1.9403, 2.6276], device='cuda:2'), covar=tensor([0.3853, 0.4298, 0.3334, 0.4001, 0.3973, 0.2621, 0.3806, 0.1682], device='cuda:2'), in_proj_covar=tensor([0.0287, 0.0262, 0.0228, 0.0278, 0.0251, 0.0218, 0.0250, 0.0231], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:13:17,049 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=91941.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:13:17,089 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7304, 2.5450, 2.1062, 1.0359, 2.3232, 2.0107, 1.9673, 2.2948], device='cuda:2'), covar=tensor([0.0772, 0.0845, 0.1592, 0.2112, 0.1414, 0.2200, 0.1881, 0.1011], device='cuda:2'), in_proj_covar=tensor([0.0167, 0.0193, 0.0197, 0.0181, 0.0210, 0.0205, 0.0221, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:13:18,231 INFO [finetune.py:976] (2/7) Epoch 17, batch 300, loss[loss=0.1838, simple_loss=0.2555, pruned_loss=0.05599, over 4862.00 frames. ], tot_loss[loss=0.1847, simple_loss=0.2536, pruned_loss=0.05785, over 748924.68 frames. ], batch size: 31, lr: 3.41e-03, grad_scale: 32.0 2023-03-26 20:13:21,759 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.873e+01 1.605e+02 2.003e+02 2.239e+02 3.510e+02, threshold=4.006e+02, percent-clipped=0.0 2023-03-26 20:13:24,429 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=91952.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:13:27,179 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=91955.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:13:44,457 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=91981.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 20:13:49,290 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=91989.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:13:52,145 INFO [finetune.py:976] (2/7) Epoch 17, batch 350, loss[loss=0.2411, simple_loss=0.3041, pruned_loss=0.08904, over 4891.00 frames. ], tot_loss[loss=0.1857, simple_loss=0.2551, pruned_loss=0.05816, over 793493.09 frames. 
], batch size: 43, lr: 3.41e-03, grad_scale: 32.0 2023-03-26 20:14:09,837 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=92016.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:14:26,164 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=92042.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 20:14:26,649 INFO [finetune.py:976] (2/7) Epoch 17, batch 400, loss[loss=0.1772, simple_loss=0.246, pruned_loss=0.05419, over 4779.00 frames. ], tot_loss[loss=0.1851, simple_loss=0.2551, pruned_loss=0.05751, over 830183.39 frames. ], batch size: 51, lr: 3.41e-03, grad_scale: 32.0 2023-03-26 20:14:30,187 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.886e+01 1.544e+02 1.847e+02 2.163e+02 3.487e+02, threshold=3.695e+02, percent-clipped=0.0 2023-03-26 20:15:00,229 INFO [finetune.py:976] (2/7) Epoch 17, batch 450, loss[loss=0.1461, simple_loss=0.2153, pruned_loss=0.03846, over 4789.00 frames. ], tot_loss[loss=0.1834, simple_loss=0.2533, pruned_loss=0.05681, over 856731.37 frames. ], batch size: 29, lr: 3.41e-03, grad_scale: 32.0 2023-03-26 20:15:22,584 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.82 vs. limit=2.0 2023-03-26 20:15:29,645 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7373, 0.7007, 1.6624, 1.5762, 1.5121, 1.4947, 1.4844, 1.6510], device='cuda:2'), covar=tensor([0.4216, 0.4197, 0.3949, 0.3896, 0.5280, 0.3917, 0.4739, 0.3749], device='cuda:2'), in_proj_covar=tensor([0.0248, 0.0240, 0.0259, 0.0272, 0.0271, 0.0246, 0.0283, 0.0239], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:15:33,735 INFO [finetune.py:976] (2/7) Epoch 17, batch 500, loss[loss=0.1494, simple_loss=0.2097, pruned_loss=0.0446, over 4700.00 frames. ], tot_loss[loss=0.1814, simple_loss=0.2508, pruned_loss=0.05602, over 878614.65 frames. ], batch size: 23, lr: 3.41e-03, grad_scale: 32.0 2023-03-26 20:15:37,217 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.042e+02 1.578e+02 1.863e+02 2.269e+02 4.074e+02, threshold=3.727e+02, percent-clipped=2.0 2023-03-26 20:16:30,566 INFO [finetune.py:976] (2/7) Epoch 17, batch 550, loss[loss=0.2233, simple_loss=0.279, pruned_loss=0.08382, over 4811.00 frames. ], tot_loss[loss=0.1796, simple_loss=0.2481, pruned_loss=0.05551, over 896595.46 frames. ], batch size: 51, lr: 3.41e-03, grad_scale: 32.0 2023-03-26 20:16:33,714 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=92198.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 20:16:50,756 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6683, 1.2235, 0.8876, 1.4692, 2.0401, 1.0681, 1.3511, 1.4455], device='cuda:2'), covar=tensor([0.1376, 0.2170, 0.1954, 0.1312, 0.1791, 0.2259, 0.1650, 0.2066], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0095, 0.0111, 0.0093, 0.0119, 0.0095, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 20:16:59,758 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.42 vs. limit=5.0 2023-03-26 20:17:13,333 INFO [finetune.py:976] (2/7) Epoch 17, batch 600, loss[loss=0.1523, simple_loss=0.218, pruned_loss=0.04335, over 4836.00 frames. ], tot_loss[loss=0.1802, simple_loss=0.2484, pruned_loss=0.05601, over 908907.34 frames. 
], batch size: 25, lr: 3.41e-03, grad_scale: 64.0 2023-03-26 20:17:15,212 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=92246.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 20:17:15,825 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=92247.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:17:16,353 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.142e+02 1.709e+02 1.980e+02 2.349e+02 5.069e+02, threshold=3.960e+02, percent-clipped=5.0 2023-03-26 20:17:47,090 INFO [finetune.py:976] (2/7) Epoch 17, batch 650, loss[loss=0.1742, simple_loss=0.244, pruned_loss=0.05222, over 4862.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.2514, pruned_loss=0.05677, over 919115.93 frames. ], batch size: 31, lr: 3.41e-03, grad_scale: 64.0 2023-03-26 20:17:52,093 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2597, 1.3270, 1.5360, 2.4821, 1.6428, 2.0243, 0.8731, 2.1041], device='cuda:2'), covar=tensor([0.1939, 0.1431, 0.1253, 0.0722, 0.0939, 0.1357, 0.1573, 0.0661], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0116, 0.0133, 0.0164, 0.0101, 0.0136, 0.0124, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 20:17:58,667 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=92311.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:18:16,768 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=92337.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 20:18:20,698 INFO [finetune.py:976] (2/7) Epoch 17, batch 700, loss[loss=0.1948, simple_loss=0.2661, pruned_loss=0.06174, over 4921.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2514, pruned_loss=0.05617, over 926945.90 frames. ], batch size: 42, lr: 3.41e-03, grad_scale: 64.0 2023-03-26 20:18:23,723 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.116e+02 1.553e+02 1.844e+02 2.135e+02 3.970e+02, threshold=3.688e+02, percent-clipped=1.0 2023-03-26 20:18:49,126 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1911, 2.1937, 2.3034, 1.5020, 2.2434, 2.2756, 2.2364, 1.8743], device='cuda:2'), covar=tensor([0.0645, 0.0688, 0.0682, 0.0949, 0.0634, 0.0734, 0.0662, 0.1146], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0135, 0.0142, 0.0124, 0.0124, 0.0141, 0.0142, 0.0166], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:18:54,354 INFO [finetune.py:976] (2/7) Epoch 17, batch 750, loss[loss=0.1683, simple_loss=0.2412, pruned_loss=0.04774, over 4822.00 frames. ], tot_loss[loss=0.1822, simple_loss=0.2522, pruned_loss=0.05611, over 933427.70 frames. ], batch size: 25, lr: 3.41e-03, grad_scale: 64.0 2023-03-26 20:19:01,242 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.2750, 4.5534, 4.8167, 5.1397, 4.9803, 4.6408, 5.3524, 1.6650], device='cuda:2'), covar=tensor([0.0692, 0.0802, 0.0768, 0.0902, 0.1154, 0.1518, 0.0509, 0.5417], device='cuda:2'), in_proj_covar=tensor([0.0348, 0.0244, 0.0275, 0.0293, 0.0333, 0.0281, 0.0301, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:19:28,150 INFO [finetune.py:976] (2/7) Epoch 17, batch 800, loss[loss=0.1468, simple_loss=0.2197, pruned_loss=0.037, over 4724.00 frames. 
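Note grad_scale doubling from 32.0 to 64.0 at batch 600 above and falling back to 32.0 by batch 1400 below: the standard dynamic loss-scaling behaviour for fp16 training, where the scale grows after a long run of overflow-free steps and is halved when an overflow is detected. A minimal sketch with PyTorch's stock torch.cuda.amp.GradScaler (a standard API, not icefall's own wrapper; init_scale=32.0 is chosen only to mirror the logged value):

```python
# Sketch: dynamic loss scaling for fp16 training with torch.cuda.amp.
import torch
from torch.cuda.amp import GradScaler, autocast

model = torch.nn.Linear(80, 500).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=4e-3)
# By default the scale doubles after 2000 consecutive clean steps and is
# halved whenever inf/nan gradients are found.
scaler = GradScaler(init_scale=32.0)

for _ in range(10):
    x = torch.randn(8, 80, device="cuda")
    with autocast():                      # forward in mixed precision
        loss = model(x).square().mean()
    optimizer.zero_grad()
    scaler.scale(loss).backward()         # backward on the scaled loss
    scaler.step(optimizer)                # skipped if gradients overflowed
    scaler.update()                       # grow or shrink the scale
    print(scaler.get_scale())             # the "grad_scale" in the log
```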
], tot_loss[loss=0.1824, simple_loss=0.2524, pruned_loss=0.05616, over 941043.17 frames. ], batch size: 27, lr: 3.41e-03, grad_scale: 64.0 2023-03-26 20:19:31,202 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.199e+02 1.753e+02 1.963e+02 2.342e+02 4.288e+02, threshold=3.926e+02, percent-clipped=2.0 2023-03-26 20:20:01,479 INFO [finetune.py:976] (2/7) Epoch 17, batch 850, loss[loss=0.1918, simple_loss=0.2601, pruned_loss=0.06175, over 4904.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.2511, pruned_loss=0.05618, over 945036.04 frames. ], batch size: 35, lr: 3.41e-03, grad_scale: 64.0 2023-03-26 20:20:21,922 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.0213, 4.3545, 4.5403, 4.7988, 4.7788, 4.4276, 5.1158, 1.6295], device='cuda:2'), covar=tensor([0.0655, 0.0777, 0.0722, 0.0897, 0.1093, 0.1492, 0.0519, 0.5369], device='cuda:2'), in_proj_covar=tensor([0.0349, 0.0244, 0.0275, 0.0293, 0.0334, 0.0281, 0.0301, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:20:35,311 INFO [finetune.py:976] (2/7) Epoch 17, batch 900, loss[loss=0.1426, simple_loss=0.2186, pruned_loss=0.03331, over 4792.00 frames. ], tot_loss[loss=0.1798, simple_loss=0.2488, pruned_loss=0.05541, over 946854.81 frames. ], batch size: 29, lr: 3.41e-03, grad_scale: 64.0 2023-03-26 20:20:38,306 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=92547.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:20:38,821 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.037e+02 1.480e+02 1.791e+02 2.296e+02 4.324e+02, threshold=3.582e+02, percent-clipped=2.0 2023-03-26 20:20:41,387 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9765, 1.5764, 1.8101, 1.8886, 1.6857, 1.7038, 1.7897, 1.7921], device='cuda:2'), covar=tensor([0.4951, 0.4702, 0.4595, 0.4420, 0.5650, 0.4611, 0.5492, 0.4229], device='cuda:2'), in_proj_covar=tensor([0.0247, 0.0239, 0.0257, 0.0271, 0.0269, 0.0244, 0.0281, 0.0238], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:20:49,219 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6312, 1.5017, 1.3575, 1.6266, 1.9537, 1.8816, 1.5337, 1.3711], device='cuda:2'), covar=tensor([0.0299, 0.0308, 0.0588, 0.0291, 0.0203, 0.0418, 0.0320, 0.0380], device='cuda:2'), in_proj_covar=tensor([0.0095, 0.0108, 0.0144, 0.0113, 0.0099, 0.0107, 0.0098, 0.0109], device='cuda:2'), out_proj_covar=tensor([7.3920e-05, 8.3496e-05, 1.1378e-04, 8.7123e-05, 7.7485e-05, 7.9332e-05, 7.3761e-05, 8.3118e-05], device='cuda:2') 2023-03-26 20:21:08,093 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4694, 1.0180, 0.8110, 1.2867, 2.0484, 1.1606, 1.2241, 1.3357], device='cuda:2'), covar=tensor([0.2119, 0.3387, 0.2528, 0.1966, 0.2179, 0.2790, 0.2277, 0.2984], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0095, 0.0112, 0.0093, 0.0119, 0.0095, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 20:21:15,096 INFO [finetune.py:976] (2/7) Epoch 17, batch 950, loss[loss=0.1812, simple_loss=0.2524, pruned_loss=0.05498, over 4816.00 frames. ], tot_loss[loss=0.1786, simple_loss=0.247, pruned_loss=0.05507, over 948525.59 frames. 
], batch size: 40, lr: 3.40e-03, grad_scale: 64.0 2023-03-26 20:21:16,912 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=92595.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:21:37,248 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=92611.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:21:54,783 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=92625.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:22:05,411 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=92637.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 20:22:13,800 INFO [finetune.py:976] (2/7) Epoch 17, batch 1000, loss[loss=0.2201, simple_loss=0.281, pruned_loss=0.07957, over 4906.00 frames. ], tot_loss[loss=0.1814, simple_loss=0.2499, pruned_loss=0.05651, over 948957.87 frames. ], batch size: 35, lr: 3.40e-03, grad_scale: 64.0 2023-03-26 20:22:20,426 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.492e+01 1.711e+02 2.074e+02 2.603e+02 6.251e+02, threshold=4.148e+02, percent-clipped=4.0 2023-03-26 20:22:27,817 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=92659.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:22:45,204 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=92685.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 20:22:45,836 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=92686.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:22:50,928 INFO [finetune.py:976] (2/7) Epoch 17, batch 1050, loss[loss=0.1559, simple_loss=0.2442, pruned_loss=0.03379, over 4903.00 frames. ], tot_loss[loss=0.1833, simple_loss=0.2529, pruned_loss=0.05685, over 951667.74 frames. ], batch size: 36, lr: 3.40e-03, grad_scale: 64.0 2023-03-26 20:23:02,262 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8504, 1.7898, 1.5962, 2.0360, 2.3447, 1.9766, 1.7246, 1.5248], device='cuda:2'), covar=tensor([0.2249, 0.1926, 0.1910, 0.1588, 0.1526, 0.1129, 0.2183, 0.1938], device='cuda:2'), in_proj_covar=tensor([0.0240, 0.0206, 0.0210, 0.0190, 0.0240, 0.0184, 0.0214, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:23:08,345 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=92720.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:23:23,714 INFO [finetune.py:976] (2/7) Epoch 17, batch 1100, loss[loss=0.2127, simple_loss=0.2888, pruned_loss=0.06831, over 4925.00 frames. ], tot_loss[loss=0.1835, simple_loss=0.2537, pruned_loss=0.05663, over 952880.45 frames. ], batch size: 38, lr: 3.40e-03, grad_scale: 64.0 2023-03-26 20:23:27,193 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.157e+02 1.694e+02 2.013e+02 2.338e+02 4.806e+02, threshold=4.026e+02, percent-clipped=2.0 2023-03-26 20:23:48,934 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=92781.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:23:57,181 INFO [finetune.py:976] (2/7) Epoch 17, batch 1150, loss[loss=0.1942, simple_loss=0.2603, pruned_loss=0.06403, over 4746.00 frames. ], tot_loss[loss=0.1841, simple_loss=0.2547, pruned_loss=0.0568, over 953802.46 frames. ], batch size: 54, lr: 3.40e-03, grad_scale: 64.0 2023-03-26 20:23:57,429 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. 
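The zipformer.py:1188 lines track per-stack stochastic layer dropping: even with batch_count around 92k, far past warmup_end of roughly 3333, an occasional num_to_drop=1 with layers_to_drop={0} or {1} still appears, so some residual drop probability evidently survives warmup. The schedule below is only a guess that reproduces that qualitative behaviour; the function name, the rates, and the linear annealing are all assumptions, not the actual zipformer code:

```python
# Sketch: whole-layer dropping with a warmup-dependent probability.
import random

def layers_to_drop(batch_count, num_layers,
                   warmup_begin, warmup_end, final_rate=0.05):
    if batch_count < warmup_begin:
        rate = 0.5                       # drop aggressively before warmup
    elif batch_count < warmup_end:       # anneal during the warmup window
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        rate = 0.5 * (1.0 - frac) + final_rate * frac
    else:
        rate = final_rate                # small residual rate afterwards
    return {i for i in range(num_layers) if random.random() < rate}

# Matches the shape of the logged records: usually set(), occasionally {0}.
print(layers_to_drop(92595.0, 4, warmup_begin=2666.7, warmup_end=3333.3))
```

Randomly bypassing whole layers acts as stochastic-depth regularization and also ensures every sub-stack learns to produce usable output on its own.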
limit=2.0 2023-03-26 20:24:22,952 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=92831.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:24:31,079 INFO [finetune.py:976] (2/7) Epoch 17, batch 1200, loss[loss=0.1657, simple_loss=0.2356, pruned_loss=0.04788, over 4858.00 frames. ], tot_loss[loss=0.183, simple_loss=0.2531, pruned_loss=0.05646, over 951729.10 frames. ], batch size: 49, lr: 3.40e-03, grad_scale: 64.0 2023-03-26 20:24:33,576 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-26 20:24:34,573 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.909e+01 1.547e+02 1.742e+02 2.125e+02 5.044e+02, threshold=3.483e+02, percent-clipped=2.0 2023-03-26 20:24:44,589 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3152, 1.2858, 1.2006, 1.3980, 1.6623, 1.5153, 1.3202, 1.2344], device='cuda:2'), covar=tensor([0.0340, 0.0298, 0.0593, 0.0303, 0.0173, 0.0448, 0.0342, 0.0356], device='cuda:2'), in_proj_covar=tensor([0.0095, 0.0108, 0.0143, 0.0113, 0.0099, 0.0107, 0.0098, 0.0109], device='cuda:2'), out_proj_covar=tensor([7.3516e-05, 8.3292e-05, 1.1331e-04, 8.7002e-05, 7.7080e-05, 7.9009e-05, 7.3698e-05, 8.3073e-05], device='cuda:2') 2023-03-26 20:25:03,693 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=92892.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:25:04,690 INFO [finetune.py:976] (2/7) Epoch 17, batch 1250, loss[loss=0.1692, simple_loss=0.2312, pruned_loss=0.05363, over 4856.00 frames. ], tot_loss[loss=0.1811, simple_loss=0.2504, pruned_loss=0.05587, over 952111.39 frames. ], batch size: 31, lr: 3.40e-03, grad_scale: 64.0 2023-03-26 20:25:08,423 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=92899.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:25:13,619 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6819, 1.6374, 1.5876, 1.6390, 1.3999, 3.7347, 1.4593, 1.8843], device='cuda:2'), covar=tensor([0.3272, 0.2545, 0.2181, 0.2357, 0.1632, 0.0162, 0.2548, 0.1277], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0114, 0.0120, 0.0122, 0.0113, 0.0096, 0.0096, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 20:25:29,751 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=92931.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:25:37,457 INFO [finetune.py:976] (2/7) Epoch 17, batch 1300, loss[loss=0.1492, simple_loss=0.2231, pruned_loss=0.03762, over 4941.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.2462, pruned_loss=0.05402, over 954503.54 frames. 
], batch size: 33, lr: 3.40e-03, grad_scale: 64.0 2023-03-26 20:25:41,344 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.030e+02 1.503e+02 1.790e+02 2.154e+02 4.064e+02, threshold=3.581e+02, percent-clipped=1.0 2023-03-26 20:25:49,699 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=92960.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:25:57,445 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6689, 1.6048, 1.5845, 1.5909, 1.2093, 3.7032, 1.4707, 1.9205], device='cuda:2'), covar=tensor([0.3360, 0.2503, 0.2155, 0.2407, 0.1798, 0.0158, 0.2527, 0.1327], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0123, 0.0113, 0.0096, 0.0096, 0.0096], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 20:25:59,957 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=92975.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:26:03,543 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=92981.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:26:10,761 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=92992.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:26:11,263 INFO [finetune.py:976] (2/7) Epoch 17, batch 1350, loss[loss=0.2093, simple_loss=0.2916, pruned_loss=0.0635, over 4825.00 frames. ], tot_loss[loss=0.18, simple_loss=0.2482, pruned_loss=0.05584, over 953657.25 frames. ], batch size: 40, lr: 3.40e-03, grad_scale: 64.0 2023-03-26 20:26:23,859 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=93010.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:26:51,527 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=93036.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:26:51,541 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=93036.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:26:52,255 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.07 vs. limit=5.0 2023-03-26 20:26:59,852 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9656, 1.0672, 1.9018, 1.9028, 1.7232, 1.6628, 1.7871, 1.8294], device='cuda:2'), covar=tensor([0.3152, 0.3621, 0.3030, 0.3117, 0.4395, 0.3495, 0.4021, 0.2972], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0239, 0.0257, 0.0271, 0.0269, 0.0244, 0.0282, 0.0238], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:27:00,928 INFO [finetune.py:976] (2/7) Epoch 17, batch 1400, loss[loss=0.1705, simple_loss=0.2594, pruned_loss=0.04075, over 4870.00 frames. ], tot_loss[loss=0.1813, simple_loss=0.2506, pruned_loss=0.05597, over 955455.85 frames. 
], batch size: 34, lr: 3.40e-03, grad_scale: 32.0 2023-03-26 20:27:08,967 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.085e+02 1.549e+02 1.883e+02 2.310e+02 4.523e+02, threshold=3.767e+02, percent-clipped=3.0 2023-03-26 20:27:34,717 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=93071.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:27:39,783 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93076.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:27:50,100 INFO [finetune.py:976] (2/7) Epoch 17, batch 1450, loss[loss=0.1822, simple_loss=0.2561, pruned_loss=0.05412, over 4853.00 frames. ], tot_loss[loss=0.1829, simple_loss=0.2531, pruned_loss=0.05636, over 954612.21 frames. ], batch size: 31, lr: 3.40e-03, grad_scale: 32.0 2023-03-26 20:27:53,122 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=93097.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 20:28:06,006 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=93115.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:28:17,831 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.3151, 1.2376, 1.2688, 0.7668, 1.2299, 1.4740, 1.5151, 1.2385], device='cuda:2'), covar=tensor([0.0738, 0.0520, 0.0515, 0.0419, 0.0461, 0.0456, 0.0276, 0.0516], device='cuda:2'), in_proj_covar=tensor([0.0124, 0.0151, 0.0123, 0.0127, 0.0130, 0.0128, 0.0142, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.1545e-05, 1.0941e-04, 8.8441e-05, 9.0325e-05, 9.1730e-05, 9.2104e-05, 1.0263e-04, 1.0613e-04], device='cuda:2') 2023-03-26 20:28:23,787 INFO [finetune.py:976] (2/7) Epoch 17, batch 1500, loss[loss=0.178, simple_loss=0.2545, pruned_loss=0.05071, over 4916.00 frames. ], tot_loss[loss=0.1854, simple_loss=0.2556, pruned_loss=0.05764, over 953217.14 frames. ], batch size: 37, lr: 3.40e-03, grad_scale: 32.0 2023-03-26 20:28:27,863 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.132e+02 1.651e+02 1.993e+02 2.270e+02 5.642e+02, threshold=3.987e+02, percent-clipped=1.0 2023-03-26 20:28:40,281 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3669, 2.3283, 1.8526, 2.5305, 2.2574, 2.0457, 2.7889, 2.3868], device='cuda:2'), covar=tensor([0.1207, 0.2266, 0.2666, 0.2368, 0.2213, 0.1410, 0.2851, 0.1577], device='cuda:2'), in_proj_covar=tensor([0.0183, 0.0189, 0.0234, 0.0253, 0.0245, 0.0202, 0.0213, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:28:47,130 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=93176.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:28:53,711 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93187.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:28:57,279 INFO [finetune.py:976] (2/7) Epoch 17, batch 1550, loss[loss=0.2042, simple_loss=0.2585, pruned_loss=0.07502, over 4782.00 frames. ], tot_loss[loss=0.1854, simple_loss=0.2552, pruned_loss=0.0578, over 954646.86 frames. 
], batch size: 26, lr: 3.40e-03, grad_scale: 32.0 2023-03-26 20:29:13,180 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0875, 2.1397, 2.1099, 1.4467, 2.1844, 2.2007, 2.2786, 1.8676], device='cuda:2'), covar=tensor([0.0604, 0.0626, 0.0747, 0.0888, 0.0600, 0.0764, 0.0552, 0.1055], device='cuda:2'), in_proj_covar=tensor([0.0134, 0.0135, 0.0143, 0.0124, 0.0125, 0.0142, 0.0143, 0.0166], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:29:30,932 INFO [finetune.py:976] (2/7) Epoch 17, batch 1600, loss[loss=0.1814, simple_loss=0.246, pruned_loss=0.05837, over 4819.00 frames. ], tot_loss[loss=0.1846, simple_loss=0.2537, pruned_loss=0.05781, over 952651.72 frames. ], batch size: 33, lr: 3.40e-03, grad_scale: 32.0 2023-03-26 20:29:34,595 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.059e+02 1.546e+02 1.807e+02 2.216e+02 3.989e+02, threshold=3.613e+02, percent-clipped=1.0 2023-03-26 20:29:38,723 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93255.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:29:47,635 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-26 20:29:57,411 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93281.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:30:01,028 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93287.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:30:04,613 INFO [finetune.py:976] (2/7) Epoch 17, batch 1650, loss[loss=0.1628, simple_loss=0.2262, pruned_loss=0.04968, over 4801.00 frames. ], tot_loss[loss=0.1837, simple_loss=0.2522, pruned_loss=0.05765, over 954929.92 frames. ], batch size: 25, lr: 3.40e-03, grad_scale: 32.0 2023-03-26 20:30:10,845 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1372, 1.7974, 1.8545, 1.0491, 2.0875, 2.2839, 1.9928, 1.7409], device='cuda:2'), covar=tensor([0.0959, 0.0812, 0.0663, 0.0701, 0.0507, 0.0665, 0.0546, 0.0716], device='cuda:2'), in_proj_covar=tensor([0.0125, 0.0151, 0.0124, 0.0127, 0.0131, 0.0128, 0.0143, 0.0148], device='cuda:2'), out_proj_covar=tensor([9.2006e-05, 1.0978e-04, 8.8585e-05, 9.0733e-05, 9.2222e-05, 9.2385e-05, 1.0295e-04, 1.0677e-04], device='cuda:2') 2023-03-26 20:30:13,739 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7691, 1.0497, 1.8768, 1.7062, 1.5572, 1.5001, 1.6264, 1.7680], device='cuda:2'), covar=tensor([0.3458, 0.3510, 0.2981, 0.3322, 0.4266, 0.3417, 0.4134, 0.2704], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0239, 0.0257, 0.0272, 0.0270, 0.0245, 0.0282, 0.0238], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:30:28,977 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=93329.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:30:30,704 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93331.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:30:38,310 INFO [finetune.py:976] (2/7) Epoch 17, batch 1700, loss[loss=0.152, simple_loss=0.2185, pruned_loss=0.04272, over 4757.00 frames. ], tot_loss[loss=0.1793, simple_loss=0.2478, pruned_loss=0.05542, over 955855.43 frames. 
], batch size: 27, lr: 3.40e-03, grad_scale: 32.0 2023-03-26 20:30:41,943 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.003e+02 1.487e+02 1.694e+02 2.142e+02 3.933e+02, threshold=3.388e+02, percent-clipped=2.0 2023-03-26 20:30:43,419 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.91 vs. limit=5.0 2023-03-26 20:30:53,340 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93366.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:31:00,359 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93376.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:31:11,617 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93392.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 20:31:12,186 INFO [finetune.py:976] (2/7) Epoch 17, batch 1750, loss[loss=0.2047, simple_loss=0.2746, pruned_loss=0.06745, over 4757.00 frames. ], tot_loss[loss=0.1827, simple_loss=0.2514, pruned_loss=0.05696, over 956306.50 frames. ], batch size: 54, lr: 3.40e-03, grad_scale: 32.0 2023-03-26 20:31:23,413 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-26 20:31:33,282 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=93424.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:31:48,780 INFO [finetune.py:976] (2/7) Epoch 17, batch 1800, loss[loss=0.1682, simple_loss=0.243, pruned_loss=0.04675, over 4763.00 frames. ], tot_loss[loss=0.1842, simple_loss=0.254, pruned_loss=0.05725, over 954496.49 frames. ], batch size: 26, lr: 3.40e-03, grad_scale: 32.0 2023-03-26 20:31:56,892 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.045e+02 1.552e+02 1.846e+02 2.179e+02 3.576e+02, threshold=3.692e+02, percent-clipped=3.0 2023-03-26 20:32:20,905 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93471.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:32:41,043 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93487.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:32:49,593 INFO [finetune.py:976] (2/7) Epoch 17, batch 1850, loss[loss=0.2131, simple_loss=0.2876, pruned_loss=0.06926, over 4827.00 frames. ], tot_loss[loss=0.1853, simple_loss=0.2552, pruned_loss=0.05764, over 956338.39 frames. 
], batch size: 47, lr: 3.40e-03, grad_scale: 32.0 2023-03-26 20:33:00,419 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6717, 0.7546, 1.6360, 1.6054, 1.4878, 1.4053, 1.5552, 1.5696], device='cuda:2'), covar=tensor([0.3297, 0.3462, 0.3029, 0.3019, 0.3909, 0.3177, 0.3714, 0.2764], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0239, 0.0257, 0.0271, 0.0269, 0.0245, 0.0281, 0.0238], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:33:20,424 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=93535.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:33:22,271 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7203, 1.4154, 2.0040, 1.8836, 1.6736, 3.5597, 1.4365, 1.6177], device='cuda:2'), covar=tensor([0.0904, 0.1935, 0.1047, 0.1021, 0.1566, 0.0209, 0.1584, 0.1754], device='cuda:2'), in_proj_covar=tensor([0.0074, 0.0081, 0.0073, 0.0078, 0.0091, 0.0080, 0.0084, 0.0078], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 20:33:26,141 INFO [finetune.py:976] (2/7) Epoch 17, batch 1900, loss[loss=0.1928, simple_loss=0.2609, pruned_loss=0.06231, over 4820.00 frames. ], tot_loss[loss=0.1871, simple_loss=0.2575, pruned_loss=0.05838, over 958998.09 frames. ], batch size: 39, lr: 3.40e-03, grad_scale: 32.0 2023-03-26 20:33:26,892 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9163, 2.5443, 2.4377, 1.2501, 2.5672, 2.0323, 2.0493, 2.2687], device='cuda:2'), covar=tensor([0.1025, 0.0777, 0.1672, 0.2010, 0.1593, 0.2247, 0.2006, 0.1080], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0195, 0.0199, 0.0184, 0.0212, 0.0207, 0.0224, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:33:30,350 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.241e+02 1.618e+02 1.925e+02 2.327e+02 3.543e+02, threshold=3.851e+02, percent-clipped=0.0 2023-03-26 20:33:33,505 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1592, 1.9324, 1.4692, 0.5252, 1.6946, 1.7098, 1.6369, 1.7817], device='cuda:2'), covar=tensor([0.0726, 0.0717, 0.1244, 0.1810, 0.1185, 0.2097, 0.2017, 0.0786], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0196, 0.0199, 0.0184, 0.0213, 0.0208, 0.0224, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:33:34,132 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93555.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:33:51,921 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.2742, 3.7077, 3.8923, 4.0797, 4.0148, 3.7790, 4.3736, 1.3972], device='cuda:2'), covar=tensor([0.0765, 0.0952, 0.0809, 0.0987, 0.1190, 0.1560, 0.0703, 0.5532], device='cuda:2'), in_proj_covar=tensor([0.0349, 0.0243, 0.0275, 0.0292, 0.0333, 0.0281, 0.0301, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:33:55,456 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93587.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:33:56,666 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8184, 1.7415, 1.6426, 1.7485, 1.4568, 4.0288, 1.7220, 1.9855], device='cuda:2'), covar=tensor([0.3077, 
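The attn_weights_entropy dumps report one value per attention head (eight per module here, matching nhead=8) and serve as a training-health diagnostic: entropy collapsing toward 0 means a head fixates on a single frame, while large values mean near-uniform, uninformative attention. A sketch of the computation, assuming attention weights already normalized over the key axis; the function name and tensor layout are assumptions:

```python
# Sketch: per-head entropy of attention distributions.
import torch

def attn_weights_entropy(attn_weights, eps=1e-20):
    # attn_weights: (num_heads, query_len, key_len), rows sum to 1.
    # -sum p log p over keys, then average over queries -> one value per head.
    ent = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
    return ent.mean(dim=-1)

weights = torch.softmax(torch.randn(8, 50, 50), dim=-1)
print(attn_weights_entropy(weights))   # tensor of 8 per-head entropies
```

The accompanying covar / in_proj_covar / out_proj_covar tensors are analogous running statistics of the attention projections, logged for the same diagnostic purpose.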
0.2212, 0.1992, 0.2076, 0.1512, 0.0150, 0.2377, 0.1200], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0115, 0.0119, 0.0122, 0.0112, 0.0095, 0.0096, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 20:33:59,455 INFO [finetune.py:976] (2/7) Epoch 17, batch 1950, loss[loss=0.1816, simple_loss=0.2468, pruned_loss=0.05819, over 4861.00 frames. ], tot_loss[loss=0.1843, simple_loss=0.2543, pruned_loss=0.0571, over 957582.43 frames. ], batch size: 31, lr: 3.40e-03, grad_scale: 32.0 2023-03-26 20:34:00,642 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.2826, 2.9225, 3.0374, 3.2017, 3.0603, 2.9006, 3.3296, 0.9363], device='cuda:2'), covar=tensor([0.1089, 0.0953, 0.1106, 0.1149, 0.1602, 0.1753, 0.1130, 0.5533], device='cuda:2'), in_proj_covar=tensor([0.0349, 0.0243, 0.0275, 0.0292, 0.0333, 0.0282, 0.0301, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:34:06,616 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=93603.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:34:24,559 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93631.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:34:27,388 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=93635.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:34:32,610 INFO [finetune.py:976] (2/7) Epoch 17, batch 2000, loss[loss=0.1784, simple_loss=0.2402, pruned_loss=0.05833, over 4792.00 frames. ], tot_loss[loss=0.1805, simple_loss=0.25, pruned_loss=0.05548, over 954984.16 frames. ], batch size: 51, lr: 3.40e-03, grad_scale: 32.0 2023-03-26 20:34:37,204 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.489e+01 1.528e+02 1.753e+02 2.103e+02 5.258e+02, threshold=3.506e+02, percent-clipped=1.0 2023-03-26 20:34:48,160 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93666.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:34:56,476 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=93679.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:35:05,806 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93692.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:35:06,328 INFO [finetune.py:976] (2/7) Epoch 17, batch 2050, loss[loss=0.1557, simple_loss=0.2273, pruned_loss=0.04209, over 4829.00 frames. ], tot_loss[loss=0.1777, simple_loss=0.2466, pruned_loss=0.05447, over 953206.64 frames. ], batch size: 40, lr: 3.40e-03, grad_scale: 32.0 2023-03-26 20:35:20,565 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=93714.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:35:37,805 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=93740.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:35:39,563 INFO [finetune.py:976] (2/7) Epoch 17, batch 2100, loss[loss=0.1815, simple_loss=0.2512, pruned_loss=0.05585, over 4691.00 frames. ], tot_loss[loss=0.1789, simple_loss=0.2471, pruned_loss=0.05531, over 954821.56 frames. 
], batch size: 23, lr: 3.40e-03, grad_scale: 32.0 2023-03-26 20:35:43,620 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.402e+01 1.568e+02 1.860e+02 2.232e+02 5.340e+02, threshold=3.720e+02, percent-clipped=4.0 2023-03-26 20:35:58,010 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=93770.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:35:58,585 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93771.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:36:02,400 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.42 vs. limit=2.0 2023-03-26 20:36:08,886 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6326, 1.5538, 1.4991, 1.6304, 1.1697, 3.3747, 1.3553, 1.8087], device='cuda:2'), covar=tensor([0.3479, 0.2740, 0.2294, 0.2537, 0.1937, 0.0268, 0.2567, 0.1314], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0123, 0.0113, 0.0096, 0.0096, 0.0096], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 20:36:09,965 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5868, 1.4314, 1.4326, 1.5659, 1.1974, 3.7261, 1.3231, 1.8317], device='cuda:2'), covar=tensor([0.3499, 0.2696, 0.2355, 0.2513, 0.1844, 0.0161, 0.2615, 0.1348], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0123, 0.0113, 0.0096, 0.0096, 0.0096], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 20:36:13,276 INFO [finetune.py:976] (2/7) Epoch 17, batch 2150, loss[loss=0.2486, simple_loss=0.317, pruned_loss=0.09014, over 4847.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2505, pruned_loss=0.05648, over 956524.00 frames. ], batch size: 47, lr: 3.39e-03, grad_scale: 32.0 2023-03-26 20:36:31,170 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=93819.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:36:38,631 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=93831.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:36:47,327 INFO [finetune.py:976] (2/7) Epoch 17, batch 2200, loss[loss=0.1744, simple_loss=0.2512, pruned_loss=0.04881, over 4809.00 frames. ], tot_loss[loss=0.1821, simple_loss=0.2516, pruned_loss=0.05628, over 956261.37 frames. ], batch size: 51, lr: 3.39e-03, grad_scale: 32.0 2023-03-26 20:36:51,478 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.198e+02 1.569e+02 1.869e+02 2.308e+02 4.137e+02, threshold=3.738e+02, percent-clipped=1.0 2023-03-26 20:37:36,156 INFO [finetune.py:976] (2/7) Epoch 17, batch 2250, loss[loss=0.156, simple_loss=0.2277, pruned_loss=0.04217, over 4707.00 frames. ], tot_loss[loss=0.1824, simple_loss=0.2521, pruned_loss=0.05637, over 955350.58 frames. ], batch size: 23, lr: 3.39e-03, grad_scale: 32.0 2023-03-26 20:38:05,917 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. limit=2.0 2023-03-26 20:38:30,051 INFO [finetune.py:976] (2/7) Epoch 17, batch 2300, loss[loss=0.1847, simple_loss=0.2637, pruned_loss=0.05287, over 4922.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.2516, pruned_loss=0.05592, over 953779.22 frames. 
], batch size: 42, lr: 3.39e-03, grad_scale: 32.0 2023-03-26 20:38:34,186 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.189e+02 1.601e+02 1.890e+02 2.328e+02 3.292e+02, threshold=3.781e+02, percent-clipped=0.0 2023-03-26 20:38:56,093 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=93981.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:39:03,825 INFO [finetune.py:976] (2/7) Epoch 17, batch 2350, loss[loss=0.1993, simple_loss=0.2584, pruned_loss=0.07015, over 4901.00 frames. ], tot_loss[loss=0.1814, simple_loss=0.2505, pruned_loss=0.05611, over 953251.59 frames. ], batch size: 32, lr: 3.39e-03, grad_scale: 32.0 2023-03-26 20:39:30,012 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7849, 1.3424, 0.8388, 1.7183, 2.2001, 1.4950, 1.5253, 1.5296], device='cuda:2'), covar=tensor([0.1558, 0.2207, 0.2100, 0.1240, 0.1915, 0.1955, 0.1611, 0.2063], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0095, 0.0111, 0.0092, 0.0119, 0.0095, 0.0098, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 20:39:37,888 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=94042.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:39:38,383 INFO [finetune.py:976] (2/7) Epoch 17, batch 2400, loss[loss=0.188, simple_loss=0.2636, pruned_loss=0.05622, over 4915.00 frames. ], tot_loss[loss=0.1796, simple_loss=0.2483, pruned_loss=0.05542, over 954587.86 frames. ], batch size: 37, lr: 3.39e-03, grad_scale: 32.0 2023-03-26 20:39:42,514 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.024e+02 1.475e+02 1.781e+02 2.110e+02 4.538e+02, threshold=3.563e+02, percent-clipped=2.0 2023-03-26 20:40:11,636 INFO [finetune.py:976] (2/7) Epoch 17, batch 2450, loss[loss=0.2191, simple_loss=0.2861, pruned_loss=0.07599, over 4846.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.2469, pruned_loss=0.05533, over 954485.40 frames. ], batch size: 47, lr: 3.39e-03, grad_scale: 32.0 2023-03-26 20:40:34,653 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=94126.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:40:45,406 INFO [finetune.py:976] (2/7) Epoch 17, batch 2500, loss[loss=0.1332, simple_loss=0.215, pruned_loss=0.02575, over 4779.00 frames. ], tot_loss[loss=0.1802, simple_loss=0.2484, pruned_loss=0.05602, over 955698.49 frames. ], batch size: 28, lr: 3.39e-03, grad_scale: 32.0 2023-03-26 20:40:49,550 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.530e+01 1.683e+02 1.909e+02 2.220e+02 4.342e+02, threshold=3.819e+02, percent-clipped=2.0 2023-03-26 20:40:53,946 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6332, 1.6028, 1.3416, 1.6078, 1.9608, 1.8936, 1.5483, 1.4042], device='cuda:2'), covar=tensor([0.0253, 0.0298, 0.0638, 0.0320, 0.0186, 0.0377, 0.0395, 0.0397], device='cuda:2'), in_proj_covar=tensor([0.0093, 0.0106, 0.0140, 0.0111, 0.0098, 0.0106, 0.0097, 0.0108], device='cuda:2'), out_proj_covar=tensor([7.2368e-05, 8.1799e-05, 1.1100e-04, 8.5425e-05, 7.6196e-05, 7.8359e-05, 7.2443e-05, 8.2143e-05], device='cuda:2') 2023-03-26 20:40:59,599 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.11 vs. limit=2.0 2023-03-26 20:41:18,601 INFO [finetune.py:976] (2/7) Epoch 17, batch 2550, loss[loss=0.2049, simple_loss=0.2859, pruned_loss=0.06192, over 4823.00 frames. 
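The scaling.py:679 lines compare a per-module whiteness metric against a limit (e.g. metric=1.11 vs. limit=2.0 just above), presumably intervening only when the limit is exceeded. One metric with exactly the logged properties, sketched here as an assumption rather than icefall's actual formula, is mean(lambda^2) / mean(lambda)^2 over the eigenvalues of the grouped channel covariance: it equals 1.0 for perfectly white features and grows when a few directions dominate:

```python
# Sketch: a whiteness metric over grouped channel covariances.
import torch

def whitening_metric(x, num_groups):
    # x: (num_frames, num_channels); split channels into groups, as in the
    # logged num_groups=8, num_channels=96 (12 channels per group).
    n, c = x.shape
    x = x.reshape(n, num_groups, c // num_groups).transpose(0, 1)
    cov = x.transpose(1, 2) @ x / n          # (num_groups, cpg, cpg)
    eig = torch.linalg.eigvalsh(cov)         # eigenvalues per group
    return ((eig ** 2).mean(dim=-1) / eig.mean(dim=-1) ** 2).mean()

x = torch.randn(1000, 96)                    # white input
print(whitening_metric(x, num_groups=8))     # close to 1.0, well under 2.0
```

A metric near 1 with a limit of 2 leaves the module untouched most of the time, which is why these lines appear only sporadically in the log.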
], tot_loss[loss=0.1832, simple_loss=0.2521, pruned_loss=0.05714, over 956387.76 frames. ], batch size: 40, lr: 3.39e-03, grad_scale: 32.0 2023-03-26 20:41:28,916 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9504, 1.4826, 0.8209, 1.8065, 2.3449, 1.4808, 1.5505, 1.7186], device='cuda:2'), covar=tensor([0.1286, 0.1861, 0.1909, 0.1097, 0.1665, 0.1838, 0.1398, 0.1789], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0095, 0.0110, 0.0092, 0.0119, 0.0095, 0.0098, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 20:41:38,788 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=94223.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 20:41:52,380 INFO [finetune.py:976] (2/7) Epoch 17, batch 2600, loss[loss=0.1673, simple_loss=0.2369, pruned_loss=0.04883, over 4813.00 frames. ], tot_loss[loss=0.184, simple_loss=0.2531, pruned_loss=0.05751, over 956838.67 frames. ], batch size: 38, lr: 3.39e-03, grad_scale: 32.0 2023-03-26 20:41:56,016 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.038e+02 1.647e+02 1.955e+02 2.265e+02 3.573e+02, threshold=3.911e+02, percent-clipped=0.0 2023-03-26 20:41:56,116 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7694, 1.4222, 0.7613, 1.6302, 2.3080, 1.4185, 1.3907, 1.6014], device='cuda:2'), covar=tensor([0.1499, 0.2237, 0.2256, 0.1353, 0.1714, 0.2023, 0.1675, 0.2192], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0095, 0.0110, 0.0092, 0.0119, 0.0095, 0.0098, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 20:42:01,553 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.65 vs. limit=2.0 2023-03-26 20:42:13,960 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0436, 1.7516, 2.4355, 1.6065, 2.2255, 2.2569, 1.6354, 2.4144], device='cuda:2'), covar=tensor([0.1312, 0.1928, 0.1304, 0.1940, 0.0852, 0.1535, 0.2976, 0.0803], device='cuda:2'), in_proj_covar=tensor([0.0193, 0.0203, 0.0189, 0.0189, 0.0176, 0.0212, 0.0216, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:42:19,695 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1780, 2.1181, 2.0790, 1.0575, 2.3691, 2.6129, 2.2459, 1.9501], device='cuda:2'), covar=tensor([0.0918, 0.0667, 0.0538, 0.0686, 0.0501, 0.0623, 0.0440, 0.0738], device='cuda:2'), in_proj_covar=tensor([0.0127, 0.0153, 0.0125, 0.0129, 0.0132, 0.0130, 0.0145, 0.0150], device='cuda:2'), out_proj_covar=tensor([9.3128e-05, 1.1091e-04, 8.9296e-05, 9.1824e-05, 9.3226e-05, 9.3535e-05, 1.0426e-04, 1.0797e-04], device='cuda:2') 2023-03-26 20:42:19,698 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=94284.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 20:42:24,402 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0 2023-03-26 20:42:25,432 INFO [finetune.py:976] (2/7) Epoch 17, batch 2650, loss[loss=0.1917, simple_loss=0.2636, pruned_loss=0.0599, over 4838.00 frames. ], tot_loss[loss=0.1835, simple_loss=0.2534, pruned_loss=0.05684, over 958283.60 frames. 
], batch size: 49, lr: 3.39e-03, grad_scale: 32.0 2023-03-26 20:43:12,615 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=94337.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:43:20,898 INFO [finetune.py:976] (2/7) Epoch 17, batch 2700, loss[loss=0.1763, simple_loss=0.2513, pruned_loss=0.05066, over 4750.00 frames. ], tot_loss[loss=0.1824, simple_loss=0.2523, pruned_loss=0.05624, over 955744.31 frames. ], batch size: 28, lr: 3.39e-03, grad_scale: 32.0 2023-03-26 20:43:24,106 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=94348.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:43:28,201 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.025e+02 1.514e+02 1.766e+02 2.145e+02 4.618e+02, threshold=3.532e+02, percent-clipped=2.0 2023-03-26 20:43:31,555 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.42 vs. limit=2.0 2023-03-26 20:44:10,179 INFO [finetune.py:976] (2/7) Epoch 17, batch 2750, loss[loss=0.1694, simple_loss=0.2299, pruned_loss=0.05446, over 4834.00 frames. ], tot_loss[loss=0.1822, simple_loss=0.2508, pruned_loss=0.05678, over 957370.14 frames. ], batch size: 30, lr: 3.39e-03, grad_scale: 32.0 2023-03-26 20:44:20,175 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=94409.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 20:44:31,923 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=94426.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:44:35,574 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6718, 1.5372, 2.2829, 3.5493, 2.4621, 2.5384, 1.4425, 2.8927], device='cuda:2'), covar=tensor([0.1709, 0.1485, 0.1231, 0.0502, 0.0735, 0.1386, 0.1451, 0.0448], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0116, 0.0134, 0.0166, 0.0101, 0.0137, 0.0125, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 20:44:43,082 INFO [finetune.py:976] (2/7) Epoch 17, batch 2800, loss[loss=0.1762, simple_loss=0.2434, pruned_loss=0.05453, over 4821.00 frames. ], tot_loss[loss=0.1797, simple_loss=0.2479, pruned_loss=0.05575, over 956227.61 frames. ], batch size: 38, lr: 3.39e-03, grad_scale: 32.0 2023-03-26 20:44:47,181 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.577e+01 1.587e+02 1.887e+02 2.313e+02 4.372e+02, threshold=3.775e+02, percent-clipped=5.0 2023-03-26 20:45:02,860 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=94474.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:45:16,199 INFO [finetune.py:976] (2/7) Epoch 17, batch 2850, loss[loss=0.1686, simple_loss=0.2515, pruned_loss=0.04278, over 4833.00 frames. ], tot_loss[loss=0.1792, simple_loss=0.2472, pruned_loss=0.05554, over 955841.67 frames. ], batch size: 39, lr: 3.39e-03, grad_scale: 32.0 2023-03-26 20:45:49,616 INFO [finetune.py:976] (2/7) Epoch 17, batch 2900, loss[loss=0.1791, simple_loss=0.2296, pruned_loss=0.06426, over 4167.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.251, pruned_loss=0.05699, over 954715.29 frames. 
], batch size: 18, lr: 3.39e-03, grad_scale: 32.0 2023-03-26 20:45:53,201 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.550e+02 1.801e+02 2.117e+02 3.911e+02, threshold=3.601e+02, percent-clipped=1.0 2023-03-26 20:46:12,367 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=94579.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 20:46:22,372 INFO [finetune.py:976] (2/7) Epoch 17, batch 2950, loss[loss=0.188, simple_loss=0.267, pruned_loss=0.05451, over 4761.00 frames. ], tot_loss[loss=0.1847, simple_loss=0.2541, pruned_loss=0.05767, over 955839.48 frames. ], batch size: 28, lr: 3.39e-03, grad_scale: 32.0 2023-03-26 20:46:36,964 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0 2023-03-26 20:46:46,976 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8653, 1.8379, 1.7082, 1.7779, 1.4836, 4.4640, 1.8411, 2.1299], device='cuda:2'), covar=tensor([0.3178, 0.2326, 0.2045, 0.2311, 0.1562, 0.0132, 0.2220, 0.1209], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0114, 0.0120, 0.0122, 0.0113, 0.0096, 0.0096, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 20:46:48,237 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.2660, 3.4927, 3.2402, 2.5544, 3.1573, 3.5313, 3.5152, 3.0270], device='cuda:2'), covar=tensor([0.0456, 0.0377, 0.0522, 0.0687, 0.0666, 0.0527, 0.0488, 0.0780], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0134, 0.0140, 0.0123, 0.0124, 0.0140, 0.0141, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:46:52,129 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=94637.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:46:56,158 INFO [finetune.py:976] (2/7) Epoch 17, batch 3000, loss[loss=0.1833, simple_loss=0.2573, pruned_loss=0.05461, over 4836.00 frames. ], tot_loss[loss=0.1861, simple_loss=0.2554, pruned_loss=0.05841, over 955727.59 frames. ], batch size: 49, lr: 3.39e-03, grad_scale: 32.0 2023-03-26 20:46:56,158 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 20:47:06,772 INFO [finetune.py:1010] (2/7) Epoch 17, validation: loss=0.1562, simple_loss=0.2257, pruned_loss=0.04335, over 2265189.00 frames. 2023-03-26 20:47:06,772 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB 2023-03-26 20:47:09,453 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-26 20:47:10,420 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.006e+02 1.605e+02 1.916e+02 2.337e+02 3.800e+02, threshold=3.832e+02, percent-clipped=2.0 2023-03-26 20:47:33,747 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=94685.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:47:38,998 INFO [finetune.py:976] (2/7) Epoch 17, batch 3050, loss[loss=0.1645, simple_loss=0.2481, pruned_loss=0.04042, over 4861.00 frames. ], tot_loss[loss=0.1861, simple_loss=0.2558, pruned_loss=0.05818, over 954617.80 frames. 
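Each validation pass above reports a single frame-weighted average over the whole dev set; the frame count is the identical 2265189.00 at every pass (here and at the epoch-17 start), unlike the decayed training counts. A sketch of that aggregation, in which validate, dev_loader, and compute_loss are hypothetical names standing in for the actual finetune.py plumbing:

```python
# Sketch: frame-weighted validation loss over a fixed dev set.
import torch

def validate(model, dev_loader, compute_loss):
    tot_loss, tot_frames = 0.0, 0.0
    model.eval()
    with torch.no_grad():
        for batch in dev_loader:
            # compute_loss returns the summed loss and frame count per batch
            loss, num_frames = compute_loss(model, batch)
            tot_loss += loss.item()
            tot_frames += num_frames
    model.train()
    return tot_loss / tot_frames, tot_frames   # e.g. 0.1562 over 2265189.00
```

Because the denominator is the full dev set every time, successive validation losses (0.1591 at the epoch start, 0.1562 here) are directly comparable across the run.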
2023-03-26 20:47:47,347 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=94704.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 20:48:19,139 INFO [finetune.py:976] (2/7) Epoch 17, batch 3100, loss[loss=0.1867, simple_loss=0.2551, pruned_loss=0.05915, over 4894.00 frames. ], tot_loss[loss=0.1837, simple_loss=0.2533, pruned_loss=0.05706, over 955263.66 frames. ], batch size: 32, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:48:27,686 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.157e+02 1.533e+02 1.793e+02 2.269e+02 8.706e+02, threshold=3.585e+02, percent-clipped=3.0
2023-03-26 20:49:17,517 INFO [finetune.py:976] (2/7) Epoch 17, batch 3150, loss[loss=0.1388, simple_loss=0.2158, pruned_loss=0.03088, over 4932.00 frames. ], tot_loss[loss=0.1799, simple_loss=0.2493, pruned_loss=0.0553, over 951734.43 frames. ], batch size: 33, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:49:38,938 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8749, 1.6373, 1.5277, 1.2579, 1.6137, 1.5577, 1.6008, 2.1676], device='cuda:2'), covar=tensor([0.3388, 0.3523, 0.2840, 0.3567, 0.3318, 0.2215, 0.3425, 0.1741], device='cuda:2'), in_proj_covar=tensor([0.0287, 0.0261, 0.0227, 0.0276, 0.0251, 0.0219, 0.0250, 0.0231], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 20:49:51,405 INFO [finetune.py:976] (2/7) Epoch 17, batch 3200, loss[loss=0.1801, simple_loss=0.2473, pruned_loss=0.0565, over 4900.00 frames. ], tot_loss[loss=0.1779, simple_loss=0.2468, pruned_loss=0.05449, over 950240.43 frames. ], batch size: 35, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:49:55,533 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.020e+02 1.463e+02 1.766e+02 2.027e+02 4.168e+02, threshold=3.532e+02, percent-clipped=1.0
2023-03-26 20:50:16,321 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=94879.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 20:50:25,250 INFO [finetune.py:976] (2/7) Epoch 17, batch 3250, loss[loss=0.2397, simple_loss=0.2969, pruned_loss=0.09121, over 4855.00 frames. ], tot_loss[loss=0.1804, simple_loss=0.2489, pruned_loss=0.05593, over 951701.85 frames. ], batch size: 49, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:50:48,424 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=94927.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 20:50:58,673 INFO [finetune.py:976] (2/7) Epoch 17, batch 3300, loss[loss=0.2151, simple_loss=0.2792, pruned_loss=0.07552, over 4897.00 frames. ], tot_loss[loss=0.1844, simple_loss=0.2533, pruned_loss=0.05778, over 954272.78 frames. ], batch size: 35, lr: 3.38e-03, grad_scale: 32.0
2023-03-26 20:51:02,381 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.215e+02 1.785e+02 2.188e+02 2.532e+02 5.228e+02, threshold=4.375e+02, percent-clipped=4.0
2023-03-26 20:51:16,199 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0
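The scaling.py:679 records compare a per-module whitening metric against a fixed limit (1.84 vs. 2.0 just above). One plausible formulation, assumed here rather than taken from icefall: for the channel covariance C, metric = mean(eig(C)^2) / mean(eig(C))^2, which is 1.0 when the channels are perfectly white and grows as the spectrum becomes lopsided:

    import torch

    # Assumed formulation of the whitening metric (not copied from scaling.py):
    # n * trace(C @ C) / trace(C)^2 for the channel covariance C, i.e. the
    # ratio E[eig^2] / E[eig]^2, which is minimized at 1.0 for a flat spectrum.
    def whitening_metric(x: torch.Tensor) -> torch.Tensor:
        x = x - x.mean(dim=0)               # x: (num_frames, num_channels)
        cov = (x.T @ x) / x.shape[0]
        n = cov.shape[0]
        return n * (cov * cov).sum() / cov.diagonal().sum() ** 2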
2023-03-26 20:51:16,625 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9018, 1.1931, 1.7482, 1.8220, 1.6101, 1.5869, 1.7090, 1.7083], device='cuda:2'), covar=tensor([0.3720, 0.3820, 0.3469, 0.3663, 0.5089, 0.3948, 0.4400, 0.3175], device='cuda:2'), in_proj_covar=tensor([0.0249, 0.0241, 0.0260, 0.0274, 0.0272, 0.0248, 0.0284, 0.0241], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 20:51:23,780 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5351, 1.4845, 1.4598, 1.4693, 1.1057, 3.0842, 1.2456, 1.5496], device='cuda:2'), covar=tensor([0.3254, 0.2397, 0.1992, 0.2233, 0.1757, 0.0254, 0.2865, 0.1271], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0115, 0.0120, 0.0122, 0.0113, 0.0096, 0.0096, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 20:51:32,705 INFO [finetune.py:976] (2/7) Epoch 17, batch 3350, loss[loss=0.1667, simple_loss=0.2433, pruned_loss=0.04509, over 4782.00 frames. ], tot_loss[loss=0.1855, simple_loss=0.2546, pruned_loss=0.0582, over 953219.47 frames. ], batch size: 26, lr: 3.38e-03, grad_scale: 32.0
2023-03-26 20:51:36,490 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5145, 3.3131, 3.1239, 1.6238, 3.4775, 2.6256, 0.7475, 2.3446], device='cuda:2'), covar=tensor([0.2716, 0.2103, 0.1845, 0.3433, 0.1188, 0.1067, 0.4411, 0.1616], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0176, 0.0159, 0.0129, 0.0159, 0.0124, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 20:51:40,210 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=95004.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:51:43,879 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0
2023-03-26 20:52:06,005 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8208, 1.5859, 1.4181, 1.2137, 1.5485, 1.5375, 1.5568, 2.1588], device='cuda:2'), covar=tensor([0.3778, 0.3709, 0.3278, 0.3752, 0.3770, 0.2330, 0.3481, 0.1674], device='cuda:2'), in_proj_covar=tensor([0.0287, 0.0261, 0.0228, 0.0277, 0.0251, 0.0219, 0.0251, 0.0231], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 20:52:06,489 INFO [finetune.py:976] (2/7) Epoch 17, batch 3400, loss[loss=0.1992, simple_loss=0.279, pruned_loss=0.05969, over 4840.00 frames. ], tot_loss[loss=0.1857, simple_loss=0.255, pruned_loss=0.05821, over 952542.32 frames. ], batch size: 47, lr: 3.38e-03, grad_scale: 64.0
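grad_scale in the loss records is the fp16 loss scale of mixed-precision training: it steps from 32.0 to 64.0 at batch 3400 above and drops back to 32.0 around batch 3950 further down, the usual grow-on-clean-steps, halve-on-overflow dynamic of torch.cuda.amp.GradScaler. A toy illustration (the growth interval and other parameter values are assumptions, not read from this run):

    import torch

    # Toy illustration of the GradScaler dynamic behind grad_scale 32 -> 64 -> 32:
    # the scale doubles after growth_interval overflow-free optimizer steps and
    # is halved whenever a step produces inf/nan gradients.
    scaler = torch.cuda.amp.GradScaler(
        init_scale=32.0, growth_factor=2.0, backoff_factor=0.5, growth_interval=2000
    )
    print(scaler.get_scale())  # 32.0 on a CUDA machine; 64.0 after enough clean updates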
2023-03-26 20:52:10,132 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.060e+02 1.553e+02 1.863e+02 2.086e+02 3.757e+02, threshold=3.727e+02, percent-clipped=0.0
2023-03-26 20:52:12,046 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=95052.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:52:16,218 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7386, 1.6084, 1.3874, 1.7883, 2.2165, 1.7891, 1.6242, 1.3648], device='cuda:2'), covar=tensor([0.2217, 0.2131, 0.2041, 0.1692, 0.1680, 0.1350, 0.2342, 0.2078], device='cuda:2'), in_proj_covar=tensor([0.0242, 0.0210, 0.0213, 0.0192, 0.0242, 0.0187, 0.0216, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 20:52:19,900 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9316, 1.6560, 2.1972, 1.3885, 2.0774, 2.1210, 1.5314, 2.2933], device='cuda:2'), covar=tensor([0.1332, 0.2139, 0.1577, 0.2254, 0.0915, 0.1617, 0.3189, 0.0803], device='cuda:2'), in_proj_covar=tensor([0.0193, 0.0204, 0.0190, 0.0190, 0.0177, 0.0212, 0.0217, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 20:52:28,686 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.46 vs. limit=5.0
2023-03-26 20:52:40,293 INFO [finetune.py:976] (2/7) Epoch 17, batch 3450, loss[loss=0.1852, simple_loss=0.2616, pruned_loss=0.05441, over 4800.00 frames. ], tot_loss[loss=0.186, simple_loss=0.2552, pruned_loss=0.0584, over 954528.45 frames. ], batch size: 45, lr: 3.38e-03, grad_scale: 64.0
2023-03-26 20:52:45,922 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1011, 2.1071, 1.6510, 2.0947, 2.0487, 1.7287, 2.3527, 2.1048], device='cuda:2'), covar=tensor([0.1264, 0.1942, 0.2762, 0.2336, 0.2355, 0.1645, 0.2908, 0.1605], device='cuda:2'), in_proj_covar=tensor([0.0183, 0.0188, 0.0234, 0.0252, 0.0244, 0.0202, 0.0212, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 20:52:47,678 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95105.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 20:53:10,931 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-26 20:53:13,090 INFO [finetune.py:976] (2/7) Epoch 17, batch 3500, loss[loss=0.1481, simple_loss=0.2297, pruned_loss=0.03326, over 4919.00 frames. ], tot_loss[loss=0.1835, simple_loss=0.2525, pruned_loss=0.05726, over 953794.30 frames.
], batch size: 37, lr: 3.38e-03, grad_scale: 64.0 2023-03-26 20:53:17,181 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.085e+02 1.619e+02 1.940e+02 2.279e+02 3.817e+02, threshold=3.880e+02, percent-clipped=1.0 2023-03-26 20:53:19,755 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1688, 1.3334, 1.4591, 0.7542, 1.3515, 1.6286, 1.6597, 1.3376], device='cuda:2'), covar=tensor([0.0872, 0.0571, 0.0551, 0.0513, 0.0510, 0.0595, 0.0394, 0.0657], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0152, 0.0125, 0.0128, 0.0132, 0.0130, 0.0145, 0.0149], device='cuda:2'), out_proj_covar=tensor([9.2802e-05, 1.1027e-04, 8.9423e-05, 9.1124e-05, 9.3166e-05, 9.3722e-05, 1.0435e-04, 1.0725e-04], device='cuda:2') 2023-03-26 20:53:35,210 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=95166.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 20:54:05,085 INFO [finetune.py:976] (2/7) Epoch 17, batch 3550, loss[loss=0.1588, simple_loss=0.2202, pruned_loss=0.04871, over 4766.00 frames. ], tot_loss[loss=0.1806, simple_loss=0.2493, pruned_loss=0.05599, over 954295.62 frames. ], batch size: 28, lr: 3.38e-03, grad_scale: 64.0 2023-03-26 20:54:51,060 INFO [finetune.py:976] (2/7) Epoch 17, batch 3600, loss[loss=0.1918, simple_loss=0.2698, pruned_loss=0.05685, over 4814.00 frames. ], tot_loss[loss=0.1779, simple_loss=0.2461, pruned_loss=0.05481, over 954493.58 frames. ], batch size: 41, lr: 3.38e-03, grad_scale: 64.0 2023-03-26 20:54:54,643 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.130e+02 1.580e+02 1.871e+02 2.182e+02 4.206e+02, threshold=3.742e+02, percent-clipped=1.0 2023-03-26 20:55:01,986 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95260.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:55:06,878 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95268.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:55:23,783 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.95 vs. limit=5.0 2023-03-26 20:55:24,757 INFO [finetune.py:976] (2/7) Epoch 17, batch 3650, loss[loss=0.1625, simple_loss=0.2251, pruned_loss=0.04998, over 4222.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.2466, pruned_loss=0.05476, over 951752.02 frames. ], batch size: 18, lr: 3.38e-03, grad_scale: 64.0 2023-03-26 20:55:29,794 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1211, 1.9660, 1.7258, 1.8994, 1.8484, 1.8743, 1.9251, 2.6719], device='cuda:2'), covar=tensor([0.3444, 0.4270, 0.3148, 0.3957, 0.4330, 0.2457, 0.3849, 0.1642], device='cuda:2'), in_proj_covar=tensor([0.0287, 0.0262, 0.0227, 0.0277, 0.0251, 0.0219, 0.0251, 0.0231], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:55:42,961 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=95321.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:55:48,262 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=95329.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:55:55,798 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95339.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:55:58,602 INFO [finetune.py:976] (2/7) Epoch 17, batch 3700, loss[loss=0.1801, simple_loss=0.259, pruned_loss=0.05056, over 4826.00 frames. ], tot_loss[loss=0.1806, simple_loss=0.2501, pruned_loss=0.05554, over 951261.12 frames. 
], batch size: 33, lr: 3.38e-03, grad_scale: 64.0 2023-03-26 20:56:02,229 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.130e+02 1.605e+02 1.907e+02 2.409e+02 3.957e+02, threshold=3.813e+02, percent-clipped=4.0 2023-03-26 20:56:31,736 INFO [finetune.py:976] (2/7) Epoch 17, batch 3750, loss[loss=0.2057, simple_loss=0.2749, pruned_loss=0.06828, over 4879.00 frames. ], tot_loss[loss=0.1827, simple_loss=0.2523, pruned_loss=0.05653, over 950921.64 frames. ], batch size: 32, lr: 3.38e-03, grad_scale: 64.0 2023-03-26 20:56:36,734 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=95400.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:56:57,097 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3067, 2.3018, 2.2592, 1.5848, 2.3355, 2.3927, 2.4239, 1.9144], device='cuda:2'), covar=tensor([0.0536, 0.0557, 0.0693, 0.0865, 0.0564, 0.0616, 0.0535, 0.0999], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0135, 0.0141, 0.0123, 0.0123, 0.0141, 0.0142, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:57:04,525 INFO [finetune.py:976] (2/7) Epoch 17, batch 3800, loss[loss=0.2166, simple_loss=0.2882, pruned_loss=0.07254, over 4800.00 frames. ], tot_loss[loss=0.1842, simple_loss=0.2539, pruned_loss=0.05727, over 951896.98 frames. ], batch size: 40, lr: 3.38e-03, grad_scale: 64.0 2023-03-26 20:57:09,546 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.108e+02 1.488e+02 1.737e+02 2.235e+02 4.648e+02, threshold=3.475e+02, percent-clipped=3.0 2023-03-26 20:57:13,983 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.47 vs. limit=5.0 2023-03-26 20:57:16,898 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=95461.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 20:57:25,265 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.54 vs. limit=2.0 2023-03-26 20:57:37,562 INFO [finetune.py:976] (2/7) Epoch 17, batch 3850, loss[loss=0.1623, simple_loss=0.2402, pruned_loss=0.04226, over 4911.00 frames. ], tot_loss[loss=0.1831, simple_loss=0.2526, pruned_loss=0.05678, over 950845.06 frames. ], batch size: 46, lr: 3.38e-03, grad_scale: 64.0 2023-03-26 20:57:41,183 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.96 vs. limit=2.0 2023-03-26 20:57:45,752 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8704, 1.8040, 1.9094, 1.1768, 1.9555, 1.9642, 1.9196, 1.5987], device='cuda:2'), covar=tensor([0.0616, 0.0669, 0.0697, 0.0941, 0.0660, 0.0716, 0.0599, 0.1159], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0135, 0.0140, 0.0123, 0.0123, 0.0140, 0.0142, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:58:04,404 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7907, 2.9960, 2.7269, 2.0799, 2.8218, 3.0653, 3.0652, 2.5028], device='cuda:2'), covar=tensor([0.0543, 0.0541, 0.0666, 0.0814, 0.0573, 0.0630, 0.0539, 0.0902], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0135, 0.0140, 0.0123, 0.0123, 0.0140, 0.0142, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:58:07,552 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.46 vs. 
limit=2.0 2023-03-26 20:58:10,766 INFO [finetune.py:976] (2/7) Epoch 17, batch 3900, loss[loss=0.1443, simple_loss=0.2133, pruned_loss=0.03766, over 4819.00 frames. ], tot_loss[loss=0.1829, simple_loss=0.2514, pruned_loss=0.05717, over 951999.23 frames. ], batch size: 38, lr: 3.38e-03, grad_scale: 64.0 2023-03-26 20:58:15,379 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.225e+02 1.550e+02 1.834e+02 2.229e+02 4.290e+02, threshold=3.669e+02, percent-clipped=3.0 2023-03-26 20:58:46,334 INFO [finetune.py:976] (2/7) Epoch 17, batch 3950, loss[loss=0.192, simple_loss=0.25, pruned_loss=0.06702, over 4766.00 frames. ], tot_loss[loss=0.1802, simple_loss=0.2484, pruned_loss=0.05603, over 954706.76 frames. ], batch size: 26, lr: 3.38e-03, grad_scale: 32.0 2023-03-26 20:59:04,355 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=95616.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:59:05,016 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95617.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:59:13,741 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=95624.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:59:28,598 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6834, 2.5271, 2.1786, 2.8600, 2.5990, 2.3211, 3.2089, 2.7371], device='cuda:2'), covar=tensor([0.1355, 0.2309, 0.2991, 0.2638, 0.2685, 0.1741, 0.2806, 0.1910], device='cuda:2'), in_proj_covar=tensor([0.0185, 0.0189, 0.0236, 0.0255, 0.0247, 0.0204, 0.0214, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 20:59:34,007 INFO [finetune.py:976] (2/7) Epoch 17, batch 4000, loss[loss=0.1859, simple_loss=0.2531, pruned_loss=0.05934, over 4871.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2466, pruned_loss=0.05511, over 955915.19 frames. ], batch size: 31, lr: 3.38e-03, grad_scale: 32.0 2023-03-26 20:59:42,363 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.075e+02 1.540e+02 1.979e+02 2.285e+02 3.877e+02, threshold=3.958e+02, percent-clipped=2.0 2023-03-26 21:00:12,635 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=95678.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:00:26,082 INFO [finetune.py:976] (2/7) Epoch 17, batch 4050, loss[loss=0.1783, simple_loss=0.2601, pruned_loss=0.04823, over 4760.00 frames. ], tot_loss[loss=0.1814, simple_loss=0.2497, pruned_loss=0.05654, over 953187.94 frames. ], batch size: 28, lr: 3.38e-03, grad_scale: 32.0 2023-03-26 21:00:27,351 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=95695.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:00:59,870 INFO [finetune.py:976] (2/7) Epoch 17, batch 4100, loss[loss=0.176, simple_loss=0.2531, pruned_loss=0.04944, over 4755.00 frames. ], tot_loss[loss=0.1829, simple_loss=0.2524, pruned_loss=0.05672, over 955826.30 frames. 
], batch size: 28, lr: 3.38e-03, grad_scale: 32.0 2023-03-26 21:01:00,011 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4885, 1.7129, 1.8166, 0.9672, 1.8088, 2.0361, 2.0148, 1.5708], device='cuda:2'), covar=tensor([0.0873, 0.0481, 0.0419, 0.0535, 0.0362, 0.0475, 0.0298, 0.0576], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0152, 0.0124, 0.0128, 0.0131, 0.0129, 0.0144, 0.0148], device='cuda:2'), out_proj_covar=tensor([9.2299e-05, 1.0998e-04, 8.8856e-05, 9.0874e-05, 9.2090e-05, 9.3112e-05, 1.0368e-04, 1.0674e-04], device='cuda:2') 2023-03-26 21:01:04,068 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.117e+02 1.600e+02 1.864e+02 2.304e+02 4.240e+02, threshold=3.729e+02, percent-clipped=2.0 2023-03-26 21:01:07,745 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7823, 1.6760, 1.6027, 1.6600, 1.3752, 3.8185, 1.6568, 2.2121], device='cuda:2'), covar=tensor([0.3395, 0.2591, 0.2113, 0.2246, 0.1603, 0.0174, 0.2246, 0.1133], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0114, 0.0119, 0.0122, 0.0112, 0.0095, 0.0095, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 21:01:08,941 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-26 21:01:12,306 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=95761.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 21:01:33,030 INFO [finetune.py:976] (2/7) Epoch 17, batch 4150, loss[loss=0.1897, simple_loss=0.2735, pruned_loss=0.05293, over 4922.00 frames. ], tot_loss[loss=0.1845, simple_loss=0.2539, pruned_loss=0.05759, over 953734.15 frames. ], batch size: 38, lr: 3.38e-03, grad_scale: 32.0 2023-03-26 21:01:44,397 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=95809.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 21:01:46,220 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95812.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 21:01:59,114 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6115, 1.5560, 1.3248, 1.6953, 1.9948, 1.6456, 1.2348, 1.3484], device='cuda:2'), covar=tensor([0.2105, 0.1877, 0.1843, 0.1677, 0.1548, 0.1202, 0.2448, 0.1859], device='cuda:2'), in_proj_covar=tensor([0.0241, 0.0208, 0.0212, 0.0191, 0.0241, 0.0186, 0.0215, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:02:06,757 INFO [finetune.py:976] (2/7) Epoch 17, batch 4200, loss[loss=0.1908, simple_loss=0.2754, pruned_loss=0.05313, over 4814.00 frames. ], tot_loss[loss=0.1848, simple_loss=0.2545, pruned_loss=0.05748, over 954007.45 frames. ], batch size: 38, lr: 3.38e-03, grad_scale: 32.0 2023-03-26 21:02:09,317 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95847.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:02:11,507 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.143e+02 1.538e+02 1.932e+02 2.354e+02 8.206e+02, threshold=3.863e+02, percent-clipped=2.0 2023-03-26 21:02:27,865 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=95873.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 21:02:39,921 INFO [finetune.py:976] (2/7) Epoch 17, batch 4250, loss[loss=0.1887, simple_loss=0.272, pruned_loss=0.05269, over 4825.00 frames. 
], tot_loss[loss=0.1828, simple_loss=0.2529, pruned_loss=0.05635, over 955037.17 frames. ], batch size: 38, lr: 3.38e-03, grad_scale: 32.0 2023-03-26 21:02:50,080 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=95908.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 21:02:55,853 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=95916.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:03:02,175 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=95924.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:03:13,535 INFO [finetune.py:976] (2/7) Epoch 17, batch 4300, loss[loss=0.1546, simple_loss=0.2395, pruned_loss=0.03483, over 4828.00 frames. ], tot_loss[loss=0.1797, simple_loss=0.2494, pruned_loss=0.05498, over 955483.05 frames. ], batch size: 30, lr: 3.38e-03, grad_scale: 32.0 2023-03-26 21:03:16,720 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-26 21:03:18,258 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.024e+02 1.465e+02 1.654e+02 2.123e+02 3.225e+02, threshold=3.308e+02, percent-clipped=0.0 2023-03-26 21:03:27,289 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=95964.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:03:29,025 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.7588, 4.1427, 4.3715, 4.5633, 4.5169, 4.1891, 4.8041, 1.7761], device='cuda:2'), covar=tensor([0.0644, 0.0820, 0.0789, 0.0848, 0.1026, 0.1476, 0.0636, 0.4752], device='cuda:2'), in_proj_covar=tensor([0.0350, 0.0244, 0.0276, 0.0292, 0.0335, 0.0281, 0.0301, 0.0294], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:03:33,598 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=95972.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:03:34,212 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=95973.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:03:39,479 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95980.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:03:47,254 INFO [finetune.py:976] (2/7) Epoch 17, batch 4350, loss[loss=0.161, simple_loss=0.2465, pruned_loss=0.03779, over 4786.00 frames. ], tot_loss[loss=0.1759, simple_loss=0.2455, pruned_loss=0.05317, over 954818.50 frames. ], batch size: 29, lr: 3.38e-03, grad_scale: 32.0 2023-03-26 21:03:48,551 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=95995.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:03:54,093 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-26 21:04:20,844 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=96041.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:04:21,010 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-26 21:04:21,942 INFO [finetune.py:976] (2/7) Epoch 17, batch 4400, loss[loss=0.1846, simple_loss=0.2628, pruned_loss=0.05315, over 4808.00 frames. ], tot_loss[loss=0.1778, simple_loss=0.2471, pruned_loss=0.05431, over 954287.70 frames. 
], batch size: 41, lr: 3.38e-03, grad_scale: 32.0 2023-03-26 21:04:22,001 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=96043.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:04:28,705 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.398e+01 1.490e+02 1.749e+02 2.200e+02 3.209e+02, threshold=3.497e+02, percent-clipped=0.0 2023-03-26 21:04:43,707 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3461, 1.3642, 1.6672, 1.6551, 1.4569, 3.0725, 1.2249, 1.4020], device='cuda:2'), covar=tensor([0.1005, 0.1862, 0.1246, 0.0983, 0.1634, 0.0240, 0.1609, 0.1849], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0078, 0.0092, 0.0081, 0.0085, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 21:04:59,720 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=96080.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:05:13,444 INFO [finetune.py:976] (2/7) Epoch 17, batch 4450, loss[loss=0.1848, simple_loss=0.2609, pruned_loss=0.05434, over 4903.00 frames. ], tot_loss[loss=0.1794, simple_loss=0.2493, pruned_loss=0.05475, over 954036.56 frames. ], batch size: 36, lr: 3.38e-03, grad_scale: 32.0 2023-03-26 21:05:58,452 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=96141.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:05:59,553 INFO [finetune.py:976] (2/7) Epoch 17, batch 4500, loss[loss=0.1659, simple_loss=0.2336, pruned_loss=0.04909, over 4883.00 frames. ], tot_loss[loss=0.1813, simple_loss=0.2518, pruned_loss=0.05544, over 954755.98 frames. ], batch size: 35, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:06:03,840 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.121e+02 1.724e+02 1.946e+02 2.358e+02 4.504e+02, threshold=3.891e+02, percent-clipped=3.0 2023-03-26 21:06:13,994 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=96165.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:06:15,769 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=96168.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 21:06:33,241 INFO [finetune.py:976] (2/7) Epoch 17, batch 4550, loss[loss=0.1925, simple_loss=0.2621, pruned_loss=0.06143, over 4913.00 frames. ], tot_loss[loss=0.1851, simple_loss=0.2552, pruned_loss=0.05755, over 954899.03 frames. ], batch size: 36, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:06:39,484 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=96203.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 21:06:54,562 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=96226.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:07:07,173 INFO [finetune.py:976] (2/7) Epoch 17, batch 4600, loss[loss=0.1667, simple_loss=0.2408, pruned_loss=0.04632, over 4898.00 frames. ], tot_loss[loss=0.1822, simple_loss=0.2527, pruned_loss=0.05584, over 953541.11 frames. 
], batch size: 35, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:07:11,422 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.099e+02 1.530e+02 1.886e+02 2.340e+02 4.335e+02, threshold=3.772e+02, percent-clipped=2.0 2023-03-26 21:07:26,464 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=96273.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:07:40,044 INFO [finetune.py:976] (2/7) Epoch 17, batch 4650, loss[loss=0.2216, simple_loss=0.2635, pruned_loss=0.08991, over 4815.00 frames. ], tot_loss[loss=0.1815, simple_loss=0.2511, pruned_loss=0.05593, over 955028.81 frames. ], batch size: 39, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:07:41,840 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0301, 1.8799, 1.7156, 1.7460, 1.7776, 1.6758, 1.8374, 2.4411], device='cuda:2'), covar=tensor([0.2934, 0.3236, 0.2574, 0.2633, 0.3069, 0.1965, 0.3096, 0.1338], device='cuda:2'), in_proj_covar=tensor([0.0285, 0.0260, 0.0226, 0.0276, 0.0251, 0.0218, 0.0251, 0.0231], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:07:58,450 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=96321.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:08:00,269 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=96324.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:08:08,003 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=96336.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:08:09,324 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1642, 1.3062, 1.4624, 0.7400, 1.3717, 1.6110, 1.6670, 1.3031], device='cuda:2'), covar=tensor([0.0938, 0.0589, 0.0516, 0.0529, 0.0430, 0.0604, 0.0341, 0.0687], device='cuda:2'), in_proj_covar=tensor([0.0125, 0.0151, 0.0123, 0.0127, 0.0130, 0.0129, 0.0143, 0.0148], device='cuda:2'), out_proj_covar=tensor([9.1666e-05, 1.0920e-04, 8.8144e-05, 9.0370e-05, 9.1200e-05, 9.2646e-05, 1.0254e-04, 1.0626e-04], device='cuda:2') 2023-03-26 21:08:13,200 INFO [finetune.py:976] (2/7) Epoch 17, batch 4700, loss[loss=0.1616, simple_loss=0.2266, pruned_loss=0.04823, over 4908.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.247, pruned_loss=0.0546, over 955612.66 frames. ], batch size: 36, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:08:18,315 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.127e+02 1.531e+02 1.882e+02 2.216e+02 4.319e+02, threshold=3.764e+02, percent-clipped=2.0 2023-03-26 21:08:40,431 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=96385.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:08:43,960 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=96390.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:08:45,651 INFO [finetune.py:976] (2/7) Epoch 17, batch 4750, loss[loss=0.1727, simple_loss=0.2474, pruned_loss=0.04905, over 4798.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.2452, pruned_loss=0.0541, over 956129.78 frames. ], batch size: 29, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:09:14,770 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=96436.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:09:19,420 INFO [finetune.py:976] (2/7) Epoch 17, batch 4800, loss[loss=0.1473, simple_loss=0.2156, pruned_loss=0.03949, over 4763.00 frames. 
], tot_loss[loss=0.1803, simple_loss=0.249, pruned_loss=0.05582, over 955529.79 frames. ], batch size: 26, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:09:25,020 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.112e+02 1.609e+02 1.875e+02 2.422e+02 6.864e+02, threshold=3.750e+02, percent-clipped=2.0 2023-03-26 21:09:25,771 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=96451.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:09:36,646 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=96468.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 21:09:37,970 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0 2023-03-26 21:09:54,757 INFO [finetune.py:976] (2/7) Epoch 17, batch 4850, loss[loss=0.221, simple_loss=0.2864, pruned_loss=0.07783, over 4847.00 frames. ], tot_loss[loss=0.1821, simple_loss=0.2518, pruned_loss=0.05624, over 954683.20 frames. ], batch size: 47, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:10:02,908 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=96503.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:10:15,682 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=96516.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 21:10:18,748 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=96521.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:10:37,460 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.0306, 2.8771, 3.3208, 2.2439, 3.2050, 3.4654, 2.5118, 3.4718], device='cuda:2'), covar=tensor([0.1116, 0.1497, 0.1238, 0.1938, 0.0723, 0.1182, 0.2209, 0.0663], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0203, 0.0188, 0.0188, 0.0175, 0.0211, 0.0216, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:10:45,276 INFO [finetune.py:976] (2/7) Epoch 17, batch 4900, loss[loss=0.1903, simple_loss=0.2623, pruned_loss=0.05914, over 4862.00 frames. ], tot_loss[loss=0.183, simple_loss=0.2527, pruned_loss=0.05666, over 954986.78 frames. ], batch size: 44, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:10:54,622 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.014e+02 1.592e+02 1.896e+02 2.164e+02 3.347e+02, threshold=3.792e+02, percent-clipped=0.0 2023-03-26 21:10:55,803 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=96551.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:11:04,399 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.3165, 3.7463, 3.9274, 4.1599, 4.0780, 3.8524, 4.3913, 1.4213], device='cuda:2'), covar=tensor([0.0795, 0.0862, 0.0862, 0.0871, 0.1150, 0.1514, 0.0695, 0.5677], device='cuda:2'), in_proj_covar=tensor([0.0352, 0.0246, 0.0278, 0.0293, 0.0336, 0.0283, 0.0301, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:11:26,121 INFO [finetune.py:976] (2/7) Epoch 17, batch 4950, loss[loss=0.1987, simple_loss=0.2652, pruned_loss=0.06613, over 4804.00 frames. ], tot_loss[loss=0.1838, simple_loss=0.2541, pruned_loss=0.05671, over 955776.62 frames. 
], batch size: 40, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:11:55,634 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=96636.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:11:59,761 INFO [finetune.py:976] (2/7) Epoch 17, batch 5000, loss[loss=0.1868, simple_loss=0.2543, pruned_loss=0.05961, over 4739.00 frames. ], tot_loss[loss=0.1821, simple_loss=0.2523, pruned_loss=0.05594, over 955995.00 frames. ], batch size: 59, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:12:03,013 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-26 21:12:04,410 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.088e+02 1.531e+02 1.819e+02 2.156e+02 3.437e+02, threshold=3.638e+02, percent-clipped=0.0 2023-03-26 21:12:19,963 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0 2023-03-26 21:12:24,599 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=96680.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:12:26,963 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=96684.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:12:33,329 INFO [finetune.py:976] (2/7) Epoch 17, batch 5050, loss[loss=0.1933, simple_loss=0.2544, pruned_loss=0.06608, over 4908.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.251, pruned_loss=0.05624, over 956825.23 frames. ], batch size: 36, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:13:01,804 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=96736.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:13:06,445 INFO [finetune.py:976] (2/7) Epoch 17, batch 5100, loss[loss=0.1798, simple_loss=0.2434, pruned_loss=0.05812, over 4208.00 frames. ], tot_loss[loss=0.1803, simple_loss=0.2486, pruned_loss=0.05598, over 956864.13 frames. ], batch size: 65, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:13:08,318 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=96746.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:13:10,589 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.091e+02 1.513e+02 1.848e+02 2.246e+02 3.685e+02, threshold=3.695e+02, percent-clipped=1.0 2023-03-26 21:13:28,937 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=96776.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:13:33,647 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=96784.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:13:39,089 INFO [finetune.py:976] (2/7) Epoch 17, batch 5150, loss[loss=0.1575, simple_loss=0.2227, pruned_loss=0.04611, over 4738.00 frames. ], tot_loss[loss=0.1803, simple_loss=0.2486, pruned_loss=0.05602, over 954664.49 frames. 
], batch size: 23, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:13:58,219 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=96821.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:13:59,854 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7946, 3.5243, 3.3832, 1.9500, 3.7443, 2.8454, 1.1432, 2.6755], device='cuda:2'), covar=tensor([0.2599, 0.1649, 0.1514, 0.2924, 0.0821, 0.0909, 0.3900, 0.1321], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0177, 0.0160, 0.0129, 0.0160, 0.0124, 0.0149, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 21:14:08,377 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=96837.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:14:11,883 INFO [finetune.py:976] (2/7) Epoch 17, batch 5200, loss[loss=0.1772, simple_loss=0.2487, pruned_loss=0.05289, over 4750.00 frames. ], tot_loss[loss=0.1823, simple_loss=0.2515, pruned_loss=0.05656, over 952625.83 frames. ], batch size: 54, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:14:16,595 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.064e+02 1.635e+02 1.977e+02 2.300e+02 5.939e+02, threshold=3.955e+02, percent-clipped=5.0 2023-03-26 21:14:29,997 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=96869.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:14:44,941 INFO [finetune.py:976] (2/7) Epoch 17, batch 5250, loss[loss=0.1937, simple_loss=0.2725, pruned_loss=0.05743, over 4934.00 frames. ], tot_loss[loss=0.1837, simple_loss=0.2536, pruned_loss=0.05687, over 953191.82 frames. ], batch size: 42, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:15:21,072 INFO [finetune.py:976] (2/7) Epoch 17, batch 5300, loss[loss=0.1624, simple_loss=0.2118, pruned_loss=0.05645, over 4140.00 frames. ], tot_loss[loss=0.1843, simple_loss=0.255, pruned_loss=0.05677, over 953314.39 frames. ], batch size: 18, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:15:30,005 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.125e+02 1.644e+02 1.959e+02 2.379e+02 3.599e+02, threshold=3.918e+02, percent-clipped=0.0 2023-03-26 21:16:00,353 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=96980.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:16:17,642 INFO [finetune.py:976] (2/7) Epoch 17, batch 5350, loss[loss=0.1928, simple_loss=0.254, pruned_loss=0.06579, over 4903.00 frames. ], tot_loss[loss=0.1839, simple_loss=0.2546, pruned_loss=0.05657, over 954032.29 frames. ], batch size: 37, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:16:25,310 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97000.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:16:44,613 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=97028.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:16:54,587 INFO [finetune.py:976] (2/7) Epoch 17, batch 5400, loss[loss=0.1683, simple_loss=0.2447, pruned_loss=0.04597, over 4819.00 frames. ], tot_loss[loss=0.182, simple_loss=0.2521, pruned_loss=0.05598, over 953669.10 frames. 
], batch size: 39, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:16:56,510 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=97046.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:16:58,800 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.012e+02 1.591e+02 1.860e+02 2.348e+02 4.043e+02, threshold=3.721e+02, percent-clipped=1.0 2023-03-26 21:17:06,262 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97061.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:17:27,397 INFO [finetune.py:976] (2/7) Epoch 17, batch 5450, loss[loss=0.1365, simple_loss=0.2053, pruned_loss=0.03383, over 4731.00 frames. ], tot_loss[loss=0.1796, simple_loss=0.2489, pruned_loss=0.05519, over 952324.12 frames. ], batch size: 59, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:17:28,069 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=97094.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:17:52,426 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=97132.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:18:00,547 INFO [finetune.py:976] (2/7) Epoch 17, batch 5500, loss[loss=0.1745, simple_loss=0.2392, pruned_loss=0.05488, over 4848.00 frames. ], tot_loss[loss=0.1774, simple_loss=0.246, pruned_loss=0.05441, over 953169.48 frames. ], batch size: 47, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:18:04,745 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.221e+01 1.533e+02 1.769e+02 2.042e+02 3.202e+02, threshold=3.539e+02, percent-clipped=0.0 2023-03-26 21:18:33,719 INFO [finetune.py:976] (2/7) Epoch 17, batch 5550, loss[loss=0.2189, simple_loss=0.2967, pruned_loss=0.07048, over 4815.00 frames. ], tot_loss[loss=0.1775, simple_loss=0.2463, pruned_loss=0.05438, over 953274.79 frames. ], batch size: 40, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:18:57,106 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4703, 1.3984, 1.3098, 1.4712, 1.7003, 1.5981, 1.4185, 1.2850], device='cuda:2'), covar=tensor([0.0307, 0.0314, 0.0600, 0.0302, 0.0234, 0.0437, 0.0324, 0.0411], device='cuda:2'), in_proj_covar=tensor([0.0096, 0.0108, 0.0144, 0.0112, 0.0100, 0.0108, 0.0099, 0.0110], device='cuda:2'), out_proj_covar=tensor([7.4698e-05, 8.3219e-05, 1.1373e-04, 8.6476e-05, 7.8180e-05, 8.0182e-05, 7.3802e-05, 8.3801e-05], device='cuda:2') 2023-03-26 21:19:04,081 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3595, 2.3422, 1.7158, 2.5612, 2.3147, 1.9265, 2.8802, 2.2850], device='cuda:2'), covar=tensor([0.1220, 0.2080, 0.3153, 0.2482, 0.2623, 0.1689, 0.3173, 0.1761], device='cuda:2'), in_proj_covar=tensor([0.0185, 0.0188, 0.0235, 0.0255, 0.0247, 0.0203, 0.0215, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:19:05,109 INFO [finetune.py:976] (2/7) Epoch 17, batch 5600, loss[loss=0.1917, simple_loss=0.2738, pruned_loss=0.05474, over 4903.00 frames. ], tot_loss[loss=0.1812, simple_loss=0.251, pruned_loss=0.05568, over 953792.63 frames. ], batch size: 43, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:19:08,732 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. 
limit=2.0 2023-03-26 21:19:09,079 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.154e+02 1.641e+02 1.914e+02 2.406e+02 4.422e+02, threshold=3.827e+02, percent-clipped=1.0 2023-03-26 21:19:14,868 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97260.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:19:34,291 INFO [finetune.py:976] (2/7) Epoch 17, batch 5650, loss[loss=0.1994, simple_loss=0.2755, pruned_loss=0.06167, over 4824.00 frames. ], tot_loss[loss=0.1838, simple_loss=0.2544, pruned_loss=0.05657, over 953027.39 frames. ], batch size: 33, lr: 3.37e-03, grad_scale: 32.0 2023-03-26 21:19:36,130 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0047, 1.8976, 2.3450, 3.9512, 2.7273, 2.6180, 0.7653, 3.2829], device='cuda:2'), covar=tensor([0.1710, 0.1377, 0.1387, 0.0471, 0.0734, 0.1600, 0.2074, 0.0401], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0116, 0.0134, 0.0164, 0.0101, 0.0136, 0.0124, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 21:19:51,456 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97321.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:19:51,465 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2452, 2.1188, 2.0439, 2.4699, 2.9101, 2.5174, 2.1613, 1.9890], device='cuda:2'), covar=tensor([0.2098, 0.1890, 0.1747, 0.1532, 0.1381, 0.0991, 0.2026, 0.1828], device='cuda:2'), in_proj_covar=tensor([0.0242, 0.0208, 0.0212, 0.0192, 0.0241, 0.0187, 0.0216, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:20:04,108 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97342.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:20:04,606 INFO [finetune.py:976] (2/7) Epoch 17, batch 5700, loss[loss=0.1544, simple_loss=0.2099, pruned_loss=0.04944, over 3824.00 frames. ], tot_loss[loss=0.1816, simple_loss=0.2511, pruned_loss=0.05604, over 935090.87 frames. ], batch size: 16, lr: 3.36e-03, grad_scale: 32.0 2023-03-26 21:20:07,062 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97347.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:20:08,735 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.528e+01 1.533e+02 1.739e+02 2.212e+02 3.283e+02, threshold=3.478e+02, percent-clipped=0.0 2023-03-26 21:20:12,305 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=97356.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:20:18,376 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-26 21:20:35,933 INFO [finetune.py:976] (2/7) Epoch 18, batch 0, loss[loss=0.1294, simple_loss=0.2066, pruned_loss=0.02607, over 4746.00 frames. ], tot_loss[loss=0.1294, simple_loss=0.2066, pruned_loss=0.02607, over 4746.00 frames. ], batch size: 27, lr: 3.36e-03, grad_scale: 32.0 2023-03-26 21:20:35,933 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 21:20:46,783 INFO [finetune.py:1010] (2/7) Epoch 18, validation: loss=0.1584, simple_loss=0.2281, pruned_loss=0.0444, over 2265189.00 frames. 
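Validation records such as the one just above report a frame-weighted average over the full dev set, which is why the frame count (2265189.00) is identical for every validation pass in this log; the same 0.5 * simple_loss + pruned_loss composition also holds here (0.5 * 0.2281 + 0.0444 ≈ 0.1584). A minimal sketch of the weighting, with hypothetical names:

    # Frame-weighted averaging over validation batches (hypothetical helper):
    # each batch contributes loss * num_frames, and the divisor is the total
    # frame count, logged as "over 2265189.00 frames."
    def frame_weighted_average(batches):
        tot_loss = sum(loss * n for loss, n in batches)
        tot_frames = sum(n for _, n in batches)
        return tot_loss / tot_frames, tot_frames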
2023-03-26 21:20:46,783 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB 2023-03-26 21:20:49,163 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97374.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:21:26,103 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97403.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 21:21:30,206 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97408.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:21:47,681 INFO [finetune.py:976] (2/7) Epoch 18, batch 50, loss[loss=0.2104, simple_loss=0.279, pruned_loss=0.07088, over 4823.00 frames. ], tot_loss[loss=0.1848, simple_loss=0.2551, pruned_loss=0.05726, over 217438.77 frames. ], batch size: 39, lr: 3.36e-03, grad_scale: 32.0 2023-03-26 21:21:58,964 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=97432.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:22:00,772 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97435.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:22:09,808 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.163e+01 1.563e+02 1.902e+02 2.308e+02 3.615e+02, threshold=3.804e+02, percent-clipped=1.0 2023-03-26 21:22:25,130 INFO [finetune.py:976] (2/7) Epoch 18, batch 100, loss[loss=0.1382, simple_loss=0.2101, pruned_loss=0.03315, over 4795.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.2463, pruned_loss=0.05497, over 378599.49 frames. ], batch size: 51, lr: 3.36e-03, grad_scale: 32.0 2023-03-26 21:22:31,666 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=97480.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:22:33,609 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.53 vs. limit=5.0 2023-03-26 21:22:35,375 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.70 vs. limit=5.0 2023-03-26 21:22:40,275 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0 2023-03-26 21:22:58,722 INFO [finetune.py:976] (2/7) Epoch 18, batch 150, loss[loss=0.1643, simple_loss=0.2232, pruned_loss=0.05271, over 4347.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2423, pruned_loss=0.05358, over 506877.77 frames. ], batch size: 65, lr: 3.36e-03, grad_scale: 32.0 2023-03-26 21:23:13,606 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7385, 4.1441, 3.8277, 2.1384, 4.2114, 3.1598, 1.0331, 3.0865], device='cuda:2'), covar=tensor([0.2275, 0.1999, 0.1370, 0.3195, 0.0995, 0.0938, 0.4313, 0.1280], device='cuda:2'), in_proj_covar=tensor([0.0150, 0.0175, 0.0158, 0.0128, 0.0158, 0.0122, 0.0146, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 21:23:17,209 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.032e+02 1.559e+02 1.792e+02 2.227e+02 6.409e+02, threshold=3.584e+02, percent-clipped=2.0 2023-03-26 21:23:20,342 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97555.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:23:32,346 INFO [finetune.py:976] (2/7) Epoch 18, batch 200, loss[loss=0.1342, simple_loss=0.2001, pruned_loss=0.03415, over 4888.00 frames. ], tot_loss[loss=0.1765, simple_loss=0.2436, pruned_loss=0.05477, over 607340.92 frames. 
], batch size: 32, lr: 3.36e-03, grad_scale: 32.0 2023-03-26 21:23:43,738 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5485, 1.2590, 0.8236, 1.5624, 1.9825, 1.1303, 1.3360, 1.5744], device='cuda:2'), covar=tensor([0.1510, 0.1895, 0.1885, 0.1136, 0.1983, 0.2094, 0.1443, 0.1768], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0095, 0.0110, 0.0092, 0.0119, 0.0094, 0.0098, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 21:24:01,854 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=97616.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:24:01,914 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97616.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:24:05,307 INFO [finetune.py:976] (2/7) Epoch 18, batch 250, loss[loss=0.2151, simple_loss=0.2829, pruned_loss=0.07361, over 4859.00 frames. ], tot_loss[loss=0.1801, simple_loss=0.2483, pruned_loss=0.05591, over 686160.22 frames. ], batch size: 31, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:24:16,418 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4028, 1.4458, 1.9768, 2.9445, 1.9167, 2.1119, 1.1491, 2.4244], device='cuda:2'), covar=tensor([0.1850, 0.1451, 0.1233, 0.0627, 0.0875, 0.1510, 0.1603, 0.0579], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0134, 0.0164, 0.0100, 0.0136, 0.0123, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 21:24:18,255 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6266, 1.2401, 0.8616, 1.5650, 2.1117, 1.2041, 1.3799, 1.5733], device='cuda:2'), covar=tensor([0.1563, 0.2157, 0.2000, 0.1213, 0.1966, 0.2125, 0.1603, 0.2002], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0095, 0.0110, 0.0092, 0.0119, 0.0094, 0.0098, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 21:24:24,122 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.114e+02 1.615e+02 1.960e+02 2.417e+02 4.168e+02, threshold=3.921e+02, percent-clipped=3.0 2023-03-26 21:24:27,913 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=97656.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:24:37,889 INFO [finetune.py:976] (2/7) Epoch 18, batch 300, loss[loss=0.2007, simple_loss=0.2716, pruned_loss=0.06495, over 4742.00 frames. ], tot_loss[loss=0.1808, simple_loss=0.2507, pruned_loss=0.05548, over 745113.20 frames. ], batch size: 27, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:24:40,277 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97674.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 21:24:56,232 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=97698.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 21:24:59,261 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=97703.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:24:59,859 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=97704.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:25:10,594 INFO [finetune.py:976] (2/7) Epoch 18, batch 350, loss[loss=0.1828, simple_loss=0.2548, pruned_loss=0.05544, over 4889.00 frames. ], tot_loss[loss=0.1827, simple_loss=0.2529, pruned_loss=0.05624, over 792952.91 frames. 
], batch size: 35, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:25:17,545 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=97730.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:25:20,600 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97735.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 21:25:28,926 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3974, 2.1454, 1.7973, 2.1484, 2.1417, 1.9969, 2.5352, 2.3257], device='cuda:2'), covar=tensor([0.1306, 0.2279, 0.3317, 0.2841, 0.2738, 0.1712, 0.3786, 0.1861], device='cuda:2'), in_proj_covar=tensor([0.0184, 0.0188, 0.0235, 0.0254, 0.0246, 0.0203, 0.0214, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:25:30,595 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.123e+02 1.611e+02 1.882e+02 2.387e+02 3.928e+02, threshold=3.763e+02, percent-clipped=1.0 2023-03-26 21:25:43,300 INFO [finetune.py:976] (2/7) Epoch 18, batch 400, loss[loss=0.1805, simple_loss=0.2446, pruned_loss=0.05818, over 4797.00 frames. ], tot_loss[loss=0.182, simple_loss=0.2532, pruned_loss=0.05543, over 830035.89 frames. ], batch size: 25, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:25:46,127 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2299, 1.3821, 0.8730, 2.1026, 2.5469, 1.8275, 1.7648, 2.0306], device='cuda:2'), covar=tensor([0.1435, 0.2070, 0.2022, 0.1157, 0.1738, 0.1882, 0.1446, 0.1874], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0095, 0.0110, 0.0092, 0.0119, 0.0094, 0.0098, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 21:25:52,708 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.85 vs. limit=5.0 2023-03-26 21:26:07,823 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-26 21:26:22,831 INFO [finetune.py:976] (2/7) Epoch 18, batch 450, loss[loss=0.1881, simple_loss=0.2564, pruned_loss=0.05992, over 4753.00 frames. ], tot_loss[loss=0.1796, simple_loss=0.2502, pruned_loss=0.05455, over 857468.03 frames. ], batch size: 27, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:26:52,266 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1443, 1.9228, 1.6924, 1.7999, 2.1012, 1.8693, 2.2859, 2.1323], device='cuda:2'), covar=tensor([0.1362, 0.2186, 0.3041, 0.2679, 0.2619, 0.1680, 0.3253, 0.1806], device='cuda:2'), in_proj_covar=tensor([0.0183, 0.0187, 0.0234, 0.0253, 0.0244, 0.0201, 0.0213, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:27:01,042 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.011e+02 1.519e+02 1.772e+02 2.128e+02 3.513e+02, threshold=3.544e+02, percent-clipped=0.0 2023-03-26 21:27:16,934 INFO [finetune.py:976] (2/7) Epoch 18, batch 500, loss[loss=0.161, simple_loss=0.2346, pruned_loss=0.04372, over 4806.00 frames. ], tot_loss[loss=0.1794, simple_loss=0.2492, pruned_loss=0.05483, over 879729.23 frames. ], batch size: 51, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:27:29,764 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97888.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:27:30,630 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. 
limit=2.0 2023-03-26 21:27:44,610 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=97911.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:27:47,674 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=97916.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:27:50,635 INFO [finetune.py:976] (2/7) Epoch 18, batch 550, loss[loss=0.1978, simple_loss=0.2562, pruned_loss=0.06974, over 4230.00 frames. ], tot_loss[loss=0.1776, simple_loss=0.2465, pruned_loss=0.05438, over 896640.89 frames. ], batch size: 66, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:28:19,283 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97949.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:28:19,731 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.294e+01 1.530e+02 1.816e+02 2.060e+02 3.951e+02, threshold=3.633e+02, percent-clipped=3.0 2023-03-26 21:28:28,265 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=97964.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:28:32,533 INFO [finetune.py:976] (2/7) Epoch 18, batch 600, loss[loss=0.2438, simple_loss=0.3151, pruned_loss=0.08628, over 4157.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.2473, pruned_loss=0.05439, over 908407.87 frames. ], batch size: 65, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:28:51,974 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=97998.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:28:56,782 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=98003.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:29:07,659 INFO [finetune.py:976] (2/7) Epoch 18, batch 650, loss[loss=0.1951, simple_loss=0.2436, pruned_loss=0.07325, over 4783.00 frames. ], tot_loss[loss=0.1812, simple_loss=0.2506, pruned_loss=0.05588, over 917314.55 frames. ], batch size: 26, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:29:13,277 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=98030.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 21:29:13,309 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=98030.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:29:25,507 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=98046.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:29:28,301 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.027e+02 1.543e+02 1.878e+02 2.128e+02 3.672e+02, threshold=3.757e+02, percent-clipped=1.0 2023-03-26 21:29:28,990 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=98051.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:29:41,536 INFO [finetune.py:976] (2/7) Epoch 18, batch 700, loss[loss=0.1934, simple_loss=0.2657, pruned_loss=0.06051, over 4827.00 frames. ], tot_loss[loss=0.1821, simple_loss=0.2529, pruned_loss=0.05565, over 926134.85 frames. 
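
The zipformer.py:1188 lines track, per encoder stack, a warmup window in batches plus whether any layers are stochastically bypassed on this batch (num_to_drop, layers_to_drop). Most batches drop nothing; occasionally a single layer is skipped, a LayerDrop-style regularizer. A toy version of the selection step, with the drop probability and schedule invented purely for illustration:

    import random

    def pick_layers_to_drop(num_layers, batch_count, warmup_end,
                            drop_prob=0.05, rng=random):
        # Nothing is dropped inside the warmup window; afterwards each
        # layer is skipped independently with small probability, so
        # layers_to_drop is the empty set on most batches.
        if batch_count < warmup_end:
            return set()
        return {i for i in range(num_layers) if rng.random() < drop_prob}

    # Mimic one logged line for a 4-layer stack well past its warmup:
    to_drop = pick_layers_to_drop(4, batch_count=97674.0, warmup_end=3333.3)
    print(f"num_to_drop={len(to_drop)}, layers_to_drop={to_drop}")
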
], batch size: 25, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:29:45,870 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=98078.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:30:05,718 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=98106.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:30:10,998 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3196, 2.8350, 2.6449, 1.2045, 2.9466, 2.1291, 0.8906, 1.8784], device='cuda:2'), covar=tensor([0.2494, 0.2173, 0.1809, 0.3387, 0.1462, 0.1178, 0.3660, 0.1664], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0177, 0.0159, 0.0128, 0.0160, 0.0124, 0.0148, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 21:30:14,136 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.6012, 1.5648, 1.5662, 0.9282, 1.7382, 1.8656, 1.9210, 1.3671], device='cuda:2'), covar=tensor([0.0817, 0.0604, 0.0526, 0.0532, 0.0466, 0.0577, 0.0329, 0.0727], device='cuda:2'), in_proj_covar=tensor([0.0125, 0.0151, 0.0124, 0.0127, 0.0130, 0.0129, 0.0143, 0.0148], device='cuda:2'), out_proj_covar=tensor([9.1361e-05, 1.0967e-04, 8.8496e-05, 9.0323e-05, 9.1903e-05, 9.2905e-05, 1.0280e-04, 1.0652e-04], device='cuda:2') 2023-03-26 21:30:15,235 INFO [finetune.py:976] (2/7) Epoch 18, batch 750, loss[loss=0.1813, simple_loss=0.2556, pruned_loss=0.05348, over 4229.00 frames. ], tot_loss[loss=0.1829, simple_loss=0.2537, pruned_loss=0.05606, over 933071.40 frames. ], batch size: 65, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:30:34,724 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.151e+02 1.501e+02 1.785e+02 2.309e+02 4.193e+02, threshold=3.569e+02, percent-clipped=2.0 2023-03-26 21:30:46,063 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=98167.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:30:48,361 INFO [finetune.py:976] (2/7) Epoch 18, batch 800, loss[loss=0.1657, simple_loss=0.2301, pruned_loss=0.0507, over 4933.00 frames. ], tot_loss[loss=0.1812, simple_loss=0.2518, pruned_loss=0.05536, over 937965.46 frames. ], batch size: 38, lr: 3.36e-03, grad_scale: 32.0 2023-03-26 21:31:15,635 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=98211.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:31:22,189 INFO [finetune.py:976] (2/7) Epoch 18, batch 850, loss[loss=0.1715, simple_loss=0.2484, pruned_loss=0.04732, over 4869.00 frames. ], tot_loss[loss=0.1793, simple_loss=0.2493, pruned_loss=0.05462, over 941238.70 frames. 
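
Note grad_scale halving from 64.0 to 32.0 between batches 750 and 800 above (and again to 16.0 around batch 2650 later in this stretch): with fp16 training, the dynamic loss scaler halves its scale whenever a batch produces inf/nan gradients and silently skips that optimizer step. The general mechanism is the standard torch.cuda.amp one, sketched below in generic form; the recipe's own optimizer wrapping may differ in detail:

    import torch
    from torch.cuda.amp import GradScaler, autocast

    model = torch.nn.Linear(80, 500)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.004)
    scaler = GradScaler(enabled=torch.cuda.is_available())

    for _ in range(3):
        x = torch.randn(8, 80)
        with autocast(enabled=torch.cuda.is_available()):
            loss = model(x).pow(2).mean()
        scaler.scale(loss).backward()   # backward on the scaled loss
        scaler.step(optimizer)          # skipped if inf/nan gradients found
        scaler.update()                 # halves the scale after an overflow
        optimizer.zero_grad()
        print("grad_scale:", scaler.get_scale())
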
], batch size: 31, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:31:45,825 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=98244.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:31:55,530 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.155e+02 1.511e+02 1.794e+02 2.111e+02 3.360e+02, threshold=3.589e+02, percent-clipped=0.0 2023-03-26 21:32:06,592 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=98259.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:32:15,530 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8269, 1.6487, 1.4309, 1.8000, 2.1540, 1.8717, 1.4709, 1.5165], device='cuda:2'), covar=tensor([0.1849, 0.1839, 0.1767, 0.1444, 0.1560, 0.1119, 0.2346, 0.1700], device='cuda:2'), in_proj_covar=tensor([0.0242, 0.0209, 0.0212, 0.0191, 0.0242, 0.0187, 0.0215, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:32:25,141 INFO [finetune.py:976] (2/7) Epoch 18, batch 900, loss[loss=0.1481, simple_loss=0.22, pruned_loss=0.03808, over 4810.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.2467, pruned_loss=0.05375, over 945042.40 frames. ], batch size: 51, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:32:52,485 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0381, 1.7735, 2.3420, 1.6909, 2.0816, 2.3082, 1.7376, 2.4311], device='cuda:2'), covar=tensor([0.1109, 0.1698, 0.1259, 0.1622, 0.0759, 0.1007, 0.2351, 0.0620], device='cuda:2'), in_proj_covar=tensor([0.0193, 0.0202, 0.0191, 0.0190, 0.0175, 0.0213, 0.0215, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:33:02,857 INFO [finetune.py:976] (2/7) Epoch 18, batch 950, loss[loss=0.198, simple_loss=0.2562, pruned_loss=0.06992, over 4755.00 frames. ], tot_loss[loss=0.1759, simple_loss=0.2449, pruned_loss=0.05342, over 948047.20 frames. ], batch size: 26, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:33:08,484 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=98330.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 21:33:16,065 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2970, 1.5434, 1.2038, 1.5226, 1.8277, 1.6403, 1.4800, 1.4219], device='cuda:2'), covar=tensor([0.0394, 0.0330, 0.0622, 0.0345, 0.0217, 0.0691, 0.0402, 0.0394], device='cuda:2'), in_proj_covar=tensor([0.0096, 0.0108, 0.0144, 0.0112, 0.0101, 0.0109, 0.0099, 0.0110], device='cuda:2'), out_proj_covar=tensor([7.4758e-05, 8.3073e-05, 1.1336e-04, 8.6526e-05, 7.8452e-05, 8.0564e-05, 7.3970e-05, 8.4057e-05], device='cuda:2') 2023-03-26 21:33:23,082 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.033e+02 1.511e+02 1.760e+02 2.160e+02 3.441e+02, threshold=3.521e+02, percent-clipped=0.0 2023-03-26 21:33:37,315 INFO [finetune.py:976] (2/7) Epoch 18, batch 1000, loss[loss=0.1779, simple_loss=0.253, pruned_loss=0.05139, over 4776.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.2468, pruned_loss=0.0537, over 950678.27 frames. 
], batch size: 29, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:33:42,165 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=98378.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 21:34:02,781 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=98410.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:34:14,797 INFO [finetune.py:976] (2/7) Epoch 18, batch 1050, loss[loss=0.1614, simple_loss=0.2475, pruned_loss=0.03761, over 4856.00 frames. ], tot_loss[loss=0.1786, simple_loss=0.2497, pruned_loss=0.05379, over 953417.24 frames. ], batch size: 44, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:34:36,628 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.123e+02 1.558e+02 1.856e+02 2.287e+02 5.753e+02, threshold=3.713e+02, percent-clipped=4.0 2023-03-26 21:34:44,673 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=98462.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:34:51,514 INFO [finetune.py:976] (2/7) Epoch 18, batch 1100, loss[loss=0.2205, simple_loss=0.285, pruned_loss=0.07803, over 4809.00 frames. ], tot_loss[loss=0.1823, simple_loss=0.2532, pruned_loss=0.05572, over 954314.50 frames. ], batch size: 39, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:34:51,622 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=98471.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:35:04,373 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.20 vs. limit=5.0 2023-03-26 21:35:24,231 INFO [finetune.py:976] (2/7) Epoch 18, batch 1150, loss[loss=0.153, simple_loss=0.2253, pruned_loss=0.04032, over 4769.00 frames. ], tot_loss[loss=0.1823, simple_loss=0.253, pruned_loss=0.0558, over 952010.75 frames. ], batch size: 27, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:35:39,070 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=98543.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:35:39,639 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=98544.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:35:43,757 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.197e+02 1.665e+02 1.912e+02 2.323e+02 4.830e+02, threshold=3.825e+02, percent-clipped=3.0 2023-03-26 21:35:52,676 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6004, 1.4949, 2.2140, 3.2800, 2.1834, 2.2498, 1.2137, 2.6962], device='cuda:2'), covar=tensor([0.1605, 0.1475, 0.1210, 0.0590, 0.0784, 0.1756, 0.1578, 0.0498], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0134, 0.0165, 0.0101, 0.0136, 0.0124, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 21:35:57,262 INFO [finetune.py:976] (2/7) Epoch 18, batch 1200, loss[loss=0.1351, simple_loss=0.2087, pruned_loss=0.03073, over 4761.00 frames. ], tot_loss[loss=0.1812, simple_loss=0.2513, pruned_loss=0.05557, over 952164.39 frames. ], batch size: 26, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:36:12,049 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=98592.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:36:19,567 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=98604.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:36:31,242 INFO [finetune.py:976] (2/7) Epoch 18, batch 1250, loss[loss=0.2024, simple_loss=0.254, pruned_loss=0.07541, over 4909.00 frames. 
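
Two patterns in the loss lines are worth decoding. First, each logged triple is consistent with a fixed combination loss = 0.5 * simple_loss + pruned_loss (for the batch-1000 entry just above: 0.5 * 0.253 + 0.05139 = 0.1779, exactly as printed), the usual pruned-transducer pairing of a cheap "simple" lattice loss with the pruned full-lattice loss. Second, tot_loss behaves like a frame-weighted running average over recent batches, which is why its "over N frames" count grows early in the epoch and then hovers around 950k. A toy version of such smoothing; the decay constant is a guess:

    class RunningLoss:
        # Frame-weighted, exponentially decayed average: recent batches
        # dominate and the effective frame count saturates over time.
        def __init__(self, decay=0.999):
            self.decay = decay
            self.loss_sum = 0.0
            self.frames = 0.0

        def update(self, batch_loss, batch_frames):
            self.loss_sum = self.decay * self.loss_sum + batch_loss * batch_frames
            self.frames = self.decay * self.frames + batch_frames
            return self.loss_sum / self.frames

    tot = RunningLoss()
    print(tot.update(0.1779, 4776))   # first batch: returns 0.1779 itself
    print(tot.update(0.1614, 4856))   # ~0.1696: the average moves slowly
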
], tot_loss[loss=0.1796, simple_loss=0.2492, pruned_loss=0.05503, over 952206.94 frames. ], batch size: 36, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:36:48,566 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0 2023-03-26 21:36:52,697 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0661, 1.9533, 2.4634, 3.5012, 2.5028, 2.5541, 1.5502, 2.8798], device='cuda:2'), covar=tensor([0.1344, 0.1166, 0.1145, 0.0634, 0.0632, 0.1460, 0.1525, 0.0525], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0134, 0.0164, 0.0101, 0.0136, 0.0124, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 21:37:00,486 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4240, 1.4135, 1.9793, 2.8735, 1.9123, 2.0907, 1.1191, 2.3923], device='cuda:2'), covar=tensor([0.1659, 0.1412, 0.1109, 0.0580, 0.0823, 0.1690, 0.1600, 0.0551], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0134, 0.0164, 0.0101, 0.0136, 0.0124, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 21:37:01,603 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.074e+02 1.465e+02 1.816e+02 2.333e+02 4.053e+02, threshold=3.632e+02, percent-clipped=1.0 2023-03-26 21:37:29,821 INFO [finetune.py:976] (2/7) Epoch 18, batch 1300, loss[loss=0.13, simple_loss=0.2034, pruned_loss=0.02834, over 4754.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.2461, pruned_loss=0.05416, over 952793.93 frames. ], batch size: 28, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:38:11,396 INFO [finetune.py:976] (2/7) Epoch 18, batch 1350, loss[loss=0.17, simple_loss=0.2268, pruned_loss=0.05657, over 4216.00 frames. ], tot_loss[loss=0.1785, simple_loss=0.2467, pruned_loss=0.05513, over 952448.34 frames. ], batch size: 18, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:38:19,897 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.32 vs. limit=5.0 2023-03-26 21:38:31,462 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.112e+02 1.676e+02 1.899e+02 2.271e+02 3.691e+02, threshold=3.798e+02, percent-clipped=1.0 2023-03-26 21:38:37,631 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2873, 1.3762, 1.5428, 1.5055, 1.4867, 2.7781, 1.3216, 1.4503], device='cuda:2'), covar=tensor([0.0999, 0.1766, 0.1239, 0.1021, 0.1518, 0.0284, 0.1451, 0.1688], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0080, 0.0084, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 21:38:38,223 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=98762.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:38:41,104 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=98766.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:38:44,609 INFO [finetune.py:976] (2/7) Epoch 18, batch 1400, loss[loss=0.1775, simple_loss=0.2353, pruned_loss=0.05988, over 4745.00 frames. ], tot_loss[loss=0.1805, simple_loss=0.2496, pruned_loss=0.05572, over 953720.18 frames. 
], batch size: 23, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:39:04,087 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8367, 1.6348, 2.1016, 1.4774, 2.0449, 2.1055, 1.5293, 2.3422], device='cuda:2'), covar=tensor([0.1463, 0.2207, 0.1600, 0.2273, 0.0973, 0.1602, 0.3044, 0.0899], device='cuda:2'), in_proj_covar=tensor([0.0194, 0.0203, 0.0192, 0.0191, 0.0176, 0.0213, 0.0216, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:39:04,842 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5536, 1.6576, 1.3256, 1.5747, 1.9863, 1.9103, 1.7246, 1.5296], device='cuda:2'), covar=tensor([0.0395, 0.0303, 0.0614, 0.0317, 0.0206, 0.0518, 0.0283, 0.0370], device='cuda:2'), in_proj_covar=tensor([0.0096, 0.0107, 0.0143, 0.0111, 0.0100, 0.0108, 0.0098, 0.0110], device='cuda:2'), out_proj_covar=tensor([7.4502e-05, 8.2511e-05, 1.1313e-04, 8.5763e-05, 7.7652e-05, 8.0067e-05, 7.3422e-05, 8.3809e-05], device='cuda:2') 2023-03-26 21:39:10,803 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=98810.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:39:12,637 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7772, 2.1078, 1.6250, 1.7772, 2.3731, 2.3168, 2.0861, 1.9394], device='cuda:2'), covar=tensor([0.0451, 0.0304, 0.0581, 0.0315, 0.0256, 0.0487, 0.0368, 0.0355], device='cuda:2'), in_proj_covar=tensor([0.0096, 0.0107, 0.0144, 0.0112, 0.0100, 0.0108, 0.0098, 0.0110], device='cuda:2'), out_proj_covar=tensor([7.4592e-05, 8.2613e-05, 1.1332e-04, 8.5820e-05, 7.7717e-05, 8.0128e-05, 7.3544e-05, 8.3886e-05], device='cuda:2') 2023-03-26 21:39:17,933 INFO [finetune.py:976] (2/7) Epoch 18, batch 1450, loss[loss=0.252, simple_loss=0.3082, pruned_loss=0.09795, over 4203.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2516, pruned_loss=0.05597, over 953646.54 frames. ], batch size: 65, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:39:27,091 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0 2023-03-26 21:39:40,442 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.160e+02 1.618e+02 1.947e+02 2.343e+02 3.750e+02, threshold=3.895e+02, percent-clipped=0.0 2023-03-26 21:39:52,508 INFO [finetune.py:976] (2/7) Epoch 18, batch 1500, loss[loss=0.1766, simple_loss=0.2527, pruned_loss=0.05031, over 4889.00 frames. ], tot_loss[loss=0.1824, simple_loss=0.2521, pruned_loss=0.05635, over 953151.47 frames. ], batch size: 35, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:40:06,455 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=98890.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:40:10,005 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=98895.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:40:12,870 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=98899.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:40:26,172 INFO [finetune.py:976] (2/7) Epoch 18, batch 1550, loss[loss=0.183, simple_loss=0.2665, pruned_loss=0.04976, over 4897.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.252, pruned_loss=0.05577, over 952446.25 frames. 
], batch size: 36, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:40:47,923 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.111e+02 1.516e+02 1.782e+02 2.221e+02 4.511e+02, threshold=3.564e+02, percent-clipped=1.0 2023-03-26 21:40:48,069 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=98951.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:40:51,129 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=98956.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:41:00,138 INFO [finetune.py:976] (2/7) Epoch 18, batch 1600, loss[loss=0.1583, simple_loss=0.2376, pruned_loss=0.03952, over 4869.00 frames. ], tot_loss[loss=0.1795, simple_loss=0.2495, pruned_loss=0.05473, over 953364.48 frames. ], batch size: 31, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:41:00,258 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=98971.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:41:13,798 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-26 21:41:28,054 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9491, 1.7055, 1.5684, 1.3312, 1.6831, 1.6973, 1.6475, 2.2490], device='cuda:2'), covar=tensor([0.4022, 0.4331, 0.3447, 0.3781, 0.3907, 0.2562, 0.3744, 0.1885], device='cuda:2'), in_proj_covar=tensor([0.0286, 0.0260, 0.0226, 0.0273, 0.0250, 0.0218, 0.0250, 0.0230], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:41:29,203 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4725, 1.3284, 1.9874, 3.1616, 2.0429, 2.4011, 0.8764, 2.7546], device='cuda:2'), covar=tensor([0.1918, 0.1908, 0.1552, 0.0890, 0.0966, 0.1391, 0.2204, 0.0573], device='cuda:2'), in_proj_covar=tensor([0.0098, 0.0115, 0.0133, 0.0164, 0.0100, 0.0135, 0.0124, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 21:41:33,949 INFO [finetune.py:976] (2/7) Epoch 18, batch 1650, loss[loss=0.1628, simple_loss=0.241, pruned_loss=0.04233, over 4827.00 frames. ], tot_loss[loss=0.1774, simple_loss=0.2469, pruned_loss=0.05393, over 951664.76 frames. ], batch size: 40, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:41:41,738 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99032.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:41:58,772 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.91 vs. limit=5.0 2023-03-26 21:42:02,531 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.819e+01 1.633e+02 2.015e+02 2.343e+02 3.841e+02, threshold=4.030e+02, percent-clipped=3.0 2023-03-26 21:42:02,644 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99051.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:42:17,588 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99066.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:42:25,675 INFO [finetune.py:976] (2/7) Epoch 18, batch 1700, loss[loss=0.1669, simple_loss=0.235, pruned_loss=0.0494, over 4767.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.2456, pruned_loss=0.05428, over 952195.10 frames. ], batch size: 26, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:42:27,844 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.59 vs. 
limit=2.0 2023-03-26 21:43:00,996 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99112.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:43:02,159 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=99114.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:43:10,653 INFO [finetune.py:976] (2/7) Epoch 18, batch 1750, loss[loss=0.1904, simple_loss=0.2484, pruned_loss=0.0662, over 4797.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2476, pruned_loss=0.05525, over 952413.90 frames. ], batch size: 25, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:43:38,852 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.106e+01 1.643e+02 1.917e+02 2.422e+02 4.876e+02, threshold=3.835e+02, percent-clipped=2.0 2023-03-26 21:43:50,370 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-26 21:43:51,913 INFO [finetune.py:976] (2/7) Epoch 18, batch 1800, loss[loss=0.197, simple_loss=0.2686, pruned_loss=0.0627, over 4858.00 frames. ], tot_loss[loss=0.1822, simple_loss=0.2516, pruned_loss=0.05639, over 952263.75 frames. ], batch size: 44, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:43:58,263 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0 2023-03-26 21:44:10,874 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99199.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:44:13,973 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1188, 2.0361, 1.7907, 1.8185, 2.1220, 1.9161, 2.2546, 2.0672], device='cuda:2'), covar=tensor([0.1356, 0.1844, 0.3026, 0.2358, 0.2561, 0.1797, 0.2403, 0.1875], device='cuda:2'), in_proj_covar=tensor([0.0183, 0.0188, 0.0234, 0.0253, 0.0245, 0.0202, 0.0214, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:44:25,712 INFO [finetune.py:976] (2/7) Epoch 18, batch 1850, loss[loss=0.2147, simple_loss=0.2773, pruned_loss=0.07603, over 4224.00 frames. ], tot_loss[loss=0.1851, simple_loss=0.2545, pruned_loss=0.05785, over 953503.99 frames. 
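
The scaling.py:679 Whitening lines compare a per-group "whiteness" statistic of the activations against a limit (2.0 for the grouped 96- and 192-channel checks, 5.0 for the ungrouped 384-channel one); all the metrics in this stretch stay under their limits, so nothing is corrected. One plausible form for such a metric, equal to 1.0 when the group covariance is a multiple of the identity and growing with eigenvalue spread, is sketched below; this is a guess at the quantity, not the module's verified code:

    import torch

    def whitening_metric(x, num_groups):
        # x: (num_frames, num_channels). Split channels into groups, form
        # each group's covariance C, and average g * trace(C @ C) /
        # trace(C) ** 2, which is 1.0 iff C is a multiple of the identity.
        n, c = x.shape
        g = c // num_groups
        xg = x.reshape(n, num_groups, g).permute(1, 0, 2)   # (groups, n, g)
        xg = xg - xg.mean(dim=1, keepdim=True)
        cov = xg.transpose(1, 2) @ xg / n                   # (groups, g, g)
        num = (cov * cov).sum(dim=(1, 2)) * g               # g * trace(C @ C)
        den = cov.diagonal(dim1=1, dim2=2).sum(dim=1) ** 2  # trace(C) ** 2
        return (num / den).mean().item()

    x = torch.randn(20000, 384)
    print(whitening_metric(x, num_groups=1))   # ~1.02 for white noise
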
], batch size: 66, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:44:30,696 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99229.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:44:42,401 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99246.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:44:43,019 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=99247.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:44:45,841 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.071e+02 1.618e+02 1.939e+02 2.302e+02 3.831e+02, threshold=3.878e+02, percent-clipped=0.0 2023-03-26 21:44:45,941 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99251.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:44:47,174 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.4630, 3.0659, 2.7284, 1.4139, 2.9632, 2.3940, 2.4198, 2.6644], device='cuda:2'), covar=tensor([0.0954, 0.0785, 0.1635, 0.2142, 0.1555, 0.2145, 0.2072, 0.1155], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0194, 0.0201, 0.0183, 0.0214, 0.0209, 0.0222, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:44:57,630 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99268.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:44:59,345 INFO [finetune.py:976] (2/7) Epoch 18, batch 1900, loss[loss=0.2022, simple_loss=0.2599, pruned_loss=0.07224, over 4724.00 frames. ], tot_loss[loss=0.1849, simple_loss=0.2551, pruned_loss=0.0573, over 954040.48 frames. ], batch size: 59, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:45:11,540 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99290.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:45:33,104 INFO [finetune.py:976] (2/7) Epoch 18, batch 1950, loss[loss=0.1542, simple_loss=0.2275, pruned_loss=0.04042, over 4826.00 frames. ], tot_loss[loss=0.1822, simple_loss=0.2527, pruned_loss=0.05587, over 955141.69 frames. ], batch size: 33, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:45:35,072 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99324.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:45:35,618 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.7025, 4.0777, 4.2553, 4.4576, 4.4361, 4.1610, 4.7904, 1.6944], device='cuda:2'), covar=tensor([0.0681, 0.0804, 0.0697, 0.0690, 0.1088, 0.1473, 0.0510, 0.5289], device='cuda:2'), in_proj_covar=tensor([0.0347, 0.0244, 0.0277, 0.0290, 0.0332, 0.0279, 0.0300, 0.0293], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:45:36,818 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99327.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:45:38,127 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99329.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:45:52,707 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.452e+02 1.660e+02 1.987e+02 4.820e+02, threshold=3.320e+02, percent-clipped=1.0 2023-03-26 21:46:06,274 INFO [finetune.py:976] (2/7) Epoch 18, batch 2000, loss[loss=0.1495, simple_loss=0.2156, pruned_loss=0.04168, over 4832.00 frames. 
], tot_loss[loss=0.1799, simple_loss=0.2494, pruned_loss=0.05517, over 953832.42 frames. ], batch size: 49, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:46:15,389 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99385.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:46:23,710 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.79 vs. limit=5.0 2023-03-26 21:46:30,179 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99407.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:46:39,596 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99420.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:46:40,078 INFO [finetune.py:976] (2/7) Epoch 18, batch 2050, loss[loss=0.1802, simple_loss=0.24, pruned_loss=0.06019, over 4864.00 frames. ], tot_loss[loss=0.1774, simple_loss=0.2466, pruned_loss=0.05412, over 954559.00 frames. ], batch size: 44, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:46:59,827 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.109e+02 1.434e+02 1.742e+02 2.145e+02 5.049e+02, threshold=3.484e+02, percent-clipped=5.0 2023-03-26 21:47:17,709 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99468.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 21:47:19,432 INFO [finetune.py:976] (2/7) Epoch 18, batch 2100, loss[loss=0.1655, simple_loss=0.2397, pruned_loss=0.04558, over 4777.00 frames. ], tot_loss[loss=0.1765, simple_loss=0.2456, pruned_loss=0.05364, over 956043.39 frames. ], batch size: 28, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:47:31,493 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99481.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:47:39,513 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2270, 2.2999, 2.1738, 2.5636, 2.7047, 2.4141, 2.2457, 1.7813], device='cuda:2'), covar=tensor([0.2102, 0.1770, 0.1624, 0.1346, 0.1841, 0.1042, 0.1885, 0.1774], device='cuda:2'), in_proj_covar=tensor([0.0241, 0.0208, 0.0211, 0.0190, 0.0239, 0.0185, 0.0214, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:47:54,061 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4932, 1.3916, 1.3999, 1.3907, 1.0148, 2.9265, 1.0170, 1.4245], device='cuda:2'), covar=tensor([0.3452, 0.2586, 0.2269, 0.2445, 0.1877, 0.0270, 0.2818, 0.1371], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0115, 0.0120, 0.0122, 0.0113, 0.0095, 0.0095, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 21:47:58,125 INFO [finetune.py:976] (2/7) Epoch 18, batch 2150, loss[loss=0.1943, simple_loss=0.2785, pruned_loss=0.05505, over 4839.00 frames. ], tot_loss[loss=0.1809, simple_loss=0.2507, pruned_loss=0.05553, over 957448.02 frames. 
], batch size: 47, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:48:08,431 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99529.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 21:48:16,784 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6705, 1.5394, 1.1136, 0.2525, 1.2840, 1.4993, 1.4545, 1.4234], device='cuda:2'), covar=tensor([0.1046, 0.0924, 0.1470, 0.2124, 0.1418, 0.2483, 0.2391, 0.0946], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0196, 0.0201, 0.0184, 0.0215, 0.0210, 0.0224, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:48:28,284 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99546.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:48:35,642 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.054e+02 1.546e+02 1.925e+02 2.366e+02 3.688e+02, threshold=3.850e+02, percent-clipped=2.0 2023-03-26 21:48:35,734 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99551.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:48:51,474 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99568.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:48:53,717 INFO [finetune.py:976] (2/7) Epoch 18, batch 2200, loss[loss=0.1851, simple_loss=0.2657, pruned_loss=0.05225, over 4801.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2521, pruned_loss=0.05573, over 957715.87 frames. ], batch size: 40, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:49:03,267 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99585.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:49:05,760 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99589.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:49:08,745 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=99594.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:49:11,757 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=99599.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:49:11,825 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.5801, 3.1091, 2.8622, 1.4502, 3.0952, 2.5527, 2.4736, 2.7540], device='cuda:2'), covar=tensor([0.0809, 0.0751, 0.1620, 0.2187, 0.1338, 0.1889, 0.1768, 0.1081], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0195, 0.0200, 0.0184, 0.0214, 0.0209, 0.0224, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:49:27,186 INFO [finetune.py:976] (2/7) Epoch 18, batch 2250, loss[loss=0.1552, simple_loss=0.2281, pruned_loss=0.04111, over 4772.00 frames. ], tot_loss[loss=0.1813, simple_loss=0.2521, pruned_loss=0.05527, over 956384.90 frames. 
], batch size: 29, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:49:29,582 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99624.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:49:31,396 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99627.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:49:33,076 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99629.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:49:39,630 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1554, 2.2279, 1.7397, 2.3121, 2.1200, 1.9056, 2.4866, 2.2557], device='cuda:2'), covar=tensor([0.1354, 0.2018, 0.2938, 0.2350, 0.2215, 0.1534, 0.2765, 0.1726], device='cuda:2'), in_proj_covar=tensor([0.0182, 0.0187, 0.0233, 0.0252, 0.0244, 0.0201, 0.0213, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:49:46,340 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99650.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:49:46,818 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.914e+01 1.516e+02 1.824e+02 2.092e+02 3.162e+02, threshold=3.647e+02, percent-clipped=0.0 2023-03-26 21:49:55,229 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8184, 1.6062, 1.4173, 1.2961, 1.5169, 1.5028, 1.5458, 2.1343], device='cuda:2'), covar=tensor([0.3843, 0.3871, 0.3101, 0.3581, 0.3690, 0.2327, 0.3437, 0.1881], device='cuda:2'), in_proj_covar=tensor([0.0288, 0.0261, 0.0227, 0.0275, 0.0251, 0.0219, 0.0251, 0.0231], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:50:00,832 INFO [finetune.py:976] (2/7) Epoch 18, batch 2300, loss[loss=0.1951, simple_loss=0.2561, pruned_loss=0.06707, over 4797.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.2526, pruned_loss=0.05536, over 955733.42 frames. ], batch size: 25, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:50:03,802 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=99675.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:50:06,805 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99680.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:50:06,833 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99680.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:50:24,129 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99707.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:50:34,082 INFO [finetune.py:976] (2/7) Epoch 18, batch 2350, loss[loss=0.1739, simple_loss=0.2469, pruned_loss=0.05044, over 4829.00 frames. ], tot_loss[loss=0.1809, simple_loss=0.2511, pruned_loss=0.05537, over 955553.61 frames. ], batch size: 41, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:50:47,877 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99741.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:50:52,125 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. 
limit=2.0 2023-03-26 21:50:54,364 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.082e+02 1.551e+02 1.845e+02 2.144e+02 4.060e+02, threshold=3.690e+02, percent-clipped=1.0 2023-03-26 21:50:56,923 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=99755.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:51:08,062 INFO [finetune.py:976] (2/7) Epoch 18, batch 2400, loss[loss=0.1632, simple_loss=0.235, pruned_loss=0.04575, over 4914.00 frames. ], tot_loss[loss=0.1792, simple_loss=0.2485, pruned_loss=0.05491, over 955675.17 frames. ], batch size: 37, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:51:11,641 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99776.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:51:23,523 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9450, 1.6105, 2.3647, 3.7242, 2.4879, 2.7302, 0.9142, 3.0410], device='cuda:2'), covar=tensor([0.1644, 0.1476, 0.1381, 0.0627, 0.0819, 0.1628, 0.1888, 0.0507], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0116, 0.0134, 0.0165, 0.0101, 0.0135, 0.0124, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 21:51:39,943 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-26 21:51:41,396 INFO [finetune.py:976] (2/7) Epoch 18, batch 2450, loss[loss=0.1834, simple_loss=0.2469, pruned_loss=0.06, over 4819.00 frames. ], tot_loss[loss=0.1773, simple_loss=0.2461, pruned_loss=0.05429, over 956863.34 frames. ], batch size: 39, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:51:43,690 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99824.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 21:51:57,461 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.42 vs. limit=5.0 2023-03-26 21:52:01,733 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.091e+01 1.409e+02 1.705e+02 2.080e+02 4.896e+02, threshold=3.409e+02, percent-clipped=2.0 2023-03-26 21:52:14,325 INFO [finetune.py:976] (2/7) Epoch 18, batch 2500, loss[loss=0.1878, simple_loss=0.2593, pruned_loss=0.05813, over 4851.00 frames. ], tot_loss[loss=0.1806, simple_loss=0.249, pruned_loss=0.05612, over 955940.41 frames. ], batch size: 47, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:52:26,318 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99885.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:52:50,111 INFO [finetune.py:976] (2/7) Epoch 18, batch 2550, loss[loss=0.2043, simple_loss=0.271, pruned_loss=0.06881, over 4913.00 frames. ], tot_loss[loss=0.1843, simple_loss=0.2534, pruned_loss=0.05759, over 955587.50 frames. 
], batch size: 37, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:52:52,503 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99924.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:52:52,521 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99924.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:52:58,896 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=99933.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:53:06,642 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99945.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:53:10,619 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.138e+02 1.647e+02 1.933e+02 2.438e+02 4.501e+02, threshold=3.867e+02, percent-clipped=7.0 2023-03-26 21:53:21,024 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.52 vs. limit=2.0 2023-03-26 21:53:29,758 INFO [finetune.py:976] (2/7) Epoch 18, batch 2600, loss[loss=0.175, simple_loss=0.253, pruned_loss=0.04853, over 4909.00 frames. ], tot_loss[loss=0.1841, simple_loss=0.2534, pruned_loss=0.05734, over 952824.11 frames. ], batch size: 37, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:53:30,434 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=99972.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:53:40,414 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99980.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:54:12,874 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=100007.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 21:54:24,615 INFO [finetune.py:976] (2/7) Epoch 18, batch 2650, loss[loss=0.1496, simple_loss=0.2377, pruned_loss=0.0307, over 4812.00 frames. ], tot_loss[loss=0.1843, simple_loss=0.2545, pruned_loss=0.05703, over 952278.83 frames. ], batch size: 39, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 21:54:25,488 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.81 vs. limit=2.0 2023-03-26 21:54:29,385 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=100028.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:54:35,200 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=100036.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:54:42,283 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2049, 2.1144, 2.7199, 1.5365, 2.1454, 2.5514, 1.9452, 2.5666], device='cuda:2'), covar=tensor([0.1357, 0.1827, 0.1362, 0.2246, 0.1017, 0.1511, 0.2489, 0.0880], device='cuda:2'), in_proj_covar=tensor([0.0191, 0.0202, 0.0189, 0.0187, 0.0175, 0.0211, 0.0215, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:54:46,209 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.050e+02 1.530e+02 1.912e+02 2.295e+02 4.144e+02, threshold=3.823e+02, percent-clipped=1.0 2023-03-26 21:54:56,606 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=100068.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 21:54:58,304 INFO [finetune.py:976] (2/7) Epoch 18, batch 2700, loss[loss=0.1938, simple_loss=0.2479, pruned_loss=0.0699, over 4818.00 frames. ], tot_loss[loss=0.1829, simple_loss=0.2532, pruned_loss=0.05631, over 952429.72 frames. 
], batch size: 38, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 21:55:01,427 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=100076.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:55:15,705 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.57 vs. limit=5.0 2023-03-26 21:55:31,862 INFO [finetune.py:976] (2/7) Epoch 18, batch 2750, loss[loss=0.1735, simple_loss=0.243, pruned_loss=0.05203, over 4856.00 frames. ], tot_loss[loss=0.1802, simple_loss=0.2497, pruned_loss=0.05532, over 954904.84 frames. ], batch size: 31, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 21:55:33,700 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=100124.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:55:33,753 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=100124.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 21:55:52,997 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.121e+02 1.510e+02 1.766e+02 2.096e+02 4.575e+02, threshold=3.532e+02, percent-clipped=1.0 2023-03-26 21:55:54,321 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4345, 2.2942, 1.8397, 0.9207, 2.0327, 1.9311, 1.7762, 2.0333], device='cuda:2'), covar=tensor([0.0882, 0.0737, 0.1388, 0.2007, 0.1387, 0.1952, 0.2109, 0.0945], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0193, 0.0198, 0.0182, 0.0212, 0.0207, 0.0223, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:56:05,347 INFO [finetune.py:976] (2/7) Epoch 18, batch 2800, loss[loss=0.1554, simple_loss=0.2236, pruned_loss=0.0436, over 4897.00 frames. ], tot_loss[loss=0.1785, simple_loss=0.2473, pruned_loss=0.0549, over 955138.66 frames. ], batch size: 32, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 21:56:06,012 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=100172.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 21:56:11,426 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9398, 1.4769, 1.8965, 1.8445, 1.7007, 1.6977, 1.7835, 1.8382], device='cuda:2'), covar=tensor([0.4832, 0.4404, 0.4114, 0.4463, 0.5495, 0.4631, 0.5485, 0.4048], device='cuda:2'), in_proj_covar=tensor([0.0249, 0.0239, 0.0259, 0.0274, 0.0273, 0.0248, 0.0284, 0.0241], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 21:56:20,141 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-26 21:56:38,931 INFO [finetune.py:976] (2/7) Epoch 18, batch 2850, loss[loss=0.1891, simple_loss=0.251, pruned_loss=0.06359, over 4766.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.2452, pruned_loss=0.05396, over 956681.68 frames. 
], batch size: 27, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 21:56:40,882 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=100224.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:56:49,832 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=100238.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:56:54,568 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=100245.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:56:59,192 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.104e+02 1.635e+02 1.899e+02 2.309e+02 4.393e+02, threshold=3.799e+02, percent-clipped=4.0 2023-03-26 21:57:11,715 INFO [finetune.py:976] (2/7) Epoch 18, batch 2900, loss[loss=0.1462, simple_loss=0.2127, pruned_loss=0.0399, over 4761.00 frames. ], tot_loss[loss=0.1777, simple_loss=0.2471, pruned_loss=0.05411, over 954606.84 frames. ], batch size: 26, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 21:57:12,868 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=100272.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:57:26,155 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=100293.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:57:30,860 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=100299.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:57:45,542 INFO [finetune.py:976] (2/7) Epoch 18, batch 2950, loss[loss=0.2065, simple_loss=0.2693, pruned_loss=0.07187, over 4740.00 frames. ], tot_loss[loss=0.1801, simple_loss=0.2505, pruned_loss=0.05483, over 955234.56 frames. ], batch size: 59, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 21:57:55,272 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=100336.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:58:06,321 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.110e+02 1.490e+02 1.822e+02 2.174e+02 4.072e+02, threshold=3.643e+02, percent-clipped=2.0 2023-03-26 21:58:13,576 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=100363.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 21:58:18,412 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0 2023-03-26 21:58:18,818 INFO [finetune.py:976] (2/7) Epoch 18, batch 3000, loss[loss=0.1991, simple_loss=0.2739, pruned_loss=0.06212, over 4845.00 frames. ], tot_loss[loss=0.1811, simple_loss=0.2516, pruned_loss=0.0553, over 955841.34 frames. ], batch size: 44, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 21:58:18,818 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 21:58:31,198 INFO [finetune.py:1010] (2/7) Epoch 18, validation: loss=0.1568, simple_loss=0.2261, pruned_loss=0.04375, over 2265189.00 frames. 2023-03-26 21:58:31,198 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB 2023-03-26 21:58:44,386 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=100384.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:59:06,993 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=100402.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:59:11,281 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-26 21:59:29,743 INFO [finetune.py:976] (2/7) Epoch 18, batch 3050, loss[loss=0.1603, simple_loss=0.2402, pruned_loss=0.04022, over 4812.00 frames. 
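
At batch 3000 the loop pauses for its periodic validation pass, reporting a dev-set loss (0.1568 here, comfortably below the training tot_loss of about 0.18) and the peak CUDA memory so far (6366MB). A schematic of that pass; the loader and the loss call are stand-ins, not the recipe's actual signatures:

    import torch

    def validate(model, valid_loader, device):
        model.eval()
        tot_loss, tot_frames = 0.0, 0.0
        with torch.no_grad():
            for feats, num_frames in valid_loader:   # hypothetical loader
                loss = model(feats.to(device))       # stand-in loss call
                tot_loss += loss.item() * num_frames
                tot_frames += num_frames
        model.train()
        if torch.cuda.is_available():
            peak_mb = torch.cuda.max_memory_allocated(device) // 2 ** 20
            print(f"Maximum memory allocated so far is {peak_mb}MB")
        return tot_loss / max(tot_frames, 1.0)
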
], tot_loss[loss=0.1808, simple_loss=0.2521, pruned_loss=0.05478, over 958011.73 frames. ], batch size: 33, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 21:59:53,773 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.049e+02 1.572e+02 1.917e+02 2.276e+02 3.597e+02, threshold=3.833e+02, percent-clipped=0.0 2023-03-26 22:00:01,169 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=100463.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 22:00:07,400 INFO [finetune.py:976] (2/7) Epoch 18, batch 3100, loss[loss=0.1827, simple_loss=0.2473, pruned_loss=0.05901, over 4836.00 frames. ], tot_loss[loss=0.1791, simple_loss=0.2502, pruned_loss=0.05399, over 957571.68 frames. ], batch size: 47, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 22:00:25,953 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.63 vs. limit=2.0 2023-03-26 22:00:40,540 INFO [finetune.py:976] (2/7) Epoch 18, batch 3150, loss[loss=0.1661, simple_loss=0.2416, pruned_loss=0.04534, over 4931.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.2469, pruned_loss=0.05313, over 957992.21 frames. ], batch size: 33, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 22:00:53,536 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1961, 2.1972, 1.8659, 2.2400, 2.1106, 2.0976, 2.1214, 2.9343], device='cuda:2'), covar=tensor([0.3494, 0.4319, 0.3284, 0.4189, 0.4675, 0.2360, 0.4202, 0.1570], device='cuda:2'), in_proj_covar=tensor([0.0285, 0.0259, 0.0226, 0.0272, 0.0249, 0.0217, 0.0249, 0.0229], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:00:59,250 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5310, 1.3766, 1.8905, 1.7281, 1.5716, 3.3521, 1.2426, 1.5579], device='cuda:2'), covar=tensor([0.0914, 0.1716, 0.1170, 0.1009, 0.1481, 0.0248, 0.1564, 0.1714], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0090, 0.0080, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 22:01:00,944 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.466e+01 1.523e+02 1.835e+02 2.195e+02 4.344e+02, threshold=3.670e+02, percent-clipped=3.0 2023-03-26 22:01:12,894 INFO [finetune.py:976] (2/7) Epoch 18, batch 3200, loss[loss=0.1784, simple_loss=0.2481, pruned_loss=0.05433, over 4807.00 frames. ], tot_loss[loss=0.1737, simple_loss=0.2436, pruned_loss=0.05192, over 958527.27 frames. ], batch size: 29, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:01:28,940 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=100594.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:01:46,316 INFO [finetune.py:976] (2/7) Epoch 18, batch 3250, loss[loss=0.2493, simple_loss=0.3015, pruned_loss=0.09858, over 4933.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.2449, pruned_loss=0.05297, over 956235.81 frames. 
], batch size: 33, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:01:55,190 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0487, 1.9063, 1.4657, 0.6069, 1.6770, 1.6626, 1.5204, 1.6980], device='cuda:2'), covar=tensor([0.0889, 0.0686, 0.1258, 0.1803, 0.1192, 0.2187, 0.2290, 0.0808], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0192, 0.0199, 0.0183, 0.0212, 0.0207, 0.0222, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 22:02:08,115 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.084e+02 1.502e+02 1.862e+02 2.235e+02 4.464e+02, threshold=3.723e+02, percent-clipped=3.0
2023-03-26 22:02:08,239 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4715, 1.4486, 1.5404, 1.5711, 1.5510, 3.0125, 1.5201, 1.5732], device='cuda:2'), covar=tensor([0.1053, 0.1916, 0.1169, 0.1075, 0.1670, 0.0281, 0.1492, 0.1828], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0090, 0.0080, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 22:02:14,916 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=100663.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 22:02:20,155 INFO [finetune.py:976] (2/7) Epoch 18, batch 3300, loss[loss=0.1413, simple_loss=0.2116, pruned_loss=0.03552, over 4771.00 frames. ], tot_loss[loss=0.1785, simple_loss=0.2489, pruned_loss=0.05401, over 953830.22 frames. ], batch size: 26, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:02:47,027 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=100711.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 22:02:53,405 INFO [finetune.py:976] (2/7) Epoch 18, batch 3350, loss[loss=0.2063, simple_loss=0.2805, pruned_loss=0.06602, over 4803.00 frames. ], tot_loss[loss=0.1805, simple_loss=0.251, pruned_loss=0.05496, over 952734.43 frames. ], batch size: 45, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:02:59,526 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=100730.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 22:03:14,086 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.901e+01 1.577e+02 1.865e+02 2.249e+02 4.268e+02, threshold=3.731e+02, percent-clipped=3.0
2023-03-26 22:03:18,263 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=100758.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 22:03:23,248 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1054, 2.0301, 1.6059, 1.8443, 1.8904, 1.7715, 1.9132, 2.5856], device='cuda:2'), covar=tensor([0.3642, 0.4234, 0.3476, 0.3638, 0.3791, 0.2474, 0.3676, 0.1806], device='cuda:2'), in_proj_covar=tensor([0.0288, 0.0262, 0.0229, 0.0274, 0.0251, 0.0220, 0.0252, 0.0232], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 22:03:26,649 INFO [finetune.py:976] (2/7) Epoch 18, batch 3400, loss[loss=0.216, simple_loss=0.2878, pruned_loss=0.07214, over 4888.00 frames. ], tot_loss[loss=0.183, simple_loss=0.2532, pruned_loss=0.05639, over 950514.24 frames. ], batch size: 43, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:03:39,073 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3770, 2.9465, 2.8140, 1.1923, 3.0522, 2.2667, 0.7544, 1.9917], device='cuda:2'), covar=tensor([0.2197, 0.2115, 0.1595, 0.3497, 0.1408, 0.1128, 0.3882, 0.1488], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0177, 0.0160, 0.0130, 0.0160, 0.0124, 0.0149, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 22:03:40,306 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=100791.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 22:04:13,287 INFO [finetune.py:976] (2/7) Epoch 18, batch 3450, loss[loss=0.1762, simple_loss=0.2518, pruned_loss=0.05033, over 4865.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2521, pruned_loss=0.05576, over 951242.12 frames. ], batch size: 34, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:04:14,014 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4096, 1.5633, 1.3173, 1.4916, 1.8690, 1.7315, 1.4869, 1.4131], device='cuda:2'), covar=tensor([0.0328, 0.0285, 0.0601, 0.0313, 0.0218, 0.0436, 0.0330, 0.0348], device='cuda:2'), in_proj_covar=tensor([0.0095, 0.0107, 0.0143, 0.0110, 0.0100, 0.0108, 0.0098, 0.0109], device='cuda:2'), out_proj_covar=tensor([7.4024e-05, 8.2353e-05, 1.1261e-04, 8.4594e-05, 7.8234e-05, 7.9916e-05, 7.3467e-05, 8.3214e-05], device='cuda:2')
2023-03-26 22:04:49,164 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.055e+02 1.551e+02 1.762e+02 2.136e+02 3.810e+02, threshold=3.524e+02, percent-clipped=1.0
2023-03-26 22:05:03,939 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.42 vs. limit=5.0
2023-03-26 22:05:04,969 INFO [finetune.py:976] (2/7) Epoch 18, batch 3500, loss[loss=0.1509, simple_loss=0.2234, pruned_loss=0.03914, over 4681.00 frames. ], tot_loss[loss=0.1811, simple_loss=0.2509, pruned_loss=0.0556, over 952861.55 frames. ], batch size: 23, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:05:21,067 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=100894.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:05:37,083 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8021, 1.6612, 1.4904, 1.9559, 2.2116, 1.9318, 1.5819, 1.5142], device='cuda:2'), covar=tensor([0.2383, 0.2130, 0.2057, 0.1708, 0.1576, 0.1183, 0.2473, 0.2113], device='cuda:2'), in_proj_covar=tensor([0.0242, 0.0209, 0.0212, 0.0192, 0.0241, 0.0186, 0.0215, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 22:05:38,764 INFO [finetune.py:976] (2/7) Epoch 18, batch 3550, loss[loss=0.1451, simple_loss=0.223, pruned_loss=0.03361, over 4828.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.248, pruned_loss=0.05441, over 953088.13 frames. ], batch size: 30, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:05:49,118 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4527, 2.2565, 1.8664, 2.3796, 2.3446, 2.0057, 2.6597, 2.3317], device='cuda:2'), covar=tensor([0.1266, 0.2167, 0.2928, 0.2346, 0.2342, 0.1618, 0.3141, 0.1755], device='cuda:2'), in_proj_covar=tensor([0.0182, 0.0186, 0.0232, 0.0251, 0.0243, 0.0201, 0.0212, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 22:05:52,400 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=100942.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:05:59,322 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.557e+01 1.553e+02 1.835e+02 2.348e+02 4.609e+02, threshold=3.670e+02, percent-clipped=4.0
2023-03-26 22:06:12,132 INFO [finetune.py:976] (2/7) Epoch 18, batch 3600, loss[loss=0.1447, simple_loss=0.2158, pruned_loss=0.03677, over 4690.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.245, pruned_loss=0.05374, over 954171.51 frames. ], batch size: 23, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:06:41,391 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.2869, 3.7708, 3.9020, 4.1543, 4.0209, 3.8026, 4.3836, 1.3221], device='cuda:2'), covar=tensor([0.0854, 0.0839, 0.0855, 0.0959, 0.1318, 0.1765, 0.0771, 0.5848], device='cuda:2'), in_proj_covar=tensor([0.0349, 0.0244, 0.0278, 0.0291, 0.0334, 0.0282, 0.0301, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 22:06:46,075 INFO [finetune.py:976] (2/7) Epoch 18, batch 3650, loss[loss=0.2105, simple_loss=0.2762, pruned_loss=0.07237, over 4894.00 frames. ], tot_loss[loss=0.1795, simple_loss=0.2483, pruned_loss=0.05536, over 952545.29 frames. ], batch size: 32, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:07:06,790 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.062e+02 1.559e+02 1.860e+02 2.177e+02 4.070e+02, threshold=3.719e+02, percent-clipped=1.0
2023-03-26 22:07:10,512 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=101058.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:07:18,906 INFO [finetune.py:976] (2/7) Epoch 18, batch 3700, loss[loss=0.1705, simple_loss=0.2461, pruned_loss=0.04746, over 4831.00 frames. ], tot_loss[loss=0.1808, simple_loss=0.2501, pruned_loss=0.0558, over 947858.76 frames. ], batch size: 49, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:07:28,500 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=101086.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 22:07:37,444 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.0388, 4.3779, 4.5498, 4.8969, 4.7854, 4.4473, 5.1500, 1.6069], device='cuda:2'), covar=tensor([0.0721, 0.0847, 0.0719, 0.0681, 0.1221, 0.1736, 0.0604, 0.5835], device='cuda:2'), in_proj_covar=tensor([0.0348, 0.0243, 0.0277, 0.0289, 0.0333, 0.0281, 0.0299, 0.0294], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 22:07:42,148 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=101106.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:07:43,258 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=101107.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:07:52,637 INFO [finetune.py:976] (2/7) Epoch 18, batch 3750, loss[loss=0.1671, simple_loss=0.2341, pruned_loss=0.05006, over 4883.00 frames. ], tot_loss[loss=0.1828, simple_loss=0.2526, pruned_loss=0.05651, over 950764.73 frames. ], batch size: 35, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:07:54,576 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3957, 1.4433, 1.7212, 1.6494, 1.5742, 3.0522, 1.4015, 1.5198], device='cuda:2'), covar=tensor([0.1000, 0.1834, 0.1213, 0.0964, 0.1502, 0.0277, 0.1447, 0.1720], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0077, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 22:07:54,589 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0467, 1.9260, 2.0709, 1.3999, 1.9015, 2.1357, 2.1070, 1.6583], device='cuda:2'), covar=tensor([0.0509, 0.0586, 0.0591, 0.0812, 0.0710, 0.0616, 0.0493, 0.1004], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0135, 0.0140, 0.0120, 0.0123, 0.0138, 0.0139, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 22:07:59,879 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4530, 1.4790, 2.1854, 1.8245, 1.7806, 3.7074, 1.4327, 1.7061], device='cuda:2'), covar=tensor([0.0994, 0.1827, 0.1391, 0.0940, 0.1484, 0.0243, 0.1521, 0.1725], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0077, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 22:08:12,837 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.051e+02 1.641e+02 1.814e+02 2.131e+02 4.110e+02, threshold=3.627e+02, percent-clipped=2.0
2023-03-26 22:08:24,450 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=101168.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:08:26,160 INFO [finetune.py:976] (2/7) Epoch 18, batch 3800, loss[loss=0.1707, simple_loss=0.2452, pruned_loss=0.04806, over 4749.00 frames. ], tot_loss[loss=0.1835, simple_loss=0.2531, pruned_loss=0.0569, over 950089.37 frames. ], batch size: 54, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:08:41,819 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.64 vs. limit=2.0
2023-03-26 22:08:59,897 INFO [finetune.py:976] (2/7) Epoch 18, batch 3850, loss[loss=0.1469, simple_loss=0.2217, pruned_loss=0.03605, over 4897.00 frames. ], tot_loss[loss=0.1813, simple_loss=0.2509, pruned_loss=0.05582, over 951269.96 frames. ], batch size: 43, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:09:02,512 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.49 vs. limit=5.0
2023-03-26 22:09:28,092 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0
2023-03-26 22:09:28,670 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2452, 2.0908, 1.6999, 1.9844, 2.2189, 1.9085, 2.4406, 2.2178], device='cuda:2'), covar=tensor([0.1336, 0.2084, 0.3271, 0.2568, 0.2521, 0.1738, 0.3084, 0.1875], device='cuda:2'), in_proj_covar=tensor([0.0184, 0.0189, 0.0236, 0.0254, 0.0246, 0.0204, 0.0215, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 22:09:30,610 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.69 vs. limit=5.0
2023-03-26 22:09:30,993 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.034e+02 1.525e+02 1.862e+02 2.180e+02 4.556e+02, threshold=3.724e+02, percent-clipped=2.0
2023-03-26 22:09:57,716 INFO [finetune.py:976] (2/7) Epoch 18, batch 3900, loss[loss=0.1966, simple_loss=0.2624, pruned_loss=0.06537, over 4939.00 frames. ], tot_loss[loss=0.1792, simple_loss=0.2483, pruned_loss=0.05509, over 952964.04 frames. ], batch size: 38, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:10:41,970 INFO [finetune.py:976] (2/7) Epoch 18, batch 3950, loss[loss=0.1961, simple_loss=0.2511, pruned_loss=0.0706, over 4903.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.2457, pruned_loss=0.05432, over 953105.11 frames. ], batch size: 32, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:11:02,251 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.024e+02 1.470e+02 1.754e+02 2.083e+02 3.090e+02, threshold=3.508e+02, percent-clipped=0.0
2023-03-26 22:11:15,361 INFO [finetune.py:976] (2/7) Epoch 18, batch 4000, loss[loss=0.2018, simple_loss=0.272, pruned_loss=0.06575, over 4842.00 frames. ], tot_loss[loss=0.1768, simple_loss=0.2455, pruned_loss=0.05405, over 955566.95 frames. ], batch size: 47, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:11:25,997 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=101386.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 22:11:49,435 INFO [finetune.py:976] (2/7) Epoch 18, batch 4050, loss[loss=0.1541, simple_loss=0.2394, pruned_loss=0.03439, over 4832.00 frames. ], tot_loss[loss=0.1789, simple_loss=0.2487, pruned_loss=0.05452, over 957466.68 frames. ], batch size: 30, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:11:58,803 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=101434.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 22:12:03,165 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5520, 2.3482, 2.2375, 2.3397, 2.2484, 2.3327, 2.3207, 3.0089], device='cuda:2'), covar=tensor([0.3234, 0.4008, 0.2846, 0.3506, 0.3680, 0.2394, 0.3357, 0.1521], device='cuda:2'), in_proj_covar=tensor([0.0288, 0.0262, 0.0229, 0.0275, 0.0252, 0.0220, 0.0251, 0.0233], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 22:12:10,021 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.148e+02 1.670e+02 2.040e+02 2.363e+02 9.256e+02, threshold=4.080e+02, percent-clipped=2.0
2023-03-26 22:12:17,270 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=101463.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:12:19,259 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. limit=2.0
2023-03-26 22:12:22,996 INFO [finetune.py:976] (2/7) Epoch 18, batch 4100, loss[loss=0.1662, simple_loss=0.2487, pruned_loss=0.04181, over 4812.00 frames. ], tot_loss[loss=0.18, simple_loss=0.2498, pruned_loss=0.05511, over 953914.50 frames. ], batch size: 40, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:12:24,839 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9323, 1.7171, 2.3332, 3.7015, 2.5113, 2.5926, 0.8963, 3.0435], device='cuda:2'), covar=tensor([0.1496, 0.1277, 0.1257, 0.0518, 0.0719, 0.1958, 0.1814, 0.0414], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0133, 0.0164, 0.0100, 0.0135, 0.0124, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 22:12:31,405 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0
2023-03-26 22:12:56,254 INFO [finetune.py:976] (2/7) Epoch 18, batch 4150, loss[loss=0.1712, simple_loss=0.2474, pruned_loss=0.04746, over 4831.00 frames. ], tot_loss[loss=0.1826, simple_loss=0.2523, pruned_loss=0.05643, over 952771.25 frames. ], batch size: 49, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:13:16,875 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.507e+01 1.508e+02 1.851e+02 2.208e+02 3.984e+02, threshold=3.702e+02, percent-clipped=0.0
2023-03-26 22:13:29,469 INFO [finetune.py:976] (2/7) Epoch 18, batch 4200, loss[loss=0.1747, simple_loss=0.2433, pruned_loss=0.05309, over 4814.00 frames. ], tot_loss[loss=0.1821, simple_loss=0.2523, pruned_loss=0.05594, over 952796.18 frames. ], batch size: 33, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:13:51,342 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0487, 1.9534, 1.6068, 1.7687, 2.0556, 1.7139, 2.1773, 2.0239], device='cuda:2'), covar=tensor([0.1247, 0.1968, 0.2837, 0.2516, 0.2392, 0.1697, 0.2745, 0.1657], device='cuda:2'), in_proj_covar=tensor([0.0183, 0.0187, 0.0233, 0.0251, 0.0243, 0.0201, 0.0213, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 22:14:03,051 INFO [finetune.py:976] (2/7) Epoch 18, batch 4250, loss[loss=0.1773, simple_loss=0.227, pruned_loss=0.06375, over 4194.00 frames. ], tot_loss[loss=0.1808, simple_loss=0.2504, pruned_loss=0.05563, over 952756.26 frames. ], batch size: 18, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:14:04,393 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8110, 1.6752, 1.5343, 1.5933, 1.9093, 1.9422, 1.7312, 1.6199], device='cuda:2'), covar=tensor([0.0299, 0.0309, 0.0474, 0.0323, 0.0241, 0.0359, 0.0296, 0.0335], device='cuda:2'), in_proj_covar=tensor([0.0094, 0.0105, 0.0141, 0.0109, 0.0099, 0.0107, 0.0097, 0.0108], device='cuda:2'), out_proj_covar=tensor([7.2863e-05, 8.1284e-05, 1.1106e-04, 8.3572e-05, 7.6940e-05, 7.8845e-05, 7.2287e-05, 8.2474e-05], device='cuda:2')
2023-03-26 22:14:24,253 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.150e+02 1.580e+02 1.774e+02 2.219e+02 3.425e+02, threshold=3.547e+02, percent-clipped=0.0
2023-03-26 22:14:38,481 INFO [finetune.py:976] (2/7) Epoch 18, batch 4300, loss[loss=0.1324, simple_loss=0.205, pruned_loss=0.02994, over 4801.00 frames. ], tot_loss[loss=0.178, simple_loss=0.2474, pruned_loss=0.05432, over 951833.11 frames. ], batch size: 25, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:14:53,820 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.31 vs. limit=5.0
2023-03-26 22:14:54,336 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=101684.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:15:33,151 INFO [finetune.py:976] (2/7) Epoch 18, batch 4350, loss[loss=0.1367, simple_loss=0.1989, pruned_loss=0.03724, over 4236.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2436, pruned_loss=0.05302, over 952196.21 frames. ], batch size: 18, lr: 3.33e-03, grad_scale: 16.0
2023-03-26 22:16:05,141 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=101745.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:16:10,239 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.272e+01 1.483e+02 1.686e+02 2.087e+02 3.591e+02, threshold=3.373e+02, percent-clipped=1.0
2023-03-26 22:16:16,965 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=101763.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:16:21,725 INFO [finetune.py:976] (2/7) Epoch 18, batch 4400, loss[loss=0.238, simple_loss=0.2998, pruned_loss=0.08807, over 4812.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.2457, pruned_loss=0.05376, over 951982.70 frames. ], batch size: 45, lr: 3.32e-03, grad_scale: 16.0
2023-03-26 22:16:27,075 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.1679, 2.8743, 3.0133, 2.9495, 2.7909, 2.6971, 3.2323, 1.0721], device='cuda:2'), covar=tensor([0.1896, 0.1950, 0.1801, 0.2326, 0.2634, 0.2587, 0.1678, 0.7330], device='cuda:2'), in_proj_covar=tensor([0.0349, 0.0243, 0.0277, 0.0291, 0.0332, 0.0281, 0.0301, 0.0295], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 22:16:49,622 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=101811.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:16:55,630 INFO [finetune.py:976] (2/7) Epoch 18, batch 4450, loss[loss=0.194, simple_loss=0.2677, pruned_loss=0.06016, over 4903.00 frames. ], tot_loss[loss=0.1804, simple_loss=0.25, pruned_loss=0.05542, over 952511.14 frames. ], batch size: 37, lr: 3.32e-03, grad_scale: 16.0
2023-03-26 22:17:16,755 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.107e+02 1.606e+02 1.893e+02 2.313e+02 4.401e+02, threshold=3.785e+02, percent-clipped=7.0
2023-03-26 22:17:29,381 INFO [finetune.py:976] (2/7) Epoch 18, batch 4500, loss[loss=0.2136, simple_loss=0.2798, pruned_loss=0.07373, over 4895.00 frames. ], tot_loss[loss=0.1831, simple_loss=0.2529, pruned_loss=0.05664, over 953927.79 frames. ], batch size: 35, lr: 3.32e-03, grad_scale: 16.0
2023-03-26 22:17:40,215 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6815, 1.7174, 1.5190, 1.9204, 2.2617, 2.0126, 1.6683, 1.3900], device='cuda:2'), covar=tensor([0.2203, 0.2002, 0.1900, 0.1591, 0.1859, 0.1221, 0.2362, 0.1982], device='cuda:2'), in_proj_covar=tensor([0.0243, 0.0209, 0.0213, 0.0193, 0.0241, 0.0187, 0.0215, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 22:18:02,637 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=101920.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 22:18:03,132 INFO [finetune.py:976] (2/7) Epoch 18, batch 4550, loss[loss=0.1683, simple_loss=0.241, pruned_loss=0.04777, over 4891.00 frames. ], tot_loss[loss=0.1837, simple_loss=0.2536, pruned_loss=0.05685, over 954689.52 frames. ], batch size: 32, lr: 3.32e-03, grad_scale: 16.0
2023-03-26 22:18:24,137 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.522e+02 1.794e+02 2.336e+02 4.256e+02, threshold=3.587e+02, percent-clipped=2.0
2023-03-26 22:18:36,745 INFO [finetune.py:976] (2/7) Epoch 18, batch 4600, loss[loss=0.2223, simple_loss=0.2828, pruned_loss=0.08094, over 4856.00 frames. ], tot_loss[loss=0.1829, simple_loss=0.2533, pruned_loss=0.05627, over 955692.49 frames. ], batch size: 44, lr: 3.32e-03, grad_scale: 16.0
2023-03-26 22:18:42,887 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=101981.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 22:18:46,867 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6726, 1.7093, 1.4885, 1.7117, 2.0056, 1.9385, 1.6123, 1.4801], device='cuda:2'), covar=tensor([0.0323, 0.0280, 0.0555, 0.0285, 0.0202, 0.0437, 0.0335, 0.0412], device='cuda:2'), in_proj_covar=tensor([0.0094, 0.0106, 0.0141, 0.0109, 0.0099, 0.0107, 0.0097, 0.0109], device='cuda:2'), out_proj_covar=tensor([7.2895e-05, 8.1440e-05, 1.1104e-04, 8.3614e-05, 7.7019e-05, 7.8868e-05, 7.2311e-05, 8.2947e-05], device='cuda:2')
2023-03-26 22:18:47,469 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=101987.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:18:54,767 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0
2023-03-26 22:19:11,218 INFO [finetune.py:976] (2/7) Epoch 18, batch 4650, loss[loss=0.1918, simple_loss=0.2598, pruned_loss=0.06184, over 4927.00 frames. ], tot_loss[loss=0.1807, simple_loss=0.25, pruned_loss=0.05567, over 952762.05 frames. ], batch size: 33, lr: 3.32e-03, grad_scale: 32.0
2023-03-26 22:19:12,711 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.55 vs. limit=5.0
2023-03-26 22:19:23,878 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=102040.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:19:29,231 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=102048.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:19:31,548 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.060e+02 1.564e+02 1.867e+02 2.217e+02 4.281e+02, threshold=3.734e+02, percent-clipped=4.0
2023-03-26 22:19:32,245 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.4178, 3.8028, 4.0166, 4.2340, 4.1772, 3.8450, 4.5169, 1.7672], device='cuda:2'), covar=tensor([0.0889, 0.0935, 0.0868, 0.1022, 0.1287, 0.1743, 0.0678, 0.5176], device='cuda:2'), in_proj_covar=tensor([0.0349, 0.0243, 0.0279, 0.0291, 0.0333, 0.0282, 0.0302, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 22:19:45,055 INFO [finetune.py:976] (2/7) Epoch 18, batch 4700, loss[loss=0.1727, simple_loss=0.2334, pruned_loss=0.05603, over 4820.00 frames. ], tot_loss[loss=0.1783, simple_loss=0.2471, pruned_loss=0.05476, over 955063.06 frames. ], batch size: 33, lr: 3.32e-03, grad_scale: 32.0
2023-03-26 22:19:53,720 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=2.06 vs. limit=2.0
2023-03-26 22:20:28,611 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.0223, 4.3582, 4.6019, 4.7990, 4.7669, 4.4808, 5.1399, 1.6263], device='cuda:2'), covar=tensor([0.0749, 0.0866, 0.0729, 0.0981, 0.1218, 0.1508, 0.0603, 0.5669], device='cuda:2'), in_proj_covar=tensor([0.0349, 0.0243, 0.0278, 0.0291, 0.0332, 0.0282, 0.0302, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 22:20:31,468 INFO [finetune.py:976] (2/7) Epoch 18, batch 4750, loss[loss=0.1776, simple_loss=0.2477, pruned_loss=0.05371, over 4904.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.2453, pruned_loss=0.05369, over 955490.60 frames. ], batch size: 35, lr: 3.32e-03, grad_scale: 32.0
2023-03-26 22:20:35,343 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.07 vs. limit=5.0
2023-03-26 22:20:55,658 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6236, 1.6053, 2.1780, 1.9238, 1.8613, 4.0420, 1.6164, 1.8111], device='cuda:2'), covar=tensor([0.0965, 0.1886, 0.1125, 0.0971, 0.1485, 0.0194, 0.1455, 0.1733], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0077, 0.0090, 0.0080, 0.0084, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 22:20:56,162 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.538e+02 1.904e+02 2.326e+02 4.380e+02, threshold=3.807e+02, percent-clipped=2.0
2023-03-26 22:21:23,316 INFO [finetune.py:976] (2/7) Epoch 18, batch 4800, loss[loss=0.1619, simple_loss=0.23, pruned_loss=0.04688, over 4814.00 frames. ], tot_loss[loss=0.1794, simple_loss=0.2488, pruned_loss=0.05504, over 955173.10 frames. ], batch size: 25, lr: 3.32e-03, grad_scale: 32.0
2023-03-26 22:21:51,887 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0
2023-03-26 22:21:53,489 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.0772, 2.6181, 2.4789, 1.2222, 2.6052, 2.1681, 2.0794, 2.2763], device='cuda:2'), covar=tensor([0.0909, 0.0937, 0.1642, 0.2195, 0.1840, 0.2162, 0.1976, 0.1294], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0194, 0.0201, 0.0184, 0.0214, 0.0209, 0.0224, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 22:22:00,413 INFO [finetune.py:976] (2/7) Epoch 18, batch 4850, loss[loss=0.1536, simple_loss=0.2199, pruned_loss=0.04365, over 4246.00 frames. ], tot_loss[loss=0.1824, simple_loss=0.2526, pruned_loss=0.05612, over 954919.03 frames. ], batch size: 18, lr: 3.32e-03, grad_scale: 32.0
2023-03-26 22:22:20,214 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.897e+01 1.478e+02 1.803e+02 2.234e+02 3.533e+02, threshold=3.606e+02, percent-clipped=0.0
2023-03-26 22:22:33,608 INFO [finetune.py:976] (2/7) Epoch 18, batch 4900, loss[loss=0.1811, simple_loss=0.2592, pruned_loss=0.05151, over 4736.00 frames. ], tot_loss[loss=0.1835, simple_loss=0.2534, pruned_loss=0.05677, over 951319.12 frames. ], batch size: 54, lr: 3.32e-03, grad_scale: 32.0
2023-03-26 22:22:37,665 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=102276.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 22:22:40,825 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7489, 1.9878, 1.5747, 1.7143, 2.2208, 2.1616, 1.9512, 1.8015], device='cuda:2'), covar=tensor([0.0440, 0.0355, 0.0634, 0.0371, 0.0383, 0.0643, 0.0337, 0.0470], device='cuda:2'), in_proj_covar=tensor([0.0095, 0.0106, 0.0142, 0.0109, 0.0099, 0.0107, 0.0098, 0.0109], device='cuda:2'), out_proj_covar=tensor([7.3412e-05, 8.1924e-05, 1.1196e-04, 8.4069e-05, 7.7018e-05, 7.9263e-05, 7.3053e-05, 8.3151e-05], device='cuda:2')
2023-03-26 22:22:56,963 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2638, 2.8824, 2.7392, 1.2041, 2.9887, 2.2248, 0.7673, 1.9409], device='cuda:2'), covar=tensor([0.2697, 0.2121, 0.1855, 0.3524, 0.1609, 0.1181, 0.4032, 0.1663], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0178, 0.0162, 0.0131, 0.0162, 0.0125, 0.0150, 0.0125], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 22:23:06,931 INFO [finetune.py:976] (2/7) Epoch 18, batch 4950, loss[loss=0.2192, simple_loss=0.2887, pruned_loss=0.07488, over 4883.00 frames. ], tot_loss[loss=0.1833, simple_loss=0.2535, pruned_loss=0.0565, over 951583.54 frames. ], batch size: 32, lr: 3.32e-03, grad_scale: 32.0
2023-03-26 22:23:12,982 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.62 vs. limit=5.0
2023-03-26 22:23:19,540 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=102340.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:23:21,365 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=102343.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:23:26,708 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.970e+01 1.449e+02 1.847e+02 2.188e+02 4.191e+02, threshold=3.694e+02, percent-clipped=1.0
2023-03-26 22:23:40,084 INFO [finetune.py:976] (2/7) Epoch 18, batch 5000, loss[loss=0.1676, simple_loss=0.2399, pruned_loss=0.0477, over 4803.00 frames. ], tot_loss[loss=0.1816, simple_loss=0.2514, pruned_loss=0.05585, over 953566.34 frames. ], batch size: 25, lr: 3.32e-03, grad_scale: 32.0
2023-03-26 22:23:46,545 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7066, 1.1584, 0.8385, 1.5367, 2.1181, 1.0463, 1.4439, 1.5062], device='cuda:2'), covar=tensor([0.1460, 0.2099, 0.1922, 0.1147, 0.1853, 0.1876, 0.1490, 0.1951], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0095, 0.0110, 0.0092, 0.0120, 0.0093, 0.0098, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-26 22:23:51,810 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=102388.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:24:13,295 INFO [finetune.py:976] (2/7) Epoch 18, batch 5050, loss[loss=0.1894, simple_loss=0.2521, pruned_loss=0.06337, over 4899.00 frames. ], tot_loss[loss=0.1807, simple_loss=0.2495, pruned_loss=0.05595, over 954613.72 frames. ], batch size: 35, lr: 3.32e-03, grad_scale: 32.0
2023-03-26 22:24:24,369 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8048, 3.6310, 3.3959, 1.7794, 3.6810, 2.8762, 0.6903, 2.6394], device='cuda:2'), covar=tensor([0.2519, 0.2140, 0.1652, 0.3441, 0.1232, 0.1032, 0.4781, 0.1690], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0179, 0.0162, 0.0130, 0.0163, 0.0125, 0.0150, 0.0125], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 22:24:33,970 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.223e+01 1.582e+02 1.966e+02 2.354e+02 3.513e+02, threshold=3.932e+02, percent-clipped=0.0
2023-03-26 22:24:46,889 INFO [finetune.py:976] (2/7) Epoch 18, batch 5100, loss[loss=0.1601, simple_loss=0.2332, pruned_loss=0.04346, over 4767.00 frames. ], tot_loss[loss=0.1773, simple_loss=0.2461, pruned_loss=0.0543, over 956572.59 frames. ], batch size: 26, lr: 3.32e-03, grad_scale: 32.0
2023-03-26 22:24:47,057 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.40 vs. limit=5.0
2023-03-26 22:25:06,475 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3759, 1.4912, 1.6321, 1.7621, 1.6235, 3.2553, 1.4193, 1.5903], device='cuda:2'), covar=tensor([0.0963, 0.1723, 0.1062, 0.0907, 0.1487, 0.0216, 0.1401, 0.1729], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0077, 0.0091, 0.0080, 0.0084, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 22:25:20,567 INFO [finetune.py:976] (2/7) Epoch 18, batch 5150, loss[loss=0.1702, simple_loss=0.2482, pruned_loss=0.04609, over 4872.00 frames. ], tot_loss[loss=0.1775, simple_loss=0.2461, pruned_loss=0.0544, over 957427.45 frames. ], batch size: 34, lr: 3.32e-03, grad_scale: 32.0
2023-03-26 22:25:25,501 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.66 vs. limit=2.0
2023-03-26 22:25:53,894 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.631e+02 1.989e+02 2.441e+02 4.766e+02, threshold=3.977e+02, percent-clipped=3.0
2023-03-26 22:26:14,502 INFO [finetune.py:976] (2/7) Epoch 18, batch 5200, loss[loss=0.1876, simple_loss=0.2677, pruned_loss=0.05371, over 4899.00 frames. ], tot_loss[loss=0.1819, simple_loss=0.2508, pruned_loss=0.05655, over 956579.31 frames. ], batch size: 36, lr: 3.32e-03, grad_scale: 32.0
2023-03-26 22:26:22,142 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=102576.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 22:26:56,625 INFO [finetune.py:976] (2/7) Epoch 18, batch 5250, loss[loss=0.1629, simple_loss=0.2376, pruned_loss=0.04407, over 4822.00 frames. ], tot_loss[loss=0.1845, simple_loss=0.2535, pruned_loss=0.05774, over 957679.23 frames. ], batch size: 30, lr: 3.32e-03, grad_scale: 32.0
2023-03-26 22:26:58,547 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=102624.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 22:27:11,538 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=102643.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:27:17,763 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.182e+02 1.670e+02 1.981e+02 2.217e+02 4.217e+02, threshold=3.962e+02, percent-clipped=1.0
2023-03-26 22:27:27,029 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4765, 1.8056, 1.4808, 1.4946, 2.0208, 1.9671, 1.8325, 1.7664], device='cuda:2'), covar=tensor([0.0544, 0.0368, 0.0652, 0.0346, 0.0370, 0.0635, 0.0308, 0.0423], device='cuda:2'), in_proj_covar=tensor([0.0095, 0.0107, 0.0143, 0.0110, 0.0099, 0.0108, 0.0098, 0.0110], device='cuda:2'), out_proj_covar=tensor([7.3927e-05, 8.2310e-05, 1.1271e-04, 8.4591e-05, 7.7309e-05, 7.9578e-05, 7.3443e-05, 8.3837e-05], device='cuda:2')
2023-03-26 22:27:29,390 INFO [finetune.py:976] (2/7) Epoch 18, batch 5300, loss[loss=0.1915, simple_loss=0.2641, pruned_loss=0.05943, over 4806.00 frames. ], tot_loss[loss=0.1846, simple_loss=0.2545, pruned_loss=0.05736, over 957244.01 frames. ], batch size: 41, lr: 3.32e-03, grad_scale: 32.0
2023-03-26 22:27:44,024 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=102691.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:27:44,073 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5691, 1.4602, 1.4608, 1.4770, 1.0267, 3.2666, 1.3460, 1.8376], device='cuda:2'), covar=tensor([0.3077, 0.2472, 0.2189, 0.2310, 0.1799, 0.0206, 0.2597, 0.1177], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0115, 0.0120, 0.0123, 0.0113, 0.0096, 0.0096, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-26 22:27:50,551 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.66 vs. limit=2.0
2023-03-26 22:28:03,083 INFO [finetune.py:976] (2/7) Epoch 18, batch 5350, loss[loss=0.2073, simple_loss=0.2662, pruned_loss=0.07425, over 4721.00 frames. ], tot_loss[loss=0.1835, simple_loss=0.2539, pruned_loss=0.05653, over 957220.08 frames. ], batch size: 59, lr: 3.32e-03, grad_scale: 32.0
2023-03-26 22:28:06,219 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5499, 1.4767, 2.0676, 3.2293, 2.1414, 2.3568, 0.8941, 2.6608], device='cuda:2'), covar=tensor([0.1696, 0.1426, 0.1261, 0.0580, 0.0804, 0.1514, 0.1845, 0.0508], device='cuda:2'), in_proj_covar=tensor([0.0098, 0.0115, 0.0133, 0.0163, 0.0100, 0.0134, 0.0123, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 22:28:25,303 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.195e+02 1.536e+02 1.872e+02 2.228e+02 4.473e+02, threshold=3.745e+02, percent-clipped=1.0
2023-03-26 22:28:36,910 INFO [finetune.py:976] (2/7) Epoch 18, batch 5400, loss[loss=0.1986, simple_loss=0.2697, pruned_loss=0.06371, over 4932.00 frames. ], tot_loss[loss=0.1815, simple_loss=0.2513, pruned_loss=0.0558, over 958505.21 frames. ], batch size: 38, lr: 3.32e-03, grad_scale: 32.0
2023-03-26 22:28:48,203 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0908, 2.2190, 1.9037, 2.3437, 2.7982, 2.3119, 2.2537, 1.5927], device='cuda:2'), covar=tensor([0.2207, 0.2010, 0.1901, 0.1553, 0.1617, 0.1138, 0.1956, 0.2019], device='cuda:2'), in_proj_covar=tensor([0.0240, 0.0208, 0.0212, 0.0192, 0.0240, 0.0186, 0.0214, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 22:29:10,820 INFO [finetune.py:976] (2/7) Epoch 18, batch 5450, loss[loss=0.1443, simple_loss=0.2131, pruned_loss=0.03773, over 4362.00 frames. ], tot_loss[loss=0.177, simple_loss=0.2467, pruned_loss=0.05359, over 958503.19 frames. ], batch size: 19, lr: 3.32e-03, grad_scale: 32.0
2023-03-26 22:29:30,980 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=102851.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:29:31,448 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.551e+01 1.511e+02 1.793e+02 2.102e+02 5.113e+02, threshold=3.586e+02, percent-clipped=1.0
2023-03-26 22:29:44,410 INFO [finetune.py:976] (2/7) Epoch 18, batch 5500, loss[loss=0.1736, simple_loss=0.2445, pruned_loss=0.05136, over 4843.00 frames. ], tot_loss[loss=0.1759, simple_loss=0.2447, pruned_loss=0.0535, over 956039.95 frames. ], batch size: 47, lr: 3.32e-03, grad_scale: 32.0
2023-03-26 22:29:54,086 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=102887.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:30:12,681 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=102912.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:30:14,559 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.62 vs. limit=2.0
2023-03-26 22:30:16,337 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.0381, 0.9006, 0.9867, 1.1157, 1.2210, 1.1572, 0.9843, 0.9163], device='cuda:2'), covar=tensor([0.0369, 0.0336, 0.0630, 0.0305, 0.0295, 0.0438, 0.0351, 0.0435], device='cuda:2'), in_proj_covar=tensor([0.0096, 0.0107, 0.0143, 0.0110, 0.0099, 0.0108, 0.0099, 0.0110], device='cuda:2'), out_proj_covar=tensor([7.4150e-05, 8.2680e-05, 1.1280e-04, 8.4912e-05, 7.7377e-05, 7.9738e-05, 7.3576e-05, 8.4155e-05], device='cuda:2')
2023-03-26 22:30:18,101 INFO [finetune.py:976] (2/7) Epoch 18, batch 5550, loss[loss=0.1555, simple_loss=0.2257, pruned_loss=0.0426, over 4721.00 frames. ], tot_loss[loss=0.1786, simple_loss=0.2475, pruned_loss=0.05482, over 956519.04 frames. ], batch size: 23, lr: 3.32e-03, grad_scale: 32.0
2023-03-26 22:30:36,208 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=102948.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:30:39,503 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.093e+02 1.559e+02 1.902e+02 2.213e+02 4.520e+02, threshold=3.805e+02, percent-clipped=3.0
2023-03-26 22:30:50,053 INFO [finetune.py:976] (2/7) Epoch 18, batch 5600, loss[loss=0.189, simple_loss=0.2606, pruned_loss=0.05868, over 4867.00 frames. ], tot_loss[loss=0.1815, simple_loss=0.2515, pruned_loss=0.05581, over 953768.10 frames. ], batch size: 34, lr: 3.32e-03, grad_scale: 16.0
2023-03-26 22:31:42,253 INFO [finetune.py:976] (2/7) Epoch 18, batch 5650, loss[loss=0.1793, simple_loss=0.2501, pruned_loss=0.05422, over 4887.00 frames. ], tot_loss[loss=0.1827, simple_loss=0.2535, pruned_loss=0.05597, over 953814.39 frames. ], batch size: 36, lr: 3.31e-03, grad_scale: 16.0
2023-03-26 22:31:44,117 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3300, 2.8855, 2.7548, 1.3862, 3.0222, 2.2399, 0.6222, 1.8501], device='cuda:2'), covar=tensor([0.2473, 0.2392, 0.1975, 0.3482, 0.1526, 0.1064, 0.4318, 0.1789], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0175, 0.0158, 0.0128, 0.0159, 0.0122, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 22:31:45,376 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.0812, 4.7175, 4.4860, 2.7342, 4.8529, 3.7978, 0.8404, 3.3149], device='cuda:2'), covar=tensor([0.2052, 0.1861, 0.1174, 0.2659, 0.0738, 0.0696, 0.4542, 0.1382], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0175, 0.0159, 0.0128, 0.0159, 0.0122, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-26 22:32:09,152 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.800e+01 1.506e+02 1.835e+02 2.191e+02 3.638e+02, threshold=3.670e+02, percent-clipped=0.0
2023-03-26 22:32:19,851 INFO [finetune.py:976] (2/7) Epoch 18, batch 5700, loss[loss=0.1689, simple_loss=0.216, pruned_loss=0.06091, over 4182.00 frames. ], tot_loss[loss=0.1794, simple_loss=0.2488, pruned_loss=0.05505, over 935769.59 frames. ], batch size: 18, lr: 3.31e-03, grad_scale: 16.0
2023-03-26 22:32:25,943 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0
2023-03-26 22:32:48,066 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103098.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:32:48,574 INFO [finetune.py:976] (2/7) Epoch 19, batch 0, loss[loss=0.1616, simple_loss=0.2347, pruned_loss=0.04429, over 4859.00 frames. ], tot_loss[loss=0.1616, simple_loss=0.2347, pruned_loss=0.04429, over 4859.00 frames. ], batch size: 31, lr: 3.31e-03, grad_scale: 16.0
2023-03-26 22:32:48,575 INFO [finetune.py:1001] (2/7) Computing validation loss
2023-03-26 22:33:03,099 INFO [finetune.py:1010] (2/7) Epoch 19, validation: loss=0.1586, simple_loss=0.2282, pruned_loss=0.04454, over 2265189.00 frames.
2023-03-26 22:33:03,099 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB 2023-03-26 22:33:25,835 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103132.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:33:32,552 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-26 22:33:38,061 INFO [finetune.py:976] (2/7) Epoch 19, batch 50, loss[loss=0.1777, simple_loss=0.2582, pruned_loss=0.04863, over 4244.00 frames. ], tot_loss[loss=0.1845, simple_loss=0.2548, pruned_loss=0.05713, over 215048.03 frames. ], batch size: 66, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:33:40,496 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.779e+01 1.456e+02 1.782e+02 2.150e+02 3.860e+02, threshold=3.565e+02, percent-clipped=1.0 2023-03-26 22:33:42,024 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0 2023-03-26 22:33:44,742 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=103159.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:33:58,487 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-26 22:34:07,249 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=103193.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:34:11,663 INFO [finetune.py:976] (2/7) Epoch 19, batch 100, loss[loss=0.1501, simple_loss=0.2168, pruned_loss=0.04168, over 4131.00 frames. ], tot_loss[loss=0.18, simple_loss=0.2487, pruned_loss=0.05569, over 377376.23 frames. ], batch size: 18, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:34:17,535 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=103207.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:34:40,720 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=103243.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:34:42,030 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-26 22:34:44,095 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.5923, 4.8445, 5.1351, 5.3775, 5.2668, 5.0403, 5.6999, 2.3464], device='cuda:2'), covar=tensor([0.0753, 0.0998, 0.0600, 0.0929, 0.1322, 0.1398, 0.0498, 0.5029], device='cuda:2'), in_proj_covar=tensor([0.0350, 0.0245, 0.0281, 0.0292, 0.0334, 0.0284, 0.0304, 0.0297], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:34:45,882 INFO [finetune.py:976] (2/7) Epoch 19, batch 150, loss[loss=0.1924, simple_loss=0.2497, pruned_loss=0.06755, over 4896.00 frames. ], tot_loss[loss=0.175, simple_loss=0.2428, pruned_loss=0.05359, over 504760.88 frames. ], batch size: 35, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:34:48,705 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.032e+02 1.463e+02 1.787e+02 2.269e+02 3.542e+02, threshold=3.573e+02, percent-clipped=0.0 2023-03-26 22:35:19,727 INFO [finetune.py:976] (2/7) Epoch 19, batch 200, loss[loss=0.1367, simple_loss=0.2055, pruned_loss=0.03389, over 4768.00 frames. ], tot_loss[loss=0.1751, simple_loss=0.2421, pruned_loss=0.05405, over 605963.98 frames. 
], batch size: 28, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:35:34,080 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.1705, 4.8368, 4.5098, 3.0509, 4.9484, 3.7849, 0.9352, 3.4434], device='cuda:2'), covar=tensor([0.2088, 0.1620, 0.1368, 0.2645, 0.0694, 0.0784, 0.4599, 0.1402], device='cuda:2'), in_proj_covar=tensor([0.0150, 0.0175, 0.0159, 0.0128, 0.0159, 0.0123, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 22:35:53,176 INFO [finetune.py:976] (2/7) Epoch 19, batch 250, loss[loss=0.2243, simple_loss=0.3011, pruned_loss=0.07372, over 4913.00 frames. ], tot_loss[loss=0.1775, simple_loss=0.2459, pruned_loss=0.05455, over 684953.30 frames. ], batch size: 43, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:35:56,516 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.389e+01 1.572e+02 1.886e+02 2.263e+02 4.128e+02, threshold=3.772e+02, percent-clipped=1.0 2023-03-26 22:36:05,019 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103366.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 22:36:09,987 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.30 vs. limit=5.0 2023-03-26 22:36:25,723 INFO [finetune.py:976] (2/7) Epoch 19, batch 300, loss[loss=0.1693, simple_loss=0.241, pruned_loss=0.04883, over 4743.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2509, pruned_loss=0.05633, over 744719.96 frames. ], batch size: 54, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:36:34,034 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9931, 1.1830, 2.0095, 1.8929, 1.7571, 1.7142, 1.7820, 1.8992], device='cuda:2'), covar=tensor([0.3359, 0.3815, 0.3156, 0.3522, 0.4399, 0.3513, 0.4170, 0.2884], device='cuda:2'), in_proj_covar=tensor([0.0250, 0.0240, 0.0259, 0.0276, 0.0274, 0.0249, 0.0284, 0.0241], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:37:00,432 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0 2023-03-26 22:37:02,187 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=103427.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 22:37:02,775 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5607, 1.3981, 1.4482, 1.7262, 1.4746, 2.9565, 1.3816, 1.4552], device='cuda:2'), covar=tensor([0.0878, 0.1777, 0.1195, 0.0888, 0.1638, 0.0281, 0.1426, 0.1745], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0077, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 22:37:21,695 INFO [finetune.py:976] (2/7) Epoch 19, batch 350, loss[loss=0.1519, simple_loss=0.2253, pruned_loss=0.03929, over 4877.00 frames. ], tot_loss[loss=0.1829, simple_loss=0.2527, pruned_loss=0.05653, over 792364.88 frames. 
], batch size: 32, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:37:28,081 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.109e+02 1.554e+02 1.898e+02 2.403e+02 5.343e+02, threshold=3.796e+02, percent-clipped=4.0 2023-03-26 22:37:29,298 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=103454.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:38:03,052 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=103488.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:38:10,178 INFO [finetune.py:976] (2/7) Epoch 19, batch 400, loss[loss=0.165, simple_loss=0.2398, pruned_loss=0.04514, over 4883.00 frames. ], tot_loss[loss=0.1824, simple_loss=0.2527, pruned_loss=0.05603, over 828584.53 frames. ], batch size: 43, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:38:10,905 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5674, 1.4252, 1.8383, 1.7982, 1.6283, 3.3299, 1.3526, 1.5243], device='cuda:2'), covar=tensor([0.0919, 0.1808, 0.1269, 0.0963, 0.1547, 0.0253, 0.1491, 0.1710], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0078, 0.0092, 0.0080, 0.0085, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 22:38:15,638 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=103507.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:38:23,943 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-26 22:38:39,539 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=103543.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:38:43,097 INFO [finetune.py:976] (2/7) Epoch 19, batch 450, loss[loss=0.2029, simple_loss=0.266, pruned_loss=0.06991, over 4927.00 frames. ], tot_loss[loss=0.1819, simple_loss=0.2522, pruned_loss=0.05585, over 856955.62 frames. ], batch size: 38, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:38:45,992 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.312e+01 1.502e+02 1.696e+02 2.061e+02 2.854e+02, threshold=3.392e+02, percent-clipped=0.0 2023-03-26 22:38:47,236 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=103555.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:38:56,049 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0 2023-03-26 22:39:18,396 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.1416, 2.8682, 3.0020, 2.9293, 2.7633, 2.7240, 3.1970, 0.9503], device='cuda:2'), covar=tensor([0.1840, 0.1955, 0.2176, 0.2505, 0.2748, 0.3139, 0.2078, 0.7551], device='cuda:2'), in_proj_covar=tensor([0.0345, 0.0241, 0.0278, 0.0289, 0.0330, 0.0281, 0.0299, 0.0292], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:39:19,608 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=103591.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:39:24,372 INFO [finetune.py:976] (2/7) Epoch 19, batch 500, loss[loss=0.1508, simple_loss=0.2265, pruned_loss=0.03751, over 4930.00 frames. ], tot_loss[loss=0.1799, simple_loss=0.2496, pruned_loss=0.05514, over 878192.45 frames. 
], batch size: 38, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:39:43,289 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9853, 1.9524, 1.7400, 2.1136, 2.5554, 2.1268, 1.7884, 1.6200], device='cuda:2'), covar=tensor([0.2376, 0.1972, 0.2034, 0.1721, 0.1697, 0.1192, 0.2378, 0.2118], device='cuda:2'), in_proj_covar=tensor([0.0242, 0.0209, 0.0213, 0.0193, 0.0242, 0.0187, 0.0216, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:39:57,690 INFO [finetune.py:976] (2/7) Epoch 19, batch 550, loss[loss=0.1817, simple_loss=0.2401, pruned_loss=0.06168, over 4805.00 frames. ], tot_loss[loss=0.1778, simple_loss=0.2469, pruned_loss=0.05435, over 894384.40 frames. ], batch size: 25, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:40:00,581 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.093e+02 1.543e+02 1.834e+02 2.179e+02 4.966e+02, threshold=3.668e+02, percent-clipped=2.0 2023-03-26 22:40:31,341 INFO [finetune.py:976] (2/7) Epoch 19, batch 600, loss[loss=0.1603, simple_loss=0.2386, pruned_loss=0.04102, over 4803.00 frames. ], tot_loss[loss=0.1791, simple_loss=0.2486, pruned_loss=0.05481, over 908636.91 frames. ], batch size: 45, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:40:47,253 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=103722.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 22:40:58,707 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103740.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:41:04,044 INFO [finetune.py:976] (2/7) Epoch 19, batch 650, loss[loss=0.1679, simple_loss=0.2395, pruned_loss=0.04815, over 4905.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.2518, pruned_loss=0.05578, over 919267.13 frames. 
], batch size: 35, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:41:04,116 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8769, 3.8660, 3.6900, 2.0114, 4.0033, 3.0439, 0.7859, 2.6623], device='cuda:2'), covar=tensor([0.2304, 0.1991, 0.1343, 0.3156, 0.0966, 0.0995, 0.4583, 0.1494], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0176, 0.0160, 0.0128, 0.0160, 0.0123, 0.0148, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 22:41:06,466 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.131e+02 1.539e+02 1.795e+02 2.169e+02 3.837e+02, threshold=3.591e+02, percent-clipped=1.0 2023-03-26 22:41:07,208 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=103754.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:41:11,334 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4981, 1.4342, 1.4675, 0.8724, 1.5081, 1.7782, 1.7623, 1.3111], device='cuda:2'), covar=tensor([0.0997, 0.0696, 0.0553, 0.0560, 0.0543, 0.0552, 0.0346, 0.0766], device='cuda:2'), in_proj_covar=tensor([0.0124, 0.0150, 0.0123, 0.0126, 0.0130, 0.0128, 0.0141, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.0686e-05, 1.0856e-04, 8.8057e-05, 8.9333e-05, 9.1241e-05, 9.1447e-05, 1.0113e-04, 1.0594e-04], device='cuda:2') 2023-03-26 22:41:15,349 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103765.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:41:30,849 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=103788.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:41:37,406 INFO [finetune.py:976] (2/7) Epoch 19, batch 700, loss[loss=0.1724, simple_loss=0.2543, pruned_loss=0.04523, over 4896.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.2533, pruned_loss=0.05589, over 927130.22 frames. ], batch size: 37, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:41:38,877 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=103801.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:41:39,435 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=103802.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:42:02,027 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=103826.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:42:02,141 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.72 vs. limit=2.0 2023-03-26 22:42:13,879 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=103836.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:42:24,078 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0 2023-03-26 22:42:26,322 INFO [finetune.py:976] (2/7) Epoch 19, batch 750, loss[loss=0.1967, simple_loss=0.2664, pruned_loss=0.06352, over 4743.00 frames. ], tot_loss[loss=0.1827, simple_loss=0.2537, pruned_loss=0.05584, over 932843.62 frames. 
], batch size: 27, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:42:33,271 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.573e+02 1.871e+02 2.192e+02 5.260e+02, threshold=3.742e+02, percent-clipped=2.0 2023-03-26 22:43:03,204 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103876.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:43:12,701 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4924, 1.5655, 2.0106, 1.9581, 1.8308, 3.6925, 1.6176, 1.6745], device='cuda:2'), covar=tensor([0.0948, 0.1701, 0.0981, 0.0861, 0.1369, 0.0241, 0.1362, 0.1654], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0090, 0.0080, 0.0084, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 22:43:22,082 INFO [finetune.py:976] (2/7) Epoch 19, batch 800, loss[loss=0.1696, simple_loss=0.2499, pruned_loss=0.0446, over 4789.00 frames. ], tot_loss[loss=0.1813, simple_loss=0.2524, pruned_loss=0.05507, over 935850.74 frames. ], batch size: 29, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:43:22,356 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-26 22:43:25,209 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3749, 1.3882, 1.8210, 1.8030, 1.5539, 3.4669, 1.3858, 1.4753], device='cuda:2'), covar=tensor([0.1153, 0.2022, 0.1265, 0.1055, 0.1769, 0.0290, 0.1658, 0.2081], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0080, 0.0084, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 22:43:32,669 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.50 vs. limit=2.0 2023-03-26 22:43:48,640 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=103937.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 22:43:55,832 INFO [finetune.py:976] (2/7) Epoch 19, batch 850, loss[loss=0.1492, simple_loss=0.2249, pruned_loss=0.03673, over 4757.00 frames. ], tot_loss[loss=0.1797, simple_loss=0.2503, pruned_loss=0.05454, over 940731.98 frames. ], batch size: 27, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:43:58,237 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.100e+02 1.499e+02 1.798e+02 2.103e+02 3.961e+02, threshold=3.597e+02, percent-clipped=1.0 2023-03-26 22:44:25,973 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103990.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:44:30,237 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103997.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:44:31,321 INFO [finetune.py:976] (2/7) Epoch 19, batch 900, loss[loss=0.1358, simple_loss=0.2076, pruned_loss=0.03196, over 4849.00 frames. ], tot_loss[loss=0.1768, simple_loss=0.2471, pruned_loss=0.05326, over 943748.70 frames. ], batch size: 47, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:44:46,730 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=104022.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 22:44:53,228 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.52 vs. limit=2.0 2023-03-26 22:45:05,988 INFO [finetune.py:976] (2/7) Epoch 19, batch 950, loss[loss=0.1566, simple_loss=0.234, pruned_loss=0.03963, over 4913.00 frames. 
], tot_loss[loss=0.1747, simple_loss=0.2447, pruned_loss=0.05233, over 949025.75 frames. ], batch size: 37, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:45:06,102 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=104049.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:45:07,329 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=104051.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:45:08,386 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.542e+01 1.532e+02 1.748e+02 2.079e+02 4.067e+02, threshold=3.497e+02, percent-clipped=1.0 2023-03-26 22:45:11,550 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=104058.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:45:18,738 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=104070.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 22:45:36,942 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=104096.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:45:38,695 INFO [finetune.py:976] (2/7) Epoch 19, batch 1000, loss[loss=0.1807, simple_loss=0.2485, pruned_loss=0.05648, over 4863.00 frames. ], tot_loss[loss=0.1769, simple_loss=0.2468, pruned_loss=0.05346, over 948658.69 frames. ], batch size: 34, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:45:45,416 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=104110.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:45:52,087 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=104121.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:46:12,344 INFO [finetune.py:976] (2/7) Epoch 19, batch 1050, loss[loss=0.2295, simple_loss=0.2967, pruned_loss=0.08117, over 4754.00 frames. ], tot_loss[loss=0.1792, simple_loss=0.25, pruned_loss=0.05422, over 951083.93 frames. ], batch size: 54, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:46:14,761 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.111e+02 1.591e+02 1.940e+02 2.273e+02 3.456e+02, threshold=3.881e+02, percent-clipped=0.0 2023-03-26 22:46:45,914 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7468, 1.7214, 1.4664, 1.8667, 2.2942, 1.8372, 1.5112, 1.4250], device='cuda:2'), covar=tensor([0.2100, 0.1843, 0.1866, 0.1490, 0.1408, 0.1157, 0.2267, 0.1858], device='cuda:2'), in_proj_covar=tensor([0.0243, 0.0209, 0.0214, 0.0193, 0.0242, 0.0187, 0.0216, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:46:53,935 INFO [finetune.py:976] (2/7) Epoch 19, batch 1100, loss[loss=0.1732, simple_loss=0.2421, pruned_loss=0.05215, over 4902.00 frames. ], tot_loss[loss=0.1794, simple_loss=0.2507, pruned_loss=0.05409, over 949629.73 frames. ], batch size: 37, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:47:02,611 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-26 22:47:16,906 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=104232.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 22:47:39,153 INFO [finetune.py:976] (2/7) Epoch 19, batch 1150, loss[loss=0.2024, simple_loss=0.2694, pruned_loss=0.06775, over 4753.00 frames. ], tot_loss[loss=0.1793, simple_loss=0.2503, pruned_loss=0.0542, over 950582.78 frames. 
], batch size: 28, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:47:47,027 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.055e+02 1.710e+02 1.992e+02 2.366e+02 4.129e+02, threshold=3.984e+02, percent-clipped=1.0 2023-03-26 22:47:50,725 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-26 22:48:10,447 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-26 22:48:22,262 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0 2023-03-26 22:48:25,525 INFO [finetune.py:976] (2/7) Epoch 19, batch 1200, loss[loss=0.1936, simple_loss=0.2614, pruned_loss=0.06288, over 4840.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.2496, pruned_loss=0.05403, over 952404.25 frames. ], batch size: 47, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:49:05,635 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=104346.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:49:07,373 INFO [finetune.py:976] (2/7) Epoch 19, batch 1250, loss[loss=0.1279, simple_loss=0.1989, pruned_loss=0.02845, over 4763.00 frames. ], tot_loss[loss=0.1773, simple_loss=0.2475, pruned_loss=0.05359, over 950944.50 frames. ], batch size: 27, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:49:10,322 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.979e+01 1.472e+02 1.754e+02 2.218e+02 4.171e+02, threshold=3.509e+02, percent-clipped=1.0 2023-03-26 22:49:10,410 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=104353.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:49:12,800 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.6252, 1.6384, 1.6134, 0.8771, 1.7699, 1.9792, 1.9099, 1.4870], device='cuda:2'), covar=tensor([0.1002, 0.0728, 0.0548, 0.0594, 0.0478, 0.0526, 0.0368, 0.0647], device='cuda:2'), in_proj_covar=tensor([0.0124, 0.0151, 0.0124, 0.0126, 0.0130, 0.0128, 0.0142, 0.0148], device='cuda:2'), out_proj_covar=tensor([9.1021e-05, 1.0925e-04, 8.8856e-05, 8.9371e-05, 9.1788e-05, 9.1947e-05, 1.0164e-04, 1.0630e-04], device='cuda:2') 2023-03-26 22:49:20,631 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1550, 2.0485, 1.6766, 1.9815, 2.0129, 1.9509, 1.9895, 2.6179], device='cuda:2'), covar=tensor([0.3680, 0.3999, 0.3424, 0.3810, 0.4069, 0.2556, 0.3868, 0.1803], device='cuda:2'), in_proj_covar=tensor([0.0287, 0.0261, 0.0230, 0.0275, 0.0251, 0.0220, 0.0252, 0.0232], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:49:39,165 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=104396.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:49:39,914 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-26 22:49:41,418 INFO [finetune.py:976] (2/7) Epoch 19, batch 1300, loss[loss=0.2184, simple_loss=0.2806, pruned_loss=0.07813, over 4821.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2444, pruned_loss=0.05255, over 951330.87 frames. 
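
The zipformer.py:1188 entries are per-stack layer-dropout diagnostics: each encoder stack carries its own (warmup_begin, warmup_end) batch window, and on some batches whole layers are skipped (layers_to_drop), a stochastic-depth style regularizer. Here batch_count is far past every warmup_end, yet num_to_drop is still occasionally 1, which suggests a small residual drop rate after warmup. A toy stand-in with placeholder probabilities (the actual schedule is not shown in this log):

    import random
    from typing import Set

    def pick_layers_to_drop(batch_count: float, warmup_end: float,
                            num_layers: int) -> Set[int]:
        # Placeholder rates: a higher chance while a stack is still warming
        # up, a small residual chance afterwards.  Both values are
        # hypothetical; the log only shows num_to_drop in {0, 1}.
        p = 0.075 if batch_count < warmup_end else 0.025
        return {i for i in range(num_layers) if random.random() < p}
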
], batch size: 40, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:49:45,741 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=104405.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:49:56,439 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=104421.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:50:07,985 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0411, 1.9465, 1.5892, 1.8476, 1.7767, 1.7421, 1.8311, 2.4084], device='cuda:2'), covar=tensor([0.3366, 0.3658, 0.3000, 0.3233, 0.3536, 0.2317, 0.3416, 0.1683], device='cuda:2'), in_proj_covar=tensor([0.0287, 0.0261, 0.0230, 0.0275, 0.0251, 0.0220, 0.0252, 0.0232], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:50:08,521 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.2185, 2.8759, 2.9646, 3.1385, 3.0207, 2.8873, 3.2776, 1.0539], device='cuda:2'), covar=tensor([0.1130, 0.1030, 0.1056, 0.1274, 0.1606, 0.1776, 0.1240, 0.5373], device='cuda:2'), in_proj_covar=tensor([0.0347, 0.0241, 0.0278, 0.0288, 0.0329, 0.0280, 0.0300, 0.0293], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:50:10,348 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=104444.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:50:14,760 INFO [finetune.py:976] (2/7) Epoch 19, batch 1350, loss[loss=0.1552, simple_loss=0.2297, pruned_loss=0.04042, over 4775.00 frames. ], tot_loss[loss=0.1758, simple_loss=0.2448, pruned_loss=0.05341, over 950596.97 frames. ], batch size: 26, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:50:17,642 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.092e+02 1.617e+02 1.931e+02 2.310e+02 3.973e+02, threshold=3.863e+02, percent-clipped=4.0 2023-03-26 22:50:24,988 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-26 22:50:29,049 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=104469.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:50:38,090 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9287, 1.6009, 2.2810, 1.4335, 2.0616, 2.2005, 1.5702, 2.3287], device='cuda:2'), covar=tensor([0.1342, 0.2172, 0.1413, 0.2221, 0.0856, 0.1474, 0.2805, 0.0867], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0203, 0.0190, 0.0188, 0.0175, 0.0213, 0.0217, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:50:48,508 INFO [finetune.py:976] (2/7) Epoch 19, batch 1400, loss[loss=0.1679, simple_loss=0.2481, pruned_loss=0.04382, over 4820.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2483, pruned_loss=0.05483, over 951272.40 frames. ], batch size: 39, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:51:05,772 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.57 vs. limit=2.0 2023-03-26 22:51:10,565 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=104532.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 22:51:13,690 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. 
limit=2.0 2023-03-26 22:51:18,412 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7564, 3.8892, 3.7026, 1.9285, 4.0267, 3.0039, 1.3162, 2.7505], device='cuda:2'), covar=tensor([0.2109, 0.1713, 0.1354, 0.3040, 0.0834, 0.0897, 0.3740, 0.1278], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0177, 0.0161, 0.0129, 0.0160, 0.0123, 0.0148, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 22:51:21,267 INFO [finetune.py:976] (2/7) Epoch 19, batch 1450, loss[loss=0.19, simple_loss=0.2586, pruned_loss=0.06074, over 4833.00 frames. ], tot_loss[loss=0.181, simple_loss=0.251, pruned_loss=0.05549, over 952861.94 frames. ], batch size: 33, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:51:24,644 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.038e+02 1.650e+02 1.913e+02 2.290e+02 4.485e+02, threshold=3.826e+02, percent-clipped=3.0 2023-03-26 22:51:24,819 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8876, 1.2131, 1.8857, 1.8569, 1.6747, 1.6356, 1.7684, 1.7360], device='cuda:2'), covar=tensor([0.3686, 0.3848, 0.3122, 0.3436, 0.4326, 0.3468, 0.4079, 0.3008], device='cuda:2'), in_proj_covar=tensor([0.0251, 0.0242, 0.0260, 0.0278, 0.0276, 0.0250, 0.0285, 0.0242], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:51:35,740 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.2959, 2.9613, 3.0817, 3.2651, 3.0772, 2.9072, 3.3578, 0.8984], device='cuda:2'), covar=tensor([0.1136, 0.0987, 0.1115, 0.1147, 0.1774, 0.1790, 0.1074, 0.5847], device='cuda:2'), in_proj_covar=tensor([0.0347, 0.0242, 0.0279, 0.0288, 0.0331, 0.0280, 0.0301, 0.0294], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:51:42,884 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=104580.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:52:02,825 INFO [finetune.py:976] (2/7) Epoch 19, batch 1500, loss[loss=0.2123, simple_loss=0.275, pruned_loss=0.0748, over 4703.00 frames. ], tot_loss[loss=0.1829, simple_loss=0.253, pruned_loss=0.05641, over 954792.00 frames. ], batch size: 59, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:52:03,122 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.78 vs. limit=2.0 2023-03-26 22:52:33,836 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=104646.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:52:35,531 INFO [finetune.py:976] (2/7) Epoch 19, batch 1550, loss[loss=0.1643, simple_loss=0.2271, pruned_loss=0.05068, over 4799.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.2531, pruned_loss=0.05591, over 956298.81 frames. ], batch size: 25, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:52:40,199 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.878e+01 1.494e+02 1.864e+02 2.283e+02 3.386e+02, threshold=3.728e+02, percent-clipped=0.0 2023-03-26 22:52:40,319 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=104653.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:52:42,118 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.42 vs. 
limit=2.0 2023-03-26 22:53:26,170 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=104694.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:53:29,187 INFO [finetune.py:976] (2/7) Epoch 19, batch 1600, loss[loss=0.174, simple_loss=0.2344, pruned_loss=0.05677, over 4872.00 frames. ], tot_loss[loss=0.1794, simple_loss=0.2498, pruned_loss=0.05456, over 956110.80 frames. ], batch size: 34, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:53:35,259 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=104701.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:53:38,268 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=104705.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:53:51,380 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.4416, 3.9916, 4.1273, 3.9850, 3.9939, 3.7521, 4.5499, 1.6196], device='cuda:2'), covar=tensor([0.1304, 0.1749, 0.1733, 0.2348, 0.2108, 0.2529, 0.1276, 0.7864], device='cuda:2'), in_proj_covar=tensor([0.0347, 0.0241, 0.0279, 0.0290, 0.0331, 0.0281, 0.0301, 0.0294], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:53:58,673 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.95 vs. limit=5.0 2023-03-26 22:54:01,675 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4656, 1.3478, 1.6740, 2.4791, 1.7255, 2.2205, 0.8918, 2.1238], device='cuda:2'), covar=tensor([0.1800, 0.1495, 0.1156, 0.0704, 0.0919, 0.1250, 0.1681, 0.0629], device='cuda:2'), in_proj_covar=tensor([0.0098, 0.0115, 0.0131, 0.0162, 0.0097, 0.0134, 0.0122, 0.0098], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 22:54:02,330 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1238, 1.9704, 1.6834, 1.9365, 1.8916, 1.8876, 1.9346, 2.5877], device='cuda:2'), covar=tensor([0.3678, 0.4252, 0.3400, 0.3800, 0.3811, 0.2506, 0.3810, 0.1643], device='cuda:2'), in_proj_covar=tensor([0.0286, 0.0261, 0.0230, 0.0275, 0.0251, 0.0221, 0.0251, 0.0231], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:54:09,599 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2265, 1.9763, 2.0319, 0.8722, 2.3248, 2.5812, 2.1480, 1.9328], device='cuda:2'), covar=tensor([0.1088, 0.0902, 0.0547, 0.0842, 0.0532, 0.0705, 0.0613, 0.0862], device='cuda:2'), in_proj_covar=tensor([0.0125, 0.0152, 0.0125, 0.0126, 0.0132, 0.0129, 0.0142, 0.0149], device='cuda:2'), out_proj_covar=tensor([9.1676e-05, 1.0994e-04, 8.9412e-05, 8.9737e-05, 9.3005e-05, 9.2714e-05, 1.0188e-04, 1.0683e-04], device='cuda:2') 2023-03-26 22:54:11,339 INFO [finetune.py:976] (2/7) Epoch 19, batch 1650, loss[loss=0.1686, simple_loss=0.2411, pruned_loss=0.04804, over 4900.00 frames. ], tot_loss[loss=0.1774, simple_loss=0.2475, pruned_loss=0.05368, over 957264.75 frames. 
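
The scaling.py:679 entries track a per-module whitening diagnostic: a metric is compared against a limit (2.0 for the 96- and 192-channel modules, 5.0 for the 384-channel one), and a corrective term presumably engages only when the metric exceeds the limit, which is why most entries read "metric=... vs. limit=...". One plausible definition, assuming the metric measures how far the grouped feature covariance is from a scaled identity (1.0 would be perfectly white):

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
        # x: (frames, channels).  Assumed definition: mean of squared
        # covariance eigenvalues over the squared mean eigenvalue, per
        # channel group.  Equals 1.0 iff all eigenvalues are equal and
        # grows as the spectrum becomes lopsided; not scaling.py verbatim.
        frames, channels = x.shape
        xg = x.reshape(frames, num_groups, channels // num_groups)
        metrics = []
        for g in range(num_groups):
            f = xg[:, g, :] - xg[:, g, :].mean(dim=0)
            cov = f.T @ f / frames
            eigs = torch.linalg.eigvalsh(cov)
            metrics.append(eigs.pow(2).mean() / (eigs.mean() ** 2 + 1e-20))
        return torch.stack(metrics).mean()
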
], batch size: 32, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:54:13,774 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.017e+02 1.513e+02 1.754e+02 2.121e+02 3.523e+02, threshold=3.508e+02, percent-clipped=0.0 2023-03-26 22:54:13,846 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=104753.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:54:15,150 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4260, 2.3108, 1.8148, 2.3407, 2.3001, 2.0292, 2.6542, 2.4080], device='cuda:2'), covar=tensor([0.1238, 0.2101, 0.3047, 0.2552, 0.2513, 0.1665, 0.2836, 0.1695], device='cuda:2'), in_proj_covar=tensor([0.0185, 0.0188, 0.0235, 0.0254, 0.0246, 0.0203, 0.0214, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:54:20,489 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9446, 1.4284, 1.9288, 1.9375, 1.7095, 1.6468, 1.8355, 1.7628], device='cuda:2'), covar=tensor([0.3620, 0.3789, 0.2952, 0.3245, 0.4353, 0.3523, 0.4118, 0.3099], device='cuda:2'), in_proj_covar=tensor([0.0251, 0.0241, 0.0260, 0.0278, 0.0276, 0.0250, 0.0286, 0.0242], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:54:28,675 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7342, 3.7934, 3.6568, 2.0040, 3.9196, 2.9894, 0.9907, 2.8175], device='cuda:2'), covar=tensor([0.2503, 0.2171, 0.1471, 0.3156, 0.1081, 0.1034, 0.4329, 0.1375], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0176, 0.0160, 0.0128, 0.0160, 0.0123, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 22:54:29,922 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.6506, 1.4299, 1.4554, 0.7815, 1.6038, 1.8560, 1.7176, 1.3431], device='cuda:2'), covar=tensor([0.0953, 0.0891, 0.0628, 0.0652, 0.0485, 0.0585, 0.0443, 0.0762], device='cuda:2'), in_proj_covar=tensor([0.0125, 0.0151, 0.0125, 0.0126, 0.0132, 0.0129, 0.0141, 0.0148], device='cuda:2'), out_proj_covar=tensor([9.1477e-05, 1.0959e-04, 8.9192e-05, 8.9388e-05, 9.2847e-05, 9.2401e-05, 1.0155e-04, 1.0663e-04], device='cuda:2') 2023-03-26 22:54:44,696 INFO [finetune.py:976] (2/7) Epoch 19, batch 1700, loss[loss=0.1567, simple_loss=0.2172, pruned_loss=0.04806, over 3716.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.2458, pruned_loss=0.05341, over 955787.72 frames. ], batch size: 16, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:55:17,898 INFO [finetune.py:976] (2/7) Epoch 19, batch 1750, loss[loss=0.1591, simple_loss=0.2424, pruned_loss=0.03785, over 4851.00 frames. ], tot_loss[loss=0.1775, simple_loss=0.2468, pruned_loss=0.05408, over 954309.17 frames. 
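
The zipformer.py:2441 dumps print attn_weights_entropy: per-head statistics of how concentrated the attention distributions are, together with (co)variance estimates, a common health check against attention collapse. Assuming the entropy part is the standard Shannon entropy of each head's attention rows averaged over query positions (the exact statistic is not shown in this log):

    import torch

    def attention_entropy(attn: torch.Tensor) -> torch.Tensor:
        # attn: (num_heads, tgt_len, src_len), each row a distribution.
        # Returns one value per head: near 0 means peaky attention,
        # near log(src_len) means almost uniform.
        h = -(attn * (attn + 1e-20).log()).sum(dim=-1)
        return h.mean(dim=-1)
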
], batch size: 44, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:55:20,304 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.085e+02 1.630e+02 1.888e+02 2.368e+02 5.925e+02, threshold=3.776e+02, percent-clipped=5.0 2023-03-26 22:55:39,825 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4933, 1.4215, 1.8971, 1.7554, 1.7244, 3.5232, 1.4555, 1.5657], device='cuda:2'), covar=tensor([0.1093, 0.1992, 0.1325, 0.1103, 0.1632, 0.0284, 0.1640, 0.1972], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0077, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 22:55:51,602 INFO [finetune.py:976] (2/7) Epoch 19, batch 1800, loss[loss=0.1983, simple_loss=0.2729, pruned_loss=0.0619, over 4895.00 frames. ], tot_loss[loss=0.1809, simple_loss=0.2512, pruned_loss=0.05529, over 955803.32 frames. ], batch size: 37, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:55:56,572 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0089, 1.5321, 0.7328, 1.7577, 2.1872, 1.4204, 1.7990, 1.7744], device='cuda:2'), covar=tensor([0.1435, 0.2026, 0.2172, 0.1149, 0.1922, 0.2041, 0.1366, 0.1925], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0095, 0.0110, 0.0092, 0.0119, 0.0093, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-26 22:56:25,139 INFO [finetune.py:976] (2/7) Epoch 19, batch 1850, loss[loss=0.2031, simple_loss=0.2621, pruned_loss=0.07205, over 4910.00 frames. ], tot_loss[loss=0.1826, simple_loss=0.2527, pruned_loss=0.05623, over 956508.16 frames. ], batch size: 46, lr: 3.30e-03, grad_scale: 32.0 2023-03-26 22:56:27,537 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.864e+01 1.560e+02 1.785e+02 2.312e+02 4.235e+02, threshold=3.569e+02, percent-clipped=1.0 2023-03-26 22:56:41,316 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4919, 1.4043, 1.4891, 0.8203, 1.5300, 1.4629, 1.4763, 1.3444], device='cuda:2'), covar=tensor([0.0634, 0.0843, 0.0762, 0.1058, 0.0919, 0.0777, 0.0687, 0.1309], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0133, 0.0138, 0.0119, 0.0123, 0.0136, 0.0138, 0.0160], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:56:50,321 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-26 22:57:00,525 INFO [finetune.py:976] (2/7) Epoch 19, batch 1900, loss[loss=0.2189, simple_loss=0.2896, pruned_loss=0.07406, over 4843.00 frames. ], tot_loss[loss=0.1831, simple_loss=0.2538, pruned_loss=0.05622, over 957534.17 frames. 
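
use_fp16 is enabled in this run, and the logged grad_scale is the loss-scaling factor of mixed-precision training. It sits at 16.0, doubles to 32.0 at batch 1850, and (further down) doubles again to 64.0 at batch 3850, exactly 2000 batches later, matching the default growth_interval=2000 of torch.cuda.amp.GradScaler: the scale doubles after every 2000 consecutive overflow-free steps. A minimal sketch of that loop (whether finetune.py uses GradScaler with exactly these arguments is an assumption):

    import torch

    scaler = torch.cuda.amp.GradScaler(init_scale=16.0, growth_factor=2.0,
                                       growth_interval=2000)

    def amp_step(model, optimizer, loss):
        optimizer.zero_grad()
        scaler.scale(loss).backward()  # backward through the scaled loss
        scaler.step(optimizer)         # unscales; skips the step on inf/nan
        scaler.update()                # doubles the scale after a clean run
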
], batch size: 44, lr: 3.30e-03, grad_scale: 32.0 2023-03-26 22:57:20,706 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8029, 4.3217, 4.1583, 2.2005, 4.4683, 3.3895, 0.7724, 3.0795], device='cuda:2'), covar=tensor([0.2426, 0.1556, 0.1164, 0.2808, 0.0748, 0.0743, 0.4177, 0.1155], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0177, 0.0160, 0.0128, 0.0160, 0.0123, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 22:57:25,269 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3557, 1.3856, 1.5821, 1.0398, 1.3484, 1.5294, 1.3768, 1.6754], device='cuda:2'), covar=tensor([0.1111, 0.1983, 0.1296, 0.1561, 0.0878, 0.1134, 0.2868, 0.0772], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0204, 0.0190, 0.0188, 0.0174, 0.0213, 0.0217, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:57:27,688 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2301, 2.1215, 1.7994, 2.2211, 2.0022, 2.0290, 2.0284, 2.8052], device='cuda:2'), covar=tensor([0.3787, 0.4908, 0.3472, 0.4532, 0.5093, 0.2408, 0.4442, 0.1663], device='cuda:2'), in_proj_covar=tensor([0.0288, 0.0262, 0.0231, 0.0277, 0.0252, 0.0222, 0.0253, 0.0232], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:57:37,594 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2232, 1.4986, 0.6551, 2.1663, 2.3728, 1.8571, 1.8532, 1.9519], device='cuda:2'), covar=tensor([0.1327, 0.2003, 0.2202, 0.1077, 0.1871, 0.1782, 0.1374, 0.1899], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0095, 0.0110, 0.0091, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 22:57:41,798 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1579, 1.9260, 1.7767, 1.8081, 1.8249, 1.8342, 1.8476, 2.6111], device='cuda:2'), covar=tensor([0.3583, 0.4168, 0.3122, 0.3968, 0.3908, 0.2481, 0.3838, 0.1554], device='cuda:2'), in_proj_covar=tensor([0.0287, 0.0261, 0.0230, 0.0276, 0.0252, 0.0221, 0.0252, 0.0232], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:57:42,257 INFO [finetune.py:976] (2/7) Epoch 19, batch 1950, loss[loss=0.1793, simple_loss=0.2549, pruned_loss=0.05186, over 4925.00 frames. ], tot_loss[loss=0.182, simple_loss=0.2524, pruned_loss=0.05577, over 958183.27 frames. ], batch size: 38, lr: 3.30e-03, grad_scale: 32.0 2023-03-26 22:57:44,664 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.372e+01 1.430e+02 1.759e+02 2.099e+02 5.293e+02, threshold=3.517e+02, percent-clipped=3.0 2023-03-26 22:58:31,191 INFO [finetune.py:976] (2/7) Epoch 19, batch 2000, loss[loss=0.1567, simple_loss=0.2294, pruned_loss=0.04201, over 4767.00 frames. ], tot_loss[loss=0.1795, simple_loss=0.25, pruned_loss=0.05452, over 958234.64 frames. ], batch size: 28, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 22:58:54,315 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.58 vs. 
limit=5.0 2023-03-26 22:59:05,391 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6591, 1.6504, 2.2739, 1.9256, 1.9062, 4.2669, 1.6835, 1.8183], device='cuda:2'), covar=tensor([0.0962, 0.1822, 0.1106, 0.0994, 0.1506, 0.0185, 0.1416, 0.1747], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0077, 0.0091, 0.0080, 0.0084, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 22:59:17,045 INFO [finetune.py:976] (2/7) Epoch 19, batch 2050, loss[loss=0.1901, simple_loss=0.2554, pruned_loss=0.06234, over 4821.00 frames. ], tot_loss[loss=0.176, simple_loss=0.2458, pruned_loss=0.05309, over 958786.34 frames. ], batch size: 38, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 22:59:19,897 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.050e+02 1.557e+02 1.803e+02 2.317e+02 4.729e+02, threshold=3.605e+02, percent-clipped=3.0 2023-03-26 22:59:23,639 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8091, 2.8437, 2.5327, 2.0991, 2.7501, 2.9341, 3.0581, 2.3554], device='cuda:2'), covar=tensor([0.0601, 0.0602, 0.0773, 0.0898, 0.0602, 0.0679, 0.0552, 0.1051], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0133, 0.0138, 0.0118, 0.0123, 0.0136, 0.0137, 0.0159], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:59:40,150 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5880, 1.4819, 1.9262, 1.8302, 1.6862, 3.6270, 1.3839, 1.6249], device='cuda:2'), covar=tensor([0.1005, 0.1849, 0.1105, 0.0997, 0.1597, 0.0240, 0.1597, 0.1848], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0077, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 22:59:46,641 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1786, 2.1462, 1.6861, 2.0888, 2.1161, 1.8755, 2.4660, 2.2333], device='cuda:2'), covar=tensor([0.1339, 0.1959, 0.3117, 0.2506, 0.2530, 0.1636, 0.3094, 0.1667], device='cuda:2'), in_proj_covar=tensor([0.0186, 0.0189, 0.0237, 0.0254, 0.0248, 0.0204, 0.0215, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 22:59:50,002 INFO [finetune.py:976] (2/7) Epoch 19, batch 2100, loss[loss=0.1592, simple_loss=0.2262, pruned_loss=0.04611, over 4697.00 frames. ], tot_loss[loss=0.1775, simple_loss=0.2465, pruned_loss=0.05423, over 958605.53 frames. 
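
The learning rate ticks down from 3.30e-03 to 3.29e-03 around batch 2000 with no step schedule in sight, consistent with a smooth batch- and epoch-dependent decay such as icefall's Eden schedule. Assuming Eden's usual form (reconstructed from memory, so treat as an approximation), the config's base_lr=0.004, lr_batches=1e5 and lr_epochs=100 reproduce the logged values at this point in training:

    def eden_lr(base_lr: float, batch: float, epoch: float,
                lr_batches: float = 1.0e5, lr_epochs: float = 100.0) -> float:
        # lr = base_lr * ((batch^2+B^2)/B^2)^-0.25 * ((epoch^2+E^2)/E^2)^-0.25
        batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return base_lr * batch_factor * epoch_factor

    # batch_count is roughly 1.05e5 during epoch 19 here:
    print(f"{eden_lr(0.004, 105_000, 19):.2e}")  # -> ~3.29e-03, as logged
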
], batch size: 23, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 22:59:57,265 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5832, 3.6704, 3.4242, 1.7204, 3.8206, 2.8053, 0.8806, 2.5780], device='cuda:2'), covar=tensor([0.2528, 0.2279, 0.1641, 0.3473, 0.1042, 0.1049, 0.4523, 0.1621], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0177, 0.0160, 0.0129, 0.0160, 0.0123, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 23:00:01,469 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9391, 1.9052, 2.3172, 3.8610, 2.6952, 2.7140, 1.0984, 3.2514], device='cuda:2'), covar=tensor([0.1791, 0.1389, 0.1447, 0.0527, 0.0732, 0.1760, 0.1768, 0.0458], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0163, 0.0099, 0.0135, 0.0123, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 23:00:23,800 INFO [finetune.py:976] (2/7) Epoch 19, batch 2150, loss[loss=0.1968, simple_loss=0.2828, pruned_loss=0.0554, over 4812.00 frames. ], tot_loss[loss=0.1806, simple_loss=0.25, pruned_loss=0.05553, over 959767.90 frames. ], batch size: 45, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 23:00:26,648 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.009e+02 1.573e+02 1.937e+02 2.406e+02 5.182e+02, threshold=3.875e+02, percent-clipped=4.0 2023-03-26 23:00:57,385 INFO [finetune.py:976] (2/7) Epoch 19, batch 2200, loss[loss=0.1615, simple_loss=0.227, pruned_loss=0.04805, over 4188.00 frames. ], tot_loss[loss=0.1824, simple_loss=0.2522, pruned_loss=0.05628, over 959735.00 frames. ], batch size: 65, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 23:01:08,799 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0517, 1.9764, 1.9053, 2.1977, 2.2995, 2.1874, 1.8566, 1.8087], device='cuda:2'), covar=tensor([0.1622, 0.1538, 0.1362, 0.1096, 0.1452, 0.0808, 0.1842, 0.1436], device='cuda:2'), in_proj_covar=tensor([0.0241, 0.0208, 0.0212, 0.0191, 0.0241, 0.0186, 0.0213, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 23:01:30,610 INFO [finetune.py:976] (2/7) Epoch 19, batch 2250, loss[loss=0.1651, simple_loss=0.248, pruned_loss=0.04111, over 4724.00 frames. ], tot_loss[loss=0.1841, simple_loss=0.254, pruned_loss=0.05716, over 956801.67 frames. ], batch size: 54, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 23:01:33,464 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.177e+02 1.629e+02 1.918e+02 2.372e+02 6.301e+02, threshold=3.835e+02, percent-clipped=3.0 2023-03-26 23:01:56,204 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=105388.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:02:03,218 INFO [finetune.py:976] (2/7) Epoch 19, batch 2300, loss[loss=0.1733, simple_loss=0.2443, pruned_loss=0.05116, over 4764.00 frames. ], tot_loss[loss=0.1837, simple_loss=0.254, pruned_loss=0.05668, over 957656.56 frames. 
], batch size: 27, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 23:02:03,320 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4967, 1.0475, 0.8244, 1.3786, 1.9234, 0.7128, 1.2535, 1.3331], device='cuda:2'), covar=tensor([0.1546, 0.2198, 0.1739, 0.1158, 0.1967, 0.2102, 0.1545, 0.1998], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0109, 0.0090, 0.0118, 0.0092, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 23:02:21,615 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6215, 1.5305, 2.1765, 3.3567, 2.2303, 2.4647, 1.0533, 2.8046], device='cuda:2'), covar=tensor([0.1641, 0.1428, 0.1214, 0.0507, 0.0736, 0.1481, 0.1730, 0.0449], device='cuda:2'), in_proj_covar=tensor([0.0098, 0.0115, 0.0132, 0.0162, 0.0099, 0.0134, 0.0123, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 23:02:45,812 INFO [finetune.py:976] (2/7) Epoch 19, batch 2350, loss[loss=0.1403, simple_loss=0.2092, pruned_loss=0.03568, over 4414.00 frames. ], tot_loss[loss=0.1806, simple_loss=0.2508, pruned_loss=0.05522, over 956156.11 frames. ], batch size: 19, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 23:02:45,963 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=105449.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 23:02:48,225 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.033e+02 1.493e+02 1.723e+02 2.054e+02 4.367e+02, threshold=3.447e+02, percent-clipped=1.0 2023-03-26 23:03:10,226 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=105486.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:03:19,408 INFO [finetune.py:976] (2/7) Epoch 19, batch 2400, loss[loss=0.1675, simple_loss=0.2412, pruned_loss=0.04686, over 4907.00 frames. ], tot_loss[loss=0.178, simple_loss=0.2476, pruned_loss=0.05416, over 956350.18 frames. ], batch size: 35, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 23:03:23,349 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=105502.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:03:25,193 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.35 vs. limit=5.0 2023-03-26 23:04:14,434 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=105547.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:04:15,985 INFO [finetune.py:976] (2/7) Epoch 19, batch 2450, loss[loss=0.1423, simple_loss=0.2113, pruned_loss=0.03666, over 4789.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2442, pruned_loss=0.05316, over 952907.04 frames. 
], batch size: 26, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 23:04:18,403 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.415e+01 1.457e+02 1.742e+02 2.171e+02 6.143e+02, threshold=3.484e+02, percent-clipped=3.0 2023-03-26 23:04:25,637 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=105563.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:04:29,898 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4046, 2.2558, 1.9251, 2.2644, 2.2570, 2.0550, 2.5557, 2.3674], device='cuda:2'), covar=tensor([0.1220, 0.1953, 0.2755, 0.2415, 0.2401, 0.1570, 0.2903, 0.1736], device='cuda:2'), in_proj_covar=tensor([0.0185, 0.0188, 0.0236, 0.0253, 0.0247, 0.0203, 0.0215, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 23:04:49,911 INFO [finetune.py:976] (2/7) Epoch 19, batch 2500, loss[loss=0.1521, simple_loss=0.2376, pruned_loss=0.03333, over 4808.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.2458, pruned_loss=0.05372, over 952936.85 frames. ], batch size: 45, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 23:04:50,865 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.66 vs. limit=2.0 2023-03-26 23:05:06,243 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.0530, 0.9581, 1.0633, 0.3598, 0.9449, 1.1671, 1.2311, 1.0455], device='cuda:2'), covar=tensor([0.0908, 0.0613, 0.0562, 0.0626, 0.0538, 0.0744, 0.0428, 0.0689], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0152, 0.0126, 0.0127, 0.0133, 0.0130, 0.0143, 0.0149], device='cuda:2'), out_proj_covar=tensor([9.1987e-05, 1.1045e-04, 9.0062e-05, 9.0342e-05, 9.3511e-05, 9.3213e-05, 1.0254e-04, 1.0727e-04], device='cuda:2') 2023-03-26 23:05:23,461 INFO [finetune.py:976] (2/7) Epoch 19, batch 2550, loss[loss=0.1526, simple_loss=0.2383, pruned_loss=0.03341, over 4816.00 frames. ], tot_loss[loss=0.1794, simple_loss=0.25, pruned_loss=0.05446, over 953398.13 frames. ], batch size: 39, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 23:05:26,385 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.162e+02 1.575e+02 1.924e+02 2.412e+02 4.379e+02, threshold=3.848e+02, percent-clipped=4.0 2023-03-26 23:05:31,505 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-26 23:05:56,905 INFO [finetune.py:976] (2/7) Epoch 19, batch 2600, loss[loss=0.171, simple_loss=0.2468, pruned_loss=0.04758, over 4794.00 frames. ], tot_loss[loss=0.1804, simple_loss=0.2517, pruned_loss=0.05456, over 955464.83 frames. ], batch size: 29, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 23:06:17,244 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.59 vs. 
limit=2.0 2023-03-26 23:06:26,488 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2993, 2.8700, 2.7291, 1.1789, 3.0220, 2.1547, 0.6820, 1.8580], device='cuda:2'), covar=tensor([0.2386, 0.2265, 0.1749, 0.3508, 0.1422, 0.1145, 0.4097, 0.1569], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0178, 0.0161, 0.0130, 0.0161, 0.0123, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 23:06:27,123 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=105744.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 23:06:30,086 INFO [finetune.py:976] (2/7) Epoch 19, batch 2650, loss[loss=0.2079, simple_loss=0.2848, pruned_loss=0.06545, over 4098.00 frames. ], tot_loss[loss=0.1812, simple_loss=0.2526, pruned_loss=0.05486, over 954036.55 frames. ], batch size: 65, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 23:06:32,908 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.372e+01 1.545e+02 1.903e+02 2.181e+02 3.189e+02, threshold=3.806e+02, percent-clipped=0.0 2023-03-26 23:06:59,546 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=105792.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:07:03,678 INFO [finetune.py:976] (2/7) Epoch 19, batch 2700, loss[loss=0.1398, simple_loss=0.2219, pruned_loss=0.02885, over 4795.00 frames. ], tot_loss[loss=0.18, simple_loss=0.2514, pruned_loss=0.05426, over 955176.13 frames. ], batch size: 25, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 23:07:03,792 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=105799.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:07:05,131 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4235, 1.3302, 1.2589, 1.4397, 1.6321, 1.6010, 1.3117, 1.2240], device='cuda:2'), covar=tensor([0.0391, 0.0380, 0.0681, 0.0308, 0.0253, 0.0465, 0.0419, 0.0478], device='cuda:2'), in_proj_covar=tensor([0.0096, 0.0107, 0.0143, 0.0111, 0.0100, 0.0110, 0.0100, 0.0111], device='cuda:2'), out_proj_covar=tensor([7.4556e-05, 8.2727e-05, 1.1270e-04, 8.4859e-05, 7.7622e-05, 8.1366e-05, 7.4922e-05, 8.4701e-05], device='cuda:2') 2023-03-26 23:07:32,866 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8141, 2.6751, 3.1572, 2.0764, 2.9473, 3.2622, 2.3447, 3.2713], device='cuda:2'), covar=tensor([0.1243, 0.1709, 0.1226, 0.1975, 0.0826, 0.1065, 0.2401, 0.0682], device='cuda:2'), in_proj_covar=tensor([0.0193, 0.0205, 0.0191, 0.0188, 0.0174, 0.0214, 0.0217, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 23:07:33,448 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=105842.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:07:37,643 INFO [finetune.py:976] (2/7) Epoch 19, batch 2750, loss[loss=0.2159, simple_loss=0.2758, pruned_loss=0.07802, over 4758.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.2485, pruned_loss=0.05384, over 956091.15 frames. 
], batch size: 59, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 23:07:40,095 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.803e+01 1.395e+02 1.671e+02 1.966e+02 3.086e+02, threshold=3.343e+02, percent-clipped=0.0 2023-03-26 23:07:40,224 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=105853.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:07:43,749 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=105858.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:07:45,040 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=105860.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:07:57,398 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1904, 1.3110, 1.3346, 0.7172, 1.2703, 1.4907, 1.5599, 1.2838], device='cuda:2'), covar=tensor([0.0795, 0.0550, 0.0441, 0.0428, 0.0392, 0.0553, 0.0266, 0.0553], device='cuda:2'), in_proj_covar=tensor([0.0126, 0.0153, 0.0126, 0.0127, 0.0133, 0.0131, 0.0143, 0.0149], device='cuda:2'), out_proj_covar=tensor([9.2089e-05, 1.1099e-04, 9.0297e-05, 9.0301e-05, 9.4075e-05, 9.3566e-05, 1.0298e-04, 1.0719e-04], device='cuda:2') 2023-03-26 23:08:14,879 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.13 vs. limit=2.0 2023-03-26 23:08:22,328 INFO [finetune.py:976] (2/7) Epoch 19, batch 2800, loss[loss=0.1798, simple_loss=0.2518, pruned_loss=0.05391, over 4869.00 frames. ], tot_loss[loss=0.1757, simple_loss=0.2454, pruned_loss=0.05298, over 957796.81 frames. ], batch size: 31, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 23:08:24,290 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3000, 2.1891, 1.9008, 2.3179, 2.0354, 2.1485, 2.0658, 3.0371], device='cuda:2'), covar=tensor([0.3864, 0.4639, 0.3324, 0.4291, 0.4634, 0.2477, 0.4316, 0.1646], device='cuda:2'), in_proj_covar=tensor([0.0285, 0.0260, 0.0229, 0.0275, 0.0251, 0.0220, 0.0251, 0.0231], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 23:09:03,613 INFO [finetune.py:976] (2/7) Epoch 19, batch 2850, loss[loss=0.2067, simple_loss=0.2676, pruned_loss=0.0729, over 4761.00 frames. ], tot_loss[loss=0.1761, simple_loss=0.2453, pruned_loss=0.05338, over 957928.40 frames. ], batch size: 54, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 23:09:10,697 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.025e+01 1.444e+02 1.775e+02 2.176e+02 4.047e+02, threshold=3.549e+02, percent-clipped=5.0 2023-03-26 23:09:13,910 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1256, 2.0684, 1.7912, 2.1209, 1.9825, 2.0061, 1.9837, 2.8461], device='cuda:2'), covar=tensor([0.3574, 0.4421, 0.3232, 0.4459, 0.4490, 0.2471, 0.4454, 0.1532], device='cuda:2'), in_proj_covar=tensor([0.0285, 0.0259, 0.0229, 0.0274, 0.0251, 0.0220, 0.0251, 0.0231], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 23:09:36,422 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.87 vs. limit=5.0 2023-03-26 23:09:49,341 INFO [finetune.py:976] (2/7) Epoch 19, batch 2900, loss[loss=0.2158, simple_loss=0.2842, pruned_loss=0.07372, over 4802.00 frames. ], tot_loss[loss=0.1783, simple_loss=0.2481, pruned_loss=0.05431, over 958119.42 frames. 
], batch size: 45, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 23:10:21,005 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106044.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:10:24,380 INFO [finetune.py:976] (2/7) Epoch 19, batch 2950, loss[loss=0.199, simple_loss=0.2633, pruned_loss=0.06735, over 4821.00 frames. ], tot_loss[loss=0.1804, simple_loss=0.2508, pruned_loss=0.05504, over 955937.75 frames. ], batch size: 45, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 23:10:27,319 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.008e+02 1.599e+02 1.865e+02 2.251e+02 4.962e+02, threshold=3.729e+02, percent-clipped=1.0 2023-03-26 23:10:43,290 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=106077.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:10:53,299 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=106092.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:10:57,525 INFO [finetune.py:976] (2/7) Epoch 19, batch 3000, loss[loss=0.1651, simple_loss=0.2503, pruned_loss=0.03996, over 4894.00 frames. ], tot_loss[loss=0.1792, simple_loss=0.25, pruned_loss=0.05423, over 954681.11 frames. ], batch size: 36, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 23:10:57,525 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 23:11:02,130 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1326, 2.0455, 1.6173, 0.7962, 1.7397, 1.7961, 1.7277, 1.9373], device='cuda:2'), covar=tensor([0.0825, 0.0628, 0.1354, 0.1617, 0.1167, 0.2004, 0.1880, 0.0613], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0192, 0.0199, 0.0182, 0.0210, 0.0207, 0.0222, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 23:11:08,359 INFO [finetune.py:1010] (2/7) Epoch 19, validation: loss=0.1576, simple_loss=0.2259, pruned_loss=0.04462, over 2265189.00 frames. 2023-03-26 23:11:08,360 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB 2023-03-26 23:11:43,378 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=106138.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:11:46,252 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106142.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:11:50,372 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=106148.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:11:50,871 INFO [finetune.py:976] (2/7) Epoch 19, batch 3050, loss[loss=0.1901, simple_loss=0.2577, pruned_loss=0.06124, over 4780.00 frames. ], tot_loss[loss=0.1805, simple_loss=0.2516, pruned_loss=0.05474, over 954482.34 frames. 
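
Every valid_interval batches the trainer pauses to score the whole dev set: the finetune.py:1001/1010 entries above show the switch ("Computing validation loss"), the dev-set result (loss=0.1576 over 2265189 frames, comfortably below the ~0.18 training average), and the peak GPU memory (6366MB, up from the 5345MB seen during the epoch-1 sanity check). A minimal sketch of such a validation pass, with placeholder names and a placeholder model signature (real finetune.py batches are lhotse CutSet batches, and the model call differs):

    import torch

    def validate(model, valid_dl, device) -> float:
        model.eval()
        tot_loss, tot_frames = 0.0, 0.0
        with torch.no_grad():
            for batch in valid_dl:
                loss, num_frames = model(batch)        # assumed signature
                tot_loss += loss.item() * num_frames
                tot_frames += num_frames
        model.train()
        mem_mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
        print(f"validation loss {tot_loss / tot_frames:.4f}; "
              f"max memory {mem_mb}MB")
        return tot_loss / tot_frames
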
], batch size: 25, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 23:11:53,804 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.036e+02 1.577e+02 1.927e+02 2.196e+02 3.458e+02, threshold=3.854e+02, percent-clipped=0.0 2023-03-26 23:11:55,078 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=106155.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:11:56,159 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4010, 1.3665, 1.6180, 2.4965, 1.6628, 2.1466, 0.8876, 2.1061], device='cuda:2'), covar=tensor([0.1659, 0.1338, 0.1044, 0.0709, 0.0869, 0.1278, 0.1507, 0.0586], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0116, 0.0133, 0.0163, 0.0099, 0.0135, 0.0123, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 23:11:57,422 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106158.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:12:18,599 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=106190.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:12:24,007 INFO [finetune.py:976] (2/7) Epoch 19, batch 3100, loss[loss=0.2029, simple_loss=0.2731, pruned_loss=0.06638, over 4901.00 frames. ], tot_loss[loss=0.1789, simple_loss=0.2499, pruned_loss=0.05397, over 955866.77 frames. ], batch size: 43, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 23:12:29,243 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=106206.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:12:50,772 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6475, 0.7163, 1.7425, 1.6143, 1.5138, 1.4511, 1.5299, 1.6619], device='cuda:2'), covar=tensor([0.3807, 0.3667, 0.3031, 0.3381, 0.4232, 0.3322, 0.3934, 0.2942], device='cuda:2'), in_proj_covar=tensor([0.0251, 0.0240, 0.0260, 0.0278, 0.0277, 0.0251, 0.0285, 0.0242], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 23:12:57,681 INFO [finetune.py:976] (2/7) Epoch 19, batch 3150, loss[loss=0.1785, simple_loss=0.2472, pruned_loss=0.05489, over 4895.00 frames. ], tot_loss[loss=0.1778, simple_loss=0.2478, pruned_loss=0.05386, over 957048.87 frames. ], batch size: 43, lr: 3.29e-03, grad_scale: 32.0 2023-03-26 23:13:00,115 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.828e+01 1.675e+02 1.879e+02 2.192e+02 3.916e+02, threshold=3.758e+02, percent-clipped=1.0 2023-03-26 23:13:03,208 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6288, 1.8087, 1.5134, 1.5934, 2.2248, 2.0253, 1.8072, 1.7486], device='cuda:2'), covar=tensor([0.0443, 0.0445, 0.0609, 0.0381, 0.0260, 0.0755, 0.0466, 0.0496], device='cuda:2'), in_proj_covar=tensor([0.0096, 0.0107, 0.0143, 0.0110, 0.0099, 0.0110, 0.0100, 0.0111], device='cuda:2'), out_proj_covar=tensor([7.4679e-05, 8.2718e-05, 1.1238e-04, 8.4796e-05, 7.7157e-05, 8.1132e-05, 7.4504e-05, 8.4557e-05], device='cuda:2') 2023-03-26 23:13:17,036 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=106268.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:13:41,053 INFO [finetune.py:976] (2/7) Epoch 19, batch 3200, loss[loss=0.1607, simple_loss=0.2352, pruned_loss=0.04305, over 4819.00 frames. ], tot_loss[loss=0.1751, simple_loss=0.2446, pruned_loss=0.05282, over 957931.02 frames. 
], batch size: 30, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:13:48,377 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0085, 1.9393, 1.6286, 1.8034, 1.9927, 1.7531, 2.2041, 2.0237], device='cuda:2'), covar=tensor([0.1492, 0.1984, 0.3105, 0.2732, 0.2681, 0.1678, 0.3420, 0.1750], device='cuda:2'), in_proj_covar=tensor([0.0185, 0.0188, 0.0235, 0.0254, 0.0247, 0.0203, 0.0216, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 23:13:52,456 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4374, 3.2698, 3.1189, 1.3710, 3.4133, 2.5990, 0.6916, 2.2946], device='cuda:2'), covar=tensor([0.2777, 0.2360, 0.1721, 0.3454, 0.1224, 0.1053, 0.4423, 0.1553], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0176, 0.0160, 0.0129, 0.0160, 0.0123, 0.0147, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 23:14:01,422 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=106329.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:14:04,029 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.26 vs. limit=5.0 2023-03-26 23:14:05,718 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=106336.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:14:06,929 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3873, 1.6768, 0.7808, 2.0545, 2.4784, 1.7406, 1.7453, 1.8784], device='cuda:2'), covar=tensor([0.1162, 0.1885, 0.2124, 0.1062, 0.1678, 0.1909, 0.1365, 0.1909], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0090, 0.0119, 0.0092, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 23:14:06,939 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7119, 1.5483, 2.2089, 1.9130, 1.8696, 4.1208, 1.5997, 1.7561], device='cuda:2'), covar=tensor([0.0914, 0.1818, 0.1155, 0.0906, 0.1498, 0.0222, 0.1438, 0.1654], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0080, 0.0084, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 23:14:16,902 INFO [finetune.py:976] (2/7) Epoch 19, batch 3250, loss[loss=0.1909, simple_loss=0.26, pruned_loss=0.06095, over 4723.00 frames. ], tot_loss[loss=0.1774, simple_loss=0.2465, pruned_loss=0.05412, over 957782.52 frames. ], batch size: 59, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:14:24,769 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.028e+02 1.520e+02 1.839e+02 2.222e+02 4.428e+02, threshold=3.677e+02, percent-clipped=2.0 2023-03-26 23:15:08,393 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=106397.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:15:10,069 INFO [finetune.py:976] (2/7) Epoch 19, batch 3300, loss[loss=0.1903, simple_loss=0.2782, pruned_loss=0.05121, over 4894.00 frames. ], tot_loss[loss=0.1821, simple_loss=0.252, pruned_loss=0.05609, over 957481.36 frames. 
], batch size: 43, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:15:25,637 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7315, 4.3119, 4.1252, 2.1923, 4.4426, 3.5274, 0.7393, 3.0395], device='cuda:2'), covar=tensor([0.2656, 0.1434, 0.1395, 0.3164, 0.0854, 0.0805, 0.4700, 0.1363], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0177, 0.0160, 0.0129, 0.0161, 0.0123, 0.0147, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 23:15:32,155 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1216, 1.2358, 1.3634, 0.6937, 1.2712, 1.5354, 1.6061, 1.1813], device='cuda:2'), covar=tensor([0.0853, 0.0571, 0.0472, 0.0482, 0.0450, 0.0626, 0.0282, 0.0704], device='cuda:2'), in_proj_covar=tensor([0.0124, 0.0151, 0.0125, 0.0125, 0.0131, 0.0129, 0.0141, 0.0148], device='cuda:2'), out_proj_covar=tensor([9.1067e-05, 1.0947e-04, 8.9160e-05, 8.8760e-05, 9.2388e-05, 9.2634e-05, 1.0089e-04, 1.0602e-04], device='cuda:2') 2023-03-26 23:15:36,419 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-26 23:15:41,468 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=106433.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:15:51,017 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106448.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:15:51,546 INFO [finetune.py:976] (2/7) Epoch 19, batch 3350, loss[loss=0.1984, simple_loss=0.2547, pruned_loss=0.07104, over 4730.00 frames. ], tot_loss[loss=0.1837, simple_loss=0.2543, pruned_loss=0.05652, over 958768.08 frames. ], batch size: 54, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:15:54,464 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.049e+02 1.616e+02 1.883e+02 2.222e+02 4.657e+02, threshold=3.766e+02, percent-clipped=2.0 2023-03-26 23:15:55,201 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=106454.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:15:55,782 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106455.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:16:31,243 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=106496.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:16:31,390 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.64 vs. limit=2.0 2023-03-26 23:16:33,552 INFO [finetune.py:976] (2/7) Epoch 19, batch 3400, loss[loss=0.174, simple_loss=0.2291, pruned_loss=0.05943, over 4780.00 frames. ], tot_loss[loss=0.1853, simple_loss=0.2558, pruned_loss=0.05743, over 958148.04 frames. 
], batch size: 27, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:16:35,508 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5643, 1.5490, 1.8312, 3.0356, 2.0448, 2.2257, 1.0710, 2.5536], device='cuda:2'), covar=tensor([0.1732, 0.1435, 0.1384, 0.0619, 0.0834, 0.1273, 0.1725, 0.0500], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0116, 0.0133, 0.0164, 0.0100, 0.0136, 0.0124, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 23:16:36,075 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=106503.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:16:39,080 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6827, 1.6470, 1.4330, 1.6379, 2.0062, 1.9848, 1.6642, 1.5056], device='cuda:2'), covar=tensor([0.0312, 0.0310, 0.0599, 0.0292, 0.0208, 0.0445, 0.0317, 0.0359], device='cuda:2'), in_proj_covar=tensor([0.0097, 0.0108, 0.0144, 0.0111, 0.0100, 0.0110, 0.0100, 0.0112], device='cuda:2'), out_proj_covar=tensor([7.5370e-05, 8.3111e-05, 1.1365e-04, 8.5406e-05, 7.7860e-05, 8.1629e-05, 7.5001e-05, 8.5210e-05], device='cuda:2') 2023-03-26 23:16:44,439 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=106515.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:16:46,882 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9028, 1.9133, 1.7228, 2.1527, 2.6060, 2.0976, 1.7328, 1.5381], device='cuda:2'), covar=tensor([0.2339, 0.2000, 0.1909, 0.1511, 0.1580, 0.1144, 0.2365, 0.2079], device='cuda:2'), in_proj_covar=tensor([0.0241, 0.0209, 0.0211, 0.0192, 0.0241, 0.0187, 0.0214, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 23:16:47,440 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4707, 3.3955, 3.2040, 1.3376, 3.5501, 2.6431, 0.7111, 2.2748], device='cuda:2'), covar=tensor([0.2372, 0.1760, 0.1651, 0.3408, 0.1116, 0.0980, 0.4242, 0.1422], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0177, 0.0160, 0.0129, 0.0160, 0.0122, 0.0146, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 23:17:06,743 INFO [finetune.py:976] (2/7) Epoch 19, batch 3450, loss[loss=0.2004, simple_loss=0.2717, pruned_loss=0.06455, over 4812.00 frames. ], tot_loss[loss=0.1845, simple_loss=0.2553, pruned_loss=0.05681, over 958609.47 frames. ], batch size: 39, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:17:09,626 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.792e+01 1.525e+02 1.781e+02 2.060e+02 3.433e+02, threshold=3.562e+02, percent-clipped=0.0 2023-03-26 23:17:09,779 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5060, 1.2960, 1.2405, 1.4870, 1.6343, 1.5016, 0.9552, 1.2137], device='cuda:2'), covar=tensor([0.2412, 0.2174, 0.2100, 0.1720, 0.1598, 0.1419, 0.2732, 0.2089], device='cuda:2'), in_proj_covar=tensor([0.0240, 0.0208, 0.0210, 0.0191, 0.0240, 0.0186, 0.0213, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 23:17:40,385 INFO [finetune.py:976] (2/7) Epoch 19, batch 3500, loss[loss=0.1668, simple_loss=0.2388, pruned_loss=0.04737, over 4790.00 frames. ], tot_loss[loss=0.1808, simple_loss=0.251, pruned_loss=0.05532, over 957880.91 frames. 
], batch size: 29, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:17:42,350 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1380, 1.7313, 2.0847, 1.4813, 2.1643, 2.1543, 2.1730, 1.3858], device='cuda:2'), covar=tensor([0.0748, 0.1041, 0.0758, 0.1034, 0.0679, 0.0772, 0.0755, 0.1995], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0133, 0.0138, 0.0119, 0.0122, 0.0137, 0.0138, 0.0160], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 23:17:57,677 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=106624.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:18:14,144 INFO [finetune.py:976] (2/7) Epoch 19, batch 3550, loss[loss=0.1836, simple_loss=0.2484, pruned_loss=0.05938, over 4823.00 frames. ], tot_loss[loss=0.1796, simple_loss=0.249, pruned_loss=0.05511, over 958487.88 frames. ], batch size: 38, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:18:16,539 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.026e+02 1.567e+02 1.861e+02 2.307e+02 3.604e+02, threshold=3.722e+02, percent-clipped=2.0 2023-03-26 23:18:43,934 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.79 vs. limit=2.0 2023-03-26 23:18:51,973 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=106692.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:18:56,158 INFO [finetune.py:976] (2/7) Epoch 19, batch 3600, loss[loss=0.1603, simple_loss=0.2209, pruned_loss=0.04987, over 4790.00 frames. ], tot_loss[loss=0.1768, simple_loss=0.246, pruned_loss=0.05385, over 957633.86 frames. ], batch size: 25, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:19:19,210 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106733.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:19:29,858 INFO [finetune.py:976] (2/7) Epoch 19, batch 3650, loss[loss=0.2071, simple_loss=0.2864, pruned_loss=0.06393, over 4807.00 frames. ], tot_loss[loss=0.1801, simple_loss=0.2495, pruned_loss=0.05538, over 956714.99 frames. ], batch size: 41, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:19:34,982 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.178e+02 1.609e+02 2.013e+02 2.438e+02 4.457e+02, threshold=4.025e+02, percent-clipped=1.0 2023-03-26 23:19:46,536 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4994, 1.6325, 2.1641, 1.9647, 1.8261, 3.1684, 1.4886, 1.6823], device='cuda:2'), covar=tensor([0.0904, 0.1550, 0.1292, 0.0817, 0.1298, 0.0256, 0.1340, 0.1447], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0080, 0.0084, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 23:20:03,523 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=106781.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:20:23,369 INFO [finetune.py:976] (2/7) Epoch 19, batch 3700, loss[loss=0.1504, simple_loss=0.2277, pruned_loss=0.0365, over 4798.00 frames. ], tot_loss[loss=0.1829, simple_loss=0.2533, pruned_loss=0.05625, over 956486.56 frames. 
], batch size: 29, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:20:32,839 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=106810.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:20:59,510 INFO [finetune.py:976] (2/7) Epoch 19, batch 3750, loss[loss=0.137, simple_loss=0.199, pruned_loss=0.03747, over 4030.00 frames. ], tot_loss[loss=0.1823, simple_loss=0.2529, pruned_loss=0.05587, over 954882.37 frames. ], batch size: 17, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:21:06,524 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.083e+02 1.604e+02 1.833e+02 2.350e+02 4.465e+02, threshold=3.666e+02, percent-clipped=2.0 2023-03-26 23:21:48,084 INFO [finetune.py:976] (2/7) Epoch 19, batch 3800, loss[loss=0.17, simple_loss=0.2186, pruned_loss=0.06073, over 4468.00 frames. ], tot_loss[loss=0.1827, simple_loss=0.2534, pruned_loss=0.05595, over 954258.36 frames. ], batch size: 19, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:21:58,865 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1173, 1.9514, 2.1774, 2.0971, 1.8656, 1.9065, 2.0713, 2.1367], device='cuda:2'), covar=tensor([0.4182, 0.3666, 0.2981, 0.3888, 0.4621, 0.3925, 0.4747, 0.2811], device='cuda:2'), in_proj_covar=tensor([0.0249, 0.0237, 0.0258, 0.0276, 0.0274, 0.0248, 0.0282, 0.0240], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 23:22:07,682 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106924.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:22:16,662 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-26 23:22:23,052 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5161, 3.4016, 3.2458, 1.6237, 3.5274, 2.5808, 0.9721, 2.4045], device='cuda:2'), covar=tensor([0.2316, 0.2056, 0.1521, 0.3347, 0.1099, 0.1127, 0.4188, 0.1526], device='cuda:2'), in_proj_covar=tensor([0.0154, 0.0179, 0.0162, 0.0131, 0.0162, 0.0124, 0.0149, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 23:22:24,702 INFO [finetune.py:976] (2/7) Epoch 19, batch 3850, loss[loss=0.1414, simple_loss=0.2108, pruned_loss=0.03599, over 4904.00 frames. ], tot_loss[loss=0.1811, simple_loss=0.2514, pruned_loss=0.05533, over 954469.31 frames. ], batch size: 43, lr: 3.28e-03, grad_scale: 64.0 2023-03-26 23:22:27,157 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.119e+02 1.623e+02 1.818e+02 2.255e+02 6.115e+02, threshold=3.637e+02, percent-clipped=1.0 2023-03-26 23:22:39,182 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=106972.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:22:52,750 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106992.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:22:57,317 INFO [finetune.py:976] (2/7) Epoch 19, batch 3900, loss[loss=0.1491, simple_loss=0.213, pruned_loss=0.04255, over 4822.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2484, pruned_loss=0.05421, over 953511.26 frames. 
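
The grad_scale field is the dynamic loss scale of the fp16 training loop: it doubles after a long run of overflow-free optimizer steps and halves as soon as an inf/nan gradient shows up, which is why it steps from 32.0 to 64.0 at batch 3850 above and is back at 32.0 from batch 4050 onward. PyTorch's stock GradScaler reproduces these dynamics; the constructor values below are illustrative, not the run's verified settings:

    import torch

    scaler = torch.cuda.amp.GradScaler(
        init_scale=32.0,      # matches the grad_scale logged here
        growth_factor=2.0,    # 32 -> 64 after one clean growth_interval
        backoff_factor=0.5,   # 64 -> 32 on the first inf/nan gradient
        growth_interval=2000,
    )
    # Typical step:
    #   with torch.cuda.amp.autocast():
    #       loss = compute_loss(batch)
    #   scaler.scale(loss).backward()
    #   scaler.step(optimizer)
    #   scaler.update()   # grows or backs off the scale
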
], batch size: 30, lr: 3.28e-03, grad_scale: 64.0 2023-03-26 23:23:24,074 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=107040.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:23:24,090 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8094, 3.7354, 3.5261, 2.0155, 3.7953, 2.8590, 1.0466, 2.7143], device='cuda:2'), covar=tensor([0.2006, 0.1816, 0.1451, 0.3194, 0.1152, 0.1070, 0.4368, 0.1444], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0178, 0.0161, 0.0130, 0.0162, 0.0124, 0.0148, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 23:23:29,940 INFO [finetune.py:976] (2/7) Epoch 19, batch 3950, loss[loss=0.1648, simple_loss=0.2336, pruned_loss=0.04801, over 4894.00 frames. ], tot_loss[loss=0.175, simple_loss=0.2445, pruned_loss=0.05274, over 954289.36 frames. ], batch size: 32, lr: 3.28e-03, grad_scale: 64.0 2023-03-26 23:23:35,024 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.301e+01 1.519e+02 1.802e+02 2.250e+02 5.271e+02, threshold=3.605e+02, percent-clipped=1.0 2023-03-26 23:24:12,974 INFO [finetune.py:976] (2/7) Epoch 19, batch 4000, loss[loss=0.2225, simple_loss=0.2873, pruned_loss=0.07881, over 4814.00 frames. ], tot_loss[loss=0.175, simple_loss=0.2443, pruned_loss=0.05285, over 954053.75 frames. ], batch size: 51, lr: 3.28e-03, grad_scale: 64.0 2023-03-26 23:24:21,396 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=107110.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:24:41,074 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=107140.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 23:24:46,974 INFO [finetune.py:976] (2/7) Epoch 19, batch 4050, loss[loss=0.1845, simple_loss=0.2514, pruned_loss=0.05877, over 4775.00 frames. ], tot_loss[loss=0.1775, simple_loss=0.2474, pruned_loss=0.05378, over 951325.54 frames. ], batch size: 59, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:24:48,836 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=107152.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:24:49,865 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.013e+02 1.579e+02 1.895e+02 2.231e+02 3.900e+02, threshold=3.790e+02, percent-clipped=1.0 2023-03-26 23:24:52,894 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=107158.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:25:40,274 INFO [finetune.py:976] (2/7) Epoch 19, batch 4100, loss[loss=0.1814, simple_loss=0.2546, pruned_loss=0.05408, over 4705.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.2492, pruned_loss=0.05416, over 950322.87 frames. ], batch size: 59, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:25:41,801 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=107201.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 23:25:50,508 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=107213.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 23:26:01,337 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0 2023-03-26 23:26:13,434 INFO [finetune.py:976] (2/7) Epoch 19, batch 4150, loss[loss=0.2114, simple_loss=0.2818, pruned_loss=0.07046, over 4734.00 frames. ], tot_loss[loss=0.1806, simple_loss=0.2511, pruned_loss=0.05505, over 950372.67 frames. 
], batch size: 59, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:26:21,810 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.077e+02 1.570e+02 1.970e+02 2.461e+02 5.293e+02, threshold=3.939e+02, percent-clipped=1.0 2023-03-26 23:26:35,973 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.0519, 0.9507, 0.9458, 0.3820, 1.0002, 1.1981, 1.2029, 0.9956], device='cuda:2'), covar=tensor([0.0929, 0.0740, 0.0636, 0.0614, 0.0604, 0.0752, 0.0447, 0.0711], device='cuda:2'), in_proj_covar=tensor([0.0124, 0.0151, 0.0125, 0.0125, 0.0131, 0.0129, 0.0142, 0.0148], device='cuda:2'), out_proj_covar=tensor([9.1139e-05, 1.0925e-04, 8.9666e-05, 8.8632e-05, 9.2561e-05, 9.2379e-05, 1.0193e-04, 1.0631e-04], device='cuda:2') 2023-03-26 23:26:56,744 INFO [finetune.py:976] (2/7) Epoch 19, batch 4200, loss[loss=0.1558, simple_loss=0.2367, pruned_loss=0.03746, over 4812.00 frames. ], tot_loss[loss=0.1809, simple_loss=0.2521, pruned_loss=0.05483, over 949227.50 frames. ], batch size: 45, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:27:05,804 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8627, 4.8691, 4.4654, 2.7606, 5.0133, 3.7824, 1.3631, 3.4499], device='cuda:2'), covar=tensor([0.2278, 0.1497, 0.1402, 0.2941, 0.0689, 0.0817, 0.3993, 0.1243], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0177, 0.0160, 0.0130, 0.0161, 0.0123, 0.0148, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 23:27:29,526 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.97 vs. limit=2.0 2023-03-26 23:27:29,943 INFO [finetune.py:976] (2/7) Epoch 19, batch 4250, loss[loss=0.1566, simple_loss=0.2298, pruned_loss=0.04174, over 4809.00 frames. ], tot_loss[loss=0.1786, simple_loss=0.2492, pruned_loss=0.05398, over 950498.90 frames. ], batch size: 41, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:27:33,463 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.100e+02 1.503e+02 1.795e+02 2.146e+02 3.676e+02, threshold=3.590e+02, percent-clipped=0.0 2023-03-26 23:27:45,314 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0885, 1.9750, 1.6530, 1.8482, 1.8369, 1.8413, 1.8946, 2.5812], device='cuda:2'), covar=tensor([0.3547, 0.3629, 0.3166, 0.3535, 0.3521, 0.2391, 0.3527, 0.1578], device='cuda:2'), in_proj_covar=tensor([0.0288, 0.0262, 0.0230, 0.0276, 0.0252, 0.0222, 0.0252, 0.0233], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 23:28:03,397 INFO [finetune.py:976] (2/7) Epoch 19, batch 4300, loss[loss=0.1697, simple_loss=0.2364, pruned_loss=0.05151, over 4826.00 frames. ], tot_loss[loss=0.1792, simple_loss=0.2488, pruned_loss=0.05478, over 953267.83 frames. ], batch size: 30, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:28:13,060 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2142, 1.9942, 1.8491, 2.1802, 2.7834, 2.1716, 2.1166, 1.6798], device='cuda:2'), covar=tensor([0.1989, 0.1905, 0.1742, 0.1538, 0.1546, 0.1097, 0.1919, 0.1764], device='cuda:2'), in_proj_covar=tensor([0.0243, 0.0210, 0.0212, 0.0193, 0.0242, 0.0188, 0.0215, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 23:28:36,190 INFO [finetune.py:976] (2/7) Epoch 19, batch 4350, loss[loss=0.1714, simple_loss=0.2385, pruned_loss=0.05217, over 4835.00 frames. 
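
The optim.py "Clipping_scale" lines print five order statistics (min, 25%, median, 75%, max) of recently observed gradient norms; throughout this log the threshold equals clipping_scale times the median, e.g. threshold=3.939e+02 ~= 2.0 * 1.970e+02 in the record above, and percent-clipped is the share of recent batches whose norm exceeded it. A minimal sketch of that bookkeeping (an assumed reconstruction, not icefall's exact optim.py code):

    import torch

    def clipping_stats(recent_grad_norms: torch.Tensor,
                       clipping_scale: float = 2.0):
        # Five order statistics of the recent gradient norms.
        q = torch.quantile(recent_grad_norms,
                           torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
        threshold = clipping_scale * q[2]  # clipping_scale * median
        percent_clipped = 100.0 * (recent_grad_norms > threshold).float().mean()
        return q, threshold, percent_clipped
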
], tot_loss[loss=0.1758, simple_loss=0.2446, pruned_loss=0.05356, over 955308.98 frames. ], batch size: 30, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:28:40,180 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.110e+02 1.532e+02 1.813e+02 2.231e+02 3.395e+02, threshold=3.625e+02, percent-clipped=1.0 2023-03-26 23:29:17,143 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2619, 2.1317, 1.6361, 2.0937, 2.0192, 1.8247, 2.4222, 2.2254], device='cuda:2'), covar=tensor([0.1297, 0.2185, 0.3187, 0.3025, 0.2937, 0.1791, 0.4155, 0.1805], device='cuda:2'), in_proj_covar=tensor([0.0185, 0.0188, 0.0236, 0.0253, 0.0247, 0.0203, 0.0215, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 23:29:21,289 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=107496.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 23:29:23,018 INFO [finetune.py:976] (2/7) Epoch 19, batch 4400, loss[loss=0.1947, simple_loss=0.2728, pruned_loss=0.0583, over 4827.00 frames. ], tot_loss[loss=0.1776, simple_loss=0.2463, pruned_loss=0.05445, over 954030.65 frames. ], batch size: 51, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:29:29,602 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=107508.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 23:29:31,353 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=107510.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:29:56,825 INFO [finetune.py:976] (2/7) Epoch 19, batch 4450, loss[loss=0.2036, simple_loss=0.2795, pruned_loss=0.06388, over 4907.00 frames. ], tot_loss[loss=0.1807, simple_loss=0.2504, pruned_loss=0.05548, over 954531.63 frames. ], batch size: 35, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:29:59,905 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.983e+01 1.601e+02 1.972e+02 2.467e+02 3.942e+02, threshold=3.944e+02, percent-clipped=4.0 2023-03-26 23:30:12,259 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=107571.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:30:42,901 INFO [finetune.py:976] (2/7) Epoch 19, batch 4500, loss[loss=0.1515, simple_loss=0.2249, pruned_loss=0.03909, over 4772.00 frames. ], tot_loss[loss=0.1821, simple_loss=0.2522, pruned_loss=0.05599, over 953759.50 frames. ], batch size: 54, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:30:46,266 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. limit=2.0 2023-03-26 23:31:25,199 INFO [finetune.py:976] (2/7) Epoch 19, batch 4550, loss[loss=0.2309, simple_loss=0.299, pruned_loss=0.08145, over 4807.00 frames. ], tot_loss[loss=0.1828, simple_loss=0.2536, pruned_loss=0.05603, over 954573.83 frames. 
], batch size: 45, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:31:28,200 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.023e+02 1.552e+02 1.832e+02 2.186e+02 5.352e+02, threshold=3.664e+02, percent-clipped=1.0 2023-03-26 23:31:31,369 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=107659.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:32:09,819 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.5616, 1.2759, 1.3227, 0.7480, 1.5023, 1.5826, 1.6481, 1.2861], device='cuda:2'), covar=tensor([0.0875, 0.0728, 0.0606, 0.0525, 0.0445, 0.0613, 0.0339, 0.0669], device='cuda:2'), in_proj_covar=tensor([0.0124, 0.0150, 0.0125, 0.0125, 0.0131, 0.0128, 0.0141, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.0621e-05, 1.0855e-04, 8.9061e-05, 8.8315e-05, 9.2175e-05, 9.1431e-05, 1.0139e-04, 1.0549e-04], device='cuda:2') 2023-03-26 23:32:12,109 INFO [finetune.py:976] (2/7) Epoch 19, batch 4600, loss[loss=0.1946, simple_loss=0.2603, pruned_loss=0.06448, over 4812.00 frames. ], tot_loss[loss=0.1819, simple_loss=0.2525, pruned_loss=0.05563, over 954439.16 frames. ], batch size: 38, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:32:20,542 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5018, 1.0958, 0.7207, 1.3390, 1.9265, 0.6596, 1.2314, 1.3615], device='cuda:2'), covar=tensor([0.1506, 0.2054, 0.1778, 0.1207, 0.1893, 0.1954, 0.1500, 0.1984], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0110, 0.0090, 0.0119, 0.0092, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 23:32:26,322 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=107720.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:32:37,571 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=107737.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:32:45,689 INFO [finetune.py:976] (2/7) Epoch 19, batch 4650, loss[loss=0.15, simple_loss=0.2117, pruned_loss=0.04414, over 4832.00 frames. ], tot_loss[loss=0.1811, simple_loss=0.2507, pruned_loss=0.05571, over 954118.24 frames. ], batch size: 30, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:32:47,056 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2449, 2.0215, 2.1392, 0.9928, 2.3473, 2.5946, 2.1755, 1.9471], device='cuda:2'), covar=tensor([0.0941, 0.0692, 0.0455, 0.0698, 0.0459, 0.0620, 0.0451, 0.0709], device='cuda:2'), in_proj_covar=tensor([0.0124, 0.0150, 0.0125, 0.0125, 0.0131, 0.0128, 0.0141, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.0675e-05, 1.0843e-04, 8.9138e-05, 8.8341e-05, 9.2202e-05, 9.1490e-05, 1.0142e-04, 1.0554e-04], device='cuda:2') 2023-03-26 23:32:48,738 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.855e+01 1.504e+02 1.713e+02 2.086e+02 4.043e+02, threshold=3.426e+02, percent-clipped=2.0 2023-03-26 23:33:17,177 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=107796.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 23:33:18,396 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=107798.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 23:33:19,350 INFO [finetune.py:976] (2/7) Epoch 19, batch 4700, loss[loss=0.1677, simple_loss=0.2405, pruned_loss=0.04747, over 4861.00 frames. ], tot_loss[loss=0.1776, simple_loss=0.2469, pruned_loss=0.05413, over 956668.24 frames. 
], batch size: 31, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:33:25,041 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=107808.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:33:49,095 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=107844.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 23:33:54,086 INFO [finetune.py:976] (2/7) Epoch 19, batch 4750, loss[loss=0.203, simple_loss=0.2755, pruned_loss=0.06523, over 4805.00 frames. ], tot_loss[loss=0.1764, simple_loss=0.2451, pruned_loss=0.0539, over 954850.43 frames. ], batch size: 45, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:33:57,616 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.048e+02 1.438e+02 1.688e+02 2.143e+02 3.806e+02, threshold=3.376e+02, percent-clipped=2.0 2023-03-26 23:33:58,880 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=107856.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:34:05,000 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=107866.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:34:37,227 INFO [finetune.py:976] (2/7) Epoch 19, batch 4800, loss[loss=0.1905, simple_loss=0.2619, pruned_loss=0.05958, over 4873.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.2476, pruned_loss=0.05496, over 953250.74 frames. ], batch size: 31, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:34:54,135 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0 2023-03-26 23:35:10,747 INFO [finetune.py:976] (2/7) Epoch 19, batch 4850, loss[loss=0.1744, simple_loss=0.2454, pruned_loss=0.05165, over 4862.00 frames. ], tot_loss[loss=0.1791, simple_loss=0.2487, pruned_loss=0.0547, over 952607.64 frames. ], batch size: 34, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:35:13,742 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.163e+02 1.610e+02 1.895e+02 2.225e+02 4.035e+02, threshold=3.790e+02, percent-clipped=2.0 2023-03-26 23:35:19,082 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4937, 1.4018, 1.4050, 1.3825, 0.9824, 2.8966, 1.0030, 1.4031], device='cuda:2'), covar=tensor([0.3468, 0.2628, 0.2182, 0.2440, 0.1832, 0.0314, 0.2990, 0.1340], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0115, 0.0120, 0.0123, 0.0114, 0.0097, 0.0095, 0.0096], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 23:35:29,441 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.66 vs. limit=5.0 2023-03-26 23:35:45,813 INFO [finetune.py:976] (2/7) Epoch 19, batch 4900, loss[loss=0.1377, simple_loss=0.2161, pruned_loss=0.02967, over 4709.00 frames. ], tot_loss[loss=0.1813, simple_loss=0.251, pruned_loss=0.05576, over 951395.72 frames. ], batch size: 23, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:35:56,298 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0 2023-03-26 23:35:57,377 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=108015.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:35:58,160 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0 2023-03-26 23:36:28,166 INFO [finetune.py:976] (2/7) Epoch 19, batch 4950, loss[loss=0.1904, simple_loss=0.2727, pruned_loss=0.05407, over 4847.00 frames. ], tot_loss[loss=0.1801, simple_loss=0.2505, pruned_loss=0.05485, over 951552.96 frames. 
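
The zipformer.py:1188 lines track stochastic layer dropout in each encoder stack: each stack has a warm-up window in batches (warmup_begin..warmup_end) during which whole layers are bypassed with elevated probability, decaying to a small floor afterwards; with batch_count near 1.1e5 that floor is why most records show num_to_drop=0 with an occasional num_to_drop=1. A toy sketch of such a policy; the probabilities are illustrative assumptions, not zipformer's actual constants:

    import random

    def pick_layers_to_drop(batch_count: float, warmup_begin: float,
                            warmup_end: float, num_layers: int,
                            p_start: float = 0.5, p_floor: float = 0.05) -> set:
        # Assumed shape: drop probability ramps down across the warm-up
        # window, then stays at a small floor for the rest of training.
        if batch_count <= warmup_begin:
            p = p_start
        elif batch_count >= warmup_end:
            p = p_floor
        else:
            t = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
            p = p_start + t * (p_floor - p_start)
        return {i for i in range(num_layers) if random.random() < p}
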
], batch size: 44, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:36:31,631 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.136e+02 1.572e+02 1.807e+02 2.323e+02 4.539e+02, threshold=3.614e+02, percent-clipped=1.0 2023-03-26 23:37:04,011 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=108093.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 23:37:10,957 INFO [finetune.py:976] (2/7) Epoch 19, batch 5000, loss[loss=0.1739, simple_loss=0.245, pruned_loss=0.0514, over 4908.00 frames. ], tot_loss[loss=0.1791, simple_loss=0.2492, pruned_loss=0.05445, over 950773.73 frames. ], batch size: 36, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:37:45,897 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.6431, 1.6899, 1.5941, 0.8324, 1.7106, 1.9225, 1.8852, 1.4250], device='cuda:2'), covar=tensor([0.0956, 0.0549, 0.0500, 0.0591, 0.0416, 0.0577, 0.0355, 0.0761], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0150, 0.0124, 0.0124, 0.0131, 0.0128, 0.0141, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.0408e-05, 1.0829e-04, 8.8757e-05, 8.8218e-05, 9.2150e-05, 9.1606e-05, 1.0133e-04, 1.0548e-04], device='cuda:2') 2023-03-26 23:37:54,036 INFO [finetune.py:976] (2/7) Epoch 19, batch 5050, loss[loss=0.1525, simple_loss=0.2203, pruned_loss=0.04233, over 4868.00 frames. ], tot_loss[loss=0.1764, simple_loss=0.2461, pruned_loss=0.05335, over 950319.22 frames. ], batch size: 31, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:37:57,572 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.150e+02 1.577e+02 1.863e+02 2.132e+02 3.762e+02, threshold=3.725e+02, percent-clipped=1.0 2023-03-26 23:38:05,857 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=108166.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:38:27,769 INFO [finetune.py:976] (2/7) Epoch 19, batch 5100, loss[loss=0.1582, simple_loss=0.2152, pruned_loss=0.05062, over 4068.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2439, pruned_loss=0.05269, over 950830.38 frames. ], batch size: 17, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:38:38,019 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=108214.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:39:00,757 INFO [finetune.py:976] (2/7) Epoch 19, batch 5150, loss[loss=0.2242, simple_loss=0.2867, pruned_loss=0.08087, over 4726.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2424, pruned_loss=0.05186, over 950081.86 frames. ], batch size: 59, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:39:04,795 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.278e+01 1.451e+02 1.873e+02 2.231e+02 4.201e+02, threshold=3.747e+02, percent-clipped=0.0 2023-03-26 23:39:39,544 INFO [finetune.py:976] (2/7) Epoch 19, batch 5200, loss[loss=0.2334, simple_loss=0.2946, pruned_loss=0.08612, over 4881.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2452, pruned_loss=0.05258, over 950249.65 frames. 
], batch size: 32, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:39:50,294 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2998, 2.8949, 2.7481, 1.2408, 2.9978, 2.2543, 0.6639, 1.8674], device='cuda:2'), covar=tensor([0.2445, 0.2388, 0.1704, 0.3432, 0.1336, 0.1143, 0.4066, 0.1605], device='cuda:2'), in_proj_covar=tensor([0.0150, 0.0174, 0.0158, 0.0128, 0.0158, 0.0121, 0.0145, 0.0121], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 23:39:54,423 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=108315.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:39:59,937 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.51 vs. limit=2.0 2023-03-26 23:40:00,992 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=108325.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:40:16,476 INFO [finetune.py:976] (2/7) Epoch 19, batch 5250, loss[loss=0.1649, simple_loss=0.2398, pruned_loss=0.04496, over 4376.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.2476, pruned_loss=0.05334, over 951320.46 frames. ], batch size: 65, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:40:19,992 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.128e+02 1.566e+02 1.984e+02 2.332e+02 4.295e+02, threshold=3.968e+02, percent-clipped=2.0 2023-03-26 23:40:26,550 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=108363.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:40:42,054 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=108386.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:40:46,338 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=108393.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:40:49,957 INFO [finetune.py:976] (2/7) Epoch 19, batch 5300, loss[loss=0.1458, simple_loss=0.224, pruned_loss=0.03382, over 4778.00 frames. ], tot_loss[loss=0.1786, simple_loss=0.2489, pruned_loss=0.05412, over 951765.00 frames. 
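
The scaling.py:679 "Whitening" lines compare a whiteness metric of a module's activations against a limit; the module only intervenes (via a gradient correction) when the metric exceeds the limit, and records like "metric=1.51 vs. limit=2.0" above show it staying within bounds. The metric is 1.0 when the per-group channel covariance is a multiple of the identity and grows as its eigenvalues spread; a plausible reconstruction under that definition (assumed, not scaling.py's literal code):

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
        # x: (num_frames, num_channels); channels are split into groups.
        num_frames, num_channels = x.shape
        x = x.reshape(num_frames, num_groups, num_channels // num_groups)
        x = x.transpose(0, 1)                      # (groups, frames, chans)
        cov = x.transpose(1, 2) @ x / num_frames   # per-group covariance
        eigs = torch.linalg.eigvalsh(cov)          # (groups, chans)
        # E[lambda^2] / E[lambda]^2 == 1.0 iff all eigenvalues are equal.
        return (eigs ** 2).mean() / (eigs.mean() ** 2)
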
], batch size: 26, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:40:53,138 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3365, 2.2566, 2.0001, 1.0767, 2.1214, 1.8347, 1.6618, 2.0988], device='cuda:2'), covar=tensor([0.1006, 0.0772, 0.1713, 0.2042, 0.1307, 0.2066, 0.2231, 0.0977], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0194, 0.0200, 0.0183, 0.0211, 0.0208, 0.0223, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 23:41:22,536 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=108432.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:41:22,568 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0559, 1.9912, 1.6114, 1.9257, 1.8118, 1.8089, 1.8636, 2.6088], device='cuda:2'), covar=tensor([0.3954, 0.4233, 0.3355, 0.3840, 0.4069, 0.2642, 0.4024, 0.1755], device='cuda:2'), in_proj_covar=tensor([0.0288, 0.0260, 0.0230, 0.0275, 0.0251, 0.0221, 0.0253, 0.0233], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 23:41:32,019 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=108441.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:41:36,861 INFO [finetune.py:976] (2/7) Epoch 19, batch 5350, loss[loss=0.174, simple_loss=0.2511, pruned_loss=0.04851, over 4895.00 frames. ], tot_loss[loss=0.1796, simple_loss=0.2499, pruned_loss=0.05462, over 952327.80 frames. ], batch size: 32, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:41:39,875 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.546e+01 1.453e+02 1.815e+02 2.266e+02 3.194e+02, threshold=3.630e+02, percent-clipped=0.0 2023-03-26 23:42:09,105 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=108493.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 23:42:12,597 INFO [finetune.py:976] (2/7) Epoch 19, batch 5400, loss[loss=0.1805, simple_loss=0.2371, pruned_loss=0.06195, over 4239.00 frames. ], tot_loss[loss=0.1787, simple_loss=0.2483, pruned_loss=0.05451, over 952717.42 frames. ], batch size: 17, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:42:58,626 INFO [finetune.py:976] (2/7) Epoch 19, batch 5450, loss[loss=0.1342, simple_loss=0.2112, pruned_loss=0.02861, over 4757.00 frames. ], tot_loss[loss=0.1773, simple_loss=0.2463, pruned_loss=0.05419, over 954225.53 frames. 
], batch size: 27, lr: 3.27e-03, grad_scale: 32.0
2023-03-26 23:42:58,766 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9154, 1.8432, 1.4152, 1.6715, 1.9503, 1.6294, 2.4607, 1.8946], device='cuda:2'), covar=tensor([0.1288, 0.1694, 0.3059, 0.2595, 0.2439, 0.1691, 0.2087, 0.1765], device='cuda:2'), in_proj_covar=tensor([0.0183, 0.0186, 0.0232, 0.0251, 0.0245, 0.0202, 0.0213, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 23:43:01,646 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.000e+02 1.562e+02 1.816e+02 2.216e+02 4.232e+02, threshold=3.632e+02, percent-clipped=2.0
2023-03-26 23:43:04,818 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8820, 1.6750, 1.5244, 1.9163, 2.2323, 1.8826, 1.5282, 1.5214], device='cuda:2'), covar=tensor([0.2107, 0.2065, 0.1968, 0.1666, 0.1679, 0.1247, 0.2594, 0.1988], device='cuda:2'), in_proj_covar=tensor([0.0243, 0.0209, 0.0211, 0.0193, 0.0243, 0.0188, 0.0216, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 23:43:08,292 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1162, 2.1366, 1.8371, 1.8783, 2.6605, 2.7057, 2.0613, 2.1077], device='cuda:2'), covar=tensor([0.0345, 0.0387, 0.0524, 0.0336, 0.0191, 0.0339, 0.0403, 0.0374], device='cuda:2'), in_proj_covar=tensor([0.0097, 0.0108, 0.0145, 0.0112, 0.0100, 0.0111, 0.0100, 0.0112], device='cuda:2'), out_proj_covar=tensor([7.5359e-05, 8.3081e-05, 1.1388e-04, 8.5666e-05, 7.7807e-05, 8.2015e-05, 7.4309e-05, 8.5285e-05], device='cuda:2')
2023-03-26 23:43:14,423 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0
2023-03-26 23:43:17,226 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=108577.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:43:31,854 INFO [finetune.py:976] (2/7) Epoch 19, batch 5500, loss[loss=0.2358, simple_loss=0.281, pruned_loss=0.09532, over 4899.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2433, pruned_loss=0.05286, over 956609.83 frames. ], batch size: 32, lr: 3.27e-03, grad_scale: 32.0
2023-03-26 23:43:50,781 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9169, 1.6318, 2.0745, 1.4632, 1.9208, 2.0836, 1.5860, 2.2441], device='cuda:2'), covar=tensor([0.1316, 0.2274, 0.1450, 0.1831, 0.0918, 0.1389, 0.2838, 0.0808], device='cuda:2'), in_proj_covar=tensor([0.0191, 0.0204, 0.0191, 0.0189, 0.0174, 0.0213, 0.0217, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 23:43:58,831 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=108638.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:44:05,686 INFO [finetune.py:976] (2/7) Epoch 19, batch 5550, loss[loss=0.1849, simple_loss=0.2476, pruned_loss=0.06108, over 4897.00 frames. ], tot_loss[loss=0.1764, simple_loss=0.2453, pruned_loss=0.05377, over 956543.43 frames. ], batch size: 32, lr: 3.27e-03, grad_scale: 32.0
2023-03-26 23:44:08,703 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.553e+02 1.822e+02 2.201e+02 3.552e+02, threshold=3.643e+02, percent-clipped=0.0
2023-03-26 23:44:10,800 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0
2023-03-26 23:44:21,373 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0
2023-03-26 23:44:27,084 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=108681.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:44:37,477 INFO [finetune.py:976] (2/7) Epoch 19, batch 5600, loss[loss=0.187, simple_loss=0.2616, pruned_loss=0.05619, over 4839.00 frames. ], tot_loss[loss=0.1793, simple_loss=0.2488, pruned_loss=0.05494, over 951073.78 frames. ], batch size: 47, lr: 3.27e-03, grad_scale: 32.0
2023-03-26 23:44:41,098 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7876, 1.7117, 1.8299, 1.2085, 1.8186, 1.8539, 1.8227, 1.5000], device='cuda:2'), covar=tensor([0.0545, 0.0620, 0.0601, 0.0849, 0.0811, 0.0621, 0.0618, 0.1066], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0135, 0.0140, 0.0121, 0.0125, 0.0139, 0.0139, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 23:44:52,354 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0
2023-03-26 23:45:09,467 INFO [finetune.py:976] (2/7) Epoch 19, batch 5650, loss[loss=0.1805, simple_loss=0.2627, pruned_loss=0.0492, over 4823.00 frames. ], tot_loss[loss=0.1807, simple_loss=0.2513, pruned_loss=0.05507, over 952985.95 frames. ], batch size: 25, lr: 3.26e-03, grad_scale: 32.0
2023-03-26 23:45:12,317 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.055e+02 1.584e+02 1.878e+02 2.184e+02 3.636e+02, threshold=3.756e+02, percent-clipped=0.0
2023-03-26 23:45:32,964 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=108788.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 23:45:34,789 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=108791.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:45:39,454 INFO [finetune.py:976] (2/7) Epoch 19, batch 5700, loss[loss=0.1409, simple_loss=0.1988, pruned_loss=0.04145, over 4314.00 frames. ], tot_loss[loss=0.1787, simple_loss=0.2479, pruned_loss=0.05473, over 933423.95 frames. ], batch size: 18, lr: 3.26e-03, grad_scale: 32.0
2023-03-26 23:46:08,107 INFO [finetune.py:976] (2/7) Epoch 20, batch 0, loss[loss=0.1692, simple_loss=0.24, pruned_loss=0.04924, over 4752.00 frames. ], tot_loss[loss=0.1692, simple_loss=0.24, pruned_loss=0.04924, over 4752.00 frames.
], batch size: 27, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:46:08,107 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-26 23:46:15,450 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8847, 1.2803, 0.9286, 1.6607, 2.1373, 1.2132, 1.6192, 1.5969], device='cuda:2'), covar=tensor([0.1368, 0.1854, 0.1753, 0.1039, 0.1787, 0.1930, 0.1190, 0.1815], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0095, 0.0111, 0.0091, 0.0120, 0.0093, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 23:46:16,842 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2031, 2.0923, 2.0312, 1.8606, 1.9941, 2.1108, 2.1058, 2.6453], device='cuda:2'), covar=tensor([0.3453, 0.4383, 0.3330, 0.3703, 0.3596, 0.2425, 0.3444, 0.1671], device='cuda:2'), in_proj_covar=tensor([0.0287, 0.0261, 0.0230, 0.0275, 0.0251, 0.0221, 0.0252, 0.0232], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 23:46:24,542 INFO [finetune.py:1010] (2/7) Epoch 20, validation: loss=0.158, simple_loss=0.2276, pruned_loss=0.04423, over 2265189.00 frames. 2023-03-26 23:46:24,543 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB 2023-03-26 23:46:28,018 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9109, 2.0645, 1.7276, 1.7837, 2.4292, 2.4886, 1.8894, 2.0292], device='cuda:2'), covar=tensor([0.0385, 0.0323, 0.0611, 0.0352, 0.0294, 0.0545, 0.0367, 0.0342], device='cuda:2'), in_proj_covar=tensor([0.0096, 0.0107, 0.0143, 0.0111, 0.0099, 0.0110, 0.0099, 0.0111], device='cuda:2'), out_proj_covar=tensor([7.4468e-05, 8.2235e-05, 1.1294e-04, 8.5013e-05, 7.7084e-05, 8.1163e-05, 7.3666e-05, 8.4546e-05], device='cuda:2') 2023-03-26 23:46:32,154 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8784, 1.2851, 0.8922, 1.6391, 2.1184, 1.4335, 1.6526, 1.7102], device='cuda:2'), covar=tensor([0.1419, 0.2049, 0.1879, 0.1133, 0.1887, 0.1860, 0.1315, 0.1772], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0095, 0.0111, 0.0091, 0.0120, 0.0093, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 23:46:52,593 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=108852.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:46:58,208 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.806e+01 1.424e+02 1.737e+02 2.098e+02 5.389e+02, threshold=3.475e+02, percent-clipped=2.0 2023-03-26 23:47:17,654 INFO [finetune.py:976] (2/7) Epoch 20, batch 50, loss[loss=0.1785, simple_loss=0.2538, pruned_loss=0.05162, over 4885.00 frames. ], tot_loss[loss=0.1847, simple_loss=0.2559, pruned_loss=0.05681, over 217588.76 frames. 
], batch size: 35, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:47:18,977 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9112, 1.8688, 1.6982, 1.9333, 1.6840, 4.6027, 1.7813, 2.3934], device='cuda:2'), covar=tensor([0.3346, 0.2526, 0.2068, 0.2209, 0.1494, 0.0120, 0.2414, 0.1100], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0114, 0.0119, 0.0122, 0.0113, 0.0096, 0.0095, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-26 23:47:57,360 INFO [finetune.py:976] (2/7) Epoch 20, batch 100, loss[loss=0.1549, simple_loss=0.2253, pruned_loss=0.04223, over 4819.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2445, pruned_loss=0.05224, over 381176.05 frames. ], batch size: 33, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:47:59,447 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. limit=2.0 2023-03-26 23:48:06,344 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=108933.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:48:13,179 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.67 vs. limit=5.0 2023-03-26 23:48:23,147 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.009e+02 1.362e+02 1.754e+02 2.070e+02 5.157e+02, threshold=3.508e+02, percent-clipped=1.0 2023-03-26 23:48:38,580 INFO [finetune.py:976] (2/7) Epoch 20, batch 150, loss[loss=0.2112, simple_loss=0.2745, pruned_loss=0.07394, over 4831.00 frames. ], tot_loss[loss=0.1715, simple_loss=0.2406, pruned_loss=0.05126, over 509830.30 frames. ], batch size: 40, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:48:41,560 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=108981.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:49:04,009 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6144, 1.6008, 1.3652, 1.5017, 1.8871, 1.7988, 1.5212, 1.4216], device='cuda:2'), covar=tensor([0.0309, 0.0328, 0.0591, 0.0308, 0.0205, 0.0446, 0.0344, 0.0407], device='cuda:2'), in_proj_covar=tensor([0.0096, 0.0106, 0.0143, 0.0111, 0.0099, 0.0110, 0.0099, 0.0110], device='cuda:2'), out_proj_covar=tensor([7.4520e-05, 8.1978e-05, 1.1242e-04, 8.4888e-05, 7.6935e-05, 8.1188e-05, 7.3524e-05, 8.4272e-05], device='cuda:2') 2023-03-26 23:49:11,418 INFO [finetune.py:976] (2/7) Epoch 20, batch 200, loss[loss=0.1564, simple_loss=0.2163, pruned_loss=0.04826, over 4903.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2394, pruned_loss=0.0509, over 610469.19 frames. 
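
The tot_loss[...] figures are not plain epoch averages; they behave like an exponentially decaying sum of per-batch statistics, reported as loss per accumulated frame. That reading explains both the frame counts rebuilding after the epoch-20 reset (4752.00 frames at batch 0, then ~218k, ~381k, ~510k, ~610k by batch 200) and the ~9.5e5-frame plateau late in an epoch, roughly reset_interval=200 batches times the typical ~4800 frames per batch. A sketch of such an accumulator; tying the decay constant to reset_interval is our assumption:

    class RunningLoss:
        """Decaying sums of (loss * frames) and frames; avg is per-frame loss."""

        def __init__(self, reset_interval: int = 200):
            self.decay = 1.0 - 1.0 / reset_interval
            self.loss_sum = 0.0
            self.frames = 0.0

        def update(self, batch_loss: float, batch_frames: float) -> None:
            self.loss_sum = self.decay * self.loss_sum + batch_loss * batch_frames
            self.frames = self.decay * self.frames + batch_frames

        @property
        def avg(self) -> float:
            # Steady-state frames ~ batch_frames * reset_interval ~ 9.6e5,
            # matching the "over ~950000 frames" seen late in an epoch.
            return self.loss_sum / max(self.frames, 1.0)
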
], batch size: 32, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:49:11,509 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5932, 1.5136, 2.2298, 3.1742, 2.1104, 2.3035, 1.0114, 2.5841], device='cuda:2'), covar=tensor([0.1717, 0.1435, 0.1139, 0.0654, 0.0859, 0.1716, 0.1768, 0.0532], device='cuda:2'), in_proj_covar=tensor([0.0098, 0.0116, 0.0133, 0.0164, 0.0100, 0.0136, 0.0124, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-26 23:49:13,180 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=109029.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:49:19,015 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=109037.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 23:49:29,131 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.011e+02 1.519e+02 1.780e+02 2.129e+02 3.450e+02, threshold=3.561e+02, percent-clipped=0.0 2023-03-26 23:49:32,730 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9263, 1.3588, 0.9233, 1.7592, 2.2039, 1.5571, 1.6766, 1.6894], device='cuda:2'), covar=tensor([0.1443, 0.2067, 0.1978, 0.1164, 0.1960, 0.1855, 0.1448, 0.1983], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0095, 0.0111, 0.0091, 0.0120, 0.0093, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-26 23:49:41,634 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0127, 1.4166, 0.7700, 1.8145, 2.2126, 1.6603, 1.7380, 1.7265], device='cuda:2'), covar=tensor([0.1506, 0.2156, 0.2150, 0.1216, 0.1914, 0.1785, 0.1430, 0.2118], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0095, 0.0111, 0.0091, 0.0120, 0.0093, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 23:49:44,471 INFO [finetune.py:976] (2/7) Epoch 20, batch 250, loss[loss=0.2119, simple_loss=0.2743, pruned_loss=0.0747, over 4903.00 frames. ], tot_loss[loss=0.1765, simple_loss=0.2456, pruned_loss=0.05366, over 686867.17 frames. ], batch size: 35, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:49:52,133 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=109088.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:49:58,701 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=109098.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 23:50:17,243 INFO [finetune.py:976] (2/7) Epoch 20, batch 300, loss[loss=0.169, simple_loss=0.2297, pruned_loss=0.05416, over 4719.00 frames. ], tot_loss[loss=0.1796, simple_loss=0.2502, pruned_loss=0.05451, over 745359.22 frames. ], batch size: 23, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:50:23,612 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=109136.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:50:31,197 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=109147.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:50:35,398 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.121e+02 1.518e+02 1.856e+02 2.256e+02 3.204e+02, threshold=3.712e+02, percent-clipped=0.0 2023-03-26 23:50:50,165 INFO [finetune.py:976] (2/7) Epoch 20, batch 350, loss[loss=0.1384, simple_loss=0.1974, pruned_loss=0.03973, over 4538.00 frames. 
], tot_loss[loss=0.1811, simple_loss=0.2513, pruned_loss=0.05539, over 791712.98 frames. ], batch size: 20, lr: 3.26e-03, grad_scale: 64.0 2023-03-26 23:51:00,288 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-26 23:51:01,446 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=109194.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:51:01,676 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.55 vs. limit=5.0 2023-03-26 23:51:08,329 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.3732, 4.6543, 5.0065, 5.3115, 5.0515, 4.6922, 5.4955, 1.6752], device='cuda:2'), covar=tensor([0.0722, 0.0863, 0.0744, 0.0851, 0.1281, 0.1699, 0.0547, 0.5860], device='cuda:2'), in_proj_covar=tensor([0.0347, 0.0244, 0.0276, 0.0291, 0.0331, 0.0281, 0.0300, 0.0294], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 23:51:10,165 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9119, 1.3525, 0.7817, 1.6666, 2.1358, 1.3975, 1.7301, 1.6078], device='cuda:2'), covar=tensor([0.1439, 0.2074, 0.2073, 0.1208, 0.1933, 0.1961, 0.1299, 0.2079], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0095, 0.0110, 0.0091, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 23:51:25,272 INFO [finetune.py:976] (2/7) Epoch 20, batch 400, loss[loss=0.127, simple_loss=0.192, pruned_loss=0.03104, over 4161.00 frames. ], tot_loss[loss=0.1807, simple_loss=0.2516, pruned_loss=0.05489, over 827301.34 frames. ], batch size: 17, lr: 3.26e-03, grad_scale: 64.0 2023-03-26 23:51:34,318 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=109233.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:52:03,601 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.047e+02 1.657e+02 1.900e+02 2.185e+02 4.941e+02, threshold=3.801e+02, percent-clipped=3.0 2023-03-26 23:52:03,751 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9468, 1.8304, 1.6210, 2.0224, 2.3287, 2.0806, 1.5602, 1.5697], device='cuda:2'), covar=tensor([0.2054, 0.1870, 0.1858, 0.1564, 0.1649, 0.1103, 0.2404, 0.1979], device='cuda:2'), in_proj_covar=tensor([0.0245, 0.0211, 0.0211, 0.0194, 0.0244, 0.0188, 0.0216, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 23:52:04,365 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=109255.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:52:04,424 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.67 vs. limit=2.0 2023-03-26 23:52:17,245 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8711, 1.3179, 1.0577, 1.6793, 2.0733, 1.4909, 1.7303, 1.7017], device='cuda:2'), covar=tensor([0.1442, 0.2034, 0.1862, 0.1177, 0.1983, 0.1891, 0.1342, 0.1942], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0095, 0.0110, 0.0091, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-26 23:52:26,281 INFO [finetune.py:976] (2/7) Epoch 20, batch 450, loss[loss=0.1419, simple_loss=0.2116, pruned_loss=0.0361, over 4817.00 frames. 
], tot_loss[loss=0.179, simple_loss=0.2497, pruned_loss=0.05414, over 856049.42 frames. ], batch size: 38, lr: 3.26e-03, grad_scale: 64.0 2023-03-26 23:52:29,294 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=109281.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:52:38,181 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=109294.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:52:54,644 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=109318.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:53:02,103 INFO [finetune.py:976] (2/7) Epoch 20, batch 500, loss[loss=0.2, simple_loss=0.2584, pruned_loss=0.07086, over 4821.00 frames. ], tot_loss[loss=0.1769, simple_loss=0.2468, pruned_loss=0.0535, over 879335.06 frames. ], batch size: 40, lr: 3.26e-03, grad_scale: 64.0 2023-03-26 23:53:23,178 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.42 vs. limit=5.0 2023-03-26 23:53:34,821 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.087e+02 1.492e+02 1.802e+02 2.178e+02 4.247e+02, threshold=3.605e+02, percent-clipped=3.0 2023-03-26 23:53:39,384 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=109355.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:53:46,623 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-26 23:53:52,981 INFO [finetune.py:976] (2/7) Epoch 20, batch 550, loss[loss=0.1626, simple_loss=0.2287, pruned_loss=0.04824, over 4827.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2437, pruned_loss=0.05268, over 894454.72 frames. ], batch size: 25, lr: 3.26e-03, grad_scale: 64.0 2023-03-26 23:53:54,338 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=109379.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 23:54:03,683 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=109393.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 23:54:26,236 INFO [finetune.py:976] (2/7) Epoch 20, batch 600, loss[loss=0.2216, simple_loss=0.3065, pruned_loss=0.06833, over 4841.00 frames. ], tot_loss[loss=0.1762, simple_loss=0.2455, pruned_loss=0.05347, over 908481.44 frames. ], batch size: 47, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:54:39,788 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=109447.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:54:44,549 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.417e+01 1.548e+02 1.729e+02 2.159e+02 3.434e+02, threshold=3.458e+02, percent-clipped=0.0 2023-03-26 23:54:59,325 INFO [finetune.py:976] (2/7) Epoch 20, batch 650, loss[loss=0.2299, simple_loss=0.2891, pruned_loss=0.08531, over 4930.00 frames. ], tot_loss[loss=0.1796, simple_loss=0.2493, pruned_loss=0.05493, over 919906.44 frames. 
], batch size: 42, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:55:05,493 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5388, 3.7756, 3.5553, 1.5028, 3.9012, 2.7351, 0.8375, 2.5413], device='cuda:2'), covar=tensor([0.2643, 0.1876, 0.1511, 0.3724, 0.0937, 0.1045, 0.4383, 0.1444], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0175, 0.0159, 0.0129, 0.0159, 0.0123, 0.0146, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-26 23:55:10,856 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=109495.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:55:33,023 INFO [finetune.py:976] (2/7) Epoch 20, batch 700, loss[loss=0.1627, simple_loss=0.2511, pruned_loss=0.03719, over 4918.00 frames. ], tot_loss[loss=0.1802, simple_loss=0.2508, pruned_loss=0.05478, over 927158.37 frames. ], batch size: 38, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:55:37,971 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.2946, 2.9285, 3.0844, 3.2078, 3.0583, 2.8398, 3.3477, 0.9381], device='cuda:2'), covar=tensor([0.1251, 0.1144, 0.1216, 0.1358, 0.1871, 0.1947, 0.1220, 0.5947], device='cuda:2'), in_proj_covar=tensor([0.0347, 0.0244, 0.0277, 0.0292, 0.0331, 0.0281, 0.0301, 0.0295], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-26 23:55:47,500 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=109550.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:55:51,348 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.095e+02 1.485e+02 1.783e+02 2.085e+02 4.380e+02, threshold=3.566e+02, percent-clipped=3.0 2023-03-26 23:56:06,061 INFO [finetune.py:976] (2/7) Epoch 20, batch 750, loss[loss=0.2013, simple_loss=0.2683, pruned_loss=0.06709, over 4815.00 frames. ], tot_loss[loss=0.1828, simple_loss=0.2534, pruned_loss=0.0561, over 932222.23 frames. ], batch size: 39, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:56:07,740 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=109579.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:56:12,636 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.9305, 1.4417, 1.3590, 0.8405, 1.6023, 1.6683, 1.5942, 1.4170], device='cuda:2'), covar=tensor([0.0809, 0.0671, 0.0620, 0.0594, 0.0551, 0.0663, 0.0434, 0.0666], device='cuda:2'), in_proj_covar=tensor([0.0124, 0.0150, 0.0124, 0.0124, 0.0131, 0.0128, 0.0142, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.0524e-05, 1.0863e-04, 8.8719e-05, 8.8048e-05, 9.1909e-05, 9.1738e-05, 1.0175e-04, 1.0555e-04], device='cuda:2') 2023-03-26 23:56:39,568 INFO [finetune.py:976] (2/7) Epoch 20, batch 800, loss[loss=0.1663, simple_loss=0.2375, pruned_loss=0.04755, over 4867.00 frames. ], tot_loss[loss=0.1804, simple_loss=0.2514, pruned_loss=0.05467, over 936362.68 frames. 
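
The lr field decays smoothly from 3.28e-03 in epoch 19 to 3.25e-03 here in epoch 20, consistent with icefall's Eden schedule under the run's base_lr=0.004, lr_batches=1e5 and lr_epochs=100: both the batch and the epoch terms are quarter-power decays. A sketch of the formula as we read it (numerically consistent with the logged values, but a reconstruction, not a verified copy of optim.py):

    def eden_lr(base_lr: float, batch: float, epoch: float,
                lr_batches: float = 100000.0,
                lr_epochs: float = 100.0) -> float:
        return (base_lr
                * ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
                * ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25)

    # eden_lr(0.004, 106500, 19) -> ~3.28e-03 (epoch 19 above)
    # eden_lr(0.004, 110300, 20) -> ~3.25e-03 (epoch 20 here)
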
], batch size: 31, lr: 3.25e-03, grad_scale: 32.0
2023-03-26 23:56:50,330 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=109640.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:56:56,816 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=109650.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:56:59,764 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.018e+02 1.497e+02 1.774e+02 2.103e+02 3.199e+02, threshold=3.548e+02, percent-clipped=0.0
2023-03-26 23:57:03,306 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=109659.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:57:23,373 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=109674.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 23:57:23,452 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0084, 1.8975, 1.5810, 1.5906, 1.7882, 1.7475, 1.8380, 2.4916], device='cuda:2'), covar=tensor([0.3672, 0.4033, 0.3107, 0.3557, 0.3606, 0.2339, 0.3299, 0.1627], device='cuda:2'), in_proj_covar=tensor([0.0286, 0.0260, 0.0230, 0.0274, 0.0251, 0.0220, 0.0252, 0.0231], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-26 23:57:25,121 INFO [finetune.py:976] (2/7) Epoch 20, batch 850, loss[loss=0.1569, simple_loss=0.2067, pruned_loss=0.05353, over 4274.00 frames. ], tot_loss[loss=0.1786, simple_loss=0.2493, pruned_loss=0.0539, over 940333.70 frames. ], batch size: 18, lr: 3.25e-03, grad_scale: 32.0
2023-03-26 23:57:40,035 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=109693.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 23:58:06,031 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=109720.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:58:10,728 INFO [finetune.py:976] (2/7) Epoch 20, batch 900, loss[loss=0.1633, simple_loss=0.2327, pruned_loss=0.047, over 4819.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.245, pruned_loss=0.0523, over 944320.62 frames. ], batch size: 41, lr: 3.25e-03, grad_scale: 32.0
2023-03-26 23:58:10,823 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7236, 1.5680, 2.1640, 3.4857, 2.3461, 2.4380, 1.1715, 2.8064], device='cuda:2'), covar=tensor([0.1680, 0.1367, 0.1318, 0.0524, 0.0808, 0.1686, 0.1775, 0.0457], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0116, 0.0134, 0.0164, 0.0100, 0.0135, 0.0124, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-26 23:58:22,309 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=109741.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 23:58:40,320 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.017e+02 1.530e+02 1.823e+02 2.181e+02 3.809e+02, threshold=3.647e+02, percent-clipped=1.0
2023-03-26 23:59:03,843 INFO [finetune.py:976] (2/7) Epoch 20, batch 950, loss[loss=0.2032, simple_loss=0.2712, pruned_loss=0.06762, over 4821.00 frames. ], tot_loss[loss=0.1741, simple_loss=0.2434, pruned_loss=0.0524, over 946832.31 frames. ], batch size: 30, lr: 3.25e-03, grad_scale: 32.0
2023-03-26 23:59:08,305 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0
2023-03-26 23:59:36,850 INFO [finetune.py:976] (2/7) Epoch 20, batch 1000, loss[loss=0.1654, simple_loss=0.2211, pruned_loss=0.0549, over 4725.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2451, pruned_loss=0.05268, over 950535.46 frames. ], batch size: 23, lr: 3.25e-03, grad_scale: 32.0
2023-03-26 23:59:52,407 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=109850.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:59:55,343 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.032e+02 1.630e+02 1.951e+02 2.313e+02 5.473e+02, threshold=3.903e+02, percent-clipped=2.0
2023-03-27 00:00:08,421 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7347, 1.6206, 1.5198, 1.7465, 1.2310, 4.4251, 1.5963, 1.8909], device='cuda:2'), covar=tensor([0.3214, 0.2361, 0.2208, 0.2278, 0.1856, 0.0137, 0.2513, 0.1258], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0115, 0.0120, 0.0122, 0.0114, 0.0096, 0.0095, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 00:00:10,589 INFO [finetune.py:976] (2/7) Epoch 20, batch 1050, loss[loss=0.1922, simple_loss=0.2594, pruned_loss=0.06257, over 4914.00 frames. ], tot_loss[loss=0.1779, simple_loss=0.2484, pruned_loss=0.05366, over 953464.31 frames. ], batch size: 36, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:00:24,790 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=109898.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:00:43,711 INFO [finetune.py:976] (2/7) Epoch 20, batch 1100, loss[loss=0.1445, simple_loss=0.2191, pruned_loss=0.03499, over 4776.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2501, pruned_loss=0.0539, over 955214.75 frames. ], batch size: 26, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:00:49,668 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=109935.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:00:59,763 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=109950.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:01:02,698 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.100e+02 1.532e+02 1.890e+02 2.271e+02 3.423e+02, threshold=3.780e+02, percent-clipped=0.0
2023-03-27 00:01:15,524 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=109974.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:01:17,222 INFO [finetune.py:976] (2/7) Epoch 20, batch 1150, loss[loss=0.1731, simple_loss=0.2496, pruned_loss=0.04833, over 4870.00 frames. ], tot_loss[loss=0.1797, simple_loss=0.2514, pruned_loss=0.05399, over 955169.59 frames.
], batch size: 34, lr: 3.25e-03, grad_scale: 32.0 2023-03-27 00:01:31,932 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=109998.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:01:37,935 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3638, 2.2594, 1.8083, 2.3002, 2.1942, 2.0092, 2.6252, 2.3548], device='cuda:2'), covar=tensor([0.1419, 0.2192, 0.2983, 0.2640, 0.2651, 0.1609, 0.3170, 0.1731], device='cuda:2'), in_proj_covar=tensor([0.0183, 0.0186, 0.0232, 0.0251, 0.0245, 0.0202, 0.0213, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 00:01:44,406 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=110015.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:01:45,677 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110017.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:01:48,678 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=110022.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:01:52,168 INFO [finetune.py:976] (2/7) Epoch 20, batch 1200, loss[loss=0.1983, simple_loss=0.256, pruned_loss=0.0703, over 4865.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2505, pruned_loss=0.05375, over 956278.77 frames. ], batch size: 34, lr: 3.25e-03, grad_scale: 32.0 2023-03-27 00:02:04,554 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110044.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:02:11,656 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.682e+01 1.527e+02 1.776e+02 2.166e+02 4.163e+02, threshold=3.552e+02, percent-clipped=2.0 2023-03-27 00:02:32,495 INFO [finetune.py:976] (2/7) Epoch 20, batch 1250, loss[loss=0.188, simple_loss=0.2584, pruned_loss=0.05882, over 4840.00 frames. ], tot_loss[loss=0.1775, simple_loss=0.2484, pruned_loss=0.05336, over 958206.65 frames. ], batch size: 30, lr: 3.25e-03, grad_scale: 32.0 2023-03-27 00:02:33,254 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=110078.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:02:45,715 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.1533, 3.6353, 3.7982, 4.0467, 3.9156, 3.6991, 4.2488, 1.3422], device='cuda:2'), covar=tensor([0.0850, 0.0920, 0.0892, 0.0945, 0.1284, 0.1580, 0.0765, 0.5460], device='cuda:2'), in_proj_covar=tensor([0.0351, 0.0245, 0.0280, 0.0292, 0.0333, 0.0284, 0.0302, 0.0298], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 00:03:02,224 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=110105.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:03:10,000 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110110.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:03:13,063 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110115.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:03:23,864 INFO [finetune.py:976] (2/7) Epoch 20, batch 1300, loss[loss=0.183, simple_loss=0.2403, pruned_loss=0.06283, over 4830.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.2453, pruned_loss=0.05262, over 959549.60 frames. 
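The [zipformer.py:1188] lines record, for each encoder stack, its warmup window (warmup_begin/warmup_end, offset per stack), the global batch_count, and which whole layers were stochastically skipped this batch. The fact that num_to_drop is usually 0 but occasionally 1 long after warmup_end suggests a drop probability that decays through the warmup window to a small residual rate. A hypothetical re-creation of that decision (the schedule shape and rates are guesses, not icefall's actual values):

```python
import random


def pick_layers_to_drop(
    batch_count: float,
    warmup_begin: float,
    warmup_end: float,
    num_layers: int,
    initial_rate: float = 0.5,
    final_rate: float = 0.05,
) -> set:
    """Stochastically choose encoder layers to skip for this batch.

    Hypothetical sketch of the zipformer.py:1188 log fields: the drop
    probability decays linearly from initial_rate to final_rate over
    [warmup_begin, warmup_end] and stays at final_rate afterwards,
    which would explain the occasional num_to_drop=1 late in training.
    """
    if batch_count < warmup_begin:
        rate = initial_rate
    elif batch_count < warmup_end:
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        rate = initial_rate + frac * (final_rate - initial_rate)
    else:
        rate = final_rate
    return {i for i in range(num_layers) if random.random() < rate}
```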
2023-03-27 00:03:45,426 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.072e+02 1.532e+02 1.884e+02 2.318e+02 4.682e+02, threshold=3.767e+02, percent-clipped=2.0
2023-03-27 00:03:52,819 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0
2023-03-27 00:04:04,755 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=110171.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:04:12,688 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=110176.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:04:13,180 INFO [finetune.py:976] (2/7) Epoch 20, batch 1350, loss[loss=0.1738, simple_loss=0.2432, pruned_loss=0.05225, over 4743.00 frames. ], tot_loss[loss=0.175, simple_loss=0.2446, pruned_loss=0.05268, over 953758.19 frames. ], batch size: 59, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:05:15,847 INFO [finetune.py:976] (2/7) Epoch 20, batch 1400, loss[loss=0.1475, simple_loss=0.2287, pruned_loss=0.03316, over 4764.00 frames. ], tot_loss[loss=0.1774, simple_loss=0.2476, pruned_loss=0.0536, over 952081.98 frames. ], batch size: 27, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:05:26,719 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=110235.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:05:48,218 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.073e+02 1.545e+02 1.824e+02 2.176e+02 3.637e+02, threshold=3.648e+02, percent-clipped=0.0
2023-03-27 00:06:02,045 INFO [finetune.py:976] (2/7) Epoch 20, batch 1450, loss[loss=0.139, simple_loss=0.2055, pruned_loss=0.03625, over 4599.00 frames. ], tot_loss[loss=0.1775, simple_loss=0.2485, pruned_loss=0.05319, over 953506.17 frames. ], batch size: 20, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:06:06,233 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=110283.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:06:06,274 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110283.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:06:16,218 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6286, 3.5043, 3.2967, 1.7804, 3.5816, 2.7438, 1.0236, 2.5167], device='cuda:2'), covar=tensor([0.2482, 0.1988, 0.1652, 0.3195, 0.1169, 0.1020, 0.4066, 0.1364], device='cuda:2'), in_proj_covar=tensor([0.0154, 0.0178, 0.0161, 0.0131, 0.0162, 0.0124, 0.0147, 0.0125], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-27 00:06:20,401 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110303.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:06:28,197 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-27 00:06:28,600 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=110315.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:06:30,462 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110318.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:06:35,827 INFO [finetune.py:976] (2/7) Epoch 20, batch 1500, loss[loss=0.1823, simple_loss=0.2459, pruned_loss=0.05934, over 4901.00 frames. ], tot_loss[loss=0.1782, simple_loss=0.2495, pruned_loss=0.0535, over 954691.06 frames. ], batch size: 36, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:06:47,639 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=110344.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:06:55,109 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.117e+02 1.590e+02 1.939e+02 2.244e+02 3.777e+02, threshold=3.878e+02, percent-clipped=2.0
2023-03-27 00:07:00,488 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=110363.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:07:01,163 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=110364.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:07:07,073 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=110373.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:07:09,448 INFO [finetune.py:976] (2/7) Epoch 20, batch 1550, loss[loss=0.1813, simple_loss=0.251, pruned_loss=0.05585, over 4929.00 frames. ], tot_loss[loss=0.1786, simple_loss=0.2503, pruned_loss=0.05338, over 954751.52 frames. ], batch size: 42, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:07:10,802 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=110379.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:07:25,521 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=110400.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:07:43,127 INFO [finetune.py:976] (2/7) Epoch 20, batch 1600, loss[loss=0.1684, simple_loss=0.2393, pruned_loss=0.04875, over 4867.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.248, pruned_loss=0.05326, over 956192.42 frames. ], batch size: 31, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:08:13,511 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.758e+01 1.502e+02 1.787e+02 2.306e+02 4.709e+02, threshold=3.574e+02, percent-clipped=2.0
2023-03-27 00:08:29,701 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=110466.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:08:33,152 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=110471.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:08:40,392 INFO [finetune.py:976] (2/7) Epoch 20, batch 1650, loss[loss=0.145, simple_loss=0.2159, pruned_loss=0.03701, over 4922.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.2465, pruned_loss=0.05335, over 955098.26 frames. ], batch size: 43, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:08:55,549 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.2807, 3.6875, 3.8989, 4.1272, 4.0498, 3.8239, 4.3570, 1.4444], device='cuda:2'), covar=tensor([0.0763, 0.0904, 0.0857, 0.0970, 0.1153, 0.1460, 0.0722, 0.5480], device='cuda:2'), in_proj_covar=tensor([0.0349, 0.0244, 0.0278, 0.0291, 0.0331, 0.0282, 0.0302, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:09:05,832 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.21 vs. limit=5.0
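The [scaling.py:679] lines compare a "whitening metric" of some activation tensor against a per-module limit (2.0 for the grouped 96-channel cases, 5.0 for the single-group 384-channel case above). Based on icefall's Whiten module, the metric appears to be mean(eig²)/mean(eig)² of the per-group feature covariance, which is 1.0 for perfectly white features and grows when variance concentrates in a few directions; it can be computed from traces without an eigendecomposition. A sketch under that assumption:

```python
import torch


def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    """How far features are from 'white' (identity-like covariance).

    Sketch of the quantity logged at scaling.py:679, assuming it is
    mean(eig^2) / mean(eig)^2 of the per-group covariance of x, whose
    last dim is num_channels.  Returns 1.0 for white features.
    """
    num_channels = x.shape[-1]
    assert num_channels % num_groups == 0
    x = x.reshape(-1, num_groups, num_channels // num_groups)
    x = x - x.mean(dim=0, keepdim=True)
    # per-group covariance, shape (num_groups, c, c)
    cov = torch.einsum("ngi,ngj->gij", x, x) / x.shape[0]
    c = cov.shape[-1]
    # trace(C)/c = mean eigenvalue; trace(C @ C)/c = mean squared eigenvalue
    mean_eig = cov.diagonal(dim1=-2, dim2=-1).sum(-1) / c
    mean_eig_sq = (cov * cov).sum(dim=(-2, -1)) / c
    return (mean_eig_sq / mean_eig**2).mean()
```

When the metric exceeds the limit, the training code presumably applies a corrective gradient; the log entries here simply record metric-vs-limit checks.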
2023-03-27 00:09:22,343 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.4841, 3.0489, 2.7656, 1.5040, 3.0035, 2.5694, 2.2242, 2.8460], device='cuda:2'), covar=tensor([0.0846, 0.0809, 0.1595, 0.2070, 0.1442, 0.1689, 0.1853, 0.0979], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0191, 0.0199, 0.0181, 0.0208, 0.0206, 0.0221, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:09:24,056 INFO [finetune.py:976] (2/7) Epoch 20, batch 1700, loss[loss=0.1702, simple_loss=0.2439, pruned_loss=0.04824, over 4855.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2441, pruned_loss=0.05247, over 956431.15 frames. ], batch size: 49, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:09:40,335 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8399, 4.0437, 3.8296, 1.9413, 4.2441, 3.0710, 0.8947, 2.7181], device='cuda:2'), covar=tensor([0.2036, 0.1687, 0.1370, 0.3185, 0.0928, 0.0959, 0.4332, 0.1429], device='cuda:2'), in_proj_covar=tensor([0.0154, 0.0178, 0.0161, 0.0131, 0.0163, 0.0124, 0.0147, 0.0125], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-27 00:09:42,518 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.071e+02 1.500e+02 1.773e+02 2.246e+02 3.830e+02, threshold=3.546e+02, percent-clipped=2.0
2023-03-27 00:09:56,505 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4070, 1.3039, 1.8989, 1.5809, 1.4897, 3.2321, 1.2729, 1.4397], device='cuda:2'), covar=tensor([0.1041, 0.1821, 0.1347, 0.1085, 0.1596, 0.0280, 0.1579, 0.1838], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0075, 0.0077, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 00:09:57,622 INFO [finetune.py:976] (2/7) Epoch 20, batch 1750, loss[loss=0.1985, simple_loss=0.2757, pruned_loss=0.06061, over 4852.00 frames. ], tot_loss[loss=0.1775, simple_loss=0.2473, pruned_loss=0.05387, over 958404.94 frames. ], batch size: 44, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:10:20,386 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110611.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:10:30,986 INFO [finetune.py:976] (2/7) Epoch 20, batch 1800, loss[loss=0.1658, simple_loss=0.2347, pruned_loss=0.04849, over 4744.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2493, pruned_loss=0.05379, over 959628.16 frames. ], batch size: 28, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:10:31,091 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4938, 1.4222, 1.3464, 1.4174, 0.9267, 2.8783, 1.0841, 1.5028], device='cuda:2'), covar=tensor([0.3118, 0.2411, 0.2170, 0.2284, 0.1861, 0.0260, 0.2831, 0.1267], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0115, 0.0119, 0.0122, 0.0113, 0.0095, 0.0095, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 00:10:38,905 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=110639.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:10:46,701 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.5330, 1.7029, 1.8059, 0.9954, 1.8178, 2.0812, 2.0213, 1.5706], device='cuda:2'), covar=tensor([0.1095, 0.0892, 0.0582, 0.0606, 0.0517, 0.0766, 0.0393, 0.0790], device='cuda:2'), in_proj_covar=tensor([0.0124, 0.0150, 0.0125, 0.0125, 0.0131, 0.0129, 0.0142, 0.0148], device='cuda:2'), out_proj_covar=tensor([9.0840e-05, 1.0880e-04, 8.9390e-05, 8.8321e-05, 9.2368e-05, 9.2051e-05, 1.0174e-04, 1.0587e-04], device='cuda:2')
2023-03-27 00:10:55,666 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.008e+02 1.630e+02 1.887e+02 2.220e+02 3.285e+02, threshold=3.774e+02, percent-clipped=0.0
2023-03-27 00:11:01,586 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=110659.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:11:11,051 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=110672.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:11:11,626 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=110673.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:11:12,218 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=110674.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:11:14,002 INFO [finetune.py:976] (2/7) Epoch 20, batch 1850, loss[loss=0.2034, simple_loss=0.2664, pruned_loss=0.07022, over 4819.00 frames. ], tot_loss[loss=0.1802, simple_loss=0.2513, pruned_loss=0.05457, over 959320.76 frames. ], batch size: 33, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:11:29,614 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=110700.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:11:39,750 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9423, 1.6029, 2.4105, 1.5444, 1.8816, 2.2084, 1.5555, 2.2964], device='cuda:2'), covar=tensor([0.1435, 0.2227, 0.1436, 0.2074, 0.1123, 0.1507, 0.2964, 0.0988], device='cuda:2'), in_proj_covar=tensor([0.0191, 0.0204, 0.0190, 0.0189, 0.0174, 0.0213, 0.0218, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:11:40,396 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.64 vs. limit=2.0
2023-03-27 00:11:44,150 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=110721.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:11:47,773 INFO [finetune.py:976] (2/7) Epoch 20, batch 1900, loss[loss=0.1854, simple_loss=0.2544, pruned_loss=0.05818, over 4724.00 frames. ], tot_loss[loss=0.1806, simple_loss=0.2522, pruned_loss=0.05455, over 958969.51 frames. ], batch size: 54, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:11:54,342 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110736.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:12:01,609 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=110748.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:12:03,680 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.44 vs. limit=2.0
2023-03-27 00:12:04,074 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8072, 1.3768, 2.4354, 3.6858, 2.2887, 2.5234, 1.3945, 2.9708], device='cuda:2'), covar=tensor([0.1764, 0.1662, 0.1267, 0.0531, 0.0883, 0.1442, 0.1653, 0.0494], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0165, 0.0101, 0.0136, 0.0125, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-27 00:12:06,288 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.024e+02 1.472e+02 1.826e+02 2.198e+02 3.929e+02, threshold=3.651e+02, percent-clipped=1.0
2023-03-27 00:12:11,151 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5573, 2.7520, 2.4056, 1.7355, 2.5007, 2.7359, 2.9177, 2.4508], device='cuda:2'), covar=tensor([0.0585, 0.0543, 0.0729, 0.0880, 0.0897, 0.0685, 0.0530, 0.0888], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0135, 0.0140, 0.0120, 0.0125, 0.0139, 0.0139, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:12:14,049 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=110766.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:12:17,585 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=110771.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:12:21,622 INFO [finetune.py:976] (2/7) Epoch 20, batch 1950, loss[loss=0.158, simple_loss=0.2335, pruned_loss=0.04129, over 4827.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.2505, pruned_loss=0.05353, over 958741.41 frames. ], batch size: 30, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:12:34,798 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=110797.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 00:12:46,182 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=110814.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:12:49,729 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=110819.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:12:55,563 INFO [finetune.py:976] (2/7) Epoch 20, batch 2000, loss[loss=0.1896, simple_loss=0.2615, pruned_loss=0.05886, over 3993.00 frames. ], tot_loss[loss=0.1776, simple_loss=0.2485, pruned_loss=0.05332, over 958448.82 frames. ], batch size: 17, lr: 3.25e-03, grad_scale: 32.0
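Each [finetune.py:976] line reports both a per-batch loss[... over a few thousand frames] and a tot_loss[... over roughly 950,000 frames]; the tot_loss is evidently a frames-weighted running average over recent batches, which is why it moves slowly while the per-batch values jump around. A minimal sketch of such a tracker (names and the decay factor are assumptions, not icefall's actual MetricsTracker):

```python
class RunningLoss:
    """Frames-weighted running average of a loss, with exponential decay.

    Sketch of how a slowly-moving tot_loss like the one in this log can
    be maintained; the decay factor here is a guess.
    """

    def __init__(self, decay: float = 0.98):
        self.decay = decay
        self.loss_sum = 0.0  # decayed sum of loss * frames
        self.frames = 0.0    # decayed sum of frames

    def update(self, loss: float, num_frames: float) -> float:
        self.loss_sum = self.decay * self.loss_sum + loss * num_frames
        self.frames = self.decay * self.frames + num_frames
        return self.loss_sum / self.frames  # current tot_loss
```

With decay=0.98 the effective window is on the order of 50 batches of a few thousand frames each, which is consistent in magnitude with the ~950k-frame denominators printed here.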
2023-03-27 00:12:56,905 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6758, 1.6017, 1.5081, 1.6191, 1.4275, 3.7361, 1.5960, 2.0317], device='cuda:2'), covar=tensor([0.3268, 0.2370, 0.2150, 0.2378, 0.1578, 0.0178, 0.2477, 0.1157], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0115, 0.0119, 0.0123, 0.0113, 0.0096, 0.0095, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 00:13:17,583 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.033e+02 1.482e+02 1.740e+02 2.016e+02 2.901e+02, threshold=3.480e+02, percent-clipped=0.0
2023-03-27 00:13:30,406 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110868.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:13:38,675 INFO [finetune.py:976] (2/7) Epoch 20, batch 2050, loss[loss=0.1663, simple_loss=0.2453, pruned_loss=0.04364, over 4905.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.2454, pruned_loss=0.05265, over 956644.23 frames. ], batch size: 35, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:13:44,552 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4348, 1.3243, 1.7181, 2.4985, 1.6212, 2.2736, 0.8256, 2.1666], device='cuda:2'), covar=tensor([0.1846, 0.1449, 0.1216, 0.0723, 0.0989, 0.1061, 0.1690, 0.0602], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0116, 0.0134, 0.0164, 0.0100, 0.0136, 0.0124, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-27 00:13:45,790 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6768, 1.4284, 2.0095, 1.2572, 1.6579, 1.9069, 1.4124, 2.0492], device='cuda:2'), covar=tensor([0.1165, 0.2216, 0.1071, 0.1593, 0.0906, 0.1219, 0.2878, 0.0723], device='cuda:2'), in_proj_covar=tensor([0.0191, 0.0203, 0.0190, 0.0189, 0.0173, 0.0212, 0.0217, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:14:28,244 INFO [finetune.py:976] (2/7) Epoch 20, batch 2100, loss[loss=0.153, simple_loss=0.2231, pruned_loss=0.04141, over 4783.00 frames. ], tot_loss[loss=0.1761, simple_loss=0.2458, pruned_loss=0.05326, over 958428.83 frames. ], batch size: 26, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:14:29,624 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=110929.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:14:37,082 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-27 00:14:38,175 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9451, 1.7049, 2.0778, 1.3430, 1.9138, 2.2363, 1.5657, 2.3831], device='cuda:2'), covar=tensor([0.1230, 0.2128, 0.1414, 0.1994, 0.0899, 0.1258, 0.3210, 0.0755], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0205, 0.0191, 0.0190, 0.0174, 0.0214, 0.0218, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:14:39,968 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=110939.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:14:50,630 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.458e+01 1.520e+02 1.862e+02 2.217e+02 3.516e+02, threshold=3.725e+02, percent-clipped=1.0
2023-03-27 00:14:52,589 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110958.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:14:53,183 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=110959.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:14:58,414 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=110967.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:15:03,723 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=110974.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:15:05,047 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0
2023-03-27 00:15:05,921 INFO [finetune.py:976] (2/7) Epoch 20, batch 2150, loss[loss=0.2171, simple_loss=0.2934, pruned_loss=0.07038, over 4829.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.2484, pruned_loss=0.05387, over 956198.64 frames. ], batch size: 47, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:15:12,589 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=110987.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:15:15,644 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1215, 1.1870, 1.3302, 1.3631, 1.3763, 2.4337, 1.1370, 1.3485], device='cuda:2'), covar=tensor([0.1087, 0.1978, 0.1195, 0.1028, 0.1707, 0.0380, 0.1713, 0.1961], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0077, 0.0091, 0.0081, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 00:15:25,801 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=111007.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:15:33,627 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=111019.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:15:35,882 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=111022.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:15:38,858 INFO [finetune.py:976] (2/7) Epoch 20, batch 2200, loss[loss=0.1429, simple_loss=0.2219, pruned_loss=0.03197, over 4834.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2499, pruned_loss=0.05406, over 955605.50 frames. ], batch size: 25, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:16:00,233 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.032e+02 1.523e+02 1.854e+02 2.195e+02 4.707e+02, threshold=3.708e+02, percent-clipped=2.0
2023-03-27 00:16:02,346 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-27 00:16:21,550 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0
2023-03-27 00:16:23,018 INFO [finetune.py:976] (2/7) Epoch 20, batch 2250, loss[loss=0.15, simple_loss=0.2389, pruned_loss=0.03057, over 4830.00 frames. ], tot_loss[loss=0.18, simple_loss=0.2512, pruned_loss=0.05436, over 956851.77 frames. ], batch size: 44, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:16:32,553 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4546, 2.0563, 1.7373, 1.6468, 2.7216, 3.0252, 2.4230, 2.1849], device='cuda:2'), covar=tensor([0.0299, 0.0390, 0.0764, 0.0439, 0.0229, 0.0333, 0.0279, 0.0385], device='cuda:2'), in_proj_covar=tensor([0.0097, 0.0107, 0.0144, 0.0111, 0.0100, 0.0111, 0.0099, 0.0111], device='cuda:2'), out_proj_covar=tensor([7.5336e-05, 8.2390e-05, 1.1354e-04, 8.5325e-05, 7.7626e-05, 8.2026e-05, 7.4051e-05, 8.5152e-05], device='cuda:2')
2023-03-27 00:16:33,690 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=111092.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 00:16:56,459 INFO [finetune.py:976] (2/7) Epoch 20, batch 2300, loss[loss=0.1605, simple_loss=0.2387, pruned_loss=0.04112, over 4756.00 frames. ], tot_loss[loss=0.1791, simple_loss=0.2506, pruned_loss=0.05376, over 954014.56 frames. ], batch size: 27, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:17:15,905 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.510e+02 1.791e+02 2.077e+02 4.254e+02, threshold=3.582e+02, percent-clipped=2.0
2023-03-27 00:17:22,016 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1137, 1.8617, 2.2696, 2.1415, 1.8956, 1.9426, 2.1356, 2.0168], device='cuda:2'), covar=tensor([0.4161, 0.4221, 0.3280, 0.4142, 0.5229, 0.4253, 0.5025, 0.3185], device='cuda:2'), in_proj_covar=tensor([0.0253, 0.0242, 0.0263, 0.0282, 0.0280, 0.0254, 0.0290, 0.0244], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:17:25,027 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7629, 1.7829, 1.5945, 1.5136, 2.3144, 2.4263, 1.9528, 1.8596], device='cuda:2'), covar=tensor([0.0520, 0.0494, 0.0645, 0.0446, 0.0313, 0.0552, 0.0405, 0.0510], device='cuda:2'), in_proj_covar=tensor([0.0097, 0.0107, 0.0145, 0.0111, 0.0100, 0.0111, 0.0100, 0.0112], device='cuda:2'), out_proj_covar=tensor([7.5405e-05, 8.2534e-05, 1.1392e-04, 8.5532e-05, 7.7792e-05, 8.2114e-05, 7.4244e-05, 8.5464e-05], device='cuda:2')
2023-03-27 00:17:30,234 INFO [finetune.py:976] (2/7) Epoch 20, batch 2350, loss[loss=0.1697, simple_loss=0.2423, pruned_loss=0.0486, over 4777.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.2476, pruned_loss=0.05284, over 954679.37 frames. ], batch size: 28, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:18:01,644 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=111224.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:18:02,316 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=111225.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:18:03,427 INFO [finetune.py:976] (2/7) Epoch 20, batch 2400, loss[loss=0.1746, simple_loss=0.2489, pruned_loss=0.05013, over 4833.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2457, pruned_loss=0.05237, over 954705.53 frames. ], batch size: 30, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:18:22,740 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.826e+01 1.548e+02 1.775e+02 2.063e+02 3.363e+02, threshold=3.550e+02, percent-clipped=0.0
2023-03-27 00:18:30,978 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=111267.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:18:36,920 INFO [finetune.py:976] (2/7) Epoch 20, batch 2450, loss[loss=0.1983, simple_loss=0.2741, pruned_loss=0.06128, over 4834.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2438, pruned_loss=0.0521, over 955657.82 frames. ], batch size: 39, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:18:50,053 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=111286.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:19:21,759 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=111314.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:19:22,351 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=111315.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:19:33,325 INFO [finetune.py:976] (2/7) Epoch 20, batch 2500, loss[loss=0.2252, simple_loss=0.2988, pruned_loss=0.07585, over 4805.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2446, pruned_loss=0.05288, over 954346.97 frames. ], batch size: 45, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:20:03,161 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.143e+02 1.591e+02 1.841e+02 2.119e+02 4.112e+02, threshold=3.683e+02, percent-clipped=1.0
2023-03-27 00:20:17,414 INFO [finetune.py:976] (2/7) Epoch 20, batch 2550, loss[loss=0.2796, simple_loss=0.3204, pruned_loss=0.1194, over 4101.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2491, pruned_loss=0.05443, over 953795.97 frames. ], batch size: 65, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:20:27,539 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=111392.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 00:20:51,266 INFO [finetune.py:976] (2/7) Epoch 20, batch 2600, loss[loss=0.1812, simple_loss=0.2499, pruned_loss=0.05623, over 4186.00 frames. ], tot_loss[loss=0.1805, simple_loss=0.251, pruned_loss=0.05498, over 953662.95 frames. ], batch size: 65, lr: 3.24e-03, grad_scale: 64.0
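Note the grad_scale field doubling from 32.0 to 64.0 at batch 2600 (and dropping back to 32.0 around batch 3050 further below): this is the dynamic loss scaling of mixed-precision (fp16) training. PyTorch's GradScaler grows the scale after a run of overflow-free steps and backs it off when a step produces inf/nan gradients. A sketch of the standard usage; the constructor arguments shown are assumptions chosen to mirror this log (PyTorch's defaults differ, e.g. init_scale=65536):

```python
import torch

scaler = torch.cuda.amp.GradScaler(
    init_scale=32.0,     # assumption: matches the first grad_scale seen here
    growth_factor=2.0,   # double after growth_interval clean steps: 32 -> 64
    backoff_factor=0.5,  # halve on overflow: 64 -> 32, as near batch 3050
    growth_interval=2000,
)


def train_step(model, optimizer, batch, criterion):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = criterion(model(batch["inputs"]), batch["targets"])
    scaler.scale(loss).backward()  # backprop on the scaled loss
    scaler.step(optimizer)         # unscales grads; skips step on inf/nan
    scaler.update()                # grow or back off the scale
    return loss.detach()
```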
2023-03-27 00:20:54,465 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0269, 1.8625, 1.6003, 1.6491, 1.7972, 1.7787, 1.8696, 2.5143], device='cuda:2'), covar=tensor([0.3562, 0.3768, 0.3006, 0.3229, 0.3656, 0.2217, 0.3273, 0.1592], device='cuda:2'), in_proj_covar=tensor([0.0284, 0.0259, 0.0228, 0.0273, 0.0248, 0.0219, 0.0249, 0.0230], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:20:59,724 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=111440.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:21:01,021 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4702, 2.4260, 2.0195, 2.5361, 3.1126, 2.4282, 2.4882, 1.9727], device='cuda:2'), covar=tensor([0.2106, 0.1762, 0.1906, 0.1612, 0.1535, 0.1063, 0.1811, 0.1870], device='cuda:2'), in_proj_covar=tensor([0.0245, 0.0211, 0.0213, 0.0194, 0.0244, 0.0189, 0.0218, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:21:05,245 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0
2023-03-27 00:21:10,161 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.233e+02 1.591e+02 1.878e+02 2.220e+02 5.233e+02, threshold=3.757e+02, percent-clipped=1.0
2023-03-27 00:21:13,809 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4566, 1.2850, 1.8548, 1.7879, 1.4853, 3.3866, 1.0951, 1.4143], device='cuda:2'), covar=tensor([0.1258, 0.2539, 0.1325, 0.1262, 0.2097, 0.0284, 0.2296, 0.2632], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0082, 0.0075, 0.0077, 0.0091, 0.0081, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 00:21:20,888 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=111468.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:21:31,397 INFO [finetune.py:976] (2/7) Epoch 20, batch 2650, loss[loss=0.2015, simple_loss=0.2779, pruned_loss=0.06256, over 4822.00 frames. ], tot_loss[loss=0.1811, simple_loss=0.2519, pruned_loss=0.05516, over 955021.78 frames. ], batch size: 39, lr: 3.24e-03, grad_scale: 64.0
2023-03-27 00:22:06,260 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=111524.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:22:07,977 INFO [finetune.py:976] (2/7) Epoch 20, batch 2700, loss[loss=0.1608, simple_loss=0.2352, pruned_loss=0.04322, over 4809.00 frames. ], tot_loss[loss=0.1803, simple_loss=0.2511, pruned_loss=0.05477, over 954845.02 frames. ], batch size: 51, lr: 3.24e-03, grad_scale: 64.0
2023-03-27 00:22:10,259 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=111529.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:22:15,687 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7107, 1.5502, 1.4712, 1.5583, 1.8972, 1.8368, 1.6321, 1.4501], device='cuda:2'), covar=tensor([0.0357, 0.0298, 0.0558, 0.0313, 0.0226, 0.0383, 0.0306, 0.0423], device='cuda:2'), in_proj_covar=tensor([0.0097, 0.0107, 0.0145, 0.0111, 0.0100, 0.0111, 0.0100, 0.0112], device='cuda:2'), out_proj_covar=tensor([7.5189e-05, 8.2610e-05, 1.1389e-04, 8.5369e-05, 7.7862e-05, 8.2049e-05, 7.4458e-05, 8.5576e-05], device='cuda:2')
2023-03-27 00:22:27,297 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.466e+01 1.581e+02 1.832e+02 2.307e+02 3.346e+02, threshold=3.664e+02, percent-clipped=0.0
2023-03-27 00:22:27,963 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8065, 3.8891, 3.5550, 1.6941, 4.0085, 2.9820, 0.9115, 2.6965], device='cuda:2'), covar=tensor([0.2297, 0.1873, 0.1614, 0.3821, 0.0983, 0.1096, 0.4539, 0.1613], device='cuda:2'), in_proj_covar=tensor([0.0154, 0.0179, 0.0162, 0.0132, 0.0163, 0.0125, 0.0149, 0.0126], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-27 00:22:38,121 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=111572.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:22:41,150 INFO [finetune.py:976] (2/7) Epoch 20, batch 2750, loss[loss=0.1528, simple_loss=0.2208, pruned_loss=0.04239, over 4749.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.2473, pruned_loss=0.05346, over 953347.91 frames. ], batch size: 27, lr: 3.24e-03, grad_scale: 64.0
2023-03-27 00:22:44,077 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=111581.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:22:47,142 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.0738, 0.9624, 0.9191, 0.3831, 0.8967, 1.1255, 1.1609, 0.9437], device='cuda:2'), covar=tensor([0.0710, 0.0543, 0.0535, 0.0468, 0.0536, 0.0609, 0.0388, 0.0615], device='cuda:2'), in_proj_covar=tensor([0.0125, 0.0151, 0.0125, 0.0125, 0.0131, 0.0129, 0.0141, 0.0148], device='cuda:2'), out_proj_covar=tensor([9.1221e-05, 1.0931e-04, 8.9378e-05, 8.8117e-05, 9.2057e-05, 9.2622e-05, 1.0146e-04, 1.0625e-04], device='cuda:2')
2023-03-27 00:22:48,356 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0333, 1.9130, 1.7600, 2.1682, 2.3399, 2.1412, 1.7502, 1.7179], device='cuda:2'), covar=tensor([0.2352, 0.1987, 0.2007, 0.1570, 0.1605, 0.1125, 0.2277, 0.2019], device='cuda:2'), in_proj_covar=tensor([0.0245, 0.0210, 0.0212, 0.0194, 0.0243, 0.0188, 0.0217, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:23:01,015 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0
2023-03-27 00:23:06,494 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=111614.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:23:06,652 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-27 00:23:14,366 INFO [finetune.py:976] (2/7) Epoch 20, batch 2800, loss[loss=0.1611, simple_loss=0.2258, pruned_loss=0.0482, over 4910.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.2447, pruned_loss=0.05324, over 951956.30 frames. ], batch size: 43, lr: 3.24e-03, grad_scale: 64.0
2023-03-27 00:23:32,758 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.029e+01 1.565e+02 1.748e+02 2.177e+02 3.583e+02, threshold=3.496e+02, percent-clipped=0.0
2023-03-27 00:23:38,054 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=111662.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:23:48,042 INFO [finetune.py:976] (2/7) Epoch 20, batch 2850, loss[loss=0.1603, simple_loss=0.2458, pruned_loss=0.03745, over 4763.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2422, pruned_loss=0.05189, over 951053.73 frames. ], batch size: 28, lr: 3.24e-03, grad_scale: 64.0
2023-03-27 00:24:38,638 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.0425, 3.4603, 3.7342, 3.8270, 3.8442, 3.6070, 4.1140, 1.7658], device='cuda:2'), covar=tensor([0.0763, 0.0793, 0.0719, 0.1014, 0.1057, 0.1264, 0.0686, 0.4988], device='cuda:2'), in_proj_covar=tensor([0.0351, 0.0245, 0.0278, 0.0293, 0.0334, 0.0283, 0.0304, 0.0299], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:24:41,627 INFO [finetune.py:976] (2/7) Epoch 20, batch 2900, loss[loss=0.1213, simple_loss=0.182, pruned_loss=0.03033, over 4167.00 frames. ], tot_loss[loss=0.176, simple_loss=0.2451, pruned_loss=0.05347, over 951492.61 frames. ], batch size: 17, lr: 3.24e-03, grad_scale: 64.0
2023-03-27 00:25:12,981 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.101e+02 1.542e+02 1.820e+02 2.254e+02 5.949e+02, threshold=3.641e+02, percent-clipped=1.0
2023-03-27 00:25:19,938 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.91 vs. limit=5.0
2023-03-27 00:25:31,431 INFO [finetune.py:976] (2/7) Epoch 20, batch 2950, loss[loss=0.162, simple_loss=0.2345, pruned_loss=0.04477, over 4788.00 frames. ], tot_loss[loss=0.1778, simple_loss=0.2478, pruned_loss=0.05395, over 953266.78 frames. ], batch size: 29, lr: 3.24e-03, grad_scale: 64.0
2023-03-27 00:25:40,067 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=111791.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 00:25:49,643 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0
2023-03-27 00:26:03,396 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=111824.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:26:05,122 INFO [finetune.py:976] (2/7) Epoch 20, batch 3000, loss[loss=0.2137, simple_loss=0.2778, pruned_loss=0.07484, over 4780.00 frames. ], tot_loss[loss=0.1805, simple_loss=0.2504, pruned_loss=0.05528, over 951280.99 frames. ], batch size: 51, lr: 3.24e-03, grad_scale: 64.0
2023-03-27 00:26:05,122 INFO [finetune.py:1001] (2/7) Computing validation loss
2023-03-27 00:26:06,818 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3092, 1.2679, 1.1980, 1.2918, 1.5422, 1.5095, 1.3365, 1.2179], device='cuda:2'), covar=tensor([0.0491, 0.0273, 0.0622, 0.0300, 0.0235, 0.0394, 0.0298, 0.0351], device='cuda:2'), in_proj_covar=tensor([0.0097, 0.0107, 0.0145, 0.0111, 0.0100, 0.0111, 0.0100, 0.0112], device='cuda:2'), out_proj_covar=tensor([7.4838e-05, 8.2350e-05, 1.1394e-04, 8.5056e-05, 7.7760e-05, 8.1940e-05, 7.4232e-05, 8.5598e-05], device='cuda:2')
2023-03-27 00:26:07,991 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.8483, 3.4548, 3.6064, 3.7517, 3.6122, 3.5123, 3.8935, 1.4707], device='cuda:2'), covar=tensor([0.0794, 0.0816, 0.0876, 0.0929, 0.1242, 0.1363, 0.0736, 0.4732], device='cuda:2'), in_proj_covar=tensor([0.0350, 0.0245, 0.0278, 0.0292, 0.0333, 0.0283, 0.0304, 0.0299], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:26:08,776 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3729, 1.3254, 1.2402, 1.3837, 1.6598, 1.5687, 1.3896, 1.2520], device='cuda:2'), covar=tensor([0.0424, 0.0305, 0.0668, 0.0293, 0.0239, 0.0357, 0.0335, 0.0404], device='cuda:2'), in_proj_covar=tensor([0.0097, 0.0107, 0.0145, 0.0111, 0.0100, 0.0111, 0.0100, 0.0112], device='cuda:2'), out_proj_covar=tensor([7.4838e-05, 8.2350e-05, 1.1394e-04, 8.5056e-05, 7.7760e-05, 8.1940e-05, 7.4232e-05, 8.5598e-05], device='cuda:2')
2023-03-27 00:26:20,305 INFO [finetune.py:1010] (2/7) Epoch 20, validation: loss=0.1563, simple_loss=0.2257, pruned_loss=0.04344, over 2265189.00 frames.
2023-03-27 00:26:20,305 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB
2023-03-27 00:26:29,876 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4473, 1.4601, 1.8757, 1.8088, 1.5146, 3.1817, 1.3953, 1.5230], device='cuda:2'), covar=tensor([0.1009, 0.1763, 0.1187, 0.0915, 0.1557, 0.0297, 0.1402, 0.1753], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0082, 0.0074, 0.0077, 0.0092, 0.0081, 0.0085, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 00:26:40,767 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=111852.0, num_to_drop=1, layers_to_drop={3}
2023-03-27 00:26:46,890 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8165, 2.5180, 2.4080, 1.3139, 2.5550, 2.0189, 1.9664, 2.3171], device='cuda:2'), covar=tensor([0.1108, 0.0840, 0.1821, 0.2131, 0.1573, 0.2258, 0.2112, 0.1266], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0193, 0.0200, 0.0183, 0.0212, 0.0207, 0.0223, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:26:48,435 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.202e+02 1.618e+02 1.875e+02 2.230e+02 3.575e+02, threshold=3.749e+02, percent-clipped=0.0
2023-03-27 00:26:57,653 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3818, 1.4620, 1.7378, 1.7194, 1.5140, 3.2325, 1.4287, 1.4690], device='cuda:2'), covar=tensor([0.1032, 0.1760, 0.1094, 0.0945, 0.1662, 0.0243, 0.1445, 0.1829], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0082, 0.0074, 0.0077, 0.0092, 0.0081, 0.0085, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
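The [zipformer.py:2441] lines dump attn_weights_entropy, one value per attention head, along with variance statistics of the projections. The per-head value is presumably the Shannon entropy of each head's attention distribution averaged over query positions: near 0 when a head locks onto a single frame, up to log(num_keys) when it attends uniformly, which makes it a cheap collapse/diffuseness diagnostic. A sketch under that assumption:

```python
import torch


def attn_weights_entropy(attn_weights: torch.Tensor) -> torch.Tensor:
    """Per-head entropy of attention distributions.

    Sketch of the diagnostic printed at zipformer.py:2441, assuming
    attn_weights has shape (num_heads, num_queries, num_keys) with each
    row summing to 1.  Returns one value per head, averaged over
    queries; 0 = focused on one key, log(num_keys) = uniform.
    """
    eps = 1.0e-20  # avoid log(0) for exactly-zero weights
    entropy = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
    return entropy.mean(dim=-1)
```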
2023-03-27 00:27:11,150 INFO [finetune.py:976] (2/7) Epoch 20, batch 3050, loss[loss=0.1974, simple_loss=0.2603, pruned_loss=0.06726, over 4909.00 frames. ], tot_loss[loss=0.1808, simple_loss=0.2516, pruned_loss=0.05501, over 949829.64 frames. ], batch size: 37, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:27:18,725 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=111881.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:27:46,884 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8390, 1.2647, 1.9099, 1.8417, 1.6770, 1.6288, 1.8256, 1.7550], device='cuda:2'), covar=tensor([0.4054, 0.3790, 0.3133, 0.3388, 0.4397, 0.3595, 0.4218, 0.3042], device='cuda:2'), in_proj_covar=tensor([0.0252, 0.0241, 0.0262, 0.0280, 0.0279, 0.0253, 0.0289, 0.0243], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:28:06,554 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.8367, 3.2961, 3.5416, 3.7293, 3.6050, 3.3542, 3.9104, 1.2483], device='cuda:2'), covar=tensor([0.0858, 0.0927, 0.0924, 0.1017, 0.1287, 0.1640, 0.0776, 0.5702], device='cuda:2'), in_proj_covar=tensor([0.0351, 0.0244, 0.0278, 0.0292, 0.0334, 0.0283, 0.0304, 0.0298], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:28:08,474 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0
2023-03-27 00:28:14,421 INFO [finetune.py:976] (2/7) Epoch 20, batch 3100, loss[loss=0.1944, simple_loss=0.2574, pruned_loss=0.06575, over 4818.00 frames. ], tot_loss[loss=0.1801, simple_loss=0.2507, pruned_loss=0.05475, over 952450.42 frames. ], batch size: 30, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:28:15,706 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=111929.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:28:17,594 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.0617, 0.9702, 1.0197, 0.4316, 0.8666, 1.1814, 1.1687, 1.0169], device='cuda:2'), covar=tensor([0.0771, 0.0549, 0.0499, 0.0504, 0.0481, 0.0568, 0.0386, 0.0666], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0150, 0.0124, 0.0124, 0.0130, 0.0128, 0.0140, 0.0147], device='cuda:2'), out_proj_covar=tensor([9.0354e-05, 1.0831e-04, 8.8573e-05, 8.7698e-05, 9.1283e-05, 9.1835e-05, 1.0071e-04, 1.0540e-04], device='cuda:2')
2023-03-27 00:28:33,001 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.135e+02 1.507e+02 1.737e+02 2.301e+02 5.151e+02, threshold=3.474e+02, percent-clipped=3.0
2023-03-27 00:28:47,148 INFO [finetune.py:976] (2/7) Epoch 20, batch 3150, loss[loss=0.1497, simple_loss=0.213, pruned_loss=0.04322, over 3994.00 frames. ], tot_loss[loss=0.1789, simple_loss=0.2489, pruned_loss=0.05444, over 951861.87 frames. ], batch size: 17, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:29:23,070 INFO [finetune.py:976] (2/7) Epoch 20, batch 3200, loss[loss=0.1697, simple_loss=0.2391, pruned_loss=0.05011, over 4861.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.2448, pruned_loss=0.05319, over 952525.02 frames. ], batch size: 44, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:29:43,187 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0013, 1.8220, 1.7364, 2.0919, 2.4565, 2.1424, 1.7725, 1.6264], device='cuda:2'), covar=tensor([0.2389, 0.2018, 0.2064, 0.1856, 0.1624, 0.1189, 0.2416, 0.2130], device='cuda:2'), in_proj_covar=tensor([0.0245, 0.0211, 0.0213, 0.0194, 0.0244, 0.0189, 0.0218, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:29:52,242 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1910, 1.9912, 1.8476, 2.2382, 2.6435, 2.1595, 2.0388, 1.6466], device='cuda:2'), covar=tensor([0.1969, 0.1878, 0.1823, 0.1605, 0.1748, 0.1152, 0.2013, 0.1901], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0211, 0.0213, 0.0194, 0.0244, 0.0189, 0.0218, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:29:54,668 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3221, 2.1565, 1.9915, 2.0397, 2.0475, 2.1306, 2.1572, 2.7611], device='cuda:2'), covar=tensor([0.3496, 0.3968, 0.2879, 0.3663, 0.3782, 0.2417, 0.3433, 0.1708], device='cuda:2'), in_proj_covar=tensor([0.0285, 0.0261, 0.0229, 0.0275, 0.0250, 0.0221, 0.0250, 0.0231], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:29:56,972 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.705e+01 1.565e+02 1.739e+02 2.228e+02 3.922e+02, threshold=3.479e+02, percent-clipped=1.0
2023-03-27 00:30:06,299 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=112066.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:30:13,834 INFO [finetune.py:976] (2/7) Epoch 20, batch 3250, loss[loss=0.2059, simple_loss=0.2756, pruned_loss=0.0681, over 4865.00 frames. ], tot_loss[loss=0.1751, simple_loss=0.2443, pruned_loss=0.05297, over 950692.38 frames. ], batch size: 31, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:30:34,846 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5649, 1.7151, 1.5172, 1.5115, 2.0924, 2.0101, 1.7672, 1.7331], device='cuda:2'), covar=tensor([0.0542, 0.0409, 0.0566, 0.0363, 0.0290, 0.0613, 0.0475, 0.0443], device='cuda:2'), in_proj_covar=tensor([0.0097, 0.0107, 0.0144, 0.0111, 0.0100, 0.0111, 0.0099, 0.0112], device='cuda:2'), out_proj_covar=tensor([7.4803e-05, 8.2225e-05, 1.1322e-04, 8.5039e-05, 7.7596e-05, 8.1964e-05, 7.4073e-05, 8.5440e-05], device='cuda:2')
2023-03-27 00:30:54,511 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=112124.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:30:55,166 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4855, 2.3833, 2.0073, 2.5071, 2.2736, 2.3041, 2.2590, 3.2002], device='cuda:2'), covar=tensor([0.3850, 0.4826, 0.3570, 0.4394, 0.4457, 0.2724, 0.4704, 0.1752], device='cuda:2'), in_proj_covar=tensor([0.0284, 0.0260, 0.0229, 0.0275, 0.0250, 0.0220, 0.0250, 0.0231], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:30:56,192 INFO [finetune.py:976] (2/7) Epoch 20, batch 3300, loss[loss=0.2212, simple_loss=0.2929, pruned_loss=0.07476, over 4803.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.2481, pruned_loss=0.05409, over 951884.02 frames. ], batch size: 51, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:30:56,312 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=112127.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:31:09,776 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=112147.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 00:31:12,069 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0
2023-03-27 00:31:16,121 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.319e+01 1.653e+02 1.982e+02 2.472e+02 3.934e+02, threshold=3.965e+02, percent-clipped=2.0
2023-03-27 00:31:26,940 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=112172.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:31:29,977 INFO [finetune.py:976] (2/7) Epoch 20, batch 3350, loss[loss=0.157, simple_loss=0.239, pruned_loss=0.03744, over 4882.00 frames. ], tot_loss[loss=0.1805, simple_loss=0.2509, pruned_loss=0.05505, over 952303.21 frames. ], batch size: 35, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:32:11,815 INFO [finetune.py:976] (2/7) Epoch 20, batch 3400, loss[loss=0.1965, simple_loss=0.2402, pruned_loss=0.07637, over 4042.00 frames. ], tot_loss[loss=0.1815, simple_loss=0.2518, pruned_loss=0.05562, over 950372.96 frames. ], batch size: 17, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:32:28,437 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7339, 2.4484, 2.4539, 1.5525, 2.5634, 2.0599, 1.9558, 2.3609], device='cuda:2'), covar=tensor([0.0995, 0.0680, 0.1597, 0.1688, 0.1277, 0.1728, 0.1694, 0.0833], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0192, 0.0200, 0.0182, 0.0211, 0.0207, 0.0223, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:32:29,022 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1615, 1.9340, 2.5652, 4.0530, 2.7743, 2.6597, 0.7295, 3.3493], device='cuda:2'), covar=tensor([0.1626, 0.1301, 0.1280, 0.0487, 0.0722, 0.1616, 0.2085, 0.0337], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0133, 0.0163, 0.0100, 0.0134, 0.0123, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-27 00:32:31,199 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.110e+02 1.676e+02 1.964e+02 2.392e+02 4.564e+02, threshold=3.928e+02, percent-clipped=2.0
2023-03-27 00:32:31,339 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1985, 2.0794, 1.9787, 2.2105, 2.6083, 2.1253, 2.2202, 1.7590], device='cuda:2'), covar=tensor([0.1869, 0.1872, 0.1682, 0.1577, 0.1913, 0.1142, 0.2161, 0.1715], device='cuda:2'), in_proj_covar=tensor([0.0245, 0.0211, 0.0213, 0.0195, 0.0244, 0.0189, 0.0218, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:32:34,361 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3896, 1.2939, 1.2471, 1.3414, 1.6488, 1.5148, 1.3719, 1.2178], device='cuda:2'), covar=tensor([0.0295, 0.0306, 0.0611, 0.0267, 0.0203, 0.0454, 0.0297, 0.0392], device='cuda:2'), in_proj_covar=tensor([0.0097, 0.0108, 0.0145, 0.0112, 0.0101, 0.0112, 0.0100, 0.0113], device='cuda:2'), out_proj_covar=tensor([7.5499e-05, 8.2920e-05, 1.1431e-04, 8.5745e-05, 7.8383e-05, 8.2965e-05, 7.4686e-05, 8.6228e-05], device='cuda:2')
2023-03-27 00:32:44,378 INFO [finetune.py:976] (2/7) Epoch 20, batch 3450, loss[loss=0.1659, simple_loss=0.2435, pruned_loss=0.04413, over 4885.00 frames. ], tot_loss[loss=0.1809, simple_loss=0.2516, pruned_loss=0.05509, over 950117.76 frames. ], batch size: 35, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:33:19,433 INFO [finetune.py:976] (2/7) Epoch 20, batch 3500, loss[loss=0.1642, simple_loss=0.2377, pruned_loss=0.04529, over 4901.00 frames. ], tot_loss[loss=0.1787, simple_loss=0.2488, pruned_loss=0.05426, over 952570.36 frames. ], batch size: 36, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:33:29,338 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5570, 3.3770, 3.1979, 1.5035, 3.4385, 2.6354, 0.8226, 2.4476], device='cuda:2'), covar=tensor([0.2410, 0.2021, 0.1724, 0.3555, 0.1356, 0.1018, 0.4275, 0.1550], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0177, 0.0160, 0.0129, 0.0162, 0.0122, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-27 00:33:56,443 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.856e+01 1.454e+02 1.799e+02 2.150e+02 5.052e+02, threshold=3.598e+02, percent-clipped=2.0
2023-03-27 00:34:18,957 INFO [finetune.py:976] (2/7) Epoch 20, batch 3550, loss[loss=0.118, simple_loss=0.1954, pruned_loss=0.02024, over 4759.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.245, pruned_loss=0.0528, over 953402.19 frames. ], batch size: 28, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:34:28,785 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4842, 1.6560, 2.2610, 1.8200, 1.8113, 3.9206, 1.6141, 1.8329], device='cuda:2'), covar=tensor([0.0944, 0.1756, 0.1163, 0.0976, 0.1495, 0.0232, 0.1381, 0.1744], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0082, 0.0075, 0.0077, 0.0092, 0.0081, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 00:34:36,596 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=112389.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:35:17,937 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=112422.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:35:20,889 INFO [finetune.py:976] (2/7) Epoch 20, batch 3600, loss[loss=0.1333, simple_loss=0.1945, pruned_loss=0.03603, over 4720.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2428, pruned_loss=0.0526, over 951150.93 frames. ], batch size: 23, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:35:32,691 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.66 vs. limit=2.0
2023-03-27 00:35:37,948 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=112447.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 00:35:39,834 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=112450.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:35:44,346 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.087e+02 1.527e+02 1.833e+02 2.137e+02 4.874e+02, threshold=3.667e+02, percent-clipped=1.0
2023-03-27 00:36:08,331 INFO [finetune.py:976] (2/7) Epoch 20, batch 3650, loss[loss=0.1795, simple_loss=0.2476, pruned_loss=0.05574, over 4790.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2444, pruned_loss=0.05306, over 951850.45 frames. ], batch size: 26, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:36:20,405 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=112495.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 00:36:41,251 INFO [finetune.py:976] (2/7) Epoch 20, batch 3700, loss[loss=0.2085, simple_loss=0.258, pruned_loss=0.07954, over 3845.00 frames. ], tot_loss[loss=0.1789, simple_loss=0.2487, pruned_loss=0.05457, over 949784.07 frames. ], batch size: 16, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:37:01,295 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.006e+02 1.568e+02 1.941e+02 2.323e+02 3.962e+02, threshold=3.882e+02, percent-clipped=1.0
2023-03-27 00:37:03,792 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.0609, 1.0119, 1.0209, 0.3729, 0.9480, 1.1762, 1.1993, 1.0186], device='cuda:2'), covar=tensor([0.0826, 0.0557, 0.0523, 0.0528, 0.0482, 0.0541, 0.0361, 0.0623], device='cuda:2'), in_proj_covar=tensor([0.0124, 0.0150, 0.0125, 0.0124, 0.0130, 0.0129, 0.0141, 0.0148], device='cuda:2'), out_proj_covar=tensor([9.0439e-05, 1.0861e-04, 8.9138e-05, 8.7751e-05, 9.1464e-05, 9.2601e-05, 1.0136e-04, 1.0592e-04], device='cuda:2')
2023-03-27 00:37:05,588 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=112562.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:37:15,376 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4301, 3.3988, 3.1922, 1.4660, 3.5003, 2.5804, 0.7440, 2.3505], device='cuda:2'), covar=tensor([0.2657, 0.1900, 0.1820, 0.3677, 0.1190, 0.1105, 0.4595, 0.1559], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0177, 0.0161, 0.0130, 0.0162, 0.0123, 0.0147, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-27 00:37:15,914 INFO [finetune.py:976] (2/7) Epoch 20, batch 3750, loss[loss=0.2183, simple_loss=0.2804, pruned_loss=0.07806, over 4821.00 frames. ], tot_loss[loss=0.1802, simple_loss=0.2501, pruned_loss=0.05512, over 950257.20 frames. ], batch size: 33, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:37:16,031 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=112577.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:37:33,646 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1788, 2.1115, 1.8113, 2.2895, 2.1120, 1.8909, 2.4851, 2.2329], device='cuda:2'), covar=tensor([0.1101, 0.1967, 0.2451, 0.2064, 0.2017, 0.1386, 0.2663, 0.1395], device='cuda:2'), in_proj_covar=tensor([0.0185, 0.0188, 0.0234, 0.0252, 0.0246, 0.0203, 0.0214, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:37:46,049 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6358, 2.3654, 2.1699, 2.5654, 2.4140, 2.2883, 2.8866, 2.5526], device='cuda:2'), covar=tensor([0.1135, 0.1815, 0.2720, 0.2176, 0.2427, 0.1551, 0.2187, 0.1679], device='cuda:2'), in_proj_covar=tensor([0.0185, 0.0188, 0.0234, 0.0251, 0.0246, 0.0203, 0.0214, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:37:50,091 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=112617.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:37:54,210 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=112623.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:37:57,418 INFO [finetune.py:976] (2/7) Epoch 20, batch 3800, loss[loss=0.1893, simple_loss=0.2686, pruned_loss=0.05496, over 4718.00 frames. ], tot_loss[loss=0.1806, simple_loss=0.2507, pruned_loss=0.05521, over 950798.11 frames. ], batch size: 54, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:38:04,244 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=112638.0, num_to_drop=1, layers_to_drop={3}
2023-03-27 00:38:11,465 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.70 vs. limit=5.0
2023-03-27 00:38:16,043 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.507e+01 1.515e+02 1.800e+02 2.233e+02 3.828e+02, threshold=3.600e+02, percent-clipped=0.0
2023-03-27 00:38:27,905 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8922, 1.7579, 1.7914, 1.2161, 1.8810, 1.9576, 1.9443, 1.5305], device='cuda:2'), covar=tensor([0.0633, 0.0771, 0.0756, 0.0932, 0.0747, 0.0741, 0.0640, 0.1241], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0136, 0.0140, 0.0120, 0.0125, 0.0139, 0.0140, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:38:29,125 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8214, 1.6040, 1.4620, 1.2092, 1.5524, 1.5440, 1.5992, 2.1239], device='cuda:2'), covar=tensor([0.3096, 0.3340, 0.2697, 0.2904, 0.3098, 0.1931, 0.2802, 0.1511], device='cuda:2'), in_proj_covar=tensor([0.0287, 0.0263, 0.0232, 0.0277, 0.0253, 0.0222, 0.0252, 0.0234], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 00:38:30,667 INFO [finetune.py:976] (2/7) Epoch 20, batch 3850, loss[loss=0.1915, simple_loss=0.261, pruned_loss=0.06095, over 4772.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2493, pruned_loss=0.05437, over 951185.50 frames.
], batch size: 28, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:38:31,366 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=112678.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 00:38:46,321 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3287, 2.1399, 2.2570, 1.6528, 2.1668, 2.4398, 2.4255, 1.7239], device='cuda:2'), covar=tensor([0.0568, 0.0566, 0.0645, 0.0812, 0.0686, 0.0552, 0.0521, 0.1142], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0136, 0.0140, 0.0120, 0.0125, 0.0139, 0.0140, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 00:38:59,761 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=112722.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:38:59,785 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1645, 2.1393, 2.2741, 0.9222, 2.5324, 2.7513, 2.3814, 1.9148], device='cuda:2'), covar=tensor([0.1066, 0.0875, 0.0485, 0.0696, 0.0542, 0.0687, 0.0419, 0.0762], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0150, 0.0125, 0.0124, 0.0130, 0.0129, 0.0141, 0.0148], device='cuda:2'), out_proj_covar=tensor([9.0292e-05, 1.0846e-04, 8.9019e-05, 8.7496e-05, 9.1263e-05, 9.2405e-05, 1.0119e-04, 1.0580e-04], device='cuda:2') 2023-03-27 00:39:03,155 INFO [finetune.py:976] (2/7) Epoch 20, batch 3900, loss[loss=0.1608, simple_loss=0.2346, pruned_loss=0.04353, over 4923.00 frames. ], tot_loss[loss=0.1764, simple_loss=0.246, pruned_loss=0.05338, over 951607.04 frames. ], batch size: 33, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:39:15,055 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=112745.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:39:21,552 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.403e+01 1.612e+02 1.959e+02 2.410e+02 4.123e+02, threshold=3.918e+02, percent-clipped=1.0 2023-03-27 00:39:32,076 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=112770.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:39:36,268 INFO [finetune.py:976] (2/7) Epoch 20, batch 3950, loss[loss=0.1564, simple_loss=0.2262, pruned_loss=0.04333, over 4806.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2431, pruned_loss=0.05234, over 952503.43 frames. ], batch size: 25, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:39:51,982 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2612, 2.0640, 1.5308, 0.6610, 1.7017, 1.8369, 1.6646, 1.9005], device='cuda:2'), covar=tensor([0.0798, 0.0701, 0.1387, 0.1868, 0.1256, 0.2223, 0.2056, 0.0790], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0191, 0.0198, 0.0181, 0.0210, 0.0208, 0.0221, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 00:40:19,039 INFO [finetune.py:976] (2/7) Epoch 20, batch 4000, loss[loss=0.1841, simple_loss=0.2588, pruned_loss=0.05466, over 4909.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2423, pruned_loss=0.05164, over 951998.13 frames. 
], batch size: 32, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:40:48,438 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.088e+02 1.638e+02 1.904e+02 2.320e+02 3.891e+02, threshold=3.808e+02, percent-clipped=0.0 2023-03-27 00:40:50,432 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1887, 2.0450, 1.7709, 1.9697, 1.9828, 1.9623, 1.9982, 2.6992], device='cuda:2'), covar=tensor([0.3962, 0.4560, 0.3423, 0.3974, 0.4335, 0.2586, 0.3907, 0.1746], device='cuda:2'), in_proj_covar=tensor([0.0286, 0.0263, 0.0231, 0.0276, 0.0253, 0.0222, 0.0251, 0.0233], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 00:40:56,978 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.13 vs. limit=2.0 2023-03-27 00:41:04,948 INFO [finetune.py:976] (2/7) Epoch 20, batch 4050, loss[loss=0.1945, simple_loss=0.2666, pruned_loss=0.06117, over 4807.00 frames. ], tot_loss[loss=0.175, simple_loss=0.2453, pruned_loss=0.05233, over 953229.23 frames. ], batch size: 38, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:41:23,154 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5594, 3.6941, 3.5111, 1.7590, 3.8532, 2.8418, 0.7456, 2.5850], device='cuda:2'), covar=tensor([0.2445, 0.2392, 0.1548, 0.3453, 0.0985, 0.0954, 0.4600, 0.1525], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0176, 0.0160, 0.0129, 0.0161, 0.0123, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 00:41:40,671 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0958, 1.9655, 1.6738, 1.9766, 1.9286, 1.9057, 1.9143, 2.6155], device='cuda:2'), covar=tensor([0.3548, 0.4409, 0.3172, 0.3797, 0.4364, 0.2371, 0.4123, 0.1681], device='cuda:2'), in_proj_covar=tensor([0.0287, 0.0263, 0.0232, 0.0277, 0.0253, 0.0223, 0.0252, 0.0234], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 00:41:41,261 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=112918.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:41:43,124 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=112921.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:41:47,212 INFO [finetune.py:976] (2/7) Epoch 20, batch 4100, loss[loss=0.1599, simple_loss=0.232, pruned_loss=0.04396, over 4810.00 frames. ], tot_loss[loss=0.1769, simple_loss=0.2479, pruned_loss=0.05297, over 952628.59 frames. 
], batch size: 25, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:41:51,357 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=112933.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 00:42:06,079 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.082e+02 1.570e+02 1.869e+02 2.355e+02 4.214e+02, threshold=3.739e+02, percent-clipped=0.0 2023-03-27 00:42:07,465 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8204, 1.7564, 1.5627, 1.8901, 2.3182, 1.9141, 1.6351, 1.5097], device='cuda:2'), covar=tensor([0.1924, 0.1809, 0.1762, 0.1426, 0.1482, 0.1110, 0.2218, 0.1693], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0209, 0.0211, 0.0193, 0.0241, 0.0187, 0.0216, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 00:42:17,472 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=112973.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 00:42:19,818 INFO [finetune.py:976] (2/7) Epoch 20, batch 4150, loss[loss=0.2242, simple_loss=0.2806, pruned_loss=0.08392, over 4140.00 frames. ], tot_loss[loss=0.1794, simple_loss=0.25, pruned_loss=0.05441, over 951180.31 frames. ], batch size: 66, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:42:24,000 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=112982.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:43:03,967 INFO [finetune.py:976] (2/7) Epoch 20, batch 4200, loss[loss=0.1859, simple_loss=0.2635, pruned_loss=0.05413, over 4894.00 frames. ], tot_loss[loss=0.1794, simple_loss=0.2506, pruned_loss=0.05411, over 952336.79 frames. ], batch size: 46, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:43:16,405 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=113045.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:43:23,433 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.027e+02 1.578e+02 1.900e+02 2.258e+02 5.235e+02, threshold=3.800e+02, percent-clipped=4.0 2023-03-27 00:43:36,993 INFO [finetune.py:976] (2/7) Epoch 20, batch 4250, loss[loss=0.1813, simple_loss=0.2412, pruned_loss=0.06065, over 4828.00 frames. ], tot_loss[loss=0.1779, simple_loss=0.2485, pruned_loss=0.05364, over 953161.92 frames. ], batch size: 33, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:43:47,716 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=113093.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:44:10,286 INFO [finetune.py:976] (2/7) Epoch 20, batch 4300, loss[loss=0.1386, simple_loss=0.2176, pruned_loss=0.02978, over 4824.00 frames. ], tot_loss[loss=0.1758, simple_loss=0.2461, pruned_loss=0.05274, over 954972.78 frames. ], batch size: 38, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:44:30,856 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.679e+01 1.396e+02 1.741e+02 2.023e+02 4.034e+02, threshold=3.482e+02, percent-clipped=1.0 2023-03-27 00:44:43,636 INFO [finetune.py:976] (2/7) Epoch 20, batch 4350, loss[loss=0.144, simple_loss=0.2065, pruned_loss=0.0407, over 4778.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2426, pruned_loss=0.05105, over 956001.58 frames. 
], batch size: 28, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:45:12,214 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=113218.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:45:17,618 INFO [finetune.py:976] (2/7) Epoch 20, batch 4400, loss[loss=0.1877, simple_loss=0.2703, pruned_loss=0.05256, over 4842.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2438, pruned_loss=0.05213, over 953938.61 frames. ], batch size: 49, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:45:21,833 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=113233.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:45:22,448 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=113234.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:45:32,830 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-27 00:45:49,232 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.729e+01 1.603e+02 1.846e+02 2.173e+02 5.642e+02, threshold=3.692e+02, percent-clipped=4.0 2023-03-27 00:46:00,744 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=113266.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:46:08,336 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=113273.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 00:46:10,701 INFO [finetune.py:976] (2/7) Epoch 20, batch 4450, loss[loss=0.2109, simple_loss=0.2877, pruned_loss=0.06706, over 4820.00 frames. ], tot_loss[loss=0.1782, simple_loss=0.2486, pruned_loss=0.05392, over 955805.45 frames. ], batch size: 39, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:46:10,769 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=113277.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:46:13,203 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=113281.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:46:23,177 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=113295.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:46:32,661 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1507, 1.8431, 2.4927, 1.6382, 2.1863, 2.4542, 1.7747, 2.5739], device='cuda:2'), covar=tensor([0.1280, 0.1996, 0.1289, 0.1998, 0.0984, 0.1316, 0.2699, 0.0852], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0205, 0.0190, 0.0190, 0.0175, 0.0213, 0.0219, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 00:46:35,145 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6402, 1.5299, 1.0867, 0.3071, 1.2721, 1.4548, 1.4267, 1.4639], device='cuda:2'), covar=tensor([0.0879, 0.0778, 0.1252, 0.1839, 0.1338, 0.2200, 0.2167, 0.0793], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0192, 0.0200, 0.0182, 0.0210, 0.0209, 0.0223, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 00:46:50,259 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=113321.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:46:53,838 INFO [finetune.py:976] (2/7) Epoch 20, batch 4500, loss[loss=0.1624, simple_loss=0.2364, pruned_loss=0.04424, over 4793.00 frames. ], tot_loss[loss=0.1805, simple_loss=0.2512, pruned_loss=0.0549, over 955310.01 frames. ], batch size: 25, lr: 3.22e-03, grad_scale: 32.0
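The recurring [scaling.py:679] lines above report a per-module "whitening" metric against a limit (e.g. metric=1.34 vs. limit=2.0). A minimal sketch of one metric with the logged behaviour, assuming it measures the eigenvalue spread of the per-group feature covariance, E[λ²]/E[λ]², which is 1.0 for perfectly whitened features and grows as the spectrum becomes lopsided; this is an illustration, not the actual scaling.py code:

```python
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    """Eigenvalue-spread metric of the per-group feature covariance (sketch)."""
    n, c = x.shape
    cpg = c // num_groups                                  # channels per group
    xg = x.reshape(n, num_groups, cpg).permute(1, 0, 2)    # (groups, n, cpg)
    cov = xg.transpose(1, 2) @ xg / n                      # per-group covariance
    e_lam = cov.diagonal(dim1=1, dim2=2).mean()            # mean eigenvalue E[lambda]
    e_lam2 = (cov @ cov).diagonal(dim1=1, dim2=2).mean()   # mean squared eigenvalue
    return e_lam2 / (e_lam ** 2 + 1e-20)                   # >= 1.0; 1.0 iff whitened

# e.g. whitening_metric(torch.randn(1000, 96), num_groups=8) is close to 1.0
# for white noise; correlated channels push it toward and past the limit.
```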
2023-03-27 00:46:56,406 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6258, 3.4463, 3.2388, 1.5836, 3.5756, 2.6419, 0.7104, 2.3727], device='cuda:2'), covar=tensor([0.2333, 0.2203, 0.1743, 0.3427, 0.1171, 0.1078, 0.4408, 0.1636], device='cuda:2'), in_proj_covar=tensor([0.0154, 0.0178, 0.0161, 0.0130, 0.0162, 0.0124, 0.0148, 0.0125], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 00:47:13,423 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.100e+02 1.617e+02 2.027e+02 2.372e+02 5.258e+02, threshold=4.055e+02, percent-clipped=2.0 2023-03-27 00:47:27,581 INFO [finetune.py:976] (2/7) Epoch 20, batch 4550, loss[loss=0.1737, simple_loss=0.2527, pruned_loss=0.0473, over 4900.00 frames. ], tot_loss[loss=0.1824, simple_loss=0.2533, pruned_loss=0.05577, over 956734.78 frames. ], batch size: 37, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:47:40,404 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.72 vs. limit=5.0 2023-03-27 00:47:47,434 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4087, 1.4067, 1.3848, 0.7892, 1.4152, 1.6362, 1.6671, 1.3016], device='cuda:2'), covar=tensor([0.0820, 0.0572, 0.0481, 0.0502, 0.0451, 0.0527, 0.0288, 0.0617], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0149, 0.0124, 0.0123, 0.0129, 0.0128, 0.0140, 0.0147], device='cuda:2'), out_proj_covar=tensor([8.9582e-05, 1.0771e-04, 8.8539e-05, 8.6679e-05, 9.0437e-05, 9.1342e-05, 1.0022e-04, 1.0523e-04], device='cuda:2') 2023-03-27 00:48:03,324 INFO [finetune.py:976] (2/7) Epoch 20, batch 4600, loss[loss=0.1839, simple_loss=0.2547, pruned_loss=0.05651, over 4872.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2525, pruned_loss=0.05556, over 955660.10 frames. ], batch size: 34, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:48:31,337 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.757e+01 1.614e+02 1.832e+02 2.145e+02 3.668e+02, threshold=3.664e+02, percent-clipped=0.0 2023-03-27 00:48:45,584 INFO [finetune.py:976] (2/7) Epoch 20, batch 4650, loss[loss=0.1739, simple_loss=0.2391, pruned_loss=0.05434, over 4887.00 frames. ], tot_loss[loss=0.1797, simple_loss=0.2499, pruned_loss=0.05472, over 957103.53 frames. ], batch size: 32, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:49:19,010 INFO [finetune.py:976] (2/7) Epoch 20, batch 4700, loss[loss=0.1227, simple_loss=0.2026, pruned_loss=0.02135, over 4832.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.2464, pruned_loss=0.05352, over 956881.22 frames. ], batch size: 30, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:49:37,204 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.713e+01 1.518e+02 1.819e+02 2.172e+02 5.123e+02, threshold=3.639e+02, percent-clipped=2.0 2023-03-27 00:49:51,489 INFO [finetune.py:976] (2/7) Epoch 20, batch 4750, loss[loss=0.1435, simple_loss=0.2294, pruned_loss=0.02881, over 4816.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2438, pruned_loss=0.05212, over 957325.54 frames.
], batch size: 39, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:49:51,580 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=113577.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:50:00,074 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=113590.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:50:07,344 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.16 vs. limit=5.0 2023-03-27 00:50:18,364 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=113617.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:50:23,694 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=113625.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:50:24,845 INFO [finetune.py:976] (2/7) Epoch 20, batch 4800, loss[loss=0.2016, simple_loss=0.2773, pruned_loss=0.0629, over 4901.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.247, pruned_loss=0.05365, over 955513.60 frames. ], batch size: 43, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:50:37,469 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=113646.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:50:38,079 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4500, 2.0516, 2.6449, 1.6445, 2.3783, 2.5830, 2.0025, 2.8185], device='cuda:2'), covar=tensor([0.1375, 0.2100, 0.1737, 0.2402, 0.1005, 0.1480, 0.2783, 0.0803], device='cuda:2'), in_proj_covar=tensor([0.0194, 0.0207, 0.0192, 0.0192, 0.0176, 0.0214, 0.0221, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 00:50:43,816 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.144e+02 1.583e+02 1.918e+02 2.188e+02 4.674e+02, threshold=3.836e+02, percent-clipped=2.0 2023-03-27 00:50:55,145 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4421, 1.7191, 1.2786, 1.3793, 2.0106, 1.8802, 1.6016, 1.6703], device='cuda:2'), covar=tensor([0.0583, 0.0329, 0.0653, 0.0396, 0.0335, 0.0686, 0.0382, 0.0422], device='cuda:2'), in_proj_covar=tensor([0.0098, 0.0108, 0.0145, 0.0112, 0.0101, 0.0112, 0.0101, 0.0113], device='cuda:2'), out_proj_covar=tensor([7.5710e-05, 8.2973e-05, 1.1415e-04, 8.5706e-05, 7.8452e-05, 8.2821e-05, 7.5182e-05, 8.6176e-05], device='cuda:2') 2023-03-27 00:51:01,327 INFO [finetune.py:976] (2/7) Epoch 20, batch 4850, loss[loss=0.155, simple_loss=0.2297, pruned_loss=0.04016, over 4815.00 frames. ], tot_loss[loss=0.1793, simple_loss=0.2497, pruned_loss=0.05445, over 953380.91 frames. ], batch size: 38, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:51:02,041 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=113678.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:51:34,437 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=113707.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:51:57,945 INFO [finetune.py:976] (2/7) Epoch 20, batch 4900, loss[loss=0.2133, simple_loss=0.2808, pruned_loss=0.07287, over 4177.00 frames. ], tot_loss[loss=0.1812, simple_loss=0.2514, pruned_loss=0.05549, over 951950.10 frames. 
], batch size: 65, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:52:20,124 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.029e+02 1.649e+02 2.053e+02 2.498e+02 4.513e+02, threshold=4.105e+02, percent-clipped=3.0 2023-03-27 00:52:34,813 INFO [finetune.py:976] (2/7) Epoch 20, batch 4950, loss[loss=0.1942, simple_loss=0.2684, pruned_loss=0.06003, over 4892.00 frames. ], tot_loss[loss=0.1816, simple_loss=0.2522, pruned_loss=0.05548, over 953549.74 frames. ], batch size: 37, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:53:07,553 INFO [finetune.py:976] (2/7) Epoch 20, batch 5000, loss[loss=0.2046, simple_loss=0.2653, pruned_loss=0.07199, over 4210.00 frames. ], tot_loss[loss=0.1791, simple_loss=0.2499, pruned_loss=0.05414, over 952921.92 frames. ], batch size: 65, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:53:17,704 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-27 00:53:21,225 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9897, 1.9564, 1.7837, 2.0855, 2.5470, 2.2482, 1.7831, 1.6457], device='cuda:2'), covar=tensor([0.2097, 0.1832, 0.1835, 0.1537, 0.1581, 0.1085, 0.2200, 0.1825], device='cuda:2'), in_proj_covar=tensor([0.0245, 0.0210, 0.0212, 0.0194, 0.0243, 0.0187, 0.0217, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 00:53:26,538 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.026e+02 1.516e+02 1.779e+02 2.023e+02 5.156e+02, threshold=3.559e+02, percent-clipped=2.0 2023-03-27 00:53:40,065 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.86 vs. limit=2.0 2023-03-27 00:53:42,068 INFO [finetune.py:976] (2/7) Epoch 20, batch 5050, loss[loss=0.1451, simple_loss=0.2142, pruned_loss=0.03796, over 4768.00 frames. ], tot_loss[loss=0.1764, simple_loss=0.2467, pruned_loss=0.05299, over 952984.90 frames. ], batch size: 26, lr: 3.22e-03, grad_scale: 64.0 2023-03-27 00:53:51,009 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=113890.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:54:14,769 INFO [finetune.py:976] (2/7) Epoch 20, batch 5100, loss[loss=0.1908, simple_loss=0.2535, pruned_loss=0.06402, over 4813.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2443, pruned_loss=0.05259, over 953424.83 frames. ], batch size: 25, lr: 3.22e-03, grad_scale: 32.0
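The [optim.py:369] lines report the 0/25/50/75/100% quantiles of recent gradient norms next to the active clipping threshold, and the logged numbers are consistent with threshold = Clipping_scale × median (for instance 2.0 × 1.779e+02 ≈ 3.559e+02 just above). A sketch of that bookkeeping; the class shape and the history length are invented for illustration, this is not icefall's optim.py:

```python
import torch

class QuartileClipper:
    """Clip gradients against clipping_scale * median of recent grad norms (sketch)."""

    def __init__(self, clipping_scale=2.0, history=128):
        self.clipping_scale = clipping_scale
        self.history = history        # how many recent norms to keep (assumed)
        self.norms = []
        self.num_batches = 0
        self.num_clipped = 0

    def clip_(self, params):
        grads = [p.grad for p in params if p.grad is not None]
        norm = torch.norm(torch.stack([g.norm() for g in grads])).item()
        self.norms = (self.norms + [norm])[-self.history:]
        # the five "grad-norm quartiles" as logged above
        q = torch.quantile(torch.tensor(self.norms),
                           torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
        threshold = self.clipping_scale * q[2].item()   # scale * median
        self.num_batches += 1
        if norm > threshold:
            self.num_clipped += 1
            for g in grads:
                g.mul_(threshold / norm)
        # "percent-clipped" over the batches seen so far
        return q, threshold, 100.0 * self.num_clipped / self.num_batches
```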
2023-03-27 00:54:15,512 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4188, 2.3034, 1.8695, 2.6051, 2.3099, 2.0417, 2.8732, 2.5097], device='cuda:2'), covar=tensor([0.1317, 0.2406, 0.2835, 0.2537, 0.2457, 0.1619, 0.3404, 0.1639], device='cuda:2'), in_proj_covar=tensor([0.0185, 0.0187, 0.0234, 0.0253, 0.0246, 0.0204, 0.0214, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 00:54:23,010 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=113938.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:54:35,763 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.073e+02 1.567e+02 1.826e+02 2.193e+02 3.507e+02, threshold=3.652e+02, percent-clipped=0.0 2023-03-27 00:54:46,072 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=113973.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:54:48,407 INFO [finetune.py:976] (2/7) Epoch 20, batch 5150, loss[loss=0.1642, simple_loss=0.2228, pruned_loss=0.05276, over 4706.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2438, pruned_loss=0.05266, over 951772.55 frames. ], batch size: 23, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:55:07,785 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=114002.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:55:15,830 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-27 00:55:23,294 INFO [finetune.py:976] (2/7) Epoch 20, batch 5200, loss[loss=0.1924, simple_loss=0.2709, pruned_loss=0.05691, over 4149.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2488, pruned_loss=0.05461, over 952605.14 frames. ], batch size: 65, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:55:43,820 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.106e+02 1.581e+02 1.814e+02 2.243e+02 3.815e+02, threshold=3.628e+02, percent-clipped=1.0 2023-03-27 00:55:44,040 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.96 vs. limit=5.0 2023-03-27 00:55:56,415 INFO [finetune.py:976] (2/7) Epoch 20, batch 5250, loss[loss=0.1787, simple_loss=0.2499, pruned_loss=0.05373, over 4819.00 frames. ], tot_loss[loss=0.1797, simple_loss=0.2506, pruned_loss=0.05444, over 953579.37 frames. ], batch size: 30, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:56:19,704 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-27 00:56:36,233 INFO [finetune.py:976] (2/7) Epoch 20, batch 5300, loss[loss=0.1532, simple_loss=0.2177, pruned_loss=0.04432, over 4775.00 frames. ], tot_loss[loss=0.181, simple_loss=0.2523, pruned_loss=0.05482, over 954773.15 frames. ], batch size: 26, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:56:36,355 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=114127.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:56:56,801 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-27 00:57:13,340 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.589e+01 1.501e+02 1.834e+02 2.338e+02 4.244e+02, threshold=3.669e+02, percent-clipped=1.0 2023-03-27 00:57:13,594 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.69 vs.
limit=2.0 2023-03-27 00:57:30,109 INFO [finetune.py:976] (2/7) Epoch 20, batch 5350, loss[loss=0.1872, simple_loss=0.2536, pruned_loss=0.06037, over 4857.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2508, pruned_loss=0.05359, over 954751.55 frames. ], batch size: 31, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:57:37,354 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=114188.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:57:47,469 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7837, 1.2650, 0.9528, 1.6294, 2.0318, 1.4621, 1.5016, 1.8199], device='cuda:2'), covar=tensor([0.1386, 0.1917, 0.1795, 0.1056, 0.1902, 0.1856, 0.1382, 0.1748], device='cuda:2'), in_proj_covar=tensor([0.0088, 0.0094, 0.0110, 0.0091, 0.0119, 0.0092, 0.0096, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-27 00:57:50,955 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9954, 1.8577, 1.7451, 2.0210, 2.5951, 2.0567, 2.0996, 1.6833], device='cuda:2'), covar=tensor([0.1991, 0.2058, 0.1819, 0.1553, 0.1551, 0.1200, 0.1996, 0.1746], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0210, 0.0212, 0.0194, 0.0243, 0.0187, 0.0217, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 00:58:03,280 INFO [finetune.py:976] (2/7) Epoch 20, batch 5400, loss[loss=0.1543, simple_loss=0.2237, pruned_loss=0.04242, over 4738.00 frames. ], tot_loss[loss=0.1761, simple_loss=0.2475, pruned_loss=0.05233, over 953925.14 frames. ], batch size: 23, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:58:14,069 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8228, 1.2713, 0.9065, 1.6943, 2.1839, 1.5983, 1.4910, 1.8100], device='cuda:2'), covar=tensor([0.1459, 0.2095, 0.1919, 0.1104, 0.1828, 0.1834, 0.1425, 0.1820], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0091, 0.0119, 0.0093, 0.0097, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-27 00:58:23,337 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.275e+01 1.521e+02 1.786e+02 2.103e+02 5.074e+02, threshold=3.573e+02, percent-clipped=1.0 2023-03-27 00:58:30,072 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3941, 2.0497, 2.6739, 1.6606, 2.3557, 2.5376, 1.8603, 2.7160], device='cuda:2'), covar=tensor([0.1198, 0.1737, 0.1462, 0.2032, 0.0867, 0.1316, 0.2603, 0.0731], device='cuda:2'), in_proj_covar=tensor([0.0193, 0.0206, 0.0190, 0.0189, 0.0174, 0.0213, 0.0219, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 00:58:33,649 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=114273.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:58:35,980 INFO [finetune.py:976] (2/7) Epoch 20, batch 5450, loss[loss=0.1447, simple_loss=0.2096, pruned_loss=0.03986, over 4913.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2448, pruned_loss=0.05217, over 954891.36 frames. 
], batch size: 32, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:58:51,605 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=114302.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:59:05,065 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=114321.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:59:08,650 INFO [finetune.py:976] (2/7) Epoch 20, batch 5500, loss[loss=0.1881, simple_loss=0.2425, pruned_loss=0.06685, over 4836.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.2412, pruned_loss=0.05076, over 952334.30 frames. ], batch size: 30, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:59:16,167 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-27 00:59:23,094 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=114350.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:59:27,723 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.653e+01 1.524e+02 1.755e+02 2.254e+02 4.886e+02, threshold=3.510e+02, percent-clipped=3.0 2023-03-27 00:59:42,371 INFO [finetune.py:976] (2/7) Epoch 20, batch 5550, loss[loss=0.2252, simple_loss=0.3026, pruned_loss=0.07388, over 4821.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2426, pruned_loss=0.05148, over 954439.98 frames. ], batch size: 51, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 01:00:14,060 INFO [finetune.py:976] (2/7) Epoch 20, batch 5600, loss[loss=0.2038, simple_loss=0.2753, pruned_loss=0.06613, over 4730.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2465, pruned_loss=0.05193, over 955894.79 frames. ], batch size: 59, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 01:00:31,888 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.005e+02 1.587e+02 1.911e+02 2.256e+02 4.682e+02, threshold=3.822e+02, percent-clipped=4.0 2023-03-27 01:00:43,492 INFO [finetune.py:976] (2/7) Epoch 20, batch 5650, loss[loss=0.1577, simple_loss=0.2359, pruned_loss=0.03973, over 4815.00 frames. ], tot_loss[loss=0.1769, simple_loss=0.2489, pruned_loss=0.05244, over 954648.38 frames. ], batch size: 33, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 01:00:46,983 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=114483.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:00:57,538 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5095, 2.2163, 1.6671, 0.9625, 2.0075, 2.0804, 1.7554, 2.0418], device='cuda:2'), covar=tensor([0.0640, 0.0688, 0.1341, 0.1652, 0.1061, 0.1611, 0.2005, 0.0742], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0190, 0.0198, 0.0181, 0.0209, 0.0209, 0.0222, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:01:04,505 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0679, 1.4642, 1.0108, 1.8054, 2.3134, 1.4924, 1.6916, 1.8196], device='cuda:2'), covar=tensor([0.1256, 0.1858, 0.1779, 0.1006, 0.1666, 0.1789, 0.1229, 0.1829], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0095, 0.0110, 0.0091, 0.0120, 0.0093, 0.0097, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-27 01:01:13,265 INFO [finetune.py:976] (2/7) Epoch 20, batch 5700, loss[loss=0.1534, simple_loss=0.2111, pruned_loss=0.04785, over 4186.00 frames. ], tot_loss[loss=0.175, simple_loss=0.2459, pruned_loss=0.05206, over 938985.26 frames. ], batch size: 18, lr: 3.22e-03, grad_scale: 32.0
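Each [zipformer.py:1188] line above is an encoder stack deciding, per batch, whether to skip layers: a warmup window in batches (warmup_begin/warmup_end) plus the chosen layers_to_drop, which is usually empty but occasionally a single layer such as {0} or {2}. A rough sketch of such a schedule; the drop probabilities and the linear decay are assumptions for illustration, not the actual zipformer.py logic:

```python
import random

def pick_layers_to_drop(num_layers, batch_count, warmup_begin, warmup_end,
                        initial_prob=0.5, final_prob=0.05):
    """Choose which layers to skip this batch (sketch; probabilities assumed)."""
    if batch_count <= warmup_begin:
        prob = initial_prob
    elif batch_count >= warmup_end:
        prob = final_prob   # a small residual rate would explain the
    else:                   # occasional num_to_drop=1 this late in training
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        prob = initial_prob + frac * (final_prob - initial_prob)
    num_to_drop = sum(random.random() < prob for _ in range(num_layers))
    layers_to_drop = set(random.sample(range(num_layers), num_to_drop))
    return num_to_drop, layers_to_drop

# e.g. pick_layers_to_drop(4, 114302.0, 2000.0, 2666.7) usually returns
# (0, set()) and now and then (1, {0}), matching the lines above.
```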
2023-03-27 01:01:39,091 INFO [finetune.py:976] (2/7) Epoch 21, batch 0, loss[loss=0.1905, simple_loss=0.2484, pruned_loss=0.06637, over 4858.00 frames. ], tot_loss[loss=0.1905, simple_loss=0.2484, pruned_loss=0.06637, over 4858.00 frames. ], batch size: 31, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:01:39,091 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-27 01:01:45,724 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.7788, 3.3640, 3.5174, 3.6611, 3.5499, 3.3454, 3.8336, 1.5037], device='cuda:2'), covar=tensor([0.0714, 0.0679, 0.0718, 0.0755, 0.1141, 0.1369, 0.0657, 0.4357], device='cuda:2'), in_proj_covar=tensor([0.0346, 0.0241, 0.0277, 0.0289, 0.0330, 0.0283, 0.0301, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:01:52,342 INFO [finetune.py:1010] (2/7) Epoch 21, validation: loss=0.1598, simple_loss=0.2277, pruned_loss=0.0459, over 2265189.00 frames. 2023-03-27 01:01:52,342 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB 2023-03-27 01:01:56,941 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.573e+01 1.356e+02 1.658e+02 2.014e+02 3.472e+02, threshold=3.316e+02, percent-clipped=0.0 2023-03-27 01:02:29,074 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1192, 1.6691, 2.2118, 2.1738, 1.9484, 1.9154, 2.1122, 2.0465], device='cuda:2'), covar=tensor([0.3861, 0.3750, 0.3107, 0.3440, 0.4789, 0.3755, 0.4188, 0.2961], device='cuda:2'), in_proj_covar=tensor([0.0254, 0.0243, 0.0263, 0.0281, 0.0280, 0.0255, 0.0290, 0.0245], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:02:47,783 INFO [finetune.py:976] (2/7) Epoch 21, batch 50, loss[loss=0.1417, simple_loss=0.2217, pruned_loss=0.03086, over 4757.00 frames. ], tot_loss[loss=0.1849, simple_loss=0.2537, pruned_loss=0.05804, over 215149.84 frames. ], batch size: 28, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:03:21,579 INFO [finetune.py:976] (2/7) Epoch 21, batch 100, loss[loss=0.1526, simple_loss=0.2239, pruned_loss=0.04064, over 4822.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.2441, pruned_loss=0.05325, over 377090.31 frames. ], batch size: 51, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:03:23,373 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.865e+01 1.560e+02 1.971e+02 2.354e+02 5.080e+02, threshold=3.943e+02, percent-clipped=2.0 2023-03-27 01:03:54,253 INFO [finetune.py:976] (2/7) Epoch 21, batch 150, loss[loss=0.1463, simple_loss=0.2208, pruned_loss=0.03587, over 4807.00 frames. ], tot_loss[loss=0.1737, simple_loss=0.2412, pruned_loss=0.05314, over 505831.81 frames. ], batch size: 25, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:04:02,519 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=114716.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:04:03,308 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0 2023-03-27 01:04:26,908 INFO [finetune.py:976] (2/7) Epoch 21, batch 200, loss[loss=0.1737, simple_loss=0.2434, pruned_loss=0.05202, over 4817.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2385, pruned_loss=0.0514, over 606382.47 frames. ], batch size: 40, lr: 3.21e-03, grad_scale: 32.0
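The validation block above ("Computing validation loss" followed by "Epoch 21, validation: loss=0.1598, simple_loss=0.2277, pruned_loss=0.0459, over 2265189.00 frames.") aggregates the same loss components over the whole dev set and normalises by the total frame count. A minimal sketch of that pass, assuming a compute_loss helper that returns per-batch summed losses and frame counts; both the helper and its signature are hypothetical, not finetune.py itself:

```python
import torch

def compute_validation_loss(model, dev_loader, compute_loss):
    """Average loss/simple_loss/pruned_loss per frame over the dev set (sketch)."""
    model.eval()
    totals = {"loss": 0.0, "simple_loss": 0.0, "pruned_loss": 0.0}
    tot_frames = 0.0
    with torch.no_grad():
        for batch in dev_loader:
            # compute_loss is a hypothetical helper returning summed (not
            # averaged) losses for the batch plus its number of frames
            losses, num_frames = compute_loss(model, batch)
            for k in totals:
                totals[k] += float(losses[k])
            tot_frames += num_frames
    # tot_frames is the "over 2265189.00 frames" figure; the printed losses
    # are the per-frame averages returned here
    return {k: v / tot_frames for k, v in totals.items()}, tot_frames
```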
2023-03-27 01:04:29,190 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.002e+02 1.562e+02 1.886e+02 2.300e+02 5.249e+02, threshold=3.772e+02, percent-clipped=1.0 2023-03-27 01:04:30,993 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7399, 1.6416, 1.5646, 1.7268, 1.1849, 3.7436, 1.4999, 1.7592], device='cuda:2'), covar=tensor([0.3090, 0.2347, 0.2074, 0.2182, 0.1624, 0.0163, 0.2403, 0.1243], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0116, 0.0120, 0.0123, 0.0114, 0.0096, 0.0095, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 01:04:42,955 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=114777.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:04:46,580 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=114783.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:05:00,770 INFO [finetune.py:976] (2/7) Epoch 21, batch 250, loss[loss=0.1887, simple_loss=0.2401, pruned_loss=0.06862, over 4743.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2429, pruned_loss=0.05282, over 684647.80 frames. ], batch size: 23, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:05:05,633 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-27 01:05:19,225 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=114831.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:05:33,143 INFO [finetune.py:976] (2/7) Epoch 21, batch 300, loss[loss=0.1799, simple_loss=0.2564, pruned_loss=0.05169, over 4919.00 frames. ], tot_loss[loss=0.1764, simple_loss=0.2464, pruned_loss=0.05319, over 745779.21 frames. ], batch size: 36, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:05:36,359 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.093e+02 1.500e+02 1.787e+02 2.137e+02 3.935e+02, threshold=3.575e+02, percent-clipped=3.0 2023-03-27 01:05:37,090 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1754, 2.0293, 2.2219, 1.8006, 2.1757, 2.5138, 2.5098, 1.5887], device='cuda:2'), covar=tensor([0.0668, 0.0767, 0.0709, 0.0909, 0.1027, 0.0560, 0.0528, 0.1452], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0136, 0.0140, 0.0121, 0.0125, 0.0139, 0.0140, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:05:46,352 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=114871.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:05:47,582 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8375, 1.7375, 1.6006, 1.9729, 2.4975, 2.0262, 1.7273, 1.5373], device='cuda:2'), covar=tensor([0.2436, 0.2174, 0.2218, 0.1860, 0.1645, 0.1294, 0.2421, 0.2074], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0211, 0.0214, 0.0195, 0.0243, 0.0188, 0.0218, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:06:06,528 INFO [finetune.py:976] (2/7) Epoch 21, batch 350, loss[loss=0.1665, simple_loss=0.2436, pruned_loss=0.04475, over 4831.00 frames. ], tot_loss[loss=0.1805, simple_loss=0.2507, pruned_loss=0.05516, over 793395.16 frames.
], batch size: 49, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:06:26,477 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=114932.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:06:40,073 INFO [finetune.py:976] (2/7) Epoch 21, batch 400, loss[loss=0.1723, simple_loss=0.2434, pruned_loss=0.05061, over 4905.00 frames. ], tot_loss[loss=0.1797, simple_loss=0.2504, pruned_loss=0.0545, over 828356.20 frames. ], batch size: 37, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:06:41,872 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.016e+02 1.627e+02 1.950e+02 2.383e+02 4.205e+02, threshold=3.900e+02, percent-clipped=3.0 2023-03-27 01:07:20,653 INFO [finetune.py:976] (2/7) Epoch 21, batch 450, loss[loss=0.2008, simple_loss=0.2585, pruned_loss=0.07156, over 4762.00 frames. ], tot_loss[loss=0.1775, simple_loss=0.248, pruned_loss=0.05347, over 855816.95 frames. ], batch size: 28, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:08:02,748 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.5664, 3.8706, 4.1208, 4.3563, 4.3759, 4.0689, 4.6560, 1.4044], device='cuda:2'), covar=tensor([0.0866, 0.0918, 0.0915, 0.1081, 0.1218, 0.1557, 0.0696, 0.5697], device='cuda:2'), in_proj_covar=tensor([0.0347, 0.0242, 0.0277, 0.0291, 0.0330, 0.0282, 0.0302, 0.0297], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:08:06,608 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.90 vs. limit=5.0 2023-03-27 01:08:11,192 INFO [finetune.py:976] (2/7) Epoch 21, batch 500, loss[loss=0.1559, simple_loss=0.238, pruned_loss=0.0369, over 4794.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2451, pruned_loss=0.05212, over 878307.54 frames. ], batch size: 29, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:08:13,015 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.154e+02 1.445e+02 1.728e+02 2.124e+02 2.919e+02, threshold=3.456e+02, percent-clipped=0.0 2023-03-27 01:08:24,204 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=115072.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:08:33,772 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=115086.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:08:45,009 INFO [finetune.py:976] (2/7) Epoch 21, batch 550, loss[loss=0.2307, simple_loss=0.2933, pruned_loss=0.0841, over 4913.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2444, pruned_loss=0.05276, over 894578.14 frames. ], batch size: 43, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:08:52,889 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8264, 2.8060, 2.5920, 2.9169, 3.3818, 3.0175, 2.7778, 2.4265], device='cuda:2'), covar=tensor([0.1783, 0.1535, 0.1450, 0.1327, 0.1233, 0.0776, 0.1550, 0.1510], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0210, 0.0212, 0.0194, 0.0242, 0.0187, 0.0217, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:09:14,153 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=115147.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:09:18,262 INFO [finetune.py:976] (2/7) Epoch 21, batch 600, loss[loss=0.1867, simple_loss=0.2672, pruned_loss=0.05312, over 4900.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2435, pruned_loss=0.0522, over 906411.99 frames. 
], batch size: 43, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:09:19,075 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-27 01:09:20,110 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.082e+02 1.558e+02 1.835e+02 2.263e+02 4.639e+02, threshold=3.670e+02, percent-clipped=5.0 2023-03-27 01:09:32,255 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0 2023-03-27 01:09:41,573 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1067, 2.0741, 2.2203, 1.5004, 2.1760, 2.3133, 2.2748, 1.6833], device='cuda:2'), covar=tensor([0.0683, 0.0741, 0.0742, 0.0909, 0.0690, 0.0702, 0.0624, 0.1351], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0137, 0.0141, 0.0121, 0.0125, 0.0140, 0.0141, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:09:51,868 INFO [finetune.py:976] (2/7) Epoch 21, batch 650, loss[loss=0.1941, simple_loss=0.2692, pruned_loss=0.05951, over 4918.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.2473, pruned_loss=0.05342, over 914947.40 frames. ], batch size: 43, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:10:01,556 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1392, 2.1758, 2.2102, 1.5671, 2.1065, 2.2761, 2.2981, 1.7863], device='cuda:2'), covar=tensor([0.0718, 0.0687, 0.0774, 0.0887, 0.0735, 0.0796, 0.0633, 0.1227], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0137, 0.0141, 0.0121, 0.0125, 0.0140, 0.0141, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:10:07,423 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=115227.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:10:25,138 INFO [finetune.py:976] (2/7) Epoch 21, batch 700, loss[loss=0.1701, simple_loss=0.2467, pruned_loss=0.04673, over 4777.00 frames. ], tot_loss[loss=0.1777, simple_loss=0.2488, pruned_loss=0.0533, over 925141.12 frames. ], batch size: 29, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:10:26,912 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.090e+02 1.664e+02 1.957e+02 2.283e+02 3.730e+02, threshold=3.914e+02, percent-clipped=1.0 2023-03-27 01:10:32,657 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.47 vs. limit=5.0 2023-03-27 01:10:58,918 INFO [finetune.py:976] (2/7) Epoch 21, batch 750, loss[loss=0.1534, simple_loss=0.2192, pruned_loss=0.04381, over 4712.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.2493, pruned_loss=0.05338, over 929199.82 frames. ], batch size: 23, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:11:31,743 INFO [finetune.py:976] (2/7) Epoch 21, batch 800, loss[loss=0.1838, simple_loss=0.2487, pruned_loss=0.05949, over 4788.00 frames. ], tot_loss[loss=0.1789, simple_loss=0.2503, pruned_loss=0.05371, over 933434.35 frames. 
], batch size: 51, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:11:33,560 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.112e+02 1.426e+02 1.720e+02 2.044e+02 3.360e+02, threshold=3.441e+02, percent-clipped=0.0 2023-03-27 01:11:41,498 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=115370.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:11:42,118 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=115371.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:11:42,699 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=115372.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:11:48,718 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5828, 1.5848, 1.3271, 1.5535, 1.9301, 1.8475, 1.5776, 1.3934], device='cuda:2'), covar=tensor([0.0333, 0.0294, 0.0657, 0.0311, 0.0209, 0.0455, 0.0343, 0.0432], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0109, 0.0146, 0.0113, 0.0101, 0.0112, 0.0101, 0.0114], device='cuda:2'), out_proj_covar=tensor([7.6653e-05, 8.3669e-05, 1.1514e-04, 8.6645e-05, 7.8521e-05, 8.3039e-05, 7.5312e-05, 8.6898e-05], device='cuda:2') 2023-03-27 01:12:04,583 INFO [finetune.py:976] (2/7) Epoch 21, batch 850, loss[loss=0.1827, simple_loss=0.2665, pruned_loss=0.0495, over 4761.00 frames. ], tot_loss[loss=0.1773, simple_loss=0.2486, pruned_loss=0.05299, over 939807.65 frames. ], batch size: 26, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:12:10,838 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.52 vs. limit=2.0 2023-03-27 01:12:13,707 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3542, 2.1532, 1.7585, 0.8523, 1.8926, 1.9106, 1.8017, 1.9573], device='cuda:2'), covar=tensor([0.0859, 0.0741, 0.1440, 0.1908, 0.1319, 0.2062, 0.1953, 0.0835], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0190, 0.0198, 0.0181, 0.0209, 0.0208, 0.0222, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:12:14,852 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=115420.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:12:16,193 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.13 vs. limit=5.0 2023-03-27 01:12:24,434 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=115431.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:12:25,042 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=115432.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:12:32,209 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4162, 2.1737, 1.7963, 0.8755, 1.9055, 1.8676, 1.7168, 2.0088], device='cuda:2'), covar=tensor([0.0866, 0.0789, 0.1581, 0.2167, 0.1558, 0.2555, 0.2386, 0.0954], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0190, 0.0198, 0.0181, 0.0209, 0.0209, 0.0222, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:12:41,966 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=115442.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:12:54,669 INFO [finetune.py:976] (2/7) Epoch 21, batch 900, loss[loss=0.1909, simple_loss=0.2482, pruned_loss=0.06686, over 4752.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.246, pruned_loss=0.05177, over 944894.99 frames. 
], batch size: 54, lr: 3.20e-03, grad_scale: 32.0 2023-03-27 01:13:00,780 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.046e+02 1.494e+02 1.776e+02 2.140e+02 4.219e+02, threshold=3.551e+02, percent-clipped=3.0 2023-03-27 01:13:37,347 INFO [finetune.py:976] (2/7) Epoch 21, batch 950, loss[loss=0.1766, simple_loss=0.2533, pruned_loss=0.04994, over 4869.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2449, pruned_loss=0.05227, over 947744.62 frames. ], batch size: 31, lr: 3.20e-03, grad_scale: 32.0 2023-03-27 01:13:42,947 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.73 vs. limit=2.0 2023-03-27 01:13:51,951 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=115527.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:14:11,295 INFO [finetune.py:976] (2/7) Epoch 21, batch 1000, loss[loss=0.1784, simple_loss=0.2367, pruned_loss=0.05998, over 4421.00 frames. ], tot_loss[loss=0.1751, simple_loss=0.2451, pruned_loss=0.05251, over 947726.16 frames. ], batch size: 19, lr: 3.20e-03, grad_scale: 32.0 2023-03-27 01:14:13,112 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.049e+02 1.550e+02 1.849e+02 2.159e+02 3.452e+02, threshold=3.698e+02, percent-clipped=0.0 2023-03-27 01:14:24,465 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=115575.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:14:44,024 INFO [finetune.py:976] (2/7) Epoch 21, batch 1050, loss[loss=0.184, simple_loss=0.2638, pruned_loss=0.05213, over 4752.00 frames. ], tot_loss[loss=0.1768, simple_loss=0.2486, pruned_loss=0.05246, over 951540.67 frames. ], batch size: 27, lr: 3.20e-03, grad_scale: 32.0 2023-03-27 01:15:16,667 INFO [finetune.py:976] (2/7) Epoch 21, batch 1100, loss[loss=0.199, simple_loss=0.266, pruned_loss=0.06601, over 4713.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2505, pruned_loss=0.05318, over 952644.99 frames. ], batch size: 54, lr: 3.20e-03, grad_scale: 32.0 2023-03-27 01:15:19,444 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.043e+02 1.582e+02 1.822e+02 2.328e+02 4.675e+02, threshold=3.643e+02, percent-clipped=4.0 2023-03-27 01:15:20,296 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0 2023-03-27 01:15:50,437 INFO [finetune.py:976] (2/7) Epoch 21, batch 1150, loss[loss=0.1764, simple_loss=0.2555, pruned_loss=0.04865, over 4897.00 frames. ], tot_loss[loss=0.1783, simple_loss=0.25, pruned_loss=0.05329, over 950146.62 frames. ], batch size: 43, lr: 3.20e-03, grad_scale: 32.0
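In the per-batch lines, loss[...] is the current batch while tot_loss[...] is a running aggregate. The fractional frame counts (e.g. "over 950146.62 frames") point to a decayed sum rather than a plain window; below is a sketch assuming an exponential decay of 1 - 1/reset_interval, with reset_interval=200 taken from the configuration at the top of this log (the exact decay rule is an assumption):

```python
class RunningLoss:
    """Exponentially decayed sum of per-batch losses and frame counts (sketch)."""

    def __init__(self, reset_interval=200):
        self.decay = 1.0 - 1.0 / reset_interval
        self.loss_sum = 0.0
        self.frames = 0.0

    def update(self, batch_loss_sum, batch_frames):
        # Decay the history, then add this batch's summed loss and frames.
        # Decaying the frame count is what produces fractional totals such
        # as "over 950146.62 frames" in the lines above.
        self.loss_sum = self.loss_sum * self.decay + batch_loss_sum
        self.frames = self.frames * self.decay + batch_frames

    def per_frame(self):
        # the tot_loss value printed in the log is a per-frame average
        return self.loss_sum / max(self.frames, 1.0)
```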
2023-03-27 01:16:05,310 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=115726.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:16:05,912 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=115727.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:16:14,926 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=115742.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:16:15,562 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1068, 1.0064, 1.0086, 0.4890, 0.9191, 1.1793, 1.2051, 1.0113], device='cuda:2'), covar=tensor([0.0860, 0.0549, 0.0581, 0.0566, 0.0586, 0.0572, 0.0445, 0.0625], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0149, 0.0125, 0.0123, 0.0130, 0.0129, 0.0142, 0.0148], device='cuda:2'), out_proj_covar=tensor([8.9982e-05, 1.0796e-04, 8.9652e-05, 8.7279e-05, 9.1453e-05, 9.1930e-05, 1.0173e-04, 1.0609e-04], device='cuda:2') 2023-03-27 01:16:16,239 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0 2023-03-27 01:16:24,029 INFO [finetune.py:976] (2/7) Epoch 21, batch 1200, loss[loss=0.1871, simple_loss=0.2626, pruned_loss=0.05583, over 4867.00 frames. ], tot_loss[loss=0.1774, simple_loss=0.2493, pruned_loss=0.05272, over 952080.93 frames. ], batch size: 34, lr: 3.20e-03, grad_scale: 32.0 2023-03-27 01:16:25,822 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.936e+01 1.465e+02 1.737e+02 2.048e+02 4.574e+02, threshold=3.475e+02, percent-clipped=2.0 2023-03-27 01:16:47,357 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=115790.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:16:56,822 INFO [finetune.py:976] (2/7) Epoch 21, batch 1250, loss[loss=0.1609, simple_loss=0.2189, pruned_loss=0.05148, over 4454.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.2481, pruned_loss=0.0532, over 954076.11 frames. ], batch size: 19, lr: 3.20e-03, grad_scale: 32.0 2023-03-27 01:17:13,343 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5349, 3.4728, 3.2810, 1.5590, 3.6576, 2.7594, 0.9894, 2.5143], device='cuda:2'), covar=tensor([0.2965, 0.2221, 0.1609, 0.3437, 0.1024, 0.1024, 0.4246, 0.1549], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0178, 0.0159, 0.0129, 0.0161, 0.0123, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 01:17:29,532 INFO [finetune.py:976] (2/7) Epoch 21, batch 1300, loss[loss=0.1751, simple_loss=0.245, pruned_loss=0.05258, over 4945.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2451, pruned_loss=0.05216, over 953970.56 frames.
], batch size: 33, lr: 3.20e-03, grad_scale: 32.0 2023-03-27 01:17:32,371 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.105e+02 1.563e+02 1.759e+02 2.181e+02 4.124e+02, threshold=3.519e+02, percent-clipped=1.0 2023-03-27 01:17:52,265 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0787, 1.8805, 1.6948, 1.7572, 1.8344, 1.8095, 1.8356, 2.5751], device='cuda:2'), covar=tensor([0.3129, 0.3445, 0.2905, 0.3259, 0.3456, 0.2264, 0.3472, 0.1424], device='cuda:2'), in_proj_covar=tensor([0.0288, 0.0262, 0.0233, 0.0277, 0.0253, 0.0223, 0.0253, 0.0234], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:17:53,434 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1002, 1.9266, 2.4934, 4.0721, 2.8244, 2.7521, 0.8052, 3.4305], device='cuda:2'), covar=tensor([0.1739, 0.1404, 0.1471, 0.0486, 0.0736, 0.1425, 0.2235, 0.0403], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0116, 0.0133, 0.0163, 0.0100, 0.0136, 0.0124, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 01:18:11,982 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.52 vs. limit=5.0 2023-03-27 01:18:12,431 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=115892.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 01:18:23,976 INFO [finetune.py:976] (2/7) Epoch 21, batch 1350, loss[loss=0.1981, simple_loss=0.2816, pruned_loss=0.05732, over 4910.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2449, pruned_loss=0.0519, over 954512.05 frames. ], batch size: 43, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:18:56,331 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3939, 1.2975, 1.2903, 1.3104, 0.7975, 2.0276, 0.7732, 1.2452], device='cuda:2'), covar=tensor([0.3052, 0.2501, 0.2121, 0.2336, 0.1901, 0.0395, 0.2705, 0.1278], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0115, 0.0120, 0.0122, 0.0113, 0.0096, 0.0094, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 01:19:00,565 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=115953.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 01:19:01,044 INFO [finetune.py:976] (2/7) Epoch 21, batch 1400, loss[loss=0.1828, simple_loss=0.26, pruned_loss=0.05279, over 4740.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.2482, pruned_loss=0.05262, over 953763.37 frames. ], batch size: 54, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:19:02,862 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.014e+02 1.541e+02 1.818e+02 2.071e+02 3.575e+02, threshold=3.635e+02, percent-clipped=1.0 2023-03-27 01:19:09,286 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. 
limit=2.0 2023-03-27 01:19:27,711 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0099, 1.8649, 1.5982, 1.7114, 1.8014, 1.7731, 1.8703, 2.4927], device='cuda:2'), covar=tensor([0.3718, 0.4062, 0.3564, 0.3976, 0.4179, 0.2463, 0.3690, 0.1714], device='cuda:2'), in_proj_covar=tensor([0.0287, 0.0261, 0.0231, 0.0275, 0.0251, 0.0221, 0.0251, 0.0233], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:19:35,502 INFO [finetune.py:976] (2/7) Epoch 21, batch 1450, loss[loss=0.1671, simple_loss=0.2466, pruned_loss=0.04384, over 4898.00 frames. ], tot_loss[loss=0.1777, simple_loss=0.2494, pruned_loss=0.05298, over 952018.45 frames. ], batch size: 43, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:19:43,015 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=116014.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:19:51,744 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=116026.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:19:52,342 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=116027.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:20:00,696 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.0713, 0.9108, 0.9459, 0.3824, 0.9510, 1.1074, 1.0892, 0.8980], device='cuda:2'), covar=tensor([0.0985, 0.0785, 0.0660, 0.0597, 0.0621, 0.0753, 0.0553, 0.0737], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0149, 0.0125, 0.0123, 0.0130, 0.0128, 0.0141, 0.0148], device='cuda:2'), out_proj_covar=tensor([8.9918e-05, 1.0780e-04, 8.9473e-05, 8.7318e-05, 9.1146e-05, 9.1593e-05, 1.0139e-04, 1.0594e-04], device='cuda:2') 2023-03-27 01:20:09,068 INFO [finetune.py:976] (2/7) Epoch 21, batch 1500, loss[loss=0.2401, simple_loss=0.2982, pruned_loss=0.09106, over 4810.00 frames. ], tot_loss[loss=0.1798, simple_loss=0.2513, pruned_loss=0.05419, over 951409.79 frames. ], batch size: 39, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:20:10,889 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.231e+02 1.719e+02 2.058e+02 2.312e+02 4.180e+02, threshold=4.116e+02, percent-clipped=2.0 2023-03-27 01:20:11,722 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.79 vs. 
limit=5.0 2023-03-27 01:20:23,170 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=116074.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:20:23,792 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=116075.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:20:23,861 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=116075.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 01:20:31,494 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7288, 0.9988, 1.8339, 1.6804, 1.5503, 1.4786, 1.5781, 1.7209], device='cuda:2'), covar=tensor([0.3611, 0.3678, 0.2954, 0.3344, 0.4183, 0.3475, 0.3899, 0.2651], device='cuda:2'), in_proj_covar=tensor([0.0253, 0.0242, 0.0263, 0.0281, 0.0280, 0.0256, 0.0290, 0.0245], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:20:32,636 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1773, 2.2035, 2.8383, 2.4590, 2.5515, 4.7206, 2.2761, 2.4831], device='cuda:2'), covar=tensor([0.0823, 0.1544, 0.0869, 0.0862, 0.1305, 0.0240, 0.1222, 0.1455], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0075, 0.0077, 0.0092, 0.0081, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 01:20:36,123 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2663, 2.1211, 1.8190, 2.1251, 2.0296, 1.9540, 2.0651, 2.8337], device='cuda:2'), covar=tensor([0.3897, 0.4179, 0.3601, 0.3835, 0.3817, 0.2657, 0.3780, 0.1639], device='cuda:2'), in_proj_covar=tensor([0.0286, 0.0261, 0.0232, 0.0275, 0.0252, 0.0222, 0.0252, 0.0233], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:20:40,958 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4809, 2.5850, 2.4491, 1.8210, 2.3044, 2.6739, 2.7393, 2.2027], device='cuda:2'), covar=tensor([0.0565, 0.0558, 0.0690, 0.0867, 0.1203, 0.0633, 0.0576, 0.1026], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0135, 0.0139, 0.0120, 0.0125, 0.0138, 0.0140, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:20:42,654 INFO [finetune.py:976] (2/7) Epoch 21, batch 1550, loss[loss=0.1898, simple_loss=0.251, pruned_loss=0.06427, over 4795.00 frames. ], tot_loss[loss=0.18, simple_loss=0.2514, pruned_loss=0.05432, over 952813.17 frames. 
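The zipformer.py:1188 lines record per-stack layer-dropping decisions: each encoder stack has its own staggered warmup window (warmup_begin/warmup_end of 666.7-1333.3 up through 3333.3-4000.0 above), yet single layers are still occasionally dropped at batch_count around 116,000, long after every window has closed, so some small stochastic layer-dropout rate evidently persists beyond warmup. A rough sketch of such a per-batch decision; the 0.5 and 0.075 rates are assumptions for illustration, not values from zipformer.py:

    # Sketch: choose encoder layers to drop for one batch, assuming a
    # higher drop rate inside the stack's warmup window and a small
    # residual rate afterwards. Both rates are assumed, not sourced.
    import random


    def pick_layers_to_drop(batch_count: float, num_layers: int,
                            warmup_begin: float, warmup_end: float) -> set:
        in_warmup = warmup_begin <= batch_count < warmup_end
        drop_prob = 0.5 if in_warmup else 0.075
        num_to_drop = 1 if random.random() < drop_prob else 0
        # Mirrors the shape of the log lines above, e.g.
        # num_to_drop=1, layers_to_drop={2}
        return set(random.sample(range(num_layers), num_to_drop))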
], batch size: 45, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:20:43,367 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=116105.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:20:48,774 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1602, 2.0557, 2.0727, 1.4946, 2.0782, 2.1709, 2.2009, 1.7159], device='cuda:2'), covar=tensor([0.0580, 0.0659, 0.0751, 0.0885, 0.0729, 0.0735, 0.0587, 0.1183], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0135, 0.0139, 0.0119, 0.0125, 0.0138, 0.0140, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:20:50,851 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=116116.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:21:07,216 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6622, 1.5641, 1.0826, 0.2708, 1.2637, 1.5021, 1.5372, 1.5028], device='cuda:2'), covar=tensor([0.1065, 0.0867, 0.1424, 0.2031, 0.1434, 0.2378, 0.2215, 0.0881], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0191, 0.0197, 0.0181, 0.0209, 0.0208, 0.0222, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:21:15,951 INFO [finetune.py:976] (2/7) Epoch 21, batch 1600, loss[loss=0.1635, simple_loss=0.2358, pruned_loss=0.04562, over 4821.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.2482, pruned_loss=0.05302, over 953038.47 frames. ], batch size: 38, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:21:17,775 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.061e+01 1.555e+02 1.841e+02 2.223e+02 4.654e+02, threshold=3.683e+02, percent-clipped=1.0 2023-03-27 01:21:20,331 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4879, 2.3264, 2.4148, 1.6627, 2.3699, 2.5332, 2.5596, 1.9480], device='cuda:2'), covar=tensor([0.0519, 0.0621, 0.0683, 0.0872, 0.0705, 0.0696, 0.0585, 0.1112], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0135, 0.0138, 0.0119, 0.0124, 0.0137, 0.0140, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:21:23,356 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=116166.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:21:29,517 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2652, 2.9638, 2.8264, 1.2701, 3.0321, 2.2185, 0.6036, 1.9610], device='cuda:2'), covar=tensor([0.2618, 0.2221, 0.1891, 0.3630, 0.1456, 0.1256, 0.4616, 0.1809], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0178, 0.0160, 0.0130, 0.0162, 0.0124, 0.0149, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 01:21:31,971 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=116177.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:21:49,854 INFO [finetune.py:976] (2/7) Epoch 21, batch 1650, loss[loss=0.1844, simple_loss=0.2569, pruned_loss=0.05599, over 4905.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.2458, pruned_loss=0.05247, over 954446.47 frames. 
], batch size: 37, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:21:56,599 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9794, 1.3627, 0.9430, 1.7502, 2.2214, 1.5408, 1.7396, 1.7335], device='cuda:2'), covar=tensor([0.1273, 0.1940, 0.1786, 0.1083, 0.1728, 0.1761, 0.1271, 0.1826], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0092, 0.0119, 0.0093, 0.0098, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-27 01:22:19,470 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=116248.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 01:22:23,483 INFO [finetune.py:976] (2/7) Epoch 21, batch 1700, loss[loss=0.1735, simple_loss=0.2442, pruned_loss=0.05142, over 4841.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2434, pruned_loss=0.0514, over 955772.52 frames. ], batch size: 47, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:22:25,324 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.524e+01 1.502e+02 1.774e+02 2.116e+02 3.203e+02, threshold=3.548e+02, percent-clipped=0.0 2023-03-27 01:22:59,246 INFO [finetune.py:976] (2/7) Epoch 21, batch 1750, loss[loss=0.1837, simple_loss=0.2552, pruned_loss=0.05611, over 4915.00 frames. ], tot_loss[loss=0.1757, simple_loss=0.2457, pruned_loss=0.0528, over 955925.77 frames. ], batch size: 37, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:23:19,670 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=116323.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:23:51,735 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.2020, 3.6229, 3.7931, 4.0894, 4.0014, 3.7371, 4.2937, 1.4416], device='cuda:2'), covar=tensor([0.0852, 0.0867, 0.0915, 0.0832, 0.1196, 0.1605, 0.0701, 0.5672], device='cuda:2'), in_proj_covar=tensor([0.0348, 0.0240, 0.0278, 0.0290, 0.0331, 0.0284, 0.0301, 0.0297], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:23:58,754 INFO [finetune.py:976] (2/7) Epoch 21, batch 1800, loss[loss=0.2096, simple_loss=0.2835, pruned_loss=0.06789, over 4751.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.2476, pruned_loss=0.05274, over 955822.16 frames. 
], batch size: 59, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:23:59,532 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5973, 0.6843, 1.5871, 1.4313, 1.3693, 1.3030, 1.3836, 1.5854], device='cuda:2'), covar=tensor([0.3591, 0.3717, 0.3357, 0.3485, 0.4753, 0.3702, 0.4352, 0.3147], device='cuda:2'), in_proj_covar=tensor([0.0254, 0.0241, 0.0262, 0.0281, 0.0280, 0.0256, 0.0290, 0.0245], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:24:00,588 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.248e+02 1.627e+02 1.938e+02 2.363e+02 5.057e+02, threshold=3.876e+02, percent-clipped=3.0 2023-03-27 01:24:02,494 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4091, 2.3299, 1.8868, 0.9425, 2.0238, 1.8576, 1.7157, 2.1080], device='cuda:2'), covar=tensor([0.1017, 0.0673, 0.1608, 0.2119, 0.1424, 0.2390, 0.2205, 0.0923], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0192, 0.0199, 0.0183, 0.0210, 0.0209, 0.0224, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:24:08,548 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=116370.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 01:24:09,811 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9691, 1.7199, 2.3682, 1.5173, 2.0785, 2.2114, 1.5467, 2.5042], device='cuda:2'), covar=tensor([0.1411, 0.1963, 0.1398, 0.2095, 0.0936, 0.1638, 0.2692, 0.0759], device='cuda:2'), in_proj_covar=tensor([0.0190, 0.0202, 0.0189, 0.0188, 0.0173, 0.0212, 0.0217, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:24:12,244 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5302, 1.4509, 1.3098, 1.6008, 1.5693, 1.5435, 1.0727, 1.3164], device='cuda:2'), covar=tensor([0.2158, 0.1993, 0.1904, 0.1608, 0.1549, 0.1280, 0.2456, 0.1813], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0211, 0.0213, 0.0195, 0.0244, 0.0189, 0.0219, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:24:18,614 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=116384.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:24:25,692 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8241, 4.5249, 4.2705, 2.3667, 4.6754, 3.4960, 0.9374, 3.1284], device='cuda:2'), covar=tensor([0.2090, 0.1257, 0.1188, 0.2687, 0.0703, 0.0782, 0.4171, 0.1282], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0177, 0.0158, 0.0129, 0.0161, 0.0123, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 01:24:31,788 INFO [finetune.py:976] (2/7) Epoch 21, batch 1850, loss[loss=0.1806, simple_loss=0.2429, pruned_loss=0.05918, over 4764.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2491, pruned_loss=0.05387, over 954422.83 frames. ], batch size: 54, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:24:34,918 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. 
limit=2.0 2023-03-27 01:24:50,468 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2370, 2.1560, 1.6453, 2.2641, 2.0973, 1.8308, 2.5186, 2.1880], device='cuda:2'), covar=tensor([0.1276, 0.2244, 0.3054, 0.2488, 0.2564, 0.1723, 0.2978, 0.1850], device='cuda:2'), in_proj_covar=tensor([0.0185, 0.0188, 0.0235, 0.0253, 0.0248, 0.0203, 0.0214, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:25:00,243 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=116446.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:25:04,915 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6271, 2.7785, 2.4406, 1.7933, 2.7332, 2.8038, 2.8823, 2.3157], device='cuda:2'), covar=tensor([0.0715, 0.0629, 0.0839, 0.0957, 0.0537, 0.0781, 0.0677, 0.1039], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0137, 0.0140, 0.0121, 0.0126, 0.0139, 0.0141, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:25:05,404 INFO [finetune.py:976] (2/7) Epoch 21, batch 1900, loss[loss=0.1623, simple_loss=0.2315, pruned_loss=0.0466, over 4760.00 frames. ], tot_loss[loss=0.1782, simple_loss=0.2497, pruned_loss=0.05334, over 955468.69 frames. ], batch size: 28, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:25:07,232 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.037e+02 1.572e+02 1.880e+02 2.123e+02 3.861e+02, threshold=3.760e+02, percent-clipped=0.0 2023-03-27 01:25:10,211 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=116461.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:25:16,978 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=116472.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:25:38,738 INFO [finetune.py:976] (2/7) Epoch 21, batch 1950, loss[loss=0.1355, simple_loss=0.2163, pruned_loss=0.02733, over 4746.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.2485, pruned_loss=0.05293, over 953380.59 frames. ], batch size: 27, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:25:39,486 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=116505.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:25:40,705 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=116507.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 01:25:45,372 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5307, 1.3665, 2.0289, 1.8387, 1.5728, 3.4682, 1.2990, 1.5306], device='cuda:2'), covar=tensor([0.0960, 0.1893, 0.1165, 0.0942, 0.1669, 0.0222, 0.1626, 0.1807], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0076, 0.0092, 0.0080, 0.0085, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 01:26:07,963 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=116548.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 01:26:11,459 INFO [finetune.py:976] (2/7) Epoch 21, batch 2000, loss[loss=0.1326, simple_loss=0.2095, pruned_loss=0.02792, over 4796.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.2461, pruned_loss=0.05234, over 953131.12 frames. 
], batch size: 29, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:26:13,785 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 7.123e+01 1.433e+02 1.690e+02 2.067e+02 3.885e+02, threshold=3.380e+02, percent-clipped=2.0 2023-03-27 01:26:19,758 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=116566.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:26:39,350 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=116596.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 01:26:39,409 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1438, 2.0103, 1.9796, 0.8795, 2.2540, 2.4524, 2.0954, 1.8270], device='cuda:2'), covar=tensor([0.0920, 0.0608, 0.0516, 0.0691, 0.0586, 0.0755, 0.0498, 0.0728], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0149, 0.0125, 0.0124, 0.0130, 0.0128, 0.0142, 0.0148], device='cuda:2'), out_proj_covar=tensor([8.9837e-05, 1.0803e-04, 8.9403e-05, 8.7394e-05, 9.1482e-05, 9.1759e-05, 1.0159e-04, 1.0590e-04], device='cuda:2') 2023-03-27 01:26:44,682 INFO [finetune.py:976] (2/7) Epoch 21, batch 2050, loss[loss=0.2279, simple_loss=0.2704, pruned_loss=0.09273, over 4427.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2436, pruned_loss=0.05184, over 954397.97 frames. ], batch size: 19, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:26:54,226 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=116618.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:26:56,687 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6610, 2.4827, 2.0024, 1.0102, 2.2540, 2.0260, 1.8937, 2.2476], device='cuda:2'), covar=tensor([0.0900, 0.0781, 0.1630, 0.2111, 0.1367, 0.2380, 0.2260, 0.0985], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0191, 0.0197, 0.0181, 0.0207, 0.0207, 0.0221, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:27:11,119 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.11 vs. limit=2.0 2023-03-27 01:27:18,445 INFO [finetune.py:976] (2/7) Epoch 21, batch 2100, loss[loss=0.1489, simple_loss=0.2108, pruned_loss=0.04351, over 4713.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2434, pruned_loss=0.05218, over 955543.66 frames. ], batch size: 23, lr: 3.20e-03, grad_scale: 32.0 2023-03-27 01:27:20,846 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.015e+02 1.617e+02 1.783e+02 2.198e+02 6.495e+02, threshold=3.567e+02, percent-clipped=4.0 2023-03-27 01:27:29,061 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=116670.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 01:27:34,426 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=116679.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:27:34,470 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=116679.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:27:51,925 INFO [finetune.py:976] (2/7) Epoch 21, batch 2150, loss[loss=0.2083, simple_loss=0.2681, pruned_loss=0.07428, over 4827.00 frames. ], tot_loss[loss=0.177, simple_loss=0.2466, pruned_loss=0.05373, over 954955.67 frames. 
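The scaling.py:679 lines compare a per-group whitening metric against a limit (metric=1.11 vs. limit=2.0 a little above, and metric near 4 vs. limit=5.0 for the single-group 384-channel case). A natural reading: the metric is 1.0 when a group's feature covariance is a multiple of the identity and grows as the eigenvalue spectrum spreads, with the module only intervening once the limit is exceeded. One way such a metric can be defined (a sketch, not the actual scaling.py code) is the mean squared covariance eigenvalue over the squared mean eigenvalue, computable from traces:

    # Sketch of a per-group whitening metric: 1.0 for a perfectly white
    # group, larger as the covariance spectrum spreads. A plausible
    # definition consistent with the "metric=... vs. limit=..." lines
    # above; not a quote of scaling.py.
    import torch


    def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
        """x: (num_frames, num_channels), channels divisible by num_groups."""
        n, c = x.shape
        d = c // num_groups
        xg = x.reshape(n, num_groups, d).permute(1, 2, 0)  # (groups, d, n)
        cov = torch.matmul(xg, xg.transpose(1, 2)) / n     # (groups, d, d)
        # mean(eig^2) / mean(eig)^2 per group, via traces:
        tr = cov.diagonal(dim1=1, dim2=2).sum(-1)          # trace(C)
        tr2 = (cov * cov).sum(dim=(1, 2))                  # trace(C @ C)
        return (d * tr2 / tr.clamp(min=1e-20) ** 2).mean()


    x = torch.randn(1000, 192)
    print(whitening_metric(x, num_groups=8))  # close to 1.0 for white noise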
], batch size: 30, lr: 3.20e-03, grad_scale: 32.0 2023-03-27 01:27:52,037 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=116704.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:28:00,963 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=116718.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:28:26,697 INFO [finetune.py:976] (2/7) Epoch 21, batch 2200, loss[loss=0.1274, simple_loss=0.2004, pruned_loss=0.02718, over 4771.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2491, pruned_loss=0.05447, over 956032.89 frames. ], batch size: 26, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:28:30,714 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.107e+02 1.636e+02 2.054e+02 2.505e+02 6.138e+02, threshold=4.108e+02, percent-clipped=5.0 2023-03-27 01:28:32,675 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=116761.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:28:39,690 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=116765.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:28:44,466 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=116772.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:28:53,680 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4156, 1.3631, 1.8373, 1.7698, 1.5776, 3.1799, 1.3004, 1.5575], device='cuda:2'), covar=tensor([0.0962, 0.1842, 0.1179, 0.0933, 0.1575, 0.0243, 0.1527, 0.1761], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0075, 0.0077, 0.0092, 0.0081, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 01:29:19,238 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=116802.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 01:29:19,266 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=116802.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:29:20,383 INFO [finetune.py:976] (2/7) Epoch 21, batch 2250, loss[loss=0.1372, simple_loss=0.2005, pruned_loss=0.03697, over 3998.00 frames. ], tot_loss[loss=0.1806, simple_loss=0.2508, pruned_loss=0.05515, over 955328.79 frames. ], batch size: 17, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:29:28,966 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=116809.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:29:39,989 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=116820.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:30:01,467 INFO [finetune.py:976] (2/7) Epoch 21, batch 2300, loss[loss=0.1424, simple_loss=0.2191, pruned_loss=0.03284, over 4891.00 frames. ], tot_loss[loss=0.1792, simple_loss=0.2504, pruned_loss=0.05405, over 956189.83 frames. 
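The zipformer.py:2441 dumps above are attention diagnostics: attn_weights_entropy holds one value per attention head (hence the 8-element tensors), where low entropy means a head concentrates on few positions and high entropy means nearly uniform attention; the covar fields track running covariances of the projections. A minimal sketch of the entropy part; attn_weights_entropy here is a hypothetical helper, not the zipformer.py implementation:

    # Sketch: per-head entropy of attention weights, the quantity the
    # attn_weights_entropy tensors above summarize. Hypothetical helper.
    import torch


    def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
        """attn: (num_heads, query_len, key_len), rows summing to 1.
        Returns the mean entropy per head, in nats."""
        ent = -(attn * (attn + 1e-20).log()).sum(dim=-1)  # (heads, query_len)
        return ent.mean(dim=-1)                           # (heads,)


    attn = torch.softmax(torch.randn(8, 50, 50), dim=-1)
    print(attn_weights_entropy(attn))  # one value per head, as in the dumps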
], batch size: 32, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:30:04,886 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.160e+02 1.477e+02 1.740e+02 2.172e+02 4.454e+02, threshold=3.479e+02, percent-clipped=1.0 2023-03-27 01:30:06,812 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=116861.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:30:08,066 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=116863.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:30:34,137 INFO [finetune.py:976] (2/7) Epoch 21, batch 2350, loss[loss=0.1809, simple_loss=0.2544, pruned_loss=0.0537, over 4754.00 frames. ], tot_loss[loss=0.1773, simple_loss=0.2485, pruned_loss=0.0531, over 956764.74 frames. ], batch size: 59, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:31:07,400 INFO [finetune.py:976] (2/7) Epoch 21, batch 2400, loss[loss=0.1734, simple_loss=0.237, pruned_loss=0.05492, over 4829.00 frames. ], tot_loss[loss=0.1758, simple_loss=0.2464, pruned_loss=0.05263, over 956750.44 frames. ], batch size: 33, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:31:09,768 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.097e+02 1.566e+02 1.863e+02 2.219e+02 3.648e+02, threshold=3.726e+02, percent-clipped=1.0 2023-03-27 01:31:16,925 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2522, 1.3694, 1.7170, 1.7367, 1.5487, 3.2087, 1.3588, 1.5205], device='cuda:2'), covar=tensor([0.1014, 0.1895, 0.1094, 0.0932, 0.1613, 0.0268, 0.1540, 0.1853], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0075, 0.0077, 0.0092, 0.0081, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 01:31:21,634 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=116974.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:31:25,173 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=116979.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:31:36,699 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.4397, 3.0559, 2.8057, 1.5072, 2.9604, 2.4551, 2.4061, 2.6900], device='cuda:2'), covar=tensor([0.0878, 0.0910, 0.1985, 0.2295, 0.1877, 0.2583, 0.2068, 0.1193], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0192, 0.0199, 0.0183, 0.0209, 0.0208, 0.0223, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:31:40,995 INFO [finetune.py:976] (2/7) Epoch 21, batch 2450, loss[loss=0.1849, simple_loss=0.2455, pruned_loss=0.06215, over 4898.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2434, pruned_loss=0.05162, over 955817.22 frames. ], batch size: 35, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:31:57,424 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=117027.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:32:14,703 INFO [finetune.py:976] (2/7) Epoch 21, batch 2500, loss[loss=0.1638, simple_loss=0.2447, pruned_loss=0.04149, over 4809.00 frames. ], tot_loss[loss=0.1751, simple_loss=0.2448, pruned_loss=0.05272, over 953560.75 frames. 
], batch size: 45, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:32:17,114 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.025e+02 1.510e+02 1.797e+02 2.340e+02 3.968e+02, threshold=3.593e+02, percent-clipped=1.0 2023-03-27 01:32:18,388 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=117060.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:32:40,226 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.86 vs. limit=2.0 2023-03-27 01:32:46,783 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=117102.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:32:47,903 INFO [finetune.py:976] (2/7) Epoch 21, batch 2550, loss[loss=0.2238, simple_loss=0.2877, pruned_loss=0.0799, over 4856.00 frames. ], tot_loss[loss=0.178, simple_loss=0.2489, pruned_loss=0.0536, over 954675.58 frames. ], batch size: 44, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:32:52,335 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6283, 1.4616, 1.6156, 0.8545, 1.5265, 1.5790, 1.5705, 1.3994], device='cuda:2'), covar=tensor([0.0627, 0.0793, 0.0622, 0.0965, 0.0924, 0.0818, 0.0665, 0.1232], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0134, 0.0138, 0.0119, 0.0124, 0.0137, 0.0139, 0.0160], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:33:07,518 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.1359, 4.7928, 4.4226, 2.5268, 4.9094, 3.7735, 0.8290, 3.3341], device='cuda:2'), covar=tensor([0.2010, 0.1877, 0.1324, 0.3139, 0.0679, 0.0783, 0.4647, 0.1493], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0178, 0.0158, 0.0130, 0.0161, 0.0123, 0.0148, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 01:33:19,397 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=117150.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:33:21,772 INFO [finetune.py:976] (2/7) Epoch 21, batch 2600, loss[loss=0.1895, simple_loss=0.2698, pruned_loss=0.05461, over 4905.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.2496, pruned_loss=0.05398, over 955236.93 frames. ], batch size: 43, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:33:24,219 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.102e+02 1.533e+02 1.830e+02 2.226e+02 4.351e+02, threshold=3.661e+02, percent-clipped=3.0 2023-03-27 01:33:24,297 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=117158.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:33:26,115 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=117161.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:33:38,970 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9409, 2.1024, 1.8032, 1.9104, 2.7196, 2.5934, 2.2001, 2.1530], device='cuda:2'), covar=tensor([0.0367, 0.0391, 0.0554, 0.0346, 0.0196, 0.0536, 0.0306, 0.0415], device='cuda:2'), in_proj_covar=tensor([0.0097, 0.0107, 0.0143, 0.0112, 0.0099, 0.0110, 0.0099, 0.0112], device='cuda:2'), out_proj_covar=tensor([7.5503e-05, 8.1804e-05, 1.1277e-04, 8.5667e-05, 7.6888e-05, 8.1206e-05, 7.3981e-05, 8.5809e-05], device='cuda:2') 2023-03-27 01:33:48,246 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.68 vs. 
limit=2.0 2023-03-27 01:34:06,271 INFO [finetune.py:976] (2/7) Epoch 21, batch 2650, loss[loss=0.1818, simple_loss=0.2591, pruned_loss=0.05228, over 4903.00 frames. ], tot_loss[loss=0.1799, simple_loss=0.2508, pruned_loss=0.05452, over 955064.37 frames. ], batch size: 37, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:34:09,386 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=117209.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:34:18,200 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7326, 1.4473, 0.9172, 1.6459, 2.1341, 1.5224, 1.5894, 1.6267], device='cuda:2'), covar=tensor([0.1417, 0.1897, 0.1859, 0.1162, 0.1923, 0.1952, 0.1379, 0.1989], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0094, 0.0109, 0.0092, 0.0119, 0.0093, 0.0097, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-27 01:34:26,095 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=117221.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:35:03,766 INFO [finetune.py:976] (2/7) Epoch 21, batch 2700, loss[loss=0.1902, simple_loss=0.2644, pruned_loss=0.058, over 4876.00 frames. ], tot_loss[loss=0.179, simple_loss=0.25, pruned_loss=0.05403, over 951067.11 frames. ], batch size: 31, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:35:03,885 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3530, 2.2753, 1.8714, 0.9433, 1.9953, 1.8259, 1.6869, 2.0097], device='cuda:2'), covar=tensor([0.1031, 0.0765, 0.1717, 0.2249, 0.1485, 0.2342, 0.2303, 0.1099], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0191, 0.0197, 0.0182, 0.0208, 0.0207, 0.0221, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:35:06,200 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.072e+02 1.522e+02 1.732e+02 2.127e+02 4.053e+02, threshold=3.464e+02, percent-clipped=3.0 2023-03-27 01:35:20,303 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-27 01:35:23,789 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=117274.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:35:30,660 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=117282.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:35:36,575 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2354, 2.1805, 1.8495, 2.3727, 2.1877, 1.9949, 2.5216, 2.3173], device='cuda:2'), covar=tensor([0.1181, 0.1999, 0.2450, 0.2068, 0.2060, 0.1308, 0.2673, 0.1519], device='cuda:2'), in_proj_covar=tensor([0.0186, 0.0188, 0.0235, 0.0253, 0.0247, 0.0204, 0.0214, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:35:45,278 INFO [finetune.py:976] (2/7) Epoch 21, batch 2750, loss[loss=0.1545, simple_loss=0.2161, pruned_loss=0.04643, over 4845.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.2477, pruned_loss=0.05337, over 950388.33 frames. 
], batch size: 49, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:35:52,673 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6344, 1.5106, 1.0883, 0.2538, 1.2569, 1.4845, 1.4146, 1.4490], device='cuda:2'), covar=tensor([0.0882, 0.0791, 0.1303, 0.1994, 0.1289, 0.2266, 0.2408, 0.0838], device='cuda:2'), in_proj_covar=tensor([0.0167, 0.0189, 0.0194, 0.0180, 0.0206, 0.0205, 0.0219, 0.0192], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:35:56,248 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=117322.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:36:01,562 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=117329.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:36:18,643 INFO [finetune.py:976] (2/7) Epoch 21, batch 2800, loss[loss=0.1658, simple_loss=0.2155, pruned_loss=0.05805, over 4050.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2433, pruned_loss=0.05156, over 950892.64 frames. ], batch size: 17, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:36:21,562 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.020e+02 1.495e+02 1.752e+02 2.115e+02 2.888e+02, threshold=3.503e+02, percent-clipped=0.0 2023-03-27 01:36:22,903 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=117360.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:36:42,938 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=117390.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:36:48,412 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.62 vs. limit=5.0 2023-03-27 01:36:49,657 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6062, 1.1503, 0.7917, 1.3949, 2.0706, 0.8275, 1.3263, 1.3278], device='cuda:2'), covar=tensor([0.1553, 0.2192, 0.1736, 0.1291, 0.1970, 0.1924, 0.1681, 0.2072], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0092, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-27 01:36:52,478 INFO [finetune.py:976] (2/7) Epoch 21, batch 2850, loss[loss=0.1561, simple_loss=0.2352, pruned_loss=0.03851, over 4840.00 frames. ], tot_loss[loss=0.1712, simple_loss=0.2412, pruned_loss=0.05062, over 952483.51 frames. ], batch size: 30, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:36:53,991 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.49 vs. limit=2.0 2023-03-27 01:36:54,969 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=117408.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:37:25,543 INFO [finetune.py:976] (2/7) Epoch 21, batch 2900, loss[loss=0.1424, simple_loss=0.2304, pruned_loss=0.02715, over 4823.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2449, pruned_loss=0.05224, over 950481.68 frames. ], batch size: 33, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:37:28,392 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.016e+02 1.554e+02 1.875e+02 2.295e+02 6.888e+02, threshold=3.749e+02, percent-clipped=2.0 2023-03-27 01:37:28,516 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=117458.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:37:32,138 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. 
limit=2.0 2023-03-27 01:37:57,556 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7938, 1.6825, 1.4893, 1.8910, 2.4079, 1.8993, 1.6012, 1.4114], device='cuda:2'), covar=tensor([0.2187, 0.1967, 0.1927, 0.1629, 0.1556, 0.1210, 0.2396, 0.1916], device='cuda:2'), in_proj_covar=tensor([0.0245, 0.0210, 0.0213, 0.0195, 0.0243, 0.0188, 0.0218, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:37:59,203 INFO [finetune.py:976] (2/7) Epoch 21, batch 2950, loss[loss=0.2291, simple_loss=0.2824, pruned_loss=0.0879, over 4817.00 frames. ], tot_loss[loss=0.1787, simple_loss=0.2494, pruned_loss=0.05404, over 952069.86 frames. ], batch size: 30, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:38:00,492 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=117506.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:38:00,543 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8053, 1.7556, 1.6949, 1.6843, 1.3228, 4.1319, 1.7890, 2.0048], device='cuda:2'), covar=tensor([0.3173, 0.2397, 0.1940, 0.2214, 0.1578, 0.0140, 0.2273, 0.1184], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0123, 0.0114, 0.0096, 0.0095, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 01:38:32,247 INFO [finetune.py:976] (2/7) Epoch 21, batch 3000, loss[loss=0.1555, simple_loss=0.2425, pruned_loss=0.03422, over 4821.00 frames. ], tot_loss[loss=0.1789, simple_loss=0.2501, pruned_loss=0.05386, over 953327.41 frames. ], batch size: 39, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:38:32,247 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-27 01:38:34,030 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8216, 1.2224, 1.9714, 1.7696, 1.7081, 1.5786, 1.7194, 1.7960], device='cuda:2'), covar=tensor([0.3582, 0.3808, 0.3109, 0.3730, 0.4624, 0.3650, 0.4381, 0.2849], device='cuda:2'), in_proj_covar=tensor([0.0255, 0.0243, 0.0264, 0.0283, 0.0281, 0.0257, 0.0291, 0.0245], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:38:42,800 INFO [finetune.py:1010] (2/7) Epoch 21, validation: loss=0.1567, simple_loss=0.2253, pruned_loss=0.04408, over 2265189.00 frames. 2023-03-27 01:38:42,800 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB 2023-03-27 01:38:45,675 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.002e+02 1.534e+02 1.924e+02 2.362e+02 3.621e+02, threshold=3.849e+02, percent-clipped=0.0 2023-03-27 01:38:48,291 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.65 vs. limit=2.0 2023-03-27 01:38:53,815 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.98 vs. limit=2.0 2023-03-27 01:39:00,136 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=117577.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:39:17,642 INFO [finetune.py:976] (2/7) Epoch 21, batch 3050, loss[loss=0.162, simple_loss=0.2414, pruned_loss=0.04128, over 4739.00 frames. ], tot_loss[loss=0.1793, simple_loss=0.251, pruned_loss=0.0538, over 953347.69 frames. ], batch size: 54, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:39:44,891 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. 
limit=2.0 2023-03-27 01:40:13,669 INFO [finetune.py:976] (2/7) Epoch 21, batch 3100, loss[loss=0.1626, simple_loss=0.2429, pruned_loss=0.04115, over 4817.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.2482, pruned_loss=0.05254, over 953146.98 frames. ], batch size: 38, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:40:19,667 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.886e+01 1.482e+02 1.759e+02 2.208e+02 4.258e+02, threshold=3.518e+02, percent-clipped=1.0 2023-03-27 01:40:46,224 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3794, 2.0712, 2.7106, 1.6561, 2.3568, 2.6197, 1.8058, 2.7115], device='cuda:2'), covar=tensor([0.1193, 0.1986, 0.1350, 0.2137, 0.0880, 0.1325, 0.2883, 0.0852], device='cuda:2'), in_proj_covar=tensor([0.0193, 0.0205, 0.0191, 0.0190, 0.0174, 0.0214, 0.0218, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:40:46,800 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=117685.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:40:58,311 INFO [finetune.py:976] (2/7) Epoch 21, batch 3150, loss[loss=0.1699, simple_loss=0.2402, pruned_loss=0.04985, over 4907.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2454, pruned_loss=0.05211, over 954016.29 frames. ], batch size: 36, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:41:31,635 INFO [finetune.py:976] (2/7) Epoch 21, batch 3200, loss[loss=0.1529, simple_loss=0.2258, pruned_loss=0.04002, over 4911.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2424, pruned_loss=0.05109, over 953775.88 frames. ], batch size: 36, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:41:34,036 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.069e+02 1.569e+02 1.801e+02 2.101e+02 4.822e+02, threshold=3.602e+02, percent-clipped=2.0 2023-03-27 01:42:02,034 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2224, 2.8719, 2.7973, 1.2150, 3.0313, 2.2131, 0.7393, 1.8725], device='cuda:2'), covar=tensor([0.2408, 0.2308, 0.1620, 0.3416, 0.1321, 0.1137, 0.4118, 0.1600], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0178, 0.0157, 0.0130, 0.0160, 0.0122, 0.0147, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 01:42:05,172 INFO [finetune.py:976] (2/7) Epoch 21, batch 3250, loss[loss=0.1524, simple_loss=0.2333, pruned_loss=0.03575, over 4937.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2417, pruned_loss=0.05086, over 951921.26 frames. ], batch size: 38, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:42:23,379 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7695, 4.4420, 4.1650, 2.0137, 4.4932, 3.3823, 0.7863, 2.9682], device='cuda:2'), covar=tensor([0.2509, 0.1827, 0.1262, 0.3505, 0.0878, 0.0888, 0.4727, 0.1513], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0177, 0.0157, 0.0129, 0.0160, 0.0122, 0.0146, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 01:42:38,401 INFO [finetune.py:976] (2/7) Epoch 21, batch 3300, loss[loss=0.184, simple_loss=0.2615, pruned_loss=0.05324, over 4898.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.245, pruned_loss=0.05139, over 953969.38 frames. 
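Returning to the validation record a little above (finetune.py:1010): validation loss=0.1567 is a per-frame average over the entire dev set at once, 2,265,189 frames, unlike the training tot_loss, which covers a running window of roughly 950k frames. A sketch of frame-weighted validation in that spirit; validate and the assumed (summed_loss, num_frames) return convention are illustrative, not the actual finetune.py code:

    # Sketch: frame-weighted validation loss, matching the
    # "validation: loss=... over 2265189.00 frames." record above.
    # The model interface shown is an assumption for illustration.
    import torch


    def validate(model, dev_loader, device) -> float:
        model.eval()
        tot_loss, tot_frames = 0.0, 0.0
        with torch.no_grad():
            for batch in dev_loader:
                # assume the model returns (summed_loss, num_frames) per batch
                loss, num_frames = model(
                    **{k: v.to(device) for k, v in batch.items()})
                tot_loss += loss.item()
                tot_frames += num_frames
        return tot_loss / tot_frames  # per-frame loss over the whole dev set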
], batch size: 35, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:42:40,847 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.148e+02 1.638e+02 1.917e+02 2.241e+02 9.038e+02, threshold=3.833e+02, percent-clipped=2.0 2023-03-27 01:42:54,833 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=117877.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:43:11,570 INFO [finetune.py:976] (2/7) Epoch 21, batch 3350, loss[loss=0.2109, simple_loss=0.2785, pruned_loss=0.07167, over 4802.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.2467, pruned_loss=0.05221, over 954021.88 frames. ], batch size: 45, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:43:25,773 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=117925.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:43:45,043 INFO [finetune.py:976] (2/7) Epoch 21, batch 3400, loss[loss=0.175, simple_loss=0.2562, pruned_loss=0.04696, over 4818.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.2482, pruned_loss=0.05251, over 954762.60 frames. ], batch size: 45, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:43:47,450 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.189e+02 1.619e+02 1.880e+02 2.233e+02 5.629e+02, threshold=3.761e+02, percent-clipped=2.0 2023-03-27 01:44:05,869 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=117985.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:44:18,638 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9310, 2.0391, 1.6995, 1.7067, 2.4615, 2.4407, 1.9632, 2.0135], device='cuda:2'), covar=tensor([0.0367, 0.0397, 0.0618, 0.0370, 0.0245, 0.0506, 0.0429, 0.0388], device='cuda:2'), in_proj_covar=tensor([0.0098, 0.0107, 0.0144, 0.0112, 0.0099, 0.0110, 0.0101, 0.0113], device='cuda:2'), out_proj_covar=tensor([7.5933e-05, 8.2125e-05, 1.1337e-04, 8.5943e-05, 7.7362e-05, 8.1266e-05, 7.4969e-05, 8.6007e-05], device='cuda:2') 2023-03-27 01:44:19,699 INFO [finetune.py:976] (2/7) Epoch 21, batch 3450, loss[loss=0.2056, simple_loss=0.2612, pruned_loss=0.07494, over 4758.00 frames. ], tot_loss[loss=0.1761, simple_loss=0.2481, pruned_loss=0.05205, over 955713.23 frames. ], batch size: 28, lr: 3.18e-03, grad_scale: 32.0 2023-03-27 01:44:21,127 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. 
limit=2.0 2023-03-27 01:44:39,325 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=118033.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:44:39,414 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8920, 1.7743, 1.5191, 1.3812, 1.8913, 1.6176, 1.8431, 1.8633], device='cuda:2'), covar=tensor([0.1488, 0.2044, 0.3244, 0.2667, 0.2837, 0.1824, 0.3091, 0.1888], device='cuda:2'), in_proj_covar=tensor([0.0187, 0.0188, 0.0235, 0.0253, 0.0247, 0.0204, 0.0215, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:44:42,942 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1798, 1.9768, 1.4542, 0.5784, 1.6475, 1.8144, 1.6331, 1.8277], device='cuda:2'), covar=tensor([0.0880, 0.0740, 0.1448, 0.2034, 0.1360, 0.2458, 0.2185, 0.0862], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0190, 0.0197, 0.0182, 0.0208, 0.0208, 0.0220, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:44:44,124 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=118040.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:44:59,765 INFO [finetune.py:976] (2/7) Epoch 21, batch 3500, loss[loss=0.1597, simple_loss=0.2233, pruned_loss=0.04809, over 4217.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.2466, pruned_loss=0.05201, over 955875.18 frames. ], batch size: 18, lr: 3.18e-03, grad_scale: 32.0 2023-03-27 01:45:02,213 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.127e+01 1.501e+02 1.833e+02 2.184e+02 3.839e+02, threshold=3.666e+02, percent-clipped=2.0 2023-03-27 01:45:33,198 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8249, 1.0224, 1.8345, 1.6903, 1.5955, 1.5134, 1.6723, 1.7619], device='cuda:2'), covar=tensor([0.3464, 0.3459, 0.2782, 0.3359, 0.4061, 0.3383, 0.3413, 0.2657], device='cuda:2'), in_proj_covar=tensor([0.0256, 0.0243, 0.0264, 0.0283, 0.0282, 0.0257, 0.0291, 0.0246], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:45:52,327 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=118101.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:45:57,919 INFO [finetune.py:976] (2/7) Epoch 21, batch 3550, loss[loss=0.1856, simple_loss=0.2488, pruned_loss=0.06116, over 4907.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2437, pruned_loss=0.05128, over 953893.28 frames. ], batch size: 35, lr: 3.18e-03, grad_scale: 32.0 2023-03-27 01:46:27,275 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.10 vs. limit=2.0 2023-03-27 01:46:30,089 INFO [finetune.py:976] (2/7) Epoch 21, batch 3600, loss[loss=0.1634, simple_loss=0.2251, pruned_loss=0.05082, over 4761.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2408, pruned_loss=0.05025, over 956097.21 frames. 
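On the tot_loss fields: the frame count they are computed over hovers near 950,000 throughout instead of growing with the epoch, which is consistent with an exponentially decaying frame-weighted average rather than a full-history mean. With batches of roughly 4,700 frames, a decay of 0.995 gives a steady-state window of about 4,700/0.005, i.e. roughly 940k frames, close to what the records show. A sketch under that assumption (RunningLoss and the 0.995 decay are illustrative, not taken from the training code):

    # Sketch: a decaying, frame-weighted running average that would keep
    # the "over ~950,000 frames" window roughly constant, as in the
    # tot_loss records above. The 0.995 decay is an assumption.
    class RunningLoss:
        def __init__(self, decay: float = 0.995):
            self.decay = decay
            self.loss_sum = 0.0    # decayed sum of (loss * frames)
            self.frame_sum = 0.0   # decayed sum of frames

        def update(self, loss: float, num_frames: float) -> None:
            self.loss_sum = self.decay * self.loss_sum + loss * num_frames
            self.frame_sum = self.decay * self.frame_sum + num_frames

        @property
        def value(self) -> float:
            return self.loss_sum / max(self.frame_sum, 1.0)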
], batch size: 26, lr: 3.18e-03, grad_scale: 32.0 2023-03-27 01:46:33,040 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.102e+02 1.558e+02 1.902e+02 2.180e+02 3.976e+02, threshold=3.804e+02, percent-clipped=2.0 2023-03-27 01:46:39,827 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=118168.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:46:51,073 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4707, 1.7971, 0.8598, 2.3065, 2.6761, 1.9718, 1.9971, 2.1119], device='cuda:2'), covar=tensor([0.1278, 0.1790, 0.1952, 0.1110, 0.1617, 0.1732, 0.1271, 0.1983], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0095, 0.0111, 0.0093, 0.0121, 0.0094, 0.0099, 0.0090], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 01:46:54,044 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0113, 2.0160, 1.9211, 2.3947, 2.4516, 2.3198, 1.7729, 1.6835], device='cuda:2'), covar=tensor([0.2291, 0.1854, 0.1768, 0.1433, 0.1810, 0.1119, 0.2509, 0.1967], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0210, 0.0213, 0.0195, 0.0244, 0.0189, 0.0218, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:47:03,624 INFO [finetune.py:976] (2/7) Epoch 21, batch 3650, loss[loss=0.176, simple_loss=0.259, pruned_loss=0.0465, over 4757.00 frames. ], tot_loss[loss=0.1725, simple_loss=0.2427, pruned_loss=0.0511, over 955643.39 frames. ], batch size: 59, lr: 3.18e-03, grad_scale: 32.0 2023-03-27 01:47:19,807 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6307, 2.3414, 1.8512, 0.9772, 1.9983, 2.0059, 1.8289, 2.0474], device='cuda:2'), covar=tensor([0.0781, 0.0882, 0.1759, 0.2097, 0.1530, 0.2110, 0.2285, 0.0999], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0191, 0.0197, 0.0182, 0.0208, 0.0208, 0.0221, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:47:19,817 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=118229.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 01:47:36,721 INFO [finetune.py:976] (2/7) Epoch 21, batch 3700, loss[loss=0.1481, simple_loss=0.2323, pruned_loss=0.03198, over 4820.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.2464, pruned_loss=0.05244, over 954966.69 frames. 
], batch size: 38, lr: 3.18e-03, grad_scale: 32.0 2023-03-27 01:47:37,982 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2736, 2.3335, 2.1368, 1.5319, 2.2250, 2.3803, 2.2811, 1.9547], device='cuda:2'), covar=tensor([0.0598, 0.0542, 0.0723, 0.0873, 0.0628, 0.0662, 0.0651, 0.1027], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0135, 0.0139, 0.0120, 0.0125, 0.0138, 0.0139, 0.0160], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:47:39,056 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.015e+02 1.607e+02 1.941e+02 2.377e+02 3.454e+02, threshold=3.882e+02, percent-clipped=0.0 2023-03-27 01:47:46,807 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=118269.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:48:00,447 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=118289.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:48:10,298 INFO [finetune.py:976] (2/7) Epoch 21, batch 3750, loss[loss=0.1871, simple_loss=0.2626, pruned_loss=0.05579, over 4894.00 frames. ], tot_loss[loss=0.1769, simple_loss=0.2482, pruned_loss=0.05282, over 954211.55 frames. ], batch size: 37, lr: 3.18e-03, grad_scale: 32.0 2023-03-27 01:48:26,332 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=118329.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:48:26,956 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=118330.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:48:40,957 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=118350.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:48:40,977 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4421, 2.3560, 2.1246, 2.4331, 2.3583, 2.3079, 2.2684, 3.2036], device='cuda:2'), covar=tensor([0.3646, 0.4750, 0.3212, 0.4288, 0.4104, 0.2366, 0.4394, 0.1534], device='cuda:2'), in_proj_covar=tensor([0.0288, 0.0262, 0.0233, 0.0277, 0.0253, 0.0223, 0.0252, 0.0235], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 01:48:43,713 INFO [finetune.py:976] (2/7) Epoch 21, batch 3800, loss[loss=0.1969, simple_loss=0.2707, pruned_loss=0.06153, over 4844.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.2488, pruned_loss=0.05232, over 954553.25 frames. ], batch size: 30, lr: 3.18e-03, grad_scale: 32.0 2023-03-27 01:48:46,092 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.364e+01 1.561e+02 1.815e+02 2.293e+02 4.441e+02, threshold=3.631e+02, percent-clipped=1.0 2023-03-27 01:49:06,630 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=118390.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:49:08,519 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0 2023-03-27 01:49:11,212 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=118396.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:49:17,070 INFO [finetune.py:976] (2/7) Epoch 21, batch 3850, loss[loss=0.1991, simple_loss=0.2686, pruned_loss=0.06477, over 4891.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2472, pruned_loss=0.05135, over 955309.77 frames. 
2023-03-27 01:49:41,132 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1324, 2.1350, 1.8618, 2.3547, 2.9037, 2.2016, 2.2469, 1.6401], device='cuda:2'), covar=tensor([0.2110, 0.1938, 0.1841, 0.1542, 0.1530, 0.1134, 0.1882, 0.1845], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0210, 0.0212, 0.0194, 0.0243, 0.0188, 0.0217, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 01:49:50,287 INFO [finetune.py:976] (2/7) Epoch 21, batch 3900, loss[loss=0.1672, simple_loss=0.2314, pruned_loss=0.05153, over 4848.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2453, pruned_loss=0.05093, over 957865.01 frames. ], batch size: 49, lr: 3.18e-03, grad_scale: 32.0
2023-03-27 01:49:52,685 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.028e+02 1.530e+02 1.819e+02 2.327e+02 4.856e+02, threshold=3.639e+02, percent-clipped=2.0
2023-03-27 01:50:25,014 INFO [finetune.py:976] (2/7) Epoch 21, batch 3950, loss[loss=0.1504, simple_loss=0.2256, pruned_loss=0.03756, over 4876.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.2425, pruned_loss=0.05002, over 957792.16 frames. ], batch size: 31, lr: 3.18e-03, grad_scale: 32.0
2023-03-27 01:50:28,578 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3460, 1.5354, 0.6448, 2.1381, 2.7049, 1.8455, 1.9371, 1.9525], device='cuda:2'), covar=tensor([0.1269, 0.1987, 0.2208, 0.1056, 0.1632, 0.1789, 0.1335, 0.1938], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0095, 0.0111, 0.0092, 0.0120, 0.0093, 0.0098, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2')
2023-03-27 01:50:43,586 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=118524.0, num_to_drop=1, layers_to_drop={3}
2023-03-27 01:51:19,653 INFO [finetune.py:976] (2/7) Epoch 21, batch 4000, loss[loss=0.214, simple_loss=0.2794, pruned_loss=0.07424, over 4823.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2424, pruned_loss=0.05065, over 956850.94 frames. ], batch size: 39, lr: 3.18e-03, grad_scale: 32.0
2023-03-27 01:51:26,592 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.742e+01 1.543e+02 1.809e+02 2.201e+02 4.154e+02, threshold=3.618e+02, percent-clipped=2.0
2023-03-27 01:51:56,640 INFO [finetune.py:976] (2/7) Epoch 21, batch 4050, loss[loss=0.197, simple_loss=0.2738, pruned_loss=0.06005, over 4828.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2449, pruned_loss=0.05173, over 955596.85 frames. ], batch size: 30, lr: 3.18e-03, grad_scale: 32.0
2023-03-27 01:52:07,470 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.94 vs. limit=5.0
2023-03-27 01:52:11,484 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=118625.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 01:52:15,695 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=118631.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 01:52:19,368 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0
2023-03-27 01:52:24,044 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5994, 1.1746, 0.9945, 1.5680, 1.9784, 1.4482, 1.4231, 1.6236], device='cuda:2'), covar=tensor([0.1559, 0.2104, 0.1929, 0.1242, 0.2103, 0.2133, 0.1486, 0.1829], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0095, 0.0110, 0.0092, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-27 01:52:24,639 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=118645.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 01:52:30,074 INFO [finetune.py:976] (2/7) Epoch 21, batch 4100, loss[loss=0.1591, simple_loss=0.2365, pruned_loss=0.04081, over 4737.00 frames. ], tot_loss[loss=0.177, simple_loss=0.2483, pruned_loss=0.05287, over 956869.96 frames. ], batch size: 59, lr: 3.18e-03, grad_scale: 64.0
2023-03-27 01:52:33,508 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.828e+01 1.601e+02 1.824e+02 2.338e+02 3.980e+02, threshold=3.647e+02, percent-clipped=1.0
2023-03-27 01:52:51,562 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=118685.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 01:52:56,323 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=118692.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 01:52:58,683 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=118696.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 01:53:03,515 INFO [finetune.py:976] (2/7) Epoch 21, batch 4150, loss[loss=0.2018, simple_loss=0.2814, pruned_loss=0.06112, over 4806.00 frames. ], tot_loss[loss=0.1783, simple_loss=0.2495, pruned_loss=0.05352, over 955166.50 frames. ], batch size: 40, lr: 3.18e-03, grad_scale: 64.0
2023-03-27 01:53:31,225 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=118744.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 01:53:37,401 INFO [finetune.py:976] (2/7) Epoch 21, batch 4200, loss[loss=0.138, simple_loss=0.2195, pruned_loss=0.02828, over 4761.00 frames. ], tot_loss[loss=0.1783, simple_loss=0.2503, pruned_loss=0.0531, over 955901.39 frames. ], batch size: 26, lr: 3.18e-03, grad_scale: 64.0
2023-03-27 01:53:39,820 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.932e+01 1.492e+02 1.761e+02 2.066e+02 3.902e+02, threshold=3.521e+02, percent-clipped=1.0
2023-03-27 01:53:40,031 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.09 vs. limit=5.0
2023-03-27 01:53:45,823 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0
2023-03-27 01:53:47,361 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0664, 1.6907, 2.3451, 3.7868, 2.6052, 2.5690, 0.8037, 3.0970], device='cuda:2'), covar=tensor([0.1492, 0.1369, 0.1306, 0.0649, 0.0704, 0.1827, 0.1932, 0.0445], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0116, 0.0133, 0.0163, 0.0101, 0.0137, 0.0125, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-27 01:54:11,363 INFO [finetune.py:976] (2/7) Epoch 21, batch 4250, loss[loss=0.1429, simple_loss=0.2211, pruned_loss=0.03237, over 4811.00 frames. ], tot_loss[loss=0.1762, simple_loss=0.2481, pruned_loss=0.0521, over 956278.31 frames. ], batch size: 25, lr: 3.18e-03, grad_scale: 64.0
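
The learning rate decays very slowly here: 3.18e-03 for thousands of batches, slipping to 3.17e-03 a few hundred batches later. The logged value is consistent with icefall's Eden schedule; a sketch, assuming base_lr=0.004, lr_batches=100000 and lr_epochs=100 for this run:

def eden_lr(base_lr, batch, epoch, lr_batches=100000.0, lr_epochs=100.0):
    # Eden: two slow power-law decay factors, one in batches, one in epochs.
    return (base_lr
            * ((batch / lr_batches) ** 2 + 1) ** -0.25
            * ((epoch / lr_epochs) ** 2 + 1) ** -0.25)

print(round(eden_lr(0.004, 118400, 21), 5))  # ~0.00318, matching "lr: 3.18e-03"
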
2023-03-27 01:54:25,965 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=118824.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 01:54:42,831 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1894, 2.0584, 1.7472, 1.9023, 2.1716, 1.8610, 2.2413, 2.1839], device='cuda:2'), covar=tensor([0.1377, 0.2016, 0.3074, 0.2518, 0.2570, 0.1811, 0.2947, 0.1818], device='cuda:2'), in_proj_covar=tensor([0.0188, 0.0189, 0.0237, 0.0254, 0.0248, 0.0206, 0.0216, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 01:54:45,139 INFO [finetune.py:976] (2/7) Epoch 21, batch 4300, loss[loss=0.1553, simple_loss=0.2219, pruned_loss=0.04433, over 4911.00 frames. ], tot_loss[loss=0.1751, simple_loss=0.2462, pruned_loss=0.05207, over 957005.82 frames. ], batch size: 46, lr: 3.18e-03, grad_scale: 64.0
2023-03-27 01:54:47,578 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.108e+02 1.563e+02 1.855e+02 2.179e+02 3.656e+02, threshold=3.709e+02, percent-clipped=1.0
2023-03-27 01:54:57,140 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.81 vs. limit=5.0
2023-03-27 01:54:57,552 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=118872.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 01:55:04,009 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9110, 1.7468, 1.6566, 2.0238, 2.2269, 1.9830, 1.5432, 1.5931], device='cuda:2'), covar=tensor([0.1916, 0.1868, 0.1714, 0.1433, 0.1472, 0.1161, 0.2238, 0.1768], device='cuda:2'), in_proj_covar=tensor([0.0245, 0.0210, 0.0213, 0.0196, 0.0244, 0.0189, 0.0218, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 01:55:18,872 INFO [finetune.py:976] (2/7) Epoch 21, batch 4350, loss[loss=0.1642, simple_loss=0.242, pruned_loss=0.04319, over 4918.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2429, pruned_loss=0.0508, over 957007.13 frames. ], batch size: 35, lr: 3.18e-03, grad_scale: 64.0
2023-03-27 01:55:33,226 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=118925.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 01:55:48,467 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=118945.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 01:55:59,300 INFO [finetune.py:976] (2/7) Epoch 21, batch 4400, loss[loss=0.204, simple_loss=0.2863, pruned_loss=0.06082, over 4817.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2435, pruned_loss=0.05108, over 956643.09 frames. ], batch size: 38, lr: 3.18e-03, grad_scale: 64.0
2023-03-27 01:56:01,709 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.753e+01 1.466e+02 1.745e+02 2.136e+02 3.634e+02, threshold=3.490e+02, percent-clipped=0.0
2023-03-27 01:56:17,886 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=118973.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 01:56:31,287 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=118985.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 01:56:36,940 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=118987.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 01:56:40,524 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=118993.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 01:56:51,290 INFO [finetune.py:976] (2/7) Epoch 21, batch 4450, loss[loss=0.1972, simple_loss=0.2815, pruned_loss=0.05645, over 4775.00 frames. ], tot_loss[loss=0.176, simple_loss=0.2482, pruned_loss=0.05194, over 956479.79 frames. ], batch size: 54, lr: 3.18e-03, grad_scale: 64.0
2023-03-27 01:57:11,409 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=119033.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 01:57:16,767 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4615, 1.5426, 1.4694, 0.8880, 1.6544, 1.8778, 1.8507, 1.3828], device='cuda:2'), covar=tensor([0.1006, 0.0649, 0.0589, 0.0591, 0.0473, 0.0595, 0.0378, 0.0725], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0149, 0.0126, 0.0123, 0.0130, 0.0128, 0.0141, 0.0147], device='cuda:2'), out_proj_covar=tensor([8.9306e-05, 1.0747e-04, 8.9880e-05, 8.6632e-05, 9.1639e-05, 9.1562e-05, 1.0119e-04, 1.0565e-04], device='cuda:2')
2023-03-27 01:57:20,386 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0
2023-03-27 01:57:25,002 INFO [finetune.py:976] (2/7) Epoch 21, batch 4500, loss[loss=0.1862, simple_loss=0.2617, pruned_loss=0.05536, over 4813.00 frames. ], tot_loss[loss=0.1799, simple_loss=0.2515, pruned_loss=0.05415, over 958408.04 frames. ], batch size: 40, lr: 3.18e-03, grad_scale: 64.0
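
In these records the bracketed loss[...] is the current batch while tot_loss[...] is an aggregate hovering near ~955k frames, so it moves slowly as new batches arrive and old statistics fade. A minimal frame-weighted moving-average sketch of that behavior (the decay constant is an assumption for illustration; the log only shows the size of the window):

# Sketch: frame-weighted running average behind "tot_loss[..., over N frames.]".
class RunningLoss:
    def __init__(self, decay=0.995):  # per-batch decay (assumed)
        self.decay = decay
        self.loss_sum = 0.0
        self.frames = 0.0

    def update(self, batch_loss, batch_frames):
        self.loss_sum = self.decay * self.loss_sum + batch_loss * batch_frames
        self.frames = self.decay * self.frames + batch_frames

    @property
    def value(self):
        return self.loss_sum / max(self.frames, 1.0)
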
2023-03-27 01:57:27,416 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.079e+02 1.548e+02 1.910e+02 2.429e+02 4.520e+02, threshold=3.820e+02, percent-clipped=3.0
2023-03-27 01:57:37,166 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9084, 1.3092, 1.7741, 1.8757, 1.6489, 1.6831, 1.7480, 1.7307], device='cuda:2'), covar=tensor([0.4489, 0.4170, 0.3795, 0.3904, 0.5305, 0.4319, 0.4906, 0.3699], device='cuda:2'), in_proj_covar=tensor([0.0254, 0.0241, 0.0263, 0.0281, 0.0279, 0.0256, 0.0289, 0.0244], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 01:57:38,336 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=119075.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 01:57:49,667 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5708, 2.4283, 2.0472, 1.0691, 2.2382, 1.9322, 1.7376, 2.1951], device='cuda:2'), covar=tensor([0.0801, 0.0726, 0.1330, 0.1971, 0.1288, 0.2244, 0.2219, 0.0947], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0191, 0.0199, 0.0183, 0.0210, 0.0209, 0.0223, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 01:57:51,473 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=119093.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 01:57:58,451 INFO [finetune.py:976] (2/7) Epoch 21, batch 4550, loss[loss=0.1755, simple_loss=0.2289, pruned_loss=0.0611, over 4287.00 frames. ], tot_loss[loss=0.1805, simple_loss=0.2527, pruned_loss=0.05418, over 957336.90 frames. ], batch size: 18, lr: 3.18e-03, grad_scale: 64.0
2023-03-27 01:58:19,746 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=119136.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 01:58:31,551 INFO [finetune.py:976] (2/7) Epoch 21, batch 4600, loss[loss=0.2119, simple_loss=0.2697, pruned_loss=0.07711, over 4891.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2514, pruned_loss=0.05328, over 956777.91 frames. ], batch size: 32, lr: 3.18e-03, grad_scale: 64.0
2023-03-27 01:58:31,664 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=119154.0, num_to_drop=1, layers_to_drop={2}
2023-03-27 01:58:34,457 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.081e+02 1.653e+02 1.869e+02 2.317e+02 3.451e+02, threshold=3.738e+02, percent-clipped=0.0
2023-03-27 01:58:59,856 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0324, 1.8087, 1.6595, 1.5653, 1.7214, 1.7031, 1.7661, 2.4321], device='cuda:2'), covar=tensor([0.3253, 0.3516, 0.2811, 0.3524, 0.3550, 0.2135, 0.3309, 0.1601], device='cuda:2'), in_proj_covar=tensor([0.0288, 0.0262, 0.0232, 0.0277, 0.0254, 0.0224, 0.0253, 0.0236], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 01:59:05,262 INFO [finetune.py:976] (2/7) Epoch 21, batch 4650, loss[loss=0.1656, simple_loss=0.2338, pruned_loss=0.0487, over 4746.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.2493, pruned_loss=0.05343, over 955687.05 frames. ], batch size: 27, lr: 3.18e-03, grad_scale: 64.0
2023-03-27 01:59:05,388 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8461, 2.6457, 2.1791, 1.1331, 2.3856, 2.1433, 2.0265, 2.3642], device='cuda:2'), covar=tensor([0.0739, 0.0808, 0.1597, 0.2093, 0.1289, 0.1925, 0.2055, 0.0978], device='cuda:2'), in_proj_covar=tensor([0.0171, 0.0192, 0.0200, 0.0183, 0.0211, 0.0210, 0.0224, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 01:59:37,813 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5743, 1.1931, 0.7416, 1.4104, 2.0421, 0.7360, 1.2543, 1.3460], device='cuda:2'), covar=tensor([0.1490, 0.2220, 0.1880, 0.1316, 0.1852, 0.1917, 0.1579, 0.2201], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0095, 0.0111, 0.0092, 0.0120, 0.0093, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2')
2023-03-27 01:59:37,962 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.86 vs. limit=5.0
2023-03-27 01:59:38,313 INFO [finetune.py:976] (2/7) Epoch 21, batch 4700, loss[loss=0.1953, simple_loss=0.2636, pruned_loss=0.06351, over 4828.00 frames. ], tot_loss[loss=0.1755, simple_loss=0.2464, pruned_loss=0.05233, over 956669.23 frames. ], batch size: 30, lr: 3.18e-03, grad_scale: 64.0
2023-03-27 01:59:40,726 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.102e+02 1.521e+02 1.791e+02 2.382e+02 6.096e+02, threshold=3.583e+02, percent-clipped=7.0
2023-03-27 01:59:59,484 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=119287.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:00:05,286 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.8113, 3.3645, 3.4978, 3.6894, 3.5965, 3.3651, 3.8747, 1.2994], device='cuda:2'), covar=tensor([0.0950, 0.0870, 0.0967, 0.1068, 0.1449, 0.1800, 0.0938, 0.5661], device='cuda:2'), in_proj_covar=tensor([0.0351, 0.0244, 0.0280, 0.0293, 0.0335, 0.0285, 0.0304, 0.0300], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 02:00:11,568 INFO [finetune.py:976] (2/7) Epoch 21, batch 4750, loss[loss=0.1628, simple_loss=0.229, pruned_loss=0.04825, over 4763.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2444, pruned_loss=0.05141, over 957994.65 frames. ], batch size: 27, lr: 3.17e-03, grad_scale: 64.0
2023-03-27 02:00:31,333 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=119335.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:00:36,548 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6270, 1.7224, 1.3940, 1.7081, 2.0590, 1.9819, 1.6548, 1.4853], device='cuda:2'), covar=tensor([0.0360, 0.0319, 0.0624, 0.0279, 0.0194, 0.0451, 0.0355, 0.0429], device='cuda:2'), in_proj_covar=tensor([0.0098, 0.0106, 0.0143, 0.0111, 0.0099, 0.0110, 0.0100, 0.0112], device='cuda:2'), out_proj_covar=tensor([7.5983e-05, 8.1465e-05, 1.1248e-04, 8.5469e-05, 7.6758e-05, 8.1520e-05, 7.4604e-05, 8.5532e-05], device='cuda:2')
2023-03-27 02:00:44,656 INFO [finetune.py:976] (2/7) Epoch 21, batch 4800, loss[loss=0.1936, simple_loss=0.2695, pruned_loss=0.05885, over 4903.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2458, pruned_loss=0.05203, over 957734.93 frames. ], batch size: 35, lr: 3.17e-03, grad_scale: 64.0
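
The zipformer.py:2441 diagnostics print one entropy value per attention head (eight entries, presumably one per head), summarizing how spread out each head's attention distribution is: higher entropy means flatter attention, values near zero mean the head attends to a single position. A hedged sketch of the entropy part only; the dimension layout is an assumption:

import torch

def attn_weights_entropy(attn):
    # attn: (num_heads, query_len, key_len); each row is a softmax
    # distribution over keys. Returns one entropy per head, averaged
    # over query positions.
    ent = -(attn.clamp(min=1e-20).log() * attn).sum(dim=-1)
    return ent.mean(dim=-1)

attn = torch.softmax(torch.randn(8, 16, 16), dim=-1)
print(attn_weights_entropy(attn))  # 8 values, like the logged tensors
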
2023-03-27 02:00:47,499 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.052e+02 1.488e+02 1.781e+02 2.195e+02 3.360e+02, threshold=3.562e+02, percent-clipped=0.0
2023-03-27 02:01:22,405 INFO [finetune.py:976] (2/7) Epoch 21, batch 4850, loss[loss=0.171, simple_loss=0.2311, pruned_loss=0.05545, over 4803.00 frames. ], tot_loss[loss=0.1761, simple_loss=0.2477, pruned_loss=0.05221, over 956662.60 frames. ], batch size: 25, lr: 3.17e-03, grad_scale: 64.0
2023-03-27 02:01:55,527 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=119431.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:02:15,264 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=119449.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 02:02:19,175 INFO [finetune.py:976] (2/7) Epoch 21, batch 4900, loss[loss=0.1598, simple_loss=0.2301, pruned_loss=0.0447, over 4754.00 frames. ], tot_loss[loss=0.1779, simple_loss=0.2499, pruned_loss=0.05295, over 957201.82 frames. ], batch size: 26, lr: 3.17e-03, grad_scale: 32.0
2023-03-27 02:02:25,214 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.094e+02 1.684e+02 1.937e+02 2.365e+02 4.201e+02, threshold=3.874e+02, percent-clipped=2.0
2023-03-27 02:02:30,143 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0777, 1.9442, 1.7402, 1.8416, 1.8482, 1.8302, 1.8868, 2.5358], device='cuda:2'), covar=tensor([0.3314, 0.4018, 0.2921, 0.3539, 0.3865, 0.2295, 0.3550, 0.1542], device='cuda:2'), in_proj_covar=tensor([0.0289, 0.0262, 0.0233, 0.0277, 0.0254, 0.0223, 0.0253, 0.0235], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 02:02:55,580 INFO [finetune.py:976] (2/7) Epoch 21, batch 4950, loss[loss=0.1347, simple_loss=0.1796, pruned_loss=0.04487, over 4302.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.25, pruned_loss=0.05309, over 956590.78 frames. ], batch size: 18, lr: 3.17e-03, grad_scale: 32.0
2023-03-27 02:03:00,499 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4901, 1.5677, 1.5282, 0.8559, 1.6022, 1.7934, 1.8314, 1.4014], device='cuda:2'), covar=tensor([0.0850, 0.0512, 0.0442, 0.0494, 0.0389, 0.0478, 0.0261, 0.0612], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0150, 0.0126, 0.0123, 0.0131, 0.0129, 0.0142, 0.0148], device='cuda:2'), out_proj_covar=tensor([9.0017e-05, 1.0814e-04, 9.0498e-05, 8.6992e-05, 9.2010e-05, 9.1845e-05, 1.0199e-04, 1.0618e-04], device='cuda:2')
2023-03-27 02:03:02,769 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=119514.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:03:28,485 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7441, 1.6047, 1.5252, 1.6213, 1.4363, 3.6851, 1.6924, 2.2018], device='cuda:2'), covar=tensor([0.3830, 0.2963, 0.2353, 0.2902, 0.1697, 0.0258, 0.2377, 0.1040], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0124, 0.0114, 0.0097, 0.0095, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 02:03:29,003 INFO [finetune.py:976] (2/7) Epoch 21, batch 5000, loss[loss=0.1656, simple_loss=0.2348, pruned_loss=0.04823, over 4807.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.2481, pruned_loss=0.05223, over 956315.49 frames. ], batch size: 25, lr: 3.17e-03, grad_scale: 32.0
2023-03-27 02:03:32,979 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.061e+02 1.554e+02 1.853e+02 2.138e+02 3.358e+02, threshold=3.705e+02, percent-clipped=0.0
2023-03-27 02:03:43,306 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=119575.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:04:02,226 INFO [finetune.py:976] (2/7) Epoch 21, batch 5050, loss[loss=0.2007, simple_loss=0.2505, pruned_loss=0.07544, over 4839.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2456, pruned_loss=0.05161, over 957448.15 frames. ], batch size: 30, lr: 3.17e-03, grad_scale: 32.0
2023-03-27 02:04:15,667 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6376, 1.5264, 1.9097, 3.0391, 2.0389, 2.2432, 1.0457, 2.5613], device='cuda:2'), covar=tensor([0.1729, 0.1346, 0.1257, 0.0639, 0.0895, 0.1188, 0.1821, 0.0553], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0117, 0.0135, 0.0166, 0.0102, 0.0139, 0.0127, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-27 02:04:22,177 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1248, 2.1034, 2.1532, 1.4754, 2.0819, 2.2595, 2.2665, 1.7236], device='cuda:2'), covar=tensor([0.0610, 0.0647, 0.0699, 0.0967, 0.0708, 0.0670, 0.0604, 0.1176], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0137, 0.0140, 0.0121, 0.0126, 0.0139, 0.0140, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 02:04:25,858 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7887, 1.7779, 1.5047, 1.8014, 2.1739, 2.1552, 1.7109, 1.5942], device='cuda:2'), covar=tensor([0.0308, 0.0295, 0.0625, 0.0286, 0.0197, 0.0433, 0.0285, 0.0378], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0107, 0.0144, 0.0112, 0.0099, 0.0111, 0.0101, 0.0113], device='cuda:2'), out_proj_covar=tensor([7.6355e-05, 8.1776e-05, 1.1316e-04, 8.5920e-05, 7.7374e-05, 8.1867e-05, 7.5206e-05, 8.5941e-05], device='cuda:2')
2023-03-27 02:04:35,255 INFO [finetune.py:976] (2/7) Epoch 21, batch 5100, loss[loss=0.1283, simple_loss=0.1948, pruned_loss=0.03089, over 4824.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2425, pruned_loss=0.05071, over 957838.14 frames. ], batch size: 30, lr: 3.17e-03, grad_scale: 32.0
2023-03-27 02:04:37,213 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0
2023-03-27 02:04:39,211 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.052e+02 1.479e+02 1.750e+02 2.173e+02 3.976e+02, threshold=3.500e+02, percent-clipped=1.0
2023-03-27 02:04:43,482 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=119665.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:04:53,764 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. limit=2.0
2023-03-27 02:05:08,858 INFO [finetune.py:976] (2/7) Epoch 21, batch 5150, loss[loss=0.1711, simple_loss=0.2395, pruned_loss=0.05137, over 4772.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2424, pruned_loss=0.05083, over 957004.51 frames. ], batch size: 26, lr: 3.17e-03, grad_scale: 32.0
2023-03-27 02:05:21,679 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6889, 1.3104, 0.9490, 1.6730, 2.2406, 1.2142, 1.4592, 1.5487], device='cuda:2'), covar=tensor([0.1496, 0.2075, 0.1789, 0.1157, 0.1713, 0.1690, 0.1501, 0.2041], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0095, 0.0110, 0.0092, 0.0119, 0.0093, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2')
2023-03-27 02:05:24,657 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=119726.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:05:27,602 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=119731.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:05:38,883 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=119749.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 02:05:42,228 INFO [finetune.py:976] (2/7) Epoch 21, batch 5200, loss[loss=0.2164, simple_loss=0.2826, pruned_loss=0.07511, over 4719.00 frames. ], tot_loss[loss=0.1751, simple_loss=0.2461, pruned_loss=0.05207, over 956247.33 frames. ], batch size: 59, lr: 3.17e-03, grad_scale: 32.0
2023-03-27 02:05:45,719 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.011e+02 1.656e+02 1.877e+02 2.270e+02 4.720e+02, threshold=3.754e+02, percent-clipped=1.0
2023-03-27 02:05:59,327 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=119779.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:06:10,788 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=119797.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:06:12,063 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9358, 1.3611, 0.8711, 1.8507, 2.3223, 1.6849, 1.4579, 1.7583], device='cuda:2'), covar=tensor([0.1605, 0.2203, 0.2197, 0.1308, 0.1970, 0.2127, 0.1611, 0.2090], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0095, 0.0110, 0.0092, 0.0119, 0.0093, 0.0099, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2')
2023-03-27 02:06:15,130 INFO [finetune.py:976] (2/7) Epoch 21, batch 5250, loss[loss=0.1876, simple_loss=0.2615, pruned_loss=0.05681, over 4146.00 frames. ], tot_loss[loss=0.1765, simple_loss=0.2484, pruned_loss=0.05231, over 956893.04 frames. ], batch size: 65, lr: 3.17e-03, grad_scale: 32.0
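
Note grad_scale doubling from 32.0 to 64.0 at batch 4100 and dropping back to 32.0 by batch 4900: with use_fp16=True this is the usual dynamic loss-scaling behavior, where the scale grows after a stretch of overflow-free steps and backs off when an overflow is detected. A generic torch.cuda.amp sketch of that mechanism (the growth interval and factors here are illustrative defaults, not read from this run):

import torch

# Sketch: fp16 training with dynamic loss scaling, as in use_fp16=True runs.
scaler = torch.cuda.amp.GradScaler(init_scale=32.0, growth_factor=2.0,
                                   backoff_factor=0.5, growth_interval=2000)
model = torch.nn.Linear(80, 500).cuda()
opt = torch.optim.Adam(model.parameters())
x = torch.randn(4, 80, device="cuda")
with torch.cuda.amp.autocast():
    loss = model(x).square().mean()
scaler.scale(loss).backward()  # gradients carry the current scale
scaler.step(opt)               # unscales; skips the step on inf/nan
scaler.update()                # grows or backs off the scale
print(scaler.get_scale())      # the "grad_scale" the log reports
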
2023-03-27 02:06:22,202 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2310, 2.1081, 1.6837, 2.0419, 2.1826, 1.8977, 2.4019, 2.2143], device='cuda:2'), covar=tensor([0.1388, 0.2006, 0.3165, 0.2648, 0.2674, 0.1791, 0.2954, 0.1783], device='cuda:2'), in_proj_covar=tensor([0.0186, 0.0188, 0.0234, 0.0252, 0.0246, 0.0203, 0.0213, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 02:06:41,197 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0538, 1.9407, 1.5790, 1.8038, 2.0012, 1.7369, 2.2035, 2.0258], device='cuda:2'), covar=tensor([0.1392, 0.2022, 0.3168, 0.2510, 0.2579, 0.1719, 0.2728, 0.1762], device='cuda:2'), in_proj_covar=tensor([0.0186, 0.0188, 0.0234, 0.0252, 0.0246, 0.0203, 0.0214, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 02:06:59,457 INFO [finetune.py:976] (2/7) Epoch 21, batch 5300, loss[loss=0.1821, simple_loss=0.2576, pruned_loss=0.05326, over 4866.00 frames. ], tot_loss[loss=0.1787, simple_loss=0.2509, pruned_loss=0.05331, over 957910.92 frames. ], batch size: 34, lr: 3.17e-03, grad_scale: 32.0
2023-03-27 02:07:07,164 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.978e+01 1.512e+02 1.747e+02 2.069e+02 4.039e+02, threshold=3.495e+02, percent-clipped=1.0
2023-03-27 02:07:18,527 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=119870.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:07:45,070 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0310, 1.8500, 1.7225, 1.9136, 1.8313, 1.8725, 1.7630, 2.5166], device='cuda:2'), covar=tensor([0.3870, 0.5003, 0.3478, 0.4119, 0.4440, 0.2464, 0.4273, 0.1900], device='cuda:2'), in_proj_covar=tensor([0.0288, 0.0261, 0.0232, 0.0276, 0.0253, 0.0223, 0.0253, 0.0233], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 02:07:54,828 INFO [finetune.py:976] (2/7) Epoch 21, batch 5350, loss[loss=0.1691, simple_loss=0.2388, pruned_loss=0.04967, over 4921.00 frames. ], tot_loss[loss=0.1782, simple_loss=0.2501, pruned_loss=0.05311, over 956424.80 frames. ], batch size: 38, lr: 3.17e-03, grad_scale: 32.0
2023-03-27 02:08:05,097 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7426, 1.7942, 1.6239, 1.5733, 2.2931, 2.2867, 1.9108, 1.8333], device='cuda:2'), covar=tensor([0.0455, 0.0431, 0.0615, 0.0465, 0.0360, 0.0702, 0.0508, 0.0439], device='cuda:2'), in_proj_covar=tensor([0.0098, 0.0107, 0.0144, 0.0112, 0.0099, 0.0111, 0.0101, 0.0113], device='cuda:2'), out_proj_covar=tensor([7.6179e-05, 8.2005e-05, 1.1326e-04, 8.5813e-05, 7.7337e-05, 8.2187e-05, 7.4974e-05, 8.6131e-05], device='cuda:2')
2023-03-27 02:08:28,086 INFO [finetune.py:976] (2/7) Epoch 21, batch 5400, loss[loss=0.1278, simple_loss=0.203, pruned_loss=0.02629, over 4745.00 frames. ], tot_loss[loss=0.1765, simple_loss=0.2478, pruned_loss=0.05266, over 956899.31 frames. ], batch size: 23, lr: 3.17e-03, grad_scale: 32.0
2023-03-27 02:08:31,173 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.038e+02 1.474e+02 1.662e+02 2.015e+02 3.492e+02, threshold=3.324e+02, percent-clipped=0.0
2023-03-27 02:08:45,734 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.3666, 3.8006, 3.9744, 4.2456, 4.1380, 3.9040, 4.4567, 1.3940], device='cuda:2'), covar=tensor([0.0752, 0.0834, 0.0957, 0.1023, 0.1254, 0.1423, 0.0667, 0.5869], device='cuda:2'), in_proj_covar=tensor([0.0348, 0.0242, 0.0279, 0.0289, 0.0331, 0.0282, 0.0301, 0.0296], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 02:08:52,788 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6187, 1.4362, 1.0034, 0.2363, 1.1264, 1.4785, 1.4203, 1.3958], device='cuda:2'), covar=tensor([0.0868, 0.0803, 0.1542, 0.2120, 0.1478, 0.2318, 0.2395, 0.0947], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0191, 0.0198, 0.0183, 0.0209, 0.0209, 0.0223, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 02:08:54,078 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.31 vs. limit=5.0
2023-03-27 02:09:02,904 INFO [finetune.py:976] (2/7) Epoch 21, batch 5450, loss[loss=0.1491, simple_loss=0.2204, pruned_loss=0.03891, over 4813.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2452, pruned_loss=0.05212, over 957215.96 frames. ], batch size: 25, lr: 3.17e-03, grad_scale: 32.0
2023-03-27 02:09:13,781 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=120021.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:09:28,654 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0
2023-03-27 02:09:36,195 INFO [finetune.py:976] (2/7) Epoch 21, batch 5500, loss[loss=0.1733, simple_loss=0.2387, pruned_loss=0.05395, over 4825.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2413, pruned_loss=0.05038, over 958081.10 frames. ], batch size: 30, lr: 3.17e-03, grad_scale: 32.0
2023-03-27 02:09:39,679 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.953e+01 1.422e+02 1.683e+02 2.099e+02 3.794e+02, threshold=3.366e+02, percent-clipped=2.0
2023-03-27 02:10:09,944 INFO [finetune.py:976] (2/7) Epoch 21, batch 5550, loss[loss=0.214, simple_loss=0.288, pruned_loss=0.06996, over 4748.00 frames. ], tot_loss[loss=0.1735, simple_loss=0.2437, pruned_loss=0.05165, over 958276.42 frames. ], batch size: 59, lr: 3.17e-03, grad_scale: 32.0
2023-03-27 02:10:35,360 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9852, 2.8334, 2.5060, 3.3499, 2.9402, 2.6644, 3.4817, 2.9592], device='cuda:2'), covar=tensor([0.1118, 0.1901, 0.2548, 0.1901, 0.2166, 0.1440, 0.2202, 0.1497], device='cuda:2'), in_proj_covar=tensor([0.0186, 0.0187, 0.0233, 0.0252, 0.0245, 0.0202, 0.0213, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 02:10:37,732 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.64 vs. limit=2.0
2023-03-27 02:10:42,283 INFO [finetune.py:976] (2/7) Epoch 21, batch 5600, loss[loss=0.1711, simple_loss=0.2471, pruned_loss=0.04755, over 4864.00 frames. ], tot_loss[loss=0.177, simple_loss=0.2482, pruned_loss=0.05292, over 957979.66 frames. ], batch size: 44, lr: 3.17e-03, grad_scale: 32.0
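
The zipformer.py:1188 records track stochastic layer skipping: each encoder stack has its own warmup window in batch counts (e.g. warmup_begin=2666.7, warmup_end=3333.3), and on an occasional batch one layer is chosen at random to be bypassed (num_to_drop=1, layers_to_drop={1}); far past warmup, as at batch_count ~120k here, most records show no drop. A hedged sketch of such a selection rule; the probabilities are assumptions for illustration, the real schedule lives in zipformer.py:

import random

# Sketch: choose encoder layers to bypass for one batch. Drop probability
# is higher inside the warmup window, small afterwards (values assumed).
def pick_layers_to_drop(num_layers, batch_count, warmup_end,
                        p_warmup=0.5, p_post=0.05):
    p = p_warmup if batch_count < warmup_end else p_post
    drops = {i for i in range(num_layers) if random.random() < p / num_layers}
    return len(drops), drops

num_to_drop, layers_to_drop = pick_layers_to_drop(4, 120218.0, 1333.3)
print(f"num_to_drop={num_to_drop}, layers_to_drop={layers_to_drop or set()}")
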
2023-03-27 02:10:45,198 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.030e+02 1.550e+02 1.831e+02 2.203e+02 3.727e+02, threshold=3.662e+02, percent-clipped=1.0
2023-03-27 02:10:52,168 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=120170.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:11:12,065 INFO [finetune.py:976] (2/7) Epoch 21, batch 5650, loss[loss=0.1531, simple_loss=0.2168, pruned_loss=0.0447, over 4537.00 frames. ], tot_loss[loss=0.1776, simple_loss=0.2498, pruned_loss=0.05272, over 957359.26 frames. ], batch size: 19, lr: 3.17e-03, grad_scale: 32.0
2023-03-27 02:11:20,663 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=120218.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:11:32,535 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6928, 0.8251, 1.7042, 1.6172, 1.5722, 1.5188, 1.5490, 1.6365], device='cuda:2'), covar=tensor([0.3150, 0.3441, 0.3081, 0.3106, 0.3998, 0.3372, 0.4069, 0.2728], device='cuda:2'), in_proj_covar=tensor([0.0257, 0.0244, 0.0264, 0.0284, 0.0283, 0.0259, 0.0293, 0.0248], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 02:11:41,962 INFO [finetune.py:976] (2/7) Epoch 21, batch 5700, loss[loss=0.1686, simple_loss=0.2287, pruned_loss=0.05426, over 3531.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.246, pruned_loss=0.05261, over 936733.37 frames. ], batch size: 15, lr: 3.17e-03, grad_scale: 32.0
2023-03-27 02:11:44,944 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.012e+02 1.458e+02 1.705e+02 2.128e+02 3.595e+02, threshold=3.409e+02, percent-clipped=0.0
2023-03-27 02:12:12,191 INFO [finetune.py:976] (2/7) Epoch 22, batch 0, loss[loss=0.1975, simple_loss=0.2567, pruned_loss=0.0691, over 4818.00 frames. ], tot_loss[loss=0.1975, simple_loss=0.2567, pruned_loss=0.0691, over 4818.00 frames. ], batch size: 30, lr: 3.16e-03, grad_scale: 32.0
2023-03-27 02:12:12,192 INFO [finetune.py:1001] (2/7) Computing validation loss
2023-03-27 02:12:27,778 INFO [finetune.py:1010] (2/7) Epoch 22, validation: loss=0.1597, simple_loss=0.228, pruned_loss=0.04574, over 2265189.00 frames.
2023-03-27 02:12:27,778 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB
2023-03-27 02:13:15,377 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=120321.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:13:27,846 INFO [finetune.py:976] (2/7) Epoch 22, batch 50, loss[loss=0.175, simple_loss=0.2463, pruned_loss=0.05189, over 4770.00 frames. ], tot_loss[loss=0.1827, simple_loss=0.2535, pruned_loss=0.05597, over 216875.59 frames. ], batch size: 28, lr: 3.16e-03, grad_scale: 32.0
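
At the epoch boundary above, training pauses to compute one validation loss over the full dev set (~2.27M frames, loss=0.1597) before epoch 22 proper begins; note also how tot_loss resets and re-accumulates from the epoch's first batches (216875.59 frames at batch 50). A minimal sketch of such a validation pass; the model/batch API shown is illustrative, not finetune.py's actual signature:

import torch

# Sketch: frame-weighted validation loss over the dev dataloader.
def compute_validation_loss(model, dev_loader, device):
    model.eval()
    total, frames = 0.0, 0.0
    with torch.no_grad():
        for batch in dev_loader:
            loss, num_frames = model(batch.to(device))  # illustrative API
            total += loss.item() * num_frames
            frames += num_frames
    model.train()
    return total / frames
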
2023-03-27 02:13:36,593 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2577, 2.2769, 1.9017, 2.2330, 2.1285, 2.2012, 2.1549, 3.0203], device='cuda:2'), covar=tensor([0.3450, 0.3951, 0.3050, 0.4025, 0.4040, 0.2288, 0.3983, 0.1417], device='cuda:2'), in_proj_covar=tensor([0.0288, 0.0262, 0.0232, 0.0277, 0.0253, 0.0223, 0.0252, 0.0234], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 02:13:48,726 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.043e+02 1.631e+02 1.972e+02 2.363e+02 4.295e+02, threshold=3.943e+02, percent-clipped=3.0
2023-03-27 02:13:55,424 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=120369.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:14:04,333 INFO [finetune.py:976] (2/7) Epoch 22, batch 100, loss[loss=0.1707, simple_loss=0.2466, pruned_loss=0.04734, over 4815.00 frames. ], tot_loss[loss=0.175, simple_loss=0.2459, pruned_loss=0.05209, over 382223.38 frames. ], batch size: 41, lr: 3.16e-03, grad_scale: 32.0
2023-03-27 02:14:30,009 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6106, 1.2064, 0.8931, 1.5903, 2.0206, 1.2449, 1.3517, 1.6756], device='cuda:2'), covar=tensor([0.1325, 0.1850, 0.1830, 0.1086, 0.1801, 0.1857, 0.1359, 0.1686], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0095, 0.0111, 0.0092, 0.0120, 0.0093, 0.0098, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2')
2023-03-27 02:14:37,022 INFO [finetune.py:976] (2/7) Epoch 22, batch 150, loss[loss=0.1506, simple_loss=0.2212, pruned_loss=0.04006, over 4829.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.24, pruned_loss=0.05112, over 511554.72 frames. ], batch size: 33, lr: 3.16e-03, grad_scale: 32.0
2023-03-27 02:14:55,336 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.054e+02 1.445e+02 1.737e+02 2.098e+02 4.550e+02, threshold=3.473e+02, percent-clipped=2.0
2023-03-27 02:15:10,227 INFO [finetune.py:976] (2/7) Epoch 22, batch 200, loss[loss=0.1554, simple_loss=0.2306, pruned_loss=0.04011, over 4860.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2393, pruned_loss=0.05087, over 608195.17 frames. ], batch size: 44, lr: 3.16e-03, grad_scale: 32.0
2023-03-27 02:15:39,210 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7551, 1.3152, 1.0555, 1.6552, 1.9749, 1.2972, 1.5051, 1.7125], device='cuda:2'), covar=tensor([0.1243, 0.1802, 0.1780, 0.1020, 0.1810, 0.1902, 0.1219, 0.1623], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0095, 0.0111, 0.0092, 0.0120, 0.0094, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2')
2023-03-27 02:15:42,763 INFO [finetune.py:976] (2/7) Epoch 22, batch 250, loss[loss=0.1753, simple_loss=0.2496, pruned_loss=0.05053, over 4814.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.2407, pruned_loss=0.05091, over 683759.91 frames. ], batch size: 39, lr: 3.16e-03, grad_scale: 32.0
2023-03-27 02:16:01,910 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.094e+02 1.519e+02 1.843e+02 2.180e+02 3.548e+02, threshold=3.686e+02, percent-clipped=1.0
2023-03-27 02:16:06,979 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0
2023-03-27 02:16:16,407 INFO [finetune.py:976] (2/7) Epoch 22, batch 300, loss[loss=0.1745, simple_loss=0.2607, pruned_loss=0.04415, over 4806.00 frames. ], tot_loss[loss=0.1759, simple_loss=0.2462, pruned_loss=0.0528, over 742406.34 frames. ], batch size: 45, lr: 3.16e-03, grad_scale: 32.0
2023-03-27 02:16:42,208 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.13 vs. limit=2.0
2023-03-27 02:16:50,474 INFO [finetune.py:976] (2/7) Epoch 22, batch 350, loss[loss=0.2103, simple_loss=0.2757, pruned_loss=0.07246, over 4789.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.2488, pruned_loss=0.05373, over 789371.39 frames. ], batch size: 45, lr: 3.16e-03, grad_scale: 32.0
2023-03-27 02:17:03,033 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1066, 1.9747, 1.5920, 2.0033, 2.0431, 1.7475, 2.3844, 2.1236], device='cuda:2'), covar=tensor([0.1276, 0.1946, 0.2914, 0.2440, 0.2521, 0.1650, 0.2808, 0.1644], device='cuda:2'), in_proj_covar=tensor([0.0187, 0.0189, 0.0236, 0.0254, 0.0247, 0.0204, 0.0215, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 02:17:05,199 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7705, 1.8100, 1.5345, 1.8615, 2.2267, 1.9248, 1.7545, 1.4689], device='cuda:2'), covar=tensor([0.2257, 0.1976, 0.1957, 0.1583, 0.1720, 0.1243, 0.2254, 0.1912], device='cuda:2'), in_proj_covar=tensor([0.0243, 0.0209, 0.0213, 0.0194, 0.0242, 0.0188, 0.0217, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 02:17:09,294 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.085e+02 1.647e+02 1.881e+02 2.280e+02 4.594e+02, threshold=3.762e+02, percent-clipped=3.0
2023-03-27 02:17:19,811 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=120676.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:17:23,372 INFO [finetune.py:976] (2/7) Epoch 22, batch 400, loss[loss=0.1717, simple_loss=0.2517, pruned_loss=0.04588, over 4901.00 frames. ], tot_loss[loss=0.1796, simple_loss=0.2511, pruned_loss=0.05409, over 826692.07 frames. ], batch size: 35, lr: 3.16e-03, grad_scale: 32.0
2023-03-27 02:17:57,267 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0
2023-03-27 02:18:11,217 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.8093, 1.8884, 1.8791, 1.1074, 2.0979, 2.1565, 2.2120, 1.7124], device='cuda:2'), covar=tensor([0.0777, 0.0567, 0.0504, 0.0546, 0.0445, 0.0644, 0.0350, 0.0700], device='cuda:2'), in_proj_covar=tensor([0.0121, 0.0148, 0.0125, 0.0122, 0.0129, 0.0127, 0.0139, 0.0146], device='cuda:2'), out_proj_covar=tensor([8.8323e-05, 1.0656e-04, 8.9174e-05, 8.5793e-05, 9.0821e-05, 9.0657e-05, 9.9960e-05, 1.0475e-04], device='cuda:2')
2023-03-27 02:18:12,315 INFO [finetune.py:976] (2/7) Epoch 22, batch 450, loss[loss=0.2151, simple_loss=0.2744, pruned_loss=0.07792, over 4902.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.2497, pruned_loss=0.05388, over 854768.92 frames. ], batch size: 36, lr: 3.16e-03, grad_scale: 32.0
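
The scaling.py:679 records come from a whitening regularizer: it measures how far a layer's channel covariance is from isotropic and only intervenes once the metric exceeds the limit (2.0 or 5.0 in these logs). One common way to express such a metric is the ratio of the mean squared eigenvalue of the covariance to the squared mean eigenvalue, which equals 1.0 for perfectly white features; a sketch under that assumption (not a copy of scaling.py):

import torch

def whitening_metric(x, num_groups):
    # x: (frames, channels); split channels into groups and average the
    # per-group metric mean(eig^2)/mean(eig)^2, computed via traces.
    # Equals 1.0 iff the group covariance is isotropic.
    metrics = []
    for g in x.chunk(num_groups, dim=1):
        g = g - g.mean(dim=0)
        cov = (g.T @ g) / g.shape[0]
        d = cov.shape[0]
        metrics.append(((cov @ cov).trace() / d) / (cov.trace() / d) ** 2)
    return torch.stack(metrics).mean().item()

x = torch.randn(1000, 768)  # 8 groups of 96 channels, as in the logs
print(whitening_metric(x, num_groups=8))  # near-white input -> close to 1.0
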
2023-03-27 02:18:20,733 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=120737.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:18:32,348 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4819, 1.4006, 1.2635, 1.5292, 1.6888, 1.5828, 1.0091, 1.2817], device='cuda:2'), covar=tensor([0.2411, 0.2185, 0.2219, 0.1755, 0.1565, 0.1354, 0.2705, 0.2026], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0209, 0.0213, 0.0195, 0.0243, 0.0189, 0.0217, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 02:18:41,590 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.2414, 1.5620, 1.5721, 0.8990, 1.5588, 1.7952, 1.8135, 1.4633], device='cuda:2'), covar=tensor([0.0936, 0.0650, 0.0548, 0.0559, 0.0552, 0.0510, 0.0360, 0.0783], device='cuda:2'), in_proj_covar=tensor([0.0121, 0.0148, 0.0125, 0.0122, 0.0130, 0.0128, 0.0140, 0.0147], device='cuda:2'), out_proj_covar=tensor([8.8690e-05, 1.0704e-04, 8.9404e-05, 8.6209e-05, 9.1330e-05, 9.1131e-05, 1.0035e-04, 1.0522e-04], device='cuda:2')
2023-03-27 02:18:43,318 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.117e+02 1.581e+02 1.866e+02 2.243e+02 3.725e+02, threshold=3.731e+02, percent-clipped=0.0
2023-03-27 02:18:47,421 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0
2023-03-27 02:19:07,337 INFO [finetune.py:976] (2/7) Epoch 22, batch 500, loss[loss=0.1385, simple_loss=0.2072, pruned_loss=0.03487, over 4765.00 frames. ], tot_loss[loss=0.177, simple_loss=0.2476, pruned_loss=0.05319, over 878269.50 frames. ], batch size: 28, lr: 3.16e-03, grad_scale: 32.0
2023-03-27 02:19:40,280 INFO [finetune.py:976] (2/7) Epoch 22, batch 550, loss[loss=0.1767, simple_loss=0.2518, pruned_loss=0.05083, over 4814.00 frames. ], tot_loss[loss=0.1751, simple_loss=0.245, pruned_loss=0.05264, over 894019.31 frames. ], batch size: 40, lr: 3.16e-03, grad_scale: 32.0
2023-03-27 02:19:58,124 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.063e+02 1.544e+02 1.778e+02 2.172e+02 3.720e+02, threshold=3.555e+02, percent-clipped=0.0
2023-03-27 02:20:13,119 INFO [finetune.py:976] (2/7) Epoch 22, batch 600, loss[loss=0.1655, simple_loss=0.2297, pruned_loss=0.05059, over 4704.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2438, pruned_loss=0.05261, over 906913.04 frames. ], batch size: 23, lr: 3.16e-03, grad_scale: 32.0
2023-03-27 02:20:31,081 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0675, 2.0023, 2.1586, 1.7104, 2.0564, 2.2238, 2.2093, 1.7149], device='cuda:2'), covar=tensor([0.0474, 0.0507, 0.0485, 0.0643, 0.0762, 0.0497, 0.0432, 0.1013], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0137, 0.0140, 0.0121, 0.0126, 0.0139, 0.0140, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 02:20:46,529 INFO [finetune.py:976] (2/7) Epoch 22, batch 650, loss[loss=0.1991, simple_loss=0.2728, pruned_loss=0.06268, over 4839.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.2457, pruned_loss=0.05244, over 916908.47 frames. ], batch size: 47, lr: 3.16e-03, grad_scale: 32.0
2023-03-27 02:21:04,771 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.113e+02 1.580e+02 1.877e+02 2.237e+02 3.344e+02, threshold=3.754e+02, percent-clipped=0.0
2023-03-27 02:21:20,031 INFO [finetune.py:976] (2/7) Epoch 22, batch 700, loss[loss=0.1698, simple_loss=0.2508, pruned_loss=0.04441, over 4916.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.2479, pruned_loss=0.05321, over 923568.98 frames. ], batch size: 37, lr: 3.16e-03, grad_scale: 32.0
2023-03-27 02:21:29,843 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0
2023-03-27 02:21:32,893 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=121003.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:21:35,815 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0334, 1.9881, 1.5893, 1.9922, 2.0125, 1.6893, 2.3541, 2.0224], device='cuda:2'), covar=tensor([0.1474, 0.1936, 0.3119, 0.2520, 0.2706, 0.1893, 0.3555, 0.1912], device='cuda:2'), in_proj_covar=tensor([0.0187, 0.0188, 0.0236, 0.0254, 0.0248, 0.0204, 0.0214, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 02:21:46,608 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0
2023-03-27 02:21:51,581 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.11 vs. limit=5.0
2023-03-27 02:21:53,221 INFO [finetune.py:976] (2/7) Epoch 22, batch 750, loss[loss=0.1604, simple_loss=0.2363, pruned_loss=0.04224, over 4836.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.2491, pruned_loss=0.0536, over 931476.62 frames. ], batch size: 49, lr: 3.16e-03, grad_scale: 32.0
2023-03-27 02:21:53,298 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=121032.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:22:09,807 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.017e+02 1.617e+02 1.906e+02 2.410e+02 4.829e+02, threshold=3.812e+02, percent-clipped=5.0
2023-03-27 02:22:14,817 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=121064.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:22:26,766 INFO [finetune.py:976] (2/7) Epoch 22, batch 800, loss[loss=0.1794, simple_loss=0.2417, pruned_loss=0.0585, over 4883.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.2475, pruned_loss=0.05293, over 936676.92 frames. ], batch size: 32, lr: 3.16e-03, grad_scale: 32.0
2023-03-27 02:22:27,626 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.83 vs. limit=5.0
2023-03-27 02:23:10,266 INFO [finetune.py:976] (2/7) Epoch 22, batch 850, loss[loss=0.241, simple_loss=0.2789, pruned_loss=0.1015, over 4824.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.2459, pruned_loss=0.05231, over 939954.23 frames. ], batch size: 38, lr: 3.16e-03, grad_scale: 32.0
2023-03-27 02:23:28,699 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.269e+01 1.509e+02 1.830e+02 2.168e+02 4.982e+02, threshold=3.659e+02, percent-clipped=1.0
2023-03-27 02:23:31,336 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3835, 2.2400, 2.1484, 2.5236, 3.0547, 2.4063, 2.3073, 1.8381], device='cuda:2'), covar=tensor([0.2203, 0.1869, 0.1808, 0.1495, 0.1450, 0.1092, 0.1966, 0.1831], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0209, 0.0213, 0.0195, 0.0241, 0.0188, 0.0217, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 02:23:37,198 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=121164.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:23:46,337 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.60 vs. limit=2.0
2023-03-27 02:23:58,238 INFO [finetune.py:976] (2/7) Epoch 22, batch 900, loss[loss=0.1882, simple_loss=0.2525, pruned_loss=0.06196, over 4869.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2445, pruned_loss=0.05178, over 943944.82 frames. ], batch size: 34, lr: 3.16e-03, grad_scale: 32.0
2023-03-27 02:24:27,743 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8817, 1.8608, 1.5977, 1.8825, 1.6048, 4.4597, 1.7293, 2.0518], device='cuda:2'), covar=tensor([0.3414, 0.2414, 0.2268, 0.2419, 0.1569, 0.0112, 0.2468, 0.1223], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0124, 0.0114, 0.0096, 0.0095, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 02:24:38,340 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=121225.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:24:42,942 INFO [finetune.py:976] (2/7) Epoch 22, batch 950, loss[loss=0.1704, simple_loss=0.2482, pruned_loss=0.04626, over 4900.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.243, pruned_loss=0.05134, over 945819.80 frames. ], batch size: 35, lr: 3.16e-03, grad_scale: 32.0
2023-03-27 02:24:44,276 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7710, 1.6105, 2.3739, 3.6092, 2.4243, 2.6391, 0.9730, 3.0401], device='cuda:2'), covar=tensor([0.1731, 0.1406, 0.1268, 0.0481, 0.0787, 0.1227, 0.1920, 0.0406], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0117, 0.0136, 0.0166, 0.0102, 0.0139, 0.0126, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-27 02:24:50,281 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7274, 1.5862, 1.5423, 1.6043, 1.0943, 3.5046, 1.3466, 1.6579], device='cuda:2'), covar=tensor([0.3336, 0.2422, 0.2130, 0.2451, 0.1781, 0.0220, 0.2770, 0.1338], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0124, 0.0114, 0.0096, 0.0095, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 02:24:59,302 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.960e+01 1.496e+02 1.804e+02 2.225e+02 4.174e+02, threshold=3.608e+02, percent-clipped=1.0
2023-03-27 02:25:16,227 INFO [finetune.py:976] (2/7) Epoch 22, batch 1000, loss[loss=0.1886, simple_loss=0.2657, pruned_loss=0.05574, over 4211.00 frames. ], tot_loss[loss=0.1758, simple_loss=0.2461, pruned_loss=0.05277, over 945006.57 frames. ], batch size: 65, lr: 3.16e-03, grad_scale: 32.0
], tot_loss[loss=0.1758, simple_loss=0.2461, pruned_loss=0.05277, over 945006.57 frames. ], batch size: 65, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:25:26,014 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.70 vs. limit=5.0 2023-03-27 02:25:33,750 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=121311.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 02:25:42,365 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3005, 2.1697, 1.7923, 2.0991, 2.3596, 2.0339, 2.5199, 2.3180], device='cuda:2'), covar=tensor([0.1307, 0.1929, 0.3048, 0.2456, 0.2376, 0.1686, 0.3151, 0.1772], device='cuda:2'), in_proj_covar=tensor([0.0186, 0.0188, 0.0235, 0.0254, 0.0247, 0.0204, 0.0214, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:25:49,276 INFO [finetune.py:976] (2/7) Epoch 22, batch 1050, loss[loss=0.1827, simple_loss=0.2575, pruned_loss=0.05392, over 4871.00 frames. ], tot_loss[loss=0.1765, simple_loss=0.2479, pruned_loss=0.05254, over 949215.06 frames. ], batch size: 31, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:25:49,382 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=121332.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:25:51,801 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.5788, 1.5270, 1.4596, 0.8937, 1.6078, 1.8261, 1.7565, 1.3662], device='cuda:2'), covar=tensor([0.0808, 0.0541, 0.0538, 0.0527, 0.0524, 0.0570, 0.0354, 0.0667], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0150, 0.0127, 0.0124, 0.0132, 0.0130, 0.0142, 0.0149], device='cuda:2'), out_proj_covar=tensor([8.9814e-05, 1.0860e-04, 9.0406e-05, 8.7095e-05, 9.2787e-05, 9.2661e-05, 1.0186e-04, 1.0648e-04], device='cuda:2') 2023-03-27 02:26:05,530 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.073e+02 1.613e+02 2.056e+02 2.633e+02 6.948e+02, threshold=4.113e+02, percent-clipped=5.0 2023-03-27 02:26:05,614 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=121359.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:26:09,272 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5690, 2.4873, 2.0254, 1.1113, 2.0878, 1.9259, 1.7986, 2.1881], device='cuda:2'), covar=tensor([0.0999, 0.0776, 0.1969, 0.2292, 0.1667, 0.2459, 0.2361, 0.1164], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0192, 0.0199, 0.0183, 0.0210, 0.0209, 0.0223, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:26:12,285 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6320, 3.5196, 3.3050, 1.6144, 3.6446, 2.7702, 0.7064, 2.4168], device='cuda:2'), covar=tensor([0.2689, 0.1887, 0.1615, 0.3266, 0.1079, 0.1016, 0.4530, 0.1594], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0177, 0.0158, 0.0129, 0.0159, 0.0122, 0.0147, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 02:26:13,508 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=121372.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 02:26:19,238 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=121380.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:26:20,876 INFO [finetune.py:976] (2/7) Epoch 22, batch 1100, loss[loss=0.1675, simple_loss=0.2479, 
2023-03-27 02:26:31,164 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0996, 1.9400, 2.7232, 4.3304, 2.9491, 2.9103, 0.9473, 3.7352], device='cuda:2'), covar=tensor([0.1794, 0.1390, 0.1283, 0.0528, 0.0741, 0.1336, 0.1990, 0.0349], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0118, 0.0136, 0.0167, 0.0102, 0.0139, 0.0127, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-27 02:26:36,777 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=121407.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:26:53,107 INFO [finetune.py:976] (2/7) Epoch 22, batch 1150, loss[loss=0.1766, simple_loss=0.2639, pruned_loss=0.04464, over 4922.00 frames. ], tot_loss[loss=0.1785, simple_loss=0.2499, pruned_loss=0.05356, over 950877.90 frames. ], batch size: 42, lr: 3.16e-03, grad_scale: 32.0
2023-03-27 02:27:02,650 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5713, 2.4602, 2.0116, 1.0366, 2.1288, 1.8708, 1.7705, 2.1931], device='cuda:2'), covar=tensor([0.0838, 0.0725, 0.1706, 0.2185, 0.1511, 0.2459, 0.2412, 0.1025], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0192, 0.0199, 0.0183, 0.0210, 0.0209, 0.0224, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 02:27:10,453 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.055e+01 1.683e+02 1.964e+02 2.424e+02 3.625e+02, threshold=3.928e+02, percent-clipped=0.0
2023-03-27 02:27:15,984 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=121468.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:27:20,147 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3988, 2.9092, 2.7761, 1.2734, 3.0225, 2.2568, 0.7831, 1.8683], device='cuda:2'), covar=tensor([0.2473, 0.2195, 0.1896, 0.3669, 0.1428, 0.1206, 0.4206, 0.1761], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0178, 0.0159, 0.0130, 0.0159, 0.0122, 0.0148, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-27 02:27:25,671 INFO [finetune.py:976] (2/7) Epoch 22, batch 1200, loss[loss=0.1443, simple_loss=0.2143, pruned_loss=0.03711, over 4904.00 frames. ], tot_loss[loss=0.176, simple_loss=0.2476, pruned_loss=0.05225, over 951360.52 frames. ], batch size: 35, lr: 3.15e-03, grad_scale: 64.0
2023-03-27 02:27:44,333 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8851, 1.6978, 1.5580, 1.3219, 1.6745, 1.6986, 1.6286, 2.2461], device='cuda:2'), covar=tensor([0.4059, 0.4015, 0.3295, 0.3622, 0.3955, 0.2423, 0.3318, 0.1865], device='cuda:2'), in_proj_covar=tensor([0.0289, 0.0262, 0.0232, 0.0277, 0.0255, 0.0225, 0.0254, 0.0235], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 02:27:49,695 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=121520.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:27:57,407 INFO [finetune.py:976] (2/7) Epoch 22, batch 1250, loss[loss=0.1754, simple_loss=0.2359, pruned_loss=0.05749, over 4749.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2458, pruned_loss=0.052, over 952906.90 frames. ], batch size: 28, lr: 3.15e-03, grad_scale: 64.0
], batch size: 28, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:28:26,677 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.015e+02 1.487e+02 1.790e+02 2.203e+02 4.512e+02, threshold=3.581e+02, percent-clipped=1.0 2023-03-27 02:28:40,600 INFO [finetune.py:976] (2/7) Epoch 22, batch 1300, loss[loss=0.1501, simple_loss=0.221, pruned_loss=0.03959, over 4790.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2443, pruned_loss=0.05206, over 954644.63 frames. ], batch size: 29, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:29:08,212 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0379, 1.9218, 1.6867, 1.7109, 1.8293, 1.8573, 1.8304, 2.5153], device='cuda:2'), covar=tensor([0.4170, 0.4013, 0.3349, 0.3754, 0.3849, 0.2639, 0.3692, 0.1835], device='cuda:2'), in_proj_covar=tensor([0.0289, 0.0263, 0.0233, 0.0278, 0.0256, 0.0225, 0.0254, 0.0235], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:29:39,000 INFO [finetune.py:976] (2/7) Epoch 22, batch 1350, loss[loss=0.1997, simple_loss=0.2643, pruned_loss=0.06753, over 4901.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2439, pruned_loss=0.05199, over 954969.17 frames. ], batch size: 43, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:30:02,099 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.135e+02 1.546e+02 1.913e+02 2.289e+02 6.231e+02, threshold=3.826e+02, percent-clipped=1.0 2023-03-27 02:30:02,194 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=121659.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:30:03,606 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.95 vs. limit=2.0 2023-03-27 02:30:04,670 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6117, 2.4470, 2.0699, 1.0182, 2.1715, 1.8693, 1.7550, 2.2013], device='cuda:2'), covar=tensor([0.0821, 0.0698, 0.1719, 0.2178, 0.1465, 0.2313, 0.2205, 0.0969], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0193, 0.0200, 0.0183, 0.0210, 0.0209, 0.0224, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:30:07,017 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=121667.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 02:30:16,102 INFO [finetune.py:976] (2/7) Epoch 22, batch 1400, loss[loss=0.2295, simple_loss=0.3059, pruned_loss=0.07658, over 4922.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.2474, pruned_loss=0.05298, over 955161.33 frames. ], batch size: 42, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:30:34,337 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=121707.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:30:49,373 INFO [finetune.py:976] (2/7) Epoch 22, batch 1450, loss[loss=0.149, simple_loss=0.2219, pruned_loss=0.03806, over 4776.00 frames. ], tot_loss[loss=0.1789, simple_loss=0.25, pruned_loss=0.05393, over 954226.74 frames. 
], batch size: 26, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:31:08,591 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.158e+02 1.585e+02 1.838e+02 2.176e+02 4.319e+02, threshold=3.676e+02, percent-clipped=1.0 2023-03-27 02:31:11,093 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=121763.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:31:22,461 INFO [finetune.py:976] (2/7) Epoch 22, batch 1500, loss[loss=0.2458, simple_loss=0.303, pruned_loss=0.09432, over 4739.00 frames. ], tot_loss[loss=0.1797, simple_loss=0.2514, pruned_loss=0.05404, over 954601.22 frames. ], batch size: 59, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:31:49,145 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=121820.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:31:56,350 INFO [finetune.py:976] (2/7) Epoch 22, batch 1550, loss[loss=0.1551, simple_loss=0.2343, pruned_loss=0.03795, over 4794.00 frames. ], tot_loss[loss=0.1805, simple_loss=0.252, pruned_loss=0.05451, over 954000.08 frames. ], batch size: 25, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:32:15,624 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.105e+02 1.581e+02 1.856e+02 2.152e+02 3.350e+02, threshold=3.712e+02, percent-clipped=0.0 2023-03-27 02:32:21,172 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=121868.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:32:29,537 INFO [finetune.py:976] (2/7) Epoch 22, batch 1600, loss[loss=0.1513, simple_loss=0.2209, pruned_loss=0.04085, over 4769.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2495, pruned_loss=0.05362, over 955006.41 frames. ], batch size: 26, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:32:29,649 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4544, 2.6661, 2.5193, 1.7738, 2.4105, 2.7198, 2.7642, 2.2181], device='cuda:2'), covar=tensor([0.0607, 0.0533, 0.0641, 0.0869, 0.0905, 0.0615, 0.0567, 0.0992], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0137, 0.0140, 0.0121, 0.0127, 0.0139, 0.0141, 0.0164], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:32:34,501 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.3810, 1.6366, 1.6249, 0.8773, 1.7150, 1.9189, 1.8569, 1.4853], device='cuda:2'), covar=tensor([0.0963, 0.0628, 0.0523, 0.0537, 0.0416, 0.0629, 0.0354, 0.0676], device='cuda:2'), in_proj_covar=tensor([0.0125, 0.0152, 0.0128, 0.0125, 0.0134, 0.0131, 0.0144, 0.0150], device='cuda:2'), out_proj_covar=tensor([9.1203e-05, 1.0974e-04, 9.1743e-05, 8.7854e-05, 9.4147e-05, 9.3810e-05, 1.0298e-04, 1.0756e-04], device='cuda:2') 2023-03-27 02:32:56,061 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.3574, 3.8245, 3.9583, 4.1855, 4.1361, 3.9161, 4.4509, 1.2452], device='cuda:2'), covar=tensor([0.0763, 0.0879, 0.0969, 0.1047, 0.1165, 0.1632, 0.0731, 0.6315], device='cuda:2'), in_proj_covar=tensor([0.0352, 0.0246, 0.0282, 0.0291, 0.0335, 0.0285, 0.0306, 0.0301], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:33:02,673 INFO [finetune.py:976] (2/7) Epoch 22, batch 1650, loss[loss=0.1744, simple_loss=0.2363, pruned_loss=0.05626, over 4898.00 frames. ], tot_loss[loss=0.1762, simple_loss=0.2468, pruned_loss=0.05284, over 955973.35 frames. 
], batch size: 46, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:33:13,734 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=121950.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:33:22,466 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.023e+02 1.553e+02 1.837e+02 2.107e+02 3.976e+02, threshold=3.675e+02, percent-clipped=1.0 2023-03-27 02:33:24,326 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8797, 1.8948, 1.6429, 2.0449, 2.3579, 2.1263, 1.6364, 1.5343], device='cuda:2'), covar=tensor([0.2223, 0.1900, 0.1913, 0.1552, 0.1673, 0.1164, 0.2338, 0.1933], device='cuda:2'), in_proj_covar=tensor([0.0243, 0.0209, 0.0213, 0.0194, 0.0242, 0.0188, 0.0217, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:33:26,614 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0258, 1.6901, 2.2880, 1.6137, 2.0859, 2.2472, 1.6457, 2.3011], device='cuda:2'), covar=tensor([0.1156, 0.1849, 0.1463, 0.1983, 0.0895, 0.1394, 0.2430, 0.0827], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0205, 0.0191, 0.0189, 0.0174, 0.0213, 0.0216, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:33:33,839 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=121967.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 02:33:43,473 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.3672, 4.6507, 4.8994, 5.1808, 5.0786, 4.8460, 5.4847, 1.6554], device='cuda:2'), covar=tensor([0.0697, 0.0737, 0.0720, 0.0953, 0.1207, 0.1455, 0.0459, 0.5972], device='cuda:2'), in_proj_covar=tensor([0.0351, 0.0244, 0.0281, 0.0290, 0.0334, 0.0284, 0.0305, 0.0300], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:33:46,459 INFO [finetune.py:976] (2/7) Epoch 22, batch 1700, loss[loss=0.1435, simple_loss=0.224, pruned_loss=0.03149, over 4749.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2435, pruned_loss=0.05157, over 956081.16 frames. ], batch size: 59, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:33:50,205 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4588, 1.4151, 1.8989, 2.9160, 1.9648, 2.1450, 0.8834, 2.4765], device='cuda:2'), covar=tensor([0.1666, 0.1378, 0.1184, 0.0623, 0.0807, 0.1256, 0.1749, 0.0534], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0133, 0.0165, 0.0100, 0.0137, 0.0124, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 02:34:13,903 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=122011.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:34:16,849 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=122015.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 02:34:36,868 INFO [finetune.py:976] (2/7) Epoch 22, batch 1750, loss[loss=0.1398, simple_loss=0.2108, pruned_loss=0.03436, over 4784.00 frames. ], tot_loss[loss=0.1738, simple_loss=0.2446, pruned_loss=0.05153, over 958061.20 frames. 
], batch size: 28, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:35:06,505 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.061e+02 1.629e+02 1.889e+02 2.189e+02 5.095e+02, threshold=3.778e+02, percent-clipped=2.0 2023-03-27 02:35:10,503 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=122063.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:35:22,845 INFO [finetune.py:976] (2/7) Epoch 22, batch 1800, loss[loss=0.1911, simple_loss=0.2701, pruned_loss=0.05605, over 4884.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.249, pruned_loss=0.05258, over 959917.85 frames. ], batch size: 32, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:35:41,159 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=122111.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:35:46,331 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=122117.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:35:56,274 INFO [finetune.py:976] (2/7) Epoch 22, batch 1850, loss[loss=0.2303, simple_loss=0.2933, pruned_loss=0.08361, over 4905.00 frames. ], tot_loss[loss=0.1782, simple_loss=0.2507, pruned_loss=0.05284, over 959189.58 frames. ], batch size: 37, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:36:12,755 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.044e+01 1.577e+02 1.909e+02 2.256e+02 5.766e+02, threshold=3.818e+02, percent-clipped=1.0 2023-03-27 02:36:27,302 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=122178.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 02:36:29,612 INFO [finetune.py:976] (2/7) Epoch 22, batch 1900, loss[loss=0.1578, simple_loss=0.2212, pruned_loss=0.04718, over 4769.00 frames. ], tot_loss[loss=0.1782, simple_loss=0.2508, pruned_loss=0.05276, over 958207.46 frames. ], batch size: 26, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:36:37,619 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=122195.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:37:03,560 INFO [finetune.py:976] (2/7) Epoch 22, batch 1950, loss[loss=0.1728, simple_loss=0.2461, pruned_loss=0.04979, over 4773.00 frames. ], tot_loss[loss=0.177, simple_loss=0.2499, pruned_loss=0.05203, over 958250.42 frames. ], batch size: 28, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:37:05,017 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-27 02:37:18,293 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=122256.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:37:19,977 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.844e+01 1.567e+02 1.786e+02 2.225e+02 4.203e+02, threshold=3.573e+02, percent-clipped=2.0 2023-03-27 02:37:33,478 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4677, 2.4561, 2.4104, 1.6639, 2.2927, 2.6240, 2.6597, 2.1253], device='cuda:2'), covar=tensor([0.0553, 0.0579, 0.0676, 0.0862, 0.0874, 0.0719, 0.0591, 0.1003], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0137, 0.0141, 0.0121, 0.0126, 0.0140, 0.0140, 0.0164], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:37:36,890 INFO [finetune.py:976] (2/7) Epoch 22, batch 2000, loss[loss=0.1677, simple_loss=0.2448, pruned_loss=0.04529, over 4795.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.2476, pruned_loss=0.05157, over 957280.36 frames. 
], batch size: 29, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:37:51,526 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=122306.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:37:59,446 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=122319.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:38:10,018 INFO [finetune.py:976] (2/7) Epoch 22, batch 2050, loss[loss=0.1404, simple_loss=0.2168, pruned_loss=0.03196, over 4912.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2447, pruned_loss=0.05127, over 959147.07 frames. ], batch size: 36, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:38:21,746 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0 2023-03-27 02:38:24,598 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1402, 1.8126, 2.1749, 2.1209, 1.8585, 1.8880, 2.0778, 2.0462], device='cuda:2'), covar=tensor([0.3783, 0.3810, 0.2879, 0.3850, 0.4666, 0.3764, 0.4385, 0.2766], device='cuda:2'), in_proj_covar=tensor([0.0258, 0.0243, 0.0264, 0.0284, 0.0283, 0.0259, 0.0292, 0.0246], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:38:26,819 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.990e+01 1.410e+02 1.787e+02 2.088e+02 3.673e+02, threshold=3.574e+02, percent-clipped=1.0 2023-03-27 02:38:32,443 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4205, 1.5289, 1.7045, 0.8942, 1.6778, 1.7429, 1.8078, 1.5373], device='cuda:2'), covar=tensor([0.0817, 0.0716, 0.0475, 0.0496, 0.0470, 0.0769, 0.0366, 0.0652], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0150, 0.0127, 0.0123, 0.0132, 0.0131, 0.0142, 0.0149], device='cuda:2'), out_proj_covar=tensor([8.9912e-05, 1.0857e-04, 9.0949e-05, 8.7036e-05, 9.3184e-05, 9.3290e-05, 1.0204e-04, 1.0645e-04], device='cuda:2') 2023-03-27 02:38:43,034 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=122380.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:38:44,602 INFO [finetune.py:976] (2/7) Epoch 22, batch 2100, loss[loss=0.1387, simple_loss=0.2104, pruned_loss=0.03353, over 4870.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2437, pruned_loss=0.05127, over 957702.48 frames. ], batch size: 34, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:39:28,192 INFO [finetune.py:976] (2/7) Epoch 22, batch 2150, loss[loss=0.2041, simple_loss=0.2667, pruned_loss=0.07078, over 4862.00 frames. ], tot_loss[loss=0.1761, simple_loss=0.2466, pruned_loss=0.05282, over 956963.01 frames. ], batch size: 31, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:39:37,109 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-27 02:40:03,769 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.048e+02 1.591e+02 1.910e+02 2.352e+02 5.051e+02, threshold=3.819e+02, percent-clipped=3.0 2023-03-27 02:40:13,697 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=122468.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:40:16,629 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=122473.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 02:40:26,431 INFO [finetune.py:976] (2/7) Epoch 22, batch 2200, loss[loss=0.2014, simple_loss=0.2691, pruned_loss=0.06689, over 4862.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.247, pruned_loss=0.05277, over 953477.38 frames. 
], batch size: 31, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:40:31,345 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.73 vs. limit=2.0 2023-03-27 02:40:42,900 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5944, 1.3487, 2.1512, 1.6647, 1.6543, 3.9081, 1.2364, 1.4924], device='cuda:2'), covar=tensor([0.1123, 0.2303, 0.1537, 0.1204, 0.1909, 0.0242, 0.2060, 0.2404], device='cuda:2'), in_proj_covar=tensor([0.0074, 0.0081, 0.0073, 0.0076, 0.0092, 0.0081, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 02:40:56,788 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=122529.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:40:58,943 INFO [finetune.py:976] (2/7) Epoch 22, batch 2250, loss[loss=0.1948, simple_loss=0.2582, pruned_loss=0.06565, over 4059.00 frames. ], tot_loss[loss=0.1764, simple_loss=0.2475, pruned_loss=0.05268, over 952837.03 frames. ], batch size: 65, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:41:12,978 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=122551.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:41:17,776 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.977e+01 1.468e+02 1.830e+02 2.095e+02 3.153e+02, threshold=3.659e+02, percent-clipped=0.0 2023-03-27 02:41:31,700 INFO [finetune.py:976] (2/7) Epoch 22, batch 2300, loss[loss=0.1622, simple_loss=0.2303, pruned_loss=0.04711, over 4922.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.2473, pruned_loss=0.05202, over 954988.36 frames. ], batch size: 42, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:41:49,478 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=122606.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:42:05,213 INFO [finetune.py:976] (2/7) Epoch 22, batch 2350, loss[loss=0.1896, simple_loss=0.2411, pruned_loss=0.0691, over 4785.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.2465, pruned_loss=0.05211, over 952196.34 frames. ], batch size: 51, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:42:11,173 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7830, 1.6814, 1.4882, 1.8747, 2.0146, 1.8742, 1.3460, 1.4777], device='cuda:2'), covar=tensor([0.2180, 0.1941, 0.1897, 0.1525, 0.1540, 0.1159, 0.2384, 0.1799], device='cuda:2'), in_proj_covar=tensor([0.0247, 0.0211, 0.0215, 0.0197, 0.0244, 0.0190, 0.0218, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:42:21,478 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=122654.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:42:24,441 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.047e+02 1.417e+02 1.625e+02 2.018e+02 3.172e+02, threshold=3.250e+02, percent-clipped=0.0 2023-03-27 02:42:34,177 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=122675.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:42:38,346 INFO [finetune.py:976] (2/7) Epoch 22, batch 2400, loss[loss=0.1712, simple_loss=0.2408, pruned_loss=0.05075, over 4930.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.244, pruned_loss=0.05116, over 953285.90 frames. ], batch size: 38, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:43:00,885 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. 
limit=2.0 2023-03-27 02:43:11,471 INFO [finetune.py:976] (2/7) Epoch 22, batch 2450, loss[loss=0.2169, simple_loss=0.2839, pruned_loss=0.07497, over 4807.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2414, pruned_loss=0.0498, over 953142.06 frames. ], batch size: 45, lr: 3.14e-03, grad_scale: 64.0 2023-03-27 02:43:31,104 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.081e+02 1.588e+02 1.841e+02 2.130e+02 2.968e+02, threshold=3.682e+02, percent-clipped=0.0 2023-03-27 02:43:39,663 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=122773.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 02:43:43,943 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2121, 2.0035, 1.5279, 0.7008, 1.6957, 1.8374, 1.6541, 1.8043], device='cuda:2'), covar=tensor([0.1061, 0.0775, 0.1552, 0.1886, 0.1219, 0.2107, 0.2180, 0.0879], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0191, 0.0197, 0.0180, 0.0208, 0.0205, 0.0221, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:43:45,020 INFO [finetune.py:976] (2/7) Epoch 22, batch 2500, loss[loss=0.2037, simple_loss=0.2803, pruned_loss=0.06353, over 4931.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2415, pruned_loss=0.05001, over 953234.93 frames. ], batch size: 42, lr: 3.14e-03, grad_scale: 64.0 2023-03-27 02:43:47,569 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.0904, 1.8044, 1.8913, 0.8581, 2.1099, 2.3795, 2.0503, 1.7497], device='cuda:2'), covar=tensor([0.0965, 0.0872, 0.0511, 0.0701, 0.0524, 0.0586, 0.0461, 0.0783], device='cuda:2'), in_proj_covar=tensor([0.0124, 0.0151, 0.0128, 0.0124, 0.0132, 0.0131, 0.0142, 0.0149], device='cuda:2'), out_proj_covar=tensor([9.0198e-05, 1.0880e-04, 9.1497e-05, 8.7222e-05, 9.3029e-05, 9.3505e-05, 1.0192e-04, 1.0683e-04], device='cuda:2') 2023-03-27 02:44:21,540 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=122821.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:44:23,364 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=122824.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:44:28,112 INFO [finetune.py:976] (2/7) Epoch 22, batch 2550, loss[loss=0.1771, simple_loss=0.2525, pruned_loss=0.05081, over 4744.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2437, pruned_loss=0.05004, over 954941.13 frames. 
], batch size: 59, lr: 3.14e-03, grad_scale: 64.0 2023-03-27 02:44:42,575 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=122851.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:44:53,792 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.164e+02 1.566e+02 1.889e+02 2.331e+02 3.878e+02, threshold=3.777e+02, percent-clipped=1.0 2023-03-27 02:45:07,316 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=122872.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:45:13,613 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4936, 1.6259, 1.5798, 0.8923, 1.7183, 1.8741, 1.9348, 1.4851], device='cuda:2'), covar=tensor([0.1056, 0.0647, 0.0527, 0.0655, 0.0622, 0.0703, 0.0306, 0.0832], device='cuda:2'), in_proj_covar=tensor([0.0124, 0.0150, 0.0128, 0.0124, 0.0132, 0.0131, 0.0142, 0.0149], device='cuda:2'), out_proj_covar=tensor([9.0143e-05, 1.0875e-04, 9.1351e-05, 8.7155e-05, 9.2907e-05, 9.3493e-05, 1.0182e-04, 1.0689e-04], device='cuda:2') 2023-03-27 02:45:22,081 INFO [finetune.py:976] (2/7) Epoch 22, batch 2600, loss[loss=0.1816, simple_loss=0.2614, pruned_loss=0.05091, over 4821.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2462, pruned_loss=0.05135, over 952297.32 frames. ], batch size: 38, lr: 3.14e-03, grad_scale: 64.0 2023-03-27 02:45:24,796 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-27 02:45:40,401 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=122899.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:46:02,174 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.2816, 1.5109, 1.5877, 0.8636, 1.5830, 1.7911, 1.8657, 1.4273], device='cuda:2'), covar=tensor([0.0957, 0.0623, 0.0567, 0.0574, 0.0479, 0.0826, 0.0333, 0.0736], device='cuda:2'), in_proj_covar=tensor([0.0124, 0.0151, 0.0128, 0.0124, 0.0132, 0.0131, 0.0142, 0.0150], device='cuda:2'), out_proj_covar=tensor([9.0315e-05, 1.0882e-04, 9.1482e-05, 8.7231e-05, 9.3152e-05, 9.3561e-05, 1.0186e-04, 1.0725e-04], device='cuda:2') 2023-03-27 02:46:02,677 INFO [finetune.py:976] (2/7) Epoch 22, batch 2650, loss[loss=0.1664, simple_loss=0.2569, pruned_loss=0.03792, over 4745.00 frames. ], tot_loss[loss=0.1762, simple_loss=0.2482, pruned_loss=0.05207, over 953562.41 frames. 
], batch size: 27, lr: 3.14e-03, grad_scale: 64.0 2023-03-27 02:46:03,432 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=122933.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:46:04,662 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5354, 1.5068, 1.2852, 1.4760, 1.8401, 1.7652, 1.4796, 1.3376], device='cuda:2'), covar=tensor([0.0355, 0.0296, 0.0618, 0.0299, 0.0202, 0.0406, 0.0324, 0.0411], device='cuda:2'), in_proj_covar=tensor([0.0097, 0.0105, 0.0141, 0.0109, 0.0098, 0.0110, 0.0099, 0.0111], device='cuda:2'), out_proj_covar=tensor([7.5284e-05, 8.0741e-05, 1.1043e-04, 8.3961e-05, 7.5892e-05, 8.1213e-05, 7.3480e-05, 8.4552e-05], device='cuda:2') 2023-03-27 02:46:19,588 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.898e+01 1.493e+02 1.785e+02 2.135e+02 3.458e+02, threshold=3.570e+02, percent-clipped=0.0 2023-03-27 02:46:31,727 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=122975.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:46:35,902 INFO [finetune.py:976] (2/7) Epoch 22, batch 2700, loss[loss=0.1955, simple_loss=0.2565, pruned_loss=0.06726, over 4915.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.2473, pruned_loss=0.05165, over 952566.55 frames. ], batch size: 38, lr: 3.14e-03, grad_scale: 64.0 2023-03-27 02:46:47,126 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7529, 1.6664, 2.0329, 1.4385, 1.7820, 2.0412, 1.5632, 2.1551], device='cuda:2'), covar=tensor([0.1397, 0.2076, 0.1422, 0.1795, 0.1011, 0.1400, 0.2842, 0.0860], device='cuda:2'), in_proj_covar=tensor([0.0194, 0.0208, 0.0193, 0.0191, 0.0177, 0.0215, 0.0219, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:47:04,318 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=123023.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:47:09,737 INFO [finetune.py:976] (2/7) Epoch 22, batch 2750, loss[loss=0.1598, simple_loss=0.2283, pruned_loss=0.04569, over 4825.00 frames. ], tot_loss[loss=0.1735, simple_loss=0.2446, pruned_loss=0.05119, over 952924.16 frames. ], batch size: 33, lr: 3.14e-03, grad_scale: 64.0 2023-03-27 02:47:24,427 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4538, 1.5978, 2.2617, 1.8362, 1.9087, 4.0104, 1.6310, 1.8253], device='cuda:2'), covar=tensor([0.0947, 0.1636, 0.1167, 0.0913, 0.1362, 0.0188, 0.1298, 0.1607], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0077, 0.0092, 0.0081, 0.0086, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 02:47:26,634 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.062e+02 1.465e+02 1.704e+02 2.045e+02 3.535e+02, threshold=3.409e+02, percent-clipped=0.0 2023-03-27 02:47:42,980 INFO [finetune.py:976] (2/7) Epoch 22, batch 2800, loss[loss=0.1527, simple_loss=0.2231, pruned_loss=0.04115, over 4859.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2415, pruned_loss=0.04999, over 953427.96 frames. ], batch size: 44, lr: 3.14e-03, grad_scale: 64.0 2023-03-27 02:47:54,101 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.58 vs. 
limit=2.0 2023-03-27 02:48:10,889 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=123124.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:48:16,588 INFO [finetune.py:976] (2/7) Epoch 22, batch 2850, loss[loss=0.192, simple_loss=0.2542, pruned_loss=0.06494, over 4758.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2409, pruned_loss=0.05031, over 953446.95 frames. ], batch size: 54, lr: 3.14e-03, grad_scale: 32.0 2023-03-27 02:48:27,487 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5592, 1.4347, 2.0658, 3.0974, 2.1890, 2.0982, 1.0188, 2.6395], device='cuda:2'), covar=tensor([0.1705, 0.1406, 0.1223, 0.0620, 0.0760, 0.1613, 0.1754, 0.0529], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0115, 0.0134, 0.0164, 0.0100, 0.0137, 0.0125, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 02:48:33,487 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.071e+02 1.606e+02 1.919e+02 2.375e+02 6.875e+02, threshold=3.839e+02, percent-clipped=7.0 2023-03-27 02:48:42,230 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=123172.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:48:49,677 INFO [finetune.py:976] (2/7) Epoch 22, batch 2900, loss[loss=0.2044, simple_loss=0.2736, pruned_loss=0.06759, over 4837.00 frames. ], tot_loss[loss=0.1725, simple_loss=0.2433, pruned_loss=0.05085, over 954428.65 frames. ], batch size: 33, lr: 3.14e-03, grad_scale: 32.0 2023-03-27 02:49:13,008 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9246, 1.7966, 1.6332, 1.3669, 1.9083, 1.7265, 1.7606, 1.9330], device='cuda:2'), covar=tensor([0.1350, 0.1663, 0.2833, 0.2327, 0.2397, 0.1655, 0.2245, 0.1694], device='cuda:2'), in_proj_covar=tensor([0.0188, 0.0189, 0.0237, 0.0255, 0.0249, 0.0205, 0.0215, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:49:22,637 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=123228.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:49:25,027 INFO [finetune.py:976] (2/7) Epoch 22, batch 2950, loss[loss=0.172, simple_loss=0.2569, pruned_loss=0.04358, over 4935.00 frames. ], tot_loss[loss=0.1758, simple_loss=0.2475, pruned_loss=0.05207, over 955074.16 frames. ], batch size: 38, lr: 3.14e-03, grad_scale: 32.0 2023-03-27 02:49:42,391 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.515e+01 1.556e+02 1.822e+02 2.269e+02 3.192e+02, threshold=3.643e+02, percent-clipped=0.0 2023-03-27 02:49:47,301 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8384, 1.8023, 1.5262, 1.9275, 2.3862, 1.9713, 1.5855, 1.5008], device='cuda:2'), covar=tensor([0.2297, 0.1905, 0.1995, 0.1596, 0.1529, 0.1135, 0.2334, 0.1977], device='cuda:2'), in_proj_covar=tensor([0.0245, 0.0210, 0.0213, 0.0196, 0.0242, 0.0189, 0.0217, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:49:59,812 INFO [finetune.py:976] (2/7) Epoch 22, batch 3000, loss[loss=0.1527, simple_loss=0.2284, pruned_loss=0.03849, over 4908.00 frames. ], tot_loss[loss=0.1759, simple_loss=0.2478, pruned_loss=0.052, over 955243.97 frames. 
], batch size: 42, lr: 3.14e-03, grad_scale: 32.0 2023-03-27 02:49:59,812 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-27 02:50:03,356 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6953, 1.5695, 1.5800, 1.6179, 1.0217, 3.0457, 1.1797, 1.5914], device='cuda:2'), covar=tensor([0.3238, 0.2357, 0.2030, 0.2293, 0.1802, 0.0270, 0.2503, 0.1250], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0117, 0.0122, 0.0124, 0.0114, 0.0096, 0.0095, 0.0096], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 02:50:15,178 INFO [finetune.py:1010] (2/7) Epoch 22, validation: loss=0.1575, simple_loss=0.2256, pruned_loss=0.04471, over 2265189.00 frames. 2023-03-27 02:50:15,178 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB 2023-03-27 02:50:47,933 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=123314.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:51:04,457 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0 2023-03-27 02:51:07,174 INFO [finetune.py:976] (2/7) Epoch 22, batch 3050, loss[loss=0.2859, simple_loss=0.332, pruned_loss=0.1199, over 4207.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.249, pruned_loss=0.0527, over 954075.44 frames. ], batch size: 66, lr: 3.14e-03, grad_scale: 32.0 2023-03-27 02:51:23,209 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-27 02:51:24,938 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.73 vs. limit=5.0 2023-03-27 02:51:27,199 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.922e+01 1.647e+02 1.960e+02 2.377e+02 4.726e+02, threshold=3.920e+02, percent-clipped=6.0 2023-03-27 02:51:36,576 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=123375.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:51:41,108 INFO [finetune.py:976] (2/7) Epoch 22, batch 3100, loss[loss=0.167, simple_loss=0.2376, pruned_loss=0.04821, over 4853.00 frames. ], tot_loss[loss=0.1758, simple_loss=0.2472, pruned_loss=0.05218, over 955701.41 frames. ], batch size: 31, lr: 3.14e-03, grad_scale: 32.0 2023-03-27 02:52:11,756 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.58 vs. limit=5.0 2023-03-27 02:52:12,232 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=123428.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:52:14,579 INFO [finetune.py:976] (2/7) Epoch 22, batch 3150, loss[loss=0.1719, simple_loss=0.2383, pruned_loss=0.0527, over 4758.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2456, pruned_loss=0.05239, over 958346.56 frames. 
], batch size: 28, lr: 3.14e-03, grad_scale: 32.0 2023-03-27 02:52:20,511 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8335, 1.6980, 1.6308, 1.8029, 1.3117, 3.8687, 1.5879, 2.1025], device='cuda:2'), covar=tensor([0.3120, 0.2452, 0.2146, 0.2317, 0.1775, 0.0170, 0.2354, 0.1155], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0117, 0.0122, 0.0124, 0.0114, 0.0096, 0.0095, 0.0096], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 02:52:32,139 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4092, 1.4864, 1.5813, 0.8314, 1.5974, 1.8064, 1.8515, 1.4625], device='cuda:2'), covar=tensor([0.0974, 0.0670, 0.0522, 0.0589, 0.0474, 0.0718, 0.0356, 0.0645], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0149, 0.0127, 0.0122, 0.0132, 0.0130, 0.0141, 0.0148], device='cuda:2'), out_proj_covar=tensor([8.9640e-05, 1.0783e-04, 9.0974e-05, 8.6168e-05, 9.2567e-05, 9.2822e-05, 1.0089e-04, 1.0616e-04], device='cuda:2') 2023-03-27 02:52:34,405 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.134e+02 1.485e+02 1.808e+02 2.093e+02 3.688e+02, threshold=3.617e+02, percent-clipped=0.0 2023-03-27 02:52:35,780 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4755, 2.2599, 2.7852, 1.7957, 2.3974, 2.7180, 2.0420, 2.9171], device='cuda:2'), covar=tensor([0.1332, 0.1734, 0.1495, 0.2213, 0.0988, 0.1422, 0.2672, 0.0805], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0206, 0.0190, 0.0188, 0.0174, 0.0213, 0.0216, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:52:40,035 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2810, 1.1382, 1.1690, 1.1308, 1.5208, 1.4230, 1.2860, 1.1320], device='cuda:2'), covar=tensor([0.0409, 0.0339, 0.0681, 0.0351, 0.0242, 0.0609, 0.0334, 0.0435], device='cuda:2'), in_proj_covar=tensor([0.0098, 0.0106, 0.0142, 0.0110, 0.0098, 0.0110, 0.0100, 0.0111], device='cuda:2'), out_proj_covar=tensor([7.5904e-05, 8.1444e-05, 1.1134e-04, 8.4611e-05, 7.6372e-05, 8.1177e-05, 7.4228e-05, 8.4744e-05], device='cuda:2') 2023-03-27 02:52:47,863 INFO [finetune.py:976] (2/7) Epoch 22, batch 3200, loss[loss=0.1658, simple_loss=0.2346, pruned_loss=0.04854, over 4823.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2425, pruned_loss=0.05118, over 955414.85 frames. ], batch size: 41, lr: 3.14e-03, grad_scale: 32.0 2023-03-27 02:52:52,354 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=123489.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:53:18,980 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=123528.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:53:21,318 INFO [finetune.py:976] (2/7) Epoch 22, batch 3250, loss[loss=0.1582, simple_loss=0.241, pruned_loss=0.03763, over 4789.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2428, pruned_loss=0.05118, over 957345.68 frames. ], batch size: 29, lr: 3.14e-03, grad_scale: 32.0 2023-03-27 02:53:31,215 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. 
limit=2.0 2023-03-27 02:53:41,211 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.285e+01 1.537e+02 1.740e+02 2.121e+02 3.629e+02, threshold=3.481e+02, percent-clipped=1.0 2023-03-27 02:53:41,299 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7439, 3.7289, 3.6056, 1.6323, 3.8632, 3.0126, 0.7028, 2.7447], device='cuda:2'), covar=tensor([0.2440, 0.2087, 0.1534, 0.3601, 0.1107, 0.1056, 0.4693, 0.1538], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0176, 0.0157, 0.0129, 0.0160, 0.0122, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 02:53:51,147 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=123576.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:53:53,652 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1306, 2.0702, 1.6426, 1.9837, 2.1048, 1.7835, 2.3598, 2.1321], device='cuda:2'), covar=tensor([0.1287, 0.1967, 0.2963, 0.2517, 0.2488, 0.1672, 0.2934, 0.1683], device='cuda:2'), in_proj_covar=tensor([0.0187, 0.0188, 0.0234, 0.0253, 0.0247, 0.0204, 0.0214, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:53:54,732 INFO [finetune.py:976] (2/7) Epoch 22, batch 3300, loss[loss=0.1547, simple_loss=0.2407, pruned_loss=0.03439, over 4765.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2461, pruned_loss=0.0516, over 957516.75 frames. ], batch size: 28, lr: 3.14e-03, grad_scale: 32.0 2023-03-27 02:54:03,841 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.1714, 3.6605, 3.8403, 4.0366, 3.9209, 3.6176, 4.2769, 1.2279], device='cuda:2'), covar=tensor([0.0800, 0.0848, 0.0918, 0.0977, 0.1432, 0.1708, 0.0709, 0.6079], device='cuda:2'), in_proj_covar=tensor([0.0353, 0.0246, 0.0282, 0.0293, 0.0338, 0.0287, 0.0307, 0.0302], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:54:14,290 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.53 vs. limit=2.0 2023-03-27 02:54:28,299 INFO [finetune.py:976] (2/7) Epoch 22, batch 3350, loss[loss=0.2116, simple_loss=0.274, pruned_loss=0.0746, over 4819.00 frames. ], tot_loss[loss=0.1764, simple_loss=0.2484, pruned_loss=0.05217, over 958120.26 frames. ], batch size: 33, lr: 3.14e-03, grad_scale: 32.0 2023-03-27 02:54:47,671 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.106e+02 1.542e+02 1.809e+02 2.055e+02 5.285e+02, threshold=3.617e+02, percent-clipped=2.0 2023-03-27 02:54:53,807 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.03 vs. limit=5.0 2023-03-27 02:54:54,270 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=123670.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:55:01,512 INFO [finetune.py:976] (2/7) Epoch 22, batch 3400, loss[loss=0.1921, simple_loss=0.268, pruned_loss=0.05812, over 4846.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2505, pruned_loss=0.05315, over 956487.34 frames. 
], batch size: 44, lr: 3.14e-03, grad_scale: 32.0 2023-03-27 02:55:01,586 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.0333, 4.1092, 3.9097, 2.2166, 4.2461, 3.2156, 1.4704, 3.0797], device='cuda:2'), covar=tensor([0.2150, 0.1869, 0.1447, 0.2984, 0.0907, 0.0945, 0.3722, 0.1198], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0176, 0.0157, 0.0129, 0.0160, 0.0122, 0.0147, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 02:55:54,857 INFO [finetune.py:976] (2/7) Epoch 22, batch 3450, loss[loss=0.1588, simple_loss=0.2188, pruned_loss=0.04938, over 4020.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.25, pruned_loss=0.05313, over 955490.19 frames. ], batch size: 17, lr: 3.14e-03, grad_scale: 32.0 2023-03-27 02:55:56,775 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7040, 1.6502, 2.3229, 2.0567, 1.9921, 4.4802, 1.8320, 1.8622], device='cuda:2'), covar=tensor([0.0846, 0.1703, 0.1033, 0.0914, 0.1437, 0.0177, 0.1280, 0.1666], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0081, 0.0086, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 02:56:26,817 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.145e+02 1.545e+02 1.898e+02 2.347e+02 3.548e+02, threshold=3.797e+02, percent-clipped=0.0 2023-03-27 02:56:45,230 INFO [finetune.py:976] (2/7) Epoch 22, batch 3500, loss[loss=0.1818, simple_loss=0.2441, pruned_loss=0.05982, over 4911.00 frames. ], tot_loss[loss=0.1775, simple_loss=0.2485, pruned_loss=0.05328, over 955129.54 frames. ], batch size: 46, lr: 3.14e-03, grad_scale: 32.0 2023-03-27 02:56:46,495 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=123784.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:57:01,810 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.70 vs. limit=5.0 2023-03-27 02:57:07,424 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.4649, 2.9973, 2.7883, 1.4830, 2.9559, 2.4653, 2.3454, 2.7156], device='cuda:2'), covar=tensor([0.0849, 0.0822, 0.1791, 0.2210, 0.1627, 0.2135, 0.2013, 0.1164], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0191, 0.0198, 0.0182, 0.0210, 0.0206, 0.0222, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:57:18,476 INFO [finetune.py:976] (2/7) Epoch 22, batch 3550, loss[loss=0.1391, simple_loss=0.2075, pruned_loss=0.03538, over 4817.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2454, pruned_loss=0.05204, over 956212.76 frames. ], batch size: 25, lr: 3.14e-03, grad_scale: 32.0 2023-03-27 02:57:36,084 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.884e+01 1.429e+02 1.766e+02 2.143e+02 3.754e+02, threshold=3.531e+02, percent-clipped=0.0 2023-03-27 02:57:45,440 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=123872.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:57:52,354 INFO [finetune.py:976] (2/7) Epoch 22, batch 3600, loss[loss=0.1801, simple_loss=0.2422, pruned_loss=0.05904, over 4872.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2433, pruned_loss=0.05156, over 957375.35 frames. 
], batch size: 34, lr: 3.14e-03, grad_scale: 32.0 2023-03-27 02:58:10,029 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8855, 1.3193, 1.9334, 1.8759, 1.6851, 1.6323, 1.8112, 1.8216], device='cuda:2'), covar=tensor([0.3709, 0.3773, 0.2938, 0.3578, 0.4620, 0.3585, 0.4202, 0.2823], device='cuda:2'), in_proj_covar=tensor([0.0260, 0.0245, 0.0266, 0.0287, 0.0286, 0.0261, 0.0295, 0.0248], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:58:25,719 INFO [finetune.py:976] (2/7) Epoch 22, batch 3650, loss[loss=0.1457, simple_loss=0.2286, pruned_loss=0.03137, over 4908.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2438, pruned_loss=0.05131, over 957770.61 frames. ], batch size: 37, lr: 3.14e-03, grad_scale: 32.0 2023-03-27 02:58:26,479 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=123933.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:58:43,173 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.106e+01 1.598e+02 1.925e+02 2.525e+02 5.508e+02, threshold=3.851e+02, percent-clipped=5.0 2023-03-27 02:58:49,849 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=123970.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:58:59,577 INFO [finetune.py:976] (2/7) Epoch 22, batch 3700, loss[loss=0.1784, simple_loss=0.2641, pruned_loss=0.04636, over 4805.00 frames. ], tot_loss[loss=0.1762, simple_loss=0.2477, pruned_loss=0.05237, over 956168.15 frames. ], batch size: 40, lr: 3.14e-03, grad_scale: 32.0 2023-03-27 02:58:59,775 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.63 vs. limit=2.0 2023-03-27 02:59:07,078 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0239, 1.7034, 2.0678, 2.0695, 1.8033, 1.7679, 2.0354, 1.9981], device='cuda:2'), covar=tensor([0.4257, 0.4291, 0.3373, 0.4020, 0.4868, 0.4063, 0.4995, 0.3003], device='cuda:2'), in_proj_covar=tensor([0.0259, 0.0244, 0.0265, 0.0286, 0.0285, 0.0261, 0.0294, 0.0247], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 02:59:23,316 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=124018.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:59:34,574 INFO [finetune.py:976] (2/7) Epoch 22, batch 3750, loss[loss=0.1772, simple_loss=0.255, pruned_loss=0.04971, over 4822.00 frames. ], tot_loss[loss=0.177, simple_loss=0.2488, pruned_loss=0.05266, over 956613.20 frames. 
], batch size: 30, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 02:59:37,124 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=124036.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:59:51,701 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.170e+02 1.541e+02 1.786e+02 2.095e+02 2.976e+02, threshold=3.572e+02, percent-clipped=0.0 2023-03-27 02:59:53,021 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=124062.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:00:03,091 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1022, 1.9939, 1.7158, 1.6957, 2.0698, 1.8389, 2.2466, 2.0907], device='cuda:2'), covar=tensor([0.1307, 0.1851, 0.2820, 0.2517, 0.2349, 0.1656, 0.2693, 0.1731], device='cuda:2'), in_proj_covar=tensor([0.0187, 0.0188, 0.0235, 0.0253, 0.0247, 0.0204, 0.0214, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:00:06,947 INFO [finetune.py:976] (2/7) Epoch 22, batch 3800, loss[loss=0.1836, simple_loss=0.247, pruned_loss=0.06006, over 4803.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2503, pruned_loss=0.05322, over 956929.49 frames. ], batch size: 25, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:00:08,693 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=124084.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:00:13,513 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.3571, 1.3847, 1.4168, 0.7808, 1.4885, 1.6055, 1.7366, 1.2998], device='cuda:2'), covar=tensor([0.0825, 0.0552, 0.0603, 0.0500, 0.0473, 0.0677, 0.0319, 0.0691], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0150, 0.0128, 0.0123, 0.0132, 0.0131, 0.0142, 0.0149], device='cuda:2'), out_proj_covar=tensor([8.9964e-05, 1.0832e-04, 9.1478e-05, 8.6676e-05, 9.2768e-05, 9.3073e-05, 1.0127e-04, 1.0651e-04], device='cuda:2') 2023-03-27 03:00:17,182 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=124097.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:00:39,618 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=124123.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:00:51,384 INFO [finetune.py:976] (2/7) Epoch 22, batch 3850, loss[loss=0.1959, simple_loss=0.2685, pruned_loss=0.06167, over 4819.00 frames. ], tot_loss[loss=0.1757, simple_loss=0.248, pruned_loss=0.05173, over 955247.40 frames. ], batch size: 40, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:00:51,453 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=124132.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:01:20,789 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.085e+02 1.486e+02 1.746e+02 2.060e+02 3.550e+02, threshold=3.491e+02, percent-clipped=0.0 2023-03-27 03:01:46,862 INFO [finetune.py:976] (2/7) Epoch 22, batch 3900, loss[loss=0.1785, simple_loss=0.2527, pruned_loss=0.0521, over 4909.00 frames. ], tot_loss[loss=0.1738, simple_loss=0.2451, pruned_loss=0.05118, over 954748.76 frames. 
], batch size: 36, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:02:08,427 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8446, 1.3516, 1.9442, 1.8682, 1.6857, 1.6305, 1.8433, 1.8699], device='cuda:2'), covar=tensor([0.3902, 0.3865, 0.3174, 0.3615, 0.4447, 0.3635, 0.4353, 0.2980], device='cuda:2'), in_proj_covar=tensor([0.0258, 0.0243, 0.0264, 0.0285, 0.0284, 0.0260, 0.0294, 0.0246], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:02:08,450 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.12 vs. limit=2.0 2023-03-27 03:02:13,307 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=2.00 vs. limit=2.0 2023-03-27 03:02:17,915 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=124228.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:02:18,697 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.61 vs. limit=2.0 2023-03-27 03:02:20,285 INFO [finetune.py:976] (2/7) Epoch 22, batch 3950, loss[loss=0.148, simple_loss=0.2324, pruned_loss=0.03181, over 4899.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2427, pruned_loss=0.05026, over 956234.53 frames. ], batch size: 35, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:02:39,959 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.148e+02 1.559e+02 1.897e+02 2.284e+02 3.853e+02, threshold=3.794e+02, percent-clipped=3.0 2023-03-27 03:02:47,332 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=124272.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:02:50,426 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-27 03:02:53,670 INFO [finetune.py:976] (2/7) Epoch 22, batch 4000, loss[loss=0.1514, simple_loss=0.233, pruned_loss=0.03486, over 4820.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2433, pruned_loss=0.05138, over 953280.38 frames. ], batch size: 39, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:03:06,186 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-27 03:03:26,975 INFO [finetune.py:976] (2/7) Epoch 22, batch 4050, loss[loss=0.1891, simple_loss=0.2678, pruned_loss=0.05522, over 4817.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2452, pruned_loss=0.05197, over 952275.26 frames. ], batch size: 40, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:03:27,709 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=124333.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:03:39,272 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=124348.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 03:03:46,865 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.585e+02 1.909e+02 2.213e+02 3.315e+02, threshold=3.818e+02, percent-clipped=0.0 2023-03-27 03:04:00,187 INFO [finetune.py:976] (2/7) Epoch 22, batch 4100, loss[loss=0.1982, simple_loss=0.275, pruned_loss=0.06075, over 4842.00 frames. ], tot_loss[loss=0.1773, simple_loss=0.2488, pruned_loss=0.0529, over 952032.34 frames. 
], batch size: 44, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:04:07,293 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=124392.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:04:14,212 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2688, 1.3788, 1.5960, 1.4992, 1.5161, 3.0121, 1.3530, 1.5049], device='cuda:2'), covar=tensor([0.1050, 0.1795, 0.1126, 0.1026, 0.1694, 0.0282, 0.1513, 0.1828], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0077, 0.0092, 0.0081, 0.0086, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 03:04:19,530 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=124409.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 03:04:24,897 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=124418.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:04:29,726 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0315, 1.8906, 2.0315, 1.3487, 1.9416, 2.1614, 1.9947, 1.6366], device='cuda:2'), covar=tensor([0.0534, 0.0599, 0.0626, 0.0837, 0.0648, 0.0540, 0.0562, 0.1061], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0135, 0.0139, 0.0120, 0.0125, 0.0137, 0.0138, 0.0160], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:04:33,281 INFO [finetune.py:976] (2/7) Epoch 22, batch 4150, loss[loss=0.1833, simple_loss=0.2531, pruned_loss=0.05674, over 4860.00 frames. ], tot_loss[loss=0.177, simple_loss=0.2492, pruned_loss=0.05245, over 953053.71 frames. ], batch size: 34, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:04:33,388 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=124432.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:04:38,303 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0 2023-03-27 03:04:53,595 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.819e+01 1.555e+02 1.746e+02 2.178e+02 5.076e+02, threshold=3.492e+02, percent-clipped=2.0 2023-03-27 03:05:06,934 INFO [finetune.py:976] (2/7) Epoch 22, batch 4200, loss[loss=0.2003, simple_loss=0.2648, pruned_loss=0.06786, over 4907.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.2493, pruned_loss=0.05249, over 951025.29 frames. 
], batch size: 36, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:05:10,159 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2214, 2.1826, 1.8715, 2.2810, 2.1341, 2.1109, 2.0757, 3.0518], device='cuda:2'), covar=tensor([0.4037, 0.5210, 0.3637, 0.4721, 0.4741, 0.2449, 0.4751, 0.1749], device='cuda:2'), in_proj_covar=tensor([0.0289, 0.0264, 0.0235, 0.0278, 0.0256, 0.0226, 0.0255, 0.0236], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:05:14,220 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=124493.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:05:22,247 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2818, 2.0983, 2.1668, 1.4914, 2.1501, 2.3781, 2.2883, 1.8192], device='cuda:2'), covar=tensor([0.0579, 0.0662, 0.0601, 0.0820, 0.0678, 0.0582, 0.0544, 0.1069], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0136, 0.0139, 0.0120, 0.0125, 0.0138, 0.0138, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:05:40,305 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=124528.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:05:42,244 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.83 vs. limit=5.0 2023-03-27 03:05:42,633 INFO [finetune.py:976] (2/7) Epoch 22, batch 4250, loss[loss=0.1802, simple_loss=0.2434, pruned_loss=0.05849, over 4715.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2467, pruned_loss=0.05158, over 950413.04 frames. ], batch size: 54, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:06:01,650 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0 2023-03-27 03:06:02,064 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.200e+01 1.490e+02 1.704e+02 2.072e+02 3.423e+02, threshold=3.408e+02, percent-clipped=0.0 2023-03-27 03:06:12,353 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=124576.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:06:17,860 INFO [finetune.py:976] (2/7) Epoch 22, batch 4300, loss[loss=0.1716, simple_loss=0.2397, pruned_loss=0.05178, over 4719.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2434, pruned_loss=0.05059, over 952022.33 frames. ], batch size: 23, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:07:10,855 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=124628.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:07:17,350 INFO [finetune.py:976] (2/7) Epoch 22, batch 4350, loss[loss=0.1605, simple_loss=0.2238, pruned_loss=0.04859, over 4856.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2409, pruned_loss=0.05001, over 951778.86 frames. 
], batch size: 47, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:07:21,127 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=124638.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:07:48,794 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.554e+01 1.414e+02 1.739e+02 2.148e+02 3.782e+02, threshold=3.479e+02, percent-clipped=2.0 2023-03-27 03:07:50,754 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=124663.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:08:06,295 INFO [finetune.py:976] (2/7) Epoch 22, batch 4400, loss[loss=0.1841, simple_loss=0.2467, pruned_loss=0.06071, over 4847.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.241, pruned_loss=0.05055, over 952875.05 frames. ], batch size: 30, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:08:12,498 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=124692.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:08:17,332 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=124699.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:08:20,763 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=124704.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 03:08:31,155 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=124718.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:08:35,321 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=124724.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:08:40,100 INFO [finetune.py:976] (2/7) Epoch 22, batch 4450, loss[loss=0.1511, simple_loss=0.2354, pruned_loss=0.03343, over 4876.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2437, pruned_loss=0.05125, over 951903.92 frames. ], batch size: 32, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:08:45,008 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=124740.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:08:50,796 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.38 vs. limit=5.0 2023-03-27 03:08:58,177 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.229e+02 1.653e+02 1.910e+02 2.206e+02 5.425e+02, threshold=3.820e+02, percent-clipped=3.0 2023-03-27 03:09:02,350 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=124766.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:09:13,853 INFO [finetune.py:976] (2/7) Epoch 22, batch 4500, loss[loss=0.2321, simple_loss=0.3057, pruned_loss=0.07923, over 4918.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2454, pruned_loss=0.05176, over 951753.66 frames. 
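[editor's note] The zipformer.py:1188 lines trace stochastic layer dropout: each encoder stack logs its warmup window and, per batch, which of its layers (if any) will be skipped. Note that num_to_drop is occasionally 1 even at batch_count ~ 124k, far past every warmup_end, so a small residual drop probability evidently remains after warm-up. A sketch of such a schedule; both probabilities below are placeholders, not the recipe's real values:

import random

def pick_layers_to_drop(num_layers, batch_count,
                        warmup_begin, warmup_end,
                        warmup_p=0.5, residual_p=0.02):
    """Illustrative layer-drop schedule: aggressive inside the stack's
    warmup window, a small residual probability afterwards (hence the
    occasional num_to_drop=1 long after warmup_end in the log)."""
    in_warmup = warmup_begin <= batch_count < warmup_end
    p = warmup_p if in_warmup else residual_p
    to_drop = {i for i in range(num_layers) if random.random() < p}
    # zipformer.py:1188-style trace line (empty set prints as set()):
    print(f"warmup_begin={warmup_begin}, warmup_end={warmup_end}, "
          f"batch_count={batch_count}, num_to_drop={len(to_drop)}, "
          f"layers_to_drop={to_drop}")
    return to_drop

# Typical post-warmup call; most batches drop nothing:
pick_layers_to_drop(4, 124704.0, 1333.3, 2000.0)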
], batch size: 42, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:09:17,604 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=124788.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:09:26,906 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1513, 1.2696, 1.3180, 0.6420, 1.2104, 1.5369, 1.5662, 1.2615], device='cuda:2'), covar=tensor([0.0859, 0.0540, 0.0578, 0.0512, 0.0499, 0.0631, 0.0315, 0.0674], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0150, 0.0127, 0.0123, 0.0132, 0.0131, 0.0142, 0.0150], device='cuda:2'), out_proj_covar=tensor([8.9824e-05, 1.0866e-04, 9.1221e-05, 8.6671e-05, 9.2432e-05, 9.3121e-05, 1.0162e-04, 1.0729e-04], device='cuda:2') 2023-03-27 03:09:38,625 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=124820.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:09:47,291 INFO [finetune.py:976] (2/7) Epoch 22, batch 4550, loss[loss=0.171, simple_loss=0.2493, pruned_loss=0.04635, over 4781.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2468, pruned_loss=0.05176, over 953207.44 frames. ], batch size: 51, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:09:56,522 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3809, 1.2553, 1.2715, 1.2419, 0.8036, 2.3174, 0.7418, 1.1019], device='cuda:2'), covar=tensor([0.3464, 0.2716, 0.2383, 0.2662, 0.2018, 0.0349, 0.2822, 0.1430], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0124, 0.0113, 0.0096, 0.0094, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 03:10:04,841 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.024e+02 1.469e+02 1.742e+02 1.998e+02 4.285e+02, threshold=3.484e+02, percent-clipped=1.0 2023-03-27 03:10:12,762 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.59 vs. limit=5.0 2023-03-27 03:10:19,592 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=124881.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:10:20,090 INFO [finetune.py:976] (2/7) Epoch 22, batch 4600, loss[loss=0.1237, simple_loss=0.2031, pruned_loss=0.02217, over 3998.00 frames. ], tot_loss[loss=0.1758, simple_loss=0.248, pruned_loss=0.05176, over 954248.92 frames. ], batch size: 17, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:10:48,839 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2585, 2.2771, 1.9830, 2.4246, 2.2359, 2.1900, 2.1740, 3.0608], device='cuda:2'), covar=tensor([0.3526, 0.4330, 0.3335, 0.3929, 0.4136, 0.2463, 0.4546, 0.1578], device='cuda:2'), in_proj_covar=tensor([0.0286, 0.0261, 0.0233, 0.0275, 0.0254, 0.0223, 0.0252, 0.0234], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:10:50,961 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=124928.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:10:53,306 INFO [finetune.py:976] (2/7) Epoch 22, batch 4650, loss[loss=0.1793, simple_loss=0.2516, pruned_loss=0.05352, over 4844.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2449, pruned_loss=0.05067, over 956608.23 frames. 
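[editor's note] The zipformer.py:2441 diagnostics print one attention-entropy value per head (eight here), summarising how peaked each head's attention distribution is, alongside covariance statistics of the input/output projections. A sketch of the entropy part, assuming the rows of attn_weights are already softmax-normalised:

import torch

def attn_weights_entropy(attn_weights, eps=1.0e-20):
    """Per-head entropy of attention distributions. attn_weights has
    shape (num_heads, num_queries, num_keys) with softmax-normalised
    rows; returns one query-averaged entropy per head, the kind of
    per-head summary the zipformer.py:2441 lines print."""
    ent = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
    return ent.mean(dim=-1)  # average over queries -> (num_heads,)

# Eight heads attending over 50 keys; sharp heads give low entropy,
# diffuse heads approach log(50) ~ 3.9.
w = torch.softmax(torch.randn(8, 10, 50), dim=-1)
print(attn_weights_entropy(w))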
], batch size: 47, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:11:10,703 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.046e+02 1.467e+02 1.675e+02 2.008e+02 3.769e+02, threshold=3.350e+02, percent-clipped=1.0 2023-03-27 03:11:21,851 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=124976.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:11:24,380 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2399, 2.1165, 1.8407, 2.0793, 1.9784, 2.0107, 2.0362, 2.7996], device='cuda:2'), covar=tensor([0.3388, 0.4331, 0.3173, 0.3830, 0.4156, 0.2299, 0.3876, 0.1657], device='cuda:2'), in_proj_covar=tensor([0.0287, 0.0262, 0.0234, 0.0275, 0.0254, 0.0224, 0.0252, 0.0235], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:11:24,420 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.67 vs. limit=5.0 2023-03-27 03:11:26,384 INFO [finetune.py:976] (2/7) Epoch 22, batch 4700, loss[loss=0.1164, simple_loss=0.1941, pruned_loss=0.0193, over 4767.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2417, pruned_loss=0.0495, over 958871.74 frames. ], batch size: 28, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:11:36,920 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=124994.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:11:43,190 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=125004.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 03:11:52,773 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=125019.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:12:07,770 INFO [finetune.py:976] (2/7) Epoch 22, batch 4750, loss[loss=0.1785, simple_loss=0.2447, pruned_loss=0.0561, over 4832.00 frames. ], tot_loss[loss=0.17, simple_loss=0.2408, pruned_loss=0.04957, over 958939.92 frames. ], batch size: 30, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:12:30,810 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=125052.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 03:12:39,937 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.109e+02 1.467e+02 1.710e+02 2.084e+02 3.277e+02, threshold=3.420e+02, percent-clipped=0.0 2023-03-27 03:13:08,202 INFO [finetune.py:976] (2/7) Epoch 22, batch 4800, loss[loss=0.2018, simple_loss=0.2662, pruned_loss=0.06868, over 4816.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.2444, pruned_loss=0.05121, over 958619.84 frames. ], batch size: 30, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:13:16,546 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=125088.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:13:42,387 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=125128.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:13:45,183 INFO [finetune.py:976] (2/7) Epoch 22, batch 4850, loss[loss=0.2103, simple_loss=0.2838, pruned_loss=0.06846, over 4850.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.2471, pruned_loss=0.05178, over 957092.01 frames. ], batch size: 44, lr: 3.13e-03, grad_scale: 64.0 2023-03-27 03:13:47,688 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=125136.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:13:50,717 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. 
limit=2.0 2023-03-27 03:14:04,216 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.151e+02 1.665e+02 1.915e+02 2.224e+02 3.844e+02, threshold=3.831e+02, percent-clipped=1.0 2023-03-27 03:14:14,552 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=125176.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:14:18,598 INFO [finetune.py:976] (2/7) Epoch 22, batch 4900, loss[loss=0.1871, simple_loss=0.2517, pruned_loss=0.06122, over 4924.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.2489, pruned_loss=0.05275, over 956196.96 frames. ], batch size: 33, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:14:22,840 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0281, 1.6938, 2.2666, 1.5259, 2.0495, 2.2004, 1.6589, 2.2530], device='cuda:2'), covar=tensor([0.1262, 0.2067, 0.1648, 0.1987, 0.0943, 0.1319, 0.2735, 0.0864], device='cuda:2'), in_proj_covar=tensor([0.0191, 0.0206, 0.0190, 0.0189, 0.0173, 0.0213, 0.0216, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:14:23,441 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=125189.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:14:37,281 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.88 vs. limit=5.0 2023-03-27 03:14:52,065 INFO [finetune.py:976] (2/7) Epoch 22, batch 4950, loss[loss=0.156, simple_loss=0.2313, pruned_loss=0.04035, over 4734.00 frames. ], tot_loss[loss=0.177, simple_loss=0.2491, pruned_loss=0.05246, over 956436.42 frames. ], batch size: 27, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:15:12,017 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.098e+02 1.500e+02 1.887e+02 2.169e+02 5.445e+02, threshold=3.774e+02, percent-clipped=5.0 2023-03-27 03:15:25,140 INFO [finetune.py:976] (2/7) Epoch 22, batch 5000, loss[loss=0.1722, simple_loss=0.2282, pruned_loss=0.05817, over 4763.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.2483, pruned_loss=0.05255, over 956254.63 frames. ], batch size: 59, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:15:33,516 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=125294.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:15:50,214 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=125319.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:15:58,103 INFO [finetune.py:976] (2/7) Epoch 22, batch 5050, loss[loss=0.1491, simple_loss=0.2228, pruned_loss=0.03772, over 4906.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2456, pruned_loss=0.05211, over 957650.51 frames. 
], batch size: 43, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:16:05,257 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=125342.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:16:15,311 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=125356.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:16:18,273 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.708e+01 1.493e+02 1.877e+02 2.229e+02 3.838e+02, threshold=3.755e+02, percent-clipped=1.0 2023-03-27 03:16:22,417 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=125367.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:16:27,984 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=125376.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:16:31,543 INFO [finetune.py:976] (2/7) Epoch 22, batch 5100, loss[loss=0.1588, simple_loss=0.2303, pruned_loss=0.04368, over 4909.00 frames. ], tot_loss[loss=0.1735, simple_loss=0.2437, pruned_loss=0.05162, over 959030.72 frames. ], batch size: 32, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:16:55,560 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=125417.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 03:16:58,435 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=125421.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:17:05,016 INFO [finetune.py:976] (2/7) Epoch 22, batch 5150, loss[loss=0.1137, simple_loss=0.1815, pruned_loss=0.02296, over 3910.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2423, pruned_loss=0.05106, over 955137.70 frames. ], batch size: 17, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:17:13,013 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=125437.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:17:27,521 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.94 vs. limit=5.0 2023-03-27 03:17:33,158 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.242e+02 1.608e+02 1.768e+02 2.290e+02 4.207e+02, threshold=3.536e+02, percent-clipped=1.0 2023-03-27 03:17:45,306 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=125476.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:17:48,790 INFO [finetune.py:976] (2/7) Epoch 22, batch 5200, loss[loss=0.1969, simple_loss=0.2704, pruned_loss=0.06174, over 4892.00 frames. ], tot_loss[loss=0.1738, simple_loss=0.2444, pruned_loss=0.05161, over 953336.39 frames. ], batch size: 36, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:17:48,935 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=125482.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:17:54,881 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=125484.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:18:39,396 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=125524.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:18:44,721 INFO [finetune.py:976] (2/7) Epoch 22, batch 5250, loss[loss=0.1937, simple_loss=0.2645, pruned_loss=0.06142, over 4786.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.2479, pruned_loss=0.05235, over 954974.21 frames. ], batch size: 29, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:18:51,000 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. 
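[editor's note] Between batch 5050 and batch 5100 above, the logged lr steps from 3.13e-03 to 3.12e-03. This is consistent with icefall's Eden schedule, which decays the learning rate smoothly in both the global batch index and the epoch number; a worked check, assuming base_lr=0.004, lr_batches=100000 and lr_epochs=100 for this run:

def eden_lr(base_lr, batch, epoch,
            lr_batches=100000.0, lr_epochs=100.0):
    """Eden schedule: separate power-law decay factors in the global
    batch index and in the epoch number."""
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor

# Around the records above (epoch 22):
print(f"{eden_lr(0.004, 124400, 22):.2e}")   # 3.13e-03, as logged earlier
print(f"{eden_lr(0.004, 125450, 22):.2e}")   # 3.12e-03, as logged here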
limit=2.0 2023-03-27 03:19:03,922 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.075e+02 1.572e+02 1.864e+02 2.118e+02 3.675e+02, threshold=3.728e+02, percent-clipped=1.0 2023-03-27 03:19:18,601 INFO [finetune.py:976] (2/7) Epoch 22, batch 5300, loss[loss=0.1499, simple_loss=0.2026, pruned_loss=0.04855, over 4344.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.2483, pruned_loss=0.05214, over 954005.64 frames. ], batch size: 19, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:19:22,401 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8340, 1.2916, 1.7279, 1.7895, 1.5896, 1.5946, 1.6940, 1.7483], device='cuda:2'), covar=tensor([0.4516, 0.4291, 0.3626, 0.4102, 0.5071, 0.4078, 0.4838, 0.3524], device='cuda:2'), in_proj_covar=tensor([0.0259, 0.0244, 0.0265, 0.0287, 0.0285, 0.0262, 0.0295, 0.0247], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:19:32,533 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=125603.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:19:52,428 INFO [finetune.py:976] (2/7) Epoch 22, batch 5350, loss[loss=0.1468, simple_loss=0.2276, pruned_loss=0.03299, over 4764.00 frames. ], tot_loss[loss=0.1758, simple_loss=0.2483, pruned_loss=0.05161, over 953738.69 frames. ], batch size: 28, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:20:10,862 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.759e+01 1.403e+02 1.743e+02 2.180e+02 4.274e+02, threshold=3.486e+02, percent-clipped=1.0 2023-03-27 03:20:11,596 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.60 vs. limit=2.0 2023-03-27 03:20:13,358 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=125664.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:20:22,141 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1532, 1.8263, 2.5283, 4.1435, 2.8224, 2.8442, 0.8245, 3.4685], device='cuda:2'), covar=tensor([0.1670, 0.1388, 0.1430, 0.0469, 0.0731, 0.1388, 0.2019, 0.0369], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0115, 0.0133, 0.0164, 0.0101, 0.0136, 0.0125, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 03:20:25,088 INFO [finetune.py:976] (2/7) Epoch 22, batch 5400, loss[loss=0.194, simple_loss=0.2511, pruned_loss=0.06843, over 4812.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2463, pruned_loss=0.05143, over 955705.04 frames. 
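[editor's note] The scaling.py:679 records compare a per-activation "whitening" metric against a limit (2.0 for the grouped 96- and 192-channel activations, 5.0 for the ungrouped 384-channel ones); when the metric exceeds its limit, a penalty gradient pushes the features back toward a white, isotropic covariance. One plausible definition of such a metric is sketched below: the dimension-normalised ratio of the covariance's squared Frobenius norm to its squared trace, which is at least 1.0 and equals 1.0 exactly when the per-group covariance is a multiple of the identity. This is an assumption meant to mirror, not reproduce, icefall's scaling.py:

import torch

def whitening_metric(x, num_groups):
    """Whiteness measure for (num_frames, num_channels) activations:
    per channel group, sum(eig^2) / (sum(eig)^2 / dim) of the feature
    covariance; >= 1.0, with equality iff the covariance is c * I."""
    num_frames, num_channels = x.shape
    cpg = num_channels // num_groups                  # channels per group
    xg = x.reshape(num_frames, num_groups, cpg).permute(1, 0, 2)
    cov = xg.transpose(1, 2) @ xg / num_frames        # (groups, cpg, cpg)
    trace = cov.diagonal(dim1=1, dim2=2).sum(dim=1)
    metric = (cov ** 2).sum(dim=(1, 2)) / (trace ** 2 / cpg)
    return metric.mean()

# White Gaussian features score close to 1 (sampling noise inflates it):
print(whitening_metric(torch.randn(2000, 384), num_groups=1))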
], batch size: 51, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:20:27,460 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6339, 1.4214, 2.1015, 3.0835, 2.0514, 2.3273, 1.0669, 2.6311], device='cuda:2'), covar=tensor([0.1658, 0.1374, 0.1158, 0.0596, 0.0801, 0.1488, 0.1773, 0.0451], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0115, 0.0133, 0.0164, 0.0101, 0.0136, 0.0125, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 03:20:44,652 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=125712.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 03:20:53,983 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=125725.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:20:58,061 INFO [finetune.py:976] (2/7) Epoch 22, batch 5450, loss[loss=0.1604, simple_loss=0.2312, pruned_loss=0.04483, over 4902.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.243, pruned_loss=0.05034, over 956689.09 frames. ], batch size: 32, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:20:58,129 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=125732.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:21:17,000 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.003e+02 1.470e+02 1.717e+02 2.000e+02 3.602e+02, threshold=3.434e+02, percent-clipped=1.0 2023-03-27 03:21:27,702 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=125777.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:21:31,103 INFO [finetune.py:976] (2/7) Epoch 22, batch 5500, loss[loss=0.1989, simple_loss=0.2645, pruned_loss=0.06669, over 4845.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2408, pruned_loss=0.04967, over 957761.85 frames. ], batch size: 49, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:21:32,419 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=125784.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:21:33,651 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=125786.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:21:42,777 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9314, 2.6224, 2.2374, 2.8414, 2.6834, 2.4293, 3.1751, 2.8061], device='cuda:2'), covar=tensor([0.1225, 0.2080, 0.2718, 0.2335, 0.2476, 0.1694, 0.2791, 0.1667], device='cuda:2'), in_proj_covar=tensor([0.0188, 0.0189, 0.0235, 0.0253, 0.0248, 0.0204, 0.0215, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:21:43,531 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.39 vs. limit=5.0 2023-03-27 03:21:53,422 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. 
limit=2.0 2023-03-27 03:21:55,189 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4743, 0.9604, 0.7630, 1.3247, 1.9211, 0.7850, 1.1937, 1.2726], device='cuda:2'), covar=tensor([0.1545, 0.2261, 0.1730, 0.1216, 0.1906, 0.1916, 0.1617, 0.2073], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0095, 0.0110, 0.0093, 0.0120, 0.0094, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 03:22:04,442 INFO [finetune.py:976] (2/7) Epoch 22, batch 5550, loss[loss=0.1781, simple_loss=0.255, pruned_loss=0.05057, over 4778.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.2422, pruned_loss=0.05025, over 958681.31 frames. ], batch size: 29, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:22:04,489 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=125832.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:22:32,019 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.163e+02 1.619e+02 1.949e+02 2.379e+02 4.295e+02, threshold=3.899e+02, percent-clipped=6.0 2023-03-27 03:22:48,802 INFO [finetune.py:976] (2/7) Epoch 22, batch 5600, loss[loss=0.169, simple_loss=0.2551, pruned_loss=0.04142, over 4910.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2456, pruned_loss=0.05121, over 958204.42 frames. ], batch size: 37, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:22:52,402 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1464, 1.2626, 1.3290, 0.7144, 1.2544, 1.5072, 1.5939, 1.2395], device='cuda:2'), covar=tensor([0.0861, 0.0542, 0.0519, 0.0465, 0.0482, 0.0605, 0.0300, 0.0633], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0150, 0.0128, 0.0123, 0.0131, 0.0131, 0.0141, 0.0149], device='cuda:2'), out_proj_covar=tensor([8.9931e-05, 1.0840e-04, 9.1649e-05, 8.6331e-05, 9.1925e-05, 9.3299e-05, 1.0103e-04, 1.0704e-04], device='cuda:2') 2023-03-27 03:23:20,375 INFO [finetune.py:976] (2/7) Epoch 22, batch 5650, loss[loss=0.1869, simple_loss=0.2569, pruned_loss=0.05849, over 4916.00 frames. ], tot_loss[loss=0.1768, simple_loss=0.2491, pruned_loss=0.05227, over 956352.59 frames. 
], batch size: 36, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:23:39,012 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9480, 2.6614, 2.1888, 1.2934, 2.5088, 2.3572, 2.1044, 2.3714], device='cuda:2'), covar=tensor([0.0659, 0.0797, 0.1570, 0.1739, 0.1197, 0.1730, 0.1809, 0.0881], device='cuda:2'), in_proj_covar=tensor([0.0171, 0.0193, 0.0200, 0.0183, 0.0210, 0.0209, 0.0225, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:23:46,874 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9859, 1.7132, 1.5134, 1.5965, 2.1461, 2.2140, 1.8070, 1.6798], device='cuda:2'), covar=tensor([0.0342, 0.0379, 0.0665, 0.0373, 0.0219, 0.0372, 0.0330, 0.0394], device='cuda:2'), in_proj_covar=tensor([0.0097, 0.0105, 0.0140, 0.0110, 0.0097, 0.0110, 0.0100, 0.0110], device='cuda:2'), out_proj_covar=tensor([7.5608e-05, 8.0642e-05, 1.1020e-04, 8.4323e-05, 7.5637e-05, 8.0976e-05, 7.4049e-05, 8.3962e-05], device='cuda:2') 2023-03-27 03:23:47,407 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=125959.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:23:48,544 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.919e+01 1.591e+02 1.865e+02 2.221e+02 3.612e+02, threshold=3.730e+02, percent-clipped=0.0 2023-03-27 03:24:11,011 INFO [finetune.py:976] (2/7) Epoch 22, batch 5700, loss[loss=0.1816, simple_loss=0.2305, pruned_loss=0.06639, over 3936.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2441, pruned_loss=0.051, over 937572.30 frames. ], batch size: 17, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:24:20,536 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=125998.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:24:37,871 INFO [finetune.py:976] (2/7) Epoch 23, batch 0, loss[loss=0.1435, simple_loss=0.2198, pruned_loss=0.03354, over 4857.00 frames. ], tot_loss[loss=0.1435, simple_loss=0.2198, pruned_loss=0.03354, over 4857.00 frames. ], batch size: 44, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:24:37,871 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-27 03:24:44,100 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8232, 1.7384, 2.0983, 2.9349, 2.0083, 2.3070, 1.1871, 2.4975], device='cuda:2'), covar=tensor([0.1370, 0.1055, 0.0965, 0.0620, 0.0772, 0.1151, 0.1431, 0.0483], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0133, 0.0163, 0.0100, 0.0135, 0.0123, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 03:24:45,097 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0445, 1.8061, 2.0708, 1.3279, 2.0047, 2.0417, 2.0852, 1.6963], device='cuda:2'), covar=tensor([0.0529, 0.0705, 0.0579, 0.0825, 0.0735, 0.0607, 0.0518, 0.1071], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0134, 0.0138, 0.0119, 0.0123, 0.0137, 0.0137, 0.0160], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:24:53,038 INFO [finetune.py:1010] (2/7) Epoch 23, validation: loss=0.1587, simple_loss=0.2268, pruned_loss=0.04533, over 2265189.00 frames. 
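[editor's note] At the epoch boundary above, training pauses to compute a validation loss over the full dev set (finetune.py:1001/1010), reported as a single frame-weighted average "over 2265189.00 frames". A minimal sketch of that aggregation; loss_fn is a hypothetical helper standing in for the recipe's compute_loss:

import torch

def validation_loss(model, dev_loader, loss_fn):
    """Frame-weighted dev-set loss, reported as one value
    'over N frames' as in the finetune.py:1010 record above.
    loss_fn(model, batch) -> (summed_loss, num_frames) is a
    hypothetical stand-in for the recipe's compute_loss."""
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    with torch.no_grad():
        for batch in dev_loader:
            loss, num_frames = loss_fn(model, batch)
            tot_loss += float(loss)
            tot_frames += float(num_frames)
    model.train()
    return tot_loss / tot_frames, tot_frames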
2023-03-27 03:24:53,039 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB 2023-03-27 03:24:59,387 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=126012.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 03:25:12,088 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=126032.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:25:27,621 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.73 vs. limit=2.0 2023-03-27 03:25:30,342 INFO [finetune.py:976] (2/7) Epoch 23, batch 50, loss[loss=0.1618, simple_loss=0.2403, pruned_loss=0.04165, over 4816.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2449, pruned_loss=0.05088, over 216322.42 frames. ], batch size: 33, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:25:30,463 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=126059.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:25:31,916 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=126060.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:25:32,453 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.266e+01 1.545e+02 1.923e+02 2.325e+02 3.929e+02, threshold=3.846e+02, percent-clipped=1.0 2023-03-27 03:25:39,331 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0 2023-03-27 03:25:42,869 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=126077.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:25:44,663 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=126080.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:25:45,302 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=126081.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:25:58,033 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-27 03:26:03,745 INFO [finetune.py:976] (2/7) Epoch 23, batch 100, loss[loss=0.1784, simple_loss=0.2452, pruned_loss=0.05583, over 4691.00 frames. ], tot_loss[loss=0.1737, simple_loss=0.2429, pruned_loss=0.05226, over 381432.02 frames. ], batch size: 23, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:26:14,892 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=126125.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:26:29,110 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8166, 1.7650, 1.6682, 1.7259, 1.3054, 3.6966, 1.5132, 2.0221], device='cuda:2'), covar=tensor([0.3242, 0.2379, 0.2042, 0.2321, 0.1644, 0.0167, 0.2574, 0.1164], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0124, 0.0113, 0.0096, 0.0094, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 03:26:37,114 INFO [finetune.py:976] (2/7) Epoch 23, batch 150, loss[loss=0.1604, simple_loss=0.2289, pruned_loss=0.04596, over 4937.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2385, pruned_loss=0.05093, over 508855.61 frames. 
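[editor's note] Note how tot_loss's frame count rebuilds after the epoch boundary: ~4.9k frames at batch 0, ~216k at batch 50, ~381k at batch 100, before saturating near the ~950k plateau seen throughout Epoch 22. That trajectory matches an exponentially decayed running sum with decay factor 1 - 1/200 (a reset interval of 200 batches), whose steady-state frame count is about 200 times the average batch size in frames. A sketch reproducing the ramp, under that assumption:

class RunningLoss:
    """Exponentially decayed running sums of (loss, frames); the logged
    tot_loss is their ratio, shown 'over <frame_sum> frames'. With a
    decay of 1 - 1/200 the frame count saturates near 200x the average
    batch size in frames, i.e. the ~950k plateau in Epoch 22."""

    def __init__(self, reset_interval=200):
        self.decay = 1.0 - 1.0 / reset_interval
        self.loss_sum = 0.0
        self.frame_sum = 0.0

    def update(self, batch_loss_sum, batch_frames):
        self.loss_sum = self.loss_sum * self.decay + batch_loss_sum
        self.frame_sum = self.frame_sum * self.decay + batch_frames

    @property
    def tot_loss(self):
        return self.loss_sum / self.frame_sum

# 51 batches of ~4760 frames reproduce the 'over 216322.42 frames'
# scale of the batch-50 record above (~2.15e5):
rl = RunningLoss()
for _ in range(51):
    rl.update(0.17 * 4760, 4760)
print(round(rl.frame_sum))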
], batch size: 38, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:26:38,293 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.556e+02 1.791e+02 2.255e+02 5.687e+02, threshold=3.583e+02, percent-clipped=3.0 2023-03-27 03:26:46,112 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1356, 2.0667, 1.7241, 2.0861, 1.9359, 1.9447, 1.9721, 2.6885], device='cuda:2'), covar=tensor([0.4041, 0.4116, 0.3500, 0.3570, 0.4009, 0.2446, 0.3889, 0.1867], device='cuda:2'), in_proj_covar=tensor([0.0290, 0.0263, 0.0235, 0.0277, 0.0256, 0.0226, 0.0254, 0.0236], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:26:53,317 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.4645, 3.0295, 2.7306, 1.4458, 2.9431, 2.4756, 2.3676, 2.6873], device='cuda:2'), covar=tensor([0.0684, 0.0877, 0.1797, 0.2174, 0.1531, 0.1934, 0.1888, 0.1092], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0192, 0.0200, 0.0182, 0.0210, 0.0208, 0.0224, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:26:57,588 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2967, 1.3796, 1.5291, 1.0536, 1.2505, 1.5105, 1.3697, 1.6683], device='cuda:2'), covar=tensor([0.1196, 0.2172, 0.1248, 0.1489, 0.0960, 0.1146, 0.2853, 0.0784], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0205, 0.0191, 0.0189, 0.0173, 0.0214, 0.0215, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:27:10,672 INFO [finetune.py:976] (2/7) Epoch 23, batch 200, loss[loss=0.1419, simple_loss=0.2163, pruned_loss=0.03373, over 4759.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2355, pruned_loss=0.04952, over 607474.42 frames. ], batch size: 28, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:27:27,633 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.71 vs. limit=2.0 2023-03-27 03:27:46,631 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. limit=2.0 2023-03-27 03:28:03,733 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.93 vs. limit=2.0 2023-03-27 03:28:05,167 INFO [finetune.py:976] (2/7) Epoch 23, batch 250, loss[loss=0.1732, simple_loss=0.2473, pruned_loss=0.04951, over 4819.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.2401, pruned_loss=0.05084, over 685649.36 frames. ], batch size: 39, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:28:05,282 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=126259.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:28:06,383 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.143e+02 1.625e+02 1.849e+02 2.180e+02 4.181e+02, threshold=3.697e+02, percent-clipped=1.0 2023-03-27 03:28:17,414 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.75 vs. limit=5.0 2023-03-27 03:28:37,018 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=126307.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:28:40,576 INFO [finetune.py:976] (2/7) Epoch 23, batch 300, loss[loss=0.1669, simple_loss=0.2519, pruned_loss=0.04094, over 4901.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2437, pruned_loss=0.05144, over 745742.70 frames. 
], batch size: 37, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:28:42,559 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4127, 1.4575, 1.1508, 1.4347, 1.7847, 1.5848, 1.3774, 1.2425], device='cuda:2'), covar=tensor([0.0435, 0.0354, 0.0690, 0.0319, 0.0228, 0.0506, 0.0362, 0.0464], device='cuda:2'), in_proj_covar=tensor([0.0098, 0.0106, 0.0142, 0.0111, 0.0098, 0.0111, 0.0101, 0.0111], device='cuda:2'), out_proj_covar=tensor([7.6370e-05, 8.1298e-05, 1.1109e-04, 8.5153e-05, 7.6149e-05, 8.1804e-05, 7.4820e-05, 8.4583e-05], device='cuda:2') 2023-03-27 03:28:52,709 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.60 vs. limit=5.0 2023-03-27 03:28:53,652 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1195, 1.8757, 2.5370, 1.5957, 2.1241, 2.4610, 1.8736, 2.5457], device='cuda:2'), covar=tensor([0.1277, 0.1942, 0.1346, 0.1972, 0.0915, 0.1347, 0.2460, 0.0916], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0205, 0.0191, 0.0189, 0.0173, 0.0214, 0.0216, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:29:20,739 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=126354.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:29:24,664 INFO [finetune.py:976] (2/7) Epoch 23, batch 350, loss[loss=0.1536, simple_loss=0.2421, pruned_loss=0.03257, over 4766.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2453, pruned_loss=0.05094, over 791646.22 frames. ], batch size: 28, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:29:25,831 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.521e+01 1.514e+02 1.805e+02 2.248e+02 3.946e+02, threshold=3.610e+02, percent-clipped=1.0 2023-03-27 03:29:39,005 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=126380.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:29:39,577 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=126381.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:29:49,856 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.7454, 1.7269, 1.7239, 1.0529, 1.8800, 2.0792, 2.0951, 1.5520], device='cuda:2'), covar=tensor([0.0977, 0.0718, 0.0574, 0.0566, 0.0456, 0.0682, 0.0363, 0.0751], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0149, 0.0127, 0.0122, 0.0130, 0.0129, 0.0141, 0.0148], device='cuda:2'), out_proj_covar=tensor([8.9473e-05, 1.0771e-04, 9.0994e-05, 8.5981e-05, 9.1504e-05, 9.2042e-05, 1.0046e-04, 1.0606e-04], device='cuda:2') 2023-03-27 03:29:59,416 INFO [finetune.py:976] (2/7) Epoch 23, batch 400, loss[loss=0.1572, simple_loss=0.2395, pruned_loss=0.0374, over 4849.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2455, pruned_loss=0.05022, over 827857.44 frames. ], batch size: 44, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:30:21,835 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=126429.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:30:29,721 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=126441.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:30:35,177 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=126450.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:30:40,549 INFO [finetune.py:976] (2/7) Epoch 23, batch 450, loss[loss=0.195, simple_loss=0.2605, pruned_loss=0.06477, over 4834.00 frames. 
], tot_loss[loss=0.1729, simple_loss=0.2452, pruned_loss=0.05031, over 857111.40 frames. ], batch size: 33, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:30:42,257 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.347e+01 1.414e+02 1.639e+02 2.017e+02 3.767e+02, threshold=3.277e+02, percent-clipped=3.0 2023-03-27 03:31:10,964 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1397, 1.9605, 1.7976, 1.9939, 1.8290, 1.8764, 1.9208, 2.6589], device='cuda:2'), covar=tensor([0.3395, 0.4127, 0.3157, 0.3628, 0.4242, 0.2350, 0.3547, 0.1523], device='cuda:2'), in_proj_covar=tensor([0.0289, 0.0263, 0.0234, 0.0278, 0.0257, 0.0226, 0.0254, 0.0236], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:31:13,829 INFO [finetune.py:976] (2/7) Epoch 23, batch 500, loss[loss=0.1561, simple_loss=0.2174, pruned_loss=0.0474, over 4826.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2419, pruned_loss=0.04924, over 879725.86 frames. ], batch size: 38, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:31:15,658 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=126511.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:31:47,405 INFO [finetune.py:976] (2/7) Epoch 23, batch 550, loss[loss=0.2027, simple_loss=0.2662, pruned_loss=0.06965, over 4840.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.2413, pruned_loss=0.04959, over 898657.56 frames. ], batch size: 47, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:31:48,615 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.316e+01 1.588e+02 1.845e+02 2.407e+02 4.330e+02, threshold=3.691e+02, percent-clipped=4.0 2023-03-27 03:32:02,768 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6073, 2.4403, 1.9626, 2.6152, 2.3995, 2.1049, 2.8552, 2.5829], device='cuda:2'), covar=tensor([0.1205, 0.2056, 0.2801, 0.2607, 0.2610, 0.1458, 0.3717, 0.1543], device='cuda:2'), in_proj_covar=tensor([0.0190, 0.0190, 0.0237, 0.0256, 0.0251, 0.0207, 0.0216, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:32:21,217 INFO [finetune.py:976] (2/7) Epoch 23, batch 600, loss[loss=0.2067, simple_loss=0.2834, pruned_loss=0.06497, over 4808.00 frames. ], tot_loss[loss=0.1715, simple_loss=0.2424, pruned_loss=0.05026, over 912817.36 frames. ], batch size: 51, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:32:54,468 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.64 vs. 
limit=5.0 2023-03-27 03:33:02,528 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=126654.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:33:03,144 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7818, 1.2841, 0.9215, 1.5923, 2.2025, 1.3805, 1.5656, 1.6243], device='cuda:2'), covar=tensor([0.1517, 0.2197, 0.1935, 0.1223, 0.1936, 0.1990, 0.1536, 0.2069], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0092, 0.0120, 0.0094, 0.0099, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 03:33:04,391 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2126, 1.3855, 0.9084, 1.8215, 2.4887, 1.7629, 1.7049, 1.7094], device='cuda:2'), covar=tensor([0.1522, 0.2328, 0.2126, 0.1314, 0.1939, 0.2109, 0.1580, 0.2247], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0092, 0.0120, 0.0094, 0.0099, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 03:33:05,530 INFO [finetune.py:976] (2/7) Epoch 23, batch 650, loss[loss=0.1902, simple_loss=0.2712, pruned_loss=0.05457, over 4819.00 frames. ], tot_loss[loss=0.1757, simple_loss=0.2474, pruned_loss=0.05203, over 923810.35 frames. ], batch size: 38, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:33:06,763 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.083e+02 1.579e+02 1.902e+02 2.270e+02 1.001e+03, threshold=3.804e+02, percent-clipped=1.0 2023-03-27 03:33:42,520 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=126702.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:33:47,146 INFO [finetune.py:976] (2/7) Epoch 23, batch 700, loss[loss=0.1822, simple_loss=0.2579, pruned_loss=0.05327, over 4909.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.2475, pruned_loss=0.0516, over 930967.69 frames. ], batch size: 36, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:34:11,566 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.7375, 4.0692, 4.2766, 4.5208, 4.4957, 4.2455, 4.8212, 1.8032], device='cuda:2'), covar=tensor([0.0764, 0.0886, 0.0922, 0.1127, 0.1329, 0.1669, 0.0643, 0.5481], device='cuda:2'), in_proj_covar=tensor([0.0348, 0.0245, 0.0279, 0.0290, 0.0334, 0.0284, 0.0304, 0.0300], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:34:11,568 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=126736.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:34:23,318 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0 2023-03-27 03:34:29,142 INFO [finetune.py:976] (2/7) Epoch 23, batch 750, loss[loss=0.1582, simple_loss=0.2382, pruned_loss=0.03908, over 4775.00 frames. ], tot_loss[loss=0.1773, simple_loss=0.2492, pruned_loss=0.05274, over 937122.55 frames. 
], batch size: 29, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:34:30,819 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.063e+01 1.442e+02 1.710e+02 2.057e+02 3.398e+02, threshold=3.419e+02, percent-clipped=0.0 2023-03-27 03:35:00,639 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=126806.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:35:02,394 INFO [finetune.py:976] (2/7) Epoch 23, batch 800, loss[loss=0.1677, simple_loss=0.2277, pruned_loss=0.05388, over 3775.00 frames. ], tot_loss[loss=0.178, simple_loss=0.2494, pruned_loss=0.0533, over 937999.13 frames. ], batch size: 16, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:35:07,299 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=126816.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:35:44,511 INFO [finetune.py:976] (2/7) Epoch 23, batch 850, loss[loss=0.1475, simple_loss=0.2116, pruned_loss=0.04164, over 4552.00 frames. ], tot_loss[loss=0.175, simple_loss=0.2467, pruned_loss=0.05168, over 941036.55 frames. ], batch size: 20, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:35:45,682 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.329e+01 1.478e+02 1.768e+02 2.024e+02 3.574e+02, threshold=3.536e+02, percent-clipped=1.0 2023-03-27 03:35:56,155 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=126877.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:36:18,421 INFO [finetune.py:976] (2/7) Epoch 23, batch 900, loss[loss=0.159, simple_loss=0.227, pruned_loss=0.04544, over 4746.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2429, pruned_loss=0.0503, over 944827.33 frames. ], batch size: 27, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:36:52,033 INFO [finetune.py:976] (2/7) Epoch 23, batch 950, loss[loss=0.1814, simple_loss=0.2387, pruned_loss=0.06207, over 4890.00 frames. ], tot_loss[loss=0.1712, simple_loss=0.2418, pruned_loss=0.05029, over 947507.47 frames. ], batch size: 43, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:36:53,227 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.057e+02 1.486e+02 1.775e+02 2.132e+02 3.351e+02, threshold=3.551e+02, percent-clipped=0.0 2023-03-27 03:37:26,097 INFO [finetune.py:976] (2/7) Epoch 23, batch 1000, loss[loss=0.1476, simple_loss=0.2131, pruned_loss=0.04108, over 4789.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2443, pruned_loss=0.05088, over 949735.28 frames. ], batch size: 26, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:37:43,503 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=127036.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:38:01,443 INFO [finetune.py:976] (2/7) Epoch 23, batch 1050, loss[loss=0.1736, simple_loss=0.2525, pruned_loss=0.04735, over 4905.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.2462, pruned_loss=0.0512, over 950190.64 frames. 
], batch size: 36, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:38:02,652 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.005e+02 1.597e+02 1.830e+02 2.248e+02 5.450e+02, threshold=3.660e+02, percent-clipped=4.0 2023-03-27 03:38:27,252 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=127084.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:38:40,378 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7959, 2.0565, 1.6893, 1.7509, 2.3167, 2.3176, 1.9270, 1.9313], device='cuda:2'), covar=tensor([0.0414, 0.0335, 0.0547, 0.0349, 0.0261, 0.0547, 0.0376, 0.0378], device='cuda:2'), in_proj_covar=tensor([0.0098, 0.0106, 0.0142, 0.0111, 0.0098, 0.0111, 0.0100, 0.0111], device='cuda:2'), out_proj_covar=tensor([7.6285e-05, 8.1075e-05, 1.1157e-04, 8.5068e-05, 7.6121e-05, 8.1688e-05, 7.4479e-05, 8.4585e-05], device='cuda:2') 2023-03-27 03:38:42,633 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=127106.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:38:44,895 INFO [finetune.py:976] (2/7) Epoch 23, batch 1100, loss[loss=0.183, simple_loss=0.2504, pruned_loss=0.05776, over 4898.00 frames. ], tot_loss[loss=0.1755, simple_loss=0.2474, pruned_loss=0.05182, over 950062.62 frames. ], batch size: 36, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:38:47,394 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.2426, 3.6077, 3.8774, 4.0703, 3.9816, 3.8002, 4.3579, 1.2988], device='cuda:2'), covar=tensor([0.0857, 0.1020, 0.0848, 0.1098, 0.1445, 0.1717, 0.0768, 0.6163], device='cuda:2'), in_proj_covar=tensor([0.0349, 0.0246, 0.0280, 0.0291, 0.0335, 0.0285, 0.0306, 0.0302], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:39:14,054 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-27 03:39:14,540 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=127154.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:39:21,916 INFO [finetune.py:976] (2/7) Epoch 23, batch 1150, loss[loss=0.1604, simple_loss=0.2338, pruned_loss=0.04353, over 4856.00 frames. ], tot_loss[loss=0.1769, simple_loss=0.2488, pruned_loss=0.0525, over 949939.77 frames. ], batch size: 31, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:39:23,615 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.031e+02 1.547e+02 1.781e+02 2.032e+02 3.739e+02, threshold=3.562e+02, percent-clipped=1.0 2023-03-27 03:39:35,833 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=127172.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:40:04,147 INFO [finetune.py:976] (2/7) Epoch 23, batch 1200, loss[loss=0.1633, simple_loss=0.2234, pruned_loss=0.05159, over 4264.00 frames. ], tot_loss[loss=0.1758, simple_loss=0.2475, pruned_loss=0.0521, over 951011.07 frames. 
], batch size: 65, lr: 3.11e-03, grad_scale: 64.0 2023-03-27 03:40:29,638 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8379, 1.0752, 1.9011, 1.8212, 1.6772, 1.6134, 1.7514, 1.8162], device='cuda:2'), covar=tensor([0.3617, 0.3660, 0.2980, 0.3395, 0.4077, 0.3425, 0.3825, 0.2821], device='cuda:2'), in_proj_covar=tensor([0.0260, 0.0244, 0.0265, 0.0288, 0.0286, 0.0263, 0.0295, 0.0248], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:40:32,984 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=127249.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:40:44,043 INFO [finetune.py:976] (2/7) Epoch 23, batch 1250, loss[loss=0.1414, simple_loss=0.2247, pruned_loss=0.02901, over 4779.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2455, pruned_loss=0.05165, over 953619.31 frames. ], batch size: 29, lr: 3.11e-03, grad_scale: 64.0 2023-03-27 03:40:45,212 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.051e+02 1.489e+02 1.742e+02 2.248e+02 3.707e+02, threshold=3.484e+02, percent-clipped=1.0 2023-03-27 03:40:53,089 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.3142, 1.3815, 1.4978, 0.7281, 1.4158, 1.7173, 1.7122, 1.3601], device='cuda:2'), covar=tensor([0.0869, 0.0622, 0.0522, 0.0523, 0.0456, 0.0530, 0.0325, 0.0692], device='cuda:2'), in_proj_covar=tensor([0.0124, 0.0149, 0.0127, 0.0122, 0.0131, 0.0129, 0.0141, 0.0148], device='cuda:2'), out_proj_covar=tensor([9.0002e-05, 1.0741e-04, 9.1234e-05, 8.6173e-05, 9.1971e-05, 9.2002e-05, 1.0106e-04, 1.0632e-04], device='cuda:2') 2023-03-27 03:41:21,264 INFO [finetune.py:976] (2/7) Epoch 23, batch 1300, loss[loss=0.129, simple_loss=0.2064, pruned_loss=0.02577, over 4807.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2425, pruned_loss=0.05042, over 955479.47 frames. ], batch size: 25, lr: 3.11e-03, grad_scale: 64.0 2023-03-27 03:41:22,022 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=127310.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:41:43,832 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5615, 1.5340, 1.5367, 1.6010, 1.2136, 2.7536, 1.3181, 1.6879], device='cuda:2'), covar=tensor([0.3013, 0.2107, 0.1846, 0.2092, 0.1568, 0.0431, 0.2963, 0.1121], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0117, 0.0121, 0.0124, 0.0113, 0.0096, 0.0094, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 03:41:45,078 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.0086, 2.7220, 2.5006, 1.3426, 2.6106, 2.1046, 2.0789, 2.3618], device='cuda:2'), covar=tensor([0.1028, 0.0918, 0.1763, 0.2225, 0.1779, 0.2445, 0.2143, 0.1291], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0193, 0.0200, 0.0183, 0.0210, 0.0209, 0.0225, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:41:54,394 INFO [finetune.py:976] (2/7) Epoch 23, batch 1350, loss[loss=0.1933, simple_loss=0.2532, pruned_loss=0.06675, over 4816.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.241, pruned_loss=0.05003, over 952860.72 frames. 
], batch size: 39, lr: 3.11e-03, grad_scale: 64.0 2023-03-27 03:41:55,606 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.051e+02 1.500e+02 1.797e+02 2.250e+02 4.549e+02, threshold=3.594e+02, percent-clipped=1.0 2023-03-27 03:42:27,769 INFO [finetune.py:976] (2/7) Epoch 23, batch 1400, loss[loss=0.1735, simple_loss=0.2328, pruned_loss=0.05711, over 4766.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2458, pruned_loss=0.05197, over 952984.34 frames. ], batch size: 26, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:42:40,739 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=127427.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 03:43:01,001 INFO [finetune.py:976] (2/7) Epoch 23, batch 1450, loss[loss=0.188, simple_loss=0.2624, pruned_loss=0.0568, over 4856.00 frames. ], tot_loss[loss=0.1759, simple_loss=0.2474, pruned_loss=0.05221, over 953114.34 frames. ], batch size: 44, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:43:03,312 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.817e+01 1.596e+02 1.913e+02 2.194e+02 3.811e+02, threshold=3.827e+02, percent-clipped=1.0 2023-03-27 03:43:09,875 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=127472.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:43:25,057 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=127488.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 03:43:46,709 INFO [finetune.py:976] (2/7) Epoch 23, batch 1500, loss[loss=0.1767, simple_loss=0.2558, pruned_loss=0.04876, over 4877.00 frames. ], tot_loss[loss=0.1774, simple_loss=0.249, pruned_loss=0.05291, over 951996.47 frames. ], batch size: 32, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:43:50,885 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7816, 3.9717, 3.6195, 1.8256, 4.0015, 3.0355, 1.0326, 2.7971], device='cuda:2'), covar=tensor([0.2301, 0.2001, 0.1755, 0.3455, 0.1149, 0.1019, 0.4722, 0.1480], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0178, 0.0161, 0.0129, 0.0161, 0.0124, 0.0150, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 03:43:54,391 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=127520.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:43:59,921 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=127529.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:44:16,062 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8922, 1.6646, 1.5606, 1.3023, 1.6918, 1.6678, 1.7028, 2.2387], device='cuda:2'), covar=tensor([0.3870, 0.4014, 0.3178, 0.3412, 0.3601, 0.2325, 0.3310, 0.1790], device='cuda:2'), in_proj_covar=tensor([0.0288, 0.0263, 0.0234, 0.0276, 0.0256, 0.0227, 0.0254, 0.0236], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:44:20,425 INFO [finetune.py:976] (2/7) Epoch 23, batch 1550, loss[loss=0.1303, simple_loss=0.2142, pruned_loss=0.02318, over 4791.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.249, pruned_loss=0.05261, over 951235.62 frames. 
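[editor's note] grad_scale tells the same fp16 story twice in this stretch: it doubles from 32.0 to 64.0 (batch 4850 of Epoch 22, and again around batch 1200 of Epoch 23) and drops back to 32.0 within a few log intervals (batch 4900, batch 1400). That is the signature of dynamic loss scaling, which grows the scale after a long run of overflow-free steps and halves it on the first non-finite gradient. A sketch with torch.cuda.amp.GradScaler semantics; the growth_interval value is a placeholder, and loss_fn is a hypothetical helper:

import torch

scaler = torch.cuda.amp.GradScaler(
    init_scale=32.0,       # the value grad_scale settles at here
    growth_factor=2.0,     # 32 -> 64 after enough clean steps
    backoff_factor=0.5,    # 64 -> 32 on the first inf/nan gradient
    growth_interval=2000,  # placeholder for the growth period
)

def train_step(model, optimizer, loss_fn, batch):
    # loss_fn(model, batch) -> scalar loss; hypothetical helper.
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = loss_fn(model, batch)
    scaler.scale(loss).backward()
    scaler.step(optimizer)      # skipped internally on overflow
    scaler.update()             # grows or backs off the scale
    return scaler.get_scale()   # the grad_scale value the log prints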
], batch size: 29, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:44:22,241 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.110e+02 1.462e+02 1.755e+02 2.123e+02 3.197e+02, threshold=3.511e+02, percent-clipped=0.0 2023-03-27 03:44:48,015 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=127590.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:45:04,791 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=127605.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:45:07,683 INFO [finetune.py:976] (2/7) Epoch 23, batch 1600, loss[loss=0.1273, simple_loss=0.2106, pruned_loss=0.02198, over 4722.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.246, pruned_loss=0.0516, over 950483.68 frames. ], batch size: 59, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:45:40,959 INFO [finetune.py:976] (2/7) Epoch 23, batch 1650, loss[loss=0.1544, simple_loss=0.2169, pruned_loss=0.04593, over 4814.00 frames. ], tot_loss[loss=0.1727, simple_loss=0.2435, pruned_loss=0.05094, over 953520.65 frames. ], batch size: 41, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:45:43,335 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.066e+02 1.567e+02 1.812e+02 2.212e+02 4.212e+02, threshold=3.624e+02, percent-clipped=4.0 2023-03-27 03:45:43,476 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=127662.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:46:19,743 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5170, 1.4585, 1.4372, 1.4872, 1.0856, 2.9236, 1.0959, 1.5679], device='cuda:2'), covar=tensor([0.3525, 0.2681, 0.2203, 0.2483, 0.1850, 0.0311, 0.2647, 0.1322], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0124, 0.0113, 0.0096, 0.0094, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 03:46:26,934 INFO [finetune.py:976] (2/7) Epoch 23, batch 1700, loss[loss=0.2537, simple_loss=0.2907, pruned_loss=0.1084, over 4929.00 frames. ], tot_loss[loss=0.1727, simple_loss=0.2427, pruned_loss=0.0514, over 955292.54 frames. ], batch size: 38, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:46:34,055 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6675, 1.5725, 1.4065, 1.7451, 1.7327, 1.7035, 1.0655, 1.4276], device='cuda:2'), covar=tensor([0.2083, 0.1996, 0.1909, 0.1643, 0.1484, 0.1208, 0.2378, 0.1821], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0210, 0.0212, 0.0196, 0.0243, 0.0189, 0.0217, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 03:46:36,933 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=127723.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:47:00,092 INFO [finetune.py:976] (2/7) Epoch 23, batch 1750, loss[loss=0.1754, simple_loss=0.2579, pruned_loss=0.0464, over 4812.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2431, pruned_loss=0.05163, over 952985.00 frames. 
2023-03-27 03:47:00,092 INFO [finetune.py:976] (2/7) Epoch 23, batch 1750, loss[loss=0.1754, simple_loss=0.2579, pruned_loss=0.0464, over 4812.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2431, pruned_loss=0.05163, over 952985.00 frames. ], batch size: 45, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 03:47:01,897 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.127e+02 1.511e+02 1.783e+02 2.157e+02 3.427e+02, threshold=3.565e+02, percent-clipped=0.0
2023-03-27 03:47:13,881 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=127779.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:47:16,783 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=127783.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 03:47:33,985 INFO [finetune.py:976] (2/7) Epoch 23, batch 1800, loss[loss=0.2276, simple_loss=0.2852, pruned_loss=0.08499, over 4864.00 frames. ], tot_loss[loss=0.1764, simple_loss=0.2471, pruned_loss=0.05284, over 952080.07 frames. ], batch size: 31, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 03:47:35,350 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6850, 1.6831, 1.6569, 1.6804, 1.4254, 3.3608, 1.6875, 2.0250], device='cuda:2'), covar=tensor([0.2914, 0.2241, 0.1818, 0.2079, 0.1516, 0.0237, 0.2590, 0.1088], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0124, 0.0113, 0.0096, 0.0094, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 03:47:54,758 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=127840.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:48:01,835 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=127850.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:48:07,744 INFO [finetune.py:976] (2/7) Epoch 23, batch 1850, loss[loss=0.2119, simple_loss=0.2816, pruned_loss=0.07107, over 4728.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.2477, pruned_loss=0.05248, over 949588.69 frames. ], batch size: 59, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 03:48:09,575 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.256e+01 1.480e+02 1.674e+02 2.095e+02 4.093e+02, threshold=3.347e+02, percent-clipped=1.0
2023-03-27 03:48:16,150 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=127872.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:48:24,925 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=127885.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:48:40,233 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=127905.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:48:42,624 INFO [finetune.py:976] (2/7) Epoch 23, batch 1900, loss[loss=0.191, simple_loss=0.257, pruned_loss=0.06247, over 4802.00 frames. ], tot_loss[loss=0.1765, simple_loss=0.2487, pruned_loss=0.05219, over 952359.48 frames. ], batch size: 51, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 03:48:43,933 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=127911.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:48:53,628 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.98 vs. limit=5.0
2023-03-27 03:48:58,814 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=127933.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:49:05,397 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1959, 1.8477, 2.2360, 2.2053, 1.9342, 1.9243, 2.1317, 2.0957], device='cuda:2'), covar=tensor([0.4144, 0.3968, 0.3085, 0.3917, 0.4930, 0.4074, 0.4860, 0.2968], device='cuda:2'), in_proj_covar=tensor([0.0259, 0.0243, 0.0263, 0.0287, 0.0285, 0.0262, 0.0294, 0.0247], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 03:49:11,955 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=127953.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:49:16,014 INFO [finetune.py:976] (2/7) Epoch 23, batch 1950, loss[loss=0.1722, simple_loss=0.238, pruned_loss=0.05322, over 4882.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2475, pruned_loss=0.0515, over 955817.69 frames. ], batch size: 35, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 03:49:17,847 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.653e+01 1.391e+02 1.764e+02 2.084e+02 3.552e+02, threshold=3.528e+02, percent-clipped=1.0
2023-03-27 03:49:53,065 INFO [finetune.py:976] (2/7) Epoch 23, batch 2000, loss[loss=0.1609, simple_loss=0.2369, pruned_loss=0.0424, over 4750.00 frames. ], tot_loss[loss=0.1721, simple_loss=0.244, pruned_loss=0.05014, over 955903.75 frames. ], batch size: 27, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 03:50:03,200 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=128018.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:50:38,970 INFO [finetune.py:976] (2/7) Epoch 23, batch 2050, loss[loss=0.1619, simple_loss=0.2232, pruned_loss=0.05033, over 4732.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2409, pruned_loss=0.04963, over 955333.92 frames. ], batch size: 59, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 03:50:41,269 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.050e+02 1.483e+02 1.749e+02 2.101e+02 3.191e+02, threshold=3.498e+02, percent-clipped=0.0
2023-03-27 03:50:54,692 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=128083.0, num_to_drop=1, layers_to_drop={2}
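The [zipformer.py:1188] lines record stochastic layer dropping inside each encoder stack: each stack logs its fixed warmup interval (warmup_begin/warmup_end, in batches), the global batch_count, and which of its layers, if any, were randomly chosen to be skipped for that batch. Since drops still occur long after warmup_end (batch_count here is above 127000), the drop probability is evidently nonzero after warmup; a rough sketch with a constant small probability, where the 0.075 rate is an assumption and not taken from zipformer.py:

# Sketch of per-batch stochastic layer skipping (rate is an assumption).
import random

def sample_layers_to_drop(num_layers: int, drop_prob: float = 0.075) -> set:
    """Independently skip each layer with probability drop_prob."""
    return {i for i in range(num_layers) if random.random() < drop_prob}

# Usually returns set(), occasionally a singleton like {2}, matching the log.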
2023-03-27 03:51:13,973 INFO [finetune.py:976] (2/7) Epoch 23, batch 2100, loss[loss=0.2768, simple_loss=0.3311, pruned_loss=0.1112, over 4213.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2409, pruned_loss=0.0502, over 953131.28 frames. ], batch size: 66, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 03:51:21,786 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1104, 1.8447, 1.8954, 0.9895, 2.1517, 2.3237, 2.0708, 1.7514], device='cuda:2'), covar=tensor([0.0946, 0.0793, 0.0602, 0.0689, 0.0646, 0.0786, 0.0497, 0.0764], device='cuda:2'), in_proj_covar=tensor([0.0125, 0.0151, 0.0128, 0.0124, 0.0132, 0.0131, 0.0143, 0.0150], device='cuda:2'), out_proj_covar=tensor([9.0727e-05, 1.0861e-04, 9.1702e-05, 8.7141e-05, 9.2338e-05, 9.3143e-05, 1.0186e-04, 1.0742e-04], device='cuda:2')
2023-03-27 03:51:40,712 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=128131.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 03:51:43,606 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=128135.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:51:52,022 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1854, 2.0678, 2.1628, 1.0553, 2.5305, 2.7528, 2.3694, 1.9055], device='cuda:2'), covar=tensor([0.0976, 0.0762, 0.0549, 0.0754, 0.0517, 0.0632, 0.0476, 0.0790], device='cuda:2'), in_proj_covar=tensor([0.0124, 0.0150, 0.0128, 0.0123, 0.0131, 0.0130, 0.0142, 0.0149], device='cuda:2'), out_proj_covar=tensor([9.0354e-05, 1.0822e-04, 9.1403e-05, 8.6840e-05, 9.2067e-05, 9.2740e-05, 1.0151e-04, 1.0695e-04], device='cuda:2')
2023-03-27 03:52:00,528 INFO [finetune.py:976] (2/7) Epoch 23, batch 2150, loss[loss=0.1537, simple_loss=0.2267, pruned_loss=0.04037, over 4422.00 frames. ], tot_loss[loss=0.1727, simple_loss=0.2434, pruned_loss=0.05102, over 952769.74 frames. ], batch size: 19, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 03:52:02,379 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.009e+02 1.558e+02 1.811e+02 2.178e+02 3.611e+02, threshold=3.622e+02, percent-clipped=2.0
2023-03-27 03:52:03,097 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1171, 1.2594, 0.7138, 1.9761, 2.3842, 1.7171, 1.6137, 1.7105], device='cuda:2'), covar=tensor([0.1526, 0.2286, 0.2242, 0.1258, 0.1966, 0.2054, 0.1614, 0.2163], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0094, 0.0111, 0.0092, 0.0120, 0.0094, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2')
2023-03-27 03:52:17,018 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=128185.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:52:31,847 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=128206.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:52:33,601 INFO [finetune.py:976] (2/7) Epoch 23, batch 2200, loss[loss=0.2124, simple_loss=0.2886, pruned_loss=0.06812, over 4823.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.246, pruned_loss=0.05165, over 951228.31 frames. ], batch size: 38, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 03:52:46,631 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=128228.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:52:49,641 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=128233.0, num_to_drop=0, layers_to_drop=set()
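The [zipformer.py:2441] dumps summarize attention-weight entropy per head (one value per head, alongside covariance diagnostics for the attention projections). Entropy near zero means a head attends to essentially a single frame; larger values mean flatter attention. As a worked illustration of the quantity itself, not of zipformer's internals:

# Entropy of one attention head's weights: H = -sum(p * log(p)).
import torch

def attention_entropy(attn: torch.Tensor) -> torch.Tensor:
    # attn: (..., num_frames); rows are softmax distributions over frames
    p = attn.clamp_min(1e-20)
    return -(p * p.log()).sum(dim=-1)

uniform = torch.full((100,), 0.01)          # flat attention over 100 frames
print(attention_entropy(uniform))           # ~4.6 (= log(100))
peaked = torch.zeros(100); peaked[0] = 1.0  # attends to a single frame
print(attention_entropy(peaked))            # ~0.0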
2023-03-27 03:53:07,264 INFO [finetune.py:976] (2/7) Epoch 23, batch 2250, loss[loss=0.1761, simple_loss=0.2453, pruned_loss=0.05347, over 4832.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2454, pruned_loss=0.05127, over 949527.71 frames. ], batch size: 30, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 03:53:09,084 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.465e+01 1.491e+02 1.772e+02 2.217e+02 3.841e+02, threshold=3.544e+02, percent-clipped=1.0
2023-03-27 03:53:19,663 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8297, 3.7730, 3.6135, 1.9913, 3.9591, 2.8952, 0.9220, 2.7430], device='cuda:2'), covar=tensor([0.2266, 0.1753, 0.1490, 0.3001, 0.1016, 0.0991, 0.4186, 0.1352], device='cuda:2'), in_proj_covar=tensor([0.0154, 0.0178, 0.0160, 0.0129, 0.0161, 0.0124, 0.0150, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-27 03:53:40,844 INFO [finetune.py:976] (2/7) Epoch 23, batch 2300, loss[loss=0.1986, simple_loss=0.2731, pruned_loss=0.06207, over 4897.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2458, pruned_loss=0.05102, over 952250.19 frames. ], batch size: 36, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 03:53:47,290 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=128318.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:53:47,864 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9782, 4.3909, 4.2181, 2.6954, 4.5390, 3.3305, 0.7627, 3.2136], device='cuda:2'), covar=tensor([0.2428, 0.1814, 0.1314, 0.2749, 0.0800, 0.0903, 0.4643, 0.1316], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0177, 0.0159, 0.0128, 0.0160, 0.0123, 0.0148, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-27 03:53:48,506 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=128320.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:54:13,558 INFO [finetune.py:976] (2/7) Epoch 23, batch 2350, loss[loss=0.161, simple_loss=0.2356, pruned_loss=0.04323, over 4895.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.2433, pruned_loss=0.04967, over 951463.91 frames. ], batch size: 35, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 03:54:15,916 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.030e+02 1.510e+02 1.720e+02 2.103e+02 3.385e+02, threshold=3.440e+02, percent-clipped=0.0
2023-03-27 03:54:18,449 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=128366.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:54:26,721 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2778, 1.9618, 2.3487, 2.2800, 1.9598, 1.9836, 2.2136, 2.1353], device='cuda:2'), covar=tensor([0.3970, 0.4133, 0.3149, 0.3790, 0.5321, 0.4056, 0.4722, 0.3054], device='cuda:2'), in_proj_covar=tensor([0.0257, 0.0243, 0.0262, 0.0286, 0.0284, 0.0261, 0.0293, 0.0246], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 03:54:28,987 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=128381.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:54:46,896 INFO [finetune.py:976] (2/7) Epoch 23, batch 2400, loss[loss=0.1659, simple_loss=0.2378, pruned_loss=0.04698, over 4807.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2408, pruned_loss=0.04871, over 953956.13 frames. ], batch size: 25, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 03:54:49,455 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=128413.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:55:06,807 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=128435.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:55:13,894 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6938, 1.6091, 1.4020, 1.5938, 2.0917, 1.9186, 1.7043, 1.4627], device='cuda:2'), covar=tensor([0.0372, 0.0348, 0.0621, 0.0350, 0.0177, 0.0479, 0.0315, 0.0429], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0107, 0.0144, 0.0112, 0.0100, 0.0111, 0.0102, 0.0112], device='cuda:2'), out_proj_covar=tensor([7.7409e-05, 8.1938e-05, 1.1317e-04, 8.5631e-05, 7.7438e-05, 8.2181e-05, 7.5591e-05, 8.5578e-05], device='cuda:2')
2023-03-27 03:55:35,543 INFO [finetune.py:976] (2/7) Epoch 23, batch 2450, loss[loss=0.1415, simple_loss=0.2169, pruned_loss=0.03301, over 4915.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2384, pruned_loss=0.04837, over 956207.20 frames. ], batch size: 37, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 03:55:41,392 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.054e+01 1.492e+02 1.779e+02 2.209e+02 4.084e+02, threshold=3.557e+02, percent-clipped=2.0
2023-03-27 03:55:49,824 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=128474.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:55:53,395 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8482, 1.7587, 1.6120, 1.9965, 2.0961, 1.9327, 1.3392, 1.5266], device='cuda:2'), covar=tensor([0.2107, 0.1864, 0.1822, 0.1528, 0.1575, 0.1189, 0.2382, 0.1900], device='cuda:2'), in_proj_covar=tensor([0.0245, 0.0211, 0.0214, 0.0197, 0.0245, 0.0190, 0.0217, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 03:55:55,744 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=128483.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:56:09,760 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.62 vs. limit=2.0
2023-03-27 03:56:10,250 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=128506.0, num_to_drop=0, layers_to_drop=set()
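The [scaling.py:679] lines compare a per-module whitening metric against a limit (2.0 for the grouped 96/192-channel activations, 5.0 for the full 384-channel case); the message appears when the metric approaches or exceeds its limit. One scale-invariant way to measure "how non-white is this", shown purely as an illustrative assumption about what such a metric could compute and not as the scaling.py formula, is the ratio of the mean squared eigenvalue of the channel covariance to the squared mean eigenvalue, which is exactly 1.0 for perfectly white features:

# Illustrative whitening metric (an assumption, not the scaling.py formula):
# ratio mean(eig^2) / mean(eig)^2 of the channel covariance; 1.0 == white.
import torch

def whitening_metric(x: torch.Tensor, num_groups: int = 1) -> float:
    # x: (frames, channels); split channels into groups, average the metric
    frames, channels = x.shape
    x = x.reshape(frames, num_groups, channels // num_groups)
    metrics = []
    for g in range(num_groups):
        feats = x[:, g, :] - x[:, g, :].mean(dim=0)
        cov = feats.T @ feats / frames
        eig = torch.linalg.eigvalsh(cov)
        metrics.append((eig.pow(2).mean() / eig.mean().pow(2)).item())
    return sum(metrics) / num_groups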
2023-03-27 03:56:12,479 INFO [finetune.py:976] (2/7) Epoch 23, batch 2500, loss[loss=0.2487, simple_loss=0.3052, pruned_loss=0.09613, over 4278.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2406, pruned_loss=0.04945, over 955909.66 frames. ], batch size: 65, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 03:56:25,957 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=128528.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:56:25,989 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0409, 1.8213, 2.0149, 1.3109, 2.0199, 2.0990, 1.9622, 1.7568], device='cuda:2'), covar=tensor([0.0542, 0.0696, 0.0657, 0.0810, 0.0712, 0.0623, 0.0646, 0.1038], device='cuda:2'), in_proj_covar=tensor([0.0128, 0.0133, 0.0137, 0.0118, 0.0122, 0.0135, 0.0135, 0.0159], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 03:56:48,372 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=128554.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:56:55,781 INFO [finetune.py:976] (2/7) Epoch 23, batch 2550, loss[loss=0.173, simple_loss=0.2383, pruned_loss=0.05384, over 4753.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2421, pruned_loss=0.04959, over 954770.09 frames. ], batch size: 23, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 03:56:58,588 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.110e+02 1.563e+02 1.837e+02 2.202e+02 4.665e+02, threshold=3.674e+02, percent-clipped=3.0
2023-03-27 03:57:11,630 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=128576.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:57:30,115 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0
2023-03-27 03:57:33,477 INFO [finetune.py:976] (2/7) Epoch 23, batch 2600, loss[loss=0.2244, simple_loss=0.2899, pruned_loss=0.07945, over 4899.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.244, pruned_loss=0.0498, over 954055.74 frames. ], batch size: 36, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 03:57:43,529 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.74 vs. limit=5.0
2023-03-27 03:58:01,232 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.77 vs. limit=5.0
2023-03-27 03:58:07,053 INFO [finetune.py:976] (2/7) Epoch 23, batch 2650, loss[loss=0.1932, simple_loss=0.2726, pruned_loss=0.05684, over 4810.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2463, pruned_loss=0.05043, over 955363.69 frames. ], batch size: 40, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 03:58:08,890 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.089e+02 1.579e+02 1.820e+02 2.183e+02 3.562e+02, threshold=3.640e+02, percent-clipped=0.0
2023-03-27 03:58:18,912 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=128676.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:58:19,025 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0
2023-03-27 03:58:40,903 INFO [finetune.py:976] (2/7) Epoch 23, batch 2700, loss[loss=0.1879, simple_loss=0.2623, pruned_loss=0.05678, over 4729.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2464, pruned_loss=0.05016, over 952437.93 frames. ], batch size: 54, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 03:58:46,923 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.8199, 3.3686, 3.5005, 3.6722, 3.5809, 3.3855, 3.8820, 1.3014], device='cuda:2'), covar=tensor([0.0909, 0.0832, 0.0948, 0.1149, 0.1388, 0.1520, 0.0901, 0.5455], device='cuda:2'), in_proj_covar=tensor([0.0343, 0.0244, 0.0278, 0.0289, 0.0335, 0.0282, 0.0300, 0.0299], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 03:59:01,161 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9417, 1.8189, 1.5993, 1.4343, 1.9654, 1.7337, 1.8305, 1.9433], device='cuda:2'), covar=tensor([0.1367, 0.1754, 0.2949, 0.2378, 0.2566, 0.1705, 0.2872, 0.1782], device='cuda:2'), in_proj_covar=tensor([0.0189, 0.0189, 0.0234, 0.0255, 0.0249, 0.0205, 0.0215, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 03:59:14,674 INFO [finetune.py:976] (2/7) Epoch 23, batch 2750, loss[loss=0.1366, simple_loss=0.2149, pruned_loss=0.02911, over 4800.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2435, pruned_loss=0.04935, over 952417.08 frames. ], batch size: 29, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 03:59:16,466 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.314e+01 1.516e+02 1.813e+02 2.146e+02 3.615e+02, threshold=3.627e+02, percent-clipped=0.0
2023-03-27 03:59:21,323 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=128769.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:59:30,031 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=128781.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:59:48,512 INFO [finetune.py:976] (2/7) Epoch 23, batch 2800, loss[loss=0.1642, simple_loss=0.2417, pruned_loss=0.04335, over 4745.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2416, pruned_loss=0.04873, over 955466.56 frames. ], batch size: 23, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 04:00:10,916 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=128842.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:00:22,070 INFO [finetune.py:976] (2/7) Epoch 23, batch 2850, loss[loss=0.1288, simple_loss=0.1965, pruned_loss=0.03055, over 4357.00 frames. ], tot_loss[loss=0.1674, simple_loss=0.2389, pruned_loss=0.04797, over 952651.64 frames. ], batch size: 19, lr: 3.10e-03, grad_scale: 32.0
2023-03-27 04:00:23,884 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.021e+01 1.432e+02 1.754e+02 2.169e+02 4.729e+02, threshold=3.508e+02, percent-clipped=1.0
2023-03-27 04:00:53,416 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.60 vs. limit=2.0
2023-03-27 04:00:55,186 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=128890.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 04:01:08,141 INFO [finetune.py:976] (2/7) Epoch 23, batch 2900, loss[loss=0.1944, simple_loss=0.271, pruned_loss=0.05885, over 4904.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2419, pruned_loss=0.04917, over 952694.94 frames. ], batch size: 35, lr: 3.09e-03, grad_scale: 32.0
2023-03-27 04:01:36,741 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=128951.0, num_to_drop=1, layers_to_drop={2}
2023-03-27 04:01:41,988 INFO [finetune.py:976] (2/7) Epoch 23, batch 2950, loss[loss=0.2052, simple_loss=0.2761, pruned_loss=0.06711, over 4818.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2442, pruned_loss=0.04951, over 952668.94 frames. ], batch size: 40, lr: 3.09e-03, grad_scale: 32.0
2023-03-27 04:01:43,784 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.806e+01 1.599e+02 2.012e+02 2.314e+02 4.261e+02, threshold=4.024e+02, percent-clipped=1.0
2023-03-27 04:01:52,343 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=128976.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:02:00,941 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9653, 1.3999, 0.8478, 1.9761, 2.2759, 1.7258, 1.6955, 1.6804], device='cuda:2'), covar=tensor([0.1330, 0.1910, 0.2063, 0.1074, 0.1893, 0.1986, 0.1387, 0.1846], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0094, 0.0111, 0.0092, 0.0119, 0.0094, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2')
2023-03-27 04:02:30,449 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.29 vs. limit=5.0
2023-03-27 04:02:31,486 INFO [finetune.py:976] (2/7) Epoch 23, batch 3000, loss[loss=0.1503, simple_loss=0.2246, pruned_loss=0.03799, over 4826.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2469, pruned_loss=0.05092, over 953282.04 frames. ], batch size: 47, lr: 3.09e-03, grad_scale: 32.0
2023-03-27 04:02:31,486 INFO [finetune.py:1001] (2/7) Computing validation loss
2023-03-27 04:02:42,304 INFO [finetune.py:1010] (2/7) Epoch 23, validation: loss=0.1567, simple_loss=0.225, pruned_loss=0.04424, over 2265189.00 frames.
2023-03-27 04:02:42,305 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB
2023-03-27 04:02:51,934 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=129024.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:03:14,563 INFO [finetune.py:976] (2/7) Epoch 23, batch 3050, loss[loss=0.1728, simple_loss=0.2411, pruned_loss=0.05231, over 4824.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.248, pruned_loss=0.05125, over 954328.87 frames. ], batch size: 38, lr: 3.09e-03, grad_scale: 32.0
2023-03-27 04:03:16,830 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.176e+02 1.606e+02 1.899e+02 2.236e+02 5.313e+02, threshold=3.798e+02, percent-clipped=3.0
2023-03-27 04:03:22,097 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=129069.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:03:23,968 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8146, 1.0595, 1.8217, 1.7421, 1.5401, 1.4746, 1.6465, 1.7109], device='cuda:2'), covar=tensor([0.3141, 0.3372, 0.2778, 0.3104, 0.4200, 0.3533, 0.3540, 0.2649], device='cuda:2'), in_proj_covar=tensor([0.0259, 0.0243, 0.0263, 0.0287, 0.0285, 0.0262, 0.0293, 0.0247], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
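The batch-3000 entries above interleave a validation pass with training: at regular intervals the loop computes a frame-weighted loss over the whole dev set (2265189 frames here) and reports peak GPU memory alongside it. A minimal sketch of that pattern; the model/dataloader names and the per-batch return signature are hypothetical:

# Sketch of the periodic validation pass (names and API are hypothetical).
import torch

def compute_validation_loss(model, valid_dl) -> float:
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    with torch.no_grad():
        for batch in valid_dl:
            loss, num_frames = model(batch)   # assumed per-batch interface
            tot_loss += float(loss) * num_frames
            tot_frames += num_frames
    model.train()
    return tot_loss / tot_frames  # frame-weighted average, as logged above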
2023-03-27 04:03:37,878 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.78 vs. limit=2.0
2023-03-27 04:03:47,842 INFO [finetune.py:976] (2/7) Epoch 23, batch 3100, loss[loss=0.1801, simple_loss=0.2501, pruned_loss=0.05509, over 4747.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2457, pruned_loss=0.05041, over 955172.42 frames. ], batch size: 54, lr: 3.09e-03, grad_scale: 32.0
2023-03-27 04:03:53,277 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=129117.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:04:06,394 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=129137.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:04:20,603 INFO [finetune.py:976] (2/7) Epoch 23, batch 3150, loss[loss=0.1658, simple_loss=0.2446, pruned_loss=0.04347, over 4917.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2429, pruned_loss=0.05022, over 955532.34 frames. ], batch size: 37, lr: 3.09e-03, grad_scale: 32.0
2023-03-27 04:04:22,455 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.200e+02 1.503e+02 1.751e+02 2.296e+02 3.694e+02, threshold=3.502e+02, percent-clipped=0.0
2023-03-27 04:04:30,669 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5085, 2.0430, 2.3425, 2.4359, 2.1285, 2.1308, 2.3045, 2.2689], device='cuda:2'), covar=tensor([0.3965, 0.4141, 0.3429, 0.4035, 0.5316, 0.4346, 0.4961, 0.3177], device='cuda:2'), in_proj_covar=tensor([0.0259, 0.0243, 0.0263, 0.0288, 0.0285, 0.0262, 0.0294, 0.0247], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 04:04:33,503 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4710, 1.8542, 0.8541, 2.3581, 2.9249, 1.9867, 2.2679, 2.0939], device='cuda:2'), covar=tensor([0.1325, 0.1891, 0.2054, 0.1028, 0.1494, 0.1737, 0.1340, 0.1876], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0094, 0.0111, 0.0092, 0.0119, 0.0094, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2')
2023-03-27 04:05:01,880 INFO [finetune.py:976] (2/7) Epoch 23, batch 3200, loss[loss=0.1522, simple_loss=0.2302, pruned_loss=0.03714, over 4828.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2393, pruned_loss=0.04868, over 955486.69 frames. ], batch size: 33, lr: 3.09e-03, grad_scale: 32.0
2023-03-27 04:05:16,623 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8052, 1.7791, 1.6771, 2.0483, 2.1448, 2.0011, 1.5553, 1.5241], device='cuda:2'), covar=tensor([0.1959, 0.1749, 0.1642, 0.1409, 0.1534, 0.1041, 0.2173, 0.1757], device='cuda:2'), in_proj_covar=tensor([0.0242, 0.0208, 0.0211, 0.0195, 0.0242, 0.0188, 0.0214, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 04:05:18,602 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.36 vs. limit=5.0
2023-03-27 04:05:22,787 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0
2023-03-27 04:05:26,817 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=129246.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 04:05:37,442 INFO [finetune.py:976] (2/7) Epoch 23, batch 3250, loss[loss=0.2431, simple_loss=0.3058, pruned_loss=0.09021, over 4810.00 frames. ], tot_loss[loss=0.1677, simple_loss=0.2389, pruned_loss=0.04829, over 954016.48 frames. ], batch size: 41, lr: 3.09e-03, grad_scale: 32.0
2023-03-27 04:05:39,768 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.902e+01 1.487e+02 1.735e+02 2.015e+02 4.622e+02, threshold=3.470e+02, percent-clipped=1.0
2023-03-27 04:06:05,014 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.57 vs. limit=2.0
2023-03-27 04:06:11,748 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.55 vs. limit=2.0
2023-03-27 04:06:22,328 INFO [finetune.py:976] (2/7) Epoch 23, batch 3300, loss[loss=0.2018, simple_loss=0.2633, pruned_loss=0.07015, over 4714.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2438, pruned_loss=0.05045, over 956771.61 frames. ], batch size: 59, lr: 3.09e-03, grad_scale: 32.0
2023-03-27 04:06:51,492 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8712, 1.7600, 1.5681, 1.9253, 2.4296, 1.9591, 1.7461, 1.5268], device='cuda:2'), covar=tensor([0.2004, 0.1942, 0.1816, 0.1521, 0.1617, 0.1158, 0.2196, 0.1836], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0209, 0.0211, 0.0195, 0.0243, 0.0189, 0.0215, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 04:06:56,063 INFO [finetune.py:976] (2/7) Epoch 23, batch 3350, loss[loss=0.1537, simple_loss=0.2395, pruned_loss=0.03401, over 4820.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2463, pruned_loss=0.05133, over 957637.72 frames. ], batch size: 38, lr: 3.09e-03, grad_scale: 32.0
2023-03-27 04:06:57,831 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.264e+01 1.616e+02 1.807e+02 2.144e+02 4.365e+02, threshold=3.613e+02, percent-clipped=1.0
2023-03-27 04:07:22,300 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7078, 1.2991, 0.8371, 1.5584, 2.0856, 1.2562, 1.5489, 1.6163], device='cuda:2'), covar=tensor([0.1452, 0.1963, 0.1862, 0.1136, 0.1777, 0.1986, 0.1382, 0.1853], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0094, 0.0111, 0.0092, 0.0119, 0.0094, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2')
2023-03-27 04:07:47,796 INFO [finetune.py:976] (2/7) Epoch 23, batch 3400, loss[loss=0.2567, simple_loss=0.3226, pruned_loss=0.09541, over 4173.00 frames. ], tot_loss[loss=0.1775, simple_loss=0.2492, pruned_loss=0.05284, over 957940.64 frames. ], batch size: 65, lr: 3.09e-03, grad_scale: 64.0
2023-03-27 04:07:50,360 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=129413.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 04:08:02,170 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4038, 1.4586, 1.7055, 1.7506, 1.5610, 3.1279, 1.2370, 1.4927], device='cuda:2'), covar=tensor([0.0934, 0.1626, 0.1070, 0.0871, 0.1494, 0.0258, 0.1373, 0.1647], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0076, 0.0092, 0.0081, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 04:08:07,358 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=129437.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:08:18,296 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8786, 1.8375, 1.6000, 1.9867, 2.4873, 2.0075, 1.6873, 1.5287], device='cuda:2'), covar=tensor([0.2015, 0.1772, 0.1773, 0.1444, 0.1361, 0.1150, 0.2105, 0.1782], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0209, 0.0212, 0.0196, 0.0244, 0.0189, 0.0215, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 04:08:21,220 INFO [finetune.py:976] (2/7) Epoch 23, batch 3450, loss[loss=0.2306, simple_loss=0.2874, pruned_loss=0.08686, over 4829.00 frames. ], tot_loss[loss=0.1787, simple_loss=0.2502, pruned_loss=0.0536, over 957249.41 frames. ], batch size: 47, lr: 3.09e-03, grad_scale: 64.0
2023-03-27 04:08:23,468 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.062e+02 1.549e+02 1.981e+02 2.372e+02 4.494e+02, threshold=3.962e+02, percent-clipped=6.0
2023-03-27 04:08:31,804 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=129474.0, num_to_drop=1, layers_to_drop={3}
2023-03-27 04:08:35,891 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7316, 3.4526, 3.4741, 1.6734, 3.7008, 2.8491, 1.3206, 2.5698], device='cuda:2'), covar=tensor([0.2728, 0.2001, 0.1353, 0.3030, 0.1025, 0.0886, 0.3531, 0.1327], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0178, 0.0160, 0.0129, 0.0161, 0.0124, 0.0148, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-27 04:08:37,590 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6940, 3.6349, 3.4954, 1.5480, 3.7550, 2.8411, 0.9375, 2.5295], device='cuda:2'), covar=tensor([0.2322, 0.2104, 0.1588, 0.3571, 0.1134, 0.1023, 0.4280, 0.1493], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0178, 0.0160, 0.0129, 0.0161, 0.0124, 0.0148, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-27 04:08:39,422 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=129485.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:08:50,182 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4372, 1.4454, 1.2428, 1.4168, 1.7704, 1.6290, 1.4468, 1.2651], device='cuda:2'), covar=tensor([0.0372, 0.0314, 0.0636, 0.0310, 0.0217, 0.0422, 0.0342, 0.0405], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0107, 0.0145, 0.0111, 0.0100, 0.0112, 0.0102, 0.0112], device='cuda:2'), out_proj_covar=tensor([7.7537e-05, 8.2075e-05, 1.1352e-04, 8.5426e-05, 7.7486e-05, 8.2409e-05, 7.5864e-05, 8.5455e-05], device='cuda:2')
2023-03-27 04:08:52,010 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3559, 1.9149, 2.5687, 1.7239, 2.3521, 2.6431, 1.8847, 2.6589], device='cuda:2'), covar=tensor([0.1114, 0.1817, 0.1134, 0.1736, 0.0839, 0.1060, 0.2335, 0.0800], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0206, 0.0191, 0.0189, 0.0173, 0.0214, 0.0215, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 04:08:54,911 INFO [finetune.py:976] (2/7) Epoch 23, batch 3500, loss[loss=0.1678, simple_loss=0.2387, pruned_loss=0.04844, over 4873.00 frames. ], tot_loss[loss=0.1764, simple_loss=0.2472, pruned_loss=0.0528, over 957103.08 frames. ], batch size: 34, lr: 3.09e-03, grad_scale: 64.0
2023-03-27 04:09:16,336 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5419, 1.5138, 1.7779, 1.8116, 1.6822, 3.2124, 1.3911, 1.5852], device='cuda:2'), covar=tensor([0.0930, 0.1803, 0.1086, 0.0916, 0.1535, 0.0257, 0.1522, 0.1838], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0077, 0.0092, 0.0082, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 04:09:20,970 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=129546.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 04:09:28,783 INFO [finetune.py:976] (2/7) Epoch 23, batch 3550, loss[loss=0.1538, simple_loss=0.2322, pruned_loss=0.03767, over 4916.00 frames. ], tot_loss[loss=0.1741, simple_loss=0.2441, pruned_loss=0.05201, over 955489.43 frames. ], batch size: 46, lr: 3.09e-03, grad_scale: 64.0
2023-03-27 04:09:30,572 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.427e+01 1.468e+02 1.701e+02 2.000e+02 4.470e+02, threshold=3.402e+02, percent-clipped=1.0
2023-03-27 04:09:41,569 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8684, 2.7515, 2.5901, 1.9700, 2.7629, 2.8628, 2.9991, 2.2944], device='cuda:2'), covar=tensor([0.0641, 0.0682, 0.0904, 0.0913, 0.0636, 0.0822, 0.0653, 0.1193], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0135, 0.0138, 0.0119, 0.0124, 0.0137, 0.0137, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 04:09:52,428 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=129594.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 04:10:11,047 INFO [finetune.py:976] (2/7) Epoch 23, batch 3600, loss[loss=0.1582, simple_loss=0.2252, pruned_loss=0.04553, over 4872.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2417, pruned_loss=0.05081, over 955512.51 frames. ], batch size: 31, lr: 3.09e-03, grad_scale: 64.0
2023-03-27 04:10:12,958 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=129612.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:10:44,782 INFO [finetune.py:976] (2/7) Epoch 23, batch 3650, loss[loss=0.1807, simple_loss=0.2577, pruned_loss=0.05183, over 4825.00 frames. ], tot_loss[loss=0.1751, simple_loss=0.2453, pruned_loss=0.0524, over 951690.68 frames. ], batch size: 47, lr: 3.09e-03, grad_scale: 64.0
2023-03-27 04:10:46,575 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.022e+02 1.535e+02 1.802e+02 2.202e+02 3.404e+02, threshold=3.605e+02, percent-clipped=1.0
2023-03-27 04:10:53,482 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=129673.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:11:23,355 INFO [finetune.py:976] (2/7) Epoch 23, batch 3700, loss[loss=0.1709, simple_loss=0.2474, pruned_loss=0.04721, over 4764.00 frames. ], tot_loss[loss=0.1789, simple_loss=0.2494, pruned_loss=0.05415, over 953000.67 frames. ], batch size: 26, lr: 3.09e-03, grad_scale: 64.0
2023-03-27 04:12:00,258 INFO [finetune.py:976] (2/7) Epoch 23, batch 3750, loss[loss=0.176, simple_loss=0.2546, pruned_loss=0.04866, over 4920.00 frames. ], tot_loss[loss=0.1785, simple_loss=0.2496, pruned_loss=0.05368, over 954162.23 frames. ], batch size: 42, lr: 3.09e-03, grad_scale: 64.0
2023-03-27 04:12:02,069 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.237e+01 1.564e+02 1.874e+02 2.284e+02 3.839e+02, threshold=3.748e+02, percent-clipped=2.0
2023-03-27 04:12:06,379 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=129769.0, num_to_drop=1, layers_to_drop={2}
2023-03-27 04:12:20,326 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=129791.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:12:35,161 INFO [finetune.py:976] (2/7) Epoch 23, batch 3800, loss[loss=0.1878, simple_loss=0.2592, pruned_loss=0.05818, over 4810.00 frames. ], tot_loss[loss=0.1787, simple_loss=0.25, pruned_loss=0.05368, over 953272.76 frames. ], batch size: 39, lr: 3.09e-03, grad_scale: 64.0
2023-03-27 04:12:47,429 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9226, 4.6572, 4.3896, 2.3017, 4.7534, 3.5812, 0.7538, 3.0850], device='cuda:2'), covar=tensor([0.2312, 0.1652, 0.1344, 0.3062, 0.0769, 0.0903, 0.4386, 0.1418], device='cuda:2'), in_proj_covar=tensor([0.0154, 0.0178, 0.0161, 0.0130, 0.0161, 0.0124, 0.0148, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-27 04:13:16,848 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=129852.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:13:20,297 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1548, 1.8809, 2.5470, 1.6758, 2.2178, 2.5163, 1.7333, 2.6634], device='cuda:2'), covar=tensor([0.1357, 0.1956, 0.1308, 0.2014, 0.0955, 0.1321, 0.2840, 0.0752], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0206, 0.0191, 0.0190, 0.0174, 0.0215, 0.0216, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 04:13:20,389 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.78 vs. limit=5.0
2023-03-27 04:13:21,831 INFO [finetune.py:976] (2/7) Epoch 23, batch 3850, loss[loss=0.1735, simple_loss=0.2377, pruned_loss=0.05467, over 4785.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.248, pruned_loss=0.0527, over 954682.87 frames. ], batch size: 51, lr: 3.09e-03, grad_scale: 64.0
2023-03-27 04:13:24,150 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.288e+01 1.597e+02 1.881e+02 2.160e+02 3.613e+02, threshold=3.763e+02, percent-clipped=0.0
2023-03-27 04:13:24,855 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7168, 2.4463, 3.0670, 1.9463, 2.6415, 3.2403, 2.2565, 3.2131], device='cuda:2'), covar=tensor([0.1242, 0.1882, 0.1392, 0.2106, 0.0977, 0.1247, 0.2350, 0.0648], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0207, 0.0191, 0.0190, 0.0174, 0.0215, 0.0217, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 04:13:55,057 INFO [finetune.py:976] (2/7) Epoch 23, batch 3900, loss[loss=0.16, simple_loss=0.2263, pruned_loss=0.04684, over 4848.00 frames. ], tot_loss[loss=0.1741, simple_loss=0.2449, pruned_loss=0.0516, over 953132.91 frames. ], batch size: 49, lr: 3.09e-03, grad_scale: 64.0
2023-03-27 04:14:09,493 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9090, 0.9975, 1.8199, 1.8429, 1.6448, 1.6334, 1.7066, 1.7562], device='cuda:2'), covar=tensor([0.3794, 0.4074, 0.3419, 0.3641, 0.4958, 0.3802, 0.4329, 0.3074], device='cuda:2'), in_proj_covar=tensor([0.0257, 0.0242, 0.0263, 0.0286, 0.0285, 0.0261, 0.0293, 0.0246], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 04:14:27,720 INFO [finetune.py:976] (2/7) Epoch 23, batch 3950, loss[loss=0.1828, simple_loss=0.243, pruned_loss=0.06126, over 4902.00 frames. ], tot_loss[loss=0.1715, simple_loss=0.242, pruned_loss=0.05054, over 954587.65 frames. ], batch size: 32, lr: 3.09e-03, grad_scale: 64.0
2023-03-27 04:14:29,947 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.082e+02 1.481e+02 1.820e+02 2.101e+02 4.779e+02, threshold=3.640e+02, percent-clipped=1.0
2023-03-27 04:14:30,038 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.4985, 3.9918, 4.1433, 4.2038, 4.2909, 4.0228, 4.5529, 1.9759], device='cuda:2'), covar=tensor([0.0694, 0.0730, 0.0782, 0.0972, 0.1064, 0.1293, 0.0643, 0.4711], device='cuda:2'), in_proj_covar=tensor([0.0346, 0.0247, 0.0279, 0.0293, 0.0338, 0.0286, 0.0305, 0.0302], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 04:14:34,548 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=129968.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:14:56,367 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=130000.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:14:58,099 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=130002.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:15:00,820 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0
2023-03-27 04:15:02,811 INFO [finetune.py:976] (2/7) Epoch 23, batch 4000, loss[loss=0.1542, simple_loss=0.2342, pruned_loss=0.03708, over 4881.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2418, pruned_loss=0.05093, over 955485.05 frames. ], batch size: 32, lr: 3.09e-03, grad_scale: 64.0
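tot_loss is not the loss of a single batch: its "over ~950000 frames" field shows it is a running, frame-weighted summary of recent batches that moves only slightly with each new batch. A simple way to reproduce that behaviour, assuming an exponential decay of older batches' weight (the decay constant here is a guess, not the finetune.py value):

# Sketch of a frame-weighted running loss like tot_loss (decay is a guess).
class RunningLoss:
    def __init__(self, decay: float = 0.995):
        self.decay = decay
        self.weighted_loss = 0.0
        self.frames = 0.0

    def update(self, batch_loss: float, batch_frames: float) -> float:
        self.weighted_loss = self.decay * self.weighted_loss + batch_loss * batch_frames
        self.frames = self.decay * self.frames + batch_frames
        return self.weighted_loss / self.frames  # the value printed as tot_loss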
2023-03-27 04:15:45,415 INFO [finetune.py:976] (2/7) Epoch 23, batch 4050, loss[loss=0.1971, simple_loss=0.2708, pruned_loss=0.06169, over 4755.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.2455, pruned_loss=0.05255, over 953952.89 frames. ], batch size: 59, lr: 3.09e-03, grad_scale: 64.0
2023-03-27 04:15:47,210 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=130061.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:15:47,689 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.063e+02 1.665e+02 1.960e+02 2.479e+02 5.275e+02, threshold=3.921e+02, percent-clipped=4.0
2023-03-27 04:15:48,454 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=130063.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 04:15:53,623 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=130069.0, num_to_drop=1, layers_to_drop={2}
2023-03-27 04:16:19,203 INFO [finetune.py:976] (2/7) Epoch 23, batch 4100, loss[loss=0.2773, simple_loss=0.3285, pruned_loss=0.113, over 4063.00 frames. ], tot_loss[loss=0.1762, simple_loss=0.2476, pruned_loss=0.05242, over 955273.46 frames. ], batch size: 65, lr: 3.09e-03, grad_scale: 64.0
2023-03-27 04:16:26,578 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=130117.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 04:16:54,985 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=130147.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:17:02,631 INFO [finetune.py:976] (2/7) Epoch 23, batch 4150, loss[loss=0.1698, simple_loss=0.2407, pruned_loss=0.04942, over 4257.00 frames. ], tot_loss[loss=0.1782, simple_loss=0.2494, pruned_loss=0.0535, over 956405.85 frames. ], batch size: 66, lr: 3.09e-03, grad_scale: 64.0
2023-03-27 04:17:04,907 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.036e+02 1.505e+02 1.863e+02 2.291e+02 4.324e+02, threshold=3.726e+02, percent-clipped=3.0
2023-03-27 04:17:11,648 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.03 vs. limit=5.0
2023-03-27 04:17:25,228 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-27 04:17:36,688 INFO [finetune.py:976] (2/7) Epoch 23, batch 4200, loss[loss=0.1838, simple_loss=0.2595, pruned_loss=0.05407, over 4902.00 frames. ], tot_loss[loss=0.1778, simple_loss=0.2493, pruned_loss=0.05315, over 954529.00 frames. ], batch size: 36, lr: 3.09e-03, grad_scale: 64.0
2023-03-27 04:18:24,042 INFO [finetune.py:976] (2/7) Epoch 23, batch 4250, loss[loss=0.1502, simple_loss=0.2316, pruned_loss=0.03437, over 4900.00 frames. ], tot_loss[loss=0.1741, simple_loss=0.2454, pruned_loss=0.05142, over 954472.69 frames. ], batch size: 36, lr: 3.08e-03, grad_scale: 64.0
2023-03-27 04:18:25,853 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.027e+02 1.516e+02 1.759e+02 2.094e+02 3.793e+02, threshold=3.518e+02, percent-clipped=1.0
2023-03-27 04:18:30,099 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=130268.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:18:31,482 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-27 04:18:32,653 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.78 vs. limit=2.0
2023-03-27 04:18:57,492 INFO [finetune.py:976] (2/7) Epoch 23, batch 4300, loss[loss=0.1617, simple_loss=0.2241, pruned_loss=0.04967, over 4936.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2426, pruned_loss=0.05025, over 955186.24 frames. ], batch size: 38, lr: 3.08e-03, grad_scale: 64.0
2023-03-27 04:19:02,812 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=130316.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:19:29,233 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=130356.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:19:30,470 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=130358.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 04:19:31,027 INFO [finetune.py:976] (2/7) Epoch 23, batch 4350, loss[loss=0.1995, simple_loss=0.2677, pruned_loss=0.06562, over 4823.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.2403, pruned_loss=0.04955, over 957277.93 frames. ], batch size: 40, lr: 3.08e-03, grad_scale: 32.0
2023-03-27 04:19:33,423 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.012e+02 1.417e+02 1.746e+02 2.112e+02 4.412e+02, threshold=3.492e+02, percent-clipped=1.0
2023-03-27 04:19:48,190 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=130384.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:20:04,340 INFO [finetune.py:976] (2/7) Epoch 23, batch 4400, loss[loss=0.1825, simple_loss=0.2598, pruned_loss=0.05253, over 4836.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2417, pruned_loss=0.04991, over 957200.75 frames. ], batch size: 44, lr: 3.08e-03, grad_scale: 32.0
2023-03-27 04:20:06,307 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.0164, 2.7322, 2.5904, 1.3579, 2.6268, 2.1205, 2.1218, 2.4882], device='cuda:2'), covar=tensor([0.0987, 0.0885, 0.1855, 0.2212, 0.1782, 0.2235, 0.2219, 0.1200], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0192, 0.0200, 0.0182, 0.0211, 0.0209, 0.0224, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 04:20:19,497 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=130432.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:20:24,183 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=130438.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:20:30,588 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=130445.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:20:31,777 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=130447.0, num_to_drop=0, layers_to_drop=set()
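grad_scale is the loss-scaling factor of the mixed-precision training; in this stretch of the log it moves between 32.0 and 64.0 (64.0 at batch 4300, back to 32.0 by batch 4350). That is the usual dynamic loss-scaling pattern: halve the scale when a step produces inf/nan gradients, and double it again after a long enough run of clean steps. A minimal sketch, where the growth interval of 2000 steps is an assumption:

# Sketch of dynamic loss scaling (growth interval is an assumption).
class DynamicGradScale:
    def __init__(self, scale: float = 32.0, growth_interval: int = 2000):
        self.scale = scale
        self.growth_interval = growth_interval
        self.good_steps = 0

    def after_step(self, found_inf: bool) -> float:
        if found_inf:          # overflow: halve the scale, restart counter
            self.scale /= 2.0
            self.good_steps = 0
        else:
            self.good_steps += 1
            if self.good_steps % self.growth_interval == 0:
                self.scale *= 2.0  # a clean run: double the scale again
        return self.scale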
2023-03-27 04:20:46,864 INFO [finetune.py:976] (2/7) Epoch 23, batch 4450, loss[loss=0.1865, simple_loss=0.2624, pruned_loss=0.05534, over 4811.00 frames. ], tot_loss[loss=0.1725, simple_loss=0.2446, pruned_loss=0.05022, over 957067.05 frames. ], batch size: 45, lr: 3.08e-03, grad_scale: 32.0
2023-03-27 04:20:49,238 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.124e+02 1.491e+02 1.813e+02 2.246e+02 3.707e+02, threshold=3.626e+02, percent-clipped=3.0
2023-03-27 04:21:02,619 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8285, 1.2173, 1.8612, 1.8342, 1.6525, 1.6128, 1.7820, 1.7184], device='cuda:2'), covar=tensor([0.3795, 0.3984, 0.3089, 0.3464, 0.4615, 0.3705, 0.4207, 0.3019], device='cuda:2'), in_proj_covar=tensor([0.0257, 0.0242, 0.0263, 0.0286, 0.0285, 0.0261, 0.0293, 0.0247], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 04:21:10,562 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=130493.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:21:11,728 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=130495.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:21:14,711 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=130499.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:21:20,674 INFO [finetune.py:976] (2/7) Epoch 23, batch 4500, loss[loss=0.1491, simple_loss=0.2276, pruned_loss=0.03527, over 4763.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2445, pruned_loss=0.04963, over 956260.48 frames. ], batch size: 28, lr: 3.08e-03, grad_scale: 32.0
2023-03-27 04:21:34,581 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.54 vs. limit=2.0
2023-03-27 04:21:53,312 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=130548.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:22:04,071 INFO [finetune.py:976] (2/7) Epoch 23, batch 4550, loss[loss=0.1456, simple_loss=0.2255, pruned_loss=0.03281, over 4779.00 frames. ], tot_loss[loss=0.1735, simple_loss=0.2459, pruned_loss=0.0505, over 955210.18 frames. ], batch size: 29, lr: 3.08e-03, grad_scale: 32.0
2023-03-27 04:22:06,504 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.768e+01 1.546e+02 1.777e+02 2.233e+02 3.779e+02, threshold=3.553e+02, percent-clipped=2.0
2023-03-27 04:22:07,796 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=130565.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:22:15,567 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9536, 1.2405, 1.9041, 1.8880, 1.7215, 1.6628, 1.8132, 1.8044], device='cuda:2'), covar=tensor([0.4174, 0.4151, 0.3449, 0.3870, 0.5022, 0.4119, 0.4558, 0.3250], device='cuda:2'), in_proj_covar=tensor([0.0256, 0.0241, 0.0261, 0.0285, 0.0284, 0.0260, 0.0291, 0.0245], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 04:22:37,457 INFO [finetune.py:976] (2/7) Epoch 23, batch 4600, loss[loss=0.1845, simple_loss=0.2472, pruned_loss=0.06093, over 4786.00 frames. ], tot_loss[loss=0.1727, simple_loss=0.2453, pruned_loss=0.05002, over 955303.87 frames. ], batch size: 29, lr: 3.08e-03, grad_scale: 32.0
2023-03-27 04:22:37,570 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=130609.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 04:22:45,928 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.7299, 3.2896, 3.4483, 3.5554, 3.4815, 3.2555, 3.7762, 1.3519], device='cuda:2'), covar=tensor([0.0886, 0.0954, 0.0991, 0.1111, 0.1377, 0.1801, 0.0960, 0.5736], device='cuda:2'), in_proj_covar=tensor([0.0349, 0.0248, 0.0282, 0.0295, 0.0341, 0.0288, 0.0308, 0.0303], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 04:22:47,765 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=130626.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:23:10,854 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=130656.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:23:17,076 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=130658.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 04:23:17,582 INFO [finetune.py:976] (2/7) Epoch 23, batch 4650, loss[loss=0.1688, simple_loss=0.228, pruned_loss=0.05481, over 4729.00 frames. ], tot_loss[loss=0.1715, simple_loss=0.2429, pruned_loss=0.05001, over 955639.53 frames. ], batch size: 59, lr: 3.08e-03, grad_scale: 32.0
2023-03-27 04:23:19,982 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.906e+01 1.515e+02 1.768e+02 2.232e+02 6.495e+02, threshold=3.536e+02, percent-clipped=3.0
2023-03-27 04:23:46,245 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.83 vs. limit=5.0
2023-03-27 04:23:54,690 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=130704.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:23:55,884 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=130706.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:23:58,154 INFO [finetune.py:976] (2/7) Epoch 23, batch 4700, loss[loss=0.1624, simple_loss=0.2301, pruned_loss=0.04734, over 4822.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2404, pruned_loss=0.04934, over 956404.50 frames. ], batch size: 39, lr: 3.08e-03, grad_scale: 32.0
2023-03-27 04:24:18,050 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=130740.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:24:31,365 INFO [finetune.py:976] (2/7) Epoch 23, batch 4750, loss[loss=0.1512, simple_loss=0.2315, pruned_loss=0.0354, over 4790.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2388, pruned_loss=0.04876, over 955161.25 frames. ], batch size: 29, lr: 3.08e-03, grad_scale: 32.0
2023-03-27 04:24:34,231 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.253e+02 1.528e+02 1.803e+02 2.150e+02 3.686e+02, threshold=3.606e+02, percent-clipped=2.0
2023-03-27 04:24:49,882 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=130788.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 04:24:54,008 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=130794.0, num_to_drop=0, layers_to_drop=set()
], batch size: 47, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:25:37,285 INFO [finetune.py:976] (2/7) Epoch 23, batch 4850, loss[loss=0.1938, simple_loss=0.2716, pruned_loss=0.05801, over 4908.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2445, pruned_loss=0.05101, over 953529.69 frames. ], batch size: 36, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:25:40,093 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.117e+02 1.613e+02 1.947e+02 2.336e+02 6.046e+02, threshold=3.894e+02, percent-clipped=4.0 2023-03-27 04:25:44,710 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0 2023-03-27 04:25:56,437 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6187, 1.5998, 1.3102, 1.4841, 1.9432, 1.8175, 1.6008, 1.3778], device='cuda:2'), covar=tensor([0.0321, 0.0308, 0.0639, 0.0327, 0.0215, 0.0460, 0.0292, 0.0427], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0106, 0.0143, 0.0111, 0.0100, 0.0111, 0.0101, 0.0112], device='cuda:2'), out_proj_covar=tensor([7.7094e-05, 8.1266e-05, 1.1218e-04, 8.5097e-05, 7.7430e-05, 8.2163e-05, 7.5232e-05, 8.5038e-05], device='cuda:2') 2023-03-27 04:26:15,656 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=130904.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 04:26:19,114 INFO [finetune.py:976] (2/7) Epoch 23, batch 4900, loss[loss=0.1651, simple_loss=0.2349, pruned_loss=0.04764, over 4923.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2462, pruned_loss=0.05131, over 954826.56 frames. ], batch size: 33, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:26:24,930 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7744, 3.4720, 3.3990, 1.8995, 3.6272, 2.9211, 1.2479, 2.6383], device='cuda:2'), covar=tensor([0.2815, 0.1795, 0.1451, 0.2944, 0.1020, 0.0845, 0.3606, 0.1285], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0176, 0.0159, 0.0128, 0.0159, 0.0122, 0.0146, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 04:26:28,434 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=130921.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:26:43,571 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.95 vs. limit=2.0 2023-03-27 04:26:50,681 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-27 04:26:52,303 INFO [finetune.py:976] (2/7) Epoch 23, batch 4950, loss[loss=0.1448, simple_loss=0.2214, pruned_loss=0.03411, over 4920.00 frames. ], tot_loss[loss=0.1759, simple_loss=0.2475, pruned_loss=0.05217, over 953950.86 frames. ], batch size: 37, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:26:57,594 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.290e+01 1.586e+02 1.789e+02 2.374e+02 3.586e+02, threshold=3.578e+02, percent-clipped=0.0 2023-03-27 04:27:26,932 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0 2023-03-27 04:27:36,349 INFO [finetune.py:976] (2/7) Epoch 23, batch 5000, loss[loss=0.1992, simple_loss=0.2609, pruned_loss=0.06869, over 4281.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.2458, pruned_loss=0.05138, over 955143.91 frames. 
], batch size: 65, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:27:57,573 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=131040.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:28:09,926 INFO [finetune.py:976] (2/7) Epoch 23, batch 5050, loss[loss=0.1298, simple_loss=0.2089, pruned_loss=0.02538, over 4751.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2423, pruned_loss=0.0502, over 952517.88 frames. ], batch size: 27, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:28:12,371 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.026e+02 1.381e+02 1.770e+02 2.059e+02 4.416e+02, threshold=3.539e+02, percent-clipped=4.0 2023-03-27 04:28:41,480 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=131088.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:28:41,509 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=131088.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:28:45,130 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=131094.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:28:57,922 INFO [finetune.py:976] (2/7) Epoch 23, batch 5100, loss[loss=0.1438, simple_loss=0.2179, pruned_loss=0.03486, over 4904.00 frames. ], tot_loss[loss=0.1692, simple_loss=0.2398, pruned_loss=0.04925, over 954219.33 frames. ], batch size: 37, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:29:17,263 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=131136.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:29:20,883 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=131142.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:29:31,085 INFO [finetune.py:976] (2/7) Epoch 23, batch 5150, loss[loss=0.1319, simple_loss=0.2056, pruned_loss=0.02908, over 4733.00 frames. ], tot_loss[loss=0.1692, simple_loss=0.2394, pruned_loss=0.04949, over 953445.57 frames. 
], batch size: 23, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:29:34,466 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.214e+01 1.572e+02 1.903e+02 2.241e+02 4.010e+02, threshold=3.805e+02, percent-clipped=1.0 2023-03-27 04:29:41,807 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=131174.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:29:46,548 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5387, 1.4881, 2.1777, 1.8073, 1.7902, 4.0557, 1.3753, 1.6741], device='cuda:2'), covar=tensor([0.0959, 0.1695, 0.1261, 0.0906, 0.1548, 0.0211, 0.1459, 0.1792], device='cuda:2'), in_proj_covar=tensor([0.0074, 0.0081, 0.0073, 0.0076, 0.0091, 0.0081, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 04:29:57,493 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7895, 1.1815, 0.7850, 1.6008, 2.1484, 1.4517, 1.5976, 1.5841], device='cuda:2'), covar=tensor([0.1437, 0.2198, 0.1991, 0.1217, 0.1918, 0.1922, 0.1473, 0.2044], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0110, 0.0092, 0.0119, 0.0094, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 04:30:01,320 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=131204.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 04:30:04,269 INFO [finetune.py:976] (2/7) Epoch 23, batch 5200, loss[loss=0.1973, simple_loss=0.2844, pruned_loss=0.0551, over 4909.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2434, pruned_loss=0.05056, over 952756.62 frames. ], batch size: 36, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:30:10,891 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-27 04:30:12,619 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=131221.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:30:23,014 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=131235.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 04:30:33,244 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=131252.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:30:37,393 INFO [finetune.py:976] (2/7) Epoch 23, batch 5250, loss[loss=0.1527, simple_loss=0.226, pruned_loss=0.03976, over 4787.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2465, pruned_loss=0.05112, over 953736.97 frames. ], batch size: 51, lr: 3.08e-03, grad_scale: 16.0 2023-03-27 04:30:40,886 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.130e+02 1.531e+02 1.792e+02 2.239e+02 3.281e+02, threshold=3.585e+02, percent-clipped=0.0 2023-03-27 04:30:44,380 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=131269.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:31:21,562 INFO [finetune.py:976] (2/7) Epoch 23, batch 5300, loss[loss=0.1739, simple_loss=0.2516, pruned_loss=0.04807, over 4770.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2468, pruned_loss=0.05111, over 954291.44 frames. 
], batch size: 28, lr: 3.08e-03, grad_scale: 16.0 2023-03-27 04:31:49,599 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1861, 2.1134, 1.7554, 0.7855, 1.8817, 1.8208, 1.7057, 1.9931], device='cuda:2'), covar=tensor([0.0916, 0.0735, 0.1435, 0.1940, 0.1216, 0.1890, 0.1971, 0.0871], device='cuda:2'), in_proj_covar=tensor([0.0171, 0.0192, 0.0201, 0.0182, 0.0210, 0.0210, 0.0224, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 04:31:54,360 INFO [finetune.py:976] (2/7) Epoch 23, batch 5350, loss[loss=0.1302, simple_loss=0.1928, pruned_loss=0.03374, over 4749.00 frames. ], tot_loss[loss=0.1741, simple_loss=0.2464, pruned_loss=0.05088, over 953360.43 frames. ], batch size: 23, lr: 3.08e-03, grad_scale: 16.0 2023-03-27 04:31:57,392 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.087e+02 1.529e+02 1.830e+02 2.196e+02 3.219e+02, threshold=3.659e+02, percent-clipped=0.0 2023-03-27 04:31:57,517 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6144, 2.1726, 3.0151, 1.8217, 2.4874, 2.7872, 1.8694, 2.8660], device='cuda:2'), covar=tensor([0.1157, 0.2111, 0.1044, 0.2027, 0.0938, 0.1489, 0.2796, 0.0869], device='cuda:2'), in_proj_covar=tensor([0.0190, 0.0205, 0.0191, 0.0190, 0.0172, 0.0213, 0.0215, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 04:32:32,846 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.65 vs. limit=2.0 2023-03-27 04:32:38,097 INFO [finetune.py:976] (2/7) Epoch 23, batch 5400, loss[loss=0.2134, simple_loss=0.2787, pruned_loss=0.07405, over 4897.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2439, pruned_loss=0.04993, over 955057.47 frames. ], batch size: 43, lr: 3.08e-03, grad_scale: 16.0 2023-03-27 04:32:38,217 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=131409.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:32:42,422 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=131416.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:33:11,756 INFO [finetune.py:976] (2/7) Epoch 23, batch 5450, loss[loss=0.1775, simple_loss=0.2442, pruned_loss=0.05545, over 4838.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2419, pruned_loss=0.04964, over 955448.62 frames. ], batch size: 47, lr: 3.08e-03, grad_scale: 16.0 2023-03-27 04:33:14,785 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.094e+02 1.514e+02 1.875e+02 2.409e+02 5.439e+02, threshold=3.749e+02, percent-clipped=4.0 2023-03-27 04:33:18,555 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=131470.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:33:23,309 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=131477.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:33:33,341 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=131492.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:33:51,830 INFO [finetune.py:976] (2/7) Epoch 23, batch 5500, loss[loss=0.1533, simple_loss=0.2063, pruned_loss=0.0501, over 4458.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2396, pruned_loss=0.04878, over 956560.89 frames. 
], batch size: 19, lr: 3.08e-03, grad_scale: 16.0 2023-03-27 04:33:58,481 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4359, 1.4686, 2.2153, 1.6503, 1.6923, 3.9412, 1.3680, 1.6286], device='cuda:2'), covar=tensor([0.0873, 0.1775, 0.1112, 0.0977, 0.1594, 0.0216, 0.1518, 0.1818], device='cuda:2'), in_proj_covar=tensor([0.0073, 0.0081, 0.0073, 0.0076, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 04:34:12,693 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=131530.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 04:34:29,165 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=131553.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:34:33,175 INFO [finetune.py:976] (2/7) Epoch 23, batch 5550, loss[loss=0.193, simple_loss=0.2709, pruned_loss=0.05753, over 4899.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.241, pruned_loss=0.04967, over 956335.51 frames. ], batch size: 32, lr: 3.08e-03, grad_scale: 16.0 2023-03-27 04:34:36,710 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.823e+01 1.422e+02 1.728e+02 2.186e+02 5.215e+02, threshold=3.457e+02, percent-clipped=2.0 2023-03-27 04:34:39,875 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6859, 0.7141, 1.7527, 1.6944, 1.5729, 1.5597, 1.5956, 1.7009], device='cuda:2'), covar=tensor([0.3437, 0.3748, 0.2985, 0.3230, 0.4145, 0.3377, 0.3901, 0.2805], device='cuda:2'), in_proj_covar=tensor([0.0259, 0.0245, 0.0265, 0.0288, 0.0288, 0.0264, 0.0296, 0.0249], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 04:35:04,718 INFO [finetune.py:976] (2/7) Epoch 23, batch 5600, loss[loss=0.1836, simple_loss=0.2581, pruned_loss=0.05457, over 4912.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2439, pruned_loss=0.05046, over 955883.61 frames. ], batch size: 43, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:35:30,063 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.1543, 2.6588, 3.2741, 2.3130, 2.8487, 3.5262, 2.6278, 3.2624], device='cuda:2'), covar=tensor([0.0819, 0.1557, 0.1235, 0.1634, 0.0849, 0.0907, 0.1965, 0.0725], device='cuda:2'), in_proj_covar=tensor([0.0190, 0.0205, 0.0191, 0.0189, 0.0172, 0.0213, 0.0215, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 04:35:31,201 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3565, 1.3836, 1.5845, 1.5217, 1.5889, 2.9321, 1.4296, 1.4991], device='cuda:2'), covar=tensor([0.0991, 0.1857, 0.1085, 0.1010, 0.1705, 0.0293, 0.1486, 0.1879], device='cuda:2'), in_proj_covar=tensor([0.0073, 0.0081, 0.0072, 0.0076, 0.0091, 0.0080, 0.0084, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 04:35:34,623 INFO [finetune.py:976] (2/7) Epoch 23, batch 5650, loss[loss=0.1585, simple_loss=0.2377, pruned_loss=0.03968, over 4875.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2468, pruned_loss=0.0512, over 952403.47 frames. 
], batch size: 34, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:35:37,860 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.955e+01 1.494e+02 1.801e+02 2.339e+02 4.576e+02, threshold=3.601e+02, percent-clipped=4.0 2023-03-27 04:36:04,503 INFO [finetune.py:976] (2/7) Epoch 23, batch 5700, loss[loss=0.1426, simple_loss=0.2041, pruned_loss=0.04052, over 3750.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2438, pruned_loss=0.05099, over 935961.45 frames. ], batch size: 16, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:36:11,624 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1516, 1.4990, 1.0493, 1.8194, 2.4314, 1.7663, 1.7702, 1.9705], device='cuda:2'), covar=tensor([0.1234, 0.1980, 0.1855, 0.1158, 0.1664, 0.1680, 0.1480, 0.1753], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0110, 0.0092, 0.0119, 0.0093, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 04:36:16,894 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5690, 3.1835, 3.0569, 1.6527, 3.2512, 2.5760, 0.9981, 2.2457], device='cuda:2'), covar=tensor([0.1773, 0.1888, 0.1424, 0.3220, 0.0987, 0.0999, 0.3907, 0.1625], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0178, 0.0161, 0.0129, 0.0160, 0.0123, 0.0148, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 04:36:40,048 INFO [finetune.py:976] (2/7) Epoch 24, batch 0, loss[loss=0.1851, simple_loss=0.2588, pruned_loss=0.05568, over 4919.00 frames. ], tot_loss[loss=0.1851, simple_loss=0.2588, pruned_loss=0.05568, over 4919.00 frames. ], batch size: 42, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:36:40,048 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-27 04:36:43,054 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1725, 2.0187, 1.9570, 1.8933, 1.9473, 2.1020, 2.0339, 2.6244], device='cuda:2'), covar=tensor([0.3784, 0.4216, 0.3297, 0.3545, 0.3806, 0.2389, 0.3319, 0.1901], device='cuda:2'), in_proj_covar=tensor([0.0290, 0.0263, 0.0234, 0.0275, 0.0256, 0.0226, 0.0253, 0.0235], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 04:36:49,940 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1687, 2.0650, 1.9968, 1.9091, 1.9812, 2.0940, 2.0372, 2.6347], device='cuda:2'), covar=tensor([0.3217, 0.3943, 0.3061, 0.3359, 0.3683, 0.2218, 0.3320, 0.1736], device='cuda:2'), in_proj_covar=tensor([0.0290, 0.0263, 0.0234, 0.0275, 0.0256, 0.0226, 0.0253, 0.0235], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 04:36:50,761 INFO [finetune.py:1010] (2/7) Epoch 24, validation: loss=0.1594, simple_loss=0.227, pruned_loss=0.04592, over 2265189.00 frames. 
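The "Computing validation loss ... validation: loss=0.1594, ..., over 2265189.00 frames." pair above reports a dev-set average in which every batch is weighted by its frame count. A minimal sketch of that aggregation, using a plain dict in place of whatever tracker finetune.py actually uses (the layout and function name are illustrative assumptions, not the icefall source):

    def aggregate_validation(batch_stats):
        # batch_stats: iterable of per-batch dicts, e.g.
        #   {"loss": 0.16, "simple_loss": 0.23, "pruned_loss": 0.05, "frames": 4919.0}
        tot = {"loss": 0.0, "simple_loss": 0.0, "pruned_loss": 0.0}
        tot_frames = 0.0
        for s in batch_stats:
            f = s["frames"]
            tot_frames += f
            for k in tot:
                tot[k] += s[k] * f        # weight each batch by its frame count
        return {k: v / tot_frames for k, v in tot.items()}, tot_frames

Under that weighting, loss=0.1594 over 2265189.00 frames is a mean across the entire dev set rather than a last-batch value, which is why it moves far more smoothly than the per-batch loss[...] figures in the surrounding records.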
2023-03-27 04:36:50,762 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB 2023-03-27 04:36:54,265 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5847, 1.4282, 1.9165, 1.7856, 1.6210, 3.4383, 1.3873, 1.5259], device='cuda:2'), covar=tensor([0.0923, 0.1821, 0.1147, 0.0942, 0.1638, 0.0219, 0.1472, 0.1872], device='cuda:2'), in_proj_covar=tensor([0.0073, 0.0081, 0.0072, 0.0076, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 04:37:07,462 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 7.206e+01 1.398e+02 1.674e+02 2.004e+02 3.219e+02, threshold=3.348e+02, percent-clipped=0.0 2023-03-27 04:37:08,155 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=131765.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:37:12,920 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=131772.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:37:25,290 INFO [finetune.py:976] (2/7) Epoch 24, batch 50, loss[loss=0.1647, simple_loss=0.2332, pruned_loss=0.04811, over 4813.00 frames. ], tot_loss[loss=0.1758, simple_loss=0.2473, pruned_loss=0.05218, over 214968.07 frames. ], batch size: 38, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:38:02,585 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=131830.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:38:07,275 INFO [finetune.py:976] (2/7) Epoch 24, batch 100, loss[loss=0.177, simple_loss=0.2489, pruned_loss=0.05257, over 4907.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2412, pruned_loss=0.04898, over 380694.57 frames. ], batch size: 36, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:38:15,489 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=131848.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:38:17,980 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0534, 1.7075, 2.0534, 1.5289, 1.9394, 2.0951, 2.1135, 1.3534], device='cuda:2'), covar=tensor([0.0641, 0.1083, 0.0708, 0.0935, 0.0871, 0.0743, 0.0715, 0.1907], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0135, 0.0138, 0.0119, 0.0125, 0.0137, 0.0137, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 04:38:20,991 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=131857.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:38:25,097 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.662e+01 1.465e+02 1.761e+02 2.142e+02 3.724e+02, threshold=3.523e+02, percent-clipped=1.0 2023-03-27 04:38:34,589 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=131878.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:38:40,500 INFO [finetune.py:976] (2/7) Epoch 24, batch 150, loss[loss=0.1863, simple_loss=0.2596, pruned_loss=0.05644, over 4900.00 frames. ], tot_loss[loss=0.1667, simple_loss=0.2378, pruned_loss=0.04786, over 509128.45 frames. ], batch size: 37, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:39:10,824 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=131918.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:39:29,542 INFO [finetune.py:976] (2/7) Epoch 24, batch 200, loss[loss=0.1924, simple_loss=0.2671, pruned_loss=0.0588, over 4803.00 frames. 
], tot_loss[loss=0.1672, simple_loss=0.2374, pruned_loss=0.04852, over 608078.10 frames. ], batch size: 51, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:39:51,183 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.027e+02 1.517e+02 1.799e+02 2.123e+02 6.232e+02, threshold=3.598e+02, percent-clipped=3.0 2023-03-27 04:40:06,650 INFO [finetune.py:976] (2/7) Epoch 24, batch 250, loss[loss=0.2351, simple_loss=0.2963, pruned_loss=0.08695, over 4807.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2426, pruned_loss=0.05129, over 685072.81 frames. ], batch size: 51, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:40:36,968 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=132031.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:40:41,433 INFO [finetune.py:976] (2/7) Epoch 24, batch 300, loss[loss=0.1668, simple_loss=0.2488, pruned_loss=0.04235, over 4827.00 frames. ], tot_loss[loss=0.1741, simple_loss=0.2454, pruned_loss=0.05139, over 746780.69 frames. ], batch size: 39, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:40:52,201 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.51 vs. limit=5.0 2023-03-27 04:40:53,241 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=132054.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:40:59,162 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.071e+02 1.635e+02 1.887e+02 2.261e+02 6.512e+02, threshold=3.774e+02, percent-clipped=2.0 2023-03-27 04:40:59,870 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=132065.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:41:04,178 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=132072.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:41:08,896 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6616, 1.5103, 1.5344, 1.5574, 0.9718, 2.9361, 1.1529, 1.5673], device='cuda:2'), covar=tensor([0.3197, 0.2567, 0.2122, 0.2389, 0.1900, 0.0253, 0.2561, 0.1274], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0123, 0.0113, 0.0096, 0.0095, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 04:41:14,119 INFO [finetune.py:976] (2/7) Epoch 24, batch 350, loss[loss=0.1658, simple_loss=0.251, pruned_loss=0.04035, over 4911.00 frames. ], tot_loss[loss=0.1758, simple_loss=0.2479, pruned_loss=0.05186, over 793890.72 frames. 
], batch size: 38, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:41:17,750 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=132092.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:41:33,451 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=132113.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:41:34,745 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=132115.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 04:41:42,166 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=132120.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:41:42,206 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2349, 1.4504, 0.9388, 2.0082, 2.6152, 1.8807, 1.8942, 2.0210], device='cuda:2'), covar=tensor([0.1408, 0.2051, 0.1850, 0.1221, 0.1664, 0.1785, 0.1389, 0.1935], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0092, 0.0119, 0.0094, 0.0098, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 04:41:56,059 INFO [finetune.py:976] (2/7) Epoch 24, batch 400, loss[loss=0.1597, simple_loss=0.2355, pruned_loss=0.04197, over 4927.00 frames. ], tot_loss[loss=0.1773, simple_loss=0.2498, pruned_loss=0.05242, over 830434.00 frames. ], batch size: 33, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:42:03,879 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=132148.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:42:15,409 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.028e+02 1.594e+02 1.883e+02 2.349e+02 4.739e+02, threshold=3.766e+02, percent-clipped=1.0 2023-03-27 04:42:29,848 INFO [finetune.py:976] (2/7) Epoch 24, batch 450, loss[loss=0.1356, simple_loss=0.2051, pruned_loss=0.03306, over 4293.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.2486, pruned_loss=0.05235, over 858361.88 frames. ], batch size: 18, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:42:33,467 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1441, 1.9359, 1.4962, 0.5958, 1.6793, 1.7699, 1.6103, 1.8523], device='cuda:2'), covar=tensor([0.0990, 0.0816, 0.1634, 0.1989, 0.1311, 0.2490, 0.2353, 0.0865], device='cuda:2'), in_proj_covar=tensor([0.0172, 0.0193, 0.0201, 0.0182, 0.0211, 0.0211, 0.0225, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 04:42:36,329 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=132196.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:42:55,015 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=132213.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:43:02,536 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.5825, 1.4354, 1.3742, 0.6858, 1.6123, 1.6187, 1.7074, 1.3247], device='cuda:2'), covar=tensor([0.0729, 0.0536, 0.0637, 0.0496, 0.0444, 0.0599, 0.0310, 0.0637], device='cuda:2'), in_proj_covar=tensor([0.0120, 0.0146, 0.0124, 0.0120, 0.0129, 0.0128, 0.0138, 0.0147], device='cuda:2'), out_proj_covar=tensor([8.7761e-05, 1.0549e-04, 8.8653e-05, 8.4635e-05, 9.0829e-05, 9.0907e-05, 9.8689e-05, 1.0488e-04], device='cuda:2') 2023-03-27 04:43:13,242 INFO [finetune.py:976] (2/7) Epoch 24, batch 500, loss[loss=0.201, simple_loss=0.2673, pruned_loss=0.06738, over 4907.00 frames. 
], tot_loss[loss=0.1734, simple_loss=0.2453, pruned_loss=0.05078, over 881268.38 frames. ], batch size: 43, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:43:32,460 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 7.902e+01 1.519e+02 1.809e+02 2.135e+02 3.897e+02, threshold=3.617e+02, percent-clipped=1.0 2023-03-27 04:43:36,955 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2789, 2.2361, 1.8748, 2.2619, 2.0925, 2.1598, 2.1107, 2.9569], device='cuda:2'), covar=tensor([0.3319, 0.4689, 0.3328, 0.4187, 0.4250, 0.2454, 0.4306, 0.1500], device='cuda:2'), in_proj_covar=tensor([0.0288, 0.0262, 0.0232, 0.0274, 0.0256, 0.0225, 0.0252, 0.0235], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 04:43:46,922 INFO [finetune.py:976] (2/7) Epoch 24, batch 550, loss[loss=0.1539, simple_loss=0.2292, pruned_loss=0.03932, over 4816.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.2416, pruned_loss=0.04952, over 899914.47 frames. ], batch size: 39, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:44:26,902 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-27 04:44:30,204 INFO [finetune.py:976] (2/7) Epoch 24, batch 600, loss[loss=0.2057, simple_loss=0.287, pruned_loss=0.0622, over 4807.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2439, pruned_loss=0.05088, over 912960.59 frames. ], batch size: 51, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:44:58,205 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.737e+01 1.576e+02 1.859e+02 2.330e+02 3.343e+02, threshold=3.718e+02, percent-clipped=0.0 2023-03-27 04:45:12,585 INFO [finetune.py:976] (2/7) Epoch 24, batch 650, loss[loss=0.1755, simple_loss=0.2474, pruned_loss=0.05182, over 4940.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2458, pruned_loss=0.0516, over 922729.04 frames. ], batch size: 38, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:45:12,658 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=132387.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:45:28,112 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=132410.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 04:45:43,338 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-27 04:45:46,140 INFO [finetune.py:976] (2/7) Epoch 24, batch 700, loss[loss=0.2148, simple_loss=0.2831, pruned_loss=0.07319, over 4808.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2463, pruned_loss=0.05126, over 929928.50 frames. ], batch size: 45, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:46:03,853 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.085e+02 1.592e+02 1.911e+02 2.262e+02 4.191e+02, threshold=3.822e+02, percent-clipped=2.0 2023-03-27 04:46:07,521 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5365, 1.3690, 1.2186, 1.5552, 1.6207, 1.6000, 1.0335, 1.2797], device='cuda:2'), covar=tensor([0.2081, 0.2020, 0.1973, 0.1647, 0.1466, 0.1201, 0.2291, 0.1915], device='cuda:2'), in_proj_covar=tensor([0.0243, 0.0208, 0.0213, 0.0195, 0.0242, 0.0188, 0.0214, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 04:46:19,326 INFO [finetune.py:976] (2/7) Epoch 24, batch 750, loss[loss=0.1924, simple_loss=0.2638, pruned_loss=0.06052, over 4853.00 frames. 
], tot_loss[loss=0.1753, simple_loss=0.2471, pruned_loss=0.05172, over 933785.12 frames. ], batch size: 31, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:46:20,836 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.39 vs. limit=5.0 2023-03-27 04:46:36,493 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=132513.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:46:58,682 INFO [finetune.py:976] (2/7) Epoch 24, batch 800, loss[loss=0.1632, simple_loss=0.2218, pruned_loss=0.05228, over 4927.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2472, pruned_loss=0.05133, over 938505.26 frames. ], batch size: 33, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:47:10,467 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8485, 1.8061, 2.4285, 2.1216, 2.0879, 4.6207, 1.8556, 2.0575], device='cuda:2'), covar=tensor([0.0951, 0.1848, 0.1141, 0.0959, 0.1558, 0.0191, 0.1462, 0.1788], device='cuda:2'), in_proj_covar=tensor([0.0074, 0.0081, 0.0073, 0.0076, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 04:47:17,154 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=132561.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:47:19,959 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.775e+01 1.480e+02 1.729e+02 2.075e+02 4.531e+02, threshold=3.459e+02, percent-clipped=1.0 2023-03-27 04:47:31,181 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5613, 1.4254, 2.0621, 3.0987, 2.0770, 2.3059, 1.0023, 2.6111], device='cuda:2'), covar=tensor([0.1656, 0.1285, 0.1108, 0.0491, 0.0772, 0.1315, 0.1665, 0.0413], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0131, 0.0161, 0.0099, 0.0135, 0.0123, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 04:47:35,955 INFO [finetune.py:976] (2/7) Epoch 24, batch 850, loss[loss=0.1593, simple_loss=0.2295, pruned_loss=0.04458, over 4726.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2443, pruned_loss=0.05041, over 942559.65 frames. ], batch size: 59, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:47:54,073 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7641, 2.4609, 2.1052, 0.9543, 2.3252, 2.1645, 2.0508, 2.4287], device='cuda:2'), covar=tensor([0.0683, 0.0900, 0.1499, 0.2047, 0.1348, 0.2215, 0.1907, 0.0859], device='cuda:2'), in_proj_covar=tensor([0.0171, 0.0192, 0.0200, 0.0180, 0.0209, 0.0209, 0.0224, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 04:48:18,613 INFO [finetune.py:976] (2/7) Epoch 24, batch 900, loss[loss=0.1701, simple_loss=0.2424, pruned_loss=0.04888, over 4807.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2418, pruned_loss=0.04958, over 945433.22 frames. 
], batch size: 45, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:48:32,508 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3276, 2.2794, 1.9152, 0.9604, 2.0229, 1.9054, 1.7317, 2.1156], device='cuda:2'), covar=tensor([0.1007, 0.0783, 0.1419, 0.1862, 0.1295, 0.1974, 0.2068, 0.0987], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0191, 0.0200, 0.0180, 0.0209, 0.0208, 0.0224, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 04:48:33,113 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=132660.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:48:35,407 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.013e+02 1.417e+02 1.718e+02 2.002e+02 3.598e+02, threshold=3.436e+02, percent-clipped=1.0 2023-03-27 04:48:52,530 INFO [finetune.py:976] (2/7) Epoch 24, batch 950, loss[loss=0.2048, simple_loss=0.2798, pruned_loss=0.06488, over 4911.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.241, pruned_loss=0.04958, over 949301.41 frames. ], batch size: 43, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:48:52,613 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=132687.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:49:07,081 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=132710.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 04:49:13,769 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. limit=2.0 2023-03-27 04:49:14,314 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=132721.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:49:15,512 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1535, 1.2141, 1.4961, 1.0360, 1.2310, 1.4491, 1.1816, 1.5839], device='cuda:2'), covar=tensor([0.1391, 0.2539, 0.1314, 0.1682, 0.1021, 0.1376, 0.3299, 0.0910], device='cuda:2'), in_proj_covar=tensor([0.0191, 0.0207, 0.0191, 0.0191, 0.0173, 0.0214, 0.0216, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 04:49:26,440 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=132735.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:49:28,061 INFO [finetune.py:976] (2/7) Epoch 24, batch 1000, loss[loss=0.1799, simple_loss=0.2512, pruned_loss=0.05432, over 4919.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2438, pruned_loss=0.05045, over 951714.98 frames. ], batch size: 38, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:49:38,671 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=132746.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:49:50,801 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=132758.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:49:54,880 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.132e+02 1.613e+02 1.803e+02 2.355e+02 4.590e+02, threshold=3.605e+02, percent-clipped=3.0 2023-03-27 04:50:17,564 INFO [finetune.py:976] (2/7) Epoch 24, batch 1050, loss[loss=0.1497, simple_loss=0.2229, pruned_loss=0.03821, over 4784.00 frames. ], tot_loss[loss=0.1738, simple_loss=0.246, pruned_loss=0.05079, over 952353.12 frames. 
], batch size: 26, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:50:31,460 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=132807.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:50:35,098 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0784, 1.9314, 1.6784, 1.7493, 1.7845, 1.7314, 1.8593, 2.5114], device='cuda:2'), covar=tensor([0.3817, 0.3900, 0.3297, 0.3594, 0.3949, 0.2442, 0.3678, 0.1761], device='cuda:2'), in_proj_covar=tensor([0.0292, 0.0264, 0.0236, 0.0278, 0.0259, 0.0228, 0.0255, 0.0238], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 04:50:51,431 INFO [finetune.py:976] (2/7) Epoch 24, batch 1100, loss[loss=0.192, simple_loss=0.2618, pruned_loss=0.06113, over 4918.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.2483, pruned_loss=0.0513, over 954384.20 frames. ], batch size: 33, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:51:08,747 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.005e+02 1.623e+02 1.902e+02 2.304e+02 4.937e+02, threshold=3.804e+02, percent-clipped=2.0 2023-03-27 04:51:24,185 INFO [finetune.py:976] (2/7) Epoch 24, batch 1150, loss[loss=0.1717, simple_loss=0.2497, pruned_loss=0.04683, over 4828.00 frames. ], tot_loss[loss=0.1777, simple_loss=0.2502, pruned_loss=0.05258, over 954507.04 frames. ], batch size: 47, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:51:57,324 INFO [finetune.py:976] (2/7) Epoch 24, batch 1200, loss[loss=0.1245, simple_loss=0.2042, pruned_loss=0.02239, over 4870.00 frames. ], tot_loss[loss=0.1764, simple_loss=0.2488, pruned_loss=0.05197, over 955586.87 frames. ], batch size: 31, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:52:24,716 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.055e+01 1.470e+02 1.716e+02 2.148e+02 3.548e+02, threshold=3.432e+02, percent-clipped=0.0 2023-03-27 04:52:36,782 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6549, 3.7218, 3.6197, 1.9768, 3.7840, 2.9794, 0.8303, 2.5866], device='cuda:2'), covar=tensor([0.2362, 0.1750, 0.1385, 0.2797, 0.0943, 0.0952, 0.4215, 0.1434], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0179, 0.0162, 0.0129, 0.0161, 0.0124, 0.0149, 0.0125], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 04:52:40,280 INFO [finetune.py:976] (2/7) Epoch 24, batch 1250, loss[loss=0.1899, simple_loss=0.2492, pruned_loss=0.06525, over 4916.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2458, pruned_loss=0.05151, over 953823.16 frames. ], batch size: 37, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:52:46,410 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.59 vs. limit=5.0 2023-03-27 04:52:59,659 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=133016.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:53:15,460 INFO [finetune.py:976] (2/7) Epoch 24, batch 1300, loss[loss=0.1612, simple_loss=0.2272, pruned_loss=0.04756, over 4757.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2423, pruned_loss=0.05027, over 952228.18 frames. 
], batch size: 27, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:53:42,213 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.345e+01 1.496e+02 1.852e+02 2.141e+02 4.041e+02, threshold=3.705e+02, percent-clipped=2.0 2023-03-27 04:53:57,207 INFO [finetune.py:976] (2/7) Epoch 24, batch 1350, loss[loss=0.167, simple_loss=0.2412, pruned_loss=0.04638, over 4821.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2429, pruned_loss=0.05111, over 953124.06 frames. ], batch size: 51, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:54:07,957 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=133102.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:54:31,058 INFO [finetune.py:976] (2/7) Epoch 24, batch 1400, loss[loss=0.163, simple_loss=0.2506, pruned_loss=0.03766, over 4845.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2449, pruned_loss=0.05151, over 954817.54 frames. ], batch size: 44, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:54:59,477 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.355e+01 1.624e+02 1.914e+02 2.315e+02 3.947e+02, threshold=3.828e+02, percent-clipped=1.0 2023-03-27 04:55:19,623 INFO [finetune.py:976] (2/7) Epoch 24, batch 1450, loss[loss=0.1405, simple_loss=0.2268, pruned_loss=0.02709, over 4801.00 frames. ], tot_loss[loss=0.1758, simple_loss=0.247, pruned_loss=0.05228, over 955400.58 frames. ], batch size: 41, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:55:20,978 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4829, 1.4083, 2.1440, 1.7200, 1.7885, 4.0333, 1.4658, 1.6420], device='cuda:2'), covar=tensor([0.0964, 0.1715, 0.1303, 0.1002, 0.1515, 0.0197, 0.1489, 0.1774], device='cuda:2'), in_proj_covar=tensor([0.0074, 0.0082, 0.0073, 0.0076, 0.0092, 0.0081, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 04:55:56,690 INFO [finetune.py:976] (2/7) Epoch 24, batch 1500, loss[loss=0.1902, simple_loss=0.252, pruned_loss=0.06421, over 4866.00 frames. ], tot_loss[loss=0.177, simple_loss=0.2485, pruned_loss=0.05273, over 956165.17 frames. ], batch size: 31, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:56:15,024 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.176e+02 1.579e+02 1.864e+02 2.355e+02 5.095e+02, threshold=3.727e+02, percent-clipped=2.0 2023-03-27 04:56:30,467 INFO [finetune.py:976] (2/7) Epoch 24, batch 1550, loss[loss=0.1627, simple_loss=0.2372, pruned_loss=0.04403, over 4842.00 frames. ], tot_loss[loss=0.1765, simple_loss=0.2487, pruned_loss=0.05211, over 957675.61 frames. ], batch size: 44, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 04:56:50,670 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=133316.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:57:04,265 INFO [finetune.py:976] (2/7) Epoch 24, batch 1600, loss[loss=0.1687, simple_loss=0.2308, pruned_loss=0.05332, over 4733.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2458, pruned_loss=0.05158, over 957568.16 frames. 
], batch size: 54, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 04:57:08,025 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1000, 2.0541, 2.2629, 1.4993, 2.1019, 2.2874, 2.2913, 1.8752], device='cuda:2'), covar=tensor([0.0629, 0.0763, 0.0699, 0.0916, 0.0778, 0.0757, 0.0619, 0.1146], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0136, 0.0139, 0.0119, 0.0127, 0.0138, 0.0138, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 04:57:12,173 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7872, 4.0795, 3.9444, 2.3603, 4.1842, 3.3296, 1.2653, 3.0292], device='cuda:2'), covar=tensor([0.2267, 0.1651, 0.1462, 0.2686, 0.0948, 0.0878, 0.3891, 0.1347], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0179, 0.0162, 0.0130, 0.0162, 0.0124, 0.0149, 0.0125], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 04:57:28,469 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.062e+02 1.451e+02 1.796e+02 2.043e+02 3.402e+02, threshold=3.593e+02, percent-clipped=0.0 2023-03-27 04:57:28,563 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=133364.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:57:38,825 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=133374.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 04:57:40,048 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=133376.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:57:46,648 INFO [finetune.py:976] (2/7) Epoch 24, batch 1650, loss[loss=0.1695, simple_loss=0.2382, pruned_loss=0.05036, over 4770.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2433, pruned_loss=0.05058, over 957866.60 frames. ], batch size: 28, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 04:57:51,581 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-27 04:57:56,870 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=133402.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:58:18,798 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3224, 1.4188, 2.0329, 1.6559, 1.5639, 3.3965, 1.4368, 1.6033], device='cuda:2'), covar=tensor([0.1074, 0.1735, 0.1115, 0.0999, 0.1537, 0.0247, 0.1413, 0.1804], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0077, 0.0092, 0.0081, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 04:58:19,421 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=133435.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 04:58:20,500 INFO [finetune.py:976] (2/7) Epoch 24, batch 1700, loss[loss=0.2003, simple_loss=0.258, pruned_loss=0.07134, over 4907.00 frames. ], tot_loss[loss=0.1699, simple_loss=0.2409, pruned_loss=0.04952, over 958482.28 frames. 
], batch size: 43, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 04:58:20,617 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=133437.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:58:31,234 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=133450.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:58:48,615 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.499e+01 1.457e+02 1.770e+02 2.219e+02 3.253e+02, threshold=3.541e+02, percent-clipped=0.0 2023-03-27 04:58:55,873 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=133474.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:59:04,124 INFO [finetune.py:976] (2/7) Epoch 24, batch 1750, loss[loss=0.1899, simple_loss=0.2619, pruned_loss=0.05895, over 4894.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2418, pruned_loss=0.04985, over 958286.90 frames. ], batch size: 35, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 04:59:36,816 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=133535.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:59:37,911 INFO [finetune.py:976] (2/7) Epoch 24, batch 1800, loss[loss=0.164, simple_loss=0.2413, pruned_loss=0.04338, over 4829.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2443, pruned_loss=0.0506, over 958512.07 frames. ], batch size: 33, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 04:59:48,794 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4662, 1.4091, 1.3158, 1.4137, 1.1258, 3.4919, 1.4655, 1.8308], device='cuda:2'), covar=tensor([0.4289, 0.3217, 0.2726, 0.3140, 0.1995, 0.0246, 0.2592, 0.1268], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0116, 0.0122, 0.0123, 0.0114, 0.0097, 0.0095, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 04:59:57,744 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.575e+02 1.839e+02 2.282e+02 3.463e+02, threshold=3.677e+02, percent-clipped=0.0 2023-03-27 05:00:23,457 INFO [finetune.py:976] (2/7) Epoch 24, batch 1850, loss[loss=0.1638, simple_loss=0.2446, pruned_loss=0.04153, over 4871.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2456, pruned_loss=0.05083, over 958245.24 frames. ], batch size: 32, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 05:01:04,053 INFO [finetune.py:976] (2/7) Epoch 24, batch 1900, loss[loss=0.1748, simple_loss=0.2543, pruned_loss=0.04765, over 4804.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2466, pruned_loss=0.0514, over 954700.92 frames. ], batch size: 40, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 05:01:14,220 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=133652.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:01:21,802 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.050e+02 1.540e+02 1.881e+02 2.277e+02 3.366e+02, threshold=3.762e+02, percent-clipped=0.0 2023-03-27 05:01:37,658 INFO [finetune.py:976] (2/7) Epoch 24, batch 1950, loss[loss=0.1677, simple_loss=0.23, pruned_loss=0.05265, over 4782.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2448, pruned_loss=0.05019, over 955914.24 frames. ], batch size: 29, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 05:01:40,829 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.92 vs. 
limit=5.0 2023-03-27 05:01:55,029 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=133713.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:02:06,300 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=133730.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 05:02:07,531 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=133732.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:02:11,409 INFO [finetune.py:976] (2/7) Epoch 24, batch 2000, loss[loss=0.2119, simple_loss=0.2676, pruned_loss=0.07813, over 4748.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2435, pruned_loss=0.0504, over 957945.19 frames. ], batch size: 54, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 05:02:28,712 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.381e+01 1.373e+02 1.735e+02 2.243e+02 3.912e+02, threshold=3.469e+02, percent-clipped=2.0 2023-03-27 05:02:35,626 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6739, 1.6134, 1.9731, 1.2149, 1.7052, 1.8648, 1.5510, 2.1319], device='cuda:2'), covar=tensor([0.1153, 0.1853, 0.1283, 0.1775, 0.0929, 0.1467, 0.2683, 0.0765], device='cuda:2'), in_proj_covar=tensor([0.0194, 0.0210, 0.0194, 0.0193, 0.0175, 0.0216, 0.0219, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:02:54,161 INFO [finetune.py:976] (2/7) Epoch 24, batch 2050, loss[loss=0.1888, simple_loss=0.2511, pruned_loss=0.06323, over 4919.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2403, pruned_loss=0.04944, over 959944.89 frames. ], batch size: 37, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 05:03:23,220 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=133830.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:03:27,929 INFO [finetune.py:976] (2/7) Epoch 24, batch 2100, loss[loss=0.1701, simple_loss=0.2576, pruned_loss=0.04133, over 4803.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.24, pruned_loss=0.04927, over 958835.29 frames. ], batch size: 45, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 05:03:47,589 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.827e+01 1.541e+02 1.860e+02 2.293e+02 6.118e+02, threshold=3.720e+02, percent-clipped=3.0 2023-03-27 05:04:00,389 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4299, 2.2312, 2.0349, 2.3627, 2.1853, 2.2445, 2.2050, 2.9442], device='cuda:2'), covar=tensor([0.3531, 0.4324, 0.3243, 0.3435, 0.3542, 0.2342, 0.3703, 0.1669], device='cuda:2'), in_proj_covar=tensor([0.0292, 0.0264, 0.0235, 0.0277, 0.0258, 0.0228, 0.0255, 0.0237], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:04:11,228 INFO [finetune.py:976] (2/7) Epoch 24, batch 2150, loss[loss=0.1917, simple_loss=0.2663, pruned_loss=0.0586, over 4935.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.243, pruned_loss=0.05018, over 958882.08 frames. ], batch size: 38, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:04:25,084 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.75 vs. limit=2.0 2023-03-27 05:04:44,944 INFO [finetune.py:976] (2/7) Epoch 24, batch 2200, loss[loss=0.2253, simple_loss=0.277, pruned_loss=0.08673, over 4058.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2456, pruned_loss=0.0508, over 956415.77 frames. 
], batch size: 65, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:05:02,713 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.027e+02 1.464e+02 1.824e+02 2.239e+02 3.694e+02, threshold=3.648e+02, percent-clipped=0.0 2023-03-27 05:05:24,725 INFO [finetune.py:976] (2/7) Epoch 24, batch 2250, loss[loss=0.186, simple_loss=0.2567, pruned_loss=0.05768, over 4779.00 frames. ], tot_loss[loss=0.1755, simple_loss=0.2477, pruned_loss=0.05168, over 954878.72 frames. ], batch size: 29, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:05:24,799 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6631, 4.1802, 4.0102, 2.1432, 4.2683, 3.1566, 0.7374, 2.8935], device='cuda:2'), covar=tensor([0.2804, 0.1566, 0.1216, 0.3065, 0.0770, 0.0865, 0.4457, 0.1374], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0178, 0.0161, 0.0129, 0.0160, 0.0123, 0.0148, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 05:05:24,840 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7098, 1.7044, 1.6228, 1.6355, 1.4260, 4.0053, 1.7192, 2.0003], device='cuda:2'), covar=tensor([0.3204, 0.2465, 0.2025, 0.2369, 0.1633, 0.0128, 0.2504, 0.1222], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0123, 0.0113, 0.0097, 0.0095, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 05:05:35,432 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=133997.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:05:49,821 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=134008.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:06:07,537 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=134030.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 05:06:09,768 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=134032.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:06:12,679 INFO [finetune.py:976] (2/7) Epoch 24, batch 2300, loss[loss=0.2115, simple_loss=0.2764, pruned_loss=0.07329, over 4919.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2473, pruned_loss=0.05116, over 954889.76 frames. ], batch size: 33, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:06:27,053 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=134058.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:06:31,056 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.051e+02 1.533e+02 1.723e+02 2.089e+02 4.293e+02, threshold=3.445e+02, percent-clipped=2.0 2023-03-27 05:06:40,131 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=134078.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 05:06:41,359 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=134080.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:06:46,523 INFO [finetune.py:976] (2/7) Epoch 24, batch 2350, loss[loss=0.1279, simple_loss=0.2001, pruned_loss=0.0278, over 4744.00 frames. ], tot_loss[loss=0.1727, simple_loss=0.2448, pruned_loss=0.05033, over 955394.75 frames. 
2023-03-27 05:07:14,953 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=134130.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:07:19,176 INFO [finetune.py:976] (2/7) Epoch 24, batch 2400, loss[loss=0.1755, simple_loss=0.2461, pruned_loss=0.05248, over 4897.00 frames. ], tot_loss[loss=0.1721, simple_loss=0.2431, pruned_loss=0.05062, over 955871.49 frames. ], batch size: 32, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:07:38,330 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.553e+01 1.434e+02 1.789e+02 2.166e+02 3.942e+02, threshold=3.577e+02, percent-clipped=1.0 2023-03-27 05:07:47,956 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=134178.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:07:55,471 INFO [finetune.py:976] (2/7) Epoch 24, batch 2450, loss[loss=0.1855, simple_loss=0.2568, pruned_loss=0.05709, over 4859.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2413, pruned_loss=0.05032, over 955253.91 frames. ], batch size: 44, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:08:01,887 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.52 vs. limit=5.0 2023-03-27 05:08:30,428 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.3515, 4.6244, 4.9037, 5.0927, 5.0817, 4.8377, 5.4505, 1.5938], device='cuda:2'), covar=tensor([0.0750, 0.0827, 0.0846, 0.0851, 0.1229, 0.1659, 0.0516, 0.6148], device='cuda:2'), in_proj_covar=tensor([0.0347, 0.0247, 0.0280, 0.0293, 0.0335, 0.0286, 0.0307, 0.0300], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:08:36,910 INFO [finetune.py:976] (2/7) Epoch 24, batch 2500, loss[loss=0.1334, simple_loss=0.2132, pruned_loss=0.0268, over 4758.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2415, pruned_loss=0.05025, over 953650.86 frames. ], batch size: 27, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:08:55,715 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.396e+01 1.499e+02 1.865e+02 2.171e+02 5.575e+02, threshold=3.730e+02, percent-clipped=1.0 2023-03-27 05:09:20,384 INFO [finetune.py:976] (2/7) Epoch 24, batch 2550, loss[loss=0.1873, simple_loss=0.2793, pruned_loss=0.04766, over 4804.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2445, pruned_loss=0.05074, over 954422.32 frames.
], batch size: 41, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:09:32,217 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=134304.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:09:34,653 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=134308.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:09:35,286 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=134309.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:09:52,962 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5558, 1.4524, 2.1821, 3.1470, 2.0974, 2.2614, 1.1609, 2.6064], device='cuda:2'), covar=tensor([0.1797, 0.1468, 0.1210, 0.0597, 0.0873, 0.1477, 0.1713, 0.0524], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0162, 0.0100, 0.0136, 0.0123, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 05:09:53,628 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1036, 1.9979, 1.7205, 1.9597, 1.8943, 1.9219, 1.9705, 2.6699], device='cuda:2'), covar=tensor([0.3683, 0.4034, 0.3160, 0.3451, 0.4030, 0.2363, 0.3649, 0.1607], device='cuda:2'), in_proj_covar=tensor([0.0290, 0.0262, 0.0233, 0.0275, 0.0256, 0.0226, 0.0253, 0.0235], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:09:54,095 INFO [finetune.py:976] (2/7) Epoch 24, batch 2600, loss[loss=0.184, simple_loss=0.2617, pruned_loss=0.05319, over 4909.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.2456, pruned_loss=0.05057, over 955585.00 frames. ], batch size: 37, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:10:01,385 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1261, 2.0337, 1.7129, 1.9606, 1.8893, 1.8950, 1.9851, 2.6837], device='cuda:2'), covar=tensor([0.3632, 0.4136, 0.3049, 0.3753, 0.4042, 0.2324, 0.3490, 0.1638], device='cuda:2'), in_proj_covar=tensor([0.0290, 0.0262, 0.0233, 0.0276, 0.0256, 0.0226, 0.0254, 0.0236], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:10:04,828 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=134353.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:10:07,199 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=134356.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:10:12,060 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.001e+02 1.471e+02 1.806e+02 2.184e+02 4.519e+02, threshold=3.612e+02, percent-clipped=2.0 2023-03-27 05:10:13,312 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=134365.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:10:16,850 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=134370.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:10:29,856 INFO [finetune.py:976] (2/7) Epoch 24, batch 2650, loss[loss=0.1963, simple_loss=0.2568, pruned_loss=0.06795, over 4770.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.2473, pruned_loss=0.0506, over 958533.31 frames. ], batch size: 26, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:10:38,659 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.51 vs. 
limit=2.0 2023-03-27 05:11:21,024 INFO [finetune.py:976] (2/7) Epoch 24, batch 2700, loss[loss=0.1487, simple_loss=0.2234, pruned_loss=0.03699, over 4866.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2464, pruned_loss=0.04998, over 959585.89 frames. ], batch size: 31, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:11:39,165 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.916e+01 1.416e+02 1.758e+02 2.159e+02 3.599e+02, threshold=3.516e+02, percent-clipped=0.0 2023-03-27 05:11:54,599 INFO [finetune.py:976] (2/7) Epoch 24, batch 2750, loss[loss=0.1912, simple_loss=0.256, pruned_loss=0.06323, over 4908.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2446, pruned_loss=0.05, over 959678.56 frames. ], batch size: 37, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:12:01,829 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.0771, 0.9921, 0.9997, 0.3927, 0.8781, 1.1699, 1.1584, 0.9715], device='cuda:2'), covar=tensor([0.0828, 0.0564, 0.0550, 0.0480, 0.0599, 0.0564, 0.0367, 0.0586], device='cuda:2'), in_proj_covar=tensor([0.0121, 0.0147, 0.0125, 0.0121, 0.0130, 0.0128, 0.0140, 0.0147], device='cuda:2'), out_proj_covar=tensor([8.8195e-05, 1.0571e-04, 8.9538e-05, 8.4912e-05, 9.1069e-05, 9.1291e-05, 1.0003e-04, 1.0497e-04], device='cuda:2') 2023-03-27 05:12:13,127 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-27 05:12:27,873 INFO [finetune.py:976] (2/7) Epoch 24, batch 2800, loss[loss=0.1501, simple_loss=0.2173, pruned_loss=0.04145, over 4917.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2408, pruned_loss=0.04875, over 957307.39 frames. ], batch size: 46, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:12:46,113 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.301e+01 1.456e+02 1.823e+02 2.196e+02 4.309e+02, threshold=3.645e+02, percent-clipped=3.0 2023-03-27 05:12:46,246 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2317, 2.0333, 1.5395, 0.7043, 1.6919, 1.8956, 1.7369, 1.9270], device='cuda:2'), covar=tensor([0.0718, 0.0744, 0.1453, 0.1733, 0.1278, 0.1922, 0.2120, 0.0807], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0190, 0.0198, 0.0180, 0.0209, 0.0208, 0.0223, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:13:01,598 INFO [finetune.py:976] (2/7) Epoch 24, batch 2850, loss[loss=0.2067, simple_loss=0.2831, pruned_loss=0.06519, over 4911.00 frames. ], tot_loss[loss=0.169, simple_loss=0.2399, pruned_loss=0.04901, over 956203.93 frames. ], batch size: 37, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:13:03,550 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1734, 1.3265, 1.4756, 0.6602, 1.3528, 1.5986, 1.6224, 1.3748], device='cuda:2'), covar=tensor([0.0974, 0.0639, 0.0531, 0.0506, 0.0493, 0.0644, 0.0367, 0.0757], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0148, 0.0126, 0.0121, 0.0130, 0.0129, 0.0140, 0.0147], device='cuda:2'), out_proj_covar=tensor([8.8536e-05, 1.0619e-04, 8.9814e-05, 8.5219e-05, 9.1282e-05, 9.1586e-05, 1.0027e-04, 1.0542e-04], device='cuda:2') 2023-03-27 05:13:45,376 INFO [finetune.py:976] (2/7) Epoch 24, batch 2900, loss[loss=0.1972, simple_loss=0.2725, pruned_loss=0.06093, over 4840.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2423, pruned_loss=0.04979, over 957777.47 frames. 
], batch size: 49, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:13:50,543 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-27 05:13:56,302 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=134653.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:14:00,538 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=134660.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:14:03,944 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.005e+02 1.559e+02 1.783e+02 2.063e+02 3.902e+02, threshold=3.566e+02, percent-clipped=1.0 2023-03-27 05:14:04,025 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=134665.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:14:20,894 INFO [finetune.py:976] (2/7) Epoch 24, batch 2950, loss[loss=0.2092, simple_loss=0.2908, pruned_loss=0.06379, over 4907.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2454, pruned_loss=0.05028, over 956944.97 frames. ], batch size: 36, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:14:37,441 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=134701.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:14:52,797 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4925, 2.4899, 2.0781, 2.6363, 2.4669, 2.4820, 2.4719, 3.3867], device='cuda:2'), covar=tensor([0.3502, 0.3823, 0.3139, 0.3417, 0.3559, 0.2271, 0.3275, 0.1519], device='cuda:2'), in_proj_covar=tensor([0.0290, 0.0263, 0.0234, 0.0276, 0.0257, 0.0227, 0.0254, 0.0237], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:15:02,624 INFO [finetune.py:976] (2/7) Epoch 24, batch 3000, loss[loss=0.152, simple_loss=0.2278, pruned_loss=0.03809, over 4732.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2468, pruned_loss=0.05077, over 956033.73 frames. ], batch size: 59, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:15:02,624 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-27 05:15:13,333 INFO [finetune.py:1010] (2/7) Epoch 24, validation: loss=0.1561, simple_loss=0.2251, pruned_loss=0.0436, over 2265189.00 frames. 
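The tot_loss[... over N frames ...] figures above are not single-batch numbers: per-batch losses are accumulated weighted by their frame counts, so the printed value is an average over roughly the last ~955k training frames, and the validation line is the same statistic over the full 2,265,189-frame dev set. A minimal sketch of that bookkeeping, assuming a simple bounded window of recent batches in place of icefall's actual MetricsTracker:

```python
# Sketch of frame-weighted loss averaging, matching the "over N frames"
# annotations in spirit. The window length (200 batches) is an assumption.
from collections import deque

class RunningLoss:
    """Frame-weighted average of recent batch losses."""
    def __init__(self, max_batches: int = 200) -> None:
        self.window = deque(maxlen=max_batches)  # (loss_sum, num_frames)

    def update(self, loss_per_frame: float, num_frames: float) -> None:
        self.window.append((loss_per_frame * num_frames, num_frames))

    @property
    def value(self) -> float:
        frames = sum(f for _, f in self.window)
        return sum(s for s, _ in self.window) / max(frames, 1.0)

tracker = RunningLoss()
tracker.update(0.1561, 2265189.0)  # the validation figure logged above
print(f"tot_loss={tracker.value:.4f} over "
      f"{sum(f for _, f in tracker.window):.2f} frames")
```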
2023-03-27 05:15:13,334 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB 2023-03-27 05:15:15,770 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5594, 1.4084, 2.2758, 3.3613, 2.2216, 2.4039, 1.3095, 2.7630], device='cuda:2'), covar=tensor([0.1786, 0.1536, 0.1160, 0.0541, 0.0803, 0.1284, 0.1613, 0.0503], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0115, 0.0133, 0.0163, 0.0101, 0.0136, 0.0124, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 05:15:31,258 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.188e+02 1.527e+02 1.858e+02 2.241e+02 5.364e+02, threshold=3.716e+02, percent-clipped=3.0 2023-03-27 05:15:36,150 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5162, 2.4766, 1.9298, 2.7234, 2.4600, 2.0940, 2.9503, 2.5639], device='cuda:2'), covar=tensor([0.1239, 0.2087, 0.3008, 0.2478, 0.2499, 0.1604, 0.3100, 0.1691], device='cuda:2'), in_proj_covar=tensor([0.0189, 0.0190, 0.0236, 0.0255, 0.0251, 0.0207, 0.0215, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:15:48,110 INFO [finetune.py:976] (2/7) Epoch 24, batch 3050, loss[loss=0.1526, simple_loss=0.2341, pruned_loss=0.0355, over 4864.00 frames. ], tot_loss[loss=0.175, simple_loss=0.2476, pruned_loss=0.0512, over 955921.37 frames. ], batch size: 31, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:15:58,202 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.51 vs. limit=5.0 2023-03-27 05:16:39,365 INFO [finetune.py:976] (2/7) Epoch 24, batch 3100, loss[loss=0.1644, simple_loss=0.2332, pruned_loss=0.04781, over 4745.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.247, pruned_loss=0.0512, over 957278.17 frames. ], batch size: 27, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:16:56,667 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=134862.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:16:58,376 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.835e+01 1.385e+02 1.668e+02 2.115e+02 5.080e+02, threshold=3.336e+02, percent-clipped=1.0 2023-03-27 05:17:12,755 INFO [finetune.py:976] (2/7) Epoch 24, batch 3150, loss[loss=0.1537, simple_loss=0.2197, pruned_loss=0.04381, over 4771.00 frames. ], tot_loss[loss=0.1727, simple_loss=0.2437, pruned_loss=0.0509, over 957239.66 frames. 
], batch size: 28, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:17:30,519 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5289, 1.5762, 1.2754, 1.4356, 1.9164, 1.8876, 1.6189, 1.3968], device='cuda:2'), covar=tensor([0.0398, 0.0372, 0.0738, 0.0393, 0.0261, 0.0509, 0.0342, 0.0508], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0106, 0.0144, 0.0112, 0.0100, 0.0113, 0.0102, 0.0112], device='cuda:2'), out_proj_covar=tensor([7.7383e-05, 8.1264e-05, 1.1287e-04, 8.5576e-05, 7.7874e-05, 8.3733e-05, 7.5942e-05, 8.5518e-05], device='cuda:2') 2023-03-27 05:17:32,346 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5731, 2.2897, 1.9137, 0.9136, 2.2543, 1.9858, 1.6180, 2.1915], device='cuda:2'), covar=tensor([0.0903, 0.1017, 0.2071, 0.2425, 0.1501, 0.2415, 0.2895, 0.1215], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0189, 0.0198, 0.0179, 0.0209, 0.0207, 0.0222, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:17:37,233 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=134923.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:17:46,591 INFO [finetune.py:976] (2/7) Epoch 24, batch 3200, loss[loss=0.1747, simple_loss=0.2385, pruned_loss=0.0554, over 4818.00 frames. ], tot_loss[loss=0.17, simple_loss=0.2402, pruned_loss=0.04989, over 954290.40 frames. ], batch size: 41, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:18:03,019 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=134960.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:18:04,197 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.1452, 3.5975, 3.7769, 3.9956, 3.9039, 3.5528, 4.2307, 1.3261], device='cuda:2'), covar=tensor([0.0864, 0.0889, 0.1018, 0.1086, 0.1372, 0.1831, 0.0763, 0.5734], device='cuda:2'), in_proj_covar=tensor([0.0346, 0.0246, 0.0280, 0.0292, 0.0335, 0.0286, 0.0306, 0.0300], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:18:05,934 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.665e+01 1.513e+02 1.793e+02 2.252e+02 3.579e+02, threshold=3.586e+02, percent-clipped=1.0 2023-03-27 05:18:06,037 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=134965.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:18:22,479 INFO [finetune.py:976] (2/7) Epoch 24, batch 3250, loss[loss=0.1702, simple_loss=0.2518, pruned_loss=0.04432, over 4913.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2398, pruned_loss=0.04976, over 952579.57 frames. 
], batch size: 36, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:18:45,538 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=135008.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:18:46,186 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9392, 1.8860, 2.0994, 1.3575, 1.8137, 2.0464, 2.1246, 1.5401], device='cuda:2'), covar=tensor([0.0632, 0.0739, 0.0670, 0.0940, 0.0935, 0.0675, 0.0577, 0.1306], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0136, 0.0139, 0.0119, 0.0127, 0.0138, 0.0138, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:18:48,602 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=135013.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:18:50,454 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6999, 1.8259, 1.6059, 1.8497, 1.4528, 4.5145, 1.6862, 2.2232], device='cuda:2'), covar=tensor([0.3328, 0.2450, 0.2144, 0.2284, 0.1673, 0.0140, 0.2486, 0.1145], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0124, 0.0113, 0.0096, 0.0095, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 05:19:03,037 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-27 05:19:04,066 INFO [finetune.py:976] (2/7) Epoch 24, batch 3300, loss[loss=0.171, simple_loss=0.2283, pruned_loss=0.05682, over 4389.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2427, pruned_loss=0.05042, over 953741.10 frames. ], batch size: 19, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:19:23,522 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.113e+02 1.563e+02 1.899e+02 2.275e+02 5.700e+02, threshold=3.799e+02, percent-clipped=2.0 2023-03-27 05:19:44,196 INFO [finetune.py:976] (2/7) Epoch 24, batch 3350, loss[loss=0.2064, simple_loss=0.2809, pruned_loss=0.06596, over 4829.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2464, pruned_loss=0.05165, over 955462.86 frames. ], batch size: 33, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:20:21,440 INFO [finetune.py:976] (2/7) Epoch 24, batch 3400, loss[loss=0.2893, simple_loss=0.3285, pruned_loss=0.1251, over 4883.00 frames. ], tot_loss[loss=0.1783, simple_loss=0.2498, pruned_loss=0.0534, over 955454.28 frames. ], batch size: 43, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:20:40,369 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.077e+02 1.584e+02 1.828e+02 2.150e+02 3.792e+02, threshold=3.656e+02, percent-clipped=0.0 2023-03-27 05:20:44,638 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=135171.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:20:54,273 INFO [finetune.py:976] (2/7) Epoch 24, batch 3450, loss[loss=0.119, simple_loss=0.1938, pruned_loss=0.0221, over 4783.00 frames. ], tot_loss[loss=0.1768, simple_loss=0.2485, pruned_loss=0.05253, over 954802.63 frames. 
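], batch size: 29, lr: 3.05e-03, grad_scale: 16.0

The zipformer.py:1188 lines that follow record stochastic layer skipping inside each encoder stack: every stack has its own warmup window in batches (warmup_begin/warmup_end), and each step samples how many of its layers to bypass. This deep into training (batch_count around 135k, far past every warmup_end) num_to_drop is almost always 0 with an occasional 1, which suggests a small residual drop probability. A rough sketch of such a schedule follows; the 0.5 early rate, the linear ramp, and the 0.025 residual probability are assumed values for illustration, not the real zipformer.py constants.

```python
# Hypothetical layer-drop schedule reproducing the log's qualitative
# behaviour: heavy dropping only during warmup, rare drops afterwards.
import random

def layers_to_drop(batch_count: float, warmup_begin: float,
                   warmup_end: float, num_layers: int,
                   residual_p: float = 0.025) -> set:
    """Sample the set of layer indices to skip on this step."""
    if batch_count < warmup_begin:
        p = 0.5                        # assumed aggressive early rate
    elif batch_count < warmup_end:     # linear ramp down across the window
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        p = 0.5 * (1.0 - frac) + residual_p * frac
    else:
        p = residual_p                 # rare residual drops, as seen here
    return {i for i in range(num_layers) if random.random() < p}

drop = layers_to_drop(batch_count=135232.0, warmup_begin=3333.3,
                      warmup_end=4000.0, num_layers=4)
print(f"num_to_drop={len(drop)}, layers_to_drop={drop}")
```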
2023-03-27 05:21:27,773 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=135218.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:21:40,737 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=135232.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:21:47,100 INFO [finetune.py:976] (2/7) Epoch 24, batch 3500, loss[loss=0.1688, simple_loss=0.2455, pruned_loss=0.04605, over 4821.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.2451, pruned_loss=0.05174, over 954550.20 frames. ], batch size: 38, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:21:54,420 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.68 vs. limit=5.0 2023-03-27 05:22:06,082 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.678e+01 1.506e+02 1.714e+02 2.011e+02 3.544e+02, threshold=3.428e+02, percent-clipped=0.0 2023-03-27 05:22:12,754 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1672, 2.0322, 2.0738, 2.3317, 2.4651, 2.3894, 1.8271, 1.9458], device='cuda:2'), covar=tensor([0.1819, 0.1696, 0.1521, 0.1371, 0.1376, 0.0890, 0.1983, 0.1625], device='cuda:2'), in_proj_covar=tensor([0.0245, 0.0211, 0.0215, 0.0198, 0.0245, 0.0190, 0.0217, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:22:20,454 INFO [finetune.py:976] (2/7) Epoch 24, batch 3550, loss[loss=0.138, simple_loss=0.2133, pruned_loss=0.03134, over 4800.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2413, pruned_loss=0.05011, over 954756.65 frames. ], batch size: 51, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:22:35,356 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7850, 1.6245, 2.1043, 3.2792, 2.2836, 2.3924, 1.1784, 2.7698], device='cuda:2'), covar=tensor([0.1748, 0.1442, 0.1349, 0.0610, 0.0807, 0.1200, 0.1899, 0.0554], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0114, 0.0132, 0.0162, 0.0100, 0.0135, 0.0123, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 05:22:54,386 INFO [finetune.py:976] (2/7) Epoch 24, batch 3600, loss[loss=0.1477, simple_loss=0.2288, pruned_loss=0.03333, over 4756.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2394, pruned_loss=0.04944, over 955598.30 frames. ], batch size: 54, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:23:12,795 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.699e+01 1.474e+02 1.759e+02 2.084e+02 3.295e+02, threshold=3.517e+02, percent-clipped=0.0 2023-03-27 05:23:28,231 INFO [finetune.py:976] (2/7) Epoch 24, batch 3650, loss[loss=0.1815, simple_loss=0.2549, pruned_loss=0.05408, over 4795.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2421, pruned_loss=0.0507, over 955169.51 frames. ], batch size: 45, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:24:11,249 INFO [finetune.py:976] (2/7) Epoch 24, batch 3700, loss[loss=0.1575, simple_loss=0.2244, pruned_loss=0.04526, over 4081.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2459, pruned_loss=0.05166, over 955111.33 frames.
], batch size: 17, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:24:14,381 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8323, 1.6811, 1.4379, 1.3688, 1.6060, 1.6466, 1.6183, 2.2406], device='cuda:2'), covar=tensor([0.3869, 0.3916, 0.3219, 0.3374, 0.3516, 0.2321, 0.3450, 0.1680], device='cuda:2'), in_proj_covar=tensor([0.0289, 0.0263, 0.0235, 0.0276, 0.0257, 0.0228, 0.0254, 0.0236], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:24:28,521 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.118e+01 1.614e+02 1.999e+02 2.429e+02 5.138e+02, threshold=3.998e+02, percent-clipped=6.0 2023-03-27 05:24:37,977 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6366, 3.5402, 3.3150, 1.6881, 3.6380, 2.8880, 0.8129, 2.6087], device='cuda:2'), covar=tensor([0.2612, 0.1746, 0.1601, 0.3225, 0.1017, 0.0923, 0.4181, 0.1383], device='cuda:2'), in_proj_covar=tensor([0.0154, 0.0181, 0.0162, 0.0130, 0.0162, 0.0125, 0.0150, 0.0125], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 05:24:43,338 INFO [finetune.py:976] (2/7) Epoch 24, batch 3750, loss[loss=0.167, simple_loss=0.2461, pruned_loss=0.04396, over 4814.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.2477, pruned_loss=0.05246, over 953914.23 frames. ], batch size: 38, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:24:51,659 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2809, 2.1685, 1.7494, 0.9313, 2.0168, 1.8421, 1.6646, 2.0086], device='cuda:2'), covar=tensor([0.0867, 0.0685, 0.1559, 0.1871, 0.1072, 0.2023, 0.2091, 0.0836], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0189, 0.0197, 0.0179, 0.0208, 0.0206, 0.0221, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:24:54,694 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7490, 2.3966, 3.0950, 4.5153, 3.2982, 3.0753, 1.7263, 3.7269], device='cuda:2'), covar=tensor([0.1401, 0.1133, 0.1136, 0.0520, 0.0593, 0.1310, 0.1601, 0.0368], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0115, 0.0132, 0.0163, 0.0100, 0.0135, 0.0124, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 05:25:12,893 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=135518.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:25:18,842 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=135527.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:25:20,683 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-27 05:25:26,760 INFO [finetune.py:976] (2/7) Epoch 24, batch 3800, loss[loss=0.1642, simple_loss=0.2409, pruned_loss=0.04375, over 4745.00 frames. ], tot_loss[loss=0.176, simple_loss=0.2479, pruned_loss=0.05204, over 952525.14 frames. 
], batch size: 59, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:25:44,706 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.088e+02 1.524e+02 1.815e+02 2.221e+02 4.659e+02, threshold=3.630e+02, percent-clipped=3.0 2023-03-27 05:25:45,384 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=135566.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:26:00,453 INFO [finetune.py:976] (2/7) Epoch 24, batch 3850, loss[loss=0.1324, simple_loss=0.2132, pruned_loss=0.02586, over 4748.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2465, pruned_loss=0.05146, over 951505.77 frames. ], batch size: 23, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:26:10,993 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1717, 2.0942, 2.1759, 1.4316, 2.0995, 2.3028, 2.2919, 1.8592], device='cuda:2'), covar=tensor([0.0567, 0.0616, 0.0653, 0.0899, 0.0733, 0.0630, 0.0560, 0.0977], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0135, 0.0139, 0.0118, 0.0126, 0.0137, 0.0137, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:26:38,627 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=135630.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:26:45,793 INFO [finetune.py:976] (2/7) Epoch 24, batch 3900, loss[loss=0.1645, simple_loss=0.2361, pruned_loss=0.04648, over 4752.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2444, pruned_loss=0.05103, over 954433.30 frames. ], batch size: 26, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:27:10,715 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.672e+01 1.400e+02 1.667e+02 1.961e+02 4.314e+02, threshold=3.334e+02, percent-clipped=1.0 2023-03-27 05:27:14,835 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4526, 1.4220, 1.6108, 2.4838, 1.7003, 2.2017, 0.9515, 2.1871], device='cuda:2'), covar=tensor([0.1721, 0.1352, 0.1183, 0.0776, 0.0943, 0.1087, 0.1562, 0.0573], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0116, 0.0133, 0.0164, 0.0101, 0.0136, 0.0125, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 05:27:26,038 INFO [finetune.py:976] (2/7) Epoch 24, batch 3950, loss[loss=0.1488, simple_loss=0.2183, pruned_loss=0.03967, over 4830.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2413, pruned_loss=0.05003, over 953969.99 frames. ], batch size: 25, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:27:29,116 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=135691.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:27:45,923 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3745, 1.2428, 1.7876, 2.7315, 1.7675, 2.1557, 1.0083, 2.3778], device='cuda:2'), covar=tensor([0.2043, 0.2032, 0.1488, 0.1185, 0.1061, 0.1595, 0.1938, 0.0786], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0163, 0.0100, 0.0135, 0.0124, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 05:27:58,422 INFO [finetune.py:976] (2/7) Epoch 24, batch 4000, loss[loss=0.2316, simple_loss=0.2914, pruned_loss=0.08588, over 4907.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2408, pruned_loss=0.04998, over 954820.19 frames. 
], batch size: 36, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:28:16,421 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.536e+01 1.548e+02 1.897e+02 2.315e+02 3.943e+02, threshold=3.793e+02, percent-clipped=5.0 2023-03-27 05:28:31,232 INFO [finetune.py:976] (2/7) Epoch 24, batch 4050, loss[loss=0.175, simple_loss=0.2487, pruned_loss=0.05068, over 4756.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2441, pruned_loss=0.05112, over 953221.12 frames. ], batch size: 28, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:28:38,881 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5791, 3.5148, 3.2684, 1.8066, 3.6582, 2.7989, 1.0224, 2.5581], device='cuda:2'), covar=tensor([0.2679, 0.2029, 0.1688, 0.3258, 0.0950, 0.0997, 0.4191, 0.1438], device='cuda:2'), in_proj_covar=tensor([0.0154, 0.0182, 0.0163, 0.0131, 0.0163, 0.0125, 0.0150, 0.0125], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 05:28:54,308 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7426, 4.3548, 4.1792, 2.3186, 4.5180, 3.2922, 0.8846, 3.0168], device='cuda:2'), covar=tensor([0.2793, 0.1750, 0.1360, 0.3023, 0.0915, 0.0961, 0.4444, 0.1426], device='cuda:2'), in_proj_covar=tensor([0.0154, 0.0181, 0.0163, 0.0131, 0.0163, 0.0125, 0.0150, 0.0125], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 05:28:59,151 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=135827.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:29:09,952 INFO [finetune.py:976] (2/7) Epoch 24, batch 4100, loss[loss=0.168, simple_loss=0.2367, pruned_loss=0.04967, over 4754.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.2459, pruned_loss=0.05134, over 953318.06 frames. ], batch size: 28, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:29:32,641 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.978e+01 1.562e+02 1.866e+02 2.353e+02 4.250e+02, threshold=3.731e+02, percent-clipped=2.0 2023-03-27 05:29:39,219 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=135875.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:29:46,949 INFO [finetune.py:976] (2/7) Epoch 24, batch 4150, loss[loss=0.2057, simple_loss=0.2795, pruned_loss=0.06597, over 4915.00 frames. ], tot_loss[loss=0.1765, simple_loss=0.2477, pruned_loss=0.05265, over 948950.70 frames. ], batch size: 46, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:29:53,002 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.0380, 2.6854, 2.5121, 1.2855, 2.6704, 2.1137, 2.0922, 2.4361], device='cuda:2'), covar=tensor([0.0909, 0.0829, 0.1681, 0.2060, 0.1505, 0.2149, 0.2028, 0.1102], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0190, 0.0199, 0.0180, 0.0209, 0.0208, 0.0222, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:29:59,164 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.70 vs. limit=5.0 2023-03-27 05:30:08,014 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.58 vs. limit=5.0 2023-03-27 05:30:30,442 INFO [finetune.py:976] (2/7) Epoch 24, batch 4200, loss[loss=0.1186, simple_loss=0.1988, pruned_loss=0.01925, over 4749.00 frames. ], tot_loss[loss=0.1764, simple_loss=0.2479, pruned_loss=0.05249, over 950220.02 frames. 
], batch size: 27, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:30:45,723 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7090, 1.6283, 1.6025, 1.6631, 1.3398, 3.6846, 1.4488, 1.9721], device='cuda:2'), covar=tensor([0.3099, 0.2479, 0.2043, 0.2288, 0.1676, 0.0169, 0.2613, 0.1130], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0123, 0.0114, 0.0096, 0.0095, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 05:30:46,393 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1251, 2.0048, 1.6849, 1.8729, 2.1011, 1.7859, 2.2284, 2.1454], device='cuda:2'), covar=tensor([0.1177, 0.1907, 0.2726, 0.2273, 0.2286, 0.1551, 0.3190, 0.1561], device='cuda:2'), in_proj_covar=tensor([0.0190, 0.0191, 0.0238, 0.0255, 0.0251, 0.0207, 0.0216, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:30:49,319 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.053e+01 1.587e+02 1.796e+02 2.438e+02 3.967e+02, threshold=3.591e+02, percent-clipped=1.0 2023-03-27 05:31:00,645 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=135982.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:31:03,040 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=135986.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:31:03,578 INFO [finetune.py:976] (2/7) Epoch 24, batch 4250, loss[loss=0.2289, simple_loss=0.2883, pruned_loss=0.08476, over 4852.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.2463, pruned_loss=0.05217, over 951091.73 frames. ], batch size: 47, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:31:08,474 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=135994.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:31:18,930 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.2312, 2.9053, 2.6405, 1.3531, 2.7330, 2.2552, 2.3005, 2.6689], device='cuda:2'), covar=tensor([0.0901, 0.0746, 0.1810, 0.2123, 0.1746, 0.2339, 0.1932, 0.1045], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0191, 0.0199, 0.0180, 0.0211, 0.0208, 0.0223, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:31:45,375 INFO [finetune.py:976] (2/7) Epoch 24, batch 4300, loss[loss=0.1441, simple_loss=0.2205, pruned_loss=0.03379, over 4827.00 frames. ], tot_loss[loss=0.1735, simple_loss=0.244, pruned_loss=0.05151, over 948862.30 frames. 
], batch size: 41, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:31:49,184 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=136043.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:32:02,591 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=136055.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:32:04,916 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8796, 1.8445, 1.9404, 1.1453, 1.9631, 2.0044, 1.9664, 1.6497], device='cuda:2'), covar=tensor([0.0536, 0.0613, 0.0595, 0.0843, 0.0964, 0.0589, 0.0540, 0.1088], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0135, 0.0138, 0.0118, 0.0125, 0.0136, 0.0136, 0.0160], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:32:14,138 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.579e+01 1.491e+02 1.827e+02 2.181e+02 5.621e+02, threshold=3.653e+02, percent-clipped=1.0 2023-03-27 05:32:31,267 INFO [finetune.py:976] (2/7) Epoch 24, batch 4350, loss[loss=0.189, simple_loss=0.2534, pruned_loss=0.06224, over 4853.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.242, pruned_loss=0.0508, over 950088.37 frames. ], batch size: 44, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:32:48,464 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4154, 1.3964, 1.5205, 0.6877, 1.4960, 1.4627, 1.4953, 1.3224], device='cuda:2'), covar=tensor([0.0633, 0.0825, 0.0721, 0.1009, 0.1015, 0.0732, 0.0691, 0.1340], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0135, 0.0138, 0.0118, 0.0125, 0.0137, 0.0137, 0.0160], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:32:55,439 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9067, 1.8803, 1.6537, 2.1180, 2.4424, 2.0735, 1.7471, 1.5538], device='cuda:2'), covar=tensor([0.1965, 0.1708, 0.1706, 0.1352, 0.1492, 0.1049, 0.2144, 0.1693], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0210, 0.0214, 0.0196, 0.0243, 0.0190, 0.0216, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:33:04,533 INFO [finetune.py:976] (2/7) Epoch 24, batch 4400, loss[loss=0.1938, simple_loss=0.2667, pruned_loss=0.06046, over 4904.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2429, pruned_loss=0.05058, over 951010.18 frames. ], batch size: 37, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:33:08,144 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=136142.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:33:23,889 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.065e+02 1.540e+02 1.819e+02 2.170e+02 3.954e+02, threshold=3.638e+02, percent-clipped=3.0 2023-03-27 05:33:31,817 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.60 vs. limit=2.0 2023-03-27 05:33:37,775 INFO [finetune.py:976] (2/7) Epoch 24, batch 4450, loss[loss=0.2159, simple_loss=0.2846, pruned_loss=0.07362, over 4820.00 frames. ], tot_loss[loss=0.1769, simple_loss=0.2483, pruned_loss=0.05274, over 951503.69 frames. 
], batch size: 38, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:33:46,918 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7656, 1.2987, 0.8349, 1.5705, 2.1074, 1.5292, 1.5384, 1.6700], device='cuda:2'), covar=tensor([0.1520, 0.2096, 0.2024, 0.1269, 0.1983, 0.1959, 0.1446, 0.1940], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0111, 0.0092, 0.0119, 0.0093, 0.0098, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 05:33:48,790 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=136203.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:33:58,227 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5387, 1.5670, 1.3601, 1.5279, 1.7823, 1.7507, 1.5475, 1.3745], device='cuda:2'), covar=tensor([0.0382, 0.0307, 0.0701, 0.0323, 0.0242, 0.0457, 0.0339, 0.0421], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0106, 0.0145, 0.0112, 0.0100, 0.0113, 0.0102, 0.0113], device='cuda:2'), out_proj_covar=tensor([7.7374e-05, 8.1329e-05, 1.1368e-04, 8.5427e-05, 7.8115e-05, 8.4028e-05, 7.5859e-05, 8.5613e-05], device='cuda:2') 2023-03-27 05:34:13,599 INFO [finetune.py:976] (2/7) Epoch 24, batch 4500, loss[loss=0.1837, simple_loss=0.2551, pruned_loss=0.05609, over 4748.00 frames. ], tot_loss[loss=0.1774, simple_loss=0.2496, pruned_loss=0.05262, over 952902.14 frames. ], batch size: 27, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:34:39,505 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.886e+01 1.509e+02 1.852e+02 2.239e+02 3.856e+02, threshold=3.704e+02, percent-clipped=1.0 2023-03-27 05:34:54,258 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=136286.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:34:54,761 INFO [finetune.py:976] (2/7) Epoch 24, batch 4550, loss[loss=0.2111, simple_loss=0.2752, pruned_loss=0.07353, over 4816.00 frames. ], tot_loss[loss=0.176, simple_loss=0.2486, pruned_loss=0.05174, over 953748.30 frames. ], batch size: 33, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:34:59,781 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5685, 1.5582, 2.0945, 1.8539, 1.6873, 3.6006, 1.4877, 1.7350], device='cuda:2'), covar=tensor([0.1069, 0.1864, 0.1150, 0.1051, 0.1690, 0.0322, 0.1606, 0.1951], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0073, 0.0076, 0.0091, 0.0080, 0.0085, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 05:35:01,146 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-27 05:35:02,322 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.51 vs. 
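limit=2.0

The scaling.py:679 lines, like the one just completed above, track a whitening diagnostic per module: the metric is 1.0 when the per-group covariance of the activations is proportional to the identity and grows as the eigenvalue spread widens, and a corrective penalty appears to kick in only once the metric exceeds the printed limit (2.0 for the 8-group modules here, 5.0 for the single-group 384-channel ones). One way to compute such a metric is sketched below; the normalization is chosen to match the logged scale and may differ in detail from icefall's scaling.py.

```python
# Hedged sketch of a whitening metric: exactly 1.0 for a covariance
# proportional to the identity, larger as eigenvalues spread out.
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    """x: (num_frames, num_channels); channels are split into groups."""
    n, c = x.shape
    x = x.reshape(n, num_groups, c // num_groups).transpose(0, 1)
    cov = torch.matmul(x.transpose(1, 2), x) / n       # per-group covariance
    mean_diag = cov.diagonal(dim1=1, dim2=2).mean()    # ~ mean eigenvalue
    mean_sq = torch.matmul(cov, cov).diagonal(dim1=1, dim2=2).mean()
    return (mean_sq / mean_diag.clamp(min=1e-20) ** 2).item()

x = torch.randn(1000, 96)  # nearly white activations -> metric close to 1
print(f"Whitening: num_groups=8, num_channels=96, "
      f"metric={whitening_metric(x, 8):.2f} vs. limit=2.0")
```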
2023-03-27 05:35:22,735 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2262, 2.1625, 1.7151, 1.9799, 2.0844, 1.8195, 2.3463, 2.2333], device='cuda:2'), covar=tensor([0.1242, 0.1740, 0.2782, 0.2636, 0.2625, 0.1662, 0.3406, 0.1591], device='cuda:2'), in_proj_covar=tensor([0.0190, 0.0191, 0.0237, 0.0255, 0.0251, 0.0207, 0.0215, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:35:28,209 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=136334.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:35:28,910 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=136335.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:35:30,007 INFO [finetune.py:976] (2/7) Epoch 24, batch 4600, loss[loss=0.1838, simple_loss=0.254, pruned_loss=0.05678, over 4819.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.2459, pruned_loss=0.05048, over 953164.88 frames. ], batch size: 38, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:35:35,226 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=136338.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:35:45,774 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=136350.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:35:56,270 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.033e+02 1.624e+02 1.856e+02 2.259e+02 4.732e+02, threshold=3.713e+02, percent-clipped=2.0 2023-03-27 05:36:11,519 INFO [finetune.py:976] (2/7) Epoch 24, batch 4650, loss[loss=0.1745, simple_loss=0.2385, pruned_loss=0.05532, over 4834.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2438, pruned_loss=0.05009, over 953997.88 frames. ], batch size: 30, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:36:17,128 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=136396.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:36:45,425 INFO [finetune.py:976] (2/7) Epoch 24, batch 4700, loss[loss=0.1411, simple_loss=0.2122, pruned_loss=0.03497, over 4808.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.2411, pruned_loss=0.04977, over 953884.71 frames.
], batch size: 51, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:37:13,734 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.054e+02 1.479e+02 1.754e+02 2.064e+02 3.231e+02, threshold=3.507e+02, percent-clipped=0.0 2023-03-27 05:37:22,878 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1750, 1.9805, 1.8498, 2.0629, 1.8474, 1.9173, 1.8830, 2.5807], device='cuda:2'), covar=tensor([0.3061, 0.3891, 0.2895, 0.3181, 0.4218, 0.2128, 0.3708, 0.1415], device='cuda:2'), in_proj_covar=tensor([0.0290, 0.0263, 0.0234, 0.0276, 0.0257, 0.0228, 0.0255, 0.0236], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:37:37,698 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7608, 1.7702, 1.7472, 1.7429, 1.5131, 3.3316, 1.7078, 2.0323], device='cuda:2'), covar=tensor([0.2677, 0.1974, 0.1657, 0.1866, 0.1348, 0.0266, 0.2401, 0.0989], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0117, 0.0122, 0.0124, 0.0114, 0.0096, 0.0095, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 05:37:38,205 INFO [finetune.py:976] (2/7) Epoch 24, batch 4750, loss[loss=0.1968, simple_loss=0.2502, pruned_loss=0.07165, over 4200.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2385, pruned_loss=0.04905, over 952878.04 frames. ], batch size: 65, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:37:44,542 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0 2023-03-27 05:37:44,924 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=136498.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:38:10,344 INFO [finetune.py:976] (2/7) Epoch 24, batch 4800, loss[loss=0.1714, simple_loss=0.2487, pruned_loss=0.04703, over 4756.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.243, pruned_loss=0.0511, over 952054.30 frames. ], batch size: 26, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:38:15,928 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.57 vs. limit=2.0 2023-03-27 05:38:28,976 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.123e+02 1.582e+02 2.021e+02 2.347e+02 5.093e+02, threshold=4.042e+02, percent-clipped=3.0 2023-03-27 05:38:44,073 INFO [finetune.py:976] (2/7) Epoch 24, batch 4850, loss[loss=0.2046, simple_loss=0.2784, pruned_loss=0.06545, over 4717.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2461, pruned_loss=0.05146, over 953467.06 frames. ], batch size: 59, lr: 3.04e-03, grad_scale: 32.0 2023-03-27 05:39:11,852 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4360, 2.2658, 1.9969, 2.4726, 2.9246, 2.4128, 2.4748, 1.8377], device='cuda:2'), covar=tensor([0.2234, 0.1937, 0.1913, 0.1560, 0.1729, 0.1086, 0.1811, 0.1830], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0212, 0.0216, 0.0197, 0.0245, 0.0191, 0.0218, 0.0206], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:39:17,546 INFO [finetune.py:976] (2/7) Epoch 24, batch 4900, loss[loss=0.2157, simple_loss=0.2817, pruned_loss=0.07486, over 4213.00 frames. ], tot_loss[loss=0.1757, simple_loss=0.2473, pruned_loss=0.05199, over 951772.00 frames. 
], batch size: 65, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:39:18,281 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=136638.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:39:22,289 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=136643.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:39:26,545 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=136650.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:39:41,834 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.6039, 1.6151, 1.5709, 0.9382, 1.7548, 1.9144, 1.8799, 1.4387], device='cuda:2'), covar=tensor([0.1077, 0.0625, 0.0570, 0.0585, 0.0455, 0.0637, 0.0336, 0.0909], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0149, 0.0127, 0.0122, 0.0131, 0.0130, 0.0142, 0.0149], device='cuda:2'), out_proj_covar=tensor([8.9679e-05, 1.0756e-04, 9.0576e-05, 8.5814e-05, 9.1966e-05, 9.2232e-05, 1.0115e-04, 1.0623e-04], device='cuda:2') 2023-03-27 05:39:42,308 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.069e+02 1.600e+02 1.925e+02 2.438e+02 3.559e+02, threshold=3.849e+02, percent-clipped=0.0 2023-03-27 05:39:50,745 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.48 vs. limit=2.0 2023-03-27 05:39:59,909 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=136686.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:40:00,476 INFO [finetune.py:976] (2/7) Epoch 24, batch 4950, loss[loss=0.1828, simple_loss=0.2592, pruned_loss=0.0532, over 4830.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.2483, pruned_loss=0.05211, over 952529.21 frames. ], batch size: 30, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:40:03,943 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=136691.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:40:08,765 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=136698.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:40:12,455 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=136704.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:40:33,840 INFO [finetune.py:976] (2/7) Epoch 24, batch 5000, loss[loss=0.1381, simple_loss=0.2169, pruned_loss=0.0296, over 4906.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2462, pruned_loss=0.05129, over 953385.64 frames. ], batch size: 37, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:41:02,608 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.076e+02 1.520e+02 1.782e+02 2.173e+02 3.913e+02, threshold=3.563e+02, percent-clipped=1.0 2023-03-27 05:41:17,086 INFO [finetune.py:976] (2/7) Epoch 24, batch 5050, loss[loss=0.1358, simple_loss=0.2053, pruned_loss=0.03316, over 4736.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2436, pruned_loss=0.05059, over 954702.71 frames. ], batch size: 54, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:41:25,179 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=136798.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:41:30,691 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=136806.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:41:49,839 INFO [finetune.py:976] (2/7) Epoch 24, batch 5100, loss[loss=0.1945, simple_loss=0.2544, pruned_loss=0.06733, over 4905.00 frames. ], tot_loss[loss=0.1715, simple_loss=0.2417, pruned_loss=0.05061, over 954603.43 frames. 
], batch size: 43, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:41:56,303 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=136846.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:42:11,831 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.036e+02 1.576e+02 1.921e+02 2.257e+02 4.191e+02, threshold=3.841e+02, percent-clipped=1.0 2023-03-27 05:42:13,189 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=136867.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:42:28,081 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6145, 1.4750, 1.8713, 1.1786, 1.7522, 1.8675, 1.4468, 2.0675], device='cuda:2'), covar=tensor([0.1160, 0.2126, 0.1274, 0.1719, 0.0859, 0.1187, 0.2637, 0.0693], device='cuda:2'), in_proj_covar=tensor([0.0189, 0.0205, 0.0189, 0.0188, 0.0172, 0.0211, 0.0213, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:42:35,177 INFO [finetune.py:976] (2/7) Epoch 24, batch 5150, loss[loss=0.2286, simple_loss=0.3108, pruned_loss=0.07326, over 4218.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.2412, pruned_loss=0.05026, over 952138.73 frames. ], batch size: 65, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:43:00,926 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4781, 1.5487, 1.5369, 0.8629, 1.5808, 1.8400, 1.8670, 1.3651], device='cuda:2'), covar=tensor([0.0860, 0.0595, 0.0498, 0.0510, 0.0460, 0.0497, 0.0276, 0.0673], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0149, 0.0127, 0.0122, 0.0131, 0.0130, 0.0142, 0.0148], device='cuda:2'), out_proj_covar=tensor([8.9506e-05, 1.0733e-04, 9.0392e-05, 8.5617e-05, 9.1759e-05, 9.2396e-05, 1.0097e-04, 1.0611e-04], device='cuda:2') 2023-03-27 05:43:16,539 INFO [finetune.py:976] (2/7) Epoch 24, batch 5200, loss[loss=0.1686, simple_loss=0.2371, pruned_loss=0.05003, over 4896.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2451, pruned_loss=0.05146, over 952416.47 frames. ], batch size: 32, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:43:35,515 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.133e+02 1.647e+02 1.940e+02 2.397e+02 3.428e+02, threshold=3.879e+02, percent-clipped=0.0 2023-03-27 05:43:42,474 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.81 vs. limit=2.0 2023-03-27 05:43:48,851 INFO [finetune.py:976] (2/7) Epoch 24, batch 5250, loss[loss=0.1552, simple_loss=0.2155, pruned_loss=0.04745, over 4340.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2469, pruned_loss=0.05148, over 953413.52 frames. ], batch size: 19, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:43:51,878 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=136991.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:43:57,216 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=136999.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:44:22,623 INFO [finetune.py:976] (2/7) Epoch 24, batch 5300, loss[loss=0.1927, simple_loss=0.2672, pruned_loss=0.05913, over 4835.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.248, pruned_loss=0.05121, over 954768.91 frames. 
], batch size: 49, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:44:23,938 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=137039.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:44:34,523 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4187, 1.2993, 1.3138, 1.2940, 0.8003, 2.3058, 0.7793, 1.2024], device='cuda:2'), covar=tensor([0.3461, 0.2643, 0.2380, 0.2585, 0.2027, 0.0339, 0.2850, 0.1403], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0123, 0.0113, 0.0096, 0.0095, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 05:44:42,412 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.600e+02 1.832e+02 2.198e+02 3.821e+02, threshold=3.665e+02, percent-clipped=0.0 2023-03-27 05:45:04,708 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5617, 1.4378, 1.3087, 1.6124, 1.6156, 1.5820, 0.9964, 1.2720], device='cuda:2'), covar=tensor([0.2035, 0.2026, 0.1962, 0.1535, 0.1579, 0.1276, 0.2568, 0.1884], device='cuda:2'), in_proj_covar=tensor([0.0244, 0.0211, 0.0214, 0.0196, 0.0244, 0.0189, 0.0216, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:45:05,791 INFO [finetune.py:976] (2/7) Epoch 24, batch 5350, loss[loss=0.1802, simple_loss=0.2497, pruned_loss=0.05539, over 4910.00 frames. ], tot_loss[loss=0.176, simple_loss=0.2491, pruned_loss=0.05147, over 956343.02 frames. ], batch size: 37, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:45:38,835 INFO [finetune.py:976] (2/7) Epoch 24, batch 5400, loss[loss=0.178, simple_loss=0.2508, pruned_loss=0.05257, over 4818.00 frames. ], tot_loss[loss=0.1741, simple_loss=0.2465, pruned_loss=0.05084, over 955918.18 frames. ], batch size: 33, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:45:57,159 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=137162.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:45:58,904 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.720e+01 1.449e+02 1.770e+02 2.209e+02 4.288e+02, threshold=3.541e+02, percent-clipped=1.0 2023-03-27 05:46:22,823 INFO [finetune.py:976] (2/7) Epoch 24, batch 5450, loss[loss=0.1534, simple_loss=0.2134, pruned_loss=0.0467, over 4825.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2439, pruned_loss=0.05035, over 955026.98 frames. ], batch size: 30, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:46:37,894 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.8561, 3.3924, 3.5683, 3.7030, 3.6206, 3.4213, 3.9371, 1.2300], device='cuda:2'), covar=tensor([0.1048, 0.1018, 0.1078, 0.1267, 0.1553, 0.1747, 0.0925, 0.5978], device='cuda:2'), in_proj_covar=tensor([0.0344, 0.0245, 0.0280, 0.0292, 0.0337, 0.0283, 0.0304, 0.0299], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 05:46:55,994 INFO [finetune.py:976] (2/7) Epoch 24, batch 5500, loss[loss=0.2247, simple_loss=0.2895, pruned_loss=0.07998, over 4818.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2412, pruned_loss=0.04961, over 953713.32 frames. ], batch size: 38, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:46:59,239 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. 
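limit=2.0

The optim.py:369 entries (one follows immediately below) summarize the optimizer's adaptive gradient clipping. The five numbers after "grad-norm quartiles" read as min/25%/median/75%/max of recently observed gradient norms, and in every entry the printed threshold equals Clipping_scale times the median (below: 2.0 * 1.897e+02 = 3.794e+02); percent-clipped is the share of recent batches whose gradients were scaled down. A rough sketch of that bookkeeping, with the buffer length as an assumed placeholder:

```python
# Sketch of median-based gradient clipping consistent with the
# "grad-norm quartiles ... threshold ... percent-clipped" lines.
import torch

class GradNormClipper:
    def __init__(self, clipping_scale: float = 2.0, buffer_len: int = 128):
        self.clipping_scale = clipping_scale
        self.buffer_len = buffer_len
        self.norms = []        # recent global gradient norms
        self.num_seen = 0
        self.num_clipped = 0

    def clip_(self, params) -> float:
        params = [p for p in params if p.grad is not None]
        norm = torch.norm(torch.stack([p.grad.norm() for p in params])).item()
        self.norms = (self.norms + [norm])[-self.buffer_len:]
        median = sorted(self.norms)[len(self.norms) // 2]
        threshold = self.clipping_scale * median   # e.g. 2.0 * 1.897e+02
        self.num_seen += 1
        if norm > threshold:                       # rescale to the threshold
            self.num_clipped += 1
            for p in params:
                p.grad.mul_(threshold / norm)
        return threshold

    def percent_clipped(self) -> float:
        return 100.0 * self.num_clipped / max(self.num_seen, 1)

w = torch.nn.Parameter(torch.randn(10))
w.grad = torch.randn(10)
print(f"threshold={GradNormClipper().clip_([w]):.3e}")
```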
2023-03-27 05:46:59,239 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0
2023-03-27 05:47:13,426 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.031e+02 1.493e+02 1.897e+02 2.213e+02 3.719e+02, threshold=3.794e+02, percent-clipped=2.0
2023-03-27 05:47:17,462 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4730, 3.3498, 3.2086, 1.4722, 3.4467, 2.6322, 0.7148, 2.2435], device='cuda:2'), covar=tensor([0.2546, 0.1989, 0.1622, 0.3467, 0.1210, 0.1030, 0.4459, 0.1675], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0176, 0.0159, 0.0128, 0.0160, 0.0122, 0.0147, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-27 05:47:36,857 INFO [finetune.py:976] (2/7) Epoch 24, batch 5550, loss[loss=0.1688, simple_loss=0.2558, pruned_loss=0.04089, over 4790.00 frames. ], tot_loss[loss=0.1721, simple_loss=0.2436, pruned_loss=0.05034, over 955078.67 frames. ], batch size: 51, lr: 3.03e-03, grad_scale: 32.0
2023-03-27 05:47:47,644 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=137299.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 05:48:20,577 INFO [finetune.py:976] (2/7) Epoch 24, batch 5600, loss[loss=0.1543, simple_loss=0.233, pruned_loss=0.03777, over 4846.00 frames. ], tot_loss[loss=0.174, simple_loss=0.246, pruned_loss=0.05097, over 955771.10 frames. ], batch size: 47, lr: 3.03e-03, grad_scale: 32.0
2023-03-27 05:48:26,395 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=137347.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 05:48:34,400 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.61 vs. limit=2.0
2023-03-27 05:48:37,190 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4649, 1.0865, 0.7733, 1.2778, 1.8188, 0.8074, 1.2073, 1.2686], device='cuda:2'), covar=tensor([0.1535, 0.2171, 0.1766, 0.1236, 0.2123, 0.2027, 0.1492, 0.2084], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0093, 0.0109, 0.0092, 0.0119, 0.0092, 0.0097, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2')
2023-03-27 05:48:37,702 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.121e+02 1.584e+02 1.843e+02 2.259e+02 3.753e+02, threshold=3.686e+02, percent-clipped=0.0
2023-03-27 05:48:51,115 INFO [finetune.py:976] (2/7) Epoch 24, batch 5650, loss[loss=0.1833, simple_loss=0.2604, pruned_loss=0.05306, over 4793.00 frames. ], tot_loss[loss=0.1755, simple_loss=0.2484, pruned_loss=0.05129, over 953286.56 frames. ], batch size: 51, lr: 3.03e-03, grad_scale: 32.0
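The [scaling.py:679] "Whitening" lines compare a covariance-based metric against a limit (2.0 for the grouped 96- and 192-channel activations here, 5.0 for the ungrouped 384-channel case); a corrective penalty activates only when the metric exceeds the limit. One plausible form for such a metric, sketched below as an assumption rather than icefall's exact definition, is the ratio of the mean squared eigenvalue to the squared mean eigenvalue of the per-group feature covariance, which equals 1.0 for perfectly whitened features:

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
        """x: (num_frames, num_channels). Sketch of a whitening metric:
        E[lambda^2] / (E[lambda])^2 over covariance eigenvalues, >= 1,
        with equality iff the group covariance is a multiple of identity."""
        n, c = x.shape
        assert c % num_groups == 0
        k = c // num_groups
        x = x.reshape(n, num_groups, k).transpose(0, 1)   # (groups, frames, k)
        x = x - x.mean(dim=1, keepdim=True)
        cov = torch.matmul(x.transpose(1, 2), x) / n      # (groups, k, k)
        trace = cov.diagonal(dim1=1, dim2=2).sum(-1)      # sum of eigenvalues
        frob = (cov ** 2).sum(dim=(1, 2))                 # sum of squared eigenvalues
        return (k * frob / trace.pow(2)).mean()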
2023-03-27 05:49:20,946 INFO [finetune.py:976] (2/7) Epoch 24, batch 5700, loss[loss=0.1402, simple_loss=0.2031, pruned_loss=0.03865, over 4217.00 frames. ], tot_loss[loss=0.1725, simple_loss=0.2449, pruned_loss=0.05006, over 939674.72 frames. ], batch size: 18, lr: 3.03e-03, grad_scale: 32.0
2023-03-27 05:49:29,278 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9309, 1.8016, 1.9728, 1.1816, 1.9331, 1.9641, 1.9005, 1.7212], device='cuda:2'), covar=tensor([0.0516, 0.0649, 0.0591, 0.0857, 0.0911, 0.0681, 0.0566, 0.1059], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0136, 0.0139, 0.0119, 0.0126, 0.0138, 0.0138, 0.0160], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 05:49:35,728 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=137462.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 05:49:52,032 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.393e+01 1.414e+02 1.686e+02 2.138e+02 3.465e+02, threshold=3.373e+02, percent-clipped=0.0
2023-03-27 05:49:52,048 INFO [finetune.py:976] (2/7) Epoch 25, batch 0, loss[loss=0.1477, simple_loss=0.2242, pruned_loss=0.03564, over 4771.00 frames. ], tot_loss[loss=0.1477, simple_loss=0.2242, pruned_loss=0.03564, over 4771.00 frames. ], batch size: 26, lr: 3.03e-03, grad_scale: 32.0
2023-03-27 05:49:52,048 INFO [finetune.py:1001] (2/7) Computing validation loss
2023-03-27 05:49:57,919 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6516, 3.6653, 3.4680, 1.5787, 3.6446, 2.8087, 0.7593, 2.5603], device='cuda:2'), covar=tensor([0.1766, 0.1483, 0.1524, 0.3143, 0.1013, 0.0966, 0.3516, 0.1344], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0177, 0.0160, 0.0129, 0.0161, 0.0123, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-27 05:50:06,683 INFO [finetune.py:1010] (2/7) Epoch 25, validation: loss=0.1587, simple_loss=0.2267, pruned_loss=0.04536, over 2265189.00 frames.
2023-03-27 05:50:06,683 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB
2023-03-27 05:50:46,518 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=137510.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 05:50:50,024 INFO [finetune.py:976] (2/7) Epoch 25, batch 50, loss[loss=0.1861, simple_loss=0.2622, pruned_loss=0.05498, over 4812.00 frames. ], tot_loss[loss=0.1826, simple_loss=0.2539, pruned_loss=0.05567, over 217232.05 frames. ], batch size: 38, lr: 3.02e-03, grad_scale: 32.0
2023-03-27 05:51:18,273 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0145, 1.9013, 1.9976, 1.3049, 2.0248, 2.1608, 2.0355, 1.7110], device='cuda:2'), covar=tensor([0.0529, 0.0719, 0.0717, 0.0889, 0.0752, 0.0602, 0.0600, 0.1089], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0137, 0.0139, 0.0119, 0.0126, 0.0138, 0.0138, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 05:51:21,853 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.42 vs. limit=5.0
2023-03-27 05:51:25,237 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.123e+02 1.580e+02 1.851e+02 2.170e+02 4.183e+02, threshold=3.702e+02, percent-clipped=2.0
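At the first batch of each epoch ([finetune.py:1001] and [finetune.py:1010]) training pauses to compute a validation loss over the whole dev set; the reported numbers are frame-weighted averages, which is why "over 2265189.00 frames." accompanies them. A minimal sketch of such a frame-weighted validation pass follows; the batch layout and the model's return signature are assumptions, not the actual finetune.py code:

    import torch

    def compute_validation_loss(model, valid_loader, device):
        """Frame-weighted average loss over the dev set, as in the
        'Epoch N, validation: loss=...' log lines (sketch, assumed API)."""
        model.eval()
        tot_loss, tot_frames = 0.0, 0.0
        with torch.no_grad():
            for batch in valid_loader:
                feats = batch["inputs"].to(device)             # (B, T, 80) fbank features
                supervisions = batch["supervisions"]           # token ids and durations
                loss, num_frames = model(feats, supervisions)  # hypothetical signature
                tot_loss += loss.item() * num_frames
                tot_frames += num_frames
        model.train()
        return tot_loss / tot_frames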
2023-03-27 05:51:25,253 INFO [finetune.py:976] (2/7) Epoch 25, batch 100, loss[loss=0.2059, simple_loss=0.2582, pruned_loss=0.07682, over 4822.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2433, pruned_loss=0.05148, over 383049.85 frames. ], batch size: 38, lr: 3.02e-03, grad_scale: 32.0
2023-03-27 05:51:59,265 INFO [finetune.py:976] (2/7) Epoch 25, batch 150, loss[loss=0.1872, simple_loss=0.2647, pruned_loss=0.0549, over 4830.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2397, pruned_loss=0.05029, over 510079.98 frames. ], batch size: 39, lr: 3.02e-03, grad_scale: 32.0
2023-03-27 05:52:01,641 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7807, 1.7708, 2.2189, 3.6713, 2.5219, 2.5300, 0.8680, 3.0187], device='cuda:2'), covar=tensor([0.1751, 0.1304, 0.1330, 0.0492, 0.0766, 0.1228, 0.2019, 0.0444], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0116, 0.0134, 0.0164, 0.0102, 0.0137, 0.0125, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-27 05:52:27,751 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.49 vs. limit=2.0
2023-03-27 05:52:33,561 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.113e+02 1.544e+02 1.791e+02 2.141e+02 4.771e+02, threshold=3.582e+02, percent-clipped=2.0
2023-03-27 05:52:33,577 INFO [finetune.py:976] (2/7) Epoch 25, batch 200, loss[loss=0.1795, simple_loss=0.2414, pruned_loss=0.05883, over 4888.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2382, pruned_loss=0.04922, over 609693.79 frames. ], batch size: 35, lr: 3.02e-03, grad_scale: 32.0
2023-03-27 05:53:26,996 INFO [finetune.py:976] (2/7) Epoch 25, batch 250, loss[loss=0.1947, simple_loss=0.2621, pruned_loss=0.06366, over 4934.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2423, pruned_loss=0.05085, over 687151.61 frames. ], batch size: 33, lr: 3.02e-03, grad_scale: 32.0
2023-03-27 05:53:47,818 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1902, 1.0465, 1.4796, 2.3200, 1.4694, 1.9992, 0.8793, 2.0663], device='cuda:2'), covar=tensor([0.2199, 0.2195, 0.1448, 0.1114, 0.1236, 0.1659, 0.1835, 0.0743], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0117, 0.0135, 0.0165, 0.0103, 0.0138, 0.0126, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-27 05:54:00,392 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.518e+01 1.603e+02 1.998e+02 2.287e+02 4.515e+02, threshold=3.995e+02, percent-clipped=2.0
2023-03-27 05:54:00,407 INFO [finetune.py:976] (2/7) Epoch 25, batch 300, loss[loss=0.1556, simple_loss=0.2231, pruned_loss=0.04401, over 4193.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2439, pruned_loss=0.05097, over 746180.45 frames. ], batch size: 18, lr: 3.02e-03, grad_scale: 32.0
2023-03-27 05:54:33,830 INFO [finetune.py:976] (2/7) Epoch 25, batch 350, loss[loss=0.1774, simple_loss=0.2529, pruned_loss=0.05093, over 4892.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2466, pruned_loss=0.05192, over 794395.35 frames. ], batch size: 43, lr: 3.02e-03, grad_scale: 32.0
2023-03-27 05:55:00,494 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.96 vs. limit=2.0
2023-03-27 05:55:07,123 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.080e+02 1.541e+02 1.822e+02 2.129e+02 2.910e+02, threshold=3.644e+02, percent-clipped=0.0
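The learning rate decays smoothly across these entries (3.03e-03 in epoch 24, 3.02e-03 from around batch 50 of epoch 25, 3.01e-03 and 3.00e-03 further down), consistent with icefall's Eden schedule, which discounts the base rate by both the global batch count and the epoch. A sketch of the formula; treat base_lr=0.004, lr_batches=100000 and lr_epochs=100 as assumed configuration values for this run, and the reproduction as approximate:

    def eden_lr(base_lr: float, batch: int, epoch: int,
                lr_batches: float = 100000.0, lr_epochs: float = 100.0) -> float:
        """Eden learning-rate schedule (sketch of the icefall formula)."""
        batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return base_lr * batch_factor * epoch_factor

    # Around batch_count ~ 137000 in epoch 25 this yields ~3.0e-03,
    # in line with the "lr: 3.02e-03" fields above.
    print(eden_lr(0.004, 137_000, 25))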
2023-03-27 05:55:07,139 INFO [finetune.py:976] (2/7) Epoch 25, batch 400, loss[loss=0.148, simple_loss=0.2272, pruned_loss=0.03434, over 4806.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2471, pruned_loss=0.0513, over 829674.13 frames. ], batch size: 45, lr: 3.02e-03, grad_scale: 32.0
2023-03-27 05:55:31,436 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=137892.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 05:55:31,489 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.09 vs. limit=2.0
2023-03-27 05:55:33,197 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=137895.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 05:55:53,569 INFO [finetune.py:976] (2/7) Epoch 25, batch 450, loss[loss=0.1689, simple_loss=0.2465, pruned_loss=0.04565, over 4747.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2463, pruned_loss=0.0512, over 858215.69 frames. ], batch size: 54, lr: 3.02e-03, grad_scale: 32.0
2023-03-27 05:56:13,828 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1947, 2.1134, 1.8008, 2.1511, 2.1668, 1.9118, 2.4686, 2.2298], device='cuda:2'), covar=tensor([0.1279, 0.2072, 0.2655, 0.2354, 0.2434, 0.1514, 0.2704, 0.1603], device='cuda:2'), in_proj_covar=tensor([0.0188, 0.0189, 0.0235, 0.0254, 0.0250, 0.0206, 0.0215, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 05:56:19,219 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=137953.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 05:56:20,995 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=137956.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 05:56:22,222 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6136, 2.5160, 2.1101, 2.7686, 2.5414, 2.1928, 3.0810, 2.6541], device='cuda:2'), covar=tensor([0.1318, 0.2334, 0.3159, 0.2512, 0.2656, 0.1813, 0.2883, 0.1863], device='cuda:2'), in_proj_covar=tensor([0.0188, 0.0190, 0.0234, 0.0255, 0.0250, 0.0206, 0.0215, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 05:56:26,870 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.118e+02 1.489e+02 1.801e+02 2.267e+02 5.324e+02, threshold=3.602e+02, percent-clipped=3.0
2023-03-27 05:56:26,886 INFO [finetune.py:976] (2/7) Epoch 25, batch 500, loss[loss=0.2223, simple_loss=0.2803, pruned_loss=0.08211, over 4904.00 frames. ], tot_loss[loss=0.1721, simple_loss=0.244, pruned_loss=0.05014, over 881171.21 frames. ], batch size: 36, lr: 3.02e-03, grad_scale: 32.0
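Each [finetune.py:976] line reports two sets of numbers: loss[...] for the current batch and tot_loss[...], a running average whose frame count plateaus around 950,000. A plausible way to keep such a frame-weighted running average is an exponential decay on both the weighted loss sum and the frame count; the decay constant below is an assumption, chosen because with ~4,800-frame batches a decay of 0.995 gives a steady-state frame count near 9.6e5, the same order as the values logged here:

    class RunningLoss:
        """Frame-weighted exponential running average, mimicking the
        'tot_loss[..., over N frames.]' fields (decay value assumed)."""

        def __init__(self, decay: float = 0.995):
            self.decay = decay
            self.loss_sum = 0.0
            self.frames = 0.0

        def update(self, batch_loss: float, batch_frames: float) -> None:
            self.loss_sum = self.decay * self.loss_sum + batch_loss * batch_frames
            self.frames = self.decay * self.frames + batch_frames

        @property
        def value(self) -> float:
            return self.loss_sum / max(self.frames, 1.0)

    avg = RunningLoss()
    avg.update(0.1602, 4842.0)  # numbers from the batch 550 entry below
    print(f"tot_loss[loss={avg.value:.4f}, over {avg.frames:.2f} frames.]")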
2023-03-27 05:57:01,547 INFO [finetune.py:976] (2/7) Epoch 25, batch 550, loss[loss=0.1602, simple_loss=0.2279, pruned_loss=0.04622, over 4842.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.2414, pruned_loss=0.0496, over 899170.92 frames. ], batch size: 49, lr: 3.02e-03, grad_scale: 32.0
2023-03-27 05:57:14,698 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7306, 1.3268, 0.7410, 1.5957, 2.0879, 1.4375, 1.5840, 1.7393], device='cuda:2'), covar=tensor([0.1490, 0.2110, 0.2027, 0.1218, 0.1868, 0.1821, 0.1449, 0.1795], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0110, 0.0093, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2')
2023-03-27 05:57:22,758 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5072, 1.4857, 1.9875, 1.8868, 1.5607, 3.6322, 1.4349, 1.5282], device='cuda:2'), covar=tensor([0.1243, 0.2354, 0.1179, 0.1128, 0.1946, 0.0289, 0.1955, 0.2364], device='cuda:2'), in_proj_covar=tensor([0.0074, 0.0081, 0.0073, 0.0076, 0.0090, 0.0080, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 05:57:34,657 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.805e+01 1.427e+02 1.740e+02 1.994e+02 3.808e+02, threshold=3.480e+02, percent-clipped=1.0
2023-03-27 05:57:34,673 INFO [finetune.py:976] (2/7) Epoch 25, batch 600, loss[loss=0.2098, simple_loss=0.2826, pruned_loss=0.06846, over 4856.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2416, pruned_loss=0.05031, over 910251.40 frames. ], batch size: 47, lr: 3.02e-03, grad_scale: 32.0
2023-03-27 05:58:01,511 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2464, 2.2091, 1.8871, 2.4466, 2.1861, 1.9425, 2.5711, 2.3543], device='cuda:2'), covar=tensor([0.1169, 0.2086, 0.2588, 0.2238, 0.2226, 0.1439, 0.3307, 0.1432], device='cuda:2'), in_proj_covar=tensor([0.0188, 0.0189, 0.0234, 0.0254, 0.0250, 0.0206, 0.0215, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 05:58:07,364 INFO [finetune.py:976] (2/7) Epoch 25, batch 650, loss[loss=0.157, simple_loss=0.2365, pruned_loss=0.03879, over 4838.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.245, pruned_loss=0.05174, over 917988.18 frames. ], batch size: 47, lr: 3.02e-03, grad_scale: 32.0
2023-03-27 05:58:22,942 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.3996, 3.7905, 4.0775, 4.2564, 4.1768, 3.8624, 4.4708, 1.4538], device='cuda:2'), covar=tensor([0.0750, 0.0798, 0.0816, 0.0973, 0.1254, 0.1694, 0.0690, 0.5711], device='cuda:2'), in_proj_covar=tensor([0.0343, 0.0245, 0.0278, 0.0291, 0.0335, 0.0283, 0.0302, 0.0298], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 05:58:59,108 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.045e+02 1.613e+02 1.882e+02 2.325e+02 5.163e+02, threshold=3.765e+02, percent-clipped=4.0
2023-03-27 05:58:59,124 INFO [finetune.py:976] (2/7) Epoch 25, batch 700, loss[loss=0.2044, simple_loss=0.2685, pruned_loss=0.07016, over 4817.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2465, pruned_loss=0.05158, over 927304.94 frames. ], batch size: 51, lr: 3.02e-03, grad_scale: 32.0
2023-03-27 05:59:16,066 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.43 vs. limit=5.0
2023-03-27 05:59:32,527 INFO [finetune.py:976] (2/7) Epoch 25, batch 750, loss[loss=0.1877, simple_loss=0.2587, pruned_loss=0.0583, over 4827.00 frames. ], tot_loss[loss=0.1741, simple_loss=0.2462, pruned_loss=0.05102, over 931301.43 frames. ], batch size: 47, lr: 3.02e-03, grad_scale: 32.0
2023-03-27 05:59:53,523 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=138248.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 05:59:55,845 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=138251.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 05:59:56,510 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4654, 1.4115, 1.4946, 0.7468, 1.5686, 1.4992, 1.5444, 1.3522], device='cuda:2'), covar=tensor([0.0617, 0.0798, 0.0742, 0.1023, 0.0799, 0.0809, 0.0632, 0.1317], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0137, 0.0140, 0.0120, 0.0126, 0.0139, 0.0138, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 06:00:05,193 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.386e+01 1.513e+02 1.803e+02 2.270e+02 6.862e+02, threshold=3.605e+02, percent-clipped=3.0
2023-03-27 06:00:05,209 INFO [finetune.py:976] (2/7) Epoch 25, batch 800, loss[loss=0.1762, simple_loss=0.258, pruned_loss=0.04723, over 4726.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.2469, pruned_loss=0.0509, over 937009.98 frames. ], batch size: 54, lr: 3.02e-03, grad_scale: 32.0
2023-03-27 06:00:08,345 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=138270.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:00:38,347 INFO [finetune.py:976] (2/7) Epoch 25, batch 850, loss[loss=0.1905, simple_loss=0.2503, pruned_loss=0.06541, over 4776.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2461, pruned_loss=0.05114, over 941267.95 frames. ], batch size: 51, lr: 3.02e-03, grad_scale: 32.0
2023-03-27 06:00:44,518 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=138322.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:00:54,541 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=138331.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:00:56,969 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6993, 1.6174, 1.5691, 1.6307, 1.2306, 3.7067, 1.5344, 2.0169], device='cuda:2'), covar=tensor([0.3252, 0.2328, 0.2055, 0.2365, 0.1661, 0.0158, 0.2398, 0.1125], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0123, 0.0113, 0.0096, 0.0094, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 06:01:17,939 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3804, 2.2820, 1.9816, 2.2288, 2.1990, 2.2055, 2.1938, 2.9156], device='cuda:2'), covar=tensor([0.3357, 0.3762, 0.2959, 0.3397, 0.3264, 0.2520, 0.3173, 0.1671], device='cuda:2'), in_proj_covar=tensor([0.0290, 0.0263, 0.0234, 0.0275, 0.0258, 0.0228, 0.0256, 0.0237], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 06:01:22,090 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0
2023-03-27 06:01:24,848 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.023e+02 1.388e+02 1.706e+02 2.091e+02 3.563e+02, threshold=3.412e+02, percent-clipped=0.0
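The [zipformer.py:1188] lines track stochastic layer dropout per encoder stack: each stack has its own warmup window in batch counts (666.7-1333.3, 1333.3-2000.0, and so on), and this late in training most entries show num_to_drop=0, with occasional lines like num_to_drop=1, layers_to_drop={3}. A minimal sketch of choosing layers to skip on a given batch; the probability schedule is an assumption, not the Zipformer's actual rule:

    import random

    def pick_layers_to_drop(num_layers: int, batch_count: float,
                            warmup_begin: float, warmup_end: float,
                            base_prob: float = 0.075) -> set:
        """Randomly select encoder layers to skip this batch (sketch).

        Within [warmup_begin, warmup_end) the drop probability ramps down;
        long after warmup (batch_count ~ 138000 here) it stays small, so most
        log entries show num_to_drop=0, layers_to_drop=set()."""
        if batch_count < warmup_begin:
            prob = 0.5  # aggressive dropping very early in training
        elif batch_count < warmup_end:
            frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
            prob = 0.5 + frac * (base_prob - 0.5)
        else:
            prob = base_prob
        return {i for i in range(num_layers) if random.random() < prob}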
2023-03-27 06:01:24,864 INFO [finetune.py:976] (2/7) Epoch 25, batch 900, loss[loss=0.2271, simple_loss=0.2823, pruned_loss=0.08595, over 4739.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2434, pruned_loss=0.05024, over 945545.13 frames. ], batch size: 26, lr: 3.02e-03, grad_scale: 32.0
2023-03-27 06:01:29,883 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4241, 2.2876, 1.9846, 2.5711, 2.3454, 2.1037, 2.7485, 2.4793], device='cuda:2'), covar=tensor([0.1408, 0.2165, 0.3079, 0.2267, 0.2754, 0.1776, 0.2413, 0.1753], device='cuda:2'), in_proj_covar=tensor([0.0189, 0.0191, 0.0236, 0.0256, 0.0251, 0.0207, 0.0216, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 06:01:36,348 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=138383.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:01:48,471 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1468, 1.8740, 2.2228, 2.2011, 1.9431, 1.9237, 2.1637, 2.0798], device='cuda:2'), covar=tensor([0.4682, 0.4489, 0.3423, 0.4277, 0.5208, 0.4060, 0.5075, 0.3137], device='cuda:2'), in_proj_covar=tensor([0.0263, 0.0247, 0.0266, 0.0292, 0.0292, 0.0269, 0.0298, 0.0249], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 06:01:49,678 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2257, 2.1575, 1.8349, 1.9929, 2.1576, 1.9241, 2.3367, 2.2565], device='cuda:2'), covar=tensor([0.1253, 0.1861, 0.2741, 0.2246, 0.2438, 0.1654, 0.2575, 0.1689], device='cuda:2'), in_proj_covar=tensor([0.0189, 0.0191, 0.0236, 0.0256, 0.0251, 0.0207, 0.0216, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 06:01:57,678 INFO [finetune.py:976] (2/7) Epoch 25, batch 950, loss[loss=0.2201, simple_loss=0.2903, pruned_loss=0.07493, over 4901.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2417, pruned_loss=0.05014, over 948038.85 frames. ], batch size: 43, lr: 3.02e-03, grad_scale: 32.0
2023-03-27 06:01:59,042 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.4921, 3.0943, 2.8700, 1.4957, 3.0563, 2.5295, 2.3889, 2.6407], device='cuda:2'), covar=tensor([0.0875, 0.0811, 0.1782, 0.2079, 0.1435, 0.2048, 0.1953, 0.1175], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0190, 0.0200, 0.0180, 0.0208, 0.0208, 0.0222, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 06:02:06,965 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6621, 1.1882, 0.9657, 1.5466, 1.9933, 1.2809, 1.4652, 1.5065], device='cuda:2'), covar=tensor([0.1485, 0.2098, 0.1837, 0.1243, 0.1917, 0.1914, 0.1430, 0.2057], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0093, 0.0109, 0.0092, 0.0118, 0.0092, 0.0097, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-27 06:02:12,334 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4083, 1.3167, 1.2503, 1.3440, 1.5943, 1.5426, 1.3854, 1.2262], device='cuda:2'), covar=tensor([0.0323, 0.0335, 0.0668, 0.0321, 0.0262, 0.0465, 0.0339, 0.0446], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0107, 0.0146, 0.0112, 0.0101, 0.0113, 0.0102, 0.0113], device='cuda:2'), out_proj_covar=tensor([7.7920e-05, 8.1785e-05, 1.1426e-04, 8.5953e-05, 7.8575e-05, 8.3898e-05, 7.5978e-05, 8.6114e-05], device='cuda:2')
2023-03-27 06:02:21,111 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4163, 1.4907, 1.9298, 1.8850, 1.6079, 3.5182, 1.4214, 1.5863], device='cuda:2'), covar=tensor([0.0989, 0.1759, 0.1100, 0.0881, 0.1558, 0.0235, 0.1478, 0.1766], device='cuda:2'), in_proj_covar=tensor([0.0073, 0.0081, 0.0072, 0.0075, 0.0090, 0.0079, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 06:02:30,846 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.004e+02 1.447e+02 1.781e+02 2.298e+02 4.571e+02, threshold=3.563e+02, percent-clipped=3.0
2023-03-27 06:02:30,862 INFO [finetune.py:976] (2/7) Epoch 25, batch 1000, loss[loss=0.1701, simple_loss=0.2426, pruned_loss=0.04882, over 4837.00 frames. ], tot_loss[loss=0.1715, simple_loss=0.2424, pruned_loss=0.05026, over 948314.27 frames. ], batch size: 30, lr: 3.02e-03, grad_scale: 32.0
2023-03-27 06:02:59,300 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0180, 1.6124, 2.1166, 2.0390, 1.8457, 1.7977, 1.9314, 2.0227], device='cuda:2'), covar=tensor([0.4122, 0.3969, 0.3137, 0.3504, 0.4906, 0.3928, 0.5063, 0.3071], device='cuda:2'), in_proj_covar=tensor([0.0262, 0.0246, 0.0265, 0.0291, 0.0291, 0.0268, 0.0298, 0.0249], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 06:03:03,759 INFO [finetune.py:976] (2/7) Epoch 25, batch 1050, loss[loss=0.1786, simple_loss=0.251, pruned_loss=0.05308, over 4898.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2422, pruned_loss=0.04924, over 948004.54 frames. ], batch size: 35, lr: 3.02e-03, grad_scale: 32.0
2023-03-27 06:03:04,566 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0
2023-03-27 06:03:25,353 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=138548.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:03:25,978 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6476, 1.6911, 1.5938, 1.7742, 1.3946, 3.0891, 1.4342, 1.8619], device='cuda:2'), covar=tensor([0.2983, 0.2176, 0.1880, 0.2121, 0.1480, 0.0291, 0.2640, 0.1084], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0123, 0.0112, 0.0096, 0.0094, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 06:03:27,155 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=138551.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:03:39,079 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.051e+02 1.570e+02 1.874e+02 2.177e+02 7.699e+02, threshold=3.747e+02, percent-clipped=3.0
2023-03-27 06:03:39,095 INFO [finetune.py:976] (2/7) Epoch 25, batch 1100, loss[loss=0.1789, simple_loss=0.2542, pruned_loss=0.05175, over 4718.00 frames. ], tot_loss[loss=0.1725, simple_loss=0.2442, pruned_loss=0.0504, over 948555.87 frames. ], batch size: 54, lr: 3.02e-03, grad_scale: 32.0
2023-03-27 06:04:17,870 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=138596.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:04:19,631 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=138599.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:04:24,240 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0
2023-03-27 06:04:29,871 INFO [finetune.py:976] (2/7) Epoch 25, batch 1150, loss[loss=0.1803, simple_loss=0.2606, pruned_loss=0.05004, over 4819.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2451, pruned_loss=0.05045, over 950104.29 frames. ], batch size: 39, lr: 3.02e-03, grad_scale: 64.0
2023-03-27 06:04:39,029 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=138626.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:05:03,419 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.832e+01 1.520e+02 1.733e+02 2.222e+02 3.582e+02, threshold=3.466e+02, percent-clipped=0.0
2023-03-27 06:05:03,435 INFO [finetune.py:976] (2/7) Epoch 25, batch 1200, loss[loss=0.1513, simple_loss=0.223, pruned_loss=0.03979, over 4837.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.244, pruned_loss=0.04981, over 949913.06 frames. ], batch size: 47, lr: 3.02e-03, grad_scale: 64.0
2023-03-27 06:05:13,800 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=138678.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:05:14,420 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7652, 1.7634, 1.6861, 1.8065, 1.5349, 3.3378, 1.7108, 2.0205], device='cuda:2'), covar=tensor([0.2676, 0.1914, 0.1816, 0.1916, 0.1342, 0.0284, 0.2305, 0.1016], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0124, 0.0113, 0.0096, 0.0094, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 06:05:15,633 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=138681.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:05:18,721 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=138686.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:05:25,897 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=138697.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:05:28,355 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5946, 1.5815, 1.3509, 1.4759, 1.8449, 1.8009, 1.6066, 1.3360], device='cuda:2'), covar=tensor([0.0386, 0.0327, 0.0683, 0.0366, 0.0207, 0.0465, 0.0360, 0.0468], device='cuda:2'), in_proj_covar=tensor([0.0102, 0.0108, 0.0147, 0.0113, 0.0102, 0.0115, 0.0103, 0.0114], device='cuda:2'), out_proj_covar=tensor([7.8758e-05, 8.2786e-05, 1.1525e-04, 8.6799e-05, 7.9364e-05, 8.4956e-05, 7.6574e-05, 8.6967e-05], device='cuda:2')
2023-03-27 06:05:37,221 INFO [finetune.py:976] (2/7) Epoch 25, batch 1250, loss[loss=0.1578, simple_loss=0.2237, pruned_loss=0.0459, over 4907.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.2427, pruned_loss=0.04956, over 950139.91 frames. ], batch size: 43, lr: 3.02e-03, grad_scale: 64.0
2023-03-27 06:05:55,587 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=138742.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:05:59,139 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=138747.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:06:03,947 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9854, 2.7224, 2.5878, 1.4252, 2.8027, 2.2059, 2.0220, 2.4522], device='cuda:2'), covar=tensor([0.1062, 0.0727, 0.1704, 0.1956, 0.1284, 0.2124, 0.2180, 0.1131], device='cuda:2'), in_proj_covar=tensor([0.0168, 0.0189, 0.0198, 0.0179, 0.0207, 0.0207, 0.0221, 0.0193], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 06:06:05,717 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=138758.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:06:12,321 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.092e+02 1.434e+02 1.700e+02 2.124e+02 3.591e+02, threshold=3.400e+02, percent-clipped=1.0
2023-03-27 06:06:12,337 INFO [finetune.py:976] (2/7) Epoch 25, batch 1300, loss[loss=0.1889, simple_loss=0.2421, pruned_loss=0.06782, over 4824.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2415, pruned_loss=0.04964, over 952542.14 frames. ], batch size: 40, lr: 3.02e-03, grad_scale: 64.0
2023-03-27 06:06:57,378 INFO [finetune.py:976] (2/7) Epoch 25, batch 1350, loss[loss=0.1295, simple_loss=0.2043, pruned_loss=0.02732, over 4764.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2413, pruned_loss=0.04992, over 952464.55 frames. ], batch size: 27, lr: 3.02e-03, grad_scale: 64.0
2023-03-27 06:07:31,276 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.011e+02 1.573e+02 1.999e+02 2.321e+02 4.595e+02, threshold=3.999e+02, percent-clipped=3.0
2023-03-27 06:07:31,292 INFO [finetune.py:976] (2/7) Epoch 25, batch 1400, loss[loss=0.1378, simple_loss=0.2127, pruned_loss=0.03143, over 4795.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2447, pruned_loss=0.05025, over 952675.34 frames. ], batch size: 25, lr: 3.02e-03, grad_scale: 64.0
2023-03-27 06:08:01,082 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=138909.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:08:04,578 INFO [finetune.py:976] (2/7) Epoch 25, batch 1450, loss[loss=0.1485, simple_loss=0.2211, pruned_loss=0.03795, over 4788.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.2465, pruned_loss=0.051, over 952590.93 frames. ], batch size: 29, lr: 3.01e-03, grad_scale: 64.0
2023-03-27 06:08:11,751 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=138926.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:08:28,038 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1238, 2.0176, 2.1384, 1.3862, 2.1869, 2.2528, 2.1901, 1.7444], device='cuda:2'), covar=tensor([0.0596, 0.0750, 0.0743, 0.0924, 0.0675, 0.0707, 0.0651, 0.1209], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0138, 0.0141, 0.0120, 0.0127, 0.0140, 0.0140, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 06:08:38,070 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.768e+01 1.472e+02 1.793e+02 2.201e+02 3.947e+02, threshold=3.587e+02, percent-clipped=0.0
2023-03-27 06:08:38,086 INFO [finetune.py:976] (2/7) Epoch 25, batch 1500, loss[loss=0.2062, simple_loss=0.2817, pruned_loss=0.06532, over 4894.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.249, pruned_loss=0.05218, over 953352.10 frames. ], batch size: 32, lr: 3.01e-03, grad_scale: 64.0
2023-03-27 06:08:41,657 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=138970.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 06:08:44,018 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=138974.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:08:46,504 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=138978.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:08:55,700 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9093, 1.9952, 1.6396, 2.2317, 2.5439, 2.1341, 2.0940, 1.4876], device='cuda:2'), covar=tensor([0.2402, 0.2008, 0.1973, 0.1553, 0.1976, 0.1291, 0.2218, 0.2036], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0212, 0.0215, 0.0198, 0.0245, 0.0192, 0.0218, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 06:09:23,186 INFO [finetune.py:976] (2/7) Epoch 25, batch 1550, loss[loss=0.1426, simple_loss=0.2219, pruned_loss=0.03169, over 4755.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.2492, pruned_loss=0.05199, over 955533.15 frames. ], batch size: 28, lr: 3.01e-03, grad_scale: 64.0
2023-03-27 06:09:34,899 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=139026.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:09:46,774 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=139037.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:09:49,765 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.65 vs. limit=2.0
2023-03-27 06:09:50,280 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=139042.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:09:57,463 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=139053.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:10:04,645 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6351, 1.2619, 0.7706, 1.4234, 2.0336, 1.3277, 1.4772, 1.5504], device='cuda:2'), covar=tensor([0.2080, 0.2804, 0.2480, 0.1841, 0.2398, 0.2625, 0.2021, 0.2929], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0110, 0.0093, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2')
2023-03-27 06:10:05,109 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.522e+01 1.375e+02 1.683e+02 2.077e+02 3.862e+02, threshold=3.366e+02, percent-clipped=3.0
2023-03-27 06:10:05,125 INFO [finetune.py:976] (2/7) Epoch 25, batch 1600, loss[loss=0.1687, simple_loss=0.2469, pruned_loss=0.04525, over 4737.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.2456, pruned_loss=0.05059, over 954571.00 frames. ], batch size: 26, lr: 3.01e-03, grad_scale: 64.0
2023-03-27 06:10:38,952 INFO [finetune.py:976] (2/7) Epoch 25, batch 1650, loss[loss=0.1418, simple_loss=0.2153, pruned_loss=0.03413, over 4924.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2433, pruned_loss=0.05012, over 957250.16 frames. ], batch size: 37, lr: 3.01e-03, grad_scale: 64.0
2023-03-27 06:11:12,570 INFO [finetune.py:976] (2/7) Epoch 25, batch 1700, loss[loss=0.1419, simple_loss=0.2094, pruned_loss=0.03719, over 4825.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2417, pruned_loss=0.04976, over 958691.70 frames. ], batch size: 25, lr: 3.01e-03, grad_scale: 32.0
2023-03-27 06:11:13,175 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.831e+01 1.435e+02 1.759e+02 2.193e+02 3.727e+02, threshold=3.518e+02, percent-clipped=3.0
2023-03-27 06:11:15,830 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0
2023-03-27 06:11:56,394 INFO [finetune.py:976] (2/7) Epoch 25, batch 1750, loss[loss=0.1737, simple_loss=0.2459, pruned_loss=0.05073, over 4888.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2435, pruned_loss=0.05059, over 958278.92 frames. ], batch size: 32, lr: 3.01e-03, grad_scale: 32.0
2023-03-27 06:12:20,713 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=139238.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:12:37,807 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0
2023-03-27 06:12:39,404 INFO [finetune.py:976] (2/7) Epoch 25, batch 1800, loss[loss=0.1843, simple_loss=0.2533, pruned_loss=0.05771, over 4786.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2459, pruned_loss=0.05063, over 957095.85 frames. ], batch size: 29, lr: 3.01e-03, grad_scale: 32.0
2023-03-27 06:12:39,476 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=139265.0, num_to_drop=1, layers_to_drop={3}
2023-03-27 06:12:39,957 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.974e+01 1.553e+02 1.833e+02 2.131e+02 4.022e+02, threshold=3.667e+02, percent-clipped=3.0
2023-03-27 06:12:46,687 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=139276.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:12:55,151 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=139289.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:13:02,261 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=139299.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:13:13,309 INFO [finetune.py:976] (2/7) Epoch 25, batch 1850, loss[loss=0.1599, simple_loss=0.2368, pruned_loss=0.04151, over 4765.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.2468, pruned_loss=0.05084, over 958023.03 frames. ], batch size: 28, lr: 3.01e-03, grad_scale: 32.0
2023-03-27 06:13:15,265 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0106, 2.0358, 1.7551, 2.0754, 1.6467, 4.6657, 1.7606, 2.4009], device='cuda:2'), covar=tensor([0.3067, 0.2267, 0.2055, 0.2254, 0.1438, 0.0116, 0.2348, 0.1051], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0123, 0.0112, 0.0096, 0.0094, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 06:13:27,856 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=139337.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:13:27,883 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=139337.0, num_to_drop=1, layers_to_drop={3}
2023-03-27 06:13:30,867 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=139342.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:13:36,175 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=139350.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:13:38,381 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=139353.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:13:46,546 INFO [finetune.py:976] (2/7) Epoch 25, batch 1900, loss[loss=0.2319, simple_loss=0.2909, pruned_loss=0.08641, over 4891.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2471, pruned_loss=0.0509, over 956977.65 frames. ], batch size: 35, lr: 3.01e-03, grad_scale: 32.0
2023-03-27 06:13:47,140 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.007e+02 1.553e+02 1.835e+02 2.150e+02 3.557e+02, threshold=3.671e+02, percent-clipped=0.0
2023-03-27 06:13:59,662 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=139385.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:14:03,131 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=139390.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:14:10,501 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=139401.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:14:19,883 INFO [finetune.py:976] (2/7) Epoch 25, batch 1950, loss[loss=0.1534, simple_loss=0.2275, pruned_loss=0.03971, over 4788.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2454, pruned_loss=0.05016, over 956037.96 frames. ], batch size: 45, lr: 3.01e-03, grad_scale: 32.0
2023-03-27 06:14:38,476 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1928, 2.2884, 1.8893, 2.2991, 2.1942, 2.1239, 2.1618, 3.0107], device='cuda:2'), covar=tensor([0.3957, 0.4635, 0.3376, 0.4035, 0.4319, 0.2531, 0.4148, 0.1514], device='cuda:2'), in_proj_covar=tensor([0.0291, 0.0264, 0.0236, 0.0276, 0.0259, 0.0230, 0.0257, 0.0238], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 06:14:38,992 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5354, 3.1300, 2.9956, 1.6083, 3.3058, 2.4790, 1.0675, 2.3679], device='cuda:2'), covar=tensor([0.2956, 0.2100, 0.1788, 0.3173, 0.1190, 0.1093, 0.3804, 0.1524], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0178, 0.0161, 0.0130, 0.0160, 0.0123, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-27 06:15:11,616 INFO [finetune.py:976] (2/7) Epoch 25, batch 2000, loss[loss=0.143, simple_loss=0.2182, pruned_loss=0.03391, over 4854.00 frames. ], tot_loss[loss=0.1715, simple_loss=0.2434, pruned_loss=0.04975, over 953211.59 frames. ], batch size: 47, lr: 3.01e-03, grad_scale: 32.0
2023-03-27 06:15:12,709 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.253e+01 1.362e+02 1.721e+02 2.187e+02 3.038e+02, threshold=3.442e+02, percent-clipped=0.0
2023-03-27 06:15:45,238 INFO [finetune.py:976] (2/7) Epoch 25, batch 2050, loss[loss=0.1689, simple_loss=0.2374, pruned_loss=0.05013, over 4823.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.2409, pruned_loss=0.04884, over 955108.61 frames. ], batch size: 30, lr: 3.01e-03, grad_scale: 32.0
2023-03-27 06:15:55,026 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0
2023-03-27 06:16:03,123 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6834, 1.5969, 1.5454, 1.5807, 1.3528, 3.3304, 1.5005, 1.9328], device='cuda:2'), covar=tensor([0.3095, 0.2291, 0.2033, 0.2293, 0.1622, 0.0200, 0.2626, 0.1141], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0124, 0.0113, 0.0096, 0.0094, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 06:16:18,436 INFO [finetune.py:976] (2/7) Epoch 25, batch 2100, loss[loss=0.2119, simple_loss=0.2719, pruned_loss=0.0759, over 4891.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.2406, pruned_loss=0.049, over 954400.42 frames. ], batch size: 32, lr: 3.01e-03, grad_scale: 16.0
2023-03-27 06:16:18,534 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=139565.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 06:16:20,121 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.352e+01 1.449e+02 1.714e+02 2.109e+02 3.824e+02, threshold=3.428e+02, percent-clipped=2.0
2023-03-27 06:16:22,744 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=139571.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:16:30,397 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=139582.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:16:38,028 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=139594.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:16:50,719 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=139613.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:16:52,379 INFO [finetune.py:976] (2/7) Epoch 25, batch 2150, loss[loss=0.1665, simple_loss=0.2344, pruned_loss=0.04927, over 4902.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2449, pruned_loss=0.05083, over 951398.87 frames. ], batch size: 32, lr: 3.01e-03, grad_scale: 16.0
2023-03-27 06:17:09,456 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=139632.0, num_to_drop=1, layers_to_drop={2}
2023-03-27 06:17:09,508 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=139632.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:17:21,659 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=139643.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:17:27,568 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=139645.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 06:17:37,502 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0
2023-03-27 06:17:43,824 INFO [finetune.py:976] (2/7) Epoch 25, batch 2200, loss[loss=0.1692, simple_loss=0.2371, pruned_loss=0.05065, over 4692.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.2477, pruned_loss=0.05156, over 952981.46 frames. ], batch size: 23, lr: 3.01e-03, grad_scale: 16.0
2023-03-27 06:17:45,447 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.088e+02 1.512e+02 1.789e+02 2.111e+02 3.462e+02, threshold=3.578e+02, percent-clipped=1.0
2023-03-27 06:18:07,148 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0
2023-03-27 06:18:14,303 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.47 vs. limit=5.0
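The grad_scale field comes from mixed-precision training: torch.cuda.amp.GradScaler multiplies the loss by a scale factor, doubles the factor after a long stretch of overflow-free steps, and halves it whenever a step produces inf/nan gradients. That is why the value moves from 32.0 up to 64.0 around batch 1150 and back down through 32.0 to 16.0 in the entries above. A standard, self-contained usage sketch in generic PyTorch (not the actual finetune.py loop):

    import torch

    # Minimal AMP loop showing GradScaler's scale dynamics; requires CUDA.
    device = torch.device("cuda")
    model = torch.nn.Linear(10, 1).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    scaler = torch.cuda.amp.GradScaler()  # initial scale 65536.0

    for step in range(100):
        x = torch.randn(8, 10, device=device)
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            loss = model(x).pow(2).mean()
        scaler.scale(loss).backward()  # backward on the scaled loss
        scaler.step(optimizer)         # unscales grads; skips step on inf/nan
        scaler.update()                # halves scale on overflow, doubles it
                                       # after growth_interval clean steps
    print(scaler.get_scale())          # the 'grad_scale' reported in this log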
], batch size: 42, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:18:31,161 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5884, 1.3446, 0.8264, 1.5746, 2.0275, 1.4475, 1.5230, 1.7670], device='cuda:2'), covar=tensor([0.1859, 0.2441, 0.2180, 0.1444, 0.2102, 0.2428, 0.1716, 0.2275], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0093, 0.0109, 0.0091, 0.0118, 0.0093, 0.0097, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-27 06:18:43,834 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8408, 1.1837, 1.8921, 1.8544, 1.6541, 1.6195, 1.7508, 1.7896], device='cuda:2'), covar=tensor([0.3859, 0.3960, 0.3129, 0.3441, 0.4549, 0.3790, 0.4444, 0.3149], device='cuda:2'), in_proj_covar=tensor([0.0262, 0.0245, 0.0265, 0.0291, 0.0291, 0.0267, 0.0297, 0.0249], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:18:50,822 INFO [finetune.py:976] (2/7) Epoch 25, batch 2300, loss[loss=0.1734, simple_loss=0.2575, pruned_loss=0.04469, over 4798.00 frames. ], tot_loss[loss=0.1758, simple_loss=0.2489, pruned_loss=0.05138, over 951594.60 frames. ], batch size: 41, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:18:52,007 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.130e+02 1.533e+02 1.822e+02 2.118e+02 3.916e+02, threshold=3.645e+02, percent-clipped=1.0 2023-03-27 06:18:53,808 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=139769.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:19:01,491 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.46 vs. limit=2.0 2023-03-27 06:19:06,798 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=139789.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:19:16,886 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=139804.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:19:18,128 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=139806.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:19:23,928 INFO [finetune.py:976] (2/7) Epoch 25, batch 2350, loss[loss=0.1902, simple_loss=0.2601, pruned_loss=0.06011, over 4787.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2467, pruned_loss=0.05084, over 951403.16 frames. 
], batch size: 29, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:19:35,060 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=139830.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:19:41,103 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1416, 2.2196, 2.3656, 1.0608, 2.7351, 2.9850, 2.4777, 2.0881], device='cuda:2'), covar=tensor([0.0835, 0.0817, 0.0495, 0.0754, 0.0505, 0.0582, 0.0448, 0.0736], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0148, 0.0128, 0.0122, 0.0130, 0.0129, 0.0141, 0.0147], device='cuda:2'), out_proj_covar=tensor([8.9260e-05, 1.0646e-04, 9.1087e-05, 8.6100e-05, 9.1098e-05, 9.1912e-05, 1.0058e-04, 1.0530e-04], device='cuda:2') 2023-03-27 06:19:54,883 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=139850.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:19:54,893 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=139850.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:20:07,501 INFO [finetune.py:976] (2/7) Epoch 25, batch 2400, loss[loss=0.2065, simple_loss=0.2628, pruned_loss=0.0751, over 4828.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2443, pruned_loss=0.05023, over 952538.22 frames. ], batch size: 39, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:20:07,631 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=139865.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:20:10,610 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.173e+01 1.419e+02 1.768e+02 2.081e+02 3.267e+02, threshold=3.536e+02, percent-clipped=0.0 2023-03-27 06:20:10,748 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=139867.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:20:35,958 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=139894.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:20:47,267 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=139911.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:20:50,037 INFO [finetune.py:976] (2/7) Epoch 25, batch 2450, loss[loss=0.1264, simple_loss=0.1998, pruned_loss=0.02653, over 4755.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2417, pruned_loss=0.04962, over 955153.26 frames. 
], batch size: 28, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:20:57,366 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=139927.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:21:01,395 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=139932.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 06:21:05,442 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=139938.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:21:08,390 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=139942.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:21:10,190 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6119, 1.5540, 1.4988, 1.5820, 1.0110, 3.5121, 1.3444, 1.8389], device='cuda:2'), covar=tensor([0.3103, 0.2334, 0.2027, 0.2252, 0.1733, 0.0208, 0.2515, 0.1131], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0116, 0.0120, 0.0123, 0.0112, 0.0096, 0.0094, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 06:21:10,192 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=139945.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:21:22,671 INFO [finetune.py:976] (2/7) Epoch 25, batch 2500, loss[loss=0.2027, simple_loss=0.2767, pruned_loss=0.06437, over 4846.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.243, pruned_loss=0.05022, over 956276.96 frames. ], batch size: 47, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:21:23,976 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. limit=2.0 2023-03-27 06:21:24,360 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.029e+02 1.523e+02 1.884e+02 2.422e+02 3.755e+02, threshold=3.768e+02, percent-clipped=3.0 2023-03-27 06:21:32,354 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=139980.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:21:42,134 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=139993.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:21:57,548 INFO [finetune.py:976] (2/7) Epoch 25, batch 2550, loss[loss=0.2457, simple_loss=0.3083, pruned_loss=0.0915, over 4806.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2459, pruned_loss=0.05098, over 956529.64 frames. 
], batch size: 41, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:22:17,072 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2474, 2.1724, 1.8980, 2.2845, 2.1912, 1.9577, 2.4766, 2.3446], device='cuda:2'), covar=tensor([0.1093, 0.1908, 0.2480, 0.2133, 0.2130, 0.1337, 0.2753, 0.1314], device='cuda:2'), in_proj_covar=tensor([0.0188, 0.0191, 0.0236, 0.0254, 0.0250, 0.0206, 0.0215, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:22:33,237 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3531, 2.2412, 1.7425, 2.2168, 2.1591, 1.9081, 2.4644, 2.3139], device='cuda:2'), covar=tensor([0.1284, 0.2068, 0.2986, 0.2746, 0.2570, 0.1745, 0.3290, 0.1650], device='cuda:2'), in_proj_covar=tensor([0.0189, 0.0191, 0.0236, 0.0254, 0.0250, 0.0206, 0.0216, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:22:36,077 INFO [finetune.py:976] (2/7) Epoch 25, batch 2600, loss[loss=0.1747, simple_loss=0.2602, pruned_loss=0.04457, over 4813.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.247, pruned_loss=0.05139, over 953404.79 frames. ], batch size: 40, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:22:42,049 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.019e+02 1.547e+02 1.969e+02 2.320e+02 4.703e+02, threshold=3.938e+02, percent-clipped=1.0 2023-03-27 06:23:14,151 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=140103.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:23:16,050 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-27 06:23:19,355 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=140110.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:23:22,259 INFO [finetune.py:976] (2/7) Epoch 25, batch 2650, loss[loss=0.2016, simple_loss=0.2735, pruned_loss=0.06489, over 4763.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.2468, pruned_loss=0.05095, over 953766.43 frames. ], batch size: 27, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:23:28,792 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140125.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:23:41,818 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140145.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:23:51,792 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140160.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:23:53,482 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140162.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:23:55,145 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=140164.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:23:55,633 INFO [finetune.py:976] (2/7) Epoch 25, batch 2700, loss[loss=0.154, simple_loss=0.219, pruned_loss=0.04454, over 4676.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2445, pruned_loss=0.0497, over 950657.75 frames. 
], batch size: 23, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:23:56,849 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.003e+02 1.476e+02 1.708e+02 2.136e+02 4.297e+02, threshold=3.417e+02, percent-clipped=1.0 2023-03-27 06:23:59,422 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=140171.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:24:01,228 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1920, 1.9952, 1.5446, 0.7426, 1.7422, 1.9651, 1.7365, 1.9186], device='cuda:2'), covar=tensor([0.0699, 0.0715, 0.1195, 0.1563, 0.1048, 0.1583, 0.1743, 0.0652], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0191, 0.0199, 0.0180, 0.0209, 0.0210, 0.0222, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:24:22,843 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140206.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:24:28,675 INFO [finetune.py:976] (2/7) Epoch 25, batch 2750, loss[loss=0.1388, simple_loss=0.2184, pruned_loss=0.02961, over 4757.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2429, pruned_loss=0.04967, over 951067.31 frames. ], batch size: 26, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:24:36,541 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=140227.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:24:43,709 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=140238.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:24:57,685 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3694, 2.0735, 2.8057, 1.6452, 2.4429, 2.5595, 1.9252, 2.7296], device='cuda:2'), covar=tensor([0.1334, 0.2035, 0.1514, 0.2193, 0.0833, 0.1457, 0.2710, 0.0787], device='cuda:2'), in_proj_covar=tensor([0.0194, 0.0208, 0.0193, 0.0192, 0.0176, 0.0214, 0.0217, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:24:58,270 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=140259.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:25:01,787 INFO [finetune.py:976] (2/7) Epoch 25, batch 2800, loss[loss=0.1367, simple_loss=0.2099, pruned_loss=0.03172, over 4897.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2396, pruned_loss=0.04852, over 952524.72 frames. 
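
In every optim.py:369 entry the clipping threshold equals clipping_scale times the logged median gradient norm: the five numbers read as (min, 25%, median, 75%, max) of recently observed norms, and percent-clipped reports how often the threshold actually bit. For the batch 2700 entry above, 2.0 * 1.708e+02 = 3.416e+02, matching the printed threshold up to rounding:

    # Assumed reading of the quartile line, consistent with every entry here.
    quartiles = (1.003e+02, 1.476e+02, 1.708e+02, 2.136e+02, 4.297e+02)
    clipping_scale = 2.0
    threshold = clipping_scale * quartiles[2]  # 2 * median
    assert abs(threshold - 3.417e+02) < 0.2    # threshold printed above
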
], batch size: 32, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:25:02,938 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.021e+02 1.479e+02 1.751e+02 2.221e+02 3.486e+02, threshold=3.502e+02, percent-clipped=1.0 2023-03-27 06:25:10,749 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=140275.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:25:12,066 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.0573, 0.9694, 0.9867, 0.3875, 1.0056, 1.1340, 1.2348, 1.0049], device='cuda:2'), covar=tensor([0.0776, 0.0565, 0.0527, 0.0487, 0.0541, 0.0714, 0.0418, 0.0647], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0149, 0.0128, 0.0123, 0.0131, 0.0130, 0.0142, 0.0148], device='cuda:2'), out_proj_covar=tensor([8.9783e-05, 1.0672e-04, 9.1397e-05, 8.6372e-05, 9.1479e-05, 9.2145e-05, 1.0116e-04, 1.0583e-04], device='cuda:2') 2023-03-27 06:25:22,267 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=140286.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:25:29,995 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5528, 1.0712, 0.8411, 1.3647, 2.0093, 0.7573, 1.2983, 1.3444], device='cuda:2'), covar=tensor([0.1555, 0.2234, 0.1730, 0.1273, 0.1929, 0.1997, 0.1488, 0.2193], device='cuda:2'), in_proj_covar=tensor([0.0088, 0.0093, 0.0109, 0.0091, 0.0118, 0.0092, 0.0097, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-27 06:25:53,836 INFO [finetune.py:976] (2/7) Epoch 25, batch 2850, loss[loss=0.1486, simple_loss=0.2213, pruned_loss=0.03795, over 4824.00 frames. ], tot_loss[loss=0.1659, simple_loss=0.2373, pruned_loss=0.04731, over 954864.23 frames. ], batch size: 30, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:25:56,968 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=140320.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:26:11,146 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0 2023-03-27 06:26:27,557 INFO [finetune.py:976] (2/7) Epoch 25, batch 2900, loss[loss=0.2052, simple_loss=0.2691, pruned_loss=0.07071, over 4883.00 frames. ], tot_loss[loss=0.1699, simple_loss=0.2416, pruned_loss=0.04916, over 955502.66 frames. ], batch size: 32, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:26:28,756 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.105e+02 1.583e+02 1.866e+02 2.190e+02 4.311e+02, threshold=3.732e+02, percent-clipped=1.0 2023-03-27 06:26:30,235 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. 
limit=2.0 2023-03-27 06:26:32,920 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4739, 1.3682, 1.8432, 3.1040, 1.9647, 2.3110, 0.8904, 2.6549], device='cuda:2'), covar=tensor([0.1869, 0.1621, 0.1557, 0.0846, 0.0999, 0.1647, 0.2014, 0.0562], device='cuda:2'), in_proj_covar=tensor([0.0098, 0.0115, 0.0132, 0.0163, 0.0101, 0.0135, 0.0123, 0.0099], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 06:26:41,970 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=140387.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:26:59,279 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7477, 1.6487, 1.4395, 1.8260, 2.2317, 1.8884, 1.6607, 1.3921], device='cuda:2'), covar=tensor([0.2047, 0.1894, 0.1876, 0.1480, 0.1478, 0.1163, 0.2119, 0.1789], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0212, 0.0215, 0.0199, 0.0246, 0.0192, 0.0217, 0.0206], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:27:01,453 INFO [finetune.py:976] (2/7) Epoch 25, batch 2950, loss[loss=0.1769, simple_loss=0.2503, pruned_loss=0.0517, over 4875.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2444, pruned_loss=0.05011, over 955678.53 frames. ], batch size: 34, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:27:07,561 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=140425.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:27:21,217 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=140445.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:27:23,061 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=140448.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:27:30,139 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140459.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:27:30,779 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=140460.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:27:31,847 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6173, 3.6238, 3.4228, 1.5922, 3.7267, 2.7361, 0.9397, 2.5302], device='cuda:2'), covar=tensor([0.2239, 0.1389, 0.1412, 0.3242, 0.0972, 0.1009, 0.3985, 0.1372], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0178, 0.0161, 0.0130, 0.0160, 0.0123, 0.0147, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 06:27:32,484 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=140462.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:27:34,686 INFO [finetune.py:976] (2/7) Epoch 25, batch 3000, loss[loss=0.1467, simple_loss=0.2156, pruned_loss=0.03893, over 4719.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2449, pruned_loss=0.05049, over 953470.66 frames. ], batch size: 23, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:27:34,686 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-27 06:27:48,789 INFO [finetune.py:1010] (2/7) Epoch 25, validation: loss=0.1571, simple_loss=0.2254, pruned_loss=0.04443, over 2265189.00 frames. 
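
The validation entry obeys the same decomposition as the training losses (0.5 * 0.2254 + 0.04443 reproduces the logged 0.1571). Meanwhile the lr field creeps down from 3.01e-03 to 2.99e-03 across this section, consistent with icefall's Eden schedule, which damps the base rate by both a batch factor and an epoch factor. A sketch, assuming base_lr=0.004, lr_batches=1e5 and lr_epochs=100 as configured for this run, with a fractional epoch value:

    def eden_lr(batch, epoch, base_lr=0.004, lr_batches=1.0e5, lr_epochs=100.0):
        # lr = base_lr * ((b^2+B^2)/B^2)^-0.25 * ((e^2+E^2)/E^2)^-0.25
        return (base_lr
                * ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
                * ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25)

    print(f"{eden_lr(140_500, 24.5):.2e}")  # ~3.00e-03, as logged around here
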
2023-03-27 06:27:48,789 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB 2023-03-27 06:27:49,503 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140466.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:27:49,624 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.61 vs. limit=2.0 2023-03-27 06:27:50,498 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.023e+02 1.568e+02 1.888e+02 2.214e+02 4.503e+02, threshold=3.776e+02, percent-clipped=3.0 2023-03-27 06:27:59,535 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=140473.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:28:14,351 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=140489.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:28:20,345 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=140493.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:28:23,959 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-27 06:28:29,180 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=140506.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:28:30,342 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=140508.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:28:31,019 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=140509.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:28:31,590 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=140510.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:28:34,543 INFO [finetune.py:976] (2/7) Epoch 25, batch 3050, loss[loss=0.1911, simple_loss=0.2524, pruned_loss=0.06492, over 4810.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2465, pruned_loss=0.05073, over 953760.13 frames. ], batch size: 33, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:28:49,778 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1301, 2.0177, 1.5055, 0.6623, 1.6450, 1.8130, 1.6654, 1.8328], device='cuda:2'), covar=tensor([0.0841, 0.0663, 0.1424, 0.1850, 0.1254, 0.1768, 0.2059, 0.0776], device='cuda:2'), in_proj_covar=tensor([0.0171, 0.0192, 0.0200, 0.0181, 0.0210, 0.0211, 0.0224, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:28:58,621 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=140550.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:29:00,945 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=140554.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:29:08,053 INFO [finetune.py:976] (2/7) Epoch 25, batch 3100, loss[loss=0.1665, simple_loss=0.2438, pruned_loss=0.04464, over 4818.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2443, pruned_loss=0.05008, over 953289.51 frames. ], batch size: 39, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:29:09,242 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.313e+01 1.495e+02 1.767e+02 2.180e+02 4.499e+02, threshold=3.535e+02, percent-clipped=1.0 2023-03-27 06:29:12,208 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=140570.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:29:29,011 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. 
limit=2.0 2023-03-27 06:29:42,051 INFO [finetune.py:976] (2/7) Epoch 25, batch 3150, loss[loss=0.1373, simple_loss=0.22, pruned_loss=0.02726, over 4770.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2415, pruned_loss=0.04933, over 952923.83 frames. ], batch size: 28, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:29:42,119 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140615.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:30:03,342 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6795, 2.7906, 2.7442, 1.9945, 2.7566, 3.0723, 2.9610, 2.3289], device='cuda:2'), covar=tensor([0.0599, 0.0567, 0.0641, 0.0841, 0.0652, 0.0583, 0.0590, 0.1060], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0137, 0.0141, 0.0120, 0.0127, 0.0139, 0.0139, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:30:15,049 INFO [finetune.py:976] (2/7) Epoch 25, batch 3200, loss[loss=0.1799, simple_loss=0.2444, pruned_loss=0.05769, over 4815.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2383, pruned_loss=0.04867, over 953360.98 frames. ], batch size: 38, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:30:16,220 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.929e+01 1.486e+02 1.750e+02 2.144e+02 4.466e+02, threshold=3.500e+02, percent-clipped=2.0 2023-03-27 06:31:01,558 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0931, 1.3221, 0.8208, 1.9235, 2.3495, 1.7394, 1.6135, 1.9279], device='cuda:2'), covar=tensor([0.1367, 0.1950, 0.1954, 0.1092, 0.1788, 0.1941, 0.1301, 0.1800], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0093, 0.0109, 0.0092, 0.0118, 0.0093, 0.0097, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-27 06:31:06,559 INFO [finetune.py:976] (2/7) Epoch 25, batch 3250, loss[loss=0.1855, simple_loss=0.2597, pruned_loss=0.05563, over 4922.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2389, pruned_loss=0.04871, over 951684.75 frames. ], batch size: 38, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:31:11,472 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=140723.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:31:25,183 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140743.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:31:35,341 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=140759.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:31:39,361 INFO [finetune.py:976] (2/7) Epoch 25, batch 3300, loss[loss=0.2472, simple_loss=0.3151, pruned_loss=0.08958, over 4821.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2423, pruned_loss=0.04968, over 949430.46 frames. 
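
The scaling.py:679 lines are the Whiten diagnostic: channels are split into num_groups groups, each group's covariance is compared against a multiple of the identity, and only a metric above the limit would trigger the corrective penalty; in the entries above the activations stay comfortably below limit=2.0. A sketch of such a metric, under the assumption that it is the standard trace(C^2) * c / trace(C)^2 anisotropy ratio, which equals 1.0 for perfectly white features:

    import torch

    def whitening_metric(x, num_groups):
        # x: (num_frames, num_channels); mean anisotropy over channel groups.
        num_frames, num_channels = x.shape
        cpg = num_channels // num_groups  # channels per group
        x = x.reshape(num_frames, num_groups, cpg).transpose(0, 1)
        covar = torch.matmul(x.transpose(1, 2), x) / num_frames  # (groups, cpg, cpg)
        trace = covar.diagonal(dim1=1, dim2=2).sum(-1)           # trace(C) per group
        return (((covar ** 2).sum(dim=(1, 2)) * cpg) / trace ** 2).mean().item()

    print(whitening_metric(torch.randn(400, 192), num_groups=8))  # close to 1.0
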
], batch size: 51, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:31:40,537 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=140766.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:31:41,064 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.042e+02 1.629e+02 1.945e+02 2.397e+02 4.021e+02, threshold=3.889e+02, percent-clipped=5.0 2023-03-27 06:31:50,276 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3956, 1.3514, 1.9634, 1.7297, 1.5344, 3.4249, 1.3489, 1.5636], device='cuda:2'), covar=tensor([0.1017, 0.1847, 0.1180, 0.0960, 0.1634, 0.0235, 0.1493, 0.1736], device='cuda:2'), in_proj_covar=tensor([0.0074, 0.0082, 0.0073, 0.0076, 0.0091, 0.0080, 0.0085, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 06:31:53,064 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=140784.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:32:08,086 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=140807.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:32:09,340 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=140809.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:32:12,393 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=140814.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:32:12,930 INFO [finetune.py:976] (2/7) Epoch 25, batch 3350, loss[loss=0.1981, simple_loss=0.2786, pruned_loss=0.05875, over 4823.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2451, pruned_loss=0.05036, over 952500.34 frames. ], batch size: 33, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:32:33,951 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140845.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:32:44,378 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.95 vs. limit=2.0 2023-03-27 06:32:44,857 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.2738, 4.5726, 4.8148, 5.1566, 5.0240, 4.6748, 5.4288, 1.7245], device='cuda:2'), covar=tensor([0.0783, 0.0838, 0.0882, 0.0902, 0.1183, 0.1555, 0.0549, 0.5906], device='cuda:2'), in_proj_covar=tensor([0.0350, 0.0247, 0.0283, 0.0296, 0.0337, 0.0286, 0.0307, 0.0301], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:32:46,606 INFO [finetune.py:976] (2/7) Epoch 25, batch 3400, loss[loss=0.1614, simple_loss=0.2372, pruned_loss=0.04281, over 4868.00 frames. ], tot_loss[loss=0.1741, simple_loss=0.2467, pruned_loss=0.05075, over 953358.13 frames. 
], batch size: 34, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:32:46,674 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140865.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:32:47,794 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.030e+02 1.564e+02 1.878e+02 2.236e+02 3.278e+02, threshold=3.756e+02, percent-clipped=0.0 2023-03-27 06:32:49,711 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=140870.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:33:20,053 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3855, 2.2659, 1.7518, 2.3104, 2.3381, 2.0290, 2.5652, 2.4008], device='cuda:2'), covar=tensor([0.1369, 0.1981, 0.3033, 0.2284, 0.2434, 0.1734, 0.2477, 0.1781], device='cuda:2'), in_proj_covar=tensor([0.0187, 0.0189, 0.0234, 0.0251, 0.0247, 0.0204, 0.0212, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:33:39,160 INFO [finetune.py:976] (2/7) Epoch 25, batch 3450, loss[loss=0.1293, simple_loss=0.185, pruned_loss=0.03685, over 4191.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2458, pruned_loss=0.05019, over 953651.36 frames. ], batch size: 18, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:33:39,273 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=140915.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:33:40,450 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=140917.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:33:51,926 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.55 vs. limit=2.0 2023-03-27 06:34:01,683 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=140947.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:34:11,812 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=140963.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:34:12,941 INFO [finetune.py:976] (2/7) Epoch 25, batch 3500, loss[loss=0.1933, simple_loss=0.2511, pruned_loss=0.0678, over 4422.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2452, pruned_loss=0.0504, over 954388.55 frames. ], batch size: 19, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:34:14,173 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.063e+02 1.468e+02 1.748e+02 2.204e+02 3.629e+02, threshold=3.496e+02, percent-clipped=0.0 2023-03-27 06:34:20,904 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=140978.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:34:41,900 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=141008.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:34:46,048 INFO [finetune.py:976] (2/7) Epoch 25, batch 3550, loss[loss=0.1223, simple_loss=0.1929, pruned_loss=0.02584, over 4780.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2422, pruned_loss=0.04949, over 954605.80 frames. ], batch size: 29, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:35:04,135 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=141043.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:35:19,346 INFO [finetune.py:976] (2/7) Epoch 25, batch 3600, loss[loss=0.2052, simple_loss=0.2738, pruned_loss=0.06825, over 4820.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2402, pruned_loss=0.04906, over 954877.51 frames. 
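
The attn_weights_entropy tensors (zipformer.py:2441) are periodic diagnostics of how spread out each attention head is, printed together with covariance statistics of the projection activations; small entries mean sharply focused heads, large ones near-uniform attention. A sketch of the entropy part, assuming row-normalized attention weights:

    import torch

    def attn_entropy(attn_weights):
        # attn_weights: (num_heads, tgt_len, src_len), rows summing to 1.
        # Returns the mean entropy per head; a uniform row over n source
        # positions attains the maximum log(n).
        eps = 1.0e-20
        entropy = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
        return entropy.mean(dim=-1)

    w = torch.softmax(torch.randn(8, 50, 50), dim=-1)
    print(attn_entropy(w))  # 8 values, bounded above by log(50) ~= 3.9
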
], batch size: 39, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:35:19,510 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.68 vs. limit=2.0 2023-03-27 06:35:20,525 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.001e+02 1.456e+02 1.796e+02 2.356e+02 3.995e+02, threshold=3.592e+02, percent-clipped=1.0 2023-03-27 06:35:22,986 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3041, 1.5170, 2.1893, 1.6689, 1.6990, 3.9277, 1.4265, 1.6947], device='cuda:2'), covar=tensor([0.1072, 0.1776, 0.1323, 0.0975, 0.1538, 0.0236, 0.1494, 0.1723], device='cuda:2'), in_proj_covar=tensor([0.0074, 0.0082, 0.0073, 0.0076, 0.0091, 0.0081, 0.0085, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 06:35:28,403 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=141079.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:35:36,624 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=141091.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:36:00,469 INFO [finetune.py:976] (2/7) Epoch 25, batch 3650, loss[loss=0.1409, simple_loss=0.2212, pruned_loss=0.03025, over 4815.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2412, pruned_loss=0.0492, over 953252.52 frames. ], batch size: 25, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:36:30,909 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=141143.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:36:32,109 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=141145.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:36:34,519 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6463, 3.7213, 3.4741, 1.7027, 3.8429, 2.8746, 1.0845, 2.5844], device='cuda:2'), covar=tensor([0.2317, 0.2416, 0.1673, 0.3536, 0.1110, 0.1050, 0.4169, 0.1596], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0179, 0.0162, 0.0131, 0.0161, 0.0124, 0.0148, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 06:36:46,082 INFO [finetune.py:976] (2/7) Epoch 25, batch 3700, loss[loss=0.2148, simple_loss=0.2817, pruned_loss=0.07398, over 4819.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2449, pruned_loss=0.05048, over 953351.27 frames. ], batch size: 40, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:36:46,158 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=141165.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:36:46,176 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=141165.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:36:47,288 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.158e+02 1.708e+02 2.029e+02 2.382e+02 3.628e+02, threshold=4.058e+02, percent-clipped=1.0 2023-03-27 06:36:54,765 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.86 vs. 
limit=2.0 2023-03-27 06:37:02,143 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9858, 1.7262, 2.3250, 1.5263, 2.1846, 2.3467, 1.6519, 2.4467], device='cuda:2'), covar=tensor([0.1224, 0.2026, 0.1431, 0.1889, 0.0754, 0.1189, 0.2683, 0.0684], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0208, 0.0192, 0.0191, 0.0174, 0.0213, 0.0217, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:37:03,880 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=141193.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:37:11,212 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=141204.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:37:18,590 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=141213.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:37:19,743 INFO [finetune.py:976] (2/7) Epoch 25, batch 3750, loss[loss=0.1795, simple_loss=0.2492, pruned_loss=0.05486, over 4901.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2448, pruned_loss=0.05037, over 952650.54 frames. ], batch size: 43, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:37:33,041 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-27 06:37:44,738 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7890, 1.7940, 1.5941, 1.9873, 2.2972, 1.9719, 1.8245, 1.4780], device='cuda:2'), covar=tensor([0.2203, 0.2118, 0.1956, 0.1668, 0.1921, 0.1238, 0.2340, 0.1976], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0211, 0.0215, 0.0197, 0.0246, 0.0192, 0.0218, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:37:50,949 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3373, 1.2055, 1.3885, 0.7533, 1.3057, 1.3594, 1.2974, 1.1916], device='cuda:2'), covar=tensor([0.0547, 0.0787, 0.0669, 0.0892, 0.0884, 0.0680, 0.0675, 0.1134], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0137, 0.0141, 0.0120, 0.0127, 0.0138, 0.0139, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:37:52,662 INFO [finetune.py:976] (2/7) Epoch 25, batch 3800, loss[loss=0.1815, simple_loss=0.2474, pruned_loss=0.05785, over 4924.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2464, pruned_loss=0.05069, over 953172.29 frames. ], batch size: 33, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:37:54,343 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.018e+02 1.508e+02 1.827e+02 2.217e+02 6.513e+02, threshold=3.654e+02, percent-clipped=2.0 2023-03-27 06:37:58,030 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=141273.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:38:19,445 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=141303.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:38:32,077 INFO [finetune.py:976] (2/7) Epoch 25, batch 3850, loss[loss=0.124, simple_loss=0.1987, pruned_loss=0.02465, over 4810.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2452, pruned_loss=0.05044, over 952386.01 frames. 
], batch size: 25, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:38:49,946 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=141329.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:39:04,981 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9071, 1.2694, 0.7832, 1.6342, 2.1607, 1.1423, 1.5710, 1.5090], device='cuda:2'), covar=tensor([0.1315, 0.2003, 0.1832, 0.1128, 0.1814, 0.1786, 0.1310, 0.1886], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0109, 0.0092, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 06:39:05,044 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8847, 1.0860, 1.9921, 1.9045, 1.7281, 1.6768, 1.7603, 1.9223], device='cuda:2'), covar=tensor([0.3784, 0.4143, 0.3512, 0.3695, 0.5169, 0.3974, 0.4573, 0.3038], device='cuda:2'), in_proj_covar=tensor([0.0264, 0.0247, 0.0267, 0.0292, 0.0293, 0.0269, 0.0299, 0.0251], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:39:16,920 INFO [finetune.py:976] (2/7) Epoch 25, batch 3900, loss[loss=0.1648, simple_loss=0.2258, pruned_loss=0.05188, over 4716.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.2427, pruned_loss=0.04957, over 953029.60 frames. ], batch size: 23, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:39:18,105 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.088e+02 1.504e+02 1.773e+02 2.110e+02 6.012e+02, threshold=3.546e+02, percent-clipped=1.0 2023-03-27 06:39:26,407 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=141379.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:39:33,586 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=141390.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:39:49,670 INFO [finetune.py:976] (2/7) Epoch 25, batch 3950, loss[loss=0.1711, simple_loss=0.2373, pruned_loss=0.05246, over 4897.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2405, pruned_loss=0.04887, over 953729.29 frames. 
], batch size: 32, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:39:51,028 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=141417.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 06:39:58,915 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=141427.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:40:00,163 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6800, 1.5444, 1.9446, 1.2569, 1.7164, 1.9581, 1.4534, 2.1073], device='cuda:2'), covar=tensor([0.1192, 0.2176, 0.1276, 0.1814, 0.0842, 0.1228, 0.2799, 0.0761], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0207, 0.0191, 0.0191, 0.0174, 0.0213, 0.0216, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:40:05,613 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1096, 0.9871, 0.9470, 0.4339, 0.9223, 1.1137, 1.1574, 0.9637], device='cuda:2'), covar=tensor([0.0911, 0.0638, 0.0581, 0.0534, 0.0551, 0.0632, 0.0402, 0.0687], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0148, 0.0127, 0.0122, 0.0130, 0.0130, 0.0141, 0.0148], device='cuda:2'), out_proj_covar=tensor([8.9195e-05, 1.0641e-04, 9.0953e-05, 8.6108e-05, 9.1032e-05, 9.2162e-05, 1.0026e-04, 1.0576e-04], device='cuda:2') 2023-03-27 06:40:07,835 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=141441.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:40:23,344 INFO [finetune.py:976] (2/7) Epoch 25, batch 4000, loss[loss=0.1625, simple_loss=0.2438, pruned_loss=0.04063, over 4738.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2398, pruned_loss=0.04872, over 952363.78 frames. ], batch size: 59, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:40:23,425 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=141465.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:40:24,523 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.071e+02 1.514e+02 1.750e+02 2.113e+02 3.817e+02, threshold=3.500e+02, percent-clipped=1.0 2023-03-27 06:40:33,188 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=141478.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 06:40:45,147 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5301, 1.4367, 1.4008, 1.4480, 1.2767, 3.3548, 1.4661, 1.8990], device='cuda:2'), covar=tensor([0.4128, 0.3074, 0.2376, 0.2845, 0.1771, 0.0288, 0.2668, 0.1189], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0116, 0.0119, 0.0123, 0.0112, 0.0095, 0.0094, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 06:40:46,342 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=141499.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:40:48,237 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=141502.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:40:55,311 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=141513.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:40:56,982 INFO [finetune.py:976] (2/7) Epoch 25, batch 4050, loss[loss=0.1244, simple_loss=0.1956, pruned_loss=0.02662, over 4718.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2419, pruned_loss=0.04927, over 953362.37 frames. 
], batch size: 23, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:41:27,612 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1897, 1.3184, 1.2711, 0.6904, 1.2787, 1.4665, 1.5571, 1.2424], device='cuda:2'), covar=tensor([0.0828, 0.0528, 0.0527, 0.0482, 0.0474, 0.0654, 0.0294, 0.0630], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0148, 0.0127, 0.0122, 0.0130, 0.0129, 0.0141, 0.0148], device='cuda:2'), out_proj_covar=tensor([8.8984e-05, 1.0627e-04, 9.0685e-05, 8.5961e-05, 9.1023e-05, 9.1986e-05, 1.0036e-04, 1.0572e-04], device='cuda:2') 2023-03-27 06:41:45,480 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.6111, 1.5916, 1.5011, 0.8007, 1.6674, 1.8170, 1.7876, 1.4147], device='cuda:2'), covar=tensor([0.0865, 0.0600, 0.0549, 0.0594, 0.0494, 0.0621, 0.0352, 0.0693], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0147, 0.0127, 0.0122, 0.0130, 0.0129, 0.0140, 0.0148], device='cuda:2'), out_proj_covar=tensor([8.8756e-05, 1.0600e-04, 9.0523e-05, 8.5732e-05, 9.0768e-05, 9.1746e-05, 1.0007e-04, 1.0547e-04], device='cuda:2') 2023-03-27 06:41:49,013 INFO [finetune.py:976] (2/7) Epoch 25, batch 4100, loss[loss=0.1485, simple_loss=0.2074, pruned_loss=0.04479, over 4246.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2458, pruned_loss=0.05096, over 953074.55 frames. ], batch size: 18, lr: 3.00e-03, grad_scale: 32.0 2023-03-27 06:41:50,184 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.144e+02 1.605e+02 1.887e+02 2.173e+02 5.231e+02, threshold=3.774e+02, percent-clipped=3.0 2023-03-27 06:41:54,402 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=141573.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:42:14,758 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=141603.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:42:22,403 INFO [finetune.py:976] (2/7) Epoch 25, batch 4150, loss[loss=0.1403, simple_loss=0.214, pruned_loss=0.03332, over 3950.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2461, pruned_loss=0.0509, over 951742.70 frames. ], batch size: 17, lr: 3.00e-03, grad_scale: 32.0 2023-03-27 06:42:25,078 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.81 vs. limit=2.0 2023-03-27 06:42:26,085 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=141621.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:42:40,149 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1595, 2.2512, 1.8323, 2.1825, 2.0985, 2.1544, 2.1541, 2.9000], device='cuda:2'), covar=tensor([0.3655, 0.4415, 0.3278, 0.4350, 0.4383, 0.2351, 0.4079, 0.1648], device='cuda:2'), in_proj_covar=tensor([0.0289, 0.0263, 0.0235, 0.0276, 0.0258, 0.0229, 0.0255, 0.0237], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:42:46,644 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=141651.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:42:55,970 INFO [finetune.py:976] (2/7) Epoch 25, batch 4200, loss[loss=0.154, simple_loss=0.226, pruned_loss=0.04097, over 4887.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.2474, pruned_loss=0.05059, over 953301.69 frames. 
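
Note grad_scale stepping from 16.0 to 32.0 at batch 4100 above: under fp16 training, torch's GradScaler doubles its loss scale after a long enough run of overflow-free steps and halves it whenever inf/NaN gradients appear. An illustrative construction (the growth_interval shown is torch's default, not necessarily this run's):

    import torch

    scaler = torch.cuda.amp.GradScaler(
        init_scale=16.0,       # matches the grad_scale logged before batch 4100
        growth_factor=2.0,     # doubles the scale ...
        growth_interval=2000,  # ... after this many overflow-free steps
        backoff_factor=0.5,    # and halves it on overflow
    )
    print(scaler.get_scale())
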
], batch size: 32, lr: 3.00e-03, grad_scale: 32.0 2023-03-27 06:42:56,087 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5271, 1.4040, 1.5238, 0.8044, 1.5173, 1.5020, 1.4826, 1.3245], device='cuda:2'), covar=tensor([0.0641, 0.0814, 0.0764, 0.0952, 0.0869, 0.0765, 0.0699, 0.1286], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0136, 0.0140, 0.0120, 0.0126, 0.0138, 0.0139, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:42:57,194 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.937e+01 1.553e+02 1.832e+02 2.223e+02 5.119e+02, threshold=3.664e+02, percent-clipped=2.0 2023-03-27 06:43:09,621 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=141685.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:43:29,320 INFO [finetune.py:976] (2/7) Epoch 25, batch 4250, loss[loss=0.1833, simple_loss=0.2497, pruned_loss=0.05848, over 4824.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2446, pruned_loss=0.04941, over 955601.86 frames. ], batch size: 39, lr: 3.00e-03, grad_scale: 32.0 2023-03-27 06:43:47,493 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.52 vs. limit=2.0 2023-03-27 06:44:05,278 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3240, 1.7619, 0.8307, 2.0125, 2.6005, 1.8410, 2.0267, 2.1095], device='cuda:2'), covar=tensor([0.1370, 0.1850, 0.1999, 0.1089, 0.1630, 0.1748, 0.1247, 0.1858], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0110, 0.0092, 0.0120, 0.0094, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 06:44:19,574 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0548, 1.3877, 2.0016, 2.0028, 1.8204, 1.7424, 1.9396, 1.9041], device='cuda:2'), covar=tensor([0.3560, 0.3726, 0.3285, 0.3336, 0.4653, 0.3701, 0.3972, 0.2915], device='cuda:2'), in_proj_covar=tensor([0.0265, 0.0247, 0.0268, 0.0293, 0.0293, 0.0269, 0.0299, 0.0251], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:44:21,245 INFO [finetune.py:976] (2/7) Epoch 25, batch 4300, loss[loss=0.152, simple_loss=0.2296, pruned_loss=0.03725, over 4819.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.2429, pruned_loss=0.04947, over 956802.35 frames. 
], batch size: 40, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:44:22,429 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.814e+01 1.357e+02 1.630e+02 2.025e+02 3.929e+02, threshold=3.260e+02, percent-clipped=1.0 2023-03-27 06:44:26,617 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=141773.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 06:44:43,604 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=141797.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:44:44,840 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=141799.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:44:49,248 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.6668, 4.0817, 4.3098, 4.5361, 4.4108, 4.1460, 4.7711, 1.5698], device='cuda:2'), covar=tensor([0.0766, 0.0973, 0.1048, 0.0948, 0.1171, 0.1609, 0.0642, 0.5429], device='cuda:2'), in_proj_covar=tensor([0.0350, 0.0248, 0.0282, 0.0296, 0.0338, 0.0288, 0.0307, 0.0300], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:44:53,426 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7395, 1.3625, 0.8320, 1.5404, 2.0797, 1.2630, 1.5400, 1.7016], device='cuda:2'), covar=tensor([0.1460, 0.1994, 0.1901, 0.1208, 0.1859, 0.1817, 0.1356, 0.1844], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0109, 0.0092, 0.0119, 0.0094, 0.0098, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 06:44:55,201 INFO [finetune.py:976] (2/7) Epoch 25, batch 4350, loss[loss=0.1735, simple_loss=0.2402, pruned_loss=0.0534, over 4740.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2405, pruned_loss=0.04926, over 956253.35 frames. ], batch size: 59, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:45:17,465 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=141847.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:45:27,151 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.0095, 0.9539, 0.9094, 1.0865, 1.1689, 1.1341, 0.9958, 0.9352], device='cuda:2'), covar=tensor([0.0388, 0.0314, 0.0720, 0.0333, 0.0295, 0.0458, 0.0437, 0.0426], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0106, 0.0145, 0.0111, 0.0100, 0.0114, 0.0103, 0.0112], device='cuda:2'), out_proj_covar=tensor([7.7875e-05, 8.1190e-05, 1.1294e-04, 8.4852e-05, 7.7569e-05, 8.4409e-05, 7.6166e-05, 8.5034e-05], device='cuda:2') 2023-03-27 06:45:28,845 INFO [finetune.py:976] (2/7) Epoch 25, batch 4400, loss[loss=0.1504, simple_loss=0.2077, pruned_loss=0.04654, over 4704.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2415, pruned_loss=0.04986, over 954693.31 frames. 
], batch size: 23, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:45:30,033 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.038e+02 1.595e+02 1.814e+02 2.202e+02 4.275e+02, threshold=3.628e+02, percent-clipped=6.0 2023-03-27 06:45:50,731 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3795, 1.2500, 1.4941, 2.4535, 1.6569, 2.0525, 0.9242, 2.1303], device='cuda:2'), covar=tensor([0.1689, 0.1423, 0.1136, 0.0676, 0.0860, 0.1305, 0.1401, 0.0570], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0116, 0.0134, 0.0164, 0.0101, 0.0137, 0.0124, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 06:46:01,881 INFO [finetune.py:976] (2/7) Epoch 25, batch 4450, loss[loss=0.183, simple_loss=0.2421, pruned_loss=0.06193, over 4795.00 frames. ], tot_loss[loss=0.1738, simple_loss=0.2457, pruned_loss=0.05096, over 955087.63 frames. ], batch size: 25, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:46:02,591 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=141916.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:46:10,458 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=141928.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:46:11,140 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-27 06:46:31,920 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.8687, 3.4384, 3.6044, 3.7549, 3.6142, 3.4288, 3.9535, 1.1094], device='cuda:2'), covar=tensor([0.0868, 0.0951, 0.0971, 0.1039, 0.1479, 0.1608, 0.0866, 0.5919], device='cuda:2'), in_proj_covar=tensor([0.0350, 0.0248, 0.0282, 0.0296, 0.0337, 0.0288, 0.0306, 0.0300], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:46:42,249 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-27 06:46:46,892 INFO [finetune.py:976] (2/7) Epoch 25, batch 4500, loss[loss=0.2528, simple_loss=0.3128, pruned_loss=0.09639, over 4848.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.2475, pruned_loss=0.05162, over 954476.41 frames. ], batch size: 47, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:46:52,641 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.869e+01 1.594e+02 1.826e+02 2.236e+02 4.959e+02, threshold=3.653e+02, percent-clipped=2.0 2023-03-27 06:47:02,940 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=141977.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:47:08,158 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=141985.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:47:10,689 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=141989.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:47:30,042 INFO [finetune.py:976] (2/7) Epoch 25, batch 4550, loss[loss=0.1624, simple_loss=0.2455, pruned_loss=0.03966, over 4906.00 frames. ], tot_loss[loss=0.175, simple_loss=0.2477, pruned_loss=0.05114, over 955438.01 frames. 
], batch size: 38, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:47:41,389 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=142033.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:47:47,541 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.2988, 1.4874, 1.4504, 0.7666, 1.5010, 1.7270, 1.7345, 1.3592], device='cuda:2'), covar=tensor([0.0894, 0.0590, 0.0551, 0.0542, 0.0476, 0.0553, 0.0339, 0.0708], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0147, 0.0128, 0.0122, 0.0130, 0.0129, 0.0141, 0.0148], device='cuda:2'), out_proj_covar=tensor([8.8853e-05, 1.0597e-04, 9.1072e-05, 8.5632e-05, 9.0757e-05, 9.1828e-05, 1.0031e-04, 1.0590e-04], device='cuda:2') 2023-03-27 06:47:58,109 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.4296, 3.8773, 4.0861, 4.2599, 4.2055, 3.9477, 4.5325, 1.4080], device='cuda:2'), covar=tensor([0.0825, 0.0825, 0.0848, 0.1002, 0.1278, 0.1604, 0.0709, 0.5870], device='cuda:2'), in_proj_covar=tensor([0.0354, 0.0250, 0.0285, 0.0299, 0.0341, 0.0291, 0.0308, 0.0304], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:48:03,343 INFO [finetune.py:976] (2/7) Epoch 25, batch 4600, loss[loss=0.1288, simple_loss=0.2049, pruned_loss=0.02636, over 4757.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2473, pruned_loss=0.05036, over 955399.27 frames. ], batch size: 28, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:48:04,586 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 7.716e+01 1.569e+02 1.799e+02 2.271e+02 3.318e+02, threshold=3.598e+02, percent-clipped=0.0 2023-03-27 06:48:05,396 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.69 vs. limit=2.0 2023-03-27 06:48:08,735 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=142073.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 06:48:23,763 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=142097.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:48:36,588 INFO [finetune.py:976] (2/7) Epoch 25, batch 4650, loss[loss=0.219, simple_loss=0.2732, pruned_loss=0.08238, over 4931.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2447, pruned_loss=0.04991, over 956000.85 frames. ], batch size: 38, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:48:40,335 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=142121.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 06:48:47,495 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.0048, 0.9240, 0.8981, 0.9879, 1.1088, 1.0782, 0.9468, 0.9294], device='cuda:2'), covar=tensor([0.0430, 0.0357, 0.0798, 0.0352, 0.0317, 0.0603, 0.0419, 0.0484], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0107, 0.0145, 0.0111, 0.0101, 0.0115, 0.0103, 0.0113], device='cuda:2'), out_proj_covar=tensor([7.8275e-05, 8.1544e-05, 1.1356e-04, 8.5214e-05, 7.7992e-05, 8.4926e-05, 7.6603e-05, 8.5671e-05], device='cuda:2') 2023-03-27 06:48:57,213 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=142145.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:49:19,997 INFO [finetune.py:976] (2/7) Epoch 25, batch 4700, loss[loss=0.1405, simple_loss=0.2066, pruned_loss=0.03719, over 4822.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2419, pruned_loss=0.04929, over 958407.39 frames. 
], batch size: 41, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:49:21,182 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.163e+01 1.384e+02 1.765e+02 2.088e+02 3.764e+02, threshold=3.531e+02, percent-clipped=1.0 2023-03-27 06:50:00,898 INFO [finetune.py:976] (2/7) Epoch 25, batch 4750, loss[loss=0.195, simple_loss=0.2595, pruned_loss=0.0652, over 4927.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2409, pruned_loss=0.04915, over 958078.45 frames. ], batch size: 33, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:50:34,333 INFO [finetune.py:976] (2/7) Epoch 25, batch 4800, loss[loss=0.1488, simple_loss=0.2246, pruned_loss=0.03648, over 4905.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2436, pruned_loss=0.04995, over 955910.95 frames. ], batch size: 37, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:50:35,545 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.947e+01 1.535e+02 1.762e+02 2.238e+02 3.446e+02, threshold=3.524e+02, percent-clipped=1.0 2023-03-27 06:50:39,628 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=142272.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:50:47,480 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=142284.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:50:51,117 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4237, 1.2375, 1.9569, 2.8558, 1.9231, 2.0888, 1.2490, 2.4792], device='cuda:2'), covar=tensor([0.1723, 0.1544, 0.1165, 0.0646, 0.0822, 0.1660, 0.1457, 0.0498], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0116, 0.0134, 0.0165, 0.0102, 0.0137, 0.0124, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 06:50:56,561 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2358, 1.8263, 2.1513, 2.2662, 1.9302, 1.9227, 2.1779, 2.0669], device='cuda:2'), covar=tensor([0.3776, 0.3890, 0.3285, 0.3843, 0.5050, 0.3942, 0.4824, 0.3015], device='cuda:2'), in_proj_covar=tensor([0.0265, 0.0247, 0.0267, 0.0293, 0.0294, 0.0270, 0.0300, 0.0250], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:51:07,061 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0156, 1.9595, 1.7156, 1.8639, 1.8870, 1.8553, 1.8840, 2.5849], device='cuda:2'), covar=tensor([0.3525, 0.4429, 0.3232, 0.3884, 0.3946, 0.2482, 0.3747, 0.1591], device='cuda:2'), in_proj_covar=tensor([0.0288, 0.0263, 0.0235, 0.0275, 0.0258, 0.0228, 0.0256, 0.0237], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:51:07,517 INFO [finetune.py:976] (2/7) Epoch 25, batch 4850, loss[loss=0.1735, simple_loss=0.2501, pruned_loss=0.04846, over 4712.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2463, pruned_loss=0.05086, over 954948.24 frames. ], batch size: 59, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:51:39,149 INFO [finetune.py:976] (2/7) Epoch 25, batch 4900, loss[loss=0.2269, simple_loss=0.29, pruned_loss=0.08187, over 4144.00 frames. ], tot_loss[loss=0.175, simple_loss=0.2476, pruned_loss=0.05121, over 954631.55 frames. 
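
tot_loss is not a whole-epoch average: it is reported over roughly 9.5e5 frames, which matches a sliding window of about the last couple hundred batches pooled by frame count. A sketch of that frame-weighted pooling (the window contents here are just the two batches above, for illustration):

    def pooled_loss(window):
        # window: iterable of (loss, num_frames) for the recent batches.
        frames = sum(n for _, n in window)
        return sum(l * n for l, n in window) / frames, frames

    loss, frames = pooled_loss([(0.1735, 4712.0), (0.2269, 4144.0)])
    print(f"tot_loss={loss:.4f} over {frames} frames")
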
], batch size: 65, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:51:40,866 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.096e+02 1.551e+02 1.812e+02 2.135e+02 6.918e+02, threshold=3.624e+02, percent-clipped=2.0 2023-03-27 06:52:01,117 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1196, 1.9901, 1.9499, 0.9090, 2.2376, 2.4643, 2.0805, 1.7958], device='cuda:2'), covar=tensor([0.0957, 0.0753, 0.0597, 0.0696, 0.0533, 0.0686, 0.0444, 0.0756], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0147, 0.0127, 0.0121, 0.0129, 0.0129, 0.0140, 0.0147], device='cuda:2'), out_proj_covar=tensor([8.8631e-05, 1.0573e-04, 9.0781e-05, 8.5065e-05, 9.0527e-05, 9.1403e-05, 1.0010e-04, 1.0530e-04], device='cuda:2') 2023-03-27 06:52:10,097 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0 2023-03-27 06:52:10,145 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0 2023-03-27 06:52:31,145 INFO [finetune.py:976] (2/7) Epoch 25, batch 4950, loss[loss=0.1698, simple_loss=0.2482, pruned_loss=0.04575, over 4809.00 frames. ], tot_loss[loss=0.1757, simple_loss=0.2486, pruned_loss=0.05146, over 954985.00 frames. ], batch size: 33, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:52:50,127 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-27 06:53:03,769 INFO [finetune.py:976] (2/7) Epoch 25, batch 5000, loss[loss=0.1215, simple_loss=0.1902, pruned_loss=0.02644, over 4812.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2464, pruned_loss=0.05102, over 955661.43 frames. ], batch size: 25, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:53:04,978 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.190e+01 1.432e+02 1.813e+02 2.155e+02 3.992e+02, threshold=3.625e+02, percent-clipped=1.0 2023-03-27 06:53:21,198 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. limit=2.0 2023-03-27 06:53:36,413 INFO [finetune.py:976] (2/7) Epoch 25, batch 5050, loss[loss=0.1932, simple_loss=0.2564, pruned_loss=0.06501, over 4828.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2448, pruned_loss=0.05085, over 957108.89 frames. ], batch size: 40, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:54:09,844 INFO [finetune.py:976] (2/7) Epoch 25, batch 5100, loss[loss=0.1717, simple_loss=0.234, pruned_loss=0.05469, over 4901.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2418, pruned_loss=0.05017, over 958063.05 frames. ], batch size: 32, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:54:11,044 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.212e+01 1.519e+02 1.807e+02 2.247e+02 4.075e+02, threshold=3.613e+02, percent-clipped=2.0 2023-03-27 06:54:14,214 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=142572.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:54:14,847 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=142573.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:54:27,737 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=142584.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:54:59,671 INFO [finetune.py:976] (2/7) Epoch 25, batch 5150, loss[loss=0.1285, simple_loss=0.1835, pruned_loss=0.03672, over 3998.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.2405, pruned_loss=0.0495, over 955549.00 frames. 
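
The per-batch frame counts (mostly 4,000-5,000) follow directly from the sampler and model settings: about 200 seconds of audio per batch at a 10 ms frame shift, divided by the encoder's 4x subsampling. A quick check, assuming those configured values:

    max_duration_s = 200    # audio per batch drawn by the bucketing sampler
    frame_shift_ms = 10.0   # fbank frame shift
    subsampling = 4         # encoder subsampling factor
    print(max_duration_s * 1000 / frame_shift_ms / subsampling)  # 5000.0 frames
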
], batch size: 17, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:55:03,298 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=142620.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:55:10,593 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=142632.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:55:12,839 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=142634.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:55:33,009 INFO [finetune.py:976] (2/7) Epoch 25, batch 5200, loss[loss=0.1681, simple_loss=0.2412, pruned_loss=0.04756, over 4829.00 frames. ], tot_loss[loss=0.171, simple_loss=0.243, pruned_loss=0.04946, over 955762.68 frames. ], batch size: 30, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:55:34,192 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.084e+02 1.563e+02 1.762e+02 2.093e+02 3.679e+02, threshold=3.523e+02, percent-clipped=1.0 2023-03-27 06:56:06,166 INFO [finetune.py:976] (2/7) Epoch 25, batch 5250, loss[loss=0.1874, simple_loss=0.2565, pruned_loss=0.05918, over 4883.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2462, pruned_loss=0.05026, over 956352.97 frames. ], batch size: 32, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:56:12,142 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=142724.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:56:39,095 INFO [finetune.py:976] (2/7) Epoch 25, batch 5300, loss[loss=0.1727, simple_loss=0.2427, pruned_loss=0.05138, over 4830.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2456, pruned_loss=0.04953, over 955912.45 frames. ], batch size: 47, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:56:40,274 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.031e+02 1.558e+02 1.826e+02 2.127e+02 3.045e+02, threshold=3.651e+02, percent-clipped=0.0 2023-03-27 06:56:51,748 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=142785.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:57:19,998 INFO [finetune.py:976] (2/7) Epoch 25, batch 5350, loss[loss=0.1528, simple_loss=0.2271, pruned_loss=0.03929, over 4771.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2457, pruned_loss=0.04884, over 956585.97 frames. ], batch size: 26, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:57:41,739 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6431, 1.5525, 1.0936, 0.2722, 1.2716, 1.5150, 1.5046, 1.4614], device='cuda:2'), covar=tensor([0.0980, 0.0850, 0.1401, 0.2096, 0.1455, 0.2513, 0.2428, 0.0958], device='cuda:2'), in_proj_covar=tensor([0.0172, 0.0193, 0.0202, 0.0183, 0.0211, 0.0213, 0.0226, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 06:58:06,041 INFO [finetune.py:976] (2/7) Epoch 25, batch 5400, loss[loss=0.1387, simple_loss=0.2162, pruned_loss=0.03055, over 4792.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2441, pruned_loss=0.0487, over 955278.35 frames. ], batch size: 45, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:58:07,256 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.131e+02 1.487e+02 1.682e+02 2.190e+02 4.832e+02, threshold=3.364e+02, percent-clipped=1.0 2023-03-27 06:58:38,661 INFO [finetune.py:976] (2/7) Epoch 25, batch 5450, loss[loss=0.1911, simple_loss=0.259, pruned_loss=0.06159, over 4927.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2421, pruned_loss=0.04859, over 957164.59 frames. 
], batch size: 33, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:58:47,644 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=142929.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:58:52,349 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1082, 1.8594, 1.9161, 0.8328, 2.3517, 2.5065, 2.0463, 1.7221], device='cuda:2'), covar=tensor([0.1134, 0.0942, 0.0601, 0.0853, 0.0463, 0.0715, 0.0526, 0.1037], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0147, 0.0127, 0.0122, 0.0130, 0.0129, 0.0142, 0.0148], device='cuda:2'), out_proj_covar=tensor([8.9069e-05, 1.0600e-04, 9.0818e-05, 8.5537e-05, 9.0888e-05, 9.1768e-05, 1.0096e-04, 1.0574e-04], device='cuda:2') 2023-03-27 06:58:56,125 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.72 vs. limit=2.0 2023-03-27 06:59:07,286 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0 2023-03-27 06:59:11,889 INFO [finetune.py:976] (2/7) Epoch 25, batch 5500, loss[loss=0.1436, simple_loss=0.2272, pruned_loss=0.02995, over 4751.00 frames. ], tot_loss[loss=0.1674, simple_loss=0.2389, pruned_loss=0.04795, over 955937.42 frames. ], batch size: 54, lr: 2.99e-03, grad_scale: 16.0 2023-03-27 06:59:13,716 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.198e+01 1.372e+02 1.708e+02 2.223e+02 4.314e+02, threshold=3.415e+02, percent-clipped=3.0 2023-03-27 06:59:23,574 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0 2023-03-27 06:59:46,283 INFO [finetune.py:976] (2/7) Epoch 25, batch 5550, loss[loss=0.2257, simple_loss=0.2864, pruned_loss=0.08255, over 4126.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2414, pruned_loss=0.04944, over 955229.07 frames. ], batch size: 65, lr: 2.99e-03, grad_scale: 16.0 2023-03-27 07:00:06,555 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.1931, 3.6268, 3.8359, 3.9981, 3.9695, 3.7383, 4.3031, 1.4893], device='cuda:2'), covar=tensor([0.0898, 0.0919, 0.0874, 0.1163, 0.1293, 0.1655, 0.0758, 0.5821], device='cuda:2'), in_proj_covar=tensor([0.0351, 0.0248, 0.0281, 0.0296, 0.0337, 0.0288, 0.0307, 0.0301], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:00:06,699 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-27 07:00:29,924 INFO [finetune.py:976] (2/7) Epoch 25, batch 5600, loss[loss=0.1993, simple_loss=0.2761, pruned_loss=0.06124, over 4845.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.2453, pruned_loss=0.05079, over 953816.65 frames. 
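The three loss figures in every batch line are related: within rounding, loss = 0.5 * simple_loss + pruned_loss throughout this section (e.g. the tot_loss just above: 0.5 * 0.2389 + 0.04795 = 0.1674). This is the pruned-transducer objective, a cheaply computed "simple" alignment loss down-weighted against the full pruned RNN-T loss; the 0.5 scale here is read directly off the logged numbers:

    def combine_losses(simple_loss: float, pruned_loss: float,
                       simple_loss_scale: float = 0.5) -> float:
        # The total reported as "loss" in each batch line.
        return simple_loss_scale * simple_loss + pruned_loss

    assert abs(combine_losses(0.2389, 0.04795) - 0.1674) < 1e-4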
], batch size: 47, lr: 2.99e-03, grad_scale: 16.0 2023-03-27 07:00:30,053 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8456, 1.7705, 1.6299, 1.9813, 2.1925, 1.9201, 1.4622, 1.5111], device='cuda:2'), covar=tensor([0.2133, 0.1912, 0.1877, 0.1608, 0.1654, 0.1216, 0.2409, 0.1912], device='cuda:2'), in_proj_covar=tensor([0.0245, 0.0211, 0.0216, 0.0199, 0.0245, 0.0193, 0.0217, 0.0206], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:00:30,594 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4602, 1.0080, 0.6796, 1.2886, 1.8726, 0.7993, 1.1980, 1.2837], device='cuda:2'), covar=tensor([0.1401, 0.1963, 0.1599, 0.1186, 0.1725, 0.1857, 0.1367, 0.1873], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0109, 0.0092, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 07:00:31,669 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.665e+01 1.700e+02 1.937e+02 2.306e+02 4.675e+02, threshold=3.875e+02, percent-clipped=1.0 2023-03-27 07:00:38,727 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=143080.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:00:51,427 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5128, 1.5238, 1.6665, 1.6189, 1.8134, 3.0601, 1.5758, 1.6831], device='cuda:2'), covar=tensor([0.0967, 0.1628, 0.1034, 0.0874, 0.1380, 0.0358, 0.1305, 0.1645], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0081, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:2') 2023-03-27 07:00:59,192 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3138, 2.9367, 2.7024, 1.1745, 3.0169, 2.3127, 0.6562, 1.9284], device='cuda:2'), covar=tensor([0.2319, 0.2181, 0.1944, 0.3569, 0.1258, 0.1010, 0.3919, 0.1545], device='cuda:2'), in_proj_covar=tensor([0.0150, 0.0179, 0.0161, 0.0130, 0.0161, 0.0123, 0.0148, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 07:01:00,341 INFO [finetune.py:976] (2/7) Epoch 25, batch 5650, loss[loss=0.1761, simple_loss=0.2564, pruned_loss=0.04791, over 4761.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2472, pruned_loss=0.0509, over 952820.24 frames. ], batch size: 54, lr: 2.99e-03, grad_scale: 16.0 2023-03-27 07:01:23,403 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.8899, 3.6390, 3.1704, 2.0690, 3.4465, 2.8855, 2.7252, 3.2552], device='cuda:2'), covar=tensor([0.0649, 0.0606, 0.1325, 0.1602, 0.1101, 0.1717, 0.1692, 0.0701], device='cuda:2'), in_proj_covar=tensor([0.0172, 0.0193, 0.0201, 0.0183, 0.0211, 0.0213, 0.0225, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:01:29,840 INFO [finetune.py:976] (2/7) Epoch 25, batch 5700, loss[loss=0.1597, simple_loss=0.2246, pruned_loss=0.04738, over 3971.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2424, pruned_loss=0.04987, over 932148.52 frames. ], batch size: 17, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:01:30,611 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.14 vs. 
limit=5.0 2023-03-27 07:01:31,568 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.213e+01 1.339e+02 1.671e+02 2.043e+02 4.216e+02, threshold=3.342e+02, percent-clipped=1.0 2023-03-27 07:01:58,298 INFO [finetune.py:976] (2/7) Epoch 26, batch 0, loss[loss=0.1413, simple_loss=0.2102, pruned_loss=0.03616, over 4420.00 frames. ], tot_loss[loss=0.1413, simple_loss=0.2102, pruned_loss=0.03616, over 4420.00 frames. ], batch size: 19, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:01:58,298 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-27 07:02:01,143 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8035, 1.6234, 1.9746, 1.3518, 1.7330, 1.9736, 1.5581, 2.1018], device='cuda:2'), covar=tensor([0.1128, 0.1975, 0.1210, 0.1619, 0.0860, 0.1243, 0.2687, 0.0712], device='cuda:2'), in_proj_covar=tensor([0.0190, 0.0205, 0.0190, 0.0190, 0.0173, 0.0212, 0.0215, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:02:03,188 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4611, 1.3324, 1.3156, 1.4426, 1.6710, 1.6261, 1.3939, 1.2995], device='cuda:2'), covar=tensor([0.0401, 0.0366, 0.0643, 0.0346, 0.0257, 0.0453, 0.0378, 0.0409], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0107, 0.0146, 0.0112, 0.0102, 0.0116, 0.0104, 0.0113], device='cuda:2'), out_proj_covar=tensor([7.8500e-05, 8.1959e-05, 1.1422e-04, 8.5531e-05, 7.8734e-05, 8.5958e-05, 7.7183e-05, 8.6065e-05], device='cuda:2') 2023-03-27 07:02:06,193 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1214, 1.8813, 1.7416, 1.6989, 1.8581, 1.9248, 1.8737, 2.5576], device='cuda:2'), covar=tensor([0.4029, 0.4699, 0.3533, 0.4341, 0.4562, 0.2753, 0.4103, 0.1832], device='cuda:2'), in_proj_covar=tensor([0.0291, 0.0265, 0.0237, 0.0278, 0.0260, 0.0230, 0.0257, 0.0239], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:02:14,280 INFO [finetune.py:1010] (2/7) Epoch 26, validation: loss=0.1591, simple_loss=0.2269, pruned_loss=0.04565, over 2265189.00 frames. 2023-03-27 07:02:14,280 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB 2023-03-27 07:02:43,893 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=143229.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:03:00,607 INFO [finetune.py:976] (2/7) Epoch 26, batch 50, loss[loss=0.1342, simple_loss=0.2092, pruned_loss=0.02961, over 4814.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2456, pruned_loss=0.04914, over 215762.81 frames. 
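The validation pass above (finetune.py:1001-1011) reports frame-weighted averages over the entire dev set, 2265189 frames here, followed by the peak CUDA memory. A minimal sketch of that loop under stated assumptions: compute_loss is an assumed helper returning a dict of losses already summed over frames, plus the batch's frame count.

    import logging
    import torch

    def validate(model, dev_loader, compute_loss, device="cuda:2"):
        model.eval()
        totals = {"loss": 0.0, "simple_loss": 0.0, "pruned_loss": 0.0}
        total_frames = 0.0
        with torch.no_grad():
            for batch in dev_loader:
                losses, num_frames = compute_loss(model, batch)
                for k in totals:
                    totals[k] += float(losses[k])  # frame-summed per batch
                total_frames += num_frames
        msg = ", ".join(f"{k}={v / total_frames:.4g}" for k, v in totals.items())
        logging.info(f"validation: {msg}, over {total_frames:.2f} frames.")
        mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
        logging.info(f"Maximum memory allocated so far is {mb}MB")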
], batch size: 25, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:03:13,060 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4665, 1.5588, 1.5896, 0.8852, 1.6900, 1.8603, 1.8887, 1.5042], device='cuda:2'), covar=tensor([0.1026, 0.0676, 0.0493, 0.0598, 0.0423, 0.0553, 0.0292, 0.0708], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0147, 0.0128, 0.0122, 0.0130, 0.0129, 0.0141, 0.0148], device='cuda:2'), out_proj_covar=tensor([8.8943e-05, 1.0601e-04, 9.1068e-05, 8.5568e-05, 9.0961e-05, 9.1380e-05, 1.0051e-04, 1.0563e-04], device='cuda:2') 2023-03-27 07:03:18,452 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.036e+02 1.460e+02 1.766e+02 2.058e+02 4.416e+02, threshold=3.532e+02, percent-clipped=3.0 2023-03-27 07:03:19,187 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3901, 2.2448, 1.8044, 2.2714, 2.2804, 2.0201, 2.5915, 2.3481], device='cuda:2'), covar=tensor([0.1302, 0.2093, 0.3031, 0.2511, 0.2600, 0.1722, 0.2724, 0.1770], device='cuda:2'), in_proj_covar=tensor([0.0189, 0.0191, 0.0237, 0.0255, 0.0251, 0.0207, 0.0215, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:03:24,477 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=143277.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:03:30,572 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2380, 2.1117, 1.7112, 2.1161, 2.1100, 1.8472, 2.4310, 2.2214], device='cuda:2'), covar=tensor([0.1292, 0.2008, 0.2854, 0.2389, 0.2409, 0.1623, 0.2896, 0.1574], device='cuda:2'), in_proj_covar=tensor([0.0189, 0.0191, 0.0237, 0.0255, 0.0252, 0.0208, 0.0216, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:03:34,112 INFO [finetune.py:976] (2/7) Epoch 26, batch 100, loss[loss=0.1632, simple_loss=0.2299, pruned_loss=0.04828, over 4821.00 frames. ], tot_loss[loss=0.169, simple_loss=0.2403, pruned_loss=0.04888, over 379652.34 frames. ], batch size: 30, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:03:34,191 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7714, 3.8231, 3.7155, 2.0052, 3.9473, 3.0502, 0.8605, 2.8724], device='cuda:2'), covar=tensor([0.2047, 0.2033, 0.1325, 0.3053, 0.0942, 0.0947, 0.4185, 0.1386], device='cuda:2'), in_proj_covar=tensor([0.0150, 0.0178, 0.0161, 0.0130, 0.0160, 0.0123, 0.0148, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 07:04:07,504 INFO [finetune.py:976] (2/7) Epoch 26, batch 150, loss[loss=0.1675, simple_loss=0.2262, pruned_loss=0.05444, over 4869.00 frames. ], tot_loss[loss=0.1653, simple_loss=0.2357, pruned_loss=0.0474, over 508507.29 frames. ], batch size: 31, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:04:15,579 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.40 vs. limit=5.0 2023-03-27 07:04:25,693 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.744e+01 1.335e+02 1.679e+02 2.114e+02 2.886e+02, threshold=3.358e+02, percent-clipped=0.0 2023-03-27 07:04:33,623 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=143380.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:04:41,258 INFO [finetune.py:976] (2/7) Epoch 26, batch 200, loss[loss=0.1782, simple_loss=0.2324, pruned_loss=0.06199, over 4795.00 frames. 
], tot_loss[loss=0.1636, simple_loss=0.2339, pruned_loss=0.04669, over 608352.82 frames. ], batch size: 25, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:04:46,840 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=143400.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 07:05:05,300 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=143428.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:05:05,380 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4635, 2.2377, 1.8867, 2.2930, 2.3787, 2.1484, 2.6504, 2.4069], device='cuda:2'), covar=tensor([0.1281, 0.2248, 0.3038, 0.2604, 0.2550, 0.1583, 0.2869, 0.1705], device='cuda:2'), in_proj_covar=tensor([0.0190, 0.0191, 0.0237, 0.0256, 0.0252, 0.0208, 0.0216, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:05:22,139 INFO [finetune.py:976] (2/7) Epoch 26, batch 250, loss[loss=0.1386, simple_loss=0.2127, pruned_loss=0.03224, over 4797.00 frames. ], tot_loss[loss=0.1665, simple_loss=0.2378, pruned_loss=0.04757, over 686244.71 frames. ], batch size: 25, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:05:48,848 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=143461.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 07:05:53,068 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.741e+01 1.618e+02 1.961e+02 2.394e+02 5.476e+02, threshold=3.922e+02, percent-clipped=2.0 2023-03-27 07:05:59,889 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=143473.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 07:05:59,919 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1246, 2.1732, 1.9228, 2.3748, 2.4867, 2.3218, 2.0566, 1.7080], device='cuda:2'), covar=tensor([0.1978, 0.1696, 0.1684, 0.1424, 0.1873, 0.1049, 0.2011, 0.1770], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0211, 0.0216, 0.0199, 0.0245, 0.0192, 0.0217, 0.0207], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:06:12,190 INFO [finetune.py:976] (2/7) Epoch 26, batch 300, loss[loss=0.1833, simple_loss=0.2593, pruned_loss=0.05364, over 4932.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2431, pruned_loss=0.04883, over 748355.59 frames. ], batch size: 33, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:06:16,018 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-27 07:06:40,196 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=143534.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 07:06:44,912 INFO [finetune.py:976] (2/7) Epoch 26, batch 350, loss[loss=0.1952, simple_loss=0.2624, pruned_loss=0.06401, over 4143.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2467, pruned_loss=0.05123, over 793071.85 frames. ], batch size: 65, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:07:03,059 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.042e+02 1.441e+02 1.724e+02 2.077e+02 3.544e+02, threshold=3.448e+02, percent-clipped=0.0 2023-03-27 07:07:07,612 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0 2023-03-27 07:07:18,098 INFO [finetune.py:976] (2/7) Epoch 26, batch 400, loss[loss=0.1919, simple_loss=0.2597, pruned_loss=0.062, over 4810.00 frames. 
], tot_loss[loss=0.1748, simple_loss=0.247, pruned_loss=0.05127, over 828668.07 frames. ], batch size: 41, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:07:21,344 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5333, 2.2659, 1.8146, 0.8724, 2.1840, 1.9467, 1.6173, 2.0562], device='cuda:2'), covar=tensor([0.0894, 0.1006, 0.1784, 0.2217, 0.1249, 0.2410, 0.2625, 0.1072], device='cuda:2'), in_proj_covar=tensor([0.0173, 0.0194, 0.0202, 0.0184, 0.0212, 0.0213, 0.0226, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:07:54,070 INFO [finetune.py:976] (2/7) Epoch 26, batch 450, loss[loss=0.205, simple_loss=0.271, pruned_loss=0.06954, over 4832.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2452, pruned_loss=0.05073, over 857052.23 frames. ], batch size: 47, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:08:22,189 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.245e+01 1.544e+02 1.809e+02 2.165e+02 3.752e+02, threshold=3.619e+02, percent-clipped=3.0 2023-03-27 07:08:37,482 INFO [finetune.py:976] (2/7) Epoch 26, batch 500, loss[loss=0.1827, simple_loss=0.2423, pruned_loss=0.06161, over 4748.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.2422, pruned_loss=0.05016, over 878136.57 frames. ], batch size: 59, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:09:11,111 INFO [finetune.py:976] (2/7) Epoch 26, batch 550, loss[loss=0.1875, simple_loss=0.2587, pruned_loss=0.05816, over 4829.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.2396, pruned_loss=0.04948, over 895907.66 frames. ], batch size: 39, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:09:20,248 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=143756.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 07:09:28,911 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.175e+01 1.443e+02 1.723e+02 1.984e+02 5.074e+02, threshold=3.446e+02, percent-clipped=2.0 2023-03-27 07:09:31,416 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1861, 2.0022, 1.4729, 0.6479, 1.7384, 1.8464, 1.6684, 1.8641], device='cuda:2'), covar=tensor([0.1025, 0.0796, 0.1420, 0.2000, 0.1305, 0.2336, 0.2291, 0.0847], device='cuda:2'), in_proj_covar=tensor([0.0172, 0.0193, 0.0201, 0.0183, 0.0212, 0.0212, 0.0225, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:09:41,264 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-27 07:09:44,563 INFO [finetune.py:976] (2/7) Epoch 26, batch 600, loss[loss=0.2012, simple_loss=0.2758, pruned_loss=0.06333, over 4860.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.2407, pruned_loss=0.0499, over 909696.61 frames. 
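Each "tot_loss[...] over ~955000 frames" figure is a running, frame-weighted average rather than a single-batch number. The plateau near 955k frames is consistent with an accumulator that decays by (1 - 1/200) per batch before adding the new batch's statistics: at roughly 4800 frames per batch the steady state is about 200 * 4800 = 960k frames. Both the decay constant and the dict layout below are assumptions:

    class FrameWeightedStats(dict):
        # Values are losses summed over frames; "frames" holds the count.
        def __add__(self, other):
            out = FrameWeightedStats(self)
            for k, v in other.items():
                out[k] = out.get(k, 0.0) + v
            return out

        def __mul__(self, alpha: float):
            return FrameWeightedStats({k: v * alpha for k, v in self.items()})

        def per_frame(self):
            n = self["frames"]
            return {k: v / n for k, v in self.items() if k != "frames"}

    # assumed per-batch update, with reset_interval = 200:
    # tot_stats = tot_stats * (1 - 1.0 / 200) + batch_stats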
], batch size: 44, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:10:09,699 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=143829.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 07:10:12,217 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0421, 2.0119, 1.6956, 1.9337, 1.8285, 1.8623, 1.9401, 2.5985], device='cuda:2'), covar=tensor([0.3407, 0.3551, 0.2973, 0.3644, 0.4038, 0.2291, 0.3339, 0.1481], device='cuda:2'), in_proj_covar=tensor([0.0290, 0.0264, 0.0236, 0.0277, 0.0259, 0.0229, 0.0257, 0.0238], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:10:17,582 INFO [finetune.py:976] (2/7) Epoch 26, batch 650, loss[loss=0.1704, simple_loss=0.2521, pruned_loss=0.04441, over 4730.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2427, pruned_loss=0.04974, over 921577.34 frames. ], batch size: 59, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:10:41,256 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.012e+02 1.600e+02 1.821e+02 2.293e+02 5.159e+02, threshold=3.642e+02, percent-clipped=4.0 2023-03-27 07:10:56,896 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4391, 1.1486, 0.8024, 1.3490, 1.8646, 0.8123, 1.2592, 1.3885], device='cuda:2'), covar=tensor([0.1534, 0.1996, 0.1676, 0.1221, 0.2087, 0.2082, 0.1421, 0.1887], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0110, 0.0092, 0.0120, 0.0093, 0.0098, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 07:11:02,792 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.05 vs. limit=5.0 2023-03-27 07:11:12,407 INFO [finetune.py:976] (2/7) Epoch 26, batch 700, loss[loss=0.1829, simple_loss=0.2632, pruned_loss=0.0513, over 4844.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2446, pruned_loss=0.04989, over 929911.40 frames. ], batch size: 49, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:11:13,749 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6751, 1.5890, 2.0541, 1.9395, 1.7574, 3.1369, 1.5761, 1.7076], device='cuda:2'), covar=tensor([0.0888, 0.1523, 0.1295, 0.0780, 0.1279, 0.0310, 0.1227, 0.1552], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0080, 0.0085, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 07:11:41,699 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8959, 2.5491, 2.0970, 1.0339, 2.3217, 2.2760, 2.0196, 2.3742], device='cuda:2'), covar=tensor([0.0663, 0.0826, 0.1463, 0.1978, 0.1218, 0.1888, 0.1865, 0.0846], device='cuda:2'), in_proj_covar=tensor([0.0172, 0.0193, 0.0201, 0.0183, 0.0211, 0.0213, 0.0225, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:11:48,834 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0 2023-03-27 07:11:51,900 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.58 vs. limit=2.0 2023-03-27 07:11:52,958 INFO [finetune.py:976] (2/7) Epoch 26, batch 750, loss[loss=0.1703, simple_loss=0.2538, pruned_loss=0.04336, over 4901.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.245, pruned_loss=0.04982, over 936251.05 frames. 
], batch size: 36, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:12:09,864 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.082e+02 1.567e+02 1.788e+02 2.169e+02 3.888e+02, threshold=3.576e+02, percent-clipped=1.0 2023-03-27 07:12:26,435 INFO [finetune.py:976] (2/7) Epoch 26, batch 800, loss[loss=0.1577, simple_loss=0.2346, pruned_loss=0.04041, over 4764.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.2439, pruned_loss=0.04894, over 940468.05 frames. ], batch size: 26, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:13:00,658 INFO [finetune.py:976] (2/7) Epoch 26, batch 850, loss[loss=0.1546, simple_loss=0.2194, pruned_loss=0.04493, over 4904.00 frames. ], tot_loss[loss=0.1694, simple_loss=0.2417, pruned_loss=0.04856, over 943073.54 frames. ], batch size: 32, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:13:00,755 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1175, 1.5832, 0.8412, 1.8987, 2.4775, 1.8160, 1.7640, 1.9760], device='cuda:2'), covar=tensor([0.1420, 0.1886, 0.1935, 0.1196, 0.1818, 0.1849, 0.1374, 0.1971], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0110, 0.0092, 0.0120, 0.0093, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 07:13:09,694 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=144056.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 07:13:16,918 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.067e+02 1.460e+02 1.746e+02 2.115e+02 7.519e+02, threshold=3.492e+02, percent-clipped=2.0 2023-03-27 07:13:43,959 INFO [finetune.py:976] (2/7) Epoch 26, batch 900, loss[loss=0.1452, simple_loss=0.2214, pruned_loss=0.03452, over 4898.00 frames. ], tot_loss[loss=0.1674, simple_loss=0.2392, pruned_loss=0.04777, over 946399.37 frames. ], batch size: 32, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:13:51,324 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=144104.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 07:14:01,530 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5388, 1.4604, 1.9846, 3.1494, 2.1371, 2.3183, 1.0707, 2.6554], device='cuda:2'), covar=tensor([0.1837, 0.1442, 0.1380, 0.0575, 0.0877, 0.1396, 0.1918, 0.0539], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0117, 0.0134, 0.0166, 0.0102, 0.0137, 0.0126, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 07:14:05,223 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.53 vs. limit=5.0 2023-03-27 07:14:07,461 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=144129.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 07:14:16,859 INFO [finetune.py:976] (2/7) Epoch 26, batch 950, loss[loss=0.1988, simple_loss=0.2583, pruned_loss=0.06959, over 4928.00 frames. ], tot_loss[loss=0.1667, simple_loss=0.2379, pruned_loss=0.04776, over 949373.03 frames. 
], batch size: 37, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:14:18,190 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9130, 1.5949, 2.0634, 1.4671, 1.9464, 2.1380, 1.5722, 2.3456], device='cuda:2'), covar=tensor([0.1215, 0.2184, 0.1448, 0.1805, 0.0941, 0.1369, 0.2671, 0.0791], device='cuda:2'), in_proj_covar=tensor([0.0190, 0.0205, 0.0191, 0.0189, 0.0173, 0.0212, 0.0215, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:14:33,135 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.128e+02 1.468e+02 1.742e+02 2.065e+02 3.876e+02, threshold=3.485e+02, percent-clipped=2.0 2023-03-27 07:14:39,232 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=144177.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 07:14:47,618 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1044, 2.0448, 1.7051, 2.0203, 1.8888, 1.8902, 1.9472, 2.6601], device='cuda:2'), covar=tensor([0.3664, 0.3980, 0.3202, 0.3679, 0.4083, 0.2343, 0.3686, 0.1714], device='cuda:2'), in_proj_covar=tensor([0.0290, 0.0264, 0.0236, 0.0277, 0.0259, 0.0229, 0.0257, 0.0238], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:14:50,350 INFO [finetune.py:976] (2/7) Epoch 26, batch 1000, loss[loss=0.183, simple_loss=0.2534, pruned_loss=0.05629, over 4895.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.2407, pruned_loss=0.04899, over 950760.25 frames. ], batch size: 32, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:15:04,550 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.79 vs. limit=2.0 2023-03-27 07:15:20,614 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0681, 2.1106, 1.7911, 2.2281, 2.6644, 2.1141, 2.1343, 1.6252], device='cuda:2'), covar=tensor([0.2096, 0.1746, 0.1841, 0.1497, 0.1629, 0.1086, 0.1987, 0.1757], device='cuda:2'), in_proj_covar=tensor([0.0245, 0.0210, 0.0215, 0.0198, 0.0244, 0.0191, 0.0216, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:15:22,310 INFO [finetune.py:976] (2/7) Epoch 26, batch 1050, loss[loss=0.1729, simple_loss=0.2459, pruned_loss=0.04998, over 4825.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2421, pruned_loss=0.04907, over 951727.27 frames. ], batch size: 30, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:15:40,002 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.069e+02 1.483e+02 1.785e+02 2.219e+02 5.161e+02, threshold=3.570e+02, percent-clipped=2.0 2023-03-27 07:16:01,563 INFO [finetune.py:976] (2/7) Epoch 26, batch 1100, loss[loss=0.1342, simple_loss=0.1945, pruned_loss=0.03696, over 3994.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2445, pruned_loss=0.04959, over 952503.05 frames. 
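The learning rate drifts down very slowly here (2.99e-03 through epoch 25, then 2.98e-03 and 2.97e-03 across epoch 26), the signature of an Eden-style schedule that decays smoothly in both batch count and epoch. The constants below are assumptions chosen because they reproduce the logged values at these batch counts, not settings read from this run:

    def eden_lr(base_lr: float, batch: float, epoch: float,
                lr_batches: float = 100_000.0, lr_epochs: float = 100.0) -> float:
        batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return base_lr * batch_factor * epoch_factor

    print(f"{eden_lr(0.004, 143000, 26):.2e}")  # 2.98e-03, as logged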
], batch size: 17, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:16:03,513 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1784, 2.1362, 1.5795, 2.2265, 2.0988, 1.8191, 2.5291, 2.2274], device='cuda:2'), covar=tensor([0.1367, 0.1996, 0.2897, 0.2669, 0.2366, 0.1609, 0.3053, 0.1533], device='cuda:2'), in_proj_covar=tensor([0.0186, 0.0188, 0.0233, 0.0251, 0.0247, 0.0205, 0.0212, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:16:45,112 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.48 vs. limit=2.0 2023-03-27 07:16:45,524 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1433, 1.7247, 2.3147, 1.5046, 2.1571, 2.3887, 1.6011, 2.5066], device='cuda:2'), covar=tensor([0.1307, 0.2356, 0.1805, 0.2306, 0.1074, 0.1499, 0.3015, 0.0847], device='cuda:2'), in_proj_covar=tensor([0.0190, 0.0205, 0.0191, 0.0190, 0.0174, 0.0212, 0.0216, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:16:56,473 INFO [finetune.py:976] (2/7) Epoch 26, batch 1150, loss[loss=0.1728, simple_loss=0.2584, pruned_loss=0.04362, over 4918.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2452, pruned_loss=0.04959, over 953530.40 frames. ], batch size: 29, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:17:13,862 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.072e+02 1.499e+02 1.760e+02 2.197e+02 4.327e+02, threshold=3.521e+02, percent-clipped=2.0 2023-03-27 07:17:28,619 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9678, 1.7805, 1.9840, 1.1904, 1.9722, 2.0536, 1.9643, 1.6168], device='cuda:2'), covar=tensor([0.0570, 0.0754, 0.0662, 0.0906, 0.0945, 0.0587, 0.0584, 0.1157], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0138, 0.0141, 0.0120, 0.0128, 0.0139, 0.0141, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:17:30,185 INFO [finetune.py:976] (2/7) Epoch 26, batch 1200, loss[loss=0.1643, simple_loss=0.2329, pruned_loss=0.04781, over 4931.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2437, pruned_loss=0.0489, over 955028.93 frames. ], batch size: 33, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:17:39,544 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-27 07:17:43,720 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=144411.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:18:03,374 INFO [finetune.py:976] (2/7) Epoch 26, batch 1250, loss[loss=0.1836, simple_loss=0.2575, pruned_loss=0.05482, over 4887.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.2416, pruned_loss=0.04846, over 954322.82 frames. ], batch size: 32, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:18:21,715 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.894e+01 1.606e+02 1.805e+02 2.274e+02 3.881e+02, threshold=3.611e+02, percent-clipped=1.0 2023-03-27 07:18:24,295 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=144472.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:18:30,968 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0 2023-03-27 07:18:37,271 INFO [finetune.py:976] (2/7) Epoch 26, batch 1300, loss[loss=0.2051, simple_loss=0.258, pruned_loss=0.07611, over 4897.00 frames. 
], tot_loss[loss=0.1684, simple_loss=0.24, pruned_loss=0.04846, over 955462.55 frames. ], batch size: 35, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:19:21,353 INFO [finetune.py:976] (2/7) Epoch 26, batch 1350, loss[loss=0.1607, simple_loss=0.2476, pruned_loss=0.03693, over 4820.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2398, pruned_loss=0.04817, over 952793.78 frames. ], batch size: 40, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:19:30,833 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1277, 2.1718, 1.7739, 2.1603, 2.0051, 1.9740, 2.0851, 2.7160], device='cuda:2'), covar=tensor([0.3554, 0.3755, 0.3093, 0.3445, 0.3877, 0.2435, 0.3670, 0.1567], device='cuda:2'), in_proj_covar=tensor([0.0291, 0.0265, 0.0237, 0.0277, 0.0259, 0.0230, 0.0258, 0.0239], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:19:39,469 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.034e+02 1.499e+02 1.803e+02 2.073e+02 4.281e+02, threshold=3.607e+02, percent-clipped=1.0 2023-03-27 07:19:54,502 INFO [finetune.py:976] (2/7) Epoch 26, batch 1400, loss[loss=0.1652, simple_loss=0.2405, pruned_loss=0.04496, over 4896.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2423, pruned_loss=0.04933, over 952600.81 frames. ], batch size: 35, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:20:16,429 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8792, 4.2709, 4.1073, 2.2763, 4.4488, 3.4048, 1.1068, 3.1048], device='cuda:2'), covar=tensor([0.2459, 0.1865, 0.1259, 0.3053, 0.0832, 0.0800, 0.3939, 0.1242], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0179, 0.0161, 0.0131, 0.0162, 0.0125, 0.0150, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 07:20:27,739 INFO [finetune.py:976] (2/7) Epoch 26, batch 1450, loss[loss=0.1828, simple_loss=0.2596, pruned_loss=0.05302, over 4807.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2441, pruned_loss=0.04974, over 952448.19 frames. ], batch size: 41, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:20:45,824 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.092e+01 1.571e+02 1.855e+02 2.334e+02 4.645e+02, threshold=3.710e+02, percent-clipped=2.0 2023-03-27 07:20:51,887 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7501, 1.7459, 1.6818, 1.8281, 1.3310, 4.2340, 1.5680, 2.0490], device='cuda:2'), covar=tensor([0.3145, 0.2328, 0.1975, 0.2161, 0.1590, 0.0137, 0.2376, 0.1140], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0124, 0.0113, 0.0095, 0.0094, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 07:21:01,362 INFO [finetune.py:976] (2/7) Epoch 26, batch 1500, loss[loss=0.1941, simple_loss=0.2639, pruned_loss=0.06214, over 4763.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2456, pruned_loss=0.05007, over 953567.15 frames. ], batch size: 59, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:21:04,695 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-27 07:21:50,335 INFO [finetune.py:976] (2/7) Epoch 26, batch 1550, loss[loss=0.1388, simple_loss=0.2177, pruned_loss=0.02997, over 4847.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2453, pruned_loss=0.0502, over 951776.90 frames. 
], batch size: 44, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:22:13,799 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.42 vs. limit=5.0 2023-03-27 07:22:18,424 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=144767.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:22:18,964 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.202e+02 1.547e+02 1.850e+02 2.044e+02 4.068e+02, threshold=3.700e+02, percent-clipped=1.0 2023-03-27 07:22:28,534 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.92 vs. limit=5.0 2023-03-27 07:22:35,464 INFO [finetune.py:976] (2/7) Epoch 26, batch 1600, loss[loss=0.1436, simple_loss=0.2246, pruned_loss=0.03127, over 4823.00 frames. ], tot_loss[loss=0.1715, simple_loss=0.2435, pruned_loss=0.04978, over 952680.48 frames. ], batch size: 40, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:22:37,580 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0 2023-03-27 07:22:39,207 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4234, 1.3198, 1.3330, 1.3581, 0.8173, 2.2901, 0.6947, 1.1355], device='cuda:2'), covar=tensor([0.3293, 0.2584, 0.2263, 0.2465, 0.1973, 0.0355, 0.2820, 0.1440], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0124, 0.0113, 0.0095, 0.0094, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 07:23:09,212 INFO [finetune.py:976] (2/7) Epoch 26, batch 1650, loss[loss=0.1642, simple_loss=0.2309, pruned_loss=0.04876, over 4793.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2405, pruned_loss=0.04847, over 953898.07 frames. ], batch size: 29, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:23:26,320 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.503e+01 1.460e+02 1.738e+02 2.010e+02 3.428e+02, threshold=3.475e+02, percent-clipped=0.0 2023-03-27 07:23:30,611 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0 2023-03-27 07:23:42,420 INFO [finetune.py:976] (2/7) Epoch 26, batch 1700, loss[loss=0.1839, simple_loss=0.2354, pruned_loss=0.06618, over 4225.00 frames. ], tot_loss[loss=0.167, simple_loss=0.2383, pruned_loss=0.04783, over 954363.71 frames. 
], batch size: 65, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:23:58,542 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7439, 1.2757, 0.7549, 1.5657, 2.0817, 1.3504, 1.4915, 1.5669], device='cuda:2'), covar=tensor([0.1511, 0.2115, 0.2119, 0.1252, 0.2091, 0.1965, 0.1482, 0.2019], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0110, 0.0092, 0.0119, 0.0093, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 07:23:58,597 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1238, 1.9779, 1.7498, 1.8598, 1.8634, 1.8886, 1.9611, 2.6902], device='cuda:2'), covar=tensor([0.3706, 0.3798, 0.3122, 0.3779, 0.4021, 0.2303, 0.3518, 0.1617], device='cuda:2'), in_proj_covar=tensor([0.0289, 0.0264, 0.0236, 0.0276, 0.0259, 0.0229, 0.0257, 0.0238], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:24:19,068 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1437, 2.2744, 1.9709, 2.3496, 2.9142, 2.3327, 2.3197, 1.7757], device='cuda:2'), covar=tensor([0.2178, 0.1782, 0.1796, 0.1512, 0.1548, 0.1054, 0.1895, 0.1801], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0211, 0.0216, 0.0199, 0.0245, 0.0192, 0.0217, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:24:25,822 INFO [finetune.py:976] (2/7) Epoch 26, batch 1750, loss[loss=0.1595, simple_loss=0.2495, pruned_loss=0.03478, over 4812.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.2416, pruned_loss=0.04894, over 955575.45 frames. ], batch size: 41, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:24:42,911 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.967e+01 1.592e+02 1.823e+02 2.389e+02 4.337e+02, threshold=3.645e+02, percent-clipped=3.0 2023-03-27 07:24:59,584 INFO [finetune.py:976] (2/7) Epoch 26, batch 1800, loss[loss=0.1754, simple_loss=0.2534, pruned_loss=0.04869, over 4740.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2449, pruned_loss=0.0496, over 955544.68 frames. ], batch size: 27, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:24:59,876 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0 2023-03-27 07:25:02,602 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=144996.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:25:21,953 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.78 vs. limit=5.0 2023-03-27 07:25:23,064 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=145027.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:25:33,497 INFO [finetune.py:976] (2/7) Epoch 26, batch 1850, loss[loss=0.1242, simple_loss=0.1964, pruned_loss=0.02596, over 4459.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2448, pruned_loss=0.04943, over 955013.12 frames. 
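grad_scale is the fp16 loss-scaling factor, and its moves in this log (32.0, halved to 16.0 after an overflowing step, grown back to 32.0 here once steps stay finite) are consistent with PyTorch's dynamic GradScaler. A sketch of the surrounding training step; model, optimizer, batch and compute_loss are placeholders:

    import torch

    scaler = torch.cuda.amp.GradScaler(init_scale=32.0)

    def train_step(model, optimizer, batch, compute_loss):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            loss = compute_loss(model, batch)
        scaler.scale(loss).backward()  # backprop on the scaled loss
        scaler.step(optimizer)         # skipped internally on inf/nan grads
        scaler.update()                # halves or grows the scale, e.g. 32 -> 16
        return loss.detach(), scaler.get_scale()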
], batch size: 19, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:25:43,167 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=145057.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 07:25:48,506 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=145066.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:25:49,086 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=145067.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:25:49,577 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.082e+02 1.533e+02 1.829e+02 2.183e+02 4.392e+02, threshold=3.659e+02, percent-clipped=3.0 2023-03-27 07:25:51,525 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.17 vs. limit=5.0 2023-03-27 07:26:03,673 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=145088.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:26:06,519 INFO [finetune.py:976] (2/7) Epoch 26, batch 1900, loss[loss=0.1682, simple_loss=0.2551, pruned_loss=0.04066, over 4843.00 frames. ], tot_loss[loss=0.1727, simple_loss=0.2462, pruned_loss=0.04962, over 957129.63 frames. ], batch size: 44, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:26:07,262 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1276, 2.0646, 1.7531, 2.0737, 1.9355, 1.9375, 1.9675, 2.7009], device='cuda:2'), covar=tensor([0.3684, 0.4128, 0.3358, 0.3946, 0.4154, 0.2456, 0.3976, 0.1724], device='cuda:2'), in_proj_covar=tensor([0.0287, 0.0263, 0.0234, 0.0275, 0.0257, 0.0227, 0.0256, 0.0237], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:26:21,297 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=145115.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:26:29,579 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=145127.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:26:45,616 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0197, 1.9372, 1.8631, 1.9847, 1.7620, 4.5968, 1.7820, 2.2019], device='cuda:2'), covar=tensor([0.3171, 0.2382, 0.2017, 0.2245, 0.1383, 0.0121, 0.2398, 0.1223], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0116, 0.0120, 0.0124, 0.0113, 0.0095, 0.0094, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 07:26:46,246 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0306, 1.8460, 1.6612, 1.6709, 1.7580, 1.7208, 1.7885, 2.4986], device='cuda:2'), covar=tensor([0.3441, 0.3629, 0.2937, 0.3353, 0.3735, 0.2326, 0.3114, 0.1525], device='cuda:2'), in_proj_covar=tensor([0.0288, 0.0264, 0.0235, 0.0275, 0.0258, 0.0228, 0.0256, 0.0237], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:26:46,687 INFO [finetune.py:976] (2/7) Epoch 26, batch 1950, loss[loss=0.1985, simple_loss=0.2594, pruned_loss=0.06877, over 4827.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2446, pruned_loss=0.04908, over 957679.24 frames. 
], batch size: 39, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:26:57,613 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7529, 1.3080, 0.8598, 1.5988, 2.1468, 1.4696, 1.5324, 1.6010], device='cuda:2'), covar=tensor([0.1413, 0.1923, 0.1935, 0.1223, 0.1912, 0.1962, 0.1391, 0.2008], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0110, 0.0092, 0.0120, 0.0093, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 07:27:06,394 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4238, 1.3805, 1.5521, 2.5062, 1.6968, 2.1890, 0.9448, 2.2210], device='cuda:2'), covar=tensor([0.1710, 0.1345, 0.1153, 0.0620, 0.0928, 0.1101, 0.1580, 0.0568], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0133, 0.0164, 0.0101, 0.0135, 0.0124, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 07:27:15,914 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.125e+02 1.577e+02 1.831e+02 2.188e+02 4.363e+02, threshold=3.662e+02, percent-clipped=3.0 2023-03-27 07:27:33,099 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0077, 2.1228, 1.8590, 1.7819, 2.5266, 2.6036, 2.1120, 2.0421], device='cuda:2'), covar=tensor([0.0439, 0.0357, 0.0535, 0.0377, 0.0267, 0.0493, 0.0383, 0.0458], device='cuda:2'), in_proj_covar=tensor([0.0103, 0.0108, 0.0149, 0.0112, 0.0102, 0.0117, 0.0104, 0.0114], device='cuda:2'), out_proj_covar=tensor([7.9592e-05, 8.2567e-05, 1.1592e-04, 8.6029e-05, 7.9356e-05, 8.6271e-05, 7.7143e-05, 8.6779e-05], device='cuda:2') 2023-03-27 07:27:40,100 INFO [finetune.py:976] (2/7) Epoch 26, batch 2000, loss[loss=0.1243, simple_loss=0.2051, pruned_loss=0.02178, over 4861.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.2428, pruned_loss=0.04886, over 957536.75 frames. ], batch size: 34, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:28:00,000 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8699, 1.8196, 1.5749, 2.0292, 2.3190, 2.0461, 1.7250, 1.5347], device='cuda:2'), covar=tensor([0.2081, 0.1839, 0.1876, 0.1542, 0.1662, 0.1187, 0.2307, 0.1873], device='cuda:2'), in_proj_covar=tensor([0.0245, 0.0210, 0.0215, 0.0198, 0.0244, 0.0191, 0.0217, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:28:13,278 INFO [finetune.py:976] (2/7) Epoch 26, batch 2050, loss[loss=0.1731, simple_loss=0.245, pruned_loss=0.05064, over 4813.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2404, pruned_loss=0.04835, over 954955.82 frames. 
], batch size: 41, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:28:19,943 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.0568, 0.9976, 1.0038, 0.4173, 0.9046, 1.1456, 1.1364, 0.9986], device='cuda:2'), covar=tensor([0.0862, 0.0551, 0.0531, 0.0511, 0.0546, 0.0624, 0.0370, 0.0655], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0148, 0.0128, 0.0123, 0.0130, 0.0129, 0.0142, 0.0149], device='cuda:2'), out_proj_covar=tensor([8.8954e-05, 1.0618e-04, 9.1031e-05, 8.6206e-05, 9.0856e-05, 9.1632e-05, 1.0083e-04, 1.0626e-04], device='cuda:2') 2023-03-27 07:28:30,376 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.187e+01 1.436e+02 1.792e+02 2.264e+02 4.038e+02, threshold=3.583e+02, percent-clipped=1.0 2023-03-27 07:28:45,862 INFO [finetune.py:976] (2/7) Epoch 26, batch 2100, loss[loss=0.1653, simple_loss=0.2453, pruned_loss=0.04269, over 4916.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2403, pruned_loss=0.04857, over 955654.98 frames. ], batch size: 36, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:29:03,664 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2936, 2.0732, 1.6155, 0.7263, 1.7599, 2.0197, 1.8328, 1.9352], device='cuda:2'), covar=tensor([0.0832, 0.0728, 0.1242, 0.1748, 0.1182, 0.1738, 0.1848, 0.0743], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0192, 0.0200, 0.0181, 0.0209, 0.0209, 0.0224, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:29:09,162 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.3625, 1.2455, 1.2070, 0.7692, 1.2496, 1.3947, 1.5270, 1.2164], device='cuda:2'), covar=tensor([0.0843, 0.0524, 0.0535, 0.0455, 0.0546, 0.0526, 0.0293, 0.0572], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0148, 0.0128, 0.0123, 0.0130, 0.0129, 0.0142, 0.0149], device='cuda:2'), out_proj_covar=tensor([8.9137e-05, 1.0644e-04, 9.1238e-05, 8.6419e-05, 9.1066e-05, 9.1707e-05, 1.0097e-04, 1.0656e-04], device='cuda:2') 2023-03-27 07:29:19,662 INFO [finetune.py:976] (2/7) Epoch 26, batch 2150, loss[loss=0.1399, simple_loss=0.213, pruned_loss=0.03342, over 4769.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2438, pruned_loss=0.04985, over 956561.78 frames. 
], batch size: 26, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:29:28,935 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=145352.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 07:29:39,080 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1467, 2.0966, 1.7617, 2.0422, 1.9211, 1.9698, 2.0394, 2.7277], device='cuda:2'), covar=tensor([0.4076, 0.3999, 0.3368, 0.3800, 0.4240, 0.2554, 0.3750, 0.1724], device='cuda:2'), in_proj_covar=tensor([0.0290, 0.0265, 0.0237, 0.0278, 0.0260, 0.0230, 0.0258, 0.0239], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:29:43,495 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.7176, 4.0612, 4.3523, 4.5068, 4.4425, 4.1888, 4.7924, 1.6012], device='cuda:2'), covar=tensor([0.0718, 0.0867, 0.0672, 0.0832, 0.1168, 0.1558, 0.0544, 0.5596], device='cuda:2'), in_proj_covar=tensor([0.0350, 0.0249, 0.0280, 0.0295, 0.0337, 0.0287, 0.0304, 0.0302], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:29:47,542 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.022e+02 1.494e+02 1.709e+02 2.298e+02 6.165e+02, threshold=3.419e+02, percent-clipped=3.0 2023-03-27 07:29:49,488 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=145371.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:29:56,782 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=145383.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:30:02,627 INFO [finetune.py:976] (2/7) Epoch 26, batch 2200, loss[loss=0.1734, simple_loss=0.2566, pruned_loss=0.0451, over 4806.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2453, pruned_loss=0.04964, over 957230.38 frames. ], batch size: 51, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:30:03,788 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2077, 1.8300, 2.4904, 4.0919, 2.8098, 2.8238, 1.0441, 3.5358], device='cuda:2'), covar=tensor([0.1600, 0.1340, 0.1322, 0.0449, 0.0747, 0.1407, 0.1829, 0.0351], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0162, 0.0101, 0.0134, 0.0124, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 07:30:22,236 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-27 07:30:23,255 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=145422.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:30:29,365 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=145432.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:30:36,243 INFO [finetune.py:976] (2/7) Epoch 26, batch 2250, loss[loss=0.1878, simple_loss=0.2565, pruned_loss=0.0595, over 4772.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2466, pruned_loss=0.05054, over 955581.83 frames. 
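The zipformer.py:1188 lines track stochastic layer skipping: each encoder stack logs its warmup window (warmup_begin/warmup_end, in batches), the global batch_count, and which of its layers were dropped for this batch, usually none and occasionally one even this late in training (batch_count around 145000, far past every warmup_end). That pattern fits a schedule with a high drop rate inside the warmup window decaying to a small residual rate afterwards; the probabilities below are illustrative guesses:

    import random

    def pick_layers_to_drop(batch_count: float, warmup_begin: float,
                            warmup_end: float, num_layers: int,
                            warmup_prob: float = 0.5,
                            base_prob: float = 0.025) -> set:
        if batch_count < warmup_begin:
            p = warmup_prob
        elif batch_count < warmup_end:
            frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
            p = warmup_prob + frac * (base_prob - warmup_prob)  # linear decay
        else:
            p = base_prob  # small residual rate: drops still happen late
        return {i for i in range(num_layers) if random.random() < p}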
], batch size: 26, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:30:37,630 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8306, 1.6393, 1.5136, 1.5689, 2.0263, 2.0341, 1.7299, 1.4969], device='cuda:2'), covar=tensor([0.0320, 0.0328, 0.0579, 0.0359, 0.0213, 0.0408, 0.0336, 0.0437], device='cuda:2'), in_proj_covar=tensor([0.0104, 0.0109, 0.0149, 0.0113, 0.0103, 0.0118, 0.0104, 0.0115], device='cuda:2'), out_proj_covar=tensor([8.0326e-05, 8.3024e-05, 1.1643e-04, 8.6571e-05, 7.9891e-05, 8.6772e-05, 7.7549e-05, 8.7307e-05], device='cuda:2') 2023-03-27 07:30:44,582 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3625, 1.4138, 1.6838, 1.6622, 1.4996, 3.2101, 1.3981, 1.4987], device='cuda:2'), covar=tensor([0.1096, 0.1831, 0.1226, 0.0916, 0.1547, 0.0231, 0.1452, 0.1755], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0072, 0.0076, 0.0090, 0.0080, 0.0085, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 07:30:53,946 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.189e+01 1.488e+02 1.760e+02 2.143e+02 3.776e+02, threshold=3.521e+02, percent-clipped=2.0 2023-03-27 07:31:08,987 INFO [finetune.py:976] (2/7) Epoch 26, batch 2300, loss[loss=0.1427, simple_loss=0.2155, pruned_loss=0.03492, over 4865.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.246, pruned_loss=0.04985, over 957111.68 frames. ], batch size: 31, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:31:41,337 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3082, 2.8727, 2.7711, 1.1850, 2.9116, 2.2011, 0.8780, 1.9211], device='cuda:2'), covar=tensor([0.2498, 0.2133, 0.1990, 0.3581, 0.1567, 0.1143, 0.3825, 0.1586], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0179, 0.0160, 0.0130, 0.0161, 0.0124, 0.0149, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 07:31:42,487 INFO [finetune.py:976] (2/7) Epoch 26, batch 2350, loss[loss=0.1704, simple_loss=0.2346, pruned_loss=0.05309, over 4712.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2439, pruned_loss=0.04969, over 955320.25 frames. ], batch size: 23, lr: 2.96e-03, grad_scale: 32.0 2023-03-27 07:31:53,140 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4126, 2.3226, 2.3158, 1.5049, 2.3708, 2.3278, 2.3602, 1.9588], device='cuda:2'), covar=tensor([0.0558, 0.0592, 0.0772, 0.0940, 0.0642, 0.0777, 0.0667, 0.1023], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0137, 0.0141, 0.0119, 0.0128, 0.0138, 0.0139, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:32:06,623 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.008e+02 1.544e+02 1.840e+02 2.226e+02 4.643e+02, threshold=3.680e+02, percent-clipped=1.0 2023-03-27 07:32:34,350 INFO [finetune.py:976] (2/7) Epoch 26, batch 2400, loss[loss=0.1427, simple_loss=0.207, pruned_loss=0.03918, over 4793.00 frames. ], tot_loss[loss=0.1699, simple_loss=0.2413, pruned_loss=0.04924, over 955012.11 frames. 
], batch size: 29, lr: 2.96e-03, grad_scale: 32.0 2023-03-27 07:32:35,678 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=145594.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:32:58,798 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.7880, 3.3321, 3.5051, 3.6428, 3.5622, 3.3288, 3.8539, 1.1846], device='cuda:2'), covar=tensor([0.0913, 0.0888, 0.0935, 0.1168, 0.1445, 0.1590, 0.0949, 0.5725], device='cuda:2'), in_proj_covar=tensor([0.0351, 0.0249, 0.0280, 0.0296, 0.0338, 0.0287, 0.0304, 0.0303], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:32:58,934 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.13 vs. limit=5.0 2023-03-27 07:33:15,387 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7084, 3.6394, 3.4323, 1.7735, 3.7111, 2.7909, 0.9239, 2.5483], device='cuda:2'), covar=tensor([0.2618, 0.2005, 0.1578, 0.3208, 0.1067, 0.0987, 0.4259, 0.1455], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0179, 0.0160, 0.0131, 0.0161, 0.0124, 0.0149, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 07:33:17,101 INFO [finetune.py:976] (2/7) Epoch 26, batch 2450, loss[loss=0.1809, simple_loss=0.251, pruned_loss=0.05536, over 4890.00 frames. ], tot_loss[loss=0.1675, simple_loss=0.2384, pruned_loss=0.04833, over 955252.37 frames. ], batch size: 35, lr: 2.96e-03, grad_scale: 32.0 2023-03-27 07:33:17,837 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=145643.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:33:23,872 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=145652.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 07:33:26,198 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=145655.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 07:33:34,917 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.521e+01 1.449e+02 1.824e+02 2.163e+02 4.630e+02, threshold=3.648e+02, percent-clipped=2.0 2023-03-27 07:33:45,640 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=145683.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:33:51,039 INFO [finetune.py:976] (2/7) Epoch 26, batch 2500, loss[loss=0.1912, simple_loss=0.2716, pruned_loss=0.05535, over 4809.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2396, pruned_loss=0.04842, over 954145.54 frames. 
], batch size: 38, lr: 2.96e-03, grad_scale: 32.0 2023-03-27 07:33:55,944 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=145700.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:33:58,923 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=145704.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:34:10,004 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1051, 2.2119, 1.7883, 2.2436, 2.0940, 2.0895, 2.0841, 2.8900], device='cuda:2'), covar=tensor([0.3753, 0.4701, 0.3461, 0.4167, 0.4203, 0.2465, 0.4568, 0.1662], device='cuda:2'), in_proj_covar=tensor([0.0288, 0.0263, 0.0235, 0.0275, 0.0258, 0.0228, 0.0256, 0.0237], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:34:11,632 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=145722.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:34:15,146 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=145727.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:34:17,566 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=145731.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:34:24,604 INFO [finetune.py:976] (2/7) Epoch 26, batch 2550, loss[loss=0.1959, simple_loss=0.2681, pruned_loss=0.06183, over 4863.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2438, pruned_loss=0.04919, over 954894.67 frames. ], batch size: 34, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:34:42,325 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.531e+01 1.551e+02 1.807e+02 2.106e+02 4.459e+02, threshold=3.615e+02, percent-clipped=2.0 2023-03-27 07:34:43,554 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=145770.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:35:08,874 INFO [finetune.py:976] (2/7) Epoch 26, batch 2600, loss[loss=0.1646, simple_loss=0.2481, pruned_loss=0.04052, over 4760.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2451, pruned_loss=0.04967, over 954548.78 frames. ], batch size: 27, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:35:28,221 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=145821.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 07:35:42,707 INFO [finetune.py:976] (2/7) Epoch 26, batch 2650, loss[loss=0.1725, simple_loss=0.2449, pruned_loss=0.05, over 4830.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2459, pruned_loss=0.04984, over 954193.67 frames. 
], batch size: 47, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:35:55,354 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6558, 1.4481, 2.1838, 1.7753, 1.7246, 4.1361, 1.5054, 1.6190], device='cuda:2'), covar=tensor([0.0955, 0.1808, 0.1200, 0.0960, 0.1585, 0.0171, 0.1512, 0.1863], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0081, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 07:36:00,019 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.986e+01 1.512e+02 1.783e+02 2.110e+02 4.476e+02, threshold=3.566e+02, percent-clipped=1.0 2023-03-27 07:36:09,562 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=145882.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 07:36:16,424 INFO [finetune.py:976] (2/7) Epoch 26, batch 2700, loss[loss=0.1989, simple_loss=0.2687, pruned_loss=0.06456, over 4686.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2453, pruned_loss=0.04929, over 955256.91 frames. ], batch size: 23, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:36:48,574 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4894, 1.4051, 2.0656, 1.8476, 1.5823, 3.6209, 1.3578, 1.4984], device='cuda:2'), covar=tensor([0.1231, 0.2449, 0.1195, 0.1150, 0.1934, 0.0265, 0.2002, 0.2501], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0081, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:2') 2023-03-27 07:36:49,651 INFO [finetune.py:976] (2/7) Epoch 26, batch 2750, loss[loss=0.1624, simple_loss=0.2334, pruned_loss=0.04566, over 4787.00 frames. ], tot_loss[loss=0.1715, simple_loss=0.2441, pruned_loss=0.04946, over 956000.09 frames. ], batch size: 26, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:36:53,373 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9278, 1.2394, 2.0398, 1.9261, 1.7553, 1.6837, 1.8539, 1.9418], device='cuda:2'), covar=tensor([0.3738, 0.3938, 0.3109, 0.3386, 0.4707, 0.3524, 0.4139, 0.2772], device='cuda:2'), in_proj_covar=tensor([0.0266, 0.0247, 0.0266, 0.0294, 0.0294, 0.0271, 0.0300, 0.0251], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:36:55,118 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=145950.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 07:36:56,622 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.55 vs. limit=2.0 2023-03-27 07:37:00,657 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2370, 2.1054, 2.1106, 1.0197, 2.4211, 2.6596, 2.2476, 1.9763], device='cuda:2'), covar=tensor([0.1125, 0.0847, 0.0641, 0.0743, 0.0569, 0.0836, 0.0499, 0.0759], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0148, 0.0128, 0.0122, 0.0130, 0.0129, 0.0142, 0.0149], device='cuda:2'), out_proj_covar=tensor([8.9061e-05, 1.0631e-04, 9.1093e-05, 8.5941e-05, 9.1062e-05, 9.1470e-05, 1.0093e-04, 1.0625e-04], device='cuda:2') 2023-03-27 07:37:07,588 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.011e+02 1.569e+02 1.804e+02 2.200e+02 3.850e+02, threshold=3.609e+02, percent-clipped=2.0 2023-03-27 07:37:29,449 INFO [finetune.py:976] (2/7) Epoch 26, batch 2800, loss[loss=0.1573, simple_loss=0.2175, pruned_loss=0.04855, over 4821.00 frames. 
], tot_loss[loss=0.1672, simple_loss=0.2395, pruned_loss=0.04751, over 955710.69 frames. ], batch size: 51, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:37:39,468 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=145999.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:38:14,435 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=146027.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:38:16,766 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4418, 1.3624, 1.4823, 0.7744, 1.5192, 1.5073, 1.4847, 1.3568], device='cuda:2'), covar=tensor([0.0617, 0.0798, 0.0737, 0.0965, 0.0943, 0.0744, 0.0684, 0.1301], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0137, 0.0141, 0.0120, 0.0128, 0.0138, 0.0140, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:38:17,430 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4880, 2.5197, 1.8863, 2.5184, 2.4268, 2.0708, 2.9177, 2.5818], device='cuda:2'), covar=tensor([0.1299, 0.2181, 0.2994, 0.2599, 0.2416, 0.1600, 0.3069, 0.1668], device='cuda:2'), in_proj_covar=tensor([0.0187, 0.0189, 0.0235, 0.0251, 0.0249, 0.0206, 0.0214, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:38:21,628 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.73 vs. limit=2.0 2023-03-27 07:38:24,986 INFO [finetune.py:976] (2/7) Epoch 26, batch 2850, loss[loss=0.1991, simple_loss=0.2718, pruned_loss=0.06323, over 4892.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.239, pruned_loss=0.04778, over 956706.74 frames. ], batch size: 35, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:38:37,433 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3295, 2.9102, 2.8155, 1.2222, 3.0196, 2.2883, 0.7926, 1.9808], device='cuda:2'), covar=tensor([0.2393, 0.2472, 0.1921, 0.3544, 0.1431, 0.1094, 0.4016, 0.1636], device='cuda:2'), in_proj_covar=tensor([0.0149, 0.0177, 0.0158, 0.0129, 0.0159, 0.0122, 0.0147, 0.0122], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 07:38:42,241 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.185e+01 1.523e+02 1.819e+02 2.110e+02 4.930e+02, threshold=3.638e+02, percent-clipped=2.0 2023-03-27 07:38:46,430 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=146075.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:38:58,297 INFO [finetune.py:976] (2/7) Epoch 26, batch 2900, loss[loss=0.1999, simple_loss=0.275, pruned_loss=0.06243, over 4825.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2417, pruned_loss=0.04866, over 955108.65 frames. ], batch size: 51, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:39:31,498 INFO [finetune.py:976] (2/7) Epoch 26, batch 2950, loss[loss=0.2075, simple_loss=0.2797, pruned_loss=0.06764, over 4827.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2444, pruned_loss=0.0498, over 954288.81 frames. 
], batch size: 39, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:39:36,384 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=146149.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:39:49,295 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.099e+02 1.550e+02 1.859e+02 2.106e+02 3.478e+02, threshold=3.719e+02, percent-clipped=0.0 2023-03-27 07:39:54,694 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=146177.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 07:40:04,822 INFO [finetune.py:976] (2/7) Epoch 26, batch 3000, loss[loss=0.2147, simple_loss=0.2771, pruned_loss=0.07618, over 4085.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2463, pruned_loss=0.05019, over 952623.36 frames. ], batch size: 65, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:40:04,822 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-27 07:40:19,935 INFO [finetune.py:1010] (2/7) Epoch 26, validation: loss=0.1577, simple_loss=0.2252, pruned_loss=0.04507, over 2265189.00 frames. 2023-03-27 07:40:19,936 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB 2023-03-27 07:40:35,396 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=146210.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:40:40,649 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4093, 1.9543, 2.9603, 1.6669, 2.4186, 2.6977, 1.7114, 2.7786], device='cuda:2'), covar=tensor([0.1435, 0.2202, 0.1135, 0.2162, 0.1032, 0.1585, 0.2972, 0.0959], device='cuda:2'), in_proj_covar=tensor([0.0191, 0.0206, 0.0192, 0.0189, 0.0174, 0.0212, 0.0216, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:40:53,993 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2888, 1.3847, 1.6629, 1.5816, 1.4976, 3.0810, 1.3134, 1.5219], device='cuda:2'), covar=tensor([0.1052, 0.1766, 0.1315, 0.0954, 0.1596, 0.0303, 0.1482, 0.1720], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0081, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:2') 2023-03-27 07:40:56,714 INFO [finetune.py:976] (2/7) Epoch 26, batch 3050, loss[loss=0.1764, simple_loss=0.2548, pruned_loss=0.04907, over 4815.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2467, pruned_loss=0.04977, over 953689.12 frames. ], batch size: 38, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:41:02,117 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=146250.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:41:14,892 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.646e+01 1.408e+02 1.795e+02 2.163e+02 4.679e+02, threshold=3.589e+02, percent-clipped=3.0 2023-03-27 07:41:29,843 INFO [finetune.py:976] (2/7) Epoch 26, batch 3100, loss[loss=0.1634, simple_loss=0.2241, pruned_loss=0.05133, over 4915.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2444, pruned_loss=0.04921, over 951133.12 frames. 
], batch size: 46, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:41:34,030 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=146298.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:41:34,683 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=146299.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:42:02,608 INFO [finetune.py:976] (2/7) Epoch 26, batch 3150, loss[loss=0.1887, simple_loss=0.249, pruned_loss=0.06423, over 4820.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.242, pruned_loss=0.04875, over 952540.99 frames. ], batch size: 33, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:42:06,559 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=146347.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:42:06,617 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2193, 2.3081, 2.2345, 1.5134, 2.2724, 2.3860, 2.2855, 1.9434], device='cuda:2'), covar=tensor([0.0603, 0.0602, 0.0731, 0.0901, 0.0694, 0.0687, 0.0646, 0.1059], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0137, 0.0141, 0.0120, 0.0128, 0.0138, 0.0140, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:42:21,140 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.044e+02 1.464e+02 1.851e+02 2.163e+02 3.423e+02, threshold=3.701e+02, percent-clipped=0.0 2023-03-27 07:42:38,055 INFO [finetune.py:976] (2/7) Epoch 26, batch 3200, loss[loss=0.1927, simple_loss=0.2498, pruned_loss=0.06776, over 4917.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2386, pruned_loss=0.048, over 952578.94 frames. ], batch size: 37, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:43:35,343 INFO [finetune.py:976] (2/7) Epoch 26, batch 3250, loss[loss=0.147, simple_loss=0.226, pruned_loss=0.03401, over 4823.00 frames. ], tot_loss[loss=0.1668, simple_loss=0.2383, pruned_loss=0.04767, over 954577.75 frames. ], batch size: 38, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:43:43,813 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5472, 1.4652, 1.4059, 1.5265, 1.0286, 3.2621, 1.1666, 1.5710], device='cuda:2'), covar=tensor([0.3245, 0.2592, 0.2161, 0.2343, 0.1851, 0.0204, 0.2817, 0.1368], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0115, 0.0120, 0.0123, 0.0112, 0.0095, 0.0093, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 07:43:53,733 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.041e+02 1.451e+02 1.808e+02 2.175e+02 4.535e+02, threshold=3.616e+02, percent-clipped=3.0 2023-03-27 07:43:58,666 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=146477.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 07:44:08,659 INFO [finetune.py:976] (2/7) Epoch 26, batch 3300, loss[loss=0.1471, simple_loss=0.2239, pruned_loss=0.0352, over 4772.00 frames. ], tot_loss[loss=0.17, simple_loss=0.2423, pruned_loss=0.04885, over 955588.79 frames. 
], batch size: 26, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:44:17,140 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=146505.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:44:30,704 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=146525.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 07:44:41,545 INFO [finetune.py:976] (2/7) Epoch 26, batch 3350, loss[loss=0.13, simple_loss=0.1952, pruned_loss=0.03236, over 3998.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2438, pruned_loss=0.0488, over 955506.15 frames. ], batch size: 17, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:45:00,358 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.852e+01 1.562e+02 1.841e+02 2.282e+02 4.006e+02, threshold=3.682e+02, percent-clipped=1.0 2023-03-27 07:45:14,929 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=146591.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:45:15,433 INFO [finetune.py:976] (2/7) Epoch 26, batch 3400, loss[loss=0.1938, simple_loss=0.271, pruned_loss=0.05825, over 4862.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2457, pruned_loss=0.0496, over 956963.71 frames. ], batch size: 34, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:45:39,337 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8821, 1.7536, 1.5395, 1.4571, 1.8973, 1.5924, 1.8177, 1.8747], device='cuda:2'), covar=tensor([0.1328, 0.1838, 0.2823, 0.2474, 0.2462, 0.1669, 0.2486, 0.1633], device='cuda:2'), in_proj_covar=tensor([0.0187, 0.0188, 0.0234, 0.0250, 0.0247, 0.0205, 0.0213, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:45:48,132 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=146626.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:45:51,726 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9024, 3.8087, 3.8125, 2.0581, 3.9354, 2.9774, 1.2273, 2.8141], device='cuda:2'), covar=tensor([0.3244, 0.1756, 0.1225, 0.3107, 0.1012, 0.1037, 0.4055, 0.1477], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0181, 0.0162, 0.0131, 0.0163, 0.0125, 0.0151, 0.0125], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 07:45:58,656 INFO [finetune.py:976] (2/7) Epoch 26, batch 3450, loss[loss=0.1876, simple_loss=0.2611, pruned_loss=0.05707, over 4895.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2448, pruned_loss=0.04918, over 957678.50 frames. 
], batch size: 35, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:46:04,891 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=146652.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:46:17,060 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.040e+02 1.450e+02 1.708e+02 2.017e+02 4.995e+02, threshold=3.417e+02, percent-clipped=1.0 2023-03-27 07:46:21,865 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.1162, 4.6304, 4.4493, 2.6926, 4.7244, 3.5425, 0.9990, 3.4141], device='cuda:2'), covar=tensor([0.2289, 0.1645, 0.1359, 0.2836, 0.0901, 0.0849, 0.4522, 0.1276], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0180, 0.0161, 0.0131, 0.0162, 0.0124, 0.0150, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 07:46:28,568 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=146687.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 07:46:32,388 INFO [finetune.py:976] (2/7) Epoch 26, batch 3500, loss[loss=0.1839, simple_loss=0.2495, pruned_loss=0.05919, over 4849.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2419, pruned_loss=0.04855, over 958322.44 frames. ], batch size: 49, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:47:05,274 INFO [finetune.py:976] (2/7) Epoch 26, batch 3550, loss[loss=0.1897, simple_loss=0.2517, pruned_loss=0.06381, over 4935.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2401, pruned_loss=0.04854, over 957051.38 frames. ], batch size: 33, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:47:22,722 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.051e+02 1.430e+02 1.748e+02 2.315e+02 5.079e+02, threshold=3.497e+02, percent-clipped=6.0 2023-03-27 07:47:28,609 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.1939, 4.5266, 4.7226, 5.0255, 4.9030, 4.6068, 5.2908, 1.6442], device='cuda:2'), covar=tensor([0.0633, 0.0899, 0.0842, 0.0876, 0.1146, 0.1565, 0.0544, 0.6080], device='cuda:2'), in_proj_covar=tensor([0.0350, 0.0249, 0.0281, 0.0295, 0.0339, 0.0287, 0.0304, 0.0302], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:47:32,138 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9927, 1.8052, 2.2649, 3.7405, 2.5351, 2.6521, 1.3504, 3.0387], device='cuda:2'), covar=tensor([0.1628, 0.1255, 0.1375, 0.0445, 0.0752, 0.1496, 0.1839, 0.0468], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0162, 0.0101, 0.0133, 0.0124, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 07:47:38,113 INFO [finetune.py:976] (2/7) Epoch 26, batch 3600, loss[loss=0.2296, simple_loss=0.3066, pruned_loss=0.07629, over 4088.00 frames. ], tot_loss[loss=0.1677, simple_loss=0.2388, pruned_loss=0.04834, over 955864.56 frames. ], batch size: 65, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:47:47,321 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=146805.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:48:26,739 INFO [finetune.py:976] (2/7) Epoch 26, batch 3650, loss[loss=0.204, simple_loss=0.2693, pruned_loss=0.06932, over 4806.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2414, pruned_loss=0.04886, over 955361.27 frames. 
], batch size: 41, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:48:28,127 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=146844.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:48:39,246 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=146853.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:48:53,564 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.488e+01 1.511e+02 1.813e+02 2.229e+02 3.524e+02, threshold=3.627e+02, percent-clipped=1.0 2023-03-27 07:49:10,717 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1612, 1.2963, 1.3892, 0.7050, 1.3651, 1.5793, 1.6271, 1.3083], device='cuda:2'), covar=tensor([0.0980, 0.0687, 0.0614, 0.0579, 0.0547, 0.0678, 0.0362, 0.0788], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0150, 0.0129, 0.0124, 0.0131, 0.0130, 0.0143, 0.0150], device='cuda:2'), out_proj_covar=tensor([8.9800e-05, 1.0751e-04, 9.1922e-05, 8.6819e-05, 9.1795e-05, 9.2197e-05, 1.0161e-04, 1.0710e-04], device='cuda:2') 2023-03-27 07:49:12,956 INFO [finetune.py:976] (2/7) Epoch 26, batch 3700, loss[loss=0.1662, simple_loss=0.2418, pruned_loss=0.04535, over 4925.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2443, pruned_loss=0.04977, over 952900.57 frames. ], batch size: 38, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:49:21,434 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=146905.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:49:46,521 INFO [finetune.py:976] (2/7) Epoch 26, batch 3750, loss[loss=0.2039, simple_loss=0.2703, pruned_loss=0.06878, over 4824.00 frames. ], tot_loss[loss=0.1738, simple_loss=0.2461, pruned_loss=0.05075, over 952177.45 frames. ], batch size: 33, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:49:49,608 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=146947.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:50:01,714 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-27 07:50:03,845 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.820e+01 1.503e+02 1.791e+02 2.461e+02 5.017e+02, threshold=3.581e+02, percent-clipped=5.0 2023-03-27 07:50:12,649 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=146982.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 07:50:19,582 INFO [finetune.py:976] (2/7) Epoch 26, batch 3800, loss[loss=0.1638, simple_loss=0.2391, pruned_loss=0.04426, over 4811.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2469, pruned_loss=0.05121, over 952743.07 frames. ], batch size: 47, lr: 2.95e-03, grad_scale: 16.0 2023-03-27 07:50:55,309 INFO [finetune.py:976] (2/7) Epoch 26, batch 3850, loss[loss=0.1429, simple_loss=0.2208, pruned_loss=0.03246, over 4760.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2451, pruned_loss=0.0504, over 953115.20 frames. 
], batch size: 26, lr: 2.95e-03, grad_scale: 16.0 2023-03-27 07:50:56,034 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.0713, 1.3214, 1.4933, 1.3234, 1.4778, 2.4246, 1.2348, 1.4509], device='cuda:2'), covar=tensor([0.1074, 0.1827, 0.0948, 0.0932, 0.1646, 0.0375, 0.1549, 0.1831], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0080, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 07:51:21,146 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.070e+02 1.517e+02 1.854e+02 2.266e+02 5.483e+02, threshold=3.707e+02, percent-clipped=2.0 2023-03-27 07:51:26,182 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.86 vs. limit=2.0 2023-03-27 07:51:37,019 INFO [finetune.py:976] (2/7) Epoch 26, batch 3900, loss[loss=0.1855, simple_loss=0.2522, pruned_loss=0.05933, over 4720.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2415, pruned_loss=0.04904, over 953906.94 frames. ], batch size: 54, lr: 2.95e-03, grad_scale: 16.0 2023-03-27 07:51:37,764 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.6138, 1.4794, 1.4026, 0.8196, 1.6758, 1.8167, 1.8032, 1.4180], device='cuda:2'), covar=tensor([0.1041, 0.0705, 0.0614, 0.0611, 0.0415, 0.0587, 0.0347, 0.0685], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0150, 0.0129, 0.0124, 0.0131, 0.0131, 0.0143, 0.0150], device='cuda:2'), out_proj_covar=tensor([8.9957e-05, 1.0755e-04, 9.2142e-05, 8.6765e-05, 9.2035e-05, 9.2500e-05, 1.0176e-04, 1.0728e-04], device='cuda:2') 2023-03-27 07:52:09,603 INFO [finetune.py:976] (2/7) Epoch 26, batch 3950, loss[loss=0.1904, simple_loss=0.2539, pruned_loss=0.06343, over 4819.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.2393, pruned_loss=0.04823, over 954745.34 frames. ], batch size: 41, lr: 2.95e-03, grad_scale: 16.0 2023-03-27 07:52:27,907 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.014e+02 1.492e+02 1.682e+02 1.976e+02 2.814e+02, threshold=3.365e+02, percent-clipped=0.0 2023-03-27 07:52:42,788 INFO [finetune.py:976] (2/7) Epoch 26, batch 4000, loss[loss=0.1834, simple_loss=0.2566, pruned_loss=0.05507, over 4279.00 frames. ], tot_loss[loss=0.1685, simple_loss=0.2396, pruned_loss=0.04871, over 954655.68 frames. ], batch size: 65, lr: 2.95e-03, grad_scale: 16.0 2023-03-27 07:52:48,866 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=147200.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:53:15,813 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=147241.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:53:16,321 INFO [finetune.py:976] (2/7) Epoch 26, batch 4050, loss[loss=0.149, simple_loss=0.2261, pruned_loss=0.03589, over 4767.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2441, pruned_loss=0.0503, over 955286.59 frames. 
], batch size: 28, lr: 2.95e-03, grad_scale: 16.0 2023-03-27 07:53:21,650 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=147247.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:53:22,257 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3165, 1.3259, 1.5085, 0.7908, 1.5263, 1.5663, 1.5195, 1.2183], device='cuda:2'), covar=tensor([0.0577, 0.0780, 0.0647, 0.0950, 0.0927, 0.0562, 0.0581, 0.1304], device='cuda:2'), in_proj_covar=tensor([0.0129, 0.0134, 0.0139, 0.0117, 0.0126, 0.0135, 0.0137, 0.0158], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:53:48,586 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.158e+02 1.660e+02 1.920e+02 2.375e+02 4.575e+02, threshold=3.840e+02, percent-clipped=6.0 2023-03-27 07:53:58,896 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=147282.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:54:08,318 INFO [finetune.py:976] (2/7) Epoch 26, batch 4100, loss[loss=0.1536, simple_loss=0.2277, pruned_loss=0.03975, over 4830.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2452, pruned_loss=0.05058, over 954085.66 frames. ], batch size: 49, lr: 2.95e-03, grad_scale: 16.0 2023-03-27 07:54:13,959 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=147295.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:54:14,797 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0 2023-03-27 07:54:18,796 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=147302.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:54:24,630 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=147310.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:54:37,227 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=147330.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:54:37,320 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8163, 1.7133, 1.5431, 1.3325, 1.8068, 1.6095, 1.6343, 1.8084], device='cuda:2'), covar=tensor([0.1191, 0.1642, 0.2633, 0.2121, 0.2192, 0.1545, 0.2019, 0.1485], device='cuda:2'), in_proj_covar=tensor([0.0187, 0.0189, 0.0234, 0.0251, 0.0248, 0.0206, 0.0213, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:54:44,039 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0 2023-03-27 07:54:44,489 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.75 vs. limit=5.0 2023-03-27 07:54:44,910 INFO [finetune.py:976] (2/7) Epoch 26, batch 4150, loss[loss=0.19, simple_loss=0.2627, pruned_loss=0.05867, over 4885.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2465, pruned_loss=0.05067, over 953296.90 frames. 
], batch size: 32, lr: 2.95e-03, grad_scale: 16.0 2023-03-27 07:54:53,566 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.4221, 3.8375, 4.0507, 4.2676, 4.1727, 3.9097, 4.5069, 1.5545], device='cuda:2'), covar=tensor([0.0712, 0.0854, 0.0751, 0.0803, 0.1186, 0.1495, 0.0652, 0.5395], device='cuda:2'), in_proj_covar=tensor([0.0353, 0.0250, 0.0284, 0.0298, 0.0341, 0.0289, 0.0306, 0.0303], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:55:03,459 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.008e+02 1.520e+02 1.873e+02 2.208e+02 5.004e+02, threshold=3.746e+02, percent-clipped=1.0 2023-03-27 07:55:05,295 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=147371.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:55:18,278 INFO [finetune.py:976] (2/7) Epoch 26, batch 4200, loss[loss=0.1697, simple_loss=0.2374, pruned_loss=0.05104, over 4822.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2465, pruned_loss=0.05032, over 955076.24 frames. ], batch size: 39, lr: 2.95e-03, grad_scale: 16.0 2023-03-27 07:55:28,583 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=147407.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:55:46,457 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=147434.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:55:51,222 INFO [finetune.py:976] (2/7) Epoch 26, batch 4250, loss[loss=0.162, simple_loss=0.232, pruned_loss=0.04594, over 4826.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2454, pruned_loss=0.05026, over 952112.64 frames. ], batch size: 39, lr: 2.95e-03, grad_scale: 16.0 2023-03-27 07:56:16,334 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=147468.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:56:16,791 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.028e+02 1.473e+02 1.815e+02 2.255e+02 8.587e+02, threshold=3.630e+02, percent-clipped=2.0 2023-03-27 07:56:34,765 INFO [finetune.py:976] (2/7) Epoch 26, batch 4300, loss[loss=0.1516, simple_loss=0.2163, pruned_loss=0.04349, over 4906.00 frames. ], tot_loss[loss=0.1712, simple_loss=0.2427, pruned_loss=0.04979, over 954799.66 frames. ], batch size: 36, lr: 2.95e-03, grad_scale: 16.0 2023-03-27 07:56:36,700 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=147495.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:56:40,201 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=147500.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:57:08,536 INFO [finetune.py:976] (2/7) Epoch 26, batch 4350, loss[loss=0.1544, simple_loss=0.2246, pruned_loss=0.04216, over 4730.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.2394, pruned_loss=0.04874, over 954738.31 frames. 
], batch size: 54, lr: 2.95e-03, grad_scale: 16.0 2023-03-27 07:57:12,264 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=147548.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:57:26,970 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.870e+01 1.462e+02 1.667e+02 1.917e+02 5.708e+02, threshold=3.333e+02, percent-clipped=2.0 2023-03-27 07:57:31,208 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6063, 1.5446, 1.3729, 1.7461, 1.6542, 1.7096, 1.1630, 1.3975], device='cuda:2'), covar=tensor([0.2117, 0.1894, 0.1948, 0.1552, 0.1523, 0.1209, 0.2350, 0.1781], device='cuda:2'), in_proj_covar=tensor([0.0245, 0.0210, 0.0214, 0.0198, 0.0244, 0.0190, 0.0217, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:57:42,396 INFO [finetune.py:976] (2/7) Epoch 26, batch 4400, loss[loss=0.2058, simple_loss=0.2867, pruned_loss=0.06245, over 4841.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.2397, pruned_loss=0.0486, over 953661.86 frames. ], batch size: 49, lr: 2.95e-03, grad_scale: 16.0 2023-03-27 07:57:45,506 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=147597.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:58:06,762 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1297, 1.4809, 0.9847, 1.8457, 2.3865, 1.8091, 1.6488, 1.9348], device='cuda:2'), covar=tensor([0.1408, 0.1938, 0.1897, 0.1166, 0.1985, 0.1858, 0.1427, 0.1899], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0109, 0.0092, 0.0120, 0.0094, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 07:58:16,297 INFO [finetune.py:976] (2/7) Epoch 26, batch 4450, loss[loss=0.2196, simple_loss=0.294, pruned_loss=0.07256, over 4808.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2442, pruned_loss=0.05007, over 952575.99 frames. ], batch size: 45, lr: 2.95e-03, grad_scale: 16.0 2023-03-27 07:58:27,313 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=147659.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:58:34,817 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=147666.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:58:36,565 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.042e+02 1.542e+02 1.864e+02 2.192e+02 3.736e+02, threshold=3.727e+02, percent-clipped=4.0 2023-03-27 07:59:04,409 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9713, 3.7476, 3.7395, 2.1347, 3.9911, 3.0385, 1.3034, 2.8648], device='cuda:2'), covar=tensor([0.2587, 0.1907, 0.1360, 0.2873, 0.0885, 0.0897, 0.3628, 0.1292], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0179, 0.0160, 0.0129, 0.0161, 0.0124, 0.0148, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 07:59:06,169 INFO [finetune.py:976] (2/7) Epoch 26, batch 4500, loss[loss=0.1771, simple_loss=0.2574, pruned_loss=0.04838, over 4762.00 frames. ], tot_loss[loss=0.1738, simple_loss=0.246, pruned_loss=0.05076, over 951389.59 frames. 
], batch size: 54, lr: 2.95e-03, grad_scale: 16.0 2023-03-27 07:59:37,023 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=147720.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:59:41,146 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1275, 2.0517, 1.7113, 2.1085, 1.9494, 1.9134, 1.9699, 2.6934], device='cuda:2'), covar=tensor([0.3902, 0.4156, 0.3380, 0.3648, 0.4177, 0.2557, 0.3922, 0.1668], device='cuda:2'), in_proj_covar=tensor([0.0290, 0.0265, 0.0237, 0.0277, 0.0260, 0.0230, 0.0258, 0.0239], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 07:59:52,137 INFO [finetune.py:976] (2/7) Epoch 26, batch 4550, loss[loss=0.1692, simple_loss=0.2493, pruned_loss=0.04459, over 4747.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2473, pruned_loss=0.05116, over 950999.28 frames. ], batch size: 27, lr: 2.95e-03, grad_scale: 32.0 2023-03-27 08:00:04,826 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=147763.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:00:09,335 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.722e+01 1.488e+02 1.756e+02 2.293e+02 4.562e+02, threshold=3.512e+02, percent-clipped=2.0 2023-03-27 08:00:23,115 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.57 vs. limit=2.0 2023-03-27 08:00:24,551 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=147790.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:00:25,690 INFO [finetune.py:976] (2/7) Epoch 26, batch 4600, loss[loss=0.1818, simple_loss=0.2508, pruned_loss=0.05639, over 4760.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2448, pruned_loss=0.04958, over 952067.08 frames. ], batch size: 26, lr: 2.95e-03, grad_scale: 32.0 2023-03-27 08:00:39,900 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=147815.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:00:59,126 INFO [finetune.py:976] (2/7) Epoch 26, batch 4650, loss[loss=0.1846, simple_loss=0.2533, pruned_loss=0.058, over 4929.00 frames. ], tot_loss[loss=0.169, simple_loss=0.2415, pruned_loss=0.04825, over 953818.85 frames. ], batch size: 38, lr: 2.95e-03, grad_scale: 32.0 2023-03-27 08:01:08,853 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=147857.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:01:15,988 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 7.955e+01 1.458e+02 1.712e+02 2.175e+02 4.467e+02, threshold=3.424e+02, percent-clipped=3.0 2023-03-27 08:01:21,846 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=147876.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:01:39,628 INFO [finetune.py:976] (2/7) Epoch 26, batch 4700, loss[loss=0.169, simple_loss=0.2339, pruned_loss=0.05203, over 4851.00 frames. ], tot_loss[loss=0.1669, simple_loss=0.2387, pruned_loss=0.04749, over 954960.00 frames. 
], batch size: 44, lr: 2.95e-03, grad_scale: 32.0 2023-03-27 08:01:46,510 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=147897.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:01:55,417 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=147911.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:01:59,699 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=147918.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:02:16,231 INFO [finetune.py:976] (2/7) Epoch 26, batch 4750, loss[loss=0.1955, simple_loss=0.2597, pruned_loss=0.06561, over 4846.00 frames. ], tot_loss[loss=0.166, simple_loss=0.2374, pruned_loss=0.04731, over 956211.45 frames. ], batch size: 49, lr: 2.95e-03, grad_scale: 32.0 2023-03-27 08:02:18,628 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=147945.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:02:32,359 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=147966.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:02:34,089 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.372e+01 1.470e+02 1.654e+02 2.049e+02 2.990e+02, threshold=3.309e+02, percent-clipped=0.0 2023-03-27 08:02:35,994 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=147972.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:02:41,807 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4871, 1.4122, 1.8566, 1.7432, 1.5064, 3.1110, 1.3691, 1.5216], device='cuda:2'), covar=tensor([0.0904, 0.1681, 0.1143, 0.0869, 0.1538, 0.0264, 0.1410, 0.1668], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0081, 0.0072, 0.0076, 0.0090, 0.0080, 0.0085, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 08:02:50,079 INFO [finetune.py:976] (2/7) Epoch 26, batch 4800, loss[loss=0.1198, simple_loss=0.1939, pruned_loss=0.02284, over 4734.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2395, pruned_loss=0.04785, over 957118.98 frames. 
], batch size: 23, lr: 2.95e-03, grad_scale: 32.0 2023-03-27 08:03:04,513 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1766, 2.2301, 1.7386, 2.0783, 2.0656, 2.0416, 2.0730, 2.9835], device='cuda:2'), covar=tensor([0.4119, 0.4560, 0.3642, 0.4446, 0.4558, 0.2602, 0.4233, 0.1644], device='cuda:2'), in_proj_covar=tensor([0.0290, 0.0266, 0.0238, 0.0278, 0.0261, 0.0231, 0.0259, 0.0240], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:03:06,225 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=148014.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:03:06,846 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=148015.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:03:09,947 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5289, 2.2860, 2.1021, 2.3129, 2.2054, 2.3033, 2.1828, 2.8954], device='cuda:2'), covar=tensor([0.3501, 0.4487, 0.3286, 0.3822, 0.3789, 0.2452, 0.4034, 0.1927], device='cuda:2'), in_proj_covar=tensor([0.0290, 0.0265, 0.0238, 0.0278, 0.0261, 0.0231, 0.0259, 0.0240], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:03:24,536 INFO [finetune.py:976] (2/7) Epoch 26, batch 4850, loss[loss=0.1477, simple_loss=0.224, pruned_loss=0.03569, over 4726.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.2423, pruned_loss=0.04849, over 956486.11 frames. ], batch size: 23, lr: 2.95e-03, grad_scale: 32.0 2023-03-27 08:03:39,227 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=148063.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:03:42,781 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.092e+02 1.530e+02 1.908e+02 2.333e+02 3.886e+02, threshold=3.817e+02, percent-clipped=4.0 2023-03-27 08:03:44,275 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-27 08:04:04,135 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=148090.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:04:05,252 INFO [finetune.py:976] (2/7) Epoch 26, batch 4900, loss[loss=0.1874, simple_loss=0.2444, pruned_loss=0.06522, over 4553.00 frames. ], tot_loss[loss=0.17, simple_loss=0.2432, pruned_loss=0.04842, over 955743.16 frames. ], batch size: 20, lr: 2.95e-03, grad_scale: 32.0 2023-03-27 08:04:30,816 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=148111.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:04:42,122 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.51 vs. limit=2.0 2023-03-27 08:04:57,727 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=148138.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:05:00,561 INFO [finetune.py:976] (2/7) Epoch 26, batch 4950, loss[loss=0.1781, simple_loss=0.2631, pruned_loss=0.04657, over 4811.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2454, pruned_loss=0.04954, over 955273.02 frames. 
], batch size: 39, lr: 2.95e-03, grad_scale: 32.0 2023-03-27 08:05:15,400 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5592, 1.4659, 1.6863, 1.7616, 1.5277, 3.2628, 1.4427, 1.5671], device='cuda:2'), covar=tensor([0.0928, 0.1751, 0.1175, 0.0903, 0.1678, 0.0232, 0.1457, 0.1873], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0080, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 08:05:18,897 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.030e+01 1.572e+02 1.871e+02 2.257e+02 5.603e+02, threshold=3.742e+02, percent-clipped=1.0 2023-03-27 08:05:20,237 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=148171.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:05:33,993 INFO [finetune.py:976] (2/7) Epoch 26, batch 5000, loss[loss=0.1848, simple_loss=0.2574, pruned_loss=0.05606, over 4780.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2441, pruned_loss=0.04895, over 955859.44 frames. ], batch size: 51, lr: 2.95e-03, grad_scale: 32.0 2023-03-27 08:05:35,264 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0 2023-03-27 08:05:47,036 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6818, 2.4544, 2.0797, 1.0625, 2.2075, 2.0661, 1.8059, 2.2707], device='cuda:2'), covar=tensor([0.0843, 0.0824, 0.1765, 0.2051, 0.1450, 0.2135, 0.2153, 0.0954], device='cuda:2'), in_proj_covar=tensor([0.0171, 0.0192, 0.0200, 0.0182, 0.0209, 0.0211, 0.0224, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:05:47,055 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9020, 1.7973, 1.6178, 2.0383, 2.4828, 2.0520, 1.7458, 1.5829], device='cuda:2'), covar=tensor([0.1993, 0.1797, 0.1804, 0.1505, 0.1509, 0.1080, 0.2091, 0.1843], device='cuda:2'), in_proj_covar=tensor([0.0247, 0.0212, 0.0215, 0.0199, 0.0247, 0.0192, 0.0218, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:05:48,837 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=148213.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:05:52,554 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0 2023-03-27 08:06:07,384 INFO [finetune.py:976] (2/7) Epoch 26, batch 5050, loss[loss=0.1594, simple_loss=0.2396, pruned_loss=0.0396, over 4821.00 frames. ], tot_loss[loss=0.1694, simple_loss=0.2416, pruned_loss=0.04855, over 956298.46 frames. 
], batch size: 33, lr: 2.95e-03, grad_scale: 32.0 2023-03-27 08:06:22,188 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.7855, 1.6601, 1.7015, 1.0799, 1.9157, 2.1810, 2.0794, 1.5933], device='cuda:2'), covar=tensor([0.0891, 0.0691, 0.0571, 0.0537, 0.0460, 0.0542, 0.0322, 0.0766], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0149, 0.0128, 0.0123, 0.0131, 0.0130, 0.0142, 0.0150], device='cuda:2'), out_proj_covar=tensor([8.8913e-05, 1.0701e-04, 9.1234e-05, 8.6173e-05, 9.1892e-05, 9.1849e-05, 1.0109e-04, 1.0714e-04], device='cuda:2') 2023-03-27 08:06:25,052 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=148267.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:06:26,158 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.004e+02 1.490e+02 1.796e+02 2.082e+02 4.496e+02, threshold=3.592e+02, percent-clipped=3.0 2023-03-27 08:06:34,532 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1839, 1.5771, 0.7956, 1.9910, 2.5221, 1.7505, 1.7947, 1.9821], device='cuda:2'), covar=tensor([0.1358, 0.1862, 0.2039, 0.1113, 0.1771, 0.1863, 0.1357, 0.1989], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0093, 0.0109, 0.0092, 0.0119, 0.0093, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 08:06:37,573 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=148287.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:06:40,479 INFO [finetune.py:976] (2/7) Epoch 26, batch 5100, loss[loss=0.1871, simple_loss=0.246, pruned_loss=0.06408, over 4831.00 frames. ], tot_loss[loss=0.1663, simple_loss=0.2379, pruned_loss=0.04734, over 956417.73 frames. ], batch size: 33, lr: 2.95e-03, grad_scale: 32.0 2023-03-27 08:07:02,530 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=148315.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:07:15,640 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7776, 3.7289, 3.4829, 1.8913, 3.7621, 2.9346, 1.0087, 2.6181], device='cuda:2'), covar=tensor([0.2609, 0.2303, 0.1635, 0.3205, 0.1153, 0.1026, 0.4332, 0.1636], device='cuda:2'), in_proj_covar=tensor([0.0150, 0.0179, 0.0160, 0.0129, 0.0161, 0.0124, 0.0147, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 08:07:16,288 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.3023, 2.8872, 2.7343, 1.4261, 2.8481, 2.3318, 2.1803, 2.6015], device='cuda:2'), covar=tensor([0.0973, 0.0754, 0.1585, 0.2013, 0.1511, 0.2208, 0.1989, 0.1160], device='cuda:2'), in_proj_covar=tensor([0.0171, 0.0191, 0.0199, 0.0181, 0.0209, 0.0210, 0.0223, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:07:22,781 INFO [finetune.py:976] (2/7) Epoch 26, batch 5150, loss[loss=0.1399, simple_loss=0.2195, pruned_loss=0.03015, over 4741.00 frames. ], tot_loss[loss=0.1666, simple_loss=0.2378, pruned_loss=0.04774, over 954695.80 frames. 
], batch size: 27, lr: 2.95e-03, grad_scale: 32.0 2023-03-27 08:07:24,162 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3511, 2.0958, 1.7654, 0.8715, 1.8973, 1.8265, 1.6216, 1.9396], device='cuda:2'), covar=tensor([0.0724, 0.0750, 0.1372, 0.1843, 0.1307, 0.2189, 0.2146, 0.0889], device='cuda:2'), in_proj_covar=tensor([0.0171, 0.0191, 0.0199, 0.0181, 0.0209, 0.0210, 0.0223, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:07:27,014 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=148348.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:07:36,970 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=148363.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:07:37,620 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1066, 1.2549, 1.3648, 1.2661, 1.3979, 2.4084, 1.1198, 1.3783], device='cuda:2'), covar=tensor([0.0997, 0.1719, 0.1383, 0.0909, 0.1551, 0.0388, 0.1529, 0.1737], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0080, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:2') 2023-03-27 08:07:41,442 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.157e+02 1.549e+02 1.828e+02 2.233e+02 5.689e+02, threshold=3.657e+02, percent-clipped=4.0 2023-03-27 08:07:55,902 INFO [finetune.py:976] (2/7) Epoch 26, batch 5200, loss[loss=0.1752, simple_loss=0.258, pruned_loss=0.04627, over 4815.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2426, pruned_loss=0.04934, over 956079.23 frames. ], batch size: 40, lr: 2.95e-03, grad_scale: 16.0 2023-03-27 08:08:06,146 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7050, 1.5051, 2.1906, 3.3146, 2.1904, 2.2731, 1.2055, 2.7959], device='cuda:2'), covar=tensor([0.1568, 0.1378, 0.1190, 0.0517, 0.0805, 0.1891, 0.1528, 0.0420], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0116, 0.0132, 0.0163, 0.0101, 0.0135, 0.0125, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 08:08:21,946 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=148429.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:08:29,695 INFO [finetune.py:976] (2/7) Epoch 26, batch 5250, loss[loss=0.1551, simple_loss=0.2439, pruned_loss=0.03319, over 4834.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2454, pruned_loss=0.05046, over 954635.35 frames. 
], batch size: 47, lr: 2.94e-03, grad_scale: 16.0 2023-03-27 08:08:48,647 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.102e+02 1.514e+02 1.742e+02 2.193e+02 4.299e+02, threshold=3.484e+02, percent-clipped=1.0 2023-03-27 08:08:49,333 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=148471.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:08:53,915 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8659, 2.5257, 2.2125, 1.2040, 2.2813, 2.2130, 1.9277, 2.3101], device='cuda:2'), covar=tensor([0.0938, 0.0938, 0.1534, 0.2015, 0.1400, 0.2150, 0.2205, 0.1009], device='cuda:2'), in_proj_covar=tensor([0.0171, 0.0192, 0.0199, 0.0182, 0.0209, 0.0211, 0.0223, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:09:02,322 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=148490.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:09:03,426 INFO [finetune.py:976] (2/7) Epoch 26, batch 5300, loss[loss=0.1492, simple_loss=0.2331, pruned_loss=0.03264, over 4839.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2466, pruned_loss=0.05072, over 955656.16 frames. ], batch size: 47, lr: 2.94e-03, grad_scale: 16.0 2023-03-27 08:09:07,124 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=148498.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:09:08,445 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-27 08:09:20,022 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=148513.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:09:28,365 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=148519.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:09:36,391 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6867, 1.5640, 2.1500, 3.2850, 2.2668, 2.2889, 0.8871, 2.8017], device='cuda:2'), covar=tensor([0.1615, 0.1390, 0.1265, 0.0538, 0.0781, 0.1659, 0.1950, 0.0446], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0116, 0.0132, 0.0163, 0.0101, 0.0135, 0.0125, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 08:09:54,000 INFO [finetune.py:976] (2/7) Epoch 26, batch 5350, loss[loss=0.1998, simple_loss=0.2563, pruned_loss=0.07163, over 4773.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.247, pruned_loss=0.05084, over 954555.17 frames. ], batch size: 54, lr: 2.94e-03, grad_scale: 16.0 2023-03-27 08:10:12,443 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=148559.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:10:12,587 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.59 vs. 
limit=5.0 2023-03-27 08:10:13,592 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=148561.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:10:17,774 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=148567.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:10:19,469 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.029e+02 1.412e+02 1.653e+02 1.941e+02 3.220e+02, threshold=3.306e+02, percent-clipped=0.0 2023-03-27 08:10:28,294 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5315, 1.0234, 0.7412, 1.4037, 2.0659, 0.8305, 1.3155, 1.4168], device='cuda:2'), covar=tensor([0.1523, 0.2074, 0.1744, 0.1173, 0.1873, 0.1861, 0.1497, 0.1925], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0093, 0.0110, 0.0092, 0.0120, 0.0093, 0.0099, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 08:10:34,698 INFO [finetune.py:976] (2/7) Epoch 26, batch 5400, loss[loss=0.1672, simple_loss=0.2247, pruned_loss=0.05483, over 4868.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.243, pruned_loss=0.04922, over 955728.80 frames. ], batch size: 31, lr: 2.94e-03, grad_scale: 16.0 2023-03-27 08:10:34,775 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5408, 3.3753, 3.2927, 1.4438, 3.5100, 2.5232, 0.7444, 2.3221], device='cuda:2'), covar=tensor([0.2056, 0.2037, 0.1494, 0.3363, 0.1227, 0.1112, 0.3968, 0.1652], device='cuda:2'), in_proj_covar=tensor([0.0149, 0.0178, 0.0159, 0.0128, 0.0159, 0.0123, 0.0147, 0.0123], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 08:10:49,743 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=148615.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:11:07,906 INFO [finetune.py:976] (2/7) Epoch 26, batch 5450, loss[loss=0.1284, simple_loss=0.2067, pruned_loss=0.025, over 4692.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.2405, pruned_loss=0.04856, over 954804.28 frames. ], batch size: 23, lr: 2.94e-03, grad_scale: 16.0 2023-03-27 08:11:08,563 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=148643.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:11:13,463 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=148651.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:11:25,813 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.571e+02 1.855e+02 2.174e+02 4.125e+02, threshold=3.711e+02, percent-clipped=2.0 2023-03-27 08:11:37,731 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=2.10 vs. limit=2.0 2023-03-27 08:11:41,098 INFO [finetune.py:976] (2/7) Epoch 26, batch 5500, loss[loss=0.1176, simple_loss=0.1873, pruned_loss=0.02397, over 4753.00 frames. ], tot_loss[loss=0.1667, simple_loss=0.2376, pruned_loss=0.04788, over 954671.19 frames. 
], batch size: 27, lr: 2.94e-03, grad_scale: 16.0 2023-03-27 08:11:43,576 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9122, 1.1255, 1.9844, 1.9392, 1.7480, 1.6905, 1.8351, 1.9421], device='cuda:2'), covar=tensor([0.3747, 0.3808, 0.3105, 0.3294, 0.4365, 0.3650, 0.4094, 0.2852], device='cuda:2'), in_proj_covar=tensor([0.0264, 0.0246, 0.0266, 0.0294, 0.0294, 0.0270, 0.0300, 0.0251], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:11:53,971 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=148712.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:11:56,392 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=148716.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:12:24,626 INFO [finetune.py:976] (2/7) Epoch 26, batch 5550, loss[loss=0.1862, simple_loss=0.2728, pruned_loss=0.04974, over 4812.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2392, pruned_loss=0.04747, over 955738.15 frames. ], batch size: 41, lr: 2.94e-03, grad_scale: 16.0 2023-03-27 08:12:30,880 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1288, 1.3776, 1.9907, 1.9911, 1.8396, 1.8085, 1.9441, 1.9344], device='cuda:2'), covar=tensor([0.3964, 0.4021, 0.3561, 0.3819, 0.4992, 0.4029, 0.4402, 0.3116], device='cuda:2'), in_proj_covar=tensor([0.0264, 0.0246, 0.0266, 0.0294, 0.0294, 0.0270, 0.0299, 0.0251], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:12:42,332 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.664e+01 1.578e+02 1.917e+02 2.288e+02 4.413e+02, threshold=3.834e+02, percent-clipped=2.0 2023-03-27 08:12:45,303 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0 2023-03-27 08:12:47,522 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=148777.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:12:52,461 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=148785.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:12:56,510 INFO [finetune.py:976] (2/7) Epoch 26, batch 5600, loss[loss=0.1345, simple_loss=0.2006, pruned_loss=0.03416, over 4196.00 frames. ], tot_loss[loss=0.1712, simple_loss=0.2439, pruned_loss=0.04924, over 954807.62 frames. ], batch size: 18, lr: 2.94e-03, grad_scale: 16.0 2023-03-27 08:13:24,083 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8903, 1.4514, 2.0345, 1.9976, 1.7919, 1.7359, 1.9480, 1.9508], device='cuda:2'), covar=tensor([0.4140, 0.4062, 0.3165, 0.3677, 0.4810, 0.3889, 0.4516, 0.2957], device='cuda:2'), in_proj_covar=tensor([0.0265, 0.0246, 0.0266, 0.0294, 0.0294, 0.0270, 0.0299, 0.0251], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:13:25,295 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.60 vs. limit=2.0 2023-03-27 08:13:25,691 INFO [finetune.py:976] (2/7) Epoch 26, batch 5650, loss[loss=0.1776, simple_loss=0.2612, pruned_loss=0.04696, over 4814.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2463, pruned_loss=0.04984, over 955103.91 frames. 
], batch size: 45, lr: 2.94e-03, grad_scale: 16.0 2023-03-27 08:13:32,818 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=148854.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:13:41,298 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6033, 2.5122, 2.8262, 1.5532, 2.9851, 3.3122, 2.8393, 2.3583], device='cuda:2'), covar=tensor([0.0802, 0.0657, 0.0354, 0.0604, 0.0503, 0.0557, 0.0414, 0.0687], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0149, 0.0129, 0.0123, 0.0131, 0.0130, 0.0142, 0.0150], device='cuda:2'), out_proj_covar=tensor([8.9052e-05, 1.0679e-04, 9.1603e-05, 8.6459e-05, 9.1664e-05, 9.1967e-05, 1.0137e-04, 1.0740e-04], device='cuda:2') 2023-03-27 08:13:42,329 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.032e+02 1.500e+02 1.770e+02 2.150e+02 4.859e+02, threshold=3.539e+02, percent-clipped=1.0 2023-03-27 08:13:50,687 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3666, 0.8796, 0.6550, 1.2436, 1.8490, 0.7227, 1.0762, 1.1497], device='cuda:2'), covar=tensor([0.1415, 0.2145, 0.1614, 0.1162, 0.1563, 0.1773, 0.1476, 0.2079], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0110, 0.0092, 0.0120, 0.0094, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 08:13:55,305 INFO [finetune.py:976] (2/7) Epoch 26, batch 5700, loss[loss=0.1409, simple_loss=0.2001, pruned_loss=0.04087, over 4198.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2423, pruned_loss=0.0496, over 933295.53 frames. ], batch size: 18, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:14:24,189 INFO [finetune.py:976] (2/7) Epoch 27, batch 0, loss[loss=0.1993, simple_loss=0.2703, pruned_loss=0.06417, over 4907.00 frames. ], tot_loss[loss=0.1993, simple_loss=0.2703, pruned_loss=0.06417, over 4907.00 frames. ], batch size: 46, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:14:24,190 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-27 08:14:30,332 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8005, 1.3034, 0.9945, 1.7929, 2.2597, 1.2112, 1.6443, 1.6308], device='cuda:2'), covar=tensor([0.1446, 0.1891, 0.1751, 0.1071, 0.1805, 0.2175, 0.1297, 0.1980], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0110, 0.0092, 0.0120, 0.0094, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 08:14:30,813 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5645, 1.4357, 1.4153, 1.4884, 1.7639, 1.7436, 1.4809, 1.3524], device='cuda:2'), covar=tensor([0.0426, 0.0324, 0.0617, 0.0340, 0.0290, 0.0346, 0.0383, 0.0411], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0105, 0.0146, 0.0110, 0.0100, 0.0114, 0.0103, 0.0112], device='cuda:2'), out_proj_covar=tensor([7.7685e-05, 8.0337e-05, 1.1349e-04, 8.4398e-05, 7.7821e-05, 8.4339e-05, 7.6199e-05, 8.4954e-05], device='cuda:2') 2023-03-27 08:14:40,695 INFO [finetune.py:1010] (2/7) Epoch 27, validation: loss=0.1593, simple_loss=0.2269, pruned_loss=0.04586, over 2265189.00 frames. 
2023-03-27 08:14:40,695 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB 2023-03-27 08:14:57,052 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=148943.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:15:27,447 INFO [finetune.py:976] (2/7) Epoch 27, batch 50, loss[loss=0.1867, simple_loss=0.2541, pruned_loss=0.05971, over 4786.00 frames. ], tot_loss[loss=0.1773, simple_loss=0.2506, pruned_loss=0.052, over 217349.50 frames. ], batch size: 29, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:15:28,074 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.253e+01 1.427e+02 1.731e+02 2.058e+02 3.661e+02, threshold=3.462e+02, percent-clipped=4.0 2023-03-27 08:15:44,091 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=148991.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:15:53,996 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=149007.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:16:03,651 INFO [finetune.py:976] (2/7) Epoch 27, batch 100, loss[loss=0.1397, simple_loss=0.2159, pruned_loss=0.03175, over 4772.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2429, pruned_loss=0.04919, over 381850.44 frames. ], batch size: 26, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:16:18,838 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0 2023-03-27 08:16:34,820 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.5575, 1.5542, 1.6222, 0.8046, 1.6462, 1.8939, 1.9080, 1.4500], device='cuda:2'), covar=tensor([0.0790, 0.0576, 0.0472, 0.0541, 0.0432, 0.0539, 0.0291, 0.0597], device='cuda:2'), in_proj_covar=tensor([0.0123, 0.0149, 0.0130, 0.0124, 0.0131, 0.0131, 0.0143, 0.0151], device='cuda:2'), out_proj_covar=tensor([8.9482e-05, 1.0733e-04, 9.2393e-05, 8.7064e-05, 9.2094e-05, 9.2597e-05, 1.0171e-04, 1.0784e-04], device='cuda:2') 2023-03-27 08:16:36,435 INFO [finetune.py:976] (2/7) Epoch 27, batch 150, loss[loss=0.1511, simple_loss=0.211, pruned_loss=0.04558, over 4725.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2384, pruned_loss=0.04817, over 509513.93 frames. ], batch size: 59, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:16:37,488 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.570e+01 1.451e+02 1.770e+02 2.054e+02 3.397e+02, threshold=3.539e+02, percent-clipped=0.0 2023-03-27 08:16:39,137 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=149072.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:16:47,476 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=149085.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:17:09,497 INFO [finetune.py:976] (2/7) Epoch 27, batch 200, loss[loss=0.1494, simple_loss=0.2279, pruned_loss=0.03544, over 4848.00 frames. ], tot_loss[loss=0.165, simple_loss=0.2356, pruned_loss=0.04716, over 609013.62 frames. ], batch size: 49, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:17:19,403 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=149133.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:17:25,101 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.53 vs. 
limit=5.0 2023-03-27 08:17:39,175 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=149154.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:17:52,878 INFO [finetune.py:976] (2/7) Epoch 27, batch 250, loss[loss=0.2578, simple_loss=0.3075, pruned_loss=0.104, over 4147.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.2389, pruned_loss=0.04841, over 685867.82 frames. ], batch size: 65, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:17:53,480 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.507e+01 1.547e+02 1.763e+02 2.073e+02 3.560e+02, threshold=3.526e+02, percent-clipped=1.0 2023-03-27 08:18:14,780 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=149202.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:18:25,585 INFO [finetune.py:976] (2/7) Epoch 27, batch 300, loss[loss=0.1863, simple_loss=0.2687, pruned_loss=0.05194, over 4827.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2433, pruned_loss=0.04949, over 746099.85 frames. ], batch size: 38, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:18:44,244 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.0576, 2.7240, 2.4755, 1.4041, 2.6851, 2.1457, 2.1660, 2.4994], device='cuda:2'), covar=tensor([0.0968, 0.0874, 0.2003, 0.2137, 0.1557, 0.2282, 0.1955, 0.1145], device='cuda:2'), in_proj_covar=tensor([0.0172, 0.0191, 0.0199, 0.0181, 0.0208, 0.0211, 0.0222, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:18:58,793 INFO [finetune.py:976] (2/7) Epoch 27, batch 350, loss[loss=0.1713, simple_loss=0.2448, pruned_loss=0.0489, over 4871.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2454, pruned_loss=0.05037, over 791915.78 frames. ], batch size: 34, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:18:59,397 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.129e+02 1.601e+02 1.876e+02 2.140e+02 5.128e+02, threshold=3.753e+02, percent-clipped=1.0 2023-03-27 08:19:00,762 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2384, 1.6114, 0.9142, 2.1045, 2.5324, 1.8940, 2.0482, 2.1807], device='cuda:2'), covar=tensor([0.1312, 0.1832, 0.1780, 0.1094, 0.1838, 0.1928, 0.1172, 0.1912], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0110, 0.0092, 0.0120, 0.0094, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 08:19:08,306 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7023, 2.5898, 2.1300, 2.9682, 2.6251, 2.2268, 3.1313, 2.6609], device='cuda:2'), covar=tensor([0.1293, 0.2147, 0.2983, 0.2272, 0.2549, 0.1674, 0.2515, 0.1816], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0192, 0.0240, 0.0257, 0.0252, 0.0210, 0.0217, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:19:24,864 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=149307.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:19:32,719 INFO [finetune.py:976] (2/7) Epoch 27, batch 400, loss[loss=0.1386, simple_loss=0.2267, pruned_loss=0.02525, over 4777.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2463, pruned_loss=0.05007, over 830737.85 frames. 
], batch size: 29, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:19:51,550 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8588, 1.8714, 1.6014, 2.0591, 2.3158, 2.0044, 1.6361, 1.4931], device='cuda:2'), covar=tensor([0.2103, 0.1893, 0.1969, 0.1518, 0.1641, 0.1158, 0.2320, 0.1912], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0211, 0.0215, 0.0199, 0.0246, 0.0192, 0.0218, 0.0206], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:20:00,666 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6276, 1.6024, 1.4920, 1.6610, 1.3183, 3.6437, 1.4421, 1.7840], device='cuda:2'), covar=tensor([0.3227, 0.2478, 0.2125, 0.2353, 0.1648, 0.0201, 0.2650, 0.1306], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0124, 0.0113, 0.0095, 0.0094, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 08:20:04,465 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.28 vs. limit=5.0 2023-03-27 08:20:07,339 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=149355.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:20:16,468 INFO [finetune.py:976] (2/7) Epoch 27, batch 450, loss[loss=0.1646, simple_loss=0.2345, pruned_loss=0.04735, over 4762.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2453, pruned_loss=0.04975, over 858080.59 frames. ], batch size: 27, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:20:17,064 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.956e+01 1.496e+02 1.736e+02 2.126e+02 4.914e+02, threshold=3.471e+02, percent-clipped=1.0 2023-03-27 08:20:17,230 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8760, 1.3805, 1.9756, 1.9027, 1.6991, 1.6617, 1.8426, 1.8672], device='cuda:2'), covar=tensor([0.3472, 0.3772, 0.2762, 0.3667, 0.4507, 0.3841, 0.4165, 0.2832], device='cuda:2'), in_proj_covar=tensor([0.0267, 0.0248, 0.0268, 0.0296, 0.0296, 0.0272, 0.0302, 0.0253], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:20:17,795 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=149372.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:20:32,873 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-27 08:21:04,694 INFO [finetune.py:976] (2/7) Epoch 27, batch 500, loss[loss=0.1438, simple_loss=0.2127, pruned_loss=0.03748, over 4737.00 frames. ], tot_loss[loss=0.17, simple_loss=0.2417, pruned_loss=0.04918, over 877289.06 frames. ], batch size: 59, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:21:04,766 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=149420.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:21:29,587 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.48 vs. limit=2.0 2023-03-27 08:21:38,463 INFO [finetune.py:976] (2/7) Epoch 27, batch 550, loss[loss=0.1482, simple_loss=0.2172, pruned_loss=0.03956, over 4869.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2391, pruned_loss=0.04825, over 895741.06 frames. 
], batch size: 34, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:21:39,061 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.918e+01 1.466e+02 1.717e+02 2.125e+02 3.295e+02, threshold=3.435e+02, percent-clipped=0.0 2023-03-27 08:22:08,651 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=149514.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:22:12,133 INFO [finetune.py:976] (2/7) Epoch 27, batch 600, loss[loss=0.1226, simple_loss=0.2003, pruned_loss=0.02242, over 4766.00 frames. ], tot_loss[loss=0.1689, simple_loss=0.2398, pruned_loss=0.04901, over 908789.98 frames. ], batch size: 23, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:22:20,089 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3156, 1.3006, 1.6149, 2.4455, 1.7131, 2.0588, 1.0208, 2.1646], device='cuda:2'), covar=tensor([0.1731, 0.1413, 0.1159, 0.0642, 0.0882, 0.1291, 0.1413, 0.0551], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0163, 0.0100, 0.0135, 0.0124, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 08:22:27,602 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8227, 1.7075, 1.5167, 1.9632, 2.0797, 1.8772, 1.4043, 1.5071], device='cuda:2'), covar=tensor([0.2047, 0.1916, 0.1889, 0.1547, 0.1531, 0.1185, 0.2330, 0.1864], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0212, 0.0215, 0.0199, 0.0247, 0.0193, 0.0218, 0.0206], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:22:45,161 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=149565.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:22:48,082 INFO [finetune.py:976] (2/7) Epoch 27, batch 650, loss[loss=0.1793, simple_loss=0.2587, pruned_loss=0.04994, over 4900.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2444, pruned_loss=0.05042, over 918353.36 frames. ], batch size: 43, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:22:53,183 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.097e+02 1.592e+02 1.975e+02 2.434e+02 4.045e+02, threshold=3.949e+02, percent-clipped=4.0 2023-03-27 08:22:55,795 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=149575.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:23:29,819 INFO [finetune.py:976] (2/7) Epoch 27, batch 700, loss[loss=0.1915, simple_loss=0.2733, pruned_loss=0.05481, over 4833.00 frames. ], tot_loss[loss=0.1727, simple_loss=0.2456, pruned_loss=0.04993, over 925562.94 frames. ], batch size: 47, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:23:30,644 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0 2023-03-27 08:23:33,634 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=149626.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 08:23:35,585 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-27 08:23:46,245 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.86 vs. limit=5.0 2023-03-27 08:24:03,075 INFO [finetune.py:976] (2/7) Epoch 27, batch 750, loss[loss=0.2034, simple_loss=0.2777, pruned_loss=0.06456, over 4819.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.247, pruned_loss=0.05038, over 931609.63 frames. 
], batch size: 38, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:24:03,698 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.028e+02 1.520e+02 1.783e+02 2.094e+02 3.998e+02, threshold=3.567e+02, percent-clipped=1.0 2023-03-27 08:24:36,374 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6633, 1.3138, 1.0423, 1.5692, 2.1436, 1.5372, 1.6300, 1.7628], device='cuda:2'), covar=tensor([0.1491, 0.2062, 0.1801, 0.1191, 0.1808, 0.1882, 0.1420, 0.1867], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0110, 0.0092, 0.0120, 0.0093, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 08:24:36,876 INFO [finetune.py:976] (2/7) Epoch 27, batch 800, loss[loss=0.2151, simple_loss=0.2856, pruned_loss=0.07228, over 4884.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.246, pruned_loss=0.04944, over 936743.00 frames. ], batch size: 36, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:24:53,202 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=149746.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 08:25:20,610 INFO [finetune.py:976] (2/7) Epoch 27, batch 850, loss[loss=0.1276, simple_loss=0.202, pruned_loss=0.02659, over 4753.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2443, pruned_loss=0.04923, over 939834.44 frames. ], batch size: 27, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:25:21,209 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.021e+02 1.421e+02 1.714e+02 1.950e+02 4.580e+02, threshold=3.429e+02, percent-clipped=2.0 2023-03-27 08:25:56,737 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6720, 1.5198, 2.2173, 1.8645, 1.7061, 4.1419, 1.4630, 1.7054], device='cuda:2'), covar=tensor([0.0980, 0.1785, 0.1199, 0.0936, 0.1594, 0.0174, 0.1466, 0.1724], device='cuda:2'), in_proj_covar=tensor([0.0074, 0.0081, 0.0072, 0.0076, 0.0090, 0.0080, 0.0085, 0.0079], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 08:25:56,768 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=149807.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 08:26:09,033 INFO [finetune.py:976] (2/7) Epoch 27, batch 900, loss[loss=0.2022, simple_loss=0.2609, pruned_loss=0.0717, over 4793.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2402, pruned_loss=0.04812, over 942204.01 frames. ], batch size: 25, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:26:42,227 INFO [finetune.py:976] (2/7) Epoch 27, batch 950, loss[loss=0.2122, simple_loss=0.2682, pruned_loss=0.0781, over 4796.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2388, pruned_loss=0.04792, over 945022.04 frames. ], batch size: 29, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:26:42,298 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=149870.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:26:42,815 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.619e+01 1.503e+02 1.866e+02 2.296e+02 3.689e+02, threshold=3.732e+02, percent-clipped=3.0 2023-03-27 08:27:15,532 INFO [finetune.py:976] (2/7) Epoch 27, batch 1000, loss[loss=0.2288, simple_loss=0.2843, pruned_loss=0.08665, over 4175.00 frames. ], tot_loss[loss=0.17, simple_loss=0.2417, pruned_loss=0.04916, over 946294.26 frames. 
], batch size: 65, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:27:16,178 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=149921.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 08:27:48,833 INFO [finetune.py:976] (2/7) Epoch 27, batch 1050, loss[loss=0.1727, simple_loss=0.2445, pruned_loss=0.05051, over 4926.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2432, pruned_loss=0.04907, over 946217.27 frames. ], batch size: 33, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:27:49,414 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.130e+02 1.561e+02 1.767e+02 2.240e+02 3.870e+02, threshold=3.534e+02, percent-clipped=1.0 2023-03-27 08:27:55,514 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0390, 1.7457, 2.3202, 3.7905, 2.6482, 2.6472, 0.8808, 3.1926], device='cuda:2'), covar=tensor([0.1663, 0.1314, 0.1302, 0.0500, 0.0708, 0.1867, 0.1889, 0.0479], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0163, 0.0100, 0.0136, 0.0124, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 08:28:33,249 INFO [finetune.py:976] (2/7) Epoch 27, batch 1100, loss[loss=0.1593, simple_loss=0.2348, pruned_loss=0.04187, over 4763.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2439, pruned_loss=0.04876, over 949990.68 frames. ], batch size: 28, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:28:49,557 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5637, 2.3453, 1.9880, 0.9257, 2.0925, 1.9468, 1.7983, 2.0835], device='cuda:2'), covar=tensor([0.0814, 0.0785, 0.1560, 0.1977, 0.1326, 0.2270, 0.2176, 0.0994], device='cuda:2'), in_proj_covar=tensor([0.0172, 0.0192, 0.0201, 0.0182, 0.0209, 0.0211, 0.0223, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:29:06,475 INFO [finetune.py:976] (2/7) Epoch 27, batch 1150, loss[loss=0.2144, simple_loss=0.2929, pruned_loss=0.06798, over 4835.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2452, pruned_loss=0.04928, over 950381.23 frames. ], batch size: 47, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:29:07,079 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.913e+01 1.470e+02 1.766e+02 2.217e+02 3.439e+02, threshold=3.531e+02, percent-clipped=0.0 2023-03-27 08:29:13,025 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=150079.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:29:26,993 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=150102.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 08:29:39,273 INFO [finetune.py:976] (2/7) Epoch 27, batch 1200, loss[loss=0.1888, simple_loss=0.2576, pruned_loss=0.06002, over 4899.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2443, pruned_loss=0.04941, over 950813.23 frames. ], batch size: 36, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:29:52,965 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=150140.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:30:14,458 INFO [finetune.py:976] (2/7) Epoch 27, batch 1250, loss[loss=0.1514, simple_loss=0.229, pruned_loss=0.03693, over 4898.00 frames. ], tot_loss[loss=0.17, simple_loss=0.2422, pruned_loss=0.04893, over 951892.39 frames. 
], batch size: 35, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:30:15,041 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=150170.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:30:15,534 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.012e+02 1.554e+02 1.886e+02 2.235e+02 6.588e+02, threshold=3.772e+02, percent-clipped=2.0 2023-03-27 08:30:43,354 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-27 08:30:56,376 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=150218.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:30:57,557 INFO [finetune.py:976] (2/7) Epoch 27, batch 1300, loss[loss=0.1852, simple_loss=0.2447, pruned_loss=0.06287, over 4894.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2391, pruned_loss=0.04806, over 953041.22 frames. ], batch size: 43, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:31:02,854 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=150221.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:31:42,207 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=150269.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:31:42,765 INFO [finetune.py:976] (2/7) Epoch 27, batch 1350, loss[loss=0.1567, simple_loss=0.2316, pruned_loss=0.04093, over 4906.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.2397, pruned_loss=0.04857, over 952464.94 frames. ], batch size: 35, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:31:43,034 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0 2023-03-27 08:31:43,342 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.009e+02 1.453e+02 1.768e+02 2.125e+02 3.830e+02, threshold=3.537e+02, percent-clipped=1.0 2023-03-27 08:31:45,794 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=150274.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:31:56,325 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0 2023-03-27 08:32:16,594 INFO [finetune.py:976] (2/7) Epoch 27, batch 1400, loss[loss=0.1566, simple_loss=0.2344, pruned_loss=0.03939, over 4862.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2442, pruned_loss=0.04984, over 954107.72 frames. ], batch size: 44, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:32:24,754 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1619, 1.7330, 2.3654, 1.7159, 2.2546, 2.3781, 1.6257, 2.5085], device='cuda:2'), covar=tensor([0.1238, 0.2085, 0.1582, 0.1982, 0.0902, 0.1382, 0.2897, 0.0795], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0207, 0.0193, 0.0190, 0.0175, 0.0214, 0.0218, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:32:28,279 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=150335.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:32:42,137 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.57 vs. 
limit=2.0 2023-03-27 08:32:44,435 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6732, 1.1838, 0.8813, 1.4798, 2.0324, 1.4757, 1.4851, 1.5332], device='cuda:2'), covar=tensor([0.1529, 0.2206, 0.1911, 0.1248, 0.1997, 0.1903, 0.1491, 0.1996], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0093, 0.0109, 0.0091, 0.0119, 0.0092, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 08:32:49,842 INFO [finetune.py:976] (2/7) Epoch 27, batch 1450, loss[loss=0.2087, simple_loss=0.2828, pruned_loss=0.06732, over 4813.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2457, pruned_loss=0.05005, over 954586.36 frames. ], batch size: 38, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:32:50,437 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.045e+02 1.587e+02 1.925e+02 2.309e+02 4.827e+02, threshold=3.851e+02, percent-clipped=3.0 2023-03-27 08:33:03,343 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4139, 2.2582, 2.5087, 1.6485, 2.3449, 2.5060, 2.4233, 2.0032], device='cuda:2'), covar=tensor([0.0495, 0.0609, 0.0584, 0.0913, 0.1084, 0.0542, 0.0616, 0.1057], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0138, 0.0142, 0.0120, 0.0129, 0.0139, 0.0141, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:33:11,848 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=150402.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 08:33:29,003 INFO [finetune.py:976] (2/7) Epoch 27, batch 1500, loss[loss=0.159, simple_loss=0.2367, pruned_loss=0.04067, over 4837.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2449, pruned_loss=0.04997, over 953072.85 frames. 
], batch size: 30, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:33:41,931 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1225, 1.7717, 2.4342, 1.6107, 2.1362, 2.2968, 1.6456, 2.4500], device='cuda:2'), covar=tensor([0.1266, 0.2107, 0.1362, 0.2045, 0.0911, 0.1430, 0.2922, 0.0841], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0207, 0.0193, 0.0190, 0.0175, 0.0213, 0.0218, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:33:42,494 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=150435.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:33:51,388 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4964, 2.3471, 1.9440, 2.3780, 2.9158, 2.3367, 2.3364, 1.8312], device='cuda:2'), covar=tensor([0.1951, 0.1774, 0.1888, 0.1533, 0.1575, 0.1008, 0.1936, 0.1777], device='cuda:2'), in_proj_covar=tensor([0.0245, 0.0211, 0.0215, 0.0199, 0.0245, 0.0192, 0.0217, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:33:53,599 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=150450.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 08:33:57,159 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6205, 1.5241, 2.0442, 3.4584, 2.2724, 2.3444, 0.9880, 2.9191], device='cuda:2'), covar=tensor([0.1750, 0.1355, 0.1334, 0.0549, 0.0767, 0.1414, 0.1852, 0.0411], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0163, 0.0100, 0.0136, 0.0125, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 08:34:05,548 INFO [finetune.py:976] (2/7) Epoch 27, batch 1550, loss[loss=0.1499, simple_loss=0.2266, pruned_loss=0.03657, over 4671.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2447, pruned_loss=0.04951, over 954095.81 frames. ], batch size: 23, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:34:06,129 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.290e+01 1.580e+02 1.863e+02 2.206e+02 4.598e+02, threshold=3.727e+02, percent-clipped=2.0 2023-03-27 08:34:22,231 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0 2023-03-27 08:34:34,594 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1465, 2.0154, 2.2173, 1.4143, 2.1035, 2.2254, 2.2554, 1.7372], device='cuda:2'), covar=tensor([0.0526, 0.0679, 0.0564, 0.0806, 0.0758, 0.0577, 0.0556, 0.1150], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0137, 0.0141, 0.0120, 0.0128, 0.0138, 0.0140, 0.0161], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:34:38,728 INFO [finetune.py:976] (2/7) Epoch 27, batch 1600, loss[loss=0.206, simple_loss=0.2679, pruned_loss=0.07202, over 4875.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.242, pruned_loss=0.0486, over 953186.76 frames. ], batch size: 31, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:34:50,070 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=150537.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:35:11,519 INFO [finetune.py:976] (2/7) Epoch 27, batch 1650, loss[loss=0.1538, simple_loss=0.2246, pruned_loss=0.0415, over 4153.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.2403, pruned_loss=0.04827, over 955077.14 frames. 
], batch size: 18, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:35:12,131 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.130e+02 1.535e+02 1.741e+02 2.182e+02 5.670e+02, threshold=3.482e+02, percent-clipped=1.0 2023-03-27 08:35:20,079 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7255, 1.3391, 0.7993, 1.5543, 2.0941, 1.2666, 1.6000, 1.6366], device='cuda:2'), covar=tensor([0.1417, 0.1957, 0.1857, 0.1127, 0.1883, 0.1963, 0.1326, 0.1840], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0093, 0.0109, 0.0091, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 08:35:37,671 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=150598.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:35:51,567 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.79 vs. limit=2.0 2023-03-27 08:35:54,939 INFO [finetune.py:976] (2/7) Epoch 27, batch 1700, loss[loss=0.1654, simple_loss=0.236, pruned_loss=0.04741, over 4759.00 frames. ], tot_loss[loss=0.1675, simple_loss=0.2389, pruned_loss=0.04807, over 954162.31 frames. ], batch size: 26, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:36:01,036 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=150630.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:36:01,767 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. limit=2.0 2023-03-27 08:36:41,945 INFO [finetune.py:976] (2/7) Epoch 27, batch 1750, loss[loss=0.1857, simple_loss=0.2462, pruned_loss=0.06258, over 4789.00 frames. ], tot_loss[loss=0.1699, simple_loss=0.2414, pruned_loss=0.04914, over 953125.76 frames. ], batch size: 29, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:36:42,540 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.512e+01 1.530e+02 1.821e+02 2.198e+02 3.521e+02, threshold=3.642e+02, percent-clipped=1.0 2023-03-27 08:36:45,083 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.9046, 3.4264, 3.5830, 3.7541, 3.6675, 3.4078, 3.9712, 1.3083], device='cuda:2'), covar=tensor([0.0966, 0.1008, 0.1014, 0.1094, 0.1488, 0.1903, 0.0961, 0.5834], device='cuda:2'), in_proj_covar=tensor([0.0354, 0.0249, 0.0285, 0.0298, 0.0337, 0.0289, 0.0308, 0.0304], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:37:14,450 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-27 08:37:15,429 INFO [finetune.py:976] (2/7) Epoch 27, batch 1800, loss[loss=0.2107, simple_loss=0.2743, pruned_loss=0.07359, over 4819.00 frames. ], tot_loss[loss=0.1721, simple_loss=0.2444, pruned_loss=0.04991, over 953350.21 frames. ], batch size: 30, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:37:27,746 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=150732.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:37:33,291 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=150735.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:37:57,189 INFO [finetune.py:976] (2/7) Epoch 27, batch 1850, loss[loss=0.2177, simple_loss=0.282, pruned_loss=0.07663, over 4837.00 frames. ], tot_loss[loss=0.176, simple_loss=0.248, pruned_loss=0.05204, over 951621.25 frames. 
], batch size: 47, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:37:57,786 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.063e+02 1.537e+02 1.800e+02 2.248e+02 4.542e+02, threshold=3.600e+02, percent-clipped=6.0 2023-03-27 08:38:05,038 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=150783.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:38:11,121 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=150793.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 08:38:18,809 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-27 08:38:30,228 INFO [finetune.py:976] (2/7) Epoch 27, batch 1900, loss[loss=0.1827, simple_loss=0.2565, pruned_loss=0.05448, over 4841.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.2475, pruned_loss=0.05161, over 951157.71 frames. ], batch size: 47, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:39:14,069 INFO [finetune.py:976] (2/7) Epoch 27, batch 1950, loss[loss=0.1307, simple_loss=0.2048, pruned_loss=0.02826, over 4812.00 frames. ], tot_loss[loss=0.1735, simple_loss=0.246, pruned_loss=0.05047, over 951214.33 frames. ], batch size: 25, lr: 2.92e-03, grad_scale: 8.0 2023-03-27 08:39:14,655 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.033e+02 1.460e+02 1.651e+02 1.933e+02 3.642e+02, threshold=3.302e+02, percent-clipped=1.0 2023-03-27 08:39:28,775 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=150893.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:39:28,836 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8801, 2.5463, 2.1921, 1.1303, 2.3125, 2.2383, 2.0792, 2.3794], device='cuda:2'), covar=tensor([0.0729, 0.0840, 0.1371, 0.2049, 0.1209, 0.1998, 0.1905, 0.0894], device='cuda:2'), in_proj_covar=tensor([0.0171, 0.0191, 0.0201, 0.0182, 0.0209, 0.0211, 0.0224, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:39:33,111 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=150900.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:39:47,872 INFO [finetune.py:976] (2/7) Epoch 27, batch 2000, loss[loss=0.1335, simple_loss=0.2106, pruned_loss=0.02821, over 4900.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2427, pruned_loss=0.04914, over 952758.81 frames. 
], batch size: 32, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:39:54,527 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=150930.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:40:03,190 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3447, 2.4137, 1.9877, 2.5319, 2.3065, 2.2199, 2.3189, 3.1239], device='cuda:2'), covar=tensor([0.3737, 0.4676, 0.3503, 0.3988, 0.3933, 0.2454, 0.4209, 0.1745], device='cuda:2'), in_proj_covar=tensor([0.0290, 0.0264, 0.0237, 0.0277, 0.0261, 0.0230, 0.0259, 0.0239], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:40:15,360 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=150961.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:40:18,283 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2315, 2.0545, 2.0789, 0.9474, 2.3787, 2.6106, 2.2679, 1.8884], device='cuda:2'), covar=tensor([0.0916, 0.0753, 0.0518, 0.0701, 0.0544, 0.0610, 0.0426, 0.0694], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0147, 0.0128, 0.0123, 0.0131, 0.0130, 0.0142, 0.0150], device='cuda:2'), out_proj_covar=tensor([8.8670e-05, 1.0558e-04, 9.1136e-05, 8.6289e-05, 9.1982e-05, 9.1704e-05, 1.0080e-04, 1.0736e-04], device='cuda:2') 2023-03-27 08:40:21,603 INFO [finetune.py:976] (2/7) Epoch 27, batch 2050, loss[loss=0.1681, simple_loss=0.2321, pruned_loss=0.05205, over 4937.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2398, pruned_loss=0.04809, over 951425.70 frames. ], batch size: 33, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:40:22,190 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.304e+01 1.432e+02 1.658e+02 2.071e+02 3.830e+02, threshold=3.317e+02, percent-clipped=1.0 2023-03-27 08:40:27,053 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=150978.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:40:56,341 INFO [finetune.py:976] (2/7) Epoch 27, batch 2100, loss[loss=0.1572, simple_loss=0.2366, pruned_loss=0.03891, over 4836.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2395, pruned_loss=0.04829, over 952181.10 frames. ], batch size: 33, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:41:47,143 INFO [finetune.py:976] (2/7) Epoch 27, batch 2150, loss[loss=0.2122, simple_loss=0.2917, pruned_loss=0.06635, over 4920.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2425, pruned_loss=0.04918, over 955255.48 frames. ], batch size: 42, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:41:48,300 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.039e+02 1.525e+02 1.813e+02 2.166e+02 3.448e+02, threshold=3.626e+02, percent-clipped=1.0 2023-03-27 08:42:02,497 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=151088.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 08:42:23,616 INFO [finetune.py:976] (2/7) Epoch 27, batch 2200, loss[loss=0.2063, simple_loss=0.2785, pruned_loss=0.06701, over 4857.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.244, pruned_loss=0.04967, over 955277.29 frames. 
], batch size: 44, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:42:41,315 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5140, 1.5892, 1.2920, 1.5726, 1.8607, 1.8385, 1.6124, 1.4056], device='cuda:2'), covar=tensor([0.0436, 0.0344, 0.0629, 0.0305, 0.0227, 0.0555, 0.0321, 0.0387], device='cuda:2'), in_proj_covar=tensor([0.0102, 0.0107, 0.0148, 0.0113, 0.0103, 0.0117, 0.0104, 0.0115], device='cuda:2'), out_proj_covar=tensor([7.8810e-05, 8.1916e-05, 1.1545e-04, 8.6028e-05, 7.9441e-05, 8.6094e-05, 7.7458e-05, 8.6921e-05], device='cuda:2') 2023-03-27 08:43:04,288 INFO [finetune.py:976] (2/7) Epoch 27, batch 2250, loss[loss=0.1615, simple_loss=0.2388, pruned_loss=0.04212, over 4818.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2454, pruned_loss=0.05012, over 954707.15 frames. ], batch size: 38, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:43:04,889 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.120e+01 1.457e+02 1.754e+02 2.221e+02 3.820e+02, threshold=3.509e+02, percent-clipped=1.0 2023-03-27 08:43:14,785 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=151184.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:43:20,790 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=151193.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:43:37,565 INFO [finetune.py:976] (2/7) Epoch 27, batch 2300, loss[loss=0.185, simple_loss=0.2478, pruned_loss=0.06112, over 4849.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2455, pruned_loss=0.0499, over 953257.90 frames. ], batch size: 44, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:43:51,733 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=151241.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:43:54,689 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=151245.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:44:06,986 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=151256.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:44:08,916 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.52 vs. limit=5.0 2023-03-27 08:44:18,928 INFO [finetune.py:976] (2/7) Epoch 27, batch 2350, loss[loss=0.1649, simple_loss=0.2353, pruned_loss=0.04721, over 4825.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2439, pruned_loss=0.04986, over 953912.66 frames. ], batch size: 38, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:44:19,966 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 7.208e+01 1.503e+02 1.827e+02 2.189e+02 3.264e+02, threshold=3.653e+02, percent-clipped=0.0 2023-03-27 08:44:28,772 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=151282.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:44:39,435 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=151298.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:44:52,608 INFO [finetune.py:976] (2/7) Epoch 27, batch 2400, loss[loss=0.1225, simple_loss=0.1962, pruned_loss=0.02442, over 4782.00 frames. ], tot_loss[loss=0.1694, simple_loss=0.241, pruned_loss=0.04889, over 953318.40 frames. 
], batch size: 29, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:45:06,814 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2388, 1.8542, 2.1792, 2.1830, 1.9608, 1.9656, 2.1600, 2.0843], device='cuda:2'), covar=tensor([0.4887, 0.4780, 0.3745, 0.4774, 0.5771, 0.4667, 0.5729, 0.3665], device='cuda:2'), in_proj_covar=tensor([0.0266, 0.0248, 0.0269, 0.0297, 0.0296, 0.0273, 0.0302, 0.0254], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:45:09,229 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=151343.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:45:19,400 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=151359.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:45:26,013 INFO [finetune.py:976] (2/7) Epoch 27, batch 2450, loss[loss=0.1582, simple_loss=0.24, pruned_loss=0.03822, over 4909.00 frames. ], tot_loss[loss=0.1669, simple_loss=0.2384, pruned_loss=0.04769, over 955115.12 frames. ], batch size: 37, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:45:26,609 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.046e+02 1.415e+02 1.689e+02 1.968e+02 4.441e+02, threshold=3.378e+02, percent-clipped=1.0 2023-03-27 08:45:38,464 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=151388.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:45:46,933 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3988, 1.4509, 1.9327, 1.6724, 1.5827, 3.3287, 1.3290, 1.5418], device='cuda:2'), covar=tensor([0.1054, 0.1765, 0.1075, 0.0940, 0.1615, 0.0286, 0.1546, 0.1815], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0080, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:2') 2023-03-27 08:45:55,925 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7198, 3.8655, 3.6888, 1.7879, 3.9958, 2.9527, 0.9394, 2.7057], device='cuda:2'), covar=tensor([0.2366, 0.2000, 0.1355, 0.3341, 0.0960, 0.0950, 0.4381, 0.1514], device='cuda:2'), in_proj_covar=tensor([0.0148, 0.0178, 0.0159, 0.0129, 0.0160, 0.0123, 0.0148, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 08:45:58,929 INFO [finetune.py:976] (2/7) Epoch 27, batch 2500, loss[loss=0.1529, simple_loss=0.2313, pruned_loss=0.03727, over 4829.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2392, pruned_loss=0.048, over 955606.44 frames. ], batch size: 39, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:46:12,816 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=151436.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:46:48,695 INFO [finetune.py:976] (2/7) Epoch 27, batch 2550, loss[loss=0.1727, simple_loss=0.2555, pruned_loss=0.04489, over 4810.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2415, pruned_loss=0.04795, over 956028.29 frames. ], batch size: 40, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:46:49,280 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.209e+01 1.472e+02 1.881e+02 2.470e+02 3.912e+02, threshold=3.762e+02, percent-clipped=2.0 2023-03-27 08:46:56,288 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. 
limit=2.0 2023-03-27 08:47:24,842 INFO [finetune.py:976] (2/7) Epoch 27, batch 2600, loss[loss=0.1533, simple_loss=0.2395, pruned_loss=0.03356, over 4887.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2436, pruned_loss=0.04905, over 955295.66 frames. ], batch size: 35, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:47:42,187 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=151540.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:48:03,477 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=151556.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:48:10,377 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0 2023-03-27 08:48:16,180 INFO [finetune.py:976] (2/7) Epoch 27, batch 2650, loss[loss=0.1985, simple_loss=0.2791, pruned_loss=0.05902, over 4821.00 frames. ], tot_loss[loss=0.1715, simple_loss=0.2446, pruned_loss=0.04924, over 955215.68 frames. ], batch size: 38, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:48:16,787 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.199e+02 1.649e+02 1.887e+02 2.270e+02 4.456e+02, threshold=3.774e+02, percent-clipped=3.0 2023-03-27 08:48:39,917 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=151604.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:48:49,946 INFO [finetune.py:976] (2/7) Epoch 27, batch 2700, loss[loss=0.159, simple_loss=0.2358, pruned_loss=0.04109, over 4843.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2436, pruned_loss=0.04866, over 954145.18 frames. ], batch size: 44, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:49:01,310 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=151638.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:49:03,202 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6050, 1.5723, 1.4333, 1.5962, 1.8818, 1.8662, 1.6063, 1.3894], device='cuda:2'), covar=tensor([0.0333, 0.0316, 0.0585, 0.0314, 0.0218, 0.0472, 0.0338, 0.0427], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0106, 0.0146, 0.0111, 0.0101, 0.0115, 0.0103, 0.0113], device='cuda:2'), out_proj_covar=tensor([7.7605e-05, 8.0999e-05, 1.1402e-04, 8.4815e-05, 7.8258e-05, 8.5017e-05, 7.6351e-05, 8.5796e-05], device='cuda:2') 2023-03-27 08:49:12,822 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=151654.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:49:29,777 INFO [finetune.py:976] (2/7) Epoch 27, batch 2750, loss[loss=0.1734, simple_loss=0.2476, pruned_loss=0.04956, over 4860.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2408, pruned_loss=0.04795, over 955584.64 frames. 
], batch size: 49, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:49:30,372 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.940e+01 1.418e+02 1.693e+02 2.178e+02 3.976e+02, threshold=3.385e+02, percent-clipped=1.0 2023-03-27 08:49:44,696 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=151688.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:49:45,294 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4392, 1.4080, 1.8741, 1.7330, 1.5078, 3.4182, 1.2943, 1.5161], device='cuda:2'), covar=tensor([0.1291, 0.2577, 0.1411, 0.1221, 0.2136, 0.0331, 0.2125, 0.2572], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0083, 0.0074, 0.0076, 0.0092, 0.0080, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:2') 2023-03-27 08:50:06,331 INFO [finetune.py:976] (2/7) Epoch 27, batch 2800, loss[loss=0.1477, simple_loss=0.2213, pruned_loss=0.03708, over 4763.00 frames. ], tot_loss[loss=0.1661, simple_loss=0.2378, pruned_loss=0.0472, over 954481.07 frames. ], batch size: 28, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:50:12,933 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.4400, 3.8263, 4.0811, 4.2639, 4.2084, 3.9564, 4.4885, 1.4438], device='cuda:2'), covar=tensor([0.0835, 0.0943, 0.0813, 0.1023, 0.1232, 0.1418, 0.0670, 0.5594], device='cuda:2'), in_proj_covar=tensor([0.0353, 0.0249, 0.0282, 0.0297, 0.0335, 0.0288, 0.0307, 0.0302], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:50:24,970 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=151749.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:50:39,488 INFO [finetune.py:976] (2/7) Epoch 27, batch 2850, loss[loss=0.1454, simple_loss=0.2206, pruned_loss=0.0351, over 4865.00 frames. ], tot_loss[loss=0.166, simple_loss=0.2375, pruned_loss=0.04726, over 955141.74 frames. ], batch size: 31, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:50:40,097 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.092e+02 1.485e+02 1.795e+02 2.169e+02 3.375e+02, threshold=3.589e+02, percent-clipped=0.0 2023-03-27 08:50:51,022 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0248, 1.6879, 2.2727, 1.4992, 2.0108, 2.2752, 1.5849, 2.4239], device='cuda:2'), covar=tensor([0.1257, 0.2081, 0.1378, 0.1879, 0.0904, 0.1247, 0.2829, 0.0797], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0207, 0.0193, 0.0190, 0.0175, 0.0213, 0.0218, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:50:59,496 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=151801.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:51:12,915 INFO [finetune.py:976] (2/7) Epoch 27, batch 2900, loss[loss=0.1906, simple_loss=0.2708, pruned_loss=0.05518, over 4744.00 frames. ], tot_loss[loss=0.1685, simple_loss=0.2405, pruned_loss=0.04821, over 954918.29 frames. 
], batch size: 54, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:51:25,538 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=151840.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:51:34,457 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9798, 5.0102, 4.6901, 2.5976, 5.1702, 3.9303, 1.1273, 3.6767], device='cuda:2'), covar=tensor([0.2266, 0.2177, 0.1571, 0.3405, 0.0732, 0.0847, 0.4757, 0.1242], device='cuda:2'), in_proj_covar=tensor([0.0149, 0.0179, 0.0160, 0.0130, 0.0160, 0.0124, 0.0148, 0.0125], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 08:51:54,454 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=151862.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:51:56,259 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5750, 1.1283, 0.8084, 1.3905, 2.0163, 0.7594, 1.3836, 1.3822], device='cuda:2'), covar=tensor([0.1550, 0.2149, 0.1654, 0.1262, 0.1853, 0.1915, 0.1493, 0.1995], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0092, 0.0107, 0.0090, 0.0117, 0.0091, 0.0097, 0.0087], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-27 08:52:04,134 INFO [finetune.py:976] (2/7) Epoch 27, batch 2950, loss[loss=0.19, simple_loss=0.2663, pruned_loss=0.05684, over 4850.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2435, pruned_loss=0.04902, over 955300.12 frames. ], batch size: 44, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:52:04,749 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.132e+02 1.531e+02 1.876e+02 2.281e+02 4.815e+02, threshold=3.752e+02, percent-clipped=2.0 2023-03-27 08:52:15,637 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=151888.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:52:26,524 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1308, 1.4351, 1.6449, 1.2588, 1.5563, 2.4620, 1.3460, 1.5524], device='cuda:2'), covar=tensor([0.1100, 0.1826, 0.0977, 0.0991, 0.1683, 0.0427, 0.1535, 0.1760], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0080, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 08:52:37,407 INFO [finetune.py:976] (2/7) Epoch 27, batch 3000, loss[loss=0.1751, simple_loss=0.2522, pruned_loss=0.04905, over 4811.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2455, pruned_loss=0.05009, over 955808.51 frames. ], batch size: 25, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:52:37,407 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-27 08:52:40,280 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3446, 2.1856, 1.9543, 2.1387, 2.2890, 2.1276, 2.4023, 2.3558], device='cuda:2'), covar=tensor([0.1302, 0.1963, 0.2755, 0.2101, 0.2531, 0.1551, 0.2657, 0.1693], device='cuda:2'), in_proj_covar=tensor([0.0188, 0.0188, 0.0235, 0.0252, 0.0247, 0.0205, 0.0213, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 08:52:50,759 INFO [finetune.py:1010] (2/7) Epoch 27, validation: loss=0.1572, simple_loss=0.2248, pruned_loss=0.04486, over 2265189.00 frames. 
2023-03-27 08:52:50,760 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB
2023-03-27 08:52:58,255 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4715, 2.4471, 2.2074, 2.5878, 2.3556, 2.3641, 2.3568, 3.2643], device='cuda:2'), covar=tensor([0.3709, 0.4664, 0.3258, 0.4064, 0.4348, 0.2621, 0.4259, 0.1632], device='cuda:2'), in_proj_covar=tensor([0.0289, 0.0263, 0.0236, 0.0274, 0.0260, 0.0229, 0.0258, 0.0238], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 08:52:59,433 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4148, 1.4718, 1.2632, 1.4908, 1.7204, 1.6579, 1.5138, 1.2740], device='cuda:2'), covar=tensor([0.0376, 0.0271, 0.0669, 0.0268, 0.0219, 0.0411, 0.0283, 0.0399], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0105, 0.0146, 0.0111, 0.0101, 0.0115, 0.0103, 0.0113], device='cuda:2'), out_proj_covar=tensor([7.7414e-05, 8.0690e-05, 1.1379e-04, 8.4399e-05, 7.8016e-05, 8.4697e-05, 7.6020e-05, 8.5519e-05], device='cuda:2')
2023-03-27 08:53:00,032 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0825, 1.8976, 2.1347, 1.6256, 2.1614, 2.3721, 2.2724, 1.4673], device='cuda:2'), covar=tensor([0.0810, 0.0978, 0.0947, 0.1083, 0.0787, 0.0801, 0.0909, 0.1868], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0136, 0.0140, 0.0118, 0.0127, 0.0137, 0.0139, 0.0160], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 08:53:01,834 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=151938.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:53:14,200 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=151954.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:53:21,898 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8381, 1.6951, 1.5203, 1.6609, 2.0297, 2.1000, 1.7714, 1.5395], device='cuda:2'), covar=tensor([0.0342, 0.0366, 0.0598, 0.0342, 0.0239, 0.0383, 0.0358, 0.0436], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0106, 0.0146, 0.0111, 0.0101, 0.0115, 0.0103, 0.0113], device='cuda:2'), out_proj_covar=tensor([7.7802e-05, 8.1025e-05, 1.1409e-04, 8.4701e-05, 7.8280e-05, 8.5046e-05, 7.6347e-05, 8.5905e-05], device='cuda:2')
2023-03-27 08:53:32,161 INFO [finetune.py:976] (2/7) Epoch 27, batch 3050, loss[loss=0.1698, simple_loss=0.2519, pruned_loss=0.04391, over 4879.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.245, pruned_loss=0.04941, over 955098.24 frames. ], batch size: 34, lr: 2.92e-03, grad_scale: 16.0
2023-03-27 08:53:32,747 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.022e+02 1.546e+02 1.837e+02 2.199e+02 4.500e+02, threshold=3.674e+02, percent-clipped=2.0
2023-03-27 08:53:44,360 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=151986.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:53:47,998 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5858, 1.5266, 1.9303, 1.8293, 1.6469, 3.4848, 1.4520, 1.6447], device='cuda:2'), covar=tensor([0.0958, 0.1890, 0.1068, 0.0912, 0.1694, 0.0260, 0.1511, 0.1852], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0083, 0.0073, 0.0076, 0.0091, 0.0080, 0.0086, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:2')
2023-03-27 08:53:55,885 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=152002.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:54:00,156 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6005, 1.5529, 1.4790, 1.5980, 1.1863, 3.1496, 1.3080, 1.6579], device='cuda:2'), covar=tensor([0.2981, 0.2351, 0.2031, 0.2245, 0.1692, 0.0218, 0.2629, 0.1181], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0124, 0.0113, 0.0095, 0.0094, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 08:54:05,541 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7075, 3.5860, 3.4386, 1.6986, 3.7657, 2.8680, 0.8530, 2.5925], device='cuda:2'), covar=tensor([0.2508, 0.1972, 0.1478, 0.3112, 0.0964, 0.0983, 0.4103, 0.1388], device='cuda:2'), in_proj_covar=tensor([0.0149, 0.0178, 0.0159, 0.0129, 0.0160, 0.0123, 0.0148, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-27 08:54:06,689 INFO [finetune.py:976] (2/7) Epoch 27, batch 3100, loss[loss=0.1625, simple_loss=0.2506, pruned_loss=0.03721, over 4814.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2433, pruned_loss=0.04883, over 955270.09 frames. ], batch size: 38, lr: 2.92e-03, grad_scale: 16.0
2023-03-27 08:54:23,668 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=152044.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:54:30,969 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6541, 1.3276, 1.2160, 1.3074, 1.7218, 1.8254, 1.5572, 1.3228], device='cuda:2'), covar=tensor([0.0344, 0.0412, 0.0881, 0.0454, 0.0313, 0.0431, 0.0340, 0.0505], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0107, 0.0147, 0.0112, 0.0102, 0.0116, 0.0104, 0.0114], device='cuda:2'), out_proj_covar=tensor([7.8278e-05, 8.1565e-05, 1.1472e-04, 8.5317e-05, 7.8790e-05, 8.5556e-05, 7.6912e-05, 8.6471e-05], device='cuda:2')
2023-03-27 08:54:38,896 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. limit=2.0
2023-03-27 08:54:41,692 INFO [finetune.py:976] (2/7) Epoch 27, batch 3150, loss[loss=0.1615, simple_loss=0.2284, pruned_loss=0.04728, over 4852.00 frames. ], tot_loss[loss=0.1689, simple_loss=0.2409, pruned_loss=0.04844, over 955070.75 frames. ], batch size: 49, lr: 2.92e-03, grad_scale: 16.0
2023-03-27 08:54:42,282 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.945e+01 1.491e+02 1.827e+02 2.202e+02 3.039e+02, threshold=3.654e+02, percent-clipped=0.0
2023-03-27 08:55:03,940 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.13 vs. limit=2.0
2023-03-27 08:55:21,807 INFO [finetune.py:976] (2/7) Epoch 27, batch 3200, loss[loss=0.1733, simple_loss=0.2463, pruned_loss=0.05014, over 4872.00 frames. ], tot_loss[loss=0.1657, simple_loss=0.2373, pruned_loss=0.0471, over 956980.17 frames. ], batch size: 31, lr: 2.92e-03, grad_scale: 16.0
2023-03-27 08:55:46,792 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=152157.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:55:54,656 INFO [finetune.py:976] (2/7) Epoch 27, batch 3250, loss[loss=0.1804, simple_loss=0.2467, pruned_loss=0.05701, over 4097.00 frames. ], tot_loss[loss=0.166, simple_loss=0.2376, pruned_loss=0.04721, over 955801.85 frames. ], batch size: 65, lr: 2.92e-03, grad_scale: 16.0
2023-03-27 08:55:55,261 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.795e+01 1.453e+02 1.756e+02 2.073e+02 3.538e+02, threshold=3.512e+02, percent-clipped=0.0
2023-03-27 08:56:29,290 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.5150, 4.7775, 5.0123, 5.3547, 5.2827, 4.9864, 5.6210, 1.9430], device='cuda:2'), covar=tensor([0.0656, 0.0812, 0.0815, 0.0861, 0.1102, 0.1645, 0.0493, 0.5332], device='cuda:2'), in_proj_covar=tensor([0.0352, 0.0248, 0.0282, 0.0296, 0.0333, 0.0287, 0.0305, 0.0301], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 08:56:31,113 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5824, 1.1908, 0.8708, 1.4296, 1.8791, 1.4795, 1.4702, 1.6582], device='cuda:2'), covar=tensor([0.2081, 0.2914, 0.2221, 0.1665, 0.2597, 0.2631, 0.1905, 0.2515], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0092, 0.0108, 0.0091, 0.0118, 0.0092, 0.0097, 0.0087], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-27 08:56:32,279 INFO [finetune.py:976] (2/7) Epoch 27, batch 3300, loss[loss=0.1458, simple_loss=0.2151, pruned_loss=0.03823, over 4781.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2404, pruned_loss=0.04779, over 956000.01 frames. ], batch size: 26, lr: 2.92e-03, grad_scale: 16.0
2023-03-27 08:56:35,416 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=152225.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:56:45,001 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9209, 1.5614, 2.1896, 1.4532, 1.9796, 2.1304, 1.4904, 2.2715], device='cuda:2'), covar=tensor([0.1369, 0.2380, 0.1512, 0.1968, 0.0964, 0.1425, 0.3166, 0.0843], device='cuda:2'), in_proj_covar=tensor([0.0193, 0.0209, 0.0194, 0.0191, 0.0176, 0.0215, 0.0220, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 08:57:13,838 INFO [finetune.py:976] (2/7) Epoch 27, batch 3350, loss[loss=0.1889, simple_loss=0.2568, pruned_loss=0.06048, over 4916.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2434, pruned_loss=0.04904, over 955105.16 frames. ], batch size: 42, lr: 2.92e-03, grad_scale: 16.0
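Note: the [optim.py:369] lines report the min/25%/median/75%/max of recent per-step gradient norms. In every entry the threshold is about Clipping_scale (2.0) times the logged median (e.g. 2 x 1.827e+02 ≈ 3.654e+02 just above), and percent-clipped is the share of recent steps whose norm exceeded that threshold. A minimal tracker in the same spirit is sketched below; it is an illustration of the bookkeeping, not icefall's actual ScaledAdam code, and the history length is an assumption.

    from collections import deque
    import torch

    class GradNormTracker:
        """Sketch: quartiles of recent grad norms, threshold = scale * median."""
        def __init__(self, clipping_scale: float = 2.0, history: int = 128):
            self.scale = clipping_scale
            self.norms = deque(maxlen=history)

        def update(self, grad_norm: float) -> str:
            self.norms.append(grad_norm)
            q = torch.quantile(torch.tensor(list(self.norms), dtype=torch.float32),
                               torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
            threshold = self.scale * q[2].item()
            clipped = 100.0 * sum(n > threshold for n in self.norms) / len(self.norms)
            return (f"Clipping_scale={self.scale}, grad-norm quartiles "
                    + " ".join(f"{v:.3e}" for v in q.tolist())
                    + f", threshold={threshold:.3e}, percent-clipped={clipped:.1f}")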
2023-03-27 08:57:14,396 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.581e+01 1.606e+02 1.884e+02 2.337e+02 3.345e+02, threshold=3.768e+02, percent-clipped=0.0
2023-03-27 08:57:20,865 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=152273.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:57:33,577 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=152286.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:58:01,023 INFO [finetune.py:976] (2/7) Epoch 27, batch 3400, loss[loss=0.1698, simple_loss=0.2378, pruned_loss=0.05089, over 4835.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2448, pruned_loss=0.04985, over 956857.80 frames. ], batch size: 30, lr: 2.92e-03, grad_scale: 16.0
2023-03-27 08:58:09,667 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=152334.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:58:16,673 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=152344.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:58:36,171 INFO [finetune.py:976] (2/7) Epoch 27, batch 3450, loss[loss=0.146, simple_loss=0.2203, pruned_loss=0.03582, over 4897.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2448, pruned_loss=0.04954, over 955803.72 frames. ], batch size: 32, lr: 2.91e-03, grad_scale: 16.0
2023-03-27 08:58:36,743 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.916e+01 1.467e+02 1.787e+02 2.252e+02 4.149e+02, threshold=3.573e+02, percent-clipped=3.0
2023-03-27 08:58:58,953 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=152392.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:59:18,828 INFO [finetune.py:976] (2/7) Epoch 27, batch 3500, loss[loss=0.1603, simple_loss=0.2292, pruned_loss=0.04572, over 4735.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2437, pruned_loss=0.04968, over 955719.31 frames. ], batch size: 54, lr: 2.91e-03, grad_scale: 16.0
2023-03-27 08:59:34,557 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=152445.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:59:43,790 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=152457.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:59:52,114 INFO [finetune.py:976] (2/7) Epoch 27, batch 3550, loss[loss=0.1519, simple_loss=0.2153, pruned_loss=0.04427, over 4903.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.2404, pruned_loss=0.04859, over 956613.15 frames. ], batch size: 43, lr: 2.91e-03, grad_scale: 16.0
2023-03-27 08:59:52,706 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.528e+01 1.381e+02 1.664e+02 2.040e+02 3.997e+02, threshold=3.328e+02, percent-clipped=1.0
2023-03-27 09:00:25,119 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=152505.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:00:26,307 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=152506.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:00:36,269 INFO [finetune.py:976] (2/7) Epoch 27, batch 3600, loss[loss=0.1517, simple_loss=0.2399, pruned_loss=0.03172, over 4902.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2385, pruned_loss=0.04806, over 956446.08 frames. ], batch size: 36, lr: 2.91e-03, grad_scale: 16.0
2023-03-27 09:00:48,092 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.74 vs. limit=2.0
2023-03-27 09:01:05,501 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.2482, 1.4189, 1.4772, 0.9173, 1.3824, 1.6322, 1.6807, 1.3840], device='cuda:2'), covar=tensor([0.1009, 0.0705, 0.0601, 0.0526, 0.0651, 0.0723, 0.0393, 0.0774], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0147, 0.0130, 0.0122, 0.0131, 0.0130, 0.0142, 0.0150], device='cuda:2'), out_proj_covar=tensor([8.8835e-05, 1.0565e-04, 9.2416e-05, 8.6033e-05, 9.1986e-05, 9.2281e-05, 1.0059e-04, 1.0740e-04], device='cuda:2')
2023-03-27 09:01:10,218 INFO [finetune.py:976] (2/7) Epoch 27, batch 3650, loss[loss=0.1379, simple_loss=0.2024, pruned_loss=0.03668, over 3874.00 frames. ], tot_loss[loss=0.169, simple_loss=0.2407, pruned_loss=0.04862, over 955282.50 frames. ], batch size: 17, lr: 2.91e-03, grad_scale: 16.0
2023-03-27 09:01:10,827 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.982e+01 1.594e+02 1.907e+02 2.265e+02 4.160e+02, threshold=3.814e+02, percent-clipped=3.0
2023-03-27 09:01:17,557 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=152581.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:01:18,190 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=152582.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:01:18,826 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.5109, 3.3230, 2.9907, 1.6424, 3.1829, 2.5098, 2.5617, 2.9826], device='cuda:2'), covar=tensor([0.0934, 0.0682, 0.1699, 0.2039, 0.1301, 0.2025, 0.1792, 0.0872], device='cuda:2'), in_proj_covar=tensor([0.0171, 0.0191, 0.0201, 0.0182, 0.0209, 0.0211, 0.0225, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 09:01:22,486 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=152589.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:01:29,810 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.5850, 1.3502, 1.3020, 0.8197, 1.3893, 1.5505, 1.6045, 1.2926], device='cuda:2'), covar=tensor([0.0933, 0.0688, 0.0569, 0.0579, 0.0584, 0.0699, 0.0318, 0.0709], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0147, 0.0130, 0.0123, 0.0132, 0.0131, 0.0142, 0.0151], device='cuda:2'), out_proj_covar=tensor([8.8967e-05, 1.0578e-04, 9.2803e-05, 8.6171e-05, 9.2185e-05, 9.2475e-05, 1.0091e-04, 1.0774e-04], device='cuda:2')
2023-03-27 09:01:46,436 INFO [finetune.py:976] (2/7) Epoch 27, batch 3700, loss[loss=0.1571, simple_loss=0.2429, pruned_loss=0.03568, over 4908.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.2428, pruned_loss=0.04889, over 952556.86 frames. ], batch size: 37, lr: 2.91e-03, grad_scale: 16.0
2023-03-27 09:01:52,533 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=152629.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:02:01,193 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=152643.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:02:02,372 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.5797, 3.9992, 4.1937, 4.4162, 4.3303, 3.9977, 4.6856, 1.4443], device='cuda:2'), covar=tensor([0.0715, 0.0808, 0.0836, 0.1018, 0.1137, 0.1732, 0.0623, 0.5228], device='cuda:2'), in_proj_covar=tensor([0.0351, 0.0246, 0.0281, 0.0294, 0.0332, 0.0287, 0.0303, 0.0300], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 09:02:05,489 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=152650.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:02:22,199 INFO [finetune.py:976] (2/7) Epoch 27, batch 3750, loss[loss=0.1815, simple_loss=0.2573, pruned_loss=0.05291, over 4890.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2452, pruned_loss=0.04975, over 953086.31 frames. ], batch size: 35, lr: 2.91e-03, grad_scale: 16.0
2023-03-27 09:02:22,802 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.041e+02 1.491e+02 1.751e+02 2.166e+02 4.226e+02, threshold=3.502e+02, percent-clipped=3.0
2023-03-27 09:03:12,607 INFO [finetune.py:976] (2/7) Epoch 27, batch 3800, loss[loss=0.1751, simple_loss=0.2413, pruned_loss=0.05445, over 4749.00 frames. ], tot_loss[loss=0.1735, simple_loss=0.2466, pruned_loss=0.0502, over 953148.95 frames. ], batch size: 26, lr: 2.91e-03, grad_scale: 16.0
2023-03-27 09:03:16,896 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=152726.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:03:45,579 INFO [finetune.py:976] (2/7) Epoch 27, batch 3850, loss[loss=0.2026, simple_loss=0.2668, pruned_loss=0.0692, over 4202.00 frames. ], tot_loss[loss=0.1721, simple_loss=0.2452, pruned_loss=0.04946, over 953582.87 frames. ], batch size: 65, lr: 2.91e-03, grad_scale: 16.0
2023-03-27 09:03:46,653 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.206e+01 1.335e+02 1.631e+02 2.144e+02 3.589e+02, threshold=3.262e+02, percent-clipped=1.0
2023-03-27 09:03:51,360 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.9203, 4.4089, 4.6147, 4.5690, 4.4835, 4.3006, 5.0645, 1.6298], device='cuda:2'), covar=tensor([0.0881, 0.1344, 0.1246, 0.1772, 0.1603, 0.2003, 0.0746, 0.6811], device='cuda:2'), in_proj_covar=tensor([0.0350, 0.0246, 0.0281, 0.0293, 0.0331, 0.0286, 0.0303, 0.0299], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 09:03:59,987 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=152787.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:04:12,194 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6898, 1.4755, 2.2810, 3.2909, 2.2404, 2.4511, 1.2313, 2.7910], device='cuda:2'), covar=tensor([0.1633, 0.1386, 0.1110, 0.0593, 0.0775, 0.1841, 0.1606, 0.0456], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0116, 0.0132, 0.0163, 0.0100, 0.0135, 0.0124, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-27 09:04:12,776 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=152801.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:04:19,367 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0
2023-03-27 09:04:28,201 INFO [finetune.py:976] (2/7) Epoch 27, batch 3900, loss[loss=0.128, simple_loss=0.2114, pruned_loss=0.02227, over 4872.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.242, pruned_loss=0.04834, over 954467.37 frames. ], batch size: 31, lr: 2.91e-03, grad_scale: 16.0
2023-03-27 09:04:33,888 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2423, 2.1811, 2.2013, 1.4613, 2.1900, 2.3661, 2.2725, 1.9393], device='cuda:2'), covar=tensor([0.0509, 0.0614, 0.0726, 0.0940, 0.0689, 0.0636, 0.0586, 0.1028], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0138, 0.0142, 0.0121, 0.0129, 0.0139, 0.0142, 0.0164], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 09:05:01,433 INFO [finetune.py:976] (2/7) Epoch 27, batch 3950, loss[loss=0.1751, simple_loss=0.2388, pruned_loss=0.05569, over 4905.00 frames. ], tot_loss[loss=0.167, simple_loss=0.2391, pruned_loss=0.04741, over 956308.04 frames. ], batch size: 35, lr: 2.91e-03, grad_scale: 16.0
2023-03-27 09:05:02,042 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.044e+02 1.440e+02 1.686e+02 2.039e+02 3.105e+02, threshold=3.372e+02, percent-clipped=0.0
2023-03-27 09:05:09,747 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=152881.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:05:10,407 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2216, 2.2381, 1.9841, 2.2990, 2.1617, 2.1562, 2.1659, 2.9526], device='cuda:2'), covar=tensor([0.3533, 0.4874, 0.3200, 0.4410, 0.4396, 0.2563, 0.4296, 0.1516], device='cuda:2'), in_proj_covar=tensor([0.0289, 0.0263, 0.0236, 0.0276, 0.0260, 0.0230, 0.0258, 0.0239], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 09:05:43,211 INFO [finetune.py:976] (2/7) Epoch 27, batch 4000, loss[loss=0.1395, simple_loss=0.2128, pruned_loss=0.03313, over 4764.00 frames. ], tot_loss[loss=0.1668, simple_loss=0.2387, pruned_loss=0.04743, over 958572.59 frames. ], batch size: 26, lr: 2.91e-03, grad_scale: 32.0
2023-03-27 09:05:45,786 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.10 vs. limit=5.0
2023-03-27 09:05:49,727 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=152929.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:05:49,773 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=152929.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:05:56,132 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=152938.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:06:00,334 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=152945.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:06:16,497 INFO [finetune.py:976] (2/7) Epoch 27, batch 4050, loss[loss=0.1757, simple_loss=0.2613, pruned_loss=0.04502, over 4830.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.2429, pruned_loss=0.04943, over 957889.48 frames. ], batch size: 51, lr: 2.91e-03, grad_scale: 32.0
2023-03-27 09:06:17,095 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.013e+02 1.469e+02 1.767e+02 2.180e+02 3.425e+02, threshold=3.534e+02, percent-clipped=1.0
2023-03-27 09:06:20,838 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=152977.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:06:45,208 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8488, 1.2801, 1.8623, 1.9104, 1.6950, 1.6654, 1.8338, 1.8104], device='cuda:2'), covar=tensor([0.3917, 0.3850, 0.3205, 0.3384, 0.4550, 0.3897, 0.4190, 0.2930], device='cuda:2'), in_proj_covar=tensor([0.0267, 0.0248, 0.0269, 0.0298, 0.0296, 0.0273, 0.0302, 0.0254], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 09:06:49,256 INFO [finetune.py:976] (2/7) Epoch 27, batch 4100, loss[loss=0.1719, simple_loss=0.2414, pruned_loss=0.05122, over 4857.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2445, pruned_loss=0.04959, over 956636.14 frames. ], batch size: 44, lr: 2.91e-03, grad_scale: 32.0
2023-03-27 09:07:22,844 INFO [finetune.py:976] (2/7) Epoch 27, batch 4150, loss[loss=0.2049, simple_loss=0.2757, pruned_loss=0.067, over 4820.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2459, pruned_loss=0.04985, over 958176.39 frames. ], batch size: 33, lr: 2.91e-03, grad_scale: 32.0
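Note: the logged grad_scale jumps from 16.0 to 32.0 at batch 4000 and stays a power of two. That is the signature of PyTorch's dynamic loss scaling for fp16 training (use_fp16 is True for this run): the scale doubles after a run of overflow-free steps and is halved when inf/nan gradients appear. The sketch below shows standard torch.cuda.amp usage with default-style growth settings; it is a minimal illustration, not icefall's actual training loop, and model/optimizer/loader are assumed to be supplied by the caller.

    import torch
    from torch.cuda.amp import GradScaler, autocast

    def train_fp16(model, optimizer, loader, device="cuda"):
        scaler = GradScaler(init_scale=16.0, growth_factor=2.0,
                            backoff_factor=0.5, growth_interval=2000)
        for batch in loader:              # assumed: model(batch) -> scalar loss
            optimizer.zero_grad()
            with autocast():
                loss = model(batch.to(device))
            scaler.scale(loss).backward() # backward pass on the scaled loss
            scaler.step(optimizer)        # unscales grads; skips step on inf/nan
            scaler.update()               # grows or backs off the scale
            print(f"grad_scale: {scaler.get_scale():.1f}")  # the logged figure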
2023-03-27 09:07:23,441 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.058e+02 1.644e+02 1.926e+02 2.373e+02 3.999e+02, threshold=3.851e+02, percent-clipped=3.0
2023-03-27 09:07:31,208 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=153082.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:07:51,115 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=153101.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:08:00,068 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0451, 2.1028, 1.7197, 2.0248, 1.9961, 1.9028, 2.0182, 2.6592], device='cuda:2'), covar=tensor([0.3631, 0.3923, 0.3092, 0.3935, 0.4050, 0.2444, 0.3749, 0.1635], device='cuda:2'), in_proj_covar=tensor([0.0289, 0.0263, 0.0236, 0.0275, 0.0260, 0.0230, 0.0259, 0.0239], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 09:08:05,307 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1413, 2.0792, 1.8114, 2.0514, 1.9717, 1.9656, 2.0162, 2.7076], device='cuda:2'), covar=tensor([0.3530, 0.4054, 0.3139, 0.3829, 0.3915, 0.2318, 0.3691, 0.1591], device='cuda:2'), in_proj_covar=tensor([0.0289, 0.0263, 0.0237, 0.0276, 0.0261, 0.0230, 0.0259, 0.0239], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 09:08:07,047 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=153117.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:08:13,346 INFO [finetune.py:976] (2/7) Epoch 27, batch 4200, loss[loss=0.1484, simple_loss=0.2258, pruned_loss=0.03554, over 4762.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2467, pruned_loss=0.04957, over 957535.35 frames. ], batch size: 28, lr: 2.91e-03, grad_scale: 32.0
2023-03-27 09:08:36,712 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=153149.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:08:49,826 INFO [finetune.py:976] (2/7) Epoch 27, batch 4250, loss[loss=0.1636, simple_loss=0.2307, pruned_loss=0.04818, over 4787.00 frames. ], tot_loss[loss=0.17, simple_loss=0.2433, pruned_loss=0.04837, over 955988.72 frames. ], batch size: 29, lr: 2.91e-03, grad_scale: 32.0
2023-03-27 09:08:50,415 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.682e+01 1.570e+02 1.909e+02 2.227e+02 3.978e+02, threshold=3.818e+02, percent-clipped=1.0
2023-03-27 09:08:54,793 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=153178.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:09:06,263 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.0331, 0.9878, 0.9483, 1.1590, 1.2050, 1.1661, 1.0441, 0.9383], device='cuda:2'), covar=tensor([0.0445, 0.0298, 0.0668, 0.0301, 0.0307, 0.0450, 0.0350, 0.0444], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0107, 0.0148, 0.0112, 0.0102, 0.0116, 0.0104, 0.0114], device='cuda:2'), out_proj_covar=tensor([7.8486e-05, 8.1583e-05, 1.1545e-04, 8.5923e-05, 7.8944e-05, 8.5557e-05, 7.7419e-05, 8.6401e-05], device='cuda:2')
2023-03-27 09:09:24,439 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6487, 1.5342, 1.4785, 1.5942, 1.2510, 3.3079, 1.2958, 1.6489], device='cuda:2'), covar=tensor([0.3269, 0.2476, 0.2134, 0.2448, 0.1702, 0.0234, 0.2538, 0.1239], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0116, 0.0120, 0.0124, 0.0113, 0.0095, 0.0094, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 09:09:33,254 INFO [finetune.py:976] (2/7) Epoch 27, batch 4300, loss[loss=0.168, simple_loss=0.2416, pruned_loss=0.04721, over 4903.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2421, pruned_loss=0.0485, over 957211.69 frames. ], batch size: 43, lr: 2.91e-03, grad_scale: 32.0
2023-03-27 09:09:44,711 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=153238.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:09:49,315 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=153245.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:10:01,555 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9146, 2.9656, 2.7694, 1.8995, 2.8392, 3.1484, 3.0628, 2.6197], device='cuda:2'), covar=tensor([0.0531, 0.0566, 0.0737, 0.0972, 0.0548, 0.0684, 0.0632, 0.0912], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0139, 0.0142, 0.0121, 0.0130, 0.0140, 0.0142, 0.0164], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 09:10:06,729 INFO [finetune.py:976] (2/7) Epoch 27, batch 4350, loss[loss=0.1612, simple_loss=0.2347, pruned_loss=0.04379, over 4899.00 frames. ], tot_loss[loss=0.1663, simple_loss=0.2383, pruned_loss=0.04718, over 956330.44 frames. ], batch size: 36, lr: 2.91e-03, grad_scale: 32.0
2023-03-27 09:10:07,337 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.429e+01 1.453e+02 1.753e+02 2.112e+02 4.699e+02, threshold=3.507e+02, percent-clipped=1.0
2023-03-27 09:10:11,676 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=153278.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:10:16,467 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=153286.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:10:21,196 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=153293.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:10:41,648 INFO [finetune.py:976] (2/7) Epoch 27, batch 4400, loss[loss=0.1883, simple_loss=0.2562, pruned_loss=0.06018, over 4845.00 frames. ], tot_loss[loss=0.1668, simple_loss=0.2385, pruned_loss=0.04749, over 954902.71 frames. ], batch size: 44, lr: 2.91e-03, grad_scale: 32.0
2023-03-27 09:11:01,179 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=153339.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:11:05,488 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-27 09:11:15,192 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1723, 2.2117, 1.6086, 2.3002, 2.0917, 1.8415, 2.5604, 2.2731], device='cuda:2'), covar=tensor([0.1252, 0.1957, 0.2776, 0.2364, 0.2375, 0.1600, 0.2839, 0.1440], device='cuda:2'), in_proj_covar=tensor([0.0190, 0.0191, 0.0237, 0.0255, 0.0250, 0.0207, 0.0216, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 09:11:23,046 INFO [finetune.py:976] (2/7) Epoch 27, batch 4450, loss[loss=0.1682, simple_loss=0.2352, pruned_loss=0.05063, over 4897.00 frames. ], tot_loss[loss=0.169, simple_loss=0.2414, pruned_loss=0.04826, over 954647.54 frames. ], batch size: 32, lr: 2.91e-03, grad_scale: 32.0
2023-03-27 09:11:23,634 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.005e+02 1.494e+02 1.793e+02 2.132e+02 3.020e+02, threshold=3.586e+02, percent-clipped=0.0
2023-03-27 09:11:30,912 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=153382.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:11:39,342 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3623, 2.3642, 1.7439, 2.3293, 2.1887, 1.9232, 2.6529, 2.3759], device='cuda:2'), covar=tensor([0.1334, 0.1846, 0.2944, 0.2478, 0.2487, 0.1708, 0.3225, 0.1585], device='cuda:2'), in_proj_covar=tensor([0.0190, 0.0191, 0.0237, 0.0255, 0.0250, 0.0207, 0.0216, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 09:11:56,790 INFO [finetune.py:976] (2/7) Epoch 27, batch 4500, loss[loss=0.203, simple_loss=0.2698, pruned_loss=0.06813, over 4921.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2436, pruned_loss=0.04896, over 953833.27 frames. ], batch size: 42, lr: 2.91e-03, grad_scale: 32.0
2023-03-27 09:11:57,456 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=153421.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:12:03,378 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=153430.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:12:12,206 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0
2023-03-27 09:12:29,936 INFO [finetune.py:976] (2/7) Epoch 27, batch 4550, loss[loss=0.1657, simple_loss=0.225, pruned_loss=0.0532, over 4373.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2454, pruned_loss=0.0499, over 953936.55 frames. ], batch size: 66, lr: 2.91e-03, grad_scale: 32.0
2023-03-27 09:12:30,509 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.518e+01 1.590e+02 1.867e+02 2.229e+02 3.919e+02, threshold=3.734e+02, percent-clipped=1.0
2023-03-27 09:12:31,761 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=153473.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:12:37,681 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=153482.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:13:14,340 INFO [finetune.py:976] (2/7) Epoch 27, batch 4600, loss[loss=0.1813, simple_loss=0.2474, pruned_loss=0.05753, over 4849.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2449, pruned_loss=0.0495, over 953945.73 frames. ], batch size: 44, lr: 2.91e-03, grad_scale: 32.0
2023-03-27 09:13:56,975 INFO [finetune.py:976] (2/7) Epoch 27, batch 4650, loss[loss=0.1455, simple_loss=0.2283, pruned_loss=0.03136, over 4819.00 frames. ], tot_loss[loss=0.1689, simple_loss=0.2415, pruned_loss=0.04819, over 955739.49 frames. ], batch size: 41, lr: 2.91e-03, grad_scale: 32.0
2023-03-27 09:13:57,586 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.034e+02 1.461e+02 1.737e+02 2.165e+02 3.643e+02, threshold=3.475e+02, percent-clipped=0.0
2023-03-27 09:14:01,333 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1679, 2.0751, 1.8037, 1.9356, 1.9238, 1.9462, 1.9572, 2.7282], device='cuda:2'), covar=tensor([0.3655, 0.4025, 0.3161, 0.3540, 0.3636, 0.2329, 0.3605, 0.1617], device='cuda:2'), in_proj_covar=tensor([0.0290, 0.0265, 0.0238, 0.0277, 0.0262, 0.0231, 0.0260, 0.0240], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 09:14:31,816 INFO [finetune.py:976] (2/7) Epoch 27, batch 4700, loss[loss=0.1562, simple_loss=0.2226, pruned_loss=0.04487, over 4893.00 frames. ], tot_loss[loss=0.1677, simple_loss=0.2396, pruned_loss=0.04787, over 955414.59 frames. ], batch size: 35, lr: 2.91e-03, grad_scale: 32.0
2023-03-27 09:14:47,931 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=153634.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:15:12,023 INFO [finetune.py:976] (2/7) Epoch 27, batch 4750, loss[loss=0.1827, simple_loss=0.2554, pruned_loss=0.055, over 4090.00 frames. ], tot_loss[loss=0.1666, simple_loss=0.2377, pruned_loss=0.04771, over 954702.40 frames. ], batch size: 65, lr: 2.91e-03, grad_scale: 32.0
2023-03-27 09:15:13,079 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.023e+02 1.473e+02 1.795e+02 2.173e+02 4.465e+02, threshold=3.590e+02, percent-clipped=3.0
2023-03-27 09:15:45,874 INFO [finetune.py:976] (2/7) Epoch 27, batch 4800, loss[loss=0.1997, simple_loss=0.2731, pruned_loss=0.06316, over 4795.00 frames. ], tot_loss[loss=0.1663, simple_loss=0.2381, pruned_loss=0.04727, over 952903.39 frames. ], batch size: 45, lr: 2.91e-03, grad_scale: 32.0
2023-03-27 09:16:28,411 INFO [finetune.py:976] (2/7) Epoch 27, batch 4850, loss[loss=0.2269, simple_loss=0.2924, pruned_loss=0.08066, over 4900.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2404, pruned_loss=0.04764, over 953931.46 frames. ], batch size: 43, lr: 2.91e-03, grad_scale: 32.0
2023-03-27 09:16:28,978 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.100e+02 1.544e+02 1.777e+02 2.223e+02 4.381e+02, threshold=3.554e+02, percent-clipped=2.0
2023-03-27 09:16:30,776 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=153773.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:16:31,423 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7769, 1.7127, 1.5320, 1.9000, 2.4611, 1.8990, 1.8943, 1.4855], device='cuda:2'), covar=tensor([0.2383, 0.2187, 0.2165, 0.1772, 0.1645, 0.1335, 0.2238, 0.2043], device='cuda:2'), in_proj_covar=tensor([0.0247, 0.0212, 0.0216, 0.0201, 0.0247, 0.0192, 0.0219, 0.0206], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 09:16:33,639 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=153777.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:17:00,916 INFO [finetune.py:976] (2/7) Epoch 27, batch 4900, loss[loss=0.2156, simple_loss=0.2859, pruned_loss=0.07265, over 4851.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2412, pruned_loss=0.04803, over 952639.61 frames. ], batch size: 44, lr: 2.91e-03, grad_scale: 32.0
2023-03-27 09:17:01,615 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=153821.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:17:14,506 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.0056, 0.9037, 0.9293, 1.0394, 1.1257, 1.1010, 1.0124, 0.9562], device='cuda:2'), covar=tensor([0.0466, 0.0377, 0.0805, 0.0388, 0.0343, 0.0591, 0.0469, 0.0529], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0106, 0.0148, 0.0112, 0.0101, 0.0116, 0.0104, 0.0113], device='cuda:2'), out_proj_covar=tensor([7.8277e-05, 8.1045e-05, 1.1502e-04, 8.5389e-05, 7.8216e-05, 8.5199e-05, 7.6884e-05, 8.5967e-05], device='cuda:2')
2023-03-27 09:17:29,413 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5878, 1.5079, 1.5128, 1.5346, 1.0127, 2.8891, 1.0816, 1.5677], device='cuda:2'), covar=tensor([0.3247, 0.2481, 0.2071, 0.2349, 0.1816, 0.0274, 0.2654, 0.1257], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0117, 0.0121, 0.0124, 0.0113, 0.0095, 0.0094, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 09:17:34,600 INFO [finetune.py:976] (2/7) Epoch 27, batch 4950, loss[loss=0.2017, simple_loss=0.2766, pruned_loss=0.06343, over 4838.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2431, pruned_loss=0.04867, over 953469.14 frames. ], batch size: 47, lr: 2.90e-03, grad_scale: 32.0
2023-03-27 09:17:35,193 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.062e+02 1.496e+02 1.750e+02 2.158e+02 3.393e+02, threshold=3.501e+02, percent-clipped=0.0
2023-03-27 09:18:00,448 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0121, 1.7236, 2.1828, 1.4679, 2.0056, 2.2104, 1.6699, 2.2933], device='cuda:2'), covar=tensor([0.1096, 0.1900, 0.1213, 0.1611, 0.0791, 0.1114, 0.2528, 0.0795], device='cuda:2'), in_proj_covar=tensor([0.0194, 0.0208, 0.0195, 0.0190, 0.0175, 0.0215, 0.0218, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 09:18:10,031 INFO [finetune.py:976] (2/7) Epoch 27, batch 5000, loss[loss=0.1509, simple_loss=0.2246, pruned_loss=0.03858, over 4925.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.242, pruned_loss=0.04863, over 952641.69 frames. ], batch size: 38, lr: 2.90e-03, grad_scale: 32.0
2023-03-27 09:18:12,454 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=153923.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:18:28,748 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=153934.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:18:37,169 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.3153, 1.5109, 1.5305, 0.9573, 1.5169, 1.8106, 1.8540, 1.3686], device='cuda:2'), covar=tensor([0.0880, 0.0576, 0.0535, 0.0497, 0.0476, 0.0591, 0.0293, 0.0711], device='cuda:2'), in_proj_covar=tensor([0.0121, 0.0147, 0.0130, 0.0122, 0.0132, 0.0131, 0.0143, 0.0150], device='cuda:2'), out_proj_covar=tensor([8.8595e-05, 1.0571e-04, 9.2169e-05, 8.5987e-05, 9.2113e-05, 9.2486e-05, 1.0137e-04, 1.0747e-04], device='cuda:2')
2023-03-27 09:19:01,714 INFO [finetune.py:976] (2/7) Epoch 27, batch 5050, loss[loss=0.1259, simple_loss=0.2022, pruned_loss=0.02475, over 4832.00 frames. ], tot_loss[loss=0.1677, simple_loss=0.2397, pruned_loss=0.04784, over 951746.15 frames. ], batch size: 33, lr: 2.90e-03, grad_scale: 32.0
2023-03-27 09:19:02,311 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.210e+01 1.435e+02 1.808e+02 2.168e+02 4.775e+02, threshold=3.617e+02, percent-clipped=1.0
2023-03-27 09:19:10,570 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=153982.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:19:11,828 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=153984.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:19:36,570 INFO [finetune.py:976] (2/7) Epoch 27, batch 5100, loss[loss=0.1374, simple_loss=0.2215, pruned_loss=0.02663, over 4808.00 frames. ], tot_loss[loss=0.1644, simple_loss=0.2361, pruned_loss=0.04631, over 952586.97 frames. ], batch size: 51, lr: 2.90e-03, grad_scale: 32.0
2023-03-27 09:19:47,462 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.56 vs. limit=5.0
2023-03-27 09:20:06,423 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154049.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 09:20:19,663 INFO [finetune.py:976] (2/7) Epoch 27, batch 5150, loss[loss=0.1853, simple_loss=0.2527, pruned_loss=0.05892, over 4859.00 frames. ], tot_loss[loss=0.1654, simple_loss=0.2367, pruned_loss=0.04708, over 950612.37 frames. ], batch size: 34, lr: 2.90e-03, grad_scale: 32.0
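Note: the [scaling.py:679] lines compare a per-module "whitening" metric against a limit (e.g. metric=4.56 vs. limit=5.0 just above). A plausible definition, assumed here rather than taken from the source, is the eigenvalue-dispersion ratio E[lambda^2] / E[lambda]^2 of the per-group feature covariance: it equals 1.0 exactly when the covariance is isotropic (white) and grows as the activations collapse onto fewer directions, so values under the limit indicate healthily spread features. A sketch of that computation:

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
        # x: (num_frames, num_channels); channels split into num_groups groups,
        # e.g. num_groups=8, num_channels=96 as in some of the log lines.
        num_frames, num_channels = x.shape
        d = num_channels // num_groups
        g = x.reshape(num_frames, num_groups, d).transpose(0, 1)  # (groups, N, d)
        covar = g.transpose(1, 2) @ g / num_frames                # (groups, d, d)
        sum_eig_sq = (covar ** 2).sum(dim=(1, 2))  # = sum of squared eigenvalues
        sum_eig = covar.diagonal(dim1=1, dim2=2).sum(dim=1)       # = trace
        return (d * sum_eig_sq / sum_eig ** 2).mean()   # 1.0 iff isotropic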
2023-03-27 09:20:20,252 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.883e+01 1.500e+02 1.789e+02 2.109e+02 4.792e+02, threshold=3.578e+02, percent-clipped=3.0
2023-03-27 09:20:23,508 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154076.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:20:24,059 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=154077.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:20:25,297 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6347, 2.3899, 2.0298, 0.9741, 2.0589, 2.0012, 1.8639, 2.2485], device='cuda:2'), covar=tensor([0.0827, 0.0872, 0.1583, 0.2199, 0.1464, 0.2170, 0.2166, 0.0858], device='cuda:2'), in_proj_covar=tensor([0.0171, 0.0191, 0.0202, 0.0182, 0.0211, 0.0211, 0.0224, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 09:20:28,830 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154084.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:20:45,304 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0
2023-03-27 09:20:46,982 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=154110.0, num_to_drop=1, layers_to_drop={3}
2023-03-27 09:20:47,580 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6250, 1.5539, 1.3988, 1.7615, 1.6819, 1.7542, 1.1061, 1.4498], device='cuda:2'), covar=tensor([0.2168, 0.1986, 0.1871, 0.1532, 0.1541, 0.1233, 0.2439, 0.1886], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0211, 0.0215, 0.0201, 0.0246, 0.0191, 0.0218, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 09:20:53,422 INFO [finetune.py:976] (2/7) Epoch 27, batch 5200, loss[loss=0.1653, simple_loss=0.2473, pruned_loss=0.04168, over 4923.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2414, pruned_loss=0.04883, over 949855.89 frames. ], batch size: 38, lr: 2.90e-03, grad_scale: 32.0
2023-03-27 09:20:56,443 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=154125.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:21:03,085 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7744, 1.5104, 1.9828, 1.3460, 1.8023, 1.9752, 1.4476, 2.1007], device='cuda:2'), covar=tensor([0.1167, 0.2230, 0.1280, 0.1666, 0.0820, 0.1188, 0.2888, 0.0736], device='cuda:2'), in_proj_covar=tensor([0.0195, 0.0211, 0.0197, 0.0192, 0.0177, 0.0217, 0.0221, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 09:21:04,334 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=154137.0, num_to_drop=1, layers_to_drop={3}
2023-03-27 09:21:12,342 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=154145.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:21:34,724 INFO [finetune.py:976] (2/7) Epoch 27, batch 5250, loss[loss=0.2076, simple_loss=0.2773, pruned_loss=0.06895, over 4904.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2445, pruned_loss=0.04999, over 951401.16 frames. ], batch size: 37, lr: 2.90e-03, grad_scale: 32.0
2023-03-27 09:21:35,330 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.122e+02 1.556e+02 1.889e+02 2.346e+02 3.556e+02, threshold=3.778e+02, percent-clipped=0.0
2023-03-27 09:21:58,230 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0501, 1.7982, 2.1759, 1.5749, 1.9471, 2.2119, 2.1690, 1.3408], device='cuda:2'), covar=tensor([0.0729, 0.0902, 0.0702, 0.0968, 0.0824, 0.0701, 0.0666, 0.1846], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0137, 0.0140, 0.0119, 0.0128, 0.0138, 0.0140, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 09:22:08,469 INFO [finetune.py:976] (2/7) Epoch 27, batch 5300, loss[loss=0.1689, simple_loss=0.2526, pruned_loss=0.0426, over 4817.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2448, pruned_loss=0.04943, over 953371.81 frames. ], batch size: 39, lr: 2.90e-03, grad_scale: 32.0
2023-03-27 09:22:09,178 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154221.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:22:41,917 INFO [finetune.py:976] (2/7) Epoch 27, batch 5350, loss[loss=0.198, simple_loss=0.2597, pruned_loss=0.06808, over 4858.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.2448, pruned_loss=0.04885, over 955280.08 frames. ], batch size: 34, lr: 2.90e-03, grad_scale: 32.0
2023-03-27 09:22:42,509 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.999e+01 1.513e+02 1.798e+02 2.139e+02 3.270e+02, threshold=3.596e+02, percent-clipped=0.0
2023-03-27 09:22:47,294 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.1288, 4.8349, 4.6309, 2.3921, 4.9867, 3.8374, 0.9687, 3.5412], device='cuda:2'), covar=tensor([0.2011, 0.1644, 0.1302, 0.3072, 0.0736, 0.0760, 0.4441, 0.1217], device='cuda:2'), in_proj_covar=tensor([0.0149, 0.0178, 0.0158, 0.0129, 0.0160, 0.0122, 0.0147, 0.0124], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-27 09:22:47,875 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=154279.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:22:49,734 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=154282.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 09:22:52,173 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154286.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:22:55,611 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154291.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:23:15,340 INFO [finetune.py:976] (2/7) Epoch 27, batch 5400, loss[loss=0.1931, simple_loss=0.2541, pruned_loss=0.06606, over 4904.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2432, pruned_loss=0.04885, over 955209.96 frames. ], batch size: 43, lr: 2.90e-03, grad_scale: 32.0
2023-03-27 09:23:33,019 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154337.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 09:23:42,753 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=154347.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:23:46,788 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=154352.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:24:08,868 INFO [finetune.py:976] (2/7) Epoch 27, batch 5450, loss[loss=0.1349, simple_loss=0.2066, pruned_loss=0.03156, over 4849.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2394, pruned_loss=0.04745, over 956097.16 frames. ], batch size: 47, lr: 2.90e-03, grad_scale: 32.0
2023-03-27 09:24:09,461 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.752e+01 1.439e+02 1.730e+02 2.063e+02 4.741e+02, threshold=3.460e+02, percent-clipped=1.0
2023-03-27 09:24:12,770 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.67 vs. limit=2.0
2023-03-27 09:24:26,431 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=154398.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 09:24:32,223 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=154405.0, num_to_drop=1, layers_to_drop={2}
2023-03-27 09:24:40,262 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.0858, 0.9611, 0.9773, 0.4444, 0.9091, 1.2006, 1.1994, 0.9763], device='cuda:2'), covar=tensor([0.1021, 0.0856, 0.0734, 0.0653, 0.0685, 0.0750, 0.0477, 0.0873], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0147, 0.0130, 0.0122, 0.0132, 0.0130, 0.0142, 0.0150], device='cuda:2'), out_proj_covar=tensor([8.8728e-05, 1.0572e-04, 9.2451e-05, 8.5947e-05, 9.2492e-05, 9.2383e-05, 1.0101e-04, 1.0760e-04], device='cuda:2')
2023-03-27 09:24:42,541 INFO [finetune.py:976] (2/7) Epoch 27, batch 5500, loss[loss=0.1574, simple_loss=0.2275, pruned_loss=0.04367, over 4820.00 frames. ], tot_loss[loss=0.1655, simple_loss=0.2365, pruned_loss=0.04721, over 953761.03 frames. ], batch size: 30, lr: 2.90e-03, grad_scale: 32.0
2023-03-27 09:24:49,896 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=154432.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 09:24:52,975 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5323, 1.4528, 1.8168, 2.5221, 1.7564, 2.3116, 0.9734, 2.2116], device='cuda:2'), covar=tensor([0.1664, 0.1307, 0.1053, 0.0650, 0.0866, 0.1034, 0.1558, 0.0557], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0163, 0.0100, 0.0134, 0.0124, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-27 09:24:54,788 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=154440.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:25:27,010 INFO [finetune.py:976] (2/7) Epoch 27, batch 5550, loss[loss=0.1461, simple_loss=0.2187, pruned_loss=0.03676, over 4343.00 frames. ], tot_loss[loss=0.167, simple_loss=0.2387, pruned_loss=0.04766, over 952347.14 frames. ], batch size: 19, lr: 2.90e-03, grad_scale: 32.0
2023-03-27 09:25:27,597 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.282e+01 1.501e+02 1.802e+02 2.038e+02 5.335e+02, threshold=3.603e+02, percent-clipped=2.0
2023-03-27 09:25:29,564 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154474.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:25:33,894 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0
2023-03-27 09:25:47,196 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-27 09:25:57,467 INFO [finetune.py:976] (2/7) Epoch 27, batch 5600, loss[loss=0.1598, simple_loss=0.2369, pruned_loss=0.04136, over 4883.00 frames. ], tot_loss[loss=0.17, simple_loss=0.2425, pruned_loss=0.04871, over 953346.57 frames. ], batch size: 32, lr: 2.90e-03, grad_scale: 32.0
2023-03-27 09:26:00,993 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8942, 1.7805, 1.6330, 1.9048, 2.3436, 1.9050, 1.8489, 1.6392], device='cuda:2'), covar=tensor([0.1793, 0.1631, 0.1562, 0.1411, 0.1435, 0.1072, 0.2046, 0.1497], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0211, 0.0216, 0.0200, 0.0246, 0.0191, 0.0218, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 09:26:07,290 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=154535.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:26:27,444 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6560, 1.5276, 1.1338, 0.3423, 1.2328, 1.4635, 1.4436, 1.4437], device='cuda:2'), covar=tensor([0.1039, 0.0934, 0.1467, 0.2036, 0.1569, 0.2766, 0.2539, 0.0895], device='cuda:2'), in_proj_covar=tensor([0.0171, 0.0191, 0.0201, 0.0181, 0.0210, 0.0210, 0.0223, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 09:26:30,304 INFO [finetune.py:976] (2/7) Epoch 27, batch 5650, loss[loss=0.1501, simple_loss=0.2322, pruned_loss=0.03398, over 4871.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2453, pruned_loss=0.0489, over 954053.08 frames. ], batch size: 34, lr: 2.90e-03, grad_scale: 32.0
2023-03-27 09:26:30,863 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.301e+01 1.398e+02 1.740e+02 2.119e+02 4.723e+02, threshold=3.480e+02, percent-clipped=2.0
2023-03-27 09:26:39,110 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=154577.0, num_to_drop=1, layers_to_drop={3}
2023-03-27 09:26:40,323 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=154579.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:27:07,857 INFO [finetune.py:976] (2/7) Epoch 27, batch 5700, loss[loss=0.159, simple_loss=0.2183, pruned_loss=0.0498, over 3960.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.2412, pruned_loss=0.04864, over 933747.40 frames. ], batch size: 17, lr: 2.90e-03, grad_scale: 32.0
2023-03-27 09:27:12,038 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=154627.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:27:12,701 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154628.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:27:20,867 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=154642.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:27:34,185 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=154647.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 09:27:34,737 INFO [finetune.py:976] (2/7) Epoch 28, batch 0, loss[loss=0.1713, simple_loss=0.2424, pruned_loss=0.05006, over 4820.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.2424, pruned_loss=0.05006, over 4820.00 frames. ], batch size: 30, lr: 2.90e-03, grad_scale: 32.0
2023-03-27 09:27:34,737 INFO [finetune.py:1001] (2/7) Computing validation loss
2023-03-27 09:27:44,334 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.3505, 1.3493, 1.3098, 0.8142, 1.3239, 1.5016, 1.5516, 1.2707], device='cuda:2'), covar=tensor([0.0855, 0.0510, 0.0581, 0.0467, 0.0539, 0.0632, 0.0299, 0.0609], device='cuda:2'), in_proj_covar=tensor([0.0121, 0.0147, 0.0129, 0.0121, 0.0131, 0.0130, 0.0142, 0.0150], device='cuda:2'), out_proj_covar=tensor([8.8337e-05, 1.0504e-04, 9.1824e-05, 8.5222e-05, 9.1862e-05, 9.1862e-05, 1.0083e-04, 1.0708e-04], device='cuda:2')
2023-03-27 09:27:54,285 INFO [finetune.py:1010] (2/7) Epoch 28, validation: loss=0.1583, simple_loss=0.2265, pruned_loss=0.04511, over 2265189.00 frames.
2023-03-27 09:27:54,286 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB
2023-03-27 09:27:55,412 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7253, 1.2803, 0.8709, 1.6620, 2.0579, 1.4559, 1.5665, 1.6200], device='cuda:2'), covar=tensor([0.1457, 0.2050, 0.1799, 0.1160, 0.1934, 0.1847, 0.1383, 0.1840], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0093, 0.0109, 0.0091, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2')
2023-03-27 09:27:59,643 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4274, 1.2829, 1.6025, 2.3507, 1.5493, 2.2364, 0.9298, 2.0300], device='cuda:2'), covar=tensor([0.1859, 0.1671, 0.1394, 0.0873, 0.1116, 0.1237, 0.1848, 0.0718], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0163, 0.0100, 0.0135, 0.0124, 0.0100], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-27 09:28:08,714 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.011e+02 1.488e+02 1.773e+02 2.221e+02 3.199e+02, threshold=3.546e+02, percent-clipped=0.0
2023-03-27 09:28:19,997 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8814, 1.7602, 1.9768, 1.1479, 1.9325, 1.9291, 1.9633, 1.5457], device='cuda:2'), covar=tensor([0.0637, 0.0756, 0.0690, 0.1027, 0.0763, 0.0761, 0.0621, 0.1256], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0138, 0.0141, 0.0120, 0.0129, 0.0140, 0.0140, 0.0164], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 09:28:21,221 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=154689.0, num_to_drop=0, layers_to_drop=set()
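Note: the [zipformer.py:2441] dumps report one value per attention head (the encoder uses nhead=8 throughout, hence the eight entries), plus batch-variability statistics for those values. The quantity is presumably the Shannon entropy of each head's attention distribution, averaged over query positions: values near 0 flag heads that attend almost deterministically to a single position, while large values flag near-uniform heads. A sketch of the assumed computation:

    import torch

    def attn_head_entropy(attn_weights: torch.Tensor) -> torch.Tensor:
        # attn_weights: (batch, num_heads, tgt_len, src_len), each row already
        # softmaxed so it sums to 1. Returns one mean entropy per head, in
        # nats, matching the shape of the logged tensors.
        p = attn_weights.clamp(min=1e-20)       # avoid log(0)
        entropy = -(p * p.log()).sum(dim=-1)    # (batch, num_heads, tgt_len)
        return entropy.mean(dim=(0, 2))         # (num_heads,)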
2023-03-27 09:28:24,050 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=154693.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 09:28:27,013 INFO [finetune.py:976] (2/7) Epoch 28, batch 50, loss[loss=0.204, simple_loss=0.264, pruned_loss=0.07201, over 4858.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.2481, pruned_loss=0.05224, over 216357.37 frames. ], batch size: 31, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:28:32,687 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=154705.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 09:28:41,903 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154720.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:28:52,104 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=154732.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:28:57,930 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=154740.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:29:00,422 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-27 09:29:03,129 INFO [finetune.py:976] (2/7) Epoch 28, batch 100, loss[loss=0.1414, simple_loss=0.2214, pruned_loss=0.03068, over 4778.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2419, pruned_loss=0.05001, over 379452.22 frames. ], batch size: 28, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:29:03,218 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.0700, 4.4315, 4.6461, 4.8999, 4.8243, 4.5860, 5.1768, 1.5623], device='cuda:2'), covar=tensor([0.0707, 0.0821, 0.0639, 0.0742, 0.1204, 0.1572, 0.0577, 0.6123], device='cuda:2'), in_proj_covar=tensor([0.0352, 0.0247, 0.0283, 0.0296, 0.0335, 0.0287, 0.0305, 0.0302], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 09:29:11,907 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=154753.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 09:29:26,905 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.078e+02 1.413e+02 1.713e+02 2.093e+02 4.180e+02, threshold=3.426e+02, percent-clipped=2.0 2023-03-27 09:29:32,447 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=154780.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:29:33,130 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=154781.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:29:35,449 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5005, 1.4393, 1.3154, 1.3894, 1.7516, 1.6436, 1.5452, 1.3125], device='cuda:2'), covar=tensor([0.0331, 0.0314, 0.0616, 0.0323, 0.0220, 0.0505, 0.0307, 0.0405], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0105, 0.0147, 0.0111, 0.0101, 0.0114, 0.0102, 0.0112], device='cuda:2'), out_proj_covar=tensor([7.7611e-05, 8.0611e-05, 1.1439e-04, 8.4438e-05, 7.7880e-05, 8.4048e-05, 7.5802e-05, 8.5106e-05], device='cuda:2') 2023-03-27 09:29:37,835 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=154788.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:29:44,899 INFO [finetune.py:976] (2/7) Epoch 28, batch 150, loss[loss=0.146, simple_loss=0.2182, pruned_loss=0.03691, over 4911.00 frames. ], tot_loss[loss=0.1648, simple_loss=0.2351, pruned_loss=0.04728, over 504836.73 frames. 
], batch size: 43, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:29:56,991 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0448, 1.9976, 1.8393, 2.1904, 2.6380, 2.2902, 1.9522, 1.7258], device='cuda:2'), covar=tensor([0.2075, 0.1855, 0.1797, 0.1472, 0.1424, 0.1029, 0.2110, 0.1856], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0212, 0.0216, 0.0200, 0.0246, 0.0192, 0.0218, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 09:30:06,030 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=154830.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:30:18,389 INFO [finetune.py:976] (2/7) Epoch 28, batch 200, loss[loss=0.1335, simple_loss=0.2114, pruned_loss=0.0278, over 4820.00 frames. ], tot_loss[loss=0.1618, simple_loss=0.2322, pruned_loss=0.04564, over 605771.04 frames. ], batch size: 38, lr: 2.89e-03, grad_scale: 32.0 2023-03-27 09:30:19,730 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.1086, 2.8777, 2.4555, 1.4876, 2.5331, 2.4987, 2.3412, 2.6006], device='cuda:2'), covar=tensor([0.0788, 0.0752, 0.1551, 0.1804, 0.1228, 0.1652, 0.1603, 0.0874], device='cuda:2'), in_proj_covar=tensor([0.0169, 0.0189, 0.0198, 0.0179, 0.0207, 0.0207, 0.0221, 0.0193], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 09:30:40,610 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.190e+01 1.565e+02 1.831e+02 2.234e+02 3.641e+02, threshold=3.662e+02, percent-clipped=1.0 2023-03-27 09:30:48,052 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=154877.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 09:31:02,678 INFO [finetune.py:976] (2/7) Epoch 28, batch 250, loss[loss=0.1306, simple_loss=0.2107, pruned_loss=0.02529, over 4758.00 frames. ], tot_loss[loss=0.1656, simple_loss=0.2372, pruned_loss=0.04693, over 683147.53 frames. ], batch size: 28, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:31:14,731 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=5.22 vs. limit=5.0 2023-03-27 09:31:20,576 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=154925.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:31:31,427 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=154942.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:31:34,984 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=154947.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:31:35,505 INFO [finetune.py:976] (2/7) Epoch 28, batch 300, loss[loss=0.1517, simple_loss=0.2198, pruned_loss=0.04187, over 4871.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.2414, pruned_loss=0.04809, over 743777.86 frames. 
], batch size: 31, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:31:42,230 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3201, 1.8280, 2.0622, 0.9429, 2.4496, 2.4671, 2.1951, 1.8328], device='cuda:2'), covar=tensor([0.0889, 0.0833, 0.0544, 0.0715, 0.0451, 0.0698, 0.0514, 0.0818], device='cuda:2'), in_proj_covar=tensor([0.0121, 0.0147, 0.0129, 0.0122, 0.0131, 0.0130, 0.0142, 0.0150], device='cuda:2'), out_proj_covar=tensor([8.8628e-05, 1.0545e-04, 9.1969e-05, 8.5533e-05, 9.1986e-05, 9.1993e-05, 1.0103e-04, 1.0706e-04], device='cuda:2') 2023-03-27 09:31:51,475 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.026e+02 1.523e+02 1.869e+02 2.212e+02 3.864e+02, threshold=3.739e+02, percent-clipped=1.0 2023-03-27 09:31:59,667 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154976.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:32:07,904 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=154984.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:32:11,536 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=154990.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:32:13,397 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=154993.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 09:32:15,018 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=154995.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:32:16,795 INFO [finetune.py:976] (2/7) Epoch 28, batch 350, loss[loss=0.1496, simple_loss=0.2268, pruned_loss=0.03622, over 4844.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.244, pruned_loss=0.04907, over 791770.20 frames. ], batch size: 47, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:32:28,686 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3614, 2.4421, 1.9687, 2.7428, 2.3512, 2.0388, 2.8330, 2.5140], device='cuda:2'), covar=tensor([0.1115, 0.2109, 0.2458, 0.2062, 0.2166, 0.1427, 0.2970, 0.1518], device='cuda:2'), in_proj_covar=tensor([0.0189, 0.0190, 0.0236, 0.0254, 0.0250, 0.0207, 0.0215, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 09:32:37,619 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.64 vs. limit=2.0 2023-03-27 09:32:43,027 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=155037.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:32:45,381 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=155041.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 09:32:49,881 INFO [finetune.py:976] (2/7) Epoch 28, batch 400, loss[loss=0.2312, simple_loss=0.2904, pruned_loss=0.08602, over 4894.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2458, pruned_loss=0.04994, over 828990.10 frames. 
], batch size: 36, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:33:03,824 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2331, 1.9744, 1.6389, 0.6373, 1.7506, 1.8470, 1.6366, 1.8202], device='cuda:2'), covar=tensor([0.0931, 0.0896, 0.1365, 0.2039, 0.1315, 0.2400, 0.2365, 0.0830], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0190, 0.0199, 0.0179, 0.0207, 0.0208, 0.0222, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 09:33:13,487 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=155069.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:33:15,062 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.906e+01 1.559e+02 1.879e+02 2.352e+02 4.263e+02, threshold=3.758e+02, percent-clipped=3.0 2023-03-27 09:33:18,274 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=155076.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:33:19,633 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.49 vs. limit=5.0 2023-03-27 09:33:31,495 INFO [finetune.py:976] (2/7) Epoch 28, batch 450, loss[loss=0.1559, simple_loss=0.227, pruned_loss=0.04238, over 4896.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2439, pruned_loss=0.04902, over 856288.34 frames. ], batch size: 37, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:33:54,283 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=155130.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:33:54,330 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=155130.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:34:03,590 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.59 vs. limit=5.0 2023-03-27 09:34:05,148 INFO [finetune.py:976] (2/7) Epoch 28, batch 500, loss[loss=0.1745, simple_loss=0.2465, pruned_loss=0.05127, over 4832.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2404, pruned_loss=0.04803, over 878938.81 frames. ], batch size: 33, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:34:28,554 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.687e+01 1.475e+02 1.683e+02 2.204e+02 4.497e+02, threshold=3.366e+02, percent-clipped=1.0 2023-03-27 09:34:33,445 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=155178.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:34:49,246 INFO [finetune.py:976] (2/7) Epoch 28, batch 550, loss[loss=0.2128, simple_loss=0.2743, pruned_loss=0.07566, over 4816.00 frames. ], tot_loss[loss=0.1672, simple_loss=0.2386, pruned_loss=0.04791, over 894645.25 frames. 
], batch size: 38, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:35:01,237 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6725, 1.5074, 1.1176, 0.2631, 1.2875, 1.5278, 1.5187, 1.4622], device='cuda:2'), covar=tensor([0.0970, 0.0939, 0.1391, 0.1927, 0.1403, 0.2336, 0.2252, 0.0959], device='cuda:2'), in_proj_covar=tensor([0.0172, 0.0192, 0.0202, 0.0182, 0.0210, 0.0210, 0.0224, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 09:35:11,075 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6126, 2.4675, 2.0691, 2.7614, 2.6336, 2.2812, 3.1287, 2.6122], device='cuda:2'), covar=tensor([0.1413, 0.2206, 0.2877, 0.2592, 0.2361, 0.1604, 0.2659, 0.1664], device='cuda:2'), in_proj_covar=tensor([0.0188, 0.0189, 0.0235, 0.0253, 0.0249, 0.0206, 0.0214, 0.0201], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 09:35:14,007 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8306, 1.6148, 2.0489, 1.4538, 1.9098, 2.0335, 1.5266, 2.2168], device='cuda:2'), covar=tensor([0.1370, 0.2240, 0.1475, 0.1871, 0.0937, 0.1338, 0.2979, 0.0894], device='cuda:2'), in_proj_covar=tensor([0.0193, 0.0208, 0.0194, 0.0189, 0.0174, 0.0213, 0.0219, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 09:35:23,081 INFO [finetune.py:976] (2/7) Epoch 28, batch 600, loss[loss=0.1564, simple_loss=0.2404, pruned_loss=0.03619, over 4834.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2399, pruned_loss=0.04827, over 910063.57 frames. ], batch size: 47, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:35:38,977 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.640e+01 1.454e+02 1.702e+02 1.998e+02 4.828e+02, threshold=3.403e+02, percent-clipped=2.0 2023-03-27 09:35:53,267 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=155284.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:36:05,097 INFO [finetune.py:976] (2/7) Epoch 28, batch 650, loss[loss=0.1966, simple_loss=0.2733, pruned_loss=0.05998, over 4758.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2438, pruned_loss=0.04977, over 921131.99 frames. ], batch size: 54, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:36:27,993 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1676, 2.0946, 1.6500, 2.1686, 2.0958, 1.8666, 2.4502, 2.2017], device='cuda:2'), covar=tensor([0.1223, 0.2056, 0.2875, 0.2487, 0.2429, 0.1551, 0.3120, 0.1499], device='cuda:2'), in_proj_covar=tensor([0.0189, 0.0190, 0.0236, 0.0254, 0.0250, 0.0207, 0.0215, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 09:36:29,094 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=155332.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:36:29,104 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=155332.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:36:38,730 INFO [finetune.py:976] (2/7) Epoch 28, batch 700, loss[loss=0.1638, simple_loss=0.2522, pruned_loss=0.03769, over 4898.00 frames. ], tot_loss[loss=0.1721, simple_loss=0.2447, pruned_loss=0.04975, over 928596.36 frames. 
], batch size: 43, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:36:54,659 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.124e+02 1.563e+02 1.812e+02 2.261e+02 4.160e+02, threshold=3.625e+02, percent-clipped=3.0 2023-03-27 09:36:57,795 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=155376.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:37:05,080 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.3094, 2.9066, 3.0811, 3.2353, 3.0869, 2.8372, 3.3537, 0.9247], device='cuda:2'), covar=tensor([0.1247, 0.1176, 0.1205, 0.1352, 0.1788, 0.1990, 0.1124, 0.6358], device='cuda:2'), in_proj_covar=tensor([0.0352, 0.0247, 0.0284, 0.0297, 0.0335, 0.0287, 0.0305, 0.0302], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 09:37:19,492 INFO [finetune.py:976] (2/7) Epoch 28, batch 750, loss[loss=0.1498, simple_loss=0.2347, pruned_loss=0.03243, over 4905.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2455, pruned_loss=0.05024, over 933454.82 frames. ], batch size: 37, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:37:40,158 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=155424.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:37:41,307 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=155425.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:37:49,655 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3826, 1.2953, 1.2343, 1.3227, 0.7826, 2.0521, 0.6967, 1.0702], device='cuda:2'), covar=tensor([0.2737, 0.2081, 0.1909, 0.2117, 0.1697, 0.0348, 0.2235, 0.1158], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0124, 0.0113, 0.0095, 0.0094, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 09:37:56,713 INFO [finetune.py:976] (2/7) Epoch 28, batch 800, loss[loss=0.139, simple_loss=0.2208, pruned_loss=0.02856, over 4918.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.245, pruned_loss=0.04933, over 937478.04 frames. ], batch size: 38, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:38:02,330 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=155457.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:38:11,933 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.111e+02 1.495e+02 1.724e+02 1.967e+02 3.002e+02, threshold=3.447e+02, percent-clipped=0.0 2023-03-27 09:38:39,881 INFO [finetune.py:976] (2/7) Epoch 28, batch 850, loss[loss=0.1631, simple_loss=0.2338, pruned_loss=0.04622, over 4856.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2431, pruned_loss=0.04884, over 938961.91 frames. 
], batch size: 49, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:38:52,646 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=155518.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:38:57,236 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5498, 1.5460, 2.0508, 2.9531, 1.9214, 2.2441, 0.9472, 2.5469], device='cuda:2'), covar=tensor([0.1639, 0.1270, 0.1161, 0.0586, 0.0848, 0.1329, 0.1712, 0.0507], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0116, 0.0133, 0.0165, 0.0100, 0.0136, 0.0125, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 09:39:13,737 INFO [finetune.py:976] (2/7) Epoch 28, batch 900, loss[loss=0.2074, simple_loss=0.2592, pruned_loss=0.07778, over 4862.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2401, pruned_loss=0.0476, over 941712.68 frames. ], batch size: 49, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:39:28,252 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.783e+01 1.413e+02 1.787e+02 2.288e+02 4.282e+02, threshold=3.575e+02, percent-clipped=3.0 2023-03-27 09:39:34,798 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7589, 1.7213, 1.6486, 1.8013, 1.2584, 3.7785, 1.4489, 1.9039], device='cuda:2'), covar=tensor([0.3139, 0.2485, 0.2085, 0.2342, 0.1727, 0.0188, 0.2611, 0.1278], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0116, 0.0120, 0.0123, 0.0112, 0.0095, 0.0093, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 09:39:54,474 INFO [finetune.py:976] (2/7) Epoch 28, batch 950, loss[loss=0.1486, simple_loss=0.2191, pruned_loss=0.03906, over 4903.00 frames. ], tot_loss[loss=0.1663, simple_loss=0.2384, pruned_loss=0.04712, over 944855.53 frames. ], batch size: 32, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:40:02,638 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=155605.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:40:11,184 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7015, 1.5813, 1.5251, 1.6132, 1.3322, 3.5232, 1.4708, 1.8775], device='cuda:2'), covar=tensor([0.4240, 0.3233, 0.2490, 0.3003, 0.1701, 0.0318, 0.2553, 0.1239], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0116, 0.0120, 0.0123, 0.0112, 0.0095, 0.0093, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 09:40:20,550 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=155632.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:40:31,714 INFO [finetune.py:976] (2/7) Epoch 28, batch 1000, loss[loss=0.1846, simple_loss=0.2672, pruned_loss=0.05095, over 4815.00 frames. ], tot_loss[loss=0.1666, simple_loss=0.2388, pruned_loss=0.04715, over 948811.84 frames. 
], batch size: 38, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:40:37,328 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=155657.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:40:42,785 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=155666.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:40:45,715 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.619e+01 1.519e+02 1.814e+02 2.190e+02 3.109e+02, threshold=3.628e+02, percent-clipped=0.0 2023-03-27 09:40:49,966 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.4059, 1.4066, 1.4372, 0.8206, 1.4732, 1.7091, 1.7719, 1.3000], device='cuda:2'), covar=tensor([0.0969, 0.0759, 0.0535, 0.0566, 0.0530, 0.0597, 0.0324, 0.0764], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0147, 0.0130, 0.0123, 0.0132, 0.0130, 0.0143, 0.0150], device='cuda:2'), out_proj_covar=tensor([8.8758e-05, 1.0555e-04, 9.2581e-05, 8.6096e-05, 9.2301e-05, 9.2359e-05, 1.0139e-04, 1.0743e-04], device='cuda:2') 2023-03-27 09:40:54,174 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=155680.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:41:13,974 INFO [finetune.py:976] (2/7) Epoch 28, batch 1050, loss[loss=0.1814, simple_loss=0.2649, pruned_loss=0.04893, over 4802.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.244, pruned_loss=0.04933, over 949584.82 frames. ], batch size: 41, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:41:26,785 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=155718.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:41:31,003 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=155725.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:41:46,713 INFO [finetune.py:976] (2/7) Epoch 28, batch 1100, loss[loss=0.1864, simple_loss=0.2641, pruned_loss=0.05441, over 4810.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2452, pruned_loss=0.04941, over 950313.83 frames. 
], batch size: 51, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:41:49,118 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.0866, 3.5658, 3.7513, 3.9181, 3.8377, 3.6322, 4.1679, 1.3698], device='cuda:2'), covar=tensor([0.0871, 0.0898, 0.0842, 0.1080, 0.1369, 0.1540, 0.0833, 0.5688], device='cuda:2'), in_proj_covar=tensor([0.0352, 0.0246, 0.0283, 0.0297, 0.0335, 0.0287, 0.0305, 0.0302], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 09:42:01,622 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.104e+02 1.629e+02 1.915e+02 2.256e+02 9.973e+02, threshold=3.830e+02, percent-clipped=2.0 2023-03-27 09:42:02,899 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=155773.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:42:02,960 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5408, 1.5392, 1.2159, 1.5565, 1.8800, 1.7282, 1.5956, 1.3699], device='cuda:2'), covar=tensor([0.0356, 0.0335, 0.0664, 0.0312, 0.0198, 0.0507, 0.0309, 0.0424], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0106, 0.0148, 0.0111, 0.0101, 0.0116, 0.0103, 0.0113], device='cuda:2'), out_proj_covar=tensor([7.8543e-05, 8.1141e-05, 1.1533e-04, 8.4781e-05, 7.8351e-05, 8.5137e-05, 7.6640e-05, 8.6105e-05], device='cuda:2') 2023-03-27 09:42:10,634 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=155784.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:42:21,324 INFO [finetune.py:976] (2/7) Epoch 28, batch 1150, loss[loss=0.1805, simple_loss=0.2453, pruned_loss=0.05782, over 4882.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2453, pruned_loss=0.04958, over 952183.49 frames. ], batch size: 32, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:42:39,764 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=155813.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:43:01,217 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=155845.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 09:43:02,855 INFO [finetune.py:976] (2/7) Epoch 28, batch 1200, loss[loss=0.1395, simple_loss=0.2139, pruned_loss=0.03254, over 4795.00 frames. ], tot_loss[loss=0.1715, simple_loss=0.244, pruned_loss=0.04954, over 953196.10 frames. ], batch size: 25, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:43:18,194 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.094e+02 1.526e+02 1.746e+02 2.167e+02 3.236e+02, threshold=3.492e+02, percent-clipped=0.0 2023-03-27 09:43:45,612 INFO [finetune.py:976] (2/7) Epoch 28, batch 1250, loss[loss=0.1227, simple_loss=0.2105, pruned_loss=0.01741, over 4766.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2421, pruned_loss=0.0495, over 953780.39 frames. ], batch size: 27, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:44:01,998 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.11 vs. limit=2.0 2023-03-27 09:44:15,232 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.12 vs. 
limit=5.0 2023-03-27 09:44:19,206 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7205, 1.7912, 1.5849, 2.0684, 2.0972, 2.0130, 1.6474, 1.5090], device='cuda:2'), covar=tensor([0.2237, 0.1823, 0.1910, 0.1531, 0.1752, 0.1159, 0.2342, 0.1952], device='cuda:2'), in_proj_covar=tensor([0.0245, 0.0212, 0.0215, 0.0199, 0.0246, 0.0190, 0.0217, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 09:44:22,108 INFO [finetune.py:976] (2/7) Epoch 28, batch 1300, loss[loss=0.1793, simple_loss=0.2521, pruned_loss=0.05331, over 4856.00 frames. ], tot_loss[loss=0.1689, simple_loss=0.2401, pruned_loss=0.0489, over 954224.29 frames. ], batch size: 44, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:44:32,015 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=155961.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:44:38,007 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.818e+01 1.476e+02 1.729e+02 2.197e+02 4.050e+02, threshold=3.458e+02, percent-clipped=1.0 2023-03-27 09:44:55,319 INFO [finetune.py:976] (2/7) Epoch 28, batch 1350, loss[loss=0.2372, simple_loss=0.2947, pruned_loss=0.08985, over 4222.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.2391, pruned_loss=0.04837, over 954218.68 frames. ], batch size: 65, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:45:10,154 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=156013.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:45:32,752 INFO [finetune.py:976] (2/7) Epoch 28, batch 1400, loss[loss=0.1635, simple_loss=0.2488, pruned_loss=0.03911, over 4853.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2434, pruned_loss=0.04995, over 955347.65 frames. ], batch size: 44, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:45:45,200 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7756, 1.7210, 1.4076, 1.7992, 2.3945, 1.9430, 1.7786, 1.3681], device='cuda:2'), covar=tensor([0.2153, 0.1956, 0.1923, 0.1641, 0.1567, 0.1175, 0.2206, 0.1809], device='cuda:2'), in_proj_covar=tensor([0.0245, 0.0212, 0.0216, 0.0199, 0.0246, 0.0190, 0.0217, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 09:45:48,213 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=156070.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:45:48,708 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.069e+02 1.532e+02 1.806e+02 2.221e+02 4.474e+02, threshold=3.612e+02, percent-clipped=3.0 2023-03-27 09:46:01,390 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6422, 1.1307, 0.9047, 1.6603, 2.0622, 1.4634, 1.5155, 1.6097], device='cuda:2'), covar=tensor([0.1495, 0.2070, 0.1756, 0.1159, 0.1938, 0.1904, 0.1409, 0.1821], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0109, 0.0092, 0.0120, 0.0093, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 09:46:06,159 INFO [finetune.py:976] (2/7) Epoch 28, batch 1450, loss[loss=0.1327, simple_loss=0.2187, pruned_loss=0.02332, over 4758.00 frames. ], tot_loss[loss=0.1712, simple_loss=0.244, pruned_loss=0.04917, over 955938.21 frames. 
], batch size: 28, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:46:23,137 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=156113.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:46:26,618 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4717, 2.4389, 2.0269, 2.5098, 2.2947, 2.3360, 2.2885, 3.2045], device='cuda:2'), covar=tensor([0.3458, 0.4512, 0.3263, 0.4215, 0.4261, 0.2460, 0.4205, 0.1502], device='cuda:2'), in_proj_covar=tensor([0.0289, 0.0264, 0.0237, 0.0275, 0.0260, 0.0230, 0.0259, 0.0238], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 09:46:35,148 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=156131.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 09:46:40,535 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=156140.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 09:46:45,776 INFO [finetune.py:976] (2/7) Epoch 28, batch 1500, loss[loss=0.1705, simple_loss=0.2398, pruned_loss=0.05064, over 4150.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2449, pruned_loss=0.04927, over 954590.52 frames. ], batch size: 65, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:46:54,176 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=156161.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:47:02,077 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.065e+02 1.585e+02 1.899e+02 2.331e+02 3.577e+02, threshold=3.799e+02, percent-clipped=0.0 2023-03-27 09:47:18,929 INFO [finetune.py:976] (2/7) Epoch 28, batch 1550, loss[loss=0.1908, simple_loss=0.2534, pruned_loss=0.06409, over 4811.00 frames. ], tot_loss[loss=0.1715, simple_loss=0.2446, pruned_loss=0.0492, over 954256.52 frames. ], batch size: 40, lr: 2.89e-03, grad_scale: 32.0 2023-03-27 09:47:30,562 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=156216.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:47:57,643 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1868, 2.0888, 1.7041, 1.9334, 1.9775, 1.9391, 2.0459, 2.7739], device='cuda:2'), covar=tensor([0.3693, 0.4236, 0.3526, 0.3931, 0.3831, 0.2509, 0.3775, 0.1588], device='cuda:2'), in_proj_covar=tensor([0.0290, 0.0265, 0.0237, 0.0276, 0.0261, 0.0230, 0.0259, 0.0238], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 09:47:59,289 INFO [finetune.py:976] (2/7) Epoch 28, batch 1600, loss[loss=0.1692, simple_loss=0.239, pruned_loss=0.04971, over 4865.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2419, pruned_loss=0.04866, over 954367.05 frames. 
], batch size: 34, lr: 2.89e-03, grad_scale: 32.0 2023-03-27 09:48:08,661 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=156261.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:48:15,839 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.043e+02 1.435e+02 1.767e+02 2.111e+02 3.704e+02, threshold=3.535e+02, percent-clipped=0.0 2023-03-27 09:48:19,970 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=156277.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:48:25,427 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3941, 2.0440, 2.1850, 1.0502, 2.5059, 2.7750, 2.2746, 1.9721], device='cuda:2'), covar=tensor([0.1052, 0.0909, 0.0512, 0.0756, 0.0597, 0.0668, 0.0609, 0.0981], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0148, 0.0131, 0.0123, 0.0132, 0.0131, 0.0143, 0.0151], device='cuda:2'), out_proj_covar=tensor([8.9134e-05, 1.0601e-04, 9.2940e-05, 8.6287e-05, 9.2384e-05, 9.2847e-05, 1.0158e-04, 1.0757e-04], device='cuda:2') 2023-03-27 09:48:27,857 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=156290.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:48:32,611 INFO [finetune.py:976] (2/7) Epoch 28, batch 1650, loss[loss=0.1373, simple_loss=0.21, pruned_loss=0.03229, over 4757.00 frames. ], tot_loss[loss=0.1677, simple_loss=0.2396, pruned_loss=0.04784, over 957704.89 frames. ], batch size: 27, lr: 2.89e-03, grad_scale: 32.0 2023-03-27 09:48:40,816 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=156309.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:48:43,255 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=156313.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:49:18,367 INFO [finetune.py:976] (2/7) Epoch 28, batch 1700, loss[loss=0.1878, simple_loss=0.2537, pruned_loss=0.06094, over 4843.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2385, pruned_loss=0.04782, over 959168.34 frames. ], batch size: 47, lr: 2.88e-03, grad_scale: 32.0 2023-03-27 09:49:20,354 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=156351.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:49:24,026 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.0972, 1.3153, 1.3310, 1.3676, 1.4181, 2.4272, 1.2513, 1.4157], device='cuda:2'), covar=tensor([0.1003, 0.1832, 0.1072, 0.0861, 0.1552, 0.0336, 0.1427, 0.1751], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0079, 0.0085, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 09:49:27,454 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=156361.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:49:34,434 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.260e+01 1.525e+02 1.796e+02 2.204e+02 4.546e+02, threshold=3.593e+02, percent-clipped=3.0 2023-03-27 09:49:51,286 INFO [finetune.py:976] (2/7) Epoch 28, batch 1750, loss[loss=0.1482, simple_loss=0.2227, pruned_loss=0.03682, over 4774.00 frames. ], tot_loss[loss=0.1685, simple_loss=0.2404, pruned_loss=0.04835, over 958260.62 frames. ], batch size: 26, lr: 2.88e-03, grad_scale: 32.0 2023-03-27 09:49:59,085 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.81 vs. 
limit=5.0 2023-03-27 09:50:08,430 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.66 vs. limit=2.0 2023-03-27 09:50:09,511 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=156426.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 09:50:19,449 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=156440.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 09:50:19,595 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.77 vs. limit=2.0 2023-03-27 09:50:24,159 INFO [finetune.py:976] (2/7) Epoch 28, batch 1800, loss[loss=0.1826, simple_loss=0.2693, pruned_loss=0.048, over 4766.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2421, pruned_loss=0.04845, over 957745.64 frames. ], batch size: 54, lr: 2.88e-03, grad_scale: 32.0 2023-03-27 09:50:39,987 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.070e+02 1.522e+02 1.851e+02 2.291e+02 4.651e+02, threshold=3.702e+02, percent-clipped=5.0 2023-03-27 09:50:51,502 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=156488.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:50:57,543 INFO [finetune.py:976] (2/7) Epoch 28, batch 1850, loss[loss=0.1541, simple_loss=0.232, pruned_loss=0.0381, over 4752.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2438, pruned_loss=0.0495, over 957118.91 frames. ], batch size: 27, lr: 2.88e-03, grad_scale: 32.0 2023-03-27 09:51:40,567 INFO [finetune.py:976] (2/7) Epoch 28, batch 1900, loss[loss=0.164, simple_loss=0.2385, pruned_loss=0.04477, over 4819.00 frames. ], tot_loss[loss=0.1725, simple_loss=0.245, pruned_loss=0.05, over 956775.13 frames. ], batch size: 38, lr: 2.88e-03, grad_scale: 32.0 2023-03-27 09:51:47,438 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-27 09:51:56,029 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.831e+01 1.540e+02 1.871e+02 2.242e+02 4.934e+02, threshold=3.741e+02, percent-clipped=1.0 2023-03-27 09:51:56,113 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=156572.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:52:11,596 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.43 vs. limit=5.0 2023-03-27 09:52:13,683 INFO [finetune.py:976] (2/7) Epoch 28, batch 1950, loss[loss=0.2296, simple_loss=0.2771, pruned_loss=0.09108, over 4762.00 frames. ], tot_loss[loss=0.1721, simple_loss=0.2439, pruned_loss=0.05014, over 955684.16 frames. ], batch size: 26, lr: 2.88e-03, grad_scale: 32.0 2023-03-27 09:52:46,375 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=156646.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:52:47,506 INFO [finetune.py:976] (2/7) Epoch 28, batch 2000, loss[loss=0.142, simple_loss=0.2142, pruned_loss=0.03492, over 4908.00 frames. ], tot_loss[loss=0.1692, simple_loss=0.241, pruned_loss=0.04867, over 957010.67 frames. ], batch size: 43, lr: 2.88e-03, grad_scale: 32.0 2023-03-27 09:53:04,320 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.919e+01 1.529e+02 1.760e+02 2.169e+02 4.761e+02, threshold=3.520e+02, percent-clipped=1.0 2023-03-27 09:53:29,982 INFO [finetune.py:976] (2/7) Epoch 28, batch 2050, loss[loss=0.1966, simple_loss=0.2666, pruned_loss=0.06326, over 4924.00 frames. ], tot_loss[loss=0.1667, simple_loss=0.2381, pruned_loss=0.04763, over 958382.81 frames. 
], batch size: 43, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:53:31,393 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-27 09:53:42,797 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.85 vs. limit=5.0 2023-03-27 09:53:47,445 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=156726.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 09:54:08,961 INFO [finetune.py:976] (2/7) Epoch 28, batch 2100, loss[loss=0.1397, simple_loss=0.2062, pruned_loss=0.03658, over 4789.00 frames. ], tot_loss[loss=0.1669, simple_loss=0.2382, pruned_loss=0.04785, over 957174.63 frames. ], batch size: 29, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:54:09,685 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=156749.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:54:37,771 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.070e+02 1.521e+02 1.844e+02 2.179e+02 3.224e+02, threshold=3.687e+02, percent-clipped=0.0 2023-03-27 09:54:38,476 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=156774.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:54:47,432 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.67 vs. limit=5.0 2023-03-27 09:54:54,962 INFO [finetune.py:976] (2/7) Epoch 28, batch 2150, loss[loss=0.1888, simple_loss=0.2773, pruned_loss=0.0502, over 4748.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.2421, pruned_loss=0.04867, over 955237.70 frames. ], batch size: 54, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:55:02,856 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7794, 1.2967, 0.8710, 1.6924, 2.1317, 1.5149, 1.5847, 1.6143], device='cuda:2'), covar=tensor([0.1640, 0.2141, 0.2004, 0.1267, 0.2024, 0.1838, 0.1517, 0.2237], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0109, 0.0092, 0.0120, 0.0093, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 09:55:03,479 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=156810.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:55:23,102 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.49 vs. limit=5.0 2023-03-27 09:55:27,790 INFO [finetune.py:976] (2/7) Epoch 28, batch 2200, loss[loss=0.186, simple_loss=0.2625, pruned_loss=0.0548, over 4724.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2436, pruned_loss=0.04863, over 957362.44 frames. 
], batch size: 54, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:55:36,801 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2112, 2.1391, 2.3607, 1.5472, 2.1413, 2.3683, 2.4345, 1.8657], device='cuda:2'), covar=tensor([0.0591, 0.0694, 0.0645, 0.0911, 0.0764, 0.0699, 0.0553, 0.1136], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0138, 0.0140, 0.0118, 0.0128, 0.0138, 0.0139, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 09:55:44,125 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7856, 4.1500, 3.9452, 2.1463, 4.2753, 3.1579, 0.8715, 2.9455], device='cuda:2'), covar=tensor([0.2145, 0.1811, 0.1426, 0.3025, 0.0828, 0.0909, 0.4347, 0.1341], device='cuda:2'), in_proj_covar=tensor([0.0150, 0.0178, 0.0159, 0.0129, 0.0162, 0.0123, 0.0148, 0.0125], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 09:55:44,158 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=156872.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:55:44,654 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.040e+02 1.543e+02 1.740e+02 2.127e+02 4.555e+02, threshold=3.480e+02, percent-clipped=1.0 2023-03-27 09:55:52,565 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=156885.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:56:01,289 INFO [finetune.py:976] (2/7) Epoch 28, batch 2250, loss[loss=0.1441, simple_loss=0.2236, pruned_loss=0.03226, over 4800.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2452, pruned_loss=0.04915, over 954248.84 frames. ], batch size: 40, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:56:15,019 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7026, 3.4866, 3.2684, 1.6614, 3.6001, 2.6724, 0.8448, 2.4239], device='cuda:2'), covar=tensor([0.2652, 0.1874, 0.1641, 0.3230, 0.1054, 0.1071, 0.4090, 0.1530], device='cuda:2'), in_proj_covar=tensor([0.0150, 0.0178, 0.0159, 0.0129, 0.0162, 0.0123, 0.0148, 0.0125], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 09:56:16,207 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=156920.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:56:31,028 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.0514, 3.5354, 3.7106, 3.9097, 3.7873, 3.5531, 4.1496, 1.3796], device='cuda:2'), covar=tensor([0.0856, 0.0923, 0.0937, 0.0969, 0.1366, 0.1738, 0.0778, 0.5787], device='cuda:2'), in_proj_covar=tensor([0.0350, 0.0244, 0.0282, 0.0293, 0.0333, 0.0285, 0.0304, 0.0302], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 09:56:32,863 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=156946.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:56:32,885 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=156946.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:56:33,999 INFO [finetune.py:976] (2/7) Epoch 28, batch 2300, loss[loss=0.147, simple_loss=0.221, pruned_loss=0.03648, over 4921.00 frames. ], tot_loss[loss=0.1712, simple_loss=0.2452, pruned_loss=0.04862, over 956594.89 frames. 
], batch size: 33, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:57:00,119 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.437e+02 1.655e+02 2.039e+02 3.893e+02, threshold=3.311e+02, percent-clipped=1.0 2023-03-27 09:57:10,752 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0 2023-03-27 09:57:17,551 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=156994.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:57:20,392 INFO [finetune.py:976] (2/7) Epoch 28, batch 2350, loss[loss=0.1357, simple_loss=0.2117, pruned_loss=0.02983, over 4851.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.2433, pruned_loss=0.04859, over 956201.40 frames. ], batch size: 49, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:57:28,245 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=157010.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:57:52,967 INFO [finetune.py:976] (2/7) Epoch 28, batch 2400, loss[loss=0.18, simple_loss=0.2614, pruned_loss=0.04924, over 4829.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.2403, pruned_loss=0.04771, over 957219.17 frames. ], batch size: 39, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:58:08,928 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=157071.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:58:12,819 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.019e+02 1.490e+02 1.799e+02 2.218e+02 3.254e+02, threshold=3.597e+02, percent-clipped=0.0 2023-03-27 09:58:26,415 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8644, 1.1557, 1.9563, 1.9210, 1.7607, 1.6718, 1.7799, 1.8943], device='cuda:2'), covar=tensor([0.3817, 0.3711, 0.3041, 0.3445, 0.4517, 0.3622, 0.4222, 0.2876], device='cuda:2'), in_proj_covar=tensor([0.0268, 0.0249, 0.0269, 0.0298, 0.0297, 0.0274, 0.0303, 0.0253], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 09:58:28,593 INFO [finetune.py:976] (2/7) Epoch 28, batch 2450, loss[loss=0.1711, simple_loss=0.2341, pruned_loss=0.05411, over 4839.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2392, pruned_loss=0.04773, over 957241.34 frames. 
], batch size: 49, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:58:33,538 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=157105.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:58:39,072 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9501, 2.6466, 2.3418, 1.2012, 2.5185, 2.1444, 2.0637, 2.5434], device='cuda:2'), covar=tensor([0.0907, 0.0964, 0.1558, 0.2120, 0.1398, 0.2202, 0.1971, 0.0893], device='cuda:2'), in_proj_covar=tensor([0.0172, 0.0192, 0.0202, 0.0182, 0.0211, 0.0211, 0.0225, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 09:58:57,107 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7417, 1.9093, 1.5627, 2.0377, 2.2903, 2.0559, 1.8937, 1.4728], device='cuda:2'), covar=tensor([0.2297, 0.1808, 0.1926, 0.1651, 0.1822, 0.1167, 0.2097, 0.2049], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0212, 0.0216, 0.0199, 0.0245, 0.0191, 0.0216, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 09:58:57,713 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=157142.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:58:58,951 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2665, 2.1293, 2.3823, 1.7605, 2.2550, 2.4227, 2.5112, 1.8323], device='cuda:2'), covar=tensor([0.0560, 0.0641, 0.0582, 0.0753, 0.0689, 0.0618, 0.0466, 0.1125], device='cuda:2'), in_proj_covar=tensor([0.0130, 0.0138, 0.0139, 0.0118, 0.0128, 0.0138, 0.0139, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 09:58:59,644 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0 2023-03-27 09:59:01,795 INFO [finetune.py:976] (2/7) Epoch 28, batch 2500, loss[loss=0.131, simple_loss=0.2084, pruned_loss=0.0268, over 4793.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.24, pruned_loss=0.0478, over 955345.53 frames. ], batch size: 29, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:59:08,333 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6665, 1.5515, 1.5018, 1.6093, 1.1635, 2.8044, 1.1867, 1.6813], device='cuda:2'), covar=tensor([0.2739, 0.2234, 0.1856, 0.1991, 0.1559, 0.0313, 0.2135, 0.0988], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0124, 0.0114, 0.0096, 0.0094, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 09:59:23,964 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.120e+02 1.586e+02 1.871e+02 2.257e+02 5.817e+02, threshold=3.742e+02, percent-clipped=3.0 2023-03-27 09:59:43,663 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.3701, 1.3844, 1.3270, 0.7385, 1.4148, 1.6503, 1.6726, 1.3052], device='cuda:2'), covar=tensor([0.1177, 0.0700, 0.0576, 0.0584, 0.0549, 0.0724, 0.0363, 0.0813], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0149, 0.0130, 0.0123, 0.0132, 0.0131, 0.0143, 0.0151], device='cuda:2'), out_proj_covar=tensor([8.9024e-05, 1.0649e-04, 9.2330e-05, 8.6278e-05, 9.2678e-05, 9.2424e-05, 1.0164e-04, 1.0798e-04], device='cuda:2') 2023-03-27 09:59:51,556 INFO [finetune.py:976] (2/7) Epoch 28, batch 2550, loss[loss=0.1687, simple_loss=0.2485, pruned_loss=0.04441, over 4901.00 frames. 
], tot_loss[loss=0.1695, simple_loss=0.243, pruned_loss=0.04803, over 954580.32 frames. ], batch size: 36, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:59:52,296 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6949, 1.6621, 1.6205, 1.6641, 1.4110, 4.1136, 1.6842, 2.0538], device='cuda:2'), covar=tensor([0.3279, 0.2578, 0.2141, 0.2303, 0.1628, 0.0132, 0.2598, 0.1159], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0117, 0.0121, 0.0124, 0.0114, 0.0096, 0.0094, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 09:59:55,385 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=157203.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:00:03,519 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5354, 1.0348, 0.9013, 1.5792, 1.9509, 1.5758, 1.2390, 1.5353], device='cuda:2'), covar=tensor([0.1677, 0.2320, 0.1941, 0.1271, 0.2023, 0.2060, 0.1539, 0.1973], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0093, 0.0109, 0.0092, 0.0119, 0.0092, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 10:00:20,724 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=157241.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:00:24,904 INFO [finetune.py:976] (2/7) Epoch 28, batch 2600, loss[loss=0.1634, simple_loss=0.2457, pruned_loss=0.04059, over 4900.00 frames. ], tot_loss[loss=0.1721, simple_loss=0.2455, pruned_loss=0.04937, over 954973.71 frames. ], batch size: 43, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:00:41,899 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.034e+02 1.561e+02 1.846e+02 2.212e+02 4.271e+02, threshold=3.692e+02, percent-clipped=1.0 2023-03-27 10:00:47,851 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3734, 1.4094, 1.9890, 1.8463, 1.5327, 3.4646, 1.2614, 1.5514], device='cuda:2'), covar=tensor([0.1055, 0.1842, 0.1083, 0.0954, 0.1648, 0.0225, 0.1671, 0.1904], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0083, 0.0074, 0.0076, 0.0092, 0.0081, 0.0086, 0.0081], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:2') 2023-03-27 10:00:57,759 INFO [finetune.py:976] (2/7) Epoch 28, batch 2650, loss[loss=0.1253, simple_loss=0.1924, pruned_loss=0.02911, over 4693.00 frames. ], tot_loss[loss=0.1737, simple_loss=0.247, pruned_loss=0.05014, over 956020.14 frames. ], batch size: 23, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:01:01,444 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5838, 1.3268, 0.8376, 1.5824, 2.0727, 1.2667, 1.4747, 1.6350], device='cuda:2'), covar=tensor([0.1538, 0.1930, 0.1874, 0.1191, 0.1901, 0.1799, 0.1390, 0.1883], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0093, 0.0109, 0.0092, 0.0120, 0.0092, 0.0098, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 10:01:03,223 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. 
limit=2.0 2023-03-27 10:01:26,020 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6128, 1.2183, 0.9027, 1.5792, 2.1061, 1.1012, 1.5208, 1.5704], device='cuda:2'), covar=tensor([0.1485, 0.1958, 0.1764, 0.1198, 0.1912, 0.1828, 0.1360, 0.1914], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0093, 0.0109, 0.0092, 0.0119, 0.0092, 0.0097, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 10:01:30,662 INFO [finetune.py:976] (2/7) Epoch 28, batch 2700, loss[loss=0.188, simple_loss=0.2498, pruned_loss=0.06306, over 4180.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2446, pruned_loss=0.04913, over 955069.08 frames. ], batch size: 18, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:01:35,652 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=157356.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:01:42,601 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=157366.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:01:47,172 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.080e+02 1.455e+02 1.749e+02 2.146e+02 4.370e+02, threshold=3.498e+02, percent-clipped=1.0 2023-03-27 10:01:53,270 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=157382.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:02:12,808 INFO [finetune.py:976] (2/7) Epoch 28, batch 2750, loss[loss=0.2053, simple_loss=0.2674, pruned_loss=0.07157, over 4921.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2423, pruned_loss=0.04868, over 956069.50 frames. ], batch size: 37, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:02:20,334 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=157404.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 10:02:20,931 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=157405.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:02:28,670 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=157417.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:02:30,986 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7183, 1.6581, 1.4562, 1.6933, 1.9857, 1.9832, 1.7318, 1.4435], device='cuda:2'), covar=tensor([0.0337, 0.0343, 0.0650, 0.0315, 0.0232, 0.0452, 0.0445, 0.0428], device='cuda:2'), in_proj_covar=tensor([0.0102, 0.0106, 0.0148, 0.0112, 0.0102, 0.0116, 0.0104, 0.0113], device='cuda:2'), out_proj_covar=tensor([7.8772e-05, 8.0990e-05, 1.1559e-04, 8.5183e-05, 7.8957e-05, 8.5401e-05, 7.7156e-05, 8.6106e-05], device='cuda:2') 2023-03-27 10:02:36,452 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.54 vs. limit=2.0 2023-03-27 10:02:46,865 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=157443.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:02:50,280 INFO [finetune.py:976] (2/7) Epoch 28, batch 2800, loss[loss=0.2195, simple_loss=0.2782, pruned_loss=0.0804, over 4829.00 frames. ], tot_loss[loss=0.1666, simple_loss=0.2382, pruned_loss=0.04749, over 956727.90 frames. 
], batch size: 40, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:02:53,299 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=157453.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:03:01,079 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=157465.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 10:03:06,316 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.288e+01 1.520e+02 1.688e+02 2.071e+02 7.416e+02, threshold=3.376e+02, percent-clipped=4.0 2023-03-27 10:03:23,413 INFO [finetune.py:976] (2/7) Epoch 28, batch 2850, loss[loss=0.1528, simple_loss=0.2385, pruned_loss=0.0336, over 4938.00 frames. ], tot_loss[loss=0.1658, simple_loss=0.2374, pruned_loss=0.04708, over 954955.34 frames. ], batch size: 38, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:03:23,483 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=157498.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:03:28,854 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=157506.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 10:03:42,067 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-27 10:03:52,492 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=157541.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:03:57,079 INFO [finetune.py:976] (2/7) Epoch 28, batch 2900, loss[loss=0.1772, simple_loss=0.2585, pruned_loss=0.04796, over 4913.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.2417, pruned_loss=0.04843, over 952385.79 frames. ], batch size: 37, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:03:58,888 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0139, 1.9734, 1.7262, 2.2532, 2.4557, 2.1048, 2.0888, 1.5371], device='cuda:2'), covar=tensor([0.2280, 0.2099, 0.1957, 0.1535, 0.1887, 0.1226, 0.2158, 0.2073], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0212, 0.0215, 0.0200, 0.0246, 0.0191, 0.0217, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:04:09,783 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=157567.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 10:04:10,386 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=157568.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:04:13,241 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.581e+01 1.484e+02 1.768e+02 2.098e+02 4.175e+02, threshold=3.535e+02, percent-clipped=1.0 2023-03-27 10:04:24,446 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=157589.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:04:32,418 INFO [finetune.py:976] (2/7) Epoch 28, batch 2950, loss[loss=0.1799, simple_loss=0.2592, pruned_loss=0.05027, over 4822.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2432, pruned_loss=0.04847, over 952213.36 frames. 
], batch size: 51, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:05:06,823 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=157629.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:05:07,434 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8916, 1.8414, 1.9510, 1.4175, 1.8888, 2.0550, 2.0511, 1.6085], device='cuda:2'), covar=tensor([0.0554, 0.0621, 0.0684, 0.0819, 0.0779, 0.0633, 0.0535, 0.1109], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0138, 0.0140, 0.0119, 0.0129, 0.0139, 0.0140, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:05:19,837 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7198, 3.9125, 3.5992, 1.7589, 3.9928, 3.0382, 1.2377, 2.7250], device='cuda:2'), covar=tensor([0.2179, 0.1670, 0.1547, 0.3219, 0.1021, 0.0978, 0.3727, 0.1381], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0182, 0.0161, 0.0132, 0.0165, 0.0125, 0.0150, 0.0127], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 10:05:23,714 INFO [finetune.py:976] (2/7) Epoch 28, batch 3000, loss[loss=0.1633, simple_loss=0.2372, pruned_loss=0.04473, over 4898.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2447, pruned_loss=0.04906, over 953959.36 frames. ], batch size: 32, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:05:23,714 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-27 10:05:28,492 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1411, 1.2958, 2.2048, 2.0829, 1.9568, 1.8880, 1.9282, 2.0706], device='cuda:2'), covar=tensor([0.3383, 0.3495, 0.3058, 0.3295, 0.4398, 0.3599, 0.4007, 0.2779], device='cuda:2'), in_proj_covar=tensor([0.0268, 0.0249, 0.0269, 0.0297, 0.0297, 0.0274, 0.0303, 0.0253], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:05:34,508 INFO [finetune.py:1010] (2/7) Epoch 28, validation: loss=0.1567, simple_loss=0.2243, pruned_loss=0.04455, over 2265189.00 frames. 2023-03-27 10:05:34,508 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB 2023-03-27 10:05:46,467 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=157666.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:05:50,618 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.041e+02 1.524e+02 1.841e+02 2.248e+02 4.082e+02, threshold=3.682e+02, percent-clipped=3.0 2023-03-27 10:06:07,191 INFO [finetune.py:976] (2/7) Epoch 28, batch 3050, loss[loss=0.1681, simple_loss=0.2438, pruned_loss=0.04619, over 4747.00 frames. ], tot_loss[loss=0.1737, simple_loss=0.2468, pruned_loss=0.05029, over 955189.68 frames. ], batch size: 27, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:06:09,129 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.13 vs. 
limit=2.0 2023-03-27 10:06:16,640 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=157712.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:06:17,822 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=157714.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:06:33,277 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=157738.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:06:40,233 INFO [finetune.py:976] (2/7) Epoch 28, batch 3100, loss[loss=0.1679, simple_loss=0.239, pruned_loss=0.04843, over 4874.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2445, pruned_loss=0.04998, over 954319.01 frames. ], batch size: 34, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:06:48,818 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=157760.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 10:06:57,051 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.055e+02 1.389e+02 1.744e+02 2.105e+02 3.209e+02, threshold=3.488e+02, percent-clipped=0.0 2023-03-27 10:07:02,667 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5518, 2.3574, 2.0120, 0.8945, 2.0903, 1.9632, 1.8593, 2.0556], device='cuda:2'), covar=tensor([0.0851, 0.0856, 0.1713, 0.2142, 0.1452, 0.2133, 0.2245, 0.1020], device='cuda:2'), in_proj_covar=tensor([0.0173, 0.0192, 0.0202, 0.0182, 0.0212, 0.0212, 0.0225, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:07:19,510 INFO [finetune.py:976] (2/7) Epoch 28, batch 3150, loss[loss=0.1405, simple_loss=0.2161, pruned_loss=0.03245, over 4816.00 frames. ], tot_loss[loss=0.1694, simple_loss=0.2412, pruned_loss=0.04881, over 953099.00 frames. ], batch size: 25, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:07:19,605 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=157798.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:08:03,993 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=157846.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:08:05,667 INFO [finetune.py:976] (2/7) Epoch 28, batch 3200, loss[loss=0.1292, simple_loss=0.1966, pruned_loss=0.03091, over 4460.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2384, pruned_loss=0.04791, over 954416.36 frames. 
], batch size: 19, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:08:09,374 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=157853.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:08:15,275 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=157862.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 10:08:22,856 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.817e+01 1.555e+02 1.831e+02 2.254e+02 7.078e+02, threshold=3.662e+02, percent-clipped=7.0 2023-03-27 10:08:33,072 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.8838, 3.3733, 3.5738, 3.6745, 3.6688, 3.4258, 3.9442, 1.7225], device='cuda:2'), covar=tensor([0.0821, 0.0923, 0.0842, 0.1018, 0.1119, 0.1297, 0.0705, 0.4799], device='cuda:2'), in_proj_covar=tensor([0.0352, 0.0246, 0.0285, 0.0294, 0.0336, 0.0285, 0.0305, 0.0303], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:08:38,478 INFO [finetune.py:976] (2/7) Epoch 28, batch 3250, loss[loss=0.1983, simple_loss=0.2785, pruned_loss=0.05908, over 4854.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2389, pruned_loss=0.04816, over 952357.61 frames. ], batch size: 44, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:08:49,839 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=157914.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:08:56,864 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=157924.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:09:11,705 INFO [finetune.py:976] (2/7) Epoch 28, batch 3300, loss[loss=0.1949, simple_loss=0.2781, pruned_loss=0.05587, over 4821.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.241, pruned_loss=0.04816, over 951993.65 frames. ], batch size: 33, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:09:29,135 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.081e+02 1.521e+02 1.758e+02 2.140e+02 4.599e+02, threshold=3.515e+02, percent-clipped=1.0 2023-03-27 10:09:44,716 INFO [finetune.py:976] (2/7) Epoch 28, batch 3350, loss[loss=0.2059, simple_loss=0.2679, pruned_loss=0.072, over 4761.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2436, pruned_loss=0.04894, over 952934.88 frames. ], batch size: 28, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:09:58,153 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=158012.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:10:18,852 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1888, 2.9396, 2.8123, 1.2463, 3.0852, 2.2442, 0.8077, 1.8925], device='cuda:2'), covar=tensor([0.2437, 0.2065, 0.1848, 0.3434, 0.1342, 0.1174, 0.3917, 0.1658], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0181, 0.0161, 0.0131, 0.0165, 0.0125, 0.0151, 0.0127], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 10:10:32,991 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=158038.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:10:38,995 INFO [finetune.py:976] (2/7) Epoch 28, batch 3400, loss[loss=0.1665, simple_loss=0.253, pruned_loss=0.03995, over 4838.00 frames. ], tot_loss[loss=0.1712, simple_loss=0.2442, pruned_loss=0.04912, over 948916.91 frames. 
], batch size: 49, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:10:46,887 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=158060.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:10:46,926 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=158060.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 10:10:55,546 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.103e+02 1.550e+02 1.897e+02 2.350e+02 3.360e+02, threshold=3.793e+02, percent-clipped=0.0 2023-03-27 10:11:04,978 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=158086.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:11:06,886 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2152, 2.1588, 1.6749, 2.0320, 2.1227, 1.8631, 2.4151, 2.1984], device='cuda:2'), covar=tensor([0.1226, 0.1794, 0.2885, 0.2428, 0.2560, 0.1707, 0.3018, 0.1622], device='cuda:2'), in_proj_covar=tensor([0.0189, 0.0189, 0.0236, 0.0253, 0.0250, 0.0206, 0.0213, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:11:12,079 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3924, 1.0036, 0.7333, 1.2837, 1.9317, 0.7072, 1.2042, 1.3085], device='cuda:2'), covar=tensor([0.2114, 0.2848, 0.2278, 0.1624, 0.2231, 0.2531, 0.1987, 0.2745], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0093, 0.0109, 0.0092, 0.0120, 0.0092, 0.0097, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-27 10:11:12,584 INFO [finetune.py:976] (2/7) Epoch 28, batch 3450, loss[loss=0.1872, simple_loss=0.2648, pruned_loss=0.0548, over 4897.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.245, pruned_loss=0.04921, over 950842.60 frames. ], batch size: 37, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:11:18,606 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=158108.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 10:11:28,434 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=158122.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:11:40,107 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3913, 1.4235, 1.2627, 1.4532, 1.7079, 1.6157, 1.5362, 1.3083], device='cuda:2'), covar=tensor([0.0381, 0.0286, 0.0629, 0.0288, 0.0206, 0.0398, 0.0270, 0.0382], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0105, 0.0147, 0.0111, 0.0101, 0.0115, 0.0103, 0.0113], device='cuda:2'), out_proj_covar=tensor([7.8087e-05, 8.0550e-05, 1.1438e-04, 8.4481e-05, 7.8204e-05, 8.4921e-05, 7.6585e-05, 8.5550e-05], device='cuda:2') 2023-03-27 10:11:46,009 INFO [finetune.py:976] (2/7) Epoch 28, batch 3500, loss[loss=0.1445, simple_loss=0.2132, pruned_loss=0.03786, over 4826.00 frames. ], tot_loss[loss=0.1694, simple_loss=0.242, pruned_loss=0.04841, over 951923.18 frames. ], batch size: 38, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:11:54,571 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=158162.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 10:12:02,510 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.003e+02 1.432e+02 1.800e+02 2.074e+02 3.411e+02, threshold=3.600e+02, percent-clipped=0.0 2023-03-27 10:12:04,580 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.50 vs. 
limit=5.0 2023-03-27 10:12:09,603 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=158183.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:12:19,503 INFO [finetune.py:976] (2/7) Epoch 28, batch 3550, loss[loss=0.1331, simple_loss=0.2063, pruned_loss=0.02992, over 4729.00 frames. ], tot_loss[loss=0.1669, simple_loss=0.2388, pruned_loss=0.04749, over 953957.38 frames. ], batch size: 23, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:12:28,583 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=158209.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:12:29,187 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=158210.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 10:12:38,710 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=158224.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:13:02,523 INFO [finetune.py:976] (2/7) Epoch 28, batch 3600, loss[loss=0.1163, simple_loss=0.1945, pruned_loss=0.01904, over 4748.00 frames. ], tot_loss[loss=0.1648, simple_loss=0.2362, pruned_loss=0.04666, over 955319.07 frames. ], batch size: 27, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:13:18,162 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=158272.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:13:18,718 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.065e+02 1.425e+02 1.679e+02 2.016e+02 3.584e+02, threshold=3.358e+02, percent-clipped=0.0 2023-03-27 10:13:22,889 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1316, 1.8077, 2.1507, 2.2106, 1.9131, 1.9180, 2.1201, 2.0219], device='cuda:2'), covar=tensor([0.4331, 0.4345, 0.3446, 0.4267, 0.5614, 0.4350, 0.5243, 0.3209], device='cuda:2'), in_proj_covar=tensor([0.0267, 0.0248, 0.0268, 0.0297, 0.0296, 0.0273, 0.0303, 0.0252], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:13:36,304 INFO [finetune.py:976] (2/7) Epoch 28, batch 3650, loss[loss=0.2074, simple_loss=0.2623, pruned_loss=0.07627, over 4740.00 frames. ], tot_loss[loss=0.1665, simple_loss=0.2381, pruned_loss=0.04748, over 955580.96 frames. ], batch size: 23, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:13:52,680 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6480, 1.0925, 0.8547, 1.6751, 1.9972, 1.4841, 1.5462, 1.5274], device='cuda:2'), covar=tensor([0.1538, 0.2255, 0.1891, 0.1190, 0.2054, 0.1940, 0.1445, 0.1976], device='cuda:2'), in_proj_covar=tensor([0.0091, 0.0095, 0.0110, 0.0093, 0.0121, 0.0093, 0.0099, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 10:14:10,062 INFO [finetune.py:976] (2/7) Epoch 28, batch 3700, loss[loss=0.1929, simple_loss=0.2705, pruned_loss=0.05766, over 4894.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2408, pruned_loss=0.04766, over 955887.01 frames. 
], batch size: 35, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:14:26,134 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.573e+02 1.954e+02 2.338e+02 5.991e+02, threshold=3.909e+02, percent-clipped=5.0 2023-03-27 10:14:33,449 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=158384.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:14:43,249 INFO [finetune.py:976] (2/7) Epoch 28, batch 3750, loss[loss=0.1746, simple_loss=0.2559, pruned_loss=0.04663, over 4846.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.2422, pruned_loss=0.04819, over 955285.71 frames. ], batch size: 44, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:15:03,241 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2635, 2.1985, 1.6405, 2.1476, 2.0991, 1.8715, 2.5055, 2.2746], device='cuda:2'), covar=tensor([0.1317, 0.2019, 0.3003, 0.2609, 0.2644, 0.1761, 0.2653, 0.1646], device='cuda:2'), in_proj_covar=tensor([0.0189, 0.0190, 0.0237, 0.0254, 0.0250, 0.0207, 0.0214, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:15:05,700 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.14 vs. limit=5.0 2023-03-27 10:15:08,696 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7660, 1.2321, 1.8668, 1.8616, 1.6713, 1.6064, 1.8029, 1.7804], device='cuda:2'), covar=tensor([0.4196, 0.3894, 0.3075, 0.3637, 0.4529, 0.3835, 0.4110, 0.2894], device='cuda:2'), in_proj_covar=tensor([0.0269, 0.0249, 0.0269, 0.0298, 0.0298, 0.0275, 0.0304, 0.0253], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:15:19,896 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=158445.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:15:22,039 INFO [finetune.py:976] (2/7) Epoch 28, batch 3800, loss[loss=0.1741, simple_loss=0.2535, pruned_loss=0.04731, over 4922.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2443, pruned_loss=0.04886, over 955280.30 frames. ], batch size: 41, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:15:51,767 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.090e+02 1.580e+02 1.909e+02 2.258e+02 3.504e+02, threshold=3.818e+02, percent-clipped=0.0 2023-03-27 10:15:55,428 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=158478.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:16:07,267 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2453, 2.8931, 2.7953, 1.2133, 3.0084, 2.2627, 0.8270, 2.0023], device='cuda:2'), covar=tensor([0.2478, 0.2370, 0.1943, 0.3914, 0.1475, 0.1183, 0.4157, 0.1902], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0180, 0.0160, 0.0130, 0.0163, 0.0124, 0.0149, 0.0126], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 10:16:08,886 INFO [finetune.py:976] (2/7) Epoch 28, batch 3850, loss[loss=0.154, simple_loss=0.2197, pruned_loss=0.04418, over 4778.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2435, pruned_loss=0.04866, over 954365.64 frames. ], batch size: 26, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:16:11,935 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. 
limit=2.0 2023-03-27 10:16:16,606 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=158509.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:16:42,035 INFO [finetune.py:976] (2/7) Epoch 28, batch 3900, loss[loss=0.1604, simple_loss=0.2349, pruned_loss=0.04298, over 4824.00 frames. ], tot_loss[loss=0.1672, simple_loss=0.2397, pruned_loss=0.04738, over 956172.10 frames. ], batch size: 40, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:16:48,975 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=158557.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:16:58,921 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.091e+02 1.423e+02 1.682e+02 2.011e+02 3.434e+02, threshold=3.365e+02, percent-clipped=0.0 2023-03-27 10:17:15,468 INFO [finetune.py:976] (2/7) Epoch 28, batch 3950, loss[loss=0.1575, simple_loss=0.2121, pruned_loss=0.05146, over 4173.00 frames. ], tot_loss[loss=0.1646, simple_loss=0.2367, pruned_loss=0.04627, over 954004.64 frames. ], batch size: 18, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:17:48,877 INFO [finetune.py:976] (2/7) Epoch 28, batch 4000, loss[loss=0.147, simple_loss=0.2203, pruned_loss=0.03683, over 4914.00 frames. ], tot_loss[loss=0.1641, simple_loss=0.2359, pruned_loss=0.04613, over 955438.64 frames. ], batch size: 38, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:17:53,206 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0 2023-03-27 10:18:15,602 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.131e+01 1.444e+02 1.852e+02 2.094e+02 7.470e+02, threshold=3.703e+02, percent-clipped=2.0 2023-03-27 10:18:19,199 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.8956, 3.3720, 3.5733, 3.6632, 3.6954, 3.4397, 3.9350, 1.6787], device='cuda:2'), covar=tensor([0.0885, 0.0889, 0.0894, 0.1022, 0.1120, 0.1606, 0.0785, 0.5049], device='cuda:2'), in_proj_covar=tensor([0.0352, 0.0246, 0.0286, 0.0296, 0.0338, 0.0286, 0.0306, 0.0303], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:18:26,989 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.2836, 1.0822, 1.1874, 0.6312, 1.2206, 1.3046, 1.4031, 1.1562], device='cuda:2'), covar=tensor([0.0819, 0.0671, 0.0562, 0.0455, 0.0493, 0.0621, 0.0370, 0.0587], device='cuda:2'), in_proj_covar=tensor([0.0121, 0.0148, 0.0130, 0.0122, 0.0131, 0.0130, 0.0143, 0.0151], device='cuda:2'), out_proj_covar=tensor([8.8421e-05, 1.0582e-04, 9.2479e-05, 8.5619e-05, 9.2032e-05, 9.1853e-05, 1.0143e-04, 1.0781e-04], device='cuda:2') 2023-03-27 10:18:31,725 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9403, 1.8338, 1.5626, 1.7179, 1.7653, 1.7201, 1.7884, 2.3876], device='cuda:2'), covar=tensor([0.3512, 0.3708, 0.3170, 0.3432, 0.3748, 0.2274, 0.3493, 0.1621], device='cuda:2'), in_proj_covar=tensor([0.0290, 0.0265, 0.0239, 0.0276, 0.0262, 0.0232, 0.0260, 0.0239], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:18:32,166 INFO [finetune.py:976] (2/7) Epoch 28, batch 4050, loss[loss=0.1901, simple_loss=0.279, pruned_loss=0.05054, over 4843.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2402, pruned_loss=0.0477, over 954696.26 frames. 
], batch size: 49, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:18:54,604 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.2178, 1.2788, 1.3711, 0.7322, 1.3109, 1.5020, 1.5711, 1.3106], device='cuda:2'), covar=tensor([0.1080, 0.0704, 0.0580, 0.0528, 0.0593, 0.0648, 0.0382, 0.0664], device='cuda:2'), in_proj_covar=tensor([0.0121, 0.0148, 0.0130, 0.0122, 0.0132, 0.0130, 0.0143, 0.0151], device='cuda:2'), out_proj_covar=tensor([8.8592e-05, 1.0581e-04, 9.2610e-05, 8.5789e-05, 9.2272e-05, 9.2087e-05, 1.0163e-04, 1.0805e-04], device='cuda:2') 2023-03-27 10:18:59,957 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=158740.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:19:05,203 INFO [finetune.py:976] (2/7) Epoch 28, batch 4100, loss[loss=0.1621, simple_loss=0.2428, pruned_loss=0.04074, over 4840.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2431, pruned_loss=0.04851, over 954604.74 frames. ], batch size: 47, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:19:22,720 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.067e+02 1.534e+02 1.832e+02 2.269e+02 3.411e+02, threshold=3.665e+02, percent-clipped=0.0 2023-03-27 10:19:25,288 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=158777.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:19:25,877 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=158778.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:19:29,370 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=158783.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:19:38,839 INFO [finetune.py:976] (2/7) Epoch 28, batch 4150, loss[loss=0.2028, simple_loss=0.2776, pruned_loss=0.06401, over 4808.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2448, pruned_loss=0.04931, over 955113.27 frames. ], batch size: 33, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:19:45,104 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=2.03 vs. limit=2.0 2023-03-27 10:19:58,147 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=158826.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:20:03,632 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=158835.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 10:20:05,946 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=158838.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:20:09,573 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=158844.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:20:11,899 INFO [finetune.py:976] (2/7) Epoch 28, batch 4200, loss[loss=0.1551, simple_loss=0.2337, pruned_loss=0.03829, over 4753.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2452, pruned_loss=0.04923, over 955529.20 frames. 
], batch size: 28, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:20:25,843 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6391, 1.5375, 2.1111, 3.4176, 2.2381, 2.3767, 0.9617, 2.8920], device='cuda:2'), covar=tensor([0.1759, 0.1447, 0.1312, 0.0623, 0.0838, 0.1421, 0.1861, 0.0465], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0116, 0.0132, 0.0164, 0.0100, 0.0135, 0.0124, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 10:20:35,368 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.763e+01 1.457e+02 1.627e+02 2.050e+02 3.601e+02, threshold=3.253e+02, percent-clipped=0.0 2023-03-27 10:20:58,716 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=158896.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 10:21:04,567 INFO [finetune.py:976] (2/7) Epoch 28, batch 4250, loss[loss=0.1893, simple_loss=0.2589, pruned_loss=0.05983, over 4869.00 frames. ], tot_loss[loss=0.1692, simple_loss=0.2421, pruned_loss=0.04818, over 955571.25 frames. ], batch size: 34, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:21:46,469 INFO [finetune.py:976] (2/7) Epoch 28, batch 4300, loss[loss=0.1575, simple_loss=0.2234, pruned_loss=0.04587, over 4902.00 frames. ], tot_loss[loss=0.1665, simple_loss=0.2387, pruned_loss=0.0471, over 956512.65 frames. ], batch size: 32, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:22:00,761 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5224, 3.3458, 3.2354, 1.3807, 3.4548, 2.6794, 0.8769, 2.3817], device='cuda:2'), covar=tensor([0.2388, 0.2046, 0.1684, 0.3624, 0.1243, 0.1026, 0.4189, 0.1748], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0181, 0.0161, 0.0131, 0.0164, 0.0125, 0.0151, 0.0127], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 10:22:03,614 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.096e+02 1.473e+02 1.789e+02 2.045e+02 3.501e+02, threshold=3.577e+02, percent-clipped=2.0 2023-03-27 10:22:20,211 INFO [finetune.py:976] (2/7) Epoch 28, batch 4350, loss[loss=0.2071, simple_loss=0.2634, pruned_loss=0.07538, over 4853.00 frames. ], tot_loss[loss=0.1654, simple_loss=0.2373, pruned_loss=0.04673, over 955786.85 frames. ], batch size: 49, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:22:24,934 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-27 10:22:48,303 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=159040.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:22:53,108 INFO [finetune.py:976] (2/7) Epoch 28, batch 4400, loss[loss=0.2091, simple_loss=0.2841, pruned_loss=0.06709, over 4809.00 frames. ], tot_loss[loss=0.1657, simple_loss=0.2376, pruned_loss=0.04696, over 956927.97 frames. 
], batch size: 51, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:23:06,922 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=159069.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:23:09,712 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.014e+02 1.529e+02 1.726e+02 2.218e+02 4.795e+02, threshold=3.452e+02, percent-clipped=2.0 2023-03-27 10:23:24,711 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=159088.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:23:35,375 INFO [finetune.py:976] (2/7) Epoch 28, batch 4450, loss[loss=0.1853, simple_loss=0.2487, pruned_loss=0.06101, over 4825.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2414, pruned_loss=0.0479, over 956434.38 frames. ], batch size: 30, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:24:00,786 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=159130.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:24:02,995 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=159133.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:24:07,592 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=159139.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:24:12,912 INFO [finetune.py:976] (2/7) Epoch 28, batch 4500, loss[loss=0.1158, simple_loss=0.1863, pruned_loss=0.0227, over 4261.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2432, pruned_loss=0.0486, over 954276.04 frames. ], batch size: 18, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:24:20,177 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=159159.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:24:28,960 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.131e+02 1.553e+02 1.848e+02 2.152e+02 3.966e+02, threshold=3.696e+02, percent-clipped=2.0 2023-03-27 10:24:29,062 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.4383, 3.8416, 4.0693, 4.3072, 4.1893, 3.9278, 4.5490, 1.4309], device='cuda:2'), covar=tensor([0.0824, 0.0921, 0.0912, 0.1015, 0.1317, 0.1602, 0.0636, 0.5966], device='cuda:2'), in_proj_covar=tensor([0.0356, 0.0249, 0.0289, 0.0299, 0.0342, 0.0289, 0.0309, 0.0307], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:24:35,649 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0 2023-03-27 10:24:41,943 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=159191.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 10:24:46,586 INFO [finetune.py:976] (2/7) Epoch 28, batch 4550, loss[loss=0.1894, simple_loss=0.2634, pruned_loss=0.05773, over 4910.00 frames. ], tot_loss[loss=0.1699, simple_loss=0.2435, pruned_loss=0.04822, over 955978.31 frames. 
], batch size: 37, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:24:46,653 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2520, 2.9365, 2.8109, 1.2829, 3.0607, 2.1978, 0.6950, 1.8283], device='cuda:2'), covar=tensor([0.2474, 0.2128, 0.1863, 0.3498, 0.1353, 0.1121, 0.4091, 0.1751], device='cuda:2'), in_proj_covar=tensor([0.0153, 0.0183, 0.0162, 0.0132, 0.0165, 0.0126, 0.0151, 0.0128], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 10:25:00,529 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=159220.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:25:09,812 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0691, 1.9957, 2.1562, 1.6379, 2.0131, 2.2024, 2.1984, 1.7142], device='cuda:2'), covar=tensor([0.0508, 0.0558, 0.0555, 0.0705, 0.0876, 0.0519, 0.0488, 0.1044], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0139, 0.0141, 0.0120, 0.0129, 0.0140, 0.0141, 0.0164], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:25:20,092 INFO [finetune.py:976] (2/7) Epoch 28, batch 4600, loss[loss=0.1853, simple_loss=0.254, pruned_loss=0.05824, over 4909.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.2423, pruned_loss=0.04729, over 954308.81 frames. ], batch size: 38, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:25:35,685 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.005e+02 1.454e+02 1.668e+02 2.062e+02 3.965e+02, threshold=3.336e+02, percent-clipped=3.0 2023-03-27 10:25:38,706 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6274, 1.4606, 1.3148, 1.6160, 1.6421, 1.6529, 1.0787, 1.3954], device='cuda:2'), covar=tensor([0.2057, 0.2019, 0.1874, 0.1634, 0.1558, 0.1167, 0.2448, 0.1779], device='cuda:2'), in_proj_covar=tensor([0.0249, 0.0215, 0.0217, 0.0202, 0.0248, 0.0193, 0.0219, 0.0207], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:25:54,146 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=159290.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:26:05,012 INFO [finetune.py:976] (2/7) Epoch 28, batch 4650, loss[loss=0.1709, simple_loss=0.238, pruned_loss=0.05188, over 4250.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2405, pruned_loss=0.04778, over 951825.73 frames. ], batch size: 65, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:26:54,500 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2853, 1.4049, 1.6367, 1.4977, 1.5564, 3.0008, 1.3644, 1.5827], device='cuda:2'), covar=tensor([0.1016, 0.1711, 0.1038, 0.0933, 0.1515, 0.0268, 0.1451, 0.1661], device='cuda:2'), in_proj_covar=tensor([0.0076, 0.0083, 0.0074, 0.0077, 0.0092, 0.0081, 0.0086, 0.0081], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:2') 2023-03-27 10:26:55,599 INFO [finetune.py:976] (2/7) Epoch 28, batch 4700, loss[loss=0.1294, simple_loss=0.1991, pruned_loss=0.0298, over 4826.00 frames. ], tot_loss[loss=0.1662, simple_loss=0.2378, pruned_loss=0.04731, over 950556.60 frames. 
], batch size: 39, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:26:58,029 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=159351.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:27:11,660 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.356e+01 1.407e+02 1.592e+02 1.978e+02 3.098e+02, threshold=3.183e+02, percent-clipped=0.0 2023-03-27 10:27:24,858 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.62 vs. limit=2.0 2023-03-27 10:27:28,643 INFO [finetune.py:976] (2/7) Epoch 28, batch 4750, loss[loss=0.189, simple_loss=0.2592, pruned_loss=0.05939, over 4815.00 frames. ], tot_loss[loss=0.1658, simple_loss=0.2372, pruned_loss=0.04722, over 952375.66 frames. ], batch size: 38, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:27:46,455 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=159425.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:27:51,861 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=159433.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:27:55,945 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=159439.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:28:02,248 INFO [finetune.py:976] (2/7) Epoch 28, batch 4800, loss[loss=0.1404, simple_loss=0.2122, pruned_loss=0.03431, over 4713.00 frames. ], tot_loss[loss=0.1685, simple_loss=0.2398, pruned_loss=0.0486, over 950775.85 frames. ], batch size: 23, lr: 2.86e-03, grad_scale: 32.0 2023-03-27 10:28:18,792 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.120e+02 1.565e+02 1.779e+02 2.195e+02 3.956e+02, threshold=3.558e+02, percent-clipped=1.0 2023-03-27 10:28:20,258 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.45 vs. limit=5.0 2023-03-27 10:28:23,704 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=159481.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:28:27,814 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=159487.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:28:30,786 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=159491.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 10:28:36,823 INFO [finetune.py:976] (2/7) Epoch 28, batch 4850, loss[loss=0.1678, simple_loss=0.25, pruned_loss=0.04282, over 4764.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2437, pruned_loss=0.04958, over 949403.21 frames. ], batch size: 28, lr: 2.86e-03, grad_scale: 32.0 2023-03-27 10:28:57,888 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=159515.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:29:17,409 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=159539.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 10:29:23,228 INFO [finetune.py:976] (2/7) Epoch 28, batch 4900, loss[loss=0.2347, simple_loss=0.3033, pruned_loss=0.08303, over 4824.00 frames. ], tot_loss[loss=0.1738, simple_loss=0.2463, pruned_loss=0.05069, over 949189.13 frames. ], batch size: 33, lr: 2.86e-03, grad_scale: 32.0 2023-03-27 10:29:40,327 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.030e+02 1.553e+02 1.825e+02 2.271e+02 5.584e+02, threshold=3.651e+02, percent-clipped=3.0 2023-03-27 10:29:50,068 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. 
limit=2.0 2023-03-27 10:29:55,352 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1435, 1.7877, 2.4771, 4.0750, 2.7375, 2.7447, 1.1942, 3.4498], device='cuda:2'), covar=tensor([0.1754, 0.1488, 0.1487, 0.0598, 0.0799, 0.1640, 0.1817, 0.0429], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0164, 0.0100, 0.0135, 0.0124, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 10:29:56,963 INFO [finetune.py:976] (2/7) Epoch 28, batch 4950, loss[loss=0.1939, simple_loss=0.2631, pruned_loss=0.06238, over 4745.00 frames. ], tot_loss[loss=0.173, simple_loss=0.246, pruned_loss=0.05004, over 950542.43 frames. ], batch size: 59, lr: 2.86e-03, grad_scale: 32.0 2023-03-27 10:30:18,779 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=159631.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:30:28,812 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=159646.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:30:29,932 INFO [finetune.py:976] (2/7) Epoch 28, batch 5000, loss[loss=0.1592, simple_loss=0.2359, pruned_loss=0.04126, over 4869.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2443, pruned_loss=0.04967, over 950465.70 frames. ], batch size: 34, lr: 2.86e-03, grad_scale: 32.0 2023-03-27 10:30:42,186 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2510, 2.1115, 1.7985, 1.9517, 2.2031, 1.9305, 2.2694, 2.2226], device='cuda:2'), covar=tensor([0.1392, 0.1899, 0.2979, 0.2415, 0.2599, 0.1832, 0.2935, 0.1935], device='cuda:2'), in_proj_covar=tensor([0.0189, 0.0190, 0.0236, 0.0253, 0.0250, 0.0208, 0.0215, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:30:47,440 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.023e+02 1.490e+02 1.734e+02 1.957e+02 3.436e+02, threshold=3.469e+02, percent-clipped=0.0 2023-03-27 10:30:53,665 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-27 10:30:59,468 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=159692.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:31:01,830 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-27 10:31:05,178 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-27 10:31:05,616 INFO [finetune.py:976] (2/7) Epoch 28, batch 5050, loss[loss=0.1577, simple_loss=0.2195, pruned_loss=0.04796, over 4825.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.2408, pruned_loss=0.04837, over 952195.13 frames. ], batch size: 30, lr: 2.86e-03, grad_scale: 32.0 2023-03-27 10:31:26,634 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1971, 2.1301, 1.6945, 2.1251, 2.1559, 1.8798, 2.4026, 2.1968], device='cuda:2'), covar=tensor([0.1371, 0.2014, 0.2898, 0.2393, 0.2411, 0.1699, 0.3077, 0.1695], device='cuda:2'), in_proj_covar=tensor([0.0190, 0.0191, 0.0237, 0.0254, 0.0251, 0.0208, 0.0216, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:31:27,412 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. 
limit=2.0 2023-03-27 10:31:32,267 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=159725.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:31:55,164 INFO [finetune.py:976] (2/7) Epoch 28, batch 5100, loss[loss=0.144, simple_loss=0.217, pruned_loss=0.03549, over 4753.00 frames. ], tot_loss[loss=0.1652, simple_loss=0.2373, pruned_loss=0.04655, over 950521.97 frames. ], batch size: 27, lr: 2.86e-03, grad_scale: 32.0 2023-03-27 10:32:21,761 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.008e+02 1.553e+02 1.866e+02 2.180e+02 3.771e+02, threshold=3.731e+02, percent-clipped=1.0 2023-03-27 10:32:21,834 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=159773.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:32:37,624 INFO [finetune.py:976] (2/7) Epoch 28, batch 5150, loss[loss=0.1663, simple_loss=0.2471, pruned_loss=0.04281, over 4902.00 frames. ], tot_loss[loss=0.1658, simple_loss=0.2377, pruned_loss=0.04695, over 950578.86 frames. ], batch size: 35, lr: 2.86e-03, grad_scale: 32.0 2023-03-27 10:32:49,259 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=159815.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:33:11,651 INFO [finetune.py:976] (2/7) Epoch 28, batch 5200, loss[loss=0.1468, simple_loss=0.2229, pruned_loss=0.03539, over 4356.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2405, pruned_loss=0.04778, over 951096.59 frames. ], batch size: 19, lr: 2.86e-03, grad_scale: 32.0 2023-03-27 10:33:16,467 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=159855.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:33:21,712 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=159863.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:33:22,361 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.8097, 4.2199, 4.4239, 4.6264, 4.5565, 4.2872, 4.9297, 1.5492], device='cuda:2'), covar=tensor([0.0772, 0.0818, 0.0822, 0.0904, 0.1186, 0.1542, 0.0478, 0.6137], device='cuda:2'), in_proj_covar=tensor([0.0356, 0.0250, 0.0288, 0.0300, 0.0343, 0.0291, 0.0309, 0.0307], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:33:28,751 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.072e+02 1.499e+02 1.871e+02 2.174e+02 4.313e+02, threshold=3.743e+02, percent-clipped=1.0 2023-03-27 10:33:44,908 INFO [finetune.py:976] (2/7) Epoch 28, batch 5250, loss[loss=0.1549, simple_loss=0.222, pruned_loss=0.04389, over 4808.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.242, pruned_loss=0.04808, over 952927.29 frames. ], batch size: 25, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:33:45,658 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2114, 1.9602, 2.6780, 1.5355, 2.2020, 2.4189, 1.7680, 2.5964], device='cuda:2'), covar=tensor([0.1445, 0.2007, 0.1519, 0.2172, 0.1032, 0.1664, 0.2822, 0.0834], device='cuda:2'), in_proj_covar=tensor([0.0191, 0.0205, 0.0192, 0.0188, 0.0174, 0.0212, 0.0217, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:33:48,023 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.53 vs. 
limit=5.0 2023-03-27 10:33:53,302 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=159910.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:33:59,377 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=159916.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:34:26,198 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=159946.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:34:27,323 INFO [finetune.py:976] (2/7) Epoch 28, batch 5300, loss[loss=0.1623, simple_loss=0.2359, pruned_loss=0.04432, over 4705.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2435, pruned_loss=0.04892, over 952292.71 frames. ], batch size: 59, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:34:50,608 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=159971.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:34:50,639 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1327, 2.1156, 1.6320, 2.1139, 2.0590, 1.7776, 2.3357, 2.1346], device='cuda:2'), covar=tensor([0.1315, 0.1823, 0.2963, 0.2263, 0.2461, 0.1729, 0.2980, 0.1630], device='cuda:2'), in_proj_covar=tensor([0.0188, 0.0189, 0.0236, 0.0252, 0.0249, 0.0207, 0.0214, 0.0202], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:34:52,304 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.066e+02 1.498e+02 1.765e+02 2.107e+02 3.764e+02, threshold=3.530e+02, percent-clipped=1.0 2023-03-27 10:35:01,765 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=159987.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:35:06,016 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=159994.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:35:08,412 INFO [finetune.py:976] (2/7) Epoch 28, batch 5350, loss[loss=0.1593, simple_loss=0.2367, pruned_loss=0.04097, over 4816.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2444, pruned_loss=0.04864, over 954231.09 frames. ], batch size: 33, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:35:13,493 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3193, 2.0259, 1.8900, 1.8553, 2.0560, 2.0852, 2.0291, 2.7334], device='cuda:2'), covar=tensor([0.3577, 0.4232, 0.3160, 0.3817, 0.3926, 0.2341, 0.3649, 0.1638], device='cuda:2'), in_proj_covar=tensor([0.0287, 0.0263, 0.0236, 0.0274, 0.0260, 0.0230, 0.0258, 0.0237], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:35:24,597 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.1131, 4.4745, 4.7105, 4.9932, 4.9022, 4.5289, 5.2444, 1.8138], device='cuda:2'), covar=tensor([0.0628, 0.0870, 0.0724, 0.0697, 0.1014, 0.1522, 0.0419, 0.5628], device='cuda:2'), in_proj_covar=tensor([0.0359, 0.0252, 0.0290, 0.0302, 0.0345, 0.0293, 0.0310, 0.0309], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:35:43,142 INFO [finetune.py:976] (2/7) Epoch 28, batch 5400, loss[loss=0.1265, simple_loss=0.2007, pruned_loss=0.02621, over 4773.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2425, pruned_loss=0.04851, over 955752.36 frames. 
], batch size: 28, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:36:00,195 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.280e+01 1.494e+02 1.877e+02 2.215e+02 3.734e+02, threshold=3.755e+02, percent-clipped=1.0 2023-03-27 10:36:09,709 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1931, 1.8464, 2.2817, 2.2672, 1.9616, 1.9840, 2.1745, 2.1351], device='cuda:2'), covar=tensor([0.4473, 0.4017, 0.3218, 0.3814, 0.5346, 0.4180, 0.4833, 0.3022], device='cuda:2'), in_proj_covar=tensor([0.0269, 0.0248, 0.0269, 0.0298, 0.0298, 0.0275, 0.0304, 0.0254], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:36:16,669 INFO [finetune.py:976] (2/7) Epoch 28, batch 5450, loss[loss=0.1482, simple_loss=0.2089, pruned_loss=0.04379, over 4285.00 frames. ], tot_loss[loss=0.1674, simple_loss=0.2395, pruned_loss=0.04766, over 956111.96 frames. ], batch size: 18, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:36:21,664 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1045, 2.1967, 2.2317, 1.6782, 2.1671, 2.4072, 2.4449, 1.9279], device='cuda:2'), covar=tensor([0.0662, 0.0693, 0.0747, 0.0829, 0.0726, 0.0740, 0.0547, 0.1080], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0139, 0.0141, 0.0119, 0.0130, 0.0140, 0.0141, 0.0164], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:36:21,672 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0778, 1.8104, 2.3382, 1.5816, 2.1454, 2.3674, 1.7227, 2.4837], device='cuda:2'), covar=tensor([0.1266, 0.2072, 0.1445, 0.1980, 0.0916, 0.1444, 0.2617, 0.0773], device='cuda:2'), in_proj_covar=tensor([0.0188, 0.0203, 0.0191, 0.0186, 0.0172, 0.0210, 0.0215, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:37:07,792 INFO [finetune.py:976] (2/7) Epoch 28, batch 5500, loss[loss=0.1868, simple_loss=0.2541, pruned_loss=0.05974, over 4910.00 frames. ], tot_loss[loss=0.1653, simple_loss=0.2371, pruned_loss=0.04672, over 957522.70 frames. ], batch size: 35, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:37:38,241 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.497e+02 1.808e+02 2.088e+02 3.346e+02, threshold=3.616e+02, percent-clipped=0.0 2023-03-27 10:37:55,667 INFO [finetune.py:976] (2/7) Epoch 28, batch 5550, loss[loss=0.1448, simple_loss=0.2121, pruned_loss=0.03874, over 4724.00 frames. ], tot_loss[loss=0.1667, simple_loss=0.2385, pruned_loss=0.04741, over 958135.56 frames. ], batch size: 23, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:38:03,854 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=160211.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:38:27,206 INFO [finetune.py:976] (2/7) Epoch 28, batch 5600, loss[loss=0.1718, simple_loss=0.2551, pruned_loss=0.04425, over 4909.00 frames. ], tot_loss[loss=0.1689, simple_loss=0.2419, pruned_loss=0.04791, over 957930.73 frames. 
], batch size: 43, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:38:37,609 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=160266.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:38:42,223 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.097e+02 1.448e+02 1.891e+02 2.366e+02 4.690e+02, threshold=3.782e+02, percent-clipped=2.0 2023-03-27 10:38:49,857 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=160287.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:38:56,599 INFO [finetune.py:976] (2/7) Epoch 28, batch 5650, loss[loss=0.1824, simple_loss=0.2504, pruned_loss=0.05718, over 4936.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2444, pruned_loss=0.04823, over 957168.74 frames. ], batch size: 33, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:39:01,990 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=160307.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 10:39:13,449 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.1272, 2.8678, 2.5661, 1.5178, 2.7358, 2.3903, 2.2180, 2.7307], device='cuda:2'), covar=tensor([0.0673, 0.0702, 0.1294, 0.1738, 0.0996, 0.1639, 0.1822, 0.0754], device='cuda:2'), in_proj_covar=tensor([0.0173, 0.0191, 0.0202, 0.0182, 0.0211, 0.0212, 0.0225, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:39:24,563 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=160335.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:39:36,255 INFO [finetune.py:976] (2/7) Epoch 28, batch 5700, loss[loss=0.1677, simple_loss=0.2163, pruned_loss=0.05961, over 3838.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.2403, pruned_loss=0.04771, over 936340.06 frames. ], batch size: 16, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:39:48,042 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=160368.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 10:39:53,236 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.932e+01 1.353e+02 1.695e+02 2.046e+02 5.031e+02, threshold=3.390e+02, percent-clipped=3.0 2023-03-27 10:40:10,863 INFO [finetune.py:976] (2/7) Epoch 29, batch 0, loss[loss=0.1809, simple_loss=0.2552, pruned_loss=0.05327, over 4836.00 frames. ], tot_loss[loss=0.1809, simple_loss=0.2552, pruned_loss=0.05327, over 4836.00 frames. ], batch size: 49, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:40:10,863 INFO [finetune.py:1001] (2/7) Computing validation loss 2023-03-27 10:40:21,879 INFO [finetune.py:1010] (2/7) Epoch 29, validation: loss=0.1588, simple_loss=0.2262, pruned_loss=0.04569, over 2265189.00 frames. 2023-03-27 10:40:21,879 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB 2023-03-27 10:40:24,832 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6118, 1.4151, 2.0344, 3.3116, 2.1876, 2.2977, 0.8931, 2.8774], device='cuda:2'), covar=tensor([0.1685, 0.1401, 0.1320, 0.0513, 0.0811, 0.1499, 0.1858, 0.0393], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0116, 0.0134, 0.0165, 0.0101, 0.0136, 0.0125, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 10:40:57,962 INFO [finetune.py:976] (2/7) Epoch 29, batch 50, loss[loss=0.2243, simple_loss=0.2913, pruned_loss=0.07866, over 4253.00 frames. 
], tot_loss[loss=0.1768, simple_loss=0.2498, pruned_loss=0.0519, over 216832.69 frames. ], batch size: 66, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:41:07,179 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-27 10:41:17,656 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.37 vs. limit=5.0 2023-03-27 10:41:23,841 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.11 vs. limit=2.0 2023-03-27 10:41:38,857 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.219e+02 1.505e+02 1.852e+02 2.145e+02 8.823e+02, threshold=3.704e+02, percent-clipped=1.0 2023-03-27 10:41:39,935 INFO [finetune.py:976] (2/7) Epoch 29, batch 100, loss[loss=0.1888, simple_loss=0.2568, pruned_loss=0.06041, over 4748.00 frames. ], tot_loss[loss=0.1664, simple_loss=0.2382, pruned_loss=0.04729, over 381435.31 frames. ], batch size: 54, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:42:14,758 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=160511.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:42:34,272 INFO [finetune.py:976] (2/7) Epoch 29, batch 150, loss[loss=0.1513, simple_loss=0.2237, pruned_loss=0.03949, over 4901.00 frames. ], tot_loss[loss=0.1669, simple_loss=0.2377, pruned_loss=0.04806, over 510976.18 frames. ], batch size: 37, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:42:43,893 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2618, 2.2051, 1.7520, 2.2859, 2.1999, 1.9428, 2.5518, 2.2661], device='cuda:2'), covar=tensor([0.1184, 0.1987, 0.2747, 0.2357, 0.2252, 0.1582, 0.2867, 0.1528], device='cuda:2'), in_proj_covar=tensor([0.0190, 0.0191, 0.0238, 0.0254, 0.0251, 0.0209, 0.0217, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:42:55,945 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=160559.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:43:00,753 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=160566.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:43:06,534 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.027e+02 1.446e+02 1.774e+02 2.135e+02 3.412e+02, threshold=3.547e+02, percent-clipped=0.0 2023-03-27 10:43:07,130 INFO [finetune.py:976] (2/7) Epoch 29, batch 200, loss[loss=0.1539, simple_loss=0.229, pruned_loss=0.03939, over 4899.00 frames. ], tot_loss[loss=0.1635, simple_loss=0.2338, pruned_loss=0.04657, over 610743.93 frames. ], batch size: 35, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:43:12,400 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3439, 1.2480, 1.6267, 2.4511, 1.6264, 2.1706, 0.8595, 2.1293], device='cuda:2'), covar=tensor([0.1788, 0.1474, 0.1140, 0.0773, 0.0956, 0.1198, 0.1567, 0.0568], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0116, 0.0133, 0.0165, 0.0100, 0.0135, 0.0125, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 10:43:32,501 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=160614.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:43:40,981 INFO [finetune.py:976] (2/7) Epoch 29, batch 250, loss[loss=0.2005, simple_loss=0.2733, pruned_loss=0.06386, over 4812.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2385, pruned_loss=0.04782, over 687830.97 frames. 
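Note the two frame counts per progress line: each loss[...] covers only the current batch (a few thousand frames), while tot_loss[...] hovers around 950,000 frames no matter how far training advances. That is the signature of a leaky, frame-weighted accumulator: old batches decay away and the effective window settles at a few hundred batches. A sketch under that assumption (reset_interval=200 is an illustrative value, not read from this log):

    # Leaky running statistic consistent with the tot_loss[... over ~950000
    # frames] lines: each batch's (loss, frames) enters an accumulator that
    # decays by (1 - 1/reset_interval) per step.
    class RunningLoss:
        def __init__(self, reset_interval: int = 200):
            self.decay = 1.0 - 1.0 / reset_interval
            self.loss_sum = 0.0  # decayed, frame-weighted loss sum
            self.frames = 0.0    # decayed frame count

        def update(self, batch_loss: float, batch_frames: float) -> float:
            self.loss_sum = self.loss_sum * self.decay + batch_loss * batch_frames
            self.frames = self.frames * self.decay + batch_frames
            return self.loss_sum / self.frames  # the value printed as tot_loss

At roughly 4,800 frames per batch this settles near 4,800 x 200 = 960,000 frames, which lines up with the printed totals.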
], batch size: 39, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:44:05,536 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=160663.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 10:44:12,915 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.082e+02 1.455e+02 1.868e+02 2.134e+02 3.558e+02, threshold=3.736e+02, percent-clipped=1.0 2023-03-27 10:44:13,967 INFO [finetune.py:976] (2/7) Epoch 29, batch 300, loss[loss=0.1472, simple_loss=0.2288, pruned_loss=0.03278, over 4873.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.2427, pruned_loss=0.0489, over 747088.60 frames. ], batch size: 34, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:44:26,381 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-27 10:44:36,470 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-27 10:44:57,148 INFO [finetune.py:976] (2/7) Epoch 29, batch 350, loss[loss=0.1573, simple_loss=0.2305, pruned_loss=0.04207, over 4919.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2436, pruned_loss=0.04901, over 793581.21 frames. ], batch size: 33, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:45:20,171 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1496, 2.1692, 2.2668, 1.6189, 2.1485, 2.3883, 2.4266, 1.8756], device='cuda:2'), covar=tensor([0.0622, 0.0645, 0.0676, 0.0772, 0.0732, 0.0726, 0.0525, 0.1048], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0138, 0.0140, 0.0119, 0.0128, 0.0140, 0.0139, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:45:32,468 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-27 10:45:37,465 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.928e+01 1.581e+02 1.864e+02 2.210e+02 3.251e+02, threshold=3.728e+02, percent-clipped=0.0 2023-03-27 10:45:38,107 INFO [finetune.py:976] (2/7) Epoch 29, batch 400, loss[loss=0.1325, simple_loss=0.2137, pruned_loss=0.02561, over 4776.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2447, pruned_loss=0.04901, over 829167.32 frames. ], batch size: 29, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:45:59,576 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.6587, 1.6507, 1.6815, 1.0748, 1.9179, 2.0886, 1.9681, 1.5562], device='cuda:2'), covar=tensor([0.1037, 0.0794, 0.0628, 0.0618, 0.0499, 0.0680, 0.0405, 0.0818], device='cuda:2'), in_proj_covar=tensor([0.0120, 0.0148, 0.0130, 0.0122, 0.0131, 0.0130, 0.0142, 0.0151], device='cuda:2'), out_proj_covar=tensor([8.7830e-05, 1.0579e-04, 9.2256e-05, 8.5358e-05, 9.1659e-05, 9.1751e-05, 1.0126e-04, 1.0756e-04], device='cuda:2') 2023-03-27 10:46:05,499 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6357, 1.5194, 1.4587, 1.6477, 1.1851, 3.5861, 1.3880, 1.9025], device='cuda:2'), covar=tensor([0.3467, 0.2629, 0.2242, 0.2430, 0.1852, 0.0192, 0.2717, 0.1270], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0116, 0.0120, 0.0124, 0.0113, 0.0095, 0.0093, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 10:46:11,857 INFO [finetune.py:976] (2/7) Epoch 29, batch 450, loss[loss=0.1988, simple_loss=0.2739, pruned_loss=0.0618, over 4927.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2431, pruned_loss=0.04826, over 855492.16 frames. 
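The zipformer.py:1188 lines show five staggered warmup windows (666.7-1333.3 up through 3333.3-4000.0, presumably one per encoder stack), yet at batch_count around 160,000, far past every warmup_end, num_to_drop still occasionally comes up 1 with a randomly chosen layer. That pattern suggests stochastic whole-layer dropping that anneals during warmup but keeps a small residual probability afterwards. A sketch under assumed probabilities (warmup_p and residual_p are illustrative, not Zipformer's actual schedule):

    # Stochastic layer dropping hinted at by the "warmup_begin=...,
    # warmup_end=..., num_to_drop=..., layers_to_drop=..." lines.
    import random

    def layers_to_drop(batch_count: float, warmup_begin: float,
                       warmup_end: float, num_layers: int,
                       warmup_p: float = 0.5, residual_p: float = 0.075) -> set:
        if batch_count < warmup_begin:
            p = warmup_p
        elif batch_count < warmup_end:
            # anneal linearly from warmup_p down to residual_p
            frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
            p = warmup_p + frac * (residual_p - warmup_p)
        else:
            p = residual_p  # small residual rate long after warmup
        drop = {i for i in range(num_layers) if random.random() < p}
        print(f"warmup_begin={warmup_begin}, warmup_end={warmup_end}, "
              f"batch_count={batch_count}, num_to_drop={len(drop)}, "
              f"layers_to_drop={drop if drop else set()}")
        return drop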
], batch size: 42, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:46:39,403 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8038, 4.0987, 3.7972, 1.9301, 4.2650, 3.2905, 1.1807, 2.8650], device='cuda:2'), covar=tensor([0.2096, 0.1757, 0.1441, 0.3259, 0.0753, 0.0808, 0.3966, 0.1416], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0180, 0.0160, 0.0130, 0.0163, 0.0124, 0.0150, 0.0126], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 10:46:55,074 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.046e+02 1.473e+02 1.674e+02 2.082e+02 4.783e+02, threshold=3.348e+02, percent-clipped=2.0 2023-03-27 10:46:55,690 INFO [finetune.py:976] (2/7) Epoch 29, batch 500, loss[loss=0.1691, simple_loss=0.2466, pruned_loss=0.04577, over 4840.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2408, pruned_loss=0.0478, over 877880.85 frames. ], batch size: 44, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:47:03,499 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-27 10:47:15,117 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=160901.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:47:36,925 INFO [finetune.py:976] (2/7) Epoch 29, batch 550, loss[loss=0.1627, simple_loss=0.2385, pruned_loss=0.04343, over 4831.00 frames. ], tot_loss[loss=0.1669, simple_loss=0.2385, pruned_loss=0.04768, over 897844.30 frames. ], batch size: 40, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:48:08,208 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5858, 1.4744, 1.4113, 1.5828, 1.0594, 3.1143, 1.1878, 1.6320], device='cuda:2'), covar=tensor([0.3343, 0.2443, 0.2158, 0.2424, 0.1820, 0.0263, 0.2770, 0.1261], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0117, 0.0121, 0.0124, 0.0114, 0.0096, 0.0094, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2') 2023-03-27 10:48:08,229 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=160962.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:48:08,817 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=160963.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 10:48:10,046 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=160965.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:48:15,384 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.000e+01 1.387e+02 1.769e+02 2.082e+02 3.377e+02, threshold=3.538e+02, percent-clipped=1.0 2023-03-27 10:48:16,011 INFO [finetune.py:976] (2/7) Epoch 29, batch 600, loss[loss=0.1781, simple_loss=0.2616, pruned_loss=0.04726, over 4901.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2387, pruned_loss=0.04774, over 910694.27 frames. 
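The scaling.py:679 lines compare a scalar "whiteness" statistic of grouped activations against a limit: 8 groups (num_channels=96 or 192) against limit=2.0, and a single 384-channel group against limit=5.0; in this stretch the metric stays below its limit every time it is printed. The log does not reveal the formula; one standard choice with the right behavior (exactly 1.0 for white, isotropic features, growing toward the group dimension as variance concentrates in few directions) is the normalized second moment of the covariance spectrum, sketched below as an assumption rather than scaling.py's actual code:

    # One plausible whiteness metric for the "Whitening: num_groups=...,
    # metric=... vs. limit=..." lines.
    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
        # x: (num_frames, num_channels); channels split into equal groups
        n, c = x.shape
        d = c // num_groups
        x = x.reshape(n, num_groups, d).transpose(0, 1)  # (groups, frames, d)
        x = x - x.mean(dim=1, keepdim=True)              # center per group
        cov = torch.matmul(x.transpose(1, 2), x) / n     # (groups, d, d)
        tr_cov = cov.diagonal(dim1=1, dim2=2).sum(dim=1)         # trace(C)
        tr_cov2 = (cov * cov.transpose(1, 2)).sum(dim=(1, 2))    # trace(C @ C)
        metric = d * tr_cov2 / tr_cov.clamp(min=1e-20) ** 2
        return metric.mean().item()

    # e.g. whitening_metric(feats, num_groups=8) compared against limit=2.0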
], batch size: 43, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:48:41,020 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=161011.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 10:48:41,637 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2769, 2.0276, 2.6483, 1.6857, 2.2204, 2.4598, 1.7700, 2.6236], device='cuda:2'), covar=tensor([0.1314, 0.2018, 0.1560, 0.2103, 0.0924, 0.1617, 0.2915, 0.0803], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0206, 0.0193, 0.0190, 0.0174, 0.0214, 0.0219, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:48:44,055 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2112, 2.0728, 2.2717, 1.5195, 2.1572, 2.3508, 2.3622, 1.8306], device='cuda:2'), covar=tensor([0.0552, 0.0678, 0.0625, 0.0880, 0.0796, 0.0729, 0.0535, 0.1229], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0138, 0.0141, 0.0119, 0.0128, 0.0141, 0.0140, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:48:49,430 INFO [finetune.py:976] (2/7) Epoch 29, batch 650, loss[loss=0.1802, simple_loss=0.2447, pruned_loss=0.05781, over 4782.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2413, pruned_loss=0.04799, over 919527.79 frames. ], batch size: 25, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:48:50,191 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=161026.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:48:57,491 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-27 10:49:22,503 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.055e+02 1.587e+02 1.904e+02 2.267e+02 4.706e+02, threshold=3.807e+02, percent-clipped=2.0 2023-03-27 10:49:23,104 INFO [finetune.py:976] (2/7) Epoch 29, batch 700, loss[loss=0.1788, simple_loss=0.2611, pruned_loss=0.04829, over 4891.00 frames. ], tot_loss[loss=0.1699, simple_loss=0.2427, pruned_loss=0.04854, over 928516.94 frames. ], batch size: 32, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:50:03,475 INFO [finetune.py:976] (2/7) Epoch 29, batch 750, loss[loss=0.1766, simple_loss=0.2433, pruned_loss=0.05493, over 4827.00 frames. ], tot_loss[loss=0.1725, simple_loss=0.2451, pruned_loss=0.04992, over 935498.42 frames. ], batch size: 47, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:50:46,352 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.613e+01 1.645e+02 1.965e+02 2.346e+02 5.118e+02, threshold=3.931e+02, percent-clipped=1.0 2023-03-27 10:50:46,985 INFO [finetune.py:976] (2/7) Epoch 29, batch 800, loss[loss=0.1773, simple_loss=0.2576, pruned_loss=0.04848, over 4812.00 frames. ], tot_loss[loss=0.1727, simple_loss=0.246, pruned_loss=0.04967, over 939312.15 frames. ], batch size: 41, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:51:15,284 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=161216.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:51:20,578 INFO [finetune.py:976] (2/7) Epoch 29, batch 850, loss[loss=0.2161, simple_loss=0.277, pruned_loss=0.07765, over 4268.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2436, pruned_loss=0.04931, over 942084.76 frames. 
], batch size: 65, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:51:21,352 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7271, 2.6867, 2.1438, 3.0348, 2.7016, 2.3190, 3.1839, 2.8205], device='cuda:2'), covar=tensor([0.1278, 0.2040, 0.3036, 0.2364, 0.2440, 0.1661, 0.2774, 0.1715], device='cuda:2'), in_proj_covar=tensor([0.0191, 0.0191, 0.0238, 0.0255, 0.0252, 0.0210, 0.0217, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:51:24,346 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=161231.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:51:48,477 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=161257.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:51:49,708 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0 2023-03-27 10:52:04,266 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.030e+02 1.396e+02 1.676e+02 2.040e+02 3.102e+02, threshold=3.353e+02, percent-clipped=0.0 2023-03-27 10:52:04,923 INFO [finetune.py:976] (2/7) Epoch 29, batch 900, loss[loss=0.1584, simple_loss=0.2228, pruned_loss=0.04703, over 4904.00 frames. ], tot_loss[loss=0.169, simple_loss=0.2407, pruned_loss=0.04863, over 945527.67 frames. ], batch size: 32, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:52:06,236 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=161277.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:52:15,371 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=161292.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:52:35,804 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=161321.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:52:38,179 INFO [finetune.py:976] (2/7) Epoch 29, batch 950, loss[loss=0.1279, simple_loss=0.2007, pruned_loss=0.0276, over 4894.00 frames. ], tot_loss[loss=0.1668, simple_loss=0.2383, pruned_loss=0.04763, over 945208.34 frames. ], batch size: 32, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:52:56,962 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1154, 1.3988, 0.6061, 1.9253, 2.2785, 1.7112, 1.6193, 1.9625], device='cuda:2'), covar=tensor([0.1417, 0.2065, 0.2281, 0.1165, 0.2034, 0.2002, 0.1368, 0.1963], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0109, 0.0093, 0.0120, 0.0093, 0.0098, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2') 2023-03-27 10:53:00,114 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0 2023-03-27 10:53:28,289 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.815e+01 1.511e+02 1.765e+02 2.170e+02 3.612e+02, threshold=3.529e+02, percent-clipped=1.0 2023-03-27 10:53:28,920 INFO [finetune.py:976] (2/7) Epoch 29, batch 1000, loss[loss=0.1912, simple_loss=0.2689, pruned_loss=0.05672, over 4813.00 frames. ], tot_loss[loss=0.1694, simple_loss=0.2414, pruned_loss=0.04875, over 948029.83 frames. 
], batch size: 45, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:53:48,061 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.3001, 1.4058, 1.4956, 0.8222, 1.5338, 1.7632, 1.7957, 1.4036], device='cuda:2'), covar=tensor([0.0956, 0.0607, 0.0460, 0.0496, 0.0455, 0.0602, 0.0306, 0.0683], device='cuda:2'), in_proj_covar=tensor([0.0121, 0.0148, 0.0131, 0.0123, 0.0132, 0.0130, 0.0143, 0.0152], device='cuda:2'), out_proj_covar=tensor([8.8578e-05, 1.0634e-04, 9.3134e-05, 8.6019e-05, 9.2506e-05, 9.2271e-05, 1.0194e-04, 1.0874e-04], device='cuda:2') 2023-03-27 10:53:49,434 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=161401.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:54:06,939 INFO [finetune.py:976] (2/7) Epoch 29, batch 1050, loss[loss=0.1964, simple_loss=0.2685, pruned_loss=0.06219, over 4806.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.2444, pruned_loss=0.04909, over 950746.79 frames. ], batch size: 51, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:54:13,082 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7683, 1.7752, 1.6186, 1.6625, 2.3524, 2.3392, 2.0182, 1.8560], device='cuda:2'), covar=tensor([0.0424, 0.0415, 0.0613, 0.0380, 0.0264, 0.0679, 0.0417, 0.0450], device='cuda:2'), in_proj_covar=tensor([0.0103, 0.0107, 0.0149, 0.0113, 0.0102, 0.0118, 0.0105, 0.0115], device='cuda:2'), out_proj_covar=tensor([7.9509e-05, 8.1921e-05, 1.1587e-04, 8.5688e-05, 7.9166e-05, 8.6512e-05, 7.8162e-05, 8.7466e-05], device='cuda:2') 2023-03-27 10:54:28,545 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=161460.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:54:30,826 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=161462.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:54:39,302 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.696e+01 1.443e+02 1.825e+02 2.108e+02 3.139e+02, threshold=3.650e+02, percent-clipped=0.0 2023-03-27 10:54:39,917 INFO [finetune.py:976] (2/7) Epoch 29, batch 1100, loss[loss=0.1484, simple_loss=0.2215, pruned_loss=0.03767, over 4875.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2455, pruned_loss=0.04952, over 952545.06 frames. ], batch size: 35, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:55:11,598 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=161521.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:55:14,356 INFO [finetune.py:976] (2/7) Epoch 29, batch 1150, loss[loss=0.1669, simple_loss=0.2321, pruned_loss=0.05082, over 4856.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2459, pruned_loss=0.04952, over 953553.75 frames. ], batch size: 31, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:55:15,165 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.72 vs. limit=2.0 2023-03-27 10:55:33,332 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. 
limit=2.0 2023-03-27 10:55:39,797 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=161557.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:55:50,715 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=161572.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:55:51,851 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.769e+01 1.420e+02 1.779e+02 2.194e+02 3.095e+02, threshold=3.558e+02, percent-clipped=0.0 2023-03-27 10:55:52,955 INFO [finetune.py:976] (2/7) Epoch 29, batch 1200, loss[loss=0.1391, simple_loss=0.2111, pruned_loss=0.03358, over 4811.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.2439, pruned_loss=0.04898, over 952327.46 frames. ], batch size: 51, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:56:03,033 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=161587.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:56:22,442 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=161605.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:56:27,826 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4101, 2.3993, 2.5079, 1.8531, 2.2411, 2.7640, 2.6627, 2.0430], device='cuda:2'), covar=tensor([0.0554, 0.0595, 0.0632, 0.0812, 0.1320, 0.0606, 0.0503, 0.1120], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0139, 0.0141, 0.0120, 0.0129, 0.0141, 0.0141, 0.0164], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:56:33,597 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=161621.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:56:36,408 INFO [finetune.py:976] (2/7) Epoch 29, batch 1250, loss[loss=0.1493, simple_loss=0.2128, pruned_loss=0.04293, over 4899.00 frames. ], tot_loss[loss=0.169, simple_loss=0.2414, pruned_loss=0.04835, over 954500.17 frames. 
], batch size: 32, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:56:51,959 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8703, 1.4836, 1.9877, 1.9024, 1.7223, 1.6634, 1.9012, 1.9259], device='cuda:2'), covar=tensor([0.3281, 0.3490, 0.2766, 0.3164, 0.4025, 0.3508, 0.3483, 0.2491], device='cuda:2'), in_proj_covar=tensor([0.0271, 0.0251, 0.0271, 0.0300, 0.0300, 0.0277, 0.0307, 0.0255], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:57:07,718 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=161669.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:57:07,782 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2802, 2.2199, 1.8330, 0.8895, 2.0061, 1.8202, 1.6924, 2.0538], device='cuda:2'), covar=tensor([0.0856, 0.0606, 0.1393, 0.1667, 0.1075, 0.2073, 0.1878, 0.0773], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0188, 0.0200, 0.0179, 0.0208, 0.0209, 0.0222, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 10:57:07,794 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.3526, 1.3856, 1.4163, 0.7631, 1.4915, 1.6784, 1.6907, 1.3450], device='cuda:2'), covar=tensor([0.0809, 0.0562, 0.0450, 0.0469, 0.0418, 0.0576, 0.0299, 0.0613], device='cuda:2'), in_proj_covar=tensor([0.0121, 0.0148, 0.0131, 0.0122, 0.0132, 0.0130, 0.0143, 0.0152], device='cuda:2'), out_proj_covar=tensor([8.8020e-05, 1.0589e-04, 9.3013e-05, 8.5697e-05, 9.2468e-05, 9.2013e-05, 1.0156e-04, 1.0853e-04], device='cuda:2') 2023-03-27 10:57:10,129 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2383, 1.2253, 1.1667, 1.2297, 1.5191, 1.4357, 1.3420, 1.1819], device='cuda:2'), covar=tensor([0.0447, 0.0333, 0.0708, 0.0322, 0.0233, 0.0439, 0.0365, 0.0422], device='cuda:2'), in_proj_covar=tensor([0.0103, 0.0107, 0.0149, 0.0112, 0.0102, 0.0118, 0.0105, 0.0115], device='cuda:2'), out_proj_covar=tensor([7.9511e-05, 8.1784e-05, 1.1567e-04, 8.5501e-05, 7.9208e-05, 8.6742e-05, 7.8247e-05, 8.7668e-05], device='cuda:2') 2023-03-27 10:57:11,727 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.453e+02 1.676e+02 2.058e+02 3.499e+02, threshold=3.351e+02, percent-clipped=0.0 2023-03-27 10:57:12,870 INFO [finetune.py:976] (2/7) Epoch 29, batch 1300, loss[loss=0.1198, simple_loss=0.2008, pruned_loss=0.01944, over 4911.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2392, pruned_loss=0.04771, over 951293.13 frames. ], batch size: 36, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:57:47,285 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0 2023-03-27 10:57:54,736 INFO [finetune.py:976] (2/7) Epoch 29, batch 1350, loss[loss=0.1853, simple_loss=0.2681, pruned_loss=0.05119, over 4926.00 frames. ], tot_loss[loss=0.1663, simple_loss=0.2383, pruned_loss=0.04715, over 953379.29 frames. ], batch size: 38, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:58:29,757 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=161757.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:58:49,497 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.001e+02 1.561e+02 1.829e+02 2.338e+02 6.236e+02, threshold=3.658e+02, percent-clipped=4.0 2023-03-27 10:58:50,571 INFO [finetune.py:976] (2/7) Epoch 29, batch 1400, loss[loss=0.1301, simple_loss=0.2001, pruned_loss=0.03005, over 4783.00 frames. 
], tot_loss[loss=0.1692, simple_loss=0.2419, pruned_loss=0.04827, over 953721.57 frames. ], batch size: 26, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:59:17,542 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=161815.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:59:18,091 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=161816.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:59:23,988 INFO [finetune.py:976] (2/7) Epoch 29, batch 1450, loss[loss=0.1906, simple_loss=0.2634, pruned_loss=0.05892, over 4836.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2429, pruned_loss=0.04805, over 954939.69 frames. ], batch size: 49, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:59:33,501 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.89 vs. limit=5.0 2023-03-27 10:59:55,399 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=161872.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:59:56,503 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.032e+02 1.509e+02 1.810e+02 2.171e+02 4.074e+02, threshold=3.620e+02, percent-clipped=1.0 2023-03-27 10:59:57,117 INFO [finetune.py:976] (2/7) Epoch 29, batch 1500, loss[loss=0.1599, simple_loss=0.2425, pruned_loss=0.0387, over 4902.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2451, pruned_loss=0.0491, over 955364.22 frames. ], batch size: 46, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:59:58,331 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=161876.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:00:05,889 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=161887.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:00:27,809 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=161920.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:00:32,887 INFO [finetune.py:976] (2/7) Epoch 29, batch 1550, loss[loss=0.1327, simple_loss=0.2156, pruned_loss=0.0249, over 4783.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2445, pruned_loss=0.04829, over 956420.49 frames. ], batch size: 29, lr: 2.85e-03, grad_scale: 32.0 2023-03-27 11:00:44,438 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=161935.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:01:16,693 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.413e+02 1.667e+02 1.984e+02 3.261e+02, threshold=3.334e+02, percent-clipped=0.0 2023-03-27 11:01:17,325 INFO [finetune.py:976] (2/7) Epoch 29, batch 1600, loss[loss=0.2371, simple_loss=0.2947, pruned_loss=0.08973, over 4341.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2436, pruned_loss=0.04872, over 954729.64 frames. ], batch size: 66, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:01:53,668 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. 
limit=2.0 2023-03-27 11:01:57,788 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7648, 2.5599, 2.1787, 2.8811, 2.7156, 2.3202, 3.1736, 2.7621], device='cuda:2'), covar=tensor([0.1214, 0.2161, 0.2820, 0.2445, 0.2283, 0.1670, 0.2777, 0.1629], device='cuda:2'), in_proj_covar=tensor([0.0189, 0.0190, 0.0236, 0.0252, 0.0249, 0.0208, 0.0215, 0.0203], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 11:01:59,455 INFO [finetune.py:976] (2/7) Epoch 29, batch 1650, loss[loss=0.125, simple_loss=0.1966, pruned_loss=0.02672, over 4288.00 frames. ], tot_loss[loss=0.1674, simple_loss=0.2396, pruned_loss=0.04757, over 951539.31 frames. ], batch size: 19, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:02:15,486 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.54 vs. limit=2.0 2023-03-27 11:02:22,370 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=162057.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:02:34,948 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.499e+01 1.468e+02 1.758e+02 2.171e+02 4.899e+02, threshold=3.516e+02, percent-clipped=3.0 2023-03-27 11:02:35,583 INFO [finetune.py:976] (2/7) Epoch 29, batch 1700, loss[loss=0.1424, simple_loss=0.22, pruned_loss=0.03239, over 4760.00 frames. ], tot_loss[loss=0.1657, simple_loss=0.2376, pruned_loss=0.04695, over 953152.23 frames. ], batch size: 28, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:02:36,909 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=162077.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:03:01,316 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=162105.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:03:08,009 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=162116.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:03:12,363 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9204, 1.8172, 2.0007, 1.1911, 1.9299, 1.9698, 1.9455, 1.6179], device='cuda:2'), covar=tensor([0.0633, 0.0729, 0.0660, 0.0919, 0.0831, 0.0696, 0.0631, 0.1235], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0137, 0.0140, 0.0118, 0.0128, 0.0140, 0.0140, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 11:03:16,125 INFO [finetune.py:976] (2/7) Epoch 29, batch 1750, loss[loss=0.222, simple_loss=0.2985, pruned_loss=0.0728, over 4858.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2394, pruned_loss=0.04743, over 953927.71 frames. ], batch size: 49, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:03:24,612 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=162138.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:03:59,952 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=162164.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:04:04,278 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=162171.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:04:10,045 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.681e+01 1.626e+02 1.856e+02 2.284e+02 4.131e+02, threshold=3.712e+02, percent-clipped=2.0 2023-03-27 11:04:10,647 INFO [finetune.py:976] (2/7) Epoch 29, batch 1800, loss[loss=0.1601, simple_loss=0.235, pruned_loss=0.04257, over 4883.00 frames. 
], tot_loss[loss=0.1673, simple_loss=0.2407, pruned_loss=0.04699, over 953126.37 frames. ], batch size: 32, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:04:44,452 INFO [finetune.py:976] (2/7) Epoch 29, batch 1850, loss[loss=0.1335, simple_loss=0.2154, pruned_loss=0.02584, over 4739.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.242, pruned_loss=0.04781, over 952049.92 frames. ], batch size: 27, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:04:48,176 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=162231.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:05:17,384 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.145e+02 1.604e+02 1.802e+02 2.267e+02 3.875e+02, threshold=3.605e+02, percent-clipped=1.0 2023-03-27 11:05:17,994 INFO [finetune.py:976] (2/7) Epoch 29, batch 1900, loss[loss=0.1847, simple_loss=0.2711, pruned_loss=0.04911, over 4863.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2444, pruned_loss=0.04815, over 951276.15 frames. ], batch size: 31, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:05:20,544 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4183, 1.3583, 1.3357, 1.3796, 1.7240, 1.6462, 1.4560, 1.2879], device='cuda:2'), covar=tensor([0.0382, 0.0333, 0.0631, 0.0321, 0.0225, 0.0542, 0.0376, 0.0455], device='cuda:2'), in_proj_covar=tensor([0.0104, 0.0108, 0.0150, 0.0113, 0.0103, 0.0119, 0.0106, 0.0117], device='cuda:2'), out_proj_covar=tensor([8.0084e-05, 8.2346e-05, 1.1699e-04, 8.6112e-05, 7.9963e-05, 8.7505e-05, 7.8961e-05, 8.8445e-05], device='cuda:2') 2023-03-27 11:05:28,432 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=162292.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:05:51,672 INFO [finetune.py:976] (2/7) Epoch 29, batch 1950, loss[loss=0.1993, simple_loss=0.2639, pruned_loss=0.06732, over 4821.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2441, pruned_loss=0.04816, over 953670.60 frames. ], batch size: 41, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:06:36,624 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.724e+01 1.430e+02 1.855e+02 2.130e+02 3.474e+02, threshold=3.709e+02, percent-clipped=0.0 2023-03-27 11:06:37,251 INFO [finetune.py:976] (2/7) Epoch 29, batch 2000, loss[loss=0.167, simple_loss=0.2354, pruned_loss=0.04933, over 4906.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2406, pruned_loss=0.04681, over 954924.94 frames. ], batch size: 36, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:06:38,689 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0 2023-03-27 11:07:14,705 INFO [finetune.py:976] (2/7) Epoch 29, batch 2050, loss[loss=0.1651, simple_loss=0.2249, pruned_loss=0.05267, over 4888.00 frames. ], tot_loss[loss=0.1641, simple_loss=0.2369, pruned_loss=0.04568, over 955230.74 frames. 
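The learning rate decays very gently across this stretch: 2.86e-03 in late epoch 28, 2.85e-03 early in epoch 29, 2.84e-03 by batch 1600, and 2.83e-03 by batch 3200, i.e. a smooth function of the global batch and epoch counters rather than stepwise drops. An Eden-style schedule reproduces these numbers; the constants below are assumptions for illustration, not values taken from this log:

    # Eden-style learning-rate schedule consistent with the slow decay of
    # the "lr: 2.8xe-03" fields near batch_count ~160k.
    def eden_lr(batch: float, epoch: float, base_lr: float = 0.004,
                lr_batches: float = 100_000.0, lr_epochs: float = 100.0) -> float:
        batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return base_lr * batch_factor * epoch_factor

With these constants, eden_lr(162000, 29) comes out near 2.84e-03, matching the values printed around batch_count 162,000 in this section.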
], batch size: 32, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:07:19,658 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=162433.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:07:30,197 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4384, 2.2528, 1.8398, 2.3640, 2.4050, 2.1333, 2.6907, 2.3872], device='cuda:2'), covar=tensor([0.1371, 0.2174, 0.3029, 0.2507, 0.2490, 0.1836, 0.2801, 0.1720], device='cuda:2'), in_proj_covar=tensor([0.0190, 0.0191, 0.0237, 0.0254, 0.0251, 0.0209, 0.0216, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 11:07:45,864 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=162471.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:07:47,599 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.116e+01 1.601e+02 1.964e+02 2.330e+02 4.874e+02, threshold=3.928e+02, percent-clipped=3.0 2023-03-27 11:07:48,229 INFO [finetune.py:976] (2/7) Epoch 29, batch 2100, loss[loss=0.1612, simple_loss=0.2305, pruned_loss=0.04597, over 4688.00 frames. ], tot_loss[loss=0.163, simple_loss=0.2357, pruned_loss=0.04517, over 956230.54 frames. ], batch size: 23, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:08:22,585 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=162511.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:08:28,422 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=162519.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:08:34,950 INFO [finetune.py:976] (2/7) Epoch 29, batch 2150, loss[loss=0.1586, simple_loss=0.2419, pruned_loss=0.03765, over 4779.00 frames. ], tot_loss[loss=0.1666, simple_loss=0.2398, pruned_loss=0.04667, over 957060.72 frames. ], batch size: 28, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:08:55,395 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1684, 1.9343, 2.1494, 2.1143, 1.9162, 1.9430, 2.1473, 2.0459], device='cuda:2'), covar=tensor([0.4233, 0.3804, 0.3188, 0.4081, 0.5242, 0.4389, 0.4681, 0.2920], device='cuda:2'), in_proj_covar=tensor([0.0269, 0.0250, 0.0270, 0.0299, 0.0297, 0.0276, 0.0303, 0.0253], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 11:08:59,791 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.54 vs. limit=5.0 2023-03-27 11:09:14,979 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=162572.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 11:09:18,163 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.475e+02 1.779e+02 2.116e+02 3.329e+02, threshold=3.558e+02, percent-clipped=0.0 2023-03-27 11:09:18,785 INFO [finetune.py:976] (2/7) Epoch 29, batch 2200, loss[loss=0.173, simple_loss=0.2547, pruned_loss=0.04561, over 4891.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.241, pruned_loss=0.04675, over 956324.06 frames. 
], batch size: 36, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:09:22,369 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3997, 2.3237, 2.3801, 1.6235, 2.3710, 2.5199, 2.5813, 2.0062], device='cuda:2'), covar=tensor([0.0536, 0.0651, 0.0680, 0.0873, 0.0643, 0.0658, 0.0513, 0.1094], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0137, 0.0139, 0.0118, 0.0128, 0.0139, 0.0140, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 11:09:27,171 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=162587.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:09:41,188 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4528, 1.4416, 1.6745, 1.0912, 1.2915, 1.5475, 1.4364, 1.7036], device='cuda:2'), covar=tensor([0.1010, 0.1956, 0.1192, 0.1458, 0.0948, 0.1183, 0.2763, 0.0835], device='cuda:2'), in_proj_covar=tensor([0.0191, 0.0206, 0.0193, 0.0190, 0.0175, 0.0213, 0.0218, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 11:10:00,795 INFO [finetune.py:976] (2/7) Epoch 29, batch 2250, loss[loss=0.1687, simple_loss=0.2466, pruned_loss=0.04541, over 4806.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2426, pruned_loss=0.04785, over 955225.00 frames. ], batch size: 40, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:10:06,545 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5922, 3.7551, 3.5777, 1.7090, 3.9447, 2.9566, 1.2130, 2.6388], device='cuda:2'), covar=tensor([0.2431, 0.2058, 0.1621, 0.3264, 0.0967, 0.0845, 0.3986, 0.1350], device='cuda:2'), in_proj_covar=tensor([0.0150, 0.0180, 0.0159, 0.0128, 0.0162, 0.0123, 0.0148, 0.0125], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 11:10:19,421 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2523, 1.7889, 2.4531, 1.5632, 2.0945, 2.4782, 1.7805, 2.4792], device='cuda:2'), covar=tensor([0.1086, 0.1940, 0.1236, 0.1922, 0.0906, 0.1298, 0.2501, 0.0852], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0206, 0.0193, 0.0190, 0.0175, 0.0213, 0.0219, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 11:10:33,516 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.222e+01 1.558e+02 1.861e+02 2.054e+02 5.864e+02, threshold=3.723e+02, percent-clipped=1.0 2023-03-27 11:10:34,118 INFO [finetune.py:976] (2/7) Epoch 29, batch 2300, loss[loss=0.2251, simple_loss=0.2833, pruned_loss=0.08346, over 4883.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.2437, pruned_loss=0.04786, over 955189.77 frames. ], batch size: 35, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:10:46,499 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3945, 2.2970, 1.9814, 2.2512, 2.2224, 2.1609, 2.1669, 2.7987], device='cuda:2'), covar=tensor([0.3036, 0.3501, 0.2874, 0.2953, 0.3245, 0.2397, 0.3162, 0.1634], device='cuda:2'), in_proj_covar=tensor([0.0289, 0.0264, 0.0239, 0.0275, 0.0261, 0.0232, 0.0260, 0.0239], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 11:11:09,593 INFO [finetune.py:976] (2/7) Epoch 29, batch 2350, loss[loss=0.1499, simple_loss=0.2307, pruned_loss=0.03452, over 4906.00 frames. 
], tot_loss[loss=0.1684, simple_loss=0.2416, pruned_loss=0.04758, over 954005.00 frames. ], batch size: 43, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:11:20,031 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=162733.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:11:27,964 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.69 vs. limit=2.0 2023-03-27 11:11:50,496 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.467e+01 1.478e+02 1.866e+02 2.325e+02 4.091e+02, threshold=3.732e+02, percent-clipped=2.0 2023-03-27 11:11:51,107 INFO [finetune.py:976] (2/7) Epoch 29, batch 2400, loss[loss=0.1304, simple_loss=0.1978, pruned_loss=0.0315, over 4761.00 frames. ], tot_loss[loss=0.1654, simple_loss=0.238, pruned_loss=0.04641, over 953889.29 frames. ], batch size: 26, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:11:51,236 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.8162, 2.5299, 2.2015, 1.0752, 2.2850, 2.2015, 1.9328, 2.4136], device='cuda:2'), covar=tensor([0.0729, 0.0805, 0.1448, 0.1966, 0.1383, 0.1931, 0.2026, 0.0902], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0188, 0.0201, 0.0180, 0.0208, 0.0209, 0.0222, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 11:11:56,642 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=162781.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:12:14,932 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.2450, 1.2857, 1.3177, 0.7013, 1.2101, 1.4913, 1.5208, 1.2439], device='cuda:2'), covar=tensor([0.0829, 0.0537, 0.0564, 0.0480, 0.0500, 0.0553, 0.0320, 0.0680], device='cuda:2'), in_proj_covar=tensor([0.0120, 0.0148, 0.0131, 0.0122, 0.0131, 0.0130, 0.0141, 0.0152], device='cuda:2'), out_proj_covar=tensor([8.7853e-05, 1.0585e-04, 9.3045e-05, 8.5470e-05, 9.2102e-05, 9.1956e-05, 1.0055e-04, 1.0828e-04], device='cuda:2') 2023-03-27 11:12:16,772 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.0999, 0.9895, 0.9828, 0.4948, 0.9641, 1.1864, 1.1708, 0.9620], device='cuda:2'), covar=tensor([0.0848, 0.0667, 0.0644, 0.0599, 0.0633, 0.0672, 0.0476, 0.0752], device='cuda:2'), in_proj_covar=tensor([0.0120, 0.0148, 0.0131, 0.0122, 0.0131, 0.0130, 0.0141, 0.0152], device='cuda:2'), out_proj_covar=tensor([8.7852e-05, 1.0583e-04, 9.3044e-05, 8.5468e-05, 9.2087e-05, 9.1949e-05, 1.0054e-04, 1.0829e-04], device='cuda:2') 2023-03-27 11:12:33,620 INFO [finetune.py:976] (2/7) Epoch 29, batch 2450, loss[loss=0.1733, simple_loss=0.2386, pruned_loss=0.05402, over 4406.00 frames. ], tot_loss[loss=0.1645, simple_loss=0.2364, pruned_loss=0.04629, over 954946.07 frames. 
], batch size: 19, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:12:35,957 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8438, 1.6907, 2.1717, 3.2966, 2.2171, 2.4845, 1.1454, 2.7493], device='cuda:2'), covar=tensor([0.1625, 0.1330, 0.1276, 0.0624, 0.0813, 0.1366, 0.1817, 0.0523], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0114, 0.0132, 0.0163, 0.0100, 0.0135, 0.0125, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 11:12:35,981 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=162827.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:13:02,269 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=162867.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 11:13:09,317 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.738e+01 1.481e+02 1.739e+02 2.016e+02 4.521e+02, threshold=3.479e+02, percent-clipped=1.0 2023-03-27 11:13:09,947 INFO [finetune.py:976] (2/7) Epoch 29, batch 2500, loss[loss=0.1864, simple_loss=0.2532, pruned_loss=0.05976, over 4885.00 frames. ], tot_loss[loss=0.166, simple_loss=0.2382, pruned_loss=0.04693, over 952866.37 frames. ], batch size: 32, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:13:27,464 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=162887.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:13:28,125 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=162888.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:13:35,590 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5869, 3.7784, 3.4998, 1.8813, 3.8858, 3.0246, 0.6612, 2.5715], device='cuda:2'), covar=tensor([0.2353, 0.2337, 0.1688, 0.3193, 0.1188, 0.0965, 0.4732, 0.1672], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0181, 0.0160, 0.0129, 0.0163, 0.0124, 0.0150, 0.0126], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2') 2023-03-27 11:13:52,411 INFO [finetune.py:976] (2/7) Epoch 29, batch 2550, loss[loss=0.1774, simple_loss=0.247, pruned_loss=0.05388, over 4927.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2431, pruned_loss=0.04825, over 952853.10 frames. ], batch size: 38, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:14:01,592 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=162935.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:14:36,961 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.986e+01 1.516e+02 1.791e+02 2.248e+02 3.208e+02, threshold=3.582e+02, percent-clipped=0.0 2023-03-27 11:14:37,596 INFO [finetune.py:976] (2/7) Epoch 29, batch 2600, loss[loss=0.1821, simple_loss=0.2394, pruned_loss=0.06236, over 4774.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2438, pruned_loss=0.04829, over 950843.42 frames. 
], batch size: 26, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:14:38,355 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.3397, 2.3533, 1.9311, 2.3888, 2.2459, 2.2298, 2.2460, 3.0957], device='cuda:2'), covar=tensor([0.3649, 0.4623, 0.3308, 0.4216, 0.4496, 0.2620, 0.4100, 0.1593], device='cuda:2'), in_proj_covar=tensor([0.0289, 0.0264, 0.0239, 0.0275, 0.0261, 0.0232, 0.0259, 0.0239], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 11:15:16,609 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9708, 1.7382, 2.4808, 3.9944, 2.6299, 2.7949, 0.9191, 3.3347], device='cuda:2'), covar=tensor([0.1669, 0.1403, 0.1363, 0.0623, 0.0809, 0.1770, 0.1984, 0.0407], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0115, 0.0133, 0.0164, 0.0100, 0.0136, 0.0125, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 11:15:17,741 INFO [finetune.py:976] (2/7) Epoch 29, batch 2650, loss[loss=0.1767, simple_loss=0.265, pruned_loss=0.04417, over 4916.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2456, pruned_loss=0.04916, over 950150.84 frames. ], batch size: 38, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:15:33,928 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7574, 1.6110, 2.3445, 3.6996, 2.4061, 2.6450, 1.2069, 3.0602], device='cuda:2'), covar=tensor([0.1720, 0.1462, 0.1339, 0.0556, 0.0839, 0.1230, 0.1805, 0.0486], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0115, 0.0133, 0.0163, 0.0100, 0.0136, 0.0125, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 11:15:36,279 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0522, 1.7149, 2.3036, 1.4600, 2.0191, 2.2539, 1.6386, 2.3689], device='cuda:2'), covar=tensor([0.1279, 0.2123, 0.1496, 0.2098, 0.0995, 0.1448, 0.3056, 0.0898], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0208, 0.0195, 0.0191, 0.0176, 0.0214, 0.0220, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 11:15:40,894 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0287, 1.9636, 1.7076, 2.2305, 2.4517, 2.1764, 1.8480, 1.6755], device='cuda:2'), covar=tensor([0.2108, 0.1963, 0.1892, 0.1556, 0.1487, 0.1113, 0.2222, 0.1890], device='cuda:2'), in_proj_covar=tensor([0.0248, 0.0213, 0.0217, 0.0200, 0.0247, 0.0192, 0.0219, 0.0207], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 11:15:44,510 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2508, 1.7026, 1.1961, 1.9387, 2.5154, 1.7375, 1.9191, 2.0073], device='cuda:2'), covar=tensor([0.1214, 0.1747, 0.1616, 0.1016, 0.1497, 0.1724, 0.1197, 0.1765], device='cuda:2'), in_proj_covar=tensor([0.0089, 0.0093, 0.0108, 0.0092, 0.0119, 0.0091, 0.0097, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2') 2023-03-27 11:15:51,094 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.228e+01 1.484e+02 1.662e+02 2.061e+02 3.704e+02, threshold=3.324e+02, percent-clipped=1.0 2023-03-27 11:15:51,717 INFO [finetune.py:976] (2/7) Epoch 29, batch 2700, loss[loss=0.1367, simple_loss=0.2108, pruned_loss=0.0313, over 4820.00 frames. 
], tot_loss[loss=0.1703, simple_loss=0.2442, pruned_loss=0.04826, over 952105.86 frames. ], batch size: 41, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:16:02,255 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=163090.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:16:06,447 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8420, 1.7596, 1.8960, 1.2844, 1.8410, 1.9620, 1.9102, 1.5644], device='cuda:2'), covar=tensor([0.0635, 0.0731, 0.0697, 0.0860, 0.0898, 0.0635, 0.0619, 0.1228], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0138, 0.0141, 0.0119, 0.0128, 0.0140, 0.0141, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 11:16:25,137 INFO [finetune.py:976] (2/7) Epoch 29, batch 2750, loss[loss=0.1653, simple_loss=0.2335, pruned_loss=0.04857, over 4808.00 frames. ], tot_loss[loss=0.169, simple_loss=0.242, pruned_loss=0.04796, over 951965.30 frames. ], batch size: 41, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:16:52,295 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=163151.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:16:53,469 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0 2023-03-27 11:17:03,847 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=163167.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 11:17:10,462 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.013e+02 1.483e+02 1.714e+02 1.995e+02 3.279e+02, threshold=3.429e+02, percent-clipped=0.0 2023-03-27 11:17:10,478 INFO [finetune.py:976] (2/7) Epoch 29, batch 2800, loss[loss=0.1689, simple_loss=0.2283, pruned_loss=0.05472, over 4907.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2394, pruned_loss=0.04764, over 953873.43 frames. ], batch size: 32, lr: 2.84e-03, grad_scale: 16.0 2023-03-27 11:17:15,462 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=163183.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:17:19,124 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2274, 1.9040, 1.9976, 0.9658, 2.2698, 2.3851, 2.1947, 1.8494], device='cuda:2'), covar=tensor([0.0805, 0.0704, 0.0527, 0.0658, 0.0625, 0.0650, 0.0471, 0.0800], device='cuda:2'), in_proj_covar=tensor([0.0121, 0.0148, 0.0131, 0.0122, 0.0131, 0.0130, 0.0142, 0.0152], device='cuda:2'), out_proj_covar=tensor([8.8128e-05, 1.0595e-04, 9.3285e-05, 8.5566e-05, 9.2127e-05, 9.2092e-05, 1.0058e-04, 1.0853e-04], device='cuda:2') 2023-03-27 11:17:37,946 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=163215.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:17:40,939 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4318, 2.1886, 1.7416, 0.7821, 1.9332, 2.0090, 1.8094, 2.0493], device='cuda:2'), covar=tensor([0.0843, 0.0807, 0.1470, 0.2005, 0.1293, 0.2190, 0.2033, 0.0896], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0188, 0.0201, 0.0180, 0.0208, 0.0209, 0.0223, 0.0194], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 11:17:44,450 INFO [finetune.py:976] (2/7) Epoch 29, batch 2850, loss[loss=0.1313, simple_loss=0.2115, pruned_loss=0.02553, over 4085.00 frames. ], tot_loss[loss=0.1659, simple_loss=0.2377, pruned_loss=0.04707, over 953838.12 frames. 
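The grad_scale field is the fp16 dynamic loss scale, and its movement here follows the classic grow-until-overflow pattern: it doubles from 16.0 to 32.0 between epoch 29 batches 1500 and 1550, then halves back to 16.0 between batches 2750 and 2800, presumably after an overflow was detected. A schematic training step showing where that value comes from; init_scale=16.0 is chosen to match the log, while growth_factor, backoff_factor and growth_interval are torch.cuda.amp.GradScaler's real defaults (the loop itself is a sketch, not finetune.py):

    # Dynamic fp16 loss scaling behind the "grad_scale: 16.0/32.0" fields.
    import torch
    from torch.cuda.amp import GradScaler, autocast

    scaler = GradScaler(init_scale=16.0, growth_factor=2.0,
                        backoff_factor=0.5, growth_interval=2000)

    def train_step(model, optimizer, batch, compute_loss):
        optimizer.zero_grad()
        with autocast():
            loss = compute_loss(model, batch)
        scaler.scale(loss).backward()  # gradients carry the current scale
        scaler.step(optimizer)         # unscales, skips the step on overflow
        scaler.update()                # grows or backs off the scale
        return loss.detach(), scaler.get_scale()  # logged as grad_scale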
2023-03-27 11:17:59,013 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=163240.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:18:31,673 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.921e+01 1.528e+02 1.890e+02 2.266e+02 3.566e+02, threshold=3.779e+02, percent-clipped=1.0
2023-03-27 11:18:31,689 INFO [finetune.py:976] (2/7) Epoch 29, batch 2900, loss[loss=0.1725, simple_loss=0.2492, pruned_loss=0.04789, over 4804.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2399, pruned_loss=0.04783, over 954777.50 frames. ], batch size: 41, lr: 2.84e-03, grad_scale: 16.0
2023-03-27 11:18:41,443 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0
2023-03-27 11:18:41,940 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0268, 1.9604, 2.4257, 3.5770, 2.5803, 2.6603, 1.4697, 3.0121], device='cuda:2'), covar=tensor([0.1461, 0.1054, 0.1070, 0.0604, 0.0642, 0.1217, 0.1509, 0.0471], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0114, 0.0132, 0.0163, 0.0100, 0.0135, 0.0125, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-27 11:18:52,049 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=163301.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:19:08,253 INFO [finetune.py:976] (2/7) Epoch 29, batch 2950, loss[loss=0.1842, simple_loss=0.2681, pruned_loss=0.05015, over 4807.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2438, pruned_loss=0.04878, over 955342.14 frames. ], batch size: 41, lr: 2.84e-03, grad_scale: 16.0
2023-03-27 11:19:29,126 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.27 vs. limit=5.0
2023-03-27 11:19:36,098 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2418, 1.9364, 2.3963, 1.6244, 1.9990, 2.4847, 1.8638, 2.5707], device='cuda:2'), covar=tensor([0.1168, 0.2006, 0.1221, 0.1766, 0.1021, 0.1123, 0.2653, 0.0695], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0208, 0.0195, 0.0191, 0.0176, 0.0214, 0.0220, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 11:19:49,294 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.216e+02 1.530e+02 1.927e+02 2.309e+02 4.929e+02, threshold=3.855e+02, percent-clipped=3.0
2023-03-27 11:19:49,310 INFO [finetune.py:976] (2/7) Epoch 29, batch 3000, loss[loss=0.1656, simple_loss=0.2429, pruned_loss=0.04417, over 4744.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2443, pruned_loss=0.04888, over 954804.73 frames. ], batch size: 27, lr: 2.84e-03, grad_scale: 16.0
2023-03-27 11:19:49,310 INFO [finetune.py:1001] (2/7) Computing validation loss
2023-03-27 11:20:05,061 INFO [finetune.py:1010] (2/7) Epoch 29, validation: loss=0.158, simple_loss=0.2251, pruned_loss=0.04545, over 2265189.00 frames.
2023-03-27 11:20:05,062 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB
2023-03-27 11:20:22,899 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=2.96 vs. limit=5.0
2023-03-27 11:20:25,911 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.07 vs. limit=5.0
2023-03-27 11:20:43,037 INFO [finetune.py:976] (2/7) Epoch 29, batch 3050, loss[loss=0.1483, simple_loss=0.2338, pruned_loss=0.03135, over 4759.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2452, pruned_loss=0.0485, over 955283.36 frames. ], batch size: 28, lr: 2.84e-03, grad_scale: 16.0
2023-03-27 11:20:54,226 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1126, 1.8227, 2.5601, 4.1243, 2.9179, 2.7612, 1.2124, 3.4810], device='cuda:2'), covar=tensor([0.1733, 0.1454, 0.1353, 0.0515, 0.0695, 0.1456, 0.1833, 0.0348], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0115, 0.0133, 0.0164, 0.0100, 0.0136, 0.0125, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-27 11:20:57,931 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=163446.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:21:13,514 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0
2023-03-27 11:21:16,335 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.820e+01 1.385e+02 1.622e+02 1.937e+02 3.745e+02, threshold=3.244e+02, percent-clipped=0.0
2023-03-27 11:21:16,351 INFO [finetune.py:976] (2/7) Epoch 29, batch 3100, loss[loss=0.1586, simple_loss=0.2345, pruned_loss=0.04135, over 4891.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.2416, pruned_loss=0.04714, over 954325.72 frames. ], batch size: 35, lr: 2.84e-03, grad_scale: 16.0
2023-03-27 11:21:22,348 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=163483.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:21:34,188 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4699, 2.3282, 1.9932, 2.3713, 2.7894, 2.4532, 2.2459, 1.7891], device='cuda:2'), covar=tensor([0.1955, 0.1836, 0.1856, 0.1560, 0.1646, 0.1046, 0.2027, 0.1823], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0212, 0.0216, 0.0199, 0.0246, 0.0191, 0.0217, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 11:21:34,752 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5958, 1.1077, 0.7318, 1.3837, 2.0694, 0.8269, 1.2747, 1.4003], device='cuda:2'), covar=tensor([0.1426, 0.2165, 0.1699, 0.1221, 0.1839, 0.1827, 0.1559, 0.1904], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0093, 0.0108, 0.0093, 0.0120, 0.0091, 0.0097, 0.0088], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:2')
2023-03-27 11:21:51,332 INFO [finetune.py:976] (2/7) Epoch 29, batch 3150, loss[loss=0.1662, simple_loss=0.245, pruned_loss=0.04364, over 4846.00 frames. ], tot_loss[loss=0.1661, simple_loss=0.2391, pruned_loss=0.04652, over 955264.61 frames. ], batch size: 44, lr: 2.84e-03, grad_scale: 16.0
2023-03-27 11:21:55,026 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=163531.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:22:33,521 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.004e+02 1.497e+02 1.704e+02 2.227e+02 4.213e+02, threshold=3.407e+02, percent-clipped=5.0
2023-03-27 11:22:33,537 INFO [finetune.py:976] (2/7) Epoch 29, batch 3200, loss[loss=0.1603, simple_loss=0.2261, pruned_loss=0.04729, over 4752.00 frames. ], tot_loss[loss=0.1648, simple_loss=0.2375, pruned_loss=0.04609, over 955876.83 frames. ], batch size: 23, lr: 2.83e-03, grad_scale: 16.0
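
Both the per-batch loss[... over N frames.] figures and the periodic validation line a few records above (loss=0.158 over 2265189.00 frames) are frame-weighted averages, which is why the fixed dev set always reports the same frame count however it is batched. A minimal sketch of that aggregation; the helper name is illustrative, not icefall's actual tracker API:

    def frame_weighted_loss(batches):
        # batches: iterable of (per-frame loss, number of frames in the batch).
        total_loss = 0.0
        total_frames = 0.0
        for loss, frames in batches:
            total_loss += loss * frames
            total_frames += frames
        return total_loss / total_frames
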
2023-03-27 11:22:49,203 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=163596.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:22:50,460 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9579, 1.7983, 2.4420, 3.5951, 2.5108, 2.5599, 1.3330, 3.0399], device='cuda:2'), covar=tensor([0.1580, 0.1265, 0.1117, 0.0526, 0.0707, 0.1385, 0.1760, 0.0430], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0115, 0.0132, 0.0163, 0.0100, 0.0135, 0.0125, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-27 11:22:57,913 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1699, 2.0527, 2.1580, 1.5890, 2.1407, 2.2966, 2.2682, 1.7721], device='cuda:2'), covar=tensor([0.0538, 0.0626, 0.0669, 0.0814, 0.0803, 0.0565, 0.0537, 0.1107], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0138, 0.0140, 0.0119, 0.0129, 0.0139, 0.0140, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 11:23:04,076 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0
2023-03-27 11:23:07,473 INFO [finetune.py:976] (2/7) Epoch 29, batch 3250, loss[loss=0.1949, simple_loss=0.2664, pruned_loss=0.06167, over 4760.00 frames. ], tot_loss[loss=0.1661, simple_loss=0.2382, pruned_loss=0.04698, over 955407.01 frames. ], batch size: 54, lr: 2.83e-03, grad_scale: 16.0
2023-03-27 11:23:52,620 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.061e+02 1.629e+02 1.981e+02 2.506e+02 4.570e+02, threshold=3.962e+02, percent-clipped=6.0
2023-03-27 11:23:52,636 INFO [finetune.py:976] (2/7) Epoch 29, batch 3300, loss[loss=0.1462, simple_loss=0.2399, pruned_loss=0.02621, over 4899.00 frames. ], tot_loss[loss=0.1677, simple_loss=0.2402, pruned_loss=0.04754, over 952229.36 frames. ], batch size: 32, lr: 2.83e-03, grad_scale: 16.0
2023-03-27 11:23:59,352 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.7214, 3.8048, 3.7257, 1.7517, 3.9974, 3.1474, 0.7020, 2.7580], device='cuda:2'), covar=tensor([0.2348, 0.2455, 0.1376, 0.3509, 0.0935, 0.0904, 0.4610, 0.1598], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0181, 0.0161, 0.0131, 0.0163, 0.0125, 0.0150, 0.0127], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-27 11:24:37,005 INFO [finetune.py:976] (2/7) Epoch 29, batch 3350, loss[loss=0.1585, simple_loss=0.2022, pruned_loss=0.05737, over 4094.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.243, pruned_loss=0.04871, over 952379.47 frames. ], batch size: 18, lr: 2.83e-03, grad_scale: 16.0
2023-03-27 11:24:46,126 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.82 vs. limit=5.0
2023-03-27 11:24:55,353 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=163746.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:25:21,356 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.117e+02 1.486e+02 1.801e+02 2.174e+02 4.171e+02, threshold=3.603e+02, percent-clipped=1.0
2023-03-27 11:25:21,371 INFO [finetune.py:976] (2/7) Epoch 29, batch 3400, loss[loss=0.1644, simple_loss=0.2495, pruned_loss=0.03962, over 4853.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2437, pruned_loss=0.0486, over 952484.05 frames. ], batch size: 44, lr: 2.83e-03, grad_scale: 16.0
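
In the optim.py lines, the five numbers after "grad-norm quartiles" read as the min/25%/median/75%/max of recently observed gradient norms, and the printed threshold equals Clipping_scale times the median (for the record above, 2.0 * 1.981e+02 = 3.962e+02), with percent-clipped reporting how often that threshold was exceeded. A sketch of that bookkeeping under this reading; it is not icefall's actual optimizer code:

    import numpy as np

    def grad_norm_stats(recent_norms, clipping_scale=2.0):
        norms = np.asarray(recent_norms)
        quartiles = np.quantile(norms, [0.0, 0.25, 0.5, 0.75, 1.0])
        threshold = clipping_scale * quartiles[2]          # scale * median
        percent_clipped = 100.0 * float((norms > threshold).mean())
        return quartiles, threshold, percent_clipped
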
2023-03-27 11:25:37,708 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=163794.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:25:58,974 INFO [finetune.py:976] (2/7) Epoch 29, batch 3450, loss[loss=0.1537, simple_loss=0.2241, pruned_loss=0.0416, over 4759.00 frames. ], tot_loss[loss=0.169, simple_loss=0.2426, pruned_loss=0.0477, over 952447.20 frames. ], batch size: 27, lr: 2.83e-03, grad_scale: 16.0
2023-03-27 11:26:12,323 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0
2023-03-27 11:26:41,241 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.103e+01 1.539e+02 1.877e+02 2.200e+02 3.171e+02, threshold=3.754e+02, percent-clipped=0.0
2023-03-27 11:26:41,257 INFO [finetune.py:976] (2/7) Epoch 29, batch 3500, loss[loss=0.1625, simple_loss=0.2248, pruned_loss=0.05011, over 4939.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2408, pruned_loss=0.04756, over 952862.32 frames. ], batch size: 33, lr: 2.83e-03, grad_scale: 16.0
2023-03-27 11:26:55,487 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=163896.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:27:17,137 INFO [finetune.py:976] (2/7) Epoch 29, batch 3550, loss[loss=0.1487, simple_loss=0.2289, pruned_loss=0.0342, over 4824.00 frames. ], tot_loss[loss=0.1656, simple_loss=0.2379, pruned_loss=0.04661, over 951561.50 frames. ], batch size: 41, lr: 2.83e-03, grad_scale: 16.0
2023-03-27 11:27:38,584 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=163944.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:27:59,310 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.875e+01 1.446e+02 1.734e+02 2.162e+02 4.667e+02, threshold=3.468e+02, percent-clipped=1.0
2023-03-27 11:27:59,326 INFO [finetune.py:976] (2/7) Epoch 29, batch 3600, loss[loss=0.1984, simple_loss=0.2607, pruned_loss=0.06811, over 4788.00 frames. ], tot_loss[loss=0.165, simple_loss=0.2367, pruned_loss=0.04662, over 951258.52 frames. ], batch size: 51, lr: 2.83e-03, grad_scale: 16.0
2023-03-27 11:28:36,274 INFO [finetune.py:976] (2/7) Epoch 29, batch 3650, loss[loss=0.1734, simple_loss=0.2622, pruned_loss=0.04227, over 4829.00 frames. ], tot_loss[loss=0.1668, simple_loss=0.2384, pruned_loss=0.04758, over 950419.62 frames. ], batch size: 49, lr: 2.83e-03, grad_scale: 16.0
2023-03-27 11:28:58,903 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.6278, 4.0190, 4.3069, 4.4104, 4.3682, 4.0942, 4.6315, 1.9273], device='cuda:2'), covar=tensor([0.0728, 0.0947, 0.0820, 0.0897, 0.1103, 0.1438, 0.0687, 0.5313], device='cuda:2'), in_proj_covar=tensor([0.0353, 0.0247, 0.0287, 0.0296, 0.0340, 0.0286, 0.0306, 0.0302], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 11:29:00,631 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2575, 1.3360, 1.4498, 0.9397, 1.2498, 1.4631, 1.3375, 1.6663], device='cuda:2'), covar=tensor([0.1258, 0.2381, 0.1428, 0.1635, 0.1022, 0.1270, 0.3110, 0.0799], device='cuda:2'), in_proj_covar=tensor([0.0189, 0.0205, 0.0192, 0.0188, 0.0173, 0.0211, 0.0216, 0.0195], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 11:29:06,513 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8649, 1.3607, 1.8398, 1.8320, 1.6344, 1.6040, 1.7726, 1.7755], device='cuda:2'), covar=tensor([0.4418, 0.4091, 0.3363, 0.3870, 0.5062, 0.4501, 0.4757, 0.3007], device='cuda:2'), in_proj_covar=tensor([0.0269, 0.0250, 0.0269, 0.0300, 0.0299, 0.0277, 0.0305, 0.0254], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 11:29:09,053 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0
2023-03-27 11:29:19,111 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.932e+01 1.541e+02 1.828e+02 2.331e+02 7.133e+02, threshold=3.656e+02, percent-clipped=4.0
2023-03-27 11:29:19,127 INFO [finetune.py:976] (2/7) Epoch 29, batch 3700, loss[loss=0.148, simple_loss=0.2289, pruned_loss=0.03358, over 4756.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2401, pruned_loss=0.0478, over 947775.51 frames. ], batch size: 28, lr: 2.83e-03, grad_scale: 16.0
2023-03-27 11:29:36,852 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=164090.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:29:45,699 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6664, 1.4918, 1.0301, 0.3267, 1.2313, 1.4984, 1.4788, 1.4218], device='cuda:2'), covar=tensor([0.1030, 0.0939, 0.1550, 0.2098, 0.1582, 0.2596, 0.2489, 0.0973], device='cuda:2'), in_proj_covar=tensor([0.0171, 0.0189, 0.0202, 0.0181, 0.0209, 0.0210, 0.0223, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 11:29:54,117 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1597, 2.0947, 1.8680, 2.0727, 2.0608, 1.9861, 2.0205, 2.7336], device='cuda:2'), covar=tensor([0.3614, 0.4138, 0.3085, 0.3744, 0.3875, 0.2396, 0.3761, 0.1580], device='cuda:2'), in_proj_covar=tensor([0.0289, 0.0265, 0.0239, 0.0275, 0.0261, 0.0232, 0.0260, 0.0239], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 11:30:00,904 INFO [finetune.py:976] (2/7) Epoch 29, batch 3750, loss[loss=0.1718, simple_loss=0.2505, pruned_loss=0.04657, over 4807.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2426, pruned_loss=0.04877, over 947521.79 frames. ], batch size: 45, lr: 2.83e-03, grad_scale: 16.0
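
The attn_weights_entropy tensors are diagnostics on the self-attention heads: for each head, the entropy of its attention distribution, so values near zero flag heads that have collapsed onto a single frame while larger values indicate diffuse attention. A sketch of the quantity, assuming attention weights of shape (num_heads, query_len, key_len) whose rows sum to one:

    import torch

    def attention_entropy(attn: torch.Tensor, eps: float = 1e-20) -> torch.Tensor:
        # Entropy of each attention row, averaged over query positions,
        # giving one value per head as in the dumps above.
        ent = -(attn * (attn + eps).log()).sum(dim=-1)  # (num_heads, query_len)
        return ent.mean(dim=-1)                         # (num_heads,)
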
2023-03-27 11:30:14,174 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5718, 1.4583, 1.3851, 1.5148, 1.0310, 3.2204, 1.2272, 1.5999], device='cuda:2'), covar=tensor([0.3405, 0.2505, 0.2270, 0.2463, 0.1824, 0.0213, 0.2699, 0.1264], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0124, 0.0113, 0.0095, 0.0094, 0.0094], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 11:30:17,231 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=164151.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:30:20,023 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=164155.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:30:37,149 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.292e+01 1.655e+02 1.832e+02 2.179e+02 3.362e+02, threshold=3.664e+02, percent-clipped=0.0
2023-03-27 11:30:37,165 INFO [finetune.py:976] (2/7) Epoch 29, batch 3800, loss[loss=0.1916, simple_loss=0.2765, pruned_loss=0.05329, over 4921.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.2438, pruned_loss=0.04836, over 950605.46 frames. ], batch size: 42, lr: 2.83e-03, grad_scale: 16.0
2023-03-27 11:31:03,888 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=164216.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:31:09,782 INFO [finetune.py:976] (2/7) Epoch 29, batch 3850, loss[loss=0.1906, simple_loss=0.2502, pruned_loss=0.06555, over 4794.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.2419, pruned_loss=0.04744, over 952098.87 frames. ], batch size: 29, lr: 2.83e-03, grad_scale: 16.0
2023-03-27 11:31:24,851 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.5995, 1.5288, 1.5132, 0.9048, 1.7241, 1.8257, 1.8585, 1.4035], device='cuda:2'), covar=tensor([0.0935, 0.0783, 0.0540, 0.0595, 0.0472, 0.0614, 0.0313, 0.0778], device='cuda:2'), in_proj_covar=tensor([0.0121, 0.0148, 0.0132, 0.0122, 0.0131, 0.0130, 0.0142, 0.0152], device='cuda:2'), out_proj_covar=tensor([8.8098e-05, 1.0615e-04, 9.3731e-05, 8.5384e-05, 9.2076e-05, 9.2372e-05, 1.0087e-04, 1.0842e-04], device='cuda:2')
2023-03-27 11:31:45,645 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.748e+01 1.394e+02 1.747e+02 2.221e+02 3.425e+02, threshold=3.494e+02, percent-clipped=0.0
2023-03-27 11:31:45,661 INFO [finetune.py:976] (2/7) Epoch 29, batch 3900, loss[loss=0.165, simple_loss=0.2364, pruned_loss=0.04682, over 4903.00 frames. ], tot_loss[loss=0.1668, simple_loss=0.2397, pruned_loss=0.04692, over 953742.29 frames. ], batch size: 35, lr: 2.83e-03, grad_scale: 16.0
2023-03-27 11:32:27,498 INFO [finetune.py:976] (2/7) Epoch 29, batch 3950, loss[loss=0.1501, simple_loss=0.2236, pruned_loss=0.03828, over 4832.00 frames. ], tot_loss[loss=0.1654, simple_loss=0.2375, pruned_loss=0.04662, over 954653.87 frames. ], batch size: 39, lr: 2.83e-03, grad_scale: 16.0
2023-03-27 11:32:58,653 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=164360.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:33:11,862 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.819e+01 1.491e+02 1.748e+02 1.970e+02 3.605e+02, threshold=3.496e+02, percent-clipped=1.0
2023-03-27 11:33:11,878 INFO [finetune.py:976] (2/7) Epoch 29, batch 4000, loss[loss=0.2236, simple_loss=0.2936, pruned_loss=0.07687, over 4744.00 frames. ], tot_loss[loss=0.1655, simple_loss=0.2371, pruned_loss=0.04695, over 953015.38 frames. ], batch size: 59, lr: 2.83e-03, grad_scale: 16.0
2023-03-27 11:33:42,985 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=164421.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:33:43,623 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8570, 1.4574, 1.9155, 1.9565, 1.6764, 1.6586, 1.9141, 1.8367], device='cuda:2'), covar=tensor([0.3711, 0.3638, 0.3100, 0.3233, 0.4403, 0.3838, 0.3916, 0.2833], device='cuda:2'), in_proj_covar=tensor([0.0269, 0.0250, 0.0270, 0.0300, 0.0299, 0.0277, 0.0305, 0.0254], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 11:33:45,304 INFO [finetune.py:976] (2/7) Epoch 29, batch 4050, loss[loss=0.1799, simple_loss=0.2312, pruned_loss=0.06432, over 4275.00 frames. ], tot_loss[loss=0.1685, simple_loss=0.2405, pruned_loss=0.04827, over 953587.62 frames. ], batch size: 18, lr: 2.83e-03, grad_scale: 16.0
2023-03-27 11:34:06,764 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=164446.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:34:07,465 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5634, 2.3328, 1.9882, 2.5449, 2.4043, 2.1462, 2.9204, 2.5937], device='cuda:2'), covar=tensor([0.1347, 0.2241, 0.3042, 0.2493, 0.2650, 0.1752, 0.2863, 0.1670], device='cuda:2'), in_proj_covar=tensor([0.0190, 0.0191, 0.0237, 0.0254, 0.0251, 0.0210, 0.0215, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 11:34:09,198 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4027, 1.3234, 1.2895, 1.2714, 1.6220, 1.5865, 1.3651, 1.2299], device='cuda:2'), covar=tensor([0.0392, 0.0333, 0.0752, 0.0357, 0.0270, 0.0408, 0.0397, 0.0497], device='cuda:2'), in_proj_covar=tensor([0.0102, 0.0106, 0.0147, 0.0111, 0.0102, 0.0117, 0.0103, 0.0114], device='cuda:2'), out_proj_covar=tensor([7.8655e-05, 8.1013e-05, 1.1457e-04, 8.4474e-05, 7.8944e-05, 8.6373e-05, 7.6838e-05, 8.6698e-05], device='cuda:2')
2023-03-27 11:34:11,075 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1135, 1.9665, 1.6634, 1.7710, 2.0655, 1.8359, 2.3015, 2.1284], device='cuda:2'), covar=tensor([0.1335, 0.2004, 0.2924, 0.2668, 0.2587, 0.1732, 0.2800, 0.1681], device='cuda:2'), in_proj_covar=tensor([0.0190, 0.0191, 0.0237, 0.0254, 0.0252, 0.0210, 0.0216, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 11:34:29,030 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.788e+01 1.600e+02 1.814e+02 2.182e+02 3.811e+02, threshold=3.628e+02, percent-clipped=1.0
2023-03-27 11:34:29,046 INFO [finetune.py:976] (2/7) Epoch 29, batch 4100, loss[loss=0.1606, simple_loss=0.2464, pruned_loss=0.03742, over 4794.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2428, pruned_loss=0.04872, over 952450.70 frames. ], batch size: 45, lr: 2.83e-03, grad_scale: 16.0
2023-03-27 11:35:04,640 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=164511.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:35:13,898 INFO [finetune.py:976] (2/7) Epoch 29, batch 4150, loss[loss=0.2089, simple_loss=0.2707, pruned_loss=0.07352, over 4907.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2447, pruned_loss=0.04946, over 953259.57 frames. ], batch size: 37, lr: 2.83e-03, grad_scale: 16.0
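
The zipformer.py:1188 lines track stochastic layer dropping: each encoder stack has a warmup window (warmup_begin/warmup_end, measured in batches), and whole layers can be chosen for dropping; with batch_count around 164k, far past every window, num_to_drop is almost always 0, and the occasional num_to_drop=1 suggests a small residual drop probability. A toy version of that bookkeeping; the schedule below is an assumption for illustration, not Zipformer's actual rule:

    import random

    def choose_layers_to_drop(batch_count, warmup_begin, warmup_end,
                              num_layers, p_warm=0.5, p_final=0.02):
        # Drop probability decays linearly across the warmup window and
        # stays at a small floor afterwards (hence the rare late drops).
        if batch_count >= warmup_end:
            p = p_final
        elif batch_count <= warmup_begin:
            p = p_warm
        else:
            frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
            p = p_warm + frac * (p_final - p_warm)
        return {i for i in range(num_layers) if random.random() < p}
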
2023-03-27 11:35:21,813 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0
2023-03-27 11:35:42,373 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=164568.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:35:46,474 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.025e+02 1.582e+02 1.835e+02 2.360e+02 4.097e+02, threshold=3.670e+02, percent-clipped=1.0
2023-03-27 11:35:46,490 INFO [finetune.py:976] (2/7) Epoch 29, batch 4200, loss[loss=0.1461, simple_loss=0.216, pruned_loss=0.03811, over 4849.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2438, pruned_loss=0.04864, over 951493.32 frames. ], batch size: 44, lr: 2.83e-03, grad_scale: 16.0
2023-03-27 11:35:48,887 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=164578.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:36:14,623 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0
2023-03-27 11:36:20,292 INFO [finetune.py:976] (2/7) Epoch 29, batch 4250, loss[loss=0.1974, simple_loss=0.2621, pruned_loss=0.06631, over 4902.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.2417, pruned_loss=0.04792, over 950750.21 frames. ], batch size: 37, lr: 2.83e-03, grad_scale: 16.0
2023-03-27 11:36:23,243 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=164629.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:36:29,279 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=164639.0, num_to_drop=1, layers_to_drop={2}
2023-03-27 11:36:43,152 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=164658.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:36:53,339 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4281, 2.4071, 2.1109, 2.5907, 2.3133, 2.2946, 2.3169, 3.2919], device='cuda:2'), covar=tensor([0.3797, 0.4515, 0.3473, 0.3969, 0.4217, 0.2448, 0.4257, 0.1500], device='cuda:2'), in_proj_covar=tensor([0.0289, 0.0264, 0.0239, 0.0275, 0.0262, 0.0233, 0.0260, 0.0239], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 11:36:53,782 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.101e+02 1.450e+02 1.663e+02 2.112e+02 3.483e+02, threshold=3.326e+02, percent-clipped=0.0
2023-03-27 11:36:53,798 INFO [finetune.py:976] (2/7) Epoch 29, batch 4300, loss[loss=0.1343, simple_loss=0.2039, pruned_loss=0.03241, over 4789.00 frames. ], tot_loss[loss=0.1667, simple_loss=0.2389, pruned_loss=0.04719, over 951910.83 frames. ], batch size: 29, lr: 2.83e-03, grad_scale: 16.0
2023-03-27 11:37:21,324 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.33 vs. limit=5.0
2023-03-27 11:37:38,950 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0
2023-03-27 11:37:39,418 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=164716.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:37:41,324 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=164719.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:37:44,823 INFO [finetune.py:976] (2/7) Epoch 29, batch 4350, loss[loss=0.181, simple_loss=0.2422, pruned_loss=0.05992, over 4858.00 frames. ], tot_loss[loss=0.1658, simple_loss=0.2376, pruned_loss=0.04704, over 955321.86 frames. ], batch size: 31, lr: 2.83e-03, grad_scale: 16.0
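
The scaling.py Whitening lines compare a whiteness statistic of some activation against a limit (2.0 at the 8-group sites, 5.0 at the 1-group/384-channel sites). A statistic with exactly this behaviour is the dispersion of the feature-covariance eigenvalues, mean(lambda^2) / mean(lambda)^2, which equals 1.0 for perfectly white (isotropic) features and grows when a few directions dominate; whether this is the exact formula in scaling.py is an assumption here. A sketch:

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
        # x: (num_frames, num_channels); channels split into equal groups.
        n, c = x.shape
        d = c // num_groups
        xg = x.reshape(n, num_groups, d).permute(1, 0, 2)   # (groups, n, d)
        xg = xg - xg.mean(dim=1, keepdim=True)
        cov = xg.transpose(1, 2) @ xg / n                   # (groups, d, d)
        mean_eig = cov.diagonal(dim1=1, dim2=2).mean(dim=1)      # trace/d
        mean_eig_sq = (cov ** 2).sum(dim=(1, 2)) / d             # trace(cov^2)/d
        return float((mean_eig_sq / mean_eig ** 2).mean())
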
], tot_loss[loss=0.1658, simple_loss=0.2376, pruned_loss=0.04704, over 955321.86 frames. ], batch size: 31, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:37:56,288 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5824, 2.3887, 2.0368, 1.0248, 2.1872, 1.9808, 1.8613, 2.1824], device='cuda:2'), covar=tensor([0.0827, 0.0824, 0.1642, 0.2020, 0.1430, 0.2289, 0.2183, 0.0962], device='cuda:2'), in_proj_covar=tensor([0.0173, 0.0191, 0.0205, 0.0183, 0.0212, 0.0213, 0.0226, 0.0199], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 11:37:58,682 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=164746.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:37:58,696 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=164746.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:38:13,632 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3545, 1.2319, 1.6322, 2.4264, 1.5723, 2.1992, 0.8989, 2.1864], device='cuda:2'), covar=tensor([0.1721, 0.1538, 0.1130, 0.0790, 0.0957, 0.1360, 0.1574, 0.0519], device='cuda:2'), in_proj_covar=tensor([0.0100, 0.0115, 0.0132, 0.0164, 0.0100, 0.0135, 0.0124, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2') 2023-03-27 11:38:20,795 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.196e+01 1.395e+02 1.786e+02 2.060e+02 3.738e+02, threshold=3.572e+02, percent-clipped=2.0 2023-03-27 11:38:20,811 INFO [finetune.py:976] (2/7) Epoch 29, batch 4400, loss[loss=0.1731, simple_loss=0.2571, pruned_loss=0.04457, over 4820.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2392, pruned_loss=0.04768, over 954671.41 frames. ], batch size: 39, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:38:21,686 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.57 vs. limit=5.0 2023-03-27 11:38:33,467 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=164794.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:38:43,356 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=164807.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:38:46,288 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=164811.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:38:48,167 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5481, 1.5433, 2.4431, 2.0628, 1.8452, 4.4390, 1.6020, 1.7427], device='cuda:2'), covar=tensor([0.0989, 0.1974, 0.1054, 0.0960, 0.1670, 0.0147, 0.1528, 0.1895], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0083, 0.0073, 0.0077, 0.0092, 0.0081, 0.0086, 0.0081], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:2') 2023-03-27 11:38:54,767 INFO [finetune.py:976] (2/7) Epoch 29, batch 4450, loss[loss=0.2004, simple_loss=0.2889, pruned_loss=0.05597, over 4756.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.2417, pruned_loss=0.04795, over 955942.92 frames. ], batch size: 28, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:39:23,087 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=164859.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:39:35,487 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.10 vs. 
limit=2.0 2023-03-27 11:39:37,135 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.009e+02 1.552e+02 1.889e+02 2.217e+02 4.905e+02, threshold=3.778e+02, percent-clipped=1.0 2023-03-27 11:39:37,151 INFO [finetune.py:976] (2/7) Epoch 29, batch 4500, loss[loss=0.186, simple_loss=0.259, pruned_loss=0.05653, over 4821.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2432, pruned_loss=0.04823, over 955237.68 frames. ], batch size: 38, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:40:22,140 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=164924.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:40:22,678 INFO [finetune.py:976] (2/7) Epoch 29, batch 4550, loss[loss=0.1931, simple_loss=0.2583, pruned_loss=0.06394, over 4781.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2451, pruned_loss=0.04905, over 956601.15 frames. ], batch size: 51, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:40:28,155 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=164934.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 11:40:29,430 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2377, 2.1725, 1.7809, 2.1946, 2.1725, 1.9367, 2.5158, 2.3055], device='cuda:2'), covar=tensor([0.1389, 0.2099, 0.2977, 0.2501, 0.2597, 0.1731, 0.2952, 0.1574], device='cuda:2'), in_proj_covar=tensor([0.0191, 0.0192, 0.0238, 0.0255, 0.0252, 0.0210, 0.0217, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 11:40:55,993 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.117e+02 1.494e+02 1.764e+02 2.048e+02 3.220e+02, threshold=3.528e+02, percent-clipped=0.0 2023-03-27 11:40:56,009 INFO [finetune.py:976] (2/7) Epoch 29, batch 4600, loss[loss=0.1741, simple_loss=0.2446, pruned_loss=0.0518, over 4702.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2439, pruned_loss=0.04859, over 957436.59 frames. ], batch size: 59, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:40:59,730 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3421, 2.1208, 2.2162, 1.0002, 2.5915, 2.8583, 2.3947, 1.9897], device='cuda:2'), covar=tensor([0.0974, 0.0789, 0.0524, 0.0825, 0.0510, 0.0769, 0.0446, 0.0828], device='cuda:2'), in_proj_covar=tensor([0.0122, 0.0149, 0.0133, 0.0122, 0.0133, 0.0132, 0.0143, 0.0153], device='cuda:2'), out_proj_covar=tensor([8.9072e-05, 1.0639e-04, 9.4256e-05, 8.5812e-05, 9.3138e-05, 9.3183e-05, 1.0189e-04, 1.0934e-04], device='cuda:2') 2023-03-27 11:41:04,565 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2780, 2.1990, 1.7577, 2.2674, 2.1970, 1.9454, 2.5167, 2.3216], device='cuda:2'), covar=tensor([0.1200, 0.1924, 0.2685, 0.2253, 0.2368, 0.1663, 0.2699, 0.1500], device='cuda:2'), in_proj_covar=tensor([0.0191, 0.0192, 0.0239, 0.0255, 0.0253, 0.0211, 0.0217, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 11:41:22,214 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=165014.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:41:23,387 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=165016.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:41:29,265 INFO [finetune.py:976] (2/7) Epoch 29, batch 4650, loss[loss=0.1164, simple_loss=0.1916, pruned_loss=0.02064, over 4791.00 frames. 
], tot_loss[loss=0.1698, simple_loss=0.2422, pruned_loss=0.04873, over 956931.78 frames. ], batch size: 29, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:41:46,534 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4730, 1.6204, 1.3857, 1.4299, 1.8096, 1.7960, 1.5920, 1.3827], device='cuda:2'), covar=tensor([0.0395, 0.0276, 0.0649, 0.0341, 0.0242, 0.0469, 0.0327, 0.0449], device='cuda:2'), in_proj_covar=tensor([0.0103, 0.0107, 0.0148, 0.0112, 0.0103, 0.0118, 0.0105, 0.0115], device='cuda:2'), out_proj_covar=tensor([7.9352e-05, 8.1574e-05, 1.1518e-04, 8.4871e-05, 7.9890e-05, 8.6799e-05, 7.7735e-05, 8.7574e-05], device='cuda:2') 2023-03-27 11:41:53,745 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=165064.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:42:01,328 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.062e+02 1.478e+02 1.798e+02 2.114e+02 1.110e+03, threshold=3.597e+02, percent-clipped=3.0 2023-03-27 11:42:01,344 INFO [finetune.py:976] (2/7) Epoch 29, batch 4700, loss[loss=0.1092, simple_loss=0.1869, pruned_loss=0.01572, over 4760.00 frames. ], tot_loss[loss=0.1662, simple_loss=0.2381, pruned_loss=0.04713, over 955249.68 frames. ], batch size: 27, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:42:27,132 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=165102.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:42:43,748 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=165123.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:42:44,889 INFO [finetune.py:976] (2/7) Epoch 29, batch 4750, loss[loss=0.1602, simple_loss=0.2358, pruned_loss=0.04233, over 4836.00 frames. ], tot_loss[loss=0.1641, simple_loss=0.2359, pruned_loss=0.04613, over 954751.36 frames. ], batch size: 47, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:43:21,524 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.139e+02 1.579e+02 1.786e+02 2.031e+02 3.721e+02, threshold=3.572e+02, percent-clipped=1.0 2023-03-27 11:43:21,540 INFO [finetune.py:976] (2/7) Epoch 29, batch 4800, loss[loss=0.1915, simple_loss=0.2699, pruned_loss=0.05658, over 4832.00 frames. ], tot_loss[loss=0.1669, simple_loss=0.2386, pruned_loss=0.04762, over 953750.77 frames. ], batch size: 39, lr: 2.82e-03, grad_scale: 32.0 2023-03-27 11:43:26,991 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2197, 2.0861, 2.2325, 1.3959, 2.2504, 2.3013, 2.2298, 1.9218], device='cuda:2'), covar=tensor([0.0572, 0.0689, 0.0606, 0.0845, 0.0852, 0.0544, 0.0556, 0.1048], device='cuda:2'), in_proj_covar=tensor([0.0133, 0.0139, 0.0142, 0.0119, 0.0130, 0.0141, 0.0141, 0.0164], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2') 2023-03-27 11:43:27,571 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=165184.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:43:38,544 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.56 vs. limit=2.0 2023-03-27 11:43:53,932 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=165224.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:43:54,433 INFO [finetune.py:976] (2/7) Epoch 29, batch 4850, loss[loss=0.2036, simple_loss=0.266, pruned_loss=0.07063, over 4926.00 frames. ], tot_loss[loss=0.1689, simple_loss=0.2415, pruned_loss=0.04815, over 954439.12 frames. 
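
grad_scale is the dynamic fp16 loss-scaling factor used with use_fp16 training: it is halved when scaled gradients overflow and grown back after a long enough run of finite steps, which matches its movement in this excerpt (32.0 down to 16.0 near batch 2800, back up to 32.0 at batch 4800). PyTorch's stock scaler has the same mechanics, sketched below; whether icefall uses this class or its own variant is not shown in the log:

    import torch

    scaler = torch.cuda.amp.GradScaler(init_scale=32.0,
                                       backoff_factor=0.5,   # halve on overflow
                                       growth_factor=2.0,    # double after a
                                       growth_interval=2000) # run of clean steps
    # Typical step:
    #   with torch.cuda.amp.autocast():
    #       loss = compute_loss(model, batch)
    #   scaler.scale(loss).backward()
    #   scaler.step(optimizer)   # skipped internally on inf/nan gradients
    #   scaler.update()          # adjusts the scale, i.e. grad_scale here
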
2023-03-27 11:43:58,207 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7164, 1.5816, 1.5755, 1.7099, 1.3688, 4.1302, 1.4542, 1.8614], device='cuda:2'), covar=tensor([0.3292, 0.2608, 0.2191, 0.2300, 0.1576, 0.0154, 0.2687, 0.1268], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0117, 0.0121, 0.0125, 0.0114, 0.0096, 0.0094, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 11:44:00,488 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=165234.0, num_to_drop=1, layers_to_drop={2}
2023-03-27 11:44:24,478 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=165272.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:44:26,733 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.028e+02 1.742e+02 1.982e+02 2.317e+02 4.079e+02, threshold=3.965e+02, percent-clipped=3.0
2023-03-27 11:44:26,749 INFO [finetune.py:976] (2/7) Epoch 29, batch 4900, loss[loss=0.1801, simple_loss=0.2486, pruned_loss=0.0558, over 3901.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.2437, pruned_loss=0.04904, over 954081.86 frames. ], batch size: 16, lr: 2.82e-03, grad_scale: 32.0
2023-03-27 11:44:41,265 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=165282.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:44:43,161 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.0048, 0.9774, 0.9521, 1.0445, 1.1510, 1.1215, 1.0195, 0.9294], device='cuda:2'), covar=tensor([0.0440, 0.0328, 0.0694, 0.0340, 0.0305, 0.0516, 0.0358, 0.0491], device='cuda:2'), in_proj_covar=tensor([0.0103, 0.0107, 0.0149, 0.0112, 0.0104, 0.0119, 0.0105, 0.0116], device='cuda:2'), out_proj_covar=tensor([7.9600e-05, 8.1856e-05, 1.1556e-04, 8.5139e-05, 8.0183e-05, 8.7354e-05, 7.8119e-05, 8.8121e-05], device='cuda:2')
2023-03-27 11:45:00,512 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6813, 1.6213, 1.3749, 1.8439, 1.9601, 1.8383, 1.4561, 1.4225], device='cuda:2'), covar=tensor([0.2054, 0.1856, 0.1823, 0.1445, 0.1631, 0.1122, 0.2355, 0.1790], device='cuda:2'), in_proj_covar=tensor([0.0247, 0.0212, 0.0216, 0.0199, 0.0246, 0.0191, 0.0218, 0.0206], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 11:45:03,722 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=165314.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:45:07,942 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.86 vs. limit=2.0
2023-03-27 11:45:16,149 INFO [finetune.py:976] (2/7) Epoch 29, batch 4950, loss[loss=0.197, simple_loss=0.2681, pruned_loss=0.06294, over 4909.00 frames. ], tot_loss[loss=0.172, simple_loss=0.245, pruned_loss=0.04952, over 955417.28 frames. ], batch size: 42, lr: 2.82e-03, grad_scale: 32.0
2023-03-27 11:45:48,520 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=165362.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:45:51,014 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4288, 2.2250, 1.7444, 0.8125, 1.9046, 1.8964, 1.7820, 2.1123], device='cuda:2'), covar=tensor([0.0805, 0.0703, 0.1383, 0.1975, 0.1343, 0.2251, 0.2015, 0.0824], device='cuda:2'), in_proj_covar=tensor([0.0171, 0.0189, 0.0203, 0.0180, 0.0209, 0.0210, 0.0222, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 11:45:56,866 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.088e+01 1.449e+02 1.743e+02 2.087e+02 3.629e+02, threshold=3.486e+02, percent-clipped=0.0
2023-03-27 11:45:56,882 INFO [finetune.py:976] (2/7) Epoch 29, batch 5000, loss[loss=0.1815, simple_loss=0.2486, pruned_loss=0.05718, over 4729.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2437, pruned_loss=0.04891, over 953775.16 frames. ], batch size: 23, lr: 2.82e-03, grad_scale: 32.0
2023-03-27 11:46:05,089 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7479, 1.5894, 2.4466, 2.0187, 1.7778, 4.1456, 1.6449, 1.6657], device='cuda:2'), covar=tensor([0.0914, 0.1778, 0.1025, 0.0873, 0.1535, 0.0186, 0.1402, 0.1776], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0083, 0.0073, 0.0076, 0.0091, 0.0081, 0.0085, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:2')
2023-03-27 11:46:15,384 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=165402.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:46:30,165 INFO [finetune.py:976] (2/7) Epoch 29, batch 5050, loss[loss=0.1761, simple_loss=0.2535, pruned_loss=0.04935, over 4827.00 frames. ], tot_loss[loss=0.1685, simple_loss=0.241, pruned_loss=0.04797, over 954404.06 frames. ], batch size: 39, lr: 2.82e-03, grad_scale: 32.0
2023-03-27 11:46:47,814 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=165450.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:47:03,404 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.069e+02 1.397e+02 1.700e+02 2.095e+02 3.650e+02, threshold=3.400e+02, percent-clipped=1.0
2023-03-27 11:47:03,420 INFO [finetune.py:976] (2/7) Epoch 29, batch 5100, loss[loss=0.1522, simple_loss=0.2185, pruned_loss=0.04292, over 4836.00 frames. ], tot_loss[loss=0.1658, simple_loss=0.238, pruned_loss=0.04682, over 955260.20 frames. ], batch size: 33, lr: 2.82e-03, grad_scale: 32.0
2023-03-27 11:47:06,335 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=165479.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:47:09,429 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8270, 1.7631, 1.5422, 2.0465, 2.0994, 1.9479, 1.3698, 1.5432], device='cuda:2'), covar=tensor([0.2197, 0.1937, 0.1946, 0.1495, 0.1615, 0.1148, 0.2484, 0.1952], device='cuda:2'), in_proj_covar=tensor([0.0246, 0.0212, 0.0217, 0.0199, 0.0246, 0.0191, 0.0218, 0.0207], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 11:47:46,454 INFO [finetune.py:976] (2/7) Epoch 29, batch 5150, loss[loss=0.2197, simple_loss=0.2815, pruned_loss=0.07894, over 4850.00 frames. ], tot_loss[loss=0.1668, simple_loss=0.2388, pruned_loss=0.04746, over 955546.70 frames. ], batch size: 47, lr: 2.82e-03, grad_scale: 32.0
2023-03-27 11:47:51,414 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.71 vs. limit=5.0
2023-03-27 11:48:04,766 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=165551.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:48:06,605 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5311, 2.3842, 1.9507, 0.8984, 2.1347, 1.9373, 1.8655, 2.1506], device='cuda:2'), covar=tensor([0.0765, 0.0740, 0.1456, 0.2107, 0.1332, 0.2214, 0.1929, 0.0938], device='cuda:2'), in_proj_covar=tensor([0.0172, 0.0190, 0.0203, 0.0181, 0.0209, 0.0211, 0.0224, 0.0198], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 11:48:15,905 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0
2023-03-27 11:48:20,242 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.818e+01 1.606e+02 1.856e+02 2.249e+02 3.990e+02, threshold=3.713e+02, percent-clipped=3.0
2023-03-27 11:48:20,258 INFO [finetune.py:976] (2/7) Epoch 29, batch 5200, loss[loss=0.1657, simple_loss=0.2419, pruned_loss=0.04481, over 4895.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2411, pruned_loss=0.04762, over 952635.54 frames. ], batch size: 35, lr: 2.82e-03, grad_scale: 32.0
2023-03-27 11:48:33,285 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1860, 2.2552, 1.6901, 2.2414, 2.0504, 1.7924, 2.4675, 2.2285], device='cuda:2'), covar=tensor([0.1333, 0.1914, 0.2792, 0.2528, 0.2681, 0.1776, 0.3099, 0.1536], device='cuda:2'), in_proj_covar=tensor([0.0191, 0.0191, 0.0238, 0.0254, 0.0252, 0.0210, 0.0216, 0.0205], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 11:48:37,391 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0
2023-03-27 11:48:45,780 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=165612.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:48:47,551 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.9024, 4.4348, 4.1927, 2.3349, 4.6030, 3.4727, 0.9519, 3.3044], device='cuda:2'), covar=tensor([0.2316, 0.1481, 0.1382, 0.3008, 0.0693, 0.0887, 0.4313, 0.1246], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0179, 0.0160, 0.0130, 0.0163, 0.0124, 0.0148, 0.0126], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-27 11:48:53,937 INFO [finetune.py:976] (2/7) Epoch 29, batch 5250, loss[loss=0.2043, simple_loss=0.2629, pruned_loss=0.0729, over 4788.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2435, pruned_loss=0.0484, over 951799.40 frames. ], batch size: 51, lr: 2.82e-03, grad_scale: 32.0
2023-03-27 11:49:26,775 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.319e+01 1.511e+02 1.772e+02 2.250e+02 3.554e+02, threshold=3.544e+02, percent-clipped=0.0
2023-03-27 11:49:26,791 INFO [finetune.py:976] (2/7) Epoch 29, batch 5300, loss[loss=0.203, simple_loss=0.2751, pruned_loss=0.06539, over 4770.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2443, pruned_loss=0.04856, over 951559.14 frames. ], batch size: 26, lr: 2.82e-03, grad_scale: 32.0
2023-03-27 11:50:10,068 INFO [finetune.py:976] (2/7) Epoch 29, batch 5350, loss[loss=0.1564, simple_loss=0.2309, pruned_loss=0.041, over 4703.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.244, pruned_loss=0.04835, over 951866.97 frames. ], batch size: 23, lr: 2.82e-03, grad_scale: 32.0
2023-03-27 11:50:14,347 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7354, 1.7215, 1.5615, 1.6546, 2.0889, 2.1029, 1.7864, 1.5367], device='cuda:2'), covar=tensor([0.0426, 0.0327, 0.0656, 0.0331, 0.0236, 0.0461, 0.0316, 0.0439], device='cuda:2'), in_proj_covar=tensor([0.0104, 0.0107, 0.0149, 0.0112, 0.0104, 0.0119, 0.0105, 0.0117], device='cuda:2'), out_proj_covar=tensor([8.0285e-05, 8.2035e-05, 1.1608e-04, 8.5210e-05, 8.0218e-05, 8.7550e-05, 7.8278e-05, 8.8689e-05], device='cuda:2')
2023-03-27 11:50:33,771 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=165750.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:50:52,207 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0
2023-03-27 11:50:59,727 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.046e+02 1.485e+02 1.788e+02 2.126e+02 3.231e+02, threshold=3.576e+02, percent-clipped=0.0
2023-03-27 11:50:59,743 INFO [finetune.py:976] (2/7) Epoch 29, batch 5400, loss[loss=0.1298, simple_loss=0.2116, pruned_loss=0.02399, over 4760.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.2411, pruned_loss=0.04736, over 952587.21 frames. ], batch size: 26, lr: 2.82e-03, grad_scale: 32.0
2023-03-27 11:51:02,243 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=165779.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:51:17,889 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=165803.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:51:24,165 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=165811.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:51:33,003 INFO [finetune.py:976] (2/7) Epoch 29, batch 5450, loss[loss=0.144, simple_loss=0.2158, pruned_loss=0.03612, over 4718.00 frames. ], tot_loss[loss=0.1654, simple_loss=0.238, pruned_loss=0.04634, over 953922.53 frames. ], batch size: 23, lr: 2.82e-03, grad_scale: 32.0
2023-03-27 11:51:34,301 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=165827.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:51:59,789 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=165864.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:52:06,351 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.128e+01 1.456e+02 1.721e+02 2.001e+02 5.868e+02, threshold=3.442e+02, percent-clipped=2.0
2023-03-27 11:52:06,367 INFO [finetune.py:976] (2/7) Epoch 29, batch 5500, loss[loss=0.1229, simple_loss=0.1871, pruned_loss=0.02932, over 4365.00 frames. ], tot_loss[loss=0.1634, simple_loss=0.2351, pruned_loss=0.04588, over 954990.14 frames. ], batch size: 19, lr: 2.82e-03, grad_scale: 32.0
2023-03-27 11:52:27,320 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=165907.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:52:40,105 INFO [finetune.py:976] (2/7) Epoch 29, batch 5550, loss[loss=0.2132, simple_loss=0.2855, pruned_loss=0.07049, over 4899.00 frames. ], tot_loss[loss=0.1669, simple_loss=0.2384, pruned_loss=0.04776, over 953054.88 frames. ], batch size: 43, lr: 2.82e-03, grad_scale: 32.0
2023-03-27 11:52:54,635 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0
2023-03-27 11:53:22,712 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.003e+02 1.570e+02 1.926e+02 2.203e+02 4.633e+02, threshold=3.853e+02, percent-clipped=1.0
2023-03-27 11:53:22,728 INFO [finetune.py:976] (2/7) Epoch 29, batch 5600, loss[loss=0.1757, simple_loss=0.2547, pruned_loss=0.04836, over 4760.00 frames. ], tot_loss[loss=0.1692, simple_loss=0.2416, pruned_loss=0.04843, over 954253.07 frames. ], batch size: 54, lr: 2.82e-03, grad_scale: 32.0
2023-03-27 11:53:25,754 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.0925, 1.2348, 1.3644, 1.2569, 1.3679, 2.4481, 1.2215, 1.3452], device='cuda:2'), covar=tensor([0.1031, 0.1963, 0.1040, 0.0992, 0.1772, 0.0322, 0.1595, 0.1987], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0083, 0.0073, 0.0076, 0.0091, 0.0081, 0.0086, 0.0081], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:2')
2023-03-27 11:53:54,008 INFO [finetune.py:976] (2/7) Epoch 29, batch 5650, loss[loss=0.1471, simple_loss=0.23, pruned_loss=0.03212, over 4836.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2437, pruned_loss=0.04835, over 954636.45 frames. ], batch size: 49, lr: 2.82e-03, grad_scale: 32.0
2023-03-27 11:53:57,032 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4670, 2.3555, 1.9891, 2.8050, 2.5137, 2.0995, 3.0511, 2.4786], device='cuda:2'), covar=tensor([0.1430, 0.2304, 0.2999, 0.2246, 0.2297, 0.1787, 0.2736, 0.1835], device='cuda:2'), in_proj_covar=tensor([0.0191, 0.0191, 0.0238, 0.0253, 0.0251, 0.0210, 0.0216, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 11:54:11,186 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=166054.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 11:54:23,584 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.094e+01 1.435e+02 1.719e+02 2.053e+02 3.744e+02, threshold=3.437e+02, percent-clipped=0.0
2023-03-27 11:54:23,599 INFO [finetune.py:976] (2/7) Epoch 29, batch 5700, loss[loss=0.1062, simple_loss=0.1726, pruned_loss=0.01986, over 4164.00 frames. ], tot_loss[loss=0.167, simple_loss=0.2395, pruned_loss=0.04724, over 938090.00 frames. ], batch size: 18, lr: 2.82e-03, grad_scale: 32.0
2023-03-27 11:54:50,334 INFO [finetune.py:976] (2/7) Epoch 30, batch 0, loss[loss=0.2053, simple_loss=0.2718, pruned_loss=0.06939, over 4805.00 frames. ], tot_loss[loss=0.2053, simple_loss=0.2718, pruned_loss=0.06939, over 4805.00 frames. ], batch size: 39, lr: 2.82e-03, grad_scale: 32.0
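
The epoch rollover above also shows how tot_loss is maintained: at Epoch 30, batch 0 it equals the single batch just processed (4805 frames), and over the following batches its frame count climbs back toward a plateau near 9.5e5 frames, roughly 200 batches' worth. That is the signature of an exponentially decayed, frame-weighted running average that is reset each epoch; a minimal tracker with those mechanics (the decay constant is inferred from the plateau, not stated in the log):

    class RunningLoss:
        def __init__(self, decay: float = 1.0 - 1.0 / 200):
            self.decay = decay
            self.weighted_loss = 0.0  # decayed sum of loss * frames
            self.frames = 0.0         # decayed sum of frames

        def update(self, loss: float, frames: float) -> float:
            self.weighted_loss = self.weighted_loss * self.decay + loss * frames
            self.frames = self.frames * self.decay + frames
            return self.weighted_loss / self.frames  # reported as tot_loss
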
2023-03-27 11:54:50,335 INFO [finetune.py:1001] (2/7) Computing validation loss
2023-03-27 11:54:52,290 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8454, 1.1274, 1.9827, 1.9078, 1.7736, 1.6755, 1.7778, 1.9168], device='cuda:2'), covar=tensor([0.4203, 0.4216, 0.3559, 0.3978, 0.5179, 0.4337, 0.4914, 0.3166], device='cuda:2'), in_proj_covar=tensor([0.0270, 0.0250, 0.0269, 0.0301, 0.0300, 0.0277, 0.0306, 0.0255], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 11:54:57,593 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1582, 1.8936, 1.7774, 1.7784, 1.8331, 1.8394, 1.8680, 2.5364], device='cuda:2'), covar=tensor([0.3488, 0.4173, 0.3232, 0.3688, 0.4248, 0.2498, 0.3821, 0.1646], device='cuda:2'), in_proj_covar=tensor([0.0290, 0.0264, 0.0239, 0.0274, 0.0261, 0.0232, 0.0259, 0.0239], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 11:55:06,737 INFO [finetune.py:1010] (2/7) Epoch 30, validation: loss=0.1598, simple_loss=0.2264, pruned_loss=0.04658, over 2265189.00 frames.
2023-03-27 11:55:06,737 INFO [finetune.py:1011] (2/7) Maximum memory allocated so far is 6366MB
2023-03-27 11:55:11,871 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=166106.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:55:21,144 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=166115.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 11:55:43,741 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=166147.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:55:52,370 INFO [finetune.py:976] (2/7) Epoch 30, batch 50, loss[loss=0.1754, simple_loss=0.2496, pruned_loss=0.0506, over 4863.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2414, pruned_loss=0.0484, over 214904.39 frames. ], batch size: 34, lr: 2.81e-03, grad_scale: 32.0
2023-03-27 11:56:02,558 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=166159.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:56:16,047 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.060e+01 1.403e+02 1.686e+02 1.983e+02 3.736e+02, threshold=3.372e+02, percent-clipped=1.0
2023-03-27 11:56:16,166 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4005, 1.4045, 1.8527, 1.7563, 1.6392, 3.3615, 1.3798, 1.5062], device='cuda:2'), covar=tensor([0.0939, 0.1762, 0.1021, 0.0897, 0.1485, 0.0218, 0.1461, 0.1765], device='cuda:2'), in_proj_covar=tensor([0.0074, 0.0082, 0.0072, 0.0076, 0.0090, 0.0080, 0.0085, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:2')
2023-03-27 11:56:17,489 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0
2023-03-27 11:56:28,089 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4638, 1.3608, 1.3910, 1.3285, 0.7449, 2.2646, 0.7697, 1.1214], device='cuda:2'), covar=tensor([0.3251, 0.2581, 0.2252, 0.2572, 0.2058, 0.0370, 0.2827, 0.1431], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0124, 0.0113, 0.0095, 0.0094, 0.0095], device='cuda:2'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:2')
2023-03-27 11:56:34,672 INFO [finetune.py:976] (2/7) Epoch 30, batch 100, loss[loss=0.1566, simple_loss=0.2244, pruned_loss=0.0444, over 4762.00 frames. ], tot_loss[loss=0.1637, simple_loss=0.2354, pruned_loss=0.04599, over 377376.68 frames. ], batch size: 28, lr: 2.81e-03, grad_scale: 32.0
2023-03-27 11:56:38,188 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=166207.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:56:39,341 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=166208.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:57:02,479 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2660, 2.9638, 2.8101, 1.3170, 3.0110, 2.2388, 0.7814, 1.9835], device='cuda:2'), covar=tensor([0.2517, 0.2175, 0.2028, 0.3926, 0.1503, 0.1275, 0.4407, 0.1841], device='cuda:2'), in_proj_covar=tensor([0.0152, 0.0179, 0.0160, 0.0131, 0.0163, 0.0124, 0.0149, 0.0126], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-27 11:57:07,203 INFO [finetune.py:976] (2/7) Epoch 30, batch 150, loss[loss=0.1667, simple_loss=0.2299, pruned_loss=0.05181, over 4886.00 frames. ], tot_loss[loss=0.163, simple_loss=0.2335, pruned_loss=0.0463, over 507800.37 frames. ], batch size: 35, lr: 2.81e-03, grad_scale: 32.0
2023-03-27 11:57:08,104 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.57 vs. limit=2.0
2023-03-27 11:57:08,956 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=166255.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:57:21,825 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.042e+02 1.442e+02 1.804e+02 2.095e+02 4.016e+02, threshold=3.609e+02, percent-clipped=1.0
2023-03-27 11:57:37,654 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.07 vs. limit=5.0
2023-03-27 11:57:39,808 INFO [finetune.py:976] (2/7) Epoch 30, batch 200, loss[loss=0.2367, simple_loss=0.3118, pruned_loss=0.08076, over 4817.00 frames. ], tot_loss[loss=0.1633, simple_loss=0.2336, pruned_loss=0.04654, over 606141.85 frames. ], batch size: 40, lr: 2.81e-03, grad_scale: 32.0
2023-03-27 11:57:42,823 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=166307.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:58:00,752 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0
2023-03-27 11:58:14,802 INFO [finetune.py:976] (2/7) Epoch 30, batch 250, loss[loss=0.1753, simple_loss=0.2681, pruned_loss=0.04129, over 4852.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2384, pruned_loss=0.04803, over 685707.09 frames. ], batch size: 47, lr: 2.81e-03, grad_scale: 32.0
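
Across all of these records the batch size swings widely (16 to 65) while the per-batch frame count stays close to 4800: batches are packed by total audio duration rather than by a fixed number of utterances, so a batch of short cuts simply contains more of them. A toy version of duration-capped batching; the cap value below is illustrative, not read from this excerpt:

    def batches_by_duration(cuts, max_duration=200.0):
        # cuts: iterable of objects with a .duration attribute in seconds.
        batch, seconds = [], 0.0
        for cut in cuts:
            if batch and seconds + cut.duration > max_duration:
                yield batch
                batch, seconds = [], 0.0
            batch.append(cut)
            seconds += cut.duration
        if batch:
            yield batch
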
], batch size: 47, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 11:58:19,108 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([0.1721, 1.3292, 1.3457, 0.7297, 1.2501, 1.5607, 1.6093, 1.2461], device='cuda:2'), covar=tensor([0.0889, 0.0605, 0.0563, 0.0492, 0.0552, 0.0592, 0.0398, 0.0698], device='cuda:2'), in_proj_covar=tensor([0.0121, 0.0147, 0.0131, 0.0121, 0.0131, 0.0130, 0.0141, 0.0151], device='cuda:2'), out_proj_covar=tensor([8.8058e-05, 1.0484e-04, 9.2994e-05, 8.4829e-05, 9.1985e-05, 9.2202e-05, 1.0038e-04, 1.0789e-04], device='cuda:2') 2023-03-27 11:58:26,042 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=166368.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:58:26,668 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3627, 1.2999, 1.2235, 1.3181, 1.5660, 1.5163, 1.3856, 1.2067], device='cuda:2'), covar=tensor([0.0406, 0.0321, 0.0682, 0.0355, 0.0278, 0.0540, 0.0358, 0.0488], device='cuda:2'), in_proj_covar=tensor([0.0104, 0.0107, 0.0149, 0.0112, 0.0104, 0.0119, 0.0105, 0.0117], device='cuda:2'), out_proj_covar=tensor([8.0245e-05, 8.2049e-05, 1.1591e-04, 8.4978e-05, 8.0031e-05, 8.7710e-05, 7.7878e-05, 8.8393e-05], device='cuda:2') 2023-03-27 11:58:30,118 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.080e+02 1.508e+02 1.830e+02 2.341e+02 4.348e+02, threshold=3.661e+02, percent-clipped=2.0 2023-03-27 11:58:48,138 INFO [finetune.py:976] (2/7) Epoch 30, batch 300, loss[loss=0.1348, simple_loss=0.2087, pruned_loss=0.03043, over 4758.00 frames. ], tot_loss[loss=0.17, simple_loss=0.2424, pruned_loss=0.04879, over 747208.18 frames. ], batch size: 26, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 11:58:50,095 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=166406.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:58:52,515 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=166410.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 11:59:21,479 INFO [finetune.py:976] (2/7) Epoch 30, batch 350, loss[loss=0.1494, simple_loss=0.2269, pruned_loss=0.03595, over 4784.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2435, pruned_loss=0.04889, over 794133.23 frames. ], batch size: 29, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 11:59:22,660 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=166454.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:59:25,704 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=166459.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:59:37,685 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.476e+02 1.813e+02 2.161e+02 3.890e+02, threshold=3.626e+02, percent-clipped=2.0 2023-03-27 11:59:41,456 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=166481.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:59:55,115 INFO [finetune.py:976] (2/7) Epoch 30, batch 400, loss[loss=0.1337, simple_loss=0.2116, pruned_loss=0.02791, over 4828.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2442, pruned_loss=0.04863, over 828990.92 frames. 
2023-03-27 11:59:55,184 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=166503.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:59:58,015 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=166507.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:59:59,264 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=166509.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 11:59:59,337 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-27 12:00:20,208 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=166531.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:00:35,353 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=166542.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:00:45,989 INFO [finetune.py:976] (2/7) Epoch 30, batch 450, loss[loss=0.179, simple_loss=0.2469, pruned_loss=0.05559, over 4763.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2422, pruned_loss=0.04797, over 855452.16 frames. ], batch size: 28, lr: 2.81e-03, grad_scale: 32.0
2023-03-27 12:00:56,915 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=166570.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:01:00,792 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.283e+01 1.415e+02 1.694e+02 2.083e+02 4.695e+02, threshold=3.388e+02, percent-clipped=2.0
2023-03-27 12:01:20,455 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.6803, 3.4405, 3.2480, 1.5866, 3.4442, 2.7789, 0.9547, 2.5416], device='cuda:2'), covar=tensor([0.2108, 0.2377, 0.1714, 0.3707, 0.1202, 0.0997, 0.4258, 0.1594], device='cuda:2'), in_proj_covar=tensor([0.0151, 0.0178, 0.0160, 0.0130, 0.0162, 0.0124, 0.0148, 0.0126], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:2')
2023-03-27 12:01:22,348 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=166592.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:01:32,867 INFO [finetune.py:976] (2/7) Epoch 30, batch 500, loss[loss=0.1546, simple_loss=0.2183, pruned_loss=0.0454, over 4872.00 frames. ], tot_loss[loss=0.1662, simple_loss=0.2389, pruned_loss=0.04675, over 877185.86 frames. ], batch size: 49, lr: 2.81e-03, grad_scale: 32.0
2023-03-27 12:02:06,080 INFO [finetune.py:976] (2/7) Epoch 30, batch 550, loss[loss=0.1362, simple_loss=0.2171, pruned_loss=0.02762, over 4756.00 frames. ], tot_loss[loss=0.1643, simple_loss=0.2368, pruned_loss=0.04597, over 896669.92 frames. ], batch size: 27, lr: 2.81e-03, grad_scale: 32.0
2023-03-27 12:02:12,181 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=166663.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:02:20,414 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 7.871e+01 1.519e+02 1.786e+02 2.255e+02 3.834e+02, threshold=3.573e+02, percent-clipped=3.0
2023-03-27 12:02:32,243 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6029, 1.6365, 2.1128, 1.7975, 1.8174, 3.0761, 1.5109, 1.7476], device='cuda:2'), covar=tensor([0.0972, 0.1527, 0.1307, 0.0846, 0.1257, 0.0301, 0.1283, 0.1423], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0081, 0.0085, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:2')
2023-03-27 12:02:39,362 INFO [finetune.py:976] (2/7) Epoch 30, batch 600, loss[loss=0.226, simple_loss=0.2869, pruned_loss=0.08251, over 4816.00 frames. ], tot_loss[loss=0.1658, simple_loss=0.2383, pruned_loss=0.04662, over 908845.57 frames. ], batch size: 45, lr: 2.81e-03, grad_scale: 32.0
2023-03-27 12:02:43,709 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=166710.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 12:03:00,926 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=166735.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:03:12,732 INFO [finetune.py:976] (2/7) Epoch 30, batch 650, loss[loss=0.1887, simple_loss=0.2721, pruned_loss=0.05264, over 4828.00 frames. ], tot_loss[loss=0.1668, simple_loss=0.2397, pruned_loss=0.04693, over 917182.99 frames. ], batch size: 33, lr: 2.81e-03, grad_scale: 32.0
2023-03-27 12:03:15,792 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=166758.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 12:03:19,476 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=166764.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:03:26,540 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.144e+02 1.552e+02 1.821e+02 2.218e+02 3.611e+02, threshold=3.642e+02, percent-clipped=1.0
2023-03-27 12:03:41,555 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=166796.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:03:45,812 INFO [finetune.py:976] (2/7) Epoch 30, batch 700, loss[loss=0.1749, simple_loss=0.2608, pruned_loss=0.04453, over 4904.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2423, pruned_loss=0.04759, over 926031.62 frames. ], batch size: 35, lr: 2.81e-03, grad_scale: 32.0
2023-03-27 12:03:45,899 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=166803.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:04:00,331 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=166825.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:04:08,129 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=166837.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:04:18,068 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=166851.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:04:19,223 INFO [finetune.py:976] (2/7) Epoch 30, batch 750, loss[loss=0.1528, simple_loss=0.24, pruned_loss=0.03284, over 4722.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2435, pruned_loss=0.04784, over 933122.49 frames. ], batch size: 59, lr: 2.81e-03, grad_scale: 32.0
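The zipformer.py:1188 records trace per-layer stochastic depth: each encoder layer logs its warmup window (warmup_begin/warmup_end, in batches) and whether it is skipped on this batch (num_to_drop, layers_to_drop). At batch_count near 166k the warmup windows are long past, yet num_to_drop=1 still appears occasionally, so a small drop probability evidently persists after warmup. A hedged sketch of such a schedule; the probabilities and the linear decay are assumptions for illustration, not the actual zipformer.py logic:

import random

def layers_to_drop(batch_count, num_layers, warmup_begin, warmup_end,
                   initial_p=0.5, final_p=0.05):
    # Assumed schedule: drop probability decays linearly over the warmup
    # window, then stays at a small floor (which would explain the rare
    # num_to_drop=1 records long after warmup has finished).
    if batch_count < warmup_begin:
        p = initial_p
    elif batch_count < warmup_end:
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        p = initial_p + frac * (final_p - initial_p)
    else:
        p = final_p
    return {i for i in range(num_layers) if random.random() < p}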
2023-03-27 12:04:24,103 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8339, 1.6545, 1.5034, 1.4094, 1.5877, 1.5789, 1.6228, 2.1840], device='cuda:2'), covar=tensor([0.3486, 0.3650, 0.3053, 0.3204, 0.3785, 0.2294, 0.3160, 0.1784], device='cuda:2'), in_proj_covar=tensor([0.0291, 0.0266, 0.0241, 0.0277, 0.0263, 0.0234, 0.0262, 0.0241], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 12:04:27,145 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=166865.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:04:33,233 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.605e+01 1.409e+02 1.752e+02 2.054e+02 3.389e+02, threshold=3.504e+02, percent-clipped=0.0
2023-03-27 12:04:41,697 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=166887.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:04:50,365 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5354, 1.3732, 1.9354, 1.7637, 1.5316, 3.2902, 1.2824, 1.4739], device='cuda:2'), covar=tensor([0.0973, 0.1890, 0.1092, 0.0945, 0.1629, 0.0246, 0.1559, 0.1897], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0083, 0.0073, 0.0076, 0.0091, 0.0081, 0.0085, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:2')
2023-03-27 12:04:52,671 INFO [finetune.py:976] (2/7) Epoch 30, batch 800, loss[loss=0.1548, simple_loss=0.2295, pruned_loss=0.04008, over 4931.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2448, pruned_loss=0.04803, over 939656.35 frames. ], batch size: 33, lr: 2.81e-03, grad_scale: 32.0
2023-03-27 12:05:09,504 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=166929.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:05:12,491 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3806, 1.2707, 1.7184, 2.4236, 1.6342, 2.1733, 1.0742, 2.1708], device='cuda:2'), covar=tensor([0.1740, 0.1371, 0.1013, 0.0749, 0.0897, 0.1416, 0.1458, 0.0575], device='cuda:2'), in_proj_covar=tensor([0.0099, 0.0115, 0.0133, 0.0164, 0.0100, 0.0134, 0.0124, 0.0101], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-27 12:05:32,101 INFO [finetune.py:976] (2/7) Epoch 30, batch 850, loss[loss=0.1654, simple_loss=0.2359, pruned_loss=0.04748, over 4777.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2426, pruned_loss=0.04779, over 943081.15 frames. ], batch size: 29, lr: 2.81e-03, grad_scale: 32.0
2023-03-27 12:05:42,349 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=166963.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:05:53,394 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0668, 1.9922, 1.7847, 1.9697, 1.8925, 1.8466, 1.9134, 2.6091], device='cuda:2'), covar=tensor([0.3501, 0.3812, 0.3068, 0.3424, 0.3743, 0.2347, 0.3300, 0.1435], device='cuda:2'), in_proj_covar=tensor([0.0291, 0.0266, 0.0241, 0.0276, 0.0263, 0.0234, 0.0261, 0.0240], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 12:05:53,457 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0
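The scaling.py:679 records compare a whitening metric of a module's activations against a limit (2.0 for the 8-group modules, 5.0 for the 1-group, 384-channel ones); values near 1.0 indicate an approximately white, i.e. isotropic, feature covariance. One plausible form of such a metric, equal to 1.0 exactly when all covariance eigenvalues are equal and growing as the spectrum spreads; the exact statistic computed by scaling.py is an assumption here:

import torch

def whitening_metric(x, num_groups):
    # x: (num_frames, num_channels); channels are split into num_groups
    # groups and the covariance is estimated per group.
    n, c = x.shape
    xg = x.reshape(n, num_groups, c // num_groups).transpose(0, 1)
    xg = xg - xg.mean(dim=1, keepdim=True)
    cov = xg.transpose(1, 2) @ xg / n            # (num_groups, d, d)
    eigs = torch.linalg.eigvalsh(cov)            # real eigenvalues per group
    # E[lambda^2] / (E[lambda])^2 >= 1, with equality iff all eigenvalues
    # are equal (perfectly white); averaged over groups.
    ratio = (eigs ** 2).mean(dim=1) / eigs.mean(dim=1).clamp(min=1e-20) ** 2
    return ratio.mean()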
2023-03-27 12:05:53,842 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.050e+02 1.507e+02 1.769e+02 2.105e+02 4.355e+02, threshold=3.539e+02, percent-clipped=2.0
2023-03-27 12:05:54,620 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.6464, 1.5490, 1.0789, 0.2824, 1.2936, 1.4997, 1.5399, 1.4889], device='cuda:2'), covar=tensor([0.1031, 0.0899, 0.1460, 0.2028, 0.1429, 0.2643, 0.2390, 0.0899], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0188, 0.0201, 0.0180, 0.0208, 0.0209, 0.0223, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 12:06:07,544 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=166990.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:06:16,006 INFO [finetune.py:976] (2/7) Epoch 30, batch 900, loss[loss=0.1353, simple_loss=0.2135, pruned_loss=0.02853, over 4804.00 frames. ], tot_loss[loss=0.1677, simple_loss=0.2404, pruned_loss=0.04748, over 947536.73 frames. ], batch size: 25, lr: 2.81e-03, grad_scale: 32.0
2023-03-27 12:06:23,180 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=167011.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:06:23,235 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9300, 1.8314, 2.0068, 1.3938, 1.8265, 2.0006, 1.9722, 1.5804], device='cuda:2'), covar=tensor([0.0500, 0.0541, 0.0606, 0.0846, 0.1167, 0.0567, 0.0503, 0.1093], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0138, 0.0141, 0.0119, 0.0129, 0.0140, 0.0140, 0.0163], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 12:06:59,310 INFO [finetune.py:976] (2/7) Epoch 30, batch 950, loss[loss=0.1777, simple_loss=0.2424, pruned_loss=0.05655, over 4831.00 frames. ], tot_loss[loss=0.1669, simple_loss=0.2387, pruned_loss=0.04748, over 946692.01 frames. ], batch size: 30, lr: 2.81e-03, grad_scale: 32.0
2023-03-27 12:07:03,008 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8209, 1.5647, 1.8855, 1.2305, 1.8141, 1.9554, 1.8636, 1.2898], device='cuda:2'), covar=tensor([0.0674, 0.0946, 0.0660, 0.0900, 0.0961, 0.0648, 0.0644, 0.1717], device='cuda:2'), in_proj_covar=tensor([0.0132, 0.0137, 0.0141, 0.0118, 0.0129, 0.0139, 0.0139, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 12:07:13,660 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.122e+02 1.522e+02 1.848e+02 2.259e+02 3.601e+02, threshold=3.696e+02, percent-clipped=1.0
2023-03-27 12:07:24,437 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=167091.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:07:31,393 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=167100.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:07:33,104 INFO [finetune.py:976] (2/7) Epoch 30, batch 1000, loss[loss=0.1853, simple_loss=0.2666, pruned_loss=0.05194, over 4912.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2408, pruned_loss=0.04824, over 948452.33 frames. ], batch size: 37, lr: 2.81e-03, grad_scale: 32.0
2023-03-27 12:07:33,847 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=167104.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:07:44,466 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=167120.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:07:51,225 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.4205, 2.1536, 1.6872, 0.7353, 1.9438, 1.8734, 1.7916, 2.0096], device='cuda:2'), covar=tensor([0.0845, 0.0736, 0.1640, 0.2129, 0.1417, 0.2302, 0.2096, 0.0939], device='cuda:2'), in_proj_covar=tensor([0.0170, 0.0188, 0.0201, 0.0180, 0.0209, 0.0210, 0.0223, 0.0196], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 12:07:55,385 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=167137.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:07:58,420 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5918, 2.4353, 2.0018, 2.5836, 2.2892, 2.0522, 2.8629, 2.7023], device='cuda:2'), covar=tensor([0.1147, 0.2108, 0.2778, 0.2538, 0.2484, 0.1602, 0.3198, 0.1414], device='cuda:2'), in_proj_covar=tensor([0.0191, 0.0192, 0.0239, 0.0254, 0.0252, 0.0211, 0.0217, 0.0204], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 12:08:06,884 INFO [finetune.py:976] (2/7) Epoch 30, batch 1050, loss[loss=0.1738, simple_loss=0.2545, pruned_loss=0.0465, over 4918.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2431, pruned_loss=0.04811, over 949330.26 frames. ], batch size: 36, lr: 2.81e-03, grad_scale: 64.0
2023-03-27 12:08:11,880 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=167161.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:08:14,778 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=167165.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:08:14,806 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=167165.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:08:19,111 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.5609, 2.1636, 2.7975, 1.7359, 2.4552, 2.6057, 1.9329, 2.6207], device='cuda:2'), covar=tensor([0.1188, 0.1931, 0.1519, 0.2237, 0.0882, 0.1496, 0.2672, 0.0821], device='cuda:2'), in_proj_covar=tensor([0.0192, 0.0207, 0.0194, 0.0190, 0.0174, 0.0213, 0.0219, 0.0200], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 12:08:21,246 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.094e+02 1.490e+02 1.818e+02 2.230e+02 3.401e+02, threshold=3.636e+02, percent-clipped=0.0
2023-03-27 12:08:27,350 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=167185.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:08:28,587 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=167187.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:08:39,895 INFO [finetune.py:976] (2/7) Epoch 30, batch 1100, loss[loss=0.1719, simple_loss=0.244, pruned_loss=0.04991, over 4862.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2433, pruned_loss=0.04799, over 949610.12 frames. ], batch size: 34, lr: 2.81e-03, grad_scale: 64.0
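The zipformer.py:2441 dumps report one attn_weights_entropy value per attention head (eight heads here), alongside covariance statistics of the attention and projection weights; low entropy means a head concentrates on few positions, high entropy means diffuse attention. A minimal per-head entropy computation under assumed shapes and naming:

import torch

def attn_weights_entropy(attn):
    # attn: (num_heads, query_len, key_len); each row along the last
    # dimension is assumed to be a softmax distribution over keys.
    p = attn.clamp(min=1e-20)                # avoid log(0)
    ent = -(p * p.log()).sum(dim=-1)         # (num_heads, query_len)
    return ent.mean(dim=-1)                  # one value per head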
2023-03-27 12:08:46,974 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=167213.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:09:01,417 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=167235.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:09:13,765 INFO [finetune.py:976] (2/7) Epoch 30, batch 1150, loss[loss=0.1767, simple_loss=0.2574, pruned_loss=0.04795, over 4913.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2441, pruned_loss=0.04848, over 949959.52 frames. ], batch size: 46, lr: 2.81e-03, grad_scale: 64.0
2023-03-27 12:09:14,096 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0
2023-03-27 12:09:28,392 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.072e+02 1.511e+02 1.772e+02 2.131e+02 4.281e+02, threshold=3.544e+02, percent-clipped=3.0
2023-03-27 12:09:34,970 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=167285.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:09:47,290 INFO [finetune.py:976] (2/7) Epoch 30, batch 1200, loss[loss=0.1496, simple_loss=0.2203, pruned_loss=0.03947, over 4786.00 frames. ], tot_loss[loss=0.1699, simple_loss=0.2429, pruned_loss=0.0485, over 949372.19 frames. ], batch size: 29, lr: 2.81e-03, grad_scale: 64.0
2023-03-27 12:10:04,005 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8704, 1.4726, 1.9514, 1.8789, 1.6780, 1.6393, 1.8688, 1.8523], device='cuda:2'), covar=tensor([0.4096, 0.4129, 0.3116, 0.3788, 0.4669, 0.4023, 0.4412, 0.2851], device='cuda:2'), in_proj_covar=tensor([0.0270, 0.0250, 0.0270, 0.0301, 0.0301, 0.0279, 0.0306, 0.0255], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 12:10:20,453 INFO [finetune.py:976] (2/7) Epoch 30, batch 1250, loss[loss=0.1339, simple_loss=0.2128, pruned_loss=0.02746, over 4764.00 frames. ], tot_loss[loss=0.1675, simple_loss=0.2397, pruned_loss=0.04767, over 952231.69 frames. ], batch size: 26, lr: 2.81e-03, grad_scale: 64.0
2023-03-27 12:10:36,984 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.462e+02 1.711e+02 2.091e+02 4.240e+02, threshold=3.422e+02, percent-clipped=1.0
2023-03-27 12:10:56,501 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=167391.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:11:09,176 INFO [finetune.py:976] (2/7) Epoch 30, batch 1300, loss[loss=0.1877, simple_loss=0.2531, pruned_loss=0.06119, over 4877.00 frames. ], tot_loss[loss=0.1653, simple_loss=0.2369, pruned_loss=0.04687, over 954088.59 frames. ], batch size: 34, lr: 2.81e-03, grad_scale: 64.0
2023-03-27 12:11:24,564 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=167420.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:11:32,405 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=167432.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:11:38,997 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=167439.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:11:41,466 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2562, 1.9737, 2.1604, 1.1213, 2.4501, 2.6392, 2.2452, 1.8422], device='cuda:2'), covar=tensor([0.0984, 0.0899, 0.0490, 0.0700, 0.0510, 0.0679, 0.0527, 0.0882], device='cuda:2'), in_proj_covar=tensor([0.0120, 0.0145, 0.0130, 0.0120, 0.0131, 0.0129, 0.0140, 0.0150], device='cuda:2'), out_proj_covar=tensor([8.7866e-05, 1.0402e-04, 9.2133e-05, 8.3933e-05, 9.1372e-05, 9.1583e-05, 9.9259e-05, 1.0724e-04], device='cuda:2')
2023-03-27 12:11:55,876 INFO [finetune.py:976] (2/7) Epoch 30, batch 1350, loss[loss=0.1851, simple_loss=0.2571, pruned_loss=0.05658, over 4918.00 frames. ], tot_loss[loss=0.1655, simple_loss=0.2372, pruned_loss=0.04686, over 955339.99 frames. ], batch size: 37, lr: 2.81e-03, grad_scale: 64.0
2023-03-27 12:11:58,248 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=167456.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:12:00,698 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=167460.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:12:07,068 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=167468.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:12:11,227 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.213e+01 1.429e+02 1.665e+02 1.960e+02 3.889e+02, threshold=3.329e+02, percent-clipped=2.0
2023-03-27 12:12:23,231 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=167493.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:12:29,688 INFO [finetune.py:976] (2/7) Epoch 30, batch 1400, loss[loss=0.1959, simple_loss=0.2786, pruned_loss=0.05662, over 4748.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.2406, pruned_loss=0.04814, over 951473.29 frames. ], batch size: 54, lr: 2.81e-03, grad_scale: 64.0
2023-03-27 12:12:42,740 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.2866, 2.2199, 2.3212, 1.5973, 2.1756, 2.3836, 2.4549, 1.8316], device='cuda:2'), covar=tensor([0.0658, 0.0690, 0.0698, 0.0894, 0.0828, 0.0676, 0.0575, 0.1207], device='cuda:2'), in_proj_covar=tensor([0.0131, 0.0137, 0.0141, 0.0118, 0.0128, 0.0139, 0.0139, 0.0162], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 12:13:02,946 INFO [finetune.py:976] (2/7) Epoch 30, batch 1450, loss[loss=0.1963, simple_loss=0.2601, pruned_loss=0.0662, over 4131.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.2423, pruned_loss=0.04859, over 952649.84 frames. ], batch size: 65, lr: 2.81e-03, grad_scale: 32.0
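Note the grad_scale column: it doubles from 32.0 to 64.0 around batch 1050 and is back at 32.0 by batch 1450. That pattern is characteristic of dynamic fp16 loss scaling, where the scale grows after a run of overflow-free steps and is halved when an overflowing gradient is detected. The standard PyTorch pattern is sketched below; this is not necessarily the exact mechanism used by this training script:

import torch

# Growth/backoff behavior that produces a grad_scale trace like the one
# logged here: doubles after growth_interval clean steps, halves on overflow.
scaler = torch.cuda.amp.GradScaler(init_scale=32.0, growth_factor=2.0,
                                   backoff_factor=0.5, growth_interval=2000)
# inside the training loop:
#   with torch.cuda.amp.autocast():
#       loss = compute_loss(batch)     # hypothetical loss function
#   scaler.scale(loss).backward()
#   scaler.step(optimizer)             # skipped if inf/nan grads were found
#   scaler.update()                    # grows or backs off the scale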
2023-03-27 12:13:17,516 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.4140, 1.4963, 2.2789, 1.6717, 1.7587, 4.2029, 1.5752, 1.6384], device='cuda:2'), covar=tensor([0.1062, 0.1826, 0.1274, 0.1068, 0.1676, 0.0181, 0.1465, 0.1861], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0083, 0.0073, 0.0076, 0.0091, 0.0081, 0.0085, 0.0081], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:2')
2023-03-27 12:13:19,227 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.032e+02 1.535e+02 1.844e+02 2.174e+02 4.375e+02, threshold=3.689e+02, percent-clipped=4.0
2023-03-27 12:13:25,324 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=167585.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:13:29,087 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=3.35 vs. limit=5.0
2023-03-27 12:13:36,787 INFO [finetune.py:976] (2/7) Epoch 30, batch 1500, loss[loss=0.1634, simple_loss=0.2399, pruned_loss=0.04349, over 4781.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2448, pruned_loss=0.04938, over 954439.05 frames. ], batch size: 51, lr: 2.81e-03, grad_scale: 32.0
2023-03-27 12:13:57,372 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=167633.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:14:07,485 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5564, 1.3855, 1.9434, 2.9221, 1.9450, 2.1305, 1.0751, 2.5168], device='cuda:2'), covar=tensor([0.1745, 0.1423, 0.1230, 0.0610, 0.0868, 0.1453, 0.1692, 0.0479], device='cuda:2'), in_proj_covar=tensor([0.0101, 0.0116, 0.0134, 0.0165, 0.0100, 0.0136, 0.0125, 0.0102], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:2')
2023-03-27 12:14:10,432 INFO [finetune.py:976] (2/7) Epoch 30, batch 1550, loss[loss=0.1694, simple_loss=0.2375, pruned_loss=0.05062, over 4920.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2439, pruned_loss=0.04851, over 955044.57 frames. ], batch size: 38, lr: 2.81e-03, grad_scale: 32.0
2023-03-27 12:14:22,915 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0735, 1.5028, 2.1271, 2.0568, 1.9002, 1.8391, 1.9819, 2.0275], device='cuda:2'), covar=tensor([0.3649, 0.3716, 0.3041, 0.3594, 0.4459, 0.3674, 0.3999, 0.2807], device='cuda:2'), in_proj_covar=tensor([0.0269, 0.0250, 0.0268, 0.0299, 0.0300, 0.0277, 0.0306, 0.0254], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 12:14:26,674 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.931e+01 1.416e+02 1.751e+02 2.105e+02 4.024e+02, threshold=3.503e+02, percent-clipped=1.0
2023-03-27 12:14:44,013 INFO [finetune.py:976] (2/7) Epoch 30, batch 1600, loss[loss=0.1625, simple_loss=0.2287, pruned_loss=0.04818, over 4837.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2415, pruned_loss=0.04743, over 956356.74 frames. ], batch size: 49, lr: 2.81e-03, grad_scale: 32.0
2023-03-27 12:14:54,737 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=167719.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:15:17,591 INFO [finetune.py:976] (2/7) Epoch 30, batch 1650, loss[loss=0.1377, simple_loss=0.213, pruned_loss=0.03118, over 4702.00 frames. ], tot_loss[loss=0.1662, simple_loss=0.2389, pruned_loss=0.04678, over 957941.21 frames. ], batch size: 23, lr: 2.81e-03, grad_scale: 32.0
2023-03-27 12:15:19,513 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=167756.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:15:21,892 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=167760.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:15:26,701 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=167767.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:15:32,525 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.722e+01 1.428e+02 1.633e+02 1.926e+02 4.440e+02, threshold=3.266e+02, percent-clipped=1.0
2023-03-27 12:15:36,044 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=167780.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:15:41,290 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=167788.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:15:55,581 INFO [finetune.py:976] (2/7) Epoch 30, batch 1700, loss[loss=0.2154, simple_loss=0.284, pruned_loss=0.07344, over 4909.00 frames. ], tot_loss[loss=0.1639, simple_loss=0.2365, pruned_loss=0.04564, over 956816.21 frames. ], batch size: 36, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:15:56,725 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=167804.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:16:03,315 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0
2023-03-27 12:16:03,660 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=167808.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:16:14,409 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5870, 1.5112, 2.0513, 1.8207, 1.6969, 3.6832, 1.5239, 1.6813], device='cuda:2'), covar=tensor([0.1009, 0.1792, 0.1078, 0.0978, 0.1645, 0.0210, 0.1567, 0.1847], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0081, 0.0085, 0.0080], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:2')
2023-03-27 12:16:25,135 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=167828.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:16:42,118 INFO [finetune.py:976] (2/7) Epoch 30, batch 1750, loss[loss=0.1787, simple_loss=0.2564, pruned_loss=0.05046, over 4814.00 frames. ], tot_loss[loss=0.1669, simple_loss=0.2396, pruned_loss=0.04709, over 955986.77 frames. ], batch size: 38, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:16:45,807 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([4.3686, 3.8286, 4.0075, 4.2223, 4.1465, 3.9092, 4.4509, 1.3503], device='cuda:2'), covar=tensor([0.0807, 0.0923, 0.0908, 0.0868, 0.1289, 0.1539, 0.0693, 0.5729], device='cuda:2'), in_proj_covar=tensor([0.0355, 0.0247, 0.0287, 0.0298, 0.0338, 0.0288, 0.0307, 0.0303], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 12:16:57,034 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.117e+02 1.486e+02 1.829e+02 2.140e+02 4.770e+02, threshold=3.658e+02, percent-clipped=2.0
2023-03-27 12:17:25,520 INFO [finetune.py:976] (2/7) Epoch 30, batch 1800, loss[loss=0.1676, simple_loss=0.2437, pruned_loss=0.04577, over 4927.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.243, pruned_loss=0.04817, over 956627.28 frames. ], batch size: 38, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:17:44,599 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.0898, 2.0482, 1.7265, 1.9598, 1.9241, 1.9492, 1.9273, 2.7106], device='cuda:2'), covar=tensor([0.3559, 0.3977, 0.3186, 0.3729, 0.3868, 0.2433, 0.3505, 0.1575], device='cuda:2'), in_proj_covar=tensor([0.0291, 0.0265, 0.0240, 0.0276, 0.0263, 0.0233, 0.0260, 0.0240], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 12:17:58,718 INFO [finetune.py:976] (2/7) Epoch 30, batch 1850, loss[loss=0.2003, simple_loss=0.2655, pruned_loss=0.06754, over 4833.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.2447, pruned_loss=0.04891, over 955204.96 frames. ], batch size: 49, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:18:13,645 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.002e+02 1.458e+02 1.787e+02 2.144e+02 3.700e+02, threshold=3.573e+02, percent-clipped=1.0
2023-03-27 12:18:15,464 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.1254, 1.8169, 1.9683, 0.8369, 2.4116, 2.4839, 2.1401, 1.8125], device='cuda:2'), covar=tensor([0.1025, 0.0973, 0.0569, 0.0717, 0.0543, 0.0723, 0.0489, 0.0899], device='cuda:2'), in_proj_covar=tensor([0.0120, 0.0146, 0.0130, 0.0120, 0.0131, 0.0130, 0.0140, 0.0150], device='cuda:2'), out_proj_covar=tensor([8.7772e-05, 1.0434e-04, 9.2314e-05, 8.3998e-05, 9.1575e-05, 9.1988e-05, 9.9347e-05, 1.0739e-04], device='cuda:2')
2023-03-27 12:18:33,562 INFO [finetune.py:976] (2/7) Epoch 30, batch 1900, loss[loss=0.1612, simple_loss=0.239, pruned_loss=0.04172, over 4883.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2453, pruned_loss=0.04904, over 956098.50 frames. ], batch size: 43, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:18:36,552 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([3.8561, 3.3647, 3.5655, 3.7511, 3.5921, 3.4314, 3.9204, 1.1999], device='cuda:2'), covar=tensor([0.0965, 0.0874, 0.1028, 0.1132, 0.1461, 0.1744, 0.0907, 0.5948], device='cuda:2'), in_proj_covar=tensor([0.0355, 0.0248, 0.0288, 0.0299, 0.0339, 0.0288, 0.0308, 0.0304], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 12:18:46,773 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9741, 2.0078, 1.6003, 2.1461, 2.5186, 2.1239, 1.8642, 1.5629], device='cuda:2'), covar=tensor([0.1898, 0.1703, 0.1794, 0.1428, 0.1429, 0.1078, 0.1956, 0.1741], device='cuda:2'), in_proj_covar=tensor([0.0249, 0.0213, 0.0218, 0.0200, 0.0247, 0.0193, 0.0220, 0.0208], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 12:18:54,997 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=168035.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:19:07,330 INFO [finetune.py:976] (2/7) Epoch 30, batch 1950, loss[loss=0.1567, simple_loss=0.2371, pruned_loss=0.03817, over 4844.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2436, pruned_loss=0.04831, over 954406.64 frames. ], batch size: 44, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:19:21,549 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=168075.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:19:22,072 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.065e+02 1.458e+02 1.758e+02 2.118e+02 3.555e+02, threshold=3.516e+02, percent-clipped=0.0
2023-03-27 12:19:25,196 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.9980, 1.5607, 1.1211, 1.9746, 2.2377, 1.9248, 1.8547, 1.9063], device='cuda:2'), covar=tensor([0.1140, 0.1657, 0.1645, 0.0888, 0.1610, 0.1652, 0.1066, 0.1411], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0095, 0.0110, 0.0094, 0.0121, 0.0093, 0.0098, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2')
2023-03-27 12:19:30,464 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=168088.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:19:35,821 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=168096.0, num_to_drop=1, layers_to_drop={2}
2023-03-27 12:19:40,898 INFO [finetune.py:976] (2/7) Epoch 30, batch 2000, loss[loss=0.1622, simple_loss=0.2298, pruned_loss=0.0473, over 4933.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2399, pruned_loss=0.04739, over 954506.45 frames. ], batch size: 38, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:19:54,029 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=168123.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:20:01,892 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=168136.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:20:13,576 INFO [scaling.py:679] (2/7) Whitening: num_groups=1, num_channels=384, metric=4.20 vs. limit=5.0
2023-03-27 12:20:14,572 INFO [finetune.py:976] (2/7) Epoch 30, batch 2050, loss[loss=0.1456, simple_loss=0.214, pruned_loss=0.03862, over 4053.00 frames. ], tot_loss[loss=0.1644, simple_loss=0.2366, pruned_loss=0.04611, over 954798.62 frames. ], batch size: 17, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:20:29,517 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.016e+02 1.484e+02 1.729e+02 2.115e+02 4.273e+02, threshold=3.459e+02, percent-clipped=3.0
2023-03-27 12:20:45,085 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.72 vs. limit=2.0
2023-03-27 12:20:47,506 INFO [finetune.py:976] (2/7) Epoch 30, batch 2100, loss[loss=0.1623, simple_loss=0.2404, pruned_loss=0.04213, over 4816.00 frames. ], tot_loss[loss=0.1655, simple_loss=0.2374, pruned_loss=0.04676, over 954915.45 frames. ], batch size: 41, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:21:19,173 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=168233.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:21:22,807 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.2537, 1.4319, 1.1587, 1.3389, 1.5761, 1.5291, 1.3784, 1.2864], device='cuda:2'), covar=tensor([0.0384, 0.0265, 0.0582, 0.0258, 0.0211, 0.0400, 0.0321, 0.0366], device='cuda:2'), in_proj_covar=tensor([0.0103, 0.0107, 0.0149, 0.0111, 0.0102, 0.0118, 0.0105, 0.0115], device='cuda:2'), out_proj_covar=tensor([7.9664e-05, 8.1300e-05, 1.1553e-04, 8.4658e-05, 7.9194e-05, 8.6489e-05, 7.7557e-05, 8.7417e-05], device='cuda:2')
2023-03-27 12:21:41,934 INFO [finetune.py:976] (2/7) Epoch 30, batch 2150, loss[loss=0.1953, simple_loss=0.2474, pruned_loss=0.07163, over 4102.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2409, pruned_loss=0.04786, over 953798.78 frames. ], batch size: 18, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:22:00,990 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.149e+02 1.551e+02 1.892e+02 2.224e+02 4.404e+02, threshold=3.784e+02, percent-clipped=3.0
2023-03-27 12:22:12,556 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=168294.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:22:18,785 INFO [finetune.py:976] (2/7) Epoch 30, batch 2200, loss[loss=0.2054, simple_loss=0.2737, pruned_loss=0.06861, over 4743.00 frames. ], tot_loss[loss=0.1694, simple_loss=0.2427, pruned_loss=0.0481, over 955143.70 frames. ], batch size: 59, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:23:02,560 INFO [finetune.py:976] (2/7) Epoch 30, batch 2250, loss[loss=0.176, simple_loss=0.2519, pruned_loss=0.05007, over 4898.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2446, pruned_loss=0.04887, over 952418.77 frames. ], batch size: 36, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:23:17,400 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=168375.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:23:17,906 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.029e+01 1.513e+02 1.826e+02 2.132e+02 3.584e+02, threshold=3.652e+02, percent-clipped=0.0
2023-03-27 12:23:28,072 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=168391.0, num_to_drop=1, layers_to_drop={3}
2023-03-27 12:23:34,128 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.8344, 1.4151, 1.9046, 1.8935, 1.6984, 1.6518, 1.8711, 1.7888], device='cuda:2'), covar=tensor([0.4235, 0.4182, 0.3251, 0.3551, 0.5004, 0.4059, 0.4293, 0.3100], device='cuda:2'), in_proj_covar=tensor([0.0271, 0.0251, 0.0270, 0.0301, 0.0302, 0.0280, 0.0307, 0.0256], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 12:23:36,283 INFO [finetune.py:976] (2/7) Epoch 30, batch 2300, loss[loss=0.1309, simple_loss=0.2137, pruned_loss=0.02412, over 4787.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2448, pruned_loss=0.04799, over 954300.18 frames. ], batch size: 51, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:23:40,009 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. limit=2.0
2023-03-27 12:23:49,958 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=168423.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:23:50,004 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=168423.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:24:09,556 INFO [finetune.py:976] (2/7) Epoch 30, batch 2350, loss[loss=0.1394, simple_loss=0.2263, pruned_loss=0.02631, over 4816.00 frames. ], tot_loss[loss=0.168, simple_loss=0.242, pruned_loss=0.04699, over 955582.00 frames. ], batch size: 38, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:24:21,867 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=168471.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:24:24,771 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.258e+01 1.458e+02 1.699e+02 2.116e+02 4.301e+02, threshold=3.398e+02, percent-clipped=1.0
2023-03-27 12:24:42,042 INFO [finetune.py:976] (2/7) Epoch 30, batch 2400, loss[loss=0.1178, simple_loss=0.1891, pruned_loss=0.02324, over 4747.00 frames. ], tot_loss[loss=0.1655, simple_loss=0.2387, pruned_loss=0.04618, over 955120.46 frames. ], batch size: 28, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:25:15,072 INFO [finetune.py:976] (2/7) Epoch 30, batch 2450, loss[loss=0.1927, simple_loss=0.2553, pruned_loss=0.06505, over 4864.00 frames. ], tot_loss[loss=0.1636, simple_loss=0.2363, pruned_loss=0.0455, over 952616.23 frames. ], batch size: 34, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:25:30,927 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.020e+02 1.487e+02 1.838e+02 2.245e+02 3.083e+02, threshold=3.676e+02, percent-clipped=0.0
2023-03-27 12:25:39,364 INFO [zipformer.py:1188] (2/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=168589.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:25:48,868 INFO [finetune.py:976] (2/7) Epoch 30, batch 2500, loss[loss=0.1905, simple_loss=0.2705, pruned_loss=0.0553, over 4919.00 frames. ], tot_loss[loss=0.1653, simple_loss=0.2378, pruned_loss=0.04638, over 953977.83 frames. ], batch size: 36, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:26:27,843 INFO [finetune.py:976] (2/7) Epoch 30, batch 2550, loss[loss=0.1892, simple_loss=0.2687, pruned_loss=0.05485, over 4828.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2421, pruned_loss=0.04727, over 953570.22 frames. ], batch size: 47, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:26:55,336 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.157e+02 1.538e+02 1.821e+02 2.163e+02 4.307e+02, threshold=3.641e+02, percent-clipped=2.0
2023-03-27 12:27:09,670 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=168691.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:27:17,863 INFO [finetune.py:976] (2/7) Epoch 30, batch 2600, loss[loss=0.2049, simple_loss=0.2808, pruned_loss=0.06452, over 4816.00 frames. ], tot_loss[loss=0.17, simple_loss=0.2439, pruned_loss=0.04806, over 953809.06 frames. ], batch size: 40, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:27:26,312 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([5.0259, 4.4082, 4.5305, 4.8287, 4.7598, 4.4625, 5.1439, 1.5676], device='cuda:2'), covar=tensor([0.0759, 0.0791, 0.0828, 0.0974, 0.1183, 0.1687, 0.0487, 0.6060], device='cuda:2'), in_proj_covar=tensor([0.0353, 0.0247, 0.0287, 0.0298, 0.0337, 0.0287, 0.0307, 0.0303], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 12:27:44,432 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=168739.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:28:01,792 INFO [finetune.py:976] (2/7) Epoch 30, batch 2650, loss[loss=0.1575, simple_loss=0.2269, pruned_loss=0.04403, over 4895.00 frames. ], tot_loss[loss=0.1699, simple_loss=0.2439, pruned_loss=0.04792, over 951368.92 frames. ], batch size: 37, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:28:21,692 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 1.069e+02 1.510e+02 1.724e+02 1.979e+02 3.263e+02, threshold=3.448e+02, percent-clipped=0.0
2023-03-27 12:28:43,723 INFO [finetune.py:976] (2/7) Epoch 30, batch 2700, loss[loss=0.1475, simple_loss=0.226, pruned_loss=0.03449, over 4763.00 frames. ], tot_loss[loss=0.1677, simple_loss=0.2418, pruned_loss=0.0468, over 953406.65 frames. ], batch size: 51, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:29:01,582 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.7751, 1.2585, 0.8866, 1.7181, 2.1374, 1.6241, 1.5433, 1.5921], device='cuda:2'), covar=tensor([0.1416, 0.1983, 0.1877, 0.1110, 0.1858, 0.1951, 0.1375, 0.1848], device='cuda:2'), in_proj_covar=tensor([0.0090, 0.0094, 0.0109, 0.0093, 0.0120, 0.0093, 0.0098, 0.0089], device='cuda:2'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:2')
2023-03-27 12:29:09,929 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0
2023-03-27 12:29:17,012 INFO [finetune.py:976] (2/7) Epoch 30, batch 2750, loss[loss=0.149, simple_loss=0.2198, pruned_loss=0.0391, over 4815.00 frames. ], tot_loss[loss=0.1652, simple_loss=0.2389, pruned_loss=0.04569, over 952593.19 frames. ], batch size: 25, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:29:27,241 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-27 12:29:28,388 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([2.1382, 2.0442, 1.8116, 1.8799, 2.0210, 1.9387, 2.0085, 2.6296], device='cuda:2'), covar=tensor([0.3480, 0.3553, 0.2946, 0.3368, 0.3417, 0.2216, 0.3264, 0.1515], device='cuda:2'), in_proj_covar=tensor([0.0290, 0.0264, 0.0239, 0.0276, 0.0263, 0.0232, 0.0260, 0.0240], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 12:29:32,259 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.461e+01 1.436e+02 1.670e+02 1.989e+02 2.987e+02, threshold=3.340e+02, percent-clipped=0.0
2023-03-27 12:29:40,000 INFO [scaling.py:679] (2/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0
2023-03-27 12:29:41,552 INFO [zipformer.py:1188] (2/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=168889.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:29:50,503 INFO [finetune.py:976] (2/7) Epoch 30, batch 2800, loss[loss=0.1857, simple_loss=0.2546, pruned_loss=0.05838, over 4912.00 frames. ], tot_loss[loss=0.1639, simple_loss=0.2369, pruned_loss=0.0454, over 954246.98 frames. ], batch size: 37, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:30:13,012 INFO [zipformer.py:1188] (2/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=168937.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:30:23,986 INFO [finetune.py:976] (2/7) Epoch 30, batch 2850, loss[loss=0.1339, simple_loss=0.2148, pruned_loss=0.02647, over 4780.00 frames. ], tot_loss[loss=0.1634, simple_loss=0.2361, pruned_loss=0.04536, over 956674.11 frames. ], batch size: 27, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:30:33,492 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=168967.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:30:34,686 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=168969.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 12:30:38,878 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 8.606e+01 1.456e+02 1.728e+02 2.104e+02 5.266e+02, threshold=3.457e+02, percent-clipped=2.0
2023-03-27 12:30:54,784 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.3458, 1.3134, 1.4893, 1.0795, 1.2858, 1.5063, 1.2795, 1.6804], device='cuda:2'), covar=tensor([0.1223, 0.2151, 0.1309, 0.1508, 0.0991, 0.1146, 0.3108, 0.0827], device='cuda:2'), in_proj_covar=tensor([0.0189, 0.0205, 0.0191, 0.0188, 0.0173, 0.0210, 0.0218, 0.0197], device='cuda:2'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:2')
2023-03-27 12:30:57,840 INFO [finetune.py:976] (2/7) Epoch 30, batch 2900, loss[loss=0.205, simple_loss=0.2873, pruned_loss=0.0613, over 4813.00 frames. ], tot_loss[loss=0.1667, simple_loss=0.2398, pruned_loss=0.04677, over 955204.34 frames. ], batch size: 39, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:31:14,731 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=169028.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:31:15,303 INFO [zipformer.py:2441] (2/7) attn_weights_entropy = tensor([1.5304, 1.3436, 2.0813, 1.7561, 1.5534, 3.5952, 1.2637, 1.5186], device='cuda:2'), covar=tensor([0.1058, 0.2054, 0.1083, 0.1053, 0.1747, 0.0230, 0.1717, 0.1977], device='cuda:2'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0092, 0.0081, 0.0085, 0.0081], device='cuda:2'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:2')
2023-03-27 12:31:15,928 INFO [zipformer.py:1188] (2/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=169030.0, num_to_drop=1, layers_to_drop={2}
2023-03-27 12:31:31,768 INFO [finetune.py:976] (2/7) Epoch 30, batch 2950, loss[loss=0.1531, simple_loss=0.2252, pruned_loss=0.04053, over 4775.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2419, pruned_loss=0.04688, over 954858.47 frames. ], batch size: 26, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:31:49,075 INFO [optim.py:369] (2/7) Clipping_scale=2.0, grad-norm quartiles 9.406e+01 1.615e+02 1.887e+02 2.255e+02 4.054e+02, threshold=3.773e+02, percent-clipped=1.0
2023-03-27 12:31:53,306 INFO [zipformer.py:1188] (2/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=169082.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 12:32:19,262 INFO [finetune.py:976] (2/7) Epoch 30, batch 3000, loss[loss=0.1225, simple_loss=0.1946, pruned_loss=0.02516, over 4703.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2437, pruned_loss=0.0473, over 954757.24 frames. ], batch size: 23, lr: 2.80e-03, grad_scale: 32.0
2023-03-27 12:32:19,262 INFO [finetune.py:1001] (2/7) Computing validation loss